\section{Introduction}
\label{xxsec0}
Throughout let $\Bbbk$ be a base field that is algebraically
closed. Algebraic objects are defined over $\Bbbk$.
The Frobenius-Perron dimension of an object in a semisimple
finite tensor (or fusion) category was introduced by
Etingof-Nikshych-Ostrik in 2005 \cite{ENO2005}. Since then
it has become an extremely useful invariant in the study of
fusion categories and representations of semisimple (weak
and/or quasi-)Hopf algebras. By examining the Frobenius-Perron
dimension of all objects in a finite tensor category one can
determine whether the category is equivalent to the
representation category of a finite-dimensional quasi-Hopf
algebra \cite[Proposition 2.6]{EO2004}. The Frobenius-Perron
dimension of a fusion category is also a crucial invariant in
the classification of fusion categories as well as that of
semisimple Hopf algebras. An important project is to develop
the Frobenius-Perron theory for not-necessarily semisimple
tensor (or monoidal) categories. A step departing from
semisimple categories, or abelian categories of global
dimension 0, is to study the hereditary ones (of global
dimension one). Ultimately Frobenius-Perron theory should
provide powerful tools and useful invariants for projects like
\begin{project}
\label{xxpro0.1}
Describe and understand weak bialgebras
[Definition \ref{xxdef1.7}] and weak Hopf algebras
\cite[Definition 2.1]{BNS1999} that are hereditary
as associative algebras.
\end{project}
Note that an analogous classification project of hereditary
prime Hopf algebras was finished in a remarkable paper by
Wu-Liu-Ding \cite{WLD2016} a few years ago. Some recent
efforts pertaining to homological aspects of noetherian weak
Hopf algebras were presented in \cite{RWZ2020}. Recall from
\cite{NVY2019} that a {\it monoidal triangulated category}
is a monoidal category $\mathcal{T}$ in the sense of
\cite[Definition 2.2.1]{EGNO2015} that is triangulated and,
for which, the tensor product $\otimes_{\mathcal{T}}:
\mathcal{T} \times \mathcal{T} \to \mathcal{T}$ is an exact
bifunctor. Given a Hopf algebra (respectively, weak and/or
quasi-Hopf algebra/bialgebra) $H$, its comultiplication
induces a monoidal structure on the category of
representations of $H$. The corresponding derived category
has a canonical monoidal triangulated structure. Monoidal
triangulated structures appear naturally in several other
subjects.
{\it Algebraic geometry}.
A classical theorem of Gabriel \cite{Ga1962} states that a
noetherian scheme $\mathbb{X}$ can be reconstructed from the
abelian category of coherent sheaves over $\mathbb{X}$,
denoted by $coh(\mathbb{X})$. Hence the abelian category
$coh(\mathbb{X})$ captures all information about the space
$\mathbb{X}$. Recent developments in derived algebraic geometry
suggest that the bounded derived category of coherent sheaves
over $\mathbb{X}$, denoted by $D^b(coh(\mathbb{X}))$, is
sometimes a better category to work with when considering
geometric problems such as moduli problems.
When $\mathbb{X}$ is smooth, $D^b(coh(\mathbb{X}))$ is
equipped with a natural tensor (=symmetric monoidal)
triangulated structure in the sense of
\cite[Definition 1.1]{B2005}.
{\it Tensor triangulated geometry}.
Tensor triangulated categories have been studied by Balmer
\cite{B2005} and many others, and the study of tensor
triangulated categories has sometimes been
referred to as {\it tensor triangulated geometry}. Balmer
defined the prime spectrum, denoted by ${\rm{Spc}}(\mathcal{T})$,
of a small tensor triangulated category $\mathcal{T}$ by using
the thick subcategories which behave like prime ideals under the
tensor product. Note that ${\rm{Spc}}(\mathcal{T})$ is a locally
ringed space \cite[Remark 6.4]{B2005}. This idea has been shown
to be widely applicable to algebraic geometry, homotopy theory
and representation theory. Recently Vashaw-Yakimov and
Nakano-Vashaw-Yakimov \cite{VY2018, NVY2019} developed a
noncommutative version of the Balmer spectrum, or {\it
noncommutative tensor triangulated geometry} (in the
words of the authors of \cite{NVY2019}).
{\it Noncommutative algebraic geometry}. Following
Grothendieck, {\it to do geometry you really don't need a space,
all you need is a category of sheaves on this would-be space}
\cite[p.78]{Ma2018}. Following \cite{VY2018, NVY2019},
we would like to consider
or view monoidal triangulated categories as appropriate categories
for doing a new kind of noncommutative geometry. For example, if
$T$ is a noetherian Koszul Artin-Schelter regular algebra, then
the bounded derived category of the noncommutative projective
scheme associated to $T$, denoted by ${\mathcal T}:=D^b(\proj T)$,
has at least two different monoidal triangulated structures
[Example \ref{xxex7.9}]. In this situation, it would
be very interesting to understand how the ``geometry'' of
$\proj T$ interacts with ``monoidal triangulated
structures'' on $\mathcal{T}$. For a general triangulated category,
still denoted by $\mathcal{T}$, it is common that there are many
different monoidal triangulated structures on $\mathcal{T}$
(with the same underlying triangulated structure) that reflect
different hidden properties of $\mathcal{T}$. So it is
worth distinguishing different types of monoidal triangulated
structures on $\mathcal{T}$ and finding a definition of the
``size'' of these structures.
{\it Quiver representations}.
A related subject is the representation theory of quivers that
has become a popular topic since Gabriel's work in the 1970s
\cite{Ga1972, Ga197374, Ga1973}. For a given quiver, the category
of its representations is naturally equipped with a monoidal
structure, induced by the vertex-wise tensor product of vector
spaces \eqref{E2.1.1}. The monoidal structure of
quiver representations has been studied by Strassen \cite{St2000}
in relation to orbit-closure degeneration in 2000, and later
by Herschend \cite{He2005, He2008a, He2008b, He2009, He2010}
in relation to the bialgebra structure on the path
algebra during 2005-2012. Herschend solved the Clebsch-Gordan
problem for quivers of type $\mathbb{A}_n$, $\mathbb{D}_n$
and $\mathbb{E}_{6,7,8}$ in \cite{He2008b, He2009}. As for
tame type, Herschend also gave solutions for type
$\mathbb{\tilde{A}}_n$ in \cite{He2005} and the quivers
with relations that correspond to string algebras in
\cite{He2010}. One of our basic objects in this paper is
the bounded derived category $D^b(A-\Modfd)$ for a
finite dimensional weak bialgebra $A$. (We usually consider
hereditary, but not semisimple, algebras.) Since $A$ is a
weak bialgebra, $A-\Modfd$ has an induced monoidal abelian
structure; and hence, $D^b(A-\Modfd)$ is a monoidal
triangulated category in the sense of \cite{NVY2019}. Note
that, even for a finite quiver $Q$, \cite[Proposition 4]{He2008a}
and \cite[Theorem 3.2]{HT2013} give different weak bialgebra
structures on $A:=\Bbbk Q$ which produce different monoidal
triangulated structures on $D^b(A-\Modfd)$.
{\it Connections between geometry, quiver representations,
and weak bialgebras}.
Going back to classical geometry, let $\mathbb{X}$ be a smooth
projective scheme. If $\mathbb{X}$ is equipped with a full
strongly exceptional sequence (also called strong full
exceptional sequence by some authors) $\{\mathcal{E}_1,\cdots,
\mathcal{E}_n\}$ [Definition \ref{xxdef7.8}], then there
is a triangulated equivalence
\begin{equation}
\label{E0.1.1}\tag{E0.1.1}
D^b(coh(\mathbb{X}))\cong D^b(A-\Modfd)
\end{equation}
where $A$ is the finite dimensional algebra
$[\End_{D^b(coh(\mathbb{X}))}(\oplus_{i=1}^n \mathcal{E}_i)]^{op}$
(some details can be found at the end of Section \ref{xxsec7}).
Since $A$ is finite dimensional (and of finite global
dimension), it seems easier to study $A$ than to study
$\mathbb{X}$ in some aspects. Equivalence \eqref{E0.1.1} induces
a monoidal structure on $D^b(A-\Modfd)$ which usually does
not come from any weak bialgebra structures of $A$
[Example \ref{xxex7.9}]; or in some extreme cases, there
is no weak bialgebra structure on $A$ at all. For such an
algebra $A$, it is imperative to understand and even to
classify all possible monoidal triangulated structures of
$D^b(A-\Modfd)$ (though $A$ may not be a weak bialgebra).
Another well-known example of such a connection is from the
study of weighted projective lines, introduced by Geigle-Lenzing
\cite{GL1987} in 1985 (see Section \ref{xxsec6}). Since then
weighted projective lines have been studied extensively by
many researchers. Let $coh({\mathbb X})$ denote the category
of coherent sheaves over a weighted projective line $\mathbb{X}$.
When $\mathbb{X}$ is domestic, a version of \eqref{E0.1.1} holds
and the representation type of $A$ (appearing in the right-hand
side of \eqref{E0.1.1}) is tame, see Lemma \ref{xxlem6.1}(2).
This is one of the key facts that we will use in this paper.
Recently, a new definition of Frobenius-Perron dimension was
introduced in \cite{CGWZZZ2017, CGWZZZ2019} where the authors
extended its original definition from an object in a semisimple
finite tensor category to an endofunctor of any $\Bbbk$-linear
category. (We refer to Definition \ref{xxdef1.3} for some relevant
definitions.) It turns out that new Frobenius-Perron invariants
are sensitive to monoidal structures; as a consequence, they
are crucial for distinguishing different monoidal triangulated
structures. One general goal of this paper is to provide evidence
that the Frobenius-Perron invariants are effective in studying
monoidal triangulated structures. Some basic properties and
interesting applications of Frobenius-Perron-type invariants
can be found in \cite{CGWZZZ2017, CGWZZZ2019}.
In this paper, we focus on different weak bialgebra structures
on the path algebras of finite quivers and the Frobenius-Perron
theory for finite dimensional hereditary weak bialgebras. As
mentioned above, this is one step beyond the semisimple case.
\begin{definition}
\label{xxdef0.2}
Let ${\mathcal T}$ be a monoidal category and let $\mathcal{P}$
be a function
$$\{{\text{endofunctors of $\mathcal{T}$}}\}
\longrightarrow {\mathbb{R}}_{\geq 0}\cup\{\infty\}.$$
Note that $\mathcal{P}$ could be Frobenius-Perron dimension or
Frobenius-Perron curvature
as given in Definition \ref{xxdef1.3}(6,7). For every object
$M$ in ${\mathcal T}$, let $\mathcal{P}(M)$ denote the
$\mathcal{P}$-value of the tensor functor $M\otimes_{\mathcal{T}} -:
\mathcal{T}\to \mathcal{T}$.
\begin{enumerate}
\item[(1)]
We say $\mathcal{T}$ is {\it $\mathcal{P}$-finite} if
$\mathcal{P}(M)<\infty$ for all objects $M$. Otherwise,
$\mathcal{T}$ is called {\it $\mathcal{P}$-infinite}.
\item[(2)]
If $\mathcal{T}$ is $\mathcal{P}$-infinite and if
$\mathcal{P}(M)<\infty$ for all indecomposable objects $M$,
then $\mathcal{T}$ is called {\it $\mathcal{P}$-tame}.
\item[(3)]
If $\mathcal{T}$ is neither $\mathcal{P}$-finite nor
$\mathcal{P}$-tame, then it is called {\it $\mathcal{P}$-wild}.
\end{enumerate}
\end{definition}
Our first main result concerns the $\fpd$-finite/tame/wild
trichotomy. Let $\Repr(Q)$ be the category of finite dimensional
representations of a quiver $Q$.
\begin{theorem}
\label{xxthm0.3}
Let $Q$ be a finite acyclic quiver and let $\mathcal{T}$
be the triangulated category $D^b(\Repr(Q))$.
\begin{enumerate}
\item[(1)]
$Q$ is of finite type if and only if $\mathcal{T}$ is
$\fpd$-finite for every monoidal triangulated structure
on $\mathcal{T}$, and if and only if there is one monoidal
triangulated structure on $\mathcal{T}$ such that $\mathcal{T}$
is $\fpd$-finite.
\item[(2)]
$Q$ is of tame type if and only if there is a monoidal
triangulated structure on $\mathcal{T}$ such that $\mathcal{T}$
is $\fpd$-tame. In this case, there must be another monoidal
triangulated structure on $\mathcal{T}$ such that $\mathcal{T}$
is $\fpd$-wild.
\item[(3)]
$Q$ is of wild type if and only if $\mathcal{T}$ is
$\fpd$-wild for every monoidal triangulated structure
on $\mathcal{T}$.
\item[(4)]
If $Q$ is tame or wild, then every monoidal triangulated
structure on $\mathcal{T}$ is $\fpd$-infinite.
\end{enumerate}
\end{theorem}
Note that in part (2) of the above theorem, there are two
different monoidal triangulated structures on $\mathcal{T}$,
one of which is $\fpd$-tame and the other is not. We refer
to Definition \ref{xxdef3.1} for the definition of a discrete
monoidal structure. By the above theorem, it is
rare to have $\fpd$-finite monoidal triangulated structures
on $\mathcal{T}$. When one exists, we can say a bit more. The
canonical weak bialgebra structure on the path algebra
$\Bbbk Q$ is given in Lemma \ref{xxlem2.1}(1).
\begin{theorem}
\label{xxthm0.4}
Let $A$ be a finite dimensional hereditary weak bialgebra
such that the induced monoidal structure on $A-\Modfd$
is discrete. Then the following are equivalent:
\begin{enumerate}
\item[(a)]
$A$ is of finite representation type,
\item[(b)]
$\fpd(M)<\infty$ for every irreducible representation $M\in A-\Modfd$,
\item[(c)]
$\fpd(M)<\infty$ for every indecomposable representation $M\in A-\Modfd$,
\item[(d)]
$\fpd(M)<\infty$ for every representation $M\in A-\Modfd$,
\item[(e)]
$\fpd (X)<\infty$ for every indecomposable object $X\in D^b(A-\Modfd)$,
\item[(f)]
The induced monoidal triangulated structure on $D^b(A-\Modfd)$ is
$\fpd$-finite.
\end{enumerate}
\end{theorem}
Suppose further that $A$ is the path algebra $\Bbbk Q$ with canonical
weak bialgebra structure. It follows from Gabriel's theorem that
any of conditions {\rm{(a)}} to {\rm{(e)}} is equivalent to
\begin{enumerate}
\item[(g)]
$Q$ is a finite union of quivers of type $\mathbb{ADE}$.
\end{enumerate}
Since condition (a) in the above theorem is an algebra property, the
$\fpd$-finiteness of $D^b(A-\Modfd)$ only depends on the algebra
structure of $A$, though the definition of $\fpd(X)$ uses the
coalgebra structure of $A$. Note that condition (a) is not equivalent
to condition (b) if we remove the hereditary hypothesis in the above
theorem [Remark \ref{xxrem7.3}(3)].
Using BGP-reflection functors \cite{BGP1973}, Happel showed
that, for Dynkin quivers with the same underlying Dynkin diagram,
the derived categories are equivalent as triangulated categories \cite{Ha1987}.
This remarkable theorem is one of the most beautiful results in
representation theory of finite dimensional algebras. In contrast,
the story is very different when we are working with {\it monoidal
triangulated} structures of the derived category of Dynkin quivers,
see Theorem \ref{xxthm0.5} below. As indicated in
\cite{CGWZZZ2017, CGWZZZ2019}, Frobenius-Perron-type
invariants are extremely useful to study derived
(or triangulated) categories. Using the Frobenius-Perron
curvature, denoted by $\fpv$, of objects in $D^b(A-\Modfd)$
we can prove the following.
\begin{theorem}
\label{xxthm0.5}
Let $A$ and $B$ be finite dimensional hereditary weak
bialgebras. Assume either $A$ is a bialgebra or
$A-\Modfd$ is discrete. Suppose that the monoidal
triangulated categories $D^b(A-\Modfd)$ and
$D^b(B-\Modfd)$ are equivalent. Then $A-\Modfd$ and
$B-\Modfd$ are equivalent as monoidal abelian categories.
\end{theorem}
As a consequence, we have
\begin{corollary}
\label{xxcor0.6}
Suppose that the bounded derived categories of representations of
two finite acyclic quivers are equivalent as monoidal triangulated
categories. Then the quivers are isomorphic.
\end{corollary}
There is also a result concerning an analogue of a $t$-structure
in the monoidal triangulated setting, see Theorem \ref{xxthm0.7}
below. We introduce the notion of an $mtt$-structure on a monoidal
triangulated category in Section 5. Undefined terminology
can be found in Sections 4 and 5.
\begin{theorem}
\label{xxthm0.7}
Let $A$ be a finite dimensional weak bialgebra that is hereditary
as an algebra. Suppose that the induced monoidal structure on
$A-\Modfd$ is discrete. Then there is a unique hereditary
$mtt$-structure with deviation zero on the monoidal triangulated
category $D^b(A-\Modfd)$.
\end{theorem}
It is clear that Theorem \ref{xxthm0.7} applies to $D^b(\Repr(Q))$
where $Q$ is a finite acyclic quiver. A $t$-structure on a
triangulated category has been studied extensively since it was
introduced by Beilinson-Bernstein-Deligne in \cite{BBD1981}. It
is natural to study all $mtt$-structures of a monoidal
triangulated category. In fact, $mtt$-structures serve
as a compelling organizing structure for a monoidal triangulated category. It
is well-known that (hereditary) $t$-structures on $D^b(\Repr(Q))$
are not unique even for quivers of type ${\mathbb A}_n$,
defined below, for $n\geq 3$. Therefore it is surprising that
certain $mtt$-structures (see Theorem \ref{xxthm0.7}) are unique.
This uniqueness result may have significant consequences
beyond Theorem \ref{xxthm0.5} and Corollary \ref{xxcor0.6}. It is also
interesting to search for other classes of monoidal triangulated
categories such that the uniqueness property holds for certain
$mtt$-structures.
Though there is more than one tensor structure on the path
algebra $\Bbbk Q$ for a quiver $Q$, one of these structures
is from the natural coalgebra structure on $\Bbbk Q$,
similar to group algebras [Lemma \ref{xxlem2.1}(1)].
We will present more results concerning the Frobenius-Perron
dimensions of indecomposable representations under such
tensor structure. Before that we need to introduce some
notation. By definition, a type $\mathbb{A}$ quiver (or more
precisely, type $\mathbb{A}_n$ quiver) is a quiver of the
following form
\begin{equation}
\label{E0.7.1}\tag{E0.7.1}
\xymatrix{
1 \ar@{-}[r]^{\alpha_1}&2\ar@{-}[r]^{\alpha_2}
&\cdots\ar@{-}[r]^{\alpha_{i-1}}&i\ar@{-}[r]^{\alpha_i}
&\cdots\ar@{-}[r]^{\alpha_{n-1}}&n}
\end{equation}
where each arrow $\alpha_i$ is either $\longrightarrow$ or
$\longleftarrow$. For each quiver of type $\mathbb{A}_n$,
the arrows $\alpha_i$ will be specified. It is easy to see
that, for each $n\geq 3$, there is more than one
quiver of type $\mathbb{A}_n$ up to isomorphism.
Let us fix a quiver of type $\mathbb{A}_n$, say $Q$,
as above. For $1\leq i\leq j\leq n$, we define a thin representation
of $Q$, denoted by $M\{i,j\}$, by
\begin{equation}
\label{E0.7.2}\tag{E0.7.2}
(M\{i,j\})_s=\begin{cases} \Bbbk & i\leq s\leq j,\\
0 & {\text{otherwise}}\end{cases}
\end{equation}
and
\begin{equation}
\label{E0.7.3}\tag{E0.7.3}
(M\{i,j\})_{\alpha_s}=\begin{cases} Id_{\Bbbk} & i\leq s<j,\\
0& {\text{otherwise}}.\end{cases}
\end{equation}
(This thin representation is sometimes called an interval module
by other researchers.) Then by \cite[p.63]{GR1992},
all such $M\{i,j\}$ form the complete list
of indecomposable representations of $Q$ [Lemma \ref{xxlem1.10}].
For all $i\leq j$, we say
$$M\{i,j\}\;\; {\text{is}} \;\;
\begin{cases}
{\text{a $sink$ $\qquad$ if $\alpha_{i-1}=\longrightarrow$ (or $i=1$) and
$\alpha_{j}=\longleftarrow$ (or $j=n$)}},\\
{\text{a $source\;$ $\quad$ if $\alpha_{i-1}=\longleftarrow$ (or $i=1$) and
$\alpha_{j}=\longrightarrow$ (or $j=n$)}},\\
{\text{a $flow$ $\qquad$ if $\alpha_{i-1}=\alpha_{j}$, and it is either
$\longrightarrow$ or $\longleftarrow$}}.
\end{cases}$$
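For instance, for the quiver $\xymatrix{1 \ar[r] & 2}$ of type
$\mathbb{A}_2$, the simple representation $M\{1,1\}=S(1)$ is a
source (using the convention for $i=1$) and $M\{2,2\}=S(2)$ is a
sink (using the convention for $j=n$); for the equioriented quiver
$\xymatrix{1 \ar[r] & 2 \ar[r] & 3}$ of type $\mathbb{A}_3$, the
representation $M\{2,2\}$ is a flow since
$\alpha_1=\alpha_2=\longrightarrow$.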
Since $\Repr(Q)$ is hereditary, every indecomposable
object in the bounded derived category $D^b(\Repr(Q))$
is of the form $M\{i,j\}[m]$ for some $m\in {\mathbb Z}$.
We have the following result for type $\mathbb{A}_n$.
Some computations in the case of type $\mathbb{D}_n$ quivers
are given in \cite{ZWD2020}.
\begin{theorem}
\label{xxthm0.8}
Let $Q$ be a quiver of type $\mathbb{A}_n$ for some
positive integer $n$. Then the following
hold in the bounded derived
category $D^b(\Repr(Q))$ with tensor defined as in \eqref{E2.1.1}.
\begin{enumerate}
\item[(1)]
$\fpd (M\{i,j\}[m])=0$ for all $m\notin \{0,1\}$.
\item[(2)]
$\fpd(M\{i,j\}[0])=\begin{cases}
1 & {\text{if $M\{i,j\}$ is a sink}},\\
\min\{i, n-j+1\} & {\text{if $M\{i,j\}$ is a source}},\\
1 &{\text{if $M\{i,j\}$ is a flow}}.
\end{cases}$
\item[(3)]
$\fpd(M\{i,j\}[1])=\begin{cases}
\min\{i-1,n-j\} & {\text{if $M\{i,j\}$ is a sink}},\\
0 & {\text{if $M\{i,j\}$ is a source}},\\
0 &{\text{if $M\{i,j\}$ is a flow}}.
\end{cases}$
\end{enumerate}
\end{theorem}
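To illustrate Theorem \ref{xxthm0.8} in the smallest nontrivial
case, take the quiver $Q:\xymatrix{1 \ar[r] & 2 & 3\ar[l]}$ of type
$\mathbb{A}_3$. Then $M\{2,2\}$ is a sink (as
$\alpha_1=\longrightarrow$ and $\alpha_2=\longleftarrow$), and the
theorem gives
$$\fpd(M\{2,2\}[0])=1, \qquad
\fpd(M\{2,2\}[1])=\min\{2-1,\,3-2\}=1,$$
with $\fpd(M\{2,2\}[m])=0$ for all other $m$.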
Related to Project \ref{xxpro0.1} we are also very much interested
in the following questions.
\begin{question}
\label{xxque0.9}
Let $A$ be a finite dimensional weak bialgebra or just an algebra,
or $\Bbbk Q$ where $Q$ is a finite acyclic quiver.
\begin{enumerate}
\item[(1)]
How to determine all monoidal abelian structures on the abelian
category $A-\Modfd$?
\item[(2)]
How to determine all monoidal triangulated structures on the derived
category $D^b(A-\Modfd)$?
\end{enumerate}
\end{question}
The paper is organized as follows. Section 1 contains some basic
definitions. In particular, we recall the definition of the
Frobenius-Perron dimension of an endofunctor. In Section 2 we review
some preliminaries on quiver representations. The notion
of a discrete monoidal abelian category is introduced in
Section 3. A natural example of a discrete monoidal structure
is $\Repr(Q)$, which is the main object of study in this paper. Theorem
\ref{xxthm0.4} is proved in Section 4. In Section 5, we
introduce the notion of an $mtt$-structure of a monoidal
triangulated category that is a monoidal version of the
$t$-structure of a triangulated category. Theorems \ref{xxthm0.5}
and \ref{xxthm0.7}, and Corollary \ref{xxcor0.6} are
proved near the end of Section 5. Section 6 focuses on
the proof of Theorem \ref{xxthm0.3} which uses some
detailed information about weighted projective lines. Section
7 contains various examples which indicate the richness
of monoidal triangulated structures from different
subjects. Section 8 consists of the proof of Theorem
\ref{xxthm0.8} with some non-essential details left out.
\section{Some basic definitions}
\label{xxsec1}
This section contains several basic definitions which will be used
in later sections.
Recall from \cite[Definition 2.1.1]{EGNO2015} that
a {\it monoidal category} ${\mathcal C}$ consists of the following data:
\begin{enumerate}
\item[($\bullet$)]
a category ${\mathcal C}$,
\item[($\bullet$)]
a bifunctor $\otimes: {\mathcal C}\times {\mathcal C}\to
{\mathcal C}$, called the {\it tensor functor},
\item[($\bullet$)]
for each triple $(X,Y, Z)$ of objects in ${\mathcal C}$, a natural isomorphism
$$\alpha_{X,Y,Z}: (X\otimes Y)\otimes Z\xrightarrow{\cong} X\otimes(Y\otimes Z),$$
\item[($\bullet$)]
an object ${\bf 1}\in {\mathcal C}$, called the {\it unit object},
\item[($\bullet$)]
natural isomorphisms $l_X: {\bf 1}\otimes X \xrightarrow{\cong} X$
and $r_X: X\otimes {\bf 1} \xrightarrow{\cong} X$ for each $X$ in ${\mathcal C}$,
\end{enumerate}
such that the pentagon axiom \cite[(2.2)]{EGNO2015} and the triangle axiom
\cite[(2.10)]{EGNO2015} hold. The definitions of a braiding
$\{c_{X,Y}\}_{X,Y\in {\mathcal C}}$ on a monoidal category ${\mathcal C}$
and of a braided monoidal category are given in
\cite[Definitions 8.1.1 and 8.1.2]{EGNO2015} respectively.
By \cite[Definition 8.1.12]{EGNO2015}, a braided monoidal category
${\mathcal C}$ is called {\it symmetric} if
$$c_{Y,X}\circ c_{X,Y} = id_{X\otimes Y}$$
for all objects $X, Y\in {\mathcal C}$.
We usually consider $\Bbbk$-linear categories. Now we
recall some definitions.
\begin{definition}
\label{xxdef1.1}
Let ${\mathcal C}$ be a monoidal category.
\begin{enumerate}
\item[(1)]
We say ${\mathcal C}$ is {\it monoidal $\Bbbk$-linear}
if
\begin{enumerate}
\item[(1a)]
${\mathcal C}$ is $\Bbbk$-linear,
\item[(1b)]
morphisms and functors involved in the definition of a
monoidal category are all $\Bbbk$-linear, and
\item[(1c)]
the tensor functor preserves direct sums in each argument.
\end{enumerate}
\item[(2)]
We say ${\mathcal C}$ is {\it monoidal abelian}
if
\begin{enumerate}
\item[(2a)]
${\mathcal C}$ is a $\Bbbk$-linear abelian category,
\item[(2b)]
${\mathcal C}$ is monoidal $\Bbbk$-linear in the sense of
part (1),
\item[(2c)]
the tensor functor preserves exact sequences in each argument.
\end{enumerate}
\item[(3)] \cite{NVY2019}
We say ${\mathcal C}$ is {\it monoidal triangulated}
if
\begin{enumerate}
\item[(3a)]
${\mathcal C}$ is a $\Bbbk$-linear triangulated category,
\item[(3b)]
${\mathcal C}$ is monoidal $\Bbbk$-linear in the sense of
part (1),
\item[(3c)]
the tensor functor preserves exact triangles and commutes
with the suspension in each argument, and
\item[(3d)]
the suspension satisfies the anti-commuting diagram given
at the end of the definition of a {\it suspended monoidal}
category \cite[Definition 1.4]{Su-Al}.
\end{enumerate}
\end{enumerate}
\end{definition}
We note that axiom (3d)
in the above definition will not be used in this paper.
A tensor triangulated category in the sense of \cite[Definition 1.1]{B2005}
is just a symmetric monoidal triangulated category. We refer the
reader to \cite{EGNO2015} for other details.
Let $({\mathcal C}, \otimes, {\bf 1})$ be a monoidal category
and ${\mathcal A}$ be another category. Following \cite[p.62]{JK2001},
by an {\it action} of ${\mathcal C}$ on ${\mathcal A}$ we mean a
strong monoidal functor
$$F = (f, \tilde{f}, f^{\circ}): {\mathcal C}\longrightarrow
[{\mathcal A}, {\mathcal A}],$$
where $[{\mathcal A}, {\mathcal A}]$ is the category of
endofunctors of ${\mathcal A}$, endowed with the strict monoidal
structure $([{\mathcal A}, {\mathcal A}], \circ, Id_{\mathcal A})$,
where $\circ$ denotes composition and $Id_{\mathcal A}$ is the identity endofunctor.
Here, to give the functor $f : {\mathcal C}\to [{\mathcal A}, {\mathcal A}]$
is equally to give a functor $\odot: {\mathcal C}\times {\mathcal A}
\to {\mathcal A}$ where $X\odot A = (fX)A$ for all $X\in {\mathcal C}$
and $A\in {\mathcal A}$; to give
the invertible and natural $\tilde{f}_{X,Y} : (fX) \circ (fY )
\to f(X \otimes Y )$ (or rather their inverses) is
equally to give a natural isomorphism with components
$$\alpha_{X,Y,A}: (X\otimes Y )\odot A\to X\odot (Y \odot A);$$
to give the invertible $f^{\circ}: Id_{\mathcal A}\to f {\bf 1}$
(or rather its inverse) is equally to give a natural
isomorphism with components $\lambda_{A} : {\bf 1} \odot A\to A$;
and the coherence conditions for $F$ become the commutativity
of the three diagrams \cite[(1.1), (1.2) and (1.3)]{JK2001}
which are the pentagon axiom involving the associator of
${\mathcal C}$ and the triangle axioms for the action of the
unit object ${\bf 1}$ on ${\mathcal A}$ compatible with the left
unitor of ${\mathcal C}$ respectively. It is clear that a monoidal
category ${\mathcal C}$ acts on itself by defining $\odot=\otimes$.
We refer to \cite{JK2001} for more details.
\begin{convention}
\label{xxcon1.2}
Let ${\mathcal C}$ be a monoidal category acting on another
category ${\mathcal A}$.
\begin{enumerate}
\item[(1)]
If both ${\mathcal C}$ and ${\mathcal A}$ are {\it $\Bbbk$-linear},
we automatically assume that
\begin{enumerate}
\item[(1a)]
morphisms and functors involved in the definition of the
action are all $\Bbbk$-linear, and
\item[(1b)]
$\odot$ preserves direct sums in each argument.
\end{enumerate}
\item[(2)]
If both ${\mathcal C}$ and ${\mathcal A}$ are {\it abelian},
we automatically assume that
\begin{enumerate}
\item[(2a)]
morphisms and functors involved in the definition of the
action are all $\Bbbk$-linear,
\item[(2b)]
${\mathcal C}$ is monoidal abelian in the sense of
Definition \ref{xxdef1.1}(2),
\item[(2c)]
$\odot$ preserves exact sequences in each argument.
\end{enumerate}
\item[(3)]
If both ${\mathcal C}$ and ${\mathcal A}$ are {\it triangulated},
we automatically assume that
\begin{enumerate}
\item[(3a)]
morphisms and functors involved in the definition of the
action are all $\Bbbk$-linear,
\item[(3b)]
${\mathcal C}$ is monoidal triangulated in the sense of
Definition \ref{xxdef1.1}(3),
\item[(3c)]
$\odot$ preserves exact triangles and commutes
with the suspension in each argument.
\end{enumerate}
\end{enumerate}
\end{convention}
Next we recall some definitions concerning the Frobenius-Perron
dimension of an endofunctor. We refer to \cite{CGWZZZ2017} for
other related definitions. Let $\dim$ be $\dim_{\Bbbk}$.
\begin{definition} \cite{CGWZZZ2017}
\label{xxdef1.3}
Let $\mathcal{C}$ be a $\Bbbk $-linear category.
\begin{enumerate}
\item[(1)]
An object $X$ in $\mathcal{C}$ is called a {\it brick} if
$$\Hom_{\mathcal{C}}(X,X)=\Bbbk.$$
\item[(2)]
Let $\phi:=\{X_1,\ldots, X_n\}$ be a finite subset of nonzero objects
in $\mathcal{C}$. We say that $\phi$ is a {\it brick set} if each
$X_i\in \phi$ is a brick and
$$\Hom_{\mathcal{C}}(X_i,X_j)=0, \forall \; i\neq j.$$
\item[(3)]
Let $\phi:=\{X_1,\ldots, X_n\}$ be a brick set and let $\sigma$ be an
endofunctor of $\mathcal{C}$. The {\it adjacency matrix} of $(\phi,\sigma)$
is defined to be
$$A(\phi,\sigma)=(a_{ij})_{n\times n},
\quad {\text{where}} \quad
a_{ij}=\dim \Hom_{\mathcal{C}}(X_i, \sigma(X_j))\;\;
\forall \; i,j.$$
\item[(4)]
Let $\Phi_b$ be the collection of all finite brick sets in
$\mathcal{C}$. The {\it Frobenius-Perron dimension} of an
endofunctor $\sigma$ is defined to be
$$\fpd(\sigma)
:= \sup\limits_{\phi\in \Phi_b}\{\rho(A(\phi,\sigma))\}$$
where $\rho(A)$ is the spectral radius of a square matrix $A$
\cite[Section 1]{CGWZZZ2017}, i.e., the largest absolute
value of an eigenvalue of $A$.
\item[(5)]
The {\it Frobenius-Perron curvature} of $\sigma$ is defined to be
$$\fpv (\sigma):=\sup_{\phi\in \Phi_{b}} \{\limsup_{n\to\infty} \;
(\rho(A(\phi,\sigma^n)))^{1/n} \}.$$
\item[(6)]
If $\mathcal{C}$ is a monoidal $\Bbbk$-linear category
acting on a $\Bbbk$-linear category ${\mathcal A}$
and $M$ is an object in
$\mathcal{C}$, the {\it Frobenius-Perron dimension} of $M$
is defined to be
$$\fpd(M):=\fpd(M\odot -)$$
where $M\odot -$ is considered as an endofunctor of
${\mathcal A}$ and $\fpd(M\odot -)$ is defined in part (4).
Similarly, the {\it Frobenius-Perron curvature} of $M
\in {\mathcal C}$ is
defined to be
$$\fpv(M):=\fpv(M\odot -)$$
where $M\odot -$ is considered as an endofunctor of
${\mathcal A}$ and $\fpv(M\odot -)$ is defined in part (5).
\item[(7)]
As a special case of (6),
if $\mathcal{C}$ is a monoidal $\Bbbk$-linear category
and $M$ is an object in
$\mathcal{C}$, the {\it Frobenius-Perron dimension} of $M$ is
defined to be
$$\fpd(M):=\fpd(M\otimes -)$$
where $\fpd(M\otimes -)$ is defined in part (4).
Similarly, the {\it Frobenius-Perron curvature} of $M$ is
defined to be
$$\fpv(M):=\fpv(M\otimes -)$$
where $\fpv(M\otimes -)$ is defined in part (5).
\end{enumerate}
\end{definition}
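To illustrate Definition \ref{xxdef1.3}(3,4) in the simplest
cases: if $\phi=\{X\}$ is a singleton brick set, then
$A(\phi,\sigma)$ is the $1\times 1$ matrix
$(\dim \Hom_{\mathcal{C}}(X,\sigma(X)))$, whose spectral radius is
the entry itself; if $\phi=\{X_1,X_2\}$ and $A(\phi,\sigma)$ is
the all-ones $2\times 2$ matrix, then the eigenvalues are $0$ and
$2$, so $\rho(A(\phi,\sigma))=2$. Since the supremum in part (4)
runs over all finite brick sets, an infinite family of bricks can
force $\fpd(\sigma)=\infty$, as in Example \ref{xxex2.7}.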
When $\mathcal{C}$ is $R-\Modfd$ for an algebra $R$, a brick set
is also called a semibrick \cite{As2020}. If both ``full'' and
``exceptional'' conditions [Definition \ref{xxdef7.8}(2,3)] are
satisfied, this is also known as a simple-minded collection, see
\cite[Definition 3.2]{KY2014}.
Now we recall the definition of representation types.
\begin{definition}
\label{xxdef1.4}
Let $A$ be a finite dimensional algebra over $\Bbbk$.
\begin{enumerate}
\item[(1)]
We say $A$ is of {\it finite type} or {\it finite representation
type} if there are only finitely many isomorphism classes of
finite dimensional indecomposable left $A$-modules.
\item[(2)]
We say $A$ is {\it tame} or {\it of tame representation type}
if it is not of finite representation type, and
for every $n\in {\mathbb N}$, all but finitely many
isomorphism classes of $n$-dimensional indecomposables occur in
a finite number of one-parameter families.
\item[(3)]
We say $A$ is {\it wild} or {\it of wild representation type}
if, for every finite dimensional $\Bbbk$-algebra $B$, the
representation theory of $B$ can be representation embedded
into that of $A$.
\end{enumerate}
\end{definition}
We always assume that the base field $\Bbbk$ is algebraically closed.
A famous trichotomy result due to Drozd \cite{D1979} states
that every finite dimensional algebra is either of finite,
tame, or wild representation type. By classical theorems
of Gabriel \cite{Ga1972} and Nazarova \cite{N1973}, the quivers
of finite and tame representation types correspond to the
$\mathbb{ADE}$ and $\widetilde{\mathbb{A}}\widetilde{\mathbb{D}}
\widetilde{\mathbb{E}}$ diagrams respectively.
By \cite[Theorem 0.3]{CGWZZZ2017}, the representation type
of a quiver $Q$ is indicated by the value of the Frobenius-Perron
dimension of the suspension functor of the derived category
$D^b(\Repr(Q))$.
To show that a monoidal structure is $\fpd$-infinite
[Definition \ref{xxdef0.2}(1)], we need the following concepts.
\begin{definition}
\label{xxdef1.5}
Let $\mathcal{C}$ be a $\Bbbk$-linear category.
\begin{enumerate}
\item[(1)]
Let $\phi$ be an infinite set of objects in $\mathcal{C}$.
We say $\phi$ is an {\it infinite brick set} if
$$\Hom_{\mathcal{C}}(X,Y)=\begin{cases} \Bbbk & {\text{ if }}
X=Y\quad {\text{in $\phi$}},\\
0 & {\text{ if }}
X\neq Y \quad {\text{in $\phi$}}.\end{cases}$$
\item[(2)]
Suppose $\mathcal{C}$ is abelian or triangulated. A brick set
$\phi$ (either finite or infinite) is called a {\it connected brick set}
if $\Ext^1_{\mathcal{C}}(X,Y)\neq 0$ for
all $X,Y\in \phi$.
\end{enumerate}
\end{definition}
Next we turn to the definition of a weak bialgebra.
\begin{definition}
\label{xxdef1.6}
Let $A$ be an algebra with a $\Bbbk$-linear morphism
$\Delta: A\rightarrow A\otimes A$. We say $\Delta$ is a
{\it prealgebra morphism} if
\begin{equation}
\notag
\Delta(ab)=\Delta(a)\Delta(b)
\end{equation}
for all $a,b\in A.$
\end{definition}
A prealgebra morphism is an algebra morphism if and
only if $\Delta(1)=1\otimes 1$ where $1$ is the
identity (or unit) element of $A$.
\begin{definition} \cite[Definition 2.1]{BNS1999}
\label{xxdef1.7}
A {\it weak bialgebra} is a vector space $B$ over the
base field $\Bbbk$ with the structures of
\begin{enumerate}
\item[(a)]
an associative algebra $(B, m, 1 )$ with multiplication
$m: B\otimes B\to B$ and unit $1 \in B$, and
\item[(b)]
a coassociative coalgebra $(B, \Delta, \varepsilon)$ with
comultiplication $\Delta: B\to B\otimes B$ and counit
$\varepsilon: B\to \Bbbk$
\end{enumerate}
satisfying the following conditions.
\begin{enumerate}
\item[(i)]
The comultiplication $\Delta: B\to B\otimes B$ is a
prealgebra morphism.
\item[(ii)]
The unit and counit satisfy
\begin{equation}
\label{E1.7.1}\tag{E1.7.1}
(\Delta(1 )\otimes 1 )(1 \otimes \Delta(1 ))
=(\Delta\otimes Id) \Delta(1 )
=(1 \otimes \Delta(1 ))(\Delta(1 )\otimes 1 )
\end{equation}
and
\begin{equation}
\label{E1.7.2}\tag{E1.7.2}
\varepsilon( xyz)=\sum\varepsilon(x y_{(1)})\varepsilon(y_{(2)}z)=
\sum\varepsilon(x y_{(2)})\varepsilon(y_{(1)}z),
\end{equation}
where $\Delta(y)=\displaystyle\sum y_{(1)}\otimes y_{(2)}$ is Sweedler's notation.
\end{enumerate}
\end{definition}
We refer to \cite{BCJ2011, BNS1999, NTV2003, NV2002} for many
other basic definitions related to weak bialgebras and weak Hopf
algebras. The tensor structure of left modules over a weak
bialgebra \cite[Proposition 2]{NTV2003} is given below.
\begin{definition}
\label{xxdef1.8}
Let $A$ be a weak bialgebra over $\Bbbk$. For two left $A$-modules
$M$ and $N$, define $M\otimes^l N=\Delta(1)(M\otimes_{\Bbbk} N)$
where $\otimes_{\Bbbk}$ is the tensor product over $\Bbbk$.
\end{definition}
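Explicitly, in Sweedler notation, the $A$-action on $M\otimes^l N$
is the diagonal one:
$$a\cdot (m\otimes n)=\sum a_{(1)}m\otimes a_{(2)}n.$$
Since $\Delta(a)=\Delta(1)\Delta(a)$ for the prealgebra morphism
$\Delta$, this action sends $\Delta(1)(M\otimes_{\Bbbk} N)$ to
itself.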
The following lemma is clear.
\begin{lemma}
\label{xxlem1.9}
Let $A$ be a weak bialgebra.
\begin{enumerate}
\item[(1)]
With the tensor product $-\otimes^l-$ given in
Definition \ref{xxdef1.8}, both $A-\Modfd$ and $A-\Mod$ are
monoidal abelian categories.
\item[(2)]
Both
$D^b(A-\Modfd)$ and $D^b(A-\Mod)$ are monoidal triangulated.
\end{enumerate}
\end{lemma}
Finally we mention a fact about quiver representations.
\begin{lemma} \cite[p.63]{GR1992}
\label{xxlem1.10}
Let $Q$ be a quiver of type $\mathbb{A}_n$. Then the
$M\{i,j\}$, for $1\leq i\leq j\leq n$, defined as in
\eqref{E0.7.2}-\eqref{E0.7.3}, form the
complete list of indecomposable representations of $Q$,
up to isomorphisms.
\end{lemma}
\begin{convention}
\label{xxcon1.11}
For the rest of the paper, we will use $A$ for an algebra over
$\Bbbk$. It could have a bialgebra or weak bialgebra structure.
We will use $\mathcal{A}$ for the abelian category of finite
dimensional left $A$-modules, also denoted by $A-\Modfd$.
Let $\mathcal{T}$ be a triangulated category that could have
extra monoidal triangulated structure. Sometimes $\mathcal{T}$
denotes the bounded derived category $D^b(\mathcal{A})$.
A general $\Bbbk$-linear or monoidal category is denoted by
$\mathcal{C}$.
\end{convention}
\section{Preliminaries on quiver representations}
\label{xxsec2}
We refer to \cite{ASS2006} for some basic concepts in quiver
representation theory. Here we fix some conventions. Let $Q=
(Q_0, Q_1, s, t)$ be a quiver where $Q_0$ is the set of
vertices of $Q$, $Q_1$ is the set of arrows of $Q$, and
$s,t: Q_1\to Q_0$ are source and target maps of $Q$ respectively.
Let $M$ be a representation of $Q$. For each vertex $i\in Q_0$,
let $(M)_i$ denote the vector space at $i$. For each arrow
$\alpha\in Q_1$ from vertex $i:=s(\alpha)$ to vertex
$j:=t(\alpha)$, let $(M)_{\alpha}$ denote the $\Bbbk$-linear map
from $(M)_{i}$ to $(M)_{j}$ corresponding to $\alpha$. Let
${\text{Rep}}(Q)$ be the category of all representations of $Q$
and $\Repr(Q)$ be the full subcategory of ${\text{Rep}}(Q)$
consisting of finite dimensional representations. By
\cite[Theorem 1.7 in Chapter VII]{ASS2006}, every finite
dimensional hereditary algebra $A$ is Morita equivalent to
the path algebra $\Bbbk Q$ of a finite acyclic quiver $Q$.
The definition of a weak bialgebra is given in
Definition \ref{xxdef1.7}.
The path algebra $\Bbbk Q$ is naturally equipped with a
coalgebra structure that makes it a weak bialgebra, see
\cite[Example 2.5]{NV2002} and \cite[Section 3]{He2008a}.
We state this known fact as follows.
\begin{lemma}
\label{xxlem2.1}
Let $Q$ be a finite quiver.
\begin{enumerate}
\item[(1)]
Its path algebra $\Bbbk Q$ is a cocommutative weak bialgebra
whose coalgebra structure is determined by
$$\Delta(p)=p\otimes p \quad \mathrm{and }\quad \varepsilon(p)=1$$
for any path $p=\alpha_1\alpha_2\cdots \alpha_m$ of length
$m\geq 0$.
\item[(2)]
The weak bialgebra structure in part {\rm{(1)}} is a bialgebra if
and only if $|Q_0|=1$.
\end{enumerate}
\end{lemma}
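To see the key point in part (2), write $1=\sum_{i\in Q_0} e_i$,
where the $e_i$ are the trivial paths. Then part (1) gives
$$\Delta(1)=\sum_{i\in Q_0} e_i\otimes e_i,$$
which equals $1\otimes 1$ if and only if $|Q_0|=1$. By the remark
after Definition \ref{xxdef1.6}, $\Delta$ is an algebra morphism
precisely in this case.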
Since $\Bbbk Q$ is a cocommutative weak bialgebra,
$\Repr(Q)(\cong \Bbbk Q-\Modfd)$ is a symmetric monoidal
abelian category where the tensor product is given in Definition
\ref{xxdef1.8}. For two representations $M=((M)_i,(M)_\alpha)$
and $N=((N)_i,(N)_\alpha)$ of $Q$ where $i\in Q_0$ and
$\alpha\in Q_1$, we can define the {\it vertex-wise tensor
product} $M\otimes^{v} N$ by
\begin{equation}
\label{E2.1.1}\tag{E2.1.1}
(M\otimes^{v} N)_i=(M)_i \otimes_{\Bbbk} (N)_i, \quad {\text{and}}\quad
(M\otimes^{v} N)_\alpha=(M)_\alpha \otimes_{\Bbbk} (N)_\alpha,
\end{equation}
for all $i\in Q_0$ and $\alpha\in Q_1$. Then the tensor product
$M\otimes^l N$ given in Definition \ref{xxdef1.8} is exactly equal to
the vertex-wise tensor product $M\otimes^{v} N$ given in \eqref{E2.1.1}.
Therefore, we do not distinguish these two tensors and denote them by
$M\otimes N$. The tensor structure of quiver representations has been
studied by many researchers, see, for example, \cite{He2005,
He2008a, He2008b, He2009, KS2012, Ki2010}. Note that the
bounded derived category $D^b(\Repr(Q))$ is a tensor triangulated
category in the sense of \cite[Definition 1.1]{B2005}; consequently,
it is a monoidal triangulated category.
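Let us record why these two tensors agree. As computed after Lemma
\ref{xxlem2.1}, $\Delta(1)=\sum_{i\in Q_0}e_i\otimes e_i$, so
$$M\otimes^l N=\Delta(1)(M\otimes_{\Bbbk}N)
=\bigoplus_{i\in Q_0}(M)_i\otimes_{\Bbbk}(N)_i,$$
and each arrow $\alpha\in Q_1$ acts via
$\Delta(\alpha)=\alpha\otimes\alpha$, that is, by
$(M)_{\alpha}\otimes_{\Bbbk}(N)_{\alpha}$; this is exactly
\eqref{E2.1.1}.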
In this paper we study more than one tensor structure on the quiver
representations. But, in this section, we work only with the
tensor structure defined by \eqref{E2.1.1}. We start with some
details about quiver representations.
We have defined the Frobenius-Perron dimension, denoted by
$\fpd$, of an object in a monoidal category $\mathcal{C}$
in Definition \ref{xxdef1.3}(7). A nice property of $\fpd$
is the following duality when applied to objects in $\Repr(Q)$.
\begin{definition}
\label{xxdef2.2}
Let $Q=(Q_0,Q_1,s_Q,t_Q)$ be a quiver and $M$ be a finite-dimensional
representation of $Q$.
\begin{enumerate}
\item[(1)]
Define the {\it opposite quiver} of $Q$, denoted by $Q^{op}$,
to be the quiver which reverses all arrows in $Q_1$, that is
$$Q^{op}_0=Q_0, Q^{op}_1=Q_1, s_{Q^{op}}=t_Q, t_{Q^{op}}=s_Q.$$
\item[(2)]
Define the {\it dual} of $M$, denoted by $M^*$, to be the
representation of $Q^{op}$ that is determined by
$$(M^*)_i=((M)_i)^*, (M^*)_{\alpha}=((M)_{\alpha})^*,$$
for all vertices $i$ and arrows $\alpha$.
\end{enumerate}
\end{definition}
We give an easy example.
\begin{example}
\label{xxex2.3}
Let $Q$ be $\xymatrix{1 \ar[r] & 2}$ and $M$ be
$\xymatrix{\Bbbk \ar[r]^{(1,0)^T} & \Bbbk^2}$. Then we have
$$Q^{op}: \xymatrix{1 & 2\ar[l]} \quad
{\text{and}} \quad
M^*= \xymatrix{\Bbbk & \Bbbk^2\ar[l]_{(1,0)} }.$$
\end{example}
For two finite dimensional $\Bbbk$-vector spaces $U,V$, we
have
$$(V\otimes U)^*= U^*\otimes V^*\cong V^*\otimes U^*.$$
Furthermore, if we have linear maps between finite dimensional
$\Bbbk$-vector spaces, say $f:V \rightarrow V'$ and
$g: U\rightarrow U'$, then we have the commutative diagram
$$
\xymatrix{
(V\otimes U)^* \ar[d]^{\simeq }&
(V'\otimes U')^*\ar[d]^{\simeq }\ar[l]_{(f\otimes g)^*} \\
V^*\otimes U^* & V'^* \otimes U'^*. \ar[l]_{f^*\otimes g^*}}
$$
The above commutative diagram holds for objects in
$\Repr(Q)$ since $\Bbbk Q$ is a cocommutative weak bialgebra
[Lemma \ref{xxlem2.1}(1)]. It is clear that the $\Bbbk$-linear
dual induces a contravariant equivalence between the abelian
categories $\Repr(Q)$ and $\Repr(Q^{op})$. Combining these
two facts, we have
\begin{align}
\label{E2.3.1}\tag{E2.3.1}
\Hom_{(\Repr(Q))^{op}}(X,M\otimes N)
&\cong \Hom_{\Repr(Q)}(M\otimes N, X)\\
\notag &\cong
\Hom_{\Repr(Q^{op})}(X^*, M^*\otimes N^*)
\end{align}
for $M,N,X\in \Repr(Q)$. Now the following
lemma follows from \eqref{E2.3.1}.
\begin{lemma}
\label{xxlem2.4}
Let $Q$ be a finite quiver and $M$ be a finite dimensional
representation of $Q$. Then
$$\fpd(M\otimes_{{\Repr(Q)}^{op}}-)
=\fpd(M^*\otimes_{{\Repr(Q^{op})}}-)$$
where $M$ is considered as an object in the tensor
category ${\Repr(Q)}^{op}$ and $M^*$ an object
in $\Repr(Q^{op})$.
The same statement holds for other Frobenius-Perron
invariants such as $\fpv$.
\end{lemma}
Next we study some brick sets of quiver representations.
Let $S(i)$ denote the simple representation (of $Q$) at
vertex $i$ where
\begin{equation}
\label{E2.4.1}\tag{E2.4.1}
S(i)_j=\begin{cases}
\Bbbk & j=i\\
0 & j\neq i
\end{cases}
\quad \mathrm{and} \quad
S(i)_\alpha=0,\;\; \forall \;\; \alpha\in Q_1,
\end{equation}
and $e_i$ denote the trivial path at vertex $i$.
By the tensor structure of $\Repr(Q)$ \eqref{E2.1.1},
we have the following.
\begin{lemma}
\label{xxlem2.5}
Let $S(i)$ be the simple left $\Bbbk Q$-module defined as above
and $M$ in $\Repr(Q)$. Then $S(i)\otimes M$ is
isomorphic to a direct sum of finitely many copies of $S(i)$.
\end{lemma}
In the above lemma, $S(i)\otimes M$ could be 0.
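Indeed, by \eqref{E2.1.1},
$$(S(i)\otimes M)_j=S(i)_j\otimes_{\Bbbk}(M)_j=
\begin{cases}(M)_i & j=i,\\ 0 & j\neq i,\end{cases}$$
and every arrow acts by zero, so $S(i)\otimes M\cong
S(i)^{\oplus \dim (M)_i}$; in particular, $S(i)\otimes M=0$
exactly when $(M)_i=0$.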
\begin{proposition}
\label{xxpro2.6}
Let $M$ be in $\Repr(Q)$. Then
$$\fpd(M)\geq d,$$
where $d=\max\limits_{v\in Q_0}\{\dim ((M)_v)\}$.
\end{proposition}
\begin{proof}
Fix $v\in Q_0$, let $a=\dim (M)_v$, and let $\phi_0=\{S(v)\}$. Then
$$\Hom_{\Repr(Q)}(S(v),M\otimes S(v))
=\Hom_{\Repr(Q)}(S(v),S(v)^{\oplus a})=\Bbbk^{\oplus a},$$
which implies that $A(\phi_0, M\otimes -)=(a)_{1\times 1}$. Therefore
$\fpd(M)\geq a$ for every $v\in Q_0$. The assertion follows.
\end{proof}
Note that $\fpd(M)$ may be infinite as the next example
shows (and as predicted by Lemma \ref{xxlem6.4}).
\begin{example}
\label{xxex2.7}
Let $Q$ be the Kronecker quiver $\xymatrix{1 \ar@<1ex>[r]^{\alpha}
\ar[r]_{\beta} & 2}$. Let $S(1)$ be defined as in \eqref{E2.4.1}.
For every $c\in \Bbbk$, we define an object in $\Repr(Q)$:
\begin{equation}
\label{E2.7.1}\tag{E2.7.1}
M_c:=\xymatrix{\Bbbk \ar@<1ex>[r]^{\alpha=Id} \ar[r]_{\beta=c Id} & \Bbbk}.
\end{equation}
Then $M_c$ is a brick object (and such an object is also called a band
module of $Q$ \cite[pp.160-161]{BR1987}). It is easy to see that
$\Hom(M_c, S(1))\cong \Bbbk$ and that $\{M_c,M_{c'}\}$ is a brick set
if $c\neq c'$. As a consequence, $\{M_c \mid c\in \Bbbk\}$
is an infinite brick set.
Let $T$ be any finite subset of $\Bbbk$ and let $\phi:=
\{M_c \mid c\in T\}$. Then $A(\phi, S(1)\otimes -)$ is
a $|T|\times|T|$ matrix in which all entries are 1. Then
$\rho(A(\phi, S(1)\otimes -))=|T|$. Since $\Bbbk$ is infinite,
we obtain that $\fpd(S(1))=\infty$.
\end{example}
Let us consider a slightly more general situation.
\begin{example}
\label{xxex2.8}
Suppose $Q$ is another quiver and $p_1$ and $p_2$ are two paths from
vertex $i$ to vertex $j$ that do not intersect except at
the two endpoints. Then we can consider a similar brick object
$M_c$ so that
$$\begin{aligned}
(M_c)_v&=\begin{cases}
\Bbbk & \quad {\textrm{if $v$ is in either $p_1$ or $p_2$}},\\
0 & \quad {\textrm{otherwise,}}\end{cases}
\\
(M_c)_{\alpha}&=\begin{cases}
Id & \quad {\textrm{if $\alpha$ is in either $p_1$ or $p_2$, but
not the first arrow in $p_2$}},\\
cId & \quad {\textrm{if $\alpha$ is the first arrow in $p_2$}},\\
0 & \quad {\textrm{otherwise,}}\end{cases}
\end{aligned}
$$
or, similar to \eqref{E2.7.1}, we can write it as
$$M_c:=\xymatrix{\Bbbk \ar@<1ex>[r]^{p_1=Id} \ar[r]_{p_2=cId }
& \Bbbk}.$$
Then $\{M_c \mid c\in \Bbbk\}$ is an infinite brick set.
\end{example}
We will use this example later.
\section{Discrete categories}
\label{xxsec3}
In this section we will prove some basic lemmas for
monoidal abelian categories that are needed in the
proof of Theorem \ref{xxthm0.4}. We start with a definition.
\begin{definition}
\label{xxdef3.1}
Let $\mathcal{C}$ be a monoidal abelian category.
We say $\mathcal{C}$ is {\it discrete} if
\begin{enumerate}
\item[(a)]
$\mathcal{C}$ is $\Hom$-finite, namely
$\Hom_{\mathcal{C}}(M,N)$ is finite dimensional over $\Bbbk$ for objects
$M,N$ in ${\mathcal C}$,
\item[(b)]
every object in $\mathcal{C}$ has finite length,
\item[(c)]
$\mathcal{C}$ has finitely many simple objects, say
$\{S_1,\cdots,S_n\}$, up to isomorphisms, and
\item[(d)]
for all simple objects $S_i$ and $S_j$ in
$\mathcal{C}$,
\begin{equation}
\label{E3.1.1}\tag{E3.1.1}
S_i\otimes S_j\cong \begin{cases} S_i & {\textrm{ if $i=j$}}\\
0 & {\textrm{ if $i\neq j$}}.\end{cases}
\end{equation}
\end{enumerate}
\end{definition}
Note that an essentially small category $\mathcal{C}$ satisfying
condition (b) is called a {\it length} category \cite{Ga1973,KV2018}.
Let $Q$ be a finite quiver. Then there is a canonical monoidal
abelian structure on $\Repr(Q)$ induced by the weak bialgebra
structure defined in Lemma \ref{xxlem2.1}. The following lemma
follows immediately from the definition, see \eqref{E2.1.1}.
\begin{lemma}
\label{xxlem3.2}
Let $Q$ be a finite acyclic quiver. Then the canonical
monoidal abelian structure on $\Repr(Q)$ is discrete.
\end{lemma}
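Indeed, for a finite acyclic quiver $Q$ the simple objects of
$\Repr(Q)$ are the vertex simples $S(i)$ of \eqref{E2.4.1}, and
\eqref{E2.1.1} gives
$$(S(i)\otimes^v S(j))_k=S(i)_k\otimes_{\Bbbk} S(j)_k=
\begin{cases}\Bbbk & i=j=k,\\ 0&{\text{otherwise}},\end{cases}$$
so \eqref{E3.1.1} holds; conditions (a)-(c) of Definition
\ref{xxdef3.1} are standard for $\Repr(Q)$.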
For the rest of this section we assume that $\mathcal{C}$
is discrete.
As a consequence, $\mathcal{C}$ is a Krull-Schmidt category.
For $M\in \mathcal{C}$, let $\ell(M)$ denote the length of
$M$. Let $IC(M)$ denote the multiset of {\it isomorphism classes}
of all (possibly repeated) simple subquotients of $M$.
This can be obtained by considering any composition
series of $M$. Even though a composition series of $M$ is not
unique, $IC(M)$ is unique by the Jordan-H\"older theorem, so it
is well-defined.
\begin{lemma}
\label{xxlem3.3}
Let $\mathcal{C}$ be a $\Hom$-finite monoidal abelian category with
finitely many simple objects. Let $\mathbf{1}$ be the unit object.
\begin{enumerate}
\item[(1)]
$\ell(-)$ is additive.
\item[(2)]
For every nonzero object $M$ there is a simple object
$S\in IC(\mathbf{1})$ such that $S\otimes M\neq 0$. By symmetry,
there is a simple object $T\in IC(\mathbf{1})$ such that
$M\otimes T\neq 0$.
\item[(3)]
If $S\in IC(\mathbf{1})$ and $T$ is
a simple object in $\mathcal{C}$,
then $S\otimes T$ is either 0 or a simple object. For each $T$,
there is only one $S\in IC(\mathbf{1})$ such that
$S\otimes T\neq 0$.
\item[(4)]
The multiplicity of any simple object $S$ in $IC(\mathbf{1})$ is 1.
\item[(5)]
If $S, T\in IC(\mathbf{1})$, then
$$S\otimes T\cong \begin{cases} S & {\textrm{ if $S=T$}}\\
0 & {\textrm{ if $S\neq T$}}.\end{cases}
$$
\item[(6)]
$\mathcal{C}$ is discrete if and only if
$S\in IC(\mathbf{1})$ for all simple objects $S$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Clear from the definition.
(2) By part (1), we have
\begin{equation}
\label{E3.3.1}\tag{E3.3.1}
\ell(M)=\ell(\mathbf{1}\otimes M)=\sum_{S\in IC(\mathbf{1})}
\ell(S\otimes M).
\end{equation} Therefore there is an $S\in IC(\mathbf{1})$
such that $S\otimes M\neq 0$.
(3) If $S\otimes T\neq 0$, then, by \eqref{E3.3.1}, we have
$$1=\ell(T)=\ell(\mathbf{1}\otimes T)
= \sum_{S'\in IC(\mathbf{1})} \ell(S'\otimes T)
\geq \ell(S\otimes T)\geq 1.$$
Therefore $\ell(S\otimes T)= 1$ and $\ell(S'\otimes T)=0$
for all other $S'\in IC(\mathbf{1})$.
(4) This follows from \eqref{E3.3.1} by taking a simple
object $M$ with $S\otimes M\neq 0$.
(5) It remains to show that $S\otimes T=0$ if
$S$ and $T$ are distinct elements in $IC(\mathbf{1})$.
Suppose on the contrary that $U:=S\otimes T\neq 0$.
By part (3), it is a simple object. Since $U=S\otimes T$ is
a subquotient of $\mathbf{1}\otimes \mathbf{1}$,
$U$ is in $IC(\mathbf{1})$. Since $S\neq T$, we have
either $U\neq S$ or $U\neq T$. By symmetry, we assume
that $U\neq S$. By part (3), there is only one
$W\in IC(\mathbf{1})$ such that $W\otimes U\neq 0$.
This implies that $W\otimes S\neq 0$ as $U=S\otimes T$.
There are two different objects, namely, $S,U\in IC(\mathbf{1})$
such that $W\otimes S\neq 0$ and $W\otimes U\neq 0$.
By the left-version of part (3) this is impossible.
The assertion follows.
(6) If $IC(\mathbf{1})$ contains all simple objects, then
by part (5), $\mathcal{C}$ is discrete.
Conversely, suppose $\mathcal{C}$ is discrete.
For every simple object $T$, by part (2), there is
an $S\in IC(\mathbf{1})$ such that $S\otimes T\neq 0$.
By the definition of discreteness, $T=S$.
So $T\in IC(\mathbf{1})$.
\end{proof}
\begin{proposition}
\label{xxpro3.4}
Let $A$ be a finite dimensional algebra of finite
global dimension. Suppose that $(A-\Modfd, \otimes)$
is a discrete monoidal abelian category. Then, for any simple
left $A$-module $S$ and any $M\in A-\Modfd$,
$$M\otimes S\cong S^{\oplus n}$$
where $n$ is the number of copies of $S$ in
a composition series of $M$.
\end{proposition}
\begin{proof}
By the ``no loops conjecture'', which was proved by Igusa \cite{Ig1990},
\begin{equation}
\label{E3.4.1}\tag{E3.4.1}
\Ext^1_{A}(S,S)=0.
\end{equation}
By definition, $-\otimes-$ is biexact. Hence
$M\otimes S$ has a composition series that is induced
by the composition series of $M$. Let $T$ be a simple
subquotient of $M$. Then $T\otimes S$ is either $S$
when $T\cong S$ or $0$ if $T\not\cong S$. Thus
$M\otimes S$ has a composition series with each
simple subquotient being $S$. The assertion follows
from \eqref{E3.4.1}.
\end{proof}
Recall that $\otimes^v$ is the
canonical tensor given in \eqref{E2.1.1}.
We have an immediate consequence.
\begin{corollary}
\label{xxcor3.5}
Let $Q$ be a finite acyclic quiver.
If $(\Repr(Q), \otimes)$ is another discrete monoidal abelian
structure on $\Repr(Q)$, then for any $M\in \Repr(Q)$
and any simple representation $S$ over $Q$,
$$M\otimes S\cong M\otimes^v S$$
where $\otimes^v$ is defined as in \eqref{E2.1.1}.
\end{corollary}
There are many monoidal categories that are not
discrete. For example, for a finite quiver $Q$,
if $\Repr(Q)$ is equipped with the monoidal structure coming from
another bialgebra structure, it may not be discrete,
see Proposition \ref{xxpro7.7}(a-d).
We conclude this section with the definition of
a discrete action.
\begin{definition}
\label{xxdef3.6}
Let $\mathcal{C}$ be a monoidal abelian category acting on
an abelian category ${\mathcal A}$. Assume that both
${\mathcal C}$ and ${\mathcal A}$ satisfy Definition
\ref{xxdef3.1}(a,b,c). Let $\{T_1,\cdots,T_n\}$ (respectively,
$\{S_1,\cdots,S_m\}$) be the complete list of simple objects
in ${\mathcal C}$ (respectively, ${\mathcal A}$), where $m\geq n$.
The action of $\mathcal{C}$ on ${\mathcal A}$ is called
{\it discrete} if
\begin{enumerate}
\item[(d)']
there is a permutation $\sigma\in S_n$ such that
\begin{equation}
\label{E3.6.1}\tag{E3.6.1}
T_i\odot S_j\cong \begin{cases} S_j & {\textrm{ if $j=\sigma(i)$}}\\
0 & {\textrm{ if $j\neq \sigma(i)$}}.\end{cases}
\end{equation}
\end{enumerate}
\end{definition}
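For example, by Lemma \ref{xxlem3.2} and \eqref{E3.1.1}, for a
finite acyclic quiver $Q$ the canonical monoidal category
$\Repr(Q)$ acts discretely on itself (with $\odot=\otimes^v$ and
$\sigma$ the identity permutation).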
\section{Proof of Theorem \ref{xxthm0.4}}
\label{xxsec4}
The aim of this section is to prove Theorem \ref{xxthm0.4}.
We first need to recall some facts from the representation theory
of quivers.
\begin{proposition}
\cite[Proposition 2.5 in Chapter VII]{ASS2006}
\label{xxpro4.1}
Let $Q$ be a finite, connected, and acyclic quiver and $M$
be a brick such that there exists $a\in Q_0$ with
$\dim (M)_a >1$. Let $Q'$ be the quiver defined as
follows: $Q'=(Q'_0,Q'_1)$, where $Q'_0=Q_0\cup \{b\}$;
$Q'_1=Q_1\cup \{\alpha\}$; and $\alpha:b\rightarrow a$.
Then $\Bbbk Q'$ is of infinite representation type.
\end{proposition}
By duality and Proposition \ref{xxpro4.1}, if $\alpha$
is an arrow of the form $a\rightarrow b$, then $\Bbbk Q'$
is also of infinite representation type.
\begin{lemma}\cite[Corollary 5.14 in Chapter VII]{ASS2006}
\label{xxlem4.2}
If $Q$ is a quiver of type $\mathbb{ADE}$, see
\cite[p.252]{ASS2006}, then every indecomposable
representation of $Q$ is a brick.
\end{lemma}
Recall from Definition \ref{xxdef1.4}(3) that an algebra $A$
is {\it wild} or {\it of wild representation type}
if there is a faithful exact embedding of abelian categories
\begin{equation}
\label{E4.2.1}\tag{E4.2.1}
Emb : \Bbbk\langle x_1, x_2 \rangle -\Modfd
\longrightarrow \mathcal{A}:=A-\Modfd
\end{equation}
that preserves indecomposables and respects isomorphism classes
(namely, for all objects $M_1,M_2$ in $\Bbbk\langle
x_1, x_2\rangle-\Modfd$, $Emb(M_1) \cong Emb(M_2)$ if and only if
$M_1\cong M_2$). A stronger notion of wildness is the following.
An algebra $A$ is called {\it strictly wild}, or {\it fully wild},
if $Emb$ in \eqref{E4.2.1} is a fully faithful embedding
\cite[Proposition 5]{Ar2005}. By definition, strictly wild implies
wild, but the converse is not true. It is well-known that a wild
path algebra $\Bbbk Q$ is always strictly wild, see a comment of
Gabriel \cite[p.140]{Ga197374} or \cite[Proposition 7]{Ar2005}.
\begin{lemma}
\label{xxlem4.3}
Let $A$ be a finite dimensional algebra that is strictly wild.
Let $\mathcal{C}$ be an abelian category containing
$\mathcal{A}$ as a full subcategory. Then
$\mathcal{C}$ contains an infinite connected brick set. As a
consequence, if $Q$ is a finite acyclic quiver that is wild,
then $\Repr(Q)$ contains an infinite connected brick set.
\end{lemma}
\begin{proof} The consequence follows from the fact that
a wild quiver is strictly wild. So we only prove the main
assertion.
Let $A$ be strictly wild. By definition, there is
a fully faithful embedding
\begin{equation}
\notag
Emb : \Bbbk\langle x_1, x_2 \rangle -\Modfd\longrightarrow
\mathcal{A}\longrightarrow \mathcal{C}.
\end{equation}
For each $c\in \Bbbk$, let $M(c)$ denote the 1-dimensional simple
module $\Bbbk\langle x_1, x_2\rangle/(x_1-c,x_2)$ and let $N_c$
be $Emb(M(c))$. By taking a free resolution of $M(c)$, one can check
that $\Ext^1_{\Bbbk\langle x_1, x_2\rangle}(M(c),M(c'))\neq 0$
for all $c,c'$. Hence $\{M(c)\mid c\in\Bbbk\}$ is an infinite
connected brick set in $\Bbbk\langle x_1, x_2 \rangle-\Modfd$.
Since $Emb$ is a fully faithful embedding, $\{N_c\mid c\in\Bbbk\}$
is an infinite connected brick set of $\mathcal{C}$.
\end{proof}
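For the reader's convenience, here is one way to carry out the
$\Ext$-computation mentioned above. Write $A'=\Bbbk\langle x_1,
x_2\rangle$. One can check that the left ideal $A'(x_1-c)+A'x_2$
is free with basis $\{x_1-c,\, x_2\}$ (free algebras are free
ideal rings), yielding a free resolution
$$0\longrightarrow A'^{\oplus 2}
\xrightarrow{\;(a,b)\mapsto a(x_1-c)+bx_2\;} A'
\longrightarrow M(c)\longrightarrow 0.$$
Applying $\Hom_{A'}(-,M(c'))$ identifies
$\Ext^1_{A'}(M(c),M(c'))$ with the cokernel of the map
$\Bbbk\to \Bbbk^{2}$, $m\mapsto ((c'-c)m,\,0)$, which is nonzero
for all $c,c'\in \Bbbk$.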
\begin{lemma}
\label{xxlem4.4}
Let $\mathcal{C}$ be an abelian category of finite
global dimension and let $\mathcal{T}$ be the bounded
derived category $D^b(\mathcal{C})$. Suppose that
\begin{enumerate}
\item[(1)]
$\mathcal{T}$ is triangulated equivalent to
$D^b(B-\Modfd)$ for a finite dimensional hereditary
algebra $B$ via tilting object $X$, namely,
$$\RHom_{\mathcal{T}}(X,-):\mathcal{T}
\to D^b(B-\Modfd)$$
is a triangulated equivalence where $B\cong
\RHom_{\mathcal{T}}(X,X)$, and
\item[(2)]
$\mathcal{C}$ contains an infinite {\rm{(}}respectively,
infinite connected{\rm{)}} brick set.
\end{enumerate}
Then $B-\Modfd$ contains an infinite {\rm{(}}respectively,
infinite connected{\rm{)}} brick set.
\end{lemma}
Note that, if $\mathcal{C} =A-\Modfd$ for some
finite dimensional algebra $A$ and if $\mathcal{T}$
is triangulated equivalent to $D^b(B-\Modfd)$,
then, by tilting theory, the existence of $X$ is
automatic.
\begin{proof}[Proof of Lemma \ref{xxlem4.4}]
We only prove the assertion for ``infinite brick set''.
The proof for ``infinite connected brick set''
is similar.
Let
$$F:=\RHom_{\mathcal{T}}(X, -): \mathcal{T}
\longrightarrow D^b(B-\Modfd)$$
be an equivalence of triangulated categories. Let
$\{N(c)\mid c\in U\}$ be an infinite brick set of $\mathcal{C}$
by hypothesis. Then
$$\{F(N(c)) \mid c\in U\}$$
is an infinite brick set of $D^b(B-\Modfd)$.
Since $X$ has finite projective dimension, there is an integer
$n$ independent of $c\in U$ such that
\begin{equation}
\label{E4.4.1}\tag{E4.4.1}
{\text{$H^i(F(N(c)))=0$ for all $|i| > n$.}}
\end{equation}
Note that $B-\Modfd$ is hereditary, which implies that
every indecomposable object in $D^b(B-\Modfd)$ is of the
form $M[i]$ for some indecomposable object $M\in B-\Modfd$
and for some $i$ \cite[Section 2.5]{Ke2007}. By \eqref{E4.4.1},
$F(N(c))=M_c[i_c]$ for some indecomposable object
$M_c\in B-\Modfd$ and some integer $|i_c|\leq n$.
Since $U$ is infinite, there is an infinite subset $U'\subseteq U$
such that $i_c$ is a constant for all $c\in U'$. Let $i_0$ denote
such $i_c$. Thus $\{M_c[i_0]\mid c\in U'\}$ is an infinite brick
set in $D^b(B-\Modfd)$. Since the suspension $[1]$ is an
autoequivalence of $D^b(B-\Modfd)$, $\{M_c\mid c\in U'\}$ is an
infinite brick set in $D^b(B-\Modfd)$. Finally, using the
fact that $B-\Modfd$ is a full subcategory of
$D^b(B-\Modfd)$, we obtain that $\{M_c\mid c\in U'\}$ is an
infinite brick set in $B-\Modfd$.
\end{proof}
\begin{lemma}
\label{xxlem4.5}
Let $A$ be a finite dimensional hereditary algebra that is not
of finite representation type. Then
the abelian category $A-\Modfd$ contains an infinite brick
set. As a consequence, if $Q$ is a finite acyclic quiver not of type
$\mathbb{ADE}$, then $\Repr(Q)$ contains an infinite brick set.
\end{lemma}
\begin{proof} By \cite[Theorem 1.7 in Chapter VII]{ASS2006}
every such $A$ is Morita equivalent to a path algebra $\Bbbk Q$ for
some finite acyclic quiver $Q$. By Lemma \ref{xxlem4.4}, we may
assume that $A$ is $\Bbbk Q$.
Since $A$ is not of finite representation type, $Q$ is not of finite type.
Lemma \ref{xxlem4.3} settles the case where $Q$ is of wild representation type.
Case 1: $Q$ is of type $\widetilde{\mathbb{A}}$. Since $Q$ is acyclic,
there exist two different paths $p_1$ and $p_2$ from $v$ to $u$,
where $v\neq u\in Q_0$. We can further assume that
the length of $p_1$ is smallest among all such choices. In this case,
$\Repr(Q)$ contains an infinite brick set by Example \ref{xxex2.8}.
Case 2: $Q$ is of type $\widetilde{\mathbb{D}}$ or $\widetilde{\mathbb{E}}$.
We consider a slightly more general situation and then apply the
assertion to the special case (see quivers in
\cite[Corollary 2.7 in Chapter VII]{ASS2006}). If there exists a
subquiver $Q'$ of $Q$ and an indecomposable representation $M$
of $Q'$ satisfying:
\begin{enumerate}
\item[(a)]
$Q'$ is a quiver of type $\mathbb{D}$ or $\mathbb{E}$,
\item[(b)]
there exists $x\in Q'_0$, $\dim (M)_x>1$,
\item[(c)]
there exists a vertex $y\in Q_0\setminus Q'_0$,
\item[(d)]
there exists an arrow $\alpha\in Q_1$ such that $\alpha:y\rightarrow x$,
\end{enumerate}
then we construct a new representation $M(\lambda)$ as follows:
\[(M(\lambda))_v=
\begin{cases}
(M)_v & \mathrm{if}~ v\in Q'_0\\
\Bbbk & \mathrm{if}~ v=y\\
0 & \mathrm{otherwise},
\end{cases}
\qquad\qquad
(M(\lambda))_{\beta}=
\begin{cases}
(M)_{\beta} & \mathrm{if}~ \beta\in Q'_1\\
\lambda & \mathrm{if}~ \beta=\alpha\\
0 & \mathrm{otherwise},
\end{cases}
\]where $\lambda: \Bbbk \rightarrow (M)_x$ is a $\Bbbk$-linear map.
Then by the proof of \cite[Proposition 2.5 in Chapter VII]{ASS2006},
each $M(\lambda)$ is a brick and there exist infinitely many
pairwise non-isomorphic bricks of the form $M(\lambda)$. In fact,
the proof of \cite[Proposition 2.5 in Chapter VII]{ASS2006} shows that
there is an infinite set $U$ of maps $\lambda: \Bbbk \to (M)_x$ such that
$\Hom_{\Repr(Q)}(M(\lambda), M(\lambda'))=0$ for all distinct
$\lambda,\lambda'\in U$. This means that $\Repr(Q)$ contains an
infinite brick set. Dually, if we change the condition (d) into (d)':
\begin{enumerate}
\item[(d)']
there exists an arrow $\alpha\in Q_1$ such that $\alpha:x\rightarrow y$,
\end{enumerate}
we can still construct an infinite brick set as above.
Now we go back to a quiver of type $\widetilde{\mathbb{D}}$ or $\widetilde{\mathbb{E}}$.
By Lemma \ref{xxlem4.4}, we can
assume that
\begin{enumerate}
\item[(e)]
$Q'$ is one of the quivers in
\cite[Corollary 2.6 in Chapter VII.2]{ASS2006}, and that
\item[(f)]
(c) and (d) hold.
\end{enumerate}
Note that (e) implies that (a) holds.
By \cite[Corollary 2.6 in Chapter VII.2]{ASS2006}, (b) holds.
Therefore we have proved that $\Repr(Q)$ contains an infinite brick
set.
\end{proof}
\begin{lemma}
\label{xxlem4.6}
Let $\mathcal{C}$ be a monoidal abelian category acting on
an abelian category ${\mathcal A}$. Assume that both
${\mathcal C}$ and ${\mathcal A}$ satisfy Definition
\ref{xxdef3.1}(a,b,c). Suppose that
\begin{enumerate}
\item[(a)]
$\mathcal{A}$ contains an infinite brick set, and that
\item[(b)]
the action of ${\mathcal C}$ on ${\mathcal A}$ is
discrete.
\end{enumerate}
Then there is a simple $T\in {\mathcal C}$ such that
$\fpd(T)=\infty$.
\end{lemma}
Lemma \ref{xxlem4.6} may fail if the action is not discrete.
Let $Q$ be the Kronecker quiver in Example \ref{xxex2.7} and
$A$ be its path algebra equipped with the cocommutative bialgebra
structure in Proposition \ref{xxpro7.7}(a). Then $S(1)$ is the unit object
in $\mathcal{A}$ and $S(2)\otimes M=S(2)^{\oplus \dim(M)}$ for
any $M\in \mathcal{A}$. Since all indecomposables in $\Repr(Q)$
are well-understood, one can check that $\fpd(S(1))=\fpd(S(2))=1$
(details are omitted).
\begin{proof}[Proof of Lemma \ref{xxlem4.6}]
Let $\{N(c)\mid c\in U\}$ be an infinite brick set of
$\mathcal{A}$ and let $\{S_1,\cdots,S_n\}$ be the
complete list of simple objects in $\mathcal{A}$
up to isomorphism. For each $1\leq i\leq n$, define
$$U_i:=\{c\in U\mid \Hom_{\mathcal{A}}(N(c),S_i)\neq 0\}.$$
For each $c\in U$, there is an $i$ such that
$\Hom_{\mathcal{A}}(N(c),S_i)\neq 0$. This implies
that $U=\bigcup_{i=1}^n U_i$. Therefore there is
an $i$ such that $U_i$ is infinite. Without loss of
generality, we may assume that $U=U_1$ is infinite.
Since the action is discrete, there is a simple object
$T\in {\mathcal C}$ such that $T\odot S_1\cong S_1$.
Now $\Hom_{\mathcal A}(N(c), S_1)\neq 0$ implies
that every simple subquotient of $T\odot N(c)$ is
isomorphic to $S_1$. In particular, $T\odot N(c)$
contains a copy of $S_1$ for all $c$.
Let $W$ be any finite subset of $U$ and let
$\phi=\{N(c) \mid c\in W\}$. Using the above paragraph,
$$\Hom_{\mathcal{A}}(N(c), T\odot N(c'))
\neq 0$$
for all $c,c'\in W$. This implies that
$\rho(A(\phi, T\odot -))\geq |W|$ and $\fpd(T)
\geq |W|$. Since $|W|$ can be arbitrarily
large, $\fpd (T)=\infty$.
\end{proof}
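The last step uses a standard comparison from Perron-Frobenius theory,
which we record for the reader's convenience: if $A\geq B\geq 0$
entrywise, then $\rho(A)\geq \rho(B)$. In the proof above every entry
of $A(\phi, T\odot -)$ is at least $1$, so
$$\rho(A(\phi, T\odot -))\geq \rho(J_{|W|})=|W|,$$
where $J_{|W|}$ denotes the $|W|\times |W|$ all-ones matrix, whose
spectral radius is $|W|$ (with Perron eigenvector $(1,\cdots,1)^{T}$).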
The following is a part of Theorem \ref{xxthm0.4}.
\begin{theorem}
\label{xxthm4.7}
Let $A$ be a finite dimensional hereditary algebra
and let ${\mathcal A}=A-\Modfd$.
Let ${\mathcal C}$ be a monoidal abelian category
satisfying Definition \ref{xxdef3.1}(a,b,c).
Suppose that there is an action of $\mathcal{C}$
on ${\mathcal A}$ that is discrete. Then the following
are equivalent:
\begin{enumerate}
\item[(a)]
$A$ is of finite representation type,
\item[(b)]
$\fpd(M)<\infty$ for every irreducible object $M\in \mathcal{C}$,
\item[(c)]
$\fpd(M)<\infty$ for every indecomposable object $M\in \mathcal{C}$,
\item[(d)]
$\fpd(M)<\infty$ for every object $M\in \mathcal{C}$.
\end{enumerate}
\end{theorem}
\begin{proof}
(a) $\Longrightarrow$ (d):
If $A$ is of finite representation type, then $\mathcal{A}$
has only finitely many indecomposable objects. This means that there
are only finitely many brick sets. Then, by definition, $\fpd(\sigma)$
is finite for every endofunctor $\sigma$ of $\mathcal{A}$. In particular,
$\fpd(M)$ is finite for every object $M\in \mathcal{C}$.
(d) $\Longrightarrow$ (c) $\Longrightarrow$ (b): Clear.
(b) $\Longrightarrow$ (a): It suffices to show that if $A$ is not
of finite representation type, then $\fpd(M)=\infty$ for some irreducible
object $M\in \mathcal{C}$. The assertion follows from Lemmas
\ref{xxlem4.5} and \ref{xxlem4.6}.
\end{proof}
We will use the following lemma concerning a bound on the
spectral radius of a matrix.
\begin{lemma}
[Gershgorin Circle Theorem \cite{Ger1931}]
\label{xxlem4.8}
Let $A$ be a complex $n\times n$ matrix, with entries $a_{ij}$.
For $i\in \{1,\dots ,n\},$ let
$R_{i}=\sum\limits_{j\neq i}\left|a_{{ij}}\right|$
be the sum of the absolute values of the non-diagonal entries in
the $i$-th row. Let $D(a_{ii},R_{i})\subseteq \mathbb {C}$ be a
closed disc centered at $a_{ii}$ with radius $R_{i}$. Then every
eigenvalue of $A$ lies within at least one of the Gershgorin
discs $D(a_{ii},R_{i}).$ As a consequence,
$\rho(A)\leq \max_i\{|a_{ii}|+R_i\}$.
\end{lemma}
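As a quick illustration of Lemma \ref{xxlem4.8}, take
$A=\begin{pmatrix} 2 & 1\\ 3 & 4\end{pmatrix}$. The Gershgorin discs
are $D(2,1)$ and $D(4,3)$, so $\rho(A)\leq \max\{2+1,4+3\}=7$; the
eigenvalues of $A$ are in fact $1$ and $5$.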
\begin{proposition}
\label{xxpro4.9}
Suppose $\mathcal{T}$ is a triangulated category
satisfying
\begin{enumerate}
\item[(a)]
$\mathcal{T}$ is $\Hom$-finite and hence Krull-Schmidt,
\item[(b)]
there are objects $\{X_1,\cdots,X_N\}$ such that every
indecomposable object in $\mathcal{T}$ is of the form
$X_i[m]$ for some $1\leq i\leq N$ and $m\in \mathbb{Z}$, and
\item[(c)]
for every two indecomposable objects $X,Y$ in $\mathcal{T}$,
$\Hom_{\mathcal{T}}(X,Y[m])=0$ for $|m|\gg 0$.
\end{enumerate}
Then the following hold.
\begin{enumerate}
\item[(1)]
$\fpd(\sigma)<\infty$ for every endofunctor $\sigma$ of
$\mathcal{T}$.
\item[(2)]
If ${\mathcal C}$ is a monoidal triangulated category acting
on ${\mathcal T}$, then $\fpd(M)<\infty$ for
every object $M\in {\mathcal C}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\sigma$ be an endofunctor of $\mathcal{T}$. Since there are
only finitely many $X_i$ in hypothesis (b),
we can assume that every $\sigma(X_i)$ is a direct summand
of
\begin{equation}
\label{E4.9.1}\tag{E4.9.1}
X=\left(\bigoplus_{i=1}^{N} \bigoplus_{j=-\delta}^{\delta-1}
X_i[j]\right)^{\oplus \xi}
\end{equation}
for some fixed $\delta$ and $\xi$.
We make some definitions. Let
$$
\begin{aligned}
\alpha&=\max\{\dim \Hom_{\mathcal{T}}(X_i[s],X)\mid s\in\mathbb{Z},\;
1\leq i\leq N\},\\
\gamma&=\max\{|s| \mid \Hom_{\mathcal{T}}(X_i[s],X)\neq 0
{\text{ for some $i$}}\}.
\end{aligned}
$$
Both $\alpha$ and $\gamma$ are finite by hypothesis (c).
For any given finite brick set $\phi$, it is always a subset of
$$\Phi:=\bigcup_{j=-D}^{D-1} \{X_1[j],\cdots,X_N[j]\}$$
for some large $D\gg 0$. Since $\phi$ is a subset of $\Phi$,
we have
$$\rho(A(\phi, \sigma))
\leq \rho(A(\Phi, \sigma)).$$
By Definition \ref{xxdef1.3}(4), it is enough to show that
$\rho(A(\Phi, \sigma))$ is bounded independently of $D$
(for each fixed $X$ as given in \eqref{E4.9.1}).
For the next calculation we make a linear order on the
objects in $\Phi$ as
\begin{align}
\label{E4.9.2}\tag{E4.9.2}
\Phi&=\{X_1[-D],\cdots,X_N[-D]\} \cup
\{X_1[-D+1],\cdots,X_N[-D+1]\} \cup \\
&\qquad \cdots \cup
\{X_1[D-2],\cdots,X_N[D-2]\} \cup
\{X_1[D-1],\cdots,X_N[D-1]\} \notag
\end{align}
and write it as $\Phi=\{Y_1,\cdots, Y_{2ND}\}$.
Write the adjacency matrix $A(\Phi, \sigma)$
as $(a_{ij})$. For each pair $(i,j)$, by definition,
$$a_{ij}=\dim \Hom_{\mathcal T}(X_{s_i}[w_i], \sigma(X_{s_j}[w_j]))
\leq \dim \Hom_{\mathcal T}(X_{s_i}[w_i], X[w_j])
\leq \alpha,$$
for some $s_i,s_j,w_i,w_j$; and by the ordering in \eqref{E4.9.2},
we obtain
$$a_{ij}=0 \quad {\text{if $|i-j|>2N\delta +\gamma+2.$}}$$
Then each $R_i$ in Lemma \ref{xxlem4.8} is bounded
by $2(2N\delta +\gamma+2)\alpha$.
By Lemma \ref{xxlem4.8} (Gershgorin Circle Theorem),
there is a bound of $\rho(A(\Phi, \sigma))$
which is independent of $D$. Since every finite brick set
$\phi$ is a subset of $\Phi$ for some large $D$,
$\rho(A(\phi, \sigma))$ has a bound
that is independent of $\phi$. Therefore
$\fpd(\sigma)$ is finite as desired.
\end{proof}
We will use the following special case.
Recall that ${\mathcal A}=A-\Modfd$
and that ${\mathcal T}=D^b({\mathcal A})$.
\begin{corollary}
\label{xxcor4.10}
Let $A$ be a finite dimensional hereditary algebra that
is of finite representation type. Then every
monoidal triangulated structure on $\mathcal{T}$ is
$\fpd$-finite.
\end{corollary}
\begin{proof} Since $A$ is of finite type, we can list
all indecomposable left $A$-modules $\{X_1,\cdots,X_N\}$.
Since $A$ is hereditary, every indecomposable
object in $\mathcal{T}$ is of the form $X_i[s]$
for some $1\leq i\leq N$ and $s\in \mathbb{Z}$
\cite[Lemma 3.3]{CGWZZZ2017}. Finally,
since $A$ is hereditary,
$\Hom_{\mathcal{T}}(X_i, X_j[m])=0$ for $m\neq 0,1$.
Thus $\mathcal{T}$ satisfies hypotheses (a,b,c) in
Proposition \ref{xxpro4.9}. Then the assertion follows from
Proposition \ref{xxpro4.9}(2) by setting ${\mathcal C}=
{\mathcal T}$ and $\odot=\otimes$.
\end{proof}
\begin{lemma}
\label{xxlem4.11}
Let $A$ be a finite dimensional hereditary algebra.
Let ${\mathcal C}$ be a monoidal abelian category
satisfying Definition \ref{xxdef3.1}(a,b,c).
Suppose that $\mathcal{C}$ acts on ${\mathcal A}$
via $\odot$. Let $\odot_{D}$ be the induced action
of $D^b({\mathcal C})$ on ${\mathcal T}$.
Let $M$ be an object in $\mathcal{C}$, also viewed
as an object in $D^b({\mathcal C})$.
\begin{enumerate}
\item[(1)]
If $n\neq 0,1$, then $\fpd(M[n]\odot_{D}-)=0$.
\item[(2)]
$\fpd(M\odot_{D}-)=\fpd(M\odot -)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Suppose $n\geq 2$.
Let $\phi$ be a (finite) brick set in $\mathcal{T}$. Since $\mathcal{A}$ is
hereditary, every indecomposable object of $\mathcal{T}$ is of the form
$X[m]$ for some indecomposable $X\in\mathcal{A}$ and $m\in\mathbb{Z}$. Then
we can write $\phi=\bigcup_{\lambda \in \mathbb{Z}} \phi_{\lambda}$ where
$\phi_{\lambda}$ is either empty or of the form
$\{X_{\lambda,1}[\lambda],X_{\lambda,2}[\lambda],\cdots,
X_{\lambda,t_\lambda}[\lambda]\}$.
Since $\mathcal{A}$ is hereditary,
$$\Hom_{{\mathcal{T}}}(X_{\lambda,s}[\lambda], M[n]\odot_{D}
X_{\delta,s'}[\delta])=
\Hom_{{\mathcal{T}}}(X_{\lambda,s}[\lambda], (M\odot X_{\delta,s'})[n+\delta])
=0$$
for all $\lambda\leq \delta$.
Then, ordering the elements of $\phi$ by decreasing $\lambda$, the matrix
$A(\phi, M[n]\odot_{D}-)$ is strictly upper
triangular. Therefore $\rho(A(\phi, M[n]\odot_{D}-))=0$.
As a consequence the assertion follows.
The proof for $n<0$ is similar.
(2) Let $\phi$ be a brick set as in part (1).
Similar to the proof of part (1), also see
\cite[Lemma 6.1]{CGWZZZ2017}, we obtain that
$A(\phi, M\odot_{D}-)$ is a block lower triangular
matrix. So we only need to consider the case where
$\phi=\{X_1[d],X_2[d],\cdots, X_t[d]\}$ with a common shift
$d$. In this case, $A(\phi, M\odot_{D}-)
=A(\phi[-d],M\odot-)$. Therefore the assertion
follows.
\end{proof}
Now we are ready to prove Theorem \ref{xxthm0.4}.
We will use the notation introduced in Theorem
\ref{xxthm4.7} and Lemma \ref{xxlem4.11}.
\begin{theorem}
\label{xxthm4.12}
Let $A$ be a finite dimensional hereditary algebra
and let ${\mathcal A}=A-\Modfd$.
Let ${\mathcal C}$ be a monoidal abelian category
satisfying Definition \ref{xxdef3.1}(a,b,c).
Suppose that there is an action of $\mathcal{C}$
on ${\mathcal A}$ that is discrete. Then the following
are equivalent:
\begin{enumerate}
\item[(a)]
$A$ is of finite representation type,
\item[(b)]
$\fpd(M)<\infty$ for every irreducible object $M\in \mathcal{C}$,
\item[(c)]
$\fpd(M)<\infty$ for every indecomposable object $M\in \mathcal{C}$,
\item[(d)]
$\fpd(M)<\infty$ for every object $M\in \mathcal{C}$,
\item[(e)]
$\fpd(M \odot_{D} -)<\infty$ for every indecomposable object
$M\in D^b(\mathcal{C})$,
\item[(f)]
$\fpd(M \odot_{D} -)<\infty$ for every object $M\in D^b(\mathcal{C})$.
\end{enumerate}
Suppose $A$ is the path algebra $\Bbbk Q$ for some finite
(necessarily acyclic) quiver $Q$. Then any of
conditions {\rm{(a)}} to {\rm{(f)}} is equivalent to
\begin{enumerate}
\item[(g)]
$Q$ is a finite union of quivers of type $\mathbb{ADE}$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Theorem \ref{xxthm4.7}, the
first four conditions are equivalent.
(a) $\Longrightarrow$ (f): This follows from
Proposition \ref{xxpro4.9} and the proof of Corollary
\ref{xxcor4.10}.
(f) $\Longrightarrow$ (e): Clear.
(e) $\Longrightarrow$ (c): This follows from Lemma
\ref{xxlem4.11}(2).
Finally, the equivalence {\rm{(a)}} $\Longleftrightarrow$ {\rm{(g)}}
is Gabriel's theorem.
\end{proof}
Clearly Theorem \ref{xxthm0.4} is a special case of Theorem
\ref{xxthm4.12}.
\section{$mtt$-structures of a monoidal triangulated category}
\label{xxsec5}
First we recall the definition of a $t$-structure on a triangulated
category. The notion of a $t$-structure was introduced by
Beilinson-Bernstein-Deligne in \cite{BBD1981}. We make a
small change in the definition below.
\begin{definition}
\label{xxdef5.1}
Let $\mathcal{T}$ be a triangulated category.
\begin{enumerate}
\item[(1)]
A {\it $t$-structure} on $\mathcal{T}$ is a pair of full
subcategories $(\mathcal{T}^{\leq 0},
\mathcal{T}^{\geq 0})$ satisfying the following conditions.
\begin{enumerate}
\item[(1a)]
$\mathcal{T}^{\leq 0}\subseteq \mathcal{T}^{\leq 1}$ and
$\mathcal{T}^{\geq 0}\supseteq \mathcal{T}^{\geq 1}$ where
we use notation $\mathcal{T}^{\leq n}=\mathcal{T}^{\leq 0}[-n]$
and $\mathcal{T}^{\geq n}=\mathcal{T}^{\geq 0}[-n]$.
\item[(1b)]
If $M\in \mathcal{T}^{\leq 0}$ and $N\in \mathcal{T}^{\geq 1}$,
then $\Hom_{\mathcal{T}}(M,N)=0$.
\item[(1c)]
For any object $X\in \mathcal{T}$, there is a distinguished
(exact) triangle
$$M\to X\to N\to M[1]$$
with $M\in \mathcal{T}^{\leq 0}$ and $N\in \mathcal{T}^{\geq 1}$.
\end{enumerate}
\item[(2)]
The {\it heart} of the $t$-structure is the
full subcategory
$$\mathcal{T}^{\geq 0}\cap \mathcal{T}^{\leq 0}$$
which is denoted by $\mathcal{H}$ or $\mathcal{H}(\mathcal{T})$.
\item[(3)]
\cite[p.1427]{CR2018}
A $t$-structure is called {\it bounded} if for each $X\in
\mathcal{T}$, there exist $m\leq n$
such that $X\in \mathcal{T}^{\leq n} \cap \mathcal{T}^{\geq m}$.
\item[(4)]
\cite[p.1427]{CR2018}
A bounded $t$-structure is called {\it hereditary} if
$\Hom_{\mathcal{T}}(X, Y[n])=0$ for $n\geq 2$ and
$X,Y\in \mathcal{H}$.
\end{enumerate}
\end{definition}
As a classical example, if $\mathcal{T}$ is the derived category
$D^b(A-\Modfd)$, there is a natural $t$-structure on
$\mathcal{T}$ by setting $\mathcal{T}^{\leq 0}$ to be the complexes
whose cohomology is concentrated in degrees less than or equal to 0
(and similarly for
$\mathcal{T}^{\geq 0}$). In this case the heart of this
$t$-structure is $A-\Modfd$.
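For this canonical $t$-structure, the distinguished triangle required
in Definition \ref{xxdef5.1}(1c) is simply the truncation triangle: for
a complex $X$,
$$\tau^{\leq 0}X\to X\to \tau^{\geq 1}X\to (\tau^{\leq 0}X)[1],$$
where $\tau^{\leq 0}$ and $\tau^{\geq 1}$ are the standard truncation
functors.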
Note that hereditary $t$-structures are very special. Even for the
path algebra of a quiver $Q$ of type $\mathbb{A}_3$, there is a
$t$-structure in $D^b(\Repr(Q))$ that is not hereditary; see
\cite{KV1988} for a classification of $t$-structures of $D^b(\Repr(Q))$
for a quiver $Q$ of Dynkin type.
We would like to introduce a version of the $t$-structure for a monoidal
triangulated category. We use $mtt$ for ``monoidal triangulated $t$''
in the next definition.
\begin{definition}
\label{xxdef5.2}
In parts (1,2,3), let $\mathcal{T}$ be a monoidal triangulated
category; in part (4), $\mathcal{T}$ is only assumed to be a
triangulated category.
\begin{enumerate}
\item[(1)]
A $t$-structure $(\mathcal{T}^{\leq 0},\mathcal{T}^{\geq 0})$
on $\mathcal{T}$ is called an {\it $mtt$-structure}
if the following conditions hold.
\begin{enumerate}
\item[(a)]
$\mathcal{T}^{\leq 0}\otimes \mathcal{T}^{\leq 0}\subseteq
\mathcal{T}^{\leq 0}$ and
$\mathcal{T}^{\leq 0}\otimes \mathcal{T}^{\leq 0}\not\subseteq
\mathcal{T}^{\leq -1}$.
\item[(b)]
Both $\mathcal{T}^{\leq 0}$ and $\mathcal{T}^{\geq 0}$
are closed under taking direct summands.
\item[(c)]
There is an integer $D\geq 0$ such that
$\mathcal{T}^{\geq D}\otimes \mathcal{T}^{\geq D}\subseteq
\mathcal{T}^{\geq D}$.
\end{enumerate}
\item[(2)]
The minimal integer $D$ in condition (c) is called the
{\it deviation} of the $mtt$-structure of $\mathcal{T}$.
\item[(3)]
The {\it deviation} of $(\mathcal{T}, {\bf 1}, \otimes)$
is defined to be
$$D_{\otimes}({\mathcal T})=
\inf \{ {\text{deviations of all possible $mtt$-structures of
$(\mathcal{T}, {\bf 1}, \otimes)$}}\}.$$
\item[(4)]
Suppose ${\mathcal T}$ is a triangulated category. The
{\it upper deviation} of ${\mathcal T}$ is defined to be
$$UD({\mathcal T})=
\sup \{ D_{\otimes}({\mathcal T})
\mid {\text{all possible monoidal triangulated structures
on ${\mathcal T}$}}\}.$$
The
{\it lower deviation} of ${\mathcal T}$ is defined to be
$$LD({\mathcal T})=
\inf \{ D_{\otimes}({\mathcal T})
\mid {\text{all possible monoidal triangulated structures
on ${\mathcal T}$}}\}.$$
\end{enumerate}
\end{definition}
\begin{example}
\label{xxex5.3} We give two classical examples.
\begin{enumerate}
\item[(1)]
If $A$ is a finite dimensional weak Hopf algebra (or a weak
bialgebra), then $A-\Modfd$ has a natural monoidal abelian
structure, and consequently,
$\mathcal{T}:=D^b(A-\Modfd)$ has an induced
monoidal triangulated structure. It is clear that
${\mathcal T}$ has a canonical $mtt$-structure
by setting $\mathcal{T}^{\leq 0}$ (respectively,
$\mathcal{T}^{\geq 0}$) to be the complexes whose cohomology is
concentrated in degrees less than or equal to 0 (respectively, greater than
or equal to 0). In this case the deviation of the $mtt$-structure
is $0$. If $A$ is hereditary as an algebra, then the above
$t$-structure is hereditary.
By definition, $D_{\otimes}(\mathcal{T})=0$ when we consider
the monoidal triangulated structure given above. As a consequence,
$LD({\mathcal T})=0$ when ${\mathcal T}$ is considered as a
triangulated category. A special case is $LD(D^b(\Repr(Q)))=0$
for all finite acyclic quivers $Q$.
\item[(2)]
If $\mathbb{X}$ is a smooth projective scheme of dimension $d$,
then ${\mathcal T}:=D^b(coh(\mathbb{X}))$ has a canonical $mtt$-structure
by setting $\mathcal{T}^{\leq 0}$ (respectively,
$\mathcal{T}^{\geq 0}$) to be the complexes whose cohomology is
concentrated in degrees less than or equal to 0 (respectively, greater than
or equal to 0).
If $\mathbb{X}$ is of dimension 1, then the above $t$-structure
is hereditary.
Note that the deviation of the canonical $mtt$-structure of
${\mathcal T}$ is at most $d$.
By definition, $D_{\otimes}(\mathcal{T})\leq d$ with the natural
monoidal triangulated structure. As a consequence,
$LD({\mathcal T})\leq d$ when $\mathcal{T}$ is considered as
a triangulated category.
\end{enumerate}
\end{example}
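The bound on the deviation in Example \ref{xxex5.3}(2) can be seen as
follows. Since $\mathbb{X}$ is smooth of dimension $d$, one has
$\mathcal{T}or_j(F,G)=0$ for all $j>d$ and all coherent sheaves $F$ and
$G$. Hence, if the cohomology sheaves of $F,G\in \mathcal{T}$ are
concentrated in degrees $\geq D$, then
$$H^{i}(F\otimes G)=0 \quad {\text{for all $i<2D-d$,}}$$
and $2D-d\geq D$ whenever $D\geq d$. Therefore
$\mathcal{T}^{\geq d}\otimes \mathcal{T}^{\geq d}\subseteq
\mathcal{T}^{\geq d}$, which shows that the deviation of the canonical
$mtt$-structure is at most $d$.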
\begin{lemma}
\label{xxlem5.4}
Let ${\mathcal T}$ be a monoidal triangulated category
with an $mtt$-structure $(\mathcal{T}^{\leq 0},\mathcal{T}^{\geq 0})$
of deviation zero. Suppose that
$(\mathcal{T}^{\leq 0},\mathcal{T}^{\geq 0})$
is a hereditary $t$-structure of $\mathcal{T}$.
Then the heart of the $mtt$-structure is a monoidal
abelian category.
\end{lemma}
\begin{proof}
By \cite[Theorem 1.3.6]{BBD1981}, the heart $\mathcal{H}$
is an abelian category.
Since $\mathcal{T}$ is a monoidal triangulated category, there
is a unit object ${\bf 1}\in \mathcal{T}$. First we claim
that ${\bf 1}\in \mathcal{H}$. By definition, there is a
distinguished triangle
\begin{equation}
\label{E5.4.1}\tag{E5.4.1}
M\to {\bf 1}\to N\to M[1]
\end{equation}
where $M\in \mathcal{T}^{\leq 0}$ and $N\in \mathcal{T}^{\geq 1}$.
For any object $X\in \mathcal{H}$, since $X\otimes-$ is an exact
functor, $$X\otimes M\to X\to X\otimes N\to X\otimes M[1]$$
is a distinguished triangle. However, $X\otimes N\in
\mathcal{T}^{\geq 1}$ as the deviation is zero. Then
$\Hom(X,X\otimes N)=0$ by the definition of $t$-structure,
and
\begin{equation*}
\label{E5.4.2}\tag{E5.4.2}
X\otimes M[1]\cong (X\otimes N)\oplus X[1]=X\otimes (N\oplus {\bf 1}[1]).
\end{equation*}
By hypothesis the $mtt$-structure is hereditary. By
\cite[Lemma 2.1]{CR2018}, \eqref{E5.4.2} holds for all
$X\in \mathcal{T}$.
Take $X={\bf 1}$; then $M[1]\cong N\oplus {\bf 1}[1]$,
and in \eqref{E5.4.1} the morphism from ${\bf 1}$ to $N$ is zero.
Hence ${\bf 1}$ is isomorphic to a direct summand of $M$,
which is in $\mathcal{T}^{\leq 0}$.
Similarly, for $Y\in \mathcal{T}^{\leq 0}$ and $f: Y\to {\bf 1}[-1]$,
there is a distinguished triangle:
$$Y \stackrel{f}{\longrightarrow} {\bf 1}[-1]\to Z\to Y[1].$$
Apply the exact functor $X\otimes-$ to the above triangle for
all $X\in \mathcal{H}$; then we obtain $f=0$, i.e.,
$\Hom(Y, {\bf 1}[-1])=0$ for all $Y\in \mathcal{T}^{\leq 0}$.
Therefore, ${\bf 1}\in \mathcal{T}^{\geq 0}$.
Finally, ${\bf 1}\in \mathcal{T}^{\geq 0}\cap \mathcal{T}^{\leq 0}=\mathcal{H}$.
This proves the claim.
As for the tensor product bifunctor $\otimes$, since the deviation is zero,
$\mathcal{H}$ is closed under $\otimes$. Hence $\mathcal{H}$ is a monoidal
category with the induced tensor product $\otimes$. The exactness of
$\otimes$ in $\mathcal{H}$ follows from the exactness
of $\otimes$ in ${\mathcal T}$, see \cite[p.1426]{CR2018}.
\end{proof}
\begin{lemma}
\label{xxlem5.5}
Let $\mathbb{X}$ be a smooth projective curve
and let ${\mathcal T}$ be the monoidal triangulated
category $D^b(coh(\mathbb{X}))$.
\begin{enumerate}
\item[(1)]
The deviation of every hereditary $mtt$-structure
on ${\mathcal T}$ is positive.
\item[(2)]
For any finite dimensional weak bialgebra $A$,
$D^b(A-\Modfd)$ with its canonical monoidal structure
is not equivalent to
$\mathcal{T}$ as a monoidal triangulated category.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Suppose on the contrary that there is a hereditary
$mtt$-structure on $\mathcal{T}$ with deviation zero.
Let $\mathcal{H}$ be its heart. By Lemma \ref{xxlem5.4},
${\mathcal H}$ is a monoidal abelian category.
Let ${\mathcal O}_x$ be the skyscraper sheaf
at a point $x\in {\mathbb X}$. There is an integer
$n$ such that $M:={\mathcal O}_x [n]$ is in
${\mathcal H}$. Then $M\otimes M$ is in
${\mathcal H}$. By an easy
computation,
$$M\otimes M\cong {\mathcal O}_x [2n] \oplus {\mathcal O}_x [2n-1]
\cong M[n]\oplus M[n-1]$$
which cannot be in ${\mathcal H}$ for any $n$.
This yields a contradiction. Therefore the assertion
follows.
(2) It is clear that the deviation of the
canonical $mtt$-structure of $D^b(A-\Modfd)$
is zero [Example \ref{xxex5.3}(1)].
This $mtt$-structure is also hereditary. Now
the assertion follows from part (1).
\end{proof}
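For the reader's convenience, here is a sketch of the local computation
behind the isomorphism $M\otimes M\cong M[n]\oplus M[n-1]$ used in the
proof of part (1). Locally at the smooth point $x$, the skyscraper sheaf
has the free resolution
$$0\to \mathcal{O}\xrightarrow{\; t\;}\mathcal{O}\to \mathcal{O}_x\to 0,$$
where $t$ is a local uniformizer. Tensoring with $\mathcal{O}_x$ kills
the map $t$, so both $\mathcal{T}or_0(\mathcal{O}_x,\mathcal{O}_x)$ and
$\mathcal{T}or_1(\mathcal{O}_x,\mathcal{O}_x)$ are isomorphic to
$\mathcal{O}_x$, and hence
$${\mathcal O}_x\otimes {\mathcal O}_x\cong {\mathcal O}_x\oplus
{\mathcal O}_x[-1]$$
in ${\mathcal T}$ (with the shift convention used in the proof).
Applying $[2n]$ yields the displayed isomorphism.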
For the rest of this section, we will use Frobenius-Perron
curvature, see Definition \ref{xxdef1.3}(5), to study the
uniqueness of $mtt$-structures with deviation zero, and
then prove Theorems \ref{xxthm0.5} and \ref{xxthm0.7}.
\begin{definition}
\label{xxdef5.6}
Let ${\mathcal C}$ be a monoidal abelian category and
$M\in {\mathcal C}$. The {\it curvature} of $M$ is defined to be
$$v(M)=\overline{\lim\limits_{n\rightarrow \infty}}
( \ell(M^{\otimes n}))^{\frac{1}{n}}$$
where $\ell(-)$ denotes the length of an object.
\end{definition}
\begin{lemma}
\label{xxlem5.7}
Let ${\mathcal C}$ be a monoidal abelian category
satisfying Definition \ref{xxdef3.1}(a,b,c).
Let $A$ be a finite dimensional weak bialgebra
and $\mathcal{A}$ be $A-\Modfd$.
Let $M$ be
an object in $\mathcal{C}$ or $\mathcal{A}$.
\begin{enumerate}
\item[(1)]
If $M$ is in ${\mathcal C}$, then
\begin{equation}
\notag
\fpv(M)\leq v(M)<\infty.
\end{equation}
\item[(2)]
If $M$ is in ${\mathcal A}$, then
\begin{equation}
\label{E5.7.1}\tag{E5.7.1}
\fpv(M)\leq v(M)\leq \dim M.
\end{equation}
\item[(3)]
If $A=\Bbbk Q$ for some finite acyclic quiver $Q$ with the
tensor defined as in \eqref{E2.1.1}, then
\begin{equation}
\label{E5.7.2}\tag{E5.7.2}
\fpv(M)=v(M)=\max_{i\in Q_0} \{ \dim (M)_{i}\}.
\end{equation}
\item[(4)]
If $\mathcal{C}$ is discrete, then,
for every nonzero object $M\in \mathcal{C}$,
$\fpv(M)$ is positive.
\item[(5)]
Suppose that ${\mathcal C}$ acts on a general abelian
category ${\mathcal A}$ such that the action is
discrete in the sense of Definition \ref{xxdef3.6}.
Then, for every object $M$ in ${\mathcal C}$,
$$1\leq \fpv(M)< \infty.$$
\end{enumerate}
\end{lemma}
\begin{proof}
(1)
Let $\Hom$ denote $\Hom_{\mathcal{C}}$.
Let
$$\alpha:=\max\{\ell(X_i\otimes X_j)\mid {\text{
$X_i$ and $X_j$ are simple}}\},$$
and
$$\beta:=\max\{\dim \End(X_i) \mid {\text{
$X_i$ is simple}}\}.$$
Then, for any objects $X$ and $Y$ in
${\mathcal C}$, we have
\begin{equation}
\label{E5.7.3}\tag{E5.7.3}
\ell(X\otimes Y)\leq \alpha \ell(X)\ell(Y),
\end{equation}
and
\begin{equation}
\label{E5.7.4}\tag{E5.7.4}
\dim \Hom(X, Y)\leq \beta\ell(X)\ell(Y).
\end{equation}
By induction,
$\ell(X^{\otimes n})\leq \alpha^{n-1}
\ell(X)^n$ which implies that
$v(X)\leq \alpha \ell(X)<\infty$.
Given a brick set $\phi=\{X_1,\cdots, X_r\}$, define
$\ell (\phi):=\max\limits_{X\in \phi} \{\ell( X)\}$. By
Lemma \ref{xxlem4.8},
$$\rho(A(\phi, M^{\otimes n}\otimes_{\mathcal{C}}-))
\leq \max\limits_{i=1,\cdots, r} \{\sum_{j=1}^r \dim
\Hom(X_i, M^{\otimes n}\otimes X_j)\}.$$
By \eqref{E5.7.3} and \eqref{E5.7.4}, we have
\begin{eqnarray*}
\dim \Hom(X_i, M^{\otimes n}\otimes X_j)
&\leq & \alpha \beta \ell( X_i) (\ell(M^{\otimes n}) \ell( X_j))\\
&\leq & \alpha \beta (\ell(\phi))^2 (\ell(M^{\otimes n}))\\
&\leq & \alpha \beta (\ell(\phi))^2 (v(M)+\varepsilon)^n
\end{eqnarray*}
for arbitrarily small $\varepsilon>0$ and for $n\gg 0$.
Therefore,
$$\rho(A(\phi, M^{\otimes n}\otimes_{\mathcal{C}}-))
\leq \alpha \beta r (\ell( \phi))^2 (v(M)+\varepsilon)^n,$$
which implies that
\begin{equation}
\label{E5.7.5}\tag{E5.7.5}
\rho(A(\phi, M^{\otimes n}\otimes_{\mathcal{C}}-))^{\frac{1}{n}}
\leq (\alpha \beta r (\ell(\phi))^2)^{\frac{1}{n}}(v(M)+\varepsilon),
\end{equation}
for $n\gg 0$. When $n\rightarrow \infty$, the limit of the right-hand side
of inequality \eqref{E5.7.5} is $v(M)+\varepsilon$, so $\fpv(M)\leq v(M)+
\varepsilon$ for every small $\varepsilon>0$. The assertion follows.
(2) It follows from Definition \ref{xxdef1.8} that
\begin{equation}
\label{E5.7.6}\tag{E5.7.6}
\dim M\otimes N\leq (\dim M)(\dim N)
\end{equation}
for all $M,N\in \mathcal{A}$. It is also clear that
\begin{equation}
\label{E5.7.7}\tag{E5.7.7}
\dim \Hom_{\mathcal{A}}(M, N)\leq \dim \Hom_{\Bbbk}(M,N)=(\dim M)
(\dim N).
\end{equation}
By \eqref{E5.7.6}, $\dim M^{\otimes n}
\leq (\dim M)^n$, which implies that
$v(M)\leq \dim M$. Now the assertion follows from
part (1).
(3) Let $\phi=\{S(i)\}$ where $i$ is a vertex of $Q$.
Write $\dim (M)_{i}=d_i$. Then $\rho(A(\phi, M^{\otimes n}\otimes-))$ is the
integer $d_i^n$ and
$\lim\limits_{n\rightarrow \infty} \rho(A(\phi, M^{\otimes n}\otimes-))^{\frac{1}{n}}=d_i.$
Hence $\fpv(M)\geq d_i$ for all $i$. It is clear that $v(M)=
\max\{d_i\mid i\in Q_0\}$. Therefore part (1) implies that $\fpv(M)=v(M).$
(4) Suppose $\mathcal{C}$ is discrete. Then
there is a simple object $S$ such that $M\otimes S\neq 0$
and $\Hom_{\mathcal{C}}(S, M\otimes S)\neq 0$. By induction,
one can show that $\Hom_{\mathcal{C}}(S, M^{\otimes n}\otimes S)\neq 0$
for all $n$. Therefore $\fpv(M)\geq 1$.
(5) Using a proof similar to that of part (1), one sees that
$\fpv(M)< \infty$. Using the proof of part (4),
one can show that
$\fpv(M)\geq 1$. Details are omitted.
\end{proof}
\begin{remark}
\label{xxrem5.8}
\begin{enumerate}
\item[(1)]
Let $\mathcal{C}$ be a monoidal abelian category acting on
an abelian category ${\mathcal A}$. Assume that ${\mathcal C}$
satisfies Definition \ref{xxdef3.1}(a,b,c). The action of
$\mathcal{C}$ on ${\mathcal A}$ is called {\it $\fpv$-positive} if
\begin{enumerate}
\item[(e)]
$\fpv(M)>0$ for every nonzero object
$M$ in $\mathcal{C}$.
\end{enumerate}
We say $\mathcal{C}$ is
$\fpv$-positive if the natural action of $\mathcal{C}$
on itself is $\fpv$-positive.
\item[(2)]
By Lemma \ref{xxlem5.7}(5), if an action of $\mathcal{C}$
on ${\mathcal A}$ is discrete, then it is $\fpv$-positive.
\item[(3)]
Suppose an action of $\mathcal{C}$
on ${\mathcal A}$ is $\fpv$-positive.
Let $\mathcal{C}'$ be a monoidal abelian subcategory of
$\mathcal{C}$. Then the induced action of $\mathcal{C}'$
on ${\mathcal A}$ is $\fpv$-positive. In general,
such an action is not discrete.
\item[(4)]
There are other natural examples where the action of $\mathcal{C}$
on ${\mathcal A}$ is not discrete but $\fpv$-positive, as follows.
Let $A$ be a finite-dimensional bialgebra and let
${\mathcal C}=A-\Modfd$; then $\mathcal{C}$
is $\fpv$-positive. Indeed, let $0\neq M\in \mathcal{C}$ and
let $S_0$ be a simple submodule of $M$. Then $M\otimes S_0\neq 0$
as $\dim M\otimes S_0=\dim M \dim S_0$ (when $A$ is
a bialgebra). For each $i \geq 1$, we define $S_{i}$
inductively to be a simple submodule of $M\otimes S_{i-1}$.
So $S_i\subseteq M\otimes S_{i-1}$ for all $i\geq 1$.
Continuing this process, we obtain a set of simple objects
$\Gamma=\{S_0,S_1,S_2,\cdots\}$.
Since $A$ has only finitely many simple modules, $|\Gamma|<\infty$.
Hence there exist $m<n\in \mathbb{Z}^{+}$ such that $S_n\cong S_m$;
choosing $n$ minimal, the simple modules $S_m,S_{m+1},\cdots,S_{n-1}$
are pairwise non-isomorphic.
For all $i\geq n$, we redefine $S_i$ to be $S_{i-k(n-m)}$
where $k$ is an integer such that $m\leq i-k(n-m)<n$.
By the construction, we have, for all $i\geq m$ and all
$s\geq 0$,
$$\dim \Hom(S_{i+1}, M\otimes S_{i})\geq 1 \text{ and }
\dim \Hom(S_{i+s}, M^{\otimes s}\otimes S_{i})\geq 1.$$
Therefore, by taking the brick set $\phi=\{S_m,S_{m+1},\cdots, S_{n-1}\}$,
one sees that, for each $s$,
$A(\phi, M^{\otimes s}\otimes-)$ is a non-negative matrix
that contains a permutation matrix. As a consequence,
$\rho(A(\phi, M^{\otimes s}\otimes-))\geq 1$ for all $s\geq 1$,
which implies that $\fpv(M)\geq 1$.
\item[(5)]
In $\Repr(Q)$, where the tensor is defined as in \eqref{E2.1.1},
$\fpv(M)$, unlike $\fpd(M)$, is an invariant dependent only on the
dimension vector of $M$ (which is independent of the orientations
of the arrows in the quiver).
\end{enumerate}
\end{remark}
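As an illustration of Remark \ref{xxrem5.8}(5), let $Q$ be the
Kronecker quiver and let $M\in \Repr(Q)$ have dimension vector $(1,2)$.
Then \eqref{E5.7.2} gives
$$\fpv(M)=\max\{1,2\}=2,$$
regardless of the linear maps attached to the two arrows and of their
orientation.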
Next we investigate $mtt$-structures on
$D^b(\mathcal{C})$.
\begin{lemma}
\label{xxlem5.9}
Suppose that a monoidal abelian category ${\mathcal C}$
acts on an arbitrary abelian category ${\mathcal A}$.
Let ${\mathcal T}=D^b(\mathcal{C})$. Assume that
\begin{enumerate}
\item[(a)]
the above action is either discrete or $\fpv$-positive,
\item[(b)]
${\mathcal C}$ is hereditary, and
\item[(c)]
$(\mathcal{T}^{\leq 0},\mathcal{T}^{\geq 0})$
is any hereditary $mtt$-structure of deviation zero
on ${\mathcal T}$.
\end{enumerate}
Let $\mathcal{H}$ be the heart of the above $mtt$-structure.
Then the following hold.
\begin{enumerate}
\item[(1)]
\cite[Lemma 2.1]{CR2018}
If $M$ is an indecomposable object in $\mathcal{T}$,
then $M$ is in $\mathcal{T}^{\leq b}\cap
\mathcal{T}^{\geq b}$ for some integer $b$.
\item[(2)]
If $M\in \mathcal{C}$, then $M$ is in the heart $\mathcal{H}$.
\item[(3)]
The $mtt$-structure $(\mathcal{T}^{\leq 0},\mathcal{T}^{\geq 0})$
given in (c) is the canonical $mtt$-structure of ${\mathcal T}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(2,3) We only prove this when the action is discrete.
First we claim that $\fpv(M \odot_{\mathcal T} -)>0$ if
$0\neq M\in {\mathcal C}$. It is clear that
$$\fpv(M \odot_{\mathcal T} -)\geq \fpv(M \odot_{\mathcal C} -).$$
Now the claim follows from Lemma \ref{xxlem5.7}(5).
Let $M$ be an indecomposable object in $\mathcal{C}$.
Then $M\in \mathcal{T}^{\leq b}\cap\mathcal{T}^{\geq b}$ for
some $b$. If $b\neq 0$, then $M^{\otimes n}\in \mathcal{T}^{\leq nb}
\cap \mathcal{T}^{\geq nb}$ by Definition \ref{xxdef5.2}(a,c).
For any fixed brick set $\phi$, $A(\phi, M^{\otimes n}
\odot_{\mathcal T} -)$ is zero for $n\gg 0$ by the hereditary
property of Definition \ref{xxdef5.1}(4). Therefore
$\fpv(M\odot_{\mathcal T} -)=0$. By the first paragraph,
$\fpv(M\odot_{\mathcal T} -)>0$, yielding a contradiction.
Therefore $b=0$, or equivalently, $M\in \mathcal{T}^{\leq 0}
\cap\mathcal{T}^{\geq 0}=\mathcal{H}$. This implies that ${\mathcal C}
\subseteq {\mathcal H}$. By \cite[Lemma 2.1]{CR2018}
and \cite[Lemma 3.6]{SR2016}, ${\mathcal C} ={\mathcal H}$,
and consequently, the $mtt$-structure $(\mathcal{T}^{\leq 0},
\mathcal{T}^{\geq 0})$ in hypothesis (c) must be the canonical
$mtt$-structure of ${\mathcal T}$.
\end{proof}
The following is basically Theorem \ref{xxthm0.7}.
\begin{theorem}
\label{xxthm5.10}
Let $A$ be a finite dimensional hereditary weak bialgebra.
Suppose that the monoidal abelian category $\mathcal{A}$ is
either discrete or $\fpv$-positive.
\begin{enumerate}
\item[(1)]
There is a unique hereditary $mtt$-structure with deviation
zero on $D^b(\mathcal{A})$.
\item[(2)]
The category $\mathcal{A}$ is the heart of any hereditary $mtt$-structure
with deviation zero on $D^b(\mathcal{A})$.
\item[(3)]
The category $\mathcal{A}$ is uniquely determined by the monoidal
triangulated structure on $D^b(\mathcal{A})$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let ${\mathcal C}={\mathcal A}$. Then one can easily check
all hypotheses of Lemma \ref{xxlem5.9}, and part (1) follows
from Lemma \ref{xxlem5.9}(3).
(2,3) Follow directly from part (1).
\end{proof}
\begin{proof}[Proof of Theorem \ref{xxthm0.5}]
If $A$ is a bialgebra, by Remark \ref{xxrem5.8}(4),
$\mathcal{A}$ is $\fpv$-positive. Therefore
the hypothesis of Theorem \ref{xxthm5.10} is
satisfied. Now the assertion follows from the
uniqueness of hereditary $mtt$-structure with
deviation zero in Theorem \ref{xxthm5.10}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{xxcor0.6}]
Let $Q$ and $Q'$ be two quivers such that
$D^b(\Repr(Q))$ and $D^b(\Repr(Q'))$ are
equivalent as monoidal triangulated categories.
By Theorem \ref{xxthm0.5}, this equivalence induces
an equivalence between $\Repr(Q)$ and $\Repr(Q')$.
Recall that $Q$ and $Q'$ are acyclic. For each acyclic
quiver there are finitely many simple representations,
say $\{S_i\}_{i=1}^n$, that are associated to vertices
$\{1,\cdots,n\}$ of the quiver. The correspondence
between those simple representations gives rise to a
bijective map $f: Q_0 \rightarrow Q'_0$. By
\cite[Lemma 2.12, p.84]{ASS2006}, $\dim \Ext^1 (S_i, S_j)$
is the number of arrows from vertex $i$ to vertex $j$.
Therefore the number of arrows from $f(i)$ to $f(j)$ is
the same as that from $i$ to $j$. Thus $Q\cong Q'$.
\end{proof}
\section{Proof of Theorem \ref{xxthm0.3}}
\label{xxsec6}
The proof of Theorem \ref{xxthm0.3} uses several results
about weighted projective lines and takes several pages
in total. The final step of the proof is given
at the end of this section. First we recall some basic
definitions concerning weighted projective lines. Details
can be found in \cite[Section 1]{GL1987}.
For $t\geq 1$, let ${\bf p}:=(p_0,p_1,\cdots,p_t)$ be a
$(t+1)$-tuple of positive integers, called the {\it weight}
or {\it weight sequence}. Let
${\bf D}:=(\lambda_0, \lambda_1,\cdots, \lambda_t)$ be a
sequence of distinct points of the projective line
${\mathbb P}^1$ over $\Bbbk$. We normalize ${\bf D}$
so that $\lambda_0=\infty$, $\lambda_1=0$ and
$\lambda_2=1$ (if $t\geq 2$). Let $R$ denote the commutative
algebra
\begin{equation}
\label{E6.0.1}\tag{E6.0.1}
\Bbbk[X_0,X_1,\cdots,X_t]/(X_i^{p_i}-X_1^{p_1}+\lambda_i X_0^{p_0},
i=2,\cdots,t).
\end{equation}
The image of $X_i$ in $R$ is denoted by $x_i$ for all $i$.
Let ${\mathbb L}$ be the abelian group of rank 1
generated by $\overrightarrow{x_i}$ for $i=0,1,\cdots,t$
and subject to the relations
$$p_0 \overrightarrow{x_0}= \cdots =p_i \overrightarrow{x_i}=\cdots
=p_t \overrightarrow{x_t}=: \overrightarrow{c}.$$
The algebra $R$ is ${\mathbb L}$-graded by setting $\deg x_i=
\overrightarrow{x_i}$. The corresponding
{\it weighted projective line},
denoted by ${\mathbb X}({\bf p},{\bf D})$ or simply ${\mathbb X}$,
is a noncommutative space whose category of coherent sheaves is
given by the quotient category
$$coh({\mathbb X}):=\frac{\gr^{\mathbb L}-R}{\gr_{f.d.}^{\mathbb L}-R},$$
see \cite[p.155]{Le2011}.
The weighted projective lines are classified into the following
three classes:
\begin{equation}
\label{E6.0.2}\tag{E6.0.2}
{\mathbb X} \;\; {\rm{is}}\;\;
\begin{cases} {\rm{domestic}} \;\; & {\rm{if}} \;\; {\bf p}
\;\; {\rm{is}}\; (p, q), (2,2,n), (2,3,3), (2,3,4), (2,3,5);\\
{\rm{tubular}} \;\; & {\rm{if}} \;\; {\bf p}
\;\; {\rm{is}}\; (2,3,6), (3,3,3), (2,4,4), (2,2,2,2);\\
{\rm{wild}} \;\; & {\rm{otherwise}}.
\end{cases}
\end{equation}
In \cite[Section 4.4]{Sc2012}, domestic (respectively, tubular,
wild) weighted projective lines are called {\it parabolic}
(respectively, {\it elliptic, hyperbolic}). Let ${\mathbb X}$
be a weighted projective line. A sheaf $F\in coh({\mathbb X})$
is called {\it torsion} if it is of finite length in
$coh({\mathbb X})$. Let $Tor({\mathbb X})$ denote the full
subcategory of $coh({\mathbb X})$ consisting of all torsion
objects. By \cite[Lemma 4.16]{Sc2012}, the category $Tor({\mathbb X})$
decomposes as a direct product of orthogonal blocks
\begin{equation}
\label{E6.0.3}\tag{E6.0.3}
Tor({\mathbb X})=\prod_{x\in {\mathbb P}^1\setminus
\{\lambda_0,\lambda_1,\cdots,\lambda_{t}\}} Tor_{x}
\; \times \; \prod_{i=0}^{t} Tor_{\lambda_i}
\end{equation}
where $Tor_{x}$ is equivalent to the category of nilpotent
representations of the Jordan quiver (with one vertex and
one arrow) over the residue field $\Bbbk_{x}$ and where
$Tor_{\lambda_i}$ is equivalent to the category of nilpotent
representations over $\Bbbk$ of the cyclic quiver of length
$p_i$. A simple object in
$coh({\mathbb X})$ is called {\it ordinary simple} (see
\cite{GL1987}) if it is the skyscraper sheaf ${\mathcal O}_x$ of
a closed point $x\in {\mathbb P}^1\setminus
\{\lambda_0,\lambda_1,\cdots,\lambda_{t}\}$.
Let $Vect({\mathbb X})$ be the full subcategory of
$coh({\mathbb X})$ consisting of all vector bundles. Similar
to the elliptic curve case \cite[Section 4]{BB2007}, one can
define the concepts of {\it degree}, {\it rank} and
{\it slope} of a vector bundle on a weighted projective
line ${\mathbb X}$; details are given in \cite[Section 4.7]{Sc2012}
and \cite[Section 2]{LM1994}. For each
$\mu\in {\mathbb Q}$, let $Vect_{\mu}({\mathbb X})$ be the
full subcategory of $Vect({\mathbb X})$ consisting of all
semistable vector bundles of slope $\mu$. By convention,
$Vect_{\infty}({\mathbb X})$ denotes $Tor({\mathbb X})$.
By \cite[Comments after
Corollary 4.34]{Sc2012}, every indecomposable object in
$coh({\mathbb X})$ is in
\begin{equation}
\notag
\bigcup_{\mu\in {\mathbb Q}\cup\{\infty\}} Vect_{\mu}({\mathbb X}).
\end{equation}
The {\it dualizing element} of $\mathbb{X}$ is denoted by
\begin{equation}
\label{E6.0.4}\tag{E6.0.4}
\omega_0:= (t-1)\overrightarrow{c}
-\sum_{i=0}^{t} \overrightarrow{x}_i \in \mathbb{L}.
\end{equation}
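For instance, for ${\bf p}=(2,3,5)$ (so $t=2$), \eqref{E6.0.4} reads
$$\omega_0=\overrightarrow{c}-\overrightarrow{x}_0-\overrightarrow{x}_1
-\overrightarrow{x}_2,$$
a weight of domestic type by \eqref{E6.0.2}.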
Below we collect some nice properties of weighted projective lines.
The notion of a stable tube (or simply a tube) was introduced
in \cite{Ri1984}.
\begin{lemma} \cite[Lemma 7.9]{CGWZZZ2017}
\label{xxlem6.1}
Let ${\mathbb X}={\mathbb X}({\bf p}, {\bf D})$ be a weighted
projective line.
\begin{enumerate}
\item[(1)]
$coh({\mathbb X})$ is noetherian and hereditary.
\item[(2)]
$$D^b(coh({\mathbb X})) \cong
\begin{cases}
D^b(\Repr( \widetilde{\mathbb A}_{p, q})) & {\rm{if}}\;\; {\bf p}=(p,q),\\
D^b(\Repr( \widetilde{\mathbb D}_n)) & {\rm{if}}\;\; {\bf p}=(2,2,n),\\
D^b(\Repr( \widetilde{\mathbb E}_6)) & {\rm{if}}\;\; {\bf p}=(2,3,3),\\
D^b(\Repr( \widetilde{\mathbb E}_7)) & {\rm{if}}\;\; {\bf p}=(2,3,4),\\
D^b(\Repr( \widetilde{\mathbb E}_8)) & {\rm{if}}\;\; {\bf p}=(2,3,5).
\end{cases}
$$
\item[(3)]
Let ${\mathcal S}$ be an ordinary simple
object in $coh({\mathbb X})$.
Then $\Ext^1_{\mathbb X}({\mathcal S},{\mathcal S})=\Bbbk$.
\item[(4)]
If ${\mathbb X}$ is tubular or domestic, then $\Ext^1_{\mathbb X}(X,Y)=0$
for all $X\in Vect_{\mu'}({\mathbb X})$ and $Y\in Vect_{\mu}({\mathbb X})$
with $\mu'< \mu$.
\item[(5)]
If ${\mathbb X}$ is domestic, then $\Ext^1_{\mathbb X}(X,Y)=0$
for all $X\in Vect_{\mu'}({\mathbb X})$ and $Y\in Vect_{\mu}({\mathbb X})$
with $\mu'\leq \mu<\infty$.
\item[(6)]
Suppose ${\mathbb X}$ is tubular or domestic.
Then every indecomposable vector bundle on ${\mathbb X}$
is semistable.
\item[(7)]
Suppose ${\mathbb X}$ is tubular
and let $\mu\in {\mathbb Q}$.
Then each $Vect_{\mu}({\mathbb X})$ is a uniserial category.
Accordingly, the indecomposables in $Vect_{\mu}({\mathbb X})$
lie in Auslander-Reiten components, all of which are
stable tubes of finite rank. In fact, for every
$\mu\in {\mathbb Q}$, $$Vect_{\mu}({\mathbb X})\cong
Vect_{\infty}({\mathbb X})=Tor({\mathbb X}).$$
\end{enumerate}
\end{lemma}
\begin{lemma}
\label{xxlem6.2}
Let ${\mathbb X}={\mathbb X}({\bf p}, {\bf D})$ be a weighted
projective line.
\begin{enumerate}
\item[(1)]
\cite[Theorem 2.2(ii)]{Le2011}
Let $\mathcal{T}$ be $D^b(coh(\mathbb{X}))$. Then
$\mathcal{T}$ has Serre duality in the form of
$$\Hom_{\mathcal{T}}(X,Y)^{\ast}
\cong \Hom_{\mathcal{T}}(Y,S(X)),$$
where the Serre functor $S$ is $-(\omega_0)[1]$
and where the dualizing element $\omega_0$ is
in \eqref{E6.0.4}.
\item[(2)]
\cite[Proposition 1.10]{LR2006}
Each indecomposable vector bundle admits a nonzero
morphism to an object of $Tor_{x}$ for every point $x$ in
$\mathbb{P}^1$.
\end{enumerate}
\end{lemma}
The following linear algebra lemma is needed to estimate
the spectral radius of some matrices.
\begin{lemma}
\label{xxlem6.3}
Let $\Gamma$ be the $n\times n$ matrix $(a_{ij})$
where
\begin{equation}
\label{E6.3.1}\tag{E6.3.1}
a_{ij}=\begin{cases} 1 & {\text{if $i=1$ or $j=1$,}}\\
0 & {\text{otherwise.}}\end{cases}
\end{equation}
Then the spectral radius $\rho(\Gamma)\geq \sqrt{n}$.
\end{lemma}
\begin{proof}
It is not hard to check that the characteristic polynomial
of $\Gamma$ is
\[
f(x)=x^n-x^{n-1}-(n-1)x^{n-2}=x^{n-2}(x^2-x-(n-1)).
\]
Then
$$\rho(\Gamma)=\frac{1+\sqrt{4n-3}}{2}\geq \sqrt{n}.$$
\end{proof}
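For instance, when $n=4$ we have
$$f(x)=x^4-x^3-3x^2=x^2(x^2-x-3),$$
so the nonzero eigenvalues of $\Gamma$ are $\frac{1\pm\sqrt{13}}{2}$ and
$\rho(\Gamma)=\frac{1+\sqrt{13}}{2}\approx 2.30\geq \sqrt{4}=2$.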
\begin{lemma}
\label{xxlem6.4}
Suppose that $\mathcal{T}$ is a triangulated category
satisfying
\begin{enumerate}
\item[(a)]
there is an infinite brick set $\phi$,
\item[(b)]
there is a brick object $B$ in $\mathcal{T}$ such that
$\Hom_{\mathcal{T}}(B,X)\neq 0$ for all $X\in \phi$,
\item[(c)]
there is an integer $m$ such that
$\Hom_{\mathcal{T}}(B[s],X)=\Hom_{\mathcal{T}}(X, B[s])= 0$
for all $X\in \phi$ and for all $|s|\geq m$,
\item[(d)]
$\mathcal{T}$ has a Serre functor $S$, and
\item[(e)]
there is an integer $m_0$ such that
$\Hom_{\mathcal{T}}(B[m_0], S(X))\neq 0$ for all
$X\in \phi$.
\end{enumerate}
Let $\mathcal{C}$ be a monoidal triangulated category
acting on $\mathcal{T}$. Then there is an object
$M\in \mathcal{C}$ such that $\fpd(M)=\infty$.
\end{lemma}
\begin{proof} In the following proof let $\odot$ denote the
action of $\mathcal{C}$ on $\mathcal{T}$ and $\Hom$ denote
$\Hom_{\mathcal{T}}$.
By condition (d), $\mathcal{T}$ has a Serre functor
$S:\mathcal{T}\to \mathcal{T}$ such that
\begin{equation}
\label{E6.4.1}\tag{E6.4.1}
\Hom(X,Y)^{\ast} \cong \Hom(Y,S(X))
\end{equation}
for all $X,Y$ in $\mathcal{T}$.
Let ${\bf 1}\in \mathcal{C}$ be the unit object with
respect to the monoidal tensor of $\mathcal{C}$. Let $m$
and $m_0$ be the integers given in conditions (c) and (e),
and let $M$ be the object ${\bf 1}[m]\oplus {\bf 1}\oplus {\bf 1}[m_0-m]$ in
$\mathcal{C}$. It is enough to show that $\fpd(M)=\infty$.
Let $\phi_n$ be a brick set consisting of $(n-1)$ objects in
$\phi$ and one extra special object, namely $B[m]$, where $m$
is in condition (c). Write
$$\phi_n=\{X_1:=B[m], X_2, X_3,\cdots,X_n\}$$
where $X_i\in \phi$ for all $i=2,3,\cdots,n$. Let
$A:=(a_{ij})$ denote the adjacency matrix
$A(\phi_n, M\odot -)$.
We claim that $a_{1i}\neq 0$ and $a_{j1}\neq 0$ for all $i,j$.
Case 1:
$$\begin{aligned}
a_{11}&= \dim \Hom(B[m], M\odot B[m])\\
&\geq \dim \Hom(B[m], {\bf 1}\odot B[m])\\
&= \dim \Hom(B, B)\\
&=\dim \Bbbk=1 \qquad\qquad\qquad\qquad {\text{by condition (b)}}.
\end{aligned}
$$
Case 2: for every $i\geq 2$,
$$\begin{aligned}
a_{1i}&= \dim \Hom(B[m], M\odot X_i)\\
&\geq \dim \Hom(B[m], {\bf 1}[m]\odot X_i)\\
&= \dim \Hom(B[m], X_i[m])=\dim \Hom(B, X_i)\\
& \geq \dim \Bbbk=1 \qquad\qquad\qquad\qquad {\text{by condition (b)}}.
\end{aligned}
$$
Case 3: for every $j\geq 2$,
$$\begin{aligned}
a_{j1}&= \dim \Hom(X_j, M\odot B[m])\\
&\geq \dim \Hom(X_j, {\bf 1}[m_0-m]\odot B[m])\\
&= \dim \Hom(X_j, B[m_0])\\
&= \dim \Hom(B[m_0], S(X_j)) \quad\;\; {\text{by \eqref{E6.4.1}}}\\
& \geq \dim \Bbbk=1 \qquad\qquad\qquad\qquad {\text{by condition (e)}}.
\end{aligned}
$$
This proves the claim. It follows that
every entry of $A$ is larger than or equal to
the corresponding entry of $\Gamma$ as given in
Lemma \ref{xxlem6.3}. By the monotonicity of the spectral radius of
nonnegative matrices,
$$\rho(A)\geq \rho(\Gamma)\geq \sqrt{n}$$
where the last inequality is Lemma \ref{xxlem6.3}.
Then, by definition, $\fpd(M)\geq \sqrt{n}$ for all $n$.
Thus $\fpd(M)=\infty$ as desired.
\end{proof}
Now we are ready to show that every monoidal triangulated structure
on the derived category of a weighted projective line is $\fpd$-infinite.
\begin{proposition}
\label{xxpro6.5}
Let $\mathbb{X}$ be a weighted projective line and
let $\mathcal{T}$ be $D^b(coh(\mathbb{X}))$.
\begin{enumerate}
\item[(1)]
Let $\mathcal{C}$ be a monoidal triangulated category
acting on $\mathcal{T}$. Then there is an object
$M\in \mathcal{C}$ such that $\fpd(M)=\infty$.
\item[(2)]
Every monoidal triangulated structure on $\mathcal{T}$ is
$\fpd$-infinite.
\end{enumerate}
\end{proposition}
\begin{proof} Since part (2) is a special case of
part (1), it suffices to show part (1).
We need to verify hypotheses (a)-(e) in Lemma
\ref{xxlem6.4}.
Let $\phi$ be the set $\{\mathcal{O}_x\mid x\in
\mathbb{P}^1\setminus\{\lambda_0,\cdots,\lambda_t\}\}$
and let $B$ be the trivial bundle $\mathcal{O}_{\mathbb{X}}$.
It is clear that $\phi$ is infinite, so (a) holds.
By Lemma \ref{xxlem6.2}(2), (b) holds. Since $coh(\mathbb{X})$
has global dimension 1, (c) holds. By Lemma
\ref{xxlem6.2}(1), $D^b(coh(\mathbb{X}))$ has a Serre
functor $S$ which is
$\mathcal{O}_{\mathbb{X}}(\omega_0)[1]\otimes_{\mathbb{X}}-$.
Then $S(\mathcal{O}_x)=\mathcal{O}_x[1]$ for all
$x\in \mathbb{P}^1\setminus\{\lambda_0,\cdots,\lambda_t\}$.
Therefore (e) holds with $m_0=1$, since
$\Hom_{\mathcal{T}}(B[1], \mathcal{O}_x[1])\cong
\Hom_{\mathcal{T}}(B,\mathcal{O}_x)\neq 0$ by (b).
Finally the assertion follows from
Lemma \ref{xxlem6.4}.
\end{proof}
It is not hard to check that Proposition \ref{xxpro6.5} also
holds if $\mathbb{X}$ is an irreducible smooth projective
scheme of dimension at least 1.
We still need quite a few lemmas before we can prove
Theorem \ref{xxthm0.3}. Recall that the definition of
$\fpd$-wild is given in Definition \ref{xxdef0.2}(3).
\begin{lemma}
\label{xxlem6.6}
Let $\mathcal{T}$ be a triangulated category. Suppose that,
for each $n$, there is a connected brick set $\phi$ with
$|\phi|>n$.
\begin{enumerate}
\item[(1)]
Let $\mathcal{C}$ be a $\Hom$-finite Krull-Schmidt
monoidal triangulated category acting on $\mathcal{T}$.
Then there is an indecomposable object $M\in \mathcal{C}$
such that $\fpd(M)=\infty$.
\item[(2)]
Suppose further that $\mathcal{T}$ is $\Hom$-finite
Krull-Schmidt. Then every monoidal triangulated structure
on $\mathcal{T}$ is $\fpd$-wild.
\end{enumerate}
\end{lemma}
\begin{proof} Since part (2) is a special case of
part (1), it suffices to show part (1).
Let $(\mathcal{C},\otimes, \mathbf{1})$ be a monoidal
triangulated category acting on $\mathcal{T}$
where $\mathbf{1}$ is the unit object of $\mathcal{C}$.
Write $\mathbf{1}$ as a direct sum of indecomposable objects
$$\mathbf{1}=\bigoplus_{i=1}^d M_i.$$
By hypothesis, for each $n$, there is a connected brick set
$\phi^n$ with $|\phi^n|>dn$. Define
$$\phi^n_i:=\{ X\in \phi^n \mid M_i\odot X\neq 0\}.$$
Since $X=\mathbf{1}\odot X=\bigoplus_{i=1}^d (M_i\odot X)$
and $X$ is indecomposable, there is exactly one $i$ such that
$M_i\odot X\neq 0$, and for that $i$, we have $M_i\odot X=X$.
Hence, for each $n$, $\phi^n$ is a disjoint union of $\phi^n_i$
for $i=1,\cdots, d$. By the pigeonhole principle, there is at least
one $i$ such that $|\phi^n_i|>n$. This implies that there is at least
one $j$ such that, with this fixed $j$, there is an infinite
sequence $n_j$ such that $|\phi^{n_j}_j|>n_j$. Using this
sequence of brick sets, one sees that
$$\Hom_{\mathcal{T}}(M_j[-1]\odot X,Y)=\Hom_{\mathcal{T}}(X,Y[1])
\neq 0$$
for all $X,Y\in \phi^{n_j}_j$.
By definition, $\fpd(M_j[-1])\geq n_j$ as $|\phi^{n_j}_j|\geq n_j$.
Since $n_j$ goes to infinity, $\fpd(M_j[-1])=\infty$ as desired.
\end{proof}
Next we recall more detailed structures concerning
weighted projective lines.
Let ${\bf p}$ be the weight sequence of $\mathbb{X}$.
Define $\nu$ to be the group homomorphism from $\mathbb{L}$ to
$\mathbb{Z}$ such that
$\nu(\overrightarrow{x}_i)=\prod_{s\neq i} p_s$, and set
$B_0:=\gcd\{\nu(\overrightarrow{x}_i)\mid 0\leq i\leq t\}$. It is easy to
see that the image of $\nu$ is $B_0\mathbb{Z}$. In fact, after
replacing $\nu$ by $\frac{1}{B_0}\nu$, we may assume that $B_0=1$, so
$\nu: \mathbb{L}\to \mathbb{Z}$
is a surjective morphism. Since $\rank(\ker(\nu))=0$,
the kernel of $\nu$ is finite.
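To illustrate the notation, take ${\bf p}=(2,3,5)$. Then
$\nu(\overrightarrow{x}_0)=15$, $\nu(\overrightarrow{x}_1)=10$ and
$\nu(\overrightarrow{x}_2)=6$, so $B_0=\gcd(15,10,6)=1$ and
$\nu(\overrightarrow{c})=30$. For the dualizing element in
\eqref{E6.0.4} one computes
$$\nu(\omega_0)=30-15-10-6=-1<0,$$
an inequality that will be used in the proof of Lemma \ref{xxlem6.8}
below.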
\begin{lemma}
\label{xxlem6.7}
Let $\mathbb{X}$ be a weighted projective line and let
$\mathcal{T}$ be $D^b(coh(\mathbb{X}))$.
\begin{enumerate}
\item[(1)]
There is a positive integer $B_1$, only dependent on
$\mathbb{X}$, such that, if $\omega_1,\omega_2$ are in
$\mathbb{L}$ satisfying $\nu(\omega_2-\omega_1)\geq B_1$,
then $\Hom_{\mathbb{X}}(\mathcal{O}_{\mathbb{X}}(\omega_1),
\mathcal{O}_{\mathbb{X}}(\omega_2)) \neq 0$.
\item[(2)]
For every $N$, there is a positive integer $B_3(N)$,
only dependent on $\mathbb{X}$ and $N$, such that
$$\dim
\Hom_{\mathbb{X}}(\mathcal{O}(\omega_1),\mathcal{O}(\omega_2))
\leq B_3(N)$$
for all $\omega_1,\omega_2$ in
$\mathbb{L}$ satisfying $0\leq \nu(\omega_2-\omega_1)\leq N$.
\end{enumerate}
\end{lemma}
\begin{proof} (1) We may assume that $\omega_1=0$. Let
$B_1=(t-1)\prod_{s=0}^t p_s$. For $\omega_2 \in \mathbb{L}$
with $\nu(\omega_2)\geq B_1$, write $\omega_2=\sum_{s=0}^{t-1} a_s
\overrightarrow{x}_s + a_t \overrightarrow{x}_t$ where
$0\leq a_s\leq p_s$ for all $0\leq s\leq t-1$. Since
$\nu(\omega_2)\geq B_1$, $a_t\geq 0$. Then the $\omega_2$-degree
component of $R$ (see \eqref{E6.0.1}) is not zero and hence
$\Hom_{\mathbb{X}}(\mathcal{O}_{\mathbb{X}},
\mathcal{O}_{\mathbb{X}}(\omega_2))=R_{\omega_2}\neq 0$.
(2) Again we can assume that $\omega_1=0$. Since $\ker(\nu)$ is
finite, there are only finitely many $\omega_2$ such that
$\nu(\omega_2)$ is between $0$ and $N$. Let $B_3(N)$ be the
maximum of all possible
$$\dim \Hom_{\mathbb{X}}(\mathcal{O},\mathcal{O}(\omega_2))$$
where $\omega_2$ runs over all $\omega_2\in \mathbb{L}$
such that $0\leq \nu(\omega_2)\leq N$. Then the
assertion follows.
\end{proof}
The next lemma concerns domestic weighted projective lines.
Some terms not defined here can be found in \cite{KLM2013}. Let
$\omega_0$ be the dualizing element defined in \eqref{E6.0.4}.
\begin{lemma}
\label{xxlem6.8}
Let $\mathbb{X}$ be a weighted projective line.
\begin{enumerate}
\item[(1)]
\cite[Proposition 5.1(ii)]{KLM2013}
Suppose that the weight ${\bf p}$ is either $(2, 2, n)$, or
$(2, 3, 3)$, or $(2, 3, 4)$ or $(2, 3, 5)$. Let $\Delta$
be the attached Dynkin diagram and $\widetilde{\Delta}$
its extended Dynkin diagram. The Auslander-Reiten quiver
$\Gamma(Vect(\mathbb{X}))$ of $Vect(\mathbb{X})$ consists
of a single standard component having the form
$\mathbb{Z} \widetilde{\Delta}$. Moreover, the category of
indecomposable vector bundles on $\mathbb{X}$, denoted by
$ind(Vect(\mathbb{X}))$, is equivalent to the mesh category
of $\Gamma(Vect(\mathbb{X}))$.
\item[(2)]
Under the hypotheses of part {\rm{(1)}}, there is a finite
set of indecomposable vector bundles $\{V_i\}_{i\in I}$ such
that every indecomposable vector bundle is of the form $V_i(n
\omega_0)$ for some $n\in \mathbb{Z}$ and some $i\in I$.
\item[(3)]
\cite[Sect. 5.1, page 217]{KLM2013}
If the weight ${\bf p}$ is of the form $(p,q)$, then each
indecomposable vector bundle is a line bundle $\mathcal{O}(\omega)$
for $\omega\in \mathbb{L}$.
\item[(4)]
Under the hypotheses of part {\rm{(3)}}, there is a finite set of
indecomposable vector bundles $\{V_i\}_{i\in I}$ such that
every indecomposable vector bundle is of the form $V_i(n
\omega_0)$ for some $n\in \mathbb{Z}$ and some $i\in I$.
\end{enumerate}
\end{lemma}
\begin{proof}
(2) There is a ($[-1]$-shifted) Serre functor $F:=-(\omega_0)$ which
is also a functor from $ind(Vect(\mathbb{X}))$ to itself. It is easy
to check that $\nu(\omega_0)<0$. Then $F$ induces an automorphism of the
Auslander-Reiten quiver $\Gamma(Vect(\mathbb{X}))$ by a translation
of distance $\nu(\omega_0)$.
Therefore there is a finite set of indecomposable vector bundles
$\{V_i\}_{i\in I}$ such that every indecomposable vector bundle
is of the form $V_i(n\omega_0)$ for some $n\in \mathbb{Z}$
and some $i\in I$.
(4) Since the map $\nu: \mathbb{L}
\to \mathbb{Z}$ is a group homomorphism with finite
kernel, there are only finitely many
$\omega$ such that $\nu(\omega)=0$.
Similarly, there are only finitely many $\omega\in \mathbb{L}$
such that $\nu(\omega)\in\{0,1,\cdots,-\nu(\omega_0)-1\}$. Then
the set $\{\mathcal{O}(\omega)\mid 0\leq \nu(\omega)\leq -\nu(\omega_0)-1\}$
has the desired property.
\end{proof}
We introduce some temporary notation. By Lemma
\ref{xxlem6.8}(2,4), if $\mathbb{X}$ is domestic,
then there is a finite set of indecomposable vector
bundles, say $\mathbb{K}:=\{K_1, \cdots, K_{B_4}\}$, such that
every indecomposable vector bundle is of the form
$K_s(n\omega_0)$ for some $1\leq s\leq B_4$ and
some $n\in \mathbb{Z}$. (Here $\omega_0 \in \mathbb{L}$ is
the dualizing element given in \eqref{E6.0.4}.) For
each $K_s$ we fix a sequence of sub-bundles
\begin{equation}
\label{E6.8.1}\tag{E6.8.1}
0=:V_{s,0}\subset V_{s,1}\subset V_{s,2}\subset \cdots
\subset V_{s,Y_s}:=K_s
\end{equation}
such that each subquotient $V_{s,i}/V_{s,i-1}$ is a line
bundle of the form $\mathcal{O}_{\mathbb{X}}(\omega_{s,i})$
for some $\omega_{s,i}\in \mathbb{L}$. Let $\Omega(\mathbb{X})$
be the collection of all such $\omega_{s,i}$'s. Hence
$\Omega(\mathbb{X})$ is finite. Let
$$\begin{aligned}
\max(\Omega)&= \max\{\nu(\omega)\mid \omega\in \Omega(\mathbb{X})\},\\
\min(\Omega)&= \min\{\nu(\omega)\mid \omega\in \Omega(\mathbb{X})\}.
\end{aligned}
$$
For every indecomposable vector bundle $V$, we write $V=K_s(n\omega_0)$ for some
$s$ and $n$. Then we fix a sequence of sub-bundles of
$V:=K_s(n\omega_0)$ by applying $-(n\omega_0)$ to \eqref{E6.8.1}.
We have a series of subquotients
$$V_{s,i}(n\omega_0)/V_{s,i-1}(n\omega_0)
\cong \mathcal{O}_{\mathbb{X}}(\omega_{s,i}+n\omega_0)$$
induced by \eqref{E6.8.1}. Let $\nu(V)$ denote the positive
difference between the largest of all $\nu(\omega_{s,i}+n\omega_0)$
and the smallest of all $\nu(\omega_{s,i}+n\omega_0)$. Then it is
clear that $\nu(V)\leq \max(\Omega)-\min(\Omega)$. So we have
proved part (1) of the following proposition.
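Explicitly, if $V=K_s(n\omega_0)$, then
$\nu(\omega_{s,i}+n\omega_0)=\nu(\omega_{s,i})+n\nu(\omega_0)$ for
every $i$, so all of these values are shifted by the same constant
and hence
$$\nu(K_s(n\omega_0))=\nu(K_s)\leq \max(\Omega)-\min(\Omega)
\quad {\text{for all }} n\in \mathbb{Z};$$
this is the computation behind part (1) below, and it explains why
the bound is uniform over all indecomposable vector bundles.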
\begin{proposition}
\label{xxpro6.9}
Let $\mathbb{X}$ be a domestic weighted projective line.
\begin{enumerate}
\item[(1)]
Let $V$ be an indecomposable vector bundle on $\mathbb{X}$.
Then $\nu(V)$ is uniformly bounded by
$B_5:=\max(\Omega)-\min(\Omega)$.
\item[(2)]
Let $V$ be an indecomposable vector bundle on $\mathbb{X}$.
Then the rank of $V$ is uniformly bounded by an integer $B_6$
{\rm{(}}only dependent on $\mathbb{X}${\rm{)}}.
\item[(3)]
Suppose $\phi$ is a brick set consisting of vector bundles
on $\mathbb{X}$. Then the size of $\phi$ is uniformly bounded
by $B_7$ {\rm{(}}only dependent on $\mathbb{X}${\rm{)}}.
\item[(4)]
Suppose $\phi$ is a brick set consisting of vector bundles
on $\mathbb{X}$. Then, up to a degree shift, $\phi$ is a
subset of $\bigcup_{n=-N}^{N} \mathbb{K}(n\omega_0)$ for some
integer $N$. As a consequence, $\sum_{V\in \phi} \nu(V)$ is
uniformly bounded, say, by $B_8$ {\rm{(}}only dependent on
$\mathbb{X}${\rm{)}}.
\item[(5)]
Fix a vector bundle $V$ on $\mathbb{X}$. For every
brick set consisting of vector bundles
$\{X_1,\cdots,X_n\}$,
$\dim \Hom_{\mathbb{X}}(X_i, V\otimes_{\mathbb{X}} X_j)$
is uniformly bounded by $B_9(V)$ for all $i,j$
{\rm{(}}only dependent on $V$ and $\mathbb{X}${\rm{)}}.
\item[(6)]
Fix a vector bundle $V$ on $\mathbb{X}$. For every
brick set consisting of vector bundles
$\{X_1,\cdots,X_n\}$,
$\dim \Hom_{\mathbb{X}}(V\otimes_{\mathbb{X}} X_i, X_j)$
is uniformly bounded by $B_{10}(V)$ for all $i,j$
{\rm{(}}only dependent on $V$ and $\mathbb{X}${\rm{)}}.
\end{enumerate}
\end{proposition}
\begin{proof}
(2) This is part of \cite[Theorem 6.1]{LR2006}. It can also be shown
directly as follows.
Every indecomposable vector bundle $V$ is of the form
$K_s(\omega)$ for some $1\leq s\leq B_4$, and twisting does not
change the rank; hence the rank of $V$ is uniformly bounded, say by $B_6$.
(3) Since $\nu(\omega_0)$ is negative, there is an $N_1$ such that
for all $n\geq N_1$ and for all $s_1,s_2$,
$$\nu(\omega_{s_1,1}-n \omega_0)-\nu(\omega_{s_2,Y_{s_2}})\geq B_1$$
where $B_1$ is the constant given in Lemma \ref{xxlem6.7}(1).
By Lemma \ref{xxlem6.7}(1), for such $n$, $s_1,s_2$,
$$\Hom_{\mathbb{X}}(\mathcal{O}_{\mathbb{X}}(\omega_{s_2,Y_{s_2}}),
\mathcal{O}_{\mathbb{X}}(\omega_{s_1,1}-n\omega_0))\neq 0.$$
By \eqref{E6.8.1},
\begin{equation}
\label{E6.9.1}\tag{E6.9.1}
\Hom_{\mathbb{X}}(K_{s_2},K_{s_1}(-n\omega_0))\neq 0
\end{equation}
for all $s_1,s_2$ and all $n\geq N_1$.
Let $\phi$ be a brick set of vector bundles. We claim that
$|\phi|\leq N_1|\mathbb{K}|=:B_7$. If not, by the pigeonhole
principle, there is an $s$ such that $\phi$ contains
a subset
$$\{K_s(n_1\omega_0),\cdots,K_s(n_q\omega_0)\}$$
for some $q> N_1$ where $n_1<n_2<\cdots <n_q$. Then,
by \eqref{E6.9.1},
$$
\Hom_{\mathbb{X}}(K_s(n_q\omega_0),K_s(n_1\omega_0))
=\Hom_{\mathbb{X}}(K_s,K_s((n_1-n_q)\omega_0))
\neq 0.$$
This contradicts the assumption that $\phi$ is a brick set,
which proves the claim.
(4) Without loss of generality, we may assume that
$\phi$ contains $K_1$. Let $K_s(n\omega_0)$ be
any other object in $\phi$. By \eqref{E6.9.1},
$|n|< N_1$ where $N_1$ is given in the proof of
part (3). Therefore $\phi$ is a subset of
$\bigcup_{n=-N_1}^{N_1} \mathbb{K}(n\omega_0)$.
As a consequence, $\sum_{X\in \phi} \nu(X)$ is
uniformly bounded, say by $B_8$.
(5) By part (4), up to a degree shift, we can assume that
$\phi$ is a subset of $\bigcup_{n=-N}^{N} \mathbb{K}
(n\omega_0)$ for a fixed integer $N$. Note that
the global degree shift will not change the assertion.
Then the assertion follows from the fact that
$\bigcup_{n=-N}^{N} \mathbb{K}(n\omega_0)$ is a fixed
set.
(6) Similar to the proof of part (5).
\end{proof}
\begin{lemma}
\label{xxlem6.10}
Let $\mathbb{X}$ be a weighted projective line.
Let $\mathcal{T}$ be $D^b(coh(\mathbb{X}))$.
\begin{enumerate}
\item[(1)]
Let $M$ be a brick object in $\mathcal{T}$.
Then $M\cong N[n]$ where $n\in \mathbb{Z}$ and
$N\in coh(\mathbb{X})$ is either a vector bundle, or
an ordinary simple $\mathcal{O}_x$, or an indecomposable
object in $Tor_{\lambda_i}$.
\item[(2)]
If a brick set $\phi$ consists of indecomposable
objects in $Tor_{\lambda}$ for some $\lambda\in \mathbb{P}^1$,
then $|\phi|$ is uniformly bounded by $B_{11}$
{\rm{(}}only dependent on $\mathbb{X}${\rm{)}}.
\item[(3)]
If $M$ is a brick object in $Tor(\mathbb{X})$, then
$\dim M$ is uniformly bounded by $B_{12}$
{\rm{(}}only dependent on $\mathbb{X}${\rm{)}}.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) It is well-known that every indecomposable object in
$coh(\mathbb{X})$ is either a vector bundle or a torsion
sheaf. The assertion follows by \eqref{E6.0.3} and the fact that
$coh(\mathbb{X})$ is hereditary.
(2) This is trivial if $\lambda \in \mathbb{P}^1 \setminus
\{\lambda_0,\cdots,\lambda_t\}$. If $\lambda=\lambda_i$ for some $i$,
$Tor_{\lambda_i}$ is a standard tube of
rank $p_i$ with $p_i^2$ brick objects, see
\cite[Section 2.2]{CGWZZZ2019}. So the assertion follows.
(3) By \eqref{E6.0.3}, $M\in Tor_{\lambda}$ for
some $\lambda \in \mathbb{P}^1$. It is trivial if
$\lambda \in \mathbb{P}^1 \setminus
\{\lambda_0,\cdots,\lambda_t\}$. Now assume that $\lambda=\lambda_i$.
All brick objects in $Tor_{\lambda_i}$ are given in
\cite[Corollary 2.8]{CGWZZZ2019}. As a consequence,
$\dim M\leq p_i$. The assertion follows.
\end{proof}
Since $R$ in \eqref{E6.0.1} is commutative, there is a
natural tensor product on $coh(\mathbb{X})$, denoted
by $\otimes_{\mathbb{X}}$. Note that $\otimes_{\mathbb{X}}$
is not (bi)exact. The derived category $\mathcal{T}:=
D^b(coh(\mathbb{X}))$ has a canonical monoidal structure
where the tensor functor is defined by
$$ - \otimes_{\mathcal{T}} - : = -\otimes_{\mathbb{X}}^L -$$
(the derived tensor product). Note that
$\otimes_{\mathcal{T}}$ is biexact so that $\mathcal{T}$ is
a monoidal triangulated category. Next we show that
this monoidal triangulated structure is $\fpd$-tame
when $\mathbb{X}$ is domestic.
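Before proving this, we record a basic instance of
$\otimes_{\mathcal{T}}$ as a sanity check: line bundles are locally
free, hence flat, so there are no higher Tor terms and
$$\mathcal{O}(\omega_1)\otimes_{\mathcal{T}} \mathcal{O}(\omega_2)
=\mathcal{O}(\omega_1)\otimes_{\mathbb{X}} \mathcal{O}(\omega_2)
\cong \mathcal{O}(\omega_1+\omega_2)$$
for all $\omega_1,\omega_2\in \mathbb{L}$. Higher Tor terms can
appear only when both factors have torsion parts.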
\begin{theorem}
\label{xxthm6.11}
Retain the notation introduced above. If $\mathbb{X}$
is domestic, then the canonical monoidal triangulated
structure on $D^b(coh(\mathbb{X}))$ is
$\fpd$-tame.
\end{theorem}
\begin{proof} Let $\mathcal{T}$ denote
$D^b(coh(\mathbb{X}))$. By Proposition \ref{xxpro6.5},
$\mathcal{T}$ is $\fpd$-infinite. By definition, it
remains to show that $\fpd(M)<\infty$ for every
indecomposable object $M$ in $\mathcal{T}$.
Since $M$ is indecomposable and $coh(\mathbb{X})$ is
hereditary, by \cite[Lemma 3.3]{CGWZZZ2017}, $M$ is of
the form $N[n]$ for some $N\in coh(\mathbb{X})$ and
$n\in \mathbb{Z}$. By Lemma \ref{xxlem6.10}(1), $N$ is
either a vector bundle or a torsion sheaf. So we fix an $N$
and consider the following two cases.
Case 1: $N$ is a vector bundle. In this case
$N\otimes_{\mathbb{X}}-$ is exact and
$N\otimes_{\mathcal{T}} Y=N\otimes_{\mathbb{X}} Y$
for all $Y\in coh(\mathbb{X})$.
If $n\neq 0,1$, by the proof of Lemma \ref{xxlem4.11},
$\fpd(N[n]\otimes_{\mathcal{T}}-)=0$. Now we deal with the case
$n=0$, that is, $M=N$.
Let $\phi$ be a brick set. By Lemma \ref{xxlem6.10}(1),
we can write $\phi=\bigcup_{\delta \in \mathbb{Z}} \phi_{\delta}$,
where $\delta$ ranges over the integers in increasing order and
$\phi_{\delta}$ is either empty or of the form
$$\{X_{\delta,1}[\delta],X_{\delta,2}[\delta],\cdots,
X_{\delta,t_\delta}[\delta]\}$$
for some $X_{\delta,s}\in coh(\mathbb{X})$. Since
$$\Hom_{\mathcal{T}}(X_{\delta,s}[\delta],N\otimes_{\mathcal{T}}
X_{\delta',s'}[\delta'])=0$$
for all $\delta>\delta'$, the adjacency matrix
$A(\phi, N\otimes_{\mathcal{T}}-)$ is an upper triangular
block matrix. Now the idea of \cite[Lemma 6.1]{CGWZZZ2017} implies
that we only need to consider blocks, namely, we can assume that $\phi=
\phi_{\delta}$ for some $\delta$. For each block associated to
$\phi_{\delta}$, we can further assume that $\delta=0$ and
$\phi_0=\{X_1,\cdots,X_t\}$ for some $X_s\in coh(\mathbb{X})$.
Without loss of generality, we assume that
$$\phi=\phi_0=\{X_1,\cdots,X_t\}$$
for some $X_1,\cdots, X_t\in coh(\mathbb{X})$. If $\phi$ contains
an ordinary simple $\mathcal{O}_x$, then, by Lemma \ref{xxlem6.2}(2),
$\phi$ does not contain any vector bundle. In this case, one can
further decompose $\phi$ according to \eqref{E6.0.3} so that
$A(\phi, N\otimes_{\mathcal{T}}-)$ is a block diagonal matrix.
For each block, $\phi$ is either $\{\mathcal{O}_x\}$ or consists of
objects in $Tor_{\lambda_i}$. So we consider these two subcases.
If $\phi=\{\mathcal{O}_x\}$, it is easy to see that
$\Hom_{\mathcal{T}}(\mathcal{O}_x, N\otimes_{\mathcal{T}}
\mathcal{O}_x)$ has dimension bounded by the rank of $N$.
This is uniformly bounded. If $\phi$ is a subset of
$Tor_{\lambda_i}$, then there are only finitely many
possibilities [Lemma \ref{xxlem6.10}(2)]. Hence the entries and the size of
$A(\phi, N\otimes_{\mathcal{T}}-)$ are uniformly bounded.
Therefore $\rho(A(\phi, N\otimes_{\mathcal{T}}-))$ is
uniformly bounded. The second case is when $\phi$
does not contain any ordinary simple $\mathcal{O}_x$.
Then the size of $\phi$ is uniformly bounded by
Proposition \ref{xxpro6.9}(3) and Lemma \ref{xxlem6.10}(2).
We claim that each entry in $A(\phi, N\otimes_{\mathcal{T}}-)$
is uniformly bounded, or $d_{ij}:=\dim \Hom_{\mathbb{X}}(X_i,N
\otimes_{\mathbb{X}} X_j)$ is uniformly
bounded for all $X_i, X_j$ in $\phi$. If both
$X_i$ and $X_j$ are vector bundles, the assertion follows
from Proposition \ref{xxpro6.9}(5). If $X_i$ is in $Tor_{\lambda_i}$
and $X_j$ is a vector bundle, then $d_{ij}=0$. If
$X_i$ is a vector bundle and $X_j$ is in $Tor_{\lambda_i}$,
then $d_{ij}$ is bounded by $\rank(X_i)\rank(N)\dim X_j$,
which is uniformly bounded by Proposition \ref{xxpro6.9}(2)
and Lemma \ref{xxlem6.10}(3). If $X_i$ and $X_j$ are
both in $Tor_{\lambda_i}$, then $d_{ij}$ is bounded by
$(\dim X_i)\rank(N) (\dim X_j)$ which is uniformly bounded.
Combining all these cases, one proves that $\fpd(N)$ is finite
by Lemma \ref{xxlem4.8} (Gershgorin Circle Theorem).
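To spell out the estimate being used here: if
$\phi=\{X_1,\cdots,X_m\}$ is a brick set with $m$ uniformly bounded
and with every entry $d_{ij}$ of $A(\phi, N\otimes_{\mathcal{T}}-)$
at most a constant $C$ (depending only on $N$ and $\mathbb{X}$),
then the row-sum bound in Lemma \ref{xxlem4.8} gives
$$\rho(A(\phi, N\otimes_{\mathcal{T}}-))\leq
\max_{1\leq i\leq m} \sum_{j=1}^{m} d_{ij}\leq mC,$$
a bound independent of the choice of $\phi$.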
Next we deal with the case $n=1$ (namely, $M=N[1]$) and recycle
some notation used in the previous paragraphs. By Lemma
\ref{xxlem6.10}(1), we can write
$\phi=\bigcup_{\delta \in \mathbb{Z}} \phi_{\delta}$,
where $\delta$ ranges over the integers in increasing order and
$\phi_{\delta}$ is either empty or of the form
$\{X_{\delta,1}[\delta],X_{\delta,2}[\delta],\cdots,
X_{\delta,t_\delta}[\delta]\}$.
Since $coh(\mathbb{X})$ is hereditary,
$$\Hom_{\mathcal{T}}(X_{\delta,s}[\delta],N[1]\otimes_{\mathcal{T}}
X_{\delta',s'}[\delta'])=0$$
for all $s,s'$ and all $\delta<\delta'$. Therefore the adjacency
matrix $A(\phi, N[1]\otimes_{\mathcal{T}}-)$ is a lower triangular
block matrix. For each block we can assume that $\delta=0$
and $\phi=\{X_1,\cdots,X_t\}$ as in the case $n=0$. If $\phi$
contains an ordinary simple $\mathcal{O}_x$, then, by
Lemma \ref{xxlem6.2}(2), $\phi$ does not contain any
vector bundle. In this case, one can further decompose $\phi$
according to \eqref{E6.0.3} so that
$A(\phi, N[1]\otimes_{\mathcal{T}}-)$ is a block
diagonal matrix. For each block, $\phi$ is either
$\{\mathcal{O}_x\}$ or consists of objects in $Tor_{\lambda_i}$.
So we consider these two subcases. If $\phi=\{\mathcal{O}_x\}$,
then
$$\Hom_{\mathcal{T}}(\mathcal{O}_x, N[1]\otimes_{\mathcal{T}}
\mathcal{O}_x)=\Ext^1_{\mathbb{X}}(\mathcal{O}_x, N\otimes_{\mathcal{T}}
\mathcal{O}_x)$$
which is bounded by $\rank(N)$. If $\phi$ is a subset of
$Tor_{\lambda_i}$, then there are only finitely many
possibilities, see the proof of Lemma \ref{xxlem6.10}(2).
Hence the entries and the size of the $A(\phi, N[1]\otimes_{\mathcal{T}}-)$
are uniformly bounded. Therefore
$\rho(A(\phi, N[1]\otimes_{\mathcal{T}}-))$ is uniformly bounded.
The second case is when $\phi$ does not contain any ordinary
simple $\mathcal{O}_x$. Then the size of $\phi$ is uniformly
bounded by Proposition \ref{xxpro6.9}(3) and Lemma
\ref{xxlem6.10}(2). We claim that each entry in
$A(\phi, N[1]\otimes_{\mathcal{T}}-)$
is uniformly bounded, or
$$\begin{aligned}
d_{ij}:&=\dim \Hom_{\mathcal{T}}(X_i,N[1]\otimes_{\mathcal{T}} X_j)
=\dim \Ext^1_{\mathbb{X}}(X_i,N\otimes_{\mathbb{X}} X_j)\\
&=\dim \Hom_{\mathbb{X}}(N\otimes_{\mathbb{X}} X_j, X_i(\omega_0))
=\dim \Hom_{\mathbb{X}}(N(-\omega_0)\otimes_{\mathbb{X}} X_j, X_i)
\end{aligned}
$$
is uniformly bounded for all $X_i, X_j$ in $\phi$. Note that
the third equality is Serre duality. If both
$X_i$ and $X_j$ are vector bundles, the assertion follows
from Proposition \ref{xxpro6.9}(6). If $X_i$ is in $Tor_{\lambda_i}$
and $X_j$ is a vector bundle, we obtain
that
$$d_{ij}\leq \rank(X_j)\rank(N(-\omega_0))\dim X_i,$$
which is uniformly bounded by Proposition \ref{xxpro6.9}(2)
and Lemma \ref{xxlem6.10}(3). If
$X_i$ is a vector bundle and $X_j$ is in $Tor_{\lambda_i}$,
then $d_{ij}=0$. If $X_i$ and $X_j$ are
both in $Tor_{\lambda_i}$, then
$$d_{ij}\leq \dim(X_j)\rank(N(-\omega_0))\dim X_i,$$
which is uniformly bounded.
Combining all these cases, one proves that $\fpd(N[1])$ is finite
by Lemma \ref{xxlem4.8} (Gershgorin Circle Theorem).
Case 2: $N$ is a torsion sheaf. By definition,
$N\otimes_{\mathcal{T}}-=N\otimes_{\mathbb{X}}^L -$.
If $n\neq -1, 0, 1$, a proof similar to Lemma \ref{xxlem4.11}(1)
shows that $\fpd(N[n])=0$. We need to analyze the cases
$n=-1,0,1$. The following proof is independent of $n$.
Since $N$ is torsion and indecomposable, by \eqref{E6.0.3},
$N$ is either in $Tor_{x}$ or in $Tor_{\lambda_i}$. We will use
the Gershgorin Circle Theorem [Lemma \ref{xxlem4.8}].
Let $\phi=\{X_1,\cdots,X_m\}$ be any brick set in
$\mathcal{T}$ and let $(d_{ij})_{m\times m}$ denote
the adjacency matrix $A(\phi, N[n]\otimes_{\mathcal{T}}-)$
where
$$d_{ij}=\dim \Hom_{\mathcal{T}}(X_i, N[n]\otimes_{\mathcal{T}} X_j).$$
By Lemma \ref{xxlem4.8}, it suffices to show
\begin{enumerate}
\item[(a)]
each $d_{ij}$ is uniformly bounded (only dependent on $M:=N[n]$).
\item[(b)]
For each $j$, there are only uniformly-bounded-many $i$ such that
$d_{ij}\neq 0$.
\end{enumerate}
\noindent
{\bf Proof of (a):} For each $j$, write $X_j=Y_j[s_j]$ for some
$Y_j\in coh(\mathbb{X})$ and $s_j\in \mathbb{Z}$. Since
$N\in Tor_{\lambda}$, $H^s_N(X_j):=H^s(N[n]\otimes_{\mathcal{T}} X_j)$
is zero for $s\neq n+s_j-1,n+s_j$ and $H^s_N(X_j)$ is in
$Tor_{\lambda}$ for $s=n+s_j-1,n+s_j$. Since $coh(\mathbb{X})$ is
hereditary,
$$N[n]\otimes_{\mathcal{T}} X_j
=\bigoplus_{s} H^s(N[n]\otimes_{\mathcal{T}} X_j)[-s],$$
see \cite[Lemma 2.1]{CR2018}. If $Y_j$ is a vector
bundle, then
$$\dim H^s_N(X_j)\leq (\dim N)(\rank(Y_j))$$
for all $s$. If $Y_j$ is torsion, then
$$\dim H^s_N(X_j)\leq (\dim N)(\dim Y_j)$$
for all $s$. In both cases, $\dim H^s_N(X_j)$ is uniformly
bounded by Proposition \ref{xxpro6.9}(2) and Lemma
\ref{xxlem6.10}(3). Using the Serre duality and Proposition
\ref{xxpro6.9}(2) and Lemma \ref{xxlem6.10}(3) again, one sees that
$$\sum_{s,t\in \mathbb{Z}}\dim \Hom_{\mathcal{T}}(X_i[t], H^s(N[n]
\otimes_{\mathcal{T}}X_j)[s])=
\sum_{s,t}\dim \Hom_{\mathcal{T}}(X_i[t], H^s_N(X_j)[s])$$
is uniformly bounded. Hence
$$d_{ij}=\dim \Hom_{\mathcal{T}}(X_i, N[n]\otimes_{\mathcal{T}} X_j)=
\sum_{s}\dim \Hom_{\mathcal{T}}(X_i, H^s(N[n]\otimes_{\mathcal{T}} X_j)[s])$$
is uniformly bounded.
\noindent
{\bf Proof of (b):} As noted before, $\fpd(N[n])=0$ when
$n\neq -1,0,1$. So, in this proof, we assume that $n$ is
$-1$ or $0$ or $1$. Without loss of generality, we only prove
that there are only uniformly-bounded-many $i$ such that
$d_{i1}\neq 0$. By a complex shift, we can assume that
$X_1\in coh(\mathbb{X})$. Since $coh(\mathbb{X})$ is hereditary,
one can check that, if $X_i\in coh(\mathbb{X})[m]$ for $|m|\geq 3$,
then $d_{i1}=0$.
For each $m$ with $|m|\leq 2$, let $\phi_m$ consist of
$Y_i\in coh(\mathbb{X})$ such that $X_i=Y_i[m]\in \phi$
and $d_{i1}\neq 0$. If $\phi_m$ does not contain any
ordinary simple $\mathcal{O}_x$, then, by
Proposition \ref{xxpro6.9}(3) and Lemma \ref{xxlem6.10}(2), $|\phi_m|$
is uniformly bounded. If $\phi_m$ contains an
ordinary simple $\mathcal{O}_x$, then $d_{i1}\neq 0$
implies that $x$ is in the support of $N\otimes_{\mathbb{X}}
X_1$. Therefore there are only finitely many
possible $x$. Further, $X_1$ is either $\mathcal{O}_x$
or a vector bundle, and in the latter case, $d_{i1}\neq 0$
implies that $N$ must be $\mathcal{O}_x$.
In both cases, $N[n]\otimes_{\mathcal{T}} X_1$ is
supported at $x$. Therefore $\phi_m$ consists of
a single element $\mathcal{O}_x$. Combining the above, we obtain
that $\sum_{|m|\leq 2} |\phi_m|$ is uniformly bounded. As a consequence,
(b) holds.
Now it follows from Lemma \ref{xxlem4.8} that $\fpd(N[n])<\infty$.
Combining Cases 1 and 2, we finish the proof.
\end{proof}
Now we are ready to prove Theorem \ref{xxthm0.3}.
\begin{proof}[Proof of Theorem \ref{xxthm0.3}]
(1) If $Q$ is of finite type, by
Corollary \ref{xxcor4.10} every monoidal
triangulated structure on $D^b(\Repr(Q))$ is
fpd-finite. The converse follows from
Lemmas \ref{xxlem6.1}(2), \ref{xxlem4.3} and \ref{xxlem4.5}
and Proposition \ref{xxpro6.5}.
(2) Suppose $Q$ is tame. By Lemma
\ref{xxlem6.1}(2) and Theorem \ref{xxthm6.11},
there is a $\fpd$-tame monoidal structure on
$\mathcal{T}$. Applying Lemma \ref{xxlem4.6}
to $\mathcal{A}=\mathcal{C}=\Repr(Q)$,
there is a $\fpd$-wild monoidal structure.
(3) This follows from parts (1,2), Lemmas \ref{xxlem4.3} and
\ref{xxlem6.6}.
(4) This follows from part (1).
\end{proof}
\begin{corollary}
\label{xxcor6.12}
Let $Q$ be a finite acyclic quiver.
\begin{enumerate}
\item[(1)]
$Q$ is of finite type if and only if $\Repr(Q)$ does not
contain an infinite brick set.
\item[(2)]
$Q$ is of tame type if and only if $\Repr(Q)$
contains an infinite brick set and does not contain an
infinite connected brick set.
\item[(3)]
$Q$ is of wild type if and only if $\Repr(Q)$
contains an infinite connected brick set.
\end{enumerate}
\end{corollary}
\begin{proof} (1) If $Q$ is of finite type,
$\Repr(Q)$ contains only finitely many indecomposable
objects. So $\Repr(Q)$ does not contain an infinite
brick set.
For the converse, we assume that $\Bbbk Q$ is of tame or
wild type. By Lemmas \ref{xxlem4.3} and \ref{xxlem4.5},
$\Repr(Q)$ contains an infinite brick set. This yields
a contradiction. Therefore the assertion follows.
(3) If $Q$ is of wild type, by Lemma \ref{xxlem4.3},
$\Repr(Q)$ contains an infinite connected brick set.
Conversely suppose $\Repr(Q)$ contains an infinite connected
brick set. By Lemma \ref{xxlem6.6}, every monoidal
triangulated structure on $D^b(\Repr(Q))$ is
$\fpd$-wild. By Theorem \ref{xxthm0.3}(3),
$Q$ is of wild type.
(2) This follows from parts (1,3).
\end{proof}
\section{Examples}
\label{xxsec7}
The natural construction of weak bialgebras associated
to quivers, given in Lemma \ref{xxlem2.1}, produces many
monoidal triangulated categories by Lemma \ref{xxlem1.9}(2).
The main goal of this section is to construct other
examples of (weak) bialgebras most of which are related
to finite quivers. We will see that, given a quiver $Q$,
there are different weak bialgebra structures on $\Bbbk Q$
such that the induced tensor products over $\Repr(Q)$ are
different from \eqref{E2.1.1}. As a consequence, in general there
are several different monoidal abelian structures on $\Repr(Q)$.
We will also see that there are monoidal
triangulated structures on derived categories
associated to noncommutative projective schemes.
The first example comes from \cite{HT2013}.
\begin{example}
\label{xxex7.1}
This example follows some ideas from \cite[Theorem 3.2]{HT2013}.
Let $Q$ be a quiver with $n$ vertices.
We label vertices of $Q$ as $1,2,\cdots,n$.
Suppose that vertex $1$ is either a sink or a source; namely, $Q$
satisfies one of the following conditions:
\begin{enumerate}
\item[(1)]
there are no arrows from $1$ to $j$ for any $j$, or
\item[(2)]
there are no arrows from $j$ to $1$ for any $j$.
\end{enumerate}
Let $e_i$ be the idempotent corresponding
to the vertex $i$, and we use $p$ for a path of length
at least 1.
First we define a bialgebra structure on $\Bbbk Q$ by
$$\begin{aligned}
\varepsilon(e_1)&=1,\quad \Delta(e_1)=e_1\otimes e_1,\\
\varepsilon(e_i)&=0, \quad \Delta(e_i)=
\sum_{s<i} (e_i\otimes e_s+e_s\otimes e_i) +e_i\otimes e_i,\\
\varepsilon(p)&=0,\quad \Delta(p)=e_1\otimes p+p\otimes e_1\\
\end{aligned}
$$
for all $i>1$ and all paths $p$ of length at least 1.
It is routine to check that this defines a cocommutative
bialgebra structure on $\Bbbk Q$.
By the above definition, $\Delta(x)=e_1\otimes x+x\otimes e_1$
for all $x$ in the ideal $J$ generated by arrows of $Q$ (this
is also the graded Jacobson radical of $\Bbbk Q$). Let $I$ be any
sub-ideal of $J$. Then it is clear that $I$ is a bialgebra
ideal of $\Bbbk Q$. Therefore there is an induced bialgebra
structure on $\Bbbk Q/I$.
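As a quick check of coassociativity on a path $p$ of length at
least 1 (the verifications involving the $e_i$ are similar but
longer):
$$(\Delta\otimes Id)\Delta(p)
=e_1\otimes e_1\otimes p+e_1\otimes p\otimes e_1
+p\otimes e_1\otimes e_1
=(Id\otimes \Delta)\Delta(p),$$
and $(\varepsilon\otimes Id)\Delta(p)=\varepsilon(e_1)p
+\varepsilon(p)e_1=p$, as required by the counit axiom.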
\end{example}
Let $Q$ be a finite acyclic quiver. Let $(\Delta,\varepsilon)$
be a coalgebra structure on $\Bbbk Q$. Suppose $|Q_0|=n$,
then $\Delta$ is called a {\it partitioning morphism}
(cf. \cite[p.460]{He2008a}) if
\begin{enumerate}
\item[(1)]
there are $E_1,\cdots,E_n$ which are subsets of
$E=\{(i,j)\mid1\leq i,j\leq n\}$,
\item[(2)]
$E_i\cap E_j=\emptyset$ if $i\neq j$, and
\item[(3)]
for every $1\leq k\leq n$,
$\Delta(e_k)= \sum\limits_{(i,j)\in E_k} e_i\otimes e_j$.
\end{enumerate}
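For example, when $n=2$ the choice $E_1=\{(1,1)\}$ and
$E_2=\{(1,2),(2,1),(2,2)\}$ yields the partitioning morphism
$$\Delta(e_1)=e_1\otimes e_1,\qquad
\Delta(e_2)=e_1\otimes e_2+e_2\otimes e_1+e_2\otimes e_2,$$
which is exactly the comultiplication in Lemma \ref{xxlem7.5}(a)
below.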
Let $Q(i,j)$ be the set of paths from vertex $i$ to
vertex $j$, then:
\begin{proposition}\cite[Proposition 4]{He2008a}
\label{xxpro7.2}
Let $Q$ be a finite acyclic quiver. Suppose $\Bbbk Q$ has
a coalgebra structure $(\Bbbk Q, \Delta, \varepsilon)$. Then
$\Bbbk Q_0$ is a subcoalgebra of $\Bbbk Q$ and
$\Delta$ is a prealgebra map if and only if
\begin{enumerate}
\item[(1)]
$\Delta$ is a partitioning morphism,
\item[(2)]
$\Delta(\alpha_1\cdots\alpha_m)=\Delta(\alpha_1)\cdots\Delta(\alpha_m)$
where $\alpha_i\in Q_1$,
\item[(3)]
$\Delta(\alpha)\in
\bigoplus\limits_{(i,j)\in E_k,(i',j')\in E_l}
\Bbbk Q(i,i')\otimes \Bbbk Q(j,j')$
for any $\alpha:k\rightarrow l$.
\end{enumerate}
\end{proposition}
\begin{proof}
First note that if $\Bbbk Q_0$ is a subcoalgebra of $\Bbbk Q$ and $\Delta$
is a prealgebra morphism, then $\Delta$ is a partitioning
morphism. The rest of the proof is similar to \cite[Proposition 4]{He2008a},
and we omit it here.
\end{proof}
\begin{remark}
\label{xxrem7.3}
\begin{enumerate}
\item[(1)]
Following Proposition \ref{xxpro7.2}, our first step is to
understand all weak bialgebra structures on $\Bbbk^{\oplus n}$.
This is already a non-trivial task and we pose it as a question.
$$\text{\it Can we classify all weak bialgebra structures
on $\Bbbk^{\oplus n}$?}$$
When $n=2$, see Lemma \ref{xxlem7.5} below.
\item[(2)]
There are algebras $A$ which do not admit any weak bialgebra
structure. Let $A$ be the algebra $\Bbbk[x]/(x^n)$ for some $n$.
Then $A$ admits a (weak) bialgebra structure if and only if
$n=p^t$ where $p={\rm{char}}\; \Bbbk>0$ and $t\geq 1$. We give
a sketch proof of one implication. Suppose that $A:=\Bbbk[x]/(x^n)$
is a weak bialgebra. Note that $A$ is local which implies that
both the target and source counital subalgebras of $A$ are
$\Bbbk$. As a consequence, $A$ is a bialgebra. So the augmentation
ideal $J:=\ker \epsilon$ is the Jacobson radical of $A$. So the
associated graded Hopf algebra ${\text{gr}}_{J} A$, which is
isomorphic to $A$ as an algebra, is the restricted enveloping
algebra of a restricted Lie algebra. Therefore the
$\Bbbk$-dimension of $A$ is $p^t$ for some $t\geq 1$. The
assertion follows.
\item[(3)]
Suppose ${\textrm{char}}\; \Bbbk=p>0$.
Let $A$ be the finite dimensional Hopf algebra
$$\Bbbk [x_1,\cdots,x_n]/(x_1^p,\cdots,x_n^p)$$
for some $n\geq 2$. The coalgebra structure of $A$ is
determined by
$$\Delta(x_i)=x_i\otimes 1+1\otimes x_i$$
for all $i$. Since $A$ is local, the only
brick object in $\mathcal{A}$ is the
trivial module $\Bbbk$. Therefore $\fpd(M)<\infty$
for every object in $\mathcal{A}$.
On the other hand, $A$ is wild when $n\geq 2$.
Therefore conditions (a) and (b) in Theorem \ref{xxthm0.4}
are not equivalent
if we remove the hereditary hypothesis.
\end{enumerate}
\end{remark}
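To illustrate the existence half of Remark \ref{xxrem7.3}(2) in the
smallest case $n=p$: when ${\rm{char}}\; \Bbbk=p>0$ and
$A=\Bbbk[x]/(x^p)$, the assignment $\Delta(x)=x\otimes 1+1\otimes x$
extends to a well-defined algebra map because
$$\Delta(x)^p=\sum_{i=0}^{p}\binom{p}{i} x^i\otimes x^{p-i}
=x^p\otimes 1+1\otimes x^p=0,$$
since $p\mid \binom{p}{i}$ for $0<i<p$. In characteristic $0$ this
candidate comultiplication fails for every $n\geq 2$, as the middle
terms $\binom{n}{i}x^i\otimes x^{n-i}$ do not vanish (of course
this checks only one candidate; the general non-existence statement
is the content of Remark \ref{xxrem7.3}(2)).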
\begin{definition}
\label{xxdef7.4}
Let $A$ be an algebra. Two (weak) bialgebra structures $(\Delta_1,
\varepsilon_1)$ and $(\Delta_2,\varepsilon_2)$ on $A$ are called
{\it equivalent} if there is an algebra automorphism
$\sigma$ of $A$ such that $\Delta_1 \sigma =(\sigma\otimes \sigma)
\Delta_2$ and $\varepsilon_1 \sigma=\varepsilon_2$.
\end{definition}
\begin{lemma}
\label{xxlem7.5}
Let $B=\Bbbk^{\oplus 2}=\Bbbk e_1\oplus \Bbbk e_2$. Then there are
five different weak bialgebra structures on $B$:
\begin{enumerate}
\item[(a)]
$\Delta(e_1)=e_1\otimes e_1, \Delta(e_2)=e_2\otimes e_2+e_1\otimes e_2
+e_2\otimes e_1$, $\varepsilon(e_1)=1$ and $\varepsilon(e_2)=0$.
\item[(b)]
$\Delta(e_1)=e_1\otimes e_1+e_2\otimes e_2, \Delta(e_2)=e_2\otimes e_1
+e_1\otimes e_2$, $\varepsilon(e_1)=1$ and $\varepsilon(e_2)=0$.
\item[(c)]
$\Delta(e_2)=e_2\otimes e_2, \Delta(e_1)=e_1\otimes e_1+e_1\otimes e_2
+e_2\otimes e_1$, $\varepsilon(e_1)=0$ and $\varepsilon(e_2)=1$.
\item[(d)]
$\Delta(e_2)=e_2\otimes e_2+e_1\otimes e_1, \Delta(e_1)=e_1\otimes e_2
+e_2\otimes e_1$, $\varepsilon(e_1)=0$ and $\varepsilon(e_2)=1$.
\item[(e)]
$\Delta(e_1)=e_1\otimes e_1, \Delta(e_2)=e_2\otimes e_2$,
$\varepsilon(e_1)=1$ and $\varepsilon(e_2)=1$.
\end{enumerate}
Note that {\rm{(a)}} and {\rm{(c)}} are equivalent bialgebra
structures {\rm{(}}and so are {\rm{(b)}} and {\rm{(d)}}{\rm{)}}.
The fifth one is a weak bialgebra, but not a bialgebra.
\end{lemma}
Note that (e) in the above lemma is the direct sum of two
copies of the trivial Hopf algebra $\Bbbk$. Consequently,
it is a weak Hopf algebra. The bialgebras (b) and (d) are
Hopf algebras, with the identity map of $B$ serving as an antipode
(when ${\rm{char}}\; \Bbbk\neq 2$, the structure (b) is the group
algebra $\Bbbk \mathbb{Z}_2$ with grouplike element $g:=e_1-e_2$).
The bialgebras (a) and (c) are not (weak) Hopf algebras: for (a),
the antipode axiom applied to the grouplike $e_1$ would force
$S(e_1)e_1=1$, which is impossible in $B$.
\begin{proof} Fix a (weak) bialgebra structure $(\Delta,\varepsilon)$
on $B$. Let $B_t$ and $B_s$ be target and source counital
subalgebras of $B$, see \cite[Definition 2.2.3]{NV2002}.
Case 1: $\dim B_t=1$. Then $B_t=B_s=\Bbbk 1_B$. In this case,
$B$ is a bialgebra. As a consequence,
$\varepsilon(e_1)+\varepsilon(e_2)=\varepsilon(e_1+e_2)=\varepsilon(1)=1$.
Since the $e_i$ are idempotents, each $\varepsilon(e_i)$ is 1 or 0.
First we assume that $\varepsilon(e_1)=1$ and $\varepsilon(e_2)=0$.
Write $\Delta(e_1)=\sum\limits_{i,j} a_{ij} e_i\otimes e_j$.
By the counital axiom, we obtain that $\Delta(e_1)=e_1\otimes e_1$ or
$\Delta(e_1)=e_1\otimes e_1+e_2\otimes e_2$. If
$\Delta(e_1)=e_1\otimes e_1$, we obtain case (a); if
$\Delta(e_1)=e_1\otimes e_1+e_2\otimes e_2$, we obtain
case (b). The other situation is $\varepsilon(e_1)=0$ and
$\varepsilon(e_2)=1$. By symmetry, we obtain (c) and (d).
Case 2: $\dim B_t=2$. Then $B_t=B_s=B$. By
\cite[Lemma 2.7]{BXYZZ2020}, $\Delta(e_1)=e_1\otimes e_1$
and $\Delta(e_2)=e_2\otimes e_2$. Then it is easy to check
that we obtain (e).
\end{proof}
\begin{lemma}
\label{xxlem7.6}
Let $A$ be a bialgebra and $J$ be its Jacobson radical. Suppose
that $J$ is nilpotent. If $B:=A/J\cong \Bbbk^{\oplus n}$ as an
algebra for some positive integer $n$, then $B$ is a quotient
bialgebra of $A$.
\end{lemma}
\begin{proof}
Let $\pi$ be the canonical quotient map from $A$ to $B$. It is
clear that $\pi$ is an algebra map. Consider the composition
of algebra maps:
$$A \xrightarrow{\Delta} A\otimes A
\xrightarrow{\pi \otimes \pi} B\otimes B.$$
Since $B\otimes B$ has no nonzero nilpotent elements and $J$ is
nilpotent, the above algebra map from $A$ to $B\otimes B$ factors
through the quotient map $\pi$; that is, there exists a unique
algebra map $\Delta_B$ from $B$ to $B\otimes B$ such that the
following diagram commutes
$$\xymatrix{
A \ar[rr]^{\Delta}\ar[d]_{\pi} & & A\otimes A\ar[d]^{\pi\otimes \pi}\\
B \ar[rr]^{\Delta_{B}} & & B\otimes B.}$$
Furthermore, $(\Delta_B\otimes Id)\Delta_B$ and
$(Id\otimes \Delta_B)\Delta_B$ are the algebra maps induced
by algebra maps
$(\pi\otimes \pi\otimes \pi)(\Delta\otimes Id)\Delta$
and $(\pi\otimes \pi\otimes \pi)(Id\otimes \Delta)\Delta$
respectively from $A\to B\otimes B\otimes B$. Then $\Delta_B$ is
coassociative since $\Delta$ is coassociative.
Similarly, let $\varepsilon_B: B\to \Bbbk$ be the algebra map
induced by $\varepsilon:A\to \Bbbk$. It is not hard to verify
that $\varepsilon_B$ satisfies the counital axiom. Consequently,
$B$ is a quotient bialgebra of $A$ and $J$ is a bi-ideal.
\end{proof}
Now we are ready to classify (weak) bialgebras on a small quiver.
\begin{proposition}
\label{xxpro7.7}
Suppose $Q$ is the quiver with two vertices $\{1,2\}$ and
$w$ arrows from $1$ to $2$ with $w\geq 1$. Let $A$ be the
path algebra $\Bbbk Q$. Then there are 5 types of weak bialgebra
structures on $A$ up to equivalence.
\begin{enumerate}
\item[(a)]
$\Delta(e_1)=e_1\otimes e_1, \Delta(e_2)=e_2\otimes e_2+e_1\otimes e_2
+e_2\otimes e_1$, $\varepsilon(e_1)=1, \varepsilon(e_2)=0$, and for any
arrow $r$ from 1 to 2, $\Delta(r)= e_1\otimes r+r\otimes e_1$ and
$\varepsilon(r)=0$.
\item[(b)]
$\Delta(e_1)=e_1\otimes e_1+e_2\otimes e_2, \Delta(e_2)=e_2\otimes e_1
+e_1\otimes e_2$, $\varepsilon(e_1)=1, \varepsilon(e_2)=0$,
and for any arrow $r$ from 1 to 2, $\Delta(r)=r\otimes e_1
+e_1\otimes r$ and $\varepsilon(r)=0$.
\item[(c)]
$\Delta(e_2)=e_2\otimes e_2, \Delta(e_1)=e_1\otimes e_1+e_1\otimes e_2
+e_2\otimes e_1$, $\varepsilon(e_2)=1, \varepsilon(e_1)=0$, and for any
arrow $r$ from 1 to 2, $\Delta(r)= e_2\otimes r+r\otimes e_2$ and
$\varepsilon(r)=0$.
\item[(d)]
$\Delta(e_2)=e_1\otimes e_1+e_2\otimes e_2, \Delta(e_1)=e_2\otimes e_1
+e_1\otimes e_2$, $\varepsilon(e_2)=1, \varepsilon(e_1)=0$,
and for any arrow $r$ from 1 to 2, $\Delta(r)=r\otimes e_2
+e_2\otimes r$ and $\varepsilon(r)=0$.
\item[(e)]
$\Delta(e_i)=e_i\otimes e_i$, $\varepsilon(e_i)=1$ for $i=1,2$,
and the Jacobson radical $J$ of $A$ is a subcoalgebra of $A$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $J$ be the Jacobson radical of $A$, which is the ideal
generated by the arrows from $1$ to $2$. It is clear that
$J^2=0$ and $A/J\cong B$ where $B$ is as given in Lemma
\ref{xxlem7.5}.
We first consider bialgebra structures on $A$.
By Lemma \ref{xxlem7.6}, $A/J$ is a quotient bialgebra of
$A$ and $J$ is a bi-ideal of $A$. All bialgebra structures
on $B\cong A/J$ are classified in Lemma \ref{xxlem7.5}. We
will use this classification to analyze the bialgebra
structures on $A$.
Case 1: Suppose the bialgebra structure on $B$ is as in
Lemma \ref{xxlem7.5}(a). Lifting the bialgebra structure on
$B$ to $A$, we have
$$\Delta(e_1)=e_1\otimes e_1+e_1\otimes t_1+ e_2\otimes t_2
+t_3\otimes e_1+t_4\otimes e_2+ T,$$
$$\Delta(e_2)=e_1\otimes e_2+e_2\otimes e_1+e_2\otimes e_2-
e_1\otimes t_1-e_2\otimes t_2
-t_3\otimes e_1-t_4\otimes e_2- T,$$
where $T\in J\otimes J$ and $t_i\in J$ for $1\leq i\leq 4$, and
$$\varepsilon (e_1)=1, \quad \varepsilon(e_2)=0,
\quad \varepsilon(r)=0 {\text{ for all }} r\in J.$$
By the counital axiom, we have $t_1=t_3=0$. By using the equation
$\Delta(e_1e_2)=0$, we have $t_2=t_4=0$.
In the bialgebra structure of $A$, we have, for every arrow
$r$ from $1$ to $2$,
$$\Delta(r)=e_1\otimes r+r\otimes e_1+ f(r)\otimes e_2+e_2\otimes
g(r)+w(r)$$
where $f(r),g(r)\in J$ and $w(r)\in J\otimes J$. Using the fact
that $r=re_1$, we obtain that $f(r)=g(r)=0$ for all $r$.
Picking any $\Bbbk$-basis of $J$, say $\{r_i\}$, we can write
\begin{eqnarray*}
\Delta(e_1)&= &e_1\otimes e_1+\sum_{i,j} a_{ij} r_i \otimes r_j,\\
\Delta(e_2)&= &e_1\otimes e_2+e_2\otimes e_1
+e_2\otimes e_2-\sum_{i,j} a_{ij} r_i \otimes r_j,\\
\Delta(r_i)&= &e_1\otimes r_i+r_i\otimes e_1+\sum_{j,k} c^{jk}_i r_j\otimes r_k.
\end{eqnarray*}
Suppose $\deg (e_1)=\deg(e_2)=0$ and $\deg(r_i)=1$.
Let $\equiv$ denote $=$ modulo higher degree terms. Then the
coalgebra structure above can be written as
$$\begin{aligned}
\Delta(e_1)&\equiv e_1\otimes e_1 \\
\Delta(e_2)&\equiv e_1\otimes e_2+e_2\otimes e_1+ e_2\otimes e_2\\
\Delta(r_i)&\equiv e_1\otimes r_i +r_i \otimes e_1.
\end{aligned}
$$
By \cite[Lemma 3.1]{HT2013}, if two different bialgebra structures
on $A$ both satisfy the above equations, then they are isomorphic.
Therefore, in this case, there exists a unique bialgebra structure
on $A$ up to isomorphism, that is,
\begin{eqnarray*}
\Delta(e_1)&= &e_1\otimes e_1,\\
\Delta(e_2)&= &e_1\otimes e_2+e_2\otimes e_1+e_2\otimes e_2,\\
\Delta(r_i)&= &e_1\otimes r_i+r_i\otimes e_1,
\end{eqnarray*}
which is exactly (a).
Case 2: Suppose the bialgebra structure on $B$ is as in
Lemma \ref{xxlem7.5}(b). Lifting the bialgebra structure on
$B$ to $A$, we have
$$\Delta(e_1)=e_1\otimes e_1+ e_2\otimes e_2+ e_1\otimes t_1+
e_2\otimes t_2+ t_3\otimes e_1+t_4\otimes e_2+ T,$$
where $T\in J\otimes J$ and $t_i\in J$ for $1\leq i\leq 4$, and
$$\Delta(e_2)=e_1\otimes e_2+e_2\otimes e_1-e_1\otimes t_1-
e_2\otimes t_2-t_3\otimes e_1-t_4\otimes e_2- T,$$
$$\varepsilon (e_1)=1, \quad \varepsilon(e_2)=0,
\quad \varepsilon(r)=0 {\text{ for all }} r\in J.$$
By the counital axiom, we have $t_1=t_3=0$.
Since each $e_i$ is an idempotent, we have $T=0$.
In the bialgebra structure of $A$, for every arrow $r$ from
1 to 2, we have
$$\Delta(r)=e_1\otimes r+r\otimes e_1+ f(r)\otimes e_2+e_2\otimes
g(r)+w(r)$$
where $f(r),g(r)\in J$ and $w(r)\in J\otimes J$.
Using the fact
that $e_1r=0$, we obtain that $f(r)=g(r)=0$ for all $r$ and
$w(r)+r\otimes t_2+t_4\otimes r=0$.
Hence, for all $t\in J$,
$$\Delta(t)=e_1\otimes t+ t\otimes e_1- t\otimes t_2-
t_4\otimes t.$$
Moreover, the coassociativity axiom, $(Id\otimes \Delta)\Delta(e_2)
=(\Delta\otimes Id)\Delta(e_2)$, implies $t_2=t_4=0.$
We obtain (b).
Cases 3 and 4: When the bialgebra structure on $B$ is as in
Lemma \ref{xxlem7.5}(c) or (d), the arguments are similar to Cases 1
and 2 respectively, and we obtain (c) and (d).
Next, we consider weak bialgebra, but not bialgebra, structures on $A$.
Let $A_t$ and $A_s$ be the target and source counital subalgebras.
By \cite[(2.1) and Proposition 2.4]{BNS1999},
$\dim A_t=\dim A_s$ and $A_s$ commutes with $A_t$.
If $\dim A_t=\dim A_s=1$, by \cite[Lemma 8.2]{N1998},
$A$ is a bialgebra since $\Delta(1)=1\otimes 1$, which is the case
we have just finished above. So $\dim A_t=\dim A_s\geq 2$.
Since $A_s$ is separable (hence semisimple), $A_s\cap J=\{0\}$.
Thus there is an injective map
$$A_s\longrightarrow A \xrightarrow{\pi} B$$
which implies that $\dim A_t=\dim A_s=2$ and
that $\pi(A_s)=B$.
Now we claim that $A_t=A_s\cong B$. Since $\pi(A_s)=B$,
we can write $A_s={\rm{span}}\{1,e_1+p\}$ where $p\in J$. In this case,
$A_t=A_s$, since $A_t$ commutes with $A_s$ and the space of elements
commuting with $e_1+p$ is $A_s$ itself. Let $l$
denote the idempotent $e_1+p$. Assume that
$$\Delta(1)=a_1 1\otimes 1+a_2 l\otimes 1+a_3 1\otimes l+a_4 l\otimes l$$
where $a_i\in \Bbbk$.
By \cite[Equations (2.7a) and (2.7b)]{BNS1999}, $a_1+a_2=0$ and
$\Delta(l)=(a_2+a_4)l\otimes l$.
By $\Delta(1)=\Delta(1^2)$, $a_2=a_3$ and one of the following equalities holds:
\begin{enumerate}
\item[(i)]
$\Delta(1)=l\otimes l$,
\item[(ii)]
$\Delta(1)=1\otimes 1- l\otimes 1- 1\otimes l+ l\otimes l$,
\item[(iii)]
$\Delta(1)=1\otimes 1- l\otimes 1- 1\otimes l+2 l\otimes l$.
\end{enumerate}
However, (i) implies that $l$ is a scalar multiple of $1$ and
(ii) implies that $1-l$ is a scalar multiple of $1$,
both of which are impossible. So (iii) holds and
$\Delta(1)=(1-l)\otimes (1-l)+ l\otimes l$,
which means that $A_t=A_s\cong B$ as weak bialgebras,
where the weak bialgebra structure on $B$ is
as in Lemma \ref{xxlem7.5}(e).
Write $l_1=l$ and $l_2=1-l$. Then
$\Delta(l_1)=l_1\otimes l_1$ and $\Delta(l_2)=l_2\otimes l_2$.
Note that $A=A_t\oplus J$ as vector spaces.
Then for any arrow $r$ from 1 to 2, we have
$$\Delta(r)=f(r)\otimes l_1+g(r)\otimes l_2+
l_1\otimes p(r)+ l_2\otimes q(r)+w(r),$$
where $f(r),g(r),p(r),q(r)\in J$ and $w(r)\in J\otimes J$.
Using $rl_1=r$ and $l_2r=r$, we obtain $f(r)=g(r)=p(r)=q(r)=0$ for all $r$.
That is, $J$ is a subcoalgebra of $A$.
It is not hard to check that any coalgebra structure on $J$
satisfies the conditions in Definition \ref{xxdef1.7}.
Moreover, let $\sigma: (A,\Delta,\varepsilon)\to (A,\Delta',\varepsilon')$
via $\sigma(l_i)=e_i$ and $\sigma(r)=r$ for $r\in J$, where
$(A,\Delta',\varepsilon')$ is the weak bialgebra as in (e).
Then $\sigma$ is an algebra automorphism and $(\Delta,\varepsilon)$
is equivalent to $(\Delta',\varepsilon')$.
\end{proof}
We finish this section with examples related to both
commutative projective varieties and noncommutative
projective schemes in the sense of \cite{AZ1994}.
\begin{definition} \cite[p. 1230]{HP2011}
\label{xxdef7.8}
Let ${\mathbb X}$ be a smooth projective scheme.
\begin{enumerate}
\item[(1)]
A coherent sheaf ${\mathcal E}$ on ${\mathbb X}$ is called
{\it exceptional} if $\Hom_{\mathbb X}({\mathcal E},{\mathcal E})
\cong \Bbbk$ and $\Ext^i_{\mathbb X}({\mathcal E},{\mathcal E}) =0$
for every $i \geq 1$.
\item[(2)]
A sequence ${\mathcal E}_1, \cdots, {\mathcal E}_n$ of exceptional
sheaves is called an {\it exceptional sequence} if $\Ext^k_{\mathbb X}
({\mathcal E}_i,{\mathcal E}_j) = 0$ for all $k$ and for all $i > j$.
\item[(3)]
If an exceptional sequence generates $D^b(coh({\mathbb X}))$, then
it is called {\it full}.
\item[(4)]
If an exceptional sequence satisfies
$$\Ext^k_{\mathbb X}({\mathcal E}_i,{\mathcal E}_j) = 0$$
for all $k > 0$ and all $i, j$, then it is called a {\it strongly
exceptional sequence}.
\end{enumerate}
\end{definition}
The above concepts are extended to an arbitrary triangulated category
in \cite[Definition 4.1]{Mo2013}. The existence of a full (strongly)
exceptional sequence has been proved for many smooth projective schemes.
However, on Calabi-Yau varieties there are no exceptional sheaves.
When ${\mathbb X}$ has a full exceptional sequence
${\mathcal E}_1, \cdots, {\mathcal E}_n$, then there is a triangulated
equivalence
\begin{equation}
\label{E7.8.1}\tag{E7.8.1}
\RHom_{\mathbb{X}}(\oplus_{i=1}^n {\mathcal E}_i,-):\quad
D^b(coh({\mathbb X}))\cong D^b(\Modfd-A)
\end{equation}
where $A$ is the finite dimensional algebra
$\End_{\mathbb X}(\oplus_{i=1}^n {\mathcal E}_i)$, see
\cite[Theorem 4.2]{Mo2013} (or \cite[Theorem 3.1.7]{BVdB2003}).
By Example \ref{xxex5.3}(2), there is a canonical
monoidal triangulated structure on $D^b(coh({\mathbb X}))$
induced by $\otimes_{\mathbb{X}}$. Then we obtain a monoidal
triangulated structure on $D^b(\Modfd-A)$ via
\eqref{E7.8.1}. By Example \ref{xxex5.3}(1), if $A$ is a
weak bialgebra, there is a (different) canonical monoidal
triangulated structure on $D^b(\Modfd-A)$ (or
equivalently, on $D^b(coh({\mathbb X}))$). In short, there
are possibly many different monoidal triangulated structures
on a given triangulated category.
Next we give an explicit example related to noncommutative
projective schemes.
\begin{example}
\label{xxex7.9}
Let $T$ be a connected graded noetherian Koszul
Artin-Schelter regular algebra of global dimension at least
2. If $T$ is commutative, then $T$ is the polynomial ring
$\Bbbk[x_0,x_1,\cdots,x_n]$ for some $n\geq 1$. Let
$\mathbb{X}$ be the noncommutative projective scheme
associated to $T$ in the sense of \cite{AZ1994}. In
\cite{AZ1994} $\proj T$ denotes the category of coherent
sheaves on $\mathbb{X}$, but here we use $coh(\mathbb{X})$
instead. When $T$ is the commutative polynomial ring
$\Bbbk[x_0,x_1,\cdots,x_n]$, then $\mathbb{X}$ is the
commutative projective $n$-space $\mathbb{P}^n$. On the
other hand, there are many noetherian Koszul Artin-Schelter
regular algebras $T$ that are not commutative. Let $r$ be
the global dimension of $T$ and $\mathcal{O}$ be the
structure sheaf of $\mathbb{X}$. Then
$$\{\mathcal{O}(-(r-1)),\mathcal{O}(-(r-2)),
\cdots,\mathcal{O}(-1), \mathcal{O}\}$$
is a full strongly exceptional sequence for $\mathbb{X}$
in the sense of \cite[Definition 4.1]{Mo2013}.
By \eqref{E7.8.1} or \cite[Theorem 4.2]{Mo2013},
\begin{equation}
\label{E7.9.1}\tag{E7.9.1}
D^b(coh(\mathbb{X}))\cong D^b(A-\Modfd)
\end{equation}
where $A$ is the opposite ring of
$\End_{\mathbb X}(\bigoplus_{i=0}^{r-1} {\mathcal O}(-i))$.
By \cite[Definition 4.6 and Theorem 4.7]{Mo2013},
$A$ is the opposite ring of the Beilinson algebra
(which is denoted by $R$ in \cite[Definition 4.6]{Mo2013}).
By the description in \cite[Definition 4.7]{MM2011}, the
Beilinson algebra is an upper triangular matrix with
diagonal entries being $\Bbbk$. Then $A$ can be written as
$\Bbbk Q/I$ where $Q$ is a quiver with $r$ vertices and
the number of arrows from vertex $i$ to vertex $j$ equals the
dimension of $T_{j-i}$. It is clear that $Q$ satisfies
condition (2) in Example \ref{xxex7.1}. By Example
\ref{xxex7.1}, there is a cocommutative bialgebra structure
on $A$. Similarly, vertex $r$ in $Q$ satisfies condition
(1) in Example \ref{xxex7.1}, which implies that there is
another cocommutative bialgebra structure on $A$. Via
\eqref{E7.9.1}, $D^b(coh(\mathbb{X}))$ has at least two
different monoidal triangulated structures induced by
two different bialgebra structures on $A$.
Now let $T$ be the polynomial ring $\Bbbk[x_0,x_1]$.
Then $\mathbb{X}=\mathbb{P}^1$ and
\begin{equation}
\nota
D^b(coh(\mathbb{P}^1))\cong D^b(B^{op}-\Modfd)
\end{equation}
where $B$ is the Beilinson algebra associated to $T$.
By \cite[Definition 4.7]{MM2011},
$$B=\begin{pmatrix} \Bbbk & \Bbbk x+\Bbbk y\\
0& \Bbbk\end{pmatrix}.$$
It is clear that $B$ is the path algebra of the Kronecker
quiver given in Example \ref{xxex2.7}. In this case we have
two monoidal triangulated structures on
$D^b(coh(\mathbb{P}^1))$. One is the monoidal structure
induced by $\otimes_{\mathbb{P}^1}$, and the other comes
from the canonical weak bialgebra structure of $B=\Bbbk Q$
[Lemma \ref{xxlem2.1}(1)]. Together with the two bialgebra
structures on $B$ from the previous paragraph, we obtain four
different monoidal triangulated structures on
$D^b(coh(\mathbb{P}^1))$. To show these monoidal triangulated
structures are not equivalent, one needs to use some arguments
in the proof of Lemma \ref{xxlem5.9} (details are omitted).
\end{example}
\section{Proof of Theorem \ref{xxthm0.8}}
\label{xxsec8}
It is important and interesting to calculate explicitly
$\fpd(M)$ of some objects $M$ in
a monoidal abelian (or triangulated) category. Generally
this is a very difficult task that depends on the complicated
combinatorial structure of the brick sets. In this section
we will work out one example. Note that some non-essential
details are omitted.
A type $\mathbb{A}_n$ quiver is defined to be a quiver of
form \eqref{E0.7.1}:
\begin{equation}
\nota
\xymatrix{
1 \ar@{-}[r]^{\alpha_1}&2\ar@{-}[r]^{\alpha_2}
&\cdots\ar@{-}[r]^{\alpha_{i-1}}&i\ar@{-}[r]^{\alpha_i}
&\cdots\ar@{-}[r]^{\alpha_{n-1}}&n}
\end{equation}
where each arrow $\alpha_i$ is either $\longrightarrow$ or
$\longleftarrow$. For each $n\geq 3$, there is more than one
isomorphism class of type $\mathbb{A}_n$ quivers
with $n$ vertices, though we denote all of them
by $\mathbb{A}_n$. In this section we provide a
fairly detailed computation of $\fpd(M)$ for every indecomposable
object in the monoidal abelian category $\Repr(\mathbb{A}_n)$.
Using Lemma \ref{xxlem4.11}, we obtain $\fpd(M)$ for every
indecomposable object $M$ in the monoidal triangulated
category $D^b(\Repr (\mathbb{A}_n))$. The result is
summarized in Theorem \ref{xxthm0.8}. Throughout this
section, the tensor product is defined as in \eqref{E2.1.1}.
First we try to understand brick sets in $\Repr(\mathbb{A}_n)$.
Recall that $M\{i,j\}$, for $i\leq j$, denotes the representation
of $\mathbb{A}_n$ defined by
\begin{align}
\notag
(M\{i,j\})_s&=\begin{cases} \Bbbk & i\leq s\leq j,\\
0 & {\text{otherwise,}}\end{cases} \\
\notag
(M\{i,j\})_{\alpha_s}&=\begin{cases} Id_{\Bbbk} & i\leq s<j,\\
0& {\text{otherwise}}.\end{cases}
\end{align}
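One checks from \eqref{E2.1.1} that the tensor product of two
interval representations is computed vertex-wise, so it is again an
interval representation (or zero), with support the intersection of
the supports:
$$M\{i,j\}\otimes M\{k,l\}=
\begin{cases}
M\{\max(i,k),\min(j,l)\} & {\text{if }} \max(i,k)\leq \min(j,l),\\
0 & {\text{otherwise}}.
\end{cases}$$
For instance, $M\{i,j\}\otimes M\{k,l\}=M\{k,j\}$ when
$i\leq k\leq j\leq l$; this formula is used repeatedly below.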
We start with easy observations.
\begin{lemma}
\label{xxlem8.1}
If $\{M\{1,m\},M\{k,l\}\}$ is a brick set and $m\geq k \geq 3$,
then $\{M\{2,m\}, M\{k,l\}\}$ is also a brick set.
\end{lemma}
\begin{proof}
This is clear since $k\geq 3$.
\end{proof}
\begin{lemma}
\label{xxlem8.2}
For any $1\leq i<j\leq n$, $\{M\{1,i\},M\{1,j\}\}$ is not a
brick set.
\end{lemma}
\begin{proof}
There are two cases.
Case 1: $s(\alpha_i)=i$.
Let $f:M\{1,j\}\rightarrow M\{1,i\}$ be
$(f)_k=\begin{cases}
Id & k\leq i\\
0 & k>i
\end{cases}.$
Then it is clear that $f\in \Hom(M\{1,j\},M\{1,i\})$
and $\Hom(M\{1,j\},M\{1,i\})\neq 0$.
Case 2: $t(\alpha_i)=i$.
Let $g:M\{1,i\}\rightarrow M\{1,j\}$ be
$(g)_k=\begin{cases}
Id & k\leq i\\
0 & k>i
\end{cases}.$
Then $g\in \Hom(M\{1,i\},M\{1,j\})$ and $\Hom(M\{1,i\},M\{1,j\})\neq 0$.
Combining these two cases, one sees that $\{M\{1,i\},M\{1,j\}\}$
is not a brick set.
\end{proof}
In the above lemma, we can replace $1$ by any positive integer
no more than $i$.
\begin{lemma}
\label{xxlem8.3}
Suppose $i\leq j\leq k$. Then one of the spaces
$\Hom(M\{i,j\},M\{i,k\})$ and $\Hom(M\{i,k\},M\{i,j\})$ is
isomorphic to $\Bbbk $ while the other is zero.
\end{lemma}
\begin{proof}
An idea similar to the proof of Lemma \ref{xxlem8.2} shows that
one of the spaces is nonzero and the other is zero. For the one that
is nonzero, it must be $\Bbbk$ by Lemma \ref{xxlem4.2}.
\end{proof}
\begin{lemma}
\label{xxlem8.4}
If $f:M\{i,k\}\rightarrow M\{i,l\}$ is a non-zero morphism and $k\neq l$,
then for any $j\leq i$, $\Hom(M\{i,l\}, M\{j,k\})=0$ and
$\Hom(M\{j,l\}, M\{i,k\})=0$.
\end{lemma}
\begin{proof}
Assume that $g:M\{i,l\}\rightarrow M\{j,k\}$ is a non-zero morphism;
then it induces a non-zero morphism $\hat g:M\{i,l\}\rightarrow M\{i,k\}$.
By Lemma \ref{xxlem8.3}, $f=0$ which contradicts the assumption.
Therefore, $\Hom(M\{i,l\}, M\{j,k\})=0$. Similarly,
$\Hom(M\{j,l\}, M\{i,k\})=0$.
\end{proof}
Next we define a binary relation, denoted by $\succ$, that
does not necessarily satisfy the usual axioms of an order.
\begin{definition}
{\label{xxdef8.5}}
For $N,N'\in \Repr(\mathbb{A}_n)$, we write $N\succ N'$ if
$\Hom(N,N')\cong \Bbbk$. Usually we only consider
indecomposable objects $N,N'$.
\end{definition}
Another easy observation, following from Lemma \ref{xxlem8.3}, is
\begin{lemma}
\label{xxlem8.6}
Let $I\subset \{1,2,\cdots,n\}$ and
$\mathcal{S}_I=\{X_i\mid X_i=M\{1,i\}, i\in I\}$.
Then $(\mathcal{S}_I,\succ)$ is a totally ordered set.
Similarly, $\{Y_i\mid Y_i=M\{i,n\}, i\in I\}$
is a totally ordered set.
\end{lemma}
\begin{lemma}
{\label{xxlem8.7}}
Let $N=M\{i,j\}$, $N'=M\{k,l\}$ and $k\leq j< l$.
\begin{enumerate}
\item[(1)]
If $s(\alpha_j)=j$ and $i\leq k$,
then $\Hom(N',N\otimes N')\cong \Bbbk$ where
$N\otimes N'=M\{k,j\}$.
\item[(2)]
If $t(\alpha_j)=j$,
then for all $m\leq j$, $\Hom(N', M\{m,j\})=0.$
\end{enumerate}
\end{lemma}
\begin{proof}
(1) In this case, we have $i\leq k\leq j$. By definition,
$N'=M\{k,l\}$ and $N\otimes N'=M\{k,j\}$.
Let $f:N'\rightarrow N\otimes N'$ be defined by
$(f)_s=\begin{cases}
Id & \mathrm{if} ~k\leq s\leq j\\
0 & \mathrm{otherwise}
\end{cases}.$
Then it is not hard to check $0\neq f\in \Hom(N', N\otimes N')$.
If $f'\in \Hom(N',N\otimes N')$, then there is a scalar $c\in \Bbbk$
such that $(f')_s=c Id$ for all $k\leq s\leq j$.
Then $f'=c f$ and $\Hom(N', N\otimes N')\cong \Bbbk$.
(2) Since $k\leq j<l$, $(N')_{j+1}=\Bbbk$. Let $f\in
\Hom(N',M\{m,j\})$. Then, for every $s>j$, $f_{s}=0$
as $(M\{m,j\})_s=0$. So we have
$$(f)_j (N')_{\alpha_j}=(M\{m,j\})_{\alpha_j} (f)_{j+1}=0.$$
Since $(N')_{\alpha_j}=Id_{\Bbbk}$, we obtain $(f)_j=0$.
Using a similar equation as above and induction, one sees
that $f_s=0$ for all $s<j$. Therefore $f=0$ as desired.
\end{proof}
For the rest of this section we use $\phi$ for a brick set in
$\Repr (\mathbb{A}_n)$. Given a brick set $\phi$ and an
indecomposable representation $M\{i,j\}$, we define three subsets
of $\phi$ according to $\{i,j\}$:
\begin{enumerate}
\item[(1)]
$\phi_i=\{N\in \phi \mid (N)_i\cong \Bbbk, (N)_j=0\},$
\item[(2)]
$\phi_j=\{N\in \phi \mid (N)_i=0, (N)_j\cong \Bbbk\},$
\item[(3)]
$\phi_{ij}=\{N\in \phi \mid (N)_i\cong \Bbbk, (N)_j\cong \Bbbk\}.$
\end{enumerate}
It is clear that $\phi$ contains the disjoint union of $\phi_i$,
$\phi_j$ and $\phi_{ij}$. Note that $\phi_l$, for $l$ being either
$i$ or $j$, can be divided into the following two parts:
\begin{equation*}
\begin{split}
\hat\phi_l &= \{N\in \phi_l \mid M\{i, j\}\otimes N\succ M\{i, j\}\},\\
\tilde\phi_l &= \{N\in \phi_l \mid M\{i, j\}\succ M\{i, j\}\otimes N\}.
\end{split}
\end{equation*}
\begin{lemma}
\label{xxlem8.8}
Let $N$ be an object in $\phi$ that satisfies either
$M\{i,j\}\otimes N=0$ or $M\{i,j\}\otimes N=N$. Then
$$\rho(A(\phi,M\{i,j\}\otimes -))=
\max\{a, \quad \rho(A(\phi\setminus \{N\},M\{i,j\}\otimes -))\}$$
where $a=\begin{cases}
0 & \text{if } M\{i,j\}\otimes N=0,\\
1 & \text{if } M\{i,j\}\otimes N=N.
\end{cases}$
\end{lemma}
\begin{proof}
Write $\phi=\{N_1, \cdots, N_m\}$ where $N_1=N$. By the
hypothesis on $N$,
$$\dim \Hom(N_k,M\{i,j\}\otimes N)=0$$
for $2\leq k\leq m$. Hence, in the matrix
$A(\phi,M\{i,j\}\otimes -)$, $a_{k1}=0$ for all $k\geq 2$.
As a consequence,
$$\rho(A(\phi,M\{i,j\}\otimes -))=
\max\{a, \quad \rho(A(\phi\setminus \{N_1\},M\{i,j\}\otimes -))\}$$
where $a:=a_{11}$ is the $(1,1)$-entry in $A(\phi,M\{i,j\}\otimes -)$.
Clearly $a$ has the desired property.
\end{proof}
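Schematically, the hypothesis on $N=N_1$ in Lemma \ref{xxlem8.8}
forces the first column of the adjacency matrix to vanish below the
$(1,1)$-entry, so that
$$A(\phi,M\{i,j\}\otimes -)=
\begin{pmatrix} a & \ast \\ 0 & A(\phi\setminus\{N_1\},
M\{i,j\}\otimes -)\end{pmatrix},$$
and the spectral radius of a block triangular matrix is the maximum
of the spectral radii of its diagonal blocks.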
\begin{lemma}
{\label{xxlem8.9}}
Let $N\in \phi_i$ and $N'\in \phi_j$. Then
$\{N, M\{i,j\}\otimes N'\}$ and $\{M\{i,j\}\otimes N, N'\}$ are
brick sets.
\end{lemma}
\begin{proof}
Write $N$ as $M\{i',j'\}$. Then $i'\leq i$ and $j'<j$.
Similarly, $N'=M\{k, l\}$ for some $k>i$ and $l\geq j$,
and consequently, $M\{i,j\}\otimes N'=M\{k, j\}$. A version
of Lemma \ref{xxlem8.1} shows that $\{M\{i',j'\},M\{k,l\}\}$
being a brick set implies that $\{M\{i',j'\}, M\{k,j\}\}$ is a
brick set. Therefore $\{N, M\{i,j\}\otimes N'\}$ is a
brick set. A similar argument shows that
$\{M\{i,j\}\otimes N, N'\}$ is a brick set.
\end{proof}
\begin{lemma}
\label{xxlem8.10}
Let $j$ be a positive integer no more than $n$. If
$j=n$ or $s(\alpha_j)=j$, then $A(\phi_j, M\{i,j\}\otimes -)$ is similar
to an upper triangular matrix in which all diagonal entries are 1.
\end{lemma}
\begin{proof}
If $j=n$, then $|\phi_j|=1$ and $A(\phi_j, M\{i,j\}\otimes -)=(1)_{1\times 1}$
by Lemma \ref{xxlem8.2}.
If $j<n$ and $s(\alpha_j)=j$, by Lemma \ref{xxlem8.6}, the set
$(\{M\{i,j\}\otimes N \mid N\in \phi_j\}, \succ)$ is a totally
ordered set. Let $|\phi_j|=m$ and we can label the objects in
$\phi_j$ so that
$$M\{i,j\}\otimes N_1\succ \cdots\succ M\{i,j\}\otimes N_{m}.$$
By Definition \ref{xxdef8.5},
\begin{equation}
\label{E8.10.1}\tag{E8.10.1}
\Hom(M\{i,j\}\otimes N_k, M\{i,j\}\otimes N_l)\cong
\begin{cases}
0 & \mathrm{if} ~l< k,\\
\Bbbk & \mathrm{if} ~l\geq k.
\end{cases}
\end{equation}
And, by Lemma \ref{xxlem8.7}(1),
\begin{equation}
\label{E8.10.2}\tag{E8.10.2}
\Hom(N_k, M\{i,j\}\otimes N_k)\cong \Bbbk.
\end{equation}
Combining \eqref{E8.10.1} and \eqref{E8.10.2}, we obtain
$$\dim \Hom(N_k,M\{i,j\}\otimes N_l)=
\begin{cases}
0 & \mathrm{if} ~l<k,\\
1 & \mathrm{if} ~l\geq k.
\end{cases}$$
The assertion follows.
\end{proof}
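For instance, when $|\phi_j|=3$, Lemma \ref{xxlem8.10} says that,
after reordering $\phi_j$ as above, the matrix becomes
$$A(\phi_j, M\{i,j\}\otimes -)=
\begin{pmatrix} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1\end{pmatrix},$$
all of whose eigenvalues equal $1$; in particular its spectral
radius is $1$.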
The next theorem is Theorem \ref{xxthm0.8}(2).
\begin{theorem}
\label{xxthm8.11}
Let $Q$ be a quiver of type $\mathbb{A}_n$ given in
\eqref{E0.7.1} for some $n\geq 2$. Then the following
hold in $\Repr(Q)$:
$$\fpd(M\{i,j\})=\begin{cases}
1 & {\text{if $M\{i,j\}$ is a sink}},\\
\min\{i, n-j+1\} & {\text{if $M\{i,j\}$ is a source}},\\
1 &{\text{if $M\{i,j\}$ is a flow}}.
\end{cases}$$
\end{theorem}
\begin{proof} First we show that
\begin{equation}
\label{E8.11.1}\tag{E8.11.1}
\fpd(M\{i,j\})\geq \begin{cases}
1 & {\text{if $M\{i,j\}$ is a sink}},\\
\min\{i, n-j+1\} & {\text{if $M\{i,j\}$ is a source}},\\
1 &{\text{if $M\{i,j\}$ is a flow}}.
\end{cases}
\end{equation}
Let $\phi$ be the singleton consisting of $M\{i,j\}$.
It is clear that $A(\phi, M\{i,j\}\otimes -)$ is $(1)_{1\times 1}$.
Hence $\fpd(M\{i,j\})\geq 1$. Now suppose that
$M\{i,j\}$ is a source. Let $d=\min\{i, n-j+1\}$.
We construct a brick set with $d$ elements as follows.
By Lemma \ref{xxlem8.6},
$(\{M\{k,i\}\mid 1\leq k\leq i\},\succ)$ and
$(\{M\{j,m\}\mid j\leq m\leq n\}, \succ)$
are two totally ordered sets. We list elements
in these two sets as
\begin{equation}
\label{E8.11.2}\tag{E8.11.2}
{\text{
$M\{k_1,i\}\succ \cdots\succ M\{k_i,i\}$ and
$M\{j,m_1\}\succ \cdots\succ M\{j,m_{n-j+1}\}$}}
\end{equation}
where $\{k_l\}_{l=1}^i$ and $\{m_l\}_{l=1}^{n-j+1}$
are distinct integers from $1$ to $i$ and
from $j$ to $n$ respectively.
Since $d=\min\{i,n-j+1\}$, we have a set of $d$ elements
$$\phi=\{M\{k_1,m_{n-j+1}\}, M\{k_2,m_{n-j}\}, \cdots,
M\{k_d, m_{n-j+2-d}\}\}.$$
We claim that $\phi$ is a brick set. If there is a nonzero
map from $M\{k_s,m_{n-j+2-s}\}$ to $M\{k_t,m_{n-j+2-t}\}$
for some $s<t$, then, when restricted to vertices
$\{j,j+1,\cdots,n\}$, we obtain a nonzero map from
$M\{j,m_{n-j+2-s}\}$ to $M\{j,m_{n-j+2-t}\}$. This
contradicts the second half of \eqref{E8.11.2}. Therefore
there is no nonzero morphism from $M\{k_s,m_{n-j+2-s}\}$ to
$M\{k_t,m_{n-j+2-t}\}$ for $s<t$. Similarly, there is
no nonzero morphism from $M\{k_t,m_{n-j+2-t}\}$ to
$M\{k_s,m_{n-j+2-s}\}$ for $s<t$, by using the first
half of \eqref{E8.11.2}. Thus we prove our claim.
Using this brick set, one sees that every entry in
the matrix $A(\phi, M\{i,j\}\otimes -)$ is $1$,
consequently, $\rho(A(\phi, M\{i,j\}\otimes -))=d$.
Therefore $\fpd(M\{i,j\})\geq d$ if $M\{i,j\}$ is a
source. Combining with the inequality $\fpd(M\{i,j\})\geq 1$,
we obtain \eqref{E8.11.1}.
It remains to show the opposite inequality of
\eqref{E8.11.1}, or equivalently, to show that
\begin{equation}
\label{E8.11.3}\tag{E8.11.3}
\rho(A(\phi, M\{i,j\}\otimes -))\leq \begin{cases}
1 & {\text{if $M\{i,j\}$ is a sink}},\\
\min\{i, n-j+1\} & {\text{if $M\{i,j\}$ is a source}},\\
1 &{\text{if $M\{i,j\}$ is a flow}},
\end{cases}
\end{equation}
for every brick set $\phi$ in $\Repr(Q)$. We use induction
on the integer $|\phi|+n$. If $|\phi|+n$ is 1, nothing needs to
be proved. So we assume that $|\phi|+n\geq 2$.
If $|\phi|=1$, then $A(\phi, M\{i,j\}\otimes -)$
is either $(0)_{1\times 1}$ or $(1)_{1\times 1}$.
It is clear that the assertion holds. Now we assume that
$|\phi|\geq 2$. This forces $n\geq 3$ (but we will
not use this fact directly). If there is an object $N\in
\phi$ such that either $M\{i,j\}\otimes N=0$ or
$M\{i,j\}\otimes N=N$, then \eqref{E8.11.3} follows
from Lemma \ref{xxlem8.8} and the induction
hypothesis.
For the rest of the proof we can assume that
$$N\not\cong M\{i,j\}\otimes N\neq 0$$
for every object $N\in \phi$. Note that the
above condition implies that $\phi$ is the disjoint union
of $\phi_i$, $\phi_j$ and $\phi_{ij}$. Now it
suffices to consider $\phi$ satisfying the following
conditions:
\begin{enumerate}
\item[(*)] $\phi=\phi_i\cup \phi_j\cup \phi_{ij}$,
\item[(**)] for every $N\in \phi$, $M\{i,j\}\otimes N\not\cong N.$
\end{enumerate}
Let $w$ be the number of objects in $\phi$. Suppose that
$\phi_j$ is not empty. If there is an $N\in \phi_j$ such that
$N\otimes M\{i,j\} \succ M\{i,j\}$, we let $N_w$ be the object in
$\phi_j\cup \phi_{ij}$ such that $N_w\otimes M\{i,j\}$
is largest in the set
$$\{ N\otimes M\{i,j\}\mid N\in \phi_j \cup \phi_{ij}\}.$$
Such an object $N_w$ exists by a version of Lemma
\ref{xxlem8.6}. It is easy to see that $N_w\in \phi_j$.
By the choice of $N_w$, one can show that,
for every $N_k\in \phi_j \cup \phi_{ij}$ with $k\neq w$,
$$\Hom(N_k, N_w\otimes M\{i,j\})=0.$$
If $N_k\in \phi\setminus (\phi_j\cup\phi_{ij})$,
then, by Lemma \ref{xxlem8.9},
$$\Hom(N_k, N_w\otimes M\{i,j\})=0.$$
Therefore
$a_{kw}=0$ for all $k<w$ as an entry in the adjacency
matrix $A(\phi, M\{i,j\}\otimes -)$. As a consequence,
$$\rho(A(\phi, M\{i,j\}\otimes -))=\max\{1, \rho(A(\phi\setminus\{N_w\},
M\{i,j\}\otimes -))\}.$$
Assertion \eqref{E8.11.3} follows by induction hypothesis.
The other possibility is that for every $N\in \phi_j$ we
have $M\{i,j\}\succ N\otimes M\{i,j\}$. Now let $N_1$ be the object in
$\phi_j\cup \phi_{ij}$ such that $N_1\otimes M\{i,j\}$
is smallest in the set
$$\{ N\otimes M\{i,j\}\mid N\in \phi_j \cup \phi_{ij}\}.$$
Such an object $N_1$ exists by a version of Lemma
\ref{xxlem8.6}. It is easy to see that $N_1\in \phi_j$.
By the choice of $N_1$, one sees,
for every $N_k\in \phi_j \cup \phi_{ij}$ with $k\neq 1$,
$$\Hom(N_1, N_k\otimes M\{i,j\})=0.$$
If $N_k\in \phi\setminus (\phi_j\cup\phi_{ij})$,
then, by Lemma \ref{xxlem8.9},
$$\Hom(N_1, N_k\otimes M\{i,j\})=0.$$
Therefore
$a_{1k}=0$ for all $k>1$ as an entry in the adjacency
matrix $A(\phi, M\{i,j\}\otimes -)$. As a consequence,
$$\rho(A(\phi, M\{i,j\}\otimes -))=\max\{1, \rho(A(\phi\setminus\{N_1\},
M\{i,j\}\otimes -))\}.$$
Assertion \eqref{E8.11.3} follows by the induction hypothesis.
Combining these two cases, we have shown that \eqref{E8.11.3}
holds by induction when $\phi_j$ is not empty.
Similarly, \eqref{E8.11.3} holds by induction when $\phi_i$
is not empty. The remaining case is when $\phi_i$ and
$\phi_j$ are empty, or
\begin{enumerate}
\item[$(^{\ast\ast\ast})$]
$\phi=\phi_{ij}$.
\end{enumerate}
We divide the rest of the proof into 5 small subcases.
Subcase 1: $t(\alpha_{i-1})=i$.
Pick any object in $\phi$, say $N_1$.
Suppose that $(N_1)_{i-1}\neq 0$. Then
$\Hom(N_1, M\{i,j\})=0$. Note that in
this case $M\{i,j\}=M\{i,j\}\otimes N$
for all $N\in \phi$. Therefore
$a_{1k}=0$ for all $k>1$ as an entry in the adjacency
matrix $A(\phi, M\{i,j\}\otimes -)$. As a consequence,
$$\rho(A(\phi, M\{i,j\}\otimes -))=\rho(A(\phi\setminus\{N_1\},
M\{i,j\}\otimes -)).$$
Assertion \eqref{E8.11.3} follows by the induction hypothesis.
Therefore, without loss of generality, we can assume that
$(N_1)_{i-1}=0$ for all $N_1\in \phi$. Now everything can be computed
in the subquiver $Q\setminus\{1\}$. Then we reduce the number
of vertices from $n$ to $n-1$. Again the assertion follows from the
induction hypothesis.
Subcase 2: $t(\alpha_j)=j$. This is equivalent to Subcase 1 after one
relabels vertices of $Q$ by setting $i'=n+1-i$ for all
$1\leq i\leq n$.
Subcase 3: $i=1$. Since $\phi=\phi_{ij}$, by Lemma \ref{xxlem8.6},
$\phi$ consists of a single object. As a consequence,
$A(\phi, M\{i,j\}\otimes -)$ is either $(0)_{1\times 1}$
or $(1)_{1\times 1}$. Then $\rho(A(\phi, M\{i,j\}\otimes -))
\leq 1$ and the assertion follows trivially.
Subcase 4: $j=n$. This is equivalent to Subcase 3 after one
re-labels vertices of $Q$ by setting $i'=n+1-i$ for all
$1\leq i\leq n$.
Subcase 5: not Subcases 1--4; namely, $i>1$, $j<n$, $t(\alpha_{i-1})=i-1$
and $t(\alpha_j)=j+1$.
In this case $M\{i,j\}$ is a source. We list all objects in
$\phi=\phi_{ij}$ as
$$M\{i_1,j_1\}, \cdots, M\{i_w,j_w\}$$
where $1\leq i_s\leq i$ and $j\leq j_s\leq n$.
By Lemma \ref{xxlem8.3}, all $i_s$ are distinct.
The same holds true for $j_s$. Therefore $|\phi|=w\leq
d:= \min\{i,n-j+1\}$. Since every
entry of $A(\phi, M\{i,j\}\otimes -)$ is
at most 1, we obtain that
$\rho(A(\phi, M\{i,j\}\otimes -))\leq |\phi|\leq d$
as desired.
Combining \eqref{E8.11.1} with \eqref{E8.11.3},
we finish the proof.
\end{proof}
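The linear-algebra step used repeatedly above, namely that the all-ones
$d\times d$ adjacency matrix has spectral radius $d$, can be checked
with a short Python sketch (purely illustrative, not part of any proof):
\begin{verbatim}
# Sketch: the all-ones d x d adjacency matrix arising from the brick
# set in the proof has spectral radius exactly d.
import numpy as np

for d in range(1, 6):
    A = np.ones((d, d))                   # A(phi, M{i,j} (x) -)
    rho = max(abs(np.linalg.eigvals(A)))  # spectral radius
    assert np.isclose(rho, d)
    print(d, rho)
\end{verbatim}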
Note that, for $M,N\in \Repr(Q)$,
$$\Hom_{D^b(\Repr(Q))}(M[0], N[1])\cong
\Ext^1_{\Repr(Q)}(M,N).$$
For the rest of this section we use
$\Ext^1(M,N)$ instead of $\Ext^1_{\Repr(Q)}(M,N)$.
The {\it Euler characteristic} of two representations
$M$ and $N$ of $Q$ is defined to be
$$\langle\mathbf{dim} M,\mathbf{dim} N\rangle_{Q}
=\sum_{v\in Q_0}x_v y_v-\sum_{\alpha\in Q_1}
x_{s(\alpha)} y_{t(\alpha)}$$
where $\mathbf{dim}$ denotes the dimension vector and
$x_v=\dim((M)_v)$, $y_v=\dim ((N)_v)$ for any
$v\in Q_0$. By \cite[p.65]{GR1992}, we have
\begin{equation}
\label{E8.11.4}\tag{E8.11.4}
\dim \Hom(M,N)-\dim \Ext^1(M,N)=
\langle\mathbf{dim} M,\mathbf{dim} N\rangle_{Q}.
\end{equation}
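As a concrete illustration of \eqref{E8.11.4}, the Euler form is
straightforward to evaluate from dimension vectors. The following
Python sketch does so for a small quiver of type $\mathbb{A}_4$; the
orientation and the dimension vectors are hypothetical, chosen only
for the example:
\begin{verbatim}
# Sketch: the Euler form <dim M, dim N>_Q for a type A_n quiver.
# Vertices are 0..n-1; 'arrows' is a list of (source, target) pairs
# encoding a hypothetical orientation used only for illustration.
def euler_form(x, y, arrows):
    return (sum(xv * yv for xv, yv in zip(x, y))
            - sum(x[s] * y[t] for (s, t) in arrows))

arrows = [(0, 1), (2, 1), (2, 3)]  # A_4: 1 -> 2 <- 3 -> 4 (0-indexed)
dimM = [1, 1, 0, 0]                # dimension vectors of two
dimN = [0, 1, 1, 0]                # interval representations
# By (E8.11.4) this equals dim Hom(M,N) - dim Ext^1(M,N):
print(euler_form(dimM, dimN, arrows))
\end{verbatim}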
One can verify the following.
\begin{lemma}
\label{xxlem8.12}
Assume $Q$ is of type $\mathbb{A}_n$. Let
$N=M\{i_1,j_1\}$, $N'=M\{i_2,j_2\}$, and assume $i_1\leq i_2$.
\begin{enumerate}
\item[(1)]
If $j_1\leq i_2-2$, then $\Ext^1(N,N')=\Ext^1(N',N)=0.$
\item[(2)]
Suppose that $j_1=i_2-1$.
\begin{enumerate}
\item
If $s(\alpha_{j_1})=j_1$, then $\Ext^1(N,N')\cong\Bbbk$, $\Ext^1(N',N)=0$.
\item
If $s(\alpha_{j_1})=i_2$, then $\Ext^1(N,N')=0$, $\Ext^1(N',N)\cong\Bbbk$.
\end{enumerate}
\item[(3)]
Suppose either $i_1<i_2\leq j_1<j_2$ or $i_1<i_2\leq j_2<j_1$.
\begin{enumerate}
\item
If $\Hom(N,N')\cong\Bbbk$, then $\Ext^1(N,N')=0$, $\Ext^1(N',N)\cong\Bbbk$.
\item
If $\Hom(N',N)\cong\Bbbk$, then $\Ext^1(N,N')\cong\Bbbk$, $\Ext^1(N',N)=0$.
\item
If $\{N,N'\}$ is a brick set, then $\Ext^1(N,N')=\Ext^1(N',N)=0$.
\end{enumerate}
\item[(4)]
If $i_1=i_2$ or $j_1=j_2$, then $\Ext^1(N,N')=\Ext^1(N',N)=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
When $j_1\leq i_2-2$, it is easy to see
\[
\dim \Hom(N,N')=\dim \Hom(N',N)=0
\]
and
\[
\langle\mathbf{dim} N,\mathbf{dim} N'\rangle_{Q}=
\langle\mathbf{dim} N',\mathbf{dim} N\rangle_{Q}=0.
\]
Therefore, $\Ext^1(N,N')=\Ext^1(N',N)=0.$
As for (2), (3) and (4), the proofs are similar and we omit them here.
\end{proof}
A direct corollary of Lemma \ref{xxlem8.12} is
\begin{corollary}
\label{xxcor8.13}
Assume $Q$ is of type $\mathbb{A}_n$.
If $\Hom(M\{i_1,j_1\},M\{i_2,j_2\})\cong \Bbbk$,
then \[\Ext^1(M\{i_1,j_1\},M\{i_2,j_2\})=0.\]
\end{corollary}
Any brick set $\phi$ in $\Repr(Q)$ is also a brick set in $D^b(\Repr(Q))$.
In the next lemma we are working with the category $D^b(\Repr(Q))$
and $\phi$ (respectively, $\phi_i$ and $\phi_j$) still denotes a brick
set in $\Repr(Q)$.
\begin{lemma}
\label{xxlem8.14}
Retain the notation above. Then $A(\phi_i, M\{i,j\}[1]\otimes-)$
is similar to a strictly lower triangular matrix.
\end{lemma}
\begin{proof}
By Lemma \ref{xxlem8.6}, $(\{M\{i,j\}\otimes N\mid N\in \phi_i\},\succ)$
is a totally ordered set, which can be listed as
\begin{equation}
\label{E8.14.1}\tag{E8.14.1}
M\{i, j_1\}\succ M\{i,j_2\}\succ \cdots \succ M\{i,j_{|\phi_i|}\}.
\end{equation}
When we compute the adjacency matrix $A(\phi_i, M\{i,j\}[1]\otimes-)$,
we order elements in $\phi_i$ according to \eqref{E8.14.1}.
For any two objects $M\{i_{s_1}, j_{s_1}\},M\{i_{s_2}, j_{s_2}\}$ in
$\phi_i$ with $s_1< s_2$, we have $M\{i, j_{s_1}\}\succ M\{i, j_{s_2}\}$.
An easy analysis shows that either
$\{M\{i_{s_1}, j_{s_1}\}, M\{i, j_{s_2}\}\}$ is a brick set or
$\Hom(M\{i_{s_1}, j_{s_1}\}, M\{i, j_{s_2}\})\cong \Bbbk$.
By Lemma \ref{xxlem8.12}(3),
$$\Ext^1 (M\{i_{s_1}, j_{s_1}\}, M\{i,j\}\otimes M\{i_{s_2}, j_{s_2}\})=0.$$
By Lemma \ref{xxlem8.12}(4), we have
$$\Ext^1 (M\{i_{s_1}, j_{s_1}\}, M\{i,j\}\otimes M\{i_{s_1}, j_{s_1}\})=
\Ext^1 (M\{i_{s_1}, j_{s_1}\}, M\{i, j_{s_1}\})=0.$$
As a consequence, $A(\phi_i,M\{i,j\}[1]\otimes -)$
is a strictly lower triangular matrix.
\end{proof}
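The content of Lemma \ref{xxlem8.14} is that the relevant adjacency
matrix is nilpotent: a strictly lower triangular matrix has only zero
eigenvalues, hence spectral radius $0$. A short Python sketch
(illustrative only):
\begin{verbatim}
# Sketch: a strictly lower triangular matrix is nilpotent, so its
# spectral radius is 0; this is what the lemma supplies for
# A(phi_i, M{i,j}[1] (x) -).
import numpy as np

A = np.tril(np.random.rand(5, 5), k=-1)  # strictly lower triangular
print(max(abs(np.linalg.eigvals(A))))    # 0 up to rounding error
\end{verbatim}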
\begin{lemma}
\label{xxlem8.15}
Let $N\in \hat\phi_j$, $N'\in \phi_i\cup \phi_{ij}$ and
$N''\in \tilde\phi_j$. Then
$$\Ext^1 (N, M\{i,j\}\otimes N')=\Ext^1 (N', M\{i,j\}\otimes N'')=
\Ext^1 (N, M\{i,j\}\otimes N'')=0.$$
\end{lemma}
\begin{proof}
Similar to the proof of Lemma \ref{xxlem8.12}, we prove only the first
equation and omit the proofs of the other two.
Write $N=M\{i_1, j_1\}$ and $N'=M\{i_2, j_2\}$. By definition,
$\hat\phi_j$ is nonempty. This implies that $s(\alpha_{i_1-1})=i_1-1$.
First we suppose that $N'\in \phi_i$. If $j_2\neq i_1-1$, then,
by Lemmas \ref{xxlem8.1} and \ref{xxlem8.12}(1,3c),
$\Ext^1 (N, M\{i,j\}\otimes N')=0$. If $j_2=i_1-1$, then
$\Ext^1 (N, M\{i,j\}\otimes N')=0$ by Lemma \ref{xxlem8.12}(2).
Therefore, $\Ext^1 (N, M\{i,j\}\otimes N')=0$ always holds for
$N'\in \phi_i$.
Next we suppose that $N'\in \phi_{ij}$. Then either $\{N,M\{i,j\}\}$
is a brick set or $\Hom(N,M\{i,j\})\cong \Bbbk$. By Lemma
\ref{xxlem8.12}(3), $\Ext^1 (N, M\{i,j\}\otimes N')=0$ since
$M\{i,j\}\otimes N'=M\{i,j\}$.
The assertion follows.
\end{proof}
Now, we prove Theorem \ref{xxthm0.8}(3).
\begin{theorem}
\label{xxthm8.16}
Let $Q$ be a quiver of type $\mathbb{A}_n$ given in
\eqref{E0.7.1} for some $n\geq 2$. Then
$$\fpd(M\{i,j\}[1])=\begin{cases}
\min\{i-1, n-j\} & {\text{if $M\{i,j\}$ is a sink}},\\
1 & {\text{if $M\{i,j\}$ is a source}},\\
1 &{\text{if $M\{i,j\}$ is a flow}}.
\end{cases}$$
\end{theorem}
\begin{proof} Since this is a statement about the derived
category $D^b(\Repr(Q))$, we need to consider all brick
objects in this derived category. However, by the
argument given in the proof of Lemma \ref{xxlem4.11}(2), we
only need to consider brick sets of the form
$$\phi=\{N_1, \cdots, N_m\mid N_s\in \Repr(Q)\}$$
which consists of objects in the abelian category $\Repr(Q)$.
The rest of the proof is somewhat similar to the proof of
Theorem \ref{xxthm8.11}.
If there exists an object $N_1=M\{i_0, j_0\}\in \phi$ satisfying
$M\{i,j\}\otimes N_1\cong N_1$, by Lemma \ref{xxlem8.2}, there
exists at most one object $N_2=M\{i_1,j_1\}\in \phi$ satisfying $j_1=i_{0} - 1$
and at most one object $N_3=M\{i_2,j_2\}\in \phi$ satisfying $i_2=j_0+1$.
Then, by Lemmas \ref{xxlem8.1} and \ref{xxlem8.12},
in the first column and the first row of $A(\phi, M\{i, j\}[1]\otimes-)$,
all entries are zero except for $a_{12}, a_{21}, a_{13}, a_{31}$, and
$a_{12} a_{21}=a_{13} a_{31}=0$.
In any case, we have
$$\rho(A(\phi,M\{i,j\}[1]\otimes -))
=\rho(A(\phi\setminus \{N_1\},M\{i,j\}[1]\otimes -)).$$
Also, if there is an object $N\in \phi$ satisfying
$M\{i,j\}\otimes N=0$, we also have
$$\rho(A(\phi,M\{i,j\}[1]\otimes -))
=\rho(A(\phi\setminus \{N\},M\{i,j\}[1]\otimes -)).$$
Similar to the proof of Theorem \ref{xxthm8.11}, it suffices
to consider the brick set $\phi$ satisfying the following conditions:
\begin{enumerate}
\item[(*)] $\phi=\phi_i\cup \phi_j\cup \phi_{ij}$,
\item[(**)] for every $N\in \phi$, $M\{i,j\}\otimes N\not\cong N.$
\end{enumerate}
By Lemma \ref{xxlem8.14}, if we re-arrange objects in
$\phi$ as $\hat\phi_j$, $\hat\phi_i$, $\phi_{ij}$, $\tilde\phi_i$
and $\tilde\phi_j$, then $A(\phi, M\{i, j\}[1]\otimes-)$ is
a block lower triangular matrix.
By Lemma \ref{xxlem8.15},
$$\rho(A(\phi, M\{i,j\}[1]\otimes-))=\rho(A(\phi_{ij}, M\{i,j\}[1]\otimes-)).$$
Therefore, for the rest we consider the brick set $\phi$
satisfying $\phi_{ij}=\phi$.
We divide the rest of the proof into 3 small cases.
Case 1: $M\{i,j\}$ is a source. In this case, for any
$N\in \phi$, $\Hom(N, M\{i,j\})\cong \Bbbk$. Then by Lemma
\ref{xxlem8.12}(3), $\Ext^1 (N, M\{i,j\})=0$.
As a consequence, the adjacency matrix
$A(\phi,M\{i,j\}[1]\otimes -)$ is a zero matrix.
Therefore, in this case, $\fpd(M\{i,j\}[1])=0$.
Case 2: $M\{i,j\}$ is a flow. Without loss of generality, assume that
$\alpha_{i-1}=\alpha_j=\longleftarrow$.
For any $N=M\{i_1,j_1\}\in \phi$,
if $i_1=i$, by Lemma \ref{xxlem8.12}(4),
$\Ext^1 (N,M\{i,j\})=0$.
If $i_1<i$, either $\Hom(N, M\{i,j\})\cong \Bbbk$ or
$\{N, M\{i,j\}\}$ is a brick set, then by Lemma \ref{xxlem8.12}(3),
$\Ext^1 (N,M\{i,j\})=0.$
As a consequence, the adjacency matrix
$A(\phi,M\{i,j\}[1]\otimes -)$ is a zero matrix.
Therefore, in this case, $\fpd(M\{i,j\}[1])=0$.
Case 3: $M\{i,j\}$ is a sink.
In this case, for any $N=M\{i_1,j_1\}\in \phi$,
if $i_1=i$, by Lemma \ref{xxlem8.12}(4),
$\Ext^1 (N,M\{i,j\})=0$.
If $j_1=j$, by Lemma \ref{xxlem8.12}(4),
$\Ext^1 (N,M\{i,j\})=0$.
Therefore, since $M\{i,j\}\otimes N=M\{i,j\}$,
we can assume that $i_1<i$ and $j_1>j$.
Now it is easy to see that $\Ext^1 (N,M\{i,j\})\cong \Bbbk$
by Lemma \ref{xxlem8.12}(3).
As a consequence, all entries in the adjacency matrix
$A(\phi,M\{i,j\}[1]\otimes -)$ are 1 and
$\rho(A(\phi,M\{i,j\}[1]\otimes -))=|\phi_{ij}|\leq \min\{i-1,n-j\}$.
On the other hand, by Lemma \ref{xxlem8.6},
$(\{M\{k,i\}\mid 1\leq k\leq i-1\},\succ)$ and
$(\{M\{j,m\}\mid j+1\leq m\leq n\}, \succ)$
are two totally ordered sets. We list elements
in these two sets as
\begin{equation*}
{\text{
$M\{k_1,i\}\succ \cdots\succ M\{k_{i-1},i\}$ and
$M\{j,m_1\}\succ \cdots\succ M\{j,m_{n-j}\}$}}
\end{equation*}
where $\{k_l\}_{l=1}^{i-1}$ and $\{m_l\}_{l=1}^{n-j}$
are distinct integers from $1$ to $i-1$ and
from $j+1$ to $n$ respectively.
Let $d=\min\{i-1,n-j\}$; then we have a set of $d$ elements
$$\phi=\{M\{k_1,m_{n-j}\}, M\{k_2,m_{n-j-1}\}, \cdots,
M\{k_d, m_{n-j+1-d}\}\}$$ which is a brick set.
Using this brick set, one sees that every entry in the
matrix $A(\phi, M\{i, j\}[1]\otimes-)$
is $1$ by Lemma \ref{xxlem8.12}; consequently,
$\rho(A(\phi, M\{i, j\}[1]\otimes-))=d$.
Hence, in this case, $\fpd(M\{i,j\}[1])=\min\{i-1,n-j\}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{xxthm0.8}]
(1) This follows from Lemma \ref{xxlem4.11}(1).
(2) This follows from Lemma \ref{xxlem4.11}(2)
and Theorem \ref{xxthm8.11}.
(3) This follows from Theorem \ref{xxthm8.16}.
\end{proof}
\subsection*{Acknowledgments}
The authors thank the referee for his/her very careful reading and
valuable comments and thank Professors Jianmin Chen and
Xiao-Wu Chen for many useful conversations on the subject.
J.J. Zhang was partially supported by the US National Science
Foundation (Grant Nos. DMS-1700825 and DMS-2001015). J.-H. Zhou
was partially supported by Fudan University Exchange Program
Scholarship for Doctoral Students (Grant No. 2018024).
\section{INTRODUCTION}
A classical nova has been thought to be a thermonuclear runaway of
hydrogen-rich gas accumulated onto a white dwarf in a close binary
system (\cite{Trur82}; \cite{Gehr98} and references therein). Recent
observations show that about 30\% of well-studied events are classified
as oxygen-neon-magnesium (ONeMg) novae. Observationally, ONeMg novae are
characterized by strong line emissions in neon and other
intermediate-mass elements like magnesium, aluminum, silicon, and sulfur
in their ejected shells (\cite{Livi94}). The presence of these elements
implies that the accumulated gases must have been substantially enriched
through the dredge-up from the ONeMg cores.
ONeMg novae have been suggested to be a promising production site of
$\gamma$-ray emitters $^7$Be, $^{22}$Na, and $^{26}$Al (\cite{Star78};
\cite{Weis90}; \cite{Nofa91}; \cite{Star93}; \cite{Coc95};
\cite{Poli95}; \cite{Hern96}; Wanajo et al. 1997a, b; \cite{Jose97};
\cite{Jose98}; \cite{Star98}). However, the following three
uncertainties confront us when studying nucleosynthesis in ONeMg
novae. First, the mass of the ONeMg white dwarf is not constrained from
theoretical models any more than $\sim 1.1-1.4 M_\odot$, which results
from the $8 - 10 M_\odot$ stellar evolution models (Nomoto 1984, 1987;
\cite{Iben85}). On the other hand, only a few observational estimates of
the white dwarf masses have been reported (\cite{Pare95}; \cite{Krau96};
\cite{Rett97}). Second, there is a serious disagreement on the accreted
masses onto white dwarfs between observational estimates and current
theories. The ONeMg white dwarfs in previous hydrodynamic studies
accumulate a few $10^{-5} M_\odot$ of the envelope masses at ignition
(\cite{Poli95}; \cite{Star98}; \cite{Jose98}). On the other hand, the
estimated ejecta masses of QU Vul, V838 Her, and V1974 Cyg are $\sim
10^{-4} - 10^{-3} M_\odot$ (\cite{Tayl87}; \cite{Gree88}; \cite{Saiz92};
\cite{Wood92}; \cite{Pave93}; \cite{Shor93}; \cite{Saiz96};
\cite{Vanl96}; \cite{Wood97}), which are $10 - 100$ times larger than
theoretical estimates. Starrfield et al. (1998) have shown that the
envelope mass increases with decreasing mass accretion rate and white
dwarf luminosity (see also \cite{Pria95}; \cite{Kove97}). However, it is
still significantly lower than observational estimates. Third, there has
been no consensus on the mixing mechanism between the white dwarf matter
and the accreted gas, though a few hypotheses such as diffusion, shear
mixing, and convective overshooting have been proposed (\cite{Pria84};
\cite{Kutt87}; \cite{Iben91}; \cite{Glas97}; \cite{Kerc98},
b). Furthermore, the metallicity estimates for the observed ejecta of
ONeMg novae show a wide spread between 0.09 and 0.86 in mass fraction
(\cite{Livi94}; \cite{Poli95}; \cite{Star98}). The initial composition
of an envelope may significantly affect the nucleosynthesis result as
well as the energetics of the outburst (\cite{Kove97}; \cite{Jose98}).
The purpose of this study is to examine nucleosynthesis in ONeMg novae
with the wide ranges of three parameters: the white dwarf mass, the
envelope mass, and the mixing ratio of the core-surface matter into the
envelope. In \S~\ref{sec:method}, we describe our quasi-analytic nova
models and an updated nuclear reaction network. We then, in
\S~\ref{sec:comp}, compare the nucleosynthesis results for one sequence
with a previous hydrodynamic calculation. In \S~\ref{sec:result}, we
constrain the ranges of white dwarf and envelope masses, comparing the
nucleosynthesis results with observational abundance estimates; there,
the effect of changing the initial composition is also considered. Finally,
the $\gamma$-ray line emissions from $^7$Be, $^{22}$Na, and $^{26}$Al
are discussed in \S~\ref{sec:gamma}.
\section{METHOD OF CALCULATION}
\label{sec:method}
\subsection{Nova Model}
\label{sec:model}
Our nova models are based on the quasi-analytic approach for the
hydrogen shell flash on a white dwarf (\cite{Sugi78}; Fujimoto 1982a,
b). The temperature and density structures of an envelope are obtained
analytically for a given set of a white dwarf mass ($M_{\rm WD}$) and an
envelope mass ($M_{\rm env}$), on the assumption that the spherical
envelope expands in hydrostatic equilibrium. We have constructed models
for 49 sets of $M_{\rm WD}$ ($1.05 - 1.35 M_\odot$) and $M_{\rm env}$
($10^{-6} - 10^{-3} M_\odot$). The former corresponds to the masses of
ONeMg cores which result from $8 - 10 M_\odot$ stellar evolution models
(Nomoto 1984, 1987; \cite{Iben85}), and the latter covers those both
from theories ($\sim 10^{-5}-10^{-4} M_\odot$; \cite{Trur77};
\cite{Poli95}; \cite{Star98}; \cite{Jose98}) and from observations
($\gtrsim 10^{-4} M_\odot$; \cite{Tayl87}; \cite{Gree88}; \cite{Saiz92};
\cite{Wood92}; \cite{Pave93}; \cite{Shor93}; \cite{Saiz96};
\cite{Vanl96}; \cite{Wood97}). The dots in Figure~\ref{fig1} are the
sequences at which our numerical calculations are performed, while
squares, triangles, and stars are taken from hydrodynamic studies by
Politano et al. (1995, hereafter PSTWS95), Starrfield et al. (1998,
hereafter STWS98), and Jos\'e \& Hernanz (1998, hereafter JH98). The
solid lines show the mass accretion rates onto the white dwarfs required
for each set of ($M_{\rm WD}$, $M_{\rm env}$), calculated by Fujimoto
(1982b). These are in reasonable agreement with those by PSTWS95,
STWS98, and JH98 ($\sim 10^{-10}-10^{-9} M_\odot$~yr$^{-1}$), but
somewhat overestimated since the luminosities of white dwarfs are
neglected and the radii are assumed to be of Chandrasekhar (see
Figure~\ref{fig3}) in Fujimoto (1982b). Note that no outburst is
achievable by an accreting white dwarf below the dashed line due to the
high accretion rate. It is obvious that a rather low accretion rate (or
a low luminosity of the white dwarf) is required to obtain a massive
envelope such as $\sim 10^{-4}-10^{-3} M_\odot$ as expected from
observations.
The quasi-analytic nova model has been elaborated by Sugimoto \&
Fujimoto (1978) and Fujimoto (1982a, b). Let us discuss the model in
some detail, since it can characterize the nova burst very well. The
pressure and the density at the base of the envelope are expressed in
terms of $M_{\rm WD}$ and $M_{\rm env}$:
\begin{eqnarray} \label{eqn:pb}
P_{\rm b} & = & \frac{GM_{\rm WD}M_{\rm env}}{4\pi {R_{\rm WD}}^4}
f_{\rm b} \:, \\ \label{eqn:rhob}
\rho_{\rm b} & = & \frac{M_{\rm env}}
{4\pi {R_{\rm WD}}^3}V_{\rm b}f_{\rm b} \:,
\end{eqnarray}
where $R_{\rm WD}$ is the radius of the white dwarf, $V$ is a
homologous invariant defined by
$$
V\equiv -\frac{d\ln P}{d\ln r} = \frac{GM\rho}{rP} \:.
$$
Hereafter the subscript `b' denotes a quantity at the base of the
envelope. The flatness parameter $f$ in the equations~(\ref{eqn:pb}) and
(\ref{eqn:rhob}) decreases monotonically as the shell flash proceeds:
\begin{equation}\label{eqn:f}
f\left(x,N\right) \equiv
\frac{x^{N+1}\left(1-x\right)^{3-N}}
{\left(N+1\right)B_x\left(N+1,3-N\right)} \:,
\end{equation}
where
\begin{equation}\label{eqn:x}
x\equiv \frac{N+1}V \quad (0<x<1) \:.
\end{equation}
The value of $f_{\rm b}$ denotes the degree of the `flatness' of the
envelope. For $f_{\rm b} \sim 1$ ($x_{\rm b} \sim 0$), the envelope is
thin and strongly degenerate, and thus is flat. On the other hand, for
$f_{\rm b} \sim 0$ ($x_{\rm b} \sim 1$), the envelope is thick and
nondegenerate, and thus is spherical. The polytropic index $N$ in the
equations~(\ref{eqn:f}) and (\ref{eqn:x}) is defined by $$ \frac
N{N+1}\equiv \frac{d\ln \rho}{d\ln P} \:, $$ and $B_x\left(p,q\right)$
is the incomplete beta function defined by $$ B_x\left(p,q\right)\equiv
\int_0^xt^{p-1}\left(1-t\right)^{q-1}dt \quad (0<x<1) \:. $$
$N$ is assumed to be adiabatic and constant throughout the envelope, but
to vary with time. The effect of the spatial variation in $N$ is quite
small for a typical convective envelope (\cite{Fuji82a}). The value of
$N$ is approximately 1.5 at the beginning of a shell flash, and
approaches $\sim 3$ at the end due to the increasing radiation pressure.
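For readers who wish to reproduce the behavior of $f$, the following
Python sketch (a minimal illustration; it recovers the unregularized
$B_x(p,q)$ from SciPy's regularized incomplete beta function) exhibits
the limits $f\to 1$ as $x_{\rm b}\to 0$ and $f\to 0$ as
$x_{\rm b}\to 1$:
\begin{verbatim}
# Sketch: the flatness parameter f(x, N) defined above; B_x(p, q) is
# recovered from SciPy's regularized incomplete beta function.
import numpy as np
from scipy.special import beta, betainc

def f_flat(x, N):
    Bx = betainc(N + 1.0, 3.0 - N, x) * beta(N + 1.0, 3.0 - N)
    return x**(N + 1.0) * (1.0 - x)**(3.0 - N) / ((N + 1.0) * Bx)

for x in (1e-4, 0.5, 0.999):   # f -> 1 (flat) ... f -> 0 (spherical)
    print(x, f_flat(x, 1.5))   # N = 1.5, early-flash value
\end{verbatim}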
The shell flash starts with $f_{\rm b} \sim 1$ ($x_{\rm b} \sim 0$).
The envelope is then heated up by nuclear burning to a thermal runaway,
and cools down when $f_{\rm b}$ decreases to $\sim 0$ ($x_{\rm b} \sim
1$). The equations (\ref{eqn:pb}) and (\ref{eqn:rhob}) are valid if
\begin{equation}
\theta\equiv \frac{Ux}{1-x}\ll 1
\label{eqn:theta}
\end{equation}
is satisfied, where $$ U\equiv \frac{d\ln M}{d\ln r} = \frac{4\pi
r^3\rho}{M} $$ is another homologous invariant. This condition is
violated only near the last phase of the shell flash ($f_{\rm b} \sim
0$). At this phase, major nuclear reactions are frozen out except for
the pp-chain, the CNO cycle, and $\beta^+$-decay. Thus, our
nucleosynthesis results may not be significantly affected.
Figure~\ref{fig2} illustrates contours for $P_{\rm b}/f_{\rm b}$ and
$\rho_{\rm b}/V_{\rm b}f_{\rm b}$ in the $M_{\rm WD}$--$M_{\rm env}$
space. These are the proper quantities for each set of ($M_{\rm
WD}$, $M_{\rm env}$). The stronger dependence of the former on $M_{\rm
WD}$ is due to the higher power of $R_{\rm WD}$, as seen in
equations~(\ref{eqn:pb}) and (\ref{eqn:rhob}). The temperature at the
base of the envelope $T_{\rm b}$ can be calculated by solving the
equation of state with the use of equations (\ref{eqn:pb}) and
(\ref{eqn:rhob}). The spatial variations of the pressure, the density,
and the temperature are given when the condition (\ref{eqn:theta}) is
satisfied, by
\begin{eqnarray}
P\left(x\right)
& = & P_{\rm b}\left(\frac{x}{x_{\rm b}}\right)^{N+1}
\left(\frac{1-x}{1-x_{\rm b}}\right)^{-(N+1)} \nonumber \\
\rho\left(x\right)
& = & \rho_{\rm b}\left(\frac{x}{x_{\rm b}}\right)^N
\left(\frac{1-x}{1-x_{\rm b}}\right)^{-N} \nonumber \\
T\left(x\right)
& = & T_{\rm b}\left(\frac{x}{x_{\rm b}}\right)^{(N+1)\nabla}
\left(\frac{1-x}{1-x_{\rm b}}\right)^{-(N+1)\nabla} \:, \nonumber
\end{eqnarray}
where $\nabla \equiv d\ln T/d\ln P$ is assumed to be adiabatic and
constant throughout the envelope, but to vary with time (on the deviation
from constant $\nabla$, see Fujimoto 1982a). The value of $x$ decreases
monotonically with increasing radius, approaching zero at the surface of
the envelope. The surface radius $R$ is given when the condition
(\ref{eqn:theta}) is satisfied, by
\begin{equation} \label{eqn:r}
R = \frac{R_{\rm WD}}{1 - x_{\rm b}}.
\end{equation}
Now we know the envelope structure completely.
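Given a snapshot $(x_{\rm b}, N, \nabla)$, these profiles are explicit;
the following Python sketch evaluates them with hypothetical snapshot
values, normalized to $P_{\rm b}=\rho_{\rm b}=T_{\rm b}=1$:
\begin{verbatim}
# Sketch: dimensionless envelope profiles P(x), rho(x), T(x),
# normalized to base values; x decreases outward from x_b to 0.
import numpy as np

def profiles(x, xb, N, nabla):
    r = (x / xb) * (1.0 - xb) / (1.0 - x)  # common homology factor
    return r**(N + 1.0), r**N, r**((N + 1.0) * nabla)

xb, N, nabla = 0.3, 1.5, 0.4               # hypothetical snapshot
for x in np.linspace(0.05, xb, 4):
    P, rho, T = profiles(x, xb, N, nabla)
    print(f"x={x:.2f}  P={P:.3e}  rho={rho:.3e}  T={T:.3e}")
\end{verbatim}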
The progress of a shell flash is derived by energy conservation,
\begin{equation} \label{eqn:dsdt}
\frac{ds}{dt}=\frac{\varepsilon_{\rm N}}
{\left\langle T\right\rangle} \:,
\end{equation}
where $\varepsilon_{\rm N}$ is the nuclear energy generation rate per
unit mass, $s$ is the specific entropy which is spatially constant in
the convective envelope, and $\left\langle T \right\rangle$ is the mass
averaged temperature over the envelope.
The energy inflow from the white dwarf and loss from the photosphere are
neglected, being much smaller than the nuclear energy during the
explosive hydrogen burning. The time variation of $x_{\rm b}$ is then
calculated from the equations~(\ref{eqn:pb}), (\ref{eqn:rhob}), and
(\ref{eqn:dsdt}) with the use of the equation of state. The expansion
velocity of the envelope $v_{\rm exp}$ is derived from the
equation~(\ref{eqn:r}) as $$ v_{\rm exp} = \frac{R}{1 - x_{\rm
b}}\frac{dx_{\rm b}}{dt}. $$ Each calculation is started with the
initial temperature $T_{\rm b} = 5 \times 10^7$~K, and ceased when the
nuclear luminosity decreases to the Eddington luminosity where no
further heavy elements are synthesized.
The $M_{\rm WD}$--$R_{\rm WD}$ relation is derived for an isothermal
core ($2 \times 10^7$~K) consisting of oxygen and neon (${\rm O} : {\rm Ne} = 5 : 3$) and
partially degenerate electron gases, including the effect of the Coulomb
interaction (\cite{Ichi94}), as shown in Figure~\ref{fig3}. The
solid line denotes our results and the triangles are taken from PSTWS95
and STWS98. Our results are between those of carbon and magnesium white
dwarfs by Hamada \& Salpeter (1961), and somewhat smaller than those by
PSTWS95 and STWS98. A variation of $R_{\rm WD}$ significantly influences
the density due to $\rho_{\rm b} \propto {R_{\rm WD}}^{-3}$ as seen in
the equation~(\ref{eqn:rhob}), much more than the temperature ($\propto
{R_{\rm WD}}^{-1}$). Note that the ONe white dwarf is unable to increase
its mass beyond $1.38 M_\odot$ because the electron capture on $^{20}$Ne
and $^{24}$Mg triggers the collapse (denoted by a dot on the solid line;
Nomoto 1984, 1987).
\subsection{Nuclear reaction network and initial composition}
\label{sec:ntwk}
The nuclear reaction network used in this work contains 87 stable and
proton-rich isotopes from hydrogen to calcium (Table~\ref{tab:ntwk}),
including all relevant nuclear reactions and weak interactions. The
reaction $^8$B($p$, $\gamma$)$^9$C, which can be a sink for the $^7$Be
production (\cite{Boff93}), is also included. The ground and isomeric
states of $^{26}$Al take longer than the mean lifetime of the isomer
($\simeq 9.2$~s) to be equilibrated for temperatures $\lesssim 4 \times 10^8$~K
(\cite{Ward80}). The peak temperatures in the models responsible for the
observed ONeMg novae may be less than $4 \times 10^8$~K as will be
discussed in \S~\ref{sec:obs}. Thus, the two states are separated as
different isotopes. The nuclear reaction rates are taken from Thielemann
et al. (1995). They are based on the rates by Caughlan \& Fowler (1988),
those calculated by a statistical model (\cite{Trur87}), and the latest
experimental data (\cite{VanW94}, etc.). We also include new reaction
rates by Herndl et al. (1995) and Iliadis et al. (1996). The rate
$^{26}$Si($p$, $\gamma$)$^{27}$P (\cite{Hern95}) may have a special
importance, being $10^3 - 10^4$ times larger than the previous one in
the typical nova temperature range. The rates $^{25}$Mg($p$,
$\gamma$)$^{26}$Al and $^{25}$Al($p$, $\gamma$)$^{26}$Si (\cite{Ilia96})
may be also of importance for $^{26}$Al production, though the latter
involves a large uncertainty. In our computations, all nuclear reaction
rates are mass-averaged over the envelope except for $\beta^+$-decay
which does not depend on density and temperature.
\begin{table}[b]
\caption{Nuclear Reaction Network Employed}
\label{tab:ntwk}
\smallskip
\begin{tabular}{rrrrrr}
\hline
\hline
Element & $A_{\rm min}$ & $A_{\rm max}$ &
Element & $A_{\rm min}$ & $A_{\rm max}$ \\
\hline
H \dotfill & 1 & 2 & Na\dotfill & 20 & 23 \\
He\dotfill & 3 & 4 & Mg\dotfill & 21 & 26 \\
Li\dotfill & 7 & 7 & Al\dotfill & 22 & 27 \\
Be\dotfill & 7 & 7 & Si\dotfill & 24 & 30 \\
B \dotfill & 8 & 11 & P \dotfill & 27 & 31 \\
C \dotfill & 9 & 13 & S \dotfill & 28 & 34 \\
N \dotfill & 13 & 15 & Cl\dotfill & 31 & 37 \\
O \dotfill & 14 & 18 & Ar\dotfill & 32 & 38 \\
F \dotfill & 17 & 19 & K \dotfill & 35 & 39 \\
Ne\dotfill & 18 & 22 & Ca\dotfill & 36 & 40 \\
\hline
\end{tabular}
\end{table}
\begin{table}[b]
\caption{Abundances of the ONeMg Core at the Surface}
\label{tab:onemg}
\smallskip
\begin{tabular}{rrrr}
\hline
\hline
Nucleus & Mass Fraction & Nucleus & Mass Fraction \\
\hline
$^{12}$C & 3.95E-02 & $^{24}$Mg & 4.20E-02 \\
$^{16}$O & 5.42E-01 & $^{25}$Mg & 6.29E-03 \\
$^{20}$Ne & 3.31E-01 & $^{26}$Mg & 4.57E-03 \\
$^{21}$Ne & 2.87E-03 & $^{27}$Al & 1.25E-02 \\
$^{22}$Ne & 1.34E-03 & $^{28}$Si & 2.46E-03 \\
$^{23}$Na & 1.65E-02 & \\
\hline
\end{tabular}
\end{table}
The initial composition of an envelope is assumed to be a mixture of the
solar composition gas and the dredged-up matter from the surface of the
ONeMg white dwarf. The solar abundances are adopted from \cite{Ande89},
and the abundances of the ONeMg core matter from \cite{Hash93} for the
1.35 $M_\odot$ ONeMg core (Table~\ref{tab:onemg}). As can be seen in
Table~\ref{tab:onemg}, ${\rm O} : {\rm Ne} : {\rm Mg} \approx 10 : 6 :
1$, which is in good agreement with those in Nomoto and Hashimoto (1988)
for $M_{\rm WD} = 1.26, 1.36 M_\odot$ and Ritossa, Garc\'{\i}a, \& Iben
(1996) for $M_{\rm WD} = 1.2 M_\odot$. This implies that the
composition of an ONeMg core does not significantly depend on its
mass. The mass fraction of the dredge-up matter from the ONeMg core in
the envelope $X_{\rm WD}$, which is the third parameter in this study,
is of importance for the nucleosynthesis results, as will be discussed in
\S~\ref{sec:depz}. However, abundance estimates in the observations of
nova ejecta involve large uncertainties as pointed out by Livio \&
Truran (1994). The estimated metallicities of the six observed ONeMg
nova ejecta range widely (see Table~\ref{tab:obs}) and, unfortunately,
different authors have provided different values even for the identical
events (\cite{Will85}; \cite{Snij87}; \cite{Saiz92}; \cite{Andr94};
\cite{Aust96}; \cite{Saiz96}; \cite{Vanl96}; \cite{Vanl97}). In
addition, no consensus has been achieved in theoretical modeling on how
and when the core matter mixes into the envelope (\cite{Pria84};
\cite{Iben91}; \cite{Kutt87}; \cite{Glas97}; \cite{Kerc98}, b). Thus, we
examine all the combinations of ($M_{\rm WD}$, $M_{\rm env}$) for $X_{\rm
WD} = 0.1$ (case A), 0.4 (case B), and 0.8 (case C), which cover
observational uncertainties in abundance determinations. The initial
compositions for each case are given in Table~\ref{tab:init}.
\begin{table*}[p]
\begin{center}
Table 3: Initial Compositions of the Envelope by Mass \\
\label{tab:init}
\medskip
\begin{tabular}{rrrr}
\hline
\hline
$X_{\rm WD}$ & 0.1 & 0.4 & 0.8 \\
\hline
$p$ & 6.36E-01 & 4.24E-01 & 1.41E-01 \\
D & 4.33E-05 & 2.88E-05 & 9.62E-06 \\
$^{ 3}$He & 2.64E-05 & 1.76E-05 & 5.87E-06 \\
$^{ 4}$He & 2.48E-01 & 1.65E-01 & 5.51E-02 \\
$^{ 7}$Li & 8.43E-09 & 5.62E-09 & 1.87E-09 \\
$^{11}$B & 4.26E-09 & 2.84E-09 & 9.46E-10 \\
$^{12}$C & 6.69E-03 & 1.76E-02 & 3.22E-02 \\
$^{13}$C & 3.29E-05 & 2.19E-05 & 7.31E-06 \\
$^{14}$N & 9.96E-04 & 6.64E-04 & 2.21E-04 \\
$^{15}$N & 3.93E-06 & 2.62E-06 & 8.74E-07 \\
$^{16}$O & 6.28E-02 & 2.22E-01 & 4.35E-01 \\
$^{17}$O & 3.50E-06 & 2.34E-06 & 7.79E-07 \\
$^{18}$O & 1.95E-05 & 1.30E-05 & 4.34E-06 \\
$^{19}$F & 3.65E-07 & 2.43E-07 & 8.11E-08 \\
$^{20}$Ne & 3.45E-02 & 1.33E-01 & 2.65E-01 \\
$^{21}$Ne & 2.90E-04 & 1.15E-03 & 2.29E-03 \\
$^{22}$Ne & 2.51E-04 & 6.15E-04 & 1.10E-03 \\
$^{23}$Na & 1.68E-03 & 6.60E-03 & 1.32E-02 \\
$^{24}$Mg & 4.66E-03 & 1.71E-02 & 3.37E-02 \\
$^{25}$Mg & 6.89E-04 & 2.55E-03 & 5.04E-03 \\
$^{26}$Mg & 5.27E-04 & 1.88E-03 & 3.67E-03 \\
$^{27}$Al & 1.31E-03 & 5.05E-03 & 1.00E-02 \\
$^{28}$Si & 8.34E-04 & 1.38E-03 & 2.10E-03 \\
$^{29}$Si & 3.09E-05 & 2.06E-05 & 6.86E-06 \\
$^{30}$Si & 2.12E-05 & 1.41E-05 & 4.71E-06 \\
$^{31}$P & 7.35E-06 & 4.90E-06 & 1.63E-06 \\
$^{32}$S & 3.57E-04 & 2.38E-04 & 7.93E-05 \\
$^{33}$S & 2.90E-06 & 1.94E-06 & 6.45E-07 \\
$^{34}$S & 1.68E-05 & 1.12E-05 & 3.74E-06 \\
$^{35}$Cl & 2.28E-06 & 1.52E-06 & 5.07E-07 \\
$^{37}$Cl & 7.70E-07 & 5.13E-07 & 1.71E-07 \\
$^{36}$Ar & 6.98E-05 & 4.65E-05 & 1.55E-05 \\
$^{38}$Ar & 1.39E-05 & 9.24E-06 & 3.08E-06 \\
$^{39}$K & 3.13E-06 & 2.08E-06 & 6.95E-07 \\
$^{40}$Ca & 5.40E-05 & 3.60E-05 & 1.20E-05 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\addtocounter{table}{1}
\section{Comparison with nucleosynthesis by a hydrodynamic model}
\label{sec:comp}
Up to now, a number of works on nucleosynthesis in ONeMg novae have been
performed (\cite{Hill82}; \cite{Weis90}; \cite{Nofa91}, and references
therein). Their nova models were, however, based on one-zone envelopes,
using the spatially constant temperature and density profiles taken from
hydrodynamic studies (\cite{Star78}; \cite{Star88}). Coc et al. (1995)
have studied $^{22}$Na and $^{26}$Al production in ONeMg novae with
another semi-analytic method (\cite{MacD83}). Their nova model and ours
give similar envelope structures in temperature and density. However,
our model includes the effect of the partially degenerate and
relativistic electron gas, while Coc et al. (1995) treated electrons as
the ideal gas. The electron degeneracy can not be neglected in the early
phase of outbursts. Hernanz et al. (1996) and Jos\'e, Hernanz, \& Coc
(1997) have also examined nucleosynthesis in novae with the use of a
hydrodynamic method. However, they focused on $^7$Li or $^{26}$Al
production, and gave only a few synthesized isotopes in their papers.
Hence, we compare our model with sequence~6 in STWS98 to see the
differences of nucleosynthesis between the quasi-analytic and
hydrodynamic methods. The nova model in STWS98 was identical to that of
PSTWS95, except that the former included the updated nuclear reaction
rates (\cite{VanW94}; \cite{Hern95}) and OPAL opacity tables
(\cite{Igle93}). In addition, STWS98 employed a lower white dwarf
luminosity and a lower mass accretion rate to obtain a more massive
ignition envelope. Furthermore, an important change was that STWS98 used
a longer mixing length of $(2 - 3) \times$~pressure scale height. We do
not compare our results with JH98 which has studied nucleosynthesis in
ONeMg (and CO) novae using a hydrodynamic code, since the white dwarf
radii are not presented. Their results showed, however, similar trends
to PSTWS95 and STWS98. We use the same initial composition, the nuclear
reaction rates, $M_{\rm WD}$ ($ = 1.25 M_\odot$), $M_{\rm env}$ ($ = 4.5
\times 10^{-5} M_\odot$), and $R_{\rm WD}$ as STWS98 for
comparison. Note that the nucleosynthesis results in this work are
obtained for the whole envelope, while those in STWS98 for the ejected
matter. Thus, STWS98 may strongly reflect the composition of the outer
region. Figure~\ref{fig4} shows the ratios of isotopes (dots) and
elements (triangles) between ours and STWS98 (sequence 6). Our
calculation obtains a higher peak temperature ($\simeq 3.17 \times
10^8$~K) than that of STWS98 ($\simeq 3.00 \times 10^8$~K), since the
latter model ignited hydrogen one zone above the base so that the
envelope is effectively thinner (see STWS98). The prominent
underproduction of several isotopes like $^{15}$N, $^{18}$O, $^{21}$Ne,
$^{22}$Na (and perhaps $^{23}$Na not shown in STWS98; see PSTWS95 for
instance), $^{24}$Mg, and $^{26}$Mg is due to our assumption of a fully
convective one-zone envelope. Since these isotopes are rather fragile
against the ($p$, $\gamma$) or ($p$, $\alpha$) reactions, they decrease
significantly even at the late phase of the outburst. In contrast, these
isotopes were able to survive in STWS98, escaping from the hotter
convective region into the cooler radiative region at the late
phase. Especially, $^{15}$N is extremely fragile against the ($p$,
$\alpha$) reaction, thus being underproduced by more than 5 orders of
magnitude in this work. As a result, nitrogen (mostly $^{14}$N in this
work) is also underproduced compared to STWS98 in which $^{15}$N is
dominant. On the other hand, carbon ($^{12}$C and $^{13}$C) is
significantly overproduced, being transferred from $^{15}$N. We should be
careful about these differences when comparing the nucleosynthesis results
with observations. However, both results are in excellent agreement for
other isotopes, and especially for elements (except for carbon and
nitrogen) which are more important for comparison with observations.
\section{Nucleosynthesis in ONeMg novae}
\label{sec:result}
\subsection{Nuclear flows in the $N$--$Z$ plane}
\label{sec:nzpl}
In this section, we present some important aspects of nucleosynthesis in
ONeMg novae, referring to the results of several ($M_{\rm WD}$, $M_{\rm
env}$) models. Figure~\ref{fig5} shows the final abundances and
the net nuclear flows in the $N$--$Z$ plane. The size of a circle
denotes the mole fraction of the isotope defined by $Y_i \equiv X_i/A_i$
in the logarithmic scale. The initial composition is shown by dotted
circles. The net nuclear flow of a reaction from the $i$-th to $j$-th
isotope, defined as
$$
F_{ij} \equiv
\int \left[ \dot Y_i\left( i\rightarrow j\right)
-\dot Y_j\left(j\rightarrow i\right) \right] dt \:,
$$
is denoted by the length of an arrow in the same scale. The mixing ratio
$X_{\rm WD}$ is assumed to be 0.4 (case~B) throughout this section,
which is close to the average metallicity of the ejecta estimated from
observations (see `$Z$' in Table~\ref{tab:obs}).
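In practice, $F_{ij}$ is accumulated by integrating the two molar rates
over the burst; the following Python sketch illustrates this with mock
rate histories (the Gaussian pulses are placeholders, not output of our
network):
\begin{verbatim}
# Sketch: net nuclear flow F_ij from sampled molar rates dY/dt.
# The two rate histories are mock data, not actual network output.
import numpy as np

t = np.linspace(0.0, 500.0, 2001)                   # time [s]
ydot_fwd = 1e-6 * np.exp(-((t - 100.0) / 50.0)**2)  # Y-dot, i -> j
ydot_rev = 2e-7 * np.exp(-((t - 120.0) / 60.0)**2)  # Y-dot, j -> i
F_ij = np.trapz(ydot_fwd - ydot_rev, t)             # net flow
print(F_ij)  # arrow lengths in Fig. 5 scale with log of such values
\end{verbatim}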
Figure~\ref{fig6} shows the peak temperature at the base $T_{\rm
peak}$, the cooling timescale $\tau$ defined as the duration from the
peak to its half in temperature, the peak nuclear energy generation rate
per unit mass $\varepsilon_{\rm peak}$, and the ejection velocity
$v_{\rm ej}$ in the $M_{\rm WD}$--$M_{\rm env}$ space. Here, $v_{\rm
ej}$ is defined as the expansion velocity $v_{\rm exp}$ when it equals
the escape velocity $v_{\rm esc}$ (for the models denoted by
circles). For the models denoted by crosses in which $v_{\rm exp}$ is
below $ v_{\rm esc}$ throughout the calculations, $v_{\rm ej}$ is
replaced with $v_{\rm exp}$ at the maximum. As seen in
Figure~\ref{fig6}, $\tau$ has a weaker dependence on $M_{\rm WD}$
than $T_{\rm peak}$, while the trend of $\varepsilon_{\rm peak}$ is
similar to $T_{\rm peak}$. As a result, among the models of the same
peak temperature, the explosion is more violent for the smaller $M_{\rm
WD}$ due to its smaller gravitational potential. This is also seen in
the panel of $v_{\rm ej}$, which shows the similar trend to $\tau$ in
the $M_{\rm WD}$--$M_{\rm env}$ space. In order to obtain the fast
ejection velocities such as $\gtrsim 1000$~km~s$^{-1}$ as derived by
recent observations (\cite{Gehr98} and references therein), the cooling
timescale must be $\lesssim 1000$~s where the $\beta^+$-decay of
$^{14}$O ($\tau \simeq 102$~s) and $^{15}$O ($\tau \simeq 176$~s) plays
an important role.
\subsubsection{Low temperature sequences}
For the model ($M_{\rm WD}/M_\odot$, $M_{\rm env}/M_\odot$) = (1.10,
$10^{-4.5}$), the initially present $^{24}$Mg is entirely transferred to
silicon, even though $T_{\rm peak}$ is as low as $\sim 2 \times 10^8$~K
(Figure~\ref{fig5}). In contrast, the initial $^{20}$Ne remains
mostly unburnt, though minor nuclear flows appear through the Ne-Na
cycle. A part of the initial $^{16}$O is converted to $^{17}$O,
$^{12}$C, $^{13}$C, and $^{14}$N. The HCNO cycle is active near the peak
in temperature, turning to the CNO cycle as the temperature
decreases. Thus, almost all $^{15}$N is eventually converted to
$^{14}$N, $^{12}$C, and $^{13}$C. Note that, for the models with $T_{\rm
peak} \lesssim 2 \times 10^8$~K, $v_{\rm exp}$ is too small to overcome
$v_{\rm esc}$ as seen in Figure~\ref{fig6}.
\subsubsection{Moderate temperature sequences}
The nucleosynthesis results for ($M_{\rm WD}/M_\odot$,
$M_{\rm env}/M_\odot$) = (1.15, $10^{-4.0}$) and (1.35, $10^{-5.5}$) (hereafter
N1540B and N3555B, respectively) differ significantly, despite
their nearly identical $T_{\rm peak}$ ($\simeq 2.9 \times 10^8$~K) as seen in
Figure~\ref{fig5}. This can be explained as follows.
Figure~\ref{fig7} shows the time variations of $T_{\rm b}$ and
$\varepsilon$ for each model. The cooling timescale for N1540B ($\tau
\simeq 190$~s) is more than an order of magnitude shorter than that for N3555B ($\tau
\simeq 2400$~s). This is a consequence of the weaker gravitational
potential for N1540B owing to its smaller $M_{\rm WD}$ and thus its
larger $R_{\rm WD}$ (Figure~\ref{fig3}). In addition, the nuclear
energy generation rate remains as high as $\sim
10^{14}$~erg~g$^{-1}$~s$^{-1}$ even after the envelope expands and the
temperature decreases to $\sim 10^8$~K, owing to the $\beta^+$-decay of
$^{14}$O, $^{15}$O, and other unstable nuclei. As a result, the
expansion of the envelope is accelerated and then the temperature drops
fairly quickly, even when its structure is returning to the static
configuration. In contrast, for N3555B, almost all the short-lived
$\beta^+$-unstable nuclei have decayed at the late phase. Hence, the
temperature drops slowly with the decreasing nuclear energy generation
rate. The patterns of the temperature decreases are, therefore, not
similar between these models. The critical cooling timescale between the
slow (N3555B) and fast (N1540B) expansion is $\tau \sim 1000$~s. The
cooling timescale for N1540B is comparable to the $\beta^+$-decay
lifetime of $^{15}$O ($\simeq 176$~s). As a result, $^{15}$N survives the
subsequent ($p$, $\alpha$) reactions and is significantly enhanced. For
similar reasons, $^{18}$O, $^{25}$Mg, and $^{26}$Al are prominent in
N1540B, while they are absent in N3555B. Note that the somewhat higher
$\varepsilon_{\rm peak}$ in N1540B is due to the higher density at the
base (Figure~\ref{fig2}).
It is noteworthy that the net nuclear flows of $^{24}$Mg($p$,
$\gamma$)$^{25}$Al have overcome the initial abundance of $^{24}$Mg for
both N1540B and N3555B (Figure~\ref{fig5}), owing to substantial
nuclear flux from the Ne-Na region. It implies that the initial amount
of $^{24}$Mg does not significantly affect the production of isotopes $A
\ge 24$ for the models $T_{\rm peak} \gtrsim 3 \times 10^8$~K. Note that
N1540B also obtains the significantly higher ejection velocity ($\simeq
2100$~km~s$^{-1}$) than N3555B ($\simeq 1200$~km~s$^{-1}$). As seen in
Figure~\ref{fig6}, for all the models with $T_{\rm peak} \gtrsim 3
\times 10^8$~K, $v_{\rm exp}$ exceeds $v_{\rm esc}$ and obtains $v_{\rm
ej} \gtrsim 1000$~km~s$^{-1}$, which is in good agreement with recent
observations of ONeMg novae.
\subsubsection{High temperature sequences}
For the models ($M_{\rm WD}/M_\odot$, $M_{\rm env}/M_\odot$) = (1.20,
$10^{-4.0}$) and (1.20, $10^{-3.5}$) (hereafter N2040B and N2035B,
respectively), substantial nuclear fluxes appear in the Ne-Na region
because of their high $T_{\rm peak}$ ($\simeq 3.3 \times 10^8$~K and
$4.2 \times 10^8$~K, respectively) as seen in
Figure~\ref{fig5}. In addition, various nuclear paths open in the
Mg-S region. The abundance of $^{26}$Al is highly enhanced in N2040B due
to the substantial nuclear flux from the Ne-Na region via $^{23}$Na($p$,
$\gamma$)$^{24}$Mg. On the other hand, $^{26}$Al is less abundant in
N2035B because of its higher peak temperature. Instead, $^{18}$O,
$^{22}$Na, and $^{23}$Na are highly enhanced in N2035B, since $\tau$
($\simeq 9.5$~s) is comparable to the $\beta^+$-decay lifetimes of
$^{18}$Ne (2.4~s), $^{22}$Mg (5.6~s), and $^{23}$Mg (16~s). For the
extremely high temperature ($T_{\rm peak} \simeq 7.3 \times 10^8$~K)
model ($M_{\rm WD}/M_\odot$, $M_{\rm env}/M_\odot$) = (1.30,
$10^{-3.0}$), almost all the initial $^{20}$Ne is burned out, and the
nuclear flow extends to calcium by the rp-process
(Figure~\ref{fig5}). The leakage from the CNO cycle via the
$\alpha$-capture of $^{14}$O and $^{15}$O appears, though its
contribution to the heavy element production is negligible.
\subsection{Element and isotope production}
\label{sec:mmpl}
{\scriptsize
\begin{table*}[t]
\caption{Observed ONeMg Nova Abundances}
\label{tab:obs}
\smallskip
\begin{tabular}{rlllllllllll}
\hline
\hline
& H & He & C & N & O & Ne & Mg & Al & Si & S & $Z$ \\
\hline
V693 CrA\tablenotemark{1} & 2.8E-01 & 3.2E-01 & 5.1E-03 & 8.4E-02 & 1.2E-01 & 1.7E-01 & 7.6E-03 & 3.4E-03 & 2.6E-03 & & 4.0E-01 \\
V693 CrA\tablenotemark{2} & 1.6E-01 & 1.8E-01 & 7.9E-03 & 1.4E-01 & 2.1E-01 & 2.7E-01 & 1.8E-02 & & 6.9E-03 & & 6.5E-01 \\
V693 CrA\tablenotemark{3} & 3.9E-01 & 2.0E-01 & 4.3E-03 & 8.0E-02 & 7.5E-02 & 2.3E-01 & 2.9E-03 & 1.9E-03 & 8.7E-03 & & 4.1E-01 \\
V1370 Aql\tablenotemark{4} & 4.9E-02 & 8.8E-02 & 3.5E-02 & 1.4E-01 & 5.1E-02 & 5.2E-01 & 6.8E-03 & & 1.8E-03 & 1.0E-01 & 8.6E-01 \\
V1370 Aql\tablenotemark{2} & 4.5E-02 & 1.0E-01 & 5.0E-02 & 1.9E-01 & 3.7E-02 & 5.6E-01 & 7.9E-03 & & 4.6E-03 & & 8.5E-01 \\
QU Vul\tablenotemark{5} & 3.0E-01 & 6.0E-01 & 1.0E-03 & 2.1E-02 & 1.6E-02 & 2.3E-02 & 1.7E-03 & & 4.0E-02 & & 1.0E-01 \\
QU Vul\tablenotemark{2} & 3.3E-01 & 2.7E-01 & 9.6E-03 & 7.4E-02 & 1.8E-01 & 8.7E-02 & 3.7E-03 & 9.9E-03 & 3.2E-02 & 1.2E-02 & 4.0E-01 \\
V351 Pup\tablenotemark{6} & 3.8E-01 & 2.4E-01 & 5.9E-03 & 7.4E-02 & 1.9E-01 & 1.1E-01 & & 4.3E-03 & 1.9E-03 & & 3.8E-01 \\
V838 Her\tablenotemark{3} & 6.0E-01 & 3.1E-01 & 1.2E-02 & 1.4E-02 & 2.5E-03 & 5.8E-02 & & & & 2.8E-03 & 9.0E-02 \\
V1974 Cyg\tablenotemark{7} & 1.8E-01 & 3.1E-01 & 5.4E-02 & 7.7E-02 & 2.7E-01 & 1.1E-01 & & & & & 5.1E-01 \\
\hline
\end{tabular}
\smallskip
\noindent
References: $^1$Williams et al. 1985, $^2$Andre\"a, Drechsel,
\& Starrfield 1994, $^3$Vanlandingham, Starrfield, \& Shore 1997,
$^4$Snijders et al. 1987, $^5$Saizar et al. 1992, $^6$Saizar et al. 1996,
$^7$Austin et al. 1996
\end{table*}
}
In this section, we discuss the global trends of element production and
isotope ratios in the $M_{\rm WD}$--$M_{\rm env}$ space, referring to
the abundances of ONeMg nova ejecta estimated from recent
observations. Table~\ref{tab:obs} shows the abundances for the recent
six ONeMg novae, V693 CrA (\cite{Will85}; \cite{Andr94}; \cite{Vanl97}),
V1370 Aql (\cite{Snij87}; \cite{Andr94}), QU Vul (\cite{Saiz92};
\cite{Andr94}), V351 Pup (\cite{Saiz96}), V838 Her (\cite{Vanl97}), and
V1974 Cyg (\cite{Aust96}). Note that the abundances of the elements not
presented in the above references are assumed to be zero, thus involving
errors of a few percent. The average metallicity for these ONeMg novae
is $\simeq 0.43$ by mass. The mixing ratio $X_{\rm WD}$ is, therefore,
assumed to be 0.4 (case~B) throughout this section. However, V1370 Aql
and V838 Her show significantly different metallicities from case~B. The
dependence on the initial composition is discussed in \S~\ref{sec:depz}.
When temperature is higher than $\sim 2 \times 10^8$~K, proton captures
are fast enough to compete with the $\beta^+$-decay of various unstable
isotopes. As a result, the nucleosynthesis results deviate significantly
from those of steady nuclear flows like the CNO and Ne-Na
cycles. Figures~\ref{fig8}--\ref{fig14} show the final
abundances and isotope ratios by mass in the $M_{\rm WD}$--$M_{\rm env}$
space. The abundances are shaded from white (0.1) to black ($10^{-5}$)
in the logarithmic scale (except for beryllium and boron). In the rest
of this paper, all abundances are given in mass fraction. As described
below, we find that there exist two types of elements, namely, those
correlated to $T_{\rm peak}$ (e.g., oxygen, neon, and sulfur) and to
$\tau$ (e.g., carbon, sodium, and magnesium).
\subsubsection{Beryllium and Boron}
\label{sec:beb}
As seen in Figure~\ref{fig8}, the abundance of $^7$Be (in mass
fraction) reaches $\sim 10^{-6}$ for $T_{\rm peak} \sim 2.5 - 4 \times
10^8$~K (Figure~\ref{fig6}), by the $\alpha$-capture of the
initially present $^3$He. For the same $T_{\rm peak}$, the lower $M_{\rm
WD}$ models produce more $^7$Be than higher ones. This is due to the
higher densities of the former, as seen in Figure~\ref{fig2}. When
density is less than $\sim 10^3$~g~cm$^{-3}$ at temperature $\sim 2 - 4
\times 10^8$~K, the proton capture of $^7$Be is suppressed by its
inverse reaction (\cite{Boff93}). For $T_{\rm peak} \gtrsim 4 \times
10^8$~K, $^7$Be decreases by its $\alpha$-capture. As a result, the
abundance of $^{11}$B reaches $\sim 10^{-7}$. For $T_{\rm peak} \gtrsim
6 \times 10^8$~K, the abundance of $^{11}$B decreases owing to the
reaction $^{11}$C($\alpha$, $p$).
\subsubsection{Carbon and nitrogen}
\label{sec:cn}
In the steady flow of the CNO cycle ($\lesssim 2 \times 10^8$~K), the
most abundant isotope is $^{14}$N and the isotope ratios are determined
by the nuclear reaction rates as
\begin{eqnarray}
^{12}{\rm C}/^{13}{\rm C}
& = & \lambda\left[^{13}{\rm C}(p,\gamma)\right]/
\lambda\left[^{12}{\rm C}(p,\gamma)\right]\sim 2-4 \nonumber \\
^{14}{\rm N}/^{15}{\rm N}
& = & \lambda\left[^{15}{\rm N}(p,\alpha)\right]/
\lambda\left[^{14}{\rm N}(p,\gamma)\right] \nonumber \\
& \sim & 5000-50000. \nonumber
\end{eqnarray}
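These steady-state ratios can be read off directly from rate ratios; a
minimal Python sketch (the $\lambda$ values below are placeholders, not
evaluated reaction rates) illustrates the arithmetic:
\begin{verbatim}
# Sketch: steady-state CNO isotope ratios from proton-capture rates.
# All lambda values are placeholders, not evaluated nuclear rates.
lam_12C_pg = 1.0e-4   # lambda[12C(p,gamma)]  [1/s], hypothetical
lam_13C_pg = 3.0e-4   # lambda[13C(p,gamma)]  [1/s], hypothetical
lam_14N_pg = 1.0e-6   # lambda[14N(p,gamma)]  [1/s], hypothetical
lam_15N_pa = 2.0e-2   # lambda[15N(p,alpha)]  [1/s], hypothetical

print("12C/13C =", lam_13C_pg / lam_12C_pg)  # ~ 2-4 in steady state
print("14N/15N =", lam_15N_pa / lam_14N_pg)  # ~ 5e3-5e4
\end{verbatim}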
When temperature exceeds $\sim 2 \times 10^8$~K, the CNO cycle is
replaced with the HCNO cycle via $^{13}$N($p$,
$\gamma$)$^{14}$O($\beta^+\nu$)$^{14}$N. The abundance patterns of the
carbon and nitrogen (Figure~\ref{fig9}) mainly depend on $\tau$
(Figure~\ref{fig6}) as follows: (1) For $\tau \gg 1000$~s, the
carbon and nitrogen isotopes show the typical feature of the steady CNO
cycle, i.e., ${\rm C/N} \ll 1$, $^{12}{\rm C}/^{13}{\rm C} \sim 3$, and
$^{14}{\rm N}/^{15}{\rm N} \sim 30000$. (2) For $\tau \sim 1000$~s,
however, these isotope ratios approach $\sim 1$, due to the
$\beta^+$-decay lifetimes of $^{13}$N ($\simeq 862$~s) and $^{15}$O
($\simeq 176$~s) comparable to the cooling timescale. The thermonuclear
runaway ceases before most $^{13}$N (and some $^{15}$O) decays, and thus
the ratio C/N also reaches $\sim 1$. (3) For $\tau \ll 1000$~s, the
thermonuclear runaway ceases during the active HCNO cycle where $^{14}$O
and $^{15}$O are abundant, resulting in ${\rm C/N} \ll 1$. The ratio
$^{12}{\rm C}/^{13}{\rm C}$ is unchanged ($\sim 3$), while $^{14}{\rm
N}/^{15}{\rm N}$ is significantly reduced to $\sim 0.1$.
The abundance of nitrogen is $\sim 0.1$ in the whole area of the $M_{\rm
WD}$--$M_{\rm env}$ space, regardless of the ratio $^{14}{\rm
N}/^{15}{\rm N}$ ranging over 5 orders of magnitude. In contrast, the
abundance of carbon ranges widely ($\sim 0.001 - 0.1$) reaching its
maximum at $\tau \sim 1000$~s, while the ratio $^{12}{\rm C}/^{13}{\rm
C}$ is not significantly changed in the $M_{\rm WD}$--$M_{\rm env}$
space. The above results explain the abundance feature of the recent
ONeMg novae (Table~\ref{tab:obs}), in which the abundance of carbon
spreads widely ($\sim 0.001 - 0.01$) while that of nitrogen is $\sim
0.1$. Note that the abundance of nitrogen for QU Vul (\cite{Saiz92}) and
V838 Her (\cite{Vanl96}) is as low as $\sim 0.02$, owing to the
significantly lower metallicities ($\sim 0.1$). For V838 Her and V1974
Cyg, the ratio of C/N is $\sim 1$ which is obtained by the models with
$\tau \sim 1000$~s.
It should be noted that our models may significantly underproduce
$^{15}$N, which causes too large a C/N ratio, as discussed in
\S~\ref{sec:comp}. This may, however, be the case only for the models with
$\tau \gg 1000$~s. For the models with $\tau \lesssim 1000$~s, the abundance
of $^{15}$N is not significantly reduced as described above, and thus
the results may not be changed substantially.
\subsubsection{Oxygen and fluorine}
\label{sec:of}
The abundance of oxygen is mainly correlated to $T_{\rm peak}$ but is
also dependent on $\tau$ (Figure~\ref{fig10}), owing to the presence
of three isotopes. The ratio $^{16}{\rm O}/^{17}{\rm O}$ has a clear
correlation to $T_{\rm peak}$. It reaches the minimum ($\sim 0.3$) at
$T_{\rm peak} \sim 3 \times 10^8$~K, and is nearly constant ($\sim 3$
for $T_{\rm peak} \lesssim 2 \times 10^8$~K and $\sim 10$ for $T_{\rm
peak} \gtrsim 4 \times 10^8$~K), due to the different nuclear reaction
cycles (Figure~\ref{fig5}). In contrast, the ratio $^{16}{\rm
O}/^{18}{\rm O}$ shows a clear correlation with the cooling timescale
(Figure~\ref{fig10}), being significantly small for $\tau \lesssim
100$~s. As a result, the abundance of oxygen reaches $\sim 0.03 - 0.1$
for (1) $T_{\rm peak} \lesssim 3 \times 10^8$~K ($^{16}$O and $^{17}$O
are abundant) or for (2) $\tau \lesssim 100$~s ($^{18}$O is
abundant). Note that oxygen is always abundant in the models $M_{\rm WD}
\lesssim 1.15 M_\odot$, where one of these conditions is satisfied.
Fluorine ($^{19}$F) is not significantly enhanced in any of the models
(Figure~\ref{fig10}). The reason is that the reaction $^{18}$F($p$,
$\gamma$)$^{19}$Ne, which is followed by the $\beta^+$-decay to
$^{19}$F, is much slower than $^{18}$F($p$, $\alpha$)$^{15}$O. The
abundance of $^{19}$F is $\sim 10^{-4}$ at most for $\tau \sim 10$~s,
which is comparable to the $\beta^+$-decay lifetime of $^{19}$Ne
($\simeq 25$~s).
The oxygen-rich ONeMg novae ($\sim 0.1 - 0.3$ by mass) V693 CrA, QU Vul,
V351 Pup, and V1974 Cyg (Table~\ref{tab:obs}) can be explained by the
following models: (1) $M_{\rm WD} \lesssim 1.15 M_\odot$, (2) $T_{\rm
peak} \lesssim 2 \times 10^8$~K, or (3) $\tau \lesssim 10$~s. On the
other hand, V838 Her is fairly oxygen poor ($\simeq 3.3 \times
10^{-3}$), which could be explained by a rather massive model ($M_{\rm
WD} \sim 1.3 M_\odot$). It should be noted, however, that its estimated
metallicity is $\simeq 0.09$ (Table~\ref{tab:obs}), being significantly
less than assumed in this section (see \S~\ref{sec:depz}).
The ratio C/O can be $\gtrsim 1$ for $\tau \sim 1000$~s where the
abundance of carbon is $\sim 0.1$ and that of oxygen is $\lesssim
0.1$. It implies that the carbon-rich ONeMg novae, i.e., V1370 Aql and
V838 Her, may be explained by the models with $\tau \sim 1000$~s. Note
that carbon tends to be overproduced in the models with $\tau \gg
1000$~s (\S~\ref{sec:cn}). This may not, however, change the above
result with $\tau \sim 1000$~s.
\subsubsection{Neon and sodium}
\label{sec:nena}
Neon is the second most abundant metal in the initial composition
(Table~\ref{tab:init}). The abundance of neon is not significantly
reduced for $T_{\rm peak} \lesssim 4 \times 10^8$~K, due to its rather
slow proton capture (Figure~\ref{fig11}). Nevertheless, the
substantial nuclear flow appears in the Ne-Na cycle even for $T_{\rm
peak} \sim 2 - 3 \times 10^8$~K (Figure~\ref{fig5}) owing to the
abundant neon initially present. The ratio $^{20}{\rm Ne}/^{21}{\rm Ne}$
is clearly correlated with the cooling timescale, being small for the
shorter $\tau$, where the $\beta^+$-decay lifetime of $^{21}$Na ($\simeq
32$~s) is not negligible. On the other hand, the ratio $^{20}{\rm
Ne}/^{22}{\rm Ne}$ is clearly correlated to the peak temperature,
increasing with a rise in $T_{\rm peak}$. This is due to the faster
proton capture on $^{22}$Ne than on $^{20}$Ne.
The abundance of sodium is $\lesssim 10^{-3}$ for $\tau \gtrsim 100$~s,
due to the steady Ne-Na cycle where $^{20}$Ne is most abundant
(Figure~\ref{fig11}). The isotope ratio is also determined by their
reaction rates as $$ ^{22}{\rm Na}/^{23}{\rm Na} =
\lambda\left[^{23}{\rm Na}(p,\alpha)\right]/ \lambda\left[^{22}{\rm
Na}(p,\gamma)\right]\sim 10 \:, $$ in the temperature range $\sim 2 - 4
\times 10^8$~K. On the other hand, sodium is abundant ($\sim 0.01 - 0.1$
by mass) for $\tau \lesssim 100$~s, where the $\beta^+$-decay lifetimes
of $^{22}$Mg ($\simeq 6$~s) and $^{23}$Mg ($\simeq 16$~s) are not
negligible. Thus, a part of sodium, which is the decayed product of the
magnesium isotopes, survives the subsequent proton capture. The ratio
$^{22}{\rm Na}/^{23}{\rm Na}$ reaches $\sim 1$, owing to the abundant
$^{22}$Mg and $^{23}$Mg in the Ne-Na region during outbursts. The
abundance of $^{22}$Na shows a similar trend to that of sodium, clearly
correlated to the cooling timescale. This abundance can be changed by
the large uncertainty of the $^{22}$Na($p$, $\gamma$)$^{23}$Mg rate
(\cite{Kubo94}; \cite{Schm95}; \cite{Coc95}; \cite{Kubo96}). However, it
may not be significantly affected for $\tau \lesssim 100$~s, since the
explosive burning ceases while $^{22}$Mg is abundant.
The enrichment in neon is characteristic of all the observed ONeMg novae.
On the other hand, no positive detection of sodium has been reported for
recent ONeMg novae (\cite{Gehr94}), due to lack of useful lines and,
probably, little enrichment in sodium in the nova ejecta. An alternative
way to check the nucleosynthesis in the Ne-Na region is to compare with
the result of the $\gamma$-ray line survey of the $^{22}$Na decay from a
nearby ONeMg nova by CGRO or INTEGRAL in the near future.
\subsubsection{Magnesium and aluminum}
\label{sec:mgal}
Magnesium is one of the abundant elements initially present, but rather
fragile against proton capture. As a result, it is mostly transferred to
aluminum and silicon via the opened Mg-Al cycle (\cite{Timm88};
\cite{Cham88}). As seen in Figure~\ref{fig12}, the abundance of
magnesium reaches its minimum at $\tau \sim 1000$~s, in contrast to
carbon (Figure~\ref{fig9}). For $\tau \lesssim 1000$~s, it reaches
$\sim 10^{-2}$ due to the substantial leakage from the Ne-Na cycle and
the non-negligible $\beta^+$-decay lifetime of $^{25}$Al ($\simeq
10$~s). Note that the most abundant isotope is always $^{25}$Mg, since
its proton capture is the slowest. The isotope ratios $^{24}{\rm Mg}/^{25}{\rm
Mg}$ and $^{24}{\rm Mg}/^{26}{\rm Mg}$ are clearly correlated to the
cooling timescale. They are, however, not monotonic with $\tau$ but
complicated due to the inflow from the Ne-Na cycle and the leakage from
the Mg-Al region, and the various nuclear paths at high temperature
(Figure~\ref{fig5}).
The abundance of aluminum shows a similar trend to that of magnesium,
correlated to the cooling timescale (Figure~\ref{fig12}). The ratio
$^{26}{\rm Al}/^{27}{\rm Al}$ is not significantly changed, being close
to $$^{26}{\rm Al}/^{27}{\rm Al} = \lambda\left[^{27}{\rm
Al}(p,\gamma)\right]/ \lambda\left[^{26}{\rm Al}(p,\gamma)\right]\sim
0.1-0.5 $$ in the temperature range $\sim 1 - 4 \times 10^8$~K. However,
the ratio decreases with a reduction in the cooling timescale, due to
the non-negligible $\beta^+$-decay lifetime of $^{27}$Si ($\simeq 6$~s)
which is the parent isotope of $^{27}$Al. Note that, for rather high
temperature models ($T_{\rm peak} \gtrsim 4 \times 10^8$~K), the proton
capture on $^{25}$Al is faster than its $\beta^+$-decay. The subsequent
isotope $^{26}$Si decays to $^{26}$Mg in $\sim 12$~s through the
isomeric state of $^{26}$Al, bypassing its ground state. The double
peaks in $^{26}$Al ($\sim 3 \times 10^{-3}$ by mass) can be seen in
Figure~\ref{fig12}. The one at lower peak temperatures ($\sim 1.8
\times 10^8$~K) is consistent with PSTWS95, STWS98, and JH98, in which
the abundance of $^{26}$Al decreases with increasing white dwarf
mass. The other peak at higher peak temperatures ($\gtrsim 3 \times
10^8$~K) is the consequence of the substantial nuclear flux from the
Ne-Na region. The latter peak, which has not been presented in previous
works, is important for assessing whether ONeMg novae can be significant
contributors to the Galactic $^{26}$Al. Note that the
abundance of $^{26}$Al in the latter case does not substantially depend
on the initial abundance of $^{24}$Mg (\S~\ref{sec:nzpl}). There are
large uncertainties in the reaction rates of $^{25}$Al($p$,
$\gamma$)$^{26}$Si (\cite{Wies86}; \cite{Coc95}; \cite{Ilia96}),
$^{26}$Si($p$, $\gamma$)$^{27}$P (\cite{Hern95}), $^{25}$Mg($p$,
$\gamma$)$^{26}$Al (\cite{Coc95}; \cite{Ilia96}), and $^{26}$Al($p$,
$\gamma$)$^{27}$Si (\cite{Cham93}; \cite{Coc95}). Our
trial calculations for a few models suggest that these uncertainties
change the abundance of $^{26}$Al by a factor of $\sim 2-3$.
The clear dependence of magnesium on the cooling timescale is useful to
constrain ($M_{\rm WD}$, $M_{\rm env}$) for observed ONeMg novae. The
estimated abundance of magnesium is $\sim 4 \times 10^{-3} - 2 \times
10^{-2}$ for V693 CrA, V1370 Aql, and QU Vul (Table~\ref{tab:obs}),
corresponding to $\tau \lesssim 100$~s or $\tau \gtrsim 10^6$~s (see
Figures~\ref{fig6} and \ref{fig12}). The abundance of
aluminum does not significantly vary in the $M_{\rm WD}$--$M_{\rm env}$
space, and is thus not useful to constrain ($M_{\rm WD}$, $M_{\rm
env}$). Nevertheless, the abundance estimates of aluminum are $\sim 3
\times 10^{-3} - 10^{-2}$ for V693 CrA, QU Vul, and V351 Pup
(Table~\ref{tab:obs}), which is in good agreement with our results.
\subsubsection{Silicon and phosphorus}
\label{sec:sip}
The abundance of silicon reaches $\sim 3 \times 10^{-2}$ for $T_{\rm
peak} \gtrsim 2 \times 10^8$~K (Figure~\ref{fig13}) via the
substantial nuclear flux from the Mg-Al region. The abundance is only
weakly correlated to the cooling timescale. On the other hand, the
ratios $^{28}{\rm Si}/^{29}{\rm Si}$ and $^{28}{\rm Si}/^{30}{\rm Si}$
are clearly correlated to the cooling timescale, because of various
competitions between proton capture and $\beta^+$-decay
(Figure~\ref{fig5}).
The abundance of phosphorus ($^{31}$P) reaches $\sim 10^{-3} - 10^{-2}$
for $T_{\rm peak} \gtrsim 3 \times 10^8$~K, due to the faster proton
capture on $^{30}$P than its $\beta^+$-decay
(Figure~\ref{fig13}). Since the Si-P cycle is not closed as seen in
Figure~\ref{fig5}, phosphorus is not significantly destroyed.
The abundance of silicon in the ejecta of V693 CrA, V1370 Aql, and V351
Pup is as small as $\sim 2 - 7 \times 10^{-3}$, corresponding to $T_{\rm
peak} \lesssim 2 \times 10^8$~K. In contrast, that in QU Vul ($\sim 3 -
4 \times 10^{-2}$) is in agreement with the models $T_{\rm peak} \gtrsim
2 \times 10^8$~K. The discovery of phosphorus has been reported in the
ejected shell of V1974 Cyg by near-infrared spectroscopy
(\cite{Wagn96}). It suggests that V1974 Cyg can be explained by the
model with a rather high peak temperature, although an accurate
abundance of phosphorus is required to constrain ($M_{\rm WD}$, $M_{\rm
env}$). It is also interesting to note that significantly enhanced
phosphorus has been detected on the white dwarf in a dwarf nova system
(\cite{Sion97}) and in the broad line system of a QSO (\cite{Shie96}),
which might originate from ONeMg novae.
\subsubsection{Sulfur and other heavy elements}
\label{sec:sca}
The abundance of sulfur reaches $\sim 10^{-2}$ for $T_{\rm peak} \gtrsim
3 \times 10^8$~K, through leakage from the Si-P region
(Figure~\ref{fig14}). The abundance does not exceed $10^{-2}$ in the
models $M_{\rm WD} \lesssim 1.15 M_\odot$ because of the shorter cooling
timescale (Figure~\ref{fig6}). This condition is, however, highly
dependent on the initial composition as will be discussed in
\S~\ref{sec:depz}. For $T_{\rm peak} \gtrsim 3 \times 10^8$~K, the
ratios $^{32}{\rm S}/^{33}{\rm S}$ and $^{32}{\rm S}/^{34}{\rm S}$
decrease with a rise in peak temperature, due to the increasing nuclear
paths (Figure~\ref{fig5}). For $T_{\rm peak} \lesssim 3 \times
10^8$~K, these ratios approach those determined by their reaction rates.
At least half of the observed ONeMg novae, V1370 Aql, QU Vul, and V838
Her, are abundant in sulfur in their ejecta (Table~\ref{tab:obs}). In
addition, the sulfur enrichment has been confirmed in the V1974 Cyg
ejecta from near-infrared spectroscopic observations (\cite{Wood92}; \cite{Wood95};
\cite{Wagn96}). These novae can be explained by the models with such
high peak temperatures as $T_{\rm peak} \gtrsim 3 \times 10^8$~K. The
estimated abundance of sulfur for V1370 Aql is much higher than in any
of the models in the $M_{\rm WD}$--$M_{\rm env}$ space
(Figure~\ref{fig14}). It should be noted, however, that the estimated
metallicity for V1370 Aql is twice that assumed in this section
(see \S~\ref{sec:depz}).
Heavier elements, from chlorine to calcium, are not substantially
enhanced for $T_{\rm peak} \lesssim 4 \times 10^8$~K
(Figure~\ref{fig14}). In addition, their enhancement is never seen
in the models $M_{\rm WD} \lesssim 1.15 M_\odot$ due to the shorter
cooling timescale. Nevertheless, the enrichment in chlorine has been
reported for the ejecta of V1974 Cyg by near-infrared spectroscopy
(\cite{Wagn96}). The accurate abundance of chlorine would severely
constrain ($M_{\rm WD}$, $M_{\rm env}$) for V1974 Cyg.
\subsection{Dependence on the initial composition}
\label{sec:depz}
So far we have discussed the nucleosynthesis results for only one set of
the initial composition $X_{\rm WD} = 0.4$ (case~B). However, the
metallicities of the ejecta for V1370 Aql, QU Vul by Saizar et
al. (1992), and V838 Her deviate significantly from 0.4
(Table~\ref{tab:obs}). In addition, the different authors present
different metallicity estimates for the same nova events. In particular,
the discrepancy is serious for QU Vul between Saizar et al. (1992)
($\simeq 0.10$) and Andre\"a et al. (1994) ($\simeq 0.40$). It is,
therefore, difficult to judge whether the dispersion of the
metallicities is real or due to observational errors. In the following,
we discuss how the initial composition influences the nucleosynthesis
results, comparing the low ($X_{\rm WD} = 0.1$; case~A) and high
($X_{\rm WD} = 0.8$; case~C) metallicity cases.
As discussed in \S~\ref{sec:model}, the density and temperature
structures of an envelope are determined uniquely by a set of ($M_{\rm
WD}$, $M_{\rm env}$) in our model, being independent of its time
evolution (but slightly dependent on the time variation in mean
molecular weight). As a result, case~C is at most 20~\% higher than
case~A in peak temperature for each ($M_{\rm WD}$, $M_{\rm env}$) as
seen in Figure~\ref{fig15}. The higher temperature in case~C is due
to the larger mean molecular weight. In contrast, a variation in initial
composition is crucial for the cooling timescale
(Figure~\ref{fig15}). For $T_{\rm peak} \gtrsim 2 \times 10^8$~K,
case~C is more than 10 times shorter than case~A in $\tau$. This is a
consequence of the higher nuclear energy in case~C
(Figure~\ref{fig16}) due to the abundant nuclear fuel. The ejection
velocity is also affected by the initial composition. As seen in
Figure~\ref{fig16}, case~C obtains significantly higher $v_{\rm
ej}$ than case~A in each model.
A prominent distinction between case~A (N0540A) and case~C (N0540C) can
be seen in Figure~\ref{fig17}, which shows the nuclear flows and
the final yields in the model ($M_{\rm WD}/M_\odot$, $M_{\rm
env}/M_\odot$) = (1.05, 10$^{-4.0}$). In N0540A, the nuclear flow
extends to sulfur due to the longer cooling timescale ($\simeq
23000$~s), while that in N0540C ($\tau \simeq 1800$~s) extends only to silicon. The
model N0540A consumes most of oxygen initially present, in contrast to
N0540C.
Figures~\ref{fig18}--\ref{fig20} show the abundances of
important elements and $\gamma$-ray emitters in the
$M_{\rm WD}$--$M_{\rm env}$ space for case~A and case~C.
These results are explained as follows:\\
1. The abundance of carbon is still clearly correlated to $\tau$ as
in case~B (\S~\ref{sec:cn}), reaching its maximum at $\tau \sim 1000$~s
for both cases (Figure~\ref{fig18}). The abundance is roughly
proportional to $X_{\rm WD}$ among the models with the same cooling
timescale.\\
2. Magnesium is another element clearly correlated to $\tau$ as in case~B
(\S~\ref{sec:mgal}). In case~A, the abundance is significantly smaller
than in case~C, not enhanced even for $\tau \lesssim 1000$~s. This is a
consequence of the longer $\tau$ in case~A, where the nuclear flow
extends to heavier elements than magnesium (Figure~\ref{fig17}).\\
3. Silicon is also an element showing a correlation to $\tau$ in case~B,
not significantly changed for $T_{\rm peak} \gtrsim 2 \times 10^8$~K
(\S~\ref{sec:sip}). This feature holds for case~C. However, the
abundance in case~A has a correlation to $T_{\rm peak}$
rather than $\tau$, reaching its maximum at
$T_{\rm peak} \sim 2.5 \times 10^8$~K (Figure~\ref{fig19}).
The depletion of silicon in case~A for high $T_{\rm peak}$ is due to
the long cooling timescale.\\
4. The trend of oxygen abundance significantly differs between case~A
and case~C (Figure~\ref{fig18}). The abundance
in case~B is correlated to both $T_{\rm peak}$ and $\tau$, being more
abundant in the lower $M_{\rm WD}$ models (\S~\ref{sec:of}).
In case~C, however, the abundance is not significantly changed in the
($M_{\rm WD}$, $M_{\rm env}$) space, being $\sim 0.3$.
On the other hand, that in case~A is clearly correlated to
the peak temperature, significantly depleted for $T_{\rm peak}
\gtrsim 2.5 \times 10^8$~K (Figure~\ref{fig18}).\\
5. The abundance of sulfur shows a correlation to $T_{\rm peak}$ in
all cases. In case~C, however, the abundance is $\lesssim
10^{-3}$ for the models $M_{\rm WD} \lesssim 1.15 M_\odot$ because of
the shorter $\tau$. On the other hand, that in case~A
reaches $\sim 3\times 10^{-2}$ at $\gtrsim 3 \times 10^8$~K, since
$\tau$ is longer and thus the nuclear flow extends to heavier elements.\\
6. The radioactive species $^7$Be, $^{22}$Na, and $^{26}$Al are not
significantly enhanced in case~A because of its longer cooling timescale
(Figure~\ref{fig20}). On the other hand, these abundances in
case~C show similar trends to case~B in the $M_{\rm WD}$--$M_{\rm env}$
space (Figure~\ref{fig8}).
The estimated metallicity for the ejecta of V1370 Aql is extremely high
($Z \sim 0.85$; Table~\ref{tab:obs}), which is close to case~C. However,
the abundance of oxygen is very low ($\sim 4 - 5 \times
10^{-2}$), being inconsistent with our results ($\gtrsim 0.1$). In
addition, sulfur in the ejecta is extremely abundant ($\sim 0.1$ by
mass), which is also in disagreement with our results ($\lesssim
10^{-2}$). These features, i.e., the low oxygen and high sulfur
abundances, could be explained by lower $X_{\rm WD}$ models rather than
higher ones (Figures~\ref{fig10}, \ref{fig14}, \ref{fig18},
and \ref{fig19}). Thus, the extremely high metallicity of this
nova ejecta may not be real but rather due to difficulties in the
observational estimates.
\begin{table*}[t]
\caption{Ejected Masses of recent ONeMg Novae}
\label{tab:menv}
\smallskip
\begin{tabular}{lrl}
\hline
\hline
& $M_{\rm ej}/M_\odot$ & Observations \\
\hline
QU Vul\tablenotemark{1} & $8\times 10^{-4}$ & Radio emission \\
QU Vul\tablenotemark{2} & $\ge 9\times 10^{-4}$ & Infrared emission \\
QU Vul\tablenotemark{3} & $0.2-1.5\times 10^{-4}$ & Multiwavelength study \\
V351 Pup\tablenotemark{4} & $1\times 10^{-7}$ & Multiwavelength study \\
V838 Her\tablenotemark{5} & $6.4-9\times 10^{-5}$ & Infrared emission \\
V838 Her\tablenotemark{6} & $1.8\times 10^{-4}$ & Optical and UV emission \\
V1974 Cyg\tablenotemark{7} & $\ge 7\times 10^{-5}$ & Radio emission \\
V1974 Cyg\tablenotemark{8} & $1-4\times 10^{-4}\times Y^{-1/2\;\dag}$
& UV emission \\
V1974 Cyg\tablenotemark{9} & $2-5\times 10^{-4}$ & Infrared emission \\
\hline
\end{tabular}
\smallskip
{\footnotesize
References: $^1$Taylor et al. 1987, $^2$Greenhouse et al. 1988,
$^3$Saizar et al. 1992, $^4$Saizar et al. 1996,
$^5$Woodward et al. 1992, $^6$Vanlandingham et al. 1996,
$^7$Pavelin et al. 1993, $^8$Shore et al. 1993, $^9$Woodward et al. 1997.\\
$^{\dag}Y$ is the enhancement factor for the helium abundance.
}
\end{table*}
For the QU Vul ejecta, Saizar et al. (1992) gave a much lower
metallicity estimate ($Z \simeq 0.10$), corresponding to case~A, than
Andre\"a et al. (1994). The low abundance estimates of carbon, oxygen,
and magnesium by Saizar et al. (1992) are in good agreement with our
results for $T_{\rm peak} \lesssim 2 \times 10^8$~K
(Figures~\ref{fig15}, \ref{fig18}, and
\ref{fig19}). However, the abundance of silicon ($\sim 4 \times
10^{-2}$ by mass) suggests that the nova has obtained $T_{\rm peak} \sim
2 - 3 \times 10^8$~K, which is inconsistent with the above result. Thus,
there is no ($M_{\rm WD}$, $M_{\rm env}$) model which explains the
abundance estimates by Saizar et al. (1992) within reasonable
observational errors.
The V838 Her ejecta also shows a rather low metallicity estimate ($Z
\simeq 0.09$), which again corresponds to case~A. The abundance features
of the ejected shell, i.e., the low oxygen and high sulfur, are well
reproduced in our results for $T_{\rm peak} \sim 2 - 3 \times 10^8$~K
(Figures~\ref{fig18} and \ref{fig19}). Hence, the low
metallicity for this case implies the presence of a real dispersion in
metallicity among the observed nova ejecta.
\section{Comparison with observations}
\label{sec:obs}
In this section, we discuss which ($M_{\rm WD}$, $M_{\rm env}$) models
best match the recent ONeMg nova observations from the nucleosynthetic
point of view, using the results of case~B ($X_{\rm WD} = 0.4$). For
V838 Her, however, those of case~A ($X_{\rm WD} = 0.1$) are used
(\S~\ref{sec:depz}). The abundances for QU Vul by Saizar et al. (1992)
and V1370 Aql are not discussed in this section, since they are not
reproduced in our models (\S~\ref{sec:depz}).
Figure~\ref{fig21} shows the models which are in agreement with the
abundance estimates for recent ONeMg novae, within a factor of three for
V693 CrA (\cite{Vanl97}; triangles), V351 Pup (\cite{Saiz96};
asterisks), and V1974 Cyg (\cite{Aust96}; stars), and of five for QU Vul
(\cite{Andr94}; circles) and V838 Her (\cite{Vanl97}; squares). The
thick symbol for each nova is the best model, whose ratio to its
observation is shown in Figure~\ref{fig22}. Interestingly, at least
four events (V693 CrA, QU Vul, V838 Her, and V1974 Cyg) are well
explained by the models $\simeq 1.1 M_\odot$, which is near the lower
limit to ONeMg cores (\cite{Nomo84}). This is in contrast to the mass
range of $1.25 - 1.35 M_\odot$ used by PSTWS95 and STWS98, which is near
the upper bound to ONeMg cores. Table~\ref{tab:menv} shows the estimated
ejecta masses of QU Vul (\cite{Tayl87}; \cite{Gree88}; \cite{Saiz92}),
V351 Pup (\cite{Saiz96}), V838 Her (\cite{Wood92}; \cite{Vanl96}), and
V1974 Cyg (\cite{Pave93}; \cite{Shor93}; \cite{Wood97}) from
observations. These significantly high ejecta masses compared with
theoretical estimates are reasonably explained by our nucleosynthesis
results if we assume that almost all the envelope is eventually blown
off. In addition, for the models with $M_{\rm env} \gtrsim 10^{-4}
M_\odot$, the expansion velocities exceed $v_{\rm esc}$ and obtain
$v_{\rm ej} \gtrsim 1000$~km s$^{-1}$ (Figures~\ref{fig6} and
\ref{fig16}), which are in good agreement with observations. Note
that the abundances of carbon and nitrogen by our results are also in
good agreement with those by observations, regardless of their
uncertainties (\S~\ref{sec:comp}). This is a consequence of the fact that these
novae are well explained by the models with $\tau \lesssim 1000$~s where
the uncertainties (caused by the depletion of $^{15}$N) may be small
(\S~\ref{sec:cn}).
\subsection{V693 CrA}
\label{sec:v693}
The high oxygen abundance ($\sim 0.1 - 0.2$ by mass) in the V693 CrA
ejecta (\cite{Will85}; \cite{Andr94}; \cite{Vanl97}) implies that it was
an event with $T_{\rm peak} \lesssim 2 \times 10^8$~K or with $M_{\rm
WD} \lesssim 1.15 M_\odot$ (\S~\ref{sec:of}). The low magnesium and high
silicon abundances by Vanlandingham, Starrfield, \& Shore (1997) suggest
that the cooling timescale was $\lesssim 1000$~s (\S~\ref{sec:mgal} and
\ref{sec:sip}). On the other hand, Williams et al. (1985) and Andre\"a,
Drechsel, \& Starrfield (1994) present somewhat higher magnesium and
lower silicon abundances. We compare our results with the abundance
estimates by Vanlandingham, Starrfield, \& Shore (1997), since others
used the overexposed spectrum as pointed out by Andre\"a, Drechsel, \&
Starrfield (1994). As a result, the model ($M_{\rm WD}/M_\odot$, $M_{\rm
env}/M_\odot$) = (1.05, $10^{-3}$) (case~B) is in good agreement with
the observation within a factor of 3 (Figures~\ref{fig21} and
\ref{fig22}).
\subsection{QU Vul}
\label{sec:qu}
The high abundance of sulfur implies that the nova reached a temperature
as high as $T_{\rm peak} \gtrsim 3 \times 10^8$~K
(Figures~\ref{fig6} and \ref{fig14}). Furthermore, the high
abundance of oxygen despite such a high temperature suggests that the
white dwarf mass was $\lesssim 1.15 M_\odot$ (\S~\ref{sec:of}). Our
results are in agreement with the observational estimates within a
factor of 5 for the models ($M_{\rm WD}/M_\odot$, $M_{\rm env}/M_\odot$)
= ($1.05 - 1.1$, $10^{-3.5} - 10^{-3}$) (case~B). These high envelope
masses are in good agreement with the observational estimates of the
nova ejecta (Table~\ref{tab:menv}). Note that the high abundances of
both oxygen and sulfur were not explained by previous hydrodynamic
studies, which used much smaller envelope masses.
\subsection{V351 Pup}
\label{sec:v351}
The ejected shell of V351 Pup shows the high oxygen and low silicon
abundances (\cite{Saiz96}). This feature is well explained with the low
temperature models of $T_{\rm peak} \lesssim 2 \times 10^8$~K
(Figures~\ref{fig6}, \ref{fig10}, and \ref{fig13}). Our
results are in good agreement with the observational estimates within a
factor of 3 for the models ($M_{\rm WD}/M_\odot$, $M_{\rm env}/M_\odot$)
= ($1.05 - 1.1$, $10^{-5.5} - 10^{-5}$), ($1.15 - 1.2$, $10^{-6} -
10^{-5.5}$), and the best for (1.25, $10^{-6}$) (case~B). In such low
temperature models, magnesium must be abundant (\S~\ref{sec:mgal}),
though it is not presented in Saizar et al. (1996). The above low
envelope masses may be due to mass accretion at a high rate from a giant
companion, which is also suggested by the optical spectral analysis
(\cite{Saiz96}). The estimated ejecta mass, $2 \times 10^{-7} M_\odot$
(Table~\ref{tab:menv}), implies that this nova occurred in such a
massive white dwarf as $M_{\rm WD} \gtrsim 1.25 M_\odot$.
\subsection{V838 Her}
\label{sec:v838}
The low oxygen and high sulfur abundances in the V838 Her ejecta are the
prominent feature in the low metallicity models (case~A) with $T_{\rm
peak} \sim 2.5 - 3 \times 10^8$~K (\S~\ref{sec:depz}). In addition, the
ratios ${\rm C}/{\rm N} \sim 1$ and ${\rm C}/{\rm O} \gtrsim 1$ suggest
that the cooling timescale was $\sim 1000$~s (\S~\ref{sec:cn} and
\ref{sec:of}). Thus, the nova may have occurred with the low $M_{\rm
WD}$ and the high $M_{\rm env}$ (Figure~\ref{fig15}). Our results
are in agreement with the observational estimates within a factor of 5
for the model ($M_{\rm WD}/M_\odot$, $M_{\rm env}/M_\odot$) = (1.05,
$10^{-4} - 10^{-3.5}$).
\subsection{V1974 Cyg}
\label{sec:v1974}
Unfortunately, the abundances of elements heavier than neon are not presented
in Austin et al. (1996), due to the lack of such lines. The high oxygen
abundance suggests that the peak temperature was $\lesssim 2 \times
10^8$~K or the white dwarf mass was $\lesssim 1.15 M_\odot$
(\S~\ref{sec:of}). In addition, the ratio ${\rm C}/{\rm N} \sim 1$
implies that the cooling timescale was $\sim 1000$~s
(\S~\ref{sec:cn}). Our results are in good agreement with the
observational estimates within a factor of 3 for the models ($M_{\rm
WD}/M_\odot$, $M_{\rm env}/M_\odot$) = (1.05, $10^{-3.5}$), (1.1,
$10^{-4}$), and (1.2, $10^{-5}$), and the best for (1.1, $10^{-4.5}$)
(case~B; N0535B, N1040B, N2050B, and N1045B). They are in reasonable
agreement with the estimated mass of the ejecta $\gtrsim 5 \times
10^{-5} M_\odot$ (Table~\ref{tab:menv}). Their white dwarf masses are
also in agreement with the estimates from observations $\sim 0.75 - 1.1
M_\odot$ (\cite{Pare95}; \cite{Rett97}) but smaller than $\sim 1.25
M_\odot$ (\cite{Krau96}). The observation shows a factor of 2 lower
hydrogen abundance than our result (Figure~\ref{fig22}). This might
be due to the subsequent steady hydrogen burning on the white dwarf as
pointed out by Krauter et al. (1996). Hayward et al. (1996) have derived
the neon and magnesium abundances relative to solar values from a
mid-infrared observation. If their ratio ${\rm Ne}/{\rm Mg} \sim 30$ is
adopted, the abundance of magnesium would be $\sim 3 \times 10^{-3}$. It
favors a relatively high envelope mass model
(Figure~\ref{fig12}). As a result, N1040B would be the best in this
case. A recent near infrared measurement has shown the presence of the
lines of phosphorus and chlorine together with sulfur in the V1974 Cyg
ejecta (\cite{Wagn96}). This suggests that V1974 Cyg experienced $T_{\rm
peak} \gtrsim 3 \times 10^8$~K. In this case, the higher $M_{\rm env}$
models are also favorable. In addition, the ejection velocity in N1040B
is $\simeq 1800$~km s$^{-1}$, in good agreement with observations
($\simeq 2300$~km s$^{-1}$; \cite{Gehr98}), while that in N1045B is
$\simeq 190$~km s$^{-1}$. Obviously, further analysis of heavy elements
is needed to constrain the parameters ($M_{\rm WD}$, $M_{\rm env}$) for
V1974 Cyg.
\section{Production of the radioactive isotopes}
\label{sec:gamma}
\begin{table*}[t]
\caption{$^{22}$Na Production in ONeMg Novae}
\label{tab:na22}
\smallskip
\begin{tabular}{cccccc}
\hline
\hline
& year & $d$ (kpc) & $XM_{\rm env}/M_\odot$
& $F_0$\tablenotemark{a} (cm$^{-2}$ s$^{-1}$)
& $F_{\rm up}$\tablenotemark{b} (cm$^{-2}$ s$^{-1}$) \\
\hline
V693 CrA & 1981 & 11.8 & $2.0\times 10^{-5}$ & $5.5\times 10^{-4}$ & \\
QU Vul & 1984 & 2.8 & $8.9\times 10^{-7}$ & $4.4\times 10^{-4}$ & \\
V351 Pup & 1991 & 3.5 & $4.0\times 10^{-11}$ & $1.3\times 10^{-8}$ & $5.5\times 10^{-5}$ \\
V838 Her & 1991 & 3.4 & $1.9\times 10^{-8}$ & $6.3\times 10^{-6}$ & $3.3\times 10^{-5}$ \\
V1974 Cyg & 1992 & 1.8 & $2.5\times 10^{-8}$ & $3.0\times 10^{-5}$ & $2.3\times 10^{-5}$ \\
\hline
\end{tabular}
\smallskip
{\footnotesize
$^a$Initial flux from this work (best-fit sequence).\\
$^b$Upper limit by Iyudin et al. (1995).
}
\end{table*}
In this section, we discuss the possibilities of detecting the
$\gamma$-ray emitters $^7$Be and $^{22}$Na, and the contribution of
ONeMg novae to the Galactic $^{26}$Al, based on our nucleosynthesis results.
Figure~\ref{fig23} shows the total masses of $^7$Be, $^{22}$Na,
and $^{26}$Al produced per event for $X_{\rm WD} = 0.4$ (case~B). As
seen in this figure, the models $M_{\rm WD} \simeq 1.1 M_\odot$ with
$M_{\rm env} \gtrsim 10^{-4} M_\odot$, which are in good agreement with
most of the observations (\S~\ref{sec:obs}), produce significant amounts of
these isotopes. In the rest of this section, the entire envelope is assumed
to be eventually blown off.
\subsection{Gamma-ray emission from $^7$Be electron captures}
\label{sec:be7}
Classical novae might be a possible site for the production of $^7$Li
(the electron-capture product of $^7$Be) in the solar neighborhood
(\cite{Star78}; \cite{Dant91}; \cite{Wana98}). Recently, some works
claim that they cannot be the major contributors (\cite{Matt95};
JH98). Nevertheless, the $\gamma$-rays (at 478~keV) from the $^7$Be
electron capture would be detectable by CGRO or INTEGRAL (\cite{Hern96};
\cite{Jose98}; \cite{Gome98}).
The $\gamma$-ray line flux from the $^7$Be electron capture is estimated
as
\begin{eqnarray}
F\left(^7{\rm Be}\right)
& \sim & 4\times 10^{-5} {\rm cm}^{-2} {\rm s}^{-1} \nonumber \\
& \times &
\left(\frac{X\left(^7{\rm Be}\right)M_{\rm env}}
{5\times 10^{-9}M_\odot}\right)
\left(\frac d{\rm 3~kpc}\right)^{-2} \nonumber \\
& \times & e^{-t/\tau \left(^7{\rm Be}\right)} \:, \nonumber
\end{eqnarray}
where $d$ is the distance of the nova system from the sun. For the CGRO
sensitivity (a few $10^{-5}$~cm$^{-2}$~s$^{-1}$) and a typical distance
($d \sim 3$~kpc), a $^7$Be mass of $\sim 5 \times 10^{-9} M_\odot$ per
event is required for detection. The mass of $^7$Be per event in our
model is 5--10 times smaller than required
(Figure~\ref{fig23}). However, an ONeMg nova will be a promising
target of $^7$Be $\gamma$-rays for INTEGRAL in the near future. Note
that CO novae may produce about 10 times more $^7$Be than ONeMg novae
(JH98; \cite{Wana98}).
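As a concrete illustration, the above scaling is easy to evaluate
numerically. The following Python snippet is a minimal sketch of ours
(the function name and argument conventions are not from any published
nova code), assuming the laboratory $^7$Be half-life of $\simeq 53$~days:
\begin{verbatim}
import math

TAU_BE7_DAYS = 53.3 / math.log(2)   # mean lifetime of 7Be, ~77 days

def flux_be7(x_be7_menv, d_kpc, t_days):
    """478 keV line flux [cm^-2 s^-1] from 7Be electron capture.

    x_be7_menv : ejected 7Be mass, X(7Be)*M_env, in units of M_sun
    d_kpc      : distance to the nova in kpc
    t_days     : time after outburst in days
    """
    return (4e-5 * (x_be7_menv / 5e-9) * (3.0 / d_kpc)**2
            * math.exp(-t_days / TAU_BE7_DAYS))

# 5e-9 M_sun of 7Be at d = 3 kpc, observed 60 days after outburst:
print(flux_be7(5e-9, 3.0, 60.0))    # ~1.8e-5 cm^-2 s^-1
\end{verbatim}
A yield 5--10 times smaller, as obtained in our models, pushes the flux
below the CGRO threshold, in line with the discussion above.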
\subsection{Gamma-ray emission from $^{22}$Na decays}
\label{sec:na22}
There have been increasing expectations that an ONeMg nova might be the
first stellar object for the detection of the $\gamma$-ray emitter
$^{22}$Na (\cite{Weis90}; \cite{Star93}; \cite{Coc95}; PSTWS95; Wanajo
1997a, b; STWS98; JH98; \cite{Gome98}). Nevertheless, no positive
detection has been reported by COMPTEL on board CGRO for the recent
ONeMg novae, V351 Pup, V838 Her, and V1974 Cyg (\cite{Iyud95}).
As seen in Figure~\ref{fig23}, the total mass of $^{22}$Na per
event (case~B) is significantly high in the models $M_{\rm env} \gtrsim
10^{-4}M_\odot$. The $\gamma$-ray line flux from the $\beta^+$-decay of
$^{22}$Na is estimated as
\begin{eqnarray}
F\left(^{22}{\rm Na}\right)
& \sim & 4\times 10^{-5} {\rm cm}^{-2} {\rm s}^{-1} \nonumber \\
& \times &
\left(\frac{X\left(^{22}{\rm Na}\right)M_{\rm env}}
{1\times 10^{-7}M_\odot}\right)
\left(\frac d{\rm 3~kpc}\right)^{-2} \nonumber \\
& \times & e^{-t/\tau \left(^{22}{\rm Na}\right)} \:. \nonumber
\end{eqnarray}
For the CGRO sensitivity and a typical distance ($\sim 3$~kpc), a
$^{22}$Na mass of $\sim 10^{-7} M_\odot$ per event is required. This
corresponds to an envelope mass of $\sim 10^{-4} M_\odot$
(Figure~\ref{fig23}). Table~\ref{tab:na22} shows the distances,
the masses of $^{22}$Na per event from our results (for the best-fit
models, see \S~\ref{sec:obs}), the expected initial $\gamma$-ray line
fluxes at 1.275~MeV, and the upper limits to the COMPTEL observations
(\cite{Iyud95}) for V693 CrA, QU Vul, V351 Pup, V838 Her, and V1974 Cyg.
V1370 Aql is omitted here, since it is not explained by any ($M_{\rm WD}$,
$M_{\rm env}$) models, as discussed in \S~\ref{sec:obs}. V693 CrA and QU Vul may
have emitted the $\gamma$-rays as high as $\sim 5 \times
10^{-4}$~cm$^{-2}$s$^{-1}$. However, their fluxes have decreased to,
respectively, $6 \times 10^{-6}$ and $1 \times
10^{-5}$~cm$^{-2}$~s$^{-1}$ at present, and will decrease to $3 \times
10^{-6}$ and $5 \times 10^{-6}$~cm$^{-2}$~s$^{-1}$ at the launch of
INTEGRAL ($\sim 2001?$). In contrast to the above two novae, V351 Pup
and V838 Her may have yielded much lower $\gamma$-ray fluxes, which are
consistent with the upper limits by COMPTEL. The envelope mass of V351
Pup might be $\lesssim 10^{-5}M_\odot$ (\S~\ref{sec:v351}) so that
little $^{22}$Na may have been produced. Although V838 Her may have
obtained a massive envelope such as $\sim 10^{-4}M_\odot$
(\S~\ref{sec:v838}), the cooling timescale was so long owing to the low
metallicity ($\sim 10^4$~s) that little $^{22}$Na survived. The
$\gamma$-ray flux of the $^{22}$Na decay from V1974 Cyg may have been
near the sensitivity limit to COMPTEL, with the abundance in the best
model (\S~\ref{sec:v1974}) and the distance of $\sim 1.8$~kpc
(\cite{Choc97}). Thus, if the ejected mass was indeed as high as a few
$10^{-4} M_\odot$, our model would have produced observable $^{22}$Na
(or the estimated distance is too small).
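These flux estimates follow directly from the scaling above. As an
illustrative sketch (ours, not from a published code; the example epoch
is arbitrary), the snippet below evaluates the 1.275~MeV flux and its
decay, using the $^{22}$Na half-life of $\simeq 2.6$~yr:
\begin{verbatim}
import math

TAU_NA22_YR = 2.60 / math.log(2)    # mean lifetime of 22Na, ~3.75 yr

def flux_na22(x_na22_menv, d_kpc, t_yr=0.0):
    """1.275 MeV line flux [cm^-2 s^-1] from 22Na beta+ decay."""
    return (4e-5 * (x_na22_menv / 1e-7) * (3.0 / d_kpc)**2
            * math.exp(-t_yr / TAU_NA22_YR))

# V1974 Cyg (1992): X(22Na)*M_env ~ 2.5e-8 M_sun at d ~ 1.8 kpc
print(flux_na22(2.5e-8, 1.8))       # initial flux, ~3e-5 cm^-2 s^-1
print(flux_na22(2.5e-8, 1.8, 9.0))  # ~9 yr later, ~2.5e-6 cm^-2 s^-1
\end{verbatim}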
It seems that at least four ONeMg novae (V693 CrA, QU Vul, V838 Her, and
V1974 Cyg) in the past twenty years have produced sufficient $^{22}$Na
for the high sensitivity of INTEGRAL ($\sim 4 - 5 \times
10^{-6}$~cm$^{-2}$~s$^{-1}$). The next ONeMg nova in the first decade of
the 21st century will be a promising candidate for detecting the
$\gamma$-ray emitter $^{22}$Na.
\subsection{Galactic $^{26}$Al production}
\label{sec:al26}
Since the discovery by HEAO 3 (Mahoney et al. 1984), many studies have
been carried out to explain the presence of $\sim 1 - 3 M_\odot$ of
$^{26}$Al in the Galaxy. In particular, ONeMg novae have been considered
to be a promising stellar site for the $^{26}$Al production
(\cite{Weis90}; \cite{Nofa91}; \cite{Star93}; \cite{Coc95}; PSTWS95;
\cite{Kolb97}; \cite{Jose97}; STWS98; JH98), as well as AGB stars
(\cite{Fore91}), Wolf-Rayet stars (\cite{Pran86}; \cite{Meyn97}), and
Type II supernovae (\cite{Walt89}; \cite{Pran93};
\cite{Timm95}). However, the detailed observations of $\gamma$-ray lines
at 1.8~MeV by COMPTEL (\cite{Dieh94}; \cite{Dieh95}) have shown that the
Galactic $^{26}$Al originates from the youngest stellar population
associated with the spiral arms and the local groups (\cite{Pran96a};
\cite{Pran96b}). This may imply that the major sources of the Galactic
$^{26}$Al are Type II supernovae or Wolf-Rayet stars.
The mass of $^{26}$Al per event is up to $\sim 3 \times 10^{-7}M_\odot$
in the models $M_{\rm env} \gtrsim 10^{-4}M_\odot$
(Figure~\ref{fig23}). The upper limit to the Galactic $^{26}$Al
from ONeMg novae is thus estimated as
{\small
\begin{eqnarray}
M\left(^{26}{\rm Al}\right) & \sim & 3 M_\odot \nonumber \\
& \times &
\left(\frac{R_{\rm nova}}{40 {\rm yr}^{-1}}\right)
\left(\frac{f_{\rm ONeMg}}{0.25}\right)
\left(\frac{X(^{26}{\rm Al})M_{\rm env}}{3\times 10^{-7} M_\odot}\right) \:, \nonumber
\end{eqnarray}
}
where $R_{\rm nova}$ is the nova rate in the Galaxy and $f_{\rm ONeMg}$
is the fraction of ONeMg novae. This is in good agreement with the
estimate from the CGRO results. If ONeMg novae are not the major
contributors to the Galactic $^{26}$Al, its typical mass per event must
be somewhat smaller than the above value. There are some uncertainties
in the Galactic nova rates (\cite{Yung97}; \cite{Shaf97}; \cite{Hata97})
and the fraction of ONeMg novae (\cite{Ritt91}; \cite{Livi94}). However,
these uncertainties may be much smaller (a factor of $\sim 2$) than
those in the $^{26}$Al yields (about 2 orders of magnitude as can be
seen in Figure~\ref{fig23}). The INTEGRAL survey on the diffuse
component of the Galactic $^{26}$Al, together with a search of $^{22}$Na
from an individual ONeMg nova, will impose a severe constraint on the
current nova models.
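The budget above is likewise easy to re-evaluate for other parameter
choices. The short sketch below simply encodes the scaling relation
(the reference values are those quoted in the text; the function itself
is our own illustration):
\begin{verbatim}
def galactic_al26(r_nova=40.0, f_onemg=0.25, x_menv=3e-7):
    """Steady-state Galactic 26Al mass [M_sun] from ONeMg novae.

    r_nova  : Galactic nova rate [yr^-1]
    f_onemg : fraction of ONeMg novae
    x_menv  : X(26Al)*M_env per event [M_sun]
    """
    return 3.0 * (r_nova / 40.0) * (f_onemg / 0.25) * (x_menv / 3e-7)

print(galactic_al26())                          # 3.0 M_sun (upper limit)
print(galactic_al26(r_nova=20.0, x_menv=1e-7))  # 0.5 M_sun
\end{verbatim}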
\section{Conclusions}
\label{sec:concl}
In this paper we have examined nucleosynthesis in ONeMg novae over wide
ranges of the three parameters, i.e., the white dwarf mass ($M_{\rm
WD} = 1.05 - 1.35 M_\odot$), the envelope mass ($M_{\rm env} = 10^{-6} -
10^{-3} M_\odot$), and the initial metallicity ($X_{\rm WD} = 0.1 -
0.8$). We used a quasi-analytic nova model with a one-zone envelope,
coupled with an updated nuclear reaction network code. Our
nucleosynthesis results are in good agreement with those of previous
hydrodynamic calculations except for several fragile isotopes.
We have found that the explosion is more violent in a lower $M_{\rm WD}$
model among those with the same peak temperature, due to its smaller
gravitational potential. There exists a critical cooling timescale
($\sim 1000$~s), at which the energy generation by the $\beta^+$-decay
of $^{14}$O and $^{15}$O plays a crucial role in the envelope expansion.
For the models with $\tau \lesssim 1000$~s, the nucleosynthesis results
significantly deviate from those expected in steady nuclear flows (e.g.,
the CNO and Ne-Na cycle). These models also obtain high ejection
velocities ($\gtrsim 1000$~km~s$^{-1}$), which are in good agreement
with recent observations.
There are several characteristic trends for the abundances in
the $M_{\rm WD}$--$M_{\rm env}$ space as follows (case~B):\\
1. The abundances of oxygen, neon, phosphorus, and sulfur (and $^7$Be)
are clearly correlated to the peak temperatures, although those of
oxygen and sulfur are also dependent on the cooling timescales.
Oxygen is always abundant in the models
$M_{\rm WD} \lesssim 1.15 M_\odot$. Elements heavier than sulfur
show no significant enrichment in the models with
$T_{\rm peak} \lesssim 4 \times 10^8$~K.\\
2. The abundances of carbon, fluorine, sodium, and magnesium
(and $^{22}$Na, $^{26}$Al) are clearly correlated to the cooling
timescales. The abundance of $^{22}$Na is significantly high in the
models with $\tau \lesssim 100$~s. On the other hand, that of $^{26}$Al
shows double peaks in the $M_{\rm WD}$--$M_{\rm env}$ space.\\
3. The abundances of nitrogen, aluminum, and silicon are not
significantly changed in the $M_{\rm WD}$--$M_{\rm env}$ space,
although they are weakly dependent on the cooling timescales.
The initial metallicity $X_{\rm WD}$, as well as $M_{\rm WD}$ and
$M_{\rm env}$, is a crucial parameter for the nucleosynthesis
results. For smaller $X_{\rm WD}$, the explosion is less violent and
thus the cooling timescale is longer, because of the smaller amount of
nuclear fuel. As a result, the models with low $X_{\rm WD}$ (case~A)
produce more sulfur but less oxygen than those with high $X_{\rm WD}$
(cases~B and C). The former case is unfavorable for the production of $^7$Be,
$^{22}$Na, and $^{26}$Al.
Comparison of our nucleosynthesis results with observational abundance
estimates enables us to constrain the model parameters ($M_{\rm WD}$,
$M_{\rm env}$) for the observed ONeMg novae. We have found that the
white dwarf masses of at least four of the observed six ONeMg novae are
as low as $\simeq 1.1 M_\odot$. This is significantly smaller than the
prediction of $M_{\rm WD} \sim 1.25 - 1.35 M_\odot$ obtained by previous
hydrodynamic studies. On the other hand, our results suggest that their
envelope masses were $\gtrsim 10^{-4} M_\odot$, which is consistent with
the observational estimates of their ejected masses. In addition, the
observed fast ejection velocities for these novae ($\gtrsim
1000$~km~s$^{-1}$) are also obtained in those models. There remains a
discrepancy between these high ejected masses and those estimated by
previous hydrodynamic studies. However, a low mass white dwarf ($M_{\rm
WD} \simeq 1.1 M_\odot$) may be able to accumulate such a massive
envelope with a small mass accretion rate and a low surface temperature
(Starrfield et al. 1998).
Our results also show that the models $M_{\rm WD} \simeq 1.1 M_\odot$
with $M_{\rm env} \gtrsim 10^{-4} M_\odot$, which are the possible
explanations for most of the observed ONeMg novae,
produce significant amounts of
$^7$Be, $^{22}$Na, and $^{26}$Al:\\
1. The $\gamma$-ray line flux from the $^7$Be electron
capture is too weak to be detected with the CGRO sensitivity for the
typical distance from the sun. However, a nearby ONeMg nova could emit
the $\gamma$-rays detectable by INTEGRAL in the near future.\\
2. The mass of $^{22}$Na per event is significantly high in the models
$M_{\rm env} \gtrsim 10^{-4} M_\odot$. V1974 Cyg may have produced an
amount of $^{22}$Na near the upper limit of the COMPTEL
sensitivity. Furthermore, we suggest that at least four ONeMg
novae in the past twenty years have produced enough $^{22}$Na to be
detectable at the INTEGRAL sensitivity. The next ONeMg nova will be a
promising target for
the detection of the $\gamma$-ray emitter $^{22}$Na.\\
3. The mass of $^{26}$Al per event is also significantly high in the
models $M_{\rm env}\gtrsim 10^{-4} M_\odot$. The mass of
the Galactic $^{26}$Al which originates from ONeMg novae is estimated
to be $\lesssim 3 M_\odot$, being consistent with the COMPTEL result.
However, ONeMg novae may not be the major contributors according to the
$\gamma$-ray survey at 1.8~MeV by COMPTEL. The $\gamma$-ray line survey
by INTEGRAL will significantly constrain the ranges of ($M_{\rm WD}$,
$M_{\rm env}$) for ONeMg novae.
We should emphasize that hydrodynamic investigations including
multi-dimensional calculations, especially with a massive envelope, are
necessary to confirm our conclusions based on the one-zone approximation.
There are also other observables besides the abundances which cannot be
dealt with in this study (e.g., the surface luminosity and the ejecta
masses). Nevertheless, our results afford some new perspectives on
future nova modeling. The future INTEGRAL survey for the $\gamma$-ray
emitters, together with abundance analyses by ultraviolet, optical, and
near infrared spectroscopies, will also impose a severe constraint on
the current nova models.
\acknowledgments
We would like to acknowledge useful discussions with T. Kajino,
S. Kubono, I. Hachisu, and J. W. Truran. We would like to express
sincere appreciation to F.-K. Thielemann for providing the data of
nuclear reaction rates. This work has been supported in part by the
Grant-in-Aid for Scientific Research (05242102, 06233101) and COE
Research (07CE2002) of the Ministry of Education, Science, and Culture
in Japan, and by the Japan Society for the Promotion of Science.
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/fig1.pdf}
\caption{The main idea of \textit{RandomRooms}. To generate two different layouts, we randomly place the same set of objects sampled from synthetic datasets in rectangular rooms. With the proposed object-level contrastive learning, models pre-trained on these pseudo scenes can serve as a better initialization for the downstream 3D object detection task.}
\label{fig:intro}
\vspace{-10pt}
\end{figure}
Recent years have witnessed great progress in 3D deep learning, especially on 3D point clouds. With the emergence of powerful models, we are now able to make significant breakthroughs on many point cloud tasks, ranging from object-level understanding~\cite{klokov2017escape,dgcnn,so-net-li2018so,li2018pointcnn} to scene-level understanding, such as 3D object detection~\cite{shi2019pointrcnn,yang20203dssd,li2019stereo,shi2019pv} and 3D semantic segmentation~\cite{kundu2020virtual,zhang12356deep,hu2020jsenet,choy20194d,hou20193d}. These scene-level tasks are considered to be more complicated and more important, as they often require higher-level understanding compared to object-level tasks like shape classification. One of the most important tasks for 3D point cloud scene understanding is 3D object detection, which aims at localizing the objects of interest in the point cloud of a scene and telling the category they belong to. However, one major bottleneck that hinders researchers from moving forward is the lack of large-scale real datasets, given the difficulty of collecting and labeling high-quality 3D scene data. Compared to the 2D object detection task, where we have large annotated real datasets such as COCO~\cite{lin2014microsoft}, the real datasets used for the 3D object detection task are much smaller in scale, and generating a synthesized scene dataset also involves a heavy workload in modeling and rendering.
A preferred solution is to utilize synthetic CAD object models to help the learning of a 3D object detector, since it is much easier to access such data. Since we have no bounding-box annotations for synthetic CAD data, this idea can be realized in a similar way to unsupervised pre-training for 2D vision tasks, where we first pre-train on a large-scale dataset in an unsupervised manner and then fine-tune on a smaller annotated dataset. Yet, most previous works focus on pre-training for single-object-level tasks~\cite{L2G-liu2019l2g,yang2018foldingnet,deng2018ppf,gadelha2018multiresolution,pointglr}, such as reconstruction, shape classification, or part segmentation, or on some low-level tasks like registration~\cite{deng2018ppf,zeng20173dmatch,elbaz20173d}. A recent work~\cite{xie2020pointcontrast}, namely PointContrast, first explores the possibility of pre-training in the context of 3D representation learning for higher-level scene understanding tasks, i.e., 3D detection and segmentation. Nevertheless, they conduct the pre-training on a real scene dataset and provide a failure case when pre-training the backbone model on ShapeNet~\cite{chang2015shapenet}, which consists of synthetic CAD object models. They attribute this unsuccessful attempt to two reasons, that is, the domain gap between real and synthetic data, as well as the insufficiency of capturing point-level representation by directly training on single objects. Despite these difficulties, it is still desirable to make ShapeNet play the role of ImageNet in 2D vision, since it is easy to obtain a large number of synthetic CAD models.
In this work, we put forward a new framework to show the possibility of using a synthetic CAD model dataset, i.e., ShapeNet, for 3D pre-training before fine-tuning on the downstream 3D object detection task. To this end, we propose a method named RandomRooms. In particular, we propose to generate two different layouts using one set of objects randomly sampled from the ShapeNet dataset. Given these two scenes made up of the same set of objects, we can then perform contrastive learning at the object level to learn the 3D scene representation.
Different from PointContrast~\cite{xie2020pointcontrast}, where the contrastive learning is performed at the point level, our approach has two advantages. One is to remove the requirement of point correspondence between two views, which is indispensable in the PointContrast framework, given that such information is necessary to obtain positive and negative pairs for the contrastive learning. This requirement limits the applications of PointContrast, since CAD model datasets like ShapeNet and many other real-world datasets like SUN RGB-D~\cite{song2015sun} cannot provide such information.
The other advantage is that our method can support more diverse backbone models. Most state-of-the-art models~\cite{puy2020flot,qi2019deep,shi2019pv} on tasks like 3D object detection apply PointNet++~\cite{qi2017pointnet++}-style models as their backbone, and replacing them with Sparse Res-UNet may lead to a drop in accuracy, according to PointContrast. However, PointContrast cannot support the pre-training of PointNet++-style models as well as that of UNet-like models, since the point correspondence may be lost after each abstraction level in PointNet++. With the proposed RandomRooms, we are able to perform contrastive learning at the level of objects and thus better support the pre-training of PointNet++-like models, as we no longer need to keep the point correspondence for contrastive learning as PointContrast does.
Our method is straightforward yet effective. We conduct the experiments on the 3D object detection task, where only geometric information is available as input, since the models in CAD datasets do not carry color information. The results of our empirical study strongly demonstrate the effectiveness of the method. In particular, we achieve the state of the art in 3D object detection on two widely used benchmarks, ScanNetV2 and SUN RGB-D. Furthermore, our method achieves even larger improvements when far fewer training samples are used, demonstrating that it learns a better initialization for 3D object detection.
\section{Related Work}
\begin{figure*}
\centering
\includegraphics[width=0.85\linewidth]{figs/fig2.pdf}
\caption{The overview of our framework. Given the objects randomly sampled from synthetic datasets, pairs of pseudo scenes are constructed following object augmentation, layout generation, and scene augmentation. We pre-train the model with shared weights on two corresponding random rooms. An object-level contrastive learning (OCL) method is proposed to help the network learn discriminative representations.}
\label{fig:method}
\vspace{-10pt}
\end{figure*}
\noindent \textbf{3D Deep Learning. }
3D deep learning~\cite{jiang2020pointgroup,pointglr,shi2019pointrcnn,yang20203dssd,li2019stereo,shi2019pv,hou20193d,wang2018sgpn,wei2019conditional} has attracted much attention in recent years, especially for 3D point cloud analysis~\cite{qi2017pointnet,qi2017pointnet++,klokov2017escape,dgcnn,so-net-li2018so,li2018pointcnn}. As the pioneering work, PointNet \cite{qi2017pointnet} introduces deep learning to 3D point cloud analysis. With the max pooling layer, it is able to directly operate on unordered sets. As a follow-up, PointNet++ \cite{qi2017pointnet++} employs PointNet as a basic module to hierarchically extract features. Different from~\cite{qi2017pointnet,qi2017pointnet++}, many other variants are also devised to further improve feature capacity~\cite{klokov2017escape,thomas2019kpconv}. Thanks to these architectures, significant progress has been made in many 3D applications \cite{klokov2017escape,dgcnn,so-net-li2018so,li2018pointcnn,shi2019pv,qi2019deep,hou20193d,wang2018sgpn}. As data-driven methods, these works either use object-level synthetic training data or leverage point clouds from real scenes. Exploring the great power of both synthetic and real-world datasets, our method bridges the gap between object- and scene-level 3D understanding.
\vspace{5pt} \noindent \textbf{3D Object Detection. }
Due to its broad real-world applications, more and more works \cite{shi2019pointrcnn,yang20203dssd,li2019stereo,shi2019pv,hou20193d,wang2018sgpn,xie2020pointcontrast} focus on 3D scene understanding. As a fundamental 3D task, 3D object detection addresses the problem of detecting objects' tight bounding boxes in 3D space. F-PointNet \cite{qi2018frustum} predicts 3D bounding boxes from the points in frustums and achieves efficiency as well as high recall for small objects. It can also handle strong occlusion or cases with very sparse points. Inspired by the Hough voting process, VoteNet \cite{qi2019deep} leverages a voting mechanism to capture scene context around object centers. Based on VoteNet, H3DNet \cite{zhang2020h3dnet} predicts different modalities of geometric primitives and aggregates them to generate the final 3D bounding boxes. Benefiting from hybrid features, H3DNet achieves state-of-the-art performance. However, these 3D scene understanding methods mainly make use of real data from 3D sensors. In contrast, our method aims at bringing the semantic knowledge in synthetic datasets to high-level 3D understanding tasks.
\vspace{5pt} \noindent \textbf{Model Pre-training. }
Pre-training has been the common practice for many machine learning tasks, ranging from vision~\cite{xie2020pointcontrast,simclr,moco,mocov2,infomin,girshick2014rich} to NLP tasks~\cite{peters2018deep,radford2018improving,howard2018universal,devlin2019bert}. In the context of 2D vision, the pre-training is often conducted on ImageNet~\cite{deng2009imagenet} with full supervision, and we can then fine-tune the pre-trained backbone model on downstream tasks like detection~\cite{girshick2014rich,ren2015faster,girshick2015fast}. More recently, unsupervised pre-training on ImageNet~\cite{simclr,moco,mocov2} has been shown to be effective. Compared to 2D vision, less exploration has been made on 3D vision tasks. Previously, most methods on 3D pre-training either focus on tasks at the single-object level, like classification, reconstruction, and part segmentation~\cite{yang2018foldingnet,gadelha2018multiresolution,pointglr,hassani2019unsupervised}, or on some low-level 3D tasks like registration~\cite{deng2018ppf,zeng20173dmatch,elbaz20173d}. Pre-training for higher-level 3D scene understanding tasks like detection and segmentation was not studied until a recent work~\cite{xie2020pointcontrast}, which exploits the point correspondence to learn the representation in an unsupervised manner. Compared to theirs, our method can pre-train on synthetic CAD datasets like ShapeNet and supports more types of backbone models.
\section{RandomRooms}
In this section, we describe the details of the proposed RandomRooms method. We first briefly review existing contrastive representation learning methods and illustrate the intuition of our method in Section~\ref{sec:CLR}. Then, we describe how to use synthetic objects to construct random rooms in Section~\ref{sec:construction}. In Section~\ref{sec:learning}, we present our pre-training task for learning scene-level representations from the pseudo scenes. The overview of our framework is presented in Figure~\ref{fig:method}.
\subsection{Overview of Contrastive Learning}\label{sec:CLR}
We begin by reviewing the existing contrastive representation learning methods for 2D and 3D understanding to illustrate the motivation of our method.
Contrastive learning is at the core of several recent methods on unsupervised learning, which exhibit promising performance on both 2D~\cite{instance-dis-wu2018unsupervised,cpc-henaff2019data,tian2019contrastive,moco,mocov2,simclr,byol,infomin} and 3D~\cite{xie2020pointcontrast,pointglr} tasks and show impressive generalization ability as a new type of pre-training method for various downstream tasks. The key ingredient of contrastive learning is constructing positive and negative pairs to learn discriminative representations, which inherits the idea of conventional contrastive learning in the metric learning literature~\cite{hadsell2006dimensionality}. Given an input $x$, its positive pair $x_+$, and a set of negative examples $\{x_i\}$, a commonly used training objective for contrastive representation learning is based on InfoNCE~\cite{cpc-henaff2019data,tian2019contrastive}:
\begin{equation}
\mathcal{L}_\text{contrastive} = -\log \frac{\exp(\varphi(x) \cdot \varphi(x_+)/\tau)}{\sum_i \exp(\varphi(x) \cdot \varphi(x_i)/\tau)},
\end{equation}
where $\varphi$ is the encoder network that maps the input to a feature vector and $\tau$ is a temperature hyper-parameter following~\cite{instance-dis-wu2018unsupervised,moco,simclr}. Intuitively, contrastive learning methods supervise models by encouraging the features of different \textit{views} of the same sample to be close to each other and distinguishable from those of other samples~\cite{n-pair-sohn2016improved,schroff2015facenet}. Hence the quality of positive pairs and negative examples is a critical factor in learning a good encoder.
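For concreteness, this objective can be implemented in a few lines of
PyTorch. The sketch below is our own illustration (the tensor layout and
the use of in-batch negatives are assumptions for exposition, not
details of a specific released implementation); it expects L2-normalized
features of two views, with the $i$-th rows forming a positive pair:
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(z, z_pos, tau=0.07):
    """InfoNCE loss with in-batch negatives.

    z, z_pos : (N, D) L2-normalized features of two views;
               row i of z and row i of z_pos are a positive pair.
    """
    logits = z @ z_pos.t() / tau         # (N, N) similarity matrix
    labels = torch.arange(z.size(0), device=z.device)
    # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
\end{verbatim}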
Since category annotations are not available in the unsupervised learning scenario, a common practice~\cite{examplar,instance-dis-wu2018unsupervised, moco} is using different augmentations of an input as the positive pairs and treating all other samples as negative examples. Although this design has proven to be effective in image representation learning, we argue there is a better solution to construct positive pairs for 3D understanding. One fundamental difference between 2D and 3D data is that the spatial structures of pixels do not reflect the actual geometric structures of the objects, but the spatial structures in 3D data always faithfully illustrate the layouts in the real world. This property suggests that it may be easier to manipulate or \textit{augment} 3D data compared to 2D images. Inspired by the rendering techniques in computer graphics, we propose to generate positive pairs of 3D scenes by randomly manipulating the layouts of 3D objects in a scene. Since we only need 3D objects instead of the whole scene in this process, our method makes it possible to use 3D object models to promote scene level representation learning.
It is worth noting that a recent work, namely PointContrast~\cite{xie2020pointcontrast}, explores 3D contrastive representation learning by using 3D point clouds from different views as the positive pair, where a point level contrastive loss is designed. This method is based on the multi-view point cloud sequences provided in ScanNetV2~\cite{dai2017scannet}. Instead, our method focuses on leveraging object level 3D data, which are easier to collect and have more diverse categories.
\subsection{Random Rooms from Synthetic Objects}\label{sec:construction}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/fig_example.pdf}
\caption{Some randomly selected examples of random rooms (a) and scenes from ScanNetV2 (b).}
\label{fig:example}
\vspace{-10pt}
\end{figure}
Compared to ScanNetV2~\cite{dai2017scannet}, which contains $\sim$15k objects from 17 categories, synthetic shape datasets like ShapeNet~\cite{chang2015shapenet} provide a more plentiful source for 3D understanding. For example, ShapeNetCore~\cite{chang2015shapenet} contains $\sim$52k objects from 55 categories. Therefore, the primary goal of this paper is to study how to use the synthetic CAD models collected by ShapeNet to improve downstream tasks like 3D detection and segmentation on real-world datasets.
Previous work~\cite{xie2020pointcontrast} shows that directly pre-training on ShapeNet will not yield performance improvements on downstream detection and segmentation tasks. We suspect the main reason is the domain gap between the single-object classification task on ShapeNet and the multi-object localization task on real-world datasets. In order to bridge the gap, we propose to generate pseudo scenes (we name them \textbf{\textit{random rooms}}) from synthetic objects to construct training data that are helpful for scene-level understanding.
Given a set of randomly sampled objects, we generate a random room in the following three steps:
\begin{itemize}
\item \textbf{Object Augmentation:} We first resize each object to a random size in [0.5m, 2.0m] to ensure the objects have sizes similar to those in ScanNetV2. Then, we apply commonly used object point cloud augmentation techniques~\cite{qi2017pointnet,qi2017pointnet++,rscnn-liu2019relation}, including rotation, point dropping, and jittering.
\item \textbf{Layout Generation:} For ease of implementation, we place objects in a rectangular room. The size of the room is adaptively adjusted according to the overall area of the augmented objects. The layout is generated based on two simple principles: 1) non-overlapping: any two objects should not occupy the same space in the room; 2) gravity: objects should not float in the air, and larger objects should not be placed over smaller ones. We then place objects in descending order of footprint area. Inspired by \textit{Tetris}~\footnote{https://en.wikipedia.org/wiki/Tetris}, for each object we first randomly choose a position in the X-Y plane that satisfies the above principles, and then determine the vertical location (the Z value) based on the current maximum height at that position. The object will not be placed at a position if the current maximum height of the position exceeds 2m (a simplified sketch of this procedure is given after this list).
\item \textbf{Scene Augmentation:} Lastly, we apply data augmentation such as rotation along the Z axis, point dropping, and jittering to the whole scene. To make the generated scenes more similar to real ones, we also add a floor and walls as distractors.
\end{itemize}
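As a concrete illustration of the object augmentation step, the following is a minimal NumPy sketch. The size range follows the description above, while the drop ratio and jitter magnitude are illustrative assumptions rather than the exact values used in our experiments:
\begin{lstlisting}[language=Python]
import numpy as np

def object_augmentation(pc, rng=np.random):
    # pc: (N, 3) object point cloud
    # 1) resize so that the longest side lies in [0.5m, 2.0m]
    extent = pc.max(axis=0) - pc.min(axis=0)
    pc = pc * (rng.uniform(0.5, 2.0) / extent.max())
    # 2) random rotation about the up (Z) axis
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    pc = pc @ np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]]).T
    # 3) random point dropping (drop ratio is an assumption)
    pc = pc[rng.rand(pc.shape[0]) > 0.1]
    # 4) jittering with small Gaussian noise (scale is an assumption)
    return pc + 0.01 * rng.randn(*pc.shape)
\end{lstlisting}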
Some examples of the random rooms are illustrated in Figure~\ref{fig:example}.
\subsection{Representation Learning from Random Rooms}\label{sec:learning}
To utilize the generated random rooms, we devise an object-level contrastive learning (OCL) method, which learns discriminative representations without category annotations.
Given $n$ randomly sampled objects $\{x_1,x_2,...,x_n\}$, we first generate two random rooms $R_A = \{x_1^A, x_2^A, ..., x_n^A \}$ and $R_B = \{x_1^B, x_2^B, ..., x_n^B \}$ by independently applying the above-mentioned steps. Then, we employ a point cloud encoder-decoder network $\mathcal{M}$ (\eg PointNet++~\cite{qi2017pointnet++} with feature propagation layers) to extract per-point features of the two scenes, $F_A = \mathcal{M}(R_A)$ and $F_B = \mathcal{M}(R_B)$. Since each random room is constructed from several individual objects, instance labels are naturally defined. The goal of object-level contrastive learning is to exploit these instance labels as a source of free and plentiful supervisory signals for training a rich representation for point cloud understanding. To obtain the feature of each object, we apply an average pooling operation $\mathcal{A}$ over the per-point features belonging to that object:
\begin{equation}
\{h_1^A, h_2^A, ..., h_n^A\}= \mathcal{A}(F_A), \quad \{h_1^B, h_2^B, ..., h_n^B\}= \mathcal{A}(F_B). \nonumber
\end{equation}
Following common practice in contrastive learning~\cite{mocov2,simclr}, the object features are then projected onto a unit hypersphere by a multi-layer perceptron (MLP) followed by L2 normalization. The object-level contrastive learning objective can be written as:
\begin{equation}
\begin{split}
\mathcal{L}_\text{OCL} = &- \frac{1}{n} \sum_{i=1}^n \log \frac{\exp(f_i^A \cdot f_i^B/\tau)}{\sum_{f\in \mathcal{F}} \exp(f_i^A \cdot f/\tau)} \\
&- \frac{1}{n} \sum_{i=1}^n \log \frac{\exp(f_i^B \cdot f_i^A/\tau)}{\sum_{f\in \mathcal{F}} \exp(f_i^B \cdot f/\tau)} ,
\end{split}
\end{equation}
where $f_i^A = \phi(h_i^A)$ and $f_i^B = \phi(h_i^B)$ are the projected features of the $i$-th object in $R_A$ and $R_B$ respectively, $\phi$ is the projection head, and $\mathcal{F}$ is the set of all projected features in the batch. Note that, compared to the point-level contrastive task in PointContrast~\cite{xie2020pointcontrast}, our method further exploits instance-level knowledge thanks to the generation mechanism of RandomRooms. We argue that object-level contrastive learning introduces more semantic knowledge and is therefore more helpful for downstream localization tasks (some empirical evidence can be found in Table~\ref{tb:method}).
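For concreteness, the following is a minimal PyTorch-style sketch of $\mathcal{L}_\text{OCL}$ for a single room pair (in practice $\mathcal{F}$ gathers the projected features of all rooms in the batch); the function name, the temperature value, and the exclusion of self-similarities from the denominator are illustrative assumptions:
\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def ocl_loss(h_a, h_b, phi, tau=0.07):
    # h_a, h_b: (n, d) per-object features from the two random rooms
    # phi: MLP projection head; tau: temperature (value assumed)
    f_a = F.normalize(phi(h_a), dim=1)  # project onto the unit hypersphere
    f_b = F.normalize(phi(h_b), dim=1)
    feats = torch.cat([f_a, f_b], dim=0)   # the feature set F (one pair here)
    logits_a = f_a @ feats.t() / tau       # (n, 2n) scaled similarities
    logits_b = f_b @ feats.t() / tau
    n = h_a.size(0)
    eye = torch.eye(2 * n, dtype=torch.bool, device=feats.device)
    # mask out trivial self-similarities f_i . f_i (assumed convention)
    logits_a = logits_a.masked_fill(eye[:n], float('-inf'))
    logits_b = logits_b.masked_fill(eye[n:], float('-inf'))
    idx = torch.arange(n, device=feats.device)
    # positives: f_i^A <-> f_i^B; cross entropy averages over the n objects
    return (F.cross_entropy(logits_a, idx + n)
            + F.cross_entropy(logits_b, idx))
\end{lstlisting}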
\section{Experiments}
One primary goal of representation learning is to learn representations that transfer to downstream tasks. To apply RandomRooms to scene-level understanding tasks such as 3D object detection, we adopt the \textit{unsupervised pre-training} + \textit{supervised fine-tuning} pipeline~\cite{moco,xie2020pointcontrast}. Specifically, we first pre-train the backbone model on ShapeNet using our method, then use the pre-trained weights as initialization and fine-tune the model on the downstream 3D object detection task.
\subsection{Pre-training Setups}
We perform pre-training on ShapeNet~\cite{chang2015shapenet}, a dataset composed of richly-annotated shapes represented by 3D CAD models of objects from 55 common categories. To generate a random room, we first randomly sample multiple objects from the dataset. The number of objects is a random integer from 12 to 18, which is similar to the average number of objects in ScanNetV2 scenes. For the sampled objects, we then run the random room generation algorithm described in Section~\ref{sec:construction}. The object-level contrastive loss is used to train the model in an unsupervised manner.
For the downstream 3D object detection task, we use the backbone models proposed in~\cite{qi2019deep} and~\cite{zhang2020h3dnet}, which take as input 40,000 points. Following the network configurations of these two works, we use the 1024-point feature as the output of the backbone models and perform contrastive learning on this feature. During pre-training, we use the Adam optimizer~\cite{adam-kingma2014adam} with an initial learning rate of 0.001. We train the model for 300 epochs, and the learning rate is multiplied by 0.1 at the 100th and 200th epochs. The batch size is set to 16, so that roughly 200$\sim$300 unique objects are involved in the contrastive learning at every iteration.
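The following is a minimal sketch of this pre-training loop; \texttt{shapenet}, \texttt{generate\_random\_room}, \texttt{pool\_object\_features} and \texttt{iters\_per\_epoch} are hypothetical helper names, and one room pair per step is shown for brevity (our batches contain 16 pairs):
\begin{lstlisting}[language=Python]
import random
import torch

# model: point cloud encoder-decoder; phi: projection head (defined elsewhere)
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(phi.parameters()), lr=0.001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 200], gamma=0.1)  # decay by 0.1
for epoch in range(300):
    for _ in range(iters_per_epoch):
        # sample 12--18 CAD models and build a positive pair of rooms
        objects = random.sample(shapenet, random.randint(12, 18))
        room_a, labels_a = generate_random_room(objects)
        room_b, labels_b = generate_random_room(objects)
        # pool per-point features into per-object features via instance labels
        h_a = pool_object_features(model(room_a), labels_a)
        h_b = pool_object_features(model(room_b), labels_b)
        loss = ocl_loss(h_a, h_b, phi)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
\end{lstlisting}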
\subsection{3D Object Detection}
\begin{table*}[t!]
\caption{3D object detection results on the ScanNetV2 validation set. Per-category results of average precision (AP) with IoU threshold 0.25 are reported, together with the mean AP across all semantic classes.}
\newcolumntype{g}{>{\columncolor{Gray}}c}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{r|c|*{18}{c}|g}
\toprule
& Input & cab & bed & chair & sofa & tabl & door & wind & bkshf & pic & cntr & desk & curt & fridg & showr & toil & sink & bath & ofurn & mAP\\
\midrule
3DSIS-5\cite{hou20193d} & Geo+RGB & 19.8 & 69.7 & 66.2 & 71.8 & 36.1 & 30.6 & 10.9 & 27.3 & 0.0 & 10.0 & 46.9 & 14.1 & 53.8 & 36.0 & 87.6 & 43.0 & 84.3 & 16.2 & 40.2 \\
3DSIS\cite{hou20193d} & Geo & 12.8 & 63.1 & 66.0 & 46.3 & 26.9 & 8.0 & 2.8 & 2.3 & 0.0 & 6.9 & 33.3 & 2.5 & 10.4 & 12.2 & 74.5 & 22.9 & 58.7 & 7.1 & 25.4 \\
\midrule
Votenet\cite{qi2019deep} & Geo & 36.3 & 87.9 & 88.7 & 89.6 & 58.8 & 47.3 & 38.1 & 44.6 & 7.8 & 56.1 & 71.7 & 47.2 & 45.4 & 57.1 & 94.9 & 54.7 & 92.1 & 37.2 & 58.6 \\
Ours + VoteNet & Geo & 37.2 & 87.4 & 88.9 & 89.8 & 61.9 & 45.3 & 42.6 & 53.5 & 7.8 & 51.7 & 67.2 & 53.5 & 54.0 & 66.4 & 96.8 & 62.6 & 92.0 & 43.6 & 61.3 \\
\midrule
H3DNet\cite{zhang2020h3dnet} & Geo & 49.4 & 88.6 & 91.8 & {90.2} & 64.9 & {61.0} & 51.9 & {54.9} & {18.6} & {62.0} & 75.9 & {57.3} & 57.2 & 75.3 & 97.9 & 67.4 & {92.5} & 53.6 & 67.2 \\
Ours + H3DNet & Geo & {53.6} & {89.7} & {92.1} & 90.1 & {71.5} & 58.2 & {54.2} & 53.0 & 16.6 & 60.5 & {79.1} & 56.1 & {58.1} & {85.0} & {98.8} & {71.1} & 89.5 & {57.4} & \textbf{68.6} \\
\bottomrule
\end{tabular}
\label{Table:Quantitative:Result:ScanNetCat}
\end{adjustbox}
\end{table*}
\begin{table}
\caption{3D object detection results on the ScanNetV2 validation set. We show the mean of average precision (mAP) across all semantic classes with 3D IoU thresholds 0.25 and 0.5. }
\newcolumntype{g}{>{\columncolor{Gray}}c}
\begin{adjustbox}{width=0.9\columnwidth, center}
\centering
\begin{tabular}{r | c | g | g }
\toprule
& Input & mAP$_{25}$ & mAP$_{50}$ \\
\midrule
DSS\cite{song2016deep} & Geo + RGB & 15.2 & 6.8 \\
F-PointNet\cite{qi2018frustum} & Geo + RGB & 19.8 & 10.8 \\
GSPN\cite{yi2019gspn} & Geo + RGB & 30.6 & 17.7\\
3D-SIS \cite{hou20193d} & Geo + 5 views & 40.2 & 22.5 \\
\midrule
PointContrast~\cite{xie2020pointcontrast} & Geo only & 58.5 & 38.0 \\
\midrule
VoteNet \cite{qi2019deep} & Geo only & 58.6 & 33.5 \\
Ours + VoteNet & Geo only & 61.3 & 36.2 \\
\midrule
H3DNet~\cite{zhang2020h3dnet} & Geo only & 67.2 & 48.1 \\
Ours + H3DNet & Geo only & 68.6 & 51.5 \\
\bottomrule
\end{tabular}
\label{Table:scannet:map:0.5}
\end{adjustbox}
\end{table}
\noindent\textbf{Datasets.}
We conduct experiments on two widely-used 3D detection benchmarks, ScanNetV2~\cite{dai2017scannet} and SUN RGB-D~\cite{song2015sun}. ScanNetV2 is a richly annotated dataset of 3D reconstructed meshes of indoor scenes. It contains 1,513 scanned and reconstructed real scenes, with objects of various sizes and shapes from 18 categories. It is currently the largest dataset created with a light-weight RGB-D scanning procedure, yet it is still much smaller in scale than datasets in 2D vision. Following~\cite{qi2019deep,dai2017scannet}, we split the dataset into two subsets with 1,201 scenes for training and 312 for testing. SUN RGB-D is a single-view RGB-D dataset for 3D scene understanding. It contains 10,335 indoor RGB and depth images with object bounding boxes and per-point semantic labels covering 10 object categories. We also strictly follow the splits described in~\cite{qi2019deep,dai2017scannet}, with 5,285 samples for training and 5,050 for testing.
\vspace{5pt} \noindent \textbf{Detection Models.}
We compare our method with two recently proposed state-of-the-art approaches: VoteNet~\cite{qi2019deep}, a geometry-only detector that combines deep point set networks with a voting procedure, and H3DNet~\cite{zhang2020h3dnet}, which predicts a hybrid set of geometric primitives. Both take colorless 3D point clouds as input.
We also include GSPN~\cite{yi2019gspn}, 3D-SIS~\cite{hou20193d}, DSS~\cite{song2016deep}, F-PointNet~\cite{qi2018frustum}, 2D-Driven~\cite{lahoud20172d}, and Clouds of Oriented Gradients (COG)~\cite{ren2016three}, which use other types of information for object detection, in the comparison.
\vspace{5pt} \noindent \textbf{Implementation Details. }
We show the effectiveness of our method by the improvements upon VoteNet and H3DNet. We load the pre-trained backbone into the model at the beginning of training and follow the original training settings. Specifically, we train the model for 360 epochs in total. The initial learning rate is 1e-2 for ScanNetV2 and 1e-3 for SUN RGB-D. We evaluate performance by mAP with 3D IoU thresholds of 0.25 and 0.5. Please refer to the original papers for more details on the experimental settings.
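A minimal sketch of the weight-loading step (the checkpoint path and key names are illustrative assumptions):
\begin{lstlisting}[language=Python]
import torch

ckpt = torch.load('randomrooms_pretrain.pth')  # path is an assumption
# initialize only the backbone; the detection heads train from scratch
detector.backbone.load_state_dict(ckpt['backbone'], strict=False)
\end{lstlisting}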
\vspace{5pt} \noindent \textbf{ScanNetV2. }
We first report per-category AP and mAP with IoU threshold 0.25 for all semantic classes in Table~\ref{Table:Quantitative:Result:ScanNetCat}. With pre-training, we improve the mAP by 2.7 points and 1.4 points for VoteNet and H3DNet respectively. These results indicate that our pre-training truly improves fine-tuning on high-level detection tasks. Moreover, improvements in average precision are observed for 11 out of 18 categories, indicating that pre-training boosts the detection of most common categories.
We further report results under the stricter [email protected] metric, and compare with other 3D object detection approaches that utilize color information, in Table~\ref{Table:scannet:map:0.5}. Under both the [email protected] and [email protected] metrics, our method achieves the state-of-the-art. In particular, for [email protected] the improvement is even larger than for [email protected]: we improve by 2.7 points and 3.4 points upon VoteNet and H3DNet respectively. This indicates that we obtain more accurate bounding box predictions with the help of the proposed pre-training strategy.
\vspace{5pt} \noindent \textbf{SUN RGB-D. }
We also conduct experiments on SUN RGB-D and report the results in Table~\ref{Table:Quantitative:Result:SUN}. With pre-training, we again achieve the state-of-the-art. For [email protected], we improve by 1.5 points for both VoteNet and H3DNet. For [email protected], we improve by 2.5 points and 4.1 points for VoteNet and H3DNet respectively. This once again illustrates that our method yields more accurate bounding box predictions. As for the per-class average precision, improvements are observed for 7 out of 10 categories.
\iffalse
\begin{table}
\caption{3D object detection results on SUN RGB-D validation set. Per-category results of average precision (AP) with IOU threshold 0.25 are reported. We also show mean of average precision (mAP) across all semantic classes with 3D IoU threshold 0.25 and 0.5. }
\newcolumntype{g}{>{\columncolor{Gray}}c}
\begin{adjustbox}{width=\columnwidth, center}
\centering
\begin{tabular}{r | c | g | g }
\toprule
& Input & mAP$_{25}$ & mAP$_{50}$ \\
\midrule
DSS\cite{song2016deep} & Geo + RGB & 42.1 & - \\
COG\cite{ren2016three} & Geo + RGB & 47.6 & -\\
2D-driven\cite{lahoud20172d} & Geo + RGB & 45.1 & - \\
F-PointNet~\cite{qi2018frustum} & Geo + RGB & 54.0 & -\\
\midrule
VoteNet \cite{qi2019deep} & Geo only & 57.7 & 32.9 \\
Ours + VoteNet & Geo only & 59.2 & 35.4 \\
\midrule
H3DNet~\cite{zhang2020h3dnet} & Geo only & 60.1 & 39.0 \\
Ours + H3DNet & Geo only & \textbf{61.6} & \textbf{43.1} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{Table:Quantitative:Result:All}
\end{table}
\fi
\begin{table*}
\caption{3D object detection results on the SUN RGB-D validation set. We report per-category results of average precision (AP) with 3D IoU threshold 0.25, and the mean AP across all semantic classes with 3D IoU thresholds 0.25 and 0.5. For fair comparison with previous methods, the evaluation is on the SUN RGB-D V1 data.}
\label{Table:Quantitative:Result:SUN} \centering
\newcolumntype{g}{>{\columncolor{Gray}}c}
\begin{adjustbox}{width=0.95\textwidth}
\begin{tabular}{r | c | *{10}{c} | g | g }
\toprule
& Input & bathtub & bed & bkshf & chair & desk & drser & nigtstd &sofa &table & toilet & mAP$_{25}$ & mAP$_{50}$ \\
\midrule
DSS\cite{song2016deep} & Geo + RGB & 44.2 & 78.8 & 11.9 & 61.2 & 20.5 & 6.4& 15.4& 53.5& 50.3& 78.9 & 42.1 & - \\
COG\cite{ren2016three} & Geo + RGB & 58.3 & 63.7 & 31.8 & 62.2 & 45.2 & 15.5 & 27.4& 51.0 & 51.3 & 70.1 & 47.6 & -\\
2D-driven\cite{lahoud20172d} & Geo + RGB & 43.5 & 64.5 & 31.4 & 48.3 & 27.9 & 25.9 & 41.9 & 50.4 & 37.0 & 80.4 & 45.1 & - \\
F-PointNet\cite{qi2018frustum} & Geo + RGB & 43.3 & 81.1 & 33.3 & 64.2 & 24.7 & 32.0 & 58.1 & 61.1 & 51.1 & 90.9 & 54.0 & - \\
\midrule
PointContrast~\cite{xie2020pointcontrast} & Geo & - & - & - & - & - & - & - & - & - & - & 57.5 & 34.8 \\
\midrule
VoteNet \cite{qi2019deep} & Geo & 74.7 & 83.0 & 28.8 & 75.3 & 22.0 & 29.8 & 62.2 & 64.0 & 47.3 & 90.1 & 57.7 & 32.9 \\
Ours + VoteNet & Geo & 76.2 & 83.5 & 29.2 & 76.7 & 25.1 & 33.2 & 64.2 & 63.8 & 49.0 & 91.2 & 59.2 & 35.4 \\
\midrule
H3DNet~\cite{zhang2020h3dnet} & Geo & {73.8} & 85.6 & 31.0 & 76.7 & {29.6} & 33.4 & 65.5 & 66.5 & {50.8} & 88.2 & 60.1 & 39.0 \\
Ours + H3DNet & Geo & 71.2 & {86.4} & {38.7} & {77.8} & 28.0 & {36.5} & {68.3} & {67.7} & 50.3 & {91.0} & \textbf{61.6} & \textbf{43.1} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table*}
\vspace{5pt} \noindent \textbf{Less Training Data. }
To show that our method truly learns a better initialization through pre-training, we further conduct empirical studies with much less training data, using 5$\%$, 10$\%$, 25$\%$ and 50$\%$ of the ScanNetV2 training set. As can be seen from Table~\ref{Table:few-shot}, the improvement under this few-shot setting is still obvious, especially for [email protected], where it grows even larger as less data is used. Notably, the improvement in [email protected] exceeds 5 points when less than 10$\%$ of the training data is used. On the other hand, the improvement in [email protected] is almost unchanged compared to [email protected]. This indicates that our pre-training helps the downstream model achieve a better coarse understanding of the scene when less data is available; to gain a more accurate understanding, supervised learning with annotated data is still needed.
\begin{table*}
\caption{Effects of the training data size. We show the mean of AP across all semantic classes with 3D IoU threshold 0.25 and 0.5 when training on ScanNetV2 with less data. We report the results of using 5$\%$, 10$\%$, 25$\%$ and 50$\%$ data.}
\centering
\begin{adjustbox}{width=0.9\textwidth}
\begin{tabular}{r | cc | cc | cc | cc | cc }
\toprule
& \multicolumn{2}{c|}{100\%} & \multicolumn{2}{c|}{50\%} & \multicolumn{2}{c|}{25\%} & \multicolumn{2}{c|}{10\%} & \multicolumn{2}{c}{5\%}\\
& mAP$_{25}$ & mAP$_{50}$ & mAP$_{25}$ & mAP$_{50}$ & mAP$_{25}$ & mAP$_{50}$ & mAP$_{25}$ & mAP$_{50}$ & mAP$_{25}$ & mAP$_{50}$ \\
\midrule
VoteNet \cite{qi2019deep} & 58.6 & 33.5 & 47.0 & 25.3 & 35.5 & 20.0 & 25.1 & 14.3 & 12.6 & 3.2\\
\rowcolor{Gray} Ours + VoteNet & 61.3 & 36.2 & 53.0 & 30.2 & 38.2 & 23.2 & 28.9 & 17.2 & 19.1 & 10.1\\
\midrule
H3DNet~\cite{zhang2020h3dnet}& 67.2 & 48.1 & 61.5 & 40.6 & 51.6 & 30.9 & 37.0 & 20.7 & 26.6 & 11.3 \\
\rowcolor{Gray} Ours + H3DNet & 68.6 & 51.5 & 63.2 & 43.6 & 54.4 & 33.5 & 42.2 & 23.4 & 32.0 & 13.9 \\
\bottomrule
\end{tabular}
\label{Table:few-shot}
\end{adjustbox}
\end{table*}
\vspace{5pt} \noindent \textbf{Ablation Study.}
In Table~\ref{Tab:ablation}, we conduct three groups of ablation studies. All of them are conducted on the ScanNetV2 dataset with VoteNet as the detector, and we use [email protected] as the evaluation metric.
We first study the choice of dataset on which pre-training is performed. From Table~\ref{Tab:ablation:a}, we observe that pre-training on either ShapeNet or ScanNetV2 improves performance. Thanks to the larger scale of ShapeNet, \ie more samples from more diverse categories, pre-training on it achieves better results than on ScanNetV2. Furthermore, the two datasets can be combined: using objects from both, we achieve an even better fine-tuning result than with either dataset alone.
We then study the effect of the pre-training loss in Table~\ref{Tab:ablation:b}. Compared to the point-level contrastive loss used by PointContrast, the instance-level contrastive loss achieves better pre-training results. This indicates that object-level contrastive learning better helps downstream localization tasks by incorporating more instance-level knowledge. Since the object labels in ShapeNet are easy to access, we also add a segmentation loss by assigning to all points of an object the corresponding object label. This brings marginal improvement at the cost of an additional supervision signal, illustrating that our completely unsupervised pre-training strategy achieves performance comparable to supervised pre-training on the synthetic dataset.
We finally examine the necessity of several strategies used in scene generation. In Table~\ref{Tab:ablation:c}, we verify the importance of the gravity principle and of adding the floor and walls. Without these components we can still improve upon the baseline, but the larger domain shift between real and generated scenes hampers pre-training from yielding a better model for fine-tuning on the real downstream datasets.
\begin{table*}
\caption{Ablation analysis on the proposed RandomRooms method. We investigate the effects of pre-training datasets, learning losses and random room generation methods. We report the mAP$_{25}$ results of VoteNet on ScanNetV2. }
\centering
\subfloat[Ablation studies on pre-training datasets.]
{\makebox[0.3\linewidth][c]{
\tablestyle{12pt}{1.2}
\begin{tabular}{c|c}
\textbf{Pre-training dataset} & \textbf{mAP} \\
\shline
baseline & 58.6 \\
\hline
ScanNetV2 & 60.2 \\
\rowcolor{Gray} ShapeNet & 61.3 \\
ShapeNet + ScanNetV2 & 61.5 \\
\end{tabular}
\label{Tab:ablation:a}
}
}
\hfill
\subfloat[Ablation studies on pre-training losses.\label{tb:method}]
{\makebox[0.3\linewidth][c]{
\tablestyle{12pt}{1.2}
\begin{tabular}{c|c}
\textbf{Pre-training loss} & \textbf{mAP} \\
\shline
baseline & 58.6 \\
\hline
point-level contrastive & 59.2 \\
\rowcolor{Gray} instance-level contrastive & 61.3 \\
instance-level contrastive + seg. & 61.5 \\
\end{tabular}
\label{Tab:ablation:b}
}
}
\hfill
\subfloat[Ablation studies on room generation.
\label{tab:analysis:pixel}]
{\makebox[0.3\linewidth][c]{
\tablestyle{12pt}{1.2}
\begin{tabular}{c|c}
\textbf{Generation method} & \textbf{mAP} \\
\shline
baseline & 58.6 \\
\hline
\rowcolor{Gray} RandomRooms & 61.3 \\
w/o gravity & 60.5 \\
w/o floor/wall & 60.7 \\
\end{tabular}
\label{Tab:ablation:c}
}
}
\vspace{-5pt}
\label{Tab:ablation}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/curve.pdf}
\caption{Training from scratch vs. fine-tuning with RandomRooms pre-trained weights. We report the 3D detection training loss and the validation [email protected] of VoteNet on ScanNetV2. }
\label{fig:curve}
\vspace{-10pt}
\end{figure}
\begin{table}
\caption{We compare our method with PointContrast on ScanNetV2 and SUN RGB-D using PointNet++ as the backbone. We show the mean of average precision (mAP) across all semantic classes with 3D IoU threshold 0.25. }
\newcolumntype{g}{>{\columncolor{Gray}}c}
\begin{adjustbox}{width=\columnwidth, center}
\centering
\begin{tabular}{r | g | g }
\toprule
& ScanNetV2 & SUN RGB-D \\
\midrule
Sparse Res-UNet w/o pre-training & 56.7 & 55.6\\
Sparse Res-UNet w/ PointContrast & 58.5 & 57.5 \\
\midrule
PointNet++ w/o pre-training & 58.6 & 57.7\\
PointNet++ w/ PointContrast & 58.5 & 57.9 \\
\midrule
PointNet++ w/ RandomRooms & \textbf{61.3} & \textbf{59.2} \\
\bottomrule
\end{tabular}
\label{Table:scannet:pretrain:0.25}
\end{adjustbox}
\end{table}
\vspace{5pt} \noindent \textbf{Comparison with PointContrast. } To show that our pre-training method is more suitable for the 3D object detection task, we compare with another pre-training method, PointContrast, on ScanNetV2 and SUN RGB-D, using VoteNet~\cite{qi2019deep} as the detection model and [email protected] as the evaluation metric. The results are reported in Table~\ref{Table:scannet:pretrain:0.25}.
We find that using Sparse Res-UNet instead of PointNet++ as the backbone leads to worse detection performance when training from scratch. Moreover, the improvement brought by PointContrast to PointNet++-based detectors is quite marginal, with final performance on par with detectors using Sparse Res-UNet as the backbone. On the contrary, since our RandomRooms method does not need to maintain point correspondences, it learns a much better initialization for PointNet++-style models, which are the stronger backbones of current state-of-the-art 3D object detectors. This demonstrates that our method is superior to PointContrast on the object detection task.
\vspace{5pt} \noindent \textbf{Learning Curve. } We show the learning curves of our method and the baseline VoteNet in Figure~\ref{fig:curve}. We observe that the pre-trained weights significantly improve the learning speed and stabilize the training process. The model initialized with pre-trained weights achieves lower training loss and better validation mAP, which clearly demonstrates the effectiveness of the proposed method.
\vspace{5pt} \noindent \textbf{Visualization. } We visualize the detection results of the baseline VoteNet trained from scratch and of the model pre-trained with our method on ScanNetV2. The results are shown in Figure~\ref{fig:vis_det}. The pre-trained model produces more accurate detections with fewer false positives, closer to the ground-truth bounding boxes. The visual results further confirm the effectiveness of the proposed method.
\vspace{5pt} \noindent \textbf{Discussions.}
Though we follow many heuristic rules when generating the \emph{random rooms}, a domain gap still exists between real scenes and generated ones. The extensive experimental results shed light on an interesting fact: in 3D representation learning, the layout of objects may not be as important for recognition as it is in 2D vision. We only need to ensure that the set of objects spreads out in space; the interactions among objects matter much less than in 2D vision, where hidden interactions can serve as an important cue for many high-level scene understanding tasks like detection. This may be because overlap is not that severe in complex 3D scenes. We believe this observation may open a path for future research on 3D learning.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/vis_det.pdf}
\caption{Visual results on ScanNetV2. We compare qualitative detection results with the baseline VoteNet. The pre-trained model produces more accurate detections with fewer false positives, closer to the ground-truth bounding boxes. }
\label{fig:vis_det}
\vspace{-10pt}
\end{figure}
\section{Conclusion}
In this paper, we have proposed a new pipeline, RandomRooms, for 3D pre-training that makes use of synthetic CAD model datasets to help learning on real datasets for the high-level 3D object detection task. Unlike previous works that perform contrastive learning at the level of points, we perform contrastive learning at the object level by composing two different scenes from the same set of objects randomly sampled from the CAD model dataset.
Empirically, we show consistent improvements on downstream 3D detection tasks over several base models, especially when less training data is used. Benefiting from the rich semantic knowledge and diverse objects of synthetic data, our method establishes the new state-of-the-art on the widely-used 3D detection benchmarks ScanNetV2 and SUN RGB-D. We expect this work to open a new path for future research on exploiting easily accessible synthetic objects for more complex 3D scene understanding tasks.
\subsection*{Acknowledgements}
This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFA0700802, in part by the National Natural Science Foundation of China under Grant 61822603, Grant U1813218, and Grant U1713214, in part by a grant from the Beijing Academy of Artificial Intelligence (BAAI), and in part by a grant from the Institute for Guo Qiang, Tsinghua University.
\section*{A. Details about Random Room Generation}
To illustrate the generation process of random rooms more clearly, we provide pseudo-code with explanatory comments of our room generation method in Algorithm~\ref{alg:supp}.
\begin{algorithm*}[p]
\caption{Pseudo-code of Random Room Generation}
\label{alg:supp}
\begin{lstlisting}[language=Python]
# Inputs: objects -- a list of object point clouds, each of shape (N, 3).
# object_augmentation, sort_object, get_object_size and add_floor_wall
# are helper routines of our implementation.
import random
import numpy as np

max_iter = 100  # maximum number of placement attempts per object (assumed)

# object level data augmentation
objects = object_augmentation(objects)
# sort objects by their areas in descending order
objects, object_ind, obj_area = sort_object(objects)
# set the overall area of the rectangular room (in cm^2)
overall_area = sum(obj_area) * 2 * 10000 * (random.random() * 0.4 + 0.6)
a_value = np.sqrt(overall_area)
a = random.randint(int(a_value * 0.75), int(a_value * 1.25))
b = int(overall_area) // a
a_m = float(a) / 100.  # room side lengths in meters
b_m = float(b) / 100.
room_state = np.zeros((a, b), dtype=float)  # height map at 1cm resolution
final_layout = []
instance_label = []
# place the objects into the room one by one
for i in range(len(objects)):
    obj = objects[i]
    x, y, z = get_object_size(obj)
    for _ in range(max_iter):
        # sample a position in the X-Y plane from a beta distribution
        pos_x = np.random.beta(0.5, 0.5) * (a_m - x)
        pos_y = np.random.beta(0.5, 0.5) * (b_m - y)
        state_part = room_state[int(pos_x * 100):int((pos_x + x) * 100),
                                int(pos_y * 100):int((pos_y + y) * 100)]
        max_height = state_part.max()
        # accept the position if stacking keeps the total height below 2m
        if (max_height + z < 2.0 and max_height < 0.5) or max_height < 1e-3:
            break
    # update the height map and lift the object onto the current stack
    room_state[int(pos_x * 100):int((pos_x + x) * 100),
               int(pos_y * 100):int((pos_y + y) * 100)] += z
    obj[:, 0] += pos_x
    obj[:, 1] += pos_y
    obj[:, 2] += max_height
    final_layout.append(obj)
    instance_label.append(np.ones((obj.shape[0],), dtype=int) * (object_ind[i] + 1))
# add floor and walls
final_layout, instance_label = add_floor_wall(final_layout, instance_label)
# form the final scene point cloud
final_layout = np.concatenate(final_layout, axis=0)
instance_label = np.concatenate(instance_label, axis=0)
# center the X-Y coordinates
final_layout[:, 0:2] = final_layout[:, 0:2] - final_layout[:, 0:2].mean(axis=0, keepdims=True)
\end{lstlisting}
\end{algorithm*}
\section*{B. Visualization of Random Rooms}
We show more examples of the generated random room pairs in Figure~\ref{fig:supp_example}.
\begin{figure*}[p]
\centering
\includegraphics[width=0.95\linewidth]{fig_supp_example.pdf}
\caption{Visualization of the pairs of \textit{Random Rooms}. }
\label{fig:supp_example}
\end{figure*}
{\small
\bibliographystyle{ieee_fullname}
\chapter{Introduction}
Supersymmetric extensions of the Standard Model (SM) are among
the most promising candidates for new physics above
the weak scale $M_{Z}$.
In the minimal supersymmetric SM (MSSM) all particles of the
SM are promoted to chiral $N=1$ supermultiplets
with one additional Higgs doublet added [\MSSM].
Supersymmetry is assumed to be broken by explicit but soft breaking
terms which appear naturally in the low energy limit
of spontaneously broken supergravity theories.
This soft supersymmetry breaking introduces a number of new
parameters into the Lagrangian which control
the mass spectrum of the fermions' superpartners.
The parameter space spanned by the soft terms
has been the subject of numerous investigations,
mostly under some simplifying assumptions [\MSSM].
It has also been studied
within the context of further extensions such as superstring
theory [\BG --\BIM].
Without specifying the precise origin of the non-perturbative
effects in superstring theory but with
a number of plausible assumptions, it has been possible
to observe interesting features of the soft terms [\IL, \KL, \BIM].
In particular, many signatures at $M_{Z}$ are entirely controlled
by perturbative couplings in string theory and (almost) independent
of the assumption about the unknown non-perturbative properties.
The relevant perturbative couplings have indeed been computed
for many string vacua at the leading order (tree level)
in string perturbation theory.
One finds that in most cases the standard assumptions
of the MSSM are not fulfilled. For example,
non-universal scalar masses as well as non-proportional
$A$-terms with arbitrary CP-violating phases are easily generated.
Non-universal scalar masses are particularly dangerous
since they generically induce unacceptably large contributions
to rare processes such as flavor changing neutral currents
(FCNC) [\FCNC--\DKS]. One way out is to look for possible
mechanisms which naturally suppress non-universal scalar masses.
This could be natural if SUSY breaking is communicated to the light
particles by gauge interactions [\DiNe] or in models with a
nonabelian
horizontal symmetry [\DLK]. (Abelian horizontal symmetries could
align quark and squark mass matrices, thus suppressing FCNC without
squark degeneracy [\NS,\LNS].)\foot{Other recent investigations
of the problem include refs.~[\MN, \CEKLP].}
In the context of string theory, universality
of squark masses is achieved if the dominant source for supersymmetry
breaking is the dilaton $F$-term [\KL] or in `no-scale' type of
models [\noscale].
However, in both cases non-universal scalar masses might arise
through couplings generated at the 1-loop level of string perturbation
theory. Although for such couplings much less (string) information is
currently available, it is possible to estimate the typical size of
these corrections and hence estimate the physical implications for weak
scale phenomenology. Furthermore, the generic CP-violating phases
of the $A$ and $B$-terms are constrained in both scenarios
and can be confronted with the bounds for
the electric dipole moment of the neutron (EDMN)[\EDMN].
This paper is organized as follows: In Section~2 we present the
general form of string loop corrections. Their implications for FCNC and
CP violation when supersymmetry is broken by the dilaton $F$-term
are studied in Section~3.
A similar analysis for supersymmetry breaking by moduli $F$-terms
(including some cosmological consequences) is presented in Section~4.
A summary is given in Section~5.
\chapter{String Loop Corrections}
We first summarize the generic structure of
the couplings in the low energy effective Lagrangian of string
theory. In addition to the gravitational and gauge multiplets,
the massless spectrum contains two types of chiral supermultiplets.
First, matter fields $Q^I$ which are charged under the
low energy gauge group $G$ and which contain the quark and lepton
multiplets of the SM. Second, there are the gauge neutral
supermultiplets $S$ (dilaton) and $M^i$ (moduli)
which are flat directions of the perturbative effective potential and
whose VEVs parameterize the perturbative degeneracy of the
string vacuum.\foot{Strictly speaking there can also be
singlet supermultiplets which are not moduli,
\ie\ which are not a flat direction of the effective
potential. For the purpose of this article we include
them among the matter fields $Q^I$.} The couplings of the
low energy effective Lagrangian for the massless multiplets
are encoded in three scalar functions:
the real K\"ahler\ potential ${K}$, the holomorphic superpotential
$W$ and the holomorphic gauge kinetic function $f$.
$K$ summarizes the kinetic energy terms and
at low energies can be expanded in the matter fields
$$
{K}\ =\ \kappa^{-2}\, \hat K\ +\ Z_{{\bar I} J}
\smash{\overline\matter}\vphantom{\matter}^{\bar I} e^{2V} Q^J\ +\ ({1\over2}H_{IJ}
Q^IQ^J+{\rm h.c.})\ +\ \cdots \ ,
\eqn\Kexpansion$$
where the `$\cdots$' in eq.~\Kexpansion\ correspond to terms which
are irrelevant for the present investigation.
The matter fields $Q^I$ carry canonical dimension one whereas
$S$ and $M^i$ are expected to receive Planck-sized VEVs and
therefore are chosen to be dimensionless. The couplings $\hat K$,
$Z_{{\bar I} J}$ and $H_{IJ}$ are dimensionless functions of $S$
and $M^i$ and only further constrained by the fact that the
dilaton $\mathop{\rm Re}\nolimits S$ serves as the string-loop counting parameter. At the
string tree level the dilaton couples universally in all
string vacua; this universality is lost at the loop level but
all couplings can be expanded in powers of $\mathop{\rm Re}\nolimits S$:
$$
\eqalign{
\hat K \ &=\ - \ln (S + \overline S)\ +\
\sum_{n=0}^\infty {\hat K^{(n)} (M,\smash{\overline M}\vphantom{M})\ \over\,
[ 8\pi^2 (S + \overline S )]^n} \, , \cr
Z_{{\bar I} J}\ &=\ \sum_{n=0}^\infty {Z^{(n)}_{{\bar I} J} (M,\smash{\overline M}\vphantom{M})\
\over\, [ 8\pi^2 (S + \overline S )]^n} \, , \cr
H_{I J}\ &=\ \sum_{n=0}^\infty {H^{(n)}_{I J} (M,\smash{\overline M}\vphantom{M})\ \over\,
[8\pi^2 (S + \overline S )]^n} \, , }
\eqn\dilatonkol
$$
where
$\hat K^{(n)}$, $Z^{(n)}_{{\bar I} J}$ and $H^{(n)}_{IJ}$ do not depend
on the dilaton and their moduli dependence in general cannot be
further constrained
(they do depend on the details of the internal superconformal
field theory).\foot{$\hat K^{(1)}$ and $Z^{(1)}_{{\bar I} J}$
are the four-dimensional analogue of the Green-Schwarz term and have
recently been computed in some orbifold vacua [\AGNT].}
The scalar potential and the Yukawa
couplings $\tilde Y_{IJL}$ are determined
by the superpotential $W$ which is not renormalized
at any order in string perturbation theory.
The perturbative $W$ is completely independent of the dilaton $S$
but non-perturbative corrections can introduce
further dilaton (and moduli) dependence into $W$.
Expanding in $Q^I$ we have
$$
W=\hat W(S, M^i)+{1\over2}\tilde\mu_{IJ}(S, M^i)Q^IQ^J+
{1\over3}\tilde Y_{IJL}(M^i)Q^IQ^JQ^L+\cdots.
\eqn\Super
$$
where the `$\cdots$' stand for non-renormalizable interactions.
$\hat W$ is identically zero at any order in string perturbation
theory and arises only from non-perturbative physics.
(Similarly, the $S$-dependence in $\tilde\mu$ is induced at the
non-perturbative level.)
Without specifying the precise nature of such non-perturbative
effects they can be parameterized by $\hat W$.
We assume that $\hat W$ is such that it breaks
supersymmetry by generating non-vanishing
moduli $F$-terms $\vev{F^i}$ and/or a dilaton $F$-term $\vev{F^S}$.
To simplify our notation let us introduce an index $\phi$
which runs over both the moduli and dilaton direction, \ie\
$\phi = (i, S)$. Using this notation the $F$-terms are given by
$$
\smash{\overline F}\vphantom{F}^{\bar \phi} = \kappa^2 e^{\hat K/2} \hat K^{\bar{\phi} \phi}
(\partial_{\phi} \hat W + \hat W \partial_\phi \hat K)\ ,
\eqn\fterms$$
while the scale of supersymmetry breaking is parameterized
by the (complex) gravitino mass
$$m_{3/2}=\kappa^2 e^{ \hat K/2}\hat W.\eqn\mth$$
We further assume that, at the minimum of the potential,
a dilaton VEV $\vev S$ and moduli VEVs $\vev{M^i}$
are generated and hence the perturbative vacuum degeneracy is
(partially) lifted. Finally, the cosmological constant is assumed to
be zero which implies
\foot{
Recently various mechanisms have been studied which also include
low energy quantum corrections to the cosmological
constant [\KN,\FKZ]. Most of our analysis here is
insensitive to the details of the mechanism responsible for the
vanishing of the cosmological constant.}
$$
|m_{3/2}|^2 =\coeff13 \hat{K}_{\phi \bar{\phi}} F^\phi \smash{\overline F}\vphantom{F}^{\bar{\phi}} .
\eqn\cosmo$$
Under these assumptions (spelled out in more detail in ref.~[\KL]) soft
supersymmetry breaking terms are generated in the observable sector.
In particular, the potential for the observable matter scalars (which
we also call $Q^I$) contains the following soft supersymmetry breaking
terms:
$$
V^{({\rm SSB})}=m_{I{\bar J}}^2 Q^I \smash{\overline\matter}\vphantom{\matter}^{{\bar J}}\ +\
({1\over3}A_{IJL}Q^IQ^JQ^L+{1\over2}B_{IJ}Q^IQ^J+{\rm h.c.}),
\eqn\VSSB$$
where the parameters $m^2, A, B$ are moduli and dilaton dependent and
not necessarily flavour diagonal [\IL, \KL, \BIM]. Specifically,
$$m^2_{I\bar J}=|m_{3/2}|^2Z_{I\bar J}-F^\phi\overline F^{\bar{\phi} }
R_{\phi\bar{\phi} I\bar J},\eqn\MsIJ$$
where the flavour dependence can arise through the (perturbative)
curvature couplings
$$R_{\phi \bar{\phi} I\bar J}=\partial_\phi \bar\partial_{\bar{\phi}}Z_{I\bar
J}-
\Gamma^N_{\phi I}Z_{N\bar L}\bar\Gamma^{\bar L}_{\bar{\phi} \bar J},\ \ \
\Gamma^N_{\phi I}=Z^{N\bar J}\partial_\phi Z_{\bar JI},\eqn\RGamma$$
and hence the standard assumption of universal (flavour independent)
soft masses might not hold. Furthermore,
$$
A_{IJL}=F^\phi (\partial_\phi Y_{IJL}-\Gamma^N_{\phi (I}Y_{JL)N}
+{1\over2}\hat K_\phi Y_{IJL}) \ ,
\qquad Y_{IJK} = e^{\hat K/2}\tilde Y_{IJK}\ ,
\eqn\AIJL
$$
where the first two terms are in general not proportional to the
Yukawa couplings. Similarly,
$$\eqalign{
B_{IJ}=&2|m_{3/2}|^2H_{IJ}+m_{3/2}F^\phi D_\phi H_{IJ}-\bar m_{3/2}
\overline F^{\bar \phi }\bar\partial_{\bar \phi }H_{IJ}-
\overline F^{\bar \phi}
F^\phi D_\phi\bar\partial_{\bar \phi }H_{IJ}\cr +&e^{\hat K/2}
[F^\phi (\partial_\phi \tilde\mu_{IJ}+\hat K_\phi \tilde\mu_{IJ}
-2\Gamma^K_{I\phi }\tilde\mu_{KJ})-\bar m_{3/2}\tilde\mu_{IJ}],\cr
}\eqn\muBIJ$$
(where $D_\phi H_{IJ}=\partial_\phi H_{IJ}-2\Gamma^K_{I\phi }H_{KJ}$)
is not necessarily proportional to
\foot{In the context of the MSSM there is only one $B$-term
allowed by gauge invariance and $R$-invariance and hence no flavour
dependent matrix exists. What the non-proportionality means in this case
is that $\mu$ can be zero with $B$ staying finite (or vice versa).}
$$
\mu_{IJ}=e^{\hat K/2}\tilde\mu_{IJ}+m_{3/2}H_{IJ}-
\overline F^{\bar{\phi} }\bar\partial_{\bar{\phi} }H_{IJ}\ .
\eqn\muIJ
$$
Hence, the parameters of $V^{({\rm SSB})}$ in eq.~\VSSB\ in general
do not satisfy the property of flavour-independence
which is commonly assumed in the MSSM.
Finally, there is one more soft term induced:
the gauginos acquire a mass given by
$$
\tilde m_a = F^\phi \partial_\phi \ln g_a^{-2}\ ,
\eqn\gauginomass$$
where $g_a^{-2}$ are the gauge couplings ($a$ labels the simple
factors in the gauge group).
In string theory the gauge couplings are universal
at the leading order and determined by the VEV of the dilaton.
Non-universality and moduli dependence is only introduced
via one-loop threshold corrections $\Delta_a$ [\DKLb]
$$
g_a^{-2}(M_{\rm String})= \mathop{\rm Re}\nolimits S + {\Delta_a (M,\smash{\overline M}\vphantom{M})\over 16\pi^2}\ .
\eqn\gaugecoup$$
$M_{\rm String}$ denotes the characteristic scale of string theory;
numerically $M_{\rm String} \approx 5\times 10^{17}$ GeV which is close to
the supersymmetric GUT-scale $M_{\rm GUT}\approx 3\times 10^{16}$ GeV.
In this paper we do not make any distinction between the two scales
and denote them both by $M_X$.
As a consequence of the very special field dependence of
the gauge couplings, the gaugino masses are universal
at the leading order and obey
$$
\tilde m_a = \tilde{m}_{1/2} + {\alpha_{X}\over 4\pi} \tilde m^{(1)}_a + \cdots\ ,
\eqn\gmassim $$
where
$$
\eqalign{
\tilde{m}_{1/2} =\ & {F^S\over (S + \overline S)}\ , \qquad
\tilde m^{(1)}_a = F^i \partial_i \Delta_a - F^S \Delta_a\, , \cr
\alpha_{X} =\ & {g^2 (M_X) \over 4\pi} = {1\over 2\pi (S+\overline S)}\ .}
\eqn\mgapprox $$
Note that the universal gaugino mass $\tilde{m}_{1/2}$ is directly proportional
to the dilaton $F$-term $F^S$ and that both $\tilde{m}_{1/2}$ and
$\tilde m^{(1)}_a$ are of order ${\cal O}(m_{3/2})$.
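As a quick consistency check of the leading term, inserting eq.~\gaugecoup\
into eq.~\gauginomass\ and keeping only the tree-level dilaton dependence gives
$$
\tilde m_a\ =\ F^S\, \partial_S \ln \mathop{\rm Re}\nolimits S\ +\ \cdots\ =\
{F^S \over S + \overline S}\ +\ \cdots\ ,
$$
which reproduces the universal term $\tilde{m}_{1/2}$ of eq.~\mgapprox.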
On the other hand, the scalar masses given by eq.~\MsIJ\
are in general flavour-dependent (non-universal)
already at the leading order of perturbation theory
when $Z_{I\bar J}$ is approximated by its tree level
contribution $Z_{I\bar J}^{(0)}$.
However, there are scenarios where universal scalar masses
and $A$-terms do appear at the leading order
and non-universality is only introduced at the one-loop level.
For those cases - which are the focus of this paper - we have
$$
\eqalign{
m^2_{I{\bar J}} =&\ m^{2}_0\, Z_{I\bar J}^{(0)}
+ {\alpha_{X}\over 4\pi}\, m^{2\, (1)}_{I{\bar J}}
+ \cdots\ , \cr
A_{IJL} =&\ A_0\, Y_{IJL}
+ {\alpha_{X}\over 4\pi}\, A^{(1)}_{IJL}+ \cdots\ . \cr}
\eqn\softexp $$
\chapter{Supersymmetry Breaking by the Dilaton}
Under the assumption that only a dilaton $F$-term $\vev{F^S}$
is generated by the non-perturbative physics, the soft parameters
simplify considerably at the leading order and the scalar masses
and $A$-terms are indeed universal. This is a consequence
of the universal couplings of the dilaton at the string tree level.
Specifically one finds [\KL, \BIM]
$$
m^{2}_0 = |m_{3/2}|^2 = \coeff13 {|F^S|^2\over (S + \overline S)} ,\qquad
A_0 = - {F^S \over (S + \overline S)}, \qquad
\tilde{m}_{1/2} = {F^S \over (S + \overline S)}, \eqn\FS
$$
while $B$ and $\mu$ are independent parameters.
(If, in addition, $\tilde \mu =0$ holds in eq.~\muIJ, $B$ and $\mu$
are related via $B = 2\, \bar{m}_{3/2}\, \mu$
but we do not assume this relation here.)
Given the soft terms \FS\ generated at $M_X$, standard
RG-analysis can be used to compute the supersymmetric mass spectrum
at low energies [\BLM].\foot{See also ref.~[\LNZ].}
One finds that all squark masses $m_{\squark}$ are
essentially degenerate with the gluino mass $\tilde m_3$
$$
m_{\squark} \simeq \tilde m_3 \simeq 5\, m_{3/2}\ ,
\eqn\squarks $$
whereas the slepton masses obey
$$
m_{\slepton} \simeq 0.3\, \tilde m_3 \simeq 1.5\, m_{3/2}\ .
\eqn\sleptons $$
In order to evade the direct experimental bounds [\PDG]
on scalar and gaugino masses, eqs.~\squarks, \sleptons\ imply
$$
m_{3/2} > 30\, {\rm GeV}\ .
\eqn\mbound $$
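(For instance, taking a direct slepton mass bound of roughly $45\ GeV$,
eq.~\sleptons\ gives $m_{3/2} > 45/1.5 = 30\ GeV$; the precise input bound
is of course subject to the same ${\cal O}(1)$ uncertainties.)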
In this section we study the physical properties of this scenario
beyond the leading order approximation. In particular, we assume
generic ${\cal O}(1)$ one-loop couplings $Z^{(1)}_{I{\bar J}}$
which induce flavour-dependent
scalar masses and non-proportional $A$-terms at the next order.
{}From eqs.~\MsIJ-\AIJL\ we learn (using \FS)
$$
\eqalign{
m^{2\, (1)}_{I{\bar J}} =&-5\ |m_{3/2}|^2 Z^{(1)}_{I{\bar J}}\sim
{\cal O}(m_{3/2}^2),\cr
A_{IJL}^{(1)} =&{F^S\over(S+\overline S)}\left(-\hat{K}^{(1)}\, Y_{IJL}
+ 3 Z^{(1)}_{I{\bar J}} Z ^{{\bar J} N \, (0)}\, Y_{NJL}
+ \partial_{\bar\jmath} \hat{K}^{(1)} \hat{K}^{(0)\, {\bar\jmath} i} D_i Y_{IJL}\right)\cr
\sim & {\cal O}(m_{3/2} Y), \cr
}
\eqn\FScorr $$
where $Z^{(0)},Z^{(1)}$ and $\hat K^{(1)}$ are all functions of ${\cal O}(1)$.
\section{Constraints from Flavor Changing Neutral Currents}
Let us first focus on the constraints implied by the smallness of
FCNC. We use the notation of ref.~[\NS] and the calculations of
ref.~[\GM].\foot{See also refs.~[\HKT].}
The experimental bounds from FCNC constrain the
sfermion masses at the weak scale $M^{f2}$ ($f=u,d,\ell$)
which are determined in terms of the soft
input parameters \FS\ and \FScorr\ generated at the high energy
scale $M_X$. In the basis where fermion mass-matrices are
diagonal and gaugino couplings are diagonal, the sfermion masses
appear in $3\times3$ submatrices,
$$M^{f2}=\pmatrix{M^{f2}_{LL}&M^{f2}_{LR}\cr
M^{f2}_{RL}&M^{f2}_{RR}\cr},
\eqn\sfmasses$$
where the soft scalar masses $m^2_{I{\bar J}}$ contribute to the
diagonal blocks $M^{f2}_{LL}, M^{f2}_{RR}$ while
the $A$-terms directly determine $M^{f2}_{LR}$ [\MSSM].
\foot{The diagonal elements in $M_{LR}^2$ also depend on $\mu$.}
As SUSY breaking by the dilaton leads to approximately
degenerate sfermions in each sector, it is convenient
to define the average sfermion mass-squared, $m_{\tilde f}^2$.
FCNCs are then proportional to
$$(\delta^f_{MN})_{ij}={(M^{f2}_{MN})_{ij}\over m_{\tilde f}^2}\ ,
\quad i\neq j\ \
\eqn\deltaf$$
and the strongest constraints on non-universality arise
from the light generations.
(For squarks the bounds are particularly strong on the combination
$\VEV{\delta^f_{12}}=\sqrt{(\delta^f_{LL})_{12}(\delta^f_{RR})_{12}}$.)
One finds [\GM, \NS]
$$\eqalign{{\rm Re}\VEV{\delta^d_{12}}\leq6\times10^{-3}
\left({m_{\tilde{d}}\over1\ TeV}\right),&\quad
{\rm Re}(\delta^d_{LR})_{12}\leq8\times10^{-3}
\left({m_{\tilde{d}}\over1\ TeV}\right)\ ;\cr
{\rm Im}\VEV{\delta^d_{12}}\leq5\times10^{-4}
\left({m_{\tilde{d}}\over1\ TeV}\right),&\quad
{\rm Im}(\delta^d_{LR})_{12}\leq7\times10^{-4}
\left({m_{\tilde{d}}\over1\ TeV}\right),\cr
}\eqn\KBDmixMM$$
from $\Delta m_K$ and $\epsilon_K$, and
$$
(\delta^\ell_{MM})_{12}\leq1.5\times10^{-2}\left({m_{\tilde{\ell}}
\over0.3\ TeV}\right)^2,\quad (\delta^\ell_{LR})_{12}\leq5\times
10^{-6}\left({m_{\tilde{\ell}}\over0.3\ TeV}\right),
\eqn\megMM$$
from the bound on BR$(\mu\rightarrow e\gamma)$. The bounds
\KBDmixMM\ and \megMM\ have been evaluated under the assumption
$m_{\squark}\simeq\tilde m_{3}$ and $m_{\tilde\ell} \simeq 2\, \tilde m_{1}$
as appropriate for dilaton-induced SUSY breaking (\cf\ eq.~\squarks).
In the slepton sector the bound on $(\delta^\ell_{MM})_{12}$
also depends on
$(M^{\ell2}_{LR})_{22}\approx m_\mu\left[{A_{\mu\mu H_d}\over
Y_{\mu\mu H_d}}+\mu^*{\VEV{H_u}^*\over\VEV{H_d}}\right]$
and we have used the value
$(M^{\ell2}_{LR})_{22} = -3.8\, m_\mu m_{3/2}$ in \megMM\
as a characteristic value for the dilaton scenario.
\foot{$(M^{\ell2}_{LR})_{22}$ depends on a phase
$\phi_B$, defined in the next section. Here we take
$\phi_B=0$ which gives the weakest constraint.}
All bounds quoted are only accurate up to factors of ${\cal O}(1)$
due to hadronic uncertainties in the squark sector and the
dependence on $(M^{\ell2}_{LR})_{22} $ in the slepton sector.
Finally, the bounds from $\Delta m_B$, $\Delta m_D$ and radiative
$\tau$ decays are much milder than \KBDmixMM\ and \megMM\
and play no role in our analysis.
The experimental bounds \KBDmixMM\ and \megMM\ can now be
compared with the theoretical `predictions' of the dilaton scenario
which follow from eqs.~\FS\ and \FScorr.
Let us first note that even for the universal boundary
conditions \FS\ renormalization effects induce small $\delta$'s
at low energies which obey \KBDmixMM\ and \megMM\ [\GM, \HKT].
The point we want to study here is the implication of the
non-universality implied by \FScorr. The running of the off-diagonal
mass-matrix elements of the first two generations
is negligibly small [\MSSM] and hence we can estimate at the weak scale:
$$\eqalign{
(\delta^q_{MM})_{12}&\simeq {\alpha_{X}\over4\pi}
{m^{2\, (1)}_{12}\over m_{\squark}^2}\simeq 1.2\times10^{-4},\cr
(\delta^d_{LR})_{12}&\simeq \ 3\, {\alpha_{X}\over4\pi }
{m_s m_{3/2} \over m_{\squark}^2}
\simeq4\times10^{-7}\left({1\ TeV\over m_{\squark}}\right),\cr
(\delta^\ell_{MM})_{12}&\simeq{\alpha_{X}\over4\pi}
{m^{2\, (1)}_{12}\over m_{\slepton}^2}\simeq 1.5\times10^{-3},\cr
(\delta^\ell_{LR})_{12}&\simeq 1.5\, {\alpha_{X}\over4\pi }
{m_\mu m_{3/2}\over m_{\slepton}^2}
\simeq1\times10^{-6}\left({0.3\ TeV\over m_{\slepton}}\right),\cr }
\eqn\deltas$$
where we used eqs.~\squarks, \sleptons, \FScorr\ and $\alpha_{X}=1/24$.
Also \deltas\ are only order of magnitude estimates and factors
of ${\cal O}(1)$ are neglected.
Comparing \deltas\ with \KBDmixMM, \megMM\ we find that
in the squark sector the only potentially interesting bound arises
from $\epsilon_K$. Assuming phases of ${\cal O}(1)$ in the mass matrix
\sfmasses\ or equivalently ${\rm Im}(\delta^d_{MM})_{12}\approx
{\rm Re}(\delta^d_{MM})_{12}$, we find that \KBDmixMM\
can be satisfied by slightly raising $m_{\tilde{d}}$,
$$m_{\tilde{d}}\geq180\ GeV\ \Longrightarrow\ \tilde m_3\geq180\ GeV
\quad (m_{3/2}\geq 36\ GeV).
\eqn\mgq$$
In the slepton sector
the constraint is stronger and \megMM\ can only be satisfied for
$$m_{\tilde{\ell}}\geq 135\ GeV\ \Longrightarrow\ \tilde m_3\geq450\ GeV
\quad(m_{3/2}\geq 90\ GeV).
\eqn\mgl$$
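(The number can be traced to the $(\delta^\ell_{LR})_{12}$ estimate:
comparing eq.~\deltas\ with eq.~\megMM\ gives
$(m_{\tilde{\ell}}/0.3\ TeV)^2 \geq 1/5$ and hence
$m_{\tilde{\ell}} \geq 0.3\ TeV/\sqrt5 \simeq 135\ GeV$.)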
The fact that the stronger constraint arises in the slepton sector is
a consequence of the large renormalization effect in the squark sector
due to the gluino mass which enhances the average squark masses and
therefore weakens the FCNC constraints [\DLK, \BIM, \CEKLP].
To summarize, when SUSY is broken by the dilaton, universality
and proportionality are violated
at the string loop level. The effect on mass differences
in the various neutral meson systems is small. If the phase in
the universality violating terms is of ${\cal O}(1)$, a lower
bound on the down squark masses arises, $m_{\tilde{d}}\geq 180\ GeV$.
The effects on the decay $\mu\rightarrow e\gamma$ due to violation of either
universality or proportionality are more significant and give a
lower bound on the charged slepton masses,
$m_{\tilde{\ell}}\geq 135\ GeV$.
As all sfermion masses are fixed by the gluino mass in this scenario,
we conclude that the most stringent constraint is the one from the
leptonic sector and requires
$\tilde m_{3}\geq450\ GeV$ (or equivalently $m_{3/2} \geq 90\ GeV).$
However, it should be stressed that such estimates are only accurate
up to factors of ${\cal O}(1)$.
\section{Constraints from CP Violation}
In the previous section we investigated the effects of violation
of universality and proportionality by string loop effects.
In this section we study then CP violating effects
that arise at the leading order and are implied by eqs.~\FS.
Such effects are
constrained by the upper bounds on the electric dipole moments
of the neutron (EDMN) and of various atoms and molecules.
When both universality and proportionality hold, there are, in general,
two new CP violating phases (in addition to the CKM phase $\delta_{KM}$
and the strong CP phase $\theta_{QCD}$) [\DGH]:
$$
\phi_A\equiv\ \arg\left(A_0\, \tilde{m}_{1/2}^*\right),\qquad
\phi_B\equiv\ \arg\left(B_0\, \tilde{m}_{1/2}^*\right),
\eqn\newphases$$
(where $B_0=B_{IJ}/\mu_{IJ}$).
In eq.~\mth\ we defined $m_{3/2}$ as a {\it complex}
quantity, its complex conjugate is
$\bar m_{3/2}=\kappa^2 e^{\hat K/2}\hat{\bar W}$.
Using eqs.~\FS\ we conclude that $\phi_A$ vanishes at tree level
while there is no significant simplification for $\phi_B$ and we
expect [\BIM]
$$\phi_A={\cal O}\left({\alpha_{X}\over4\pi}\right),
\qquad \phi_B={\cal O}\left(1\right)\ .
\eqn\phiAloop$$
The contributions to the EDMs of the neutron and of various atoms
from $\phi_A$ and $\phi_B$ were estimated in ref.~[\FPT].
The appropriate modifications of their estimates to our case read,
in the limit $\phi_B \gg \phi_A$,
\foot{Strictly speaking one should also take the
renormalization of $\phi_B$ into account. However,
in ref.~[\BV] it was shown that $\phi_B$ does not renormalize
and therefore we can use the boundary values at $M_X$.}
$$|d_{\rm N}|\sim 1.4\times10^{-24}\ e\ {\rm cm}\
\left({100\ GeV\over m_{3/2}}\right)^2
\left(\sin\phi_B\right)\eqn\EDMN$$
(where the leading contributions come from the light quark EDMs
and CDMs),
$$|d_{\rm Tl}|\sim 1.6\times10^{-22}\ e\ {\rm cm}\
\left({100\ GeV\over m_{3/2}}\right)^2
\left(\sin\phi_B\right)\eqn\EDMCs$$
(where the leading contribution comes from the EDM of the electron),
and
$$|d_{\rm Hg}|\sim 3\times10^{-26}\ e\ {\rm cm}\
\left({100\ GeV\over m_{3/2}}\right)^2
\left(\sin\phi_B\right)\eqn\EDMHg$$
(where the leading contribution comes from the nonderivative
nucleon-nucleon coupling). The experimental bounds
[\AlSm--\Jaco],
$$\eqalign{|d_{\rm N }|\leq&\ 1.2\times10^{-25}\ e\ {\rm cm},\cr
|d_{\rm Tl}|\leq&\ 6.6\times10^{-24}\ e\ {\rm cm},\cr
|d_{\rm Hg}|\leq&\ 1.3\times10^{-27}\ e\ {\rm cm},\cr}$$
require
$$m_{3/2}\geq480\ GeV\ \sqrt{\sin\phi_B}.\eqn\edmB$$
(We have also checked the bounds from $d_{\rm Cs}$, $d_{\rm Xe}$
and $d_{\rm TlF}$ and found that they are weaker.)
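(As a quick check, the strongest constraint comes from $d_{\rm Hg}$:
comparing \EDMHg\ with its experimental bound gives
$m_{3/2}\geq 100\ GeV\ \sqrt{{3\times 10^{-26}\over 1.3\times 10^{-27}}
\sin\phi_B}\simeq 480\ GeV\ \sqrt{\sin\phi_B}$, which is eq.~\edmB.)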
Even for $\sin\phi_B\sim0.1$, we need $m_{\tilde g}\geq800\ GeV$
which is stronger than any of the FCNC bounds \mgq, \mgl.
Finally, we note that if
$\phi_B={\cal O}({\alpha_{X}\over4\pi})$, then \edmB\ is satisfied
for $m_{3/2}\geq30\ GeV$, which
coincides with the direct limit \mbound.
To summarize, of the two new CP violating phases, one
vanishes at string tree level and poses no phenomenological problems.
The other is expected, in general, to be of ${\cal O}(1)$, in which
case it would require gluino mass above $800\ GeV$. Under special
circumstances it could be suppressed,
\foot{For example, when $\tilde\mu=0$ and $\partial_S \hat W/\hat{W}$ is
real. Different mechanisms are proposed in refs.~[\Japb, \choi].}
but we see no simple mechanism to guarantee its vanishing.
\chapter{Supersymmetry Breaking induced by the Moduli}
\section{Non-Universal Soft Terms}
If the dominant source of supersymmetry breaking are moduli $F$-terms
$\vev{F^i}$, the soft scalar masses are generically non-universal
at the string tree level and of ${\cal O}(m_{3/2})$. The $A$-terms are
not proportional to the Yukawa couplings and of ${\cal O}(m_{3/2})$.
The gaugino masses are also non-universal
but, more importantly, they are suppressed since $F^S \approx 0$
implies $\tilde{m}_{1/2}\approx 0$ via eq.~\mgapprox. Instead we have
$$
\tilde m_a = {\alpha_{X}\over 4\pi}\, \tilde m_a^{(1)} \ ,
\eqn\msuppression$$
where $\tilde m_a^{(1)} = {\cal O}(m_{3/2})$. The current lower bound on the
gluino mass [\PDG] implies
$$
150\ GeV < \tilde m_3 (M_{Z}) = 3\, \tilde m_3 (M_X)
\simeq 3\, {\alpha_{X}\over 4\pi}\, m_{3/2}\ ,
\eqn\gluinobound$$
or equivalently
$m_{3/2} \gsim 15\ TeV$.\foot{A similar bound follows from
the charginos. Again, this bound is correct only up to factors of
${\cal O}(1)$ and in specific models a smaller $m_{3/2}$ might appear [\CCM].}
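(Numerically, with $\alpha_{X}=1/24$, eq.~\gluinobound\ reads
$150\ GeV < 3\times (3.3\times 10^{-3})\, m_{3/2}\simeq 10^{-2}\, m_{3/2}$,
whence $m_{3/2}\gsim 15\ TeV$.)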
Generically, this leads to squark and
slepton masses of the same order of magnitude
and hence radiative electroweak symmetry breaking
requires a major fine-tuning in order to keep $M_{Z}$ at 90 GeV [\BaGi].
However, large scalar masses also suppress the contributions to
FCNC processes.
For a small ratio $\tilde m_3^2/m_{\tilde f}^2\simeq10^{-4}$ (which follows from
eq.~\gluinobound), the experimental constraints slightly change
compared to \KBDmixMM\ and \megMM\ and now read
$$
\eqalign{
{\rm Re}\VEV{\delta^d_{12}}\leq 1.3 \times10^{-2}
\left({m_{\tilde{d}}\over1\ TeV}\right),&\quad
{\rm Re}(\delta^d_{LR})_{12}\leq 5\times10^{-3}
\left({m_{\tilde{d}}\over1\ TeV}\right);\cr
{\rm Im}\VEV{\delta^d_{12}}\leq 1.1\times10^{-3}
\left({m_{\tilde{d}}\over1\ TeV}\right),&\quad
{\rm Im}(\delta^d_{LR})_{12}\leq 4\times10^{-4}
\left({m_{\tilde{d}}\over1\ TeV}\right),\cr
}\eqn\KBDmixMMm$$
from $\Delta m_K$ and $\epsilon_K$, and
$$
(\delta^\ell_{MM})_{12}\leq 2\times10^{-2}\left({m_{\tilde{\ell}}
\over0.3\ TeV}\right)^2,\quad
(\delta^\ell_{LR})_{12}\leq 1\times10^{-4}\left({m_{\tilde{\ell}}
\over0.3\ TeV}\right),
\eqn\megMMm$$
{}from the bound on BR$(\mu\rightarrow e\gamma)$.
For large scalar masses, the strongest constraint arises in the
down-squark sector from $K-\bar K$ mixing.
(The slepton constraint becomes weaker due to its scaling behaviour.)
For $\tilde m^2_a \ll m_{\tilde f}^2$ the $\delta$'s (defined in
eq.~\deltaf) do not renormalize and are given
directly by their boundary values at $M_X$.\foot{
Non-proportional $A$-terms can renormalize the $\delta$'s and
weaken the constraints
but this mechanism is not available for $\tilde m^2_a \ll m_{\tilde f}^2$ [\CEKLP].}
Off-diagonal scalar mass matrix elements of ${\cal O}(m_{3/2})$ then imply
$\vev{\delta^d_{12}} \simeq 1$ and hence
$ m_{3/2} >\ 75\ TeV$ (or even $m_{3/2}>\ 650\ TeV$
if ${\rm Im}\vev{\delta^d_{12}}\sim{\rm Re}\vev{\delta^d_{12}}$)
is required in order to satisfy eqs.~\KBDmixMMm. This bound
is much stronger than the direct bound \gluinobound.
To summarize, for supersymmetry breaking induced by moduli the
gaugino masses are suppressed and the experimental bound on the gluino
implies rather large squark and slepton masses.
At the same time flavor non-diagonal soft terms are present already
at the string tree level and despite the large scalar masses they
violate the FCNC bounds.
Thus, one needs at least an approximate universality at leading order.
\section{Universal Soft Terms}
In the moduli dominated scenario universal soft terms appear at
leading order whenever the couplings $Z_{I{\bar J}}^{(0)}$ satisfy
$$
Z_{I{\bar J}}^{(0)} = h(M,\smash{\overline M}\vphantom{M})\ \delta_{I{\bar J}}\ .
\eqn\Zuni$$
The unit matrix $\delta_{I{\bar J}}$ in eq.~\Zuni\
is not the only solution which guarantees universal soft terms.
Rather, there could be an arbitrary matrix which only depends on
moduli whose $F$-terms vanish but which is independent of all moduli
whose $F$-terms break supersymmetry. Indeed, $Z_{I{\bar J}}^{(0)}$ obeys
such a `split' in string vacua based on $(2,2)$
compactifications where the metric for the ${\bf 27}$ (of $E_6$)
only depends on the $(1,2)$ moduli through an overall
scale factor [\DKL]. (Similarly, the metric for the ${\bf \overline{27}}$
only depends on the $(1,1)$ moduli through an overall scale factor.)
Using eqs.~\RGamma\ and \Zuni\ we find
$$
\Gamma_{i I}^J = \delta_I^J\ \partial_i \ln h\ ,\qquad
R_{i{\bar\jmath} I{\bar J}} = Z_{I{\bar J}}^{(0)}\ \partial_i \partial_{\bar\jmath} \ln h\ .
\eqn\Runi$$
Inserting into \MsIJ\ and \AIJL\ results in
$$
\eqalign{
m^{2}_0\ =&\ |m_{3/2}|^2 - F^i \smash{\overline F}\vphantom{F}^{{\bar\jmath}} \partial_i \partial_{\bar\jmath} \ln h \
\sim {\cal O}(m_{3/2}^2)\ ,\cr
A_{IJL}\ =&\ F^i \left(\partial_i Y_{IJL} + Y_{IJL}(\half \partial_i \hat{K}
- 3 \partial_i \ln h)\right)\ \sim {\cal O}(m_{3/2} Y)}
\eqn\Suni$$
at the leading order (string tree level).
For Yukawa couplings which only depend weakly on the
supersymmetry breaking moduli (\ie\ $\partial_i \tilde{Y}_{IJL}\approx0$)
the $A$-terms are strictly proportional to the Yukawa couplings
($A_0 = e^{\hat{K}/2} F^i (\partial_i \hat{K} - 3 \partial_i \ln h)$).
However, similar to the dilaton case this universality
might be lost at the next order for generic $Z_{I{\bar J}}^{(1)}$
couplings which do not obey \Zuni\
and we can estimate the physical consequences
implied by such non-universality.
The main difference is that now the gaugino masses
are much smaller than the scalar masses
$\tilde m^2_a \ll m^{2}_0$ and therefore no renormalization effects enter
into the low energy scalar masses; they are directly determined by
their boundary value $m^{2}_0$. \foot{
In the dilaton-dominated scenario the renormalization of the
squark and slepton masses are driven by the gaugino masses
which is the reason for eqs.~\squarks, \sleptons.}
Similarly, the $\delta$'s do not
renormalize and for both sleptons and squarks we have
$$
(\delta^f_{MM})_{12}\simeq {\alpha_{X}\over4\pi}
{m^{2\, (1)}_{12}\over m^{2}_0}\simeq 3.3\times 10^{-3},
\eqn\deltaqm$$
where we used $m^{2\, (1)}_{12}\simeq m^{2}_0$.
(The $(\delta^f_{LR})_{12}$ are suppressed by an additional
factor of the appropriate fermion mass divided by $m_{\tilde f}$
and thus provide no additional constraint.)
Comparing the theoretical prediction (eq.~\deltaqm) with the
experimental bounds \KBDmixMMm, \megMMm, we see that due to the large
squark and slepton masses implied by the direct limits \gluinobound\
all FCNC constraints are automatically satisfied.
Finally, let us discuss a specific example of the moduli dominated
scenarios which is closely related to no-scale models [\noscale].
For the special case of
$$h=e^{\hat K/3}\eqn\LRZIJ$$
in eq.~\Zuni\ (which can also be found in $(2,2)$ vacua),
\Runi\ and \Suni\ imply\foot{Note that this does not require any
constraint on $\hat K$ itself.}
$$
\eqalign{
\Gamma^N_{iI}=&{1\over3}\delta^N_I\hat K_i\ ,\quad
R_{i{\bar\jmath} I\bar J}={1\over3}\hat K_{i{\bar\jmath} }Z_{I\bar J}\ ,\cr
m^2_0 =&0\ , \qquad A_{IJL}=e^{\hat K/2}F^i\partial_i\tilde Y_{IJL}\ .}
\eqn\LRRGamma$$
If, in addition, the moduli dependence
of the Yukawa couplings is weak, $\partial_i\tilde Y_{IJL}\approx0$,
the $A_{IJL}$ terms also vanish at tree level and we have instead
$$A_{IJL}={\cal O}({\alpha_{X}\over4\pi}m_{3/2} ).\eqn\LRAIJL$$
Inserting eqs.~\LRZIJ\ and \LRRGamma\ into eq.~\muBIJ\ gives, in
general, no special cancellations for $B_{IJ}$.
Note, however, that if $H_{IJ}\simeq 0$, then the scale of
$B_{IJ}$ is set by $\tilde\mu_{IJ}$ which is independent of
$m_{3/2}$. In such a case, $B_{IJ}$ could be much smaller
than $m_{3/2}^2$ independently of the SUSY breaking mechanism.
Therefore, in our analysis below, we allow $B_{IJ}$ to
take arbitrary values (as long as they are phenomenologically
acceptable).\foot{$B_{IJ} = 0$ can also be arranged by choosing
$\hat K$ appropriately [\noscale].}
The bound \gluinobound\ still holds but
now the scalar masses also vanish at leading order and one expects
$$
m^2_{I{\bar J}} \simeq {\alpha_X\over 4 \pi} m_{3/2}^2 >(850\, GeV)^2 .
\eqn\news$$
Hence, most parameters in the observable sector are decoupled from
$m_{3/2}$ at the leading order and only arise from string loop effects
and with the appropriate suppression. However, the contributions
to FCNC processes are generically too large in this scenario.
The bounds are somewhat different from \KBDmixMMm\ and \megMMm\
because in this case $\tilde m_3^2/m_{\tilde f}^2 = 10^{-2}$:
$$
\eqalign{
{\rm Re}\VEV{\delta^d_{12}}\leq 1 \times10^{-2}
\left({m_{\tilde{d}}\over1\ TeV}\right),&\quad
{\rm Re}(\delta^d_{LR})_{12}\leq 6\times10^{-3}
\left({m_{\tilde{d}}\over1\ TeV}\right);\cr
{\rm Im}\VEV{\delta^d_{12}}\leq 8\times10^{-4}
\left({m_{\tilde{d}}\over1\ TeV}\right),&\quad
{\rm Im}(\delta^d_{LR})_{12}\leq 5\times10^{-4}
\left({m_{\tilde{d}}\over1\ TeV}\right),\cr
}\eqn\KBDmixMMn$$
$$
(\delta^\ell_{MM})_{12}\leq 1\times10^{-1}\left({m_{\tilde{\ell}}
\over0.3\ TeV}\right)^2,\quad
(\delta^\ell_{LR})_{12}\leq 1\times10^{-5}\left({m_{\tilde{\ell}}
\over0.3\ TeV}\right).
\eqn\megMMn$$
The strongest constraint again arises in the
down-squark sector from $K-\bar K$ mixing.
For off-diagonal mass matrix elements of the same order as the
average scalar masses, one has $(\delta^f_{MM})_{12} ={\cal O}(1)$ which
implies $m_{\tilde{d}}>\ 100\ TeV$ (or even $m_{\tilde{d}}>\ 1000\ TeV$
if ${\rm Im}\vev{\delta^d_{12}}\sim{\rm Re}\vev{\delta^d_{12}}$).
The bounds on $m_{3/2}$ from electric dipole moments are of ${\cal O}(1\
TeV)$ for phases of ${\cal O}(1)$. Thus, with $m_{3/2}\geq{\cal O}(10\ TeV)$
these bounds are always satisfied.
\section{Cosmological implications}
The existence of light moduli, $M_{i}\sim M_Z$, with couplings to
observable particles of order $1/m_P$, poses severe cosmological
problems [\CFKRG--\CCQR]. Such moduli are likely to dominate the
matter density of the universe until their decay. When they
decay, at time $\tau_{i}\sim m_P^2/M_{i}^3$, they give a reheat
temperature $T_R\sim\sqrt{m_P/\tau_{i}}\sim10^{-6}\ GeV$,
too low for successful nucleosynthesis ($T_{NS}\sim10^{-3}\ GeV$).
The cosmological implications of the moduli are drastically
different if their masses are much higher than $M_Z$.
A particularly interesting range is $M_{i}\sim {\rm tens\ of}\ TeV$.
If this is the typical mass scale of the moduli, then [\BCMN,\RaTo]
\item{1.} The universe becomes matter dominated by the heavy moduli
long before they decay if the Hubble constant
during inflation is larger than the moduli masses.
\item{2.} The moduli would decay at time $\tau_{i}\sim
{m_P^2\over M_{i}^3}\sim1\ sec$. They will give a reheat temperature
of $T_R\sim\sqrt{m_P/\tau_{i}}\sim{\rm a\ few}\ MeV$,
just right for nucleosynthesis.
\item{3.} All decay products will thermalize very fast: with typical
number density $n_P\sim T_R^4/M_{i}\sim10^{-16}\ GeV^3$,
hadronic-interaction cross section $\sigma\sim1/f_\pi^2$ and
initial velocity $v\sim1$, the thermalization rate $\sigma n_P v\sim
10^{-14}\ GeV$ is much faster than the expansion rate.
\item{4.} Upon thermalization, the number of photons increases to
$n_\gamma\sim10^{-9}\ GeV^3$, but (as baryon multiplicity in hadron
scattering is ${\cal O}(1-10)$) the number of baryons remains
essentially unchanged, $n_{B+\bar B}\sim n_P\sim10^{-7}n_\gamma$.
(If CP- and B-violating interactions -- either directly in moduli
couplings or indirectly in SUSY interactions -- induce an asymmetry
${n_B-n_{\bar B}\over n_B+n_{\bar B}}\sim10^{-3}$, it would lead
to the required baryon asymmetry. However, such a large asymmetry
is unlikely, as a suppression factor $\leq{\cal O}({\alpha_s\over\pi})$
is unavoidable.)
Thus, while light moduli pose serious problems to nucleosynthesis,
heavy moduli ($M_{i}\sim100\ TeV$) could actually be {\it responsible}
for nucleosynthesis.\foot{For another solution of the cosmological
moduli problem, that does not require heavy moduli, see ref. [\RaTo].}
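To make the orders of magnitude in items 1--4 concrete, the following
Julia sketch (ours; $M_{i}=3\times10^4\ GeV$ is an illustrative value for
``tens of $TeV$'') evaluates the decay time and the reheat temperature
for light and heavy moduli.
\begin{verbatim}
# tau_i ~ m_P^2/M_i^3 and T_R ~ sqrt(m_P/tau_i); masses in GeV,
# converted to seconds with 1 GeV^-1 = 6.58e-25 s.
m_P = 1.2e19
for M_i in (91.0, 3e4)             # M_i ~ M_Z  vs.  M_i ~ tens of TeV
    tau = m_P^2 / M_i^3            # lifetime in GeV^-1
    T_R = sqrt(m_P / tau)          # reheat temperature in GeV
    println("M_i = ", M_i, " GeV:  tau ~ ", tau * 6.58e-25,
            " s,  T_R ~ ", T_R, " GeV")
end
# M_i ~ M_Z gives T_R ~ 2.5e-7 GeV (too low for nucleosynthesis);
# M_i ~ 30 TeV gives tau ~ a few sec and T_R ~ 1.5 MeV, as quoted above.
\end{verbatim}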
{}From eqs.~\msuppression, \LRAIJL\ and \news, we
learn that when (a) SUSY is broken by the moduli, (b) $Z_{I{\bar J}}^{(0)}=
e^{\hat K/3}\delta_{I{\bar J}}$,
and (c) the moduli dependence of the Yukawa couplings is weak, then
$\tilde m_a,\ A_{IJL}\ \ll\ m_{\tilde f}\ \ll m_{3/2}$ while
the moduli masses are $M_i={\cal O}(m_{3/2})$.\foot{ The moduli masses can
only be much lower than $m_{3/2}$ for special $\hat K$ and $\hat W$
[\noscale].} The direct experimental bound on $\tilde m_3$ implies then
$$
\tilde m_3\gsim150\ GeV,\ \ m_{\tilde f}\gsim900\ GeV,\ \ M_i\gsim15\ TeV.
\eqn\hierarchy$$
In this scenario, the moduli masses are necessarily heavy and
consequently the cosmological problems related to light moduli
can be evaded. However, the model faces two problems. First,
in the previous section we found that if universality is violated
at the string one-loop level, then $m_{\tilde f}$ (and consequently
all other scales) should be at least two orders of magnitude
above the bound \hierarchy. In this case, a major fine-tuning
(of order ${M_Z^2\over m_{\tilde f}^2}\sim10^{-6}$) is required
to produce the correct electroweak breaking scale, making
this scenario very unattractive. In order that it
remains viable, there should exist a mechanism that would guarantee
universality to high enough string loop level
that the various scales actually reside not far above the
lower bounds \hierarchy. Second, even if such a mechanism does exist,
the natural scale for $M_Z$ would still be of ${\cal O}(m_{\tilde f})$.
We were able to show, however, that with fine-tuning of
${\cal O}({\alpha_{X}\over 4\pi})$ (of either $m_t$ or $m_{\tilde t}$)
and $\mu = {\cal O}({\alpha_{X}\over 4\pi} m_{3/2}),
B={\cal O}({\alpha_{X}\over4\pi}m_{3/2}^2)$ we get the correct scale for $M_Z$.
\chapter{Conclusion}
In this paper we analyzed the effects of string loop corrections on
rare processes at the weak scale. Since only limited information about
these corrections is currently available, we estimated their
typical order of magnitude and compared them with the stringent
bounds implied by the small FCNC.
We find that in the dilaton scenario the experimental bounds can only
be satisfied by raising the supersymmetry breaking scale,
$$\tilde m_3\gsim450\ GeV,$$
which is a factor of 3 above the scale
required by the direct experimental limits.
For CP-violating phases of ${\cal O}(1)$, constraints from EDMN require an
even larger scale,
$$\tilde m_3\gsim2.4\ TeV\ \sqrt{\sin\phi_B}.$$
However, both estimates neglect factors of ${\cal O}(1)$.
In the moduli scenario the gaugino masses appear only
as string loop corrections and are therefore
hierarchically smaller than the scalar masses,
$$\tilde m_3\gsim150\ GeV,\ \ m_{\tilde f}\gsim15\ TeV.$$
Even with this hierarchy, generic tree level soft terms
violate the bounds from rare processes. For squarks to have their
masses at the lower bound, $m_{\tilde q}\sim15\ TeV$, the
soft scalar masses that appear at the string tree level
have to be universal. Such universality
does occur with extra conditions on the metric $Z_{I{\bar J}}$.
We have not explicitly considered the case
where moduli and dilaton $F$-terms are of the
same order of magnitude $F^S \sim F^i$. If eq.~\Zuni\ holds, the soft
parameters are essentially equivalent to the standard
MSSM parameters at leading order with independent
$\tilde{m}_{1/2}, m^{2}_0, \mu, A_0, B$. The gaugino masses are not suppressed
and therefore they drive the renormalization of the scalar masses.
Without repeating the entire analysis we may conclude that,
within the accuracy of our estimates, this leads to similar
constraints as were found in the dilaton scenario.
That is, the scale of supersymmetry breaking has to be raised
compared to the scale required by the direct experimental limits.
In no-scale-type scenarios the scalar masses also
appear only at the loop level, and
$$\tilde m_3\gsim150\ GeV,\ \ m_{\tilde f}\gsim900\ GeV,\ \ m_{3/2}\gsim15\ TeV.$$
This leads to the interesting possibility of a
large hierarchy between the observable sparticle masses
and the moduli masses with interesting cosmological consequences.
However, FCNC constraints push the scale to at least two orders
of magnitude above the lower bounds. For this scenario
to be realistic, universality has to hold well beyond the
string one-loop level.
\ack
We thank M.~Dine and N.~Seiberg for initiating
this investigation and T.~Banks, L.~Dixon,
F.~Eberlein, L.~Ib\'a\~nez, A.~K\"onig,
S.~Pokorski and S.~Thomas for useful discussions.
Y.N.~is an incumbent of the Ruth E.~Recanati Career Development chair,
and is supported in part by the Israel Commission for Basic Research,
by the United States -- Israel Binational Science Foundation (BSF),
and by the Minerva Foundation.
J.L.~ is supported by a Heisenberg fellowship of the DFG and would
like to thank the Weizmann Institute and Einstein Center
for hospitality and financial support.
\refout
\end
\section{Introduction}
To extend our knowledge of the nucleon structure far beyond what parton distribution functions (PDFs) tell us about longitudinal momentum distributions, we need a generalization of PDFs known as transverse momentum dependent parton distribution functions (TMDs). These functions also contain information on transverse parton momenta as well as on spin-orbit correlations.
TMDs are important since they play essential roles in the theoretical description of experimental quantities such as single spin asymmetries, which arise in various hard processes including semi-inclusive deep inelastic scattering (SIDIS), Drell-Yan processes, etc.\ \cite{SMC,HER,COM}. These functions are also known as the unintegrated parton distributions \cite{JCDS}.
\begin{figure}
\begin{center}
\begin{picture}(400,50)(0,0)
\label{graph} \SetColor{Black} \SetScale{1}
{\SetWidth{1.5}\Line(0,10)(30,10)
\Text(15,5)[t]{\small{$U$}}} \PText(35,18)(0)[t]{=}
\Line(40,10)(70,10)
\Text(55,5)[t]{\small{$u_0$}} \PText(75,18)(0)[t]{+}
\Text(83,5)[t]{\small{$u_0$}} \Text(100,5)[t]{\small{$u$}}
\Text(117,5)[t]{\small{$u_0$}}
\Line(80,10)(120,10)
\PText(125,18)(0)[t]{+}
\DashCArc(100,10)(12,0,180){2}
\Text(105,35)[t]{\small{$\pi^{0}$}}
\Vertex(88,10){1.5}
\Vertex(112,10){1.5}
\Line(130,10)(170,10)
\DashCArc(150,10)(12,0,180){2}
\Vertex(138,10){1.5}
\Vertex(162,10){1.5}
\Text(155,35)[t]{\small{$\pi^{+}$}} \PText(175,18)(0)[t]{+}
\Text(133,5)[t]{\small{$u_0$}} \Text(151,6)[t]{\small{$d$}}
\Text(168,5)[t]{\small{$u_0$}}
\Line(180,10)(220,10)
\DashCArc(200,10)(12,0,180){2}
\Vertex(188,10){1.5}
\Vertex(212,10){1.5}
\Line(180,10)(220,10)
\DashCArc(200,10)(12,0,180){2}
\Vertex(188,10){1.5}
\Vertex(212,10){1.5}
\Text(205,35)[t]{\small{$K^{+}$}} \PText(225,18)(0)[t]{+}
\Line(230,10)(270,10)
\Text(183,5)[t]{\small{$u_0$}} \Text(201,6)[t]{\small{$s$}}
\Text(218,5)[t]{\small{$u_0$}}
\GlueArc(250,10)(12,0,180){2}{8}
\Text(250,35)[t]{\small{$g$}}
\Vertex(238,10){1.5}
\Vertex(262,10){1.5}
\Text(233,5)[t]{\small{$u_0$}} \Text(250,6)[t]{\small{$u$}}
\Text(268,5)[t]{\small{$u_0$}} \PText(280,18)(0)[t]{+ ...}
\SetColor{Black}
\SetColor{Black} \SetScale{1}{\SetWidth{1.5}
\Line(0,-45)(30,-45)
\Text(15,-50)[t]{\small{$D$}}} \PText(35,-37)(0)[t]{=}
\Line(40,-45)(70,-45)
\Text(55,-50)[t]{\small{$d_0$}} \PText(75,-37)(0)[t]{+}
\Text(83,-50)[t]{\small{$d_0$}} \Text(100,-50)[t]{\small{$d$}}
\Text(117,-50)[t]{\small{$d_0$}}
\Line(80,-45)(120,-45)
\PText(125,-37)(0)[t]{+}
\DashCArc(100,-45)(12,0,180){2}
\Text(105,-20)[t]{\small{$\pi^{0}$}}
\Vertex(88,-45){1.5}
\Vertex(112,-45){1.5}
\Line(130,-45)(170,-45)
\DashCArc(150,-45)(12,0,180){2}
\Vertex(138,-45){1.5}
\Vertex(162,-45){1.5}
\Text(155,-20)[t]{\small{$\pi^{-}$}} \PText(175,-37)(0)[t]{+}
\Text(133,-50)[t]{\small{$d_0$}} \Text(151,-49)[t]{\small{$u$}}
\Text(168,-50)[t]{\small{$d_0$}}
\Line(180,-45)(220,-45)
\DashCArc(200,-45)(12,0,180){2}
\Vertex(188,-45){1.5}
\Vertex(212,-45){1.5}
\Text(205,-20)[t]{\small{$K^{0}$}} \PText(225,-37)(0)[t]{+}
\Line(230,-45)(270,-45)
\Text(183,-50)[t]{\small{$d_0$}} \Text(201,-49)[t]{\small{$s$}}
\Text(218,-50)[t]{\small{$d_0$}}
\GlueArc(250,-45)(12,0,180){2}{8}
\Text(250,-20)[t]{\small{$g$}}
\Vertex(238,-45){1.5}
\Vertex(262,-45){1.5}
\Text(233,-50)[t]{\small{$d_0$}} \Text(250,-49)[t]{\small{$d$}}
\Text(268,-50)[t]{\small{$d_0$}} \PText(280,-37)(0)[t]{+ ...}
\SetColor{Black}}
\end{picture}
\vspace{2cm}
\caption{\small Up ($U$) and down ($D$) constituent quarks. This figure is taken from Ref.~\cite{nymf}. }\label{fluctuation} \label{fig:1}
\end{center}
\end{figure}
TMDs are expected to provide a three-dimensional view of the parton distributions in momentum space. They therefore supply information complementary to what can be learned from the generalized parton densities \cite{MB,RP,MD,BJY}.
Compared with the integrated parton densities, our knowledge of the TMD parton distributions is limited. While many model calculations exist for the integrated parton distributions, relatively few are available for the TMD distributions.
The purpose of this work is to extract the unpolarized TMD quark and gluon distributions using the chiral quark model ($\chi QM$).\\
The main feature of the $\chi QM$ is its applicability at low $Q^2$ scales. At such scales chiral symmetry is broken, and the light quark masses are not negligible in comparison with the nucleon mass.\\
In the $\chi QM$, the bare quarks inside the nucleon are surrounded by clouds of Goldstone (GS) bosons and gluons \cite{myj,nymf}. Considering, at first approximation, the interactions which occur in this bound system, together with the relations between the transverse momenta of the bare quarks and of the GS bosons (gluons) in these interactions, we can obtain the TMD quark and gluon distributions inside the proton. These calculations require the proton TMD bare quark distributions, which can be computed from the solution of the Dirac equation with a harmonic oscillator potential.
\begin{figure*}
\begin{center}
\fcolorbox{white}{white}{
\begin{picture}(451,71) (231,-77)
\SetWidth{1.0}
\SetColor{Black}
\Line(232.128,-56.505)(351.247,-56.505)
\Line[dash,dashsize=1.527](281.761,-56.505)(310.777,-27.489)
\Text(232.128,-65.668)[lb]{\Large{\Black{$$}}}
\Text(341.32,-65.668)[lb]{\Large{\Black{$$}}}
\Text(284.761,-36.652)[lb]{{\Black{${\cal B}$}}}
\Text(354.301,-80.176)[lb]{{\Black{$$}}}
\Text(345.902,-69.486)[lb]{\Large{\Black{$$}}}
\Text(361.716,-81.703)[lb]{\large{\Black{$(a)$}}}
\Line(610.864,-58.032)(500.908,-58.032)
\Gluon(546.723,-58.032)(574.212,-30.543){2.644}{4}
\Text(548.723,-38.943)[lb]{{\Black{$g$}}}
\Text(500.908,-66.431)[lb]{\Large{\Black{$$}}}
\Text(601.701,-66.431)[lb]{\Large{\Black{$$}}}
\Text(616.209,-80.176)[lb]{\Large{\Black{$$}}}
\Text(610.864,-69.486)[lb]{\Large{\Black{$$}}}
\Text(630.915,-81.703)[lb]{\large{\Black{$(b)$}}}
\Text(236.71,-70.939)[lb]{{\Black{$q_i$}}}
\Text(346.665,-70.939)[lb]{{\Black{$q_j$}}}
\Text(506.253,-71.703)[lb]{{\Black{$q_i$}}}
\Text(610.1,-71.176)[lb]{{\Black{$q_j$}}}
\end{picture}
} \caption{\small (a) The fluctuation of a bare quark $q_{i}$ into
a GS boson ${\cal B}$ plus a struck quark $q_{j}$. (b) A bare quark $q_{i}$ emits a gluon and transforms to a constituent quark $q_{j}$.}\label{fig:2}
\end{center}
\end{figure*}
The plan of this paper is as follows. Applying the $\chi QM$, the TMD quark and gluon densities are calculated in section 2. For this purpose, we first investigate, in subsections 2.1 and 2.2, the interactions which occur at the bare quark--GS boson vertex and at the bare quark--gluon vertex in the $\chi QM$. In subsection 2.3 the required TMD bare quark distributions are computed. We give our results in section 3 and finally present our conclusions in section 4.
\section{TMD quark and gluon distributions in the chiral quark model}
In this section, we calculate the transverse momentum dependence of the unpolarized quark and gluon distribution functions applying the chiral quark model. In this model, which is used at low $Q^2$ scales, the important degrees of freedom are quarks, gluons and GS bosons. In the $\chi QM$, the nucleon consists of the bare up and down quarks $(u_0, d_0)$, which are surrounded by the clouds of GS bosons and gluons (Fig.1). We first investigate, in subsections 2.1 and 2.2, the different types of interactions which can occur at the vertices of Fig.1.
\subsection{The bare quark-GS boson vertex}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{ccc}
{\includegraphics[width=50mm,height=50mm]{TMDPupid.eps}} &
{\includegraphics[width=50mm,height=50mm]{TMDPugu.eps}} &
{\includegraphics[width=50mm,height=50mm]{TMDPskd.eps}} \\
\end{tabular}
\caption{ The three-dimensional representations of the transverse momentum dependent splitting functions $P_{u\pi^{-}/d}$, $P_{{ug}/{u}}$ and $P_{sk^{0}/d}$ with respect to $y$ and $p_T$. \label{fig:3}}
\end{center}
\end{figure*}
At first approximation, we consider one basic process which can occur at the bare quark--GS boson vertex of Fig.1. In this process, the bare quark $q_i$ fluctuates into the intermediate quark $q_j$ and a GS boson ${\cal B}$, as represented in Fig.2(a). This fluctuation is described by:
\begin{equation}
q_{j}(x,p_{T})=\int^{1}_{x}\frac{\textmd{d}y}{y}P_{j{\cal
B}/i}(y,p_{T})q_{0,i}(\frac{x}{y},p_{T})\;.\label{qq}
\end{equation}
In Eq.(\ref{qq}), $P_{j{\cal B}/i}$ is the splitting function that expresses the probability of finding the struck quark $q_j$ with longitudinal momentum fraction $y$ and transverse momentum ${p_{j}}_{T}$, together with the GS boson, which carries the longitudinal momentum fraction $1-y$ of the parent quark's momentum and the transverse momentum ${p_{\cal B}}_{T}$.\\
We can write the transverse momenta of the quark $q_j$ and the GS boson in terms of the transverse momentum of the bare quark $q_i$, $p_{T}$, and the intrinsic variables \cite{pasquini}:
\begin{eqnarray}\label{intrinsic}
{p_{j}}_{T}=k_{T}+yp_{T},~{p_{\cal B}}_{T}=-k_{T}+(1-y)p_{T}
\;.
\end{eqnarray}
Based on the Sullivan processes \cite{Suli}, we can suggest the TMD splitting function $P_{j{\cal B}/i}$ as follows:
\begin{eqnarray}\label{qsplit}
P_{j{\cal B}/i}(y,p_{T})=\frac{1}{8\pi^{2}}(\frac{g_{A}\bar{m}}{f})^{2}(1-y)\nonumber\\
\int^{t_{min}}_{-\Lambda^2_{\chi}} \frac{[(m_{i}-m_{j})^{2}-t]}{(t-m^{2}_{{\cal
B}})^{2}}\textmd{d}t\;,
\end{eqnarray}
where $\Lambda_{\chi}$ is the cut-off parameter, and $m_i$, $m_{j}$ and $m_{\cal B}$ are the masses of the quarks $q_i$, $q_j$ and of the GS boson, respectively. The parameter $t$ is defined as:
\begin{eqnarray}\label{t}
t=\frac{-[{p^2_{j}}_{T}+(1-y)[m^2_{j}-ym^2_{i}]]}{y}\nonumber\\
=\frac{-[(k_{T}+yp_{T})^2+(1-y)[m^2_{j}-ym^2_{i}]]}{y}\;.
\end{eqnarray}
Substituting the parameter $t$ into Eq.(\ref{qsplit}), we arrive at:
\begin{eqnarray}\label{qsplitting}
P_{j{\cal B}/i}(y,p_{T})=\int \textmd{d}k_{T}\frac{2(k_{T}+yp_{T})}{y^{2}(1-y)(m_{i}^{2}-M^{2}_{j{\cal B}})^{2}}\nonumber\\
((m_{j}-m_{i}y)^{2}+(k_{T}+yp_{T})^{2})\times \frac{1}{8\pi^{2}}(\frac{g_{A}\bar{m}}{f})^{2}.
\end{eqnarray}
In Eq.(\ref{qsplitting}), $g_{A}$ and $f$ denote the axial vector constant and the pseudo-scalar decay constant, respectively; $\bar{m}$ is the average mass of $q_i$ and $q_j$, and $M^{2}_{j{\cal B}}$ is the squared invariant mass of the final state:
\begin{equation}
M^{2}_{j{\cal B}}=\frac{m^2_{j}+(k_{T}+yp_{T})^2}{y}
+\frac{m^2_{{\cal B}}+(k_{T}+yp_{T})^2}{1-y}.
\end{equation}
It is obvious that $t_{min}$ in Eq.(\ref{qsplit}) can be obtained from Eq.(\ref{t}) by substituting $k_{T}=0$ in this equation \cite{EHQ}.
If we put $p_{T}=0$ in Eq.(\ref{t}), this equation is cast into its usual form, in which no transverse momentum dependence remains in the $\chi QM$ \cite{weber,weber1}. In this case the splitting function $P_{j{\cal B}/i}$ and also the $q_j$ distribution are given by the expressions used in Refs.\cite{nymf,swnpa,MA2005,MA2011}.
\subsection{The vertex of the bare quark-gluon}
A similar process can occur at the bare quark--gluon vertex, at first approximation (Fig.2(b)). In this process the bare quark $q_i$ emits a gluon and appears as the recoiled quark $q_j$. The TMD $q_j$ distribution has the following form:
\begin{equation}
q_{j}(x,p_{T})=\int^{1}_{x}\frac{\textmd{d}y}{y}P_{jg/i}(y,p_{T})q_{0,i}(\frac{x}{y},p_{T})\;.\label{qg}
\end{equation}
Here the related TMD splitting function $P_{jg/i}$ is:
\begin{eqnarray}\label{gsplitting}
P_{jg/i}(y,p_{T})=\int \textmd{d}k_{T} G^2_{jg/i}\frac{2(k_{T}+yp_{T})}{y^{2}(1-y)(m_{i}^{2}-M^{2}_{jg})^{2}}\nonumber\\
((m_{j}-m_{i}y)^{2}+(k_{T}+yp_{T})^{2})\times C_f\frac{{\alpha}_s(Q^2)}{4\pi}\;,
\end{eqnarray}
where $C_f$ is the color factor and the squared invariant mass of the final quark-gluon state, $M^{2}_{jg}$, is written as:
\begin{equation}
M^{2}_{jg}=\frac{m^2_{j}+(k_{T}+yp_{T})^2}{y}
+\frac{m^2_{g}+(k_{T}+yp_{T})^2}{1-y}\;.
\end{equation}
In this equation $m_g$ denotes the gluon mass.\\
In Eq.(\ref{gsplitting}), the vertex function $G_{jg/i}$ is defined as:
\begin{equation}
G_{jg/i}= \exp \left ( {m_i^2-M_{jg}^2
\over 2\Lambda_{\chi} ^2} \right ).
\end{equation}
The TMD gluon distribution is also calculated via the interaction of Fig.2(b).
We use the following notation for the convolution integrals in Eqs.(\ref{qq}) and (\ref{qg}):
\begin{eqnarray}
P_{j{\cal B}/i}\otimes q_0=\int^{1}_{x}\frac{\textmd{d}y}{y}P_{j{\cal
B}/i}(y,p_{T})q_{0}(\frac{x}{y},p_{T}),\nonumber\\
P_{jg/i}\otimes q_0=\int^{1}_{x}\frac{\textmd{d}y}{y}P_{jg/i}(y,p_{T})q_{0}(\frac{x}{y},p_{T}).
\end{eqnarray}
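As an illustration of how the convolutions in Eqs.(\ref{qq}) and (\ref{qg}) may be evaluated numerically, we give a minimal Julia sketch based on a trapezoidal quadrature; the functions \texttt{P\_toy} and \texttt{q0\_toy} are placeholders and do not correspond to the model input of this paper.
\begin{verbatim}
# Evaluate the convolution int_x^1 dy/y P(y,pT) q0(x/y,pT) numerically.
function convolve(P, q0, x, pT; n=1000)
    ys = range(x, 1.0; length=n + 1)        # quadrature grid on [x, 1]
    f(y) = P(y, pT) * q0(x / y, pT) / y     # integrand of the convolution
    h = step(ys)
    return h * (sum(f, ys) - (f(ys[1]) + f(ys[end])) / 2)  # trapezoid rule
end

P_toy(y, pT)  = (1 - y) * exp(-pT^2)        # placeholder splitting function
q0_toy(x, pT) = sqrt(x) * (1 - x)^3 * exp(-pT^2)
println(convolve(P_toy, q0_toy, 0.1, 0.2))
\end{verbatim}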
\subsection{TMD bare quark distribution functions}
In order to calculate the TMD distribution functions inside the proton using Eqs.(\ref{qq}) and (\ref{qg}), we should first compute the TMD bare quark distributions. For this purpose we use the solution of the Dirac equation with a harmonic oscillator potential \cite{nymf,phdt,ydm}. In this approach, the ground state wave function of the bare quark in momentum space, in terms of the two parameters $\rho$ and $R$, is obtained as \cite{nymf}:
\begin{equation}
\phi_{0}(p)=-\pi^{-\frac{3}{4}}R^{\frac{3}{2}}(1+\frac{3\rho^{2}}{2})^{-\frac{1}{2}}
e^{-\frac{p^{2}R^{2}}{2}}\chi_{s}\chi_{f}\chi_{c}\;,
\end{equation}
where $\chi_{s}$, $\chi_{f}$ and $\chi_{c}$ are the related spin, flavor and color parts of the wave function.\\
We consider the probability density as $\varrho=\phi^\dagger_{0}(p)\phi_{0}(p)$.\\
The TMD bare quark distribution, $f(x,p_{T})$, satisfies the following relation \cite{nymf,MA2009}:
\begin{eqnarray}
\int\varrho\delta(p^{0}-\sqrt{(p^{3})^{2}+({p}_{T})^{2}+m^{2}})\textmd{d}p^{0}\textmd{d}p^{3}\textmd{d}^{2}p_{T}\nonumber\\
=\int f(x,{p}_{T})\textmd{d}^{2}p_{T}\textmd{d}x\;.
\label{integer}
\end{eqnarray}
In the above equation, $p^0$, $\vec{p}=(p^1,p^2,p^3)$ and $m$ are the bare quark energy, 3-momentum and mass, respectively. In Eq.(\ref{integer}) we use the relations between the components of the four-momentum in standard and light-cone coordinates, so we can write \cite{nymf,MA2009}:
\begin{equation}
\textmd{d}p^{0}\textmd{d}p^{3}\textmd{d}^{2}p_{T}=\frac{1}{2}M_{t}\textmd{d}p^{-}\textmd{d}x\textmd{d}^{2}p_{T}\;.
\label{measure}
\end{equation}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
{\includegraphics[width=60mm,height=40mm]{TMDUx.eps}} &
{\includegraphics[width=60mm,height=40mm]{TMDDx.eps}} \\
{\includegraphics[width=60mm,height=40mm]{TMDSx.eps}} &
{\includegraphics[width=60mm,height=40mm]{TMDGx.eps}} \\
\end{tabular}
\caption{ The TMD quark and gluon distribution functions with respect to $x$ at $p_T=0.1, 0.2$ and $0.3~GeV$. \label{fig:4}}
\end{center}
\end{figure*}
Finally, by comparing both sides of Eq.(\ref{integer}), the TMD bare quark distribution is determined as \cite{nymf}:
\begin{eqnarray}\label{distrib}
f(x,{p}_{T})=\frac{1}{2}M_{t}R^{3}\pi^{-\frac{3}{2}}(1+\frac{3\rho^{2}}{2})^{-1}[1+\frac{({p}_{T})^{2}+m^{2}}{(M_{t}x)^{2}}]\nonumber\\
\times e^{-R^{2}({p}_{T})^{2}}e^{-\frac{R^{2}}{4}[M_{t}x-\frac{({p}_{T})^{2}+m^{2}}{M_{t}x}]^{2}};f=u_0,d_0.\nonumber\\
\end{eqnarray}
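For completeness, Eq.(\ref{distrib}) is straightforward to transcribe into code. In the Julia sketch below the parameter values are illustrative placeholders; the fitted values of $M_t$, $R$, $\rho$ and $m$ are those reported in Ref.\cite{nymf}.
\begin{verbatim}
# TMD bare-quark distribution of the equation above; dimensionful
# quantities in GeV (R in GeV^-1). Parameter values are placeholders.
function f_bare(x, pT; Mt=0.94, R=3.0, rho=0.4, m=0.3)
    pref = 0.5 * Mt * R^3 * pi^(-1.5) / (1 + 1.5 * rho^2)
    a = (pT^2 + m^2) / (Mt * x)            # recurring combination
    return pref * (1 + a / (Mt * x)) *
           exp(-R^2 * pT^2) * exp(-(R^2 / 4) * (Mt * x - a)^2)
end
println(f_bare(0.3, 0.1))
\end{verbatim}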
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
{\includegraphics[width=60mm,height=40mm]{TMDUp1.eps}} &
{\includegraphics[width=60mm,height=40mm]{TMDUp2.eps}} \\
{\includegraphics[width=60mm,height=40mm]{TMDDp1.eps}} &
{\includegraphics[width=60mm,height=40mm]{TMDDp2.eps}} \\
\end{tabular}
\caption{ The TMD distribution of $u$ and $d$ quarks at two sets of $x$ values. \label{fig:5}}
\end{center}
\end{figure*}
\begin{figure*}[htp]
\begin{center}
\begin{tabular}{cc}
{\includegraphics[width=60mm,height=40mm]{TMDSp1.eps}} &
{\includegraphics[width=60mm,height=40mm]{TMDSp2.eps}} \\
{\includegraphics[width=60mm,height=40mm]{TMDGp1.eps}} &
{\includegraphics[width=60mm,height=40mm]{TMDGp2.eps}} \\
\end{tabular}
\caption{ The TMD $s$ quark and gluon distributions with respect to $p_T$ at different values of $x$. \label{fig:6}}
\end{center}
\end{figure*}
According to the above discussion, the unpolarized TMD quark distributions are given as follows:
\begin{eqnarray}
u(x,p_{T})&=&Z_{u}u_{0}(x,p_{T})+P_{u\pi^{-}/d}\otimes d_{0}\nonumber\\
&&+\frac{1}{2}P_{u\pi^{0}/u}\otimes u_{0}+P_{{ug}/{u}}\otimes u_{0},\label{u(x)}
\end{eqnarray}
\begin{eqnarray}
d(x,p_{T})&=&Z_{d}d_{0}(x,p_{T})+P_{d\pi^{+}/u}\otimes
u_{0}\nonumber\\
&&+ \frac{1}{2}P_{d\pi^{0}/d}\otimes
d_{0}+P_{{dg}/{d}}\otimes d_{0},\label{d(x)}
\end{eqnarray}
\begin{equation}
\hspace{-0.5 cm}s\left( x,p_{T} \right) = {P_{{{s{k^ + }}
\mathord{\left/
{\vphantom {{s{k^ + }} u}} \right.
\kern-\nulldelimiterspace} u}}} \otimes {u_0} + {P_{{{s{k^0}} \mathord{\left/
{\vphantom {{s{k^0}} d}} \right.
\kern-\nulldelimiterspace} d}}} \otimes {d_0} .~\label{sdis}
\end{equation}
$Z_u$ and $Z_d$ are the renormalization constants of the $u$ and $d$ bare quark distributions.\\
Finally, the TMD gluon distribution function is written as:
\begin{equation}
g(x,p_{T})=P_{{ug}/{u}}\otimes u_{0}+P_{{dg}/{d}}\otimes d_{0}.\label{g(x)}
\end{equation}
Now we are able to calculate the TMD quark and gluon densities using the above relations.
\section{Results}
We calculate the unpolarized TMD quark and gluon distribution functions inside the proton using the effective chiral quark model at $Q^2=0.35~GeV^2$. To this end we first compute the TMD splitting functions and the bare quark densities discussed in the previous section.
In Fig.\ref{fig:3}, we have displayed the three-dimensional representations of the TMD splitting functions $P_{u\pi^{-}/d}$, $P_{sk^{0}/d}$ and $P_{{ug}/{u}}$ with respect to $y$ and $p_{T}$.
We have depicted the TMD $u$, $d$ and $s$ quark and also gluon distributions with respect to $x$ at three values of $p_{T}$ ($p_{T}=0.1, 0.2$ and $0.3~GeV$) in Fig.\ref{fig:4}.\\
As expected, the TMD densities fall off as the $p_{T}$ value increases; the probability of finding the partons at larger values of $p_{T}$ is smaller. This behaviour of the TMD distributions is also seen in Refs.\cite{HA,AVE,AVE1,ZAVADA}.
In Figs.\ref{fig:5} and \ref{fig:6} we have plotted the TMD quark and gluon densities with respect to $p_{T}$ at different values of $x$. It is found that these $p_{T}$ distributions have forms very close to Gaussian distributions \cite{ZAVADA} and that the width of these densities is $x$ dependent \cite{ZAVADA}.\\
It should be pointed out that we display each of the $p_T$ densities for two sets of $x$ values. In the first set, which contains small values of $x$, the TMD distributions grow with increasing $x$, in contrast to the behaviour of the second set, in which the $p_T$ densities decrease as $x$ increases.
Our results arise entirely from a theoretical framework. As can be seen in Figs.\ref{fig:4}-\ref{fig:6}, these results exhibit appropriate behaviour in comparison with the results of Refs.\cite{HA,AVE,AVE1,ZAVADA}. Furthermore, we have also calculated the TMD distributions of the strange ($s$) quark and the gluon, using our theoretical model. Our investigations indicate that, within current quark models, computations of the TMD distributions for the gluon and the $s$ quark have not been carried out so far.
\section{Conclusion}
A good understanding of parton densities can be obtained by studying deep-inelastic lepton-nucleon scattering. Such studies show how the momenta of the partons are distributed parallel to the nucleon momentum. To go beyond this one-dimensional picture of the quark and gluon substructure, we need to investigate the transverse momentum dependence of the parton densities. This can be done by taking into account the transverse momenta of hadrons produced in processes such as semi-inclusive DIS, or of dileptons produced in the Drell-Yan process.
In this article we used for the first time the modified $\chi QM$ to obtain the transverse momentum dependence of the parton densities in the unpolarized case. For this purpose we used the Sullivan processes and proposed the TMD splitting functions to be used in the $\chi QM$. Another key ingredient in our calculations is obtaining the TMD bare quark distributions. This is possible if we properly convert the integration measure to light-cone coordinates. The TMD parton densities we obtained show acceptable behaviour with respect to the variation of the transverse momentum and the Bjorken-$x$ variable. Extending the calculations to the polarized case using the modified $\chi QM$ is possible, and we hope to report on it in the future.
\section{Introduction}
We consider the D-Optimality problem formulated as
\begin{align*}\label{prob}\tag{D-Opt}
\textstyle
&\max \left\{ \ldet\sum_{\ell\in N} x_\ell v_\ell v_\ell^\top \, : \, \mathbf{e}^\top x=s,~ l\leq x \leq u,~ x\in\mathbb{Z}^n
\right\},\\
&\quad=\max \left\{ \ldet \left(
\sum_{\ell\in N} l_\ell v_\ell v_\ell^\top
+ \sum_{\ell\in N} x_\ell v_\ell v_\ell^\top\right) \, : \, \mathbf{e}^\top x=s-\mathbf{e}^\top l\,,~ 0\leq x \leq u-l,~ x\in\mathbb{Z}^n
\right\},
\end{align*}
where $v_\ell \in \mathbb{R}^{m}$,
for $\ell\in N:=\{1,\ldots,n\}$,
$0\leq l < u\in\mathbb{Z}^n$, with
$\mathbf{e}^\top l \leq s \leq \mathbf{e}^\top u$.
\ref{prob} is a fundamental problem in statistics, in the area of ``experimental designs'' (see \cite{Puk}, for example).
Defining $A:= (v_1, v_2, \dots, v_n)^\top$, we consider the least-squares regression
problem $\min_{\theta\in \mathbb{R}^{m}} \|A\theta -y\|_2$, where $y$ is an
arbitrary response vector. We assume that $A$ has full column rank, and so there is a unique solution to the
least-squares problem (for each $y$). But we consider a situation where each $v_\ell$
corresponds to a costly experiment, which could be carried out up to $u_\ell$ times.
Overall, we have a budget to carry out a total of $s(\geq m)$ experiments, and so
we specify the choices by $x$ (in \ref{prob}).
For a given feasible solution $\tilde{x}$, we define $A_{\tilde{x}}$
to be a matrix that has $v_\ell^\top$ repeated $\tilde x_\ell$ times, for $\ell\in N$, as its rows.
This leads to the reduced least-squares
problem $\min_{\theta\in \mathbb{R}^m} \|A_{\tilde x}\theta -y\|_2$.
The generalized variance of the least-squares parameter estimator $\hat \theta$
is inversely proportional to $\det \sum_{\ell\in N} \tilde{x}_\ell v_\ell v_\ell^\top $
(which is proportional to the volume of a standard ellipsoidal confidence region for $\theta$),
and so \ref{prob} corresponds to picking the set of experiments to
minimize the generalized variance of the least-squares parameter estimator $\hat \theta$
(see \cite{Fedorov}, for example). There is a large literature on heuristic
algorithms for \ref{prob} and its variations.
\cite{Welch} was the first to approach \ref{prob} with an exact branch-and-bound algorithm,
employing a bound based on Hadamard's inequality and another based on continuous relaxation (apparently without using state-of-the art NLP solvers of that time).
\cite{KoLeeWayne,KoLeeWayne2} proposed a spectral bound
and analytically compared it with the Hadamard bound;
also see \cite{LeeLind2019}.
\cite{li2022d} applied a local-search procedure and an exact algorithm to the D-optimal Data Fusion problem, a particular case of the D-optimality problem where $\sum_{\ell\in N}l_\ell v_\ell v_\ell^\top $ is positive definite and known as the existing Fisher Information Matrix (FIM). Moreover, the D-optimal Data Fusion problem considers only the case where the variables are binary, i.e., $l=0$ and $u=\mathbf{e}$. Although the Data Fusion and the D-optimality problems have similarities, most techniques used in \cite{li2022d} rely on the positive definiteness of the existing FIM and cannot be applied to our problem.
Next, we highlight our contributions. We present in this work
\begin{itemize}
\item three local-search heuristics for \ref{prob},
\item five algorithms to construct an initial solution for the local-search procedures,
\item five procedures to compute the determinant of a rank-one update of a given matrix, knowing the determinant of the matrix. These procedures are essential to the successful application of the local-search procedures,
\item variable-bound tightening (VBT) inequalities, which are constructed based on a lower bound for \ref{prob} and on the knowledge of a feasible solution for the Lagrangian dual of its continuous relaxation,
\item a branch-and-bound algorithm based on a convex mixed-integer nonlinear programming formulation of \ref{prob}. We investigate possible methodologies to accelerate the convergence of the branch-and-bound algorithm, by combining the use of the VBT inequalities, local-search procedures, and the use of the Hadamard and the spectral upper bounds besides the bound obtained from the continuous relaxation.
\item numerical experiments with randomly generated instances, where we first compare the use of the different algorithms to compute the determinant of a rank-one update of a matrix inside the local-search procedures. Then, we compare several versions of the branch-and-bound algorithm where subsets of the procedures described above are executed.
\end{itemize}
We note that although \cite{Welch} already considered the application of a branch-and-bound algorithm for D-optimality, the author did not use variable-tightening inequalities based on convex optimization or investigate
the linear algebra of doing a fast local search.
A preliminary version of this paper appeared in \cite{PonteFampaLeeSBPO22}. Here, we suggest two new algorithms to construct initial solutions to the local-search procedures, we analyse different ways of computing the determinant of a matrix after a rank-one update, and we test the proposed procedures inside an enhanced branch-and-bound algorithm.
A similar solution approach has been successfully applied to the related max\-i\-mum-entropy sampling problem (MESP) (see \cite{AFLW_Using,Anstreicher_BQP_entropy,Kurt_linx,FL2022}), where given the covariance matrix $C$ of a
Gaussian random $n$-vector, one searches for
a subset of $s$ random variables which maximizes the ``information'' (measured by ``differential entropy'')
(see \cite{SW,CaseltonZidek1984,LeeEnv,FL2022}, for example).
\vspace{0.1in}
\noindent {\bf Notation.}
We let $\mathbb{S}^n$ (resp., $\mathbb{S}^n_+$~, $\mathbb{S}^n_{++}$)
denote the set of symmetric (resp., positive-semidefinite, positive-definite) matrices of order $n$.
We let $\mathbf{diag}(x)$ denote the $n\times n$ diagonal matrix with diagonal elements given by the components of $x\in \mathbb{R}^n$.
We denote an all-ones vector
by $\mathbf{e}$ and an identity matrix by $I$.
For matrices $A$ and $B$,
$A\bullet B:=\Trace(A^\top B)$ is the matrix dot-product.
For matrix $A$, we denote row $i$ by $A_{i\cdot}$ and
column $j$ by $A_{\cdot j}$~.
\section{Variable-bound tightening}\label{sec:ineq}
Next, we present a convex continuous relaxation of \ref{prob} and its Lagrangian dual, which will be used for tightening the bounds on the variables (it may also be used for variable fixing if sufficiently strong), based on general principles of convex MINLP.
We define $A:= (v_1, v_2, \dots, v_n)^\top$ and we note that
$
\textstyle
\sum_{\ell\in N} x_\ell v_\ell v_\ell^\top = A^\top \mathbf{diag}(x) A.
$
Then, a convex continuous relaxation of \ref{prob} may be formulated as
\begin{equation}\label{cont_rel}
\max \left\{ \ldet \big(A^\top \mathbf{diag}(x) A\big) \, : \, \mathbf{e}^\top x=s, \, l\leq x\leq u, \, x\in \mathbb{R}^n\right\}.
\end{equation}
It is possible to show that the Lagrangian dual of \eqref{cont_rel} can be formulated as
\begin{equation}\label{eq:lag_with_theta}
\begin{array}{lll}
&\min &-\ldet \Lambda + \lambda^\top u - \theta^\top l + \nu s - {m},\\
&\text{s.t.}
&\Lambda \bullet v_iv_i^\top - \lambda_i + \theta_i - \nu = 0,\quad i \in N,\\
&&\Lambda \succ 0,\lambda \geq 0, \theta \geq 0.
\end{array}
\end{equation}
In Theorem \ref{thm:fix_dopt}, we show how to tighten variables bounds for \ref{prob} based on knowledge of a lower bound and a feasible solution for the dual problem \eqref{eq:lag_with_theta}.
\begin{theorem}\label{thm:fix_dopt}
Let
\begin{itemize}
\item LB be the objective-function value of a feasible solution for \ref{prob};
\item $(\hat\Lambda,\hat\lambda,\hat\theta,\hat\nu)$ be a feasible solution for \eqref{eq:lag_with_theta} with objective-function value $\hat\zeta$.
\end{itemize}
Then, for every optimal solution $x^\star$ for \ref{prob}, we have:
\begin{align}
&x_k^\star \leq l_k + \left\lfloor
\left(\hat{\zeta}-{LB}\right)/\hat\theta_k
\right\rfloor ,\quad ~\forall\; k \in N\text{ such that } \hat \theta_k>0,\label{ineq1}\\%[1.5em]
&x_k^\star \geq u_k- \left\lfloor
\left(\hat{\zeta}-{LB}\right)/\hat\lambda_k
\right\rfloor,\quad ~\forall\; k \in N\text{ such that } \hat \lambda_k>0.\label{ineq2}
\end{align}
\end{theorem}
\section{Local-search heuristics}\label{sec:heur}
We introduce heuristics to construct a feasible solution to \ref{prob} by applying a local-search procedure from an initial solution. We propose different ways of constructing the initial solution and performing the local search. In the next section, we also investigate procedures to update the objective value inside the local search in order to make it more efficient. Without loss of generality, we assume that $l=0$ in \ref{prob}.
\subsection{Initial solutions from the SVD decomposition of $A$}\label{subsec:heur1}
Next, we show how we obtain initial solutions for our local-search procedures from the real singular-value decomposition (SVD) $A=U\Sigma V^\top$ (see \cite{GVL1996}, for example), where $U\in\mathbb{R}^{n\times n}$, $V\in\mathbb{R}^{m\times m}$ are orthogonal matrices and $\Sigma=\mathrm{diag}(\sigma_1,\sigma_2,\dots,\sigma_m)\in\mathbb{R}^{n\times m}$ ($n\geq m$) with singular values $\sigma_1\ge\sigma_2\ge\dots\ge\sigma_m\ge0$.
First, to ensure that we start the local-search procedures with a feasible solution for \ref{prob} with finite objective value, we construct a vector $\tilde{x}\in\{0,1\}^n$, such that $\mathbf{e}^\top \tilde{x}=m$ and $A^\top\mbox{diag}(\tilde{x})A\in\mathbb{S}^m_{++}$\,. This is equivalent to choosing $m$ linearly independent rows of $A$, and setting $\tilde{x}$ as the incidence vector for the selected subset of rows. We denote the set of indices of the selected rows by $\tilde{N}$. To select the linearly independent rows, we use the Matlab function nsub\footnote{\url{www.mathworks.com/matlabcentral/fileexchange/83638-linear-independent-rows-and-columns-generator}} (see \cite{FLPX2021} for details).
We note that
for each $k \in N$, we have $\sum_{j\in N} U_{jk}^2 = 1$ and $\sum_{j\in N} U_{kj}^2 = 1$. We define
\[
\textstyle
x^0_j := \sum_{k=1}^s U_{jk}^2\,,\quad j\in N.
\]
We clearly have $\mathbf{e}^\top x^0 = s$ and $0 \leq x^0_j \leq 1$, for all $j\in N$. So, $x^0$ is a feasible solution of \eqref{cont_rel}.
We let $\tau$ be the permutation of the indices in $N$, such that $x^0_{\tau(1)}\geq x^0_{\tau(2)}\geq\cdots\geq x^0_{\tau(n)}$.
Then, we propose two procedures to construct a feasible solution $\bar{x}$ for \ref{prob}, considering $\tau$ and $\tilde{x}$.
\begin{itemize}
\item ``Bin$(x^0)$'':
Let $\bar{N}$ be the first $s-m$ indices in $\tau$ (which depends on $x^0$) that are not in $\tilde {N}$. Set $\bar{x}_{j}:=1$, for $j\in\bar{N}$, and $\bar{x}_{j}:=\tilde{x}_j$\,, for $j\notin \bar{N}$.
\item ``Int$(x^0)$'':
Let $\Delta =u-\tilde{x}$ and $\bar{s}=s-m$. Define, for all $j\in N$,
\[
\textstyle
\tilde y_{\tau(j)} := \min\left\{\Delta_{\tau(j)},\max\left\{0,\bar{s}-\sum_{i=1}^{j-1}\tilde{y}_{\tau(i)}\right\}\right\}.
\]
Then, set $\bar{x}:=\tilde{x}+\tilde{y}$.
\end{itemize}
We observe that the objective function of \ref{prob} is given by $\ldet \big(\Sigma^{\top}U^{\top}\mathbf{diag}(x)U\Sigma\big)$, and so the choice of $x$ is related to the rows of $U\Sigma$. Then, we also define
\[
\textstyle
{\hat{x}^0_j := \sum_{i=1}^{m} \big(U_{ji} \Sigma_{ii}\big)^2,\quad j\in N.}
\]
Finally, replacing $x^0$ by $\hat{x}^0$ on the procedures described above, we construct two alternative initial solutions to our local-search procedures. We note that although $\hat{x}^0\geq 0$, it need not be feasible for \eqref{cont_rel}.
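A minimal Julia sketch of these constructions on random data (the names \texttt{x0}, \texttt{xhat0} and \texttt{tau} match the text; everything else, including the instance sizes, is ours):
\begin{verbatim}
using LinearAlgebra, Random
Random.seed!(1)
n, m, s = 8, 3, 5
A = randn(n, m)
F = svd(A; full=true)                     # A = U * Sigma * V'
x0    = vec(sum(F.U[:, 1:s].^2, dims=2))  # sum(x0) == s, 0 .<= x0 .<= 1
xhat0 = vec(sum((F.U[:, 1:m] .* F.S').^2, dims=2))
tau   = sortperm(x0, rev=true)            # permutation: decreasing scores
\end{verbatim}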
\subsection{Initial solution from the continuous relaxation}
In Algorithm \ref{alg:lee_cont}, we present how we compute an initial solution to our local-search procedures from a solution to the continuous relaxation \eqref{cont_rel}.
\begin{algorithm}[!ht]
\footnotesize{
\KwIn{a feasible solution $x^{\mathcal{C}}$ to \eqref{cont_rel}}
\KwOut{a feasible solution $\bar{x}$ to \ref{prob} }
$\bar{x} := \lfloor x^{\mathcal{C}} \rfloor$\;
$k := \mathbf{e}^\top \bar{x}$\;
$x^{f}:= x^{\mathcal{C}} -
\bar{x}$\;
\While{$k< s$}{
$\hat\jmath := \mbox{argmax}\{x^f\}$\;
$\bar x_{\hat \jmath} := \bar x_{\hat \jmath} + 1$\;
$x^f_{\hat\jmath}:=0$\;
$k := k + 1$\;
}
\caption{Convert Continuous to Integer}\label{alg:lee_cont}
}
\end{algorithm}
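A direct Julia transcription of Algorithm \ref{alg:lee_cont} (note that, as in the pseudocode, no upper bounds are enforced during the rounding):
\begin{verbatim}
# Round a continuous solution xc with sum(xc) == s to an integer point.
function round_to_integer(xc::Vector{Float64}, s::Int)
    xbar = floor.(Int, xc)
    xf   = xc .- xbar                  # fractional parts
    for _ in 1:(s - sum(xbar))         # distribute the remaining units
        j = argmax(xf)                 # largest fractional part first
        xbar[j] += 1
        xf[j] = 0.0
    end
    return xbar
end
round_to_integer([0.4, 1.7, 0.9], 3)   # returns [0, 2, 1]
\end{verbatim}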
\subsection{Local-search procedures}\label{subsec:heur2}
In Algorithm \ref{localsearch}, we present the local-search procedures, which use as the criterion for improvement of the given solution the increase in the value of the objective function of \ref{prob}. The neighborhood of a given solution $\bar{x}$ is defined by
\[
\mathcal{N}(\bar{x}):= \{y\in\mathbb{Z}^n~:~0\leq y\leq u,~y_i=\bar{x}_i+1,~y_j=\bar{x}_j-1,~ y_k=\bar{x}_k, k\neq i,k\neq j, \forall i,j\in N\}.
\]
\begin{algorithm
\footnotesize{
\KwIn{ A feasible solution $\bar{x}$ of \ref{prob}}
\KwOut{ A feasible solution $\bar{x}$ of \ref{prob}, possibly updated }
$x^0:=\bar{x}$\;
$z^0:=\ldet(A^\top \mathbf{diag}(x^0)A)$\;
$flag:=true$\;
\While{$flag$}
{
$flag:=false$\;
\For{ $i=1,\ldots,n$, such that $\bar{x}_i<u_i$}
{
$x_i := \bar{x}_i + 1$\;
\For{ $j=1,\ldots,n$, such that $\bar{x}_j>0$\,, $j\neq i$}
{
$x:=\bar{x}$\;
$x_j := \bar{x}_j - 1$\;
$z:= \ldet(A^\top \mathbf{diag}(x)A) $\label{innerloop}\;
\If {$z>z^0$}
{
$x^0:=x$\;
$z^0:=z$\;
$flag:=true$\;
\If {``First improvement''}
{
break loops for $i$ and $j$\;
}
}
}
\If {``First improvement plus'' \& $flag == true$}
{
break loop for $i$\;
}
}
$\bar{x}:=x^0$\;
}
\caption{Local-search procedures \label{localsearch}}
}
\end{algorithm}
We experiment with the three local-search procedures described next.
\begin{itemize}
\item ``FI'' (Local Search First Improvement):
Starting from $\bar{x}$, the procedure visits the solution in $\mathcal{N}(\bar{x})$ with increased objective value with respect to $\bar{x}$, such that $i$ is the least possible index, and $j$ is the least possible index for the given $i$.
\item ``FI$^+$'' (Local Search First Improvement Plus):
Starting from $\bar{x}$, the procedure visits the solution in $\mathcal{N}(\bar{x})$ with increased objective value with respect to $\bar{x}$, such that $i$ is the least possible index, and $j$ is selected in $N$, as the index that maximizes the objective value, for the given $i$.
\item ``BI'' (Local Search Best Improvement):
Starting from $\bar{x}$, the procedure visits the solution in $\mathcal{N}(\bar{x})$ with increased objective value with respect to $\bar{x}$, such that $i$ and $j$ are selected in $N$, as the pair of indices that maximizes the objective value.
\end{itemize}
\section{Fast local search}\label{sec:fast}
An efficient local search for \ref{prob} is
based on fast computation of $\ldet\left(B+v_iv_i^\top -v_jv_j^\top\right)$,
already knowing $\ldet B$, where $B:=
\sum_{\ell\in N} \bar{x}_\ell v_\ell v_\ell^\top$,
for some $\bar x$ that is feasible for \ref{prob} such that
$\bar{x}+\mathbf{e}_i-\mathbf{e}_j$ is
also feasible for \ref{prob}.
If $\ldet\left(B+v_iv_i^\top -v_jv_j^\top\right) > \ldet B$,
then $\bar{x}+\mathbf{e}_i-\mathbf{e}_j$ is an improvement on
$\bar{x}$ in \ref{prob}.
\subsection{Simplest}\label{sec:simplest}
In the simplest algorithm, we form $\hat{B}$ as $B +v_iv_i^\top - v_jv_j^\top$,
and then we calculate the determinant of $\hat B$ in $\mathcal{O}(m^3)$ flops.
\subsection{Cholesky update}
Let $B=LL^\top$ be the Cholesky factorization of $B$. The \texttt{lowrankupdate} and \texttt{lowrankdowndate} Julia functions compute the Cholesky factorization of a rank-one update of an $m\times m$ matrix in $\mathcal{O}(m^2)$ flops. We first apply a \texttt{lowrankupdate}, outside of the
inner loop of Algorithm \ref{localsearch}, to get a Cholesky factorization
$\tilde{L}\tilde{L}^\top$ of $B +v_iv_i^\top$ (from the one for $B$).
Then, inside the inner loop, with $i$ fixed,
we apply a \texttt{lowrankdowndate} to get the Cholesky factorization
$\hat{L}\hat{L}^\top$
of
$\hat{B}=B +v_iv_i^\top - v_jv_j^\top$ (from the one for
$B +v_iv_i^\top$). Finally, we have
$\ldet(\hat{B}) = 2 \sum_{\ell=1}^m \log({\hat L}_{\ell\ell})$.
Computing the Cholesky factorization of $\hat{B}$ directly
would have instead required
$\mathcal{O}(m^3)$ flops (see \cite[Sec. 4.2]{GVL1996}).
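A minimal Julia sketch of this strategy on random data (the downdated matrix must remain positive definite, which holds here because $v_j$ is a row of the current design):
\begin{verbatim}
using LinearAlgebra
m = 4
V = randn(6, m)
B = Symmetric(V' * V)                  # plays the role of A' diag(x) A
C = cholesky(B)                        # computed once, O(m^3)
vi, vj = randn(m), V[1, :]
Ci  = lowrankupdate(C, vi)             # factor of B + vi*vi',  O(m^2)
Cij = lowrankdowndate(Ci, vj)          # factor of B + vi*vi' - vj*vj'
ldet = 2 * sum(log, diag(Cij.L))       # ldet of the updated matrix
\end{verbatim}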
\subsection{Sherman–Morrison update}
The well-known Sherman-Morrison formula
\[
(M+ab^\top)^{-1} = M^{-1} -\frac{(M^{-1}a)(b^\top M^{-1})}{(1+b^\top M^{-1}a) }
\]
and the well-known matrix determinant lemma
\[
\det(M+ab^\top) = (1+b^\top M^{-1}a) \det(M)
\]
are useful for rank-one updates of inverses and determinants, respectively,
in $\mathcal{O}(m^2)$ for an order-$m$ matrix.
Outside the inner loop of Algorithm \ref{localsearch},
we can calculate the inverse of $B +v_iv_i^\top$ from the inverse of $B$, using the Sherman-Morrison formula
(setting $M:=B$, $a:=b:=v_i$).
Inside the inner loop (with $i$ fixed), for each $j$ we can calculate $\ldet(\hat B)$
from the inverse of $B +v_iv_i^\top$, using the matrix-determinant lemma
(setting $M:= B +v_iv_i^\top$,~
$a:=-v_j$ and $b:=v_j$).
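A Julia sketch of the two steps (ours; random data for illustration):
\begin{verbatim}
using LinearAlgebra
# Inverse of M + a*b' from Minv = inv(M), in O(m^2) operations.
function sherman_morrison(Minv, a, b)
    u = Minv * a
    return Minv - u * (b' * Minv) / (1 + dot(b, u))
end
m = 4
V = randn(6, m)
B = Symmetric(V' * V)
vi, vj = randn(m), V[1, :]
Biinv = sherman_morrison(inv(B), vi, vi)   # inverse of B + vi*vi'
ldet = logdet(B) +                         # determinant lemma, twice:
       log(1 + dot(vi, B \ vi)) +          #   adding  vi*vi'
       log(1 - dot(vj, Biinv * vj))        #   removing vj*vj'
\end{verbatim}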
\subsection{SVD Rank-One update}
The next method requires some preprocessing.
For a given feasible solution $\bar{x}$, we define $A_{\bar{x}}\in \mathbb{R}^{s\times m}$
to be a matrix that has $v_\ell^\top$ repeated $\bar x_\ell$ times, for $\ell\in N$, as its rows.
Note that $A_{\bar x}^\top A_{\bar x} = B$. But rather than working with $B$ directly, we instead work
with $A_{\bar x}$\,. Let $\phi(j)$ be any row index of $A_{\bar x}$
that contains a copy of $v_j^\top$\,.
Let
\[
X := A_{\bar x} + \mathbf{e}_{\phi(j)}\left(v_{i} - v_{j} \right)^\top.
\]
This rank-1 update of $A_{\bar x}$ is the result of replacing $v_j^\top$ with
$v_i^\top$ in row $\phi(j)$ of $A_{\bar x}$\,.
We note that $\ldet(X^\top X) = \ldet (\hat B)$.
Let $A_{\bar x} = U\Sigma V^\top$ with $U \in \mathbb{R}^{s \times m}$, $\Sigma \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{m \times m}$ be the singular value decomposition (SVD) of $A_{\bar x}$.
Let $w:=v_i-v_j$\,.
We are interested in the SVD of
\begin{equation}\label{eq:upsvdinit}
A_{\bar x}+\mathbf{e}_{\phi(j)} w^\top =
\begin{bmatrix}U&\mathbf{e}_{\phi(j)}\end{bmatrix}
\begin{bmatrix}
\Sigma ~&~\mathbf{0}\\
\mathbf{0}~&~I
\end{bmatrix}
\begin{bmatrix}V&w\end{bmatrix}^\top
\end{equation}
expressed as modifications to $U,\Sigma,V$.\\
From \cite{brand2006fast}, let $p \in \mathbb{R}^{s}$ where $p := (I-UU^\top)\mathbf{e}_{\phi(j)}$, and $K \in \mathbb{R}^{(m + 1) \times m}$, where
\[
K:= \begin{bmatrix}
\Sigma V^\top + U^\top \mathbf{e}_{\phi(j)} w^\top\\
\|p\|w^\top
\end{bmatrix}~.
\]
Let
$K = {\tilde U}{\tilde \Sigma}{\tilde V}^\top$ be the singular value decomposition of $K$.
Then
\begin{equation}\label{eqX}
X= A_{\bar x} + \mathbf{e}_{\phi(j)} w^\top = \left(\begin{bmatrix}
U ~&~ p/\|p\|
\end{bmatrix}\tilde U\right){\tilde \Sigma} \tilde V^\top~,
\end{equation}
\noindent and
\[
\ldet \hat B = \ldet (X^\top X)=
\ldet\left((A_{\bar x}+\mathbf{e}_{\phi(j)} w^\top)^\top(A_{\bar x}+\mathbf{e}_{\phi(j)} w^\top)\right) = \textstyle 2 \sum_{\ell=1}^m \log\left(\tilde \Sigma_{\ell\ell}\right).
\]
Inside the $i,j$ loops in
Algorithm \ref{localsearch}, working with the
$(m+1)\times m$ matrix $K$,
the $m\times m$ matrix $\Sigma V^\top$ does not change.
We only need the singular values of $K$ (and not the singular vectors).
In this case, we could employ the
Golub-Reinsch Algorithm which uses about
$\frac{8}{3}m^3+4m^2$ flops.
The direct computation of the singular values of $X$ would, instead, use about $4sm^2-\frac{4}{3}m^3$ flops if the Golub-Reinsch algorithm was applied (recommended when $s \lessapprox \frac{5}{3}m$), or about $2sm^2+2m^3$ flops if the R-SVD algorithm was applied (recommended when $s \gtrapprox \frac{5}{3}m$).
Outside the $i,j$ loops, we need the complete SVD (singular values and singular vectors) of $X$ to restart the local search. We can use the SVD of $K$ from the pair $(i,j)$ that determines the new solution $\bar x$, to make the computation more efficient (see \eqref{eqX}).
To compute the complete SVD
of an $(m+1)\times m$ matrix, we could again employ the
Golub-Reinsch algorithm which, in this case, uses about $22m^3+14m^2$ flops.
Computing the complete SVD of an $s\times m$ matrix would require, instead, about $14sm^2+8m^3$ flops if the Golub-Reinsch algorithm was applied (recommended when $s \lessapprox \frac{3}{2}m$), or about $6sm^2+20m^3$ flops if the R-SVD algorithm was applied (recommended when $s \gtrapprox \frac{3}{2}m$) (see \cite[Sec. 5.4.5]{GVL1996}).
Having complexity $\mathcal{O}(m^3)$ per iteration, this algorithm is unlikely to be competitive with the $\mathcal{O}(m^2)$ approaches.
\subsection{QR Rank-One update}
As in the preceding section, where we updated an SVD factorization, we can compute a QR factorization of $X$ from a known QR factorization of $A_{\bar x}$\,.
Computing the QR factorization of $X$ directly, by the Householder QR algorithm,
would instead require about
$2m^2(s-m/3)$ flops (see \cite[Sec. 5.2.1]{GVL1996}).
Let $A_{\bar x} = QR$, where $Q\in\mathbb{R}^{s\times s}$ and $R\in\mathbb{R}^{s\times m}$. The \texttt{qrupdate} Matlab function (which we have implemented in Julia) computes the QR factorization of a rank-one update of an $s\times m$ matrix in $\mathcal{O}(s^2)$ flops, getting $X= \tilde{Q}\tilde{R}$ (see \cite[Sec. 12.5.1]{GVL1996} for the algorithm). Then, we have
\[
\ldet\hat B=
\textstyle 2 \sum_{\ell=1}^m \log\left(\tilde R_{\ell\ell}\right).
\]
This algorithm could possibly be competitive with some of the
$\mathcal{O}(m^2)$ approaches, when $s$ is not much larger than $m$.
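For concreteness, a Julia sketch of this evaluation follows. The helper \texttt{qrupdate!} is assumed to behave like Matlab's \texttt{qrupdate}; it is not part of Julia's standard library, so its name and signature here are illustrative.
\begin{verbatim}
# Sketch of the O(s^2) evaluation via a rank-one QR update.  The helper
# qrupdate!(Q, R, u, w) is assumed to behave like Matlab's qrupdate
# (Golub & Van Loan, Sec. 12.5.1); it is not in Julia's standard
# library, so it is a placeholder here.
function ldet_after_qr_swap!(Q, R, r::Int, w)
    u = zeros(size(Q, 1)); u[r] = 1.0
    qrupdate!(Q, R, u, w)       # X = A_xbar + u w' = Q~ R~, O(s^2) flops
    m = length(w)
    # abs() guards against sign conventions on the diagonal of R
    return 2 * sum(log(abs(R[l, l])) for l in 1:m)
end
\end{verbatim}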
\section{Hadamard and Spectral Bounds}
Without loss of generality,
we assume that
$l=0$ in \ref{prob}.
Let $A_u$ denote the $\mathbf{e}^\top u \times m$
matrix obtained from $A$ by repeating each row $i$ of
$A$ a total of $u_i$ times.
In this way, \ref{prob} becomes a 0/1 optimization problem on $A_u$\,, and we can apply some bounds of \cite{KoLeeWayne}.
The \emph{Hadamard bound} is defined as
\begin{equation}\label{hada_bound}
\textstyle
\mathcal{H} := \sum_{\ell=1}^{s} \log\left(1 + \phi_{\ell}^2( A_u)\right),
\end{equation}
where $\phi_\ell(A_u)$ denotes the $\ell^{\mbox{th}}$-greatest 2-norm over
the rows of $A_u$\,, and the
\emph{spectral bound} is defined as
\begin{equation}\label{spec_bound}
\textstyle
\mathcal{S} := \sum_{\ell=1}^{s} \log\left(1 + \sigma_\ell^2( A_u)\right),
\end{equation}
where $\sigma_\ell(A_u)$ denotes the $\ell^{\mbox{th}}$-greatest singular value of $A_u$\,.
Details on how to adapt these bounds
inside of branch-and-bound
(which becomes complicated when
the set of rows of $A$ fixed into a solution
does not span $\mathbb{R}^m$) are given in \cite{KoLeeWayne}.
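Both bounds are simple to evaluate; the following Julia sketch, given only for illustration, computes them from $A$, $u$, and $s$.
\begin{verbatim}
using LinearAlgebra

# Sketch (ours) of both bounds.  A_u repeats row i of A u_i times; the
# spectral sum is truncated at min(s, m) because the remaining singular
# values of A_u are zero and contribute log(1) = 0.
function hadamard_spectral(A::Matrix{Float64}, u::Vector{Int}, s::Int)
    Au = vcat((repeat(A[i:i, :], u[i]) for i in eachindex(u))...)
    rn = sort([norm(Au[i, :]) for i in 1:size(Au, 1)]; rev = true)
    sv = svdvals(Au)                  # nonincreasing order
    H = sum(log(1 + rn[l]^2) for l in 1:s)
    S = sum(log(1 + sv[l]^2) for l in 1:min(s, length(sv)))
    return H, S
end
\end{verbatim}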
\section{Numerical Experiments}\label{sec:num_exp}
For our numerical experiments, we implemented the three local-search procedures described in Subsection \ref{subsec:heur2}, namely, ``FI'', ``FI$^+$'', and ``BI'', and we initialized each procedure with the four methods proposed in Subsection \ref{subsec:heur1}, namely, ``Bin$(x^0)$'', ``Int$(x^0)$'', ``Bin$(\hat{x}^0)$'', ``Int$(\hat{x}^0)$''. All the methods described in Section \ref{sec:fast} to compute the determinant at each iteration of the local-search procedures were implemented and compared to each other.
We also implemented different versions of a branch-and-bound (B\&B) algorithm to obtain optimal solutions for our test instances, where the bounds are obtained with the convex continuous relaxation \eqref{cont_rel}. The different versions of the B\&B apply all, some, or none of the procedures described in the following at each node of the B\&B enumeration tree in an attempt to reduce its size and make the algorithm more efficient.
\begin{itemize}
\item VBT: Compute the variable-bound tightening (VBT) inequalities \eqref{ineq1} and \eqref{ineq2} and include them as cuts in the current subproblem, also fixing variables when possible. To compute the inequalities, we use the values of the optimal dual variables for the continuous relaxation \eqref{cont_rel} of the current subproblem and the best known lower bound for \ref{prob}.
\item LSI: Apply the local-search procedures from the solution of the continuous relaxation whenever the solution obtained is integer, in an attempt to increase the lower bound LB on the objective value of \ref{prob}.
\item LSC: Apply Algorithm \ref{alg:lee_cont} from the solution of the continuous relaxation whenever the solution is not integer. Then apply the local-search procedures from the integer solution obtained. Unlike LSI, in this case we run the local-search procedures at every node of the B\&B algorithm.
\item HS: Compute the Hadamard bound \eqref{hada_bound} and the spectral bound \eqref{spec_bound} and compare them to the upper bound given by the continuous relaxation of the current subproblem, considering the best upper bound when testing if the node can be fathomed.
\end{itemize}
Algorithm \ref{alg:node_ls}
shows what is executed at each node of the B\&B algorithm when all the enhancement procedures described above are applied.
\begin{algorithm}
\footnotesize{
\While{$true$}
{
Get $z^\mathcal{H}$ and $z^{\mathcal{S}}$ from the Hadamard and spectral bounds\;
Get ${x},{z}^\mathcal{C},{\lambda},\theta$ from the continuous relaxation \eqref{cont_rel}\;
\If{$x$ is an integer feasible solution}{
$x^0_{LS} := x$\;
break while loop\;
}
\ElseIf {${x}$ is a continuous feasible solution and $\min\{{z}^\mathcal{C},{z}^\mathcal{H},{z}^\mathcal{S}\} > LB$ }{
Apply VBT\;
\If{VBT didn't change the bounds of any variable}{
Get $x_{LS}^0$ from Algorithm \ref{alg:lee_cont} with $x$ as input\;
break while loop\;
}
}\Else{
Node is discarded\;
}
}
Get best $x,z$ from the local-search procedures using $x_{LS}^0$ as input\;
\If{$z > LB$}{
$LB := z$\;
}
\caption{Procedure at each node of the enhanced branch-and-bound \label{alg:node_ls}}
}
\end{algorithm}
The algorithms proposed were coded in Julia v.1.7.1. To solve the convex relaxation \eqref{cont_rel}, we apply Knitro using the Julia package Knitro v0.13.0, and to solve \ref{prob}, we employ the branch-and-bound algorithm in Juniper \cite{juniper} (using the \texttt{StrongPseudoCost} branching rule, and $10^{-5}$ as the tolerance to consider a value as integer). We ran the experiments on
a 16-core machine (running Windows Server 2016 Standard): two Intel Xeon
CPU E5-2667 v4 processors running at 3.20GHz, with 8 cores each, and 128
GB of memory.
To construct our test instances, we used the Matlab function
\texttt{sprand} to randomly generate 15 $n\times m$ matrices $A$ with $m := \lfloor0.25 n\rfloor$ and rank $m$.
For each $n \in \{20,30,50,60,80\}$, we generated three instances, set $s := 0.5 n$, and used the Matlab function \texttt{randi} to generate a random vector $u$ of dimension $n$ with integer values between $1$ and $3$. We set $l=0$ for all instances.
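For reference, a Julia analogue of this generator could look as follows; the original scripts were in Matlab, and the nonzero density passed to \texttt{sprand} below is a placeholder, as is the retry loop enforcing full rank.
\begin{verbatim}
using LinearAlgebra, Random, SparseArrays

# Julia analogue (illustrative) of the instance generator.  The nonzero
# density 0.5 is a placeholder: the density actually used by the Matlab
# scripts is not recorded here.
function make_instance(n::Int; density = 0.5)
    m = floor(Int, 0.25 * n)
    A = Matrix(sprand(n, m, density))
    while rank(A) < m                 # regenerate until A has full rank m
        A = Matrix(sprand(n, m, density))
    end
    u = rand(1:3, n)                  # analogue of Matlab's randi
    return A, u, round(Int, 0.5 * n)  # s := 0.5 n
end
\end{verbatim}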
\subsection{Comparing the procedures to update the determinant}
In our first experiments, we verify how the different procedures of Section \ref{sec:fast} for updating the computation of the determinant at each iteration affect the performance of the local-search procedures. We refer to the five procedures in Section \ref{sec:fast} as ``Simplest'', ``Chol'' (Cholesky), ``SM'' (Sherman-Morrison), ``SVD'', and ``QR''.
In Figures \ref{fig:vary_s_300_30}, \ref{fig:vary_m_s_125_250} and \ref{fig:vary_s_m_n_500_u_1} we compare the total elapsed times to run the three local-search procedures described in Subsection \ref{subsec:heur2}, starting from the solution ``Bin$(x^0)$'' (see Subsection \ref{subsec:heur1}), using each procedure described in Section \ref{sec:fast}. The times depicted correspond to only one instance generated as described previously, but for these tests we considered other values of $n$, $m$, and $s$, to better observe how the times increase with these parameters.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{plot_300_30_45_vary_s_sorted.pdf}
\caption{Elapsed times for local-search procedures}
\label{fig:vary_s_300_30}
\end{figure}
From Figure \ref{fig:vary_s_300_30}, we first observe that, as $s$ increases in both plots (for $n = 300, m=30$ and $n = 300, m=45$), the times for the QR method (having complexity $\mathcal{O}(s^2)$ per iteration) increase sharply, confirming what is expected from the theory. We also see that when $m$ increases from $30$ to $45$ all the methods show an increase in time, but SM has the smallest times and the smallest increase. It is interesting to note that when $m$ increases from $30$ to $45$, QR becomes more competitive with Chol. In fact, for $m=45$ and $s\leq 125$, QR is faster than Chol, while it is always slower when $m=30$. As QR has complexity $\mathcal{O}(s^2)$ per iteration and Chol has complexity $\mathcal{O}(m^2)$ per iteration, increasing $m$ is expected to slow Chol down more.
The results point to SM as the most efficient method in our experiments.
The second-best method is either Chol or QR, depending on the instance. As expected, both methods with complexity $\mathcal{O}(m^3)$ per iteration (Simplest and SVD) perform poorly.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{plot_s_125_250_vary_m_sorted.pdf}
\caption{Elapsed times for local-search procedures}
\label{fig:vary_m_s_125_250}
\end{figure}
The superiority of SM is confirmed in Figure \ref{fig:vary_m_s_125_250}, where we vary $m$. We see that SVD (having complexity $\mathcal{O}(m^3)$ per iteration) becomes very inefficient in comparison to the other methods when $m=100$ or $125$, for
$n=250, s= 125$. The plot on the right ($n=500, s= 250$) compares the two methods with complexity $\mathcal{O}(m^2)$ per iteration, and we again see better performance from SM on this larger instance.
In Figure \ref{fig:vary_s_m_n_500_u_1}, we compare the three methods with complexity $\mathcal{O}(m^2)$ per iteration when $s=m$ (in this case, we set $u=\mathbf{e}$). In that regime, we confirm that SM is the best option for our use and that there is no clear winner between QR and Chol.
From these experiments, we see that an efficient implementation of the procedure to update the computation of the determinant can have a significant impact on the efficiency of the local-search procedures.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.4]{plot_500_vary_s_m_sorted.pdf}
\caption{Elapsed times for local-search procedures}
\label{fig:vary_s_m_n_500_u_1}
\end{figure}
\subsection{Comparing the versions of branch-and-bound}
In Table \ref{tab:bb}, we analyse the impact of the procedures ``VBT'', ``LSI'', ``LSC'', and ``HS'' on the performance of the branch-and-bound algorithm. In the first row of the table, we identify the seven versions of the branch-and-bound algorithm that were executed. In the next four rows, we identify which procedures run in the branch-and-bound for each version. Then, we present the elapsed time to solve the instances and the number of nodes in the branch-and-bound enumeration tree for each version of the algorithm. The parameters $n,m,s$ of each instance are presented in the first column of the table.
When comparing versions (1)--(4) to versions (5)--(7), we see that the most successful procedure is LSC. When adding it to the branch-and-bound algorithm, both the time and the number of nodes decrease significantly in general. When applying VBT to the branch-and-bound algorithm with the LSC procedure already included (version (6)), we have another decrease in time and number of nodes for most instances, and this is the most successful configuration in our tests. We see that neither LSI nor HS is effective in reducing either the time or the number of nodes. In fact, we observed that running the local-search procedures only when an integer solution is obtained during the execution of branch-and-bound rarely improves the current lower bound on the objective value of \ref{prob}. Moreover, the Hadamard and the spectral bounds are rarely stronger than the continuous bound.
\begin{table}[!ht]
\begin{tabular}{c|rrrrrrr}
\hline
BB version&\multicolumn{1}{c|}{(1)}&\multicolumn{1}{c|}{(2)}&\multicolumn{1}{c|}{(3)}&\multicolumn{1}{c|}{(4)}&\multicolumn{1}{c|}{(5)}&\multicolumn{1}{c|}{(6)}&\multicolumn{1}{c}{(7)}\\
\hline
VBT & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c}{$\checkmark$} \\
LSI & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} \\
LSC & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c}{$\checkmark$} \\
HS & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{$\checkmark$} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{$\checkmark$} \\ \hline
$n,m,s$ & \multicolumn{7}{c}{Elapsed time (sec)} \\ \hline
20,5,10 & \multicolumn{1}{r|}{3.49} & \multicolumn{1}{r|}{3.50} & \multicolumn{1}{r|}{3.52} & \multicolumn{1}{r|}{3.67} & \multicolumn{1}{r|}{2.68} & \multicolumn{1}{r|}{2.57} & 3.18 \\
20,5,10 & \multicolumn{1}{r|}{0.34} & \multicolumn{1}{r|}{0.56} & \multicolumn{1}{r|}{0.45} & \multicolumn{1}{r|}{0.56} & \multicolumn{1}{r|}{0.37} & \multicolumn{1}{r|}{0.40} & 0.54 \\
20,5,10 & \multicolumn{1}{r|}{4.73} & \multicolumn{1}{r|}{5.82} & \multicolumn{1}{r|}{5.93} & \multicolumn{1}{r|}{6.16} & \multicolumn{1}{r|}{5.25} & \multicolumn{1}{r|}{3.16} & 3.64 \\ \hline
30,7,15 & \multicolumn{1}{r|}{161.81} & \multicolumn{1}{r|}{42.47} & \multicolumn{1}{r|}{42.74} & \multicolumn{1}{r|}{45.10} & \multicolumn{1}{r|}{171.56} & \multicolumn{1}{r|}{40.72} & 43.61 \\
30,7,15 & \multicolumn{1}{r|}{18.90} & \multicolumn{1}{r|}{11.72} & \multicolumn{1}{r|}{12.14} & \multicolumn{1}{r|}{12.59} & \multicolumn{1}{r|}{22.67} & \multicolumn{1}{r|}{9.11} & 11.04 \\
30,7,15 & \multicolumn{1}{r|}{1.45} & \multicolumn{1}{r|}{2.47} & \multicolumn{1}{r|}{2.62} & \multicolumn{1}{r|}{2.60} & \multicolumn{1}{r|}{1.73} & \multicolumn{1}{r|}{1.81} & 2.66 \\ \hline
50,12,25 & \multicolumn{1}{r|}{469.31} & \multicolumn{1}{r|}{353.17} & \multicolumn{1}{r|}{390.00} & \multicolumn{1}{r|}{377.00} & \multicolumn{1}{r|}{467.12} & \multicolumn{1}{r|}{394.20} & 389.87 \\
50,12,25 & \multicolumn{1}{r|}{139.08} & \multicolumn{1}{r|}{142.24} & \multicolumn{1}{r|}{153.82} & \multicolumn{1}{r|}{148.98} & \multicolumn{1}{r|}{141.37} & \multicolumn{1}{r|}{145.74} & 146.67 \\
50,12,25 & \multicolumn{1}{r|}{34.40} & \multicolumn{1}{r|}{34.46} & \multicolumn{1}{r|}{38.22} & \multicolumn{1}{r|}{36.29} & \multicolumn{1}{r|}{30.17} & \multicolumn{1}{r|}{30.57} & 31.19 \\ \hline
60,15,30 & \multicolumn{1}{r|}{72.78} & \multicolumn{1}{r|}{60.10} & \multicolumn{1}{r|}{64.32} & \multicolumn{1}{r|}{61.85} & \multicolumn{1}{r|}{34.91} & \multicolumn{1}{r|}{51.91} & 51.66 \\
60,15,30 & \multicolumn{1}{r|}{27.05} & \multicolumn{1}{r|}{32.82} & \multicolumn{1}{r|}{33.79} & \multicolumn{1}{r|}{33.12} & \multicolumn{1}{r|}{25.81} & \multicolumn{1}{r|}{37.64} & 36.84 \\
60,15,30 & \multicolumn{1}{r|}{27.69} & \multicolumn{1}{r|}{37.11} & \multicolumn{1}{r|}{40.77} & \multicolumn{1}{r|}{37.23} & \multicolumn{1}{r|}{35.83} & \multicolumn{1}{r|}{45.08} & 45.90 \\ \hline
80,20,40 & \multicolumn{1}{r|}{421.14} & \multicolumn{1}{r|}{595.66} & \multicolumn{1}{r|}{622.47} & \multicolumn{1}{r|}{591.38} & \multicolumn{1}{r|}{258.11} & \multicolumn{1}{r|}{211.35} & 219.96 \\
80,20,40 & \multicolumn{1}{r|}{602.43} & \multicolumn{1}{r|}{1085.66} & \multicolumn{1}{r|}{1128.50} & \multicolumn{1}{r|}{1062.40} & \multicolumn{1}{r|}{320.31} & \multicolumn{1}{r|}{316.72} & 318.46 \\
80,20,40 & \multicolumn{1}{r|}{9579.26} & \multicolumn{1}{r|}{12751.16} & \multicolumn{1}{r|}{13301.99} & \multicolumn{1}{r|}{14560.62} & \multicolumn{1}{r|}{5088.98} & \multicolumn{1}{r|}{4636.28} & 4719.10 \\ \hline
$n,m,s$ & \multicolumn{7}{c}{Number of nodes} \\ \hline
20,5,10 & \multicolumn{1}{r|}{191} & \multicolumn{1}{r|}{125} & \multicolumn{1}{r|}{125} & \multicolumn{1}{r|}{125} & \multicolumn{1}{r|}{121} & \multicolumn{1}{r|}{103} & 103 \\
20,5,10 & \multicolumn{1}{r|}{3} & \multicolumn{1}{r|}{3} & \multicolumn{1}{r|}{3} & \multicolumn{1}{r|}{3} & \multicolumn{1}{r|}{3} & \multicolumn{1}{r|}{3} & 3 \\
20,5,10 & \multicolumn{1}{r|}{337} & \multicolumn{1}{r|}{303} & \multicolumn{1}{r|}{303} & \multicolumn{1}{r|}{303} & \multicolumn{1}{r|}{315} & \multicolumn{1}{r|}{219} & 219 \\ \hline
30,7,15 & \multicolumn{1}{r|}{10901} & \multicolumn{1}{r|}{3113} & \multicolumn{1}{r|}{3113} & \multicolumn{1}{r|}{3091} & \multicolumn{1}{r|}{10901} & \multicolumn{1}{r|}{3113} & 3091 \\
30,7,15 & \multicolumn{1}{r|}{1467} & \multicolumn{1}{r|}{683} & \multicolumn{1}{r|}{683} & \multicolumn{1}{r|}{683} & \multicolumn{1}{r|}{1467} & \multicolumn{1}{r|}{683} & 683 \\
30,7,15 & \multicolumn{1}{r|}{37} & \multicolumn{1}{r|}{37} & \multicolumn{1}{r|}{37} & \multicolumn{1}{r|}{37} & \multicolumn{1}{r|}{37} & \multicolumn{1}{r|}{37} & 37 \\ \hline
50,12,25 & \multicolumn{1}{r|}{23245} & \multicolumn{1}{r|}{17755} & \multicolumn{1}{r|}{17755} & \multicolumn{1}{r|}{17755} & \multicolumn{1}{r|}{23245} & \multicolumn{1}{r|}{17755} & 17755 \\
50,12,25 & \multicolumn{1}{r|}{6027} & \multicolumn{1}{r|}{5331} & \multicolumn{1}{r|}{5331} & \multicolumn{1}{r|}{5331} & \multicolumn{1}{r|}{5445} & \multicolumn{1}{r|}{4969} & 4969 \\
50,12,25 & \multicolumn{1}{r|}{1513} & \multicolumn{1}{r|}{1221} & \multicolumn{1}{r|}{1221} & \multicolumn{1}{r|}{1221} & \multicolumn{1}{r|}{1077} & \multicolumn{1}{r|}{919} & 919 \\ \hline
60,15,30 & \multicolumn{1}{r|}{2975} & \multicolumn{1}{r|}{1557} & \multicolumn{1}{r|}{1557} & \multicolumn{1}{r|}{1557} & \multicolumn{1}{r|}{889} & \multicolumn{1}{r|}{833} & 833 \\
60,15,30 & \multicolumn{1}{r|}{569} & \multicolumn{1}{r|}{463} & \multicolumn{1}{r|}{463} & \multicolumn{1}{r|}{463} & \multicolumn{1}{r|}{309} & \multicolumn{1}{r|}{403} & 403 \\
60,15,30 & \multicolumn{1}{r|}{531} & \multicolumn{1}{r|}{591} & \multicolumn{1}{r|}{591} & \multicolumn{1}{r|}{591} & \multicolumn{1}{r|}{531} & \multicolumn{1}{r|}{591} & 591 \\ \hline
80,20,40 & \multicolumn{1}{r|}{6955} & \multicolumn{1}{r|}{6245} & \multicolumn{1}{r|}{6245} & \multicolumn{1}{r|}{6245} & \multicolumn{1}{r|}{2845} & \multicolumn{1}{r|}{1897} & 1897 \\
80,20,40 & \multicolumn{1}{r|}{9013} & \multicolumn{1}{r|}{11101} & \multicolumn{1}{r|}{11101} & \multicolumn{1}{r|}{11101} & \multicolumn{1}{r|}{2995} & \multicolumn{1}{r|}{2857} & 2857 \\
80,20,40 & \multicolumn{1}{r|}{174481} & \multicolumn{1}{r|}{178609} & \multicolumn{1}{r|}{178609} & \multicolumn{1}{r|}{178609} & \multicolumn{1}{r|}{72635} & \multicolumn{1}{r|}{62905} & 62905 \\ \hline
\end{tabular}
\caption{Branch-and-bound (BB) algorithms }\label{tab:bb}
\end{table}
In Table \ref{tab:2}, we present, for each instance, the elapsed time to execute the local-search procedures (``LS''), the branch-and-bound algorithm, which corresponds to version (6) in Table \ref{tab:bb} (``BB(6)''), and to solve the continuous relaxation \eqref{cont_rel} with Knitro at the root node of the branch-and-bound tree (``${z}^\mathcal{C}$''). We also present the objective value obtained with each procedure. Finally, column ``VBT'' shows the number of VBT inequalities that were effective in tightening the bounds of a variable during the execution of the branch-and-bound algorithm, and column ``FV'' shows the number of variables fixed by the VBT inequalities. The times for the local-search procedures correspond to the execution of the three procedures described in Section \ref{subsec:heur2}, each starting from each of the four initial solutions presented in Section \ref{subsec:heur1}. The objective value corresponds to the best solution found. We see that the local-search procedures are very fast compared to the branch-and-bound algorithm and obtain solutions of very good quality even for the largest instances. The continuous relaxation is also solved in less than 1 second for all the instances and gives tight bounds for the instances tested. Finally, we see that the VBT inequalities are effective and fix a significant number of variables.
\begin{table}[ht]
\small
\begin{tabular}{c|rrr|rrr|rr}
\hline
& \multicolumn{3}{c|}{Elapsed time (sec)} & \multicolumn{3}{c|}{Objective value} & \multicolumn{1}{c}{VBT} & \multicolumn{1}{c}{FV} \\
$n,m,s$ & \multicolumn{1}{c}{LS} & \multicolumn{1}{c}{BB(6)} & \multicolumn{1}{c|}{${z}^\mathcal{C}$} & \multicolumn{1}{c}{LS} & \multicolumn{1}{c}{BB(6)} & \multicolumn{1}{c|}{${z}^\mathcal{C}$} & \multicolumn{2}{c}{BB(6)} \\ \hline
20,5,10 & 0.012 & 2.572 & 0.020 & 5.754 & 5.768 & 5.802 & 71 & 49 \\
20,5,10 & 0.007 & 0.403 & 0.012 & 6.206 & 6.206 & 6.226 & 32 & 32 \\
20,5,10 & 0.007 & 3.155 & 0.016 & 5.645 & 5.696 & 5.735 & 30 & 27 \\ \hline
30,7,15 & 0.041 & 40.715 & 0.027 & 8.484 & 8.484 & 8.549 & 445 & 371 \\
30,7,15 & 0.025 & 9.109 & 0.023 & 8.549 & 8.549 & 8.604 & 192 & 138 \\
30,7,15 & 0.025 & 1.806 & 0.016 & 9.186 & 9.186 & 9.232 & 125 & 89 \\ \hline
50,12,25 & 0.110 & 394.195 & 0.039 & 13.660 & 13.660 & 13.712 & 6310 & 5838 \\
50,12,25 & 0.208 & 145.736 & 0.038 & 13.454 & 13.457 & 13.543 & 2852 & 2493 \\
50,12,25 & 0.110 & 30.568 & 0.042 & 14.100 & 14.104 & 14.180 & 868 & 727 \\ \hline
60,15,30 & 0.341 & 51.914 & 0.059 & 16.744 & 16.750 & 16.779 & 1033 & 834 \\
60,15,30 & 0.424 & 37.639 & 0.070 & 16.960 & 16.975 & 17.017 & 492 & 378 \\
60,15,30 & 0.252 & 45.080 & 0.050 & 17.083 & 17.083 & 17.162 & 591 & 348 \\ \hline
80,20,40 & 0.898 & 211.346 & 0.147 & 21.521 & 21.607 & 21.707 & 3581 & 2696 \\
80,20,40 & 0.881 & 316.715 & 0.147 & 21.411 & 21.553 & 21.670 & 3981 & 3052 \\
80,20,40 & 0.937 & 4636.277 & 0.107 & 21.610 & 21.671 & 21.789 & 55275 & 47219 \\ \hline
\end{tabular}
\caption{Performance of local search, branch-and-bound version (6), and VBT inequalities}\label{tab:2}
\end{table}
\newpage
\section{Conclusion}
Our numerical experiments indicate promising directions to investigate in order to improve the efficiency of the branch-and-bound algorithm to solve the D-optimality problem. One possible approach, for example, is to introduce the use of bounds from \cite{li2022d} when $\sum_{\ell\in \hat{N}} \hat{x}_{\ell}v_{\ell}v_{\ell}^{\top}$ is positive definite, where $x_\ell$ is fixed at $\hat{x}_{\ell}$ in a given subproblem, for all $\ell\in\hat N \subset N$.
\section{Correctness under Weak Consistency}
\label{sec:correct}
\Cref{alg:detector} correctly and precisely detects all deadlocks
under our weak memory consistency model, given some additional
consistency requirements on certain accesses. We define these
requirements, show how to meet them in each of the TSO, Java, and C++
memory models, and then prove that the algorithm raises an alarm exactly
when there is a deadlock.
\subsection{Requirements}
In order to prove correctness, we require the following additional
memory consistency.
\begin{enumerate}
\item There is a total order, $<$, over all instances of the write in
\cref{alg:detector} line~\ref{ln:detector:enter}, across all memory
locations. Let $w_1 < w_2$. Any write preceding and including $w_1$
in h.b.~order is visible to any read following $w_2$ in h.b.~order.
%
\label{rule:seq}
\item The consistency of the $\fldowner$ fields must follow from
  release-acquire semantics on the $\fldwaitingOn$
  fields.
%
Specifically, let $w_1$ be an \cref{alg:owners}
line~\ref{ln:owners:new:A} or line~\ref{ln:owners:async:C} write to
an $\fldowner$ field, let $w_2$ be an \cref{alg:detector}
line~\ref{ln:detector:enter} write to a $\fldwaitingOn$ field, let
$r_2$ be an \cref{alg:detector} line~\ref{ln:detector:waitingOn}
read, and let $r_1$ be an \cref{alg:detector}
line~\ref{ln:detector:changed} read.
%
Suppose $w_1,r_1$ refer to the same location, as do $w_2,r_2$.
%
If $w_1$ happens-before $w_2$, if $w_2$ is visible to $r_2$, and if
$r_2$ happens-before $r_1$, then $w_1$ is visible to $r_1$.
%
\label{rule:acq}
\item The write in \cref{alg:detector} line~\ref{ln:detector:final}
must not become visible until the fulfillment of $p_0$ is visible
(\cref{alg:owners} line~\ref{ln:owners:set:B}) or it is determined
that an exception should be raised (\cref{alg:detector}
line~\ref{ln:detector:fail}).
%
\label{rule:rel}
\end{enumerate}
These three requirements are readily attained in TSO, Java, and C++ as
follows.
\begin{itemize}
\item Under TSO, a memory fence is needed in \cref{alg:detector}
line~\ref{ln:detector:fence} to achieve requirement~\ref{rule:seq}
by ordering line~\ref{ln:detector:waitingOn} after
line~\ref{ln:detector:enter} and sequentializing all instances of
line~\ref{ln:detector:fence} with each other.
%
TSO naturally achieves requirement~\ref{rule:acq} by respecting the
local store order, as well as requirement~\ref{rule:rel} by not
allowing the line~\ref{ln:detector:final} write to become visible
early.
%
Note that the loop contains no fences.
\item Under the Java memory model, it suffices to mark the two fields,
$\fldowner$ and $\fldwaitingOn$, as volatile to satisfy
all three requirements.
%
This eliminates all write-read data races.
%
Remember that there are no write-write races (see
\cref{lem:writeorder}).
%
In the absence of any races on these two fields, the Java memory
model guarantees sequential consistency with respect to these
fields.
\item In C++ both of the fields must be \texttt{std::atomic} to
eliminate data races, but this alone is insufficient.
%
\Cref{alg:detector} line~\ref{ln:detector:enter} must be tagged as a
\texttt{std::memory\_order\_seq\_cst} access to achieve
requirement~\ref{rule:seq}, establishing a total order over these
writes and subsuming release consistency.
%
Line~\ref{ln:detector:waitingOn} must then be tagged
\texttt{std::memory\_order\_acquire} to achieve
requirement~\ref{rule:acq}.
%
And finally, line~\ref{ln:detector:final} must be
\texttt{std::memory\_order\_release} to satisfy~\ref{rule:rel}.
\end{itemize}
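For concreteness, the same mapping can be written down with Julia's field atomics (Julia $\geq 1.7$); this is only an illustration in a different language, as our implementation is in Java.
\begin{verbatim}
# Illustration with Julia (>= 1.7) field atomics; our implementation is
# in Java, so the struct and function names here are ours.
mutable struct TaskRec
    @atomic waitingOn::Any
end

# Requirement 1: the "enter" writes join a single total order.
enter!(t::TaskRec, p) =
    @atomic :sequentially_consistent t.waitingOn = p

# Requirement 2: reading waitingOn acquires, pairing with owner writes.
read_waiting_on(t::TaskRec) = @atomic :acquire t.waitingOn

# Requirement 3: the final reset releases, so it cannot become visible
# before the fulfillment (or the failure) that precedes it.
leave!(t::TaskRec) = @atomic :release t.waitingOn = nothing
\end{verbatim}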
\subsection{Correctness}
Under the preceding consistency requirements, we can now prove
important theoretical guarantees of correctness for our deadlock
detector. Throughout, we consider an execution of a well-formed
program (\cref{def:wf}).
We first show that \cref{alg:detector} raises no false alarms.
\begin{theorem}
If task $t$ fails the assertion in line~\ref{ln:detector:fail}
during $\textsc{Get}(p)$, then a deadlock cycle exists, involving
$t$ and $p$.
%
\label{thm:precise}
\end{theorem}
\begin{proof}
We have $t_0 = t$ and $p_0 = p$.
%
If the execution had broken out of the while loop in
line~\ref{ln:detector:breakt},~\ref{ln:detector:breakp},
or~\ref{ln:detector:changed}, then the assertion would have
succeeded.
%
Therefore, it is the loop condition that fails.
%
Upon reaching line~\ref{ln:detector:inc} in each iteration, we have
found $p_i.\fldowner$ to be $t_{i+1}$ both before and after we found
$t_{i+1}.\fldwaitingOn$ to be $p_{i+1}$. Therefore, we know 1) that
at one time $t_{i+1}$ was the owner of $p_i$, and 2) that while
$t_{i+1}$ still observed itself to own $p_i$, $t_{i+1}$ had invoked
$\textsc{Get}(p_{i+1})$.
%
This follows from memory consistency requirement~\ref{rule:acq}.
%
At this point in the reasoning, we do not yet know if $t_{i+1}$ is
still the owner of $p_i$ or if $t_{i+1}$ is still awaiting
$p_{i+1}$.
When the loop
(lines~\ref{ln:detector:loop}--\ref{ln:detector:loopend}) terminates
with $t_{i+1} = t_0$, since $t_0$ is the current task, we deduce
that the final $t_{i+1}$, set by line~\ref{ln:detector:owner1}
or~\ref{ln:detector:owner2}, is the current owner of $p_i$.
%
For all $k$ modulo $i+1$, $t_k$ at one time concurrently observed
itself to be the owner of $p_{k-1}$ and was in a call to
$\textsc{Get}(p_k)$. This meets our definition of deadlock.
\end{proof}
The following series of lemmas builds to the theorem that
\cref{alg:detector} detects every deadlock.
\begin{definition}
In a deadlock cycle comprising tasks $T$, a \emph{$t^*$ task} is a
task in $T$ to which the line~\ref{ln:detector:enter} write by every
task in $T$ is visible.
\end{definition}
\begin{lemma}
Every deadlock cycle has a $t^*$ task.
%
\label{lem:tstar}
\end{lemma}
\begin{proof}
Corollary to memory consistency requirement~\ref{rule:seq}.
\end{proof}
A $t^*$ task, which need not be unique, should be thought of as the
(or a) last task to enter the deadlock.
\begin{lemma}
If a program execution exhibits a deadlock cycle comprising tasks
$T$ and promises $P$, when a $t^*$ task calls \textsc{Get} it
constructs a sequence $\{ t_i \}_i$ that is a subset of $T$ and a
sequence $\{ p_i \}_i$ that is a subset of $P$.
%
\label{lem:inTP}
\end{lemma}
\begin{proof}
We have $t_0 = t^* \in T$ and, by definition, $p_0 \in P$.
%
If the loop immediately terminates, then $t_1 = t_0 \in T$, and we
are done.
%
Otherwise, the values of $t_{i+1}$ and $p_{i+1}$ inductively depend
on $t_i$ and $p_i$.
%
By definition of deadlock, one of the tasks in $T$, call it
$o_{p_i}$, observes itself to be the owner of $p_i$. The most recent
write to $p_i.\fldowner$ (recall all the writes are ordered by
\cref{lem:writeorder}) occurred in program order before $o_{p_i}$'s
line~\ref{ln:detector:enter} write.
%
Therefore, memory consistency requirement~\ref{rule:seq} establishes
that $t^*$ must read $t_{i+1} = o_{p_i} \in T$ in
line~\ref{ln:detector:changed}.
%
By definition of $t^*$ and by memory consistency
requirement~\ref{rule:rel}, we see that
line~\ref{ln:detector:waitingOn} observes $t_{i+1}$'s
line~\ref{ln:detector:enter} write, not its
line~\ref{ln:detector:final} write. Thus, $p_{i+1} \in P$ by
definition of deadlock.
\end{proof}
\begin{lemma}
If a program execution exhibits a deadlock cycle comprising tasks
$T$, no $t^*$ task executes a diverging loop
(lines~\ref{ln:detector:loop}--\ref{ln:detector:loopend}) in its
call to \textsc{Get}.
%
\label{lem:terminate}
\end{lemma}
\begin{proof}
Suppose, during the call to \textsc{Get} by $t^*$, the loop does not
terminate. Thus $t_i \ne t_0$ for any $i > 0$.
%
But by \cref{lem:inTP}, the infinite sequence $\{ t_i \}_i$ is a
subset of $T$.
%
Therefore, $T$, in fact, exhibits a smaller cycle not involving
$t_0$, violating the minimality condition in the definition of
deadlock cycle.
\end{proof}
\begin{theorem}
If a program execution exhibits a deadlock cycle comprising tasks
$T$ and promises $P$, at least one task in $T$ fails the assertion
in \cref{alg:detector} line~\ref{ln:detector:fail}.
%
\label{thm:correct}
\end{theorem}
\begin{proof}
Suppose for the sake of contradiction that a deadlock cycle arises and
yet no assertion fails.
%
So every task $t \in T$ enters the \textsc{Get} procedure and either
blocks at line~\ref{ln:detector:return} on a promise in $P$ or
diverges in an infinite loop.
No task exits the loop by failing the loop condition,
$t_{i+1} \ne t_0$, since this would directly fail the assertion in
line~\ref{ln:detector:fail}.
For each invocation of \textsc{Get} by a $t^*$ task, the loop cannot
break in line~\ref{ln:detector:breakt} or
line~\ref{ln:detector:breakp} because \cref{lem:inTP} implies no
tasks or promises in the sequence are $\Null$.
%
If the loop breaks in line~\ref{ln:detector:changed}, then $t^*$ has
observed the owner of $p_i$ to change from one read to the
next. This is impossible: both reads observe the current owner,
$o_{p_i}$, by the same reasoning as in the proof of \cref{lem:inTP}.
%
Finally, the loop cannot diverge for $t^*$, by \cref{lem:terminate}.
%
Since there exists at least one $t^*$ task, by \cref{lem:tstar}, we
have a contradiction.
\end{proof}
\begin{corollary}[to \cref{thm:precise,thm:correct}]
\Cref{alg:detector} is precise and correct, guaranteeing the
existence of a deadlock when an alarm is raised and raising an alarm
upon every deadlock.
\end{corollary}
\section{Deadlock Detection Algorithm}
\label{sec:detector}
Now that we have established the relationship between promises and
tasks, it is possible to describe what a deadlock is.
A deadlock is a cycle of $n$ tasks, $t_i$, and $n$ promises, $p_i$,
such that $t_i$ awaits $p_i$ while $p_i$ is owned by $t_{i+1}$
($\mathrm{mod}~n$).
The information required to identify such a deadlock is, for the first
time, made available explicitly at runtime through the use of the \ensuremath{\mathcal{P}_o}\xspace
policy.
We can now develop a runtime detection mechanism to identify deadlocks
based on this information and raise an alarm as soon as one is
created.
\subsection{Approach}
Even assuming sequential consistency, the algorithm for finding such a
cycle is non-trivial. Conceptually, whenever a $\kwget p$ is executed
by $t$, $t$ must alternately traverse owned-by and waits-for edges to
see if the path of dependences returns to $t$. If another task, $t'$,
is encountered which is not currently awaiting a promise, this proves
that progress is still being made and there is no deadlock (yet). In
this case, $t$ passes verification and commits to blocking on
$p$. Should this path of dependences grow due to a subsequent
$\kwget p'$ by $t'$, then the same algorithm runs again in task $t'$
to verify that the new waits-for edge does not create a deadlock.
Crucially, during verification $t$ must establish a waits-for edge to
mark that it is awaiting $p$ \emph{prior} to traversing the dependence
path. That is, a waits-for edge is created before it is determined
that $t$ will be allowed to await $p$. A two-task cycle shows what
would go wrong if this procedure is not followed. If $t$ begins to
verify its wait of $p$ (say, owned by $t'$) without marking that $t$
is awaiting $p$, and concurrently $t'$ begins to verify its wait of
$p'$ (owned by $t$) without marking that $t'$ is awaiting $p'$, then
each task may find that the other is apparently not awaiting any
promises at this time, and both commit to blocking, creating an
undetected deadlock. However, by ensuring that each task marks itself
as awaiting a promise prior to verifying whether that wait is safe, we
guarantee the last task to arrive in the formation of a deadlock cycle
will be able to detect this cycle.
A second consideration is how this approach handles concurrent
transfer of promise ownership or concurrent fulfillment of
promises. Suppose that while the cycle detection algorithm is
traversing a dependence path, an earlier promise in the path is
transferred to a new owner or is fulfilled, thereby invalidating the
remainder of the traversed path. Failure to handle this correctly
could result in an alarm when there is no deadlock. The first
observation we make is that this scenario cannot arise for any but the
most recent promise encountered on the path. If $p_0$ is owned by
$t_1$, awaiting $p_1$, owned by $t_2$, then it is impossible for $p_0$
to move into a new task or to become fulfilled, since its current
owner, $t_1$, is blocked (or about to block, pending successful
verification). The concern is only that $t_2$ has not yet blocked and
may transfer or fulfill $p_1$. The natural solution is that when
traversing the dependence path, upon reaching each promise in the path
we must go back and double-check that the \emph{preceding} promise
still belongs to the task it belonged to in the previous iteration and
is still unfulfilled. If this check fails, then the present
verification passes because progress is still being made.
\subsection{Detection Algorithm}
\algDetector
The deadlock detector occupies the implementation of the \kwget
instruction, given in \cref{alg:detector}.
This detector can thereby raise an alarm in a task as soon as the task
attempts a deadlock-forming await of a promise.
At the time of raising an alarm, the available diagnostic information
that can be reported includes the task, the awaited promise, as well
as every other task and promise in the cycle, if desired.
For a preliminary understanding of the procedure's logic, we assume
sequential consistency in this section.
Upon entering \textsc{Get}, the currently executing task records the
promise that it will be waiting on (line~\ref{ln:detector:enter}).
This $\fldwaitingOn$ field was initialized to $\Null$ in
\cref{alg:owners} line~\ref{ln:owners:async:B}, and is always reset to
$\Null$ upon exiting \textsc{Get} (\cref{alg:detector}
line~\ref{ln:detector:final}), either normally
(line~\ref{ln:detector:return}) or abnormally
(line~\ref{ln:detector:fail}). Doing so makes the algorithm robust to
programs with more than one deadlock.
The loop in the detection algorithm traverses the chain of alternating
$\fldowner$ and $\fldwaitingOn$ fields.
If task $t$ is waiting on promise $p$, which is owned by a task $t'$,
then $t$ is effectively waiting on whatever $t'$ awaits.
In traversing this chain, if $t$ finds that it is transitively waiting
on itself, then we have identified a deadlock
(lines~\ref{ln:detector:loop},~\ref{ln:detector:fail}).
If the algorithm reaches the end of this chain without finding $t$
again, as indicated by finding a $\Null$ value in
line~\ref{ln:detector:breakt} ($p_i$ is already fulfilled) or in
line~\ref{ln:detector:breakp} ($t_{i+1}$ is not awaiting a promise),
then it is safe to commit to a blocking wait on the desired promise
(line~\ref{ln:detector:return}).
Recall that $p_i.\fldowner$ is $\Null$ after $p_i$ has been fulfilled,
and $t_{i+1}.\fldwaitingOn$ is $\Null$ when $t_{i+1}$ is not currently
executing \textsc{Get}.
In order to guarantee that an apparent cycle always corresponds to a
real deadlock, even under concurrent updates to promises, we rely on
line~\ref{ln:detector:changed} to establish that task $t_{i+1}$ was
waiting on promise $p_{i+1}$ \emph{while} $t_{i+1}$ was still the
owner of promise $p_i$.
This is achieved by reading the $\fldowner$ field both before
(line~\ref{ln:detector:owner1},~\ref{ln:detector:owner2}) and after
(line~\ref{ln:detector:changed}) reading the $\fldwaitingOn$ field
(line~\ref{ln:detector:waitingOn}).
If the task observes the owner of $p_i$ to have changed, it turns out
that it is safe to abandon the deadlock check and commit to the
blocking wait.
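For concreteness, the traversal just described can be summarized by the following Julia sketch, which assumes the sequential consistency of this section; the names are illustrative, and \cref{alg:detector} remains the authoritative statement.
\begin{verbatim}
# Sequentially consistent sketch of the traversal (illustrative names;
# the authoritative statement is the pseudocode of the detector).
mutable struct PromiseRec; owner::Any; end
mutable struct TaskNode;   waitingOn::Any; end

function verify_get!(t::TaskNode, p::PromiseRec)
    t.waitingOn = p                  # publish the waits-for edge first
    pi = p
    while true
        tn = pi.owner                # read the owner before ...
        tn === nothing && return     # p_i fulfilled: safe to block
        tn === t && error("deadlock: cycle returns to the calling task")
        pn = tn.waitingOn            # ... reading waitingOn ...
        pn === nothing && return     # t_{i+1} not blocked: safe to block
        pi.owner === tn || return    # ... then re-read the owner
        pi = pn                      # follow the chain of dependences
    end
end
\end{verbatim}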
In \crefrange{sec:def}{sec:correct}, we will move to a weaker memory
model. There are two crucial points to remember. We must preserve the
ability to reason temporally over the edges in the dependence path,
and we must guarantee that at least one task entering a deadlock can
observe the existence of the whole deadlock cycle.
\section{Implementation and Evaluation}
We have implemented ownership semantics with omitted set and deadlock
detection in Java. We give a brief discussion of some of the practical
considerations in the design of this implementation. We then present
the results of a performance evaluation on a set of benchmark
programs.
\subsection{Object-Oriented Promise Movement}
Introducing an explicit conception of ownership is minimally
disruptive. It is already the case that every promise is fulfilled by
at most one task, since two sets cause a runtime error. We only ask
that the programmer identify this task by leveraging the existing
structure of \kwasync directives.
However, for large, complex synchronization patterns that rely on many
promises, it can become tedious for a programmer to specify all the
relevant promises, one by one.
\lstChannel
In our Java implementation, an object-oriented approach can reduce the
burden of identifying which promises should be moved to new tasks:
classes containing many promises may implement a \textsf{PromiseCollection}
interface so that moving a composite object to a new task is
equivalent to moving each of its constituent promises.
A channel class is shown in \cref{lst:channel}, illustrating that
complex and versatile primitives can be built on top of promises with
the aid of \textsf{PromiseCollection}.
This class behaves like a promise that can be used repeatedly, where
the $n$th \textsf{recv} operation obtains the value from the $n$th
\textsf{send} operation.
This behavior depends on dynamically allocated promises, and the
responsibility for the sending end of the channel is associated not to
the ownership of a single promise, but to the ownership of different
promises at different times. It is abstraction-breaking to ask the
channel user to manually specify which promise to move to a new task
in order to effectively move the sending end of the channel.
Instead, we give the impression that the channel object itself is
movable like a promise (line~\ref{ln:channel:b}), since it is a
\textsf{PromiseCollection}, and the implementation of \kwasync relies on
the \textsf{getPromises} method (line~\ref{ln:channel:a}) to
determine which promises should be moved.
\subsection{Exception Handling}
In an implementation of \cref{alg:owners}, some care must go into an
exception handling mechanism.
What code is capable of and responsible for recovering from the failed
assertion in line~\ref{ln:owners:async:E}?
And what happens if a task terminates early, with unfulfilled
promises, because of an exception?
Observe that line~\ref{ln:owners:async:E} occurs within an
asynchronous task after the user-supplied code for that task has
completed.
One solution is to add a parameter to \textsc{Async} so that the user
can supply a post-termination exception handler, which accepts the
list of unfulfilled promises, $t'.\fldowned$, as input.
Indeed, the fix for the AWS omitted set bug included such a mechanism
(not shown in \cref{lst:amazon})~\cite{AWSBugFixed}.
Alternatively, the runtime could automatically fulfill every
unfulfilled promise upon an assertion failure in
line~\ref{ln:owners:async:E}.
Some APIs, including in C++ and Java, provide an exceptional variant
of the completion mechanism for
promises~\cite{Cpp17,JavaCompletableFuture}.
In our implementation, we use this mechanism to propagate an exception
through the promises that were left unfulfilled.
Finally, observe that the correctness of \cref{alg:owners} only
depends on knowing when a task's $\fldowned$ list is empty. Therefore,
the $\fldowned$ list could be correctly replaced with a counter, which
would at least reduce the memory footprint of ownership tracking, if
not also the execution time of maintaining a list. However, doing so
would mean that an assertion failure in line~\ref{ln:owners:async:E}
could not indicate \emph{which} promises went unfulfilled. Therefore,
the implementation we evaluate uses an actual list.
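For concreteness, the resulting termination check can be sketched as follows in Julia; the names are illustrative, and the evaluated implementation is in Java.
\begin{verbatim}
# Sketch of the task-exit check (illustrative names).  Promises that
# were moved or fulfilled satisfy p.owner != t, so they are skipped
# without ever having been removed from the list.
mutable struct PromiseEntry; owner::Any; end
mutable struct DoneTask;     owned::Vector{PromiseEntry}; end

function on_task_exit(t::DoneTask)
    leftover = [p for p in t.owned if p.owner === t]
    isempty(leftover) || error(
        "omitted set: task exited owning $(length(leftover)) promise(s)")
end
\end{verbatim}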
\subsection{Benchmarks}
We evaluate the execution time and memory usage overheads introduced
by our promise deadlock detector on nine task-parallel programs. The
overheads are measured relative to the original, unverified baseline
versions.
\begin{enumerate}
\item Conway~\cite{ConwayBench} parallelizes a 2D cellular automaton
by dividing the grid into chunks. We adapted the code from C to
Java, using our \textsf{Channel} class (\cref{lst:channel}) in place
of MPI primitives used by worker tasks to exchange chunk borders
with their neighbors.
\item Heat~\cite{HeatBench} simulates diffusion on a one-dimensional
surface, with 50 tasks operating on chunks of 40,000 cells for 5000
iterations. Neighboring tasks again use \textsf{Channel} in place of
MPI primitives.
\item QSort sorts 1M integers using a parallelized divide-and-conquer
recursion; the partition phase is not parallelized. This is a
standard technique for parallelizing Quicksort~\cite{QuicksortAlg}
and has been previously implemented using the Habanero-Java
Library~\cite{HJlib}. We implemented the finish construct, which
awaits task termination using promises.
\item Randomized distributes 5000 promises over 2535 tasks spawned in
a tree with branching factor of 3. Each task awaits a random promise
with probability 0.8 before performing some work, fulfilling its own
promises, and awaiting all its child tasks. We chose a random seed
that does not construct a deadlock.
\item Sieve counts the primes below 100,000 with a pipeline of tasks,
each filtering out the multiples of an earlier prime. A similar
program is found in prior work~\cite{Ng16}.
\item SmithWaterman (adapted from HClib~\cite{hclib}; also used in
prior work \cite{TJ,KJ}) aligns DNA sequences having 18,000--20,000
bases. Each task operates on a $25 \times 25$ tile.
\item Strassen (such a program is found in the Cilk, BOTS, and KASTORS
suites~\cite{Cilk,BOTS,Kastors}) multiplies sparse $128 \times 128$
matrices containing around 8000 values. Divide-and-conquer recursion
issues asynchronous addition and multiplication tasks, up to depth
5.
\item StreamCluster (from PARSEC~\cite{Parsec}) computes a streaming
$k$-means clustering of 102,400 points in 128 dimensions, using 8
worker tasks at a time. We replaced the OpenMP barriers with
promises in an all-to-all dependence pattern.
\item StreamCluster2 reduces synchronization in StreamCluster by
replacing some of the all-to-all patterns with all-to-one when it is
correct to do so. We also correct a data race in the original
implementation.
\end{enumerate}
All benchmarks were run on a Linux machine with a 16-core AMD Opteron
processor under the OpenJDK 11 VM with a 1 GB memory limit.
A thread pool schedules asynchronous tasks by spawning a new thread
for a new task when all existing threads are in use. This execution
strategy is necessary in general for promises because there is no
\emph{a priori} bound on the number of tasks that can block
simultaneously.
We measured both execution time and, in a separate run, average memory
usage by sampling every 10 ms.
Each measurement is averaged over thirty runs within the same VM
instance, after five discarded warm-up runs; this is a standard
technique to mitigate the variability of JVM overheads, including JIT
compilation~\cite{Georges07}.
\tabResults
\begin{figure}
\includegraphics[width=\columnwidth]{time-plot.pdf}
\caption{Execution times for each benchmark
showing the mean with a 95\% confidence interval (red).}
\label{fig:time}
\Description{A plot of the baseline and verified execution times
for each benchmark. The Sieve, SmithWaterman, and StreamCluster
benchmarks have noticeable overheads.}
\end{figure}
\Cref{tab:results} gives the unverified baseline measurements for each
program and the overhead factors introduced by the verifiers.
The table also gives the geometric mean of overheads across all
benchmarks. There is an overall factor of \geomean{\prodTime}{\value{benchmarkCount}} in execution
time and \geomean{\prodMem}{\value{benchmarkCount}} in memory usage.
The total number of tasks in the program and the average rates of
promise get and set actions per millisecond (with respect to the
baseline execution time) are also reported.
\Cref{fig:time} represents the execution times of each benchmark,
showing the 95\% confidence interval.
The low overheads indicate that our deadlock detection algorithm does
not introduce serialization bottlenecks.
The overall execution time overheads are within 1.1$\times$ for each
of Conway, Heat, QSort, Randomized, SmithWaterman, Strassen, and
StreamCluster2. The same is true of the memory overheads for this
subset of benchmarks, excepting SmithWaterman. In many cases, the
verified run narrowly outperforms the baseline, which can be
attributed to perturbations in scheduling and garbage collection.
It is worth noting that the execution overhead for Sieve is in excess
of 2$\times$. Sieve has the single highest rate of get operations by
an order of magnitude (over 37,000, compared to SmithWaterman's
536). The Sieve program requires almost 9594 tasks to be live
simultaneously, each waiting on the next, with the potential to form
very long dependence chains for \cref{alg:detector} to traverse.
We can also remark on the 1.4$\times$ memory overhead in
SmithWaterman. Unlike Conway, Heat, Sieve, and both of the
StreamCluster benchmarks, in which most promises are allocated by the
same task that fulfills them, SmithWaterman (and Randomized) allocates
all promises in the root task and moves them later. In maintaining the
$\fldowned$ lists in \cref{alg:owners}, one can make trade-offs
between speed and space. Our implementation favors speed, so instead
of literally removing a promise $p$ from $t.\fldowned$ in
lines~\ref{ln:owners:async:Y} and~\ref{ln:owners:set:C}, we simply
rely on the fact that $p.\fldowner \ne t$ anymore to detect that $p$
should no longer be counted in line~\ref{ln:owners:async:E}.
For comparison with deadlock verification in other settings, the Armus
tool~\cite{Armus} can identify barrier deadlocks as soon as they
occur, with execution overheads of up to 1.5$\times$ on Java
benchmarks.
Our benchmark results represent an acceptable performance overhead
when one desires runtime-identifiable deadlocks and omitted sets with
attributable blame.
\section{Introduction}
The task-parallel programming model is based on the principle that
structured parallelism (using high-level abstractions such as
spawn-sync~\cite{Cilk,OpenMP}, async-finish~\cite{X10,HJlib,hclib},
futures~\cite{JCP,Cpp17}, barriers~\cite{OpenMP}, and
phasers~\cite{HJ,Phaser}) is a superior style to unstructured
parallelism (using explicit low-level constructs like threads and
locks).
Structured programming communicates programmer intent in an upfront
and visible way, providing an accessible framework for reasoning about
complex code by isolating and modularizing
concerns.
However, the \emph{promise} construct, found in mainstream languages
including C++ and Java, introduces an undesirable lack of structure
into task-parallel programming. A promise generalizes a future in that
it need not be bound to the return value of a specific task.
Instead, any task may elect to supply the value, and the code may not
clearly communicate which task is intended to do so.
Promises provide point-to-point synchronization wherein one or more
tasks can await the arrival of a payload, to be produced by another
task.
Although the promise provides a safe abstraction for sharing data
across tasks, there is no safety in the kinds of inter-task blocking
dependencies that can be created using promises.
The inherent lack of structure in promises not only leads to
deadlock-like bugs, in which tasks block indefinitely due to a cyclic
dependence, but also renders such bugs ill-defined and undetectable in
the general case, owing to the lack of information about which task is
supposed to fulfill which promise.
\lstDeadlock
A deadlock-like cycle may only be detected once all tasks have
terminated or blocked. For example, the Go language runtime reports a
deadlock if no task is eligible to run~\cite{Go}. However, if even one
task remains active, this technique cannot raise an alarm.
An example of such a program is in \cref{lst:deadlock}; the root task
and $t_2$ are in a deadlock that may be hidden if $t_1$ is a
long-running task, such as a web server.
An alternative detection approach is to impose timeouts on waits,
which is only a heuristic solution that may raise an alarm when there
is no cycle.
In both of these existing approaches, the detection mechanism may find
the deadlock some time \emph{after} the cycle has been created.
It is instead more desirable to detect a cycle immediately when it
forms.
\subsection{Promise Terminology}
There is inconsistency across programming languages about what to call
a promise and sometimes about what functionality ``promise'' refers
to.
The synchronization primitive we intend to discuss is called by many
names, including promise~\cite{Cpp17}, handled
future~\cite{LambdaFut}, completable
future~\cite{JavaCompletableFuture}, and one-shot
channel~\cite{RustOneshot}.
For us, a promise is a wrapper for a data payload that is initially
absent; each \emph{get} of the payload blocks until the first
and only \emph{set} of the payload is performed.
Setting the payload may also be referred to as completing, fulfilling,
or resolving the promise.
Some languages, such as C++, divide the promise construct into a pair
of objects; in this case, ``promise'' refers only to the half with a
setter method, while ``future'' refers to the half with a getter
method.
In Java, the \textsf{CompletableFuture} class is a promise, as it
implements the \textsf{Future} interface and additionally provides a
setter method.
Habanero-Java introduced the data-driven future~\cite{DDF}, which is a
promise with limitations on when gets may occur. When a new task is
spawned, the task must declare up front which promises it intends to
consume. The task does not become eligible to run until all such
promises are fulfilled.
In JavaScript, the code responsible for resolving a promise must be
specified during construction of the
promise~\cite{JavaScriptPromise}. This is a limitation that makes
deadlock cycles impossible, although the responsible code may omit to
resolve the promise altogether, leading to unexecuted callbacks.
Promises may provide a synchronous or an asynchronous API. The Java
concurrency library provides both, for
example~\cite{JavaCompletableFuture}.
The synchronous API consists of the \emph{get} and \emph{set} methods.
The asynchronous API associates each of the synchronous operations to a
new task.
A call to \emph{supplyAsync} binds the eventual return value of a new
task to the promise.
The \emph{then} operation schedules a new task to operate on the
promise's value once it becomes available.
The asynchronous API can be implemented using the synchronous API.
Conversely, the synchronous API can be implemented using continuations
and an asynchronous event-driven scheduler \cite{Imam14}.
We focus on the synchronous API in this work.
\subsection{Two Bug Classes}
We identify two kinds of synchronization bug in which the improper use
of promises causes one or more tasks to block indefinitely:
\begin{enumerate}
\item the \emph{deadlock cycle}, in which tasks are mutually blocked
on promises that would be set only after these tasks unblock, and
\item the \emph{omitted set}, in which a task is blocked on a promise
that no task intends to set.
\end{enumerate}
However, neither of these bugs manifests in an automatically
recognizable way at runtime unless every task in the program is
blocked.
In fact, the definitions of these bugs describe conditions which
cannot generally be detected.
What does it mean for no task to \emph{intend} to set a promise?
What does it mean that a task \emph{would} set a promise once the task
unblocks?
In a traditional deadlock, say one involving actual locks, the cycle
is explicit: Task 1 holds lock $A$ and blocks while acquiring lock
$B$, because task 2 is holding lock $B$ and concurrently blocked
during its acquisition of lock $A$.
Intention to release a lock (thereby unblocking any waiters) is
detectable by the fact that a task holds the lock.
But we currently have no concept of a task ``holding'' a promise and
no way to tell that a task intends to set it.
\newcommand\defkw[1]{
\expandafter\newcommand\csname kw#1\endcsname{%
\textsf{\textbf{#1}}\ifmmode\ \else\xspace\fi%
}
}
\defkw{new}
\defkw{set}
\defkw{get}
\defkw{async}
\newcommand\deffld[1]{
\expandafter\newcommand\csname fld#1\endcsname{%
\textsf{#1}\xspace%
}
}
\deffld{owner}
\deffld{owned}
\deffld{waitingOn}
\newcommand\Null{\mathit{null}}
\subsection{Need for Ownership Semantics}
Consider the small deadlock in \cref{lst:deadlock}.
Two promises, $p,q$, are created.
Task $t_2$ waits for $p$ prior to setting $q$, whereas the root task
waits for $q$ prior to setting $p$.
Clearly a deadlock cycle arises? Not so fast. To accurately call this
pattern a deadlock cycle requires knowing that task $t_1$ will not
ever set $p$ or $q$. Such a fact about what \emph{will} not happen is
generally not determinable from the present state without an offline
program analysis.
For this reason, a deadlock cycle among promises evades runtime
detection unless the cycle involves every currently executing task.
\lstOmittedSet
Now consider the bug in \cref{lst:omittedset}.
Two promises, $r,s$, are created. According to the comments, task
$t_3$ is responsible for setting both, and it subsequently delegates
the responsibility for $s$ to $t_4$.
However, $t_4$ fails to perform its intended behavior, terminating
without setting $s$.
The root task then blocks on $s$ forever.
If a bug has occurred, we would like to raise an alarm at runtime when
and where it occurs.
Where is this bug? Should the root task not have blocked on $s$?
Should $t_4$ have set $s$? Should $t_3$ have set $s$?
The blame cannot be attributed, and the bug may, in fact, be in any
one of the tasks involved.
Furthermore, \emph{when} does this bug occur?
The \emph{symptom} of the bug manifests in the indefinite blocking of
the root task, potentially \emph{after} $t_4$ terminates successfully.
If some other task may yet set $s$, then this bug is not yet confirmed
to have occurred.
Omitted sets evade runtime detection and, even once discovered, evade
proper blame assignment.
We propose to augment the task creation syntax (\kwasync in our
examples) to carry information about promise ownership and
responsibility within the code itself, not in the comments.
In doing so, omitted sets become detectable at runtime with blame
appropriately assigned.
Moreover, programmer intent is necessarily communicated in the code.
Finally, in knowing which task is expected to set each promise, it
becomes possible to properly discuss deadlock cycles among promises.
\subsection{Omitted Set in the Wild}
\lstAmazon
An example of an omitted set bug was exhibited by the Amazon Web
Services SDK for Java (v2) when a certain checksum validation
failed~\cite{AWSBugReport}.
An abbreviated version of the code is given in \cref{lst:amazon};
line~\ref{ln:amazon:fix} was absent prior to the bug fix.
The control flow ensures that either the exception-handling code or
the non-exceptional code is executed, not both
(line~\ref{ln:amazon:bug})~\cite{AWSBugIntroduced}.
However, only the non-exceptional code would set the value of a
\texttt{CompletableFuture} (Java's promise) to indicate the work was
completed (line~\ref{ln:amazon:complete}), whereas the
\textsf{onError} method would take no action.
If checksum validation failed after a file download, any consumer
tasks waiting for the download to complete would block indefinitely.
A month later, the omitted set bug was identified and
corrected by adding line~\ref{ln:amazon:fix}~\cite{AWSBugFixed}.
When this bug arises at runtime, the symptom (the blocked consumer) is
far from the cause (the omitted set), and the bug is not readily
diagnosable.
If the runtime could track which tasks are responsible for which
promises, then this bug could be detected and reported as an exception
as soon as the responsible task terminates.
Using our approach, the bug would be detected when the task running
the \texttt{onComplete} callback finishes, and the alarm would name
the offending task and the unfulfilled promise.
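A simplified sketch of the pattern (our reconstruction with hypothetical names; \cref{lst:amazon} shows the abbreviated SDK code):
\begin{lstlisting}[language=Java]
CompletableFuture<Void> done = new CompletableFuture<>();

void onComplete() {
    if (checksumValid()) {             // hypothetical validation check
        done.complete(null);           // non-exceptional path sets the promise
    } else {
        onError(new RuntimeException("checksum mismatch"));
    }
}

void onError(Throwable t) {
    // Bug: the promise is never completed on this path, so any consumer
    // awaiting 'done' blocks indefinitely.  The eventual fix was, in effect:
    // done.completeExceptionally(t);
}
\end{lstlisting}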
\subsection{Contributions}
In this work, we propose the addition of \emph{ownership semantics} for
promises which enables a task's intention to set a promise to be
reflected in the runtime state.
In so doing,
\begin{enumerate}
\item we enable a precise definition of a deadlocked cycle of promises
in terms of runtime state;
\item we define a second kind of blocking bug, the \emph{omitted
set}, which does not involve a cycle;
\item we require important programmer intent to be encoded explicitly
and to respect a runtime-verifiable policy, thereby enabling
structured programming for promises.
\end{enumerate}
In addition to these theoretical contributions,
\begin{enumerate}
\item we introduce a new lock-free algorithm for detecting our
now-identifiable deadlock-cycle and omitted-set bugs \emph{when they
occur};
\item we identify properties critical for establishing the correctness
of the algorithm under weak memory consistency and show how to
ensure these properties hold under the TSO, Java, and C++ memory
models;
\item we prove that our algorithm precisely detects every deadlock
without false alarms;
\item we experimentally show that a Java implementation has low
execution time and memory usage overheads on nine benchmarks
relative to the original, unverified baseline (geometric mean
overheads of \geomean{\prodTime}{\value{benchmarkCount}} and \geomean{\prodMem}{\value{benchmarkCount}}, respectively).
\end{enumerate}
\input{ownership}
\input{detector}
\input{weak}
\input{correctness}
\input{evaluation}
\input{related}
\section{Conclusion}
We have introduced an ownership mechanism for promises, whereby each
task is responsible for ensuring that all of its owned promises are
fulfilled.
This mechanism makes it possible to identify a bug, called the omitted
set, at runtime when the bug actually occurs and to report which task
is to blame for the error.
The ownership mechanism also makes it meaningful, for the first time,
to formally define, discuss, and detect deadlock cycles among tasks
synchronizing with promises. Such a bug is now detectable as soon as
the cycle forms.
In our approach, any code that spawns a new asynchronous task must
name the promises which are to be transferred to the new task.
The programmer must already be aware of this critical information in
order to even informally reason about omitted set and deadlock
bugs. We now ask that it be explicitly notated in the code.
We provided an algorithm to check for compliance with the ownership
policy at runtime, thereby detecting omitted sets, as well as an
algorithm for detecting deadlock cycles using ownership information.
Both types of bug are detected when they occur, not after the fact.
Our deadlock detector is provably precise and correct under a weak
memory model and we described how to obtain this correct behavior
under the TSO, Java, and C++ memory models. Every alarm corresponds to
a true deadlock and every deadlock results in an alarm.
Experimental evaluation demonstrates that our lock-free approach to
deadlock detection exhibits low execution time and memory
overheads relative to an uninstrumented baseline.
\balance
\begin{acks}
This work is supported by the \grantsponsor{NSF}{National Science
Foundation}{https://www.nsf.gov} under Collaborative Grant
No.~\grantnum{NSF}{1822919} and Graduate Research Fellowship Grant
No.~\grantnum{NSF}{1650044}.
\end{acks}
\section{Ownership Policy}
\def\ensuremath{\mathcal{L}_p}\xspace{\ensuremath{\mathcal{L}_p}\xspace}
\def\ensuremath{\mathcal{P}_o}\xspace{\ensuremath{\mathcal{P}_o}\xspace}
In promise-based synchronization, a task does not directly await
another task; it awaits a promise, thereby \emph{indirectly} waiting
on whichever task fulfills that promise.
It is a runtime error to fulfill a promise twice, so there ought to be
one and only one fulfilling task.
However, the relationship between a promise and the task which
\emph{will} fulfill it is not explicit, and this opacity inhibits the
identification of deadlocks.
To make this relationship explicit and meaningful, we say that each
promise is \emph{owned} by exactly one task at any given time.
The owner is responsible for fulfilling the promise eventually, or
else handing ownership off to another task.
Ownership hand-offs may only occur at the time of spawning a new
task.
We augment the \kwasync keyword, used to spawn tasks, with a list of
promises currently owned by the parent task that should be transferred
to the new child.
\subsection{Language Extension}
We define an abstract language, showing only its synchronization
instructions and leaving its sequential control flow and other
instructions unspecified.
For simplicity, we have abstracted away the payload values of promises
and refer to individual promises by globally unique identifiers.
\begin{definition}
The \ensuremath{\mathcal{L}_p}\xspace language consists of task-parallel programs, $P$, whose
synchronization instructions have the syntax
%
\begin{align*}
\kwnew p ~|~ \kwset p ~|~ \kwget p
~|~ \kwasync(p_1,\ldots,p_n)\ \{ P \}
\end{align*}
%
where $n$ may be $0$.
\end{definition}
The instruction $\kwnew p$ represents the point of allocation for the
promise $p$, and we assume well-formed programs do not allocate a
given $p$ twice or operate on $p$ prior to its allocation.
Each invocation of $\kwget p$ blocks the current task until after
$\kwset p$ has been invoked for the first (and only) time.
The \kwasync block creates a new task to execute a sub-program $P$;
the block is annotated with a list of promises, which should be moved
from the parent task to the new task.
In many task-parallel languages, \kwasync automatically creates a
future which can be used to retrieve the new task's return value. We
can readily reproduce this behavior using promises in the pattern
$\kwnew p; \kwasync (p, \ldots)~\{ \ldots; \kwset p \}$.
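In Java terms, this pattern corresponds to the following sketch (ours; \textsf{executor} and \textsf{compute} are placeholders):
\begin{lstlisting}[language=Java]
CompletableFuture<Integer> p = new CompletableFuture<>(); // new p
executor.execute(() -> {                                  // async(p, ...) {
    int result = compute();                               //   ...
    p.complete(result);                                   //   set p }
});
\end{lstlisting}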
\begin{definition}
The \emph{ownership policy}, \ensuremath{\mathcal{P}_o}\xspace, maintains state during the
execution of an \ensuremath{\mathcal{L}_p}\xspace program in the form of a map
$\fldowner : \mathit{Promise} \to \mathit{Task} \cup \{ \Null \}$
according to these rules:
%
\begin{enumerate}
\item When task $t$ executes $\kwnew p$, set $\fldowner(p) := t$.
%
\label{rule:new}
\item When task $t$ spawns task $t'$ as
$\kwasync(p_1,\ldots,p_n)\ \{ P \}$, prior to $t'$ becoming
eligible to run, ensure $\fldowner(p_i) = t$ and update
$\fldowner(p_i) := t'$ for each $p_i$.
%
\label{rule:async}
\item When task $t$ terminates, ensure the set of promises
$\fldowner^{-1}(t)$ is empty.
%
\label{rule:exit}
\item When task $t$ executes $\kwset p$, ensure that
$\fldowner(p) = t$ and set $\fldowner(p) := \Null$.
%
\label{rule:set}
\end{enumerate}
\end{definition}
These four rules together ensure that there is at least one \kwset for
each promise, with omitted sets being detected by
rule~\ref{rule:exit}. Rule~\ref{rule:set} guarantees there is at most
one \kwset.
Our proposed modification to the program given in \cref{lst:deadlock}
is to annotate the \kwasync in line~\ref{ln:deadlock:t2} as
$\kwasync(q)$, indicating that $t_2$ takes on the responsibility to
set $q$.
It is now possible to trace the cycle when it occurs: the root task
awaits $q$, which is owned by $t_2$, while $t_2$ awaits $p$, which is
owned by the root task. It is clear that $t_1$, whose \kwasync is not
given any parameters, is not involved, as it can set neither $p$ nor
$q$ (rule~\ref{rule:set}).
The proposed modification to the program given in
\cref{lst:omittedset} is to write $\kwasync(r,s)$ in
line~\ref{ln:omittedset:t3} and $\kwasync(s)$ in
line~\ref{ln:omittedset:t4}. That is, the information already present
in the comments is incorporated into the code itself.
The moment $t_4$ terminates, the runtime can observe that $t_4$ still
holds an outstanding obligation to set $s$. We treat this as an error
immediately (rule~\ref{rule:exit}), irrespective of whether any task
is awaiting $s$.
\subsection{Algorithm for Ownership Tracking}
\algOwners
\Cref{alg:owners} implements the \ensuremath{\mathcal{P}_o}\xspace policy by providing code to be
run during \kwnew, \kwasync, and \kwset operations.
Each promise has an $\fldowner$ field to store the task that is
currently its owner, and each task has an associated $\fldowned$
list that maintains the inverse map, $\fldowner^{-1}$.
The functions $\mathit{currentTask}$ and $\mathit{getCurrentTask}$
interact with thread-local storage.
In compliance with \ensuremath{\mathcal{P}_o}\xspace rule~\ref{rule:new}, the \textsc{New}
procedure creates a promise owned by the currently running task
(line~\ref{ln:owners:new:A}) and adds this promise to that task's
owned list (line~\ref{ln:owners:new:B}).
$\textsc{Async}(P, f)$ schedules $f$ to be called asynchronously as a
new task and moves the promises listed in $P$ into this task.
These promises are first confirmed to belong to the parent task
(line~\ref{ln:owners:async:A}), then moved into the child task
(lines~\ref{ln:owners:async:A}--\ref{ln:owners:async:C}), in
accordance with rule~\ref{rule:async}.
(Line~\ref{ln:owners:async:B} is in preparation for
\cref{alg:detector}, presented in \cref{sec:detector}.)
Once the child task terminates, rule~\ref{rule:exit} requires that the
task not own any remaining promises (line~\ref{ln:owners:async:E}).
The \textsc{Init} procedure shows how to set up a root task to execute
the main function.
Finally, $\textsc{Set}(p,v)$ achieves rule~\ref{rule:set}, checking
that the current task owns $p$ and marking $p$ as fulfilled by
assigning it no owner
(lines~\ref{ln:owners:set:A}--\ref{ln:owners:set:C}).
The procedure then invokes the underlying mechanism for actually
setting the promise value to $v$ (line~\ref{ln:owners:set:D}).
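The following Java-flavored sketch (ours) condenses the essence of \cref{alg:owners}; \textsf{currentTask}, \textsf{schedule}, and \textsf{deliver} stand in for runtime internals, and the bookkeeping for \cref{alg:detector} is omitted.
\begin{lstlisting}[language=Java]
final class Promise {
    volatile Task owner;                            // null once fulfilled
}
final class Task {
    final List<Promise> owned = new ArrayList<>();  // inverse of the owner map
}

Promise newPromise() {                     // rule 1
    Promise p = new Promise();
    Task t = currentTask();
    p.owner = t;
    t.owned.add(p);
    return p;
}

void async(List<Promise> ps, Runnable f) { // rules 2 and 3
    Task parent = currentTask(), child = new Task();
    for (Promise p : ps) {
        assert p.owner == parent;          // parent must own p to transfer it
        parent.owned.remove(p);
        p.owner = child;
        child.owned.add(p);
    }
    schedule(child, () -> {
        f.run();
        assert child.owned.isEmpty();      // omitted-set check on termination
    });
}

void set(Promise p, Object v) {            // rule 4
    Task t = currentTask();
    assert p.owner == t;                   // only the owner may set p
    t.owned.remove(p);
    p.owner = null;
    deliver(p, v);                         // fulfill the underlying promise
}
\end{lstlisting}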
As an example of how \Cref{alg:owners} enforces compliance with \ensuremath{\mathcal{P}_o}\xspace,
refer again to \cref{lst:omittedset}. When promise $s$ is first
created, it belongs to the root task (\cref{alg:owners}
\cref{ln:owners:new:B}). If the \kwasync that creates $t_4$ is
annotated with $s$, then \cref{alg:owners} \cref{ln:owners:async:C}
changes the owner of $s$ to $t_4$. Since $t_4$ does not set $s$, upon
termination of $t_4$, an assertion fails in \cref{alg:owners}
\cref{ln:owners:async:E}.
The offending task, $t_4$, and the outstanding promise, $s$, are
directly identifiable and can be reported in the alarm.
\section{Related Work}
Task-parallel programming is prevalent in a variety of languages and
libraries.
Multilisp~\cite{Multilisp} is one of the earliest languages with
futures, a mechanism for parallel execution of functional code.
Fork-join parallelism is employed in Cilk~\cite{Cilk}, and the more
general async-finish with futures model was introduced in
X10~\cite{X10}.
Habanero-Java~\cite{HJ} modernized X10 as an extension to Java and,
later, as a Java library, HJlib~\cite{HJlib}; this language
incorporates additional synchronization primitives, such as the
phaser~\cite{Phaser} and the data-driven future~\cite{DDF}, which is a
promise-like mechanism.
Many other languages, libraries, and extensions include spawning and
synchronizing facilities, whether for threads or lightweight tasks,
including Chapel~\cite{Chapel}, Fortress~\cite{Fortress},
OpenMP~\cite{OpenMP}, Intel Threading Building Blocks~\cite{TBB},
Java~\cite{JCP}, C++17~\cite{Cpp17}, and Scala~\cite{ScalaFutures}.
The promise, as we define it, can be traced back to the I-structures
of the Id language~\cite{IStructures}, which are also susceptible to
deadlock.
Cells of data in an I-structure are uninitialized when allocated, may
be written to at most once, and support a read operation that blocks
until the data is available.
The classic definition of a deadlock is found in Isloor and
Marsland~\cite{Deadlock}, which is primarily concerned with concurrent
allocation of limited resources.
Solutions in this domain fall into the three categories of Coffman:
static prevention, run-time detection, and run-time
avoidance~\cite{Coffman71}.
We consider logical deadlocks, which are distinct from resource
deadlocks in that there is an unresolvable cyclic dependence among
computational results.
Solutions in the logical deadlock domain include techniques that
dynamically detect cycles~\cite{Luecke03, Krammer04, Krammer08,
Hilbrich09, Vo11, Hilbrich12}, that raise alarms upon the formation
or possible formation of cycles~\cite{Agarwal06, Boudol09, Gerakios11,
Armus, KJ, TJ}, that statically check for cycles through
analysis~\cite{Williams05, Naik09, Ng16} or through type
systems~\cite{Boyapati02,Vasconcelos09}, or that preclude cycles by
carefully limiting the blocking synchronization semantics available to
the programmer, either statically or dynamically~\cite{X10, Phaser,
HJ, KJ, TJ}.
The present work includes a dynamic, precise cycle detection
algorithm, enabled only by the introduction of a structured ownership
semantics on the otherwise unrestricted promise primitive.
Futures are a special case of promises where each one is bound to a
task whose return value is automatically put into the promise.
Transitive Joins~\cite{TJ} and its predecessor, Known Joins~\cite{KJ},
are policies with runtime algorithms for deadlock detection on
futures. They are, in general, not applicable to promises. These two
techniques impose additional structure on the synchronization pattern
by limiting the set of futures that a given task may await at any
given time.
Recent work contrasts the superior flexibility of promises over
futures with the problematic loss of any guarantee that they will be
fulfilled, and develops a \emph{forward} construct as a
middle ground~\cite{Forward}. Forwarding can be viewed in terms of
delegating promise ownership, but it is restricted in that 1) it moves
only a single promise into a new task, and 2) in particular, it moves
only the implicit promise that is used to retrieve a task's return
value. In terms of futures, forwarding amounts to re-binding a future
to a new task.
Other synchronization constructs would benefit from annotations
similar to the one we have proposed for promises. This includes
event-driven programming models in which events have semantics similar
to those of promises.
JavaScript, though a single-threaded language, still uses an
asynchronous task model to schedule callbacks on an event
loop~\cite{JavaScriptAsync}, and could benefit from our approach.
Likewise, our approach is directly applicable to multithreaded
execution models, such as Concurrent Collections~\cite{CnC} and the Open
Community Runtime~\cite{OCR}, that use event-driven execution as a fundamental
primitive.
As another example, the MPI blocking receive primitive must name the sending
task; from this information a waits-for graph for deadlock detection
can be directly constructed~\cite{Hilbrich09}. In addition,
nonblocking communications in MPI
use {\tt MPI\_Request} objects in a manner similar to promises, and
the {\tt MPI\_Wait} operation is akin to the get operation on promises.
Languages with barriers and phasers sometimes require the
participating tasks to \emph{register} with the
construct~\cite{Phaser}.
Notably, this kind of registration is absent from the Java API, which
is problematic for the Armus deadlock tool~\cite{Armus}. In that work,
registration annotations had to be added to the Java benchmarks in
order to apply the Armus methodology.
In this work, we considered programs which only use promises for
blocking synchronization, and we constrained ownership transfer to
occur only when a task is spawned.
Since a promise can have multiple readers or no readers at all, it is
not possible in principle to use one promise to synchronize the
ownership hand-off of a second promise between two existing tasks. We
cannot guarantee that the receiving task exists and is unique.
In future work, one could consider a slightly higher abstraction in
the form of a pair of promises acting like a rendezvous, which is a
primitive in languages like Ada and Concurrent
C~\cite{AdaConcurrentC}. Such a synchronization pattern could be
leveraged to hand off promise ownership since there would be a
guaranteed single receiving task.
The Rust language incorporates affine types in its move semantics to
ensure that certain objects have at most one extant reference at all
times~\cite{Rust}. The movement of promise ownership from one task to
another and the obligation to fulfill each promise exactly once may be
expressible at compile time through a linear type system, in which
each reference must be used exactly once.
\section{Weakly Consistent Definition of Deadlock}
\label{sec:def}
With a few tweaks, we can obtain a correctness guarantee for our
deadlock detector under a weak memory model, which implies the same
guarantee under any stronger model, including sequential consistency.
First, we must define this weak memory model and give a definition of
deadlock that is compatible with it.
In practice, we do not want to assume that maps such as the
$\fldowner$ field have a single, globally consistent state that is
observed by all tasks. Machines and languages often have weaker
consistency guarantees, and there are performance costs for requesting
stronger consistency due to the synchronization required.
Instead, we will assume a weak memory model and use unsynchronized
accesses whenever possible.
We now define this weak memory model, which we will use to establish
the correctness of our deadlock detection algorithm under models at
least as strong as this one.
\begin{definition}
The \emph{happens-before} (h.b.) order is a partial order over the
instructions in a program execution that subsumes the intra-task
program order and, upon spawning each new task, the ordering of
\cref{alg:owners} line~\ref{ln:owners:async:D} (the start of the new
task) after \cref{alg:owners} line~\ref{ln:owners:async:C} (the last
action of the parent task before spawning).
%
The reverse of happens-before is \emph{happens-after}.
\end{definition}
\begin{definition}
With respect to a given memory location, a read may only
\emph{observe} a (not necessarily unique) last write which
happens-before it or any write with which the read is not
h.b.~ordered.
%
Two writes or a write and read of the same location which are not
h.b.~ordered are \emph{racing}.
\end{definition}
A typical language has a more refined happens-before ordering and
definition of observable writes, especially relating to reads-from
edges on promises; however, we will not need to appeal to such edges
in our formalism.
\begin{definition}
A program in \ensuremath{\mathcal{L}_p}\xspace is \emph{well-formed} if, in every execution, for
each promise, $p$, there is at most one $\kwnew~p$ instruction, and
each \kwset, \kwget, or \kwasync instruction referring to $p$
happens-after such a $\kwnew~p$.
%
\label{def:wf}
\end{definition}
We note that although the owners of different promises may be updated
concurrently, it is not possible in \cref{alg:owners} for a
write-write race to occur on the same owner field.
\begin{lemma}
Consider an execution of a well-formed program. If $w_1, w_2$ are
two writes to $p.\fldowner$ in \cref{alg:owners}, then $w_1$ and
$w_2$ are not racing.
%
Further, if $r$ is a read of $p.\fldowner$ by task $t$, and $r$
observes the value to be $t$, then $r$ does not race with the write
it observes.
%
\label{lem:writeorder}
\end{lemma}
\begin{proof}
The two claims can be shown together.
%
Line~\ref{ln:owners:new:A} represents the initialization of the
$\fldowner$ field and so happens-before every other write to it.
%
The writes in lines~\ref{ln:owners:async:C}
and~\ref{ln:owners:set:B} each happen-after a read of the same field
observes the value to be the currently executing task
(lines~\ref{ln:owners:async:X},~\ref{ln:owners:set:A}).
%
Take this together with the fact that there are only two ways to set
$p.\fldowner$ to $t$: line~\ref{ln:owners:new:A}, executed by $t$
itself, or line~\ref{ln:owners:async:C}, executed by the parent of
$t$ prior to spawning $t$. In either case, writing $t$ to
$p.\fldowner$ happens-before any read of $p.\fldowner$ by $t$
itself.
\end{proof}
Since we do not assume a globally consistent state, we have to be
careful in the definition of deadlock cycle. Two tasks need not agree
on the value of $\fldowner(p)$ for a given promise, $p$.
Instead of freely referring to $\fldowner$ as a map
$\mathit{Promise} \to \mathit{Task} \cup \{ \Null \}$, we must
additionally state which task's perspective is being used to observe
the $\fldowner$ map.
\begin{definition}
A non-empty set of tasks, $T$, is in a \emph{deadlock cycle} if
for every task $t \in T$,
%
\begin{enumerate}
\item $t$ is executing $\kwget p_t$ for some promise, $p_t$,
\item there exists a task, $o_{p_t}$, also in $T$ which observes that
$\fldowner(p_t) = o_{p_t}$,
\end{enumerate}
and $T$ is minimal with respect to these constraints.
%
The set of promises associated to the deadlock is
$\{ p_t ~|~ t \in T \}$.
\end{definition}
The subtle point in this definition is that task $o_{p_t}$ necessarily
has the most up-to-date information about the owner of $p_t$, since
$o_{p_t}$ is itself the owner. Per \cref{lem:writeorder}, we know that
all the writes to $p_t.\fldowner$ are ordered and that $o_{p_t}$ is
observing the last such write, since only $o_{p_t}$ is capable of
performing the next write to follow the observed one.
\section{Introduction} \label{sec:intro}
Quasars with billion-solar-mass black holes have been detected well within the epoch of reionization, when the universe was less than one Gyr old \citep{maz17,ban18,wang18,yang20}. These objects challenge our understanding of the formation and growth of supermassive black holes (SMBHs). A potential mechanism to grow these SMBHs is through jet-enhanced accretion, which can enable Super-Eddington accretion rates \citep{jk08,vol15}. Therefore, the study of jets on the first quasars could provide key clues about this outstanding question in astrophysics.
Quasars with strong radio emission are of particular interest given that Very Long Baseline Interferometry (VLBI) radio observations are the only way to investigate their jets on pc-scales.
In this paper, we present VLBI observations of the recently discovered radio-loud quasar PSO~J172.3556+18.7734 (hereafter P172+18) at $z = 6.823$ \citep{ban21}.
This is currently the only radio source known at $z>6.45$.
P172+18\ is among the fastest accreting quasars at any redshift with an Eddington ratio of $\sim2.2$, and hosts a supermassive black hole with a mass of $M_{\mathrm{BH}} \sim 2.9 \times 10^{8} M_{\odot}$. Karl G. Jansky Very Large Array (VLA)
observations show that the optical quasar is associated with an unresolved radio source with a
flux density of $510 \pm 15$\,$\mu$Jy at 1.52\,GHz and $222 \pm 9$\,$\mu$Jy at 2.87\,GHz, and a
size smaller than $1\farcs9 \times 0\farcs87$, or $10.1 \times 4.6$\,kpc \citep{ban21}.
The implied rest frame 1.4\,GHz luminosity density is
$L_{\rm \nu,1.4\,GHz} = (5.8 \pm 0.2) \times 10^{26}$~W~Hz$^{-1}$.
In addition to the quasar, a second radio source was identified with the VLA at an angular distance of 23\farcs1 from the quasar.
Although the redshift of this ``companion'' source is still unconstrained, the chance of having these two radio sources at the same distance as the quasar itself is less than $2\%$ based on deep field number counts \citep{ban21}.
This ``companion'' source has no optical, near-infrared, or mid-infrared counterpart in the currently available shallow data. It is unresolved at 1.52\,GHz with a flux density of $732 \pm 15$\,$\mu$Jy and a size smaller than $1\farcs6 \times 0\farcs69$, and resolved at 2.87\,GHz with a flux density of $432 \pm 20$\,$\mu$Jy and a deconvolved size of $1\farcs3 \times 0\farcs8$ \citep{ban21}. The coordinates of the quasar and the ``companion'' source are listed in Table\,1 of \citet{ban21}.
In Section \ref{obs} we present the 1.53, 4.67, and 7.67\,GHz VLBI observations and their data reduction.
In Section \ref{resultsandanalysis} we present the results and analysis for P172+18\ and the ``companion'' source, separately.
Finally, in Section \ref{disc} we discuss the VLBI results and compare the quasar P172+18\ with other known radio-loud quasars near $z \sim 6$.
We adopt a flat cosmology with $H_0 = 70
\,\mbox{km\,s}^{-1}$\,Mpc$^{-1}$, $\Omega_M = 0.3$, and
$\Omega_\Lambda = 0.7$. At the redshift of this quasar, 1\,mas
corresponds to 5.3\,pc.
\section{Observations and Data Reduction} \label{obs}
The VLBI observations of P172+18\ were carried out at 1.53\,GHz (L-band) on
2019 July 20, and simultaneously at 4.67 and 7.67\,GHz (C-band) on 2019 October 29,
using the Very Long Baseline Array (VLBA) of the
NRAO. At L-band, eight 32\,MHz data channel pairs were
recorded at each station using the ROACH Digital Backend and the
polyphase filterbank (PFB) digital signal-processing algorithm, both with right- and left-hand circular polarizations, and sampled at two bits. The total bandwidth was 256\,MHz centered at 1.53\,GHz. The total observing time at L-band was 6\,hr, with 4.2\,hr on target.
At C-band, which nominally covers the frequency range
3.9--7.9\,GHz, four 64\,MHz data channel pairs were
recorded at each station using the ROACH Digital Backend and the digital downconverter (DDC) signal-processing algorithm, also both with right- and left-hand circular polarizations, and sampled at two bits. Two data channel pairs were tuned near the lower end of the C-band receiver's frequency span, and the other two near its higher end, allowing for simultaneous observations at widely separated frequencies. The total bandwidth per used C-band frequency tuning was 128\,MHz, and the two tunings were centered at 4.67\,GHz and 7.67\,GHz. The total observing time at C-band was also 6\,hr (4.2\,hr on-target time).
At both L- and C-bands, the VLBA observations utilized nodding-style phase referencing using the calibrator J1122+1805, which is separated by $1\rlap{.}{^\circ}78$ from the target
source. The phase referencing cycle time was 3.45\,min:
2.75\,min on the target and 0.7\,min on the calibrator.
The uncertainty in the calibrator's position is
0.08\,mas in right ascension and 0.11\,mas in declination \citep{cha20}. The accuracy of the phase calibrator position is important in phase-referencing observations \citep {WAL99}, because it determines the accuracy of the absolute position of the target. As employed in these observations, phase referencing would preserve absolute astrometric positions to better than $\pm 0\rlap{.}^{''}01$ \citep{FOM99}. The observations also included the calibrator source 4C\,39.25 which was used as a fringe finder and bandpass calibrator. Amplitude calibration was performed using measurements of the antenna gain and the system temperature of each station.
The data were correlated with the VLBA DiFX correlator \citep{DEL11} in Socorro, NM, with 1\,s correlator integration time. Two correlation passes were performed on the data of each
observing session: the first pass was at the position of the quasar P172+18, and the second was at the position of the VLA detected ``companion'' radio source located at an angular distance of 23\farcs1 from the quasar.
Data reduction and analysis were performed using the Astronomical Image Processing System (AIPS: \citealt{G2003}) following standard VLBI data reduction procedures. The phase reference source was self-calibrated and the solutions were applied on the target fields (the quasar and the ``companion''). Deconvolution and imaging were performed using a grid weighting near the mid point between pure natural and pure uniform
(Robust=1 in AIPS task IMAGR).
\begin{figure}
\epsscale{1.2}
\plotone{f1.png}
\caption
{VLBA continuum image of the $z=6.82$ quasar P172+18\ at 1.53\,GHz and $14.3 \times
5.8$~mas resolution (P.~A.=$-18^{\circ}$). The contour levels are at $-3$, 3, 4.5, 6,
7.5, and 9 times the rms noise level, which is
27.5~$\mu$Jy~beam$^{-1}$. The gray-scale range is indicated by the step wedge at the
right side of the image.
\label{fig:vlba1}}
\end{figure}
\section{Results and Analysis} \label{resultsandanalysis}
\subsection{The Quasar P172+18}
\label{raqso}
\begin{figure*}[ht]
\epsscale{0.9}
\plottwo{f2a.png}{f2b.png}
\caption
{VLBA continuum images of the $z=6.82$ quasar P172+18\ at 4.67 ({\it Left}) and 7.67\,GHz
({\it Right}). Their synthesized beam sizes are $3.64 \times 1.62$~mas (P.~A.=$-2^{\circ}$),
and $2.39 \times 0.98$~mas (P.~A.=$-7^{\circ}$), respectively.
The contour levels are at $-3$ and 3 times the rms noise level in each image, which is
17.2~$\mu$Jy~beam$^{-1}$ at 4.67\,GHz and 20.4~$\mu$Jy~beam$^{-1}$ at 7.67\,GHz. The
gray-scale range is indicated by the step wedge at the right side of each image.
The plus sign in each image denotes the VLBA position of the dominant component of P172+18\ at 1.53\,GHz as seen in Figure\,1, which is
R.A.\,(J2000)=\,11$^{\rm h}$29$^{\rm m}$25.36227$^{\rm s}$,
Decl.\,(J2000)=\,$+$18$^\circ$46$^\prime$24\farcs2808.
\label{fig:vlba2}}
\end{figure*}
Figure~1 shows the VLBA 1.53\,GHz image of
P172+18\ at an angular resolution of $14.3 \times
5.8$~mas ($75.8 \times 30.7$\,pc at $z=6.82$) with a position angle of
P.~A.=$-18^{\circ}$. The rms noise in the image is 27.5~$\mu$Jy~beam$^{-1}$.
The observing frequency of 1.53\,GHz corresponds
to a rest frame frequency of 11.96\,GHz.
The image shows a dominant continuum source with a peak flux density of
$289.5 \pm 27.5$~$\mu$Jy~beam$^{-1}$, and a weak radio extension from the main source to the north-east.
Performing a two component 2-dimensional Gaussian
fit resulted in a resolved component for the dominant radio source with a total flux density of
$398.4 \pm 61.4$\,$\mu$Jy, and a deconvolved size of
$9.9 \times 3.5$~mas ($52.5 \times 18.6$\,pc at $z=6.82$). The
corresponding intrinsic brightness temperature value, i.e., at the rest frame frequency of 11.96\,GHz, is ($4.7 \pm 0.7) \times 10^7$\,K, implying a non-thermal emission mechanism.
The weak extension was fit by an unresolved component with a flux density of $ 84.2 \pm 27.5$\,$\mu$Jy ($\sim 3.1\sigma$).
We note that Gaussian components, as reported here, provide a convenient measure of source
structure even if they do not necessarily represent discrete physical structures.
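For reference, the brightness temperatures quoted in this section follow from the standard relation for a Gaussian source,
\begin{equation*}
T_B \simeq 1.22\times10^{12}\,(1+z)\,\frac{(S_\nu/{\rm Jy})}{(\nu/{\rm GHz})^{2}\,(\theta_{\rm maj}/{\rm mas})\,(\theta_{\rm min}/{\rm mas})}~{\rm K},
\end{equation*}
which, with $S_\nu = 398.4$\,$\mu$Jy, $\nu = 1.53$\,GHz, $\theta_{\rm maj}\times\theta_{\rm min} = 9.9 \times 3.5$\,mas, and $z=6.82$, reproduces $T_B \approx 4.7\times10^7$\,K.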
Figure~2 shows the VLBA image of P172+18\ at 4.67\,GHz ({\it Left}) and 7.67\,GHz ({\it Right}). Their restoring beams are $3.64 \times 1.62$~mas ($19.3 \times 8.6$\,pc) and $2.39 \times 0.98$\,mas ($12.7 \times 5.2$\,pc), respectively. The plus sign in each image denotes the location of the dominant source detected at 1.53\,GHz with the VLBA. The 3$\sigma$ point source upper limits are 51.6 and 61.2\,$\mu$Jy at 4.67 and 7.67\,GHz, respectively. The corresponding 3$\sigma$ upper limits to the intrinsic brightness temperatures are $3.8 \times 10^6$\,K and $4.2 \times 10^6$\,K at the rest frame frequencies of 36.52 and 59.98\,GHz, respectively.
The limit on the spectral index between 1.53\,GHz and 4.67\,GHz for the dominant radio source seen in Figure~1, using the 3$\sigma$ point source limit at 4.67\,GHz, is $\alpha^{1.53}_{4.67} < -1.55$ (adopting $S\sim \nu^{\alpha}$). Because the dominant radio source at 1.53\,GHz is resolved, this spectral index limit has been derived using its peak flux density value, which is 289.5\,$\mu$Jy~beam$^{-1}$, on the assumption that this represents the maximum flux density of a point source at this frequency. This derived value is steeper than that reported by \citet{ban21} between 1.52 and 2.87\,GHz with the VLA at a few arcsec resolution, which is $\alpha^{1.52}_{2.87} = -1.31$.
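Written out, with the 1.53\,GHz peak flux density and the 4.67\,GHz limit from above, the bound is
\begin{equation*}
\alpha^{1.53}_{4.67} < \frac{\log\left(51.6/289.5\right)}{\log\left(4.67/1.53\right)} \approx -1.55 .
\end{equation*}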
\subsection{The ``Companion'' Radio Source}
Figure~3 is the VLBA 1.53\,GHz image of the ``companion'' radio continuum source seen in the VLA images at a distance of 23\farcs1 from the quasar P172+18\ at 1.52 and 2.87\,GHz \citep{ban21}. The plus sign denotes the location of this radio source as seen in the VLA 2.87\,GHz image \citep{ban21}. The restoring beam size of the image shown in Figure~3 is $14.2 \times 5.9$~mas (P.~A.=$-18^{\circ}$), and the rms noise is 28.2~$\mu$Jy~beam$^{-1}$. No continuum emission is detected with the VLBA at the VLA position of this ``companion'' source with a 3$\sigma$ point source upper limit of 84.6\,$\mu$Jy, corresponding to an intrinsic brightness temperature upper limit of $4.1 \times 10^6$\,K at the rest frame frequency of 11.96\,GHz if located at $z=6.82$. At 1.53 GHz, the VLBA short spacing limit filters
out all spatial structures larger than about 0\farcs17, and the non-detection of this source at 1.53\,GHz is in agreement with its measured deconvolved size with the VLA at 2.87\,GHz, which is $1\farcs3 \times 0\farcs8$ \citep{ban21}.
\begin{figure}
\epsscale{1.02}
\plotone{f3.png}
\caption
{VLBA continuum image of the radio ``companion'' source of the $z=6.82$ quasar P172+18\ at 1.53\,GHz and $14.2 \times 5.9$~mas resolution (P.~A.=$-18^{\circ}$) located at an angular distance of 23\farcs1 from the quasar. The contour levels are at $-3$ and 3 times the rms noise level, which is 28.2~$\mu$Jy~beam$^{-1}$. The gray-scale range is indicated by the step wedge at the right side of the image. The size of the image is $1\farcs6 \times 1\farcs6$, comparable to the deconvolved size limit of the source measured with the VLA at 1.52\,GHz. The plus sign (arbitrary scale) denotes the VLA radio position at 2.87\,GHz:
R.A.\,(J2000)=\,11$^{\rm h}$29$^{\rm m}$24.0782$^{\rm s}$, Decl.\,(J2000)=\,$+$18$^\circ$46$^\prime$38\farcs585.
\label{fig:vlba3}}
\end{figure}
Similar to the 1.53\,GHz results, both the 4.67\,GHz and 7.67\,GHz images do not show any radio emission from this ``companion'' source. The 3$\sigma$ point source upper limits are 53.4 and 62.4\,$\mu$Jy at 4.67 and 7.67\,GHz, respectively, and the corresponding upper limits to the intrinsic brightness temperatures are $4.0 \times 10^6$\,K, and $4.4 \times 10^6$\,K at the rest frame frequencies of 36.52 and 59.98\,GHz, respectively, if located at $z=6.82$.
\section{Discussion} \label{disc}
We have detected the $z=6.82$ quasar P172+18\ at 1.53~GHz with the VLBA at mas resolution (Figure\,1). The observations show that the radio emission from this source is dominated by a resolved compact source with a deconvolved size of $9.9 \times 3.5$\,mas ($52.5 \times 18.6$\,pc). A weak ($\sim 3.1\sigma$) extension to this dominant source is also seen in the VLBA image (Figure~1), but future observations are needed to unambiguously confirm its nature. The total radio flux density measured with the VLBA at 1.53\,GHz is $489.2 \pm 67.7$\,$\mu$Jy. This is consistent with the VLA measured value at 1.52\,GHz and $3\farcs55 \times 3\farcs24$ resolution, which is $510 \pm 15$\,$\mu$Jy \citep{ban21}, and suggests that the radio emission from the quasar is confined to the structures seen in the VLBA observations.
We find no indication of multiple radio components in the field of this source on scales of 20\,mas to a few arcseconds. A similar conclusion was reached for several other high redshift radio-loud quasars (e.g., \citealt{FR03,MOM04}).
These results imply that these quasars are not strongly gravitationally lensed.
The ``companion'' source identified in the VLA observations of \citet{ban21} and located at an angular distance of 23\farcs1 from the quasar was not detected with the VLBA at any of the three observed frequencies. While its nature and/or exact association with the quasar remains unclear, its non-detection in the VLBA observations at 1.53\,GHz suggests a size larger than 0\farcs17, as would be expected for a source at lower redshift.
At the higher observing frequencies, namely 4.67 and 7.67\,GHz, the quasar itself was also not detected with the VLBA, perhaps contrary to expectations based on the spectral index measured between 1.52 and 2.87\,GHz with the VLA, which is $\alpha^{1.52}_{2.87} = -1.31 \pm 0.08$ \citep{ban21}.
The spectral index limit we measure between 1.53 and 4.67\,GHz, $\alpha^{1.53}_{4.67} < -1.55$ (see Section~\ref{raqso}), suggests a spectral steepening at the higher frequencies. The slight flattening of the spectral index between the lower frequencies (1.52 and 2.87\,GHz) may, in turn, indicate a spectral turnover taking place at much lower frequencies (a few 100\,MHz) in the observed frame, corresponding to a few GHz in the rest frame of the quasar.
The compact nature of this quasar, located in the Epoch of Reionization, together with the spectral turnover suggested above at lower (few 100\,MHz) frequencies, makes it an important candidate for future sensitive H {\footnotesize I} 21\,cm absorption observations to detect the neutral IGM near $z \sim 7$ \citep{cgo02,FL02,GM17}. Knowledge of the source structure, as presented in these high angular resolution observations, is critical to identify candidates for H {\footnotesize I} searches and for the subsequent interpretation of the results at these redshifts.
In the following we compare P172+18, which is the highest redshift radio-loud quasar known to date, with other radio-loud quasars within $\Delta z \sim \pm 1$, i.e., $z > 5.8$. In total, and including the quasar P172+18, there are currently nine radio-loud quasars at $z>5.8$. Following \citet{ban21}, we define $R_{2500}=f_{\nu,5\,\mathrm{GHz}} / f_{\nu,2500\,\mathrm{A}}$, with quasars that have $R_{2500} >10$ being radio loud\footnote{The radio-loudness of a quasar is defined as the ratio of the rest-frame 5\,GHz (radio) and 4400\,\AA\ (optical) flux densities, $R_{4400}$ \citep{kel89}, or using the 2500\,\AA\ (ultra-violet) emission instead of the optical one, $R_{2500}$ \citep{jia07}.}. Figure~4 shows the rest-frame 5\,GHz radio vs.~the 2500\,\AA~luminosities of all the $z>5.8$ quasars that can be robustly classified as radio-quiet or radio-loud (see Table\,6 in \citealt{ban21}, as well as \citealt{ban15}, \citealt{liu21}, and \citealt{ighi21}).
The dashed lines represent the radio-to-optical ratios ($R_{2500}$) at three different values: 10, 100, and 1000. The quasar P172+18\ is denoted with the star sign and has a value of $R_{2500} = 91 \pm 9$ \citep{ban21}. Of the other eight radio-loud quasars in this $z>5.8$ sample, currently six have reported mas resolution radio imaging results through VLBI (marked by the filled circles in Figure\,4): J0836+0054 at $z=5.81$ \citep{FR05}, P352$-$15 at $z=5.84$ \citep{MOM18}, J2228+0110 at $z=5.95$ \citep{CAO14}, PSO J030947.49+271757.31 at $z=6.10$ (hereafter PSO J0309+27; \citealt{spi20}), J1427+3312 at $z=6.12$ \citep{FR08,MOM08}, and J1429+5447 at $z=6.18$ \citep{FR11}. VLBI imaging shows three of these sources are resolved into two or more distinct radio components with linear projected separations or extents of 174\,pc (J1427+3312), 500\,pc (PSO J0309+27) and 1.62\,kpc (P352$-$15). These sources have been interpreted as compact or medium-size symmetric objects (J1427+3312, P352$-$15), or one sided core-jets (P352$-$15\footnote{The exact nature of P352$-$15 is currently not clear. Based on single frequency VLBI observations, it has been interpreted as both classes of objects \citep{MOM18}. Future multi-frequency VLBI analysis would address this ambiguity (Momjian et al., in prep).}, PSO J0309+27). The other three quasars imaged with VLBI, J0836+0054, J2228+0110, and J1429+5447, are dominated by single compact sources on VLBI scales. Of these, J0836+0054 and J1429+5447 are also known to have steep spectra ($\alpha^{1.4}_{5.0} \leq -0.8$; \citealt{FR05,FR11}).
\begin{figure}
\epsscale{1.1}
\plotone{f4.png}
\caption
{The 5\,GHz radio vs.\ the 2500\,\AA\ ultra-violet luminosities for the most distant quasars with redshifts $z>5.8$. The dashed lines represent the radio-loudness ratios ($R_{2500}$) at three different values: 10, 100, and 1000. Quasars with values of $R_{2500} >10$ are radio loud. The quasar P172+18\ at $z=6.82$ is denoted with the star sign and has a value of R $\sim 90$ \citep{ban21}. The open circles denote quasars that currently do not have published VLBI results: J2242+0334 at $z=5.88$ \citep{liu21} and VIK~J2318$-$3113 at $z=6.44$ \citep{ighi21}. The downward pointing arrows denote $z>5.8$ quasars with no radio detection. Their 5\,GHz luminosities are derived using $3\sigma$ limits from measurements at radio wavelengths \citep{ban21}.}
\end{figure}
To further investigate the nature of the $z=6.82$ quasar P172+18, we calculate the magnetic field strength and pressure in its dominant radio source adopting the standard minimum energy assumption for the fields and relativistic particle distribution (see the equations in \citealt{m80}). We assume equal energy in relativistic electrons and protons, a filling factor of unity, and a source size given by the VLBA observations. The derived minimum energy magnetic field is then 0.037\,G, and the energy density is $1.3\times 10^{-4}$ erg cm$^{-3}$. This field strength is comparable to those derived from VLBI observations of pc-scale jets in lower redshift radio AGN \citep{OG09}.
If the fields are close to the value derived assuming minimum energy, then the relativistic electron radiative lifetimes are short, $\le 0.7$\,years, using an upper limit to the synchrotron break frequency of 4.7\,GHz \citep{m80}. We have included both synchrotron losses and inverse Compton scattering off the CMB, although the former dominates, even at this redshift, given the high field strength. The source size of $\sim 50$\,pc implies a light crossing time of about 170\,years. Even if the jet is propagating at close to the speed of light, particle reacceleration in jet shocks appears to be required to maintain the electron distribution.
This source, with its steep spectral index and compactness, fits into the class of compact radio sources known as Gigahertz Peaked Spectrum (GPS) sources that have projected linear sizes less than 500\,pc \citep{ODea98,OS20}. Such sources, which host radio AGN, along with their more extended/evolved counterparts known as Compact Steep Spectrum radio sources (CSS; projected linear sizes of 0.5--20\,kpc), have been extensively studied at high angular resolution at lower redshifts (\citealt{OS20}, and references therein). One hypothesis to explain such sources is that they harbor a very young radio jet. Assuming a median expansion speed for GPS/CSS sources of $\sim 0.1c$, the resulting kinematic age is 1700 years for the quasar P172+18, which is well within the range derived for lower redshift GPS/CSS sources \citep{OS20}. The extreme pressures in the radio emitting regions will certainly have a dramatic feedback effect on the ISM in the inner regions of the host galaxy, and will continue to do so as the source expands \citep{ACF12}. Based on the above arguments, our conclusion that the $z=6.82$ quasar is similar to lower redshift GPS sources is also consistent with that reached for other $z\sim6$ single-source-dominated radio-loud quasars (e.g., \citealt{FR05, FR11}), implying these are very young radio sources in the early Universe.
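For the quoted kinematic age, taking the deconvolved major axis of the dominant component as the source extent gives
\begin{equation*}
t_{\rm kin} \sim \frac{\ell}{v} \approx \frac{52.5\,{\rm pc}}{0.1\,c} \approx 1.7 \times 10^{3}\,{\rm yr}.
\end{equation*}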
\acknowledgments
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work made use of the DiFX software correlator developed at Swinburne University of Technology as part of the Australian Major National Research Facilities program.
\facilities{VLBA}
\software{AIPS: \citealt{G2003}}
\section{Introduction}
Spectral problems depending non-linearly on the eigenvalue parameter arise frequently in applications, see e.g.\ the comprehensive collection in \cite{Betcke-Higham-Mehrmann-Schroeder-Tisseur-2013} or the monograph \cite{Moeller-Pivovarchik-2015}. The dependence ranges from quadratic in problems originating in second order Cauchy problems such as damped wave equations, see e.g.\
\cite{Jacob-Trunk-2009}, \cite{Jacob-Tretter-Trunk-Vogt-2018}, to rational as in electromagnetic problems with frequency dependent materials such as photonic crystals, see e.g.\ \cite{MR3543766}, \cite{MR4209764}. In addition, if energy dissipation is present due to damping or lossy materials, then the values of the corresponding operator functions need not be selfadjoint.
While for operator functions $T(\lambda)$, $\lambda\!\in\!\Omega \!\subseteq\! \mathbb{C}$, with unbounded operator values in a Hilbert space $\mathcal{H}$ the notion of numerical range $W(T)$ \vspace{-0.5mm} exists,
\begin{equation}
\label{intro-c0}
\hspace*{-4mm}
\begin{aligned}
W(T)\!:=&\!\set{\lambda\!\in\!\Omega}{0\!\in\! W(T(\lambda))} \\
\!=&\set{\lambda\!\in\!\Omega}{\exists\, f\!\in\!\operatorname{dom} T(\lambda), f\!\ne\!0, \ (T(\lambda)f,f)\!=\!0},
\end{aligned}
\vspace{-1mm}
\end{equation}
a spectral inclusion result $\sigma_{\rm ap}(T) \!\subseteq\! \overline{W(T)} \cap \Omega$ for the approximate point spectrum is lacking. Even in the case of bounded values $T(\lambda)$, spectral inclusion only holds under a certain condition that is not easy to verify. Moreover, spectral inclusion results are even lacking for the most important case of quadratic operator polynomials with unbounded coefficients, one of the most relevant cases for applications.
In the present paper we fill these gaps. To this end, we introduce the novel concept of \emph{pseudo numerical range} of operator functions $T(\lambda)$, $\lambda\!\in\!\Omega \!\subseteq\! \mathbb{C}$, with unbounded \vspace{-1mm} values,
\begin{equation}
\label{eq:intro.def.pseudo.nr}
W_\Psi(T)\vcentcolon=\bigcap\nolimits_{\varepsilon>0}W_\varepsilon(T), \quad W_\varepsilon(T)\vcentcolon=\bigcup\nolimits_{B\in L(\mathcal{H}) \atop \norm{B}<\varepsilon}W(T+B), \quad \varepsilon>0,
\vspace{-2.5mm}
\end{equation}
and analogously for families of unbounded quadratic forms $\mathbf{t}(\lambda)$, $\lambda\!\in\!\Omega \!\subseteq\! \mathbb{C}$. The sets $W_\varepsilon(T)$, $\varepsilon>0$, can be shown to have the equivalent form
\begin{equation}
\label{eq:intro.W.psi.epsilon}
W_\varepsilon(T) =\set{\lambda\in\Omega}{\exists ~f\in\operatorname{dom} T(\lambda), ~\norm{f}=1, ~\abs{(T(\lambda)f,f)}<\varepsilon};
\vspace{-1mm}
\end{equation}
hence they coincide with the so-called $\varepsilon$-pseudo numerical range first considered in \cite{Engstroem-Torshage-2017}.
As a consequence, the pseudo numerical range $W_\Psi(T)$ can equivalently be described \vspace{-0.5mm}
as
\begin{equation}
\label{eq:intro.pseudo.num.id}
W_\Psi(T) \!=\!\big\{\lambda\!\in\!\Omega:0\!\in\!\overline{W(T(\lambda))}\big\} =\vcentcolon W_{\Psi,0}(T).
\vspace{-1mm}
\end{equation}
One could be tempted to think that the condition $0\!\in\!\overline{W(T(\lambda))}$ in $W_{\Psi,0}(T)$ is equivalent to
$\lambda\!\in\! \overline{W(T)}$, but this is true neither for operator functions with bounded values,
as already noted in \cite{Wagenhofer-PhD-2007}, nor for non-monic linear operator pencils, for which the set $W_{\Psi,0}(T)$
was used recently in \cite{Boegli-Marletta-2019}.
One of the crucial properties of the pseudo numerical range is that, \emph{without any assumptions on the operator \vspace{-1mm} family},
\begin{equation}
\sigma_{\operatorname{ap}}(T)\subseteq W_\Psi(T),
\vspace{-1mm}
\end{equation}
see Theorem \ref{thm:spec.incl.pseudo.num.ran}, and that the norm of the resolvent of $T$ can be \vspace{-1mm} estimated~by
\begin{equation}
\norm{T(\lambda)^{-1}}\le \varepsilon^{-1}, \quad \lambda\in\rho(T)\setminus W_\varepsilon(T) \subseteq \rho(T)\setminus W_\Psi(T).
\vspace{-1mm}
\end{equation}
Not only from the analytical point of view, but also from a computational perspective, the pseudo numerical range seems more convenient, since it is much easier to determine whether a number is small than whether it is exactly zero.
Like the numerical range of an operator function, but in contrast to the numerical range
or essential numerical range of an operator \cite{Kato-1995}, \cite{MR4083777}, \cite{HT-2022}, the pseudo numerical range need~not be convex. An exception is the trivial case of a monic linear operator pencil $T(\lambda)\!=\!A\!-\!\lambda I$, $\lambda \!\in\! \mathbb{C}$, where the pseudo numerical range is simply the closure of the numerical range, $W_\Psi(T)\!=\!\overline{W(T)}\!=\!\overline{W(A)}$.
In general, we only have the obvious enclosure $W(T) \subseteq W_\Psi(T)$. Neither the interiors nor the closures in $\Omega$ of $W_\Psi(T)$ and $W(T)$ need to coincide and
there is also no inclusion either way between $W_\Psi(T)$ or its closure $\overline{W_\Psi(T)}\cap \Omega$ in $\Omega$ and the closure $\overline{W(T)} \cap \Omega$ of $W(T)$ in~$\Omega$; we give various counter-examples to illustrate these~effects.
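For instance, for the monic linear pencil the identity \eqref{eq:intro.pseudo.num.id} yields this equality directly: since $W(T(\lambda))=W(A)-\lambda$, we have
\begin{equation*}
W_\Psi(T)=\set{\lambda\in\mathbb{C}}{0\in\overline{W(A)}-\lambda}=\overline{W(A)}.
\end{equation*}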
In our first main result we use the pseudo numerical range of holomorphic form families $\mathbf{t} (\lambda)$, $\lambda \in \Omega$, of type (a) to prove the spectral inclusion for the associated holomorphic operator functions $T(\lambda)$, $\lambda \in \Omega$, of type (B) of m-sectorial operators $T(\lambda)$. More precisely, we show that if there exist $k\in\mathbb{N}_0$, $\mu\in\Omega$ and a core $\mathcal{D}$ of $\mathbf{t}(\mu)$ \vspace{-1mm} with
\begin{equation}
\label{eq:intro:ass.pseudo.dense.hol.fam}
0 \notin \overline{W\big(\mathbf{t}^{(k)}(\mu)\big|_\mathcal{D}\big)},
\vspace{-1mm}
\end{equation}
then $\sigma(T) \subseteq W_\Psi(\mathbf{t})=\overline{W(\mathbf{t})} \cap \Omega$ and if,
in addition, the operator family $T$ has constant domain, \vspace{-1.5mm} then
\begin{equation}
\label{intro-c1}
\sigma(T) \!\subseteq\! W_\Psi(T)=\overline{W(T)}\cap \Omega,
\vspace{-2mm}
\end{equation}
see Theorem \ref{thm:pseudo.dense.hol.fam}. Note that, due to \eqref{eq:intro.pseudo.num.id}, condition \eqref{eq:intro:ass.pseudo.dense.hol.fam} for $k\!=\!0$, \vspace{-0.5mm} i.e.\
$0 \notin \overline{W\big(\mathbf{t}(\mu)\big|_\mathcal{D}\big)}$ for some $\mu\in \mathbb{C}$, is equivalent to $W_\Psi(T)\ne \Omega$.
For operator polynomials $T(\lambda) = \sum_{k=0}^n \lambda^k A_k$ with domain $\operatorname{dom} T(\lambda)=\bigcap_{k=0}^n \operatorname{dom} A_k$, $\lambda \in \mathbb{C}$,
we prove that, if $0\notin\overline{W(A_n)}$, \vspace{-0.4mm} then
\begin{equation}
\label{eq:introo.poly.pseudo.num.op}
\sigma_{\operatorname{ap}}(T) \subseteq W_\Psi(T)\subseteq\overline{W(T)}\cap \Omega,
\vspace{-2mm}
\end{equation}
see Proposition \ref{prop:poly.pseudo.num.op}. The inclusion \eqref{intro-c1} follows if, in addition, $\sigma(T(\lambda)) \!\subseteq\! \overline{W(T(\lambda))}$, $\lambda\!\in\!\mathbb{C}$, which is a weaker condition than m-sectoriality of all
$T(\lambda)$.
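For example, for a damped wave pencil $T(\lambda)=\lambda^2 I+\lambda D+A$ arising from a second order Cauchy problem, the leading coefficient is $A_2=I$ and hence $\overline{W(A_2)}=\{1\}\not\ni 0$, so \eqref{eq:introo.poly.pseudo.num.op} applies regardless of the (densely defined) damping $D$ and stiffness operator $A$.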
The second new concept we introduce in this paper is the \emph{pseudo block numerical range} of operator functions $\mathcal{L}(\lambda)$, $\lambda\in\Omega$, that possess an operator matrix representation with respect to a
decomposition $\mathcal{H}=\mathcal{H}_1\oplus\cdots \oplus\mathcal{H}_n$, $n\in\mathbb{N}$, of the given Hilbert space $\mathcal{H}$. This means \vspace{-1mm} that
\begin{equation}
\label{eq:intro.op.matrix.fam}
\mathcal{L}(\lambda)=\big( L_{ij} (\lambda) \big)_{i,j=1}^n, \quad \operatorname{dom}\mathcal{L}(\lambda)=\bigoplus\nolimits_{\!j=1}^{\!n} \ \bigcap\nolimits_{i=1}^n \operatorname{dom} L_{ij}(\lambda),
\vspace{-1mm}
\end{equation}
with operator functions $L_{ij}(\lambda)$, $\lambda\!\in\!\Omega$, of densely defined and closable linear operators from $\mathcal{H}_j$ to $\mathcal{H}_i$, $i$, $j=1,\dots, n$.
Extending earlier concepts we first define the \emph{block numerical range}\vspace{-1mm} of~$\mathcal{L}$~as
\begin{equation}
W^{n}(\mathcal{L}) \vcentcolon= \bigcup\nolimits_{(f_i)\in\operatorname{dom}\mathcal{L}(\lambda) \atop \|f_i\|\!=\!1}
\sigma_p \big( \mathcal{L}(\lambda)_{(f_i)} \big), \quad \mathcal{L}(\lambda)_{(f_i)} \!\vcentcolon=\! \left( \mathcal{L}_{ij}(\lambda) f_j, f_i \right) \!\in\! \mathbb{C}^{n\times n}\!;
\vspace{-1mm}
\end{equation}
for bounded values $\mathcal{L}(\lambda)$ see \cite{MR3302436} and \cite{Tretter-2010} for $n=2$, for unbounded operator matrices $\mathcal{L}(\lambda) \!=\! {\mathcal A} - \lambda I_\mathcal{H}$ see \cite{Rasulov-Tretter-2018}. Then we introduce the \emph{pseu\-do block numerical range} of $\mathcal{L}$ as
\begin{equation}
W^n_\Psi(\mathcal{L})\vcentcolon=\bigcap\nolimits_{\varepsilon>0}W_\varepsilon^n(\mathcal{L}), \qquad
W_\varepsilon^n(\mathcal{L})\vcentcolon= \bigcup\nolimits_{\mathcal{B}\in L(\mathcal{H})\atop \norm{\mathcal{B}}<\varepsilon} W^n(\mathcal{L}+\mathcal{B}), \quad \varepsilon>0.
\vspace{-2mm}
\end{equation}
For $n\!=\!1$ both block numerical range and pseudo block numerical range coincide with the numerical range and pseudo numerical range of $\mathcal{L}$, respectively. For $n\!>\!1$, the trivial inclusion $W^n(\mathcal{L}) \subseteq W^n_\Psi(\mathcal{L})$ and the characterisation
\eqref{intro-c0}, \vspace{-1mm} i.e.
\begin{equation}
\label{eq:intro.qnr.equiv}
W^n(\mathcal{L}) =\big\{ \lambda \in \Omega: 0\in W^n(\mathcal{L}(\lambda)) \big\}, \quad n\in \mathbb{N},
\vspace{-1mm}
\end{equation}
and a resolvent norm \vspace{-1mm} estimate
\begin{equation}
\norm{\mathcal{L}(\lambda)^{-1}}\!\le\! \varepsilon^{-1},
\quad
\lambda\!\in\!\rho(\mathcal{L})\setminus W_\varepsilon^{n}(\mathcal{L}) \subseteq\! \rho(\mathcal{L})\setminus W_\Psi^n(\mathcal{L}), \quad n\!\in\!\mathbb{N},
\end{equation}
see Theorem \ref{thm:spec.incl.pseudo.qnr} for both,
continue to hold, but otherwise not much carries over from the case $n\!=\!1$. The first difference is that, for the simplest case $\mathcal{L}(\lambda)=\mathcal{A}-\lambda I_\mathcal{H}$, $\lambda\in \mathbb{C}$, we may have $W_\Psi^n(\mathcal{L})\neq\overline{W^n(\mathcal{L})}$ for $n\!>\!1$, see Example \ref{ex:jordan}.
More importantly, for $n\!>\!1$ the relation \eqref{eq:intro.pseudo.num.id} need not hold for the pseudo block numerical range; here we only have the \vspace{-1mm} inclusion
\begin{equation}
W_\Psi^{n}(\mathcal{L}) \supseteq \set{\lambda\!\in\!\Omega}{0\in\overline{W^{n}(\mathcal{L}(\lambda))}} =\vcentcolon W_{\Psi,0}^{n}(\mathcal{L}), \quad n\in \mathbb{N},
\vspace{-1mm}
\end{equation}
see Proposition \ref{prop:nested.def.pseudo.qnr}. Therein we also assess two other candidates $W_{\Psi,i}^{n}(\mathcal{L}) \!=\! \bigcap_{\varepsilon>0} W_{\varepsilon,i}^{n}(\mathcal{L})$, $i\!=\!1,2$, for the pseudo block numerical range, for which $W_{\varepsilon,1}^{n}(\mathcal{L})$ is defined by the scalar condition $\abs{\det \mathcal{L}(\lambda)_{(f_i)}} \!<\! \varepsilon$ and $W_{\varepsilon,2}^{n}(\mathcal{L})$ by restricting to \emph{diagonal} perturbations $\mathcal{B}\in L(\mathcal{H})$ with $\norm{\mathcal{B}}<\varepsilon$. In fact, we show that
\begin{equation}
\label{eq:introo.pbnri}
W^n(\mathcal{L}) \subseteq W^{n}_{\Psi,1}(\mathcal{L})\subseteq W_{\Psi,0}^{n}(\mathcal{L})\subseteq W^{n}_{\Psi,2}(\mathcal{L})\subseteq W^{n}_\Psi(\mathcal{L}),
\end{equation}
and that, like the pseudo numerical range, the pseudo block numerical range $W^{n}_\Psi(\mathcal{L})$ has the spectral inclusion property, i.e.
\begin{equation}
\sigma_{\operatorname{ap}}(\mathcal{L}) \subseteq W^{n}_\Psi(\mathcal{L}) \subseteq W_\Psi(\mathcal{L}), \quad n\in \mathbb{N},
\end{equation}
but, in general, none of the subsets of $W^{n}_\Psi(\mathcal{L})$ in \eqref{eq:introo.pbnri} is large enough to contain $\sigma_{\operatorname{ap}}(\mathcal{L})$, see Example \ref{ex:jordan}.
Our second main result concerns the most important case $n\!=\!2$, the so-called \emph{quadratic numerical range} and \emph{pseudo quadratic numerical range}. Here we prove a novel type of spectral inclusion for diagonally dominant and off-diagonally dominant $\mathcal{L}(\lambda) \!=\! (L_{ij}(\lambda))_{i,j=1}^2$ in terms of the pseudo numerical ranges of the Schur complements $S_1$, $S_2$ and, further, the pseudo quadratic numerical range of $\mathcal{L}$,
\begin{equation}
\label{eq:intro.mat.fam.schur.app.incl}
\sigma_{\operatorname{ap}}(\mathcal{L})\setminus(\sigma(L_{11})\cup\sigma(L_{22})) \subseteq W_\Psi(S_1)\cup W_\Psi(S_2) \subseteq W^2_\Psi (\mathcal{L}),
\vspace{-1mm}
\end{equation}
see Theorem \ref{thm:mat.spec.incl.schur.app}, where $S_1(\lambda)\!=\!L_{11}(\lambda) \!-\! L_{12}(\lambda) L_{22}(\lambda)^{-1} L_{21}(\lambda)$, $\lambda \!\in\! \rho(L_{22})$, and similarly for $S_2$ with the indices $1$ and $2$ reversed. For symmetric and anti-symmetric corners, i.e.\ $L_{21}(\lambda) \subseteq \pm L_{12}(\lambda)^*$, $\lambda\!\in\!\Omega$, we even show that
\begin{equation}
\sigma_{\operatorname{ap}}(\mathcal{L})
\!\subseteq\! W_\Psi(S_1)\cup W_\Psi(L_{22}),
\end{equation}
if $L_{11}(\lambda)$ is accretive, $\mp L_{22}(\lambda)$ is m-sectorial and $\operatorname{dom} L_{22}(\lambda) \!\subseteq\! \operatorname{dom} L_{12}(\lambda)$, see Theo\-rem \ref{thm:spec.incl.def.indef}/Corollary \ref{cor:spec.incl.def.indef}, and similarly for the Schur complement~$S_2$.
As an interesting consequence, we are able to establish spectral separation and inclusion theorems for unbounded $2\!\times\!2$ operator matrices $\mathcal{A}=(A_{ij})_{i,j=1}^2$ with `separated' diagonal entries; here `separated' means that the numerical ranges of $A_{11}$ and $A_{22}$ lie in half-planes and/or sectors in the right and left half-plane $\mathbb{C}_+$ and $\mathbb{C}_-$, respectively, separated by a vertical strip $S\!\vcentcolon=\!\{z\!\in\!\mathbb{C}:\delta \!\le\! \operatorname{Re} z \!\le\! \alpha\}$ with $\delta\!<\!0\!<\!\alpha$ around~${\rm i}\mathbb{R}$. More precisely, \emph{without} any bounds on the order of diagonal dominance or off-diagonal dominance we show that, if $\varphi$, $\psi \!\in\! [0,\frac \pi 2]$ are the semi-angles of $A_{11}$ and $A_{22}$ and $\tau\!\vcentcolon=\! \max\{\varphi,\psi\}$, then
\begin{equation}
\sigma_{\operatorname{ap}}(\mathcal{A}) \subseteq ( - \!\Sigma_\tau \cup \Sigma_\tau ) \setminus S =\vcentcolon \Sigma, \quad
\Sigma_\tau \vcentcolon= \{z\!\in\!\mathbb{C}: |\arg z| \le \tau \},
\end{equation}
and $\sigma(\mathcal{A}) \subseteq
\Sigma$ if $\rho(\mathcal{A}) \cap (\mathbb{C} \setminus \Sigma) \!\ne\! \emptyset$, see Theorem \ref{thm:spec.incl.BB*}. This result considerably improves on the earlier result \cite[Thm.\ 5.2]{Tretter-2009}, where the dominance order had to be restricted to $0$.
Moreover, even to ensure the condition $\rho(\mathcal{A}) \cap (\mathbb{C} \setminus \Sigma) \!\ne\! \emptyset$ for the enclosure of the entire spectrum $\sigma(\mathcal{A})$ in Theorem \ref{thm:spec.incl.BB*}, we do not have to restrict~the dominance order as usual for perturbation arguments. Our new weak conditions involve only products of the columnwise relative bounds $\delta_1$ in the~first and $\delta_2$ in the second column, see Proposition \ref{thm:full.spec.incl.BB*}; in particular, either $\delta_1\!=\!0$ or $\delta_2\!=\!0$ guarantees $\rho(\mathcal{A}) \cap (\mathbb{C} \setminus \Sigma) \!\ne\! \emptyset$ in Theorem \ref{thm:spec.incl.BB*}
and~hence~$\sigma_{\operatorname{ap}}(\mathcal{A}) \!\subseteq\! \Sigma$.
As an application of our results, we consider abstract quadratic operator polynomials $T(\lambda)$, $\lambda\!\in\!\mathbb{C}$, induced by forms $\mathbf{t}(\lambda)\!=\!\mathbf{t}_0\!+\!2\lambda\mathbf{a}\!+\!\lambda^2$ with $\operatorname{dom}\mathbf{t}(\lambda)=\operatorname{dom}\mathbf{t}_0$, $\lambda\in\mathbb{C}$, as they arise e.g.\ from linearly damped wave equations
\begin{equation}
\label{intro-c2}
u_{tt}(x,t)+2a(x)u_t(x,t)=\left(\Delta_x-q(x)\right)u(x,t), \quad x\in\R^d, \quad t>0,
\end{equation}
where the non-negative potential $q$ and damping $a$ may be singular and/or unbounded, cf.\ \cite{Freitas-Siegl-Tretter-2018, Jacob-Tretter-Trunk-Vogt-2018,Jacob-Trunk-2007,Jacob-Trunk-2009} where also accretive damping was considered, and for which it is well-known that the spectrum is symmetric with respect to $\mathbb{R}$ and con\-fined to the closed left half-plane.
Here we use a finely tuned assumption on the `unboundedness' of $\mathbf{a}$ with respect to $\mathbf{t}_0$, namely \emph{$p$-subordinacy} for $p\!\in\![0,1)$, comp.\ \cite[\S\,5.1]{Markus-1988} or \cite[Sect.~3]{Tretter-Wyss-2014} for the operator case. More precisely, if
$\mathbf{t}_0\!\ge\!\kappa_0\!\ge\!0$, $\mathbf{a}\!\ge\!\alpha_0\!\ge\!0$ with $\operatorname{dom}\mathbf{t}_0\!\subseteq\!\operatorname{dom}\mathbf{a}$ and there exist~$p\!\in\![0,1)$ and $C_{p}\!>\!0$ \vspace{-1mm}with
\begin{equation}
\label{eq:intro.pencil.subordinate}
\mathbf{a}[f]\le C_{p}\big(\mathbf{t}_0[f]\big)^p \big(\norm{f}^2\big)^{1-p}, \quad f\in\operatorname{dom}\mathbf{t}_0,
\end{equation}
we use the enclosure $\sigma(T) \!\subseteq\! W_\Psi(T) \!=\! W_\Psi(\mathbf{t}) \!=\! \overline{W(\mathbf{t})}$ to prove that the non-real spectrum of $T$ satisfies the \vspace{-1mm} bounds
\begin{align*}
\sigma(T)\setminus \mathbb{R} \! \subseteq \!
\Big\{ z\!\in\!\mathbb{C}: \, |z| \ge \sqrt{\kappa_0}, \, & \, \operatorname{Re} z\le-\alpha_0,
\\[-3mm] &\, \abs{\operatorname{Im} z}^2\!\!\ge\! \max\!\big\{0,C_{p}^{-\frac{1}{p}}\!\abs{\operatorname{Re} z}^\frac{1}{p}\!\!-\!\abs{\operatorname{Re} z}^2\big\}\Big\}
\\[-7mm]
\end{align*}%
and the real spectrum $ \sigma(T)\cap \mathbb{R} \subseteq (-\infty,0]$ is either empty or it is confined to one bounded interval, to one unbounded interval or to the disjoint union of a bounded and an unbounded interval, see Theorem \ref{thm:pencil.spec.incl} and Figure \ref{fig:dwe.spec.incl}. Moreover, we describe both
the thresholds for the transitions between these cases and the enclosures for $ \sigma(T)\cap \mathbb{R}$ precisely in terms of $p$, $C_p$, $\kappa$ and $\kappa_0$. As a concrete example, we consider the damped wave equation \eqref{intro-c2} \vspace{-2mm}with
\begin{equation}
\label{eq:intro.dwe.pot.damp.inequ}
a(x)\!\le\!\sum_{j=1}^n\abs{x\!-\!x_j}^{-t}\!+\! u(x) \!+\! v(x), \ \ v(x) \!\le\! c_1 q(x)^r\!+c_2 \,\ \mbox{ for almost all $x\!\in\! \mathbb{R}^d$},
\vspace{-2mm}
\end{equation}
where $n\!\in\!\mathbb{N}_{0}$, $x_j\!\in\!\mathbb{R}^d$ for $j\!=\!1,\dotsc,n$, $u \!\in\! L^s (\R^d)$ with $s\!>\!\frac d2$, $v\!\in\!L^1_{\operatorname{loc}}(\R^d)$, $t\!\in\![0,2)$, $c_1$, $c_2\!\ge\!0$ and $r\!\in\![0,1)$. For the special case $q(x)\!=\!\abs{x}^2$, $a(x)\!=\!\abs{x}^{k}$, $x\!\in\!\R^d$, with $k \!\in\![0,2)$, the new spectral enclosure in
\vspace{-1mm}Theorem~\ref{thm:pencil.spec.incl}~yields
\begin{equation}
\label{eq:intro.dwe.comp.incl}
\sigma(T) \setminus\mathbb{R}\subseteq\Big\{z\!\in\!\mathbb{C}:\operatorname{Re} z\!\le\!0, \, \abs{z}\!\ge\! \sqrt{d}, \, |\operatorname{Im} z| \!\ge\! \sqrt{\max\{0,\abs{\operatorname{Re} z}^{\frac{2}{k}}\!-\!\abs{\operatorname{Re} z}^2\}}\Big\}
\vspace{-2mm}
\end{equation}
\vspace{-1mm}and, with $t_0=\max\big\{ \big( k(2-k) \big)^{-\frac 1{k-1}},d\big\}$,
\begin{align*}
\sigma(T) \cap \mathbb{R}
\begin{cases}
= \emptyset & \mbox { if } k\!\in\![0,1), \\
\subseteq(-\infty,-\sqrt{d}] & \mbox { if }k=1, \\[-1mm]
\subseteq \!\Big(\!\!-\!\infty,-\sqrt{t_0}^{k}\!+\!\sqrt{t_0^{k}\!-\!t_0 } \,\Big] & \mbox { if } k\!\in\! (1,2).
\end{cases}
\end{align*}
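The thresholds and interval endpoints above are explicit and can be evaluated directly. The following Python sketch is illustrative only and not part of the analysis; the sample values of $k$ and $d$ are ad hoc assumptions. It computes $t_0$ and the right endpoint of the enclosing interval for $\sigma(T)\cap\mathbb{R}$ in the case $k\in(1,2)$.
\begin{verbatim}
# Illustrative evaluation (ad hoc parameters) of the real-spectrum
# enclosure for q(x) = |x|^2, a(x) = |x|^k with k in (1,2):
# t0 = max{ (k(2-k))^(-1/(k-1)), d } and right endpoint
# -sqrt(t0)^k + sqrt(t0^k - t0).
import math

def right_endpoint(k, d):
    t0 = max((k * (2.0 - k)) ** (-1.0 / (k - 1.0)), float(d))
    return -t0 ** (k / 2.0) + math.sqrt(t0 ** k - t0)

for k in (1.2, 1.5, 1.8):
    print(k, right_endpoint(k, d=3))   # endpoint is negative, as expected
\end{verbatim}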
The paper is organised as follows. In Section \ref{sec:pseudo.num} we introduce the pseudo numerical range of operator functions and form functions and study the relation of $W_\Psi(T)$ and $\overline{W(T)}\cap \Omega$. In Section \ref{subsec:pseudo.nr.spec.encl} we establish spectral inclusion results in terms of the pseudo numerical range. In Section \ref{sec:op.mat.fam} we define the block numerical range $W^n(\mathcal{L})$ and pseudo block
numerical range $W^{n}_\Psi(\mathcal{L})$ of unbounded $n\!\times\! n$ operator matrix functions $\mathcal{L}$, investigate the differences to the special case $n\!=\!1$ of the pseudo numerical range $W_\Psi^1(\mathcal{L})\!=\!W_\Psi(\mathcal{L})$ and prove corresponding spectral inclusion theorems. In Section \ref{sec:schur.app.encl} we establish new
enclosures of the approximate point spectrum of $2\!\times\! 2$ operator matrix functions by means of the pseudo numerical ranges of their Schur complements. In Section \ref{sec:BB*} we apply them to prove
spectral bounds for diagonally dominant and off-diagonally dominant operator matrices with symmetric or anti-symmetric corners without restriction on the dominance order.
Finally, in Section \ref{sec:dwe}, we apply our results to linearly damped wave equations with possibly unbounded and/or singular damping and potential.
Throughout this paper, $\mathcal{H}$ and $\mathcal{H}_i$, $i\!=\!1,\dots,n$, denote Hilbert spaces,~$L(\mathcal{H})$ denotes the space of bounded linear operators on $\mathcal{H}$ and $\Omega\!\subseteq\!\mathbb{C}$ is a~domain.
\section{The pseu\-do numerical range of operator functions and form functions}
\label{sec:pseudo.num}
In this section, we introduce the new notion of pseu\-do numerical range for operator functions $\set{T(\lambda)}{\lambda\in\Omega}$ and form functions $\set{\mathbf{t}(\lambda)}{\lambda\in\Omega}$, respectively,
briefly denoted by $T$ and $\mathbf{t}$ if no confusion about $\Omega$ can arise. While the values $T(\lambda)$ and $\mathbf{t}(\lambda)$ may be bounded/unbounded linear operators and sesquilinear forms in a Hilbert space $\mathcal{H}$, the notion of pseudo numerical range is new also in the bounded case.
The \emph{numerical ranges} of $T$ and $\mathbf{t}$, respectively, are defined as
\begin{alignat*}{2}
W(T)\!&=\!\set{\lambda\!\in\!\Omega\!}{\!0\!\in\! W(T(\lambda))}
&&=\!\set{\lambda\!\in\!\Omega\!}{\!\exists\, f\!\in\!\operatorname{dom} T(\lambda), f\!\ne\!0, (T(\lambda)f,f)\!=\!0},
\\
W(\mathbf{t})\!&=\!\set{\lambda\!\in\!\Omega\!}{\!0\!\in\! W(\mathbf{t}(\lambda))}
\!&&=\!\set{\lambda\!\in\!\Omega\!}{\!\exists\, f\!\in\!\operatorname{dom} \mathbf{t}(\lambda), \,f\!\ne\!0, \,\mathbf{t}(\lambda)[f]\!=\!0},
\end{alignat*}
comp.\ \cite[\S\,26]{Markus-1988}. In the simplest case of a monic linear operator polynomial $T(\lambda) = T_0 - \lambda I_\mathcal{H}$, $\lambda \in \mathbb{C}$, this notion coincides with the numerical range $W(T_0)$ of the linear operator $T_0$, and analogously for forms;
note that the latter is also denoted by $\Theta(T_0)$, e.g.\ in \cite[Sect.~V.3.2]{Kato-1995}.
The following new concept of pseudo numerical range employs the notion of $\varepsilon$-pseudo
numerical range $W_\varepsilon(T)$, $\varepsilon>0$, introduced in \cite[Def.\ 4.1]{Engstroem-Torshage-2017}; the equivalent original definition therein, see \eqref{eq:W.psi.epsilon} below, was designed to obtain computable enclosures for spectra of rational operator functions.
\begin{defi}
\label{def:pseudo-nr}
We introduce the \emph{pseu\-do numerical range} of an operator function $T$ and a form function $\mathbf{t}$, respectively, as
\begin{equation}
\begin{aligned}
W_\Psi(T) & \vcentcolon=\bigcap_{\varepsilon>0}W_\varepsilon(T), & \quad W_\Psi(\mathbf{t}) & \vcentcolon=\bigcap_{\varepsilon>0}W_\varepsilon(\mathbf{t}),
\\[-7mm]
\end{aligned}
\end{equation}
where
\begin{equation}
W_\varepsilon(T) \vcentcolon= \bigcup_{B \in L(\mathcal{H}), \norm{B}<\varepsilon}W(T+B), \quad W_\varepsilon(\mathbf{t})
\vcentcolon=\bigcup_{\norm{\mathbf{b}}<\varepsilon}W(\mathbf{t}+\mathbf{b}), \quad \varepsilon>0;
\end{equation}
here $\norm{\mathbf{b}}=\sup_{\norm{f}=\norm{g}=1}\abs{\mathbf{b}[f,g]}$ for a bounded sesquilinear form $\mathbf{b}$ in $\mathcal{H}$.
\end{defi}
Clearly, for monic linear operator polynomials $T(\lambda) = A
- \lambda I_\mathcal{H}$, $\lambda \in \mathbb{C}$, the pseu\-do numerical range is nothing but the closure of the classical numerical range
$\overline{W(A)}$ of the linear operator $A$,
and analogously for forms.
The pseudo numerical range of operator or form functions is, like their numerical ranges, in general neither convex nor connected and, even for families of bounded operators or forms, it may be unbounded.
\begin{rem}
\begin{enumerate}
\item The following inclusions may be strict, see Example \ref{ex:pseudo.num.spec.incl},
\begin{equation}
W(T)\subseteq W_\Psi(T), \qquad W(\mathbf{t})\subseteq W_\Psi(\mathbf{t}).
\end{equation}
\item In general, the pseu\-do numerical range need neither be open nor closed in $\Omega$ equipped with the relative topology,
see Examples \ref{ex:pseudo.num.spec.incl} (i) and \ref{ex:ODE}, respectively.
\item Neither the closures nor the interiors with respect to the relative topology on $\Omega$ of the pseudo numerical range and the numerical range need to coincide, see Example \ref{ex:pseudo.num.spec.incl} (i) and (ii).
\end{enumerate}
\end{rem}
The following alternative characterisation of the pseudo numerical range will be frequently used in the sequel.
\begin{prop}
\label{prop:pseudo.num}
For every $\varepsilon>0$,
\begin{align}
\label{eq:W.psi.epsilon}
W_\varepsilon(T) & =\set{\lambda\in\Omega}{\exists ~f\in\operatorname{dom} T(\lambda), ~\norm{f}=1, ~\abs{(T(\lambda)f,f)}<\varepsilon}, \\
W_\varepsilon(\mathbf{t}) & =\set{\lambda\in\Omega}{\exists ~f\in\operatorname{dom} \mathbf{t}(\lambda), ~\norm{f}=1, ~\abs{\mathbf{t}(\lambda)[f]}<\varepsilon},
\end{align}
and, consequently,
\begin{align}
\label{eq:pseudo.num.id}
\hspace{-7mm}
W_\Psi(T) & \!=\!\set{\lambda\!\in\!\Omega}{0\!\in\!\overline{W(T(\lambda))}}, \ \
W_\Psi(\mathbf{t}) \!=\!\set{\lambda\!\in\!\Omega}{0\!\in\!\overline{W(\mathbf{t}(\lambda))}}.
\end{align}
\end{prop}
\begin{proof}
We show the claim for $W_\varepsilon(T)$; then the claim for $W_\Psi(T)$ is obvious by Definition \ref{def:pseudo-nr}. The proof for $W_\varepsilon(\mathbf{t})$ and $W_\Psi(\mathbf{t})$ is analogous.
Let $\varepsilon>0$ be arbitrary and $\lambda\in W_\varepsilon(T)$. There exists a bounded operator $B$ in $\mathcal{H}$ with $\norm{B}<\varepsilon$ such that $\lambda\in W(T+B)$, i.e.\
\begin{equation}
\scalarprod{T(\lambda)f}{f}=-(Bf,f) \quad \mbox{for some } f\in\operatorname{dom} T(\lambda), \ \norm{f}=1.
\end{equation}
Hence, clearly, $\abs{\scalarprod{T(\lambda)f}{f}}\le\norm{B}<\varepsilon$, thus $\lambda$ is an element of the right hand side of \eqref{eq:W.psi.epsilon}.
Conversely, let $\lambda\in\Omega$ such that there exists $f\in\operatorname{dom} T(\lambda)$, $\norm{f}=1$, with $\abs{\scalarprod{T(\lambda)f}{f}}<\varepsilon$. Setting $B\vcentcolon=-\scalarprod{T(\lambda)f}{f}I$, this gives $\lambda\in W(T+B)$ and $\norm{B}=\abs{\scalarprod{T(\lambda)f}{f}}<\varepsilon$, hence $\lambda\in W_\varepsilon(T)$.
\end{proof}
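The characterisation \eqref{eq:W.psi.epsilon} also lends itself to numerical experiments. The following Python sketch is purely illustrative and not part of the analysis: for a finite-dimensional linear family it tests membership of grid points $\lambda$ in $W_\varepsilon(T)$ by crude Monte Carlo sampling of unit vectors; the coefficients, the grid and the sampling parameters are assumptions made only for this illustration.
\begin{verbatim}
# Illustrative sketch (ad hoc data): test lambda in W_eps(T) for
# T(lam) = A0 + lam*A1 via the characterisation
# W_eps(T) = { lam : exists ||f|| = 1 with |(T(lam)f, f)| < eps }.
import numpy as np

rng = np.random.default_rng(0)
A0 = np.diag([1.0, 2.0, 3.0])                      # assumed coefficients
A1 = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])

def in_W_eps(lam, eps, samples=500):
    T = A0 + lam * A1
    for _ in range(samples):                       # crude Monte Carlo search
        f = rng.normal(size=3) + 1j * rng.normal(size=3)
        f /= np.linalg.norm(f)
        if abs(np.vdot(f, T @ f)) < eps:           # np.vdot conjugates f,
            return True                            # so this is (T(lam)f, f)
    return False

grid = [complex(x, y) for x in np.linspace(-4.0, 1.0, 11)
                      for y in np.linspace(-2.0, 2.0, 9)]
hits = [lam for lam in grid if in_W_eps(lam, eps=1e-1)]
print(len(hits), "grid points detected in W_eps(T)")
\end{verbatim}
Shrinking $\varepsilon$ along a sequence $\varepsilon_k\downarrow0$ then approximates $W_\Psi(T)$ from outside, in accordance with Definition \ref{def:pseudo-nr}.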
The following properties of the pseudo numerical range with respect to closures, form representations and Friedrichs extensions are immediate consequences of its alternative description \eqref{eq:pseudo.num.id}.
Here an operator $A$ or a form $\mathbf{a}$ is called \emph{sectorial} if its numerical range lies in a sector $\{z\in\mathbb{C}: |\arg(z-\gamma)| \le \vartheta\}$ for some $\gamma \in\mathbb{R}$ and $\vartheta \in [0, \frac \pi 2)$, see \cite[Sect.\ V.3.10, VI.1.2]{Kato-1995}; if, in addition, $\rho(A) \cap \{z \in \mathbb{C} :|\arg(z-\gamma)| > \vartheta \} \neq \emptyset$, then $A$ is called m-sectorial.
\begin{cor}
\label{cor:pseudo.num}
\begin{enumerate}
\item If the family $T$ or $\mathbf{t}$, respectively, consists of closable operators or forms \textnormal
{(}and $\overline{T}$ or $\overline{\mathbf{t}}$ denotes the family of closures\textnormal{)}, then
\begin{equation}
W_\Psi(T)=W_\Psi(\overline{T}), \qquad W_\Psi(\mathbf{t})=W_\Psi(\overline{\mathbf{t}}).
\end{equation}
\item If the family $\mathbf{t}$ consists of densely defined closed sectorial forms and $T$ denotes the family of associated m-sectorial operators, then
\begin{equation}
W_\Psi(\mathbf{t})=W_\Psi(T).
\end{equation}
\item If the family $T$ consists of densely defined sectorial operators and $T_F$ denotes the family of corresponding Friedrichs extensions, then
\begin{equation}
W_\Psi(T)=W_\Psi(T_F).
\end{equation}
\end{enumerate}
\end{cor}
\begin{proof}
(i) The equalities follow from Proposition \ref{prop:pseudo.num} and from the fact that $\overline{W(T(\lambda))}=\overline{W(\overline{T(\lambda)})}$ and $\overline{W(\mathbf{t}(\lambda))}=\overline{W(\overline{\mathbf{t}}(\lambda))}$ for $\lambda\in\Omega$, see \cite[Prob.\ V.3.7, Thm.\ VI.1.18]{Kato-1995}.
(ii) The equality follows from Proposition \ref{prop:pseudo.num} and the identity $\overline{W(\mathbf{t}(\lambda))}=\overline{W(T(\lambda))}$ for $\lambda\in\Omega$, see \cite[Cor.\ VI.2.3]{Kato-1995}.
(iii) The claim is a consequence of (i) and (ii).
\end{proof}
The alternative characterisation \eqref{eq:pseudo.num.id} might suggest that there is a relation between the pseudo numerical range $W_\Psi(T)$ and the closure $\overline{W(T)}\cap \Omega $ of the numerical range $W(T)$ in $\Omega$. However, in general, there is no inclusion either way between them, see e.g.\ Example \ref{ex:pseudo.num.spec.incl} where $W_\Psi(T)\not\subseteq\overline{W(T)}\cap \Omega $ and Example \ref{ex:ODE} where $\overline{W(T)} \cap \Omega \not\subseteq W_\Psi(T)$.
In fact, it was already noted in \cite[Prop.\ 2.9]{Wagenhofer-PhD-2007}, for continuous functions of \emph{bounded} operators and for the more general case of block numerical ranges, that, for \vspace{-1mm} $\lambda \in \Omega$,
\[
\lambda \in \overline{W(T)} \implies 0 \in \overline{W(T(\lambda))};
\]
the converse holds only under additional assumptions. For families of bounded linear operators, the following is known.
\begin{thm}{\cite[Prop.\ 2.9, Prop.\ 2.12, Thm.\ 2.14]{Wagenhofer-PhD-2007}}
\label{thm:bdd.pseudo.num.ran}
\begin{enumerate}
\item If $\,T$ is a $($norm-$)$continuous family of bounded linear operators, \vspace{-1mm} then
\begin{equation}
\overline{W(T)} \cap \Omega \subseteq W_\Psi(T).
\end{equation}
\item If $\,T$ is a holomorphic family of bounded linear operators and there exist $k\in\mathbb{N}_0$ and $\mu\in\Omega$ \vspace{-1mm} with
\begin{equation}
\label{eq:ass.markus.bdd}
0\notin\overline{W(T^{(k)}(\mu))},
\vspace{-2mm}
\end{equation}
then
\begin{equation}
\sigma(T) \subseteq \overline{W(T)} \cap \Omega =W_\Psi(T).
\end{equation}
\end{enumerate}
\end{thm}
The following simple example from \cite[Ex.\ 2.11]{Wagenhofer-PhD-2007}, which is easily adapted to the unbounded case, shows that condition \eqref{eq:ass.markus.bdd} is essential both for the equality $\overline{W(T)} \cap \Omega =W_\Psi(T)$ and for the spectral inclusion $\sigma(T) \subseteq \overline{W(T)} \cap \Omega $.
\begin{exple}
\label{ex:deriv.cond}
Let $f:\Omega \to \mathbb{C}$ be holomorphic, $f \not\equiv 0$, $A$ a bounded or un\-bounded linear operator in $\mathcal{H}$ with $0\in \sigma(A)$,
$0 \in \overline{W(A)}\setminus W(A)$ and~consider
\[
T(\lambda) \vcentcolon= f(\lambda) A, \quad \operatorname{dom} T(\lambda) \vcentcolon= \operatorname{dom} A, \quad \lambda \in \Omega.
\]
Then \eqref{eq:ass.markus.bdd} is violated because, for any $k\in \mathbb{N}_0$ and $\mu \in \Omega$, we have $T^{(k)}(\mu) = f^{(k)}(\mu) A$ with $\operatorname{dom} T^{(k)}(\lambda) = \operatorname{dom} A$, $\lambda \in \Omega$, and so $0\!\in\! \overline{W(T^{(k)}(\mu))}$
since $0 \!\in\! \overline{W(A)}$. Further, it is easy to see~that
\[
\sigma(T)=\Omega, \quad W(T)= \overline{W(T)}\cap \Omega =\{ z \in \Omega: f(z)=0\} \ne \Omega, \quad W_\Psi(T)=\Omega.
\]
Thus neither $\overline{W(T)} \cap \Omega =W_\Psi(T)$ nor the spectral inclusion $\sigma(T) \subseteq
\overline{W(T)}\cap \Omega $ hold, while $\sigma(T) = W_\Psi(T)$.
\end{exple}
In the sequel we generalise Theorem \ref{thm:bdd.pseudo.num.ran} (i) and (ii) to families of unbounded operators and/or forms, including operator polynomials and sectorial families with constant form domain. In the remaining part of this section, we study the relation between $W_\Psi(T)$ and $\overline{W(T)}\cap\Omega$; results containing spectral enclosures may be found in Section \ref{subsec:pseudo.nr.spec.encl}.
\begin{prop}
\label{prop:poly.pseudo.num.op}
Let $T$ be an operator polynomial in $\mathcal{H}$ of degree $n\in\mathbb{N}$ with $($possibly unbounded$)$ coefficients
\vspace{-1mm} $A_k:\mathcal{H}\supseteq\operatorname{dom} A_k\to\mathcal{H}$, i.e.
\begin{equation}
T(\lambda)\vcentcolon=\sum_{k=0}^n\lambda^k A_k, \quad \operatorname{dom} T(\lambda)\vcentcolon=\displaystyle\bigcap_{k=0}^n\operatorname{dom} A_k, \quad \lambda\in\mathbb{C}.
\vspace{-1mm}
\end{equation}
If \,$0\notin\overline{W(A_n)}$, \vspace{-1mm} then
\begin{equation}
\label{eq:poly.pseudo.num.op}
W_\Psi(T)\subseteq\overline{W(T)}\cap \Omega,
\vspace{-2mm}
\end{equation}
and analogously for form polynomials.
\end{prop}
\begin{proof}
Let $\lambda_0\in W_{\Psi}(T)$. By Proposition \ref{prop:pseudo.num}, there is a sequence $\{f_m\}_m\subseteq\operatorname{dom} T(\lambda_0)$ with $\norm{f_m}=1$, $m\in\mathbb{N}$, and $(T(\lambda_0)f_m,f_m)\to0$ for $m\to\infty$. Since $0\notin W(A_n)$ by assumption, the complex \vspace{-2mm} polynomial
\begin{equation}
p_m(\lambda)\vcentcolon=(T(\lambda)f_m,f_m)=\sum_{k=0}^n(A_kf_m,f_m)\lambda^k, \quad \lambda\in\mathbb{C},
\vspace{-2mm}
\end{equation}
has degree $n$ for each $m\in\mathbb{N}$. Let $\lambda^m_1,\dotsc,\lambda^m_n\in\mathbb{C}$ denote its zeros. Then $\lambda^m_j\in W(T)$, $j=1,\dotsc,n$, and $p_m$ admits the \vspace{-2mm} factorisation
\begin{equation}
p_m(\lambda)=(A_nf_m,f_m)\prod_{j=1}^{n}(\lambda-\lambda^m_j), \quad \lambda\in\mathbb{C}, \quad m\in\mathbb{N}.
\vspace{-2mm}
\end{equation}
Since $p_m(\lambda_0)\to0$ for $m\to\infty$ and $\abs{(A_nf_m,f_m)}\ge\operatorname{dist}(0,\overline{W(A_n)})>0$, we conclude $\min_{j}\abs{\lambda_0-\lambda^m_j}\le\abs{p_m(\lambda_0)/(A_nf_m,f_m)}^{1/n}\to0$, $m\to\infty$; hence there exist $j_m\in\{1,\dotsc,n\}$ with $\lambda^m_{j_m}\to\lambda_0$, thus $\lambda_0\in\overline{W(T)}$ and $\lambda_0 \in W_\Psi (T) \subseteq \Omega$.
\end{proof}
Next we generalise Theorem \ref{thm:bdd.pseudo.num.ran} (i) to families of sectorial forms with constant domain
which satisfy a natural continuity assumption, see \cite[Thm.\ VI.3.6]{Kato-1995}. This assumption is met, in particular, by holomorphic form families of type (a) and associated operator families of type (B).
Recall that a family $\mathbf{t}$ of densely defined closed sectorial sesquilinear forms in $\mathcal{H}$ is called holomorphic of type (a) if its domain is constant and the mapping $\lambda\mapsto\mathbf{t}(\lambda)[f]$ is holomorphic for every $f\in\mathcal{D}_\mathbf{t}\!\vcentcolon=\!\operatorname{dom}\mathbf{t}(\lambda)$. The associated family $T$ of m-sectorial operators is called holomorphic of type~(B), see \cite[Sect.~VII.4.2]{Kato-1995} and also \cite{MR3850318}. Sufficient conditions on form families to be holomorphic of type (a) can be found in \cite[\S VII.4]{Kato-1995}.
\begin{thm}
\label{prop:cont.fam.form}
\label{prop:hol.fam.incl.i}
Let $\mathbf{t}$ be a family of sectorial sesquilinear forms in $\mathcal{H}$ with constant domain $\mathcal{D}_\mathbf{t}\vcentcolon=\operatorname{dom}\mathbf{t}(\lambda)$, $\lambda\in\Omega$. Assume that for each $\lambda_0\in\Omega$, there exist $r$, $C>0$ and $w:B_r(\lambda_0)\to[0,\infty)$, $\lim_{\lambda\to\lambda_0}w(\lambda)=0$, such \vspace{-1mm} that
\begin{equation}
\label{eq:cont.fam.form.ass}
\abs{\mathbf{t}(\lambda_0)[f]-\mathbf{t}(\lambda)[f]}\le w(\lambda)\left( \abs{\operatorname{Re}\mathbf{t}(\lambda_0)[f]}+C\norm{f}^2\right)
\vspace{-1mm}
\end{equation}
for all $\lambda\in B_r(\lambda_0)$ and $f\in\mathcal{D}_\mathbf{t}$. \vspace{-1mm} Then
\begin{equation}
\overline{W(\mathbf{t})}\cap \Omega \subseteq W_\Psi(\mathbf{t}).
\vspace{-1mm}
\end{equation}
In particular, if $\mathbf{t}$ is a holomorphic form family of type \textnormal{(a)} with associated holomorphic operator family $T$ of type \textnormal{(B)} in $\mathcal{H}$, \vspace{-1mm}then
\begin{equation}
\label{eq:sect-nr-psnr}
\overline{W(T)}\cap \Omega \subseteq W_\Psi(T), \qquad \overline{W(\mathbf{t})}\cap \Omega \subseteq W_\Psi(\mathbf{t}).
\end{equation}
\end{thm}
\begin{proof}
Let $\lambda_0\in\overline{W(\mathbf{t})}\cap\Omega$. Then there exist $\{\lambda_n\}_n\subseteq\Omega$ and $\{f_n\}_n\subseteq\mathcal{D}_\mathbf{t}$ with $\norm{f_n}=1$, $\mathbf{t}(\lambda_n)[f_n]=0$, $n\in\mathbb{N}$, and $\lambda_n\to\lambda_0$, $n\to\infty$. We show that $\mathbf{t}(\lambda_0)[f_n]\!\to\!0$ for $n\!\to\!\infty$ which, in view of \eqref{eq:pseudo.num.id}, implies $\lambda_0\!\in\! W_\Psi(\mathbf{t})$. \vspace{-1mm} By~\eqref{eq:cont.fam.form.ass},
\begin{equation}
\abs{\mathbf{t}(\lambda_0)[f_n]}=\abs{\mathbf{t}(\lambda_0)[f_n]-\mathbf{t}(\lambda_n)[f_n]}\le w(\lambda_n)\left(|\operatorname{Re}\mathbf{t}(\lambda_0)[f_n]|+C\right), \quad n\in\mathbb{N}.
\vspace{-1mm}
\end{equation}
Since $\abs{\operatorname{Re}\mathbf{t}(\lambda_0)[f_n]}\le\abs{\mathbf{t}(\lambda_0)[f_n]}$ and $w(\lambda_n)\to0$, $n\to\infty$, we obtain that, for $n\in\mathbb{N}$ sufficiently \vspace{-1mm} large,
\begin{equation*}
\abs{\mathbf{t}(\lambda_0)[f_n]}\le C\frac{w(\lambda_n)}{1-w(\lambda_n)}\longrightarrow 0, \quad n\to\infty.
\vspace{-1mm}
\end{equation*}
Now suppose that $\mathbf{t}$ and $T$ are holomorphic families of type \textnormal{(a)} and \textnormal{(B)}, respectively. We only need to show the second inclusion, the first one then follows from $W(T)\subseteq W(\mathbf{t})$ and Corollary \ref{cor:pseudo.num} (ii). The second inclusion follows from what we already proved since for holomorphic form families of type (a), after a possible shift $\mathbf{t}\!+\!c$ where $c\!>\!0$ is sufficiently large to ensure $\operatorname{Re}\mathbf{t}(\lambda_0)\!\ge\!1$, \cite[Eqn.\ VII.(4.7)]{Kato-1995} shows that assumption \eqref{eq:cont.fam.form.ass} is satisfied.
\end{proof}
Theorem \ref{thm:bdd.pseudo.num.ran} (i) does not extend to analytic families of sectorial linear operators with non-constant form domains, as the following example inspired by \cite[Ex.~VII.1.4]{Kato-1995} illustrates.
\begin{exple}
\label{ex:ODE}
Let $\mathcal{H}=L^2(0,1)$. The family $T(\lambda)$, $\lambda\in\mathbb{C}$, given by
\begin{equation}
\begin{aligned}
T(\lambda)f & \vcentcolon=-f''-\lambda f, \\
\operatorname{dom} T(\lambda) & \vcentcolon=\set{f\in H^2(0,1)}{f(0)=0, \, \lambda f'(1)=f(1)},
\end{aligned}
\end{equation}
is a holomorphic family of m-sectorial operators, but not holomorphic of type~(B). Below we will show \vspace{-1mm} that
\begin{align}
\label{eq:veryverylast}
0\in\overline{W(T)} \subseteq \overline{W_\Psi (T)}, \qquad 0\notin W_\Psi(T);
\vspace{-1mm}
\end{align}
note that, since $\Omega \!=\! \mathbb{C}$, this implies that the conclusion of Theorem \ref{thm:bdd.pseudo.num.ran} (i) does not hold and that
$W_\Psi(T)$ is not closed in $\mathbb{C}$.
Indeed, it is not difficult to check that the forms associated to $T(\lambda)$, \vspace{-1mm} $\lambda\in\mathbb{C}$,
\begin{equation}
\mathbf{t} (0) [f] = \|f'\|^2, \quad \mathbf{t} (\lambda) [f] = \|f'\|^2 - \lambda \norm{f}^2 - \frac1\lambda |f(1)|^2,
\quad \lambda \in\!\mathbb{C}\!\setminus\!\{0\},
\end{equation}
are densely defined, closed and sectorial, but have $\lambda$-depending domain $\operatorname{dom} \mathbf{t}(0) \!=\! H_0^1(0,1)$ and $\operatorname{dom} \mathbf{t}(\lambda)\!=\! \set{f \in H^1(0,1)}{f(0)=0}$ for $\lambda\in\!\mathbb{C}\!\setminus\!\{0\}$. The holomorphy of the family follows from the holomorphy of the integral kernel, i.e.\ the Green's function, of $(T(\lambda)-\mu)^{-1}$, which, for $\lambda\in\mathbb{C}$ and $\mu\in\rho(T(\lambda))\neq\emptyset$, is given by
\begin{equation}
G(x,y;\mu,\lambda)=
\frac{\sin(\sqrt{\mu\!+\!\lambda}x)(\sin(\sqrt{\mu\!+\!\lambda}(1\!-\!y))\!-\!\lambda\sqrt{\mu\!+\!\lambda}\cos(\sqrt{\mu\!+\!\lambda}(1\!-\!y)))}
{\sqrt{\mu\!+\!\lambda}(\sin\sqrt{\mu\!+\!\lambda}-\lambda\sqrt{\mu\!+\!\lambda}\cos\sqrt{\mu\!+\!\lambda}
)}
\end{equation}
for $0\le x\le y\le1$ and $G(x,y;\mu,\lambda)=G(y,x;\mu,\lambda)$ for $0\le y\le x\le1$, cf.\ \cite[Ex.\ V.4.14, VII.1.5, VII.1.11]{Kato-1995} where the family $T(\lambda) + \lambda$, $\lambda \in \mathbb{C}$,
was~studied.
For fixed $\lambda\in\mathbb{C}$, the spectrum of $T(\lambda)$ is given by the singularities
of the integral kernel $G(\cdot,\cdot;\mu,\lambda)$,
\begin{equation}
\begin{aligned}
\sigma(T(\lambda)) \setminus \{-\lambda\}
& \!=\!\sigma_{\operatorname{p}}(T(\lambda)) \setminus \{-\lambda\}
\!=\!\big\{\mu\!\in\!\mathbb{C} \setminus \{-\lambda\}:\lambda\sqrt{\mu\!+\!\lambda}=\tan\sqrt{\mu\!+\!\lambda}\big\}.
\end{aligned}
\vspace{-1mm}
\end{equation}
For $\lambda\in(0,\infty)$ the operator $T(\lambda)$ is self-adjoint and unbounded from above, and for $\lambda \!\in\! (0,1)$ it has an eigenvalue
$\mu_\lambda \in \sigma_{\operatorname{p}}(T(\lambda)) \subseteq W(T(\lambda))$
of the form $\mu_\lambda = -\lambda - \kappa_\lambda^2 <0$ where $\kappa_\lambda$ is the unique positive solution of $\tanh \kappa = \lambda \kappa$.
Thus $0 \in W(T(\lambda))$ for $\lambda \in (0,1)$ due to the convexity of $W(T(\lambda))$, which proves $(0,1) \subseteq W(T) \subseteq W_\Psi (T)$ and thus $0\in\overline{W(T)}$. On the other hand, $0\notin\overline{W(T(0))}=[\pi^2,\infty)$ and so Proposition~\ref{prop:pseudo.num}
implies~$0\notin W_\Psi(T)$.
\end{exple}
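The eigenvalue $\mu_\lambda$ in Example \ref{ex:ODE} is also easy to observe numerically. The following Python sketch is illustrative only (the bisection parameters are ad hoc assumptions): it solves $\tanh\kappa=\lambda\kappa$ for the unique $\kappa_\lambda>0$ and evaluates $\mu_\lambda=-\lambda-\kappa_\lambda^2$ for some $\lambda\in(0,1)$.
\begin{verbatim}
# Illustrative sketch for Example ex:ODE: for lam in (0,1) compute the
# unique kappa > 0 with tanh(kappa) = lam*kappa by bisection and report
# the negative eigenvalue mu_lam = -lam - kappa**2 of T(lam).
import math

def kappa(lam, tol=1e-12):
    g = lambda k: math.tanh(k) - lam * k  # g > 0 near 0+, g < 0 for large k
    lo, hi = 1e-8, 2.0
    while g(hi) > 0:                      # expand until a sign change occurs
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for lam in (0.25, 0.5, 0.75):
    k = kappa(lam)
    print(lam, -lam - k * k)              # mu_lam < 0 for lam in (0,1)
\end{verbatim}
Since $\mu_\lambda<0$ while $W(T(\lambda))$ is unbounded from above, this confirms numerically that $0\in W(T(\lambda))$ for $\lambda\in(0,1)$.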
\section{Spectral enclosure via pseudo numerical range}
\label{subsec:pseudo.nr.spec.encl}
In this section we derive spectral enclosures for families of unbounded linear operators $T(\lambda)$, $\lambda \in\Omega$, using the pseu\-do numerical range $W_\Psi(T)$. The latter is tailored to enclose the approximate point spectrum.
The spectrum and resolvent set of an operator family $T(\lambda)$, $\lambda \in \Omega$, respectively, are defined \vspace{-1mm} as
\begin{equation}
\sigma(T):=\set{\lambda\in\Omega}{0\in\sigma(T(\lambda))} \subseteq
\Omega,
\quad \rho(T):=\Omega \setminus \sigma(T),
\end{equation}
and analogously for the various subsets of the spectrum. In addition to the approximate point \vspace{-1mm} spectrum
\[
\sigma_{\operatorname{ap}}(T) \vcentcolon=
\set{\lambda\in\Omega}{\exists \, \{f_n\}_{n}\subseteq\operatorname{dom} T(\lambda), \norm{f_n}=1, T(\lambda)f_n\to 0, n\to\infty},
\]
we introduce the \emph{$\varepsilon$-approximate point spectrum}, see \cite{MR1217705} for the \vspace{-1mm}operator~case,
\begin{equation}
\label{eq:eps-ap-spec}
\sigma_{{\rm ap}, \varepsilon}(T) \vcentcolon= \set{\lambda\in\Omega}{\exists \, f\in\operatorname{dom} T(\lambda), \,\norm{f}=1, \, \norm{T(\lambda)f}<\varepsilon}.
\end{equation}
The latter is a subset of the $\varepsilon$-pseudo \vspace{-1mm}spectrum
\[
\sigma_\varepsilon(T) := \sigma_{{\rm ap}, \varepsilon}(T) \cup\sigma(T),
\]
which was defined for operator functions with unbounded closed values in \cite[Sect.\ 9.2, (9.9)]{MR2359869}, comp.\ also \cite{MR2158921}.
Clearly, for monic linear polynomials $T(\lambda)= A \!-\! \lambda I_{\mathcal{H}}$, $\lambda \!\in\! \mathbb{C}$, these notions coincide with the
spectrum, resolvent set, approximate point spectrum, $\varepsilon$-approximate point spectrum and $\varepsilon$-pseudo spectrum of the linear operator~$A$.
\begin{prop}
\label{thm:spec.incl.pseudo.num.ran}
For any operator family $T(\lambda)$, $\lambda \in \Omega$, and every $\varepsilon > 0$,
we have \vspace{-1mm} $\sigma_{{\rm ap}, \varepsilon}(T) \subseteq W_\varepsilon(T)$,
\begin{equation}
\label{eq:app-psi}
\norm{T(\lambda)^{-1}}\le\frac{1}{\varepsilon}, \quad \lambda\in\rho(T)\setminus W_\varepsilon(T),
\vspace{-1mm} \end{equation}
\vspace{-1mm} and hence
\begin{equation}
\label{eq:res.est}
\sigma_{\operatorname{ap}}(T)\subseteq W_\Psi(T).
\end{equation}
If $\sigma(T(\lambda))\subseteq\overline{W(T(\lambda))}$ for all $\lambda\in\Omega$, then
\begin{equation}
\sigma(T)\subseteq W_\Psi(T).
\end{equation}
\end{prop}
\begin{proof}
The claims follow easily from \eqref{eq:eps-ap-spec} and Definition \ref{def:pseudo-nr} together with Cauchy-Schwarz' inequality and \eqref{eq:W.psi.epsilon} in Proposition \ref{prop:pseudo.num}.
\end{proof}
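The inequality $\abs{(T(\lambda)f,f)}\le\norm{T(\lambda)f}$ behind Proposition \ref{thm:spec.incl.pseudo.num.ran} can be observed directly in finite dimensions. The following Python sketch is illustrative only (the Jordan block and the sample points $\lambda$ are ad hoc assumptions): for $T(\lambda)=A-\lambda I$ it compares the $\sigma_{{\rm ap},\varepsilon}$-witness $\norm{T(\lambda)f}$, realised by a smallest singular vector $f$, with the $W_\varepsilon$-witness $\abs{(T(\lambda)f,f)}$ for the same unit vector $f$.
\begin{verbatim}
# Illustrative sketch: sigma_{ap,eps}(T) is contained in W_eps(T) because
# |(T(lam)f, f)| <= ||T(lam)f|| for every unit vector f.
import numpy as np

A = np.diag(np.ones(5), 1)               # 6x6 nilpotent Jordan block (ad hoc)
I = np.eye(6)

def witnesses(lam):
    T = A - lam * I
    _, s, Vh = np.linalg.svd(T)          # f = right singular vector of s_min
    f = Vh[-1].conj()
    return s[-1], abs(np.vdot(f, T @ f)) # ||T(lam)f||  vs  |(T(lam)f, f)|

for lam in (0.0, 0.3 + 0.1j):
    ap, w = witnesses(lam)
    assert w <= ap + 1e-12               # the W_eps test is never harder
    print(lam, ap, w)
\end{verbatim}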
The following simple examples illustrate some properties of $W_\Psi(T)$ versus $\overline{W(T)}\cap \Omega $, in particular, in view of spectral enclosures.
\begin{exple}
\label{ex:pseudo.num.spec.incl}
\begin{enumerate}
\item Let $A\!>\!0$ be self-adjoint in $\mathcal{H}$ with $0\!\in\!\sigma(A)$. Then,~for the non-holomorphic family $T(\lambda)\!=\!A\!+\!\abs{\sin\lambda}$, $\lambda \!\in\! \Omega\!:=\mathbb{C}$, it is easy to see~that
\begin{equation}
W_\Psi(T)=\sigma(T)=\set{k\pi}{k\in\mathbb{Z}}\not\subseteq\overline{W(T)}\cap \Omega =\emptyset;
\end{equation}
notice that this implies $\overline{W_\Psi (T)}\cap \Omega \neq \overline{W(T)}\cap \Omega$, i.e.\ the closures of $W_\Psi(T)$ and $W(T)$ in $\Omega$ do not coincide.
\item Let $A$ be bounded in $\mathcal{H}$ with $\operatorname{Re} W(A)>0$, $0\in\sigma(A)$ and $0\notin W(A)$. Consider the holomorphic family of bounded operators in $\mathcal{H}\oplus\mathcal{H}$
\begin{equation}
T(\lambda)=\left(\begin{array}{cc}
\lambda A & 0 \\
0 & \lambda\operatorname{Log}(\lambda+1) I_{\mathcal{H}}
\end{array}\right), \quad \lambda\in \Omega:= \mathbb{C}\setminus(-\infty,-1];
\end{equation}
here $\operatorname{Log}:\mathbb{C}\setminus(-\infty,0]\to\set{z\in\mathbb{C}}{\operatorname{Im} z\in(-\pi,\pi]}$ denotes the
principal value of the complex logarithm.
This family does not satisfy condition \eqref{eq:ass.markus.bdd} in Theorem~\ref{thm:bdd.pseudo.num.ran} since $0 \in \overline{W(A)}$ by assumption. It is not difficult to show~that
\begin{equation}
W_\Psi(T)=\sigma(T)= \mathbb{C}\setminus(-\infty,-1] \not\subseteq \overline{W(T)} \cap \Omega \subseteq \overline{B_1(-1)} \setminus [-2,-1]. \hspace{-3mm}
\end{equation}
In fact, the claims for $W_\Psi(T)$ are obvious. If $\lambda \!\in\! W(T)$, then $\lambda\!\in\!\mathbb{C}\!\setminus\!(-\infty,-1]$ and there exists $x\!=\!(f,g)^{\rm t} \!\in\! \mathcal{H} \oplus \mathcal{H}$, $(f,g)^{\rm t}\ne (0,0)^{\rm t}$, with
\[
\big( T(\lambda) x,x \big) = \lambda \big( (Af,f) + (\ln |\lambda+1| + {\rm i} \arg (\lambda+1) ) (g,g)\big) = 0
\hspace{-8mm}
\]
or, equivalently, noting that $\lambda \neq 0$ implies $g\neq 0$ as $0 \notin W(A)$,
\[
\lambda =0 \ \vee \ \Big( |\lambda\!+\!1| \!=\! \exp\Big(\!-\!\frac{\operatorname{Re}(Af,f)}{(g,g)}\Big) \wedge\arg (\lambda\!+\!1) \!=\! - \frac{\operatorname{Im} (Af,f)}{(g,g)} \Big).
\hspace{-12mm}
\]
Hence, since $\operatorname{Re} W(A)>0$,
\begin{align*}
W(T) \setminus \{0\} \! \subseteq
\! \big\{ z\!\in\! \mathbb{C}\setminus(-\infty,-1] \,:\, |z\!+\!1| \in (0,1)\big\} \subseteq
B_1(-1) \setminus (-2,-1].
\hspace{-10mm}
\end{align*}
Moreover, for arbitrary $h\in\mathcal{H}$, $h\ne 0$,
\begin{equation}
\left(T\left(\exp{\Big(\!-\!\frac{(Ah,h)}{(h,h)}\Big)} -1\right)\binom{h}{h}, \binom {h}{h} \right)=0.
\end{equation}
This shows that $\set{\exp(-z) - 1}{z \in W(A)} \subseteq W(T)$ and, since $\exp$ is entire and non-constant, $W(A)^\circ\neq\emptyset$ implies $W(T)^\circ\neq\emptyset$ by the open mapping theorem for holomorphic functions. So, whenever $W(A)^\circ\neq\emptyset$, the interiors of $W_\Psi(T)$ and $W(T)$ are non-empty and do not coincide,
\[
W_\Psi(T)^\circ\!=\!\mathbb{C}\setminus(-\infty,-1], \quad \emptyset \ne W(T)^\circ \subseteq
B_1(-1) \setminus (-2,-1].
\]
\end{enumerate}
\end{exple}
In the following, we generalise the spectral enclosure for bounded holomorphic families in Theorem \ref{thm:bdd.pseudo.num.ran} (ii) to holomorphic form families $\mathbf{t}$ of type (a) and associated operator families of type (B), i.e.\ $\mathbf{t}(\lambda)$ is sectorial with vertex $\gamma(\lambda)\!\in\!\mathbb{R}$, semi-angle $\vartheta(\lambda)\!\in\! [0,\frac \pi 2)$ and $\lambda$-independent domain $\operatorname{dom} \mathbf{t}(\lambda)\!=\! \mathcal{D}_\mathbf{t}$. Here, for $k \in \mathbb{N}_0$, we denote the $k$-th derivative of $\mathbf{t}$ by
\begin{equation}
\mathbf{t}^{(k)}(\lambda)[f] \vcentcolon= (\mathbf{t}(\cdot)[f])^{(k)}(\lambda), \quad f \in \operatorname{dom} \mathbf{t}^{(k)}(\lambda) \vcentcolon= \mathcal{D}_\mathbf{t} = \operatorname{dom} \mathbf{t}(\lambda), \quad \lambda \in \Omega;
\vspace{-3mm}
\end{equation}
note that $\mathbf{t}^{(k)}(\lambda)$ need not be closable or sectorial if $k>0$.
\begin{thm}
\label{thm:pseudo.dense.hol.fam}
\label{thm:markus.B}
Let $\mathbf{t}$ be a holomorphic form family of type \textnormal{(a)} with associated holomorphic operator family $T$ of type \textnormal{(B)} in $\mathcal{H}$. If there exist $k\in\mathbb{N}_0$, $\mu\in\Omega$ and a core $\mathcal{D}$ of $\mathbf{t}(\mu)$ \vspace{-2mm} with
\begin{equation}
\label{eq:ass.pseudo.dense.hol.fam}
0 \notin \overline{W\big(\mathbf{t}^{(k)}(\mu)\big|_\mathcal{D}\big)},
\vspace{-2mm}
\end{equation}
then
\begin{equation}
\sigma(T) \subseteq W_\Psi(\mathbf{t})=\overline{W(\mathbf{t})} \cap \Omega .
\end{equation}
If, in addition, the operator family $T$ has constant domain, then
\begin{equation}
\sigma(T) \!\subseteq \, W_\Psi(T)=\overline{W(T)}\cap \Omega .
\end{equation}
\end{thm}
\begin{rem}
\label{rem:suff.cond.pseudo.dense}
\begin{enumerate}
\item Since $\mathbf{t}(\lambda)$ is densely defined, closed and sectorial for all $\lambda \!\in\! \Omega$, condition~\eqref{eq:ass.pseudo.dense.hol.fam} for $k=0$ has the two equivalent forms
\[
0 \notin \overline{W\big(\mathbf{t}(\mu)\big|_\mathcal{D}\big)} \ \iff \
0 \notin \overline{W(T(\mu))};
\]
hence, by Proposition \ref{prop:pseudo.num} a sufficient condition for \eqref{eq:ass.pseudo.dense.hol.fam} is
\[ W_\Psi(T)\neq\Omega. \]
\item For operator polynomials $T$, which are holomorphic and have constant domain by definition, see Proposition \ref{prop:poly.pseudo.num.op}, no sectoriality assumption is needed for the enclosure
\begin{equation}
\sigma_{\operatorname{ap}} (T) \subseteq W_\Psi (T) \subseteq \overline{W(T)} \cap \Omega.
\end{equation}
By Propositions \ref{prop:poly.pseudo.num.op} and \ref{thm:spec.incl.pseudo.num.ran}, the above holds under the mere assumption that $0 \notin \overline{W(A_n)}$ where $A_n$ is the leading coefficient of $T$; note that then \eqref{eq:ass.pseudo.dense.hol.fam} holds with $k=n$ and arbitrary $\mu \in \mathbb{C}$. This generalises the classical result \cite[Thm.\ 26.7]{Markus-1988} for bounded operator polynomials; see also \cite[Prop.\ 3.3]{Wagenhofer-PhD-2007} for the block numerical range.
\item In general, neither the assumption on holomorphy nor condition \eqref{eq:ass.pseudo.dense.hol.fam} in Theorem \ref{thm:pseudo.dense.hol.fam} can be omitted, see Examples \ref{ex:deriv.cond} and \ref{ex:pseudo.num.spec.incl}.
\end{enumerate}
\end{rem}
\begin{proof}[Proof of Theorem {\rm \ref{thm:pseudo.dense.hol.fam}}]
First we show that if condition \eqref{eq:ass.pseudo.dense.hol.fam} holds for some core $\mathcal{D}$ of $\mathbf{t}(\mu)$, it also holds for $\mathcal{D}$ replaced by $\mathcal{D}_\mathbf{t} = \operatorname{dom} \mathbf{t} (\lambda)$, $\lambda \in \Omega$.
For $k\!=\!0$, this follows from the properties of a core, see \cite[Thm.\ VI.1.18]{Kato-1995}. For $k>0$,
without loss of generality,
we may assume that $\operatorname{Re} \mathbf{t} (\mu) \ge 1$. From the proof of \cite[Eqn.\ VII.(4.7)]{Kato-1995}, it is easy to see that the second inequality therein holds for $\mathbf{t}^{(k)}$, i.e.~there exists a constant $C_\mu >0$ such that
\begin{equation}
\label{eq:Kato-VII.(4.7).deriv}
\big|\mathbf{t}^{(k)}(\mu)[f,g]\big| \le C_\mu \abs{\mathbf{t}(\mu) [f]}^\frac12\abs{\mathbf{t}(\mu) [g]}^\frac12, \quad f,g \in\mathcal{D}_\mathbf{t}.
\end{equation}
To prove the claim stated at the beginning assume, to the contrary, that $0 \in \overline{W(\mathbf{t}^{(k)}(\mu))}$, i.e.\ that there exists a sequence $\{f_n\}_n \subseteq \mathcal{D}_\mathbf{t}$, $\norm{f_n}=1$, $n\in\mathbb{N}$, such that $\mathbf{t}^{(k)}(\mu)[f_n] \!\to\! 0$ as $n\!\to\!\infty$. By the core property of $\mathcal{D}$ for $\mathbf{t}(\mu)$ and by \cite[Thm.\ VI.1.12]{Kato-1995}, for fixed $n\in\mathbb{N}$, there exists $\{f_{n,m}\}_m\subseteq\mathcal{D}$~with
\begin{equation}
\label{eq:core.sequ}
\hspace{2mm}
f_{n,m} \!\to\! f_n, \quad \mathbf{t}(\mu)[f_{n,m}\!-\!f_n]\!\to\! 0, \quad \mathbf{t}(\mu)[f_{n,m}] \!\to\! \mathbf{t}(\mu)[f_n], \quad m\!\to\! \infty.
\end{equation}%
Applying \eqref{eq:Kato-VII.(4.7).deriv}, we can estimate
\begin{equation}
\begin{aligned}
\big|\mathbf{t}^{\!(k)}(\mu) [f_{n,m}] \!-\! \mathbf{t}^{\!(k)}(\mu) [f_n] \big|
& \!\le\! \big|\mathbf{t}^{\!(k)}(\mu) [f_{n,m}, f_{n,m}\!-\!f_n]\big| \!+\! \big|\mathbf{t}^{\!(k)}(\mu) [f_n \!-\! f_{n,m}, f_n]\big|\\
& \!\le\! C_\mu \abs{\mathbf{t} (\mu) [f_{n,m} \!-\! f_n]}^\frac12\!\big(\abs{\mathbf{t}(\mu) [f_{n,m}]}^\frac12 \!\!+\! \abs{\mathbf{t}(\mu) [f_n]}^\frac12\!\big).
\end{aligned}
\end{equation}
Since $\norm{f_n}\!=\!1$, $n\!\in\!\mathbb{N}$, it follows from \eqref{eq:core.sequ} and the above inequality
that there exists $m_n\ge n$ such\vspace{-1mm} that
\begin{equation}
\label{eq:diag.sequ.core}
\norm{f_{n,m_n}}\ge\frac 12, \quad \abs{\mathbf{t}^{(k)}(\mu)[f_{n,m_n}]}<\abs{\mathbf{t}^{(k)}(\mu)[f_n]}+\frac{1}{n}.
\vspace{-1mm}
\end{equation}
In view of $\mathbf{t}^{(k)}(\mu) [f_n] \to 0$, $n \to \infty$, this implies the required \vspace{-1mm}claim
\begin{equation}
0 \in\overline{W\big(\mathbf{t}^{(k)}(\mu) \big|_\mathcal{D}\big)}.
\vspace{-1mm}
\end{equation}
This completes the proof that \eqref{eq:ass.pseudo.dense.hol.fam} holds with $\mathcal{D}_\mathbf{t}$ instead of $\mathcal{D}$.
By Corollary \ref{cor:pseudo.num} (ii), we have $W_\Psi(\mathbf{t})=W_\Psi(T)\subseteq\Omega$. Thus, due to \eqref{eq:sect-nr-psnr}, for the claimed equalities between pseudo numerical and numerical ranges it is sufficient to show
$W_\Psi(\mathbf{t})\subseteq\overline{W(\mathbf{t})}$ and $W_\Psi(\mathbf{t})\subseteq\overline{W(T)}$, respectively.
Let $\lambda_0\in W_\Psi(\mathbf{t})=W_\Psi(T)$. Then $0\in\overline{W(T(\lambda_0))}$ by Proposition \ref{prop:pseudo.num} and hence there exists $\{f_n\}_n\!\subseteq\!\operatorname{dom} T(\lambda_0)\!\subseteq\!\mathcal{D}_\mathbf{t}$ with $\norm{f_n}\!=\!1$, $n\in\mathbb{N}$, such that
\begin{equation}
\label{eq:form-to-0}
\scalarprod{T(\lambda_0)f_n}{f_n}=\mathbf{t}(\lambda_0)[f_n]\to 0, \quad n\to\infty.
\end{equation}
Define a sequence of holomorphic \vspace{-1mm} functions
\begin{equation}
\label{eq:diag.sequ.core.def}
\varphi_n(\lambda)\vcentcolon= \mathbf{t}(\lambda) [f_n],
\quad \lambda\in\Omega, \quad n\in\mathbb{N}.
\vspace{-1mm}
\end{equation}
Let $K\subseteq\Omega$ be an arbitrary compact subset and let $c>0$ be such that $\operatorname{Re} (\mathbf{t}+c) (\lambda_0) \ge 1$. By \cite[Eqn.\ VII.(4.7)]{Kato-1995}, there exists $b_K>0$ with
\begin{equation}
\label{eq:Kato-VII.(4.7)}
|(\mathbf{t}+c)(\lambda)[f]|\le b_K |(\mathbf{t}+c)(\lambda_0)[f]|, \quad \lambda \in K, \ f \in \mathcal{D}_\mathbf{t}.
\end{equation}
Using this, $\norm{f_n}=1$ and \eqref{eq:form-to-0}, we find that, for all $\lambda \in K$,
\begin{equation}
\abs{\varphi_n(\lambda)}\le b_{K} \abs{(\mathbf{t}+c)(\lambda_0)[f_n]} +c\le b_{K} \sup_{n\in\mathbb{N}} \abs{\mathbf{t}(\lambda_0)[f_n]} +(b_K+1)c<\infty.
\vspace{-2mm}
\end{equation}
Consequently, $\{\varphi_n\}_n$ is uniformly bounded on compact subsets of $\Omega$. By Mon\-tel's Theorem, see e.g.\ \cite[\S VII.2]{Conway-1978}, there exists a subsequence $\{\varphi_{n_j}\}_j\subseteq\{\varphi_n\}_n$ that converges locally uniformly to a holomorphic function $\varphi$. Now assumption \eqref{eq:ass.pseudo.dense.hol.fam} with $\mathcal{D}_\mathbf{t}$, which we proved to hold in the first \vspace{-1mm} step, implies
\begin{equation}
\varphi^{(k)}(\mu)=\frac{\d^k}{\d\!\lambda^k}\lim_{j\to\infty}\varphi_{n_j}(\lambda)\bigg|_{\lambda=\mu}=\lim_{j\to\infty}\varphi_{n_j}^{(k)}(\mu) = \lim_{j\to\infty} \mathbf{t}^{(k)}(\mu) [f_{n_j}]
\neq0
\vspace{-1mm}
\end{equation}
and thus $\varphi\not\equiv0$. By \eqref{eq:form-to-0}, we further conclude that $\varphi(\lambda_0)=0$. Then, by Hurwitz' Theorem, see e.g.\ \cite[\S VII.2]{Conway-1978}, there exists a sequence $\{\lambda_j\}_j\subseteq\Omega$ with $\lambda_j\to\lambda_0$ for $j\to\infty$ \vspace{-1mm} and
\begin{equation}
0=\varphi_{n_j}(\lambda_j)=\mathbf{t}(\lambda_j)[f_{n_j}], \quad j\in\mathbb{N}.
\vspace{-1mm}
\end{equation}
Hence, $\lambda_j\in W(\mathbf{t})$ for all $j\in\mathbb{N}$ and so $\lambda_0\in\overline{W(\mathbf{t})}\cap \Omega $, as required.
Now assume that the operator family $T$ has constant domain. Then, in the above construction, we have $f_{n_j} \in\operatorname{dom} T(\lambda_0) = \operatorname{dom} T(\lambda_j)$ for every $j\in\mathbb{N}$. It follows that $\lambda_j\in W(T)$, $j\in\mathbb{N}$, and thus $\lambda_0\in\overline{W(T)}\cap \Omega $.
The enclosures of the spectrum follow from Proposition \ref{thm:spec.incl.pseudo.num.ran} and from the fact that
$\sigma(T(\lambda)) \subseteq \overline{W(T(\lambda))}$ since $T(\lambda)$ is m-sectorial for all $\lambda\in\Omega$.
\end{proof}
As forms are the natural objects regarding numerical ranges, it is not surprising that the inclusion $W_\Psi(T)\subseteq\overline{W(T)}\cap \Omega $ in Theorem \ref{thm:pseudo.dense.hol.fam} may cease to hold for more general analytic operator families where the connection to a family of forms is lost. Nevertheless, using an idea analogous to that in the proof of Theorem \ref{thm:pseudo.dense.hol.fam}, one can prove the corresponding inclusion for the approximate point spectrum.
Recall that an operator family $T$ in $\mathcal{H}$ is called holomorphic of type (A) if it consists of closed operators with constant domain and for each $f\in\mathcal{D}_T\vcentcolon=\operatorname{dom} T(\lambda)$, the mapping $\lambda\mapsto T(\lambda)f$ is holomorphic on $\Omega$. Here, for $k \in\mathbb{N}_0$, the $k$-th derivative of $T$ is defined as
\begin{equation}
T^{(k)}(\lambda)f \vcentcolon= (T(\cdot) f)^{(k)}(\lambda), \quad f \in\operatorname{dom} T^{(k)} (\lambda) \vcentcolon= \mathcal{D}_T, \quad \lambda\in\Omega.
\end{equation}
\begin{thm}
\label{thm:markus.A}
Let $T$ be a holomorphic family of type \textnormal{(A)} in $\mathcal{H}$. If there exist $k\in\mathbb{N}_0$, $\mu\in\Omega$ and a core $\mathcal{D}$ of $T(\mu)$
\vspace{-1mm}
with
\begin{equation}
\label{eq:ass.markus.A}
0 \notin \overline{W\big(T^{(k)}(\mu)\big|_\mathcal{D}\big)},
\vspace{-1mm}
\end{equation}
\vspace{-1mm}then
\begin{equation}
\sigma_{\operatorname{ap}}(T)\subseteq\overline{W(T)} \cap \Omega .
\end{equation}
\end{thm}
\begin{proof}
In the same way as in the proof of Theorem \ref{thm:markus.B}, using the analogue~of \cite[Eqn.\,VII.(2.3)]{Kato-1995} for the $k$-th derivative of $T$ and Cauchy-Schwarz' in\-equa\-lity, one shows that \eqref{eq:ass.markus.A} holds with $\mathcal{D}_T \!=\! \operatorname{dom} T(\lambda)$, $\lambda \!\in\!\Omega$, instead of~$\mathcal{D}$.
We proceed similarly as in the proof of Theorem \ref{thm:pseudo.dense.hol.fam}. Let $\lambda_0\in\sigma_{\operatorname{ap}}(T)$. There exists a sequence $\{f_n\}_n\subseteq\mathcal{D}_T$ with $\norm{f_n}=1$, $n\in\mathbb{N}$, and $T(\lambda_0)f_n\to0$ as $n\to\infty$. Define a sequence of holomorphic functions
\begin{equation}
\varphi_n(\lambda)\vcentcolon= \scalarprod{T(\lambda)f_n}{f_n}, \quad \lambda\in\Omega, \quad n\in\mathbb{N}.
\end{equation}
Analogously to the proof of Theorem \ref{thm:pseudo.dense.hol.fam}, one uses Cauchy-Schwarz' inequality, equation \cite[Eqn.\ VII.(2.2)]{Kato-1995}, $\lim_{n\to\infty}T(\lambda_0)f_n=0$ and \eqref{eq:ass.markus.A} with $\mathcal{D}_T$ in order to show uniform boundedness of $\{\varphi_n\}_n$ on compacta, extract a locally uniformly converging subsequence with limit $\varphi\not\equiv0$ and infer $\varphi(\lambda_0)=0$. One then obtains $\lambda_0\in\overline{W(T)}\cap \Omega $ in the same way as in Theorem \ref{thm:pseudo.dense.hol.fam}.
\end{proof}
\begin{rem}
Theorems \ref{thm:markus.B} and \ref{thm:markus.A} generalise the classical result \cite[Thm.\ 26.6]{Markus-1988} for bounded holomorphic families (which follows from Theorem \ref{thm:bdd.pseudo.num.ran}~(ii)).
\end{rem}
As for the numerical range of unbounded operators, cf.\ \cite[Sect.\ V.3.2]{Kato-1995}, additional conditions are needed for enclosing not only the approximate point spectrum, but the entire spectrum $\sigma(T)$ in $W_\Psi (T)$.
\begin{rem}
\label{rem:spec.incl.pseudo.num.ran}
Let $T$ be a family of closed operators in $\mathcal{H}$ and let $T$ be continuous in the generalised sense. If $\sigma_{\operatorname{ap}}(T)\subseteq\Theta\subseteq\Omega$ and all connected components of $\Omega\setminus\Theta$ contain a point in the resolvent set of $T$, then $\sigma(T)\subseteq\Theta$. In particular, if all connected components of $\Omega\setminus W_\Psi(T)$ have non-empty intersection with $\rho(T)$, \vspace{-2mm} then
\begin{equation}
\sigma(T)\subseteq W_\Psi(T).
\end{equation}
This follows from the fact that the index of $T(\lambda)$ is locally constant on the set of regular points, see \cite[Thm.\ IV.5.17]{Kato-1995}.
\end{rem}
\section{Pseu\-do block numerical ranges of operator matrix functions and spectral enclosures}
\label{sec:op.mat.fam}
\label{subsec:qnr}
\label{subsec:spec.pseudo.qnr}
In this section we introduce the pseudo block numerical range of $n\times n$ operator matrix functions whose entries may have unbounded operator values. We study its basic properties for general $n$ and treat the most important case $n=2$ in greater detail.
We suppose that with respect to a fixed decomposition $\mathcal{H}=\mathcal{H}_1\oplus\cdots \oplus\mathcal{H}_n$ with $n\in\mathbb{N}$, a family $\mathcal{L}=\set{\mathcal{L}(\lambda)}{\lambda\in\Omega}$ of densely defined linear operators in $\mathcal{H}$ admits a matrix representation
\begin{equation}
\label{eq:op.matrix.fam}
\mathcal{L}(\lambda)=\left( L_{ij} (\lambda) \right)_{i,j=1}^n :\mathcal{H}\supseteq\operatorname{dom}\mathcal{L}(\lambda)\to\mathcal{H};
\end{equation}
here $L_{ij}$ are families of densely defined and closable linear operators from $\mathcal{H}_j$ to $\mathcal{H}_i$, $i$, $j=1,\dots, n$, and
\vspace{-1.5mm} $\operatorname{dom}\mathcal{L}(\lambda)=\mathcal{D}_1(\lambda)\oplus\cdots \oplus\mathcal{D}_n(\lambda)$,
\begin{equation}
\mathcal{D}_j(\lambda)\vcentcolon= \bigcap_{i=1}^n \operatorname{dom} L_{ij}(\lambda), \quad j=1,\dots,n.
\vspace{-1.5mm}
\end{equation}
The following definition generalises, and unites, several earlier concepts: the block numerical range of $n\times n$ operator matrix families whose entries have bounded linear operator values, see \cite{MR3302436}, the block numerical range of unbounded $n \times n$ operator matrices, see \cite{Rasulov-Tretter-2018}, and in the special case $n\!=\!2$, the quadratic numerical range for bounded analytic operator matrix families and unbounded operator matrices, see \cite{Tretter-2010} and \cite{Langer-Tretter-1998}, \cite{Tretter-2009}, respectively. Further, we introduce the new concept of pseudo block numerical range.
\begin{defi}
\label{def:quad.num.ran}
\begin{enumerate}
\item We define the \emph{block numerical range} of $\mathcal{L}$ (with respect to the decomposition $\mathcal{H}=\mathcal{H}_1\oplus\cdots \oplus\mathcal{H}_n$) as
\begin{equation}
W^{n}(\mathcal{L})\vcentcolon=
\{\lambda\in\Omega: \exists\, f\in \operatorname{dom}\mathcal{L}(\lambda)\cap {\mathcal S}^n, \ 0 \!\in\! \sigma(\mathcal{L}(\lambda)_f)\}
\end{equation}
where ${\mathcal S}^n\vcentcolon= \{ f\!=\!(f_i)_{i=1}^n \!\in\! \mathcal{H} : \norm{f_i}\!=\!1, i\!=\!1,\dots,n\}$ and,
for $f\!=\!(f_i)_{i=1}^n\!\in\!\operatorname{dom}\mathcal{L}(\lambda)\cap {\mathcal S}^n$ with $\lambda \!\in\! \Omega$,
\begin{equation}
\mathcal{L}(\lambda)_{f}\vcentcolon=\big( \left( L_{ij}(\lambda) f_j, f_i \right) \big)_{i,j=1}^n\in\mathbb{C}^{n\times n}.
\end{equation}
\item We introduce the \emph{pseu\-do block numerical range} of $\mathcal{L}$ as
\begin{equation}
W^n_\Psi(\mathcal{L})\vcentcolon=\bigcap_{\varepsilon>0}W_\varepsilon^n(\mathcal{L}), \qquad
W_\varepsilon^n(\mathcal{L})\vcentcolon=\hspace{-2mm} \bigcup_{\mathcal{B}\in L(\mathcal{H}), \norm{\mathcal{B}}<\varepsilon}\hspace{-2mm} W^n(\mathcal{L}+\mathcal{B}), \quad \varepsilon>0.
\end{equation}
\end{enumerate}
\end{defi}
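For families of matrices, Definition \ref{def:quad.num.ran} can be sampled directly. The following Python sketch is illustrative only (the block sizes and random entries are ad hoc assumptions): for a bounded $2\times 2$ operator matrix $\mathcal{A}$, i.e.\ $\mathcal{L}(\lambda)=\mathcal{A}-\lambda I_\mathcal{H}$, it collects the eigenvalues of the $2\times2$ compressions $\mathcal{A}_f=\big((A_{ij}f_j,f_i)\big)_{i,j=1}^2$ over random $f=(f_1,f_2)\in{\mathcal S}^2$, which samples the quadratic numerical range, cf.\ \eqref{eq:qnr.equiv} below.
\begin{verbatim}
# Illustrative Monte Carlo sampling of the quadratic numerical range
# W^2(A) of a bounded 2x2 operator matrix A = (A_ij).
import numpy as np

rng = np.random.default_rng(1)
k = 4                                    # dimension of each component (ad hoc)
A = [[rng.normal(size=(k, k)) for _ in range(2)] for _ in range(2)]

def unit(n):
    f = rng.normal(size=n) + 1j * rng.normal(size=n)
    return f / np.linalg.norm(f)

points = []
for _ in range(5000):
    f = (unit(k), unit(k))
    M = np.array([[np.vdot(f[i], A[i][j] @ f[j]) for j in range(2)]
                  for i in range(2)])    # M[i][j] = (A_ij f_j, f_i)
    points.extend(np.linalg.eigvals(M))  # eigenvalues sample W^2(A)
print(len(points), "sample points of W^2(A)")
\end{verbatim}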
Note that, indeed, if $\mathcal{L}(\lambda)\!=\!\mathcal{A}\!-\!\lambda I_\mathcal{H}$, $\lambda \!\in\! \mathbb{C}$, with an (unbounded) operator matrix $\mathcal{A}$ in $\mathcal{H}$, then
$\operatorname{dom} \mathcal{L}(\lambda)\!=\!\operatorname{dom} \mathcal{A}$ is constant for $\lambda\!\in\!\mathbb{C}$ and
$W^n(\mathcal{L})$ coincides with the block numerical range $W^n(\mathcal{A})$ first introduced in \cite{Rasulov-Tretter-2018} and, for $n\!=\!2$, in \cite{Tretter-2009}. While the pseudo numerical range also satisfies $W_\Psi(\mathcal{L})\!=\!\overline{W(\mathcal{L})} = \overline{W(\mathcal{A})}$, this is no longer true for the pseudo block numerical range when $n>1$; in fact, Example \ref{ex:jordan} below shows that $W_\Psi^2(\mathcal{L})\neq \overline{W^2(\mathcal{L})} = \overline{W^{2}(\mathcal{A})}$ is possible.
\begin{rem}
\label{rem:4.2}
It is not difficult to see that, for the block numerical range and the pseudo block numerical range of general operator matrix \vspace{-1mm} families,
\begin{equation}
\label{eq:qnr.equiv}
\lambda\in W^n(\mathcal{L}) \iff 0\in W^n(\mathcal{L}(\lambda))
\vspace{-1mm}
\end{equation}
and $ W^n(\mathcal{L})\!\subseteq\! W^n_\Psi(\mathcal{L})$. If $\operatorname{dom} \mathcal{L}(\lambda)\!=:\!\mathcal{D}_\mathcal{L}$, $\lambda\!\in\!\Omega$, is constant, we can also~\vspace{-1mm}write
\[
W^{n}(\mathcal{L})=
\bigcup_{f\in\mathcal{D}_\mathcal{L}\cap{\mathcal S}^n} \sigma \big( \mathcal{L}_{f} \big ).
\vspace{-2mm}
\]
\end{rem}
There are several other possible ways to define the pseudo block numerical range. In the following we show that, in general, they inevitably fail to contain the approximate point spectrum of an operator matrix family.
\begin{defi}
\label{eq:alt.def.psi0}
Define
\begin{equation}
W_{\Psi,0}^{n}(\mathcal{L})\!\vcentcolon=\!\set{\lambda\!\in\!\Omega}{0\in\overline{W^{n}(\mathcal{L}(\lambda))}}, \quad
W_{\Psi,i}^{n}(\mathcal{L})\!\vcentcolon=\!\bigcap_{\varepsilon>0}W_{\varepsilon,i}^{n}(\mathcal{L}), \ i\!=\!1,2,
\vspace{-2mm}
\end{equation}
where, for $\varepsilon>0$,
\begin{equation}
\begin{aligned}
W_{\varepsilon,1}^{n}(\mathcal{L}) & \!\vcentcolon=\! \set{\lambda\in\Omega}{\exists\, f\in\operatorname{dom}\mathcal{L}(\lambda) \cap {\mathcal S}^n, \abs{\det(\mathcal{L}(\lambda)_{f})}<\varepsilon}\!, \\
W_{\varepsilon,2}^{n}(\mathcal{L}) & \!\vcentcolon=\! \!\!\bigcup_{B_i\in L(\mathcal{H}_i),\norm{B_i}<\varepsilon} \!\!W^{n}\big(\mathcal{L}+\operatorname{diag}(B_1,\dots,B_n)\big).
\end{aligned}
\end{equation}
\end{defi}
While for the pseudo numerical range, analogous concepts as in Definition~\ref{eq:alt.def.psi0} coincide by Proposition \ref{prop:pseudo.num}, this is not true for the pseudo block numerical range. Here, in general, we only have the following inclusions.
\begin{prop}
\label{prop:nested.def.pseudo.qnr}
The pseudo block numerical range $W^{n}_\Psi(\mathcal{L})$ satisfies
\begin{equation}
\label{eq:pbnri}
W^n(\mathcal{L}) \subseteq W^{n}_{\Psi,1}(\mathcal{L})\subseteq W_{\Psi,0}^{n}(\mathcal{L})\subseteq W^{n}_{\Psi,2}(\mathcal{L})\subseteq W^{n}_\Psi(\mathcal{L}).
\end{equation}
\end{prop}
\begin{proof}
We consider the case $n=2$; the proofs for $n>2$ are analogous. The leftmost and rightmost inclusions are trivial by definition. For the remaining inclusions, it is sufficient to show that, for every $\varepsilon>0$,
\begin{equation}
\label{eq:nested.def.pseudo.qnr}
W^2_{\varepsilon,1}(\mathcal{L})
\subseteq \set{\lambda\in\Omega}{0\in\operatorname{B}_{\sqrt{\varepsilon}}(W^2(\mathcal{L}(\lambda)))}
\subseteq W^2_{\sqrt{\varepsilon},2}(\mathcal{L}).
\end{equation}
Then the respective claims follow by taking the intersection over all $\varepsilon>0$.
Let $\varepsilon>0$ and $\lambda\in W_{\varepsilon,1}^2(\mathcal{L})$. Then there exists $f\in\operatorname{dom}\mathcal{L}(\lambda) \cap {\mathcal S}^2$ with
\begin{equation}
\sigma(\mathcal{L}(\lambda)_{f})=\{\lambda_1,\lambda_2\}\subseteq W^2(\mathcal{L}(\lambda)), \qquad \abs{\lambda_1}\abs{\lambda_2}=\abs{\det\mathcal{L}(\lambda)_{f}}<\varepsilon.
\end{equation}
Now the first inclusion in \eqref{eq:nested.def.pseudo.qnr} follows, since the minimum does not exceed the geometric mean, from
\begin{equation}
\operatorname{dist}(0,W^2(\mathcal{L}(\lambda)))\le\min\{\abs{\lambda_1},\abs{\lambda_2}\}\le\sqrt{\abs{\lambda_1}\abs{\lambda_2}}< \sqrt{\varepsilon}.
\end{equation}
For the second inclusion, let $\lambda\!\in\!\Omega$ with $\operatorname{dist}(0,W^2(\mathcal{L}(\lambda)))\!<\!\!\sqrt{\varepsilon}$, i.e.\ there exists $\mu\!\in\!\mathbb{C}$, $\abs{\mu}\!<\!\!\sqrt{\varepsilon}$, with $\mu\!\in\! W^2(\mathcal{L}(\lambda))$ or, equivalently, $0\!\in\! W^2(\mathcal{L}(\lambda)\!-\!\mu\mathcal{I}_\mathcal{H})$. By \eqref{eq:qnr.equiv}, the latter is in turn equivalent to
\begin{equation*}
\lambda\in W^2(\mathcal{L}-\mu\mathcal{I}_{\mathcal{H}})\subseteq W^2_{\sqrt{\varepsilon},2}(\mathcal{L}). \qedhere
\end{equation*}
\end{proof}
Clearly, in the simplest case $\mathcal{L}(\lambda)=\mathcal{A}-\lambda I_\mathcal{H}$, $\lambda\in\mathbb{C}$, with an $n\times n$ operator matrix $\mathcal{A}$ in $\mathcal{H}$ we \vspace{-1mm} have
\begin{equation}
\label{eq:ex-lin}
W_{\Psi,0}^{n}(\mathcal{L})=\overline{W^{n}(\mathcal{L})} =\overline{W^{n}(\mathcal{A})};
\vspace{-2mm}
\end{equation}
this shows that $W_{\Psi,0}^n(\mathcal{L})$ fails to enclose the spectrum of $\mathcal{L}$ whenever $\overline{W^n(\mathcal{A})}$ fails to do so.
The following example shows that, already in this simple case, in fact \emph{none} of the subsets $W^n_{\Psi,1}(\mathcal{L})\subseteq W_{\Psi,0}^n(\mathcal{L})\subseteq W^n_{\Psi,2}(\mathcal{L})$ of the pseudo block numerical range $W^n_\Psi(\mathcal{L})$, see \eqref{eq:pbnri}, is large enough to contain the approximate point spectrum $\sigma_{\rm ap}(\mathcal{L})$.
\begin{exple}
\label{ex:jordan}
Let $\mathcal{H}\!=\!\ell^2(\mathbb{N})\oplus\ell^2(\mathbb{N})$ and $\mathcal{L}(\lambda)\!=\!\mathcal{A}-\lambda I_\mathcal{H}$, $\lambda\in\mathbb{C}$, with
\begin{equation}
\mathcal{A} \!\vcentcolon=\!\left(\begin{array}{cc}
\!0 \!&\! \operatorname{diag}(m^2\!-\!1:m\!\in\!\mathbb{N})\!\! \\
\!0 \!&\! 0\!\!
\end{array}\right), \ \ \operatorname{dom} \mathcal{A}\!\vcentcolon=\!\ell^2(\mathbb{N})\,\oplus\,\operatorname{dom} \operatorname{diag}(m^2\!-\!1:m\!\in\!\mathbb{N}),
\end{equation}
where $\operatorname{diag}(m^2-1:m\!\in\!\mathbb{N})$ is the unbounded maximal multiplication operator in $\ell^2(\mathbb{N})$ with domain
\begin{align*}
&\operatorname{dom} \operatorname{diag}(m^2\!-\!1:m\!\in\!\mathbb{N}) := \big\{\{x_m\}_m \in \ell^2(\mathbb{N}): \{(m^2\!-\!1)x_m\}_m \in \ell^2(\mathbb{N}) \big\}.
\vspace{-2mm}
\end{align*}
Clearly, $W^2(\mathcal{L})=W^2(\mathcal{A})=\{0\}$. We will now show that
\begin{equation}
\{0\} \!= W_{\Psi,1}^2(\mathcal{L})\!=\!W_{\Psi,0}^2(\mathcal{L})\!=\!W_{\Psi,2}^2(\mathcal{L})
\ne W^2_\Psi(\mathcal{L})\!=\!\sigma_{\operatorname{ap}}(\mathcal{L})\!=\!\mathbb{C}.
\end{equation}
By the definition of $W_{\Psi,2}^2(\mathcal{L})$ and since $W_{\varepsilon,2}^2(\mathcal{L}) \!\subseteq\!\operatorname{B}_\varepsilon(0)$, $\varepsilon\!>\!0$, it follows that
$W_{\Psi,2}^2(\mathcal{L})=\{0\}$ which, together with \eqref{eq:pbnri}, proves the first three equalities.
To prove the two equalities on the right, and hence the claimed inequality, let $\lambda\!\in\!\mathbb{C}$ be arbitrary. If $\lambda\!=\!0$, then $\lambda \in W^{2}_\Psi(\mathcal{L})$ by \eqref{eq:nested.def.pseudo.qnr}. If $\lambda\!\neq\! 0$, we define the bounded operator matrices
\begin{equation}
\mathcal{B}_{k} \vcentcolon=\left(\begin{array}{cc}
-\operatorname{diag}(\frac{\lambda}{m}\delta_{mk}:m\!\in\!\mathbb{N}) & 0 \\[2mm]
-\operatorname{diag}(\frac{\lambda^2}{m^2}\delta_{mk}:m\!\in\!\mathbb{N}) & \operatorname{diag}(\frac{\lambda}{m}\delta_{mk}:m\!\in\!\mathbb{N})
\end{array}\right), \quad k \in\mathbb{N},
\end{equation}
where $\delta_{mk}$ denotes the Kronecker delta. Then $\norm{\mathcal{B}_{k}}\to0$ as $k\to\infty$ and a straightforward calculation shows \vspace{-1mm} that
\begin{equation}
(\mathcal{A}-\lambda I_\mathcal{H}) f_k \!=\!\mathcal{B}_{k}
f_k , \quad f_k \!\vcentcolon=\!\frac{\widetilde f_k}{\|\widetilde f_k\|}\in\operatorname{dom}\mathcal{A}, \quad \widetilde f_k
\!=\! \binom{\frac{k(k+1)}{\lambda}e_{k}
}{e_{k}}, \quad k\in\mathbb{N}.
\vspace{-1mm}
\end{equation}
On the one hand, for arbitrary $\varepsilon>0$, this implies that there exists $N\in\mathbb{N}$ such that $\norm{\mathcal{B}_N}<\varepsilon$ and $0\in\sigma_{{\rm{p}}}(\mathcal{A}-\lambda I_{\mathcal{H}}-\mathcal{B}_N)=\sigma_{\operatorname{p}}(\mathcal{L}(\lambda)-\mathcal{B}_N)$, whence
\begin{equation}
\lambda\in\sigma_{\operatorname{p}}(\mathcal{L}-\mathcal{B}_N)\subseteq W^2(\mathcal{L}-\mathcal{B}_N)\subseteq W_\varepsilon^2(\mathcal{L})
\end{equation}
and thus $\lambda\in W_\Psi^2(\mathcal{L})$ by intersection over all $\varepsilon>0$. On the other hand, $\lambda\in\sigma_{\operatorname{ap}}(\mathcal{L})$ since the normalised sequence $\{f_k\}_{k}\subseteq\operatorname{dom}\mathcal{L}(\lambda)$ satisfies
\begin{equation}
\norm{(\mathcal{A}-\lambda) f_k }=\norm{\mathcal{B}_{k}f_k}\le\norm{\mathcal{B}_{k}}\to 0, \quad k \to\infty.
\end{equation}
\end{exple}
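The relations used in Example \ref{ex:jordan} can be confirmed numerically on finite truncations. The following Python sketch (illustrative only; the truncation size $N$ and the test point $\lambda$ are arbitrary choices) verifies $(\mathcal{A}-\lambda I_\mathcal{H})f_k=\mathcal{B}_k f_k$ and the decay $\norm{\mathcal{B}_k}\to0$.
\begin{verbatim}
# Finite truncation of the example: A = [[0, diag(m^2-1)], [0, 0]] with the
# perturbations B_k; check (A - lam) f_k = B_k f_k and ||B_k|| -> 0.
import numpy as np

N, lam = 40, 2.0 + 1.0j
m = np.arange(1, N + 1)
Z = np.zeros((N, N), dtype=complex)
A = np.block([[Z, np.diag(m**2 - 1).astype(complex)], [Z, Z]])

for k in (5, 10, 20):
    e_k = np.zeros(N, dtype=complex); e_k[k - 1] = 1.0
    Bk = np.block([[np.diag(-lam/m * (m == k)), Z],
                   [np.diag(-lam**2/m**2 * (m == k)), np.diag(lam/m * (m == k))]])
    f = np.concatenate([k*(k + 1)/lam * e_k, e_k])
    f /= np.linalg.norm(f)
    print(k, np.linalg.norm((A - lam*np.eye(2*N)) @ f - Bk @ f),  # ~ 0
          np.linalg.norm(Bk, 2))                                  # -> 0 in k
\end{verbatim}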
With one exception, we now focus on the most important case $n\!=\!2$, for which the notation
\begin{equation}
\label{eq:n=2}
\begin{aligned}
&\mathcal{L}(\lambda) \!\vcentcolon= \!\begin{pmatrix} A(\lambda) \!&\! B(\lambda) \\ C(\lambda) \!&\! D(\lambda) \end{pmatrix} \ \mbox{ in } \mathcal{H}=\mathcal{H}_1\oplus \mathcal{H}_2, \\
&\operatorname{dom} \mathcal{L}(\lambda) \!\vcentcolon=\! \big( \operatorname{dom} A(\lambda) \cap \operatorname{dom} C(\lambda) \big) \oplus \big( \operatorname{dom} B(\lambda) \cap \operatorname{dom} D(\lambda) \big),
\end{aligned}
\end{equation}
is more customary. We establish various inclusions between the (pseudo) quadratic numerical range $W^2_{(\Psi)}(\mathcal{L})$ and the (pseudo) numerical ranges of the diagonal operator functions $A$, $D$, as well as between $W^2_{(\Psi)}(\mathcal{L})$ and the (pseudo) numerical ranges of the Schur complements of $\mathcal{L}$.
\begin{prop}
\label{prop:op.mat.fam.num}
\begin{enumerate}
\item The quadratic numerical range and the pseudo quadratic numerical range satisfy
\begin{equation}
W^2(\mathcal{L})\subseteq W(\mathcal{L}), \quad W^2_\Psi(\mathcal{L})\subseteq W_\Psi(\mathcal{L}).
\end{equation}
\item Let $\Omega_1:=\{\lambda \in \Omega:\mathcal{D}_1(\lambda)=\operatorname{dom} A(\lambda)\}$ and suppose $\dim\mathcal{H}_2 >1$. Then
\[
W(A) \cap \Omega_1 \subseteq
W^2(\mathcal{L}), \quad W_\Psi(A) \cap \Omega_1 \subseteq W_{\Psi,2}^2(\mathcal{L}) \subseteq W_\Psi^2(\mathcal{L});
\]
if $\,\mathcal{D}_1(\lambda)\!=\!\operatorname{dom} A(\lambda)$ for all $\lambda\!\in\! W(A)$ or $\lambda\!\in\! W_\Psi(A)$, respectively, then
\begin{equation}
W(A)\subseteq W^2(\mathcal{L}), \quad W_\Psi(A) \subseteq W_{\Psi,2}^2(\mathcal{L}) \subseteq W_\Psi^2(\mathcal{L}).
\end{equation}
\item Let $\Omega_2\!:=\!\{\lambda \!\in\! \Omega:\mathcal{D}_2(\lambda)\!=\!\operatorname{dom} D(\lambda)\}$ and suppose $\dim\mathcal{H}_1>1$. Then
\[
W(D) \cap \Omega_2 \subseteq
W^2(\mathcal{L}), \quad W_\Psi(D) \cap \Omega_2 \subseteq W_{\Psi,2}^2(\mathcal{L}) \subseteq W_\Psi^2(\mathcal{L});
\]
if $\,\mathcal{D}_2(\lambda)\!=\!\operatorname{dom} D(\lambda)$ for all $\lambda\!\in\! W(D)$ or $\lambda\!\in\! W_\Psi(D)$, respectively, then
\begin{equation}
W(D)\subseteq W^2(\mathcal{L}), \quad W_\Psi(D) \subseteq W_{\Psi,2}^2(\mathcal{L}) \subseteq W_\Psi^2(\mathcal{L}).
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
The claims for the quadratic numerical range are consequences of \eqref{eq:qnr.equiv} and of the corresponding statements \cite[Prop.\ 3.2, 3.3 (i),(ii)]{Tretter-2009} for operator matrices. So it remains to prove the claims (i) and (ii) for the pseudo quadratic numerical range; the proof of claim (iii) is completely analogous.
(i) The inclusion for the quadratic numerical range in (i) app\-lied to $\mathcal{L}\!+\!\mathcal{B}$ with $\norm{\mathcal{B}}\!<\!\varepsilon$ yields $W^2_\varepsilon(\mathcal{L})\!\subseteq\! W_\varepsilon(\mathcal{L})$ for any $\varepsilon\!>\!0$. The claim for the pseu\-do quadratic numerical range follows if we take the intersection over all~$\varepsilon\!>\!0$.
(ii) Let $\lambda\!\in\! W_\varepsilon(A) \cap \Omega_1$ with $\varepsilon\!>\!0$ arbitrary. Then there exists a bounded operator $B_\varepsilon$ in $\mathcal{H}_1$ with $\norm{B_\varepsilon}\!<\!\varepsilon$ and $\lambda\!\in\! W(A+B_\varepsilon)$. Since $\operatorname{dom} (A(\lambda)+B_\varepsilon) = \operatorname{dom} A(\lambda) \subseteq\operatorname{dom} C(\lambda)$, the inclusion for the quadratic numerical range in (ii) applied to $\mathcal{L} + \operatorname{diag}(B_\varepsilon,0_{\mathcal{H}_2})$ shows that
\begin{equation}
\lambda\in W^2(\mathcal{L}+\operatorname{diag}(B_\varepsilon,0_{\mathcal{H}_2})) \subseteq W_{\varepsilon,2}^2(\mathcal{L}) \subseteq W^2_\varepsilon(\mathcal{L}).
\end{equation}
By intersecting over all $\varepsilon>0$, we obtain $\lambda\in W^2_{\Psi,2} (\mathcal{L}) \subseteq W_\Psi^2(\mathcal{L})$. The second claim is obvious from the first one since then $W(A) \subseteq \Omega_1$ or $W_\Psi(A) \subseteq \Omega_1$, respectively.
\end{proof}
Both the qualitative and the quantitative behaviour of operator matrices are closely linked to the properties of their so-called Schur complements, see e.g.\ \cite{Tretter-2009}; the same is true for operator matrix functions, see e.g.\ \cite{Tretter-2010} for the case of bounded operator values.
\begin{defi}
The Schur complements of the $2\times 2$ operator matrix family $\mathcal{L}=\set{\mathcal{L}(\lambda)}{\lambda\in\Omega}$ in $\mathcal{H}=\mathcal{H}_1\oplus \mathcal{H}_2$ as in \eqref{eq:n=2} are the \vspace{-1mm} families
\begin{alignat*}{2}
S_1(\lambda) & \vcentcolon= A(\lambda)-B(\lambda)D(\lambda)^{-1}C(\lambda), \quad && \lambda\in\rho(D), \\
S_2(\lambda) & \vcentcolon= D(\lambda)-C(\lambda)A(\lambda)^{-1}B(\lambda), \quad &&\lambda\in\rho(A),
\end{alignat*}
of linear operators in $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, with \vspace{-1mm} domains
\begin{alignat*}{2}
\operatorname{dom} S_1(\lambda) & \vcentcolon=\set{f\in\mathcal{D}_1(\lambda)}{D(\lambda)^{-1}C(\lambda)f\in\operatorname{dom} B(\lambda)}, \quad &&\lambda\in\rho(D), \\
\operatorname{dom} S_2(\lambda) & \vcentcolon=\set{f\in\mathcal{D}_2(\lambda)}{A(\lambda)^{-1}B(\lambda)f\in\operatorname{dom} C(\lambda)}, \quad && \lambda\in\rho(A).
\end{alignat*}
\end{defi}
The following inclusions between the numerical ranges and pseudo numerical ranges of the Schur complements $S_1$, $S_2$ and the quadratic numerical range and pseudo quadratic numerical range, respectively, of $\mathcal{L}$~hold.
\begin{prop}
\label{prop:schur.num.incl.qnr}
\label{prop:op.mat.fam.pseudo.num}
The numerical ranges and pseudo numerical ranges of the Schur complements satisfy
\begin{equation}
\label{eq:pseudo.schur.incl.pseudo.qnr}
W(S_1)\cup W(S_2)\subseteq W^2(\mathcal{L}), \quad W_\Psi(S_1)\cup W_\Psi(S_2) \subseteq W_{\Psi,2}^2(\mathcal{L}) \subseteq W_\Psi^2(\mathcal{L}).
\end{equation}
\end{prop}
\begin{proof}
The first claim follows from \eqref{eq:qnr.equiv} and the corresponding statement \cite[Thm.\ 2.5.8]{Tretter-2008} for unbounded operator matrices.
Using the first claim, the second claim can be proven in a similar way as the claim for the pseudo numerical range in Proposition \ref{prop:op.mat.fam.num} (ii).
\end{proof}
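For matrices, the interplay between the Schur complements and the quadratic numerical range behind Proposition \ref{prop:schur.num.incl.qnr} can be made tangible: for every eigenvalue $\lambda$ of an operator matrix $M$ with $\lambda\notin\sigma(D)$, the Schur determinant formula gives $\det S_1(\lambda)=0$, so $\lambda\in W(S_1)$, and the block components of an eigenvector produce a compression with eigenvalue $0$, so $\lambda\in W^2(\mathcal{L})$. A minimal Python sketch (purely illustrative; the random matrix and block sizes are arbitrary, and generically $\lambda\notin\sigma(D)$):
\begin{verbatim}
# For L(lam) = M - lam*I: each eigenvalue lam of M (outside sigma(D))
# satisfies det S1(lam) = 0 and 0 in W^2(L(lam)).
import numpy as np

rng = np.random.default_rng(1)
n1 = n2 = 3
M = rng.standard_normal((n1+n2,)*2) + 1j*rng.standard_normal((n1+n2,)*2)
A, B = M[:n1,:n1], M[:n1,n1:]
C, D = M[n1:,:n1], M[n1:,n1:]

def S1(lam):  # first Schur complement of M - lam*I, for lam in rho(D)
    return A - lam*np.eye(n1) - B @ np.linalg.solve(D - lam*np.eye(n2), C)

w, V = np.linalg.eig(M)
for lam, x in zip(w, V.T):
    f1, f2 = x[:n1]/np.linalg.norm(x[:n1]), x[n1:]/np.linalg.norm(x[n1:])
    K = np.array([[f1.conj()@(A - lam*np.eye(n1))@f1, f1.conj()@B@f2],
                  [f2.conj()@C@f1, f2.conj()@(D - lam*np.eye(n2))@f2]])
    print(abs(np.linalg.det(S1(lam))),         # ~ 0: lam in W(S1)
          np.abs(np.linalg.eigvals(K)).min())  # ~ 0: lam in W^2(L)
\end{verbatim}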
The following spectral enclosure properties of the block numerical range and pseudo block numerical range hold for operator matrix functions. They generalise results for the case of bounded operator values from \cite{Wagenhofer-PhD-2007}, see also \cite{Tretter-2010} for $n=2$, as well as the results for the operator function case, i.e.\ $n=1$, in Proposition \ref{thm:spec.incl.pseudo.num.ran}.
\begin{prop}
\label{prop:point.spec.incl.pseudo.qnr}
Let $\mathcal{L}$ be a family of operator matrices. Then
\begin{equation}
\sigma_{\operatorname{p}}(\mathcal{L})\subseteq W^{n}(\mathcal{L})\subseteq W_\Psi^{n}(\mathcal{L}).
\end{equation}
\end{prop}
\begin{proof}
The proof of the first inclusion is analogous to the bounded case, see \cite[Thm.\ 2.14]{Wagenhofer-PhD-2007} or \cite[Thm.\ 3.1]{Tretter-2010} for $n\!=\!2$; the second inclusion is obvious, see Remark \ref{rem:4.2}.
\end{proof}
\begin{thm}
\label{thm:spec.incl.pseudo.qnr}
Let $\mathcal{L}$ be a family of operator matrices in $\mathcal{H}=\mathcal{H}_1\oplus \dots \oplus \mathcal{H}_n$. For \vspace{-1.5mm} every~$\varepsilon\!>\!0$,
\begin{equation}
\label{eq:pseudo.spec.qnr}
\sigma_{{\rm ap},\varepsilon}(\mathcal{L}) \subseteq W_\varepsilon^n (\mathcal{L}),
\qquad
\norm{\mathcal{L}(\lambda)^{-1}}\le\frac{1}{\varepsilon}, \quad \lambda\in\rho(\mathcal{L})\setminus W_\varepsilon^n(\mathcal{L}),
\vspace{-1.5mm}
\end{equation}
and \vspace{-1mm} hence
\begin{equation}
\label{eq:app.incl.pseudo.qnr}
\sigma_{\operatorname{ap}}(\mathcal{L})\subseteq W_\Psi^n (\mathcal{L});
\end{equation}
if, for all $\lambda\in\Omega$, $\sigma(\mathcal{L}(\lambda))\subseteq\overline{W^n(\mathcal{L}(\lambda))}$, then
\begin{equation}
\sigma(\mathcal{L})\subseteq W^n_{\Psi,0}(\mathcal{L}) \subseteq W_\Psi^n (\mathcal{L}).
\end{equation}
\end{thm}
\begin{proof}
First let $\lambda \!\in\! \sigma_{{\rm ap},\varepsilon}(\mathcal{L})$. Then there exists $f_\varepsilon\!\in\!\operatorname{dom}\mathcal{L}(\lambda)$, $\norm{f_\varepsilon}
\!=\!1$, with $\norm{\mathcal{L}(\lambda) f_\varepsilon}\!<\!\varepsilon$.
The linear operator in $\mathcal{H}$ given \vspace{-1mm} by
\begin{equation}
\label{eq:app.sequ.pert}
\mathcal{B} f \vcentcolon=\begin{cases}
\mathcal{L}(\lambda) \mu f_\varepsilon& {\rm if}~f= \mu f_\varepsilon \in \operatorname{span} f_\varepsilon, \\
\ \ \ \ 0 & {\rm if}~f \perp f_\varepsilon,
\end{cases}
\end{equation}
is bounded with $\norm{\mathcal{B}}\!=\!\norm{\mathcal{L}(\lambda)f_\varepsilon}\!<\!\varepsilon$ and $(\mathcal{L}(\lambda)\!-\!\mathcal{B})f_\varepsilon\!=\!0$, i.e.\ $\lambda\!\in\!\sigma_{\operatorname{p}}(\mathcal{L}\!-\!\mathcal{B})$. By Proposition \ref{prop:point.spec.incl.pseudo.qnr} and since $\|\mathcal{B}\|\!<\!\varepsilon$, we conclude that
$\lambda\!\in\! W^n (\mathcal{L}-\mathcal{B})\!\subseteq\! W_\varepsilon^n(\mathcal{L})$, which proves the first claim.
The resolvent estimate in \eqref{eq:pseudo.spec.qnr} follows from the first claim and from the definition of $\sigma_{\rm{ap},\varepsilon}(\mathcal{L})$, cf.\ the proof of Proposition \ref{thm:spec.incl.pseudo.num.ran}.
Taking the intersection over all $\varepsilon>0$ in the first claim, we obtain that $\sigma_{\operatorname{ap}}(\mathcal{L})\subseteq W_\Psi^n(\mathcal{L})$.
Finally, the assumption that $\sigma(\mathcal{L}(\lambda))\!\subseteq\!\overline{W^n(\mathcal{L}(\lambda))}$ for all $\lambda\!\in\!\Omega$ implies~that $\sigma(\mathcal{L})\subseteq W^n_{\Psi,0}(\mathcal{L})$, see Definition \ref{eq:alt.def.psi0}. Now the second inclusion in the last claim follows from the inclusion $W_{\Psi,0}^n(\mathcal{L})\!\subseteq\! W_\Psi^n (\mathcal{L})$ by Proposition~\ref{prop:nested.def.pseudo.qnr}.
\end{proof}
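The perturbation $\mathcal{B}$ constructed in \eqref{eq:app.sequ.pert} is nothing but the rank-one operator $\mathcal{B}=\mathcal{L}(\lambda)f_\varepsilon\,(\,\cdot\,,f_\varepsilon)$. A minimal finite-dimensional Python sketch of this step (illustrative only; the matrix $T$, the point $\lambda$ and the vector $f$ are arbitrary choices):
\begin{verbatim}
# Rank-one perturbation from the proof: B g = (g, f) L(lam) f, so that
# ||B|| = ||L(lam) f|| and (L(lam) - B) f = 0.
import numpy as np

rng = np.random.default_rng(2)
n = 5
T = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
lam = 0.1 + 0.2j
L = T - lam*np.eye(n)                      # monic family L(lam) = T - lam
f = rng.standard_normal(n) + 1j*rng.standard_normal(n)
f /= np.linalg.norm(f)

B = np.outer(L @ f, f.conj())              # B g = (g, f) * (L f)
print(np.linalg.norm(B, 2) - np.linalg.norm(L @ f))  # 0: ||B|| = ||L f||
print(np.linalg.norm((L - B) @ f))                   # 0: (L - B) f = 0
\end{verbatim}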
\section{Spectral enclosures by pseudo numerical ranges of \\ Schur complements}
\label{sec:schur.app.encl}
In this section we establish a new enclosure of the approximate point spectrum of an operator matrix family $\mathcal{L}$ by means of the pseudo numerical ranges of the associated Schur complements and hence, by Proposition \ref{prop:op.mat.fam.pseudo.num}, in $W^2_{\Psi,2} (\mathcal{L})$ and in the pseudo quadratic numerical range $W_\Psi^2(\mathcal{L})$. Compared to earlier work, we no longer need restrictive dominance assumptions.
\begin{thm}
\label{thm:mat.spec.incl.schur.app}
Let $\mathcal{L}$ be a family of operator matrices as in \eqref{eq:n=2}. If $\lambda\in\sigma_{\operatorname{ap}}(\mathcal{L})\setminus(\sigma(A)\cup\sigma(D))$ is such that one of the conditions
\begin{enumerate}
\item $C(\lambda)$ is $A(\lambda)$-bounded and $B(\lambda)$ is $D(\lambda)$-bounded;
\item $A(\lambda)$ is $C(\lambda)$-bounded, $D(\lambda)$ is $B(\lambda)$-bounded
and both $C(\lambda)$ and $B(\lambda)$ are boundedly invertible;
\end{enumerate}
is satisfied, then $\lambda\in\sigma_{\operatorname{ap}}(S_1)\cup\sigma_{\operatorname{ap}}(S_2)$.
If for all $\lambda\in\rho(A)\cap\rho(D)$ one of the conditions {\rm (i)} or {\rm (ii)} is satisfied, then
\begin{equation}
\label{eq:mat.fam.schur.app.incl}
\begin{aligned}
\sigma_{\operatorname{ap}}(\mathcal{L})\setminus(\sigma(A)\cup\sigma(D)) &\subseteq\sigma_{\operatorname{ap}}(S_1)\cup\sigma_{\operatorname{ap}}(S_2) \\
&\subseteq W_\Psi(S_1)\cup W_\Psi(S_2) \subseteq W^2_{\Psi,2} (\mathcal{L}) \subseteq W^2_\Psi (\mathcal{L}).
\end{aligned}
\end{equation}
\end{thm}
\begin{proof}
Let $\lambda\in\sigma_{\operatorname{ap}}(\mathcal{L})$. Then there exists a sequence $\{(u_n,v_n)\}_n\subseteq\operatorname{dom}\mathcal{L}(\lambda)$ with $\norm{u_n}^2+\norm{v_n}^2=1$, $n\in\mathbb{N}$,
\vspace{-1mm} and
\begin{alignat}{3}
\label{eq:mat.app.seq.1}
A(\lambda)u_n+B(\lambda)v_n & =\vcentcolon h_n & ~ \to ~0, \quad n & \to\infty, \\
\label{eq:mat.app.seq.2}
C(\lambda)u_n+D(\lambda)v_n & =\vcentcolon k_n & ~ \to ~0, \quad n & \to\infty.
\end{alignat}
The normalisation implies that $\liminf_{n\to\infty}\norm{u_n}\!>\!0$ or $\liminf_{n\to\infty}\norm{v_n}\!>\!0$. Suppose that $\liminf_{n\to\infty}\norm{u_n}\!>\!0$; passing to a subsequence, we may assume $\inf_{n\in\mathbb{N}}\norm{u_n}\!>\!0$. We show that, if $\lambda \in \rho(D)$, then $\lambda\!\in\!\sigma_{\operatorname{ap}}(S_1)$; if $\liminf_{n\to\infty}\norm{v_n}\!>\!0$, an analogous proof yields that, if $\lambda \in \rho(A)$, then $\lambda\!\in\!\sigma_{\operatorname{ap}}(S_2)$.
First we assume that $\lambda$ satisfies (i). Since $\lambda\in\rho(D)$, \eqref{eq:mat.app.seq.2} implies~that
\begin{equation}
v_n=D(\lambda)^{-1}k_n-D(\lambda)^{-1}C(\lambda)u_n, \quad n\in\mathbb{N}.
\end{equation}
Inserting this into \eqref{eq:mat.app.seq.1} and using $\operatorname{dom} D(\lambda)\subseteq\operatorname{dom} B(\lambda)$, we conclude that
\begin{equation}
\label{eq:mat.schur.app1}
S_1(\lambda)u_n+B(\lambda)D(\lambda)^{-1}k_n=h_n ~ \to~ 0, \quad n\to\infty.
\end{equation}
Due to (i), the operator $B(\lambda)D(\lambda)^{-1}$ is bounded and hence $B(\lambda)D(\lambda)^{-1}k_n\to0$, $n\to\infty$.
Then \eqref{eq:mat.schur.app1} yields that $S_1(\lambda)u_n\to0$, $n\to\infty$. Because $\inf_{n\in\mathbb{N}}\norm{u_n}>0$,
we can \vspace{-2mm} set
\begin{equation}
f_n\vcentcolon= \frac{u_n}{\norm{u_n}}\in\mathcal{D}_1(\lambda)=\operatorname{dom} S_1(\lambda), \quad n\in\mathbb{N},
\end{equation}
and obtain that $S_1(\lambda)f_n\to0$ for $n\to\infty$, which proves $\lambda\in\sigma_{\operatorname{ap}}(S_1)$.
Now assume that $\lambda$ satisfies (ii). Since $C(\lambda)$ is boundedly invertible, \eqref{eq:mat.app.seq.2} shows~that
\begin{equation}
\label{eq:mat.schur.app3}
u_n=C(\lambda)^{-1}k_n-C(\lambda)^{-1}D(\lambda)v_n =\vcentcolon C(\lambda)^{-1}k_n-w_n, \quad n\in\mathbb{N},
\end{equation}
where $w_n\vcentcolon= C(\lambda)^{-1}D(\lambda)v_n\in\operatorname{dom} S_1(\lambda)$ for $n\in\mathbb{N}$ since
\begin{equation}
w_n\in\mathcal{D}_1(\lambda)=\operatorname{dom} C(\lambda), \quad D(\lambda)^{-1}C(\lambda)w_n=v_n\in\mathcal{D}_2(\lambda)=\operatorname{dom} B(\lambda).
\end{equation}
Inserting \eqref{eq:mat.schur.app3} into \eqref{eq:mat.app.seq.1} and using $\operatorname{dom} C(\lambda)\subseteq\operatorname{dom} A(\lambda)$, we obtain that
\begin{equation}
\label{eq:mat.schur.app2}
A(\lambda)C(\lambda)^{-1}k_n-S_1(\lambda)w_n=h_n ~ \to ~0, \quad n\to\infty.
\end{equation}
Since $C(\lambda)^{-1}$ is bounded, it follows that $C(\lambda)^{-1}k_n\!\to\!0$, $n\!\to\!\infty$. Thus $\inf_{n\in\mathbb{N}}\norm{u_n}>0$ and \eqref{eq:mat.schur.app3} show that, without loss of generality, we can assume that $\inf_{n\in\mathbb{N}}\norm{w_n}>0$. \vspace{-1mm} Set
\begin{equation}
g_n\vcentcolon=\frac{w_n}{\norm{w_n}}\in\operatorname{dom} S_1(\lambda), \quad n\in\mathbb{N}.
\end{equation}
By (ii) $A(\lambda)C(\lambda)^{-1}$ is bounded and so $A(\lambda)C(\lambda)^{-1}k_n\!\to\! 0$, $n\!\to\!\infty$.
Now~\eqref{eq:mat.schur.app2} yields $S_1(\lambda)w_n\!\to\!0$ and thus $S_1(\lambda)g_n
\!\to\!0$, $n\!\to\!\infty$, which proves~$\lambda\!\in\!\sigma_{\operatorname{ap}}(S_1)$.
Finally, the first inclusion in \eqref{eq:mat.fam.schur.app.incl} is obvious from what was already shown; the second inclusion in \eqref{eq:mat.fam.schur.app.incl} follows from Proposition \ref{thm:spec.incl.pseudo.num.ran} and the last two inclusions from Proposition \ref{prop:schur.num.incl.qnr}.
\end{proof}
\begin{rem}
\label{rem:nr.qnr.incl}
If, under the assumptions of Theorem \ref{thm:mat.spec.incl.schur.app}, the Schur complements $S_1$ and $S_2$
satisfy the assumptions of Theorem \ref{thm:markus.B} or \ref{thm:markus.A}
on every connected component of $\rho(D)$ and $\rho(A)$, respectively, then
\begin{equation}
\sigma_{\operatorname{ap}}(\mathcal{L})\setminus(\sigma(A)\cup\sigma(D))\subseteq\overline{W(S_1)}\cup\overline{W(S_2)}\subseteq\overline{W^2(\mathcal{L})},
\end{equation}
see Proposition \ref{prop:schur.num.incl.qnr} for the second inclusion.
\end{rem}
For operator matrix families $\mathcal{L}$ with off-diagonal entries that are symmetric or anti-symmetric to each other,
we now establish conditions ensuring that the approximate point spectrum of $\mathcal{L}$ is con\-tained in the union of the approximate point spectrum of one Schur complement and the pseudo numerical range of the corresponding diagonal entry, i.e.\ $S_1$ and $D$ or $S_2$ and~$A$.
\pagebreak
\begin{thm}
\label{thm:spec.incl.def.indef}
Let $\mathcal{L}$ be an operator matrix family as in \eqref{eq:n=2}.
\begin{enumerate}
\item If $\,\lambda\!\in\!\sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(D)$ is such that $C(\lambda)\!\subseteq\! \pm B(\lambda)^*\!$, $A(\lambda)$ is accretive, $\mp D(\lambda)$ is sectorial with vertex $0$ and $B(\lambda)$ is $D(\lambda)$-bounded, then $\lambda\!\in\!\sigma_{\operatorname{ap}}(S_1)\cup W_\Psi(D)$. If these conditions hold for all $\lambda\!\in\!\rho(D)$, then
\begin{equation}
\label{eq:BB*.def.indef.incl}
\sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(D) \!\subseteq\! \sigma_{\operatorname{ap}}(S_1)\!\cup\! W_\Psi(D) \!\subseteq\! W_\Psi(S_1)\cup W_\Psi(D);
\end{equation}
if $\dim \mathcal{H}_1 > 1$, \vspace{-1mm} then
\begin{equation}
\label{eq:BB*.def.indef.incl.pseudo.qnr}
\sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(D) \subseteq W_{\Psi,2}^2(\mathcal{L})\!\subseteq\! W_\Psi^2(\mathcal{L}).
\end{equation}
\item If $\lambda\!\in\!\sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(A)$ is such that $C(\lambda)\!\subseteq\! \pm B(\lambda)^*\!$, $A(\lambda)$ is sectorial with vertex $0$, $\mp D(\lambda)$ accretive and $C(\lambda)$ is $A(\lambda)$-bounded, then $\lambda\!\in\!\sigma_{\operatorname{ap}}(S_2)\cup W_\Psi(A)$. If these conditions hold for all $\lambda\!\in\!\rho(A)$, then
\begin{equation}
\hspace{9mm} \sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(A) \!\subseteq\! \sigma_{\operatorname{ap}}(S_2)\!\cup\! W_\Psi(A) \!\subseteq\! W_\Psi(S_2)\cup W_\Psi(A);
\end{equation}
if $\dim \mathcal{H}_2 > 1$, \vspace{-1mm} then
\[
\sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(A) \subseteq W_{\Psi,2}^2(\mathcal{L})\!\subseteq\! W_\Psi^2(\mathcal{L}).
\]
\end{enumerate}
\end{thm}
Note that here we do not assume that the entries of $\mathcal{L}$ are holomorphic. In the next section Theorem~\ref{thm:spec.incl.def.indef}
will be applied with $B(\lambda) = \operatorname{e}^{\i \omega(\lambda)} B$ and $C(\lambda) = \operatorname{e}^{-\i \omega(\lambda)} C$, where $C \subseteq B^*$ are constant and $\omega$ is real-valued, see the proof of Theorem~\ref{thm:spec.incl.BB*}.
The following corollary is immediate from Theorem {\rm \ref{thm:spec.incl.def.indef}} due to Proposition~\ref{prop:op.mat.fam.num} and Proposition \ref{prop:op.mat.fam.pseudo.num}.
\begin{cor}
\label{cor:spec.incl.def.indef}
Under the assumptions of Theorem {\rm \ref{thm:spec.incl.def.indef}}, if in {\rm (i)} additionally $\sigma(D) \! \subseteq
\! W_\Psi(D)$, then
\[
\sigma_{\operatorname{ap}}(\mathcal{L}) \!\subseteq\! \sigma_{\operatorname{ap}}(S_1)\!\cup\! W_\Psi(D) \!\subseteq\! W_\Psi(S_1)\cup W_\Psi(D)
\!\subseteq\! W_{\Psi,2}^2(\mathcal{L})\!\subseteq\! W_\Psi^2(\mathcal{L}),
\]
and if in {\rm (ii)} additionally $\sigma(A) \! \subseteq
\! W_\Psi(A)$, then
\[
\sigma_{\operatorname{ap}}(\mathcal{L}) \!\subseteq\! \sigma_{\operatorname{ap}}(S_2)\!\cup\! W_\Psi(A) \!\subseteq\! W_\Psi(S_2)\cup W_\Psi(A)
\!\subseteq\! W_{\Psi,2}^2(\mathcal{L})\!\subseteq\! W_\Psi^2(\mathcal{L}).
\]
\end{cor}
\begin{proof}[Proof of Theorem {\rm \ref{thm:spec.incl.def.indef}}.]
We only prove (i); the proof of (ii) is analogous. Let $\lambda\!\in\!\sigma_{\operatorname{ap}}(\mathcal{L})\!\setminus\!\sigma(D)$. In the same way as at the beginning of the proof of Theorem \ref{thm:mat.spec.incl.schur.app} we conclude that if $\liminf_{n\to\infty}\norm{u_n}\!>\!0$, then $\lambda \!\in\! \sigma_{\operatorname{ap}}(S_1)$. It remains to be shown that in the case $\liminf_{n\to\infty}\norm{v_n}\!>\!0$, without loss of generality $\inf_{n\in\mathbb{N}}\norm{v_n}\!>\!0$, it follows that $\lambda \!\in\! W_\Psi(D)$.
Taking the scalar product with $u_n$ in \eqref{eq:mat.app.seq.1} and with $v_n$ in \eqref{eq:mat.app.seq.2}, respectively, we conclude that
\begin{alignat}{3}
\label{eq:pm.1} (A(\lambda)u_n,u_n) && +(B(\lambda)v_n,u_n) & =(h_n,u_n), \quad && n\in\mathbb{N}, \\
\label{eq:pm.2} \pm (u_n,B(\lambda)v_n) && +(D(\lambda)v_n,v_n) & =(k_n,v_n), \quad && n\in\mathbb{N}.
\end{alignat}
By subtracting from \eqref{eq:pm.1}, or adding to \eqref{eq:pm.1}, the complex conjugate of \eqref{eq:pm.2}, we deduce that
\begin{equation}
(A(\lambda)u_n,u_n) \mp \overline{(D(\lambda)v_n,v_n)}=(h_n,u_n) \mp \overline{(k_n,v_n)}\to0, \quad n\to\infty.
\end{equation}
Taking real parts and using the accretivity of $A(\lambda)$ and $\mp D(\lambda)$, we obtain
\begin{equation}
0\le\operatorname{Re}(\mp D(\lambda)v_n,v_n)\le\operatorname{Re}(A(\lambda)u_n,u_n)\mp\operatorname{Re}(D(\lambda)v_n,v_n)\to 0, \quad n\to\infty.
\end{equation}
Since $\mp D(\lambda)$ is sectorial with vertex $0$ by assumption, this implies that $(\mp D(\lambda)v_n,v_n)\to0$ and hence $(D(\lambda)v_n,v_n)\to 0$, $n\to\infty$; as $\inf_{n\in\mathbb{N}}\norm{v_n}\!>\!0$, the normalised vectors $v_n/\norm{v_n}$ yield $0\in\overline{W(D(\lambda))}$, which proves that $\lambda\!\in\! W_\Psi(D)$ by Proposition \ref{prop:pseudo.num}.
Finally, the first inclusion in \eqref{eq:BB*.def.indef.incl} is obvious from what was already proved;
the second inclusion in \eqref{eq:BB*.def.indef.incl} follows from Proposition \ref{thm:spec.incl.pseudo.num.ran}.
The last claim in \eqref{eq:BB*.def.indef.incl.pseudo.qnr} is then a consequence of Propositions \ref{prop:op.mat.fam.num} (iii) and \ref{prop:schur.num.incl.qnr}.
\end{proof}
\begin{rem}
\begin{enumerate}
\item Sufficient conditions for the inclusions $\sigma(A)\! \subseteq
\! W_\Psi(A)$ or $\sigma(D)\! \subseteq
\! W_\Psi(D)$, respectively, may be found e.g.\ in Theorem \ref{thm:pseudo.dense.hol.fam} or Pro\-po\-sition~\ref{thm:spec.incl.pseudo.num.ran}.
\item An analogue of Remark \ref{rem:nr.qnr.incl} also holds for Theorem \ref{thm:spec.incl.def.indef}; the details of all possible combinations of assumptions and corresponding inclusions are left to the reader.
\end{enumerate}
\end{rem}
\section{Application to structured operator matrices}
\label{sec:BB*}
In this section, we apply the results of the previous section to prove new spectral enclosures and resolvent estimates for non-selfadjoint operator matrix functions exhibiting a certain dichotomy.
More precisely, we consider a linear monic family $\mathcal{L}(\lambda)=\mathcal{A}-\lambda I_\mathcal{H}$, $\lambda\in\mathbb{C}$, with a densely defined operator matrix
\begin{equation}
\label{eq:op.mat}
\mathcal{A}\!=\!\left(\begin{array}{cc}
A & B \\
C & D
\end{array}\right), \quad \operatorname{dom} \mathcal{A}\!=\! \big( \operatorname{dom} A \cap \operatorname{dom} C \big) \!\oplus\! \big( \operatorname{dom} B
\cap \operatorname{dom} D \big)
\end{equation}
with $C\!\subseteq
\! B^*$ in $\mathcal{H}\!=\!\mathcal{H}_1\oplus\mathcal{H}_2$. We assume that the entries of $\mathcal{A}$
are densely defined closable linear operators acting between the respective spaces $\mathcal{H}_1$ and/or $\mathcal{H}_2$,
and that $A$, $-D$ are accretive or even sectorial with vertex $0$.
This means that their numerical ranges lie in
closed sectors $\Sigma_\omega$ with semi-axis $\mathbb{R}_+$ and semi-angle $\omega = \pi/2 $ or $\omega \in[0,\pi/2)$,
respectively, \vspace{-1mm} given by
\begin{equation}
\Sigma_\omega\vcentcolon=\set{z\in\mathbb{C}}{\abs{\arg z}\le\omega}, \quad \omega\in[0,\pi/2];
\end{equation}
here $\arg:\mathbb{C}\to(-\pi,\pi]$ is the argument of a complex number with $\arg0=0$.
The next theorem no longer requires bounds on the dominance orders among the entries in the columns of $\mathcal{A}$, in contrast to earlier results in \cite[Thm.\ 5.2]{Tretter-2009} where the relative bounds had to be $0$.
\begin{thm}
\label{thm:spec.incl.BB*}
Let $\mathcal{A}$ be an operator matrix as in \eqref{eq:op.mat} with $C\subseteq B^*$. Assume that there exist $\alpha$, $\delta \in \mathbb{R}
$ and semi-angles $\varphi,\psi\in[0,\pi/2]$ with
\begin{equation}
\label{eq:sec.diag.entries}
\operatorname{Re} W(D)\le\delta<0<\alpha\le\operatorname{Re} W(A), \quad W(A)\subseteq\Sigma_\varphi, \quad W(D)\subseteq-\Sigma_\psi.
\end{equation}
Suppose further that one of the following holds:
\begin{enumerate}
\item $A$, $-D$ are m-accretive, $C$ is $A$-bounded, $B$ is $D$-bounded,
\item $A$, $-D$ are m-accretive, $A$ is $C$-bounded, $D$ is $B$-bounded and $B$, $C$ are boundedly \vspace{0.9mm} invertible,
\item \!\!$-D$ is m-sectorial with vertex $0$, i.e.\ $\psi\!<\!\pi/2$, and $B$ is~$D$-bounded,
\item $A$ is m-sectorial with vertex $0$, i.e.\ $\varphi\!<\!\pi/2$, and $C$ is $A$-bounded.
\end{enumerate}
Then, with $\tau\vcentcolon=\max\{\varphi,\psi\}$,
\begin{equation}
\label{eq:app.incl.sigma}
\sigma_{\operatorname{ap}}(\mathcal{A})\subseteq(-\Sigma_\tau\cup\Sigma_\tau)\cap\set{z\in\mathbb{C}}{\operatorname{Re} z\notin(\delta,\alpha)}=\vcentcolon\Sigma;
\end{equation}
if, in addition, $\rho(\mathcal{A})\cap\Sigma^{\operatorname{c}}\neq\emptyset$, then $\sigma(\mathcal{A})\subseteq\Sigma$.
\vspace{-5mm}
\end{thm}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.68\textwidth]{sectors-new-1.pdf}
\caption{\small The set $\Sigma$ (green) enclosing $\sigma_{\operatorname{ap}} (\mathcal{A})$, see \eqref{eq:app.incl.sigma}; inside it the sets $\Sigma_A\!\vcentcolon=\! \Sigma_\varphi \!\setminus\! S$ (bounded by red lines) enclosing $W(A)$ (red, dashed) and $\Sigma_D\!\vcentcolon=\! -\Sigma_\psi \!\setminus\! S$ (bounded by blue lines) enclosing
$W(D)$ (blue, dashed), separated by $S\!\vcentcolon=\! \{z\!\in\!\mathbb{C}:\operatorname{Re} z\!\in\! (\delta,\alpha)\}$,
\vspace{-2mm}see~\eqref{eq:sec.diag.entries}.}
\label{fig:sec}
\end{figure}
The proof of Theorem \ref{thm:spec.incl.BB*} relies on Theorems \ref{thm:mat.spec.incl.schur.app} and \ref{thm:spec.incl.def.indef}, and on the following enclosures for the pseu\-do numerical ranges of the Schur complements.
\begin{lem}
\label{lem:schur.compl.BB*}
Let $\mathcal{A}$ be as in \eqref{eq:op.mat} with $C\!\subseteq\!B^*$ and let $\lambda\in\mathbb{C}$.
\begin{enumerate}
\item Suppose $A$,\,$-D$ are uniformly accretive,
\begin{equation}
\label{eq:A.D.unif.accr}
\operatorname{Re} W(D)\le\delta<0<\alpha\le\operatorname{Re} W(A).
\end{equation}
If $\,\operatorname{Re} \lambda \in (\delta,\alpha)$, then
\begin{equation}
\begin{aligned}
\lambda\in\rho(D) & \implies \operatorname{Re}\overline{W(S_1(\lambda))}\ge\alpha-\operatorname{Re} \lambda>0, \\
\lambda\in\rho(A) & \implies \operatorname{Re}\overline{W(S_2(\lambda))}\le \delta-\operatorname{Re} \lambda<0.
\end{aligned}
\end{equation}
\item Suppose $A$,\,$-D$ are sectorial with vertex $0$,
\begin{equation}
W(A)\subseteq\Sigma_\varphi, \qquad W(D)\subseteq -\Sigma_\psi
\end{equation}
with $\varphi,\psi\!\in\![0,\pi/2)$ and let $\tau\!\vcentcolon=\!\max\{\varphi,\psi\}$. If $\,\arg\lambda\!\in\!(\tau,\pi-\tau)$, then
\begin{equation}
\hspace{6mm} \begin{aligned}
\lambda\in\rho(D) & \ \implies \ \arg(\overline{W(S_1(\lambda))}+\lambda) \in [-\arg\lambda,\tau], \\
\lambda\in\rho(A) & \ \implies \ \arg(\overline{W(S_2(\lambda))}+\lambda) \in (\!-\!\pi,-\arg\lambda]\cup[\pi-\tau,\pi];
\end{aligned}
\end{equation}
if $\,\arg\lambda\!\in\!(-\pi+\tau,-\tau)$, then
\begin{equation}
\hspace{8mm} \begin{aligned}
\lambda\in\rho(D) & \ \implies \ \arg(\overline{W(S_1(\lambda))}+\lambda) \in [-\tau,-\arg\lambda], \\
\lambda\in\rho(A) & \ \implies \ \arg(\overline{W(S_2(\lambda))}+\lambda) \in (\!-\!\pi,-\pi+\tau]\cup[-\arg\lambda,\pi].
\end{aligned}
\end{equation}
\end{enumerate}
\end{lem}
\begin{proof}
We show the claims for $S_1$; the proofs for $S_2$ are analogous. It is easy to see that it suffices to prove the claimed non-strict inequalities for $W(S_1(\lambda))$. Let $\lambda\in\rho(D)$, $f\in\operatorname{dom} S_1(\lambda)\subseteq\operatorname{dom} A\cap\operatorname{dom} B^*$ with $\norm{f}=1$, and set $g\vcentcolon=(D-\lambda)^{-1}B^*f$. \vspace{-1mm} Then
\begin{equation}
\label{eq:lem.Schur.compl.BB*}
(S_1(\lambda )f,f)=(Af,f)-\lambda -\overline{(Dg,g)}+\overline{\lambda} \norm{g}^2.
\end{equation}
(i) If $\operatorname{Re}\lambda \,\in (\delta,\alpha)$, then \eqref{eq:lem.Schur.compl.BB*} and \eqref{eq:A.D.unif.accr} show that
\begin{equation}
\operatorname{Re}(S_1(\lambda)f,f)\ge\alpha-\operatorname{Re}\lambda+(-\delta+\operatorname{Re}\lambda)\norm{g}^2\ge\alpha-\operatorname{Re} \lambda>0.
\end{equation}
(ii) We consider $\arg\lambda\!\in\!(\tau,\pi\!-\!\tau)$, the case $\arg\lambda\!\in\!(-\pi\!+\!\tau,-\tau)$ can be shown analogously. By assumption, $\lvert\arg (Af,f)\rvert\!\le\!\varphi\!\le\!\tau$, $\lvert\arg\overline{(-Dg,g)}\rvert\!\le\!\psi\!\le\!\tau$. Together with $\arg(\overline{\lambda}\norm{g}^2)=-\arg\lambda \!\in\!(-\pi\!+\!\tau,-\tau)$, it follows from \eqref{eq:lem.Schur.compl.BB*} that
\begin{equation*}
\arg \big( (S_1(\lambda)f,f)\!+\!\lambda\big)
\!=\! \arg\big((Af,f)\!+\!\overline{(-Dg,g)}\!+\!\overline{\lambda}\norm{g}^2\big)\in[-\arg\lambda,\tau].
\qedhere
\end{equation*}
\end{proof}
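The key identity \eqref{eq:lem.Schur.compl.BB*} is easily checked numerically. The following Python sketch (illustrative only; the random matrices, the point $\lambda$ and the vector $f$ are arbitrary, and $C=B^*$ is built in) confirms it up to rounding errors:
\begin{verbatim}
# Check: (S1(lam) f, f) = (Af,f) - lam - conj((Dg,g)) + conj(lam)*||g||^2
# with g = (D - lam)^{-1} B^* f.
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 3, 4
A = rng.standard_normal((n1, n1)) + 1j*rng.standard_normal((n1, n1))
B = rng.standard_normal((n1, n2)) + 1j*rng.standard_normal((n1, n2))
D = rng.standard_normal((n2, n2)) + 1j*rng.standard_normal((n2, n2))
lam = 1.5 - 0.7j
f = rng.standard_normal(n1) + 1j*rng.standard_normal(n1)
f /= np.linalg.norm(f)

g = np.linalg.solve(D - lam*np.eye(n2), B.conj().T @ f)
lhs = f.conj() @ ((A - lam*np.eye(n1)) @ f - B @ g)     # (S1(lam) f, f)
rhs = f.conj() @ A @ f - lam - np.conj(g.conj() @ D @ g) \
      + np.conj(lam) * np.linalg.norm(g)**2
print(abs(lhs - rhs))                                   # ~ 1e-15
\end{verbatim}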
\begin{proof}[Proof of Theorem \textnormal{\ref{thm:spec.incl.BB*}}]
First we use Lemma \ref{lem:schur.compl.BB*} to show that if $A$ or $-D$ are m-accretive, respectively, then
\begin{equation}
\label{eq:Schur.pseudo.nr.in.Sigma}
W_\Psi(S_2)\subseteq\Sigma \quad \text{or} \quad W_\Psi(S_1)\subseteq\Sigma.
\end{equation}
We prove the claim for $S_1$ by taking complements; the proof for $S_2$ is analogous. To this end, let $\lambda \in \Sigma^{\operatorname{c}} \subseteq \rho (D)$. Then $\operatorname{Re}\lambda\in(\delta,\alpha)$ or $\abs{\arg\lambda}\in(\tau,\pi-\tau)$; note that the latter case only occurs if both $A$ and $-D$ are sectorial with vertex $0$, i.e.\ if $\tau < \pi/2$. If $\operatorname{Re}\lambda\in(\delta,\alpha)$, Lemma \ref{lem:schur.compl.BB*} (i) implies $0\notin\overline{W(S_1(\lambda))}$, i.e.\ $\lambda\notin W_\Psi(S_1)$ by \eqref{eq:pseudo.num.id}. In the same way, if $\abs{\arg\lambda}\in(\tau,\pi-\tau)$, then $\lambda\notin W_\Psi(S_1)$ follows from Lemma \ref{lem:schur.compl.BB*} (ii); indeed, otherwise we would have $0\in\overline{W(S_1(\lambda))}$ and hence, e.g.\ if $\arg \lambda \in (\tau, \pi - \tau)$,
\begin{equation}
\arg (0 + \lambda) = \arg \lambda \in [-\arg \lambda, \tau] \cap (\tau, \pi - \tau) = \emptyset,
\end{equation}
and analogously for $\arg \lambda \in (-\pi + \tau,- \tau)$. This completes the proof of \eqref{eq:Schur.pseudo.nr.in.Sigma}.
We show that assumptions (i) or (iii) imply \eqref{eq:app.incl.sigma}; the proof when assumptions (ii) or (iv) hold is analogous.
Assume first that (i) holds and let $\lambda\in\sigma_{\operatorname{ap}}(\mathcal{A})$. If $\lambda\in\sigma(A)\cup\sigma(D)\subseteq\Sigma$, there is nothing to show.
If $\lambda\notin\sigma(A)\cup\sigma(D)$, then Theorem \ref{thm:mat.spec.incl.schur.app} (i) shows that
$\lambda\in W_\Psi(S_1)\cup W_\Psi(S_2)$ and we conclude $\lambda\in \Sigma$ from \eqref{eq:Schur.pseudo.nr.in.Sigma}.
Now assume that (iii) is satisfied. Then $-D$ is m-sectorial with vertex $0$ and $\sigma(D) \subseteq \overline{W(D)}\subseteq \Sigma$. In order to prove \eqref{eq:app.incl.sigma}, we show $\sigma_{\operatorname{ap}} (\mathcal{A}) \cap \Sigma^{\operatorname{c}} = \emptyset$. To this end, it suffices to prove that
\begin{equation}
\label{eq:incl.thm.def.indef}
\sigma_{\operatorname{ap}} (\mathcal{A}) \cap \Sigma^{\operatorname{c}}\subseteq W_\Psi (S_1) \cup W_\Psi (D-\cdot I_{\mathcal{H}_2});
\end{equation}
here, in the sequel, we write $D-\cdot I_{\mathcal{H}_2}$ for the operator family $D - \lambda I_{\mathcal{H}_2}$, $\lambda\in\mathbb{C}$. Indeed, if \eqref{eq:incl.thm.def.indef} holds, then $W_\Psi(D-\cdot I_{\mathcal{H}_2}) = \overline{W(D)} \subseteq \Sigma$ and \eqref{eq:Schur.pseudo.nr.in.Sigma} yield that $\sigma_{\operatorname{ap}} (\mathcal{A}) \cap \Sigma^{\operatorname{c}} \subseteq \Sigma$ and hence the claim.
For the proof of \eqref{eq:incl.thm.def.indef}, we will use Theorem \ref{thm:spec.incl.def.indef} (i). To this end, for $\lambda \in \Sigma^{\operatorname{c}}$, we define a rotation angle
\begin{equation}
\omega(\lambda) \vcentcolon= \begin{cases}
0, & \operatorname{Re}\lambda \in (\delta, \alpha), \\
\operatorname{sgn} (\arg \lambda) \big|\frac\pi2 - |\arg\lambda|\big|,
& \operatorname{Re} \lambda \notin (\delta, \alpha) \wedge |\arg \lambda| \in(\tau, \pi-\tau);
\end{cases}
\end{equation}
note that the second case only occurs if $A$ is sectorial with vertex $0$, i.e.\ if $\tau < \pi/2$, and that then $\lambda \neq 0$ and $|\omega(\lambda)| \in (0,\pi/2-\tau)$.
Define a rotated operator matrix family $\widetilde \mathcal{L}$ by
\begin{equation}
\widetilde \mathcal{L}(\lambda) \!\vcentcolon=\! \operatorname{diag} \big(\!\operatorname{e}^{\i \omega(\lambda)}\mathcal{I}_{\mathcal{H}_1}, \operatorname{e}^{-\i \omega(\lambda)}\mathcal{I}_{\mathcal{H}_2}\!\big) (\mathcal{A}-\lambda \mathcal{I}_\mathcal{H}), \ \
\operatorname{dom} \widetilde \mathcal{L} (\lambda) \!\vcentcolon=\! \operatorname{dom} \mathcal{A}, \quad \lambda \!\in\!\Sigma^{\operatorname{c}}\!.
\vspace{-1mm}
\end{equation}%
Since, for fixed $\lambda \!\in\! \Sigma^{\operatorname{c}}$, the operator matrix $\operatorname{diag} (\operatorname{e}^{\i \omega(\lambda)}\mathcal{I}_{\mathcal{H}_1}, \operatorname{e}^{-\i \omega(\lambda)}\mathcal{I}_{\mathcal{H}_2})$ is bounded and boundedly invertible (even unitary), it is straightforward to show \vspace{-1mm} that
\begin{equation}
\lambda \in \sigma_{\operatorname{ap}} (\mathcal{A}) \, \iff \, 0 \in \sigma_{\operatorname{ap}} (\widetilde \mathcal{L} (\lambda)),
\end{equation}
which implies $\sigma_{\operatorname{ap}} (\widetilde \mathcal{L}) = \sigma_{\operatorname{ap}} (\mathcal{A}) \cap \Sigma^{\operatorname{c}}$. Moreover,
the angle $\omega(\lambda)$ is chosen such that $\operatorname{e}^{\i \omega(\lambda)}(A -\lambda I_{\mathcal{H}_1})$ is accretive,
$-\operatorname{e}^{-\i \omega(\lambda)}(D -\lambda I_{\mathcal{H}_2})$ is
sectorial with vertex~$0$ and $\operatorname{e}^{-\i \omega(\lambda)}C \!\subseteq\! \operatorname{e}^{\i \omega(\lambda)}B^*$ for every $\lambda \!\in\! \Sigma^{\operatorname{c}}$.
In fact, if $\operatorname{Re} \lambda \in (\delta, \alpha)$, this is obvious. If $\operatorname{Re} \lambda \notin (\delta, \alpha) $ and $|\arg \lambda| \in(\tau, \pi-\tau)$, then $\varphi < \pi /2$ and $|\omega(\lambda)| < \pi/2-\tau $ as mentioned above. From $\operatorname{Re} W(A) \ge \alpha >0$ and $W(A) \subseteq \Sigma_\varphi$, it thus follows that $\operatorname{e}^{\i \omega(\lambda)}A$ is uniformly accretive and sectorial with vertex~$0$
and, since $\operatorname{Re} (\operatorname{e}^{\i \omega(\lambda)}\lambda) \le 0$, the claim for $\operatorname{e}^{\i \omega(\lambda)}(A -\lambda I_{\mathcal{H}_1})$ holds.
The proof for $-\operatorname{e}^{-\i \omega(\lambda)}(D -\lambda I_{\mathcal{H}_2})$ is analogous.
Therefore $\widetilde \mathcal{L}$ satisfies the assumptions of Theorem \ref{thm:spec.incl.def.indef} (i) and, because $\sigma (\operatorname{e}^{-\i \omega}(D -\cdot I_{\mathcal{H}_2})) = \sigma (D) \cap \Sigma^{\operatorname{c}} = \emptyset$, \eqref{eq:BB*.def.indef.incl} therein yields that
\begin{equation}
\sigma_{\operatorname{ap}} (\mathcal{A})\cap \Sigma^{\operatorname{c}} = \sigma_{\operatorname{ap}} (\widetilde \mathcal{L}) \subseteq W_\Psi (\widetilde S_1) \cup W_\Psi (\operatorname{e}^{-\i \omega}(D -\cdot I_{\mathcal{H}_2})),
\end{equation}
where $\widetilde S_1$ is the first Schur complement of $\widetilde\mathcal{L}$. Now the claim \eqref{eq:incl.thm.def.indef} follows from the above inclusion and from the fact that, since $\operatorname{e}^{\i \omega(\lambda)}\!\ne\! 0$,
\begin{equation}
0 \!\in\! \overline{W(\widetilde S_1 (\lambda))}
\!\iff\,
0 \!\in\! \overline{W(\operatorname{e}^{\i \omega(\lambda)} S_1 (\lambda))} \!=\! \operatorname{e}^{\i \omega(\lambda)} \overline{W(S_1 (\lambda))}
\iff
0 \!\in\! \overline{W(S_1 (\lambda))}
\vspace{-2mm}
\end{equation}
for $\lambda \!\in\! \Sigma^{\operatorname{c}}\!$, and analogously for the family $\operatorname{e}^{-\i \omega}(D -\cdot I_{\mathcal{H}_2})$. This completes the proof that (i) and (iii) imply~\eqref{eq:app.incl.sigma}.
Finally, if $\rho(\mathcal{A})\cap\Sigma^{\operatorname{c}}\neq\emptyset$, then $\mathcal{A}$ is closed and $\sigma(\mathcal{A}) \!\subseteq\!\Sigma$ follows from $\sigma_{\operatorname{ap}}(\mathcal{A})\subseteq\Sigma$, see \eqref{eq:app.incl.sigma}, and from the stability of the Fredholm index, see \cite[Thm.\ IV.5.17]{Kato-1995}.
\end{proof}
In Proposition \ref{thm:full.spec.incl.BB*} below, we derive sufficient conditions for $\rho(\mathcal{A})\cap\Sigma^{\operatorname{c}}\neq\emptyset$ in Theorem \ref{thm:spec.incl.BB*} for diagonally dominant and off-diagonally dominant operator matrices. For the latter, we use a result of \cite{Cuenin-Tretter-2016}, while for the former we employ the following lemma, inspired by an estimate in \cite[Prob.\ V.3.31]{Kato-1995} for accretive operators.
\begin{lem}
\label{lem:sec.res.est}
Let the linear operator $T$ in $\mathcal{H}$ be m-sectorial with vertex $0$ or m-accretive, i.e.\ there exists
$\omega\!\in\!\left[0,\pi/2\right)$ or
$\omega = \pi/2$, respectively, with $\sigma(T)\!\subseteq\!\overline{W(T)}\!\subseteq\!\Sigma_\omega$.
\vspace{-1mm} Then
\begin{equation}
\norm{T(T\!-\!\lambda)^{-1}}\!\le\! \frac 1{m_T(\arg\lambda)}\!:=\!
\left\{\begin{array}{cl}
\displaystyle\!\!\frac{1}{\sin(\abs{\arg\lambda}\!-\!\omega)},\! & \!\abs{\arg\lambda}\!\in\!(\omega,\omega\!+\!\frac{\pi}{2}),
\\[3.5mm]
\!\!1, & \!\abs{\arg\lambda}\!\in\![\omega\!+\!\frac{\pi}{2},\pi],
\end{array}\right.
\!\lambda\!\notin\!\Sigma_\omega.
\end{equation}
\end{lem}
\begin{proof}
Let $\lambda\notin\Sigma_\omega$ and $\varepsilon\in(0,\abs{\lambda})$ be arbitrary. Then $\lambda\in\rho(T)$, $-\varepsilon\in\rho(T)$, $\lambda\neq-\varepsilon$ and we can write
\begin{align}
T(T-\lambda)^{-1}
& =(T+\varepsilon)(T+\varepsilon-(\lambda+\varepsilon))^{-1}-\varepsilon(T-\lambda)^{-1}, \\
& \label{eq:sec.res.est} =-(\lambda+\varepsilon)^{-1}\left((T+\varepsilon)^{-1}-(\lambda+\varepsilon)^{-1}\right)^{-1}-\varepsilon(T-\lambda)^{-1}. \qquad
\end{align}
Since $\varepsilon>0$, it is easy to see that $T+\varepsilon$ is m-accretive or m-sectorial with semi-angle $\omega$ and vertex $0$, and hence so is $(T+\varepsilon)^{-1}$, cf.\ \cite[Prob.\ V.3.31]{Kato-1995} for the m-accretive case. Thus, by \cite[Thm.\ V.3.2]{Kato-1995} and \eqref{eq:sec.res.est}, we can \vspace{-1mm} estimate
\begin{equation}
\norm{T(T-\lambda)^{-1}}\le\frac{\abs{\lambda+\varepsilon}^{-1}}{\operatorname{dist}\left((\lambda+\varepsilon)^{-1},\Sigma_\omega\right)}+\frac{\varepsilon}{\operatorname{dist}\left(\lambda,\Sigma_\omega\right)}.
\end{equation}
The claim now follows by taking the limit $\varepsilon\to0$ and using the \vspace{-1mm} estimate
\begin{equation}
\label{eq:kato.sec.num.dist.est}
\operatorname{dist}\left(\lambda^{-1},\Sigma_\omega\right)\ge\left\{\begin{array}{cl}
\displaystyle\frac{\sin(\abs{\arg\lambda}-\omega)}{\abs{\lambda}}, & \,\abs{\arg\lambda}\in\left(\omega,\omega+\frac{\pi}{2}\right),
\\[4mm]
\displaystyle\frac{1}{\abs{\lambda}}, & \,\abs{\arg\lambda}\in\left[\omega+\frac{\pi}{2},\pi\right],
\end{array}\right.
\vspace{-1mm}
\end{equation}
cf.\ \cite[Thm.\ 2.2]{Kato-1961-I}.
\end{proof}
\begin{rem}
The inequality in Lemma \ref{lem:sec.res.est} is optimal; equality is attained, e.g., for normal operators with spectrum on the boundary of $\Sigma_\omega$.
\end{rem}
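This sharpness can be observed numerically. The following Python sketch (illustrative only; the semi-angle $\omega$, the boundary grid and the test points are arbitrary choices) compares $\norm{T(T-\lambda)^{-1}}$, computed exactly for a normal diagonal $T$ with spectrum on the ray $\operatorname{e}^{\i\omega}\mathbb{R}_+$, with the bound $1/m_T(\arg\lambda)$:
\begin{verbatim}
# Resolvent bound of the Lemma for normal T with boundary spectrum:
# ||T (T - lam)^{-1}|| <= 1/m_T(arg lam), nearly attained on a fine grid.
import numpy as np

omega = np.pi/6
t = np.exp(1j*omega) * np.geomspace(1e-2, 1e2, 400)   # spectrum of T

def m_T(theta):
    a = abs(theta)
    return np.sin(a - omega) if a < omega + np.pi/2 else 1.0

for lam in (1j, 2j, -1.0 + 0.5j):       # points outside Sigma_omega
    lhs = np.max(np.abs(t/(t - lam)))   # exact norm for diagonal T
    print(lhs, 1.0/m_T(np.angle(lam)))  # lhs <= rhs, nearly equal here
\end{verbatim}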
\begin{prop}
\label{thm:full.spec.incl.BB*}
Suppose that, under the assumptions of Theorem {\rm \ref{thm:spec.incl.BB*}}, we strengthen assumptions {\rm (i)} and {\rm (ii)} to
\begin{enumerate}
\item[{\rm (i${'}$)}] $A$, $-D$ are m-sectorial with vertex $0$, i.e.\ $\varphi$, $\psi\!<\!\pi/2$ in \eqref{eq:sec.diag.entries},
$C$ is $A$-bounded~with relative bound $\delta_A$ and $B$ is $D$-bounded with relative bound $\delta_D$ such that
\begin{align}
\label{eq:BB*spec.incl.diag.cond}
\delta_A\delta_D & < \sin(\theta_{0}-\varphi)\sin(\theta_{0}+\psi) =: M_{\theta_0} \in (0,1]
\end{align}
where
\[
\theta_0:=
\begin{cases}
\max \big\{ \frac \pi 2 \!+\! \frac{\varphi-\psi}2, \tau \big\}, & \ \varphi \le \psi, \\
\,\min \big\{ \frac \pi 2 \!+\! \frac{\varphi-\psi}2, \pi\!-\!\tau \big\}, & \ \psi < \varphi;
\end{cases}
\]
\item[{\rm (ii${'}$)}] $A$, $-D$ are m-accretive,
$C\!=\!B^*$, $A$ is $C$-bounded with relative bound~$\delta_C$, $D$~is $B$-bounded with relative bound $\delta_B$ \vspace{-1mm}with
\[
\delta_B \delta_C < 1,
\]
$B$, $C$ are boundedly in\-vert\-ible, and the relative boundedness constants $a_C$, $a_B \!\ge\! 0$, $b_C$, $b_B \!\ge\! 0$ in
\begin{alignat*}{2}
\qquad &\|Ax\|^2\le a_C^2\|x\|^2+b_C^2\|Cx\|^2, \quad &&x\in \operatorname{dom} C, \\
\qquad &\|Dy\|^2\le a_B^2\|y\|^2+b_B^2\|By\|^2, \ &&y \in \operatorname{dom} B,
\end{alignat*}
satisfy
\[
\sqrt{a_C^2\|B^{-1}\|^2+b_C^2} \sqrt{a_B^2\|B^{-1}\|^2+b_B^2}<1.
\]
\end{enumerate}
Then $\rho(\mathcal{A})\cap\Sigma^{\operatorname{c}}\neq\emptyset$ and hence
\begin{equation}
\sigma(\mathcal{A})\subseteq(-\Sigma_\tau\cup\Sigma_\tau)\cap\set{z\in\mathbb{C}}{\operatorname{Re} z\notin(\delta,\alpha)} = \Sigma.
\end{equation}
\end{prop}
\begin{proof}
By Theorem \ref{thm:spec.incl.BB*}, it suffices to show $\rho(\mathcal{A})\cap\Sigma^{\operatorname{c}}\neq\emptyset$.
Suppose that (i${'}$) holds and let $\lambda\!=\!r\operatorname{e}^{\i\theta}$ with $r\!>\!0$, $\theta \!\in\! (\tau,\pi\!-\!\tau)$ to be~cho\-sen later.
Then $\lambda \!\in\! \rho(A) \cap \rho(D)$. Since $\frac 1{M_{\theta_0}} \delta_A \delta_D \!<\! 1$, there exists $\varepsilon\!>\!0$
\vspace{-2mm} so~that
\begin{equation}
\label{eq:3factors}
\frac 1 {M_{\theta_0} - \varepsilon } (\delta_A + \varepsilon ) (\delta_D + \varepsilon ) < 1.
\end{equation}
Due to the relative boundedness assumption on $C$, there exist $a_A$, $b_A>0$, $b_A\in[\delta_A,\delta_A+\varepsilon)$ such that
\begin{equation}
\label{eq:spec.incl.BB*1}
\norm{C(A-\lambda)^{-1}}\le a_A\norm{(A-\lambda)^{-1}}+b_A\norm{A(A-\lambda)^{-1}}.
\end{equation}
Since $A$ is m-sectorial with semi-angle $\varphi$ and vertex $0$, we have the estimate
\begin{equation}
\label{eq:spec.incl.BB*2}
\norm{(A-\lambda)^{-1}}\le\frac{1}{\operatorname{dist}(\lambda,W(A))}\le\frac{1}{r m_A(\theta)},
\end{equation}
with $m_{A} (\theta)$ defined as in Lemma \ref{lem:sec.res.est}, see \cite[Thm.\ V.3.2]{Kato-1995} or
\eqref{eq:kato.sec.num.dist.est}. Consequently, by \eqref{eq:spec.incl.BB*1}, \eqref{eq:spec.incl.BB*2} and Lemma \ref{lem:sec.res.est}, we obtain
\begin{equation}
\label{eq:spec.incl.BB*5}
\norm{C(A-\lambda)^{-1}}
\le\frac{a_A}{r m_A(\theta)}
+\frac{b_A}{m_A(\theta)}.
\end{equation}
Similarly, since $-D$ is m-sectorial with semi-angle $\psi$ and vertex $0$, and using Lemma \ref{lem:sec.res.est} as well as \eqref{eq:kato.sec.num.dist.est} and $|\arg (-\lambda)| = \pi-\theta$, we conclude that there exist $a_D$, $b_D>0$, $b_D\in[ \delta_D,\delta_D+\varepsilon)$ with
\begin{equation}
\label{eq:spec.incl.BB*6}
\norm{B(D-\lambda)^{-1}}\le\frac{a_D}{rm_{-D} (\pi -\theta)}+\frac{b_D}{{m_{-D}(\pi -\theta)}}
\end{equation}
with $m_{-D}(\pi -\theta)$ defined as in Lemma \ref{lem:sec.res.est} and hence
\begin{equation}
\label{eq:4factors}
\| C(A-\lambda)^{-1} B(D-\lambda)^{-1} \| \!\le\! \frac {b_A b_D}{M_\theta\!} \Big( \frac{a_A}{r b_A} \!+\! 1 \Big) \Big( \frac{a_D}{r b_D} \!+\! 1 \Big).
\end{equation}
Here the function
\[
[\varphi, \pi-\psi] \to [0,1], \quad \theta \mapsto M_\theta \vcentcolon= m_A(\theta) m_{-D}(\pi -\theta),
\]
is continuous, monotonically increasing for $\theta \le \widetilde \theta_0 := \frac \pi 2 + \frac{\varphi-\psi}2 \in [\varphi, \pi-\psi]$ and decreasing for $\theta \ge \widetilde \theta_0$. Hence, the restriction of $\theta \mapsto M_\theta$ to $[\tau, \pi-\tau]$ attains its maximum at $\theta_0$ and we can choose $\eta>0$ such that $M_{\theta_0} - \varepsilon < M_\theta$ for
$\theta \in (\theta_0-\eta,\theta_0+\eta) \cap (\tau,\pi-\tau)$. Now we fix such a $\theta$. Using \eqref{eq:4factors} and \eqref{eq:3factors}, we conclude that there exists
$r>0$ so large that
\begin{equation}
\| C(A-\lambda)^{-1} B(D-\lambda)^{-1} \| \!\le\! \frac {(\delta_A+\varepsilon)(\delta_D+\varepsilon)} {M_{\theta_0}\!\!-\!\varepsilon } \Big( \frac{a_A}{r b_A } \!+\! 1 \Big) \Big( \frac{a_D}{r b_D}\!+\! 1\Big) \!<\! 1.\!
\end{equation}
This implies $1 \!\in\! \rho( C(A\!-\!\lambda)^{-1} B(D\!-\!\lambda)^{-1})\!$ and thus $\lambda \!\in\! \rho(\mathcal{A})$ by \cite[Cor.~2.3.5]{Tretter-2008}.
Suppose that (ii${'}$) is satisfied. By the assumptions on $B$, $C$, the operator $\mathcal{S}\!\vcentcolon=\!\mathcal{S}_1$ is self\-adjoint and has a spectral gap
$(-\|B^{-1}\|^{-1},\|B^{-1}\|^{-1})$ around~$0$. Then \cite[Thm.\ 4.7]{Cuenin-Tretter-2016} with $\beta_T = 1/\norm{B^{-1}}$ therein implies that $\i\mathbb{R} \subseteq \rho(\mathcal{A})$.
\end{proof}
\section{Application to damped wave equations in $\R^d$ with unbounded damping}
\label{sec:dwe}
In this section we use the results obtained in Section \ref{subsec:pseudo.nr.spec.encl} to derive new spectral enclosures for linearly damped wave equations with non-negative, possibly singular and/or unbounded, damping $a$ and potential $q$.
Our result covers a new class of unbounded dampings which are $p$-subord\-inate to $-\Delta+q$, a notion going back to \cite[\S I.7.1]{MR0342804}, \cite[\S 5.1]{Markus-1988}, cf.\ \cite[Sect.~3]{Tretter-Wyss-2014}.
\begin{thm}
\label{thm:pencil.spec.incl}
Let $\mathbf{t}$ be a quadratic pencil of sesquilinear forms given by
\begin{equation}
\mathbf{t}(\lambda)\vcentcolon=\mathbf{t}_0+2\lambda\mathbf{a}+\lambda^2, \quad \operatorname{dom}\mathbf{t}(\lambda)\vcentcolon=\operatorname{dom}\mathbf{t}_0, \quad \lambda\in\mathbb{C},
\end{equation}
where $\mathbf{t}_0$ and $\mathbf{a}$ are densely defined sesquilinear forms in $\mathcal{H}$ such that $\mathbf{t}_0$ is closed, $\mathbf{t}_0\ge\kappa_0\ge0$, $\mathbf{a}\ge\alpha_0\ge0$ and $\operatorname{dom}\mathbf{t}_0\subseteq\operatorname{dom}\mathbf{a}$. Suppose that there exist $\kappa \le \kappa_0$ and $p\in(0,1)$ such that $\mathbf{a}$ is $p$-form-subordinate with respect to $\mathbf{t}_0-\kappa\ge0$, i.e.\ there is $C_{p}>0$ with
\begin{equation}
\label{eq:pencil.subordinate}
\mathbf{a}[f]\le C_{p}\big((\mathbf{t}_0-\kappa)[f]\big)^p \big(\norm{f}^2\big)^{1-p}, \quad f\in\operatorname{dom}\mathbf{t}_0.
\end{equation}
Then the family $\mathbf{t}$ is holomorphic of type \textnormal{(a)}. If $\,T$ denotes the associated holomorphic family of type \textnormal{(B)}, then
\[
\sigma(T) \subseteq W_\Psi(T) \subseteq \big\{ z\!\in\!\mathbb{C}\!: \operatorname{Re} z\le 0 \big\}
\]
and the following more precise \vspace{1mm} spectral enclosures hold:
\begin{enumerate}
\item The non-real spectrum of $\,T$ is \vspace{-1.5mm}contained~in
\begin{align*}
\label{eq:pencil.spec.incl}
\sigma(T)\setminus \mathbb{R} \subseteq W_\Psi(T) \setminus \mathbb{R} \subseteq \!
\bigg\{ & z\!\in\!\mathbb{C}\!: \operatorname{Re} z\le-\alpha_0, \, |z| \ge \sqrt{\kappa_0}
, \\[-3mm]
&\abs{\operatorname{Im} z}\!\ge\! \sqrt{\max\!\Big\{0,C_{p}^{-\frac{1}{p}}\!\abs{\operatorname{Re} z}^\frac{1}{p}\!\!-\!\abs{\operatorname{Re} z}^2\!\!+\!\kappa\Big\}}\bigg\};
\hspace{-10mm}
\\[-6mm]
\end{align*}
\item
if $\,p\!<\!\frac 12$ or if $\,p\!=\!\frac 12$ and $C_{\frac 12}\! <\!1$ or if $p=\frac12$ and $C_\frac12 = 1$ and $\kappa>0$, the real spectrum of $\,T$ \vspace{-1mm} satisfies~either
\[
\sigma(T)\cap \mathbb{R} = \emptyset \quad \mbox{ or } \quad \sigma(T) \cap \mathbb{R} \subseteq [s^-,s^+],
\vspace{-1mm}
\]
if $\,p \!>\! \frac 12$ or if $\,p\!=\!\frac 12$ and $C_{\frac 12}\!>\!1$ or if $p=\frac12$ and $C_\frac12 = 1$ and $\kappa\le 0$, the real spectrum of $\,T$ satisfies \vspace{-1mm} either
\[
\sigma(T)\cap \mathbb{R} \subseteq (-\infty, r^+] \cup [s^-\!, s^+]
\ \ \mbox{ or } \ \
\sigma(T) \cap \mathbb{R} \subseteq
(-\infty,s^+],
\vspace{-1mm}
\]
where $-\infty<r^+< s^- \!\le\! s^+ \!\le\! 0$ depend on $p$, $C_p$, $\kappa_0$ and $\kappa$;
\vspace{1.5mm}
\item if $\kappa=0$ and $p < \frac 12$, \vspace{-0.5mm} then
\begin{align*}
\qquad \ \
&\sigma(T)\cap \mathbb{R} = \emptyset \hspace{6.5cm} \mbox{ if } (C_p^2)^{\frac 1{1-2p}} \!<\! \kappa_0, \\[-1mm]
&\sigma(T)\cap \mathbb{R} \subseteq \!\!\Big[
\!-\!C_pt_0^p\!-\!\sqrt{C_{p}^2t_0^{2p}\!-\!t_0 },\, -C_p\kappa_0^p \!+\!\sqrt{C_p^2\kappa_0^{2p}\!-\!\kappa_0}\Big]\!\! \\[-1mm]
& \hspace{8.7cm} \mbox{ if } (C_p^2)^{\frac 1{1-2p}} \!\ge\! \kappa_0,\\[-10mm]
\end{align*}
where $t_0:= \max \big\{ \big( 4C_p^2p(1\!-\!p) \big)^{-\frac 1{2p-1}}\!\!,\kappa_0 \big\}$;
\item if $\kappa=0$ and $p= \frac 12$, \vspace{-0.1mm} then
\begin{alignat*}{2}
&\sigma(T)\cap \mathbb{R} = \emptyset && \ \mbox{ if } C_{\frac 12}\!<\!1 \mbox{ and } \kappa_0 \!>\! 0, \\
&\sigma(T) \cap \mathbb{R} \subseteq \{0\} && \ \mbox{ if } C_\frac 12 \!<\!1 \mbox{ and } \kappa_0 \!=\! 0, \\[-1mm]
\hspace{6mm} &\sigma(T)\cap \mathbb{R} \subseteq \!\Big(\!\!-\!\infty,-\Big(C_\frac12 \!-\!\sqrt{C_\frac12^2 \!-\! 1}\Big) \kappa_0^{\frac 12} \Big] & &\ \mbox{ if } C_{\frac 12} \!\ge\! 1;
\end{alignat*}
\item if $\kappa=0$ and $p> \frac 12$, \vspace{-0.1mm}
then
\begin{alignat*}{2}
&\sigma(T)\cap \mathbb{R} \subseteq \Big(\!\!-\!\infty, -C_pt_0^p+\sqrt{C_{p}^2t_0^{2p}\!-\!t_0 }\,\Big] && \ \mbox{ if } \kappa_0 > 0, \\[-1mm]
& \sigma(T)\cap \mathbb{R} \subseteq \Big(\!\!-\!\infty, -C_pt_0^p+\sqrt{C_{p}^2t_0^{2p}\!-\!t_0 }\,\Big] \cup \{0\} && \ \mbox{ if } \kappa_0 = 0,
\\[-8mm]
\end{alignat*}
where $t_0:= \max \big\{ \big( 4C_p^2p(1\!-\!p) \big)^{-\frac 1{2p-1}}\!\!,\kappa_0 \big\}$.
\end{enumerate}
\end{thm}
\begin{figure}[h]
\hspace*{-4mm}
\subcaptionbox*{\hspace{5.5mm}{\scriptsize (a)\,$p\!=\!0.4$,\,$C_p\!=\!1.3$,\,$\kappa\!=\!-2$, \\
\hspace*{9mm} $\alpha_0\!=\!2.5$,\,$\kappa_0\!=\!5$}}{\includegraphics[width=0.322\textwidth]{dwe-new-1-comp}} \
\subcaptionbox*{{\hspace{4.5mm} \scriptsize (b)\,$p\!=\!0.5$,\,$C_p\!=\!0.7$,\,$\kappa\!=\!3$, \\
\hspace*{9mm} $\alpha_0\!=\!0.5$,\,$\kappa_0\!=\!6$}}{\includegraphics[width=0.322\textwidth]{dwe-new-2-comp}} \
\subcaptionbox*{{\hspace{3mm} \scriptsize (c)\,$p\!=\!0.65$,\,$C_p\!=\!0.5$,\,$\kappa\!=\!-5$, \\
\hspace*{8mm}$\alpha_0\!=\!1$,\,$\kappa_0\!=\!0$}}{\includegraphics[width=0.322\textwidth]{dwe-new-3-comp}}
\caption{{\small
\! Enclosures for $\sigma(T) \!\setminus\! \mathbb{R}$ in Theorem~\ref{thm:pencil.spec.incl}~(i) (blue) and
for $\sigma(T)\cap\mathbb{R}$ in Theorem~\ref{thm:pencil.spec.incl}~(ii)-(v) (red in (a),\,(c), empty~in~(b)).}}
\vspace{-4mm}
\label{fig:dwe.spec.incl}
\end{figure}
\vspace{-1mm}
\begin{rem}
If \eqref{eq:pencil.subordinate} holds with $p=0$, then $\mathbf{a}$ is bounded and $\norm{\mathbf{a}} \le C_p= C_0$. In this case,
the spectrum of $T$ lies in a strip to the left of the imaginary axis; more precisely,
the non-real spectrum of $T$ \vspace{-0.5mm} satisfies
\begin{equation}
\sigma(T)\setminus\mathbb{R} \subseteq \set{z \in \mathbb{C}}{-C_0 \le \operatorname{Re} z \le -\alpha_0, \, |z| \ge \sqrt{\kappa_0} },
\end{equation}
while the real spectrum \vspace{-0.5mm} satisfies
\begin{equation}
\sigma (T) \cap \mathbb{R} \begin{cases}
= \emptyset & \quad {\rm if } \,\, C_0^2 < \kappa_0, \\
\subseteq [-C_0 - \sqrt{C_0^2-\kappa_0}, -C_0 + \sqrt{C_0^2-\kappa_0}] & \quad {\rm if } \,\, C_0^2 \ge \kappa_0;
\end{cases}
\end{equation}
notice that the latter corresponds to Theorem~\ref{thm:pencil.spec.incl} (iii) with $p=0$.
\end{rem}
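
The $p=0$ enclosure above is simple enough to evaluate directly. The following
Python sketch (an illustration with hypothetical sample constants, not part of
the analysis) computes the real-spectrum enclosure for given $C_0$ and $\kappa_0$:
\begin{verbatim}
import math

def real_spectrum_enclosure_p0(C0, kappa0):
    # Real-spectrum enclosure for p = 0 (bounded damping, ||a|| <= C0):
    # empty if C0^2 < kappa0, else the closed interval
    # [-C0 - sqrt(C0^2 - kappa0), -C0 + sqrt(C0^2 - kappa0)].
    if C0**2 < kappa0:
        return None
    root = math.sqrt(C0**2 - kappa0)
    return (-C0 - root, -C0 + root)

print(real_spectrum_enclosure_p0(1.0, 2.0))  # None: empty real spectrum
print(real_spectrum_enclosure_p0(2.0, 1.0))  # (-2 - sqrt(3), -2 + sqrt(3))
\end{verbatim}
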
\begin{proof}[Proof of Theorem {\rm \ref{thm:pencil.spec.incl}}]
Clearly, $\mathbf{t}$ is holomorphic. For arbitrary $\varepsilon>0$, applying Young's inequality to \eqref{eq:pencil.subordinate}, we obtain
\begin{equation}
\label{eq:pencil.rel.bdd}
\begin{aligned}
\mathbf{a}[f] & \le \left(\frac{\varepsilon}{p}\right)^p \!\!\! \big((\mathbf{t}_0-\kappa)[f]\big)^p \left(\frac{p}{\varepsilon}\right)^p C_{p}\big(\norm{f}^2\big)^{1-p}\\[-1mm]
& \le \varepsilon \big((\mathbf{t}_0-\kappa)[f]\big)+ (1\!-\!p)\left(\frac{p}{\varepsilon}\right)^\frac{p}{1-p} C_p^\frac{1}{1-p} \norm{f}^2
\end{aligned}
\end{equation}
for all $f\in\operatorname{dom}\mathbf{t}_0$, i.e.\ $\mathbf{a}$ is $\mathbf{t}_0$-bounded with relative bound $0$. Hence, for each $\lambda\in\mathbb{C}$, the form $\mathbf{t}(\lambda)$ is densely defined, sectorial and closed, see e.g.\ \cite[Thm.\ VI.1.33]{Kato-1995}. This shows that $\mathbf{t}$ is a holomorphic family of type (a). Since all enclosing sets in Theorem \ref{thm:pencil.spec.incl} are closed and
\[
\sigma(T) \subseteq W_{\Psi} (T)=W_{\Psi} (\mathbf{t}) = \overline{W(\mathbf{t})}
\]
by Theorem \ref{thm:pseudo.dense.hol.fam} with $k=2$ and $\mu\in\mathbb{C}$ arbitrary, it suffices to show that $W(\mathbf{t})\setminus \mathbb{R}$ and $W(\mathbf{t})\cap \mathbb{R}$ satisfy the claimed enclosures.
Let $\lambda_0\in W(\mathbf{t})$, i.e.\ there exists $f\in\operatorname{dom}\mathbf{t}_0$, $\norm{f}=1$, with $\mathbf{t}(\lambda_0)[f]=0$. Taking real and imaginary part in this equation, we conclude that
\begin{align}
\label{eq:pencil.re}
\mathbf{t}_0[f]+2\operatorname{Re}\lambda_0\,\mathbf{a}[f]+(\operatorname{Re}\lambda_0)^2-(\operatorname{Im}\lambda_0)^2 & =0, \\
\label{eq:pencil.im}
2\operatorname{Im}\lambda_0\,\mathbf{a}[f]+2\operatorname{Re}\lambda_0\operatorname{Im}\lambda_0 & =0.
\end{align}
First assume that $\lambda_0 \!\in\! W(\mathbf{t})\setminus\mathbb{R}$.
Dividing \eqref{eq:pencil.im} by $2\operatorname{Im}\lambda_0$ ($\ne 0$) and inserting this into \eqref{eq:pencil.re}, we find
\begin{align}
&\operatorname{Re}\lambda_0=-\mathbf{a}[f]\le-\alpha_0\le 0, \label{eq:pencil.re.num.ineq}\\
&\abs{\lambda_0}^2=(\operatorname{Im}\lambda_0)^2+(\operatorname{Re}\lambda_0)^2=\mathbf{t}_0[f] \ge \kappa_0.
\label{eq:pencil.im.num.ineq-0}
\end{align}
Using these relations and assumption \eqref{eq:pencil.subordinate}, we can further \vspace{-1mm} estimate
\begin{equation}
\label{eq:pencil.im.num.ineq}
(\operatorname{Im}\lambda_0)^2 =\mathbf{t}_0[f] - |\operatorname{Re} \lambda_0|^2 \ge \max\{ 0, {C_p^{-\frac 1p} } \abs{\operatorname{Re}\lambda_0}^\frac{1}{p} - |\operatorname{Re} \lambda_0|^2 +\kappa \},
\end{equation}
and hence $\lambda_0 \!\in W(\mathbf{t})\!\setminus\!\mathbb{R}$ satisfies all three claimed inequalities in (i).
Now assume that $\lambda_0 \!\in \!W(\mathbf{t})\cap\mathbb{R}$. Then $\mathbf{a}[f]^2\!-\!\mathbf{t}_0[f]\!\ge\!0$ and thus, in par\-ti\-cular, $\mathbf{a}[f] \!\ge\! \max\{\alpha_0,\sqrt{\kappa_0}\}$. Moreover, since $\operatorname{Im}\lambda_0\!=\!0$, equality \eqref{eq:pencil.im}~trivially holds and \eqref{eq:pencil.re} implies
\begin{equation}
\label{eq:real-Wt}
\lambda_0 = -\mathbf{a}[f]\pm\sqrt{\mathbf{a}[f]^2-\mathbf{t}_0[f]}\le 0
\end{equation}
because~$\mathbf{t}_0 \!\ge\! 0$. This, together with $\mathbf{a} \!\ge\! \alpha_0$ and assumption \eqref{eq:pencil.subordinate}, yields \vspace{1mm} that
\begin{equation}
\label{eq:ineq-new}
\max\big\{\alpha_0^2,\kappa_0\big\} \le \max\{ \alpha_0^2, \mathbf{t}_0[f]\} \le \mathbf{a}[f]^2 \le C_p^2 \big((\mathbf{t}_0-\kappa)[f]\big)^{2p}.
\end{equation}
If we define
\begin{align*}
&d(x):= C_p^{-\frac 1p} x^{\frac 1{2p}} \!-\!x \!+\! \kappa, \quad x\!\in\! [0,\infty),\quad
D_{\leq 0} := \big\{ x \!\in\! [\kappa_0,\infty): d(x) \leq 0\big\},
\end{align*}
then it is easy to see that $\mathbf{t}_0[f] \!\in\! D_{\le 0}$;
in particular, $D_{\le 0} = \emptyset$ implies $W(\mathbf{t}) \cap \mathbb{R} =\emptyset$. An elementary analysis shows that $d$ either vanishes identically or has no zero, one simple zero or two (possibly coinciding) zeros on $[0,\infty)$, which we denote by $x_+$ and $x_-\le x_+$, respectively, if they exist.~Then
\begin{align}
\label{eq:p.cases.1}
p < &
\frac 12 \mbox{ or } p = \frac12, C_\frac12 < 1 \mbox{ or } p = \frac12, C_\frac12 = 1, \kappa>0 \\[1mm]
& \implies D_{\le 0} \!=\! \emptyset \mbox { or } D_{\le 0} \mbox{ is bounded}, \ D_{\le 0} \!=\! [ \kappa_0,x_+] \mbox{ or } D_{\le 0}\!=\![x_-,x_+]
\intertext{}
\label{eq:p.cases.2}
p > &\frac 12 \mbox{ or } p = \frac12, C_\frac12 > 1 \mbox{ or } p = \frac12, C_\frac12 = 1, \kappa\le 0 \\[1mm]
& \implies D_{\le 0} \!\ne\! \emptyset \mbox{ is unbounded},\ D_{\le 0}\!=\![\kappa_0,\infty) \mbox{ or } D_{\le 0} \!=\! [x_+, \infty)\\
& \hspace{5.75cm}
\mbox{ or } D_{\le 0}\!=\![\kappa_0,x_-]\!\cup\![x_+,\infty).
\end{align}
Which case prevails for fixed $p \!\in\! [0,1)$ can be characterised by means of in\-equalities involving the constants $\kappa_0$, $\kappa$ and $C_p$. For~estimating $\lambda_0$ in \eqref{eq:real-Wt} while respecting the restrictions in \eqref{eq:ineq-new}, we consider the functions
\[
f_\pm(s,t):= -s \pm \sqrt{s^2 \!-\! t}, \quad s\!\in\![\alpha_0,\infty), \ t\!\in\! [\kappa_0,\infty), \ t \!\le\! s^2 \!\le\! C_p^2(t-\kappa)^{2p}.
\]
It is easy to check that $f_+$ is monotonically increasing in $s$ and monotonically decreasing in $t$, while $f_-$ is monotonically decreasing in $s$ and monotonically increasing in $t$ and hence, since $s \le C_p(t-\kappa)^{p}$,
\begin{equation}
\label{eq:gpm}
\begin{aligned}
f_+(s,t) & \le f_+(C_p(t-\kappa)^p,t) =\vcentcolon g_+(t), \\
f_-(s,t) & \ge f_-(C_p(t-\kappa)^p,t) =\vcentcolon g_-(t).
\end{aligned}
\end{equation}
Now we distinguish the two qualitatively different cases \eqref{eq:p.cases.1} and \eqref{eq:p.cases.2}.
To obtain the claimed enclosures for $W(\mathbf{t})\cap \mathbb{R}$, we use \eqref{eq:ineq-new},
\eqref{eq:real-Wt} and \eqref{eq:gpm} to conclude that $g_-(t) \leq \lambda_0 \leq g_+(t)$ for some $t \in D_{\leq 0}$.
\\
If \eqref{eq:p.cases.1} holds, there are the following two possibilities: \\[1mm]
(1) If $d$ has no zeros on $[0,\infty)$ or if $d$ has at least one zero and $x_+ \!<\!\kappa_0$,~then $D_{\le0} =\emptyset$ and \vspace{-2mm} thus
\[
W(\mathbf{t})\cap \mathbb{R} = \emptyset.
\]
(2) If $d$ has at least one zero $x_+$ and $x_+ \ge \kappa_0$, then $D_{\le0}
$ is one bounded interval and
\begin{align*}
\label{eq:spm}
W(\mathbf{t})\!\cap \mathbb{R} \subseteq \!\big[s^-\!, s^+\big], \quad
& s^- \!\vcentcolon=\!\! \min_{t\in D_{\le0}
}g_-(t), \quad s^+ \!\vcentcolon=\!\! \max_{t\in D_{\le0}
}g_+(t);
\end{align*}
here if $d$ has only one zero $x_+$ or if $d$ has two zeros $x_\pm$ and $x_-<\kappa_0$, then $D_{\le0} = [\kappa_0,x_+]$ and if $d$ has two zeros and $x_-\ge\kappa_0$, then \vspace{1mm} $D_{\le0} =[x_-,x_+]$.
\noindent
If \eqref{eq:p.cases.2} holds, there are the following two possibilities: \\[1mm]
(3) If $d$ has two zeros $x_\pm$ on $[0,\infty)$ and $x_- \!\ge\! \kappa_0$, then $D_{\le0}\!=\! [\kappa_0,x_-] \cup [x_+,\infty)$ and we obtain
\begin{align*}
W(\mathbf{t})\cap \mathbb{R} \!\subseteq \! \big(\!-\!\infty, r^+ \big] \! \cup \!
\big[ s^-\!,s^+ \big], \ \ & r^+ \!\vcentcolon=\!\!
\max_{t \in [x_+,\infty)} g_+(t), \ s^+ \!\vcentcolon=\!\!\! \max_{t\in [\kappa_0,x_-]}g_+(t), \\[-1mm]
& s^- \!\vcentcolon=\!\!\! \min_{t\in [\kappa_0,x_-]}g_-(t)
;
\end{align*}
here $g_+$ attains a maximum on $[x_+,\infty)$ since $g_+(t)$ tends to $-\infty$ as $t\to\infty$, and analogously in the next case.
\noindent
(4) If $d$ has either at most one zero $x_+$ or two zeros $x_\pm$ on $[0,\infty)$ and $x_- < \kappa_0$, then $D_{\le0} = [\max\{\kappa_0, x_+\},\infty)$ and we conclude that
\[
W(\mathbf{t})\cap \mathbb{R} \subseteq \big(-\infty, s^+\big], \quad s^+ \!\vcentcolon=\! \max_{t \in [\max\{\kappa_0, x_+\},\infty)} g_+(t).
\vspace{-2mm}
\]
This proves claim (ii).
Claim (iv) for $\kappa\!=\!0$ and $p\!=\! \frac 12$ follows from cases (1), (2) and (4) above if we note that then $d(x)=(C_{\frac 12}^{-2}\!-\!1)x$, $x\!\in\![0,\infty)$, is either identically \vspace{-2mm} zero or
has the only zero $x_+\!=\!0$ and, for case (4), $g_+(t)\!=\!-t^{\frac 12} \big(C_{\frac 12}\!+\! \sqrt{C_{\frac 12}^2\!-\!1}\big)$ \vspace{-2mm} is monotonically decreasing
so that $s^+=g_+(\kappa_0)$.
Finally, if $\kappa\!=\!0$ and $p\!\ne\! \frac 12$, the function $d$ has the two zeros \vspace{-1.5mm} $x_-\!=\!0$ and $x_+ \!=\! (C_p^2)^{\frac 1{1-2p}}$ on $[0,\infty)$, and the respective bounds $r^+$, $s^\pm$ above can be determined explicitly to deduce claims (iii) and (v). More precisely, claim (iii) follows from cases (1) and (2) if we note that, in (2),
$D_{\le0} = [\kappa_0, x_+]$, $g_+$ is monotonically decreasing on $[0, x_+]$
and $g_-$
attains its minimum \vspace{-1mm} on $[0,x_+]$ at $t=\big( 4C_p^2p(1\!-\!p) \big)^{-\frac 1{2p-1}}$. Claim (v) follows from cases (4) if $\kappa_0>0$ and (3) if $\kappa_0=0$; note that, for $\kappa\!=\!0$, case (3) where $p\!>\!1/2$ can only occur if $\kappa_0\!=\!0$. In both cases, we use \vspace{-1mm} that $g_+$ attains its maximum on $[x_+
,\infty)$ at $t=\big( 4C_p^2p(1\!-\!p) \big)^{-\frac 1{2p-1}}$.
\end{proof}
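
The case analysis in the proof lends itself to a direct numerical check. The
following Python sketch (illustrative only; the constants are sample values
matching panel (c) of Figure~\ref{fig:dwe.spec.incl}) locates $D_{\le 0}$ on a
truncated grid and evaluates $g_\pm$ there to bound $W(\mathbf{t})\cap\mathbb{R}$:
\begin{verbatim}
import numpy as np

p, Cp, kappa, kappa0 = 0.65, 0.5, -5.0, 0.0    # sample constants

t = np.linspace(kappa0, 200.0, 400001)         # truncation of [kappa0, infty)
d = Cp**(-1.0/p) * t**(1.0/(2*p)) - t + kappa  # t in D_{<=0}  iff  d(t) <= 0
D = t[d <= 0]

if D.size == 0:
    print("W(t) cap R is empty")
else:
    disc = np.sqrt(np.maximum(Cp**2 * (D - kappa)**(2*p) - D, 0.0))
    g_plus = -Cp * (D - kappa)**p + disc       # upper bound function g_+
    g_minus = -Cp * (D - kappa)**p - disc      # lower bound function g_-
    # On the truncated grid only; if D_{<=0} is unbounded, the lower
    # bound escapes to -infinity as the grid is enlarged.
    print(g_minus.min(), g_plus.max())
\end{verbatim}
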
\begin{rem}
If \eqref{eq:pencil.subordinate} holds with $\kappa \le \kappa_0$ and $p\in[0,1)$, then it holds for every $q\in(p,1)$ with $\kappa_1 \le \kappa$ such that $\kappa_1 < \kappa_0$.
Indeed, then $\mathbf{t}_0\!-\!\kappa\! \le\! \mathbf{t}_0\!-\!\kappa_1$ and $\mathbf{t}_0\! -\!\kappa_1 \!\ge\! \kappa_0\!-\!\kappa_1 \!>\!0$ which implies that $(\|f\|^2)^{q-p} \!\le\! (\kappa_0\!-\!\kappa_1)^{p-q} \big((\mathbf{t}_0\!-\!\kappa_1)[f] \big)^{q-p}$\!\!,
$f\!\in\! \operatorname{dom} \mathbf{t}_0$. Hence \eqref{eq:pencil.subordinate} holds with $q$, $\kappa_1$ and $C_q=C_p (\kappa_0\!-\!\kappa_1)^{p-q}$.
\end{rem}
\begin{rem}
\label{Jacob-Trunk}
As a special case of Theorem \ref{thm:pencil.spec.incl} we obtain the enclosure for the non-real spectrum proved in \cite[Thm.\ 3.2,~Part 5]{Jacob-Trunk-2009} (where the damping was only assumed to be accretive) and we considerably improve the enclosure for the real spectrum therein since we obtain that the latter is, in fact, empty. The assumption in \cite[Thm.\ 3.2,~Part 5]{Jacob-Trunk-2009} is~that
\begin{equation}
\label{eq:JT09}
\nu:= \sup_{f\in\operatorname{dom}\mathbf{t}_0\setminus\{0\}} \frac{2\mathbf{a}[f]}{\mathbf{t}_0[f]^{1/2}\|f\|} \in (0,2).
\end{equation}
The parameters $a_0$, $\beta$ and $\nu$ in \cite[(5) and p.\ 83]{Jacob-Trunk-2009} correspond to the following special choices in Theorem~\ref{thm:pencil.spec.incl} and assumption \eqref{eq:pencil.subordinate}:
\[
p=\frac 12, \quad C_{\frac 12} = \frac \nu 2, \quad \kappa=0, \quad \kappa_0 = a_0^2 >0, \quad \alpha_0 = \frac \beta 2.
\]
Under the assumption \eqref{eq:JT09} made in \cite[Thm.\ 3.2,~Part 5]{Jacob-Trunk-2009}, Theorem~\ref{thm:pencil.spec.incl}~(i) yields the spectral \vspace{-2mm} enclosure
\[
\sigma(T) \setminus \mathbb{R} \subseteq \! \bigg\{ z\!\in\mathbb{C}: \operatorname{Re} z\!\le\!-\frac \beta 2, \, \abs{z} \!\ge\! a_0, \, \, \abs{\operatorname{Im} z}\!\ge\! \sqrt{\frac 4{\nu^2}\!-\!1\,}\abs{\operatorname{Re} z} \bigg\}.
\]
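Indeed, the third inequality follows from Theorem~\ref{thm:pencil.spec.incl}~(i) by a one-line computation: with $p=\frac 12$, $C_{\frac 12}=\frac \nu 2$ and $\kappa=0$,
\[
C_{\frac 12}^{-2}\abs{\operatorname{Re} z}^{2}\!-\!\abs{\operatorname{Re} z}^2 = \Big(\frac{4}{\nu^2}-1\Big)\abs{\operatorname{Re} z}^2 \ge 0 \quad \mbox{since } \nu<2,
\]
so the maximum in (i) is attained at this term and the square root simplifies accordingly.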
This enclosure is the same as in \cite[Thm.\ 3.2,~Part 5]{Jacob-Trunk-2009}. However, since $\nu\!<\!2$ is equivalent to $C_{\frac 12} \!<\!1$, the enclosure $\sigma(T)\cap \mathbb{R} \subseteq (-\infty,-\frac {a_0}\nu-\frac{4a_0}{\nu^3}]$ in \cite[Thm.\ 3.2,~Part 5]{Jacob-Trunk-2009} is considerably improved by Theorem~\ref{thm:pencil.spec.incl} (iv) to
\[\sigma(T)\cap \mathbb{R} = \emptyset.\]
\end{rem}
\begin{rem}
In the second case in Theorem~\ref{thm:pencil.spec.incl}~(ii), i.e.\ if $p \!>\! \frac 12$ or $p \!=\! \frac12$, $C_\frac12 \!>\! 1$ or $p \!=\! \frac12$, $C_\frac12 \!=\! 1$, $\kappa\!\le\! 0$, the set $W(\mathbf{t})\cap(-\infty,0]$ used to enclose the spectrum can, indeed, be unbounded if $\mathbf{t}_0$ is unbounded.
In fact, if $W(\mathbf{t}_0) \!=\! [\kappa_0,\infty)$, we can choose $\mathbf{a}\!=\!C_p(\mathbf{t}_0-\kappa)^p$. Then there exist $f_n\!\in\!\operatorname{dom}\mathbf{t}_0$, $\norm{f_n}\!=\!1$, with $\mathbf{t}_0[f_{n}] \!\ge\! n$ for $n\!\in\!\mathbb{N}$.
The conditions on~$p$, $C_p$ and $\kappa$ ensure, comp.~\eqref{eq:p.cases.2}, that $C_p^2(\mathbf{t}_0[f_{n}]-\kappa)^{2p}-\mathbf{t}_0[f_{n}]\ge0$ for sufficiently large $n\in\!\mathbb{N}$ and thus
\begin{equation}
W(\mathbf{t})\cap(-\infty,0]\ni\lambda_0\!=\!-\mathbf{t}_0[f_n]^p\!-\!\sqrt{\mathbf{t}_0[f_n]^{2p}\!-\!\mathbf{t}_0[f_n]}\le-\mathbf{t}_0[f_n]^p \le -n^p \to -\infty,
\vspace{-1mm}
\end{equation}
and hence $\inf \left(W(\mathbf{t})\cap(-\infty,0]\right)=-\infty$.
\end{rem}
In the next example we apply Theorem \ref{thm:pencil.spec.incl} to linearly damped wave equations with possibly unbounded and/or singular damping.
\begin{exple}
\label{ex:damped.wave.PDE}
Let $\mathcal{H}=L^2(\R^d)$ with $d\ge3$ and $a$, $q\in L^1_{\operatorname{loc}}(\R^d)$, $a \neq 0$ and
$a,q\ge0$ almost everywhere. If $\operatorname{dom} a^\frac{1}{2}$ and $\operatorname{dom} q^\frac{1}{2}$ denote the maximal domains of the multi\-plication operators $a^\frac{1}{2}$ and $q^\frac{1}{2}$ in $L^2(\R^d)$, respectively, we define the quadratic forms $\mathbf{a}$ and $\mathbf{t}_0$ in $L^2(\R^d)$ \vspace{-1mm}by
\begin{equation}
\begin{aligned}
\mathbf{a}[f] & \vcentcolon=\int_{\R^d}a\abs{f}^2\d x, \quad & \operatorname{dom}\mathbf{a}& \vcentcolon=\operatorname{dom} a^\frac{1}{2}, \\
\mathbf{t}_0[f] & \vcentcolon=\int_{\R^d}\abs{\nabla f}^2\d x+\int_{\R^d}q\abs{f}^2\d x, \qquad & \operatorname{dom}\mathbf{t}_0 & \vcentcolon= H^1(\R^d)\cap\operatorname{dom} q^\frac{1}{2}.
\end{aligned}
\end{equation}
Suppose that, for almost all $x\in\R^d$,
\begin{equation}
\label{eq:dwe.pot.damp.inequ}
a(x)\le\sum_{j=1}^n\abs{x-x_j}^{-t}+ u(x) + v(x), \qquad v(x) \le c_1 q(x)^r+c_2,
\end{equation}
where $u \in L^s (\R^d)$ with $s>d/2$, $v\!\in\!L^1_{\operatorname{loc}}(\R^d)$, $t\!\in\![0,2)$, $n\!\in\!\mathbb{N}_{0}$, $x_j\!\in\!\mathbb{R}^d$ for $j\!=\!1,\dotsc,n$, $c_1$, $c_2\!\ge\!0$ and $r\!\in\![0,1)$. Then $\mathbf{a}$, $\mathbf{t}_0$ are closed, $\mathbf{a}$, $\mathbf{t}_0\ge 0$ and, without further assumptions, we only know that $\alpha_0 \ge 0$, $\kappa_0 \ge 0$ in Theorem~\ref{thm:pencil.spec.incl}. In order to verify \eqref{eq:pencil.subordinate}, let $f\in\operatorname{dom}\mathbf{t}_0$ with $\norm{f}=1$. By H\"older's and Hardy's inequality, for $1\le j\le n$,
\begin{equation}
\label{eq:dwe.Hardy}
\begin{aligned}
\int_{\R^d}\!\!\abs{x\!-\!x_j}^{-t}\abs{f}^2 \d x & \le \left(\int_{\R^d}\!\!\abs{x\!-\!x_j}^{-2}\abs{f}^2\d x\right)^\frac{t}{2} \!\le\! \frac{2^t}{(d-2)^t}\norm{\nabla f}^t.
\end{aligned}
\end{equation}
Moreover, by the Gagliardo-Nirenberg-Sobolev inequality, there exists a constant $G_d>0$ depending only on the dimension $d$ such that
\begin{equation}
\norm{f}_{L^{2^*}(\R^d)} \le G_d \norm{\nabla f}, \quad f\in H^1 (\R^d), \quad 2^*\vcentcolon= \frac{2d}{d-2},
\end{equation}
where $2^*\!>\!2$ is the critical Sobolev exponent for the embedding $H^1 (\R^d) \!\hookrightarrow\! L^{2^*}(\R^d)$. Since $d/s \!\in\! (0,2)$, we can use H\"older's inequality with three terms to estimate
\begin{equation}
\int_{\R^d} u |f|^2 \d x \le \norm{u}_{L^s(\R^d)} \left(\int_{\R^d} |f|^{\frac ds \frac{2s}{d-2}} \d x\right)^\frac{d-2}{2s} \left(\int_{\R^d} |f|^{\left(2-\frac ds\right)\frac{2s}{2s-d}} \d x\right)^\frac{2s-d}{2s}.
\end{equation}
This inequality, together with the relations
\begin{equation}
\frac ds \frac{2s}{d-2} = 2^*, \quad \frac{d-2}{2s} = \frac{d}{2^*s}, \quad \left(2-\frac ds\right)\frac{2s}{2s-d} = 2,
\end{equation}
and $\norm{f}=1$, yields that
\begin{equation}
\label{eq:Sobolev}
\int_{\R^d} u |f|^2 \d x \le \norm{u}_{L^s(\R^d)} \norm{f}_{L^{2^*} (\R^d)}^\frac ds \le \norm{u}_{L^s(\R^d)} G_d^\frac ds \norm{\nabla f}^\frac ds.
\end{equation}
Next the bound on $v$ in \eqref{eq:dwe.pot.damp.inequ} with $r\in[0,1)$, H\"older's inequality with $1/r\in (1,\infty]$, $1/(1-r)\in [1,\infty)$ and $\norm{f}=1$ give
\begin{equation}
\label{eq:dwe.v.subord}
\int_{\R^d} v |f|^2 \d x \le c_1\int_{\R^d}q^r\abs{f}^2\d x + c_2 \le c_1\left(\int_{\R^d}q\abs{f}^2\d x\right)^r + c_2.
\end{equation}
Combining the inequalities \eqref{eq:dwe.Hardy}, \eqref{eq:Sobolev} and \eqref{eq:dwe.v.subord}, we arrive at
\begin{equation}
\label{eq:dwe.damp.subord}
\begin{aligned}
\hspace{-2.5mm}\mathbf{a}[f] \!&\le\! \frac{n2^t}{(d\!-\!2)^t}\norm{\nabla f}^t \!\!+\! \norm{u}_{L^s(\R^d)} G_d^\frac ds \norm{\nabla f}^\frac ds \!+\! c_1\left(\int_{\R^d} \!q\abs{f}^2\d x\right)^{\!r}\!\!\!+\!c_2 \hspace{-4mm}
\\
&=: \alpha_1 (\norm{\nabla f}^2)^{\frac t2} \!\!+\! \alpha_2 (\norm{\nabla f}^2)^{\frac d{2s}} \!+\! \alpha_3 \left(\int_{\R^d} \!q\abs{f}^2\d x\right)^{r} \!+\! \alpha_4.
\end{aligned}
\end{equation}
In order to further bound \eqref{eq:dwe.damp.subord}, we estimate $\alpha_1 x_1^{p_1} \!\!+\! \alpha_2 x_2^{p_2} \!+\! \alpha_3 x_3^{p_3} \!+\! \alpha_4$~with $x_i \!\ge\! 0$, $p_i \!\in\! [0,1)$, $i\!=\!1,2,3$, and $\alpha_i \!\ge\! 0$, $i\!=\!1,2,3,4$; note that $x_1\!=\!x_2=\|\nabla f\|^2$ in~\eqref{eq:dwe.damp.subord}. If we set $p \!\vcentcolon=\! \max \{p_1,p_2,p_3\}$ and maximise $\delta(x) \!\vcentcolon=\! x^{p_i} \!-\! x^p\!$, $x\in [0,1]$, $i\!=\!1,2,3$, we \vspace{-2mm}find~that
\begin{equation}
\label{eq:delta}
x_i^{p_i} \!\le\! x_i^{p} \!+\!\delta_i, \quad \delta_i \vcentcolon= \begin{cases}
\hspace{8mm} 0 & \mbox{if } p_i = p, \\
\frac{p-p_i}{p} \left(\frac{p_i}{p}\right)^\frac{p_i}{p-p_i} & \mbox{if } p_i < p,
\end{cases}
\quad i=1,2,3.
\end{equation}
If $\max \{\alpha_1, \alpha_2, \alpha_3\} \neq 0$, then
\begin{equation}
\label{eq:gamma.p}
\gamma_p \vcentcolon= \alpha_1 (1 \!+\! \delta_1) \!+\! \alpha_2 (1 \!+\! \delta_2) \!+\! \alpha_3 (1 \!+\! \delta_3) \!+\! \alpha_4 \neq0.
\end{equation}
If we use \eqref{eq:delta}, the concavity of $x\!\mapsto\! x^{p}$ on $[0,\infty)$ and $x_1\!=\!x_2$,
we~obtain
\begin{align*}
& \alpha_1 x_1^{p_1} \!\!+\! \alpha_2 x_2^{p_2} \!+\! \alpha_3 x_3^{p_3} \!+\! \alpha_4
\, \le \,\alpha_1 (x_1^p \!+\! \delta_1) \!\!+\! \alpha_2 (x_2^{p} \!+\!\delta_2 ) \!+\! \alpha_3 (x_3^p \!+\! \delta_3) \!+\! \alpha_4 \\[1mm]
&= \gamma_p \Big( \frac{\alpha_1}{\gamma_p } x_1^{p} \!\!+\! \frac{\alpha_2}{\gamma_p } x_2^p + \frac{\alpha_3}{\gamma_p} x_3^p\!\!+\! \frac{\alpha_1\delta_1 + \alpha_2 \delta_2\!+\!\alpha_3\delta_3 + \alpha_4}{\gamma_p } \Big) \\
&\le \gamma_p \Big( \frac{\alpha_1}{\gamma_p } x_1 \!+\! \frac{\alpha_2}{\gamma_p } x_2 + \frac{\alpha_3}{\gamma_p} x_3 \!+\! \frac{\alpha_1\delta_1 + \alpha_2 \delta_2 \!+\!\alpha_3 \delta_3 + \alpha_4}{\gamma_p } \Big)^{p} \\[1mm]
&= \gamma_p ^{1-p} \big( (\alpha_1 + \alpha_2) x_1 \!+\! \alpha_3 x_3 + \alpha_1\delta_1 + \alpha_2 \delta_2 \!+\!\alpha_3 \delta_3 + \alpha_4 \big)^{p} \\[1mm]
&\le \gamma_p ^{1-p} \max\{ \alpha_1 +\alpha_2, \alpha_3\}^{p} \Big( x_1 \!+\! x_3 \!+\! \frac{\alpha_1\delta_1 + \alpha_2 \delta_2 \!+\!\alpha_3 \delta_3+\alpha_4}{\max\{ \alpha_1+\alpha_2, \alpha_3\}} \Big)^{p}.
\end{align*}
If $\max\{n,\norm{u}_{L^s(\R^d)},c_1\} \!\neq\! 0$, we can apply this estimate to \eqref{eq:dwe.damp.subord} with $p_1 \!=\! t/2$, $p_2 \!=\! d/(2s)$, $p_3 \!=\! r$, $\delta_i$, $i\!=\!1,2,3$, as in \eqref{eq:delta} to obtain that $\operatorname{dom}\mathbf{t}_0\subseteq\operatorname{dom}\mathbf{a}$ and assumption \eqref{eq:pencil.subordinate} holds with the
parameters
\begin{equation}
\label{vne0}
\begin{aligned}
&p\!=\!\max\Big\{\frac{t}{2},\frac{d}{2s}, r\!\Big\}, \quad
C_p \!=\! \gamma_p^{1-p} \max\Big\{ \frac{n2^t}{(d\!-\!2)^t} + \norm{u}_{L^s(\R^d)} G_d^{\frac ds},c_1 \Big\}^{\!p}\!\!, \\
&\kappa \!=\!- \frac{ n2^t \delta_1 + (d-2)^t (\norm{u}_{L^s(\R^d)} G_{d}^{\frac ds} \delta_2 + c_1 \delta_3 + c_2)}{\max\{ n2^t \!+\! (d\!-\!2)^t \norm{u}_{L^s(\R^d)} G_d^{\frac ds},\,(d-2)^t c_1\}},
\end{aligned}
\end{equation}
where, according to \eqref{eq:gamma.p},
\begin{equation}
\gamma_p = \frac{n2^t}{(d-2)^t} (1 \!+\! \delta_1) \!+\! \norm{u}_{L^s(\R^d)} G_d^\frac ds (1 \!+\! \delta_2) \!+\! c_1 (1 \!+\! \delta_3) \!+\! c_2.
\end{equation}
If $\max\{n,\norm{u}_{L^s(\R^d)},c_1\} \!=\! 0$, i.e.~$n=0$, $u\equiv0$ and $c_1=0$, then the damping $a=v$ is bounded, our assumption $a\neq 0$ implies $c_2>0$ and \eqref{eq:pencil.subordinate} trivially holds with $p=0$, $C_0=c_2 =\|a\|_\infty$ and $\kappa \le d=\kappa_0$ arbitrary.
The constants in \eqref{vne0} in the general case $\max\{n,\norm{u}_{L^s(\R^d)},c_1\} \!\neq\! 0$ simplify substantially
if either $n\!=\!0$, $u\!\equiv\! 0$ or $v \!\equiv\! 0$. If e.g.~two of $n$, $u$ or $v$ vanish, the constants $p$, $C_p$ and $\kappa$, which may be read off from \eqref{eq:dwe.Hardy}, \eqref{eq:Sobolev} or \eqref{eq:dwe.v.subord}, are also obtained as special cases of \eqref{vne0}. For \vspace{-1.5mm} instance,
\begin{alignat*}{4}
&p=\frac t2, \quad &&C_{\frac t2} = \frac{n2^t}{(d-2)^t}, \quad &&\kappa = 0
&&\quad \mbox{if $n\neq 0$, $u\equiv0$ and $v\equiv0$},\\[-1mm]
&p=\frac d{2s}, \ &&C_\frac{d}{2s} = \norm{u}_{L^s(\R^d)} G_d^\frac ds, \ &&\kappa = 0
&&\quad \mbox{if $n= 0$, $u\not\equiv0$ and $v\equiv0$},\\
&p=r, \quad &&C_r = (c_1\!+\!c_2)^{1-r} c_1^r, \quad &&\kappa = -\frac{c_2}{c_1}
&&\quad \mbox{if $n= 0$, $u\equiv0$ and $v\not\equiv0$, $c_1\!>\!0$};
\end{alignat*}
in \eqref{vne0} these are the three cases $\delta_1 = 0$ with $c_1=c_2=r=0$ and $s$ sufficiently large such that $d/(2s)<t/2$,
$\delta_2 = 0$ with $t=c_1=c_2=r=0$, and $\delta_3=0$ with $t=0$ and $s$ sufficiently large, respectively. The cases where only one of $n$, $u$ or $v$ vanishes are similar and are left to the reader.
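
As a rough numerical companion to \eqref{vne0}, the following Python sketch
(with hypothetical sample inputs; it assumes $p>0$ and
$\max\{n,\norm{u}_{L^s(\R^d)},c_1\}\neq 0$) evaluates the constants $p$, $C_p$
and $\kappa$ from given $n$, $t$, $\norm{u}_{L^s}$, $G_d$, $c_1$, $c_2$, $r$,
$d$ and $s$:
\begin{verbatim}
def pencil_constants(n, t, u_norm, G_d, c1, c2, r, d, s):
    a1 = n * 2**t / (d - 2)**t        # coefficient of ||grad f||^t
    a2 = u_norm * G_d**(d / s)        # coefficient of ||grad f||^{d/s}
    p1, p2, p3 = t / 2, d / (2 * s), r
    p = max(p1, p2, p3)               # assumed > 0 here

    def delta(pi):                    # max of x^{p_i} - x^p on [0, 1]
        if pi == p:
            return 0.0
        return (p - pi) / p * (pi / p)**(pi / (p - pi))

    d1, d2, d3 = delta(p1), delta(p2), delta(p3)
    gamma = a1*(1 + d1) + a2*(1 + d2) + c1*(1 + d3) + c2
    Cp = gamma**(1 - p) * max(a1 + a2, c1)**p
    kappa = -(a1*d1 + a2*d2 + c1*d3 + c2) / max(a1 + a2, c1)
    return p, Cp, kappa

# n = 0, u = 0, c1 = 1, c2 = 0, r = 1/2: reproduces p = 1/2, C_p = 1,
# kappa = 0, as in the harmonic-oscillator case below (k = 1).
print(pencil_constants(0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.5, 3, 100.0))
\end{verbatim}
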
As a special case, we \vspace{-1.5mm} consider
\begin{equation}
a(x)=\abs{x}^{k} \ \mbox{with } k \in[0,2), \quad q(x)=\abs{x}^2, \quad x\in\R^d.
\vspace{-1.5mm}
\end{equation}
Here $\alpha_0\!=\!0$ and we can choose $\kappa_0 >0$ as the ground energy of the harmonic oscillator, cf. \cite[Sec.\ XIII.12]{Reed-Simon-1978}, \vspace{-1mm}i.e.\
\begin{equation}
\kappa_{0}=\inf_{f\in \operatorname{dom}\mathbf{t}_0}\frac{\mathbf{t}_0[f]}{\norm{f}^2}=\frac{\mathbf{t}_0[f_0]}{\norm{f_0}^2}=d,
\vspace{-1mm}
\end{equation}
where $f_0(x)=\exp(-\abs{x}^2\!/2)$, $x\in\R^d$, is the (non-normalised) ground state of the harmonic oscillator.
Moreover, in this special case $a$ satisfies \vspace{-1.5mm} \eqref{eq:dwe.pot.damp.inequ}~with
\[
n\!=\!0, \quad t\!=\!0, \quad u \equiv 0, \quad v \equiv a, \quad r = \frac{k} 2, \quad c_1 = 1, \quad c_2=0,
\vspace{-1mm}
\]
and by what was shown above, condition \eqref{eq:pencil.subordinate} holds\vspace{-1mm} with
\[
p=\frac k 2, \quad C_p=1, \quad \kappa=0.
\vspace{-1mm}
\]
Hence the results in Theorem \ref{thm:pencil.spec.incl} (iii), (iv) and (v) yield \vspace{-1mm} that
\begin{equation}
\label{eq:dwe.comp.incl}
\sigma(T) \setminus\mathbb{R}\subseteq\Big\{z\!\in\!\mathbb{C}:\operatorname{Re} z\!\le\!0, \, \abs{z}\!\ge\! \sqrt{d}, \, |\operatorname{Im} z| \!\ge\!
\sqrt{\max\{0,\abs{\operatorname{Re} z}^{\!\frac{2}{k}
}\!\!-\!\abs{\operatorname{Re} z}^2\}}\Big\}
\vspace{-2mm}
\end{equation}
\vspace{-2mm}and
\begin{align*}
\sigma(T) \cap \mathbb{R}
\begin{cases}
= \emptyset & \mbox { if } k\in[0,1), \\
\subseteq(-\infty,-\sqrt{d}] & \mbox { if }k=1, \\
\subseteq \!\Big(\!\!-\!\infty,-t_0^{\frac k2}\!+\!\sqrt{t_0^{k}\!-\!t_0 } \,\Big] & \mbox { if } k\in (1,2),
\end{cases}
\end{align*}
where in the latter case $t_0=\max\big\{ \big( k(2-k) \big)^{-\frac 1{k-1}},d\big\}$.
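
For concreteness, the right endpoint in the case $k\in(1,2)$ is easily evaluated
numerically; the following short Python check (sample values $k=1.5$, $d=3$,
purely for illustration) computes $t_0$ and the bound:
\begin{verbatim}
import math

def real_bound(k, d):
    # right endpoint of the enclosure (-infty, .] for k in (1, 2)
    t0 = max((k * (2 - k))**(-1.0 / (k - 1)), d)
    return -t0**(k / 2) + math.sqrt(t0**k - t0)

print(real_bound(1.5, 3))   # approx -0.80
\end{verbatim}
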
\end{exple}
{\bf Acknowledgements.}
{\small
The authors gratefully acknowledge the support of the Swiss National Science Foundation (SNF)
by the grants no.\ $200021\_169104$ and $200021\_204788$.
}
\medskip
{\bf Data availability statement.}
{\small
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
}
\bibliographystyle{acm}
\section{Introduction}
\label{intro}
In this age of increasing specialization, it has become almost impossible to go through
all the research works of an author and judge their merits. This inability necessitates
an objective analysis of an author's research output so that a wider population can
comprehend his/her research merit. This objective analysis is also very helpful in
comparing research outputs of different authors, and has become an important tool for
employers, policy makers and grant commissions.
How do we objectively and comprehensibly analyze an author's research merit? Clearly,
such an analysis should consider many factors: the quality and quantity of
the research output, coauthors' contributions to a researcher's work, his/her ability
to do independent work, a researcher's efficiency in doing collaborative work,
his/her ability to work in different fields, etc. It is possible to carefully
define different parameters/metrics
to quantify each of the above aspects of an author's research performance.
To reflect on a particular aspect, if I may use physics terms, one is required to
extract the ``coarse-grained" information out of a huge amount of ``microscopic"
details associated with an author's publications and their impacts.
At this point it is important to realize that a single parameter or metric cannot
give a full view of an author's scholarship or research merit.
As mentioned above, different parameters can be defined to judge different aspects
of an author's scholarly output. For an efficient and objective description
of an individual's overall research performance,
it is therefore crucial to recognize the most
important aspects of research output and separately quantify them by carefully
defined parameters. These parameters are expected to be
independent, to avoid redundancy, and to have some
simple physical meaning so that they can be comprehended by the wider population.
In this work we identify three most important aspects of an author's research
output - (a) quantity, (b) quality and (c) author's own contribution in his/her
published works. In other words, these three aspects are the collective impact of
the published papers, the author's productivity and the author's share in the total
impact of his/her works.
Clearly we need at least three independent parameters/metrics to
reliably quantify these three different aspects. What are the
three independent parameters which best serve this cause? I will argue in
this paper that the total number of citations ($N_c$), the $h$-index and the
newly defined $I$-index; together, these three parameters do the job satisfactorily.
Let me now briefly discuss why the division of
credit among the coauthors is so important, and why $N_c$ and the $h$-index
are not enough to analyze an author's scholarly activity.
It is not uncommon for the senior and established researchers to
collaborate with many groups and publish a large number of papers per year.
These researchers will normally have higher citations ($N_c$) and the
$h$-index than those who are spending all their time working alone or
in a small group. It would be greatly
unfair to these solo or small-group researchers if an author's research
performance were analyzed only by the parameters $N_c$ and the $h$-index.
It is therefore necessary to quantify a researcher's own role in his/her
success or in other words, how much the researcher could have achieved if
he/she had worked independently.
Here I propose the $I$-index (it can be interpreted as the
{\it Independence}-Index) to solve this problem. We will see in the next
section that, this index has a simple meaning
which will appeal to the wider population. It is defined in such a way
that its value will not directly depend on the most of the subjective
issues like an author's popularity/influence, affiliation, seniority and
career break/low activity (due to some severe medical condition, family
tragedy or importantly a female researcher's motherhood).
It is also argued in this paper that a simple scheme of equidistribution
of credit among the coauthors of a paper will not normally result in a
significant error in calculating the $I$-index.
We will see
that $N_c$, the $h$-index and the $I$-index are three independent
parameters (within their bounds), and together can give a comprehensive
idea about a researcher's overall performance
(see Sec. \ref{sec:3}).
There is an additional advantage in considering the $I$-index
while analyzing an individual's research output. The parameters like $N_c$
and $h$-index can be unethically inflated in different ways. For example, a
number of
researchers working in several independent groups can decide that when a
group publishes a paper, it will give authorships to the members from other
groups even when they do not contribute. It is often complained that junior
authors are sometimes compelled to give authorships to senior non-contributing
researchers for sub-academic reasons. This unethical practice will be
discouraged if the $I$-index is considered
while analyzing an individual's research performance.
It is often stated that, even though it is very
important to quantify an author's own share in the total credit of his/her
published papers, doing so may discourage researchers from pursuing true
collaborations, which are imperative for the progress and betterment of science.
This crucial issue can be mostly resolved if we quantify three different
research aspects by three separate independent parameters. In this
three-parameter framework of research analysis, researchers will be encouraged
to do effective collaborations to improve their $N_c$ and $h$-index. At the
same time they will be probably restrained from resorting to the unethical
practices (mentioned above) if the $I$-index is also considered along
with the other two indices. In this framework of analysis, authors' ranking
can still be done according to their $h$ values (supplemented by $N_c$); the
$I$-index can help resolve the ranking issues when multiple authors have
close values of $h$-index (and $N_c$). In fact, researchers can be ranked in
different ways depending on what importance is given to the $I$-index
(for more discussions, see Sec. \ref{sec:3.1}).
In this work, {\it not} much importance is given to describe all three aspects
of research performance by a single parameter or metric. Here I may
emphasize that any attempt to do so would be crude due to a serious loss of
information. The obscurity or ambiguity resulting from
this loss of information may eventually lead to errors of judgement; as a
consequence, a group of scientists may get undue advantage while the deserving
candidates may be penalized. For example, though the $h$-index \cite{hirsch05}
somewhat successfully quantifies first two aspects of an
author's research output, the $\hbar$-index \cite{hirsch10}, which additionally
attempts to consider
coauthors' role, is not that successful. Besides losing a simple meaning and
ease of calculation, the $\hbar$-index is known to be unfair towards junior
researchers and overly biased towards senior researchers (those with a high $h$-index).
Three carefully-defined independent parameters would
provide us much better view (higher resolution) of a researcher's scholarly
activity than any single parameter can possibly do.
With these facts in mind, one may also like to know what
parameter we should use if, for some practical reason, it is
necessary to rank authors by a single parameter. For this purpose,
in Sec. \ref{sec:3.1}, I define a normalized $h$-index (written as
$\widetilde{h}$-index) which combines the effects/impacts of both $h$-index
and $I$-index in a rational way. This $\widetilde{h}$-index is interpreted as
the possible $h$-index of an author if he/she had worked alone. Subsequently
I also propose $\widetilde{h}_T$-index which
additionally takes care of the seniority issue.
\section{$I$-index: definition and characteristics}
\label{sec:2}
Before I define and discuss the $I$-index, I will first briefly deliberate on two
main {\it assumptions} considered in this work:
\smallskip
\begin{enumerate}
\item[(1)] {\it
The impact of a paper is solely determined by the number of citations it
received. This number of citations is the total credit to be distributed
among the coauthors of the paper.}
\smallskip
\item[(2)] {\it For a multi-author paper, each author is indispensable
and effectively contributes equally if not mentioned otherwise.}
\end{enumerate}
\smallskip
While the first assumption is somewhat easy to comprehend, the second assumption
needs some discussion.
I will argue and try to establish in this work that,
even though the assumption of equidistribution of credit may not be satisfactory
when applied to a single publication, it becomes quite a reasonable assumption
when applied to all the publications by an individual to determine his/her
overall share in the total citations received by those publications.
Controversies and debates over credit distribution are not rare.
Despite the fact that it is crucial to distribute the credit among the
coauthors, the demarcation of contributions is a hopelessly difficult job.
Sometimes even for the coauthors it appears impossible to decide who contributed
what and what weight it carries. Sometimes an author's contribution may be small but
indispensable, without which the paper will not be complete and published. Sometimes
though a senior author's direct contribution to a paper may be small, we have to
remember that he/she generally spends a lot of time writing project proposals and
securing funding, without which there would be no research and no paper.
Any `logical' distribution of credit among the coauthors of a paper is highly
subjective and hence debatable. Different experts evaluating a multi-author paper
would give different credits to a particular author depending on how the evaluation
was done.
This discussion clearly shows that, due to the inherent subjective
nature of the analysis, we cannot have a satisfactory deterministic model for
quantifying an individual's share in the total credit of his/her published papers
(the third aspect of research output, as mentioned before).
If we define an index/metric to quantify this aspect of research output, and a large
number of experts independently estimate the value of the index for an individual,
then they will get different values for the index. Due to this inevitable randomness
(or uncertainty) in the estimated value of the index, we need to develop a realistic
statistical model to predict the most probable (or expectation) value of the index.
Here I define the $I$-index to quantify an individual's share
in the total credit of his/her works. I then discuss two relevant statistical models
(two different statistical approaches),
and show that, within the domain of their validity, the scheme of equidistribution of
credit gives the most probable value of the $I$-index. Frequently in this paper the
most probable value of the index is simply referred to as the $I$-index of an
individual. It may be also mentioned here that the statistical arguments presented
in this work are not generally applicable for a junior author with only a few papers.
\smallskip
{\underline{\it Definition}}: The $I$-index is an author's percentage share in
the total citations received by his/her published papers. If $c_i$ is the number
of citations received by the $i$-th paper and $z_i$ is the author's
expected share of credit for the paper, then his/her $I$-index is given by:
\begin{eqnarray}
\label{iidxdf}
I = \frac{\sum_{i=1}^{N_p} z_i}{\sum_{i=1}^{N_p}c_i} \times 100 \%,
\end{eqnarray}
where $N_p$ is the total number of papers published by the author.
Now if $n_i$ is the number of authors contributed for the $i$-th
paper, then, assuming the equidistribution of credit among the coauthors, we have
$z_i = c_i/n_i$. Consequently, the author's $I$-index would be,
\begin{eqnarray}
\label{iindex}
I = \frac{\sum_{i=1}^{N_p} c_i/n_i}{N_c} \times 100 \%,
\end{eqnarray}
where $N_c = \sum_{i=1}^{N_p}c_i$.
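
Since the $I$-index in Eq. \ref{iindex} involves only the citation counts $c_i$
and the author counts $n_i$, it is trivial to compute. A minimal Python sketch
(with hypothetical sample data) reads:
\begin{verbatim}
def i_index(papers):
    # papers: list of (c_i, n_i); I-index under equidistribution of credit
    Nc = sum(c for c, _ in papers)
    share = sum(c / n for c, n in papers)
    return 100.0 * share / Nc

papers = [(40, 2), (25, 5), (10, 1)]    # sample record
print(round(i_index(papers), 2))        # 100*(20 + 5 + 10)/75 = 46.67
\end{verbatim}
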
\smallskip
In the following I will present two different statistical arguments to demonstrate
the effectiveness of the equidistribution of credit scheme in calculating the $I$-index.
After that I will discuss some of the main features or the characteristics of the index.
\smallskip
{\it Argument} (1): In short, here I will argue that
the value given by Eq. \ref{iindex} is the most probable value
or the expectation value of the $I$-index defined in Eq. \ref{iidxdf};
the statistical error in calculating the $I$-index using
Eq. \ref{iindex} is not normally significantly large.
Consider that a
multi-author paper has $n$ coauthors and received $c$ citations. Let $z^j$ be
the $j$-th author's expected share of credit for the paper;
it is possible to express this quantity in the
following form: $z^j = c/n + e^j$, where $e^j$ is the author's deviation of
share from the average share of coauthors ($c/n$).
Since the total credit to be
distributed among the $n$ authors is $c$, we must have $\sum_{j=1}^{n}z^j = c$.
This implies, $\sum_{j=1}^{n}e^j = 0$. Now using this relation and the fact
that $z^j > 0$, we can get the strict mathematical bounds for $e^j$:
$-\frac{c}{n} < e^j < \frac{(n-1)c}{n}$. In practice we expect the deviation
$|e^j|$ to be small and within some fraction of the average share
(i.e., $|e^j| < c/n$). The relation $\sum_{j=1}^{n}e^j = 0$ confirms that, the
quantity $e^j$ would be positive for some authors
and negative for others ($e^j$ can be zero, of course). Now which authors
deserve to get positive values of $e^j$ and which authors should get negative
values? While this can be hard to decide, it will not be
unreasonable to assume here that, for an individual author with many published
papers, his/her $e^j$ will be positive for some of his/her papers and
negative for others. In other words, sometimes an individual researcher's
contribution to a multi-author paper can be more than the coauthors' average
contribution to the paper, while in some other occasions his/her contribution
to a multi-author paper would be less than the average contribution.
In the following I will use this statistical property of $e^j$ to
calculate an individual's expected share in the total citations received
by his/her papers (the superscript index $j$ will be dropped since we will
focus on a particular author).
Let a researcher's expected share of credit for his/her
$i$-th paper be $z_i = c_i/n_i + e_i$, where $e_i$ is a small
number ($|e_i| < c_i/n_i$). While $c_i/n_i \ge 0$ for all papers,
statistically the number $e_i$ would take positive values for some papers
and negative values for others. When $n_i = 1$, we have $e_i = 0$, since for
a single-author paper its sole author gets all the credit ($z_i = c_i$).
Now when we calculate the researcher's total share in the collective credit
of his/her papers by summing $z_i$ over all the published papers, we get
$C_{share} = \sum_{i=1}^{N_p}c_i/n_i + E_r$, with
$E_r = \sum_{i=1}^{N_p} e_i$. Since $e_i$ is a small quantity
($|e_i| < c_i/n_i$) and statistically it takes both positive
and negative values, we expect that $E_r$ will generally be a very
small number when $N_p$ is large (i.e. $|E_r|\ll\sum_{i=1}^{N_p}c_i/n_i$).
Therefore, if we ignore $E_r$ and just take
$C_{share} \approx \sum_{i=1}^{N_p}c_i/n_i$,
then the resultant error would be normally less than what one might expect to
get from this simple scheme of equidistribution of credit (in somewhat
different context an argument similar in spirit can be found in
Refs. \cite{pepe12,kurtz05}). While calculating the $I$-index,
this resultant error ($E_r$) will then be further weakened due to the
presence of the large denominator factor ($N_c$) in the definition of
the index (cf. Eq. \ref{iidxdf}). We note that the possible
statistical error in calculating the $I$-index using Eq. \ref{iindex} is
$\Delta = \frac{E_r}{N_c}\times100\%$. This error is expected to be negligible
when $N_c$ becomes large.
Let us now try to get a rough estimation of $|\Delta|$ for
an individual.
First consider that the author has $l$ significant papers so
that the total number of citations for these $l$
papers is much larger than the total number of citations for the rest of
the papers (i.e., $\sum_{i=1}^lc_i\gg\sum_{i=l+1}^{N_p}c_i$
when papers are arranged in the descending order of citation count).
The value of $l$ can be assumed to be the $h$-index of the
author. Furthermore consider that $\overline{c}$ and $\overline{n}$ are
respectively the average number of citations and the average number of
authors for those $l$ significant papers. As we discussed before, in
practice we expect $|e_i|$ to be some percentage of the corresponding average,
i.e., $|e_i| \sim \frac{x_i}{100} \times \frac{c_i}{n_i}$ where $x_i$ may take
any value between, say, 0 and 20. This allows us to write
$E_r = \sum_{i=1}^{N_p}e_i \sim
\frac{\overline{c}}{100\overline{n}}\sum_{i=1}^{l}s_i x_i$. Here $s_i$ carries
only the sign of $e_i$; if $e_i$ is positive (negative),
then $s_i = +1$ ($s_i = -1$). Now if we take $\overline{x}$ to be the average
value of $x_i$'s for those $l$ significant papers, then $E_r \sim
\overline{x}(\frac{\overline{c}}{100\overline{n}})\sum_{i=1}^{l}s_i$.
With $N_c = \sum_{i=1}^{N_p}c_i \sim l \overline{c}$, we get the following,
$\Delta = \frac{100}{N_c}\times E_r
\sim \frac{100}{l\overline{c}}\times
\overline{x}(\frac{\overline{c}}{100\overline{n}})\sum_{i=1}^{l}s_i$, or,
$\Delta \sim \frac{\overline{x}}{l\overline{n}}\sum_{i=1}^{l}s_i$. We note
that, if an individual's estimated contribution to a multi-author paper is
more (less)
than the average contribution of coauthors, then $s_i = + 1$ ($s_i = - 1$).
If all $s_i$'s are +1, then $\sum_{i=1}^{l}s_i = l$. On the other extreme,
if all $s_i$'s are -1, then $\sum_{i=1}^{l}s_i = -l$. In principle,
depending on the
details of the author's contributions made to the $l$ significant papers,
$\sum_{i=1}^{l}s_i$ can take any of the following possible values: $\{-l, -l+2,
-l+4, \cdots, l\}$. Since the value of $\Delta$ can be different depending
on the value of $\sum_{i=1}^{l}s_i$, we will now calculate an expected
value of $\Delta$ for an individual author. Noticing that a simple average
over all possible values of $\Delta$ is 0, we will here consider the
root mean square value of $\Delta$ as its expected value. Once we know this
root mean square value (denoted as $|\overline{\Delta}|$), we can
say that, an individual's percentage share of credit for his/her works would
be normally within ($I\pm|\overline{\Delta}|$)\% where the value of $I$ is
given by Eq. \ref{iindex}.
Now to calculate $|\overline{\Delta}|$, we first note that
$s_i$'s are independent variables. This is because an author's amount of
contribution to one paper does not presumably depend on his/her amount of
contribution to another one. This independence of variables allows us to
use some simple statistical results in estimating $|\overline{\Delta}|$.
Now, these $l$ independent variables can take values in $2^l$ possible
ways. For example, all the variables can be 1. This can happen in only one
way ($^lC_0$) and in this case $\sum_{i=1}^{l}s_i = l$. Similarly, one
variable can be -1 and the rest can be 1. This can happen in $^lC_1$ ways and
in this case $\sum_{i=1}^{l}s_i = l-2$. In general $k$ variables can be -1
and the rest ($l-k$) variables can be 1; this can happen in $^lC_k$ ways and
here $\sum_{i=1}^{l}s_i = l-2k$.
This counting helps us write the desired quantity in the following way:
$|\overline{\Delta}| \sim \left(\frac{\overline{x}}{l\overline{n}}\right)
\left(\frac{1}{2^l}\sum_{i=0}^l~^lC_i (l-2i)^2 \right)^{1/2}$. Some
simple calculation shows that,
$\left(\frac{1}{2^l}\sum_{i=0}^l~^lC_i (l-2i)^2 \right)^{1/2} = \sqrt{l}$.
Therefore, we get
$|\overline{\Delta}| \sim \frac{\overline{x}}{\overline{n}\sqrt{l}}$.
Here we see that the value of
$|\overline{\Delta}|$ gets smaller with increasing $l$ (and $\overline{n}$).
While a typical value of $|\overline{\Delta}|$ is expected to be less than 1,
a typical value of $I$ is about 40. So here we conclude that the equidistribution
of credit scheme gives us a reasonably good value of the $I$-index without much
statistical error.
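
The binomial identity used above, and the resulting estimate
$|\overline{\Delta}| \sim \overline{x}/(\overline{n}\sqrt{l})$, can be checked
numerically. The following Python sketch (with sample values of $l$,
$\overline{x}$ and $\overline{n}$; purely illustrative) verifies that the root
mean square of $\sum_{i=1}^{l}s_i$ over random sign choices is $\sqrt{l}$:
\begin{verbatim}
import math
import random

l, trials = 25, 200000
rms = math.sqrt(sum(sum(random.choice((-1, 1)) for _ in range(l))**2
                    for _ in range(trials)) / trials)
print(rms, math.sqrt(l))              # both close to 5

xbar, nbar = 10.0, 3.0                # sample averages (x in percent)
print(xbar / (nbar * math.sqrt(l)))   # typical |Delta| well below 1
\end{verbatim}
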
\smallskip
{\it Argument} (2):
It is possible to give a somewhat better mathematical argument, based on
the {\it Central Limit Theorem} \cite{bhatt00} (CLT), to show that the
value obtained from Eq. \ref{iindex} is the most probable value of the
$I$-index (with an associated small standard deviation which decreases
with the increasing number of significant papers). A very careful
analysis of the situation is needed here. As we discussed earlier,
due to the inherent subjective nature of the analysis, it is hardly
possible to decide who gets how much credit for a multi-author paper.
If different experts independently evaluate the
distribution of credit among the coauthors of a paper, then a particular
author will get different values of credit from the different experts
depending on how the evaluation was done. So the $I$-index for a
researcher, defined in Eq. \ref{iidxdf}, will have different values when
calculated by different experts. Which value
shall we take? It would be recommended to take an average of these values.
So what is the average or expectation value of the $I$-index if a large
number of experts independently calculate it? Using the Central Limit
Theorem we will now show that, within some reasonable
assumptions, the average value of the $I$-index is what one gets by the
scheme of equidistribution of credit (cf. Eq. \ref{iindex}). We are also
interested in knowing the standard deviation about the average value,
since a small deviation will allow us to confidently say that the
average value is what an individual's share of credit is without much
uncertainty.
When a large number of experts independently evaluate the sharing of
credit for a multi-author paper, the values of credit obtained by a
particular author will follow some distribution. That is to say, an
author will get a certain credit with some probability.
Let for the $i$-th paper its $j$-th author gets $y_i^j$ credit with
the (marginal) probability density $K_i(y_i^j)$. In the joint probability
distribution of credits (for a particular paper $i$), the variables
$y_i^j$'s are not totally independent; they obey a singular
constraint: $\sum_{j=1}^{n_i} y_i^j = c_i$ (with $0< y_i^j \le c_i$).
So we see that the (random) variables $y_i^j$'s for
different $j$'s are not independent, even though $y_i^j$'s are totally
independent variables for different $i$'s (for an individual author $j$).
This independence across papers makes it possible to apply statistical theory to determine the
probability distribution for the $I$-function defined for a specific
author:
\begin{eqnarray}
\label{idstfn}
I(Y) = \frac{\sum_{i=1}^{N_p} y_i}{N_c}\times 100.
\end{eqnarray}
Note that the author index $j$ is dropped from the credit variables
$y$'s as we are focussing on a particular author. The symbol $Y$ denotes
the sum of all random variables ($y_i$'s). It may be
noted that the $I$-function, defined for an individual, does not give
a single value since each variable $y_i$ follows some distribution.
The $I$-function gives a value with some probability; we are interested
in knowing the average value of the $I$-function and the standard
deviation associated with it.
Before we go further, let us briefly discuss what the CLT
tells us. Let $X_1$, $X_2$, $\cdots$, $X_n$ be $n$ independent
random variables with arbitrary distributions, but each with a well-defined mean
value ($E[X_i] = \mu_i$) and a well-defined variance ($var(X_i) = \sigma_i^2$).
Now consider the function: $Y = \sum_{i=1}^n X_i$. The CLT assures us that,
in the limit of large $n$, values of $Y$ will follow a {\it normal}
or {\it Gaussian} distribution with a mean given by
$ E[Y] = \sum_{i=1}^n \mu_i$ and a variance given by
$var(Y) = \sum_{i=1}^n \sigma_i^2$. This result from the CLT does not
depend on the details of distributions of $X_i$'s, and is often valid even
for a small $n$ \cite{note1}.
Since the variables $y_i$'s are essentially independent, in the limit of
large number of papers ($N_p$), we can use the above statistical results
to assure ourselves that the $I$-function will be a Gaussian in nature
whose mean and variance can be given in terms of the means and variances
of the variables $y_i$'s. To make things more quantitative,
we now need to consider the means and the variances of $K_i$'s.
Since the variable $y_i$ can take any value between 0 and $c_i$, and
there are $n_i$ authors to share the total credit $c_i$, a reasonable
assumption would be to take the mean value of the
variable $y_i$ to be $c_i/n_i$ (note: if we sum this over all coauthors
of $i$-th paper, we get back the total credit $c_i$). In fact, even if
the mean value of $y_i$ is not strictly $c_i/n_i$, we will still normally
have the same results that follow. Argument for this will be given
soon after I write down the mean and variance of the $I$-function.
Since the range of the variable $y_i$ is finite, its variance will also
be finite (for any regular distribution); let us for the time
being consider $\sigma_i^2$ ($<\infty$) be its variance.
If we now use the CLT results for
$I(Y)$, we get the following: in the limit of large $N_p$, the
values of $I(Y)$ will be distributed in a {\it normal} or
{\it Gaussian} distribution with the mean
$\frac{100}{N_c}\sum_{i=1}^{N_p} c_i/n_i$ (i.e. the $I$-index
defined in Eq. \ref{iindex}) and the variance
$\Sigma^2 = \frac{100^2}{N_c^2}\sum_{i=1}^{N_p}\sigma^2_i$.
It may be noted that here we have used following two general
relations: $E[a X_i] = a E[X_i]$ and $var(a X_i) = a^2 ~var(X_i)$,
where $a$ is any constant.
Now I will argue that even if the mean of $y_i$ is not strictly $c_i/n_i$,
we will still normally have the same mean for the $I$-function.
The reasoning goes exactly like the {\it Argument} (1)
given before.
Statistically, for some variables the corresponding mean can be more
than $c_i/n_i$ (i.e., $E[y_i] \ge c_i/n_i$)
and for others the mean can be less than that
(i.e., $E[y_i] < c_i/n_i$). Therefore when we calculate
the sum of the means of $y_i$'s, we expect that the result will not be
much different than $\sum_{i=1}^{N_p} c_i/n_i$.
Now whatever (small) difference it might have, that
will be further weakened by the large denominator factor $N_c$ present
in the definition of the $I$-function. So here we conclude that, in
all normal cases, the mean value of the $I$-function is
$\frac{100}{N_c}\sum_{i=1}^{N_p} c_i/n_i$ without much significant
deviation.
Now we will analyze whether the $I$-function has a broad or a narrow peak
about its mean value.
A narrow peak about the mean value will allow us to confidently
say that, an author's $I$-index is what one gets from
Eq. \ref{iindex}.
For the distribution of $y_i$, the standard deviation $\sigma_i$ is expected
to depend on $c_i$ (this is because, normally, the larger the range of
a variable, the wider the distribution; here the variable $y_i$ varies
from 0 to $c_i$). We assume $\sigma_i$ to be some percentage of the mean
value $c_i/n_i$ of the distribution, i.e.,
$\sigma_i \sim c_i/n_i \times x_i/100$ ($x_i$ takes values between,
say, 0 and 20). Let us now consider that an author has $l$ number of
significant papers so that the total number of citations for these $l$
papers is much larger than the total number of citations for the rest of
the papers (i.e., $\sum_{i=1}^lc_i\gg\sum_{i=l+1}^{N_p}c_i$
when papers are arranged in the descending order of citation count).
The value of $l$ can be assumed to be the $h$-index of the
author. If $\overline{c}$ and $\overline{n}$ are respectively the average
number of citations and the average number of authors for those $l$
significant papers, then $N_c = \sum_{i=1}^{N_p}c_i
\sim l\overline{c}$ and $\sum_{i=1}^{N_p}\sigma^2_i
\sim \sum_{i=1}^{N_p} (\frac{c_i}{n_i}\frac{x_i}{100})^2 \sim
l \frac{\overline{c}^2}{\overline{n}^2}\frac{\overline{x}^2}{100^2}$,
where $\overline{x}$ is the average value of $x_i$ for those $l$
significant papers. This implies that,
$\Sigma^2 = \frac{100^2}{N_c^2}\sum_{i=1}^{N_p}\sigma^2_i \sim
\frac{100^2}{l^2 \overline{c}^2} \times
\frac{l \overline{c}^2 \overline{x}^2}{\overline{n}^2 100^2}$, or
$\Sigma \sim \frac{\overline{x}}{\overline{n}\sqrt{l}}$.
We see that the value of $\Sigma$ gets smaller with an increase in the
values of $l$ (or $h$-index) and $\overline{n}$.
A typical value of the standard deviation $\Sigma$ is expected to be
less than 1 whereas a typical value of the mean value
of the $I$-function is about 40.
So we conclude that, normally the $I$-function defined in
Eq. \ref{idstfn} has a very sharp Gaussian distribution
about its mean value given by the $I$-index (cf. Eq. \ref{iindex}). This
allows us to say that {\it the most probable value of the $I$-index can be
obtained by a simple scheme of equidistribution of credit among the coauthors
of a paper. Uncertainty (statistical standard deviation) associated with the
value is normally very small (especially for authors with high $h$-index)}.
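
This concentration can also be seen in a small simulation. The Python sketch
below assumes, purely for illustration, that each paper's credit is split by a
symmetric Dirichlet draw (one possible model for the experts' evaluations, not
prescribed by the arguments above); the sample mean of the $I$-function then
sits close to the equidistribution value of Eq. \ref{iindex}, with a small
standard deviation:
\begin{verbatim}
import random

def dirichlet_split(n, conc=5.0):
    # symmetric Dirichlet weights: each component has mean 1/n
    w = [random.gammavariate(conc, 1.0) for _ in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

papers = [(random.randint(5, 60), random.randint(1, 6)) for _ in range(50)]
Nc = sum(c for c, _ in papers)
eq_value = 100.0 * sum(c / n for c, n in papers) / Nc

samples = [100.0 * sum(c * dirichlet_split(n)[0] for c, n in papers) / Nc
           for _ in range(2000)]
mean = sum(samples) / len(samples)
sd = (sum((x - mean)**2 for x in samples) / len(samples))**0.5
print(eq_value, mean, sd)   # mean close to eq_value, sd small
\end{verbatim}
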
\smallskip
In the following I will now discuss some of the main features/characteristics
of the $I$-index.
\smallskip
{\it Characteristic} (a):
Unlike the $h$-index or $N_c$ (= $\sum_{i=1}^{N_p}c_i$), the $I$-index
is expected to be a very slowly varying function of time.
The $h$-index is linear in time while $N_c$ is quadratic
in time \cite{hirsch05}. Similar to $N_c$, $C_{share} = \sum_{i=1}^{N_p}c_i/n_i$
is also expected to be quadratic in time since both $N_c$ and $C_{share}$ are
essentially linear sum of $c_i$'s (see argument given in
Ref. \cite{hirsch05}).
Now we assume that, $N_c = a_1 t+a_2 t^2$ and $C_{share} = b_1 t+b_2 t^2$,
where $t$ is the career span of a scientist (see Sec. \ref{sec:3.1}), and
$a_1$, $a_2$, $b_1$ and $b_2$ are some constants (author dependent).
This leads us to $I$ as a following function of time,
$I = 100\times\frac{C_{share}}{N_c} =
100\times\frac{b_1 t+b_2 t^2}{a_1 t+a_2 t^2}$, or
$I = 100\times\frac{b_2 + b_1/t}{a_2 +a_1/t}$. It is now easy to see why the
$I$-index is expected to be a very slowly varying function of time.
For a similar reason, the $I$-index will not be much affected by career break or
low activity (due to some severe medical condition,
family tragedy or importantly a female researcher's motherhood).
We note that a career break/low activity would affect both $N_c$
and $C_{share}$ in a similar way. So their ratio i.e. the $I$-index is expected
to be mostly free of the effects caused by these important subjective issues.
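
A quick numerical illustration of this slow variation (with hypothetical
constants $a_1$, $a_2$, $b_1$, $b_2$) is the following:
\begin{verbatim}
# I(t) = 100 * (b1*t + b2*t^2) / (a1*t + a2*t^2) flattens quickly
a1, a2, b1, b2 = 10.0, 4.0, 6.0, 1.5   # sample author-dependent constants
for t in (5, 10, 20, 40):
    print(t, 100.0 * (b1*t + b2*t**2) / (a1*t + a2*t**2))
# prints 45.0, 42.0, 40.0, 38.8...; the limit is 100*b2/a2 = 37.5
\end{verbatim}
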
\smallskip
{\it Characteristic} (b):
Normally an author's affiliation, seniority or popularity affects
the citations ($c_i$'s) received by his/her papers. As a result
both $C_{share}=\sum_{i=1}^{N_p} c_i/n_i$ and $N_c = \sum_{i=1}^{N_p} c_i$ would
depend on those factors. Since both the quantities, $C_{share}$ and $N_c$,
are linear functions of $c_i$'s, we expect that both of them will be influenced
in a similar way by those factors. Now as the
$I$-index is defined as the ratio between those two quantities, it is expected
that those subjective issues will not help better one's $I$-index.
The essential functional difference between the $I$-index and
$N_c$ or $h$-index is that, unlike the latter two, the $I$-index is a relative
quantity which effectively quantifies what fraction of the total credit an
individual is entitled to get for his/her papers. Being a relative quantity, we expect
the $I$-index to be mostly independent of all the subjective issues mentioned.
\smallskip
Given the properties of the $I$-index stated above (cf. (a) and (b)), it will not
be unfair to compare
values of this index for authors with different seniorities or
affiliations/popularities.
\smallskip
{\it Characteristic} (c):
The $I$-index can only be improved if
a researcher starts publishing single-author or few-author impactful papers. Here
it may be noted that even if someone manages to improve his/her $h$-index and $N_c$
by doing a large number of collaborations, the $I$-index may not increase in this way,
and sometimes it may decrease!
Unlike $N_c$ or the $h$-index, the $I$-index is not a
monotonically increasing function of time. For example, its value may decrease if
a paper with a large number of authors starts getting highly cited or a researcher
starts publishing a large number of highly collaborative works.
\smallskip
{\it Characteristic} (d):
Unlike $N_c$ and the $h$-index, the $I$-index is a bounded parameter. We see
from Eq. \ref{iindex} that, if $n_i =1$ for all $i$, then $I = 100\%$, and if
$n_i$'s are very large, then $I$ will be very small. For any author this index
takes a value between 0 and 100. Theoretically, $0\% < I \le 100 \%$ for any
fixed non-zero values of $N_c$ and $h$-index.
\section{The triplet: $N_c$, $h$-index, $I$-index}
\label{sec:3}
In this section we will see how $N_c$, the $h$-index and the $I$-index are three
independent parameters and how, together, they can provide us with a comprehensive idea
of an author's overall research merit. We will also see the advantages of choosing
them over other available parameters.
First we note that, irrespective of the values of $N_c$ and $h$-index,
the $I$-index can take any possible value between 0 and 100 depending on the number
of coauthors of the published papers (as explained above, see
{\it Characteristic} (d) of the $I$-index).
Theoretically, $0\% < I \le 100 \%$ for any fixed non-zero values
of $N_c$ and $h$-index.
Now for a fixed non-zero value of the $h$-index, the minimum possible value of
$N_c$ is $h^2$ while the maximum value can be any large number
depending on the number of citations received by the individual
papers within the $h$-core. Theoretically,
$h^2 \le N_c < \infty$ for any fixed non-zero values of the $h$-index and $I$-index.
For a fixed non-zero value of $N_c$, the minimum possible value of the $h$-index
is 1, while the maximum value is $\lfloor \sqrt{N_c} \rfloor$ if the number of
papers $N_p \ge \lfloor \sqrt{N_c} \rfloor$ else the maximum value is $N_p$.
Theoretically, $1 \le h \le h_{max}$ for any fixed non-zero values of $N_c$ and
$I$-index. Here $h_{max} = \lfloor \sqrt{N_c} \rfloor$ when
$N_p \ge \lfloor \sqrt{N_c} \rfloor$,
otherwise $h_{max} = N_p$. It may be noted here that
$\lfloor x \rfloor$ is the usual mathematical floor function.
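For concreteness, the three quantities can be computed in a few lines of code. The
following minimal Python sketch (illustrative only; the helper name and the data
layout, one (citations, number of authors) pair per paper, are my own assumptions)
implements the definitions used above:
\begin{verbatim}
# Minimal sketch: compute N_c, the h-index and the I-index from a
# list of (citations, number_of_authors) pairs, one pair per paper.
def metrics(papers):
    cites = sorted((c for c, n in papers), reverse=True)
    n_c = sum(cites)                                      # total citations
    h = sum(1 for i, c in enumerate(cites, 1) if c >= i)  # h-index
    c_share = sum(c / n for c, n in papers)               # equidistributed credit
    i_index = 100.0 * c_share / n_c if n_c > 0 else 0.0   # I-index in percent
    return n_c, h, i_index
\end{verbatim}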
I will now give three {\it elementary} examples to illustrate that
the three parameters are independent and that each parameter gives important
information which is not contained in the other two parameters. First let us consider
two researchers, each with 10 papers, whose papers are cited
as follows (when arranged in descending order of citation count):
the first paper is cited 10
times, the second one is cited 9 times, and so on (i.e., the $i$-th paper is cited
($11-i$) times). In this example, $N_c = 55$ and $h = 5$ for both authors.
In addition, if we now consider that the first researcher wrote all his/her
papers with one more author (total two authors per paper) and the second
researcher wrote all his/her papers with two more authors (total three authors
per paper), then $I = 50\%$ for the first researcher and $I = 33.33\%$ for the
second researcher.
This shows that, even when two researchers have the same $N_c$ and $h$ values, they
can have quite different $I$ values depending on the number of coauthors.
A smaller value of $I$ signifies that the researcher does more collaborative work.
In the second example, consider that the first
researcher has 12 papers, each cited 8 times and coauthored by two while the second
researcher has 10 papers, each cited 8 times and coauthored by two. In this case,
$h = 8$ and $I = 50\%$ for both the researchers but $N_c = 96$ for the first
researcher while $N_c = 80$ for the second researcher. So here, the total scientific
impact of the first researcher is more than that of the other researcher even though
their $h$ and $I$ values are the same.
In the third example,
consider that the first researcher has 10 papers, each cited 8 times and coauthored
by two while the second researcher has 20 papers, each cited 4 times and coauthored
by two. In this case, $N_c = 80$ and $I = 50\%$ for both the researchers but $h = 8$
for the first researcher and $h = 4$ for the other researcher. In this example,
the first researcher has more significant papers than the other researcher, or in
other words, the first researcher's quality of research work is better than that of
the second researcher even though their $N_c$ and $I$ values are the same.
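As a quick check, the sketch given above reproduces all three elementary examples
(the expected values in the comments are those quoted in the text):
\begin{verbatim}
ex1a = [(11 - i, 2) for i in range(1, 11)]  # N_c = 55, h = 5, I = 50%
ex1b = [(11 - i, 3) for i in range(1, 11)]  # N_c = 55, h = 5, I = 33.33%
ex2a = [(8, 2)] * 12                        # N_c = 96, h = 8, I = 50%
ex2b = [(8, 2)] * 10                        # N_c = 80, h = 8, I = 50%
ex3a = [(8, 2)] * 10                        # N_c = 80, h = 8, I = 50%
ex3b = [(4, 2)] * 20                        # N_c = 80, h = 4, I = 50%
for papers in (ex1a, ex1b, ex2a, ex2b, ex3a, ex3b):
    print(metrics(papers))
\end{verbatim}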
From the above discussions it is clear that $N_c$, $h$-index and the $I$-index can
take values independently (within their bounds).
These three parameters or metrics quantify the three
most important aspects of a researcher's scholarly output: quantity, quality and a
researcher's own role in his/her overall success. Each of these independent parameters
carries important new information; if we miss one, the description of a researcher's
merit will be highly incomplete. This shows why a single parameter, however smartly
defined, would be insufficient and too coarse to describe a researcher's scholarly output.
Now I will discuss why I choose
$N_c$, the $h$-index and the $I$-index, rather than other possible parameters, to quantify
the three separate aspects of an author's research output.
The $h$-index is known to be the best single parameter which somewhat successfully
quantifies the first two aspects of one's research output, i.e., the collective impact
and the productivity (or in other way, the quality and quantity).
But most of the time this parameter highly underestimates
the total impact of an author's research output. For example, two authors
having the same value of the $h$-index can have widely different collective impact if one of the
authors has some very highly cited papers within his/her $h$-core.
This necessitates choosing a separate
parameter to represent the collective impact of an author's research output; the total
citations or $N_c$ is the natural choice for this purpose. The advantages
of using these two parameters are that they have simple and easy-to-calculate definitions
and can provide a very efficient and comprehensive description of the first two aspects of
one's research output.
The concern for accounting coauthors' contributions is not new and has been considered
in many previous works
\cite{hirsch10,pepe12,kurtz05,schreiber09,batista06,egghe08,pal15,ausloos15,
biswal13,galam11,liu12}.
Now I will argue why the $I$-index does a reasonably good job in
quantifying the third aspect of one's research output, i.e., an author's own contribution
in his/her published works. In most of the related works I know of, attempts were made to
quantify all three aspects of one's research output by a single unbounded parameter or by
a coauthor ranking algorithm. But as we have emphasized several times,
any single parameter (or any
ranking algorithm which assigns a score to each coauthor) will be unsatisfactory in
describing an author's research output due to a serious loss of information.
Moreover, it is not clear from those works
whether the consideration of coauthorship would discourage true collaborations (for
a notable exception, see Ref. \cite{hirsch10}). No bibliometric indicator should
discourage scientists from doing honest collaboration which is imperative for the
progress and betterment of science. The ranking algorithms have additional problems.
Generally they are
computationally expensive for a large number of authors sharing an even larger number of
papers. In practice hundreds of authors can be connected to each other by
a coauthorship network and they may share thousands of papers
(sometimes it is not even practical to get a complete set of authors sharing papers
among them). Since in principle the ranking algorithms should
simultaneously rank all these authors (and also papers) by solving equations of large
matrices (representing authors, papers and their interconnections), it looks very unlikely
that these algorithms can practically resolve the coauthorship issue.
Additionally, due to the complex computation (normally involving iterative matrix
manipulations \cite{pal15}), the ranking loses intuitive meaning (or comprehensibility)
for the wider population.
In contrast to these works, in this paper we do not try to quantify all the
aspects of one's research output by a single parameter. The $I$-index proposed here is a
complementary metric, meant to quantify only one aspect of an individual's research output.
It has a simple intuitive meaning (cf. Eq. \ref{iidxdf}),
is easy to calculate, and is argued to provide a reasonably good measure
even with a simple scheme of equidistribution of credit (see {\it Argument} (1) and
{\it Argument} (2) given in Sec. \ref{sec:2}). The $I$-index, being a bounded
parameter (varies from 0 to 100), will be very helpful in judging authors according to
their performance in the third aspect of research output. A high value of the $I$-index
signifies that the author works more independently (see also discussions in
Sec. \ref{sec:3.1} and Sec. \ref{sec:4}).
An important advantage of separately considering the $I$-index besides
$N_c$ and the $h$-index is that it will discourage the unethical practice of giving/taking
authorship to/by non-contributing authors. This will probably not, though, deter
scientists from doing true collaborations, as otherwise their $N_c$ and the $h$-index
will not improve (see also Sec. \ref{sec:3.1}).
\subsection{Ranking of authors}
\label{sec:3.1}
It is always difficult to make a merit list for authors. But when it is needed, how do we
do it? Here I will discuss some practical ways of ranking authors.
First I will discuss how this can be done using the three independent parameters
considered in this paper.
In fact, using three independent parameters, the ranking can be done in different ways
depending on which aspect of research is considered to be more important (for, say,
a particular job).
Three independent parameters naturally give more freedom to employers to choose
candidates matching their requirements. For example, considering the $h$-index to be the most
important parameter among the three, one can first try to rank authors according to their
$h$ values. Surely there will be many authors with the same (or close) $h$ values. One of the
reasons for the occurrence of degeneracy is that the $h$-index takes only discrete integer
values. The authors with the same or close $h$ values can then be ranked using the $I$-index. An
author with a better $I$ value should rank higher. In the next step, if these two parameters
do not help to resolve the ranking issue among a group of researchers, then their $N_c$
values can be used to see who is the better performer.
If for a particular job employers are looking for a researcher who can work independently,
then they probably can give more importance to the $I$-index. In this case, among the
researchers with their $h$ values within a fixed range, the employers can choose the
person who has the highest $I$ value.
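Such a tiered ranking is straightforward to implement. A minimal sketch (the author
records and the strict lexicographic tie-breaking are illustrative assumptions of mine;
in practice one would first group ``close'' $h$ values by some chosen tolerance):
\begin{verbatim}
# Rank by h first; break ties by I, then by N_c (all descending).
authors = {
    "A": {"h": 55, "I": 43.26, "Nc": 29471},   # hypothetical records
    "B": {"h": 55, "I": 60.00, "Nc": 21000},
    "C": {"h": 60, "I": 19.85, "Nc": 24774},
}
key = lambda a: (authors[a]["h"], authors[a]["I"], authors[a]["Nc"])
print(sorted(authors, key=key, reverse=True))  # ['C', 'B', 'A']
\end{verbatim}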
As I already emphasized, a single parameter/metric will not be sufficient and reliable
in describing an author's research merit. With this fact in mind, we now ask, which
parameter shall we use if for some practical reasons it is needed to rank authors
by a single parameter? For this purpose I will now define a normalized $h$-index
(written as $\widetilde{h}$-index)
which combines the effects/impacts of both $h$-index and $I$-index in a rational
way. Subsequently I also propose $\widetilde{h}_T$-index which
additionally takes care of the seniority issue.\\
\noindent
{\underline{\it $\widetilde{h}$-index and $\widetilde{h}_T$-index}}:
Here the idea is to estimate how much an author would have achieved if he/she had worked
alone. Roughly, an author would have $N_a = N_c * I/100$ citations for his/her works
if he/she had worked alone (see the definition of the $I$-index, Eq. \ref{iindex}).
It is shown in Ref. \cite{hirsch05}
that the total number of citations ($N_c$) is proportional to $h^2$ (this is a
general trend, with the proportionality constant varying for different authors).
Therefore, $N_a = g_1 h^2 * I/100$, where $g_1$ is the proportionality constant.
Now if $\widetilde{h}$ is the expected $h$-index of the author if he/she had worked
alone, then $N_a$ should be proportional to $\widetilde{h}^2$, i.e.,
$N_a = g_2 \widetilde{h}^2$ with $g_2$ being another proportionality constant.
Comparing two expressions of $N_a$, we get the following relation:
$\widetilde{h} = (\sqrt{\frac{g_1}{g_2}})~h*\sqrt{I}/10$. It is not easy to
find any simple relation between the two constants $g_1$ and $g_2$. Here I present
a rough argument to show that, for a given individual, the values of these two
constants would not be much different. According to the simplest possible model
discussed in Ref. \cite{hirsch05}, $g_1 = \frac{(1+c/p)^2}{2c/p}$, where the
researcher publishes $p$ papers per year and each published paper gets $c$
new citations per year in every subsequent year. Now if the researcher had worked
alone, the value of $p$ would have been smaller. Since an effective collaboration
enhances the quality of papers, we can expect that $c$ would also get smaller if the
researcher works alone. Due to the collective or cooperative effect of
collaboration, the sum of impacts of independent individuals is expected to be
smaller than the total impact of the works done in collaboration by those individuals.
Going by this argument, we can say that the ratio $c/p$ will not be much different
depending on whether a researcher works alone or in collaborations. This implies
that the value of $g_2$ is expected to be reasonably close to $g_1$.
This is in accordance with the fact
that, irrespective of the collaboration details of researchers, the proportionality
constant $g_1$ takes values from a small range of numbers
(between 3 and 5 \cite{hirsch05}). Now since the square root of a positive number is
always closer to 1 than the number itself ($|1-\sqrt{x}|\le|1-x|$ with $x>0$),
we expect that $\sqrt{\frac{g_1}{g_2}}$ will be very close to 1 even though
$\frac{g_1}{g_2}$ is somewhat away from 1. Now taking
$\sqrt{\frac{g_1}{g_2}} \approx 1$, we get the following formula for
the normalized value of the $h$-index,
\begin{eqnarray}
\widetilde{h} = h*\sqrt{I}/10.
\label{hnrm}
\end{eqnarray}
We note that, if an author publishes only single-author papers,
then his/her $I = 100$, and consequently his/her $\widetilde{h} = h$. This is in
accordance with what one expects for a researcher who always works alone. The
experimentalists do more collaborative work than the theorists; so, compared to a
theorist, an experimentalist will normally have a higher value of $h$ and a
lower value of $I$.
This trend can be seen in the next subsection on results (see Table \ref{tbl1}).
For a theorist and an experimentalist of presumably same calibre, their values of
$\widetilde{h}$-index should be very close even though their $h$ and $I$ values
are quite different. Interestingly this is what we observe in our analysis of
some established authors (see Sec. \ref{sec:3.2}).
Since $\widetilde{h}$ depends on both $h$ and $I$, to improve
the value of the $\widetilde{h}$-index a researcher needs to improve both those parameters,
or at least improve one parameter while keeping the other relatively fixed.
An advantage of considering the
$\widetilde{h}$-index over the original $h$-index is that it will discourage researchers
from engaging in the unethical practice of giving/taking authorship without substantial
contribution. If they do, their $I$-index will be reduced and, as a consequence, their
$\widetilde{h}$-index will also be badly affected. But this will probably not dissuade
researchers from doing true collaboration, as otherwise their $h$-index will not improve much
and as a result their $\widetilde{h}$-index will not get better.
It should be noted that the $\widetilde{h}$-index is not an independent parameter;
it is a derived parameter/metric proposed here to help rank authors using a single
parameter. This parameter does not take into consideration the issue of seniority or
length of research career. This can be done by dividing $\widetilde{h}$ by the length
of an author's research career. If $T$ is the time (in years) between the first
publication (at least once cited) and the last published one, then we define,
\begin{eqnarray}
\widetilde{h}_T = \widetilde{h}/T.
\label{hnrm_t}
\end{eqnarray}
This parameter ($\widetilde{h}_T$) takes into consideration both the issues of
coauthorship and the length of research career. This simple division by
career length has some problems, though. It will be unfavorable for authors who have taken
career breaks. At the same time it will favor authors whose careers have ended.
This second problem can be somewhat circumvented by taking $T$ as the time between
the first publication and
the time of data collection. Here it may be noted that, in a mathematical sense,
$\widetilde{h}_T$ is not a derived parameter since $T$ is an independent parameter.
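Both derived metrics are trivial to evaluate; the following sketch (variable names
are mine) spot-checks the first row of Table \ref{tbl1}:
\begin{verbatim}
import math

def h_tilde(h, i_index):
    # Normalized h-index, Eq. (hnrm): h * sqrt(I) / 10
    return h * math.sqrt(i_index) / 10.0

def h_tilde_t(h, i_index, t_years):
    # Career-length normalized variant, Eq. (hnrm_t)
    return h_tilde(h, i_index) / t_years

print(round(h_tilde(179, 74.35), 1))  # 154.3, as in Table 1 (E. Witten)
\end{verbatim}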
\begin{table*}[tp]
\caption{The values of the parameters/metrics $N_c$, $h$, $I$, $\widetilde{h}$ and
$\widetilde{h}_T$ are given for some established
authors. Age of an author is given within bracket just after his/her name. Under
each author's name his/her specialization and major awards (if any) are given.
Here,
TP = Theoretical Physics, EP = Experimental Physics, HE = High Energy physics,
CM = Condensed Matter physics, AMO = Atomic, Molecular and Optical physics,
QI = Quantum Information science, FM = Fields Medalist, NL = Nobel Laureate.}
\begin{center}
\begin{tabular}{l|c|c|c||c|c}
\hline
& & & &\\
Author & $N_c$ & $h$-index & $I$-index (\%) &
$\widetilde{h} = h*\sqrt{I}/10$ &
$\widetilde{h}_T=\widetilde{h}/T$\\
& & & &\\ \hline
E. Witten (63) & & & & &\\
(TP-HE, FM) & 166563 & 179 & 74.35 & 154.3& 3.9\\
& & & & & \\ \hline
A. Sen (59) & & & & & \\
(TP-HE) & 25967 & 85 & 81.62 & 76.8 & 2.3\\
& & & & & \\ \hline
C.W.J. Beenakker (55) & & & & & \\
(TP-CM) & 29983 & 83 & 50.12 & 58.8 & 1.8\\
& & & & & \\ \hline
D.J. Gross (74) & & & & & \\
(TP-HE, NL) & 44292 & 83 & 45.64 & 56.1 & 1.1\\
& & & & & \\ \hline
T.W. H\"{a}nsch (73) & & & & & \\
(EP-AMO, NL) & 51719 & 107 & 23.97 & 52.4 & 1.1\\
& & & & & \\ \hline
C.L. Kane (52) & & & & & \\
(TP-CM) & 29471 & 55 & 43.26 & 36.2 & 1.3\\
& & & & & \\ \hline
A.E. Nelson (57) & & & & &\\
(TP-HE) & 17153 & 52 & 37.81 & 32.0 & 0.9\\
& & & & & \\ \hline
C. Monroe (49) & & & & & \\
(EP-AMO-QI) & 24774 & 60 & 19.85 & 26.7 & 1.0\\
& & & & & \\ \hline
\end{tabular}
\end{center}
\label{tbl1}
\end{table*}
\subsection{Some results}
\label{sec:3.2}
I have estimated three independent parameters/metrics ($N_c$, $h$ and $I$)
for some established researchers. The list was prepared carefully to
represent researchers working in different fields and belonging to different age groups
(there is a 25-year age gap between the youngest and the oldest researcher).
The results can be found in Table \ref{tbl1}. In the last two columns of the table
the values of the other two parameters ($\widetilde{h}$-index and $\widetilde{h}_T$-index)
are also given. As I discussed in Sec. \ref{sec:3.1}, ranking can be
done in different ways depending on how we analyze the research output. In addition,
since the listed researchers work in different (sub)fields, it may not be
appropriate to compare their performance without considering the publication/citation
trends in the (sub)fields (for a discussion,
see \cite{pepe12}). In any case, for the completeness of our analysis in this paper,
they are ranked in the table according to their $\widetilde{h}$ values.
We may note here that, generally,
those having a high $\widetilde{h}$-index have a high $\widetilde{h}_T$-index. Of two
authors with close $\widetilde{h}$ values, one may have a lower $\widetilde{h}_T$ value
than the other if he/she takes a career break for some reason. This is because a
career break acts more harshly on $\widetilde{h}_T$ than on $\widetilde{h}$.
We also see from the table that the experimentalists have lower values of the $I$-index
than the theorists. This is because experimentalists generally do more collaborations
than theorists (an experimental paper normally has more authors than a theory paper).
For the same reason, generally the experimentalists have higher $h$-index than
the theorists of their age group. This discipline dependency of these two parameters
is the reason we choose $\widetilde{h}$-index to decide the ranking in the table
($\widetilde{h}$-index combines the effects/impacts of both $h$-index and $I$-index
in a rational way).
It is interesting to note here that, for the two Nobel Laureates (D.J. Gross, a
theorist, and T.W. H\"{a}nsch, an experimentalist), the research output measured
by $\widetilde{h}$ or $\widetilde{h}_T$ is the same or very close even though
their $h$-index and $I$-index are quite different.
The parameters in the table
are extracted from the data collected manually in July, 2015
from {\it Google Scholar Citations}. In the calculation of the parameters, not only the
original research papers but also other scholarly works like review articles and books
are considered. Some practical issues may appear while estimating these parameters.
For example: different chapters of a book can be written by different authors. In this
case if the total citations of the book is available, then that citation number can be
first divided by the number of chapters and next this credit per chapter can be divided
among the coauthors of a chapter to determine how much credit one author should get.
If the detailed author information of a scholarly work is missing, then the $I$-index
should be calculated simply ignoring that particular work.
\section{Conclusion}
\label{sec:4}
In this paper I have tried to establish a rational and objective
framework for analyzing scientists' research outputs. The three most important aspects of
someone's research performance have been identified: collective impact, productivity
and an author's own contribution to his/her published works. It is emphasized that we
need three independent parameters/metrics to quantify those three separate aspects
reliably. A single parameter will be insufficient and too coarse to describe an
author's research performance due to a serious loss of information.
A practical advantage of using three independent parameters for analysis is that it
will give employers more freedom to choose candidates according to their requirements.
I have suggested the following three parameters for the purpose: the total number
of citations ($N_c$), the $h$-index and the newly defined $I$-index. The $I$-index
is defined as an author's claim for the percentage of total citations received by
his/her papers. Besides its simple and comprehensible meaning, this index is very
easy to calculate and argued to be almost independent of most of the subjective issues
like affiliation, seniority or career breaks. It is also argued using the
{\it central limit theorem} that,
the most probable value of the $I$-index can be obtained by the simple scheme of
equidistribution of credit among the coauthors of a paper. Uncertainty associated with
the value is normally very small.
It will be highly unfair to researchers working alone or in small groups if we consider
only $N_c$ and the $h$-index to judge their performance. Researchers sharing time with
many collaborators will normally have a large number of papers and
consequently a higher $N_c$ and $h$-index. So it is crucial to distribute credit among
the coauthors and measure how much one has contributed to one's own scientific achievements.
The new index (i.e., $I$-index)
proposed in this paper tries to address this crucial issue.
A larger value of the $I$-index signifies that the author works more independently
(this is why the $I$-index can be considered as the {\it Independence}-index).
A practical advantage of considering this $I$-index along with $N_c$ and the $h$-index
is that it will discourage scientists from engaging in the unethical practice
of giving/taking authorships to/by non-contributing scientists. This will, though,
probably not deter scientists from doing true collaborations, as otherwise
their $N_c$ and the $h$-index will not improve.
In this work we have also defined $\widetilde{h}$-index, and subsequently $\widetilde{h}_T$-index,
to rank authors if for some practical reasons it is needed to rank them using a single parameter.
Unlike the $h$-index, the $\widetilde{h}$-index takes into consideration the crucial issue of
coauthors' contributions, while $\widetilde{h}_T$-index additionally takes care of the seniority
issue.
Since a low value of the $I$-index signifies a more collaborative nature
of one's work, we can define a {\it Collaboration}-index or $C$-index, as a complementary index
to the $I$-index: $C = 100 - I$. Note that, like $I$, $C$ also takes values between 0 and 100.
A larger $C$ value for a researcher indicates that his/her work is more collaborative in nature.
In a future study, the average $C$-index for the scientists working in a
particular field or in a particular institute can be estimated; this will tell us in which
field or institute scientists do more collaborative works than others. Similarly the average
values of the $C$-index for different countries can be calculated to see in which country
scientists do more collaborative work.
\begin{acknowledgements}
The author thanks CEFIPRA for financial support.
\end{acknowledgements}
\section{Introduction}
Precise determination of solar large-scale velocity patterns can provide information
about the transport of angular momentum in the solar convective zone and important
observational constraints for solar dynamo models.
One possibility to determine the solar velocity field is to trace the motions of
structures visible at the surface of the Sun. Most often sunspots
and sunspot groups were used as tracers \citep[$e.g.$][among many others]{howard1984,
balthasar1986,howard1991,lustig1994,pulkkinen1998a,woehl2001,zuccarello2003,sudar2014,
sivaraman2010,mandal2017,sudar2017}. Besides tracing sunspots, other methods have been used
for assessment of solar large scale flows, for instance: Doppler measurements
\citep[$e.g.$][]{hathaway1996} and tracing coronal bright points (CBP)
\citep[$e.g.$][]{sudar2016}. In recent years the observations of solar velocity field were
revolutionized by helioseismology \citep{hanasoge2015}.
While all the mentioned methods give very similar results for solar rotation, the
results obtained for meridional flows are controversial, as described in
\citet{hathaway1996} and \citet{sudar2017}.
The Doppler measurements as well as observation and analysis of global oscillations
reveal that there is a poleward meridional circulation in the near surface layers of the
Sun in both hemispheres. This is in agreement with the result of most theoretical models
which predict unicellular meridional circulation directed poleward at the top and
equatorward at the bottom of the convection zone \citep{brun2009}.
Observations utilizing tracers show a more complicated pattern of meridional flows.
All kinds of meridional flow directions were found (poleward, equatorward, towards and away from the
center of activity). However, the results for meridional circulation using tracers are influenced
by several effects. First, the active regions locally modify the amplitude and direction
of meridional circulation \citep{haber2004,svanda2008}. Next, the movement of the (magnetic)
tracers does not represent the movement of the solar surface plasma, but the movement of the
layer where the observed features are anchored, which might change with time \citep{ruzdjak2004},
and finally, the solar meridional circulation might be variable as pointed out by \citet{hathaway1996}.
Differential rotation of the Sun can be explained as rotationally influenced turbulence in the
convective zone. The turbulence leads to the formation of large-scale turbulent fluxes
\citep{Rudiger2004}. The angular momentum fluxes are proportional to the velocity correlation
tensor and are given by:
\begin{equation}
q_{ij}=\overline{v^\prime_i v^\prime_j}
\end{equation}
where $q_{ij}$ is the Reynolds stress tensor, $\vec v$ is the velocity, the overbar denotes azimuthal averaging,
and primes denote variations about the averages. The latitudinal flux of the angular momentum
is described by the horizontal component of the Reynolds stress tensor $q_{\lambda b}$,
which can be calculated as the covariance of the meridional motion and the rotation velocity residuals.
The rotation and meridional circulation of tracers can easily be determined separately.
Therefore, contrary to meridional flow analysis, tracers are a suitable tool for analysing
the latitudinal flux of the angular momentum, {\it i.e.} the turbulent Reynolds stress, as the main driver of differential rotation.
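In practice, once paired measurements of the meridional velocity and the rotation
velocity residual are available, $q_{\lambda b}$ is simply their sample covariance.
A minimal Python sketch (the array names are illustrative, not from the pipeline
described below):
\begin{verbatim}
import numpy as np

def q_lambda_b(v_mer, dv_rot):
    # Covariance of meridional velocities and rotation rate
    # residuals (horizontal Reynolds stress component), m^2 s^-2.
    v_mer, dv_rot = np.asarray(v_mer), np.asarray(dv_rot)
    return np.mean((v_mer - v_mer.mean()) * (dv_rot - dv_rot.mean()))
\end{verbatim}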
Kanzelh\"ohe Observatory for Solar and Environmental Research (KSO) was
founded during WW II as one station within a network of observatories
for observing ``solar eruptions" (flares) which were interfering with
radio communications.
Nowadays KSO is affiliated with the University of Graz and performs regular
high-cadence full-disk observations of the
Sun in the H$\alpha$, the Ca {\sc ii} K spectral lines, and in white
light with a coverage of about 300 observing days {\it per} year \citep{veronig2016}.
KSO white light images and sunspot drawings have been used by different authors for measuring
the photospheric velocity fields. The data before 1985 were used,
{\it e.g.} by \citet{lustig1983}, \citet{hanslmeier1986}, \citet{lustig1987},
\citet{balthasar1988} and \citet{lustig1991}.
\citet{poljancic2010} and \citet{poljancic2011} compared the GPR, USAF/NOAA,
DPD and KSO sunspot databases and found that DPD and KSO data are, in some respect, more
accurate than the USAF/NOAA data. Consequently, the determination
of the heliographic positions from the sunspot drawings and full-disc white
light CCD images was undertaken. The procedure and results for solar rotation
in the period 1964-2016 are presented in \citet{poljancic2017}.
Here we present the analysis of meridional motions and Reynolds stress determined
from KSO data in the same period 1964-2016.
\section{Data and Analysis}
The drawings of the whole Sun are made at KSO using a refractor telescope ($d/f$=110/1650 mm).
Additionally, from 1989 onwards the white light photographs of the whole Sun are made
with a refractor telescope ($d/f$=130/1950 mm), where the photographic camera was replaced with a
CCD camera in July 2007. The positions of the sunspot groups were measured by two methods:
interactive and automatic.
The interactive procedure was applied for data from 1964 to 2008, where the ``Sungrabber" software
package \citep{sungrabber} was used by two independent observers to measure the positions of group centers
on the sunspot drawings made at KSO.
For the automatic procedure, morphological image processing based on the ``STARA" algorithm \citep{watson2011}
was used for the determination of the positions of sunspot groups. The automatic method was applied
to the data observed with the digital cameras first of which was installed in July 2007.
Since Solar Cycle 23 ended in 2008, to have a homogeneous dataset within a solar cycle, only
the data from 2009-2016 were obtained by the automatic method. A detailed description of both
methods and the availability of the data is given in \citet{poljancic2017}.
To check how the two methods compare with each other, the
drawings of the whole Sun made during 2014 (solar maximum) were measured using Sungrabber.
Descriptive statistics of meridional motions and rotation
rate residuals calculated using both methods are presented in
Table \ref{statistic}.
\begin{table}[h]
\caption{The
measures of central tendency and dispersion for meridional motions and rotation
rate residuals obtained by interactive and automatic methods. Prior to calculation
the
outliers were discarded. Stdev stands for standard deviation, IQR for interquartile range,
Skew for skewness and Kurt for kurtosis}
\label{statistic}
\begin{tabular}{lrcrrrrrr}
\hline
Quantity & Method & N & Mean & Median & Stdev & IQR & Skew & Kurt \\
$v_\mathrm{mer}$ (m\,s$^{-1}$) & interactive & 961 &2 &--1& 74 & 83& 0.08& 1.7 \\
$v_{\rm mer}$ (m\,s$^{-1}$) & automatic & 792 &--1&--1&73&57&--0.12&4.2 \\
$\Delta v_{\rm rot}$ (m\,s$^{-1}$)& interactive & 961 &76&77&167&207&--0.12&0.91 \\
$\Delta v_{\rm rot}$ (m\,s$^{-1}$)& automatic & 792 &5&--6&182&144&0.04&1.4 \\
\hline
\end{tabular}
\end{table}
A data set of 45914 times and positions of sunspot groups covering the period from January 1964 to April 2016
was used to calculate meridional and rotational speeds. For the sunspot groups for which the
Central Meridian Distance (CMD) was less than 58$^\circ$, which corresponds to about 0.85
of the projected solar radius \citep{balthasar1986}, rotation speeds were calculated by division
of CMD differences by elapsed time and meridional motions were calculated by division of
latitude differences by elapsed time. This resulted in 33817 rotation and
meridional velocity values. The obtained synodic rotation velocities were transformed
to sidereal ones by the procedure described in \citet{skokic2014}.
Finally, to account for misclassification
and other errors, an iterative fitting method was used, similar to the one
used in \citet{sudar2016} and \citet{sudar2017}.
Rotation rate residuals were calculated by
subtracting the individual rotation velocities from the average rotation profile:
\begin{equation}
\omega(b)=A+B\sin^2b,
\label{rotprofile}
\end{equation}
where $A$ and $B$ are differential rotation parameters in [$^\circ$\,day$^{-1}$] and $b$ is the
heliographic latitude in [$^\circ$].
Robust statistics of the rotation rate residuals and meridional velocities were used,
and values lying outside 3.5 interquartile ranges from the median were considered
as outliers and discarded. Since the removed outliers were contributing to the mean
rotation profile derived, the process is iteratively repeated until no outliers are
present in the data. The procedure converges very fast and after 4 iterations
no outliers were present. Data whose absolute values of rotation rate
residual and meridional velocity were larger than 4.2$^\circ$\,day$^{-1}$ and 2.3$^\circ$\,day$^{-1}$,
respectively, were discarded. After applying all these reduction steps 32616 data points
are left for further analyses.
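A schematic version of this iterative filtering (shown here for the rotation rate
residuals; the meridional velocities are treated analogously, and the variable names
are illustrative) could look as follows:
\begin{verbatim}
import numpy as np

def iterate_rejection(b, omega, k=3.5):
    # Fit omega(b) = A + B sin^2(b), reject points whose residuals
    # lie more than k interquartile ranges from the median, refit,
    # and repeat until no further outliers are found.
    keep = np.ones(len(b), dtype=bool)
    while True:
        x = np.sin(np.radians(b)) ** 2
        B, A = np.polyfit(x[keep], omega[keep], 1)
        resid = omega - (A + B * x)
        q1, med, q3 = np.percentile(resid[keep], [25, 50, 75])
        good = np.abs(resid - med) <= k * (q3 - q1)
        if np.array_equal(good & keep, keep):
            return A, B, keep
        keep &= good
\end{verbatim}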
In Figure \ref{rprofile} the
obtained differential rotation profile is presented, the best fit differential
rotation parameters are: $A=14.5177\pm 0.0096^\circ$\,day$^{-1}$ and
$B=-2.800\pm 0.088^\circ$\,day$^{-1}$. The averaged 2$^\circ$ latitude bin
values of $\omega(b)$ are also shown. The errors for bins at higher
latitudes are quite large due to the small number of sunspots present at these latitudes.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{rotProfile.eps}
\caption{Differential rotation profile obtained from KSO data from 1964 to 2016.
Points with error
bars are averaged 2$^\circ$ latitude bin values and the best fit profile (Equation \ref{rotprofile}) is
shown with the solid line.}
\label{rprofile}
\end{figure}
When analysing latitudinal dependencies, the latitude of the first measurement was assigned
to each rotational and meridional velocity \citep{olemskoy2005} as was done in
\citet{sudar2014,sudar2015,sudar2016,sudar2017} to avoid false meridional flows.
The rotation rate residuals and meridional
velocities were transformed from angular values to linear ones. Taking $R_\odot=6.96\times 10^8$\,m,
the conversion factors are 140.6 and 140.6\,$\cos(b)$ m\,s$^{-1}$\,($^\circ$\,day$^{-1}$)$^{-1}$
for meridional velocities and rotation velocity residuals, respectively, where the latitude
of the first measurement was taken into account. In addition, the meridional speeds are
transformed so that a negative value of the meridional speed represents motion toward the
equator for both solar hemispheres. This is achieved by changing the sign of meridional
velocities for the southern solar hemisphere, where negative values of latitude are assigned.
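The conversion factor follows directly from the adopted solar radius; as a quick check:
\begin{verbatim}
import math
R_SUN, DAY = 6.96e8, 86400.0          # m, s
print(R_SUN * math.pi / 180.0 / DAY)  # ~140.6 m/s per deg/day
# Rotation residuals carry an extra cos(b) factor from the smaller
# circumference of the latitude circle at heliographic latitude b.
\end{verbatim}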
\section{Results}
\subsection{Latitudinal Dependence of Meridional Motions and Rotation Velocity Residuals}
\begin{figure}
\centering
\includegraphics[height=0.33\textwidth,angle=-90]{vmer25.ps}
\includegraphics[height=0.33\textwidth,angle=-90]{vmer25NS.ps}
\caption{Meridional motions as a function of latitude. Data are averaged
over 2.5$^\circ$ in latitude. In the left panel both solar hemispheres are shown together
and positive values indicate motion towards the poles. In the right panel the northern and southern
hemisphere are shown separately and positive values indicate motion towards north. }
\label{mer}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{torsional.eps}
\caption{Rotation velocity residuals as function of latitude. Data were averaged over
5$^\circ$ in latitude. Positive values denote rotation faster than average and negative values
rotation slower than average. Both solar hemispheres have been treated together.}
\label{torsional}
\end{figure}
The dependence of meridional motions obtained from the KSO data on latitude is illustrated in
Figure \ref{mer}. The data were averaged over 2.5$^\circ$ in latitude and mean values with
error bars are given for each latitude stripe. It can be seen that at low latitudes ($\leq10^\circ$)
meridional motions are toward the poles. The latitude stripes 20$^\circ$--30$^\circ$ show
motions toward the solar equator, while the rest of the values are not significantly different
from zero.
In the right panel of the figure each solar hemisphere is shown separately.
Here the meridional velocities are not transformed and positive values of the meridional
velocity denote motion towards the north.
The result for the southern hemisphere shows motions toward the pole at low latitudes,
changing to a flow towards the north (equator) at higher latitudes. This behavior is reminiscent
of the one found by \citet{sudar2014} analysing GPR, USAF/NOAA data and by \citet{sudar2017}
studying DPD data.
The values for the northern solar hemisphere are not significantly different from zero in all
latitude stripes.
The most statistically significant values are for stripes 0$^\circ$--2.5$^\circ$ and
22.5$^\circ$--25$^\circ$, both showing motions toward the south. The observed meridional motions would
be consistent with the equatorward motions on the northern solar hemisphere.
Such motions were observed on both solar hemispheres by \citet{sivaraman2010}
analysing Kodaikanal and Mt. Wilson data.
Figure \ref{torsional} shows the dependence of rotation residual velocities on latitude.
The data were averaged over 5$^\circ$ in latitude and average values
with error bars are presented for each latitude stripe. None of the rotation rate
residual values is significantly different from zero.
\begin{table}[h]
\caption{Description of data subsets with cycle and phase boundaries. Slope and
both intercept values are the result of meridional velocities latitude dependence linear fit.
The solar cycle boundaries are taken from \citet{brajsa2009}.}
\label{subset}
\begin{tabular}{lcccc}
\hline
Description & Boundaries & Slope & Intercept\ Y & Intercept\ X \\
\hline
Solar Cycle 20 &21.10.1964--20.04.1976 & $-0.42\pm0.19$ & $8.0\pm2.8$ & $19.0\pm10.0$ \\
Solar Cycle 21 &21.04.1976--15.09.1986 & $-0.37\pm0.13$ & $4.7\pm2.2$ & $12.7\pm\ 7.4$ \\
Solar Cycle 22 &16.09.1986--25.05.1996 & $-0.20\pm0.11$ & $2.1\pm1.9$ & $10.5\pm11.1$ \\
Solar Cycle 23 &26.05.1996--30.06.2008 & $-0.43\pm0.11$ & $6.7\pm1.9$ & $15.5\pm\ 6.0$ \\
\hline
Minimum &02.01.1964--25.05.1967 & & & \\
from 2y before &21.04.1974--30.06.1978 & & & \\
minimum till &16.09.1984--31.12.1987 &$-0.19\pm0.14$ & $3.4\pm2.5$ & $17.9\pm18.6$ \\
1.5y before &26.05.1994--20.10.1998 & & & \\
maximum &01.07.2006--25.11.2012 & & & \\
\hline
Pre maximum &26.05.1967--25.11.1968 & & & \\
from 1.5 y &01.07.1978--31.12.1979 & & & \\
prior to &01.01.1988--30.06.1989 & $-0.45\pm0.13$ & $7.8\pm2.7$ & $17.3\pm7.8$ \\
maximum till &21.10.1998--20.04.2000 & & & \\
maximum & 26.11.2012--25.05.2014 & & & \\
\hline
Past maximum &26.11.1968--25.05.1970 & & & \\
from maximum &01.01.1980--30.06.1981 & & & \\
till 1.5 y &01.07.1989--31.12.1990 & $-0.33\pm0.12$ & $4.4\pm2.1$ & $13.3\pm8.0$ \\
after the &21.04.2000--20.10.2001 & & & \\
maximum &26.05.2014--20.04.2016 & & & \\
\hline
Declining phase &26.05.1970--20.04.1974 & & & \\
From 1.5y after &01.07.1981--15.09.1984 &$-0.45\pm0.12$ & $5.1\pm1.6$ & $11.3\pm4.7$ \\
max. till 2y&01.01.1991--25.05.1994 & & & \\
before minimum &21.10.2001--30.06.2006 & & & \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[height=0.24\textwidth,angle=-90]{sc20.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{sc21.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{sc22.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{sc23.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{cic1.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{cic2.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{cic3.ps}
\includegraphics[height=0.24\textwidth,angle=-90]{cic4.ps}
\caption{Meridional motions as a function of latitude. In the upper row profile for each
cycle from Solar Cycle 20 to 23 is shown separately (from left to right).
In lower row four different phases of the cycle are presented (minimum, pre-maximum, past-maximum
and declining phase, from left to right).
Velocities belonging to corresponding phase from all cycles were averaged.
Data are averaged over 5$^\circ$ in latitude and both solar hemispheres
are shown together to have sufficient number of data in each latitude bin.
Positive values indicate motion towards the poles.}
\label{cycle}
\end{figure}
To examine the changes of meridional circulation with time and with the phase of the solar cycle,
the dataset was divided into four subsets containing individual cycles from Solar Cycle 20
to Solar Cycle 23 and four subsets corresponding to different phases of the cycle. The description
of the data subsets is given in Table \ref{subset}. To have sufficient number of data in each latitude
stripe both solar hemispheres were treated together and data were averaged over 5$^\circ$ in latitude.
The latitudinal dependence of meridional motions for individual solar cycles is presented in
upper row of Figure \ref{cycle} and the changes within the cycle are shown in the lower row.
All meridional velocity profiles show motions toward the poles at low latitudes and
motions toward the equator at higher latitudes. The exception is the profile observed during the solar
cycle minimum.
The most notable changes are the rise of the poleward meridional velocity near the solar equator
and the decrease of the equatorward velocity at higher latitudes with time. Similar trends can be
observed within each solar cycle. However, it should be noted that the changes are not
statistically significant due to the large errors of the mean velocity near the equator and
at higher latitudes, caused by the smaller number of spots present at these latitudes.
In an attempt to quantify the changes of the meridional velocity profiles we calculated linear
fits through all data points of a given subset. The results of the fits are presented in
Table \ref{subset}. The slope and the intercepts with both axes are given. All fits have a
negative slope, which is significant at over 2$\sigma$ for all solar cycles and all phases of the cycle
except the minimum and Solar Cycle 22. Also, the intercept with the x-axis (latitude) decreases with
the phase of the cycle, but the change is not statistically significant due to large
errors. A similar result was found by \citet{sudar2014}.
\subsection{Correlation between Meridional Motions and Rotation Rate Residuals
and Reynolds Stress}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{v_correlation.eps}
\caption{Meridional velocities as a function of rotation rate residuals. Individual data are
represented with points. The solid line is the linear fit (Equation \ref{linfit}).}
\label{correlation}
\end{figure}
To maintain the observed solar differential rotation profile against diffusive
decay, the angular momentum should be somehow transported towards the solar equator.
This phenomenon can be observed by investigating the relationship between meridional
velocities and rotation velocity residuals.
In Figure \ref{correlation} the meridional velocities are plotted against
the rotation rate residuals. The solid line represents the least square fit
in the form:
\begin{equation}
v_\mathrm{mer} = (-0.0912 \pm 0.0028)\Delta v_\mathrm{rot} + (-0.42 \pm 0.43)\ \mathrm{m\,s}^{-1}.
\label{linfit}
\end{equation}
To check for the influence of outliers on the derived parameters in Equation \ref{linfit},
the data were also fitted using the least absolute deviation method, which gives
--0.088 and 0.49 m\,s$^{-1}$ for the slope and intercept, respectively.
The slope of the fit is negative, indicating that on average the angular momentum
is transported toward the solar equator.
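The two fits can be reproduced along the following lines (a sketch, assuming the
velocities are available as NumPy arrays \texttt{dv\_rot} and \texttt{v\_mer}):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Least-squares fit v_mer = slope * dv_rot + intercept
slope, intercept = np.polyfit(dv_rot, v_mer, 1)

# Least-absolute-deviation fit, to gauge the influence of outliers
sad = lambda p: np.sum(np.abs(v_mer - (p[0] * dv_rot + p[1])))
p_lad = minimize(sad, x0=[slope, intercept], method="Nelder-Mead").x
\end{verbatim}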
\begin{table}[h]
\caption{Table of the best fit coefficients (Equation \ref{expfit}).}
\label{coef}
\begin{tabular}{lcr}
\hline
Coefficient & Value & Relative error \\
\hline
$c_1$ [m$^2$\,s$^{-2}$deg$^{-1}$] & -154$\pm$11 & 7.3\% \\
$c_3$ [deg$^{-2}$] & 0.00026$\pm$0.00013 & 50.8\% \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{reynolds.eps}
\caption{Horizontal component of the Reynolds stress tensor as a function of latitude.
Data were averaged over 10$^\circ$ in latitude. Solid line represents best fit in the
form of Equation \ref{expfit}. Shaded areas are defined by errors of the best fit
coefficients (Table \ref{coef}).
}
\label{reynolds}
\end{figure}
The covariance of meridional velocities and rotation velocity residuals gives the horizontal
component of the Reynolds stress tensor. In Figure \ref{reynolds} the horizontal component
of the Reynolds stress tensor is shown versus latitude. Values were averaged in 10$^\circ$ latitude
stripes. The average values are negative for all latitude stripes which means that the angular
momentum is transported towards lower latitudes, {\it i.e.} toward the solar equator. The solid line in
Figure \ref{reynolds} represents the empirical exponential cut-off function
\citep{sudar2014,sudar2017} describing the decreasing trend of the horizontal
component of the Reynolds stress tensor with latitude:
\begin{equation}
q_{\lambda b}(b)=c_1be^{-c_3b^2},
\label{expfit}
\end{equation}
where $q_{\lambda b}$ is the horizontal component of the Reynolds stress tensor and $b$ is the latitude.
The values of the coefficients $c_1$ and $c_3$ with their respective errors
are given in Table \ref{coef}. The shaded area in the
figure is defined by the errors of the coefficients $c_1$ and $c_3$.
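The binned profile and the fit of Equation \ref{expfit} can be reproduced
schematically as follows (a sketch; \texttt{b\_bin} and \texttt{q\_bin} are assumed
to hold the mean latitude and the covariance of $v_\mathrm{mer}$ and
$\Delta v_\mathrm{rot}$ in each 10$^\circ$ stripe):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def q_model(b, c1, c3):
    # Empirical cut-off profile of Equation (expfit)
    return c1 * b * np.exp(-c3 * b ** 2)

popt, pcov = curve_fit(q_model, b_bin, q_bin, p0=[-150.0, 3e-4])
c1, c3 = popt
b_max = 1.0 / np.sqrt(2.0 * c3)  # extremum of c1*b*exp(-c3*b^2);
# for the best fit c3 = 0.00026 this gives ~44 degrees latitude.
\end{verbatim}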
\section{Discussion and Conclusion}
The differential rotation profile obtained in this work is the same (within 1$\sigma$)
as the one obtained by \citet{poljancic2017} using the same data, and agrees (within 2$\sigma$)
with the profiles obtained by \citet{sudar2017} using the DPD and by \citet{sudar2014} using
the GPR and USAF/NOAA datasets.
The small differences between our result and the result of \citet{poljancic2017}
can be attributed to the different procedures of discarding erroneous values.
The iterative procedure applied here results in slightly smaller values for rotation
rate than the 8--19$^\circ$\,day$^{-1}$ velocity filter used by \citet{poljancic2017}.
The values of differential
rotation parameters from different methods and datasets are compared in more detail
in \citet{sudar2015} and \citet{poljancic2017}.
The average values of rotation
rate residuals which do not differ significantly from zero are indicative of the
quality of the solar rotation profile function fit (Equation \ref{rotprofile}).
Further, our results show meridional motions toward the poles at low latitudes and meridional motions
toward the solar equator at latitudes of 25--30$^\circ$. This is consistent with the picture
of flows directed toward the centre of activity \citep{sudar2014}. Similar results were
obtained by \citet{sudar2017} using the DPD dataset.
When each solar hemisphere is treated separately, the meridional circulation on the southern
hemisphere is consistent with flows directed toward the centre of activity,
while the flows seem to be predominantly equatorward on the northern hemisphere,
reminiscent of the flows found by \citet{sivaraman2010} analysing Kodaikanal and Mt. Wilson datasets.
These results confirm that the KSO data
are of sufficiently high quality and that they can be used in the analysis of solar
velocity patterns.
As summarized in \citet{hathaway1996} and \citet{sudar2017} the previously obtained
results for meridional flows
are controversial. Both flows toward and away from the centre of activity as well as
flows toward the poles and toward the solar equator were observed. Flows out of
the centre of activity can be attributed to the false assumption that any latitude
(the latitude of the first or the last measurement, or the mean latitude) can be assigned to the
observed meridional velocity without taking into account the distribution of tracers.
This can result in false flows out of the centre of activity \citep{olemskoy2005}.
Next, the cycle dependence of the meridional motions and rotational velocities
can influence the results.
A more detailed explanation of the above-mentioned effects can be found in \citet{sudar2017}.
The difference between flows toward the centre of activity obtained by sunspot
groups and poleward flows shown by Doppler and CBP data can be reconciled if it
is assumed that the meridional flow is different in the active regions, where
sunspots are located, from the flow outside activity areas \citep{sudar2017}.
Alternatively, the anchoring depth of magnetic features can be important, {\it i.e.}
the differences in velocity patterns measured by different features reflect the
differences in the coupling or anchoring depth of those features. Finally, the
solar meridional flow might be strongly variable \citep{hathaway1996} and the different
results reflect its variability.
When analysing the meridional motions for possible variations in time and within the
solar cycle, motions consistent with flows directed toward the centre of activity were found.
The exceptions are the data for Solar Cycle 22 and the minimum of activity, where the result
is more reminiscent of equatorward motions.
The minimum of activity dataset contains the sunspot groups belonging to
two centres of activity, the one from the preceding cycle at low latitudes and the one
from the following cycle at higher latitudes, which can influence the result. Besides,
the errors of the meridional velocity values calculated for each latitude stripe
for all subsets (not just the minimum) are quite large, making most of the values
statistically insignificant, and the most notable changes of the profile are at low
latitudes ($<$5$^\circ$) and high latitudes ($>$30$^\circ$), where the number of data points
is smallest. Therefore it cannot be concluded that the obtained result represents actual
changes of the meridional motions rather than random errors of the mean values for the given
latitude stripes.
By examining the correlation and covariance of meridional velocities and rotation
rate residuals we found that the angular momentum is transported towards the solar
equator. The horizontal component of the Reynolds stress tensor is found to be of the
order of several thousand m$^2$\,s$^{-2}$, with the maximal value of $(-4122\pm 1089)$\,m$^2$\,s$^{-2}$
at $(44\pm11)^\circ$ latitude.
This is in good agreement with the results of other studies using sunspots as tracers
\citep{ward1965,gilman1984,pulkkinen1998b,sudar2014,sudar2017}. This result is also
in agreement with the theoretical calculations of \citet{canuto1994},
\citet{kapyla2011} and \citet{varela2016}.
The analysis of the CBP data \citep{vrsnak2003,sudar2016} seems to yield smaller
values for the horizontal component of the Reynolds stress. As before, this discrepancy
can be reconciled if it is supposed that the Reynolds stress is stronger around
active regions.
This would imply that the major part of angular momentum transfer occurs in
the activity belt. On the other hand, the anchoring depth or height of the tracers
might influence the result, too.
By examining the correlation and covariance of meridional velocities and rotation rate residuals
it was found that the angular momentum is transported towards the solar equator at all
latitudes. Despite the meridional motions and rotation rate residuals having
values of low statistical significance, their correlation, expressed by the Reynolds stress,
is significant, which means that the Reynolds stress is a robust quantity.
The absolute value of the horizontal component of the Reynolds stress is found to
increase from the equator, attaining a maximum at about 40$^\circ$ latitude, which is
in agreement with the results of other studies and theoretical calculations.
The observed values of the Reynolds stress are sufficient to maintain
the solar differential rotation profile.
Therefore, our results confirm that the Reynolds stress is the main contributor
to the transport of angular momentum towards the solar equator, which maintains the
observed solar differential rotation.
This general result, indicated in various previous studies using other data sets
and methods, is now independently confirmed also by using the KSO data set.
The questions of how the anchoring depth of the analysed features
and the variability influence the obtained results are still open and
need to be analysed in the future.
\begin{acknowledgements}
This work was partly supported by the Croatian Science Foundation under the project
6212 ``Solar and Stellar Variability" and in part by the University of Rijeka
under project number 13.12.1.3.03. We wish to express our gratitude to
the anonymous referee, whose careful review and detailed criticism of the manuscript
helped to improve the presentation and sharpen the arguments.
\end{acknowledgements}
\section*{Disclosure of Potential Conflicts of Interest}
The authors declare that they have no conflicts of interest.
\bibliographystyle{spr-mp-sola}
\section{Introduction}
Modern cosmology has entered a new era driven by ``big data''. In the past about two decades, with gradually mounting data, a number of cosmological observations such as Type Ia supernovae (SNIa) \cite{1,2}, baryonic acoustic oscillations (BAO) \cite{3}, cosmic microwave background (CMB) anisotropies \cite{4,5}, etc., have strongly indicated that the universe is undergoing a phase of accelerated expansion. The simplest scenario to explain this intriguing phenomenon is a combination of the cosmological constant and a cold dark matter component ($\Lambda$CDM model) \cite{6}, which is still in remarkably good agreement with almost all cosmological data more than ten years after the observational discovery of the accelerated expansion rate of the universe. Nonetheless, this very successful model still faces two unsolved and attractive puzzles \cite{7}: (i) Why is the value of the cosmological constant unexpectedly small with respect to any physically meaningful scale, except the current horizon scale? (ii) Why is this value not only small, but also surprisingly close to another unrelated physical quantity, the present matter density? Since we are still unclear about the nature of the dark sector of the universe, to alleviate or even solve these puzzles, a great number of alternative cosmological models based on different physical origins have been proposed. In general, they can be divided into two main classes: the dark energy models \cite{8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,wm,24}, which introduce a new fluid or field in the universe, and the extended theories of gravity \cite{25,26,27,28,29,30,31,32,33,34,35}, which modify the standard Lagrangian of Einstein's gravity. Very interestingly, one can also address the above puzzles by exploring the connection between gravitation and new geometry. Finsler geometry \cite{24,25,26}, which includes Riemann geometry as its special case, is a good candidate to understand the current cosmological puzzles. This new geometry keeps the elegant properties of Riemann geometry, i.e., the corresponding isometric group is a Lie group on a Finslerian manifold, while it admits fewer Killing vectors than a Riemannian spacetime does. Generally, there are at most $n(n-1)/2+1$ independent Killing vectors in an $n$-dimensional non-Riemannian Finslerian spacetime. Taking the simplest possible asymmetrical generalization of the Riemannian metric into account, G. Randers \cite{36} proposed the so-called Randers space, a subclass of Finslerian space. In the framework of Randers space, a generalized Friedmann-Robertson-Walker (FRW) cosmological model based on Finsler geometry has been studied \cite{37}, and a modified dispersion relation of free particles has also been discussed \cite{38}.
Gravity in Finslerian spaces has been studied for a long time \cite{39,40,41,42}. The gravitational field equations (GFEs) derived from a Riemannian osculating metric were presented in Ref. \cite{43}. For such a metric, the FRW-like cosmological scenario and the anisotropies of the universe were also investigated \cite{37,44}. However, these GFEs were derived without satisfying the Bianchi identity and the general covariance principle of Einstein's gravity. Notably, the authors of Refs. \cite{45,46,47} solved these problems and derived the corresponding Friedmann-like equations in Ref. \cite{45} by constructing a Randers-Finsler space of approximate Berwald type, which is just an extension of a Riemannian space. Following this theoretical line, we are motivated to place constraints on this new Finslerian cosmological model using current cosmological observations.
This study is organized in the following manner. In Section 2, we introduce briefly the Finslerian models to be constrained. In Section 3, we describe the methodology and data we use in this analysis, and exhibit the corresponding constraining results. In Section 4, we distinguish this model from the $\Lambda$CDM model using two geometrical diagnostics. The concluding remarks are presented in the final section.
\section{Finslerian models}
In this study, we consider two Finslerian models, i.e., the simplest Finslerian $\Lambda$CDM (F$\Lambda$) model and its one-parameter extension, the non-flat F$\Lambda$ model. In a Finsler-Berwald FRW universe, combining the time-component Friedmann equation with the acceleration equation \cite{45,46,47} yields the continuity equation
\begin{equation}
\dot{\rho}(\frac{3\alpha}{4}+\frac{\beta}{12}+1)+\dot{p}(-\frac{3\alpha}{4}+\frac{\beta}{4})=-\frac{\dot{a}}{a}[\rho(\frac{3\alpha}{2}+\frac{\beta}{6}+2)+p(-\frac{3\alpha}{2}+\frac{\beta}{2})+(\rho+3p)(1+\frac{\beta}{3})], \label{1}
\end{equation}
where $a$, $\alpha$ and $\beta$ are the scale factor, effective time-component and space-component parameters, respectively. Note that here \textit{``effective''} means that the physical quantities are derived based on the non-Riemannian Berwald space. Substituting the equation of state (EoS) $p_i=\omega_i\rho_i$ of each independent component $i$ (where the constant $\omega_i$ corresponds to the non-relativistic matter, dark energy and effective curvature, respectively) into Eq. (\ref{1}), one can obtain the effective energy density as follows
\begin{equation}
\rho_i\propto a^{-\frac{36(1+\omega_i)+18(1-\omega_i)\alpha+6(1+3\omega_i)\beta}{12+9(1-\omega_i)\alpha+(1+3\omega_i)\beta}}. \label{2}
\end{equation}
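As a quick consistency check, the exponent in Eq. (\ref{2}) reduces to the familiar Riemannian scalings when $\alpha=\beta=0$. The minimal Python sketch below (the function name and test values are ours, purely for illustration) evaluates it:
\begin{verbatim}
# Exponent n_i in rho_i ~ a^{-n_i} from Eq. (2).
def density_exponent(omega, alpha, beta):
    num = 36*(1 + omega) + 18*(1 - omega)*alpha + 6*(1 + 3*omega)*beta
    den = 12 + 9*(1 - omega)*alpha + (1 + 3*omega)*beta
    return num / den

# Riemannian limit alpha = beta = 0 recovers the standard scalings:
print(density_exponent(0.0, 0.0, 0.0))    # matter: 3
print(density_exponent(-1.0, 0.0, 0.0))   # cosmological constant: 0
print(density_exponent(1.0/3, 0.0, 0.0))  # radiation: 4
\end{verbatim}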
Subsequently, using the time-component Friedmann equation and Eq. (\ref{2}), the square of the effective dimensionless Hubble parameter $E_1(a)$ of the F$\Lambda$ model can be written as
\begin{equation}
E_1^2(a)=\frac{9\alpha+\beta+12}{4(\alpha+1)(\beta+3)}\Omega_{m}a^{-\frac{6(3\alpha+\beta+6)}{9\alpha+\beta+12}}+\frac{9\alpha-\beta+6}{2(\alpha+1)(\beta+3)}\Omega_{de} a^{-\frac{6(3\alpha-\beta)}{9\alpha-\beta+6}}, \label{3}
\end{equation}
where $\Omega_{m}$ and $\Omega_{de}$ denote the effective matter and dark energy density parameters today, respectively. Note that this model reduces to the $\Lambda$CDM model when $\alpha=\beta=0$. Since we are interested in investigating the evolution of the late-time universe, we ignore the contribution from the radiation component. Furthermore, considering the spatial curvature in the Finslerian scenario, the square of the effective dimensionless Hubble parameter $E_2(a)$ of the non-flat F$\Lambda$ model is expressed as
\begin{equation}
E_2^2(a)=\frac{3}{3+\beta}\Omega_{k}a^{-2}+ \frac{9\alpha+\beta+12}{4(\alpha+1)(\beta+3)}\Omega_{m}a^{-\frac{6(3\alpha+\beta+6)}{9\alpha+\beta+12}}+\frac{9\alpha-\beta+6}{2(\alpha+1)(\beta+3)}\Omega_{de} a^{-\frac{6(3\alpha-\beta)}{9\alpha-\beta+6}}, \label{4}
\end{equation}
where $\Omega_{k}$ is the effective curvature density parameter today. One can also find that the contribution from curvature based on Finsler geometry depends only on the space-component parameter $\beta$. We use units $8\pi G=c=1$ throughout this work.
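For reference, Eqs. (\ref{3}) and (\ref{4}) can be transcribed directly into code; the sketch below (function and variable names are ours) also verifies that the $\Lambda$CDM limit is recovered at $\alpha=\beta=0$:
\begin{verbatim}
import numpy as np

def E2_FL(a, alpha, beta, Om, Ode):
    """E_1^2(a) of the flat F-Lambda model, Eq. (3)."""
    c1 = (9*alpha + beta + 12) / (4*(alpha + 1)*(beta + 3))
    c2 = (9*alpha - beta + 6) / (2*(alpha + 1)*(beta + 3))
    n1 = 6*(3*alpha + beta + 6) / (9*alpha + beta + 12)
    n2 = 6*(3*alpha - beta) / (9*alpha - beta + 6)
    return c1*Om*a**(-n1) + c2*Ode*a**(-n2)

def E2_nonflat_FL(a, alpha, beta, Om, Ode, Ok):
    """E_2^2(a) of the non-flat F-Lambda model, Eq. (4)."""
    return 3.0/(3 + beta)*Ok*a**(-2) + E2_FL(a, alpha, beta, Om, Ode)

a = np.linspace(0.1, 1.0, 4)
print(E2_FL(a, 0.0, 0.0, 0.3, 0.7))  # equals 0.3*a**-3 + 0.7 (LambdaCDM)
\end{verbatim}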
\begin{figure}
\centering
\includegraphics[scale=0.4]{1.pdf}
\caption{The one-dimensional marginalized probability distributions of the individual parameters and $2$-dimensional contours of the F$\Lambda$ model using SBC data sets.}\label{f1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{2.pdf}
\caption{The one-dimensional marginalized probability distributions of the individual parameters and $2$-dimensional contours of the non-flat F$\Lambda$ model using SBC data sets.}\label{f2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{3.pdf}
\caption{The one-dimensional posterior probability distributions of $H_0$ values derived from the F$\Lambda$ (solid) and non-flat F$\Lambda$ (dashed) models using SBC data sets, respectively.}\label{f3}
\end{figure}
\section{Constraints}
We employ the Markov Chain Monte Carlo (MCMC) method to place constraints on the above two Finslerian models by using the current cosmological observations. More specifically, we carefully modify the publicly available MCMC code CosmoMC \cite{48,49} and the Boltzmann code CAMB \cite{50} to infer the posterior probability distributions of the parameters, and analyze the correlated MCMC samples using the publicly available code GetDist \cite{51}. Meanwhile, we choose flat priors for the model parameters and marginalize over the foreground nuisance parameters provided by Planck. For the F$\Lambda$ model, we consider the following 8-dimensional parameter space
\begin{equation}
\mathbf{P_1}=\{\Omega_bh^2, \quad \Omega_ch^2, \quad 100\theta_{MC}, \quad \tau, \quad \alpha, \quad \beta, \quad \mathrm{ln}(10^{10}A_s), \quad n_s\}, \label{5}
\end{equation}
where $\Omega_bh^2$ and $\Omega_ch^2$ denote, respectively, the present-day baryon and CDM densities, $100\theta_{MC}$ is the approximation to $100\times$ the angular size of the sound horizon at the redshift of last scattering $z_*$, $\tau$ represents the Thomson scattering optical depth due to reionization, $\alpha$ and $\beta$ are the two typical model parameters of the F$\Lambda$ model, and $\mathrm{ln}(10^{10}A_s)$ and $n_s$ denote the amplitude and the spectral index of the primordial scalar perturbation power spectrum at the pivot scale $K_0=0.05$ Mpc$^{-1}$, respectively. Here $h\equiv H_0/100$ and $H_0$ is the Hubble constant. Similarly, the 9-dimensional parameter space for the non-flat F$\Lambda$ model is expressed as
\begin{equation}
\mathbf{P_2}=\{\Omega_bh^2, \quad \Omega_ch^2, \quad 100\theta_{MC}, \quad \Omega_k, \quad \tau, \quad \alpha, \quad \beta, \quad \mathrm{ln}(10^{10}A_s), \quad n_s\}, \label{6}
\end{equation}
where $\Omega_{k}$ denotes the effective curvature density today. Subsequently, we exhibit the cosmic probes used in this analysis, including the SNIa, BAO, CMB and Planck lensing data.
$\bullet$ The SNIa data: Since the absolute magnitudes of all SNIa are considered to be the same, owing to their nearly identical explosion masses, SNIa are regarded as standard candles for exploring the background evolution of the universe. We use the largest ``Joint Light-curve Analysis'' (JLA) sample containing 740 SNIa data points, which covers the redshift range $z \in [0.01, 1.3]$ \cite{52}. The JLA compilation consists of 118 low-$z$ SNe in the range $z \in [0, 0.1]$ from \cite{53,54,55,56,57,58}, 374 SNe in the range $z \in [0.3, 0.4]$ from the Sloan Digital Sky Survey (SDSS) SNe search \cite{59}, 239 SNe in the range $z \in [0.1, 1.1]$ from the Supernova Legacy Survey (SNLS) project \cite{60}, and 9 high-$z$ SNe in the range $z \in [0.8, 1.3]$ from the Hubble Space Telescope (HST) \cite{61}.
$\bullet$ The BAO data: To break the parameter degeneracies efficiently, we take four BAO measurements consisting of the 6dFGS (six-degree-field galaxy survey) sample at effective redshift $z_{eff}=0.106$ \cite{62}, the SDSS MGS (main galaxy sample) sample at $z_{eff}=0.15$ \cite{63}, and the LOWZ ($z_{eff}=0.32$) and CMASS ($z_{eff}=0.57$) samples of the SDSS-III BOSS (Baryon Oscillation Spectroscopic Survey) DR12 release \cite{64}.
$\bullet$ The CMB data: We also use the Planck 2015 temperature and polarization data in our numerical analysis \cite{65}, including the likelihoods of temperature (TT) at $30\leqslant \ell\leqslant 2500$, the cross correlation of temperature and polarization (TE), the polarization (EE) power spectra and the Planck low-$\ell$ temperature and polarization likelihood at $2\leqslant \ell\leqslant 29$.
$\bullet$ The lensing data: We utilize the Planck lensing data \cite{66}, which gives the most powerful measurement to date with a 2.5$\%$ constraint on the amplitude of the lensing potential power spectrum (or alternatively, a 40$\sigma$ detection of lensing effects).
\begin{table}[h!]
\caption{The prior ranges of different model parameters used in the Bayesian analysis.}
\label{t1}
\begin{tabular}{ll}
\hline
\hline
Parameter &Prior \\
\hline
$\Omega_bh^2$ &$[0.005, 0.1]$ \\
$\Omega_ch^2$ &$[0.001, 0.99]$ \\
$100\theta_{MC}$ &$[0.5, 10]$ \\
$\tau$ &$[0.01, 0.8]$ \\
$\Omega_k$ &$[-0.05, 0.05]$ \\
$\alpha$ &$[-3, 3]$ \\
$\beta$ &$[-3, 3]$ \\
$\mathrm{ln}[10^{10}A_s]$ &$[2, 4]$ \\
$n_s$ &$[0.8, 1.2]$ \\
$H_0$ &$[20, 100]$ \\
\hline
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\renewcommand\arraystretch{1.3}
\caption{The $1\sigma$ ($68\%$) uncertainties of different parameters in the F$\Lambda$ and non-flat F$\Lambda$ models using the data combinations S, SB and SBC, respectively.}
\label{t2}
\begin{tabular} { l c c c c c c}
\hline
\hline
Model & \multicolumn{3}{c}{F$\Lambda$} & \multicolumn{3}{c}{non-flat F$\Lambda$} \\
\hline
Data & S & SB & SBC & S & SB & SBC \\
\hline
{$\Omega_b h^2 $} & $0.0232^{+0.0016}_{-0.0014}$ & $0.02239\pm 0.00084 $ & $0.02232\pm 0.00012 $ & $0.0235^{+0.0023}_{-0.0026}$ & $0.02183\pm 0.00052 $ & $0.02222\pm 0.00016 $ \\
{$\Omega_c h^2 $} & $0.1169^{+0.0027}_{-0.0037}$ & $0.1193^{+0.0043}_{-0.0071}$ & $0.11817\pm 0.00081 $ & $0.110^{+0.011}_{-0.014} $ & $0.1204^{+0.0036}_{-0.0040}$ & $0.1193^{+0.0016}_{-0.0013}$ \\
{$100\theta_{MC} $} & $1.04016\pm 0.00097 $ & $1.0416^{+0.0047}_{-0.0065}$ & $1.04100^{+0.00028}_{-0.00031}$ & $1.04273^{+0.00095}_{-0.0024}$ & $1.0401^{+0.0017}_{-0.0012}$ & $1.04085\pm 0.00031 $ \\
{$\tau $} &$0.130\pm 0.029 $ & $0.153^{+0.082}_{-0.031} $ & $0.0810^{+0.0052}_{-0.0072}$ & $0.148\pm 0.046 $ & $0.076^{+0.027}_{-0.052} $ & $0.0755^{+0.0047}_{-0.0042}$
\\
{$\Omega_K $} & --- & --- & --- & $-0.011^{+0.022}_{-0.019} $ & $0.0028^{+0.0043}_{-0.0048}$ & $0.0013\pm 0.0019 $ \\
{$\alpha $} & $0.0131\pm 0.0058 $ & $0.0013^{+0.0012}_{-0.0091}$ & $0.0027^{+0.0029}_{-0.0047}$ & $-0.001^{+0.062}_{-0.022} $ & $-0.0024^{+0.0057}_{-0.0065}$ & $-0.0036\pm 0.0019 $ \\
{$\beta $} & $0.0000\pm 0.0032 $ & $-0.0011^{+0.0089}_{-0.0061}$ & $0.0016\pm 0.0048 $ & $-0.002^{+0.016}_{-0.026} $ & $0.0022\pm 0.0038 $ & $-0.0051^{+0.0076}_{-0.0069}$ \\
{${\rm{ln}}(10^{10} A_s)$} & $3.104^{+0.015}_{-0.010} $ & $3.102^{+0.013}_{-0.031} $ & $3.094^{+0.011}_{-0.023} $ & $3.088^{+0.016}_{-0.010} $ & $3.0994^{+0.0084}_{-0.0064}$ & $3.0848^{+0.0071}_{-0.0061}$ \\
{$n_s $} & $0.949^{+0.017}_{-0.020} $ & $0.969^{+0.011}_{-0.017} $ & $0.9690\pm 0.0033 $ & $0.971\pm 0.067 $ & $0.9468\pm 0.0088 $ & $0.9655\pm 0.0042 $ \\
\hline
$H_0 $ & $68.9^{+2.1}_{-1.9} $ & $67.95^{+0.87}_{-1.0} $ & $67.99^{+0.38}_{-0.30} $ & $67.4\pm 4.9 $ & $67.84\pm 0.81 $ & $68.12\pm 0.61 $ \\
\hline
\hline
\end{tabular}
\end{table}
In what follows, for simplicity, we denote the data combinations SNIa, CMB+lensing, SNIa+BAO and SNIa+BAO+CMB+lensing as S, C, SB and SBC, respectively. Specifically, we perform constraints on the F$\Lambda$ and non-flat F$\Lambda$ models by using S, SB and SBC, respectively. The prior ranges of different model parameters considered in this analysis are shown in Tab. \ref{t1}.
By implementing a Bayesian analysis, the constraining results for the above two models using different data combinations are exhibited in Tab. \ref{t2}. Utilizing the joint constraints from SBC, the corresponding one-dimensional marginalized probability distributions of the individual parameters and $2$-dimensional contours for the F$\Lambda$ and non-flat F$\Lambda$ models are shown in Figs. \ref{f1}-\ref{f2}, respectively. For the F$\Lambda$ model, we find that the values of the typical model parameters $\alpha$ and $\beta$ are well consistent with zero at the $1\sigma$ C.L. and slightly prefer positive best-fit values very close to zero when using the data combination SBC. Differently from the F$\Lambda$ model, for the non-flat F$\Lambda$ one, the value of $\beta$ is well consistent with zero at the $1\sigma$ C.L., but the value of $\alpha$ is not. Interestingly, after adding a curvature parameter to the F$\Lambda$ model, the values of $\alpha$ and $\beta$ slightly prefer negative best-fit values very close to zero when using SBC. We also find that the value of the spatial curvature, $\Omega_k=0.0013\pm 0.0019$ from SBC in the non-flat F$\Lambda$ model, is consistent with zero at the $1\sigma$ C.L., which means that a flat universe is still preferred in the framework of Finsler geometry. It is worth noting that, using SBC, our measured values of the spectral index $n_s$ of the primordial scalar perturbation power spectrum in the above two Finslerian scenarios both rule out scale invariance at more than $8\sigma$ C.L., which is very compatible with the predictions from Planck temperature and polarization data and Planck lensing data \cite{65,66} (see Tab. \ref{t2}).
Recently, the direct local measurement $H_0=73.24\pm1.74$ km s$^{-1}$ Mpc$^{-1}$ from Riess et al. 2016 (hereafter R16) \cite{67}, obtained using improved SNIa calibration techniques, exhibits a strong tension, at the $3.4\sigma$ level, with the indirect global measurement $H_0=66.93\pm0.62$ km s$^{-1}$ Mpc$^{-1}$ derived by the Planck collaboration (hereafter P15) \cite{68} under the assumption of the base six-parameter $\Lambda$CDM model. We are also much interested in addressing this issue in the Finslerian settings. Using SBC data sets, the one-dimensional posterior probability distributions of the $H_0$ values derived from the F$\Lambda$ and non-flat F$\Lambda$ models are shown in Fig. \ref{f3}. Interestingly, we find that, using SBC, the current $H_0$ tension can be alleviated from $3.4\sigma$ to $2.9\sigma$ and $2.8\sigma$ in the F$\Lambda$ and non-flat F$\Lambda$ models, respectively. By opening an extra parameter $\Omega_k$, we also find, as expected, a mild relaxation of the $H_0$ limits at the $1\sigma$ C.L. in the non-flat F$\Lambda$ model.
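The tension levels quoted above follow from the usual Gaussian estimate, $|H_0^{(1)}-H_0^{(2)}|/\sqrt{\sigma_1^2+\sigma_2^2}$; a one-line sketch (our own helper function):
\begin{verbatim}
def tension_sigma(H0_a, err_a, H0_b, err_b):
    """Number of sigma separating two Gaussian H0 measurements."""
    return abs(H0_a - H0_b) / (err_a**2 + err_b**2)**0.5

# R16 vs P15 recovers the ~3.4 sigma baseline tension:
print(tension_sigma(73.24, 1.74, 66.93, 0.62))  # ~3.4
\end{verbatim}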
\begin{figure}
\centering
\includegraphics[scale=0.5]{4.pdf}
\includegraphics[scale=0.5]{5.pdf}
\caption{The left and right panels correspond to the relations between the scale factor $a$ and the acceleration parameter $q(a)$ and $Om(a)$ diagnostic, respectively. The red (solid), blue (dash-dotted) and magenta (short-dashed) lines correspond to the best-fit F$\Lambda$, non-flat F$\Lambda$ and $\Lambda$CDM models, respectively. The shaded yellow, purple and green regions represent the $1\sigma$ regions of the F$\Lambda$, non-flat F$\Lambda$ and $\Lambda$CDM models, respectively. The horizontal orange (long-dashed) line corresponds to the zero-acceleration universe.}\label{f4}
\end{figure}
\section{Diagnostics}
In this section, we employ two simple geometrical diagnostics, i.e., the deceleration parameter and the $Om(a)$ diagnostic \cite{69}, to distinguish the F$\Lambda$ and non-flat F$\Lambda$ models from the standard cosmological model and from each other. The deceleration parameter for a given dark energy model can be expressed as
\begin{equation}
q(a)=-\frac{a}{E(a)}\frac{dE(a)}{da}-1, \label{7}
\end{equation}
where $E(a)$ of the above two Finslerian models can be found in Eqs. (\ref{3}-\ref{4}).
The $Om(a)$ diagnostic is a very useful geometrical method to distinguish different cosmological models from each other, and is written as
\begin{equation}
Om(a)=\frac{E^2(a)-1}{a^{-3}-1}. \label{8}
\end{equation}
It is easy to verify that, for a flat $\Lambda$CDM model, the $Om(a)$ diagnostic is fixed to be $\Omega_{m}$, namely $Om(a)=\Omega_{m}$ (see also Ref. \cite{69}).
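Both diagnostics are straightforward to evaluate numerically once $E^2(a)$ is known; the sketch below (our own helper functions, using a finite-difference derivative for Eq. (\ref{7})) confirms that $Om(a)$ is constant for flat $\Lambda$CDM:
\begin{verbatim}
import numpy as np

def q_of_a(a, E2, h=1e-5):
    """Deceleration parameter, Eq. (7), via a central difference."""
    E = np.sqrt(E2(a))
    dEda = (np.sqrt(E2(a + h)) - np.sqrt(E2(a - h))) / (2*h)
    return -a*dEda/E - 1.0

def Om_of_a(a, E2):
    """Om(a) diagnostic, Eq. (8)."""
    return (E2(a) - 1.0) / (a**-3 - 1.0)

E2_LCDM = lambda a: 0.308*a**-3 + 0.692
a = np.array([0.3, 0.5, 0.8])
print(Om_of_a(a, E2_LCDM))     # 0.308 at every a, as expected
print(q_of_a(0.999, E2_LCDM))  # ~ -0.537 near a = 1
\end{verbatim}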
Utilizing $\Omega_{m}=0.308\pm 0.012$ measured by the Planck collaboration \cite{65} for the $\Lambda$CDM model and error propagation of the constrained model parameters $\alpha$, $\beta$, $\Omega_m$ and $\Omega_k$ using SBC data sets, we present the diagnostic results in Fig. \ref{f4}. We find that the evolutionary behaviors of the two Finslerian models are very close to that of the $\Lambda$CDM one, and consequently, they cannot be distinguished from the $\Lambda$CDM model at the $1\sigma$ C.L. By simple calculations, we also obtain the deceleration-acceleration redshift $z_{t1}=0.6558^{+0.0130}_{-0.0103}$ for the F$\Lambda$ model and $z_{t2}=0.6517^{+0.0121}_{-0.0102}$ for the non-flat F$\Lambda$ model at the $1\sigma$ C.L., respectively.
\section{Concluding remarks}
To understand accurately the current cosmic acceleration from a new geometrical perspective, we have performed the first constraints on the simplest Finslerian model, the F$\Lambda$ model, and its one-parameter extension, the non-flat F$\Lambda$ model, by using current cosmological observations. Utilizing the most stringent constraints SBC we can provide, we find that: for the F$\Lambda$ model, the values of the typical model parameters $\alpha$ and $\beta$ are well consistent with zero at the $1\sigma$ confidence level and slightly prefer positive best-fit values very close to zero; for the non-flat F$\Lambda$ one, nonetheless, the values of $\alpha$ and $\beta$ slightly prefer negative best-fit values very close to zero, and the value of $\beta$ is well consistent with zero at the $1\sigma$ C.L., but the value of $\alpha$ is not; a spatially flat universe is still preferred in the framework of Finsler geometry; our measured values of the spectral index $n_s$ of the primordial power spectrum in both Finslerian scenarios rule out scale invariance at more than $8\sigma$ C.L.;
the current $H_0$ tension can be relieved from $3.4\sigma$ to $2.9\sigma$ and $2.8\sigma$ in the F$\Lambda$ and non-flat F$\Lambda$ models, respectively.
Using two popular geometrical diagnostics, we find that neither Finslerian model can be distinguished from the $\Lambda$CDM model during the evolution of the universe. It is noteworthy that we obtain this conclusion at the background level only, without considering perturbation effects. Furthermore, in this analysis, we primarily test the abilities of the two Finslerian models to explain the current cosmological phenomena, without considering other interesting extensions. In addition, data beyond SBC can also be applied to constrain the Finslerian models. Overall, our investigations into using Finsler geometry to understand the evolution of the universe and reconcile the cosmological tensions among various probes are just at the beginning stage. The remaining issues will be addressed carefully in a forthcoming study \cite{70}.
\section*{ACKNOWLEDGMENTS}
Deng Wang warmly thanks Jing-Ling Chen and F. Canfora for very useful communications. Xin-He Meng appreciates S. D. Odintsov and B. Ratra for helpful discussions on cosmology. This study is partly supported by the National Science Foundation of China.
\section{Introduction}
Silos and hoppers are used frequently in the agricultural \cite{Karimi2019}, pharmaceutical \cite{Faqih2007}, and consumer products \cite{Fitzpatrick2004} industries to store fluids and granular materials. Materials confined within silos and hoppers are discharged using vertical or slanted walls that lead to an orifice at the bottom of the device. Microfluidic devices also incorporate flow constrictions to control the pressure and flow rate of complex fluids, such as emulsion droplets \cite{Bick2021}. Despite the fact that hopper and silo flows are ubiquitous in industry, we do not yet have a fundamental understanding of the outflow properties from hoppers and silos. For example, it is difficult to predict the outflow rate of particulate materials from hoppers and silos as a function of the device geometry, orifice size, and particle properties.
For inviscid fluid flows from hoppers, the volume flow rate $Q$ is proportional to the orifice area ($w^2$ in three dimensions, where $w$ is the diameter of the circular orifice) times the characteristic fluid velocity $v_c$ at the orifice, $Q = w^2 v_c$\cite{Alessio2021}. For pressure-driven flows, $v_c \sim \sqrt{\Delta P/\rho}$, where $\Delta P$ is the pressure difference and $\rho$ is the mass density of the fluid. For viscous fluid flows, the volume flow rate $Q = C_d w^2 v_c$ includes a discharge coefficient $C_d$ that depends on the hopper geometry and viscosity of the fluid\cite{Essien2019}.
Unlike ordinary fluids, granular materials consist of macro-sized grains that interact via dissipative forces, which can give rise to intermittency and clogging during hopper flows in the limit of small orifice sizes. Beverloo and co-workers\cite{BEVERLOO1961260} carried out seminal experimental studies of hopper flows of a wide range of granular materials in air and proposed an empirical form for the flow rate that allows flow arrest to occur at nonzero orifice width:
\begin{equation}
\label{eq:1}
Q(w) = C(w/\sigma_{\rm avg}-k)^{\beta},
\end{equation}
where $C$ is a constant with units of flow rate, $\sigma_{\rm avg}$ is the average diameter of the particles, $Q(k \sigma_{\rm avg})=0$, and $k$ depends on the particle properties, such as the stiffness, shape, and friction coefficient. Another key difference between hopper flows of ordinary fluids and granular materials is that the power-law scaling exponent $\beta = d-1/2$, where $d$ is the spatial dimension, is not an integer for hopper flows of granular materials. This relation has been verified in 2D and 3D, for spherical \cite{Pascot2020,Hirshfeld1997} and non-spherical \cite{Tang2016} particles, and for frictionless\cite{Ashour2017} and frictional\cite{Sheldon2010} particles.
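In practice, Eq.~\ref{eq:1} is fit directly to measured $Q(w)$ data. A minimal Python sketch using {\tt scipy} (with synthetic data of our own, purely to illustrate the fitting procedure):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def beverloo(w, C, k, beta):
    """Q(w) = C (w/sigma_avg - k)^beta, with w in units of sigma_avg."""
    # Clip guards against w < k while the optimizer explores:
    return C * np.clip(w - k, 1e-12, None)**beta

# Synthetic 3D-like data (beta = 5/2), only to illustrate the fit:
rng = np.random.default_rng(0)
w = np.linspace(4.0, 12.0, 20)
Q = beverloo(w, 1.0, 2.0, 2.5) * rng.normal(1.0, 0.02, w.size)

popt, pcov = curve_fit(beverloo, w, Q, p0=[1.0, 1.0, 2.0])
print(popt)  # recovers C ~ 1, k ~ 2, beta ~ 5/2
\end{verbatim}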
Numerous researchers have provided heuristic arguments for the $\beta=d-1/2$ scaling exponent for hopper flows of granular materials. For example, Brown and Richards proposed a model for the regime $w\gg k\sigma_{\rm avg}$ where transient arches form and break in a region above the orifice, creating a free-fall region below with a height proportional to the orifice width $w$ \cite{Brown1961}. Because of shielding by the transient arches, the grains move at low velocities until they enter the free-fall region. Thus, the discharge velocity $v_{c} \sim \sqrt{gw}$ when grains reach the orifice, and $Q \sim w^2 v_c \sim w^{5/2}$ in 3D or $Q\sim w v_c \sim w^{3/2}$ in 2D. Cutoffs for the finite size of the particles can be added to these expressions to recover Eq.~\ref{eq:1}.
The original studies of Beverloo {\it et al.} involved hopper flows of hard grains in air\cite{BEVERLOO1961260}. Recent studies of hopper flows of spherical glass beads submerged in water have found that the scaling exponent $\beta \sim 1$ does not obey $\beta = d -1/2$ from the original Beverloo equation\cite{Wilson2014,Fan2022}. In addition, studies of quasi-2D hopper flows of air bubbles immersed in water have found $\beta \sim 0.5$\cite{Bertho2006}, again deviating from the exponent in the original Beverloo equation. Thus, from these previous results, it is not clear whether the dissipation mechanism (i.e. particle-particle or background fluid dissipation), particle stiffness, or other particle properties control the power-law scaling exponent in Eq.~\ref{eq:1}.
In this article, we carry out computer simulations of hopper flows of deformable particles in two (2D) and three dimensions (3D), including both interparticle kinetic friction and viscous dissipation with the background fluid. We employ two computational models of particle deformation: 1) the ``soft particle" model that describes particle deformation as overlaps between pairs of particles and therefore does not conserve particle volume in 3D (area in 2D) and 2) the deformable particle model that includes a shape-energy function for changes in particle volume (area in 2D), surface area (perimeter in 2D), and surface bending, as well as an interaction energy that prevents particle overlaps. Studying these two models allows us to assess the importance of volume conservation in determining the flow properties and provides the ability to tune the particle stiffness, static and kinetic friction coefficients, and background viscous drag and quantify their effects on the flow rate.
We find several important results. First, the power-law scaling exponent $\beta$ relating the volume flow rate $Q$ and orifice width $w$ is controlled by the dissipation mechanism, {\it i.e.}~the ratio of the viscous damping coefficient to the kinetic friction coefficient, $\lambda=\zeta/\mu$. We find that the exponent varies continuously between $\beta = d-1/2$ in the $\lambda \rightarrow 0$ limit and $d-3/2$ in the $\lambda \rightarrow \infty$ limit, with a midpoint $\lambda_c$ that depends on the hopper opening angle $\theta_w$. In contrast, the exponent $\beta$ is only weakly dependent on the particle deformability and surface roughness. Second, we show that the spatio-temporal dynamics for flows with the two exponents, $\beta = d-1/2$ and $d-3/2$, are different. In particular, the velocity profile varies more strongly with the orifice size for flows with $\beta = d-1/2$ in the $\lambda \rightarrow \infty$ limit. Third, the offset $k\sigma_{\rm avg}$ at which $Q \rightarrow 0$ decreases with particle deformability, and increases with the static friction coefficient. Finally, we show that the simulations of hopper flows using the soft and deformable particle models in the $\lambda \rightarrow \infty$ limit are able to recapitulate the experimental results for quasi-2D gravity-driven hopper flows of oil droplets in water.
The remainder of the article is organized as follows. In Section~\ref{sim_methods}, we describe the simulation methods including the soft particle and deformable particle models, the equations of motion, and simulation protocol that we employ to generate continuous flows. In Section~\ref{expmethods}, we describe the experimental system, including the hopper geometry and method to generate emulsion droplets and flows. In Section \ref{results}, we show results for the volume flow rate (area flow rate in 2D) $Q$ versus the orifice width $w$ for the soft particle model and the deformable particle model as a function of $\zeta/\mu$ and particle deformability in both 2D and 3D. We characterize the spatial structure of the flows by measuring the velocity as a function of distance from the orifice and we associate changes in the spatial structure of the flows to changes in the power-law scaling exponent $\beta$. In Section 4, we discuss the implications of our results, and propose future research directions, such as developing an improved deformable particle model that includes surface tension, which would allow more quantitative comparisons between the simulations and experiments on hopper flows of oil droplets in water. We also include three Appendices. In Appendix A, we describe the details of the frictionless, deformable particle model. In Appendix B, we show more detailed comparisons of the flow rate for the soft particle and deformable particle models in the compressible and incompressible particle limits. In Appendix C, we show that the system size effects on the flow rate are small in the simulations.
\begin{figure*}[!ht]
\includegraphics[width=0.98\textwidth]{SPandDPM.pdf}
\centering
\caption{Snapshot from simulations of hopper flows of bidisperse mixtures in a gravitational field using the (a) soft particle (SP) and (f) deformable particle (DP) models in 2D. The hopper geometry can be slanted with variable tilt angle $\theta_w$, e.g. $\theta_w= 45^\circ$ in (a) and $90^\circ$ in (f). ${\vec g}$ indicates the direction of the gravitational acceleration, $W$ is the separation between the straight walls far from the orifice, $h$ indicates the distance from the hopper orifice, and $w$ is the width of the orifice. (b) Close-up of hopper flow using the SP model with $N/2$ large particles and $N/2$ small particles with diameter ratio $1.4$, highlighting overlapping particles $m$ and $n$ with separation $r_{mn} < \sigma_{mn}$, where $\sigma_{mn} = (\sigma_m+\sigma_n)/2$. (c) Illustration of the method to calculate the closest separation between frictionless, deformable particles $m$ and $n$. $\delta_m$ is the width of the edges of particle $m$ and $\delta_{m,i}^{n,j}$ is the shortest distance between edges $i$ and $j$ on particles $m$ and $n$, respectively. (See Appendix A.) (d) Close-up of hopper flow using the DP model with surface roughness with $N/2$ large particles, $N/2$ small particles, and area ratio $1.96$. $a_m$ is the area and $p_m$ is the perimeter of deformable particle $m$.
Both small and large particles have $N_v =16$ vertices.
(e) Illustration of the interactions between deformable particles $m$ and $n$ with surface roughness. $\delta_m$ is the diameter of each circular vertex on particle $m$ and $\delta_{m,i}^{n,j}$ is the distance between vertices $i$ and $j$ on particles $m$ and $n$, respectively. }
\label{fig:1}
\end{figure*}
\section{Simulation Methods}
\label{sim_methods}
In this section, we describe the methods for simulating gravity-driven hopper flows of bidisperse particles in 2D and 3D. We first illustrate the hopper geometry. We then describe the two methods for modeling the particle shape and interactions: 1) the soft particle model, which treats each spherical particle as a single degree of freedom located at its center of mass and mimics particle contact interactions by allowing overlaps between pairs of particles and 2) the deformable particle model that uses a shape-energy function to penalize changes in particle volume (area in 2D), surface area (perimeter in 2D), and surface bending. The deformable particle model can be implemented such that the particles are nearly frictionless or the model can include surface roughness. For each model, we describe the forces that result from the shape-energy function, particle-particle interactions, and dissipative forces arising from interparticle kinetic friction and drag from the background fluid, and then we write down the resulting equations of motion for each particle. Finally, we discuss the initialization of the particle positions and velocities and the method used to generate continuous flows.
\subsection{Hopper Geometry}
In 2D, the hopper is constructed from two infinitely long straight (top and bottom) walls separated by a distance $W \sim 60\sigma_s$ (where $\sigma_s$ is the diameter of the small particle), which connect to the right wall at an angle $\theta_w$ as shown in Fig.\ref{fig:1} (a). The gravitational field points from left to right. The orifice is centered and has width $w<12\sigma_s$, so that $W/w > 5$, which ensures that the top and bottom walls are sufficiently separated such that they do not influence the flow. In 3D, the hopper is an infinitely long cylinder with diameter $W\sim 30\sigma_s$, and the long axis of the cylinder is oriented in the direction of gravity. The hopper in 3D has a flat base ($\theta_w=90^\circ$) containing a circular orifice with diameter $w$ that is centered on the long axis of the cylinder.
In 2D, we focus on systems containing $N=1600$ particles, but we also considered systems over a range from $N=800$ to $3200$ to assess system size effects. In 3D, we focus on systems with $N=6400$ particles. To mimic continuous flows, particles that exit the hopper orifice are replaced on the left side of the hopper near the leftmost flowing particles and given the same speed as neighboring particles. The distance between the hopper orifice and the leftmost flowing particle is $L \sim 20$-$30\sigma_s$.
\subsection{Soft Particle Model}
For gravity-driven hopper flows, there are typically four contributions to the total potential energy: 1) the shape-energy function $U_m^s$, 2) the gravitational potential energy $U_m^g$, 3) the particle-particle interaction energy $U_{\rm int}$, and 4) the particle-wall interaction energy $U_m^w$. For the SP model, $U_m^s=0$. Purely repulsive interparticle forces are generated by allowing overlaps between pairs of spherical particles\cite{Durian1995,Durian1997,Hong2017,Tao2021}, as shown in Fig.\ref{fig:1} (b). The pairwise interaction energy of the SP model is given by
\begin{equation}
\label{sp}
U_{\rm int} = \sum_{m=1}^{N}\sum_{n>m}^{N} \frac{\epsilon_{sp}}{2} (1-r_{mn}/\sigma_{mn})^2\Theta(1-r_{mn}/\sigma_{mn}).
\end{equation}
In Eq.~\ref{sp}, $\sigma_{mn} = (\sigma_m + \sigma_n)/2$ is the average diameter of particles $m$ and $n$, $r_{mn}$ is the separation between particles $m$ to $n$, and $\epsilon_{sp}$ is the characteristic energy scale of the repulsive interaction. The Heaviside step function $\Theta(\cdot)$ ensures that the pair forces are non-zero only between overlapping particles.
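For concreteness, the pair force implied by Eq.~\ref{sp} is $\vec{F}_{mn} = (\epsilon_{sp}/\sigma_{mn})(1 - r_{mn}/\sigma_{mn})\,\hat{r}_{nm}$ for $r_{mn} < \sigma_{mn}$. A minimal sketch of a single pair evaluation is given below (the names are ours; a production code would use vectorized neighbor lists):
\begin{verbatim}
import numpy as np

def sp_pair_force(r_m, r_n, sigma_m, sigma_n, eps_sp):
    """Repulsive force on particle m from particle n, from U_int above."""
    sigma_mn = 0.5*(sigma_m + sigma_n)
    d = r_m - r_n                 # points from n to m
    r = np.linalg.norm(d)
    if r >= sigma_mn:             # Heaviside factor: no overlap, no force
        return np.zeros_like(d)
    return (eps_sp/sigma_mn) * (1.0 - r/sigma_mn) * d/r

# Two unit-diameter disks overlapping by 10%:
print(sp_pair_force(np.zeros(2), np.array([0.9, 0.0]), 1.0, 1.0, 1.0))
# -> [-0.1  0.]: pushes m away from n
\end{verbatim}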
We consider a similar repulsive interaction between the hopper walls and each particle $m$ that is in contact with the walls:
\begin{equation}
U_{m}^{w} =\frac{\epsilon_{w}}{2} (1-2d_w/\sigma_{m})^2\Theta(1-2d_w/\sigma_{m}),
\end{equation}
where $d_w$ is the distance between the center of particle $m$ and the hopper wall and $\epsilon_w$ is the characteristic energy scale of the particle-wall interaction. Thus, the total potential energy of the system is given by
\begin{equation}
\label{totalU}
U = \sum_{m=1}^{N} (U_{m}^{s} + U_{m}^{g} +U_{m}^{w}) + U_{\rm int},
\end{equation}
where $U_{m}^{g} = -M_m g h$, $h$ is the height of the center of mass of particle $m$, $g$ is the gravitational acceleration, $M_m = \rho V_{m,0}$ is the mass of particle $m$ with mass density $\rho$ and volume $V_{m,0} = \pi \sigma_m^3/6$.
(In 2D, $M_m = \rho a_{m,0}$ is the mass of particle $m$ with areal mass density $\rho$ and area $a_{m,0} = \pi \sigma_m^2/4$.)
We include two types of dissipative forces on the particles. First, we consider viscous drag forces on particles moving in a background viscous fluid:
\begin{equation}
\vec{F}_{m}^{\zeta} = - \zeta \vec{v}_m,
\end{equation}
where $\zeta$ is the drag coefficient and $\vec{v}_m$ is the velocity of particle $m$. The second dissipative force arises from kinetic friction between contacting particles. The kinetic friction force is proportional to the relative velocity between contacting particles\cite{Silbert2002}:
\begin{equation}
\vec{F}_{m}^{\mu} = - \mu \sum_{n\neq m}^{N} (\vec{v}_m - \vec{v}_n )\Theta(1-r_{mn}/\sigma_{mn}),
\end{equation}
where $\mu$ is the kinetic friction coefficient. The dimensionless parameter $\lambda =\zeta/\mu$ determines whether the energy dissipation arises mainly from viscous drag ($\lambda \gg 1$) or from kinetic friction ($\lambda \ll 1$). We measure the kinetic friction and drag coefficients in units of $\mu_0 = \zeta_0 = \rho \sigma_{\rm avg}^{d-1} g t_0$, where $t_0 = \sqrt{\sigma_{\rm avg}/g}$. For the SP model, the equation of motion for each particle $m$ is
\begin{equation}
\label{eom}
M_m \frac{\partial^2 \vec{r}_m}{\partial t^2} = -\vec{\nabla}_{r_m}U + \vec{F}_{m}^{\zeta} + \vec{F}_{m}^{\mu}.
\end{equation}
We integrate Eq.~\ref{eom} using a modified velocity Verlet integration scheme with time step $\Delta t=10^{-3} t_0$. The flow rate $Q$ is measured in units of $Q_0 = \sigma_{\rm avg}^d/t_0$.
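Because the dissipative forces depend on velocity, the velocity Verlet update must be modified; one common variant treats the drag semi-implicitly in the second half-kick. The sketch below illustrates this scheme for a single particle with drag only (it is a sketch of one standard approach, not necessarily the exact integrator used in our production code):
\begin{verbatim}
import numpy as np

def vv_step(x, v, m, force, zeta, dt):
    """Velocity-Verlet step with a velocity-dependent drag -zeta*v."""
    a0 = (force(x) - zeta*v) / m
    x_new = x + v*dt + 0.5*a0*dt**2
    v_half = v + 0.5*a0*dt
    # Second half-kick with the drag treated semi-implicitly:
    v_new = (v_half + 0.5*dt*force(x_new)/m) / (1.0 + 0.5*zeta*dt/m)
    return x_new, v_new

# Free fall with drag approaches terminal speed v_t = m*g/zeta:
x, v, m, g, zeta, dt = 0.0, 0.0, 1.0, 1.0, 2.0, 1e-3
for _ in range(20000):
    x, v = vv_step(x, v, m, lambda x: m*g, zeta, dt)
print(v)  # ~0.5
\end{verbatim}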
For the SP model, we focus on bidisperse systems in 2D and 3D composed of half large particles and half small particles with diameter ratio $\alpha = \sigma_l/\sigma_s = 1.4$ to avoid crystallization\cite{Zhang2013}. The average diameter of particles in the bidisperse system is $\sigma_{avg} = (\sigma_l+\sigma_s)/2 = 1.2\sigma_s$. Two important dimensionless energy scales are the ratios of the characteristic particle-particle and particle-wall repulsive energy scales to the gravitational potential energy, i.e. $E_{sp} = \epsilon_{sp}/(g\rho \sigma_{\rm avg}^{d+1})$ and $E_w = \epsilon_{w}/(g\rho \sigma_{\rm avg}^{d+1})$, where $d=2$, $3$ in two and three dimensions, respectively. We set $E_w = 10^4$ to minimize overlaps between the particles and hopper walls and will vary $E_{sp}$ to determine the effect of particle softness on the flow rate $Q(w)$.
\subsection{Deformable Particle Model}
To explicitly model changes in particle shape, we recently developed the deformable particle (DP) model in both 2D \cite{Boromand2018,Boromand2019} and 3D\cite{Wang2021}. In 2D, the particles are modeled as deformable polygons composed of $N_v$ vertices. We can achieve deformable particles with nearly smooth surfaces by modeling the vertices as circulo-lines as shown in Fig.\ref{fig:1} (c) or achieve deformable particles with nonzero surface roughness by modeling the vertices as small disks as shown in Fig.\ref{fig:1} (d) and (e). We consider the following shape-energy function for particle $m$:
\begin{equation}\label{shape_energy}
\begin{split}
U_{m}^{s} =& \frac{k_a}{2} (a_m-a_{m,0})^2 + \frac{k_l N_v}{2}\sum_{i=1}^{N_v} (l_{m,i}-l_{m,0})^2 \\ &+
\frac{k_b}{2N_v}\sum_{i=1}^{N_v}\left(\frac{\hat{l}_{m,i}-\hat{l}_{m,i+1}}{l_{m,0}}\right)^2,
\end{split}
\end{equation}
which includes three terms. The first term imposes a harmonic energy penalty for changes in particle area $a_m$ from the preferred value $a_{m,0}$ and $k_a$ controls the fluctuations in particle area. The second term imposes a harmonic energy penalty for deviations in the separations $l_{m,i}$ between adjacent vertices $i$ and $i+1$ from the equilibrium length $l_{m,0}$ and $k_l$ controls fluctuations in the separations between adjacent vertices. The third term is the bending energy that favors particle shapes with $\hat{l}_{m,i}$ and $\hat{l}_{m,i+1}$ in the same direction. $k_b$ is the bending rigidity that controls fluctuations in the angle between $\hat{l}_{m,i}$ and $\hat{l}_{m,i+1}$. The factor of $N_v$ in the numerator of the second term and in the denominator of the third term of Eq.~\ref{shape_energy} ensure that $U_m^s$ does not depend on $N_v$.
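A direct transcription of Eq.~\ref{shape_energy} is given below (the function is our own sketch; the polygon area is computed with the shoelace formula). A regular $N_v$-gon at its preferred area and edge length carries only bending energy:
\begin{verbatim}
import numpy as np

def dp_shape_energy(verts, a0, l0, ka, kl, kb):
    """Shape energy U_m^s of one 2D deformable particle."""
    Nv = len(verts)
    seg = np.roll(verts, -1, axis=0) - verts     # edge vectors
    l = np.linalg.norm(seg, axis=1)              # edge lengths l_i
    lhat = seg / l[:, None]                      # unit tangents
    x, y = verts[:, 0], verts[:, 1]
    area = 0.5*abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))  # shoelace
    U_area = 0.5*ka*(area - a0)**2
    U_perim = 0.5*kl*Nv*np.sum((l - l0)**2)
    U_bend = 0.5*kb/Nv*np.sum(((lhat - np.roll(lhat, -1, axis=0))/l0)**2)
    return U_area + U_perim + U_bend

# Regular 16-gon at its preferred shape: only the bending term is nonzero.
Nv, a0 = 16, np.pi/4
l0 = np.sqrt(4*a0*Nv*np.tan(np.pi/Nv)) / Nv
R = l0 / (2*np.sin(np.pi/Nv))
th = 2*np.pi*np.arange(Nv)/Nv
verts = R*np.c_[np.cos(th), np.sin(th)]
print(dp_shape_energy(verts, a0, l0, 1.0, 1.0, 1.0))
\end{verbatim}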
We focus on hopper flows of $N=1600$ bidisperse deformable particles in 2D with half large particles and half small particles. We define effective diameters $\sigma_l = \sqrt{4a_{0,l}/\pi}$ and $\sigma_s = \sqrt{4a_{0,s}/\pi}$ for the large and small particles, respectively, and set the diameter ratio $\sigma_l / \sigma_s = 1.4$. We choose $N_v = 16$, which gives an effective friction coefficient $\mu_{\rm eff} \sim 0.6$ for the DP model with surface roughness\cite{Papanikolaou2013}. For the nearly smooth DP model, we find that $N_v \ge 16$ does not affect the properties of the hopper flows. From $a_{0,s}$ and $l_{0,s}$, we can define the dimensionless shape parameter in 2D, ${\cal A}_0 = (N_v l_{0,s})^2/4\pi a_{0,s}$. We study systems composed of nearly circular particles with ${\cal A}_0 =(N_v/\pi)\tan (\pi/N_v) \sim 1.013$, which is the value for a regular polygon with $N_v=16$ sides.
For the DP model with surface roughness, each vertex in particle $m$ is represented by a disk with diameter $\delta_m = l_{m,0}$ and the total interaction energy $U_{\rm int}$ is calculated by summing up all the repulsive interactions between overlapping circular vertices on different particles:
\begin{equation}\label{eq:9}
U_{\rm int} = \sum_{m=1}^{N}\sum_{n>m}^{N}\sum_{i=1}^{N_v}\sum_{j=1}^{N_v} \frac{\epsilon_{c}}{2} (1-\delta_{m,i}^{n,j}/\delta_{mn})^2\Theta(1-\delta_{m,i}^{n,j}/\delta_{mn}),
\end{equation}
where $\delta_{mn} = (\delta_m + \delta_n)/2$ is the average vertex diameter on particles $m$ and $n$, $\epsilon_c$ gives the characteristic energy scale of the repulsive interactions between vertices, and $\delta_{m,i}^{n,j}$ is the separation between vertex $i$ on particle $m$ and vertex $j$ on particle $n$. For the nearly smooth DP model, we represent edges of the polygon as circulo-lines with width $\delta_m = 0.1 l_{m,0}$ and length $l_{m,i}$. The interparticle repulsive interactions still follow Eq.\ref{eq:9}, but $\delta_{m,i}^{n,j}$ represents the distance between edges $i$ and $j$ on particles $m$ and $n$, respectively. See Appendix A for more details on implementing the nearly frictionless DP model in 2D.
The wall interaction between vertex $i$ on particle $m$ and the hopper wall is
\begin{equation}
\label{dp_wall}
U_{m,i}^{w} =\frac{\epsilon_{w}}{2} (1-2d_w/\delta_{m})^2\Theta(1-2d_w/\delta_{m}),
\end{equation}
where $d_w$ is the minimum distance between vertex $i$ on particle $m$ and the hopper wall. The total potential energy $U$ is again the sum of the shape-energy function $U_{m}^{s}$, the gravitational potential energy $U_{m}^{g}$, and the particle-wall interactions $U_{m}^{w}$ over all particles plus the potential energy from particle-particle interactions $U_{\rm int}$, as given in Eq.~\ref{totalU}.
As for the SP model, we consider two types of dissipative forces acting on the deformable particles. Since we will write equations of motion for each vertex, we consider dissipative forces acting on the individual vertices. First, the viscous drag force on vertex $i$ on particle $m$ is:
\begin{equation}
\vec{F}_{m,i}^{\zeta} = - \frac{\zeta}{N_v} {\vec v}_{m,i},
\end{equation}
where ${\vec v}_{m,i}$ is the velocity of vertex $i$ on particle $m$. The kinetic friction force on vertex $i$ on particle $m$ arising from an overlap with vertex $j$ on particle $n$ is
\begin{equation}
\vec{F}_{m,i}^{\mu} = - \mu \sum_{n\neq m}^{N} \sum_{j=1}^{N_v} ({\vec v}_{m,i} - {\vec v}_{n,j}) \Theta(1-\delta_{m,i}^{n,j}/\delta_{mn}).
\end{equation}
Thus, for the DP model, the equation of motion for vertex $i$ on particle $m$ is
\begin{equation}
M_{m,i} \frac{\partial^2 \vec{r}_{m,i}}{\partial t^2} = -\vec{\nabla}_{r_{m,i}}U + \vec{F}_{m,i}^{\zeta} + \vec{F}_{m,i}^{\mu},
\end{equation}
where $M_{m,i} = M_m/N_v$ is the mass of vertex $i$ on particle $m$.
From Eqs.~\ref{shape_energy},~\ref{eq:9}, and~\ref{dp_wall}, we can obtain five dimensionless energy scales for the DP model in a gravitational field in 2D: $K_a = k_a\sigma_{\rm avg}^2/(g\rho)$, $K_l = k_l/(g\rho)$, $K_{b} = k_{b}/(g\rho \sigma_{\rm avg}^4)$, $E_w = \epsilon_{w}/(g\rho \sigma_{\rm avg}^2)$ and $E_c = \epsilon_{c}/(g\rho \sigma_{\rm avg}^2)$, where $\sigma_{\rm avg}=(\sigma_s+\sigma_l)/2$. We choose $K_a>10^4$ so that the fluctuations in the particle areas are negligible. We also set $E_c = E_w = 10^4$ to minimize vertex-vertex and vertex-wall overlaps. We will vary $K_l$ and $K_b$ to determine their effects on the flow rate. The time $t$ and flow rate $Q$ are measured in units of $t_0 = \sqrt{\sigma_{\rm avg}/g}$ and $Q_0 = \sigma_{\rm avg}^d/t_0$. The equations of motion are integrated using a modified velocity Verlet algorithm with a time step of $10^{-3} t_0$.
\subsection{Simulation Initialization}
For the DP model, we initialize the particles as regular polygons, and set the edge lengths to be equal to their equilibrium values $l_{m,0} = \sqrt{4a_{m,0} N_v \tan(\pi/N_v)}/N_v$. For both the SP and DP models, we randomly place the particles within the hopper with zero velocity. Initially, gravity is turned off, and energy minimization (using FIRE\cite{Bitzek2006}) is carried out to ensure no overlaps between the particles and the particles and the walls. After the removal of overlaps, gravity is turned on and the particles begin to fall toward the orifice. To achieve continuous flow, particles that exit the hopper orifice are placed back into the left side of the hopper in contact with one of the bulk particles with the same velocity as the particle it is touching. A particle is considered outside of the hopper (and does not contribute to the flow rate) when it first exits the orifice. However, particles are put back into the hopper only after they fall at least two particle diameters past the orifice.
\begin{figure}[!tp]
\centering
\includegraphics[width=0.8\columnwidth]{exponent.pdf}
\centering
\caption{Area flow rate $Q$ versus orifice width $w/\sigma_{\rm avg}$ for hopper flows in 2D using the SP and DP models with (a) kinetic friction only, $\mu/\mu_0 = 10\sqrt{10}$, and (c) viscous drag only, $\zeta/\zeta_0 = 1/\sqrt{10}$. We consider the SP model with $E_{sp}=10^2$ (asterisks) and $10^4$ (squares), the frictionless DP model with $K_l=10$ and $K_b=10^{-1}$ (crosses) and $K_l=10$ and $K_b=10$ (circles), and the DP model with surface roughness with $K_l=10$ and $K_b =10^{-1}$ (triangles). The solid curves in (a) and (c) are fits to the power-law scaling relation in Eq.~\ref{eq:1}. In (b) and (d), we show $\log_{10}(Q/C)$ versus $\log_{10}(w/\sigma_{\rm avg}-k)$ for the data in (a) and (c), and the dotted and dashed lines have slopes of $1/2$ and $3/2$, respectively.}
\label{fig:2}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=0.95\columnwidth]{3D.pdf}
\centering
\caption{
(a) Volume flow rate $Q$ versus orifice diameter $w/\sigma_{\rm avg}$ for hopper flows in 3D using the SP model with $E_{sp}=10^2$ and either kinetic friction only, $\mu/\mu_0 = 10\sqrt{10}$ (circles), or viscous drag only, $\zeta/\zeta_0 = 1/\sqrt{10}$ (crosses) for the dissipative forces. The solid curves in (a) are fits to the power-law scaling relation in Eq.~\ref{eq:1}. In (b), we show $\log_{10}(Q/C)$ versus $\log_{10}(w/\sigma_{\rm avg}-k)$ for the data in (a), and the dotted and dashed lines have slopes of $3/2$ and $5/2$, respectively.}
\label{fig:3D}
\end{figure}
\section{Experimental Methods}
\label{expmethods}
Below, we will compare the simulation results for hopper flows using the SP and DP models in 2D to experimental studies of quasi-2D hopper flows of oil droplets in water. In this section, we describe the details of the experimental studies. We consider silicon oil-in-water emulsions undergoing gravity-driven hopper flows in narrow channels.
The oil-in-water emulsions are prepared through the aid of a Micronit focused-flow microfluidic device. This device is capable of producing hundreds of droplets with volumes set by the relative flow rates between the continuous and dispersed phases \cite{Shah2008,Utada2007}. To stabilize the emulsions, the droplets are suspended and created in a $5\%$ Tween 20 nonionic detergent solution \cite{Xin2013}. The density of the droplets is $\rho_{\rm oil} \sim 0.936$ g/ml, and they are suspended in water with density $\rho_{\rm water} \sim 0.997$ g/ml.
The oil-in-water emulsions produced by the flow-focused microfluidic device are then injected between two $75\times50$ mm$^2$ microscope slides separated by a thin sheet of either a glass cover-slip or laser-cut plastic, ranging in thickness from $180$ to $220$ $\mu$m, in accordance with prior work on the clogging of emulsion droplets~\cite{Hong2017}. In both cases, the thickness of the sheet is sufficiently small (smaller than the smallest droplet diameter) to keep droplets from stacking. Hence, the thickness of the spacer, which also doubles as the hopper itself, confines the droplets to nearly two dimensions \cite{Desmond2013}. The chambers are sealed with Norland Optical Adhesive 68 and placed under ultraviolet light to harden. Inside the chambers, the droplets generally have polydispersity in size between 6-15\% (where the polydispersity is defined by the standard deviation of the droplet diameter divided by the mean). The mean diameter of the droplets ranged from $250$-$400$~$\mu$m between different experiments. However, the mean diameter was always larger than the chamber thickness, which ensures that the droplets are quasi-two-dimensional. Across all experiments, the average droplet diameter is $\sigma_{\rm avg}\sim 315$~$\mu$m. Occasionally, the oil droplets coalesce into larger droplets with diameters significantly greater than 400 $\mu$m; however, these larger droplets are either among the last droplets to pass through the opening, thus acting solely as sources of pressure, or the very first, thus contributing nothing to the subsequent flow.
Two hopper geometries were used for these experimental studies: the first has two walls oriented at $45^\circ$, and the second has one $45^\circ$ wall facing a $0^\circ$ wall (that is, a wall parallel to the direction of droplet motion). The $45^\circ/0^\circ$ geometry is made with cover-slip glasses. The $45^\circ/45^\circ$ geometry uses thin sheets of plastic that are laser cut into the desired shapes. The length and width of the glass and plastic hoppers are chosen to appropriately fit the microscope slide, ensuring room for hundreds of droplets, but small enough to ensure that the droplets have enough space to clear the orifice unimpeded by droplets that had gathered outside of the hopper. The range of the hopper openings is $w/\sigma_{\rm avg} \sim 1.7$ to $12.3$.
To initialize the experiments, a large air bubble is introduced into the sample chamber to clog the opening. This allows droplets to stack against the bubble and create a well-packed initial condition. We then press the sample chamber which induces the air bubble to exit, thus initiating the oil droplet flow. To observe the flow we rotate a microscope $90^\circ$, aligning the stage parallel to the direction of gravity and viewing the sample with a horizontally directed microscope objective ($1.6\times$). An external lamp is used for illumination and images are taken with a digital camera recording at $0.75$ fps. Using image analysis, we obtain the droplet positions and areas, and use standard methods to track the droplet motion \cite{Crocker1996}.
Similar to the simulations, the time and area flow rate units in the experiments are defined as $t_0 = \sqrt{\sigma_{\rm avg}/g_{\rm eff}}$ and $Q_0 = \sigma_{\rm avg}^2/t_0$. We use the mean diameter across all of the experiments, $\sigma_{avg} \sim 315$~$\mu$m, and $g_{\rm eff} = g(\rho_{\rm water} - \rho_{\rm oil})/\rho_{\rm oil}$ is the acceleration imposed by oil-in-water buoyancy and $g \sim 9.8$ m/s$^2$ is the gravitational constant.
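For concreteness, plugging in the measured densities and mean droplet diameter gives the experimental units (a quick numerical sketch; values as quoted above):
\begin{verbatim}
g, rho_w, rho_o = 9.8, 0.997, 0.936   # m/s^2, g/ml, g/ml
sigma_avg = 315e-6                    # m

g_eff = g*(rho_w - rho_o)/rho_o       # ~0.64 m/s^2
t0 = (sigma_avg/g_eff)**0.5           # ~0.022 s
Q0 = sigma_avg**2/t0                  # ~4.5e-6 m^2/s
print(g_eff, t0, Q0)
\end{verbatim}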
\section{Results}
\label{results}
In this section, we describe the results of our numerical simulations of hopper flows of the soft particle model (SP) in 2D and 3D and the deformable particle (DP) model in 2D. We investigate the scaling of the flow rate $Q$ versus the orifice width $w$ as a function of the ratio of the viscous drag and kinetic friction coefficients, particle deformability, surface roughness, and spatial dimension. We find that the flow rate $Q$ scales as a power-law in the orifice width $w/\sigma_{\rm avg}$ with a cutoff $k$, $Q = C(w/\sigma_{\rm avg}-k)^{\beta}$. The power-law scaling exponent $\beta$ depends strongly on the ratio of the viscous drag and kinetic friction coefficients $\lambda =\zeta/\mu$, but it does not depend on the particle deformability or surface roughness. In particular, if the particles only experience kinetic friction, without viscous drag, $\beta=d-1/2$, as found by Beverloo and others for hopper flows of granular materials. However, if the particles only experience viscous drag, without kinetic friction, $\beta=d-3/2$. Further, we show that the scaling exponent $\beta$ varies continuously with $\lambda$ between $\beta = d-1/2$ in the $\lambda \rightarrow 0$ limit and $d-3/2$ in the $\lambda \rightarrow \infty$ limit, with a midpoint $\lambda_c$ that decreases with decreasing hopper opening angle $\theta_w$. We show that the change in the power-law scaling exponent $\beta$ is associated with changes in the spatio-temporal dynamics of the flows. In particular, the gradient in the velocity profile varies more strongly with the orifice size $w$ for flows with $\beta=d-3/2$ than those with $\beta=d-1/2$. We then show that the offset $k$ at which $Q(k\sigma_s)=0$ decreases from values above $3$ to below $1$ as the particle deformability increases. We also find that both the soft and deformable particle models in the $\lambda \rightarrow \infty$ limit are able to recapitulate $Q(w)$ obtained from experimental studies of quasi-2D hopper flows of oil droplets in water.
\begin{figure}[!ht]
\includegraphics[width=0.95\columnwidth]{drag2friction.pdf}
\centering
\caption{Power-law scaling exponent $\beta$ from Eq.~\ref{eq:1} plotted versus the ratio of the viscous drag and kinetic friction coefficients $\lambda=\zeta/\mu$ for hopper flows in (a) 2D and (b) 3D. In 2D, we consider the SP model with $E_{sp}=50$ (triangles) and $10^3$ (diamonds), frictional DP model with $K_l=10$ and $K_b=10^{-1}$ (stars), and the frictionless DP model with $K_l=10$ and $K_b=10^{-1}$ (crosses), $K_l=10$ and $K_b=10^2$ (squares), and $K_l=10^3$ and $K_b=10^{-1}$ (asterisks), all with hopper opening angle $\theta_w = 90^{\circ}$. In 3D, we consider the SP model with $E_{sp}=10^2$ (circles). In (a) and (b), the solid curves are fits to the sigmoid in Eq.~\ref{eq:b_sig} and the horizontal dashed lines indicate $\beta = 5/2$, $3/2$, and $1/2$.}
\label{fig:drag2friction}
\end{figure}
In Fig.~\ref{fig:2} (a), we show the area flow rate $Q$ versus the orifice width $w/\sigma_{\rm avg}$ for the soft particle and deformable particle models with kinetic friction only (i.e. $\mu/\mu_0 =10\sqrt{10}$ and $\zeta=0$) in 2D for hopper opening angle $\theta_w =90^{\circ}$. We compare results for the SP model with $E_{sp} = 10^2$ and $10^4$, the frictionless DP model with $K_l=10$ and $K_b = 10^{-1}$, $K_l=10$ and $K_b =10$, and $K_l=10^3$ and $K_b =10^{-1}$, and the frictional DP model with $K_l =10$ and $K_b=10$. $Q(w)$ for all of these systems can be fit to the power-law scaling relation in Eq.~\ref{eq:1}. While $C$ and $k$ for these systems vary, the power-law scaling exponent $\beta=3/2$ is the same for all models as shown in Fig.~\ref{fig:2} (b).
(Note that both the SP and DP models can be studied in the rigid-particle limit, i.e. $E_{sp} \rightarrow \infty$ for the SP model and $K_b \rightarrow \infty$ for the DP model. In this limit, $Q(w)$ is the same for both models as shown in Appendix B.) Since we compared systems with different values of $E_{sp}$, $K_b$, and $K_l$ and with different values of surface roughness and obtained the same values of $\beta$, these results emphasize that $\beta$ does not depend strongly on particle deformability and surface roughness.
\begin{figure}[!ht]
\includegraphics[width=0.95\columnwidth]{vprofile.pdf}
\centering
\caption{Average speed of the particles in the direction of gravity at the center of the hopper $v_g$ as a function of distance $h/\sigma_{\rm avg}$ above the hopper orifice in 2D using the SP model with $E_{sp} = 10^2$ and dissipative forces (a) with $\lambda \rightarrow 0$ that yield $\beta =d-1/2$, (b) with $\lambda \sim \lambda_c$ that yield $\beta \sim d-1$, and (c) with $\lambda \rightarrow \infty$ that yield $\beta =d-3/2$. The arrows indicate increasing orifice diameters from $w/\sigma_{avg} = 4.0$ (blue) to $10.6$ (red).}
\label{fig:vprofile}
\end{figure}
In Fig.~\ref{fig:2} (c), we show similar results for $Q$ versus $w/\sigma_{\rm avg}$ for same 2D models, but for systems with viscous drag forces only (i.e. $\mu =0$ and $\zeta/\zeta_0=1/\sqrt{10}$) for the dissipative forces. All of the data can also be fit to the power-law scaling relation in Eq.~\ref{eq:1}. Again, $C$ and $k$ vary, but the power-law scaling exponent $\beta=1/2$ is the same for all models, as shown in Fig.~\ref{fig:2} (d). Clearly, the power-law scaling exponent $\beta$ does not depend on particle deformability and surface roughness, but it depends strongly on the type of dissipative forces that are included.
In Fig.~\ref{fig:3D} (a), we show similar results for the volume flow rate $Q$ versus orifice width $w/\sigma_{\rm avg}$ for the SP model in 3D
with $E_{sp}=10^2$ and either kinetic friction forces only ($\mu/\mu_0=10\sqrt{10}$, $\zeta=0$) or viscous drag forces only ($\mu=0$, $\zeta/\zeta_0=1/\sqrt{10}$). $Q(w)$ for both systems can be fit by Eq.~\ref{eq:1} and have power-law scaling exponents $\beta=5/2$ and $3/2$ in the limits $\lambda \rightarrow 0$ and $\infty$, respectively, as shown in Fig.~\ref{fig:3D} (b). Figs.~\ref{fig:2} and~\ref{fig:3D} illustrate that $\beta=d-1/2$ in the $\lambda \rightarrow 0$ limit and $\beta =d-3/2$ in the $\lambda \rightarrow \infty$ limit.
\begin{figure}[!ht]
\includegraphics[width=0.9\columnwidth]{hopperAngle.pdf}
\centering
\caption{(a) Power-law scaling exponent $\beta$ versus the ratio $\lambda$ of the viscous drag and kinetic friction coefficients for the frictionless DP model in 2D with $K_{l} = 10$ and $K_b = 10^{-1}$ for hopper opening angles $\theta_w = 90^\circ$ (crosses), $60^\circ$ (circles), $30^\circ$ (asterisks), and $20^\circ$ (triangles). The solid lines are fits to Eq.~\ref{eq:b_sig}. The horizontal dotted and dashed lines indicate $\beta=3/2$ and $1/2$. (b) Ratio of the average magnitude of the drag force on a particle to the average magnitude of the kinetic friction force on a particle $|{\vec F}^{\zeta}|/|{\vec F}^{\mu}|$ plotted versus $\theta_w$ for the systems in (a) at $\lambda = 10^{-2}$.}
\label{fig:hopperAngle}
\end{figure}
What is the value of the power-law exponent $\beta$ at intermediate values of $\lambda$? In Fig.~\ref{fig:drag2friction}, we show $\beta$ from fits of $Q(w)$ to Eq.~\ref{eq:1} for the SP and DP models in 2D (for hopper opening angle $\theta_w = 90^{\circ}$) and the SP model in 3D versus the ratio of the viscous drag and kinetic friction coefficients $\lambda$. $\beta$ varies continuously with $\lambda$ in both 2D and 3D and can be described by a sigmoidal function:
\begin{equation}
\beta = \frac{1}{2}\left(2(d-1)-\tanh\left[\log_{10}\left(\lambda/\lambda_c\right)^{1/b}\right]\right),
\label{eq:b_sig}
\end{equation}
where $\lambda_c \sim 0.05$ in 2D and $\sim 0.07$ in 3D is the characteristic value of $\lambda$ at which the power-law scaling exponent reaches the midpoint $\beta_c = d-1$, and $0< 1/b < 1$ is the stretching exponent. We show explicitly in 2D that $\beta(\lambda)$ does not depend on particle deformability and surface roughness. Further, these results do not depend on the number of particles for $N>800$, as shown in Appendix C. We find similar results for $\beta(\lambda)$ in 3D.
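The sketch below illustrates how the crossover parameters $\lambda_c$ and $1/b$ can be extracted by fitting measured exponents $\beta(\lambda)$ to the parametrisation of Eq.~\ref{eq:b_sig}; the $(\lambda,\beta)$ pairs shown are synthetic placeholders for a 2D system.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def beta_sigmoid(lam, lam_c, inv_b, d=2):
    # Sigmoidal crossover of Eq. (b_sig): midpoint beta = d-1 at
    # lam = lam_c; limits d-1/2 (lam -> 0) and d-3/2 (lam -> inf).
    return (d - 1) - 0.5 * np.tanh(inv_b * np.log10(lam / lam_c))

# Synthetic (lambda, beta) pairs for d = 2 (replace with fitted betas).
lam = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1])
beta = np.array([1.48, 1.41, 1.23, 0.90, 0.64, 0.54])

# With p0 of length two, only lam_c and 1/b are fit; d stays fixed.
popt, _ = curve_fit(beta_sigmoid, lam, beta, p0=(0.05, 0.5))
print(f"lambda_c={popt[0]:.3g}, 1/b={popt[1]:.3g}")
\end{verbatim}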
\begin{figure}[!ht]
\includegraphics[width=0.95\columnwidth]{heatMap.pdf}
\caption{(a) The offset $k$ obtained from fits of the area flow rate $Q(w)$ to Eq.~\ref{eq:1} versus $E_{sp}$ (with $\theta_w =90^{\circ}$) using the 2D SP model in the $\lambda \rightarrow \infty$ limit (stars) and $\lambda \rightarrow 0$ limit (circles). The offset $k$ for the 2D frictionless DP model on a color scale (b) from $0$ (blue) to $1.7$ (red) in the $\lambda \rightarrow 0$ limit (with $\theta_w =90^{\circ}$) and (c) from $1$ (blue) to $3.5$ (red) in the $\lambda \rightarrow \infty$ limit (with $\theta_w =90^{\circ}$) as a function of the perimeter $K_l$ and bending $K_b$ energy scales.}
\label{fig:heatMap}
\end{figure}
What is different about the spatiotemporal dynamics of the hopper flows with different values of the power-law scaling exponent $\beta$? To address this question, we calculate the velocity profiles in systems with different values of $\beta$. In Fig.~\ref{fig:vprofile}, we show (for the SP model in 2D with $\theta_w = 90^{\circ}$) the average speed of the particles in the direction of gravity at the center of the hopper $v_g$ as a function of the distance above the hopper orifice $h/\sigma_{\rm avg}$ for three ratios of the dissipative forces, $\lambda \rightarrow 0$, $\lambda \sim \lambda_c$, and $\lambda \rightarrow \infty$.
To smooth the velocity profile, we define $v_g$ at location $\vec{r}$ as $v_g(\vec{r}) = \sum_{i=1}^{N} v_{gi} \phi({\vec r}-\vec{r}_i)$, where $\vec{r}_i$ and $v_{gi}$ are the position and speed in the direction of gravity of particle $i$ and $ \phi(\vec{r}-\vec{r}_i) =(\sqrt{2 \pi} \sigma_{\rm avg})^{-2} \exp(-|\vec{r}-\vec{r}_i|^2/2\sigma_{\rm avg} ^2)$ is a Gaussian coarse-graining function\cite{PhysRevLett.114.238002,Goldhirsch2010}.
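A minimal sketch of this coarse-graining step is given below, assuming \texttt{r\_particles} and \texttt{vg\_particles} hold the instantaneous particle positions and vertical speeds, and \texttt{r\_grid} the measurement points; the function and variable names are ours.
\begin{verbatim}
import numpy as np

def coarse_grained_vg(r_grid, r_particles, vg_particles, sigma_avg=1.0):
    # Gaussian coarse-graining kernel used in the text:
    # phi(dr) = (sqrt(2*pi)*sigma_avg)^(-2) * exp(-|dr|^2/(2*sigma_avg^2))
    norm = (np.sqrt(2.0 * np.pi) * sigma_avg) ** (-2)
    vg = np.zeros(len(r_grid))
    for i, r in enumerate(r_grid):
        dr2 = np.sum((r_particles - r) ** 2, axis=1)
        weights = norm * np.exp(-dr2 / (2.0 * sigma_avg ** 2))
        vg[i] = np.sum(vg_particles * weights)  # v_g(r) = sum_i v_gi*phi
    return vg
\end{verbatim}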
For systems with $\lambda \rightarrow 0$ and $\beta = d - 1/2$, $v_g(h=0) \sim w^\beta/w^{d-1} \sim w^{1/2}$, and thus $v_g(h=0)$ increases with the orifice diameter $w$, as shown in Fig.~\ref{fig:vprofile} (a). For systems with $\lambda \rightarrow \infty$ and $\beta = d - 3/2$, $v_g(h=0) \sim w^\beta/w^{d-1} \sim w^{-1/2}$, and thus $v_g(h=0)$ decreases with increasing $w$, as shown in Fig.~\ref{fig:vprofile} (c). In contrast, the average speed in the direction of gravity far from the hopper orifice, $v_g(h \rightarrow \infty) \sim w^{\beta}/W\sim w^{\beta}$, increases with $w$ for all values of $\lambda$, as shown in Fig.~\ref{fig:vprofile} (a)-(c). Because of the difference in how $v_g(0)$ and $v_g(\infty)$ depend on the orifice width $w$ for different values of $\lambda$, the gradient of the velocity profile $dv_g/dh$ can easily distinguish flows with small versus large values of $\lambda$. As shown in Fig.~\ref{fig:vprofile} (a), for $\lambda \rightarrow 0$, $dv_g/dh$ does not depend strongly on $w$, suggesting a weak variation of the pressure profile with $w$. However, for $\lambda \rightarrow \infty$, $dv_g/dh$ decreases strongly with increasing $w$, indicating large pressure profiles in systems with small $w$. In this limit, the large differences in viscous drag forces caused by the velocity difference $v_g(0)-v_g(\infty)$ are balanced by overlap forces, which give rise to the large pressure profile. As expected, $v_g$ near the orifice for the intermediate case $\lambda \sim \lambda_c$ possesses a very weak dependence on $w$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\columnwidth]{exp.pdf}
\caption{Oil-in-water emulsions flowing through a plastic hopper with orifice diameter $w \sim 180$ $\mu$m and (a) $45^{\circ}$/$45^{\circ}$ and (b) $45^{\circ}$/$0^{\circ}$ wall geometries. The droplets have an average diameter $\sigma_{\rm avg} \sim 304$ $\mu$m with a polydispersity $\Delta \sigma/\sigma_{\rm avg} \sim 7 \%$. (c) Area flow rate $Q$ plotted versus orifice diameter $w$. The solid line provides a fit to Eq.~\ref{eq:1} with $\beta \sim 0.49$, $k \sim 1.47$, and $C \sim 1.6 \times 10^{-4}$. (d) $Q/C$ plotted versus $\log_{10} (w/\sigma_{\rm avg}-k)$ for the data in (c). The dashed line has a slope of $1/2$. In (c) and (d), we show data for both $45^{\circ}$/$45^{\circ}$ (circles) and $45^{\circ}$/$0^{\circ}$ (crosses) wall geometries.}
\label{fig:exp}
\end{figure}
We have shown that $\beta(\lambda)$ does not depend on particle deformability and surface roughness, but it does depend strongly on the nature of the dissipative forces (i.e. whether viscous drag or kinetic frictional forces dominate) and the resulting velocity profile in the hopper. These results suggest that $\beta(\lambda)$ can be altered by varying the hopper opening angle $\theta_w$ since changes in $\theta_w$ modify the velocity profile. In Fig.~\ref{fig:hopperAngle} (a), we show $\beta(\lambda)$ from hopper flows using the frictionless DP model in 2D with $\theta_w = 90^{\circ}$, $60^{\circ}$, $30^{\circ}$, and $20^{\circ}$. Over this range in $\theta_w$, the characteristic $\lambda_c$ at which $\beta$ reaches its midpoint decreases from $5 \times 10^{-2}$ to $6 \times 10^{-3}$. As the hopper wall angle $\theta_w$ decreases (i.e. the hopper walls become more aligned with the direction of gravity), in the regime $\lambda \sim \lambda_c$, the ratio of the average force stemming from the viscous drag to that stemming from the kinetic friction $|\vec{F}^\zeta|/|\vec{F}^\mu|$ increases (Fig.~\ref{fig:hopperAngle} (b)), and thus $\lambda_c$ must decrease with decreasing $\theta_w$. In the low-$\theta_w$ limit (i.e. $\theta_w \le 30^{\circ}$), the ratio stops increasing and $\lambda_c$ reaches a plateau value, $\sim 5 \times 10^{-3}$. Note that the time required to reach steady-state diverges as $\theta_w \rightarrow 0$, and thus we are limited in the values of $\theta_w$ that we can study.
In contrast to the power-law scaling exponent $\beta$, the offset $k$ at which $Q(k\sigma_{\rm avg})=0$ depends on particle deformability and surface roughness.
Previous studies have shown that $k$ varies from $\sim 1.3$ to $2.9$ as the static friction coefficient increases\cite{BEVERLOO1961260,ANAND20085821}.
In Fig.~\ref{fig:2}, we showed similarly that $k$ increases with surface roughness for the DP model. How does the offset $k$ depend on particle deformability? In Fig.~\ref{fig:heatMap} (c), we show
$k$ as a function of the perimeter $K_l$ and bending $K_b$ energy scales for the frictionless DP model in 2D in the $\lambda \rightarrow \infty$ limit (for $\theta_w = 90^{\circ}$). At small $K_l$, $k$ increases from $\sim 1$ to $\sim 3.5$ as $K_b$ increases from $10^{-2}$ to above $10^2$, suggesting the formation of transient multi-particle arches in the rigid-particle limit. (Note that we have shown in Appendix B that the DP model reaches the hard-particle limit for $K_b > 10^{2}$.) We find similar results for the 2D SP model for $\lambda \rightarrow \infty$ (Fig.~\ref{fig:heatMap} (a)): the offset $k$ increases from $k \sim 1$ to $3.5$ as $E_{sp}$ approaches the rigid-particle limit. At small $K_b$ for the DP model, $k$ increases, but only from $k \sim 1$ to $2$ as $K_l$ increases from $1$ to $10^3$, suggesting the formation of small arches and increased particle rigidity. However, $K_l \gg 10^3$ is required to reach $k \sim 3.5$ as found in the rigid-particle limit for the DP model when increasing the bending energy. In addition, we find that the prefactor in Eq.~\ref{eq:1} (with $N=1600$) $C \sim 0.45$ for all $K_b$ and $K_l$ for the DP model and all $E_{sp}$ for the SP model, emphasizing that $C$ is weakly dependent on particle deformability. We show the system size dependence of the prefactor in Appendix C.
For hopper flows with $\lambda \rightarrow 0$, the increase in the offset $k$ is much less pronounced. (See Fig.~\ref{fig:heatMap} (a) and (b).) For example, for the 2D DP model, $k < 1$ for $K_b = 10^{-2}$ and $k \sim 1.5$ for $K_b = 10^2$ in the rigid-particle limit. Thus, large multi-particle arches do not form frequently in $\lambda \rightarrow 0$ hopper flows. Again, $C \sim 0.15$ for all $K_b$, $K_l$ and $E_{sp}$ values (for $N=1600$).
As discussed in the Introduction, numerous experimental studies have shown that hopper flows of granular materials with static and kinetic frictional forces possess $\beta =d-1/2$ and can be modeled quantitatively using the SP model. Here, we present the results from quasi-2D experiments of hopper flows of oil droplets in water. (See Fig.~\ref{fig:exp} (a) and (b).) Unlike the simulations where the number of particles in the hopper is kept constant (by replenishing them when particles exit), the number of particles in the hopper experiments decreases with time. The hopper flow is driven by hydrostatic pressure, which scales with the height $h_{\rm max}$ of the droplet pile pushing out of the opening. Given the triangular geometry, this distance can be related to the number of droplets $N$ that have yet to exit, $h_{\rm max} \sim \sqrt{N}$. Hence, the droplet flux can be written as
\begin{equation}\label{first}
\frac{dN}{dt}=c_0\sqrt{N},
\end{equation}
where $c_0$ has units of inverse time. This relation is experimentally observed for a large range of $N$, with slight deviations as the first $\sim 100$ and last $\sim 100$ droplets flow out due to transient effects. Fitting the steady-state data gives values for $c_0$, which we non-dimensionalize as $Q = c_0 \sqrt{\sigma_{\rm avg}/g_{\rm eff}}$. Fig.~\ref{fig:exp} (c) and (d) show the results for the area flow rate $Q$ versus the orifice width $w/\sigma_{\rm avg}$. While we have two experimental geometries, the results for the area flow rate $Q$ are identical. $Q(w)$ can be fit by the power-law scaling relation in Eq.~\ref{eq:1} with $\beta \sim 0.49$ and $k\sim 1.5$. These values of $\beta$ and $k$ are consistent with the simulation results for $\lambda \rightarrow \infty$ and $E_{sp} \sim 10^2$ for the SP model and the $k \sim 1.5$ contour for the frictionless DP model in Fig.~\ref{fig:heatMap} (c). These results emphasize that the kinetic frictional forces are weak relative to the viscous drag forces in hopper flows of oil droplets in water.
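For completeness, we record how $c_0$ is extracted from the data. With the convention that $N(t)$ counts the droplets that have yet to exit (so that $N$ decreases in time and the outflux is $-dN/dt = c_0\sqrt{N}$), Eq.~\ref{first} integrates to
\begin{equation}
\sqrt{N(t)} = \sqrt{N(0)} - \frac{c_0 t}{2},
\end{equation}
so that $c_0$ is twice the magnitude of the slope of $\sqrt{N}$ versus $t$, and a linear fit to $\sqrt{N(t)}$ over the steady-state window yields $c_0$ directly.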
\begin{figure}[!ht]
\includegraphics[width=0.9\columnwidth]{frictionless.pdf}
\centering
\caption{Schematic of the frictionless DP model to illustrate the contact distance $\delta_{m,i}^{n,j}$ between vertex $i$ on particle $m$ with position ${\vec r}_{m,i}$ and vertex $j$ on particle $n$ with position ${\vec r}_{n,j}$. ${\vec l}_{m,i}$ is the vector pointing from ${\vec r}_{m,i}$ to ${\vec r}_{m,i+1}$ and ${\hat n}_{m,i} \cdot {\vec l}_{m,i} =0$. The definition of $\delta_{m,i}^{n,j}$ depends on the location of the intersection point $P$ of the line along ${\vec l}_{m,i}$ and the line that is perpendicular to ${\vec l}_{m,i}$ that includes ${\vec r}_{n,j}$. If point $P$ is between ${\vec r}_{m,i}$ and ${\vec r}_{m,i+1}$, $\delta_{m,i}^{n,j} = {\vec r}_{m,i}^{~n,j} \cdot {\hat n}_{m,i}$ as shown in (a), otherwise $\delta_{m,i}^{n,j} = |{\vec r}_{m,i}^{~n,j}|$ as shown in (b).}
\label{fig:frictionless}
\end{figure}
\section{Discussion and Conclusions}
\label{discussion}
In this article, we carried out extensive numerical simulations of gravity-driven hopper flows of particulate media in 2D and 3D using the soft (SP) and deformable particle (DP) models. We found several important results. First, we showed quite generally that the flow rate $Q$ versus orifice width $w$ obeys the power-law scaling relation: $Q(w) = C(w/\sigma_{\rm avg}-k)^{\beta}$. While $k$ depends on the particle deformability and surface roughness, the exponent $\beta$ does not. Instead, $\beta$ is controlled by the ratio of the viscous drag to the kinetic friction coefficients $\lambda$, and $\beta$ varies continuously from $\beta = d-1/2$ in the $\lambda \rightarrow 0$ limit to $\beta = d-3/2$ in the $\lambda \rightarrow \infty$ limit. The characteristic ratio $\lambda_c$ at which $\beta$ reaches its midpoint $\beta_c$ can be tuned by varying the hopper opening angle $\theta_w$, since changing $\theta_w$ alters the ratio of the average viscous drag force to the average kinetic friction force. The spatiotemporal dynamics of the flows differ for systems with different power-law exponents. In particular, the gradients of the velocity and pressure profiles vary more strongly with the orifice width for $\beta = d-3/2$ than for $\beta=d-1/2$. We also found that the offset $k$ increases with particle stiffness until $k \sim k_{\rm max}$ in the hard-particle limit, where $k_{\rm max} \sim 3.5$ in the $\lambda \rightarrow \infty$ limit and $k_{\rm max} \sim 1.6$ in the $\lambda \rightarrow 0$ limit. In addition, we showed that both the SP and DP models are able to recapitulate the flow rate $Q(w)$ from experimental studies of quasi-2D hopper flows of oil droplets in water.
These results suggest a number of promising future research directions. First, the current studies focused on nearly spherical particles. How does the power-law scaling exponent $\beta(\lambda)$ depend on particle shape? By changing the reference shape parameter ${\cal A}_0$ in the DP model, we can determine $\beta(\lambda)$ as a function of the particle shape. Second, in the current studies, we included viscous drag and kinetic friction forces between particle pairs, but we did not include kinetic friction forces between the particles and the side walls with kinetic friction coefficient $\nu$. How does the power-law scaling exponent vary with the dimensionless ratios $\zeta/\nu$ and $\mu/\nu$ that quantify the dominant dissipative forces? Third, in the current studies, we found that both the SP and DP models are able to recapitulate $Q(w)$ in the experimental studies of hopper flows of emulsion droplets. However, in future studies, we seek a more quantitative approach where the simulations can recover the particle shapes during the hopper flows in experiments. To do this, we will refine the model for surface tension in the DP model. In addition, we will simulate hopper flows of emulsion droplets in the intermittent and clogging regime for $w \sim k\sigma_{\rm avg}$. In this regime, we expect qualitatively different results for the SP and DP models, since truly deformable particles can significantly change their shapes, but maintain their volume, to alleviate clogs in hopper flows.
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{rigid.pdf}
\caption{Area flow rate $Q$ versus orifice width $w/\sigma_{\rm avg}$ for hopper flows in 2D using the SP model with $E_{sp}=10^2$ (circles) and $10^5$ (squares) and the frictionless DP model with $K_l=10$ and $K_b=10^{-1}$ (crosses) and $K_l=10$ and $K_b=10^2$ (asterisks) with (a) kinetic friction forces only ($\mu/\mu_0 = 10\sqrt{10}$, $\zeta=0$) and (b) viscous drag forces only ($\zeta/\zeta_0 = 1/\sqrt{10}$, $\mu=0$). The solid curves are fits to the power-law scaling relation for $Q(w)$ in Eq.~\ref{eq:1}. In the hard-particle limit, we find (a) $C \approx 0.12$ and $k \approx 1.6$ for $\lambda \rightarrow 0$ and (b) $C \approx 0.42$ and $k \approx 3.4$ for $\lambda \rightarrow \infty$.}
\label{fig:rigid}
\end{figure}
\section*{Appendix A}
\label{app:A}
In this Appendix, we include more details concerning the detection of contacts between frictionless deformable particles in 2D. For frictionless deformable particles, the $i$th vertex on a given particle $m$ is modeled as a circulo-line made up of a rectangular region with length $l_{m,i}$ plus a pair of half-circular end caps with radius $\delta_m$. Here, we describe how to calculate the closest distance $\delta_{m,i}^{n,j}$ between vertex $i$ on particle $m$ and vertex $j$ on particle $n$ as shown in Fig.~\ref{fig:frictionless}. We first find the line $L$ that includes the point ${\vec r}_{n,j}$ and is perpendicular to ${\vec l}_{m,i}$. If line $L$ intersects the line along ${\vec l}_{m,i}$ at a point between ${\vec r}_{m,i}$ and ${\vec r}_{m,i+1}$, the closest distance between vertices $i$ and $j$ is the distance between ${\vec r}_{n,j}$ and the line along ${\vec l}_{m,i}$, i.e. $\delta_{m,i}^{n,j} = \vec{r}_{m,i}^{~n,j} \cdot \hat{n}_{m,i}$ as shown in Fig.~\ref{fig:frictionless} (a). In this case, the repulsive pair force from $U_{\rm int}$ is in the direction of $\hat{n}_{m,i}$ (perpendicular to the surface of particle $m$), and therefore it is a frictionless interaction\cite{Kim2022}.
If line $L$ does not intersect the line along ${\vec l}_{m,i}$ at a point between ${\vec r}_{m,i}$ and ${\vec r}_{m,i+1}$, the closest distance between vertices $i$ and $j$ is $\delta_{m,i}^{n,j} = |{\vec r}_{m,i}^{~n,j}|$ as shown in Fig.~\ref{fig:frictionless} (b). Again, in this case, the gradient of $U_{\rm int}$ is along ${\hat r}_{m,i}^{~n,j}$, and thus the repulsive interaction force is frictionless.
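A minimal sketch of this contact-distance calculation in 2D is given below; the sign convention for the outward normal $\hat{n}_{m,i}$ depends on the vertex ordering and is an assumption here.
\begin{verbatim}
import numpy as np

def contact_distance(r_mi, r_mi1, r_nj):
    # Vector l_{m,i} pointing from r_{m,i} to r_{m,i+1}.
    l = r_mi1 - r_mi
    # Parameter t of the intersection point P of the line along l
    # with the perpendicular through r_{n,j}; P lies between the two
    # endpoints exactly when 0 <= t <= 1.
    t = np.dot(r_nj - r_mi, l) / np.dot(l, l)
    if 0.0 <= t <= 1.0:
        # Case (a): delta = r_{m,i}^{n,j} . n_hat_{m,i}, with n_hat a
        # unit vector perpendicular to l.
        n_hat = np.array([-l[1], l[0]]) / np.linalg.norm(l)
        return np.dot(r_nj - r_mi, n_hat)
    # Case (b): delta = |r_{m,i}^{n,j}|.
    return np.linalg.norm(r_nj - r_mi)
\end{verbatim}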
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{spV.pdf}
\caption{(a) Power-law scaling exponent $\beta$ plotted versus the ratio of the viscous drag to the kinetic friction coefficients $\lambda$ from the data in Fig.~\ref{fig:drag2friction} (a) for the SP and DP models in 2D with $\theta_w =90^{\circ}$. We also show data for the SP model in 2D with $E_{sp}=20$ (circles). (b) Schematic of the $2w \times w$ rectangular region in 2D over which the particle number density $\rho_n$ is measured. (c) The power-law scaling exponent $\beta'(\lambda)$ obtained by fitting the corrected area flow rate $Q'=Q a_{\rm eff}/a_{\rm avg}$ to Eq.~\ref{eq:1}.}
\label{fig:spV}
\end{figure}
\section*{Appendix B}
\label{app:B}
In this Appendix, we show results for the area flow rate $Q(w)$ in the rigid-particle limit for both cases $\lambda \rightarrow 0$ and $\lambda \rightarrow \infty$. We also show that conservation of total particle area is important for accurately modeling the area flow rate in hopper flows of soft and deformable particles in 2D. (Similar results are found in 3D.) As discussed in Sec.~\ref{sim_methods}, the SP model does not explicitly model particle shape change, but instead mimics particle deformability by allowing overlaps between neighboring particles. As a result, the SP model does not conserve total particle area. In contrast, the DP model includes a term in the shape-energy function to conserve particle area as particles change their shapes. (See Eq.~\ref{shape_energy}.) However, in the large-$E_{sp}$ limit for the SP model, where particle overlaps in the SP model are small, and in the large-$K_b$ limit for the DP model, the area flow rate $Q(w)$ is the same for these two models. As shown in Fig.~\ref{fig:rigid}, $Q(w)$ is nearly identical for the SP model with $E_{sp}=10^5$ and for the frictionless DP model with $K_b=10^2$ in the $\lambda \rightarrow 0$ and $\lambda \rightarrow \infty$ limits. For $\lambda \rightarrow 0$ with $\beta= 3/2$, the offset $k \approx 1.6$ in the hard-particle limit, in close agreement with experiments of hopper flows of frictional grains. For $\lambda \rightarrow \infty$ with $\beta= 1/2$, the offset $k \approx 3.4$ in the hard-particle limit. Note that the offset $k$ has different values for $\lambda \rightarrow 0$ and $\lambda \rightarrow \infty$ in the hard-particle limit, which suggests that $k$ is controlled by the flow dynamics and cannot be determined by the hopper geometry alone~\cite{Wilson2014,Fan2022}.
\begin{figure}[!ht]
\includegraphics[width=\columnwidth]{sysSize.pdf}
\caption{(a) Power-law scaling exponent $\beta$ versus the ratio of the viscous drag and the kinetic friction coefficients $\lambda$ for the SP model in 2D with $E_{sp}=50$, $\theta_w = 90^{\circ}$, and $N=800$ (crosses), $1600$ (circles), and $3200$ (asterisks). The horizontal dashed lines indicate $\beta = 3/2$ and $1/2$. The solid line is a fit to Eq.~\ref{eq:b_sig}. (b) Prefactor $C$ in Eq.~\ref{eq:1} normalized by the value at $N=800$ versus the system size $N$ for the 2D SP model for both $\lambda \rightarrow 0$ (circles) and $\lambda \rightarrow \infty$ (stars). }
\label{fig:sysSize}
\end{figure}
For extremely soft particles, the overlaps that occur in the SP model are sufficiently large that they influence the hopper flow dynamics. For example, in Fig.~\ref{fig:spV} (a), we show the power-law exponent $\beta$ as a function of $\lambda$ for the SP model with $E_{sp}=20$ in addition to all of the data in Fig.~\ref{fig:drag2friction} (a). For these data, the area flow rate $Q$ was calculated by counting the number of mass points that flow past the orifice opening per unit time divided by the particle areal mass density $\rho$. The power-law exponent $\beta(\lambda)$ for the SP model with extremely large overlaps ($E_{sp}=20$) deviates from all of the other data. We can correct $Q(w)$ for the SP model with large particle overlaps by determining the true particle area flowing through the hopper orifice. To do this, we consider a $2w \times w$ rectangular region near the hopper orifice as shown in Fig.~\ref{fig:spV} (b) and measure the number density $\rho_n = N/A$ in this region. The effective particle area in this region is $a_{\rm eff}=1/\rho_n$, and the corrected area flow rate is $Q' = Q a_{\rm eff}/a_{\rm avg}$, where $a_{\rm avg} = (a_s + a_l)/2$. In Fig.~\ref{fig:spV} (c), we show the power-law scaling exponent $\beta'$ obtained from fitting $Q'$ to Eq.~\ref{eq:1}. We find that the data from Fig.~\ref{fig:drag2friction} (a) (where the particle overlaps are small) do not change and $\beta'=\beta$. However, $\beta'$ for the SP model with $E_{sp}=20$ shifts so that it falls on the rest of the data from Fig.~\ref{fig:drag2friction} (a).
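The correction can be implemented as in the sketch below, where the placement of the $2w \times w$ sampling window relative to the orifice is our convention and \texttt{x}, \texttt{y} are the instantaneous particle coordinates.
\begin{verbatim}
import numpy as np

def corrected_flow_rate(Q, x, y, w, a_avg):
    # Count particles in a 2w x w window directly above the orifice
    # (centred at x = 0), and form the number density rho_n = N/A.
    in_box = (np.abs(x) < w) & (y > 0.0) & (y < w)
    rho_n = np.sum(in_box) / (2.0 * w * w)
    a_eff = 1.0 / rho_n          # effective particle area
    return Q * a_eff / a_avg     # corrected area flow rate Q'
\end{verbatim}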
\section*{Appendix C}
In this Appendix, we investigate how the power-law exponent $\beta(\lambda)$ obtained by fitting $Q(w)$ to Eq.~\ref{eq:1} depends on system size for the SP model in 2D. In Fig.~\ref{fig:sysSize} (a), we show $\beta(\lambda)$ for the 2D SP model with $E_{sp}=10^2$ and $\theta_w = 90^{\circ}$ for $N=800$, $1600$, and $3200$. We find that $\beta$ is very weakly dependent on system size for the 2D SP model, and we expect similar results for the SP model in 3D. Based on our recent studies of jamming of deformable particles, we expect similar weak system size dependence of $\beta$ for the 2D DP model\cite{Boromand2018,Boromand2019}. We also show the system size dependence of the prefactor $C$ in Eq.~\ref{eq:1} for the 2D SP model in Fig.~\ref{fig:sysSize} (b). $C$ grows roughly linearly with system size, but the slope is much weaker for systems with $\lambda \rightarrow \infty$.
\section*{Acknowledgments}
We acknowledge support from NSF Grants No. CBET-2002782 (Y. C., J. D. T., and C. S. O.), No. CBET-2002815 (B. L., P. H., and E. R. W.), and No. CBET-2002797 (M. D. S.). This work was also supported by the High Performance Computing facilities operated by Yale's Center for Research Computing.
\section{Introduction}
Let $\mathbb{L}^d = (\mathbb{Z}^d, \mathbb{E}^d)$ be the standard $d$-dimensional hypercubic lattice with vertex set $\mathbb{Z}^d$ and nearest-neighbour edges $\mathbb{E}^d$. Given a finite subgraph $G=(V,E)$ of $\mathbb{L}^d$, the Blume-Capel model on $G$ with inverse temperature $\beta>0$, crystal field strength $\Delta \in \mathbb{R}$, and boundary condition $\eta \in \{-1,0,1\}^{\mathbb{Z}^d}$ is the probability measure $\mu_{G,\beta,\Delta}^\eta$ defined on spin configurations $\sigma \in \{-1,0,1\}^V$ by
\begin{equs}
\mu_{G,\beta,\Delta}^\eta(\sigma)
=
\frac{1}{Z_{G,\beta,\Delta}^\eta} e^{-H_{G,\beta,\Delta}^\eta(\sigma)},
\end{equs}
where
\begin{equs}
H^\eta_{G,\beta,\Delta}(\sigma)
=
-\beta\sum_{xy \in E} \sigma_x\sigma_y -\Delta\sum_{x \in V} \sigma_x^2 - \beta \sum_{\substack{xy \in E^d \\x \in V,\, y \in \mathbb{Z}^d\setminus V }} \sigma_x \eta_y
\end{equs}
is the Hamiltonian, and $Z_{G,\beta,\Delta}^\eta$ is the partition function. By convention, we write $\mu^{\eta}_{V,\beta,\Delta}$ to denote the Blume-Capel measure on the subgraph of $\mathbb{L}^d$ spanned by $V$, and we denote expectations by $\langle \cdot \rangle^{\eta}_{V,\beta,\Delta}$. We denote by $0$ (resp. $+$, $-$) the boundary conditions $\eta_x=0$ (resp. $\eta_x=+1$, $\eta_x=-1$) for all $x\in \mathbb{Z}^d$.
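To fix ideas, the following sketch evaluates $H^\eta_{G,\beta,\Delta}$ for a square box in $d=2$ with a constant boundary spin $\eta \in \{-1,0,+1\}$; the array layout is an illustrative choice.
\begin{verbatim}
import numpy as np

def hamiltonian(sigma, beta, delta, eta=1):
    # sigma: L x L array with entries in {-1, 0, +1}.
    # Nearest-neighbour interaction inside the box.
    bulk = (np.sum(sigma[:-1, :] * sigma[1:, :])
            + np.sum(sigma[:, :-1] * sigma[:, 1:]))
    # Crystal field term.
    field = np.sum(sigma ** 2)
    # Boundary term: each boundary spin couples to eta once per
    # missing neighbour (corner spins are counted twice).
    bnd = eta * (np.sum(sigma[0, :]) + np.sum(sigma[-1, :])
                 + np.sum(sigma[:, 0]) + np.sum(sigma[:, -1]))
    return -beta * bulk - delta * field - beta * bnd
\end{verbatim}
Setting \texttt{eta = 0} recovers the free boundary condition, while \texttt{eta = 1} corresponds to the plus boundary condition.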
The Blume-Capel model is closely related to one of the most famous models in statistical physics, the Ising model. The latter is defined analogously, with spins taking values in $\{\pm 1\}$ and the Hamiltonian only consisting of the terms involving $\beta$. In particular, one can view the Blume-Capel model as an Ising model on an annealed random environment, i.e.\ on the (random) set of vertices for which $\sigma_x \neq 0$. In the limit $\Delta\rightarrow \infty$, the underlying random environment becomes deterministically equal to $\mathbb{Z}^d$, and the Blume-Capel model converges to the classical Ising model on $\mathbb{Z}^d$.
The model was introduced independently by Blume \cite{BL} and Capel \cite{Capel} in 1966 to study the magnetisation of uranium oxide and an Ising system consisting of triplet ions, respectively. Both papers were trying to explain first-order phase transitions that are driven by mechanisms other than external magnetic fields. Since then, it has been studied extensively by physicists due to its particularly rich phase diagram, i.e.\ as an archetypical example of a model that exhibits a multicritical point, see \cite{ButeraPernici} and references therein. Indeed, for each value of the parameter $\Delta\in \mathbb{R}$, the Blume-Capel model undergoes a phase transition at a critical\footnote{In the physics literature, $\beta_c$ is sometimes called a transition point or triple point when the phase transition is discontinuous, to distinguish from a critical point corresponding to continuous phase transition. In this article we adopt percolation terminology and call $\beta_c$ a critical point.} parameter $\beta_c(\Delta)$, which is defined as
\begin{equs}
\beta_c(\Delta)=\inf\{\beta>0 \mid \langle \sigma_0 \rangle^+_{\beta,\Delta}>0\},
\end{equs}
where $\langle \cdot \rangle^+_{\beta,\Delta}$ denotes the plus measure at infinite volume, which is defined as the limit of the finite volume measures $\langle \cdot \rangle^+_{G,\beta,\Delta}$ as $G$ tends to $\mathbb{L}^d$. It is expected that the critical behaviour of the model depends strongly on $\Delta$: as $\Delta$ increases, the phase transition changes from discontinuous, when $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}>0$, to continuous, when $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}=0$, and this transition happens as $\Delta$ crosses a single point $\Delta_{{\text{tric}}}$.
The model at the so-called tricritical point $(\beta_c(\Delta_{{\text{tric}}}),\Delta_{\text{tric}})$ is of particular interest. Indeed, in some dimensions it is expected to exhibit vastly different behaviour from the points on the critical line when $\Delta>\Delta_{{\text{tric}}}$, even though in both cases the phase transition is expected to be continuous.
In particular, in $d=2$, the scaling limit of the model at the critical points when $\Delta > \Delta_{{\text{tric}}}$ is expected to be in the Ising and $\phi^4$ universality class. On the other hand, at $\Delta_{\text{tric}}$, the scaling limit of the model is expected to be in a distinct universality class corresponding to the $\phi^6$ minimal conformal field theory with central charge $c=7/10$ (whilst the Ising universality class is of central charge $c=1/2$) -- see Mussardo \cite{Mussardo}. A rigorous glimpse of this distinct universality class appears in the work of Shen and Weber \cite{ShenWeber}, where near-critical scaling limits are considered. In $d=3$, the scaling limit of the model at $\Delta > \Delta_{{\text{tric}}}$ is expected to be nontrivial, as is conjectured for the Ising model, supported by e.g.\ conformal bootstrap methods; see Rychkov, Simmons-Duffin, and Zan \cite{3D-Ising-Slava}. At $\Delta_{\text{tric}}$, by contrast, $d=3$ is predicted to be the upper-critical dimension for the model, and one expects triviality of the scaling limit. This is supported by renormalisation group heuristics, which has recently been made rigorous for the $\phi^6$ model in the weak coupling regime by Bauerschmidt, Lohmann, and Slade \cite{BLS20}. In dimensions $d \geq 5$, one expects that the model is trivial throughout the continuous phase transition regime -- c.f. the case of Ising, where triviality was shown by Aizenman \cite{A82} and Fr\"ohlich \cite{F82}. We also refer to the review book \cite{BBS19} for an account of renormalisation group approaches to this problem. In $d=4$, there may be some distinction: for $\Delta > \Delta_{{\text{tric}}}$, one expects a marginal triviality result as in the case for Ising, which was recently shown by Aizenman and Duminil-Copin \cite{4D-Ising-triviality}, whereas at $\Delta_{{\text{tric}}}$ it is unclear whether logarithmic corrections are present.
Despite the interest of this model in physics and the interesting predictions about the tricritical point, there is a lack of rigorous understanding of the phase diagram of the Blume-Capel model. Indeed, many rigorous results about the model have been focused well within the discontinuous transition regime, where it is a good test case for Pirogov-Sinai theory -- see \cite[Chapter 7]{FV} and \cite{BS}. This is in contrast to the case of the Ising model, where much of the phase diagram is now well-understood \cite{HDC-IP}. Stochastic geometric methods have been at the heart of many recent developments, in particular probabilistic representations of spin correlations via the random cluster and random current representations. For the Blume-Capel model, an analogous random cluster representation, called the dilute random cluster representation, has been developed by Graham and Grimmett \cite{GG}. In this article, we show that the underlying philosophy of the recent techniques used to analyse the Ising model and its random cluster representation can be adapted to rigorously analyse the phase diagram of the Blume-Capel model in dimensions $d\geq 2$. The main result of this paper implies the existence of at least one separation point between the points of continuous and discontinuous phase transitions, i.e.\ the existence of a tricritical point. Namely, we prove the following.
\begin{thm}\label{thm: existence}
Let $d \geq 2$. Then, there exist $\Delta^-(d) \leq \Delta^+(d)$ such that
\begin{itemize}
\item for any $\Delta < \Delta^-(d)$, $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta} > 0$
\item for any $\Delta \geq \Delta^+(d)$, $\langle \sigma_0 \rangle_{\beta_c(\Delta),\Delta}^+ = 0$.
\end{itemize}
Moreover, one can take $\Delta^+(d):=-\log{2}$ for $d\geq 3$, and $\Delta^+(d):=-\log{4}$ for $d=2$.
\end{thm}
The proof of the existence of a discontinuous critical phase is a standard application of Pirogov-Sinai theory and is given in Section \ref{sec:basic-facts-phase-transition}. The crux of the article is to establish the existence of a continuous critical phase, for which we use different techniques depending on the dimension. In the case $d\geq 3$, the proof relies on a new representation of the model as an Ising model on a larger (deterministic) graph, which is ferromagnetic only when $\Delta\geq -\log{2}$ -- see Section~\ref{sec: mapping}. This representation has the advantage that it naturally relates the correlation functions between the Blume-Capel and the Ising model, and allows us to use techniques and tools developed for the latter to study the former. We use the celebrated infrared/Gaussian domination bound of Fr\"ohlich, Simon, and Spencer \cite{FSS} to show that under the free boundary conditions, the two-point correlations decay, in an averaged sense, to $0$ for every $\beta\leq \beta_c(\Delta)$. This allows us to use the breakthrough result of Aizenman, Duminil-Copin and Sidoravicius \cite{ADCS} on the continuity of the Ising model on $\mathbb{Z}^d$ (see also \cite{AR} for a relevant result for general amenable graphs) to conclude that the phase transition of the Blume-Capel model is continuous in $d\geq 3$ when the corresponding Ising model is ferromagnetic, namely, when $\Delta\geq -\log{2}$.
In dimension $2$, the infrared bound is no longer useful and the proof instead relies on quantitative estimates on crossing probabilities. In particular, we develop a Russo-Seymour-Welsh (RSW) theory for the dilute random cluster representation of the Blume-Capel model. Following the renormalisation strategy of \cite{DCT}, we obtain a quadrichotomy result which describes the behaviour of crossing probabilities of macroscopic squares (and rectangles) under the effect of boundary conditions which are at macroscopic distance, and applies to both the critical and the off-critical phase. At criticality, when the phase transition is continuous, the crossing probabilities stay bounded away from $0$ and $1$ under both the wired (favourable boundary conditions) and the free measure (unfavourable boundary conditions), whereas when the phase transition is discontinuous, the crossing probabilities converge to $1$ exponentially fast under the wired boundary conditions, and they converge to $0$ exponentially fast under the free boundary conditions. As a consequence, in both cases, the two-point correlations decay to $0$ under the free measure, and as in dimensions $d\geq 3$, the phase transition is continuous when $\Delta\geq -\log 2$. The estimate on $\Delta^+$ can be improved in dimension $2$ to obtain continuity for $-\log 4\leq \Delta< -\log 2$, where the corresponding Ising model is not ferromagnetic. Indeed, under the condition that the weak plus measure (defined as the measure with $+\epsilon$ boundary condition) converges to the plus measure in the infinite volume limit, one can show that the Radon-Nikodym derivative between the plus and the free measure grows subexponentially fast with the boundary of the domain. This rules out the behaviour of crossing probabilities associated with a discontinuous phase transition. An estimate on when this criterion holds, i.e.\ for $\Delta \geq -\log 4$, is obtained by showing a Lee-Yang type theorem on the complex zeros of the partition functions of the Blume-Capel model.
\begin{rem}
We note that $\Delta = -\log 4$ corresponds to the location of the tricritical point for the Blume-Capel model on the complete graph, see \cite{EllisOttoTouchette, ShenWeber}. We expect that the tricritical point on $\mathbb{Z}^d$ converges to that of the complete graph as $d \rightarrow \infty$, i.e.\ in the mean-field limit. One can imagine that the underlying philosophy of using the convergence of the weak plus measure to the plus measure is still a sufficient condition to yield continuity in higher dimensions. Thus, since our Lee-Yang type theorem holds for $\Delta \geq -\log 4$ on any graph, it seems that the underlying strategy becomes sharp as $d \rightarrow \infty$.
\end{rem}
Proving uniqueness of the tricritical point, which is the main omission of Theorem \ref{thm: existence}, amounts to showing that one can take $\Delta^-(d)=\Delta^+(d)$ in Theorem \ref{thm: existence}. Unfortunately, it is unclear whether there is monotonicity along the critical line and, in the absence of integrability, to the best of our knowledge all known techniques for showing discontinuity are intrinsically perturbative. Nevertheless, in dimension $2$, the quantitative estimates on crossing probabilities, and other considerations that we describe shortly, allow us to obtain a fuller picture of the phase diagram, although we fall short of proving uniqueness of the tricritical point.
To state our next result, we first recall the definition of the truncated $2$-point correlation:
\begin{equs}
\langle \sigma_0 ; \sigma_x \rangle^+ = \langle \sigma_0 \sigma_x \rangle^+ - \langle \sigma_0 \rangle^+ \langle \sigma_x \rangle^+.
\end{equs}
In dimension $d=2$ we obtain a fine picture of the phase diagram by showing that the truncated $2$-point correlation decays exponentially everywhere except in the continuous critical regime, where it decays polynomially. We also give an alternative characterisation of the points of continuity in terms of the percolation of $0$ and $-$ spins (equivalently, $\ast$-percolation properties of $+$ spins, where $\ast$-percolation refers to percolation on $\mathbb{Z}^2$ with the diagonal edges added).
\begin{thm}\label{thm: truncated}
Let $d=2$. Then the following hold.
\begin{labeling}{{\bf (DiscontCrit)}}
\item[{\bf (OffCrit)}] For all $\beta > 0$ and $\Delta \in \mathbb{R}$ such that $\beta \neq \beta_c(\Delta)$, there exists $c=c(\beta,\Delta)>0$ such that
\begin{equs}
\hspace{-1.4em}\langle \sigma_0 ; \sigma_x \rangle^+_{\beta,\Delta}\leq e^{-c\lVert x \rVert_{\infty}}, \qquad \forall x \in \mathbb{Z}^2.
\end{equs}
\item[{\bf (DiscontCrit)}] For all $\Delta \in \mathbb{R}$ such that $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta} > 0$, there exists $c=c(\Delta)>0$ such that
\begin{equs}
\langle \sigma_0 ; \sigma_x \rangle^+_{\beta_c(\Delta),\Delta}\leq e^{-c\lVert x \rVert_{\infty}}, \qquad \forall x \in \mathbb{Z}^2.
\end{equs}
\item[{\bf (ContCrit)}] For all $\Delta \in \mathbb{R}$ such that $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta} = 0$, there exist $c_1=c_1(\Delta)$ and $c_2=c_2(\Delta)$ such that for all $x\in \mathbb{Z}^2$ with $\lVert x \rVert_{\infty}$ large enough
\begin{equs}
\hspace{-2.6em}\frac{1}{\lVert x \rVert_{\infty}^{c_1}}
\leq
\langle \sigma_0 \sigma_x \rangle^+_{\beta_c(\Delta),\Delta}
\leq
\frac{1}{\lVert x \rVert_{\infty}^{c_2}}.
\end{equs}
\item[{\bf (TriCrit)}] The set of $\Delta\in \mathbb{R}$ such that $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}>0$ is open.
\item[{\bf (Perc)}] For all $\Delta\in \mathbb{R}$, $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}=0$ if and only if $\{0,-\}$ spins do not percolate under $\langle \cdot \rangle^0_{\beta_c(\Delta),\Delta}$.
\end{labeling}
\end{thm}
The proof of the behaviour at the critical points is based on the quadrichotomy for crossing probabilities for the dilute random cluster representation of the Blume-Capel model mentioned earlier. The fact that {\bf (TriCrit)} holds implies that at any separation point (i.e.\ tricritical point) on the line of critical points, the phase transition is continuous and hence satisfies {\bf (ContCrit)}. The percolation characterisation {\bf (Perc)} is an adaptation of an elegant argument for the continuity of nearest-neighbour Ising on $\mathbb{Z}^2$ by Werner \cite{W09}. The proof of the subcritical behaviour relies on a generalisation of the OSSS inequality for monotonic measures by Duminil-Copin, Raoufi and Tassion \cite{DCRT} to the dilute random cluster model, and more generally to \textit{weakly monotonic measures}, which we define in Section~\ref{sec: OSSS}. The original OSSS inequality was obtained by O'Donnell et al.\ \cite{OSSS} for product measures. In fact, the technique for showing subcritical sharpness is robust enough to extend to all dimensions:
\begin{thm}\label{thm: exp dec}
Let $d\geq 2$ and $\Delta\in \mathbb{R}$. Then for every $\beta<\beta_c(\Delta)$ there exists $c=c(\beta,\Delta,d)>0$ such that
\begin{equs}
\langle \sigma_0 \sigma_x \rangle^+_{\beta,\Delta}\leq e^{-c\lVert x \rVert_{\infty}}
\end{equs}
for every $x\in \mathbb{Z}^d$.
\end{thm}
We end with a natural follow-up question to Theorem \ref{thm: truncated}, which relates the percolative properties of the underlying random environment to the nature of the critical point.
\begin{qtn*}
In $d=2$, is it true that, for all $\Delta \in \mathbb{R}$, $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}=0$ if and only if $\{0\}$ spins do not $\ast$-percolate under $\langle \cdot \rangle^0_{\beta_c(\Delta),\Delta}$?
\end{qtn*}
\subsection{Paper organisation}
In Section \ref{sec: drc} we define the dilute random cluster model and develop the necessary tools that we require for the rest of this article. We then establish basic facts about the phase transition for the Blume-Capel model in Section \ref{sec:basic-facts-phase-transition}. In Section \ref{sec: mapping bc-ising} we introduce a combinatorial mapping from the Blume-Capel model on $\mathbb{Z}^d$ to an Ising model on an enlarged graph, and we use it to prove Theorem \ref{thm: existence} for $d\geq 3$ in Section \ref{sec: pf in 3d}. In Section \ref{sec: quantitative} we establish a quadrichotomy result for crossing probabilities under the dilute random cluster model in $d=2$ and derive some basic consequences of it. We then apply this, together with Lee-Yang type arguments, to establish Theorem \ref{thm: existence} for $d=2$ in Section \ref{sec: tric 2}. In Section \ref{sec: subcrit} we prove Theorem \ref{thm: exp dec} via a generalisation of the OSSS argument to weakly monotonic measures. Finally, in Section \ref{sec: further} we finish off the proof of Theorem \ref{thm: truncated}.
\subsection*{Acknowledgements}
Above all we thank Hugo Duminil-Copin for encouraging this collaboration. We thank Hendrik Weber for introducing us to this model. We thank Amol Aggarwal, Roman Koteck\'y, Vieri Mastropietro, Romain Panis, S\'ebastien Ott, Tom Spencer, Yvan Velenik for exciting discussions at various stages of this project. TSG was supported by the Simons Foundation, Grant 898948, HDC. DK and CP were supported by the Swiss National Science Foundation and the NCCR SwissMAP.
\section{The dilute random cluster model} \label{sec: drc}
\subsection{Definitions}
Let $\Psi=\{0,1\}^{\mathbb{Z}^d}$ and $\Omega=\{0,1\}^{\mathbb{E}^d}$. We say that $\psi=(\psi_x)\in \Psi$ and $\omega=(\omega_e)\in \Omega$ are compatible if $\omega_{xy}=0$ whenever $\psi_x=0$ or $\psi_y=0$. Denote by $\Theta \subset \Psi \times \Omega$ the set of all possible compatible configurations, equipped with the $\sigma$-algebra generated by cylinder events, $\mathcal F$.
Let $(\psi,\omega) \in \Theta$. A vertex $x\in \mathbb{Z}^d$ is called open if $\psi_x=1$, and is called closed otherwise. An edge $e$ is called open if $\omega_e=1$, and closed otherwise. Define $V_{\psi}=\{x\in \mathbb{Z}^d : \psi_x=1\}$ to be the set of open vertices and $E_{\psi}=\{xy\in \mathbb{E}^d:
\psi_x=\psi_y=1\}$ to be the set of induced edges. In addition, define $o(\omega)$ to be the set of
open edges. Observe that $(\psi,\omega)\in\Theta$ if
and only if $o(\omega)\subseteq E_\psi$.
For $\Lambda \subset \mathbb{Z}^d$ finite, let $\partial \Lambda = \{ x \in \mathbb{Z}^d\setminus \Lambda \text{ such that there exists } y \in \Lambda \text{ adjacent to } x\}$ and set $\overline{\Lambda} := \Lambda \cup \partial \Lambda$. In addition, write $E_{\Lambda}$ for the set of edges with at least one endpoint in $\Lambda$, and $b(\Lambda)=\{xy\in \mathbb{E}^d : x,y\in \Lambda\}$ for the set of edges induced by $\Lambda$. We stress that the region $(\Lambda, E_\Lambda)$ is not a graph, but $(\overline{\Lambda}, E_\Lambda)$ is a graph. Finally, given $\psi \in \Psi$, write $E_{\psi,\Lambda} = E_\psi \cap E_\Lambda$.
Let $\xi=(\kappa,\rho)\in\Theta$. Define $\Theta_\Lambda^\xi$ to be the set of $(\psi,\omega) \in \Theta$ such that
$\psi_x=\kappa_x$ for $x\in \mathbb{Z}^d\setminus \Lambda$ and $\omega_e=\rho_e$ for $e\in \mathbb{E}^d\setminus E_{\Lambda}$, i.e. configurations on $\Lambda$ with boundary condition $\xi$. Note that configurations in $\Theta_\Lambda^\xi$ are measurable with respect to the region $(\Lambda, E_\Lambda)$. We naturally identify the $\sigma$-algebra on $\Theta_\Lambda^\xi$ as a sub $\sigma$-algebra of $\mathcal F$. Finally, we write $k(\theta,\Lambda)$ for the number of connected components of $(V_\psi,o(\omega))$ that intersect $\overline{\Lambda}$.
We now define the finite volume dilute random cluster model with parameters $a \in (0,1)$ and $p \in (0,1)$ and fixed boundary conditions.
\begin{defn}[Dilute random cluster model]
Let $p,a \in (0,1)$ and write $r = \sqrt{1-p}$. For $\Lambda \subset \mathbb{Z}^d$ finite and $\xi \in \Theta$, let $\varphi^{\xi}_{\Lambda,p,a}$ denote the probability measure on $\Theta_\Lambda^\xi$ defined, for $\theta=(\psi,\omega) \in \Theta_\Lambda^\xi$, by
\begin{equs} \label{defRC}
\varphi^{\xi}_{\Lambda,p,a} [\theta]
=
\frac{\mathbf{{1}}[\theta \in \Theta_\Lambda^\xi]}{Z^{\xi}_{\Lambda,p,a}}
\prod_{x\in \Lambda} \left(\frac{a}{1-a}\right)^{\psi_x}
\prod_{e\in E_{\psi,\Lambda}} \left(\frac{p}{1-p}\right)^{\omega_e}
r^{|E_{\psi,\Lambda}|} 2^{k(\theta,\Lambda)}
\end{equs}
where $Z^{\xi}_{\Lambda,p,a}$ is the normalisation constant.
\end{defn}
\noindent When clear from context, we simply write $\varphi^{\xi}_{\Lambda,p,a}=\varphi^{\xi}_{\Lambda}$.
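To make the definition concrete, the sketch below evaluates the unnormalised weight of a configuration $(\psi,\omega)$ on an $L\times L$ box with free boundary conditions, assuming $p,a \in (0,1)$ and that $\omega$ is compatible with $\psi$; the array layout and names are ours.
\begin{verbatim}
import numpy as np

def drc_weight(psi, omega_h, omega_v, p, a):
    # psi: L x L array in {0,1}; omega_h, omega_v: horizontal and
    # vertical edge variables of shapes (L, L-1) and (L-1, L).
    L = psi.shape[0]
    # Union-find over open vertices, used to count open clusters.
    parent = {(i, j): (i, j)
              for i in range(L) for j in range(L) if psi[i, j]}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    n_edges_psi = 0   # |E_{psi,Lambda}|: edges with both endpoints open
    log_w = 0.0
    for i in range(L):
        for j in range(L - 1):          # horizontal edges
            if psi[i, j] and psi[i, j + 1]:
                n_edges_psi += 1
                if omega_h[i, j]:
                    log_w += np.log(p / (1 - p))
                    parent[find((i, j))] = find((i, j + 1))
    for i in range(L - 1):
        for j in range(L):              # vertical edges
            if psi[i, j] and psi[i + 1, j]:
                n_edges_psi += 1
                if omega_v[i, j]:
                    log_w += np.log(p / (1 - p))
                    parent[find((i, j))] = find((i + 1, j))

    k = len({find(u) for u in parent})  # number of open clusters
    log_w += psi.sum() * np.log(a / (1 - a))
    log_w += n_edges_psi * np.log(np.sqrt(1 - p)) + k * np.log(2.0)
    return np.exp(log_w)
\end{verbatim}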
On any finite subset of $\mathbb{Z}^d$, there are two natural candidates for extremal\footnote{In the sense of Proposition \ref{prop: domination}.} dilute random cluster measures. These measures play a central role in this paper, so we formalise them in the following definition.
\begin{defn}
Let $p \in [0,1)$ and $a \in [0,1]$. For $\Lambda \subset \mathbb{Z}^d$ finite, we define
\begin{itemize}
\item the free measure $\varphi^0_\Lambda := \varphi^{(0,0)}_{\Lambda,p,a}$
\item and, the wired measure $\varphi^1_\Lambda := \varphi^{(1,1)}_{\Lambda,p,a}$
\end{itemize}
where $(0,0)$ is the configuration consisting of only closed vertices and edges, and $(1,1)$ is the configuration consisting of only open vertices and edges.
\end{defn}
\begin{rem}
It is also of interest to include the cases when $p,a \in \{0,1\}$. By convention, for any $p \in [0,1]$, we set: $\varphi^\xi_{\Lambda, p, 0} = \delta_{(0,0)}$; and, $\varphi^\xi_{\Lambda,p,1}$ to be the usual random cluster measure of parameter $p$ and $q=2$. On the other hand, for any $a \in (0,1]$, we set: $\varphi^\xi_{\Lambda,0,a}$ to be the tensor product of Bernoulli site percolation of parameter $a$ and the Dirac measure centred at $0 \in \Omega$; and, $\varphi^\xi_{\Lambda,1,a} = \delta_{(1,1)}$.
\end{rem}
Finally, we define a probability measure on $\Psi$ that captures the law of the (dependent) site percolation induced by the dilute random cluster measure.
\begin{defn}
Let $p,a \in [0,1]$. For $\Lambda \subset \mathbb{Z}^d$ finite and $\xi = (\kappa,\rho) \in \Theta$, let $\Psi^\xi_{\Lambda,p,a}$ denote the probability measure on $\Psi$ defined, for $\psi \in \Psi$, by
\begin{equs}
\Psi^{\xi}_{\Lambda,p,a}[\psi]
=
\sum_{\omega\in \Omega} \varphi^{\xi}_{\Lambda,p,a}[(\psi,\omega)] \mathbf{{1}}\{\psi|_{\mathbb{Z}^d \setminus \Lambda} = \kappa \}.
\end{equs}
We call $\Psi^{\xi}_{\Lambda,p,a}$ the vertex marginal and write $\Psi^{\xi}_{\Lambda}=\Psi^\xi_{\Lambda,p,a}$ when clear from context.
\end{defn}
\subsection{Basic properties}\label{sec:basic-properties}
We now list some important properties of the dilute random cluster model, the proofs of which follow either directly from the definition or by straightforward modifications of standard arguments, see e.g. \cite{DCT}.
The first property tells us that, conditional on the random environment $\psi \in \Psi$, the dilute random cluster measure coincides with the usual random cluster measure on the random graph induced by $\psi$.
\begin{prop}
Let $p,a \in [0,1]$, $\Lambda \subset \mathbb{Z}^d$ finite, and $\xi=(\kappa,\rho) \in \Theta$. For every $\psi\in \Psi$ such that $\psi_x = \kappa_x$ for all $x \in \mathbb{Z}^d \setminus \Lambda$, we have
\begin{equs}\label{eq: conditioning}
\varphi^{\xi}_{\Lambda,p,a}[\omega \mid \psi]
=
\phi^{\textup{RC},\xi}_{\Lambda,\psi,p}[\omega].
\end{equs}
Above,
\begin{equs}
\phi^{\textup{RC},\xi}_{\Lambda,\psi,p}[\omega]
=
\frac{1}{Z^{\textup{RC},\xi}_{\Lambda,\psi,p}}2^{k(\theta,\Lambda)}
\prod_{e\in E_{\psi,\Lambda}} \left(\frac{p}{1-p}\right)^{\omega_e}
\end{equs}
where $Z^{\textup{RC},\xi}_{\Lambda,\psi,p}$ is the normalisation constant. Note that $\phi^{\textup{RC},\xi}_{\Lambda,\psi,p}[\omega]$ is the random cluster measure on $V_\psi\cap \overline{\Lambda}$ with boundary conditions $\xi$, and parameters $p$ and $q=2$.
\end{prop}
The second property concerns automorphism invariance of the measure.
\begin{prop}
Let $\tau$ be an automorphism of $\mathbb{Z}^d$. Let $p,a \in [0,1]$, $\Lambda \subset \mathbb{Z}^d$ finite, and $\xi=(\kappa,\rho) \in \Theta$. Then, for every event $A$ depending on the vertices and edges in $(\Lambda,E_{\Lambda})$, we have
\begin{equs}
\varphi_{\tau\Lambda}^{\tau \xi}[\tau A]
=
\varphi_{\Lambda}^{\xi}[A],
\end{equs}
where $\tau A$ denotes the image of $A$ under the action of $\tau$.
\end{prop}
Next, we formalise the following spatial Markov property.
\begin{prop}
Let $p,a \in [0,1]$, $\Lambda'\subset \Lambda$ finite subsets of $\mathbb{Z}^d$, and $\xi\in \Theta$. For every configuration $\xi'=(\psi',\omega')\in \{0,1\}^{\Lambda\setminus\Lambda'}\times \{0,1\}^{E_{\Lambda}\setminus E_{\Lambda'}}$,
\begin{equs}\label{eq:DMPRC}\tag{SMP}
\varphi_{\Lambda}^\xi[\cdot \mid \psi_x=\psi'_x \; \forall \, x\in \Lambda\setminus\Lambda',\omega_e=\omega'_e \; \forall \, e\in E_{\Lambda}\setminus E_{\Lambda'}]
=
\varphi_{\Lambda'}^{\xi\cup\xi'}
\end{equs}
where $\xi\cup\xi'\in \Theta$ is the boundary condition which is equal to $\xi'$ on $(\Lambda\setminus\Lambda')\times(E_{\Lambda}\setminus E_{\Lambda'})$, and otherwise coincides with $\xi$.
\end{prop}
Finally, we state the finite energy property of the measure.
\begin{prop}
Let $p,a \in [0,1]$, $\Lambda \subset \mathbb{Z}^d$ finite, and $\xi\in \Theta$. For every $x\in \Lambda$ and every configuration $\xi'=(\psi',\omega')\in \{0,1\}^{\Lambda\setminus\{x\}}\times \{0,1\}^{E_{\Lambda}\setminus E_{\{x\}}}$,
\begin{equs}
0<\varphi_{\Lambda}^\xi[\psi_x=1 \mid \psi_y=\psi'_y \; \forall \, y\in \Lambda\setminus\{x\},\omega_e=\omega'_e \; \forall \, e\in E_{\Lambda}\setminus E_{\{x\}}]<1.
\end{equs}
Moreover, for every edge $xy\in b(\Lambda)$ and every configuration $\xi'=(\psi',\omega')\in \{0,1\}^{\Lambda\setminus\{x,y\}}\times \{0,1\}^{E_{\Lambda}\setminus \{xy\}}$,
\begin{equs}
0<\varphi_{\Lambda}^\xi[\omega_{xy}=1 \mid \psi_u=\psi'_u \; \forall \, u\in \Lambda\setminus\{x,y\},\omega_e=\omega'_e \; \forall \, e\in E_{\Lambda}\setminus \{xy\}]<1.
\end{equs}
\end{prop}
\subsection{Edwards-Sokal coupling with the Blume-Capel model}
Let $\Delta\in \mathbb{R}$ and $\beta>0$. Set $p = 1-e^{-2\beta}$ and $a = \frac{2e^\Delta}{1+2e^{\Delta}}$. For any $\Lambda \subset \mathbb{Z}^d$ finite, we define the measure $\mathbf{P}_{\Lambda}$ on $\Sigma_{\Lambda} \times \Psi\times\Omega$ by
\begin{equs}
\mathbf{P}_{\Lambda}[ (\sigma,\psi,\omega) ]
=
\frac{\mathbf{{1}}[(\sigma,\psi,\omega)\in \mathcal{S}]}{Z}
r^{|E_{\psi,\Lambda}|} \prod_{x\in \Lambda} \left(\frac{a}{1-a}\right)^{\psi_x}
\prod_{e\in E_{\psi,\Lambda}} \left(\frac{p}{1-p}\right)^{\omega_e}
\end{equs}
where $\mathcal{S}$ is the set of triples $(\sigma,\psi,\omega)\in \Sigma_{\Lambda}\times \Psi\times \Omega$ such that the following hold:
\begin{enumerate}[(1)]
\item $(\psi,\omega)\in \Theta^0_{\Lambda}$,
\item $\sigma_x^2=\psi_x$ for every $x\in \Lambda$,
\item for every edge $xy$, if $\sigma_x \neq \sigma_y$ then $\omega_{xy}=0$.
\end{enumerate}
Note that the latter condition is equivalent to $\sigma$ being constant on the clusters of $\omega$.
\begin{prop}
Let $\Delta \in \mathbb{R}$ and $\beta>0$. For any $\Lambda \subset \mathbb{Z}^d$ finite, the marginal measure of $\mathbf{P}_{\Lambda}$ on $\Sigma_{\Lambda}$ is $\mu^0_{\Lambda,\beta,\Delta}$, whilst the marginal on $\Psi\times \Omega$ is $\varphi^0_{\Lambda,p,a}$.
\end{prop}
\begin{proof}
See \cite[Theorem 3.7]{GG}.
\end{proof}
Conditionally on $(\psi,\omega)$, the following statements hold for the law of $\sigma$:
\begin{enumerate}[(a)]
\item the spins are constant on the clusters of $\omega$, and the spin of each cluster is uniformly distributed on the set $\{\pm 1\}$,
\item the spins on different clusters are independent random variables.
\end{enumerate}
Conditionally on $\sigma$, the following statements hold for the law of $(\psi,\omega)$:
\begin{enumerate}[(i)]
\item $\psi_x=0$ if and only if $\sigma_x=0$,
\item the random variables $\omega_e$ are independent,
\item for an edge $xy$, $\omega_{xy}=0$ if $\sigma_x\neq \sigma_y$, and $\omega_{xy}=1$ with probability $p$ if $\sigma_x=\sigma_y$.
\end{enumerate}
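These conditional laws translate directly into a resampling step of Swendsen--Wang type. The sketch below is a minimal illustration, where the cluster labelling of the open vertices is assumed to be given (e.g.\ by a union-find pass over the open edges):
\begin{verbatim}
import numpy as np

def resample_spins(labels, rng):
    # labels: integer cluster label per site, with label < 0 marking
    # closed sites (psi = 0, hence sigma = 0).  Each open cluster
    # receives an independent uniform spin in {-1, +1}, per (a)-(b).
    spins = {c: rng.choice([-1, 1]) for c in np.unique(labels) if c >= 0}
    return np.array([spins[c] if c >= 0 else 0 for c in labels])

def resample_edge(sigma_u, sigma_v, p, rng):
    # Per (i)-(iii): omega_e = 0 if the endpoint spins differ or
    # either is 0; otherwise omega_e = 1 with probability p.
    if sigma_u == sigma_v and sigma_u != 0:
        return int(rng.random() < p)
    return 0
\end{verbatim}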
We now wish to obtain a coupling between $\mu^+_{\Lambda,\beta,\Delta}$ and $\phi^1_{\Lambda,p,a}$. To this end, let $\Lambda' \subset \mathbb{Z}^d$ finite be such that $\Lambda'\supset \overline \Lambda$ and such that, if two vertices $x,y\in \partial \Lambda$ are in the same connected component of $\mathbb{Z}^d\setminus \Lambda$, then $x,y$ are in the same connected component of $\Lambda'\setminus \Lambda$.
We define
\begin{equs}
\mathbf{P}^+_{\Lambda}
=
\mathbf{P}_{\overline{\Lambda'}}[\, \cdot \mid \sigma_x=\psi_x=1 \; \forall x\in \Lambda'\setminus\Lambda, \omega_{uv}=1 \; \forall uv\in b(\Lambda'\setminus \Lambda)].
\end{equs}
As a corollary of the above and \eqref{eq:DMPRC} we obtain the following.
\begin{cor}
Let $\Delta \in \mathbb{R}$ and $\beta>0$. For any $\Lambda \subset \mathbb{Z}^d$ finite, the marginal measure of $\mathbf{P}^+_{\Lambda}$ on $\Sigma_{\Lambda}$ is $\mu^+_{\Lambda,\beta,\Delta}$, whilst the marginal on $\Psi\times \Omega$ is $\varphi^1_{\Lambda,p,a}$.
\end{cor}
Let $A\longleftrightarrow B$ denote the event that $A$ is connected to $B$ by an open path in $\omega$. The following corollary is a standard application of the coupling, see e.g.\ \cite[Corollary 1.4]{DCT}.
\begin{cor} \label{cor: ES correlations}
Let $\Delta \in \mathbb{R}$ and $\beta>0$. For any $\Lambda \subset \mathbb{Z}^d$ finite, we have
\begin{equs}
\langle \sigma_x \rangle^+_{\Lambda,\beta,\Delta}
&=
\varphi^1_{\Lambda,p,a}[x\longleftrightarrow\partial \Lambda], \qquad \forall x \in \Lambda
\\
\langle \sigma_x \sigma_y \rangle^+_{\Lambda,\beta,\Delta}
&=
\varphi^1_{\Lambda,p,a}[x\longleftrightarrow y], \qquad \forall x,y \in \Lambda
\\
\langle \sigma_x \sigma_y \rangle^0_{\Lambda,\beta,\Delta}
&=
\varphi^0_{\Lambda,p,a}[x\longleftrightarrow y], \qquad \forall x,y\in\Lambda.
\end{equs}
\end{cor}
\subsection{Stochastic ordering and the FKG inequality}
We introduce the following partial order on the set $\Theta$. For $\xi,\xi' \in \Theta$, write $\xi\leq \xi'$ to denote that $\psi_x\leq \psi'_x$ for every $x\in \mathbb{Z}^d$, and $\omega_e\leq \omega'_e$ for every $e\in \mathbb{E}^d$. A function $f:\Theta \to \mathbb{R}$ is called increasing if $\xi\leq \xi'$ implies that $f(\xi)\leq f(\xi')$. An event $A$ is said to be increasing if $\mathbf{{1}}_A$ is increasing.
We are going to show that the dilute random cluster measures all satisfy the FKG inequality. In the proof, we use that the vertex marginals satisfy an analogous inequality, as formalised below.
\begin{lem}
Let $p,a \in [0,1]$, $\Lambda \subset \mathbb{Z}^d$ finite, and $\xi \in \Theta$. Then,
\begin{equs} \label{eq: fkg vertex}
\Psi^{\xi}_{\Lambda}[A\cap B]
\geq
\Psi^{\xi}_{\Lambda}[A]\Psi^{\xi}_{\Lambda}[B]
\end{equs}
for all increasing events $A$ and $B$ on $\Psi = \{0,1\}^{\mathbb{Z}^d}$ that are $\Lambda$-measurable.
\end{lem}
\begin{proof}
This is a standard consequence of the positive association property established in \cite[Theorem 5.3]{GG}.
\end{proof}
\begin{prop}\label{prop: FKG}
Let $p,a \in [0,1]$, $\Lambda \subset \mathbb{Z}^d$ finite, and $\xi \in \Theta$. For all increasing events $A,B \in \mathcal F$ that are $\Lambda$-measurable, we have
\begin{equs}\label{eq:FKGRC}\tag{FKG}
\varphi^{\xi}_{\Lambda,p,a}[A\cap B]\geq \varphi^{\xi}_{\Lambda,p,a}[A]\varphi^{\xi}_{\Lambda,p,a}[B].
\end{equs}
\end{prop}
\begin{proof}
Let $A, B \in \mathcal F$ be increasing events that are $\Lambda$-measurable. For each $\psi\in \Psi$ and each event $\tilde A \in \mathcal F$, let $C_{\psi,\tilde A}:=\{\omega \in \Omega : (\psi,\omega)\in \tilde A\}$, and note that $C_{\psi,A\cap B}=C_{\psi,A}\cap C_{\psi,B}$. The fact that $\tilde A$ is increasing has two straightforward consequences: first, $C_{\psi,\tilde A}$ is an increasing event on $\Omega$; second, $C_{\psi,\tilde A}\subset C_{\psi',\tilde A}$ whenever $\psi\leq \psi'$.
Thus, by the conditioning equality \eqref{eq: conditioning} and the usual FKG inequality for the random cluster measure, we have that
\begin{equs} \label{eq: fkgproof}
\varphi^{\xi}_{\Lambda}[A \cap B]
&=
\Psi^{\xi}_{\Lambda}\Big[\phi^{\textup{RC},\xi}_{\Lambda, \psi}[C_{\psi,A \cap B}]\Big]
\\
&\geq
\Psi^{\xi}_{\Lambda}\Big[\phi^{\textup{RC},\xi}_{\Lambda, \psi}[C_{\psi,A }]\phi^{\textup{RC},\xi}_{\Lambda, \psi}[C_{\psi, B}]\Big].
\end{equs}
Note that, for any $\tilde A \in \mathcal F$ increasing, the mapping $\psi \mapsto \phi^{\textup{RC},\xi}_{\Lambda, \psi}[C_{\psi,\tilde A }]$ is increasing. Indeed, for any $\psi,\psi' \in \Psi$ that coincide with $\xi$ outside $\Lambda$ and such that $\psi' \geq \psi$, we have
\begin{equs}
\phi^{\textup{RC},\xi}_{\Lambda, \psi}[C_{\psi,\tilde A }]
\leq
\phi^{\textup{RC},\xi}_{\Lambda, \psi'}[C_{\psi,\tilde A }]
\leq
\phi^{\textup{RC},\xi}_{\Lambda, \psi'}[C_{\psi',\tilde A }]
\end{equs}
where the first inequality is by \cite[Proposition 5]{DCT} and the second is due to inclusion of events. The desired result \eqref{eq:FKGRC} then follows from applying \eqref{eq: fkg vertex} in \eqref{eq: fkgproof}, using the standard fact that the FKG inequality for increasing events extends to increasing functions.
\end{proof}
We now compare different dilute random cluster measures. For two measures $\mu_1,\mu_2$ on $(\Theta,\Sigma)$, we write $\mu_1 \preceq \mu_2$ if $\mu_1(f)\leq \mu_2(f)$ for every increasing function $f$. In this case, we say that $\mu_2$ stochastically dominates $\mu_1$. When comparing measures on sub-$\sigma$-algebras of $\mathcal F$, we tacitly restrict the partial order to the corresponding events.
The following proposition states that the dilute random cluster measures are stochastically increasing in $a,p$, and the boundary condition.
\begin{prop}\label{prop: domination}
For every $a,a',p,p'\in [0,1]$ and $\xi,\xi'\in \Theta$ such that $a\leq a'$, $p\leq p'$ and $\xi\leq \xi'$,
\begin{equs}
\varphi^{\xi}_{\Lambda,p,a}
\preceq
\varphi^{\xi'}_{\Lambda,p',a'}.
\end{equs}
\end{prop}
\begin{proof}
See \cite[Theorem 5.12]{GG}.
\end{proof}
We require a more refined stochastic domination expressing that, among boundary conditions which induce no connections at the boundary, the $0$ boundary condition is the most favourable. In the remainder of this subsection, we fix $a,p \in [0,1]$.
Let $V$ be a finite set. Given a configuration $\psi\in \{0,1\}^V$ and $T\subset V$, we define the configurations $\psi^{T}$ and $\psi_{T}$ by
\begin{equs}
\psi^{T}_y= \begin{cases}
\psi_y, & y\not\in T \\
1, & y\in T,
\end{cases}
\qquad
(\psi_{T})_y= \begin{cases}
\psi_y, & y\not\in T \\
0, & y\in T.
\end{cases}
\end{equs}
We first recall the following standard fact.
\begin{lem} \label{lem: fkglattice}
Let $V$ be a finite set, and let $\mu_1,\mu_2$ be strictly positive probability measures on $\{0,1\}^V$. If, for all $\psi \in \{0,1\}^V$ and all distinct $x,y \in V$,
\begin{equs}\label{eq:stoch1}
\mu_2\left[\psi^{\{x\}}\right]\mu_1\left[\psi_{\{x\}}\right]\geq \mu_2\left[\psi_{\{x\}}\right]\mu_1\left[\psi^{\{x\}}\right]
\end{equs}
and in addition, either $\mu_1$ or $\mu_2$ satisfies
\begin{equs}\label{eq:stoch2}
\mu\left[\psi^{\{x,y\}}\right]\mu\left[\psi_{\{x,y\}}\right]\geq \mu\left[\psi^{\{x\}}_{\{y\}}\right]\mu\left[\psi^{\{y\}}_{\{x\}}\right],
\end{equs}
then $\mu_1 \preceq \mu_2$.
\end{lem}
\begin{proof}
See \cite[Theorem 2.6]{GGBook}.
\end{proof}
For $S\subset \partial \Lambda$, let $E(S,\Lambda)$ be the set of edges with one endpoint in $S$ and the other in $\Lambda$. Given $\xi\in \Theta$, we let $\xi\cap \mathbf{0}_S\in \Theta$ be the configuration which is equal to $0$ on $S\times E_S$, and otherwise coincides with $\xi$. In addition, we let $\xi\cap \mathbf{0}_{E(S,\Lambda)}\in \Theta$ be the configuration whose edge configuration is equal to $0$ on $E(S,\Lambda)$, and otherwise coincides with $\xi$. Note that, in the latter, the vertices are not necessarily $0$ on $S$.
We define
\begin{equs}
\varphi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}
=
\varphi^{\xi}_{\Lambda}[\cdot \mid \omega_e=0 \textup{ for every } e\in E(S,\Lambda)].
\end{equs}
We write $\Psi^{\xi\cap \, \mathbf{0}_{S}}_{\Lambda}$ and $\Psi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}$ for the vertex marginals of $\varphi^{\xi\cap \, \mathbf{0}_{S}}_{\Lambda}$ and $\varphi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}$, respectively.
\begin{lem}\label{free-empty-vertex}
Let $\Lambda$ be a finite subset of $\mathbb{Z}^d$, and let $S\subset \partial \Lambda$. For every $\xi\in \Theta$,
\begin{equs}
\Psi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}
\preceq
\Psi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}.
\end{equs}
\end{lem}
\begin{proof}
We prove that \eqref{eq:stoch1} is satisfied for every $x \in \Lambda$. For $\psi \in \Psi$, write $N_x(\psi)$ and $N_{x,S}(\psi)$ to denote the number of neighbours of $x$ in the graphs $(V_{\psi^{\{x\}}}, E_{\psi^{\{x\}}})$ and $(V_{\psi^{\{x\}} \cap \mathbf{0}_S}, E_{\psi^{\{x\}}\cap\mathbf{0}_S})$, respectively.
By direct calculation, we have
\begin{equs}
\frac{\Psi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}[\psi^{\{x\}}]}{\Psi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}[\psi_{\{x\}}]}
&=
r^{N_x(\psi)} \frac{a}{1-a} \frac{Z^{\textup{RC},\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi^{\{x\}}}}
{Z^{\textup{RC}, \xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi_{\{x\}}}}
\\
\frac{\Psi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}[\psi^{\{x\}}]}{\Psi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}[\psi_{\{x\}}]}
&=
r^{N_{x,S}(\psi)} \frac{a}{1-a} \frac{Z^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda, \psi^{\{x\}}}}{Z^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda, \psi_{\{x\}}}}
\end{equs}
where $Z^{\textup{RC},\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi^{\{x\}}}$ denotes the partition function of the random cluster model on $(V_{\psi^{\{x\}}}, E_{\psi^{\{x\}}})$ with boundary conditions $\xi$, conditioned on $E(S,\Lambda)$ being closed, and $Z^{\textup{RC},\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi_{\{x\}}}$ is defined similarly.
Note that
\begin{equs}
\frac{Z^{\textup{RC},\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi^{\{x\}}}}
{Z^{\textup{RC}, \xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi_{\{x\}}}}
=
\frac{Z^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda, \psi^{\{x\}}}}{Z^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda, \psi_{\{x\}}}}
\end{equs}
because in both cases, the edges in $E(S,\Lambda)$ are closed.
Since $r\leq 1$ and $N_{x,S}\leq N_x$, it follows that \eqref{eq:stoch1} is satisfied.
Applying \cite[Proposition 5.5]{GG} for $\Phi_1=\Phi_2=\Psi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}$, we obtain that \eqref{eq:stoch2} is satisfied by $\Psi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}$. The desired result follows by Lemma \ref{lem: fkglattice}.
\end{proof}
\begin{prop}\label{free-closed}
Let $\Lambda$ be a finite subset of $\mathbb{Z}^d$, and let $S\subset \partial \Lambda$. For every $\xi\in \Theta$,
\begin{equs}
\varphi^{\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda}
\preceq
\varphi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}.
\end{equs}
\end{prop}
\begin{proof}[Proof of Proposition~\ref{free-closed}]
Let $A \in \mathcal F$ be $\Lambda$-measurable and increasing. Recall the notation introduced in the proof of Proposition~\ref{prop: FKG}. By \eqref{eq: conditioning},
\begin{equs}
\varphi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}[A]
=
\Psi^{\xi\cap \, \mathbf{0}_S}_{\Lambda}\left[\phi^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda,\psi}[C_{\psi,A}]\right].
\end{equs}
As argued above, the mapping $\psi\mapsto \phi^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda, \psi}[C_{\psi,A}]$ is increasing. Moreover, the measures $\phi^{\textup{RC},\xi\cap \, \mathbf{0}_S}_{\Lambda, \psi}$ and $\phi^{\textup{RC},\xi\cap \, \mathbf{0}_{E(S,\Lambda)}}_{\Lambda, \psi}$ coincide. The desired result then follows by applying Lemma~\ref{free-empty-vertex} and undoing the conditioning.
\end{proof}
\subsection{Monotonicity in the domain}
We now develop stochastic comparison results for dilute random cluster measures on different domains. Let $\textup{Cyl}(\mathbb{Z}^d \times \Theta) = \{ (\Lambda, \xi) : \Lambda \subset \mathbb{Z}^d \text{ finite},\ \xi \in \Theta \}$.
\begin{defn}
Let $\preceq_1$ be the partial order on $\textup{Cyl}(\mathbb{Z}^d \times \Theta)$ defined, for $(\Lambda, \xi), (\Lambda',\xi') \in \textup{Cyl}(\mathbb{Z}^d \times \Theta)$, by the relation
\begin{equs}
(\Lambda, \xi) \preceq_1 (\Lambda', \xi')
\qquad \Longleftrightarrow \qquad
\Lambda'\subset \Lambda \quad \text{and}\quad \text{$\xi'\geq \xi\cup \mathbf{{1}}_{\Lambda,\Lambda'}$}
\end{equs}
where $\xi\cup\mathbf{{1}}_{\Lambda,\Lambda'}\in \Theta$ is the boundary condition which is equal to $1$ on $(\Lambda\setminus\Lambda')\times (E_{\Lambda}\setminus E_{\Lambda'})$, and otherwise coincides with $\xi$.
\end{defn}
\begin{defn}
Let $\preceq_0$ be the partial order on $\textup{Cyl}(\mathbb{Z}^d \times \Theta)$ defined, for $(\Lambda, \xi), (\Lambda',\xi') \in \textup{Cyl}(\mathbb{Z}^d \times \Theta)$, by the relation
\begin{equs}
(\Lambda,\xi)\preceq_0(\Lambda',\xi')
\qquad \Longleftrightarrow \qquad
\Lambda\subset \Lambda' \quad \text{and}\quad \text{$\xi\leq \xi'\cap \mathbf{0}_{\Lambda',\Lambda}$}
\end{equs}
where $\xi'\cap \mathbf{0}_{\Lambda',\Lambda}\in \Theta$ is the boundary condition which is equal
to $0$ on $(\Lambda'\setminus\Lambda)\times (E_{\Lambda'}\setminus E_{\Lambda})$, and otherwise coincides with $\xi'$.
\end{defn}
\begin{prop}
Let $(\Lambda, \xi), (\Lambda',\xi') \in \textup{Cyl}(\mathbb{Z}^d \times \Theta)$ such that either $(\Lambda,\xi)\preceq_1(\Lambda',\xi')$ or $(\Lambda,\xi)\preceq_0(\Lambda',\xi')$. Then,
\begin{equs}
\varphi_\Lambda^\xi
\preceq
\varphi_{\Lambda'}^{\xi'}
\end{equs}
in the sense that, for every increasing event $A \in \mathcal F$ that is measurable with respect to $(\Lambda\cap \Lambda',E_{\Lambda}\cap E_{\Lambda'})$, we have
\begin{equs}
\label{eq:MON}
\varphi_{\Lambda}^\xi[A] \le \varphi_{\Lambda'}^{\xi'}[A].\tag{MON}
\end{equs}
\end{prop}
\begin{proof}
The result follows from a standard application of \eqref{eq:FKGRC}, \eqref{eq:DMPRC}, and Proposition \ref{prop: domination}. See \cite[Proposition 5]{DCT}.
\end{proof}
\section{Basic facts about the phase transition}\label{sec:basic-facts-phase-transition}
In this section, we use the dilute random cluster representation of the Blume-Capel model to prove several important properties of its phase diagram. First, we show that the critical point of the Blume-Capel model coincides with the critical percolation threshold of the dilute random cluster model and state some basic consequences of this. Then, we show that there exists a continuous line of critical points $\Delta \mapsto \beta_c(\Delta)$. Finally, we recall that classical results from Pirogov-Sinai theory establish that the phase transition is discontinuous at the critical point for sufficiently negative values of $\Delta$.
We begin by collecting important facts about the natural infinite volume limits of the free and wired measures.
\begin{prop}\label{prop: inf vol limits}
Let $p,a \in [0,1]$. Then, the weak limits of $\varphi^1_{\Lambda,p,a}$ and $\varphi^0_{\Lambda,p,a}$ as $\Lambda \uparrow \mathbb{Z}^d$, denoted by $\varphi^1_{p,a}$ and $\varphi^0_{p,a}$, exist and satisfy the following Gibbs property: let $\varphi = \varphi^1_{p,a}$ or $\varphi^0_{p,a}$. Then, for all $\Lambda \subset \mathbb{Z}^d$ finite and $\varphi$-a.s. $\lambda \in \Theta$,
\begin{equs}
\varphi[A \mid \mathcal F_{\Lambda^c}] (\lambda)
=
\varphi^\lambda_{\Lambda, p,a}[A],
\qquad
A \in \mathcal F_\Lambda
\end{equs}
where $\mathcal F_\Lambda$ (resp. $\mathcal F_{\Lambda^c}$) consists of events in $\mathcal F$ that are $\Lambda$-measurable (resp. $\Lambda^c$-measurable).
Furthermore, both $\varphi^1_{p,a}$ and $\varphi^0_{p,a}$ are invariant and mixing (hence, ergodic) under the group of automorphisms of $\mathbb{L}^d$.
\end{prop}
\begin{proof}
When $p,a \in (0,1)$, the existence and Gibbs property follow from \cite[Theorems 7.2 and 7.4]{GG}. The invariance and mixing under translations are then standard consequences of the convergence, \eqref{eq:FKGRC}, and Proposition \ref{prop: domination}. See \cite[Lemma 1.10]{DCT}. The cases when $a,p \in \{0,1\}$ are classical.
\end{proof}
\subsection{Definition of the critical point}
We begin by defining the critical point of the Blume-Capel model and the critical percolation threshold of the dilute random cluster model.
\begin{defn} \label{def: crit point blume}
Let $\Delta\in \mathbb{R}$. We define the critical temperature $\beta_c(\Delta)$ by
\begin{equs}
\beta_c(\Delta)
:=
\inf\{\beta>0:\langle\sigma_0\rangle_{\beta,\Delta}^+>0\}.
\end{equs}
Furthermore, we set
\begin{equs}
\mathcal L_c
:=
\{ (\beta, \Delta) : \beta = \beta_c(\Delta) \}
\end{equs}
to be the graph of critical points.
\end{defn}
A consequence of the GKS inequalities \cite[Theorem 4.1.3]{GlimJaf} is that for each $\Delta$, the magnetisation is monotone in $\beta$.
\begin{defn}
For $a\in (0,1)$, define
\begin{equs}
p_c(a)
:=
\inf\{p\in (0,1): \, \varphi^1_{p,a}\left[0\longleftrightarrow\infty\right]>0\}.
\end{equs}
Additionally, we set $p_c(0) = 1$ and $p_c(1) = p_c^\textup{Ising}(\mathbb{Z}^d)$, where the latter is the critical point for the usual random cluster model (i.e.\ FK-Ising model) on $\mathbb{Z}^d$.
\end{defn}
We now state the relation between the critical points of the dilute random cluster model and the Blume-Capel model.
\begin{prop} \label{prop: critical pts}
Let $\Delta\in \mathbb{R}$ and $a_\Delta=\frac{2e^{\Delta}}{1+2e^{\Delta}}$. Then,
\begin{equs}
p_c(a_\Delta)
=
1-e^{-2\beta_c(\Delta)}
\end{equs}
where we recall $\beta_c(\Delta)$ is defined in Definition \ref{def: crit point blume}.
\end{prop}
\begin{proof}
This is a direct consequence of the convergence in Proposition \ref{prop: inf vol limits} and Corollary \ref{cor: ES correlations}.
\end{proof}
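For later reference, we note that the identity above can equivalently be written as $\beta_c(\Delta) = -\tfrac{1}{2}\log\big(1 - p_c(a_\Delta)\big)$.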
One important consequence of Proposition \ref{prop: critical pts} is that it allows one to prove rigorously that the critical point of the Blume-Capel model coincides with the point at which the model undergoes a long-range order transition.
\begin{cor}\label{existence of pt}
Let $\Delta \in \mathbb{R}$. Then, with $\beta_c(\Delta)$ as in Definition \ref{def: crit point blume}, we have that
\begin{equs}
\lim_{|x| \rightarrow \infty}\langle \sigma_0 \sigma_x \rangle^+_{\beta,\Delta}
\begin{cases}
= 0, \quad \beta < \beta_c \\
> 0, \quad \beta > \beta_c.
\end{cases}
\end{equs}
\end{cor}
\begin{proof}
This follows from a straightforward modification of the arguments in \cite[Corollary 1.13]{DCT}, using Proposition \ref{prop: inf vol limits}.
\end{proof}
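Let us briefly sketch the mechanism. By Proposition \ref{prop: inf vol limits} and Corollary \ref{cor: ES correlations}, $\langle \sigma_0 \sigma_x \rangle^+_{\beta,\Delta} = \varphi^1_{p,a}[0\longleftrightarrow x]$. When $\beta<\beta_c(\Delta)$, this probability is bounded above by the probability that the cluster of $0$ has radius at least $|x|/2$, which tends to $\varphi^1_{p,a}[0\longleftrightarrow \infty]=0$. When $\beta>\beta_c(\Delta)$, \eqref{eq:FKGRC} together with the uniqueness of the infinite cluster yields
\begin{equs}
\varphi^1_{p,a}[0\longleftrightarrow x]
\geq
\varphi^1_{p,a}[0\longleftrightarrow \infty]\, \varphi^1_{p,a}[x\longleftrightarrow \infty]
>
0.
\end{equs}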
\begin{rem}
The phase transition can also be characterised in terms of a uniqueness/non-uniqueness transition for the set of Gibbs measures.
\end{rem}
\subsection{Non-triviality of critical points}
We will now show that $p_c(a)$ is always non-trivial, i.e.\ $p_c(a)\in (0, 1)$, except when $a=0$.
\begin{prop}\label{prop:existence}
For $a\in (0, 1)$, $p_c(a)\in (0, 1)$.
\end{prop}
\begin{proof}
Let $a \in (0,1)$ and set $p_c:=p_c(a)$. Let $p < p_c^\textup{Ising}(\mathbb{Z}^d)$. For $n \in \mathbb{N}$, let $B_n$ denote the ball of radius $n$ around $0$. Fix $n$ and let $\Lambda \subset \mathbb{Z}^d$ be finite and such that $\Lambda \supset B_n$. Then,
\begin{equs}
\varphi^1_{\Lambda,p,a}[0 \longleftrightarrow \partial B_n]
=
\Psi^1_{\Lambda,p,a} \Big[ \phi^{\textup{RC},1}_{\Lambda,\psi,p} [0 \longleftrightarrow \partial B_n ] \Big]
\leq
\phi^{\textup{RC},1}_{\Lambda,p}[0 \longleftrightarrow \partial B_n]
\end{equs}
where the first line is by \eqref{eq: conditioning} and the second line is by monotonicity in boundary conditions for the usual random cluster model, see \cite[Proposition 5]{DCT}. Taking limits as $\Lambda \uparrow \mathbb{Z}^d$ and then $n \rightarrow \infty$, we obtain that $\varphi^1_{p,a}[0\longleftrightarrow \infty]=0$. Thus, $p_c\geq p_c^{\textup{Ising}}(\mathbb{Z}^d)>0$.
In order to show $p_c < 1$, it is convenient to consider the edge marginal of $\varphi^1_{p,a}$, which is the probability measure $\Omega^1_{p,a}$ on $\Omega$ defined by
\begin{equs}
\Omega^1_{p,a}[\omega]
=
\sum_{\psi \in \Psi : (\psi,\omega) \in \Theta} \varphi^1_{p,a}[(\psi,\omega)],
\qquad
\omega \in \Omega.
\end{equs}
We show that, for $p$ sufficiently close to $1$,
\begin{equs}
\mathbf{P}^\textup{Ber}_{2/3}
\preceq
\Omega^1_{p,a}
\end{equs}
where $\mathbf{P}^\textup{Ber}_{2/3}$ is the law of Bernoulli bond percolation on $\mathbb{E}^d$ with parameter $2/3$ (i.e., in the supercritical regime). Hence, $\varphi^1_{p,a}[0 \longleftrightarrow \infty]> 0$ and so $p_c < 1$.
Indeed, it is classical \cite{LSS} that the stochastic domination follows once we show that for an edge $e:=xy$, $\Omega^{\xi}_{\Lambda,p,a}[\omega_e=1]\geq \rho$ for some $\rho$ close enough to $1$, where $\xi$ is any boundary condition and $\Lambda=\{x,y\}$.
Note that, by Proposition~\ref{prop: domination} and a direct calculation,
\begin{equs}
\Omega^{\xi}_{\Lambda,p,a}[\omega_e=1]\geq \Omega^0_{\Lambda,p,a}[\omega_e=1] =\frac{\left(\frac{a}{1-a}\right)^2\sqrt{1-p}\left(\frac{p}{1-p}\right)}{\left(\frac{a}{1-a}\right)^2\sqrt{1-p}\left(\frac{p}{1-p}\right)(1+f(p,a))}
\end{equs}
where $f(p,a)=o(1)$ as $p \rightarrow 1$. Hence, $\Omega^{\xi}_{\Lambda,p,a}[\omega_e=1]$ converges to $1$ as $p$ tends to $1$ uniformly in $\xi$, as desired.
\end{proof}
\subsection{Continuity of the critical line}
In order to prove continuity of the critical line, we need to show the following strengthening of Proposition \ref{prop: domination} about stochastic domination.
\begin{prop}\label{prop:stochastic-domination-2}
Let $d\geq 1$, $a\in (0,1)$, and $p_1>p_2$. Then, there exists $\varepsilon=\varepsilon(d,p_1,p_2,a)>0$ such that, for any $a_1>a-\varepsilon$ and any $a_2<a+\varepsilon$,
\begin{equs}
\varphi^1_{p_2,a_2}
\preceq
\varphi^1_{p_1,a_1}.
\end{equs}
\end{prop}
\begin{proof}
We prove the stochastic domination for $\Lambda\subset \mathbb{Z}^d$ finite. The assertion then follows by taking the limit as $\Lambda$ tends to $\mathbb{Z}^d$.
We wish to find $\varepsilon>0$ such that, for any edge $xy$ of $\mathbb{L}^d$
and any boundary conditions $\xi_1\geq \xi_2$,
\begin{equs}\label{eq:comparison}
\varphi^{\xi_2}_{\Lambda',p_2,a_2}
\preceq
\varphi^{\xi_1}_{\Lambda',p_1,a_1}
\end{equs}
where $\Lambda'=\{x,y\}$, and $a_1$ and $a_2$ are as above. Then, by Strassen's theorem \cite{Strassen}, there exists an increasing coupling between $\varphi^{\xi_2}_{\Lambda',p_2,a_2}$ and $\varphi^{\xi_1}_{\Lambda',p_1,a_1}$. A standard Markov chain argument (see, for instance, \cite[Lemma 1.5]{HDC-IP}) then gives that
\begin{equs}
\varphi^{1}_{\Lambda,p_2,a_2}
\preceq
\varphi^{1}_{\Lambda,p_1,a_1}.
\end{equs}
In addition,
since by Proposition~\ref{prop: domination} the measure $\varphi^\xi_{\Lambda',p,a}$ is stochastically increasing in $\xi$, it suffices to prove \eqref{eq:comparison} for $\xi_1=\xi_2=\xi$.
First observe that, for any increasing and non-empty event $A \in \mathcal F$ that is measurable with respect to $(\Lambda',E_{\Lambda'})$, the function $p\mapsto \varphi^{\xi}_{\Lambda',p,a}[A]$ is analytic and non-constant. Thus, by Proposition \ref{prop: domination}, we have the strict inequality
\begin{equs}
\varphi^{\xi}_{\Lambda',p_1,a}[A]
>
\varphi^{\xi}_{\Lambda',p_2,a}[A].
\end{equs}
Note that there are finitely many distinct maps\footnote{Recall our measures depend only on the state of $\partial \Lambda'$ and which vertices of $\partial \Lambda'$ are connected to each other outside of $\Lambda'$.} $a\mapsto \varphi^{\xi}_{\Lambda',p_2,a}$ ranging over all choices of $\xi$ and over all possible $A$. Hence, by continuity
and monotonicity (see Proposition \ref{prop: domination})
of the map $a\mapsto \varphi^{\xi}_{\Lambda',p_2,a}[A]$, we can choose $\varepsilon=\varepsilon(d,p_1,p_2,a)>0$ such that
\begin{equs}
\varphi^{\xi}_{\Lambda',p_1,a_1}[A]
>
\varphi^{\xi}_{\Lambda',p_2,a_2}[A]
\end{equs}
for all $a_1>a-\varepsilon$ and $a_2<a+\varepsilon$, and uniformly over all increasing, non-empty events $A$ and boundary conditions $\xi$ on $(\Lambda', E_{\Lambda'})$. The desired assertion follows.
\end{proof}
\begin{prop}\label{prop: line cont}
The function $a\mapsto p_c(a)$ is decreasing and continuous on $[0, 1]$.
\end{prop}
\begin{proof}
For the open interval $(0,1)$, the assertion follows from Propositions~\ref{prop: domination} and \ref{prop:stochastic-domination-2}. See the proof of \cite[Theorem 5.5]{GGBook}.
For $a=0$, we can argue as in the proof of Proposition~\ref{prop:existence} to see that for every $p\in (0,1)$ there is $\varepsilon>0$ such that for every $a\in (0,\varepsilon)$, $\Psi^1_{p,a}$ is dominated by subcritical Bernoulli site percolation on $\mathbb{Z}^d$. This implies that $\varphi^1_{p,a}$ is not supercritical, hence $p_c(a)\geq p$. It follows that $p_c(a)$ converges to $p_c(0)=1$ as $a$ tends to $0$.
To handle $a=1$, we wish to show that for every $p'>p$ in $(0,1)$, there exists $a=a(p,p')\in (0,1)$ such that $\Omega^1_{p',a}$ stochastically dominates $\phi^{\textup{RC},1}_{p}$, from which it follows that $p_c(a)$ converges to $p_c(1)$ as $a$ tends to $1$. To this end, we prove the stochastic domination for $\Lambda\subset \mathbb{Z}^d$ finite.
We can argue as above to deduce that for any boundary conditions $\rho,\rho'\in \{0,1\}^{\mathbb{E}^d}$ with $\rho'>\rho$, and any increasing, non-empty event $A$ depending on $E_{\Lambda'}$, we have the strict inequality
\begin{equs}
\phi^{\textup{RC},\rho'}_{\Lambda',p'}[A]> \phi^{\textup{RC},\rho}_{\Lambda',p}[A],
\end{equs}
where $\Lambda'=\{x,y\}$ for some neighbours $x,y$.
By continuity of the map $a\mapsto\Omega^{\xi}_{\Lambda',p',a}[A]$, where $\xi=(\kappa,\rho')$ with $\kappa=1$, we have that
\begin{equs}
\Omega^{\xi}_{\Lambda',p',a}[A]> \phi^{\textup{RC},\rho}_{\Lambda',p}[A]
\end{equs}
for every $a$ close enough to $1$.
Arguing as in the proof of Proposition~\ref{prop:existence}, we see that $\varphi^1_{\Lambda,p',a}[\psi_u=1 \, \forall u\in \partial \Lambda' \mid \rho'_e, e\in \mathbb{E}^d\setminus E_{\Lambda}]$ converges to $1$ uniformly in $\Lambda$ as $a$ tends to $1$. Hence,
\begin{equs}
\Omega^{1}_{\Lambda,p',a}[A \mid \rho'_e, e\in \mathbb{E}^d\setminus E_{\Lambda}]\geq \phi^{\textup{RC},\rho}_{\Lambda,p}[A]
\end{equs}
for every $a$ sufficiently close to $1$.
The stochastic domination follows.
\end{proof}
\begin{rem}
The arguments above can be adapted to show that the function $a\mapsto p_c(a)$ is strictly decreasing for $d\geq 2$.
\end{rem}
\begin{rem}
Proposition \ref{prop: line cont} gives a rigorous proof that $\mathcal L_c$, as introduced in Definition \ref{def: crit point blume}, is a continuous, (strictly) decreasing curve of critical points, whose limit as $\Delta \to -\infty$ is $+\infty$ and whose limit as $\Delta \to +\infty$ is $\beta_c^{\textup{Ising}}$.
\end{rem}
\subsection{Discontinuity via Pirogov-Sinai theory}
We now show that the first half of Theorem~\ref{thm: existence}, namely the existence of $\Delta^-(d)$, follows from the classical result \cite{BS} in Pirogov-Sinai theory (see also \cite[Chapter 7]{FV} for a textbook approach to Pirogov-Sinai theory applied to the Blume-Capel model). To state the latter result precisely, we introduce a slightly different parametrisation of the Blume-Capel model.
For this parametrisation, there are two parameters $\beta'>0$ and $\lambda\in \mathbb{R}$.
The Hamiltonian is defined by
\begin{equs}
H^\eta_{G,\lambda}(\sigma)
=
\sum_{xy \in E} (\sigma_x-\sigma_y)^2 -\lambda\sum_{x \in V} \sigma_x^2 + \sum_{\substack{xy \in \mathbb{E}^d \\x \in V, y \in \mathbb{Z}^d\setminus V }} (\sigma_x-\eta_y)^2,
\end{equs}
and the probability of a configuration is proportional to
\begin{equs}
e^{-\beta' H^{\eta}_{G,\lambda}(\sigma)}.
\end{equs}
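To compare with the parametrisation used in the rest of this article, expand the square and use that every vertex of $\mathbb{Z}^d$ has $2d$ neighbours; ignoring boundary terms, a direct computation gives
\begin{equs}
-\beta' H^{\eta}_{G,\lambda}(\sigma)
=
2\beta' \sum_{xy \in E} \sigma_x \sigma_y + \beta'(\lambda - 2d) \sum_{x \in V} \sigma_x^2.
\end{equs}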
Expressed in our parametrisation, this corresponds to the change of variables $\beta= 2\beta'$ and $\Delta=\beta'(\lambda-2d)$. In \cite{BS}, it is proved that there exists $\beta'_0>0$ such that for every $\beta'>\beta'_0$ the following holds: there exists $\lambda_0(\beta')>0$ such that
\begin{equs}
\langle \sigma_0 \rangle^+_{\beta',\lambda}
\begin{cases}
=0, \quad \lambda < \lambda_0(\beta') \\
> 0, \quad \lambda \geq \lambda_0(\beta').
\end{cases}
\end{equs}
Let now $\beta=2\beta'$ and $\Delta=\beta'(\lambda_0(\beta')-2d)$. Note that either $\beta=\beta_c(\Delta)$ or $\beta>\beta_c(\Delta)$. If the latter holds, then $\langle \sigma_0 \rangle^+_{\beta',\lambda}>0$ for some $\lambda<\lambda_0(\beta')$ by Proposition~\ref{prop: line cont}, which is a contradiction.
\section{Combinatorial mapping between the Blume-Capel model and the Ising model} \label{sec: mapping bc-ising}
Let $d\geq 2$, and let $G=(V,E)$ be a finite subgraph of $\mathbb{Z}^d$. We establish a correspondence\footnote{Other mappings of the Blume-Capel model to certain spin models on the same graph $\mathbb{Z}^d$ appeared in the literature, see, for instance, \cite{BLH}.} between the Blume-Capel model on $G$ and an Ising model (not necessarily ferromagnetic) on a larger graph associated to $G$, described below. An immediate byproduct of the proof is that the coupling constants of the associated Ising model are ferromagnetic provided $\Delta \geq - \log 2$. We use this extensively later in the article.
First, we lift $G$ to the graph\footnote{This graph coincides with the strong product of $G$ with the complete graph on two vertices.} $\ell(G)=(\ell(V),\ell(E))$ with vertex set $\ell(V)=V\times \{0,1\}$ and edge set $\ell(E)=E_1\cup E_2$, where
\begin{equs}
E_1
=
\bigcup_{xy \in E} \bigcup_{i,j=0}^1 \{(x,i), (y,j)\},
\qquad
E_2
=
\bigcup_{x \in V} \{ (x,0), (x,1) \}.
\end{equs}
Note that $\ell(G)$ can be seen as a subgraph of the graph $\ell(\mathbb{Z}^d)$ with vertex set $\mathbb{Z}^d\times \{0,1\}$ and edge set $\mathbb{E}_1^d\cup \mathbb{E}_2^d$, where
\begin{equs}
\mathbb{E}_1^d
=
\bigcup_{xy \in \mathbb{E}^d} \bigcup_{i,j=0}^1 \{(x,i), (y,j)\},
\qquad
\mathbb{E}_2^d
=
\bigcup_{x \in \mathbb{Z}^d} \{ (x,0), (x,1) \}.
\end{equs}
\begin{lem} \label{lem: amenable}
The graph $\ell(\mathbb{Z}^d)$ is amenable and transitive.
\end{lem}
\begin{proof}
It is easy to see that $\ell(\mathbb{Z}^d)$ is a Cayley graph of the amenable group $\mathbb{Z}^d\times(\mathbb{Z}/2\mathbb{Z})$. The desired result follows.
\end{proof}
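Concretely (a routine verification), $\ell(\mathbb{Z}^d)$ is the Cayley graph of $\mathbb{Z}^d\times (\mathbb{Z}/2\mathbb{Z})$ with respect to the symmetric generating set
\begin{equs}
\{(\pm e_i, j) : 1\leq i\leq d, \; j\in \mathbb{Z}/2\mathbb{Z}\} \cup \{(0,1)\},
\end{equs}
where $e_1,\dots,e_d$ denote the standard generators of $\mathbb{Z}^d$: the generators $(\pm e_i, j)$ produce the edges of $\mathbb{E}_1^d$, and $(0,1)$ produces the edges of $\mathbb{E}_2^d$.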
Given some boundary conditions $\xi\in \{-1,0,+1\}^{\mathbb{Z}^d}$, we define the boundary conditions $\ell(\xi)\in \{-1,0,1\}^{\mathbb{Z}^d\times \{0,1\}}$ by letting $\ell(\xi)_{(x,i)}=\xi_x$ for every $x\in \mathbb{Z}^d$ and $i\in \{0,1\}$. Let also
$T:\{\pm 1\}^V\times \{\pm 1\}^V \to \{-1,0,1\}^V$ be the map $(\tau^0,\tau^1) \mapsto \sigma$, where $\sigma_x=\frac{1}{2}(\tau^0_{x}+\tau^1_{x})$. Identifying $\{\pm 1\}^V\times \{\pm 1\}^V$ with $\{\pm 1\}^{\ell(V)}$, we also view $T$ as a map defined on $\{\pm 1\}^{\ell(V)}$, i.e.\ a map defined over all possible Ising configurations on $\ell(G)$. We write $\tau$ for the spin variable of the Ising model to distinguish it from the spin variable $\sigma$ of the Blume-Capel model.
\begin{lem}\label{lem: comb mapping}
Let $\xi\in \{-1,0,+1\}^{\mathbb{Z}^d}$. Let also $\mu^{{\rm Ising}, \ell(\xi)}_{\ell(G),J}$ denote the Ising model on $\ell(G)$ with boundary conditions $\ell(\xi)$ and coupling constants
\begin{equs}
J_{x,y}=J(\beta,\Delta)_{x,y}
=
\frac{\beta}4 \mathbf{{1}}_{xy \in E_1} + \frac{(\Delta + \log 2)}2 \mathbf{{1}}_{xy \in E_2}.
\end{equs}
Then, for every $\eta\in \{-1,0,1\}^V$, we have
\begin{equs}
\mu^{\xi}_{G,\beta,\Delta}(\sigma=\eta)=\mu^{{\rm Ising}, \ell(\xi)}_{\ell(G),J}(\tau\in T^{-1}(\eta)).
\end{equs}
\end{lem}
\begin{proof}
We start by showing that
\begin{equs}
\begin{split}\label{bc}
\sum_{\tau \in T^{-1}(\eta)}
\prod_{uv \in E_1} e^{\frac{\beta}4 \tau_u \tau_v} \prod_{\substack{u\in \ell(V) \\ v \in \partial \ell(V)}} e^{\frac{\beta}4 \tau_u \ell(\xi)_v}
\prod_{uv \in E_2} e^{\frac{(\Delta + \log 2)}2 \tau_u \tau_v}
& = \\
e^{-\frac{(\Delta-\log 2)}2|V|}
\prod_{xy \in E}e^{\beta \eta_x \eta_y}
\prod_{\substack{x\in V \\ y \in \partial V}} e^{\beta \eta_x \xi_y}
\prod_{x \in V} e^{\Delta \eta_x^2}.
\end{split}
\end{equs}
Note that
\begin{equs}
\begin{split} \label{bc1}
\sum_{\tau \in T^{-1}(\eta)}
\prod_{uv \in E_1} e^{\frac{\beta}4 \tau_u \tau_v} \prod_{\substack{u\in \ell(V) \\ v \in \partial \ell(V)}} e^{\frac{\beta}4 \tau_u \ell(\xi)_v}
\prod_{uv \in E_2} e^{\frac{(\Delta + \log 2)}2 \tau_u \tau_v}
\\ =
\sum_{\substack{\tau^0, \tau^1 \in \{ \pm 1\}^{V} \\ T(\tau^0, \tau^1)=\eta} }
\prod_{xy \in E} \prod_{i,j=0}^1 e^{\frac{\beta}4 \tau^i_x\tau^j_y }
\prod_{\substack{x\in V \\ y \in \partial V}} \prod_{i,j=0}^1 e^{\frac{\beta}4 \tau^i_x \ell(\xi)_{(y,j)}}
\prod_{x \in V}e^{\frac{(\Delta+\log 2)}{2}\tau^0_x \tau^1_x}
\\ =
\sum_{\substack{\tau^0,\tau^1 \in \{\pm 1\}^V \\ T(\tau^0, \tau^1)=\eta}}
\prod_{xy \in E}e^{\beta \eta_x \eta_y} \prod_{\substack{x\in V \\ y \in \partial V}}e^{\beta \eta_x \xi_y} \prod_{x \in V} e^{(\Delta + \log 2)\eta_x^2} e^{-\frac{(\Delta+\log 2)}2}
\\ =
e^{-\frac{(\Delta+\log 2)}2|V|} \prod_{xy \in E}e^{\beta \eta_x \eta_y} \prod_{\substack{x\in V \\ y \in \partial V}}e^{\beta \eta_x \xi_y} \prod_{x \in V} e^{(\Delta + \log 2)\eta_x^2}
\sum_{\substack{\tau^0,\tau^1 \in \{\pm 1\}^V \\ T(\tau^0, \tau^1)=\eta}} 1,
\end{split}
\end{equs}
where in the second equality, we used that
\begin{equs}
\prod_{i,j=0}^1 e^{\frac{\beta}4 \tau^i_x\tau^j_y }=e^{\beta \eta_x \eta_y}, \: \prod_{i,j=0}^1 e^{\frac{\beta}4 \tau^i_x \ell(\xi)_{(y,j)}}=e^{\beta \eta_x \xi_y} \text{ and } \tau^0_x \tau^1_x=2\eta_x^2-1.
\end{equs}
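For completeness, the first two identities follow by summing the exponents: since $\tau^0_x+\tau^1_x=2\eta_x$ and $\ell(\xi)_{(y,0)}=\ell(\xi)_{(y,1)}=\xi_y$,
\begin{equs}
\sum_{i,j=0}^1 \tau^i_x\tau^j_y
=
(\tau^0_x+\tau^1_x)(\tau^0_y+\tau^1_y)
=
4\eta_x\eta_y,
\qquad
\sum_{i,j=0}^1 \tau^i_x \, \ell(\xi)_{(y,j)}
=
4\eta_x\xi_y,
\end{equs}
while the third identity is verified by distinguishing the cases $\tau^0_x=\tau^1_x$, where $\eta_x^2=1$, and $\tau^0_x\neq\tau^1_x$, where $\eta_x=0$.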
Note that, for each $x\in V$ with $\eta_x=\pm 1$, the pair $(\tau^0_x,\tau^1_x)$ is uniquely determined (both coordinates equal $\eta_x$), whereas for each $x\in V$ with $\eta_x=0$ there are exactly two admissible choices, namely $(\tau^0_x,\tau^1_x)\in\{(1,-1),(-1,1)\}$. Thus, $T^{-1}(\eta)$ contains exactly $2^{\sum_{x\in V}(1- \eta_x^2)}$ elements.
Hence,
\begin{equs}
\eqref{bc1}
&=
e^{-\frac{(\Delta-\log 2)}2|V|}
\prod_{xy \in E}e^{\beta \eta_x \eta_y}
\prod_{\substack{x\in V \\ y \in \partial V}}e^{\beta \eta_x \xi_y}
\prod_{x \in V} e^{\Delta \eta_x^2}
\end{equs}
which establishes \eqref{bc}.
Similarly, we obtain
\begin{equs}
Z^{\rm Ising}_{\ell(G),J}=e^{-\frac{(\Delta-\log 2)}2|V|}Z_{G,\beta,\Delta},
\end{equs}
and the desired assertion follows readily.
\end{proof}
\begin{rem}
$J$ is ferromagnetic provided $\Delta \geq -\log 2$. We note that the edges between the vertices $(x,0)$ and $(x,1)$ are antiferromagnetic for $\Delta < -\log 2$ and their interaction becomes stronger as $\Delta \rightarrow -\infty$, i.e.\ as the spin value $0$ becomes more likely.
\end{rem}
In what follows, we abuse notation and write $\tau^i_x$ instead of $\tau_{(x,i)}$.
\begin{cor}\label{corollary: correlations}
Let $G=(V,E)$ be a finite subgraph of $\mathbb{Z}^d$, and let $J$ be as in Lemma \ref{lem: comb mapping}. For any non-empty set $A\subset V$, any indices $i_x\in \{0,1\}$, $x\in A$, and any boundary conditions $\xi\in \{-1,0,1\}^V$,
\begin{equs}
\langle \prod_{x\in A}\sigma_x \rangle^{\xi}_{G,\beta,\Delta}
=
\langle \prod_{x\in A} \tau^{i_x}_x \rangle^{{\rm Ising},\ell(\xi)}_{\ell(G),J}.
\end{equs}
\end{cor}
\begin{proof}
Observe that, since the coupling constants $J$ are invariant under swapping the two layers at any given vertex,
\begin{equs}
\langle \prod_{x\in A}\tau^{i_x}_x \rangle^{{\rm Ising},\ell(\xi)}_{\ell(G),J}
=
\frac{1}{2^{|A|}}\sum_{j:A\mapsto \{0,1\}}\langle \prod_{x\in A}\tau^{j_x}_x \rangle^{{\rm Ising},\ell(\xi)}_{\ell(G),J}
=
\Big\langle \prod_{x\in A}\frac{\tau^0_x+\tau^1_x}2 \Big\rangle^{{\rm Ising},\ell(\xi)}_{\ell(G),J}.
\end{equs}
The latter is equal to $\langle \prod_{x\in A}\sigma_x \rangle^{\xi}_{G,\beta,\Delta}$ by Lemma~\ref{lem: comb mapping}.
\end{proof}
\section{Tricritical point in $d\geq 3$ via the infrared bound}\label{sec: pf in 3d}
In this section, we prove Theorem \ref{thm: existence} in $d\geq 3$ for $\Delta^+ = -\log 2$. The key idea is to use the mapping described in Section \ref{sec: mapping bc-ising} to map the Blume-Capel model to a ferromagnetic Ising model on $\ell(\mathbb{Z}^d)$. Since this graph is transitive and amenable, we can then apply the main result of \cite{ADCS} (see also \cite{AR}), whose hypothesis is guaranteed in our setting by the infrared bound.
First, we claim that in $d\geq 3$ the two-point correlations $\langle \tau^1_x \tau^1_y \rangle^{{\rm Ising},0}_{J}$ decay, in an averaged sense, to $0$ for every $\beta\leq \beta_c(\Delta)$.
\begin{lem} \label{lem: decay in 3d}
Let $d \geq 3$. Then, for any $\Delta \in \mathbb{R}$ and $\beta \leq \beta_c(\Delta)$,
\begin{equs}
\inf_{B \subset \mathbb{Z}^d, |B|<\infty} \frac{1}{|B|^2} \sum_{ x,y \in B} \langle \tau^1_x \tau^1_y \rangle^{{\rm Ising},0}_{J(\beta,\Delta)}
=
0
\end{equs}
where $J(\beta,\Delta)$ is as in Lemma \ref{lem: comb mapping}.
\end{lem}
\begin{proof}
The proof is a straightforward adaptation of the proof of \cite[Corollary 1.4]{ADCS} and is based on the celebrated infrared/Gaussian domination bound of \cite{FSS}, which applies to reflection positive models (such as the nearest-neighbour Blume-Capel model) on $\mathbb{Z}^d$ for $d \geq 3$. We omit the details.
\end{proof}
\begin{proof} [Proof of Theorem \ref{thm: existence} in $d \geq 3$]
With Lemma \ref{lem: decay in 3d} in hand, the proof follows essentially from applying \cite[Theorem 1.2]{ADCS}. The only caveat is that the result is stated for Ising models on $\mathbb{Z}^d$. Thus, we proceed as follows: by Lemma \ref{lem: amenable}, $\ell(\mathbb{Z}^d)$ is amenable and transitive. Hence, it follows from \cite[Proposition 1]{AR} that $\langle \tau_0 \tau_x\rangle^{{\rm Ising},+}_{J(\beta,\Delta)}=\langle \tau_0 \tau_x\rangle^{{\rm Ising},0}_{J(\beta,\Delta)}$ for every $\beta>0$ and $\Delta\geq-\log{2}$.
Arguing as in the proof of \cite[Theorem 3]{ADCS} and using Lemma \ref{lem: decay in 3d}, we obtain
\begin{equs}
\left(\langle \tau^1_0 \rangle^{{\rm Ising},+}_{J(\beta_c(\Delta),\Delta)} \right)^2\leq \inf_{B \subset \mathbb{Z}^d\times\{0,1\}, |B|<\infty} \frac{1}{|B|^2} \sum_{ x,y \in B} \langle \tau_x \tau_y \rangle^{{\rm Ising},0}_{J(\beta_c(\Delta),\Delta)}
=
0.
\end{equs}
Hence, $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}=0$.
\end{proof}
\section{Quantitative analysis of crossing probabilities in $d=2$}\label{sec: quantitative}
\subsection{A quadrichotomy on crossing probabilities via RSW and renormalisation inequalities}
Given a rectangle $R=[a,b]\times [c,d] \subset \mathbb{R}^2$, we define the bottom, top, left, and right sides of $R$ by
\begin{equs}
{\rm B}[R]
&=
[a,b] \times \{c\}
\\
{\rm T}[R]
&=
[a,b] \times \{ d \}
\\
{\rm L}[R]
&=
\{ a \} \times [c,d]
\\
{\rm R}[R]
&=
\{ b \} \times [c,d].
\end{equs}
\noindent We identify $\mathbb{Z}^2$ with its natural embedding in $\mathbb{R}^2$ and identify the subsets defined above with their corresponding subgraphs of $\mathbb{Z}^2$. We denote by $H_R$ the event that $\omega\cap R$ contains a path of open edges from ${\rm L}[R]$ to ${\rm R}[R]$, and we call such a path a {\em horizontal crossing}.
Similarly, we denote by $V_R$ the event that $\omega\cap R$ contains a path of open edges from ${\rm B}[R]$ to ${\rm T}[R]$, and we call such a path a {\em vertical crossing}. When $R=\Lambda_n:= [-n,n]^2$ for some $n \geq 1$, we simply write $H_n$ and $V_n$, respectively.
The following quadrichotomy on crossing probabilities is at the heart of our approach in $d=2$.
\begin{prop}\label{prop:quad}
Fix $p,a \in (0,1)$. Then, exactly one of the following properties is satisfied for some $c>0$:
\begin{description}
\item[(SubCrit)] \qquad $\varphi_{\Lambda_{2n}}^1[H_n]\leq e^{-cn}$, $\qquad \forall n \in \mathbb{N}$;
\item[(SupCrit)] \qquad $\varphi_{\Lambda_{2n}}^0[H_n]\geq 1-e^{-cn}$, $\qquad \forall n \in \mathbb{N}$;
\item[(ContCrit)] \qquad \hspace{-0.6em}
$c\leq \inf_{\xi} \varphi_{\Lambda_{2n}}^\xi[H_n]\leq \sup_{\xi} \varphi_{\Lambda_{2n}}^\xi[H_n]\leq 1-c$, $\qquad \forall n \in \mathbb{N}$;
\item[(DiscontCrit)] \quad \hspace{-0.7em}
$\varphi_{\Lambda_{2n}}^0[H_n]\leq e^{-cn}
\; \text{ and } \;
\varphi_{\Lambda_{2n}}^1[H_n]\geq 1-e^{-cn}$, $\qquad \forall n \in \mathbb{N}$.
\end{description}
\end{prop}
\noindent The proof of Proposition \ref{prop:quad} is based on the approach of \cite{DCT} and uses renormalisation inequalities. We introduce the key ingredients of this approach and then sketch how they are used to prove Proposition \ref{prop:quad} at the end of this subsection.
The first step is establishing an RSW estimate, which relates the horizontal crossings and vertical crossings in long rectangles. The RSW estimates apply to the wired and free measures in infinite volume, and also the following strip measures: let $S_m = \mathbb{R} \times [-m,2m]$ denote the infinite strip of width $3m$. We write $\varphi^{1}_{S_m}, \varphi^{0}_{S_m}$, and $\varphi^{0/1}_{S_m}$ to denote the wired, free, and mixed boundary conditions (wired on bottom and free on top) random cluster measures on $S_m$, respectively.
\begin{prop}[RSW estimate] \label{prop: RSW}
Let $(p,a) \in (0,1)^2$ and let $\varphi$ denote either $\varphi^1_{p,a}$ or $\varphi^0_{p,a}$, or $\varphi^\#_{S_m}$, where $\#$ denotes any of the boundary conditions introduced above. Then, for any $\rho >0$, there exists $c_\rho > 0$ such that
\begin{equs} \label{eq: RSW} \tag{RSW}
\varphi[H_{[0,\rho n] \times [0,n]}]
\geq
c_\rho \Big( \varphi[V_{[0,\rho n] \times [0,n]}] \Big)^{1/c_\rho},
\qquad \forall n \geq 1/c_\rho.
\end{equs}
\end{prop}
\noindent The proof of Proposition \ref{prop: RSW} is deferred to Subsection \ref{subsec: proof of RSW}.
We now introduce strip densities.
\begin{defn}
Let $n \in \mathbb{N}$. Define
\begin{equs}
p_n
&:=
\limsup_{\alpha \rightarrow \infty} \Big( \varphi^0_{[0,\alpha n]\times[-n,2n]} [ H_{[0,\alpha n]\times[0,n]} ] \Big)^{1/\alpha}
\\
q_n
&:=
\limsup_{\alpha \rightarrow \infty} \Big( \varphi^1_{[0,\alpha n]\times[-n,2n]} [ V_{[0,\alpha n]\times[0,n]}^c] \Big)^{1/\alpha}.
\end{equs}
\end{defn}
\noindent The complement of a vertical crossing can be interpreted using planar duality. Let $(\mathbb{L}^2)^* = \mathbb{L}^2 + (1/2, 1/2)$ be the dual lattice, which we identify with its natural embedding in $\mathbb{R}^2$. We typically write $\omega^*$ to denote a set of edges in $(\mathbb{L}^2)^*$. Given $\omega$ on $\mathbb{Z}^2$, recall that there is an associated dual configuration $\omega^*$ constructed by declaring that an edge in the dual graph is open if and only if it crosses a closed edge in the primal graph. Thus, if $V_R^c$ occurs for some rectangle $R$, then this implies a horizontal crossing in a suitable rectangle in the dual lattice. As such, $p_n$ is referred to as the crossing strip density and $q_n$ as the dual crossing strip density.
The crossing and dual crossing densities are interrelated, albeit on different scales.
\begin{lem}[Duality of strip densities] \label{lem: dual relation of strip densities}
Let $(a,p)$ be such that neither {\bf (SubCrit)} nor {\bf (SupCrit)} holds. Then, there exists $C>0$ such that, for all integers $\lambda \geq 2$ and $n \in 9\mathbb{N}$,
\begin{equs}
p_{3n} \geq \frac{1}{\lambda^C} q_n^{3+3/\lambda},
\qquad
q_{3n} \geq \frac{1}{\lambda^C} p_n^{3+3/\lambda}.
\end{equs}
\end{lem}
\noindent The proof of Lemma \ref{lem: dual relation of strip densities} is a relatively straightforward adaptation of the arguments in \cite[Section 5.1]{DCT} to our setting. The adaptations required are already present in the proofs of Proposition \ref{prop: RSW} and the renormalisation inequalities of Lemma \ref{lem: renormalisation inequality for strip densities}. Therefore, we omit it.
The following lemma contains the key renormalisation inequalities.
\begin{lem}[Renormalisation of strip densities] \label{lem: renormalisation inequality for strip densities}
Let $(p,a)$ be such that neither {\bf (SubCrit)} nor {\bf (SupCrit)} holds. Then, there exists $C>0$ such that for all integers $\lambda \geq 2$ and $n \in 9\mathbb{N}$,
\begin{equs}
p_{3n} \leq \lambda^C p_n^{3-9/\lambda},
\qquad
q_{3n} \leq \lambda^C q_n^{3-9/\lambda}.
\end{equs}
\end{lem}
\noindent The proof of Lemma \ref{lem: renormalisation inequality for strip densities} is deferred to Subsection \ref{subsec: proof of renorm}.
\begin{proof}[Proof of Proposition \ref{prop:quad} assuming Lemma \ref{lem: renormalisation inequality for strip densities}]
First note that {\bf (SubCrit)} and {\bf (SupCrit)} are mutually exclusive. Thus, it suffices to show that if neither {\bf (SubCrit)} nor {\bf (SupCrit)} holds, then either {\bf (ContCrit)} or {\bf (DiscontCrit)} holds. Assume that neither {\bf (SubCrit)} nor {\bf (SupCrit)} holds.
Note that, by Lemmas \ref{lem: dual relation of strip densities} and \ref{lem: renormalisation inequality for strip densities}, either: (i) $\inf_{n \in 9\mathbb{N}} p_n > 0$ and $\inf_{n \in 9\mathbb{N}} q_n > 0$; or, (ii) there exists $c> 0$ such that $p_n \leq e^{-cn}$ and $q_n \leq e^{-cn}$ for all $n \in 9\mathbb{N}$. In both claims (i) and (ii), one can take $n \in \mathbb{N}$ because, by \eqref{eq:MON} and inclusion of events, we have $p_m\geq p_n^{m/n}$ and $q_m\geq q_n^{m/n}$ for every $m\geq n$.
For claim (ii), the fact that the decay is exponential and not just stretched exponential comes from a bootstrap argument: first, applying Lemma~\ref{lem: renormalisation inequality for strip densities} with $\lambda = 9$, we obtain stretched exponential decay along a geometric subsequence. Note that, by finite energy, we have $p_n \geq e^{-c'n}$ and so $\sup_n p_n^{-9/n} <\infty$. This implies that, for $\lambda=n$, we have $p_{3n} \leq C_1 n^{C_2} p_n^3$ for some constants $C_1$ and $C_2$. Thus, for $C_3$ sufficiently large, the sequence $(n^{C_3} p_n)$ decays exponentially, and hence $p_n$ decays exponentially.
Arguing as in \cite[Section 5.4]{DCT}, we find that if (i) holds, then {\bf (ContCrit)} occurs, whereas if (ii) holds, then {\bf (DiscontCrit)} occurs. The adaptation of these arguments to our setting requires some care concerning the dual crossings. For the case of {\bf (ContCrit)}, see the proof of Lemma
\ref{lem: renormalisation inequality for strip densities}; for the case of {\bf (DiscontCrit)}, see the proof of Lemma \ref{dual-circuit}.
\end{proof}
\subsection{RSW theory: Proof of Proposition \ref{prop: RSW}} \label{subsec: proof of RSW}
We reduce the proof of Proposition \ref{prop: RSW} to proving conditional probability estimates on the occurrence of tortuous paths, cf.\ Lemma \ref{lem: cond prob}. An intermediate step is to reduce the proof to estimating the probability of certain bridge events.
Let $R=[a,b]\times [c,d]$ be a rectangle in $\mathbb{R}^2$. For $E,F \in \{ {\rm B}[R], {\rm T}[R], {\rm L}[R], {\rm R}[R] \}$ and a rectangle $S \subset R$, we define the following connection events:
\begin{itemize}
\item we say $E \overset{S}{\longleftrightarrow} F$ if there exists a path in $\omega$ which lies in $S$, and intersects both $E$ and $F$;
\item we say $E \overset{*,S}{\longleftrightarrow} F$ if there exists a path in $\omega^*$ which, except for its first and last edge, lies in $S$, and intersects both $E$ and $F$.
\end{itemize}
If $S=R$, then we drop $S$ from the notation. We write $H_R$ and $V_R$ to denote the events that ${\rm L}[R] \longleftrightarrow {\rm R}[R]$ and ${\rm B}[R] \longleftrightarrow {\rm T}[R]$, respectively. We write $H^*_R$ and $V^*_R$ to denote the events that ${\rm L}[R] \overset{*}{\longleftrightarrow} {\rm R}[R]$ and ${\rm B}[R] \overset{*}{\longleftrightarrow} {\rm T}[R]$, respectively.
Let $k = \lceil n/50 \rceil$. Define $R_0=\{-17k,\dots,18k\}\times\{0,\dots, n\}$ and $S_0 = \{0,\dots,k\}\times\{0\}$. For $j \in \mathbb{Z}$, let $R_j$ and $S_j$ denote the translates $R_0+(jk,0)$ and $S_0 +(jk,0)$, respectively.
We consider the following bridge events:
\begin{equs}
A_j
&=
\{ S_j \overset{R_j \cup R_{j+4}}{\longleftrightarrow} S_{j+2} \cup S_{j+4} \}
\\
A_j^*
&=
\{ S_j \overset{*, R_j \cup R_{j+4}}{\longleftrightarrow} S_{j+2} \cup S_{j+4}\}.
\end{equs}
\begin{prop} \label{prop: bridge events}
There exists $c_1>0$ such that, for every $\lambda > 0$ and $n \in \mathbb{N}$,
\begin{equs}
\varphi[A_0]
\geq
\frac{c_1}{\lambda^3} \varphi[ V_{[0,\lambda n]\times[0,n]} ]^3
\text{ and }
\varphi[A_0^*]
\geq
\frac{c_1}{\lambda^3} \varphi [ V^*_{[0,\lambda n]\times[0,n]} ]^3.
\end{equs}
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop: RSW} assuming Proposition \ref{prop: bridge events}]
The proof of Proposition \ref{prop: RSW} follows from Proposition \ref{prop: bridge events} by applying straightforward gluing arguments, see \cite[Lemma 9]{DCT}.
\end{proof}
To prove Proposition \ref{prop: bridge events}, we follow the strategy of \cite[Section 3]{DCT}. The main differences are the following:
\begin{enumerate}
\item[(1)] conditioning on a set of edges being closed does not induce the free boundary conditions, but the conditional measure is dominated by the free measure (see Lemma \ref{free-empty-vertex});
\item[(2)] the dual model is not a dilute random cluster measure, hence the estimate on dual crossings needs to be proved directly.
\end{enumerate}
\begin{proof}[Proof of Proposition \ref{prop: bridge events}]
Without loss of generality, assume $\lambda \geq 1$. Note that $\bigcup_{j=0}^C R_j \supset [0,\lambda n]\times[0,n]$,
where $C= \lfloor \lambda n / k \rfloor \in (0,50\lambda]$. We bound the events $V_{[0,\lambda n]\times[0,n]}$ and $V^*_{[0,\lambda n]\times[0,n]}$ according to the crossings and dual crossings in the $R_j$, respectively.
Define the events
\begin{align*}
\mathscr{T}_j
&=
\{ S_j \overset{R_j}{\longleftrightarrow} {\rm T}[R_j] \}
&&
\mathscr{T}_j^*
=
\{ S_j \overset{*,R_j}{\longleftrightarrow} {\rm T}[R_j] \}
\\
\mathscr{L}^g_j
&=
\{ S_j \overset{R_{j-13}}{\longleftrightarrow} {\rm L}[R_{j+4}] \}
&&
\mathscr{L}^{*,g}_j
=
\{ S_j \overset{*, R_{j-13}}{\longleftrightarrow} {\rm L}[R_{j+4}] \}
\\
\mathscr{L}^b_j
&=
\{ S_j \overset{R_{j}}{\longleftrightarrow} {\rm L}[R_j] \} \setminus \mathscr{L}^g_j
&&
\mathscr{L}^{*,b}_j
=
\{ S_j \overset{*, R_{j}}{\longleftrightarrow} {\rm L}[R_j] \} \setminus \mathscr{L}^{*,g}_j
\\
\mathscr{R}^{g}_j
&=
\{ S_j \overset{R_{j+13}}{\longleftrightarrow} {\rm R}[R_{j-4}] \}
&&
\mathscr{R}^{*,g}_j
=
\{ S_j \overset{*, R_{j+13}}{\longleftrightarrow} {\rm R}[R_{j-4}] \}
\\
\mathscr{R}^{b}_j
&=
\{ S_j \overset{R_j}{\longleftrightarrow} {\rm R}[R_j] \} \setminus \mathscr{R}^{g}_j
&&
\mathscr{R}^{*,b}_j
=
\{ S_j \overset{*, R_j}{\longleftrightarrow} {\rm R}[R_j] \} \setminus \mathscr{R}^{*,g}_j.
\end{align*}
\begin{rem}
The events $\mathscr{T}_j, \mathscr{T}_j^*$ consist of up-down crossings and dual crossings in $R_j$. The event $\mathscr{L}^g_j$ (resp. $\mathscr{L}^{*,g}_j$) consists of a good left crossing in the sense that there exists a path (resp. dual path) from $S_j$ to ${\rm L}[R_j]$ that does not explore to the right of the $y$-axis translated by the vector $(jk+5k,0)$. The event $\mathscr{L}^b_j$ (resp. $\mathscr{L}^{*,b}_j$) consists of a bad left crossing in the sense that any path (resp. dual path) from $S_j$ to ${\rm L}[R_j]$ must cross the $y$-axis translated by the vector $(jk + 5k,0)$. There are similar observations for the events involving right crossings.
\end{rem}
Observe that, by translation invariance, we have $\varphi[C_j] = \varphi[C_0]$, where $C_j$ refers to any of the crossings or dual crossings defined above. Moreover, by reflection invariance along the $y$-axis translated by the vector $(jk +k,0)$, we have that $\varphi[C^L_j] = \varphi[C^R_j]$, where $C^L_j$ and $C^R_j$ refer to any of the same type of crossing/dual crossing event defined above. Therefore, by union bounds,
\begin{equs}
\varphi[V_{[0,\lambda n]\times[0,n]}]
&\leq
\sum_{j = 0}^C \Big( \varphi[\mathscr{T}_j] + \varphi[\mathscr{L}^g_j] + \varphi[\mathscr{L}^b_j] + \varphi[\mathscr{R}^g_j]+\varphi[\mathscr{R}^b_j] \Big)
\\
\varphi[V^*_{[0,\lambda n]\times[0,n]}]
&\leq
\sum_{j = 0}^C \Big( \varphi[\mathscr{T}^*_j] + \varphi[\mathscr{L}^{*,g}_j] + \varphi[\mathscr{L}^{*,b}_j] + \varphi[\mathscr{R}^{*,g}_j]+\varphi[\mathscr{R}^{*,b}_j] \Big)
\end{equs}
from which we deduce
\begin{equs}
\begin{split} \label{eq : rsw max}
\max\{ \varphi[\mathscr{T}_0], \varphi[\mathscr{L}^{g}_0],\varphi[\mathscr{L}^{b}_0] \}
&\geq
\frac{1}{5(C+1)} \varphi[V_{[0,\lambda n]\times[0,n]}]
\end{split}
\\
\begin{split} \label{eq : rsw max dual}
\max\{ \varphi[\mathscr{T}^*_0], \varphi[\mathscr{L}^{*,g}_0],\varphi[\mathscr{L}^{*,b}_0] \}
&\geq
\frac{1}{5(C+1)} \varphi[V^*_{[0,\lambda n]\times[0,n]}].
\end{split}
\end{equs}
Note that, by translation invariance, reflection invariance, and straightforward gluing arguments, it is sufficient to consider the case when the maximum is taken over the tortuous paths $\mathscr{T}_0$, $\mathscr{T}_0^*$, $\mathscr{L}^b_0$, and $\mathscr{L}^{*,b}_0$. Let $C_0$ be the maximiser amongst $\{ \mathscr{T}_0, \mathscr{L}^b_0, \mathscr{T}_0^*, \mathscr{L}^{*,b}_0 \}$. We show that the following conditional probability estimate, whose proof we postpone until after the present proof, is sufficient to establish Proposition \ref{prop: bridge events}.
\begin{lem}\label{lem: cond prob}
Let $C_0$ be as above. Then,
\begin{equs}
\varphi[A_0 \mid C_0 \cap C_4]
&\geq
\frac 12 \varphi[ C_2 \setminus (A_0 \cup A_2) \mid C_0 \cap C_4 ].
\end{equs}
\end{lem}
By Lemma \ref{lem: cond prob} and translation invariance, we have that
\begin{equs}
2 \varphi [ A_0 ]
\geq
\varphi \left[ (C_2 \setminus (A_0 \cup A_2)) \cap C_0 \cap C_4 \right]
\end{equs}
and
\begin{equs}
\varphi[A_0]
\geq
\frac 12 \varphi[A_0 \cap (C_2 \cap C_0 \cap C_4)] + \frac 12 \varphi[ A_2 \cap (C_0 \cap C_2 \cap C_4)].
\end{equs}
Hence, by summing over disjoint events and using the FKG inequality, we obtain
\begin{equs}
4 \varphi[A_0]
&\geq
\varphi[(C_0 \cap C_2 \cap C_4) \setminus (A_0 \cup A_2)] + \varphi [(C_0 \cap C_2 \cap C_4) \cap (A_0 \cup A_2)]
\\
&=
\varphi[C_0 \cap C_2 \cap C_4]
\geq
\varphi[C_0]^3.
\end{equs}
These estimates, combined with \eqref{eq : rsw max} and \eqref{eq : rsw max dual}, complete the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem: cond prob}]
We only treat the case $C_0 = \mathscr{T}_0^*$ to illustrate the techniques; the other cases are similar, with the caveat that $\mathscr{L}^b_0$ and $\mathscr{L}^{*,b}_0$ require some additional geometric arguments that are purely deterministic (and hence carry over to our setting unchanged). This difficulty was already treated in \cite[Section 3]{DCT}.
Let $\Gamma_L^*$ (resp. $\Gamma_R^*$) denote the leftmost (resp. rightmost) up-down dual crossing consisting of closed edges. We condition on $\Gamma_L^* = \gamma_L^*$ and $\Gamma_R^* = \gamma_R^*$, where $\gamma_L^*$ and $\gamma_R^*$ are dual paths in $R_0$ and $R_4$, respectively, such that all edges except the first and last lie in their respective domains. Identifying $\gamma_L^*$ and $\gamma_R^*$ with the corresponding sets $\gamma_L$ and $\gamma_R$ of edges in the primal lattice, we let $v^-(\gamma_L)$ be the set of vertices incident to the left of $\gamma_L$ and $v^+(\gamma_R)$ be the set of vertices incident to the right of $\gamma_R$.
Let ${\rm Sym} = [-17k,\dots,22k] \times [0,\dots,39k]$. Note that ${\rm L}[{\rm Sym}] = \{-17k\} \times [0,\dots,39k]$ and ${\rm R}[{\rm Sym}] = \{ 22k \} \times [0,\dots,39k]$. Let $\Omega$ be the set of vertices with boundary given by the union of $v^-(\gamma_L) \cup v^+(\gamma_R)$ and the appropriate top and bottom segments of ${\rm Sym}$. Let ${\rm mix}$ denote the following boundary conditions on $\Omega$:
\begin{itemize}
\item $0$ vertices on $v^-(\gamma_L)$ and $v^+(\gamma_R)$
\item wired on ${\rm T}[\Omega]$
\item wired on ${\rm B}[\Omega]$.
\end{itemize}
Let $\xi$ denote a random boundary condition on $\Omega$ such that the leftmost and rightmost interior bonds are closed. Then, by Lemma \ref{free-empty-vertex} and monotonicity in the domain, we have that
\begin{equs}
\varphi[A_0^* \mid \Gamma_L^* = \gamma_L^*, \Gamma_R^* = \gamma_R^*]
&\geq
\varphi[ \varphi_\Omega^\xi [ v^-(\gamma_L) \overset{*}{\longleftrightarrow} v^+(\gamma_R) ] \mid \Gamma_L^* = \gamma_L^*, \Gamma_R^* = \gamma_R^* ]
\\
&\geq
\varphi_\Omega^{\rm mix} [ v^-(\gamma_L) \overset{*}{\longleftrightarrow} v^+(\gamma_R)]
\\
&\geq
\varphi_{\rm Sym}^{\rm mix} [ {\rm L}[{\rm Sym}] \overset{*}{\longleftrightarrow} {\rm R}[{\rm Sym}] ].
\end{equs}
Now, we turn to the right-hand side. We again partition $\mathscr{T}_0^* \cap \mathscr{T}_4^*$ according to $\Gamma_L^*$ and $\Gamma_R^*$. Fix $\Gamma_L^* = \gamma_L^*$ and $\Gamma_R^* = \gamma_R^*$. Conditionally on this, note that the event $\mathscr{T}_2^* \setminus (A_0^* \cup A_2^*)$ implies the existence of random primal paths $\Pi_L$ (leftmost) and $\Pi_R$ (rightmost) separating $\mathscr{T}_2^*$ from $\gamma_L^*$ and $\gamma_R^*$. Conditional on $\Pi_R = \pi_R$ and $\Pi_L = \pi_L$, let $\Omega^*$ denote the set of vertices with boundary given by $\pi_R \cup \pi_L$, and the appropriate top and bottom segments of ${\rm Sym}$, which we denote $T(\Omega^*)$ and $B(\Omega^*)$, respectively. Let $\xi$ denote a random boundary condition on $\Omega^*$ that is wired on $\pi_R \cup \pi_L$. Let also ${\rm mix}^*$ denote the following boundary conditions on $\Omega^*$:
\begin{itemize}
\item $0$ vertices on the top and bottom segments ${\rm T}(\Omega^*)$ and ${\rm B}(\Omega^*)$
\item wired on $\pi_L \cup \pi_R$.
\end{itemize}
By the spatial Markov property, monotonicity in boundary conditions, and inclusion of events,
\begin{equs}
\varphi[\mathscr{T}_2^* \setminus (A_0^* \cup A_2^*) \mid \gamma_L^*, \gamma_R^*, \pi_L, \pi_R ]
&\leq
\varphi[\varphi_{\Omega^*}^\xi[{\rm B}[\Omega^*] \overset{*}{\longleftrightarrow} {\rm T}[\Omega^*]] \mid \gamma_L^*, \gamma_R^*, \pi_L, \pi_R ]
\\
&\leq
\varphi^{{\rm mix}^*}_{\Omega^*}[{\rm B}[\Omega^*] \overset{*}{\longleftrightarrow} {\rm T}[\Omega^*]]
\\
&\leq
\varphi_{\rm Sym}^{{\rm mix}^*}[{\rm B}[{\rm Sym}] \overset{*}{\longleftrightarrow} {\rm T}[{\rm Sym}]]
\\
&\leq
2\varphi_{\rm Sym}^{{\rm mix}}[{\rm B}[{\rm Sym}] \overset{*}{\longleftrightarrow} {\rm T}[{\rm Sym}]].
\end{equs}
The desired conclusion follows from $\pi/2$ rotation invariance.
\end{proof}
\subsection{Renormalisation: Proof of Lemma \ref{lem: renormalisation inequality for strip densities}} \label{subsec: proof of renorm}
We require an intermediary lemma that gives estimates on crossing probabilities under balanced boundary conditions at macroscopic distance away from the rectangle. This allows us to push boundary conditions in the proof of the renormalisation inequality.
\begin{lem} \label{lem: push bc}
There exists $c>0$ such that, for all $n \in 9\mathbb{N}$, one of the following must occur:
\begin{align}\label{eq: pushprimal} \tag{\textbf{PushPrimal}}
\forall \alpha > 0, \qquad &\varphi_{R_0}^{{\rm LTR}}[H_R]
\geq
c^\alpha
\\ \label{eq: pushdual} \tag{\textbf{PushDual}}
\forall \alpha > 0,
\qquad
&\varphi_{R_0}^{\rm B}[V_R^c]
\geq
c^\alpha
\end{align}
where $R=[0,\alpha n]\times [0,n/9]$; $R_0=[0,\alpha n] \times [0, 28 n/9]$; {\rm LTR} is the boundary condition that is wired on ${\rm L}[R_0]\cup{\rm T}[R_0]\cup{\rm R}[R_0]$ and $0$ on ${\rm B}[R_0]$; and {\rm B} is the boundary condition that is $0$ on ${\rm L}[R_0]\cup{\rm T}[R_0]\cup{\rm R}[R_0]$, and wired on ${\rm B}[R_0]$.
\end{lem}
The proof of Lemma \ref{lem: push bc} follows the strategy of \cite[Sections 5.2]{DCT}. The adaptation of the arguments involved to our setting is similar in spirit to the modifications presented in the proof of Proposition \ref{prop: RSW} and, in addition, in the proof of the renormalisation inequality below. Thus, we omit the proof.
\begin{proof}[Proof of Lemma \ref{lem: renormalisation inequality for strip densities}]
We stress again that the proof is an adaptation of the proof of \cite[Lemma 15]{DCT}, where the main differences are those discussed in the proof of Proposition \ref{prop: RSW}. It is included for completeness. Without loss of generality, we prove the second inequality (renormalisation of the $q_n$'s). Moreover, without loss of generality, we may assume that \eqref{eq: pushdual} occurs.
Assume $n \in 9\mathbb{N}$. Define the rectangles
\begin{align*}
R&=[0,\alpha n] \times [0, 6\lambda n + 3n]
\\
R_i&= [0,\alpha n] \times [6in+3n, 6in + 6n], &\qquad 0 \leq i \leq \lambda - 1
\\
R_i'&=[0,\alpha n] \times [6in, 6in + 3n], &\qquad 0 \leq i \leq \lambda.
\end{align*}
Note that the $R_i$ and $R_i'$ partition $R$. We further single out three thin horizontal sub-rectangles of each $R_i$. For $0 \leq i \leq \lambda -1$, define
\begin{equs}
\tilde R_i^-
&:=
[0,\alpha n]\times [6 i n + 3n + \frac{12}{9}n, 6 i n + 3n + \frac{13}{9}n ]
\\
\tilde R_i
&:=
[0,\alpha n]\times [6 i n + 3n + \frac{13}{9}n, 6 i n + 3n + \frac{14}{9}n ]
\\
\tilde R_i^+
&:=
[0,\alpha n]\times [6 i n + 3n + \frac{14}{9}n, 6 i n + 3n + \frac{15}{9}n ].
\end{equs}
Let $\mathcal E$ denote the event that each rectangle $R_i$ is crossed horizontally. Let $\mathcal F$ denote the event that each rectangle $R_i'$ is not crossed vertically, i.e.\ that each $R_i'$ contains a horizontal crossing of dual edges (each dual edge crossing a closed primal edge). Let $\tilde{\mathcal E}$ be the event that all of the $\tilde R_i$ are crossed horizontally. Let $\tilde{\mathcal F}$ be the event that none of the $\tilde R_i^{\pm}$ are crossed vertically.
Let $1/0$ denote the boundary condition that is wired on ${\rm T}[R] \cup {\rm B}[R]$ and free on ${\rm L}[R] \cup {\rm R}[R]$. Assume the following estimates hold:
\begin{align} \label{eq: pf of ren 1}
\varphi^{1/0}_R[\tilde{\mathcal E} \cap \mathcal F]
&\geq
\frac{(2r)^{-12\lambda n + 6n}}{\lambda^{C\lambda \alpha }}
\varphi^1_{[0,\alpha n] \times [-n, 2n]}[V_{[0,\alpha n]\times [0,n]}^c]^{\lambda + 1}
\\ \label{eq: pf of ren 2}
\varphi^{1/0}_R[\tilde{\mathcal F} \mid \tilde{\mathcal E} \cap \mathcal F]
&\geq
c^{2\lambda \alpha}
\\ \label{eq: pf of ren 3}
\varphi^{1/0}_R[\tilde{\mathcal E} \cap \tilde{\mathcal F}]
&\leq
\varphi^0_{[0,\alpha n] \times [-n/9, 2n/9]}[H_{[0,\alpha n]\times [0, n/9]}]^\lambda.
\end{align}
Then, by \eqref{eq: pf of ren 1} and \eqref{eq: pf of ren 2},
\begin{equs}
\varphi^{1/0}_R[\tilde{\mathcal E} \cap \tilde{\mathcal F}]
&\geq
\varphi^{1/0}_R[\tilde{\mathcal F} \mid \tilde{\mathcal E} \cap \mathcal F] \varphi^{1/0}_R[\tilde{\mathcal E} \cap \mathcal F]
\\
&\geq
\frac{c^{2\lambda \alpha}(2r)^{-12\lambda n + 6n}}{\lambda^{C\lambda \alpha }}
\varphi^1_{[0,\alpha n] \times [-n, 2n]}[V_{[0,\alpha n]\times [0,n]}^c]^{\lambda + 1}
\end{equs}
where we recall that $1/0$ denotes the balanced boundary condition that is wired on the top and bottom of $R$, and free on the left and right of $R$. Hence, by \eqref{eq: pf of ren 3}, we get that
\begin{equs}
\varphi^1_{[0,\alpha n] \times [-n, 2n]}[V_{[0,\alpha n]\times [0,n]}^c]^{(\lambda + 1)/\alpha}
\leq
\frac{\lambda^{C\lambda }}{c^{2\lambda }(2r)^{(-12\lambda n + 6n)/\alpha}}
\varphi^0_{[0,\alpha n] \times [-n/9, 2n/9]}[H_{[0,\alpha n]\times [0, n/9]}]^{\lambda/\alpha}.
\end{equs}
The renormalisation inequality then follows by: i) raising to the power $1/\lambda$; ii) taking $\alpha \rightarrow \infty$ to express these events in terms of $p_m$ and $q_m$, where $m=m(n)$; and iii) applying the duality relation between $p_m$ and $q_m$ of Lemma \ref{lem: dual relation of strip densities}. It remains to prove \eqref{eq: pf of ren 1}--\eqref{eq: pf of ren 3}.
To prove \eqref{eq: pf of ren 1}, first note that
\begin{equs} \label{eq: ren1 pf1}
\varphi^{1/0}_{R}[\tilde{\mathcal E} \cap \mathcal F]
\geq
(2r)^{-12\lambda n + 6n} \varphi^1_R[\tilde{\mathcal E} \cap \mathcal F]
\geq
(2r)^{-12\lambda n + 6n} \varphi^1_R[\tilde{\mathcal E}] \varphi^1_R[\mathcal F \mid \tilde{\mathcal E}].
\end{equs}
Note that wired boundary conditions are favourable for the event $\tilde{\mathcal E}$. Thus, by monotonicity in boundary conditions, the FKG inequality, and arguing as in \cite[Corollary 11]{DCT} to find the dependencies on $\lambda$ and $\alpha$, we find
\begin{equs} \label{eq: ren1 pf2}
\varphi^1_R[\tilde{\mathcal E}]
\geq
\varphi^1_{S_{6\lambda n + 3n}}[\tilde{\mathcal E}]
\geq
\prod_{i=0}^{\lambda - 1}
\varphi^1_{S_{6\lambda n + 3n}}[H_{\tilde R_i}]
\geq
\lambda^{-C \lambda \alpha}
\end{equs}
where above we abuse notation and write $S_{6\lambda n + 3n} = \mathbb{R} \times[0, 6\lambda n + 3n]$.
On the other hand, note that we may partition the event $\tilde{\mathcal E}$ as follows. For each $1 \leq i \leq \lambda - 1$, let $\Gamma_i$ be the top-most horizontal crossing of $\tilde R_i$, and set $\Gamma_{0} = {\rm B}[R]$ and $\Gamma_{\lambda}={\rm T}[R]$. Let $R_i'(\Gamma)$ be the random domain with boundary given by $\Gamma_{i+1}$, $\Gamma_{i}$, and the relevant left and right segments of the boundary of $R$. Then, by conditioning on $\Gamma=(\Gamma_0,\ldots,\Gamma_\lambda)$, the spatial Markov property, independence, monotonicity in boundary conditions, and translation invariance, we find
\begin{equs} \label{eq: ren1 pf3}
\varphi^1_R[\mathcal F \mid \tilde{\mathcal E}]
\geq
\prod_{i=0}^{\lambda}\varphi^1_{R_i'(\Gamma)}[V_{R_i'}^c]
\geq
\varphi^1_{[0,\alpha n] \times [-n, 2n]}[V_{[0,\alpha n]\times [0,n]}^c]^{\lambda + 1}.
\end{equs}
Inserting \eqref{eq: ren1 pf2} and \eqref{eq: ren1 pf3} into \eqref{eq: ren1 pf1} establishes \eqref{eq: pf of ren 1}.
A similar conditioning argument, this time on the dual paths occurring in the $\tilde R_i^{\pm}$, together with the stochastic domination in Lemma \ref{free-empty-vertex}, yields \eqref{eq: pf of ren 3}. Finally, in order to obtain \eqref{eq: pf of ren 2}, we note that by a similar conditioning (this time on the crossings in $\tilde{\mathcal E}$ and the dual crossings of $\mathcal F$) and pushing boundary conditions argument, together with translation invariance, we have the following estimate:
\begin{equs}
\varphi^{1/0}_R[\tilde{\mathcal F} \mid \tilde{\mathcal E} \cap \mathcal F]
\geq
\varphi^{\rm LTR}_{[0,\alpha n] \times [0, 28n/9]}[V_{[0,\alpha n]\times [0,n/9]}^c]^{2\lambda}.
\end{equs}
Applying \eqref{eq: pushdual} then completes the proof.
\end{proof}
\subsection{Applications of the quadrichotomy}
We now explain some important consequences of Proposition \ref{prop:quad} that allow us to get a better quantitative understanding of the phase diagram in $d=2$. We suppress notational dependence on $p$ and $a$ when clear from context.
\subsubsection{Off-critical behaviour}
The following propositions characterise the off-critical behaviour. The proofs of the first two are standard and can be found in \cite[Section 5]{DCT}.
\begin{prop}\label{cor:1}
Assume that {\bf (SubCrit)} occurs. There exists $c>0$ such that for every $n\geq 1$,
\begin{equs}\label{eq:kj}\varphi_{\Lambda_n}^1[0\longleftrightarrow \partial\Lambda_n]\leq e^{-cn}.\end{equs}
In particular, $\varphi^1[0\longleftrightarrow\infty]=0$ and $\varphi^0=\varphi^1$.
\end{prop}
\begin{prop}\label{cor:2}
Assume that {\bf (SupCrit)} occurs. There exists $c>0$ such that for every $n\geq 1$,
\begin{equs}
\varphi^0[\Lambda_n\centernot\longleftrightarrow\infty]\leq e^{-cn}.
\end{equs}
In particular, $\varphi^0[0\longleftrightarrow\infty]>0$ and $\varphi^0=\varphi^1$.
\end{prop}
\begin{prop}\label{sup-circuit}
Assume that {\bf (SupCrit)} occurs. Then there exists $t>0$ such that for every $n\geq 1$, we have $\varphi^0[\mathcal{C}_n] \geq 1- e^{-tn}$.
\end{prop}
\begin{proof}
It suffices to show that $\varphi^0[H^*_{[-2n,-n]\times[-2n,2n]}]$ decays exponentially. This follows from our assumption that {\bf (SupCrit)} occurs, \eqref{eq:MON}, and \eqref{eq: RSW}.
\end{proof}
\subsubsection{Discontinuous critical behaviour}
The following proposition characterises the discontinuous critical behaviour.
\begin{prop}\label{cor:3}
Assume that {\bf (DiscontCrit)} occurs. There exists $c>0$ such that for every $n\geq 1$,
\begin{equs}
\varphi^0[0\longleftrightarrow\partial \Lambda_n]\leq e^{-cn} \quad \text{and} \quad \varphi^1[\Lambda_n\centernot\longleftrightarrow\infty]\leq e^{-cn}.
\end{equs}
In particular, $\varphi^0[0\longleftrightarrow\infty]=0$ and $\varphi^1[0\longleftrightarrow\infty]>0$.
\end{prop}
In order to prove Proposition \ref{cor:3}, we require an intermediate lemma. Let $\mathcal{C}_n$, respectively $\mathcal{C}^*_n$, be the event that there is an open circuit in $\omega\cap(\Lambda_{2n}\setminus \Lambda_n)$, respectively $\omega^*\cap(\Lambda_{2n}\setminus \Lambda_n)$, surrounding the origin.
\begin{lem}\label{dual-circuit}
Assume that {\bf (DiscontCrit)} occurs. Then there exists $t>0$ such that for every $n\geq 1$, we have $\varphi^0[\mathcal{C}^*_n] \geq 1- e^{-tn}$ and $\varphi^1[\mathcal{C}_n] \geq 1- e^{-tn}$.
\end{lem}
\begin{proof}
We will show the assertion in the case of $\varphi^0$, with the case of $\varphi^1$ being similar.
We start by showing that $\varphi^0_{\Lambda_{4n}}[\mathcal{C}^*_n] \geq 1- e^{-tn}$, and then proceed to show that the same holds at infinite volume.
Note that if the rectangles $[-2n,-n]\times[-2n,2n]$ and $[n,2n]\times[-2n,2n]$ have dual vertical crossings, while $[-2n,2n]\times[-2n,-n]$ and $[-2n,2n]\times[n,2n]$ have dual horizontal crossings, then $\mathcal{C}^*_n$ occurs. By the $\pi/2$ rotational symmetry of the measure $\varphi^0_{\Lambda_{4n}}$, the FKG inequality, and duality, it suffices to show that $\varphi^0_{\Lambda_{4n}}[H_{[-2n,-n]\times[-2n,2n]}]$ decays exponentially fast to $0$.
To this end, recall that $p_n$ decays exponentially by our assumption that {\bf (DiscontCrit)} occurs. It follows from the union bound that there are vertices $x$ and $y$ on the left and right sides of $[-2n,-n]\times[-2n,2n]$, respectively, such that
\begin{equs}
\varphi^0_{\Lambda_{4n}}[x\longleftrightarrow y \text{ in } [-2n,-n]\times[-2n,2n]]\geq \frac{\varphi^0_{\Lambda_{4n}}[H_{[-2n,-n]\times[-2n,2n]}]}{(4n+1)^2}.
\end{equs}
Moreover, for every $k\geq 1$, we can create a horizontal crossing of the rectangle $[0,4kn]\times [0,4n]$ by combining $4k$ translations and reflections of the event $\{x\longleftrightarrow y \text{ in } [-2n,-n]\times[-2n,2n]\}$, as appropriate. Using the FKG inequality, \eqref{eq:MON}, and the reflection symmetry of the measure $\varphi^0_{S_{4n}}$, we obtain
\begin{equs}
\varphi^0_{S_{4n}}[H_{[0,4kn]\times [0,4n]}]\geq\left(\frac{\varphi^0_{\Lambda_{4n}}[H_{[-2n,-n]\times[-2n,2n]}]}{(4n+1)^2}\right)^{4k}.
\end{equs}
By the finite energy property, there exists a constant $c>0$ such that
\begin{equs}
\varphi^0_{[0,4kn]\times[-4n,8n]}[H_{[0,4kn]\times [0,4n]}]\geq e^{-cn} \varphi^0_{S_{4n}}[H_{[0,4kn]\times [0,4n]}],
\end{equs}
hence, taking $k$-th roots and sending $k$ to infinity, we obtain
\begin{equs}
\varphi^0_{\Lambda_{4n}}[H_{[-2n,-n]\times[-2n,2n]}]\leq (4n+1)^2 p_{4n}^{1/4}.
\end{equs}
This implies that $\varphi^0_{\Lambda_{4n}}[H_{[-2n,-n]\times[-2n,2n]}]$ decays exponentially, as desired.
To prove the assertion at infinite volume, consider some $k\geq 1$, and note that
\begin{equs}
\varphi^0_{\Lambda_{2^k n}}[\mathcal{C}^*_{n}]\geq \varphi^0_{\Lambda_{2^k n}}\Big[\bigcap_{i=1}^k \mathcal{C}^*_{2^{k-i} n}\Big]=\varphi^0_{\Lambda_{2^k n}}[\mathcal{C}^*_{2^{k-1} n}]\prod_{i=2}^k\varphi^0_{\Lambda_{2^k n}}\Big[\mathcal{C}^*_{2^{k-i}n} \mid \bigcap_{j=1}^{i-1} \mathcal{C}^*_{2^{k-j} n} \Big].
\end{equs}
By Lemma~\ref{free-closed}, \eqref{eq:DMPRC}, and \eqref{eq:MON}, we have
\begin{equs}
\varphi^0_{\Lambda_{2^k n}}\Big[\mathcal{C}^*_{2^{k-i}n} \mid \bigcap_{j=1}^{i-1} \mathcal{C}^*_{2^{k-j} n} \Big]\geq \varphi^0_{\Lambda_{2^{k-i+2} n}}[\mathcal{C}^*_{2^{k-i}n}],
\end{equs}
and $\varphi^0_{\Lambda_{2^k n}}[\mathcal{C}^*_{2^{k-1} n}]\geq \varphi^0_{\Lambda_{2^{k+1}n}}[\mathcal{C}^*_{2^{k-1} n}]$, which implies that $\varphi^0_{\Lambda_{2^k n}}[\mathcal{C}^*_{n}]\geq 1-e^{-t'n}$ for some constant $t'>0$. Sending $k$ to infinity we obtain the desired assertion.
\end{proof}
We are now ready to prove Proposition~\ref{cor:3}.
\begin{proof}[Proof of Proposition~\ref{cor:3}]
For the first inequality we use that $\varphi^0[0\longleftrightarrow\partial \Lambda_n]\leq 1-\varphi^0[\mathcal{C}^*_{n/2}]$ and Lemma~\ref{dual-circuit}. For the second inequality, we observe that when $\Lambda_n$ is not connected to infinity, there is an open circuit in $\omega^*$ surrounding $\Lambda_n$. This implies that for some $k\geq n$, the dual edge $\{(k+1/2,1/2),(k+1/2,-1/2)\}$ is connected to distance $k$ in $\omega^*$, hence the second inequality follows from Lemma~\ref{dual-circuit}.
\end{proof}
\subsubsection{Continuous critical behaviour}
The following proposition, which gives polynomial bounds on one-arm events, is a standard consequence of Proposition \ref{prop:quad}. See \cite[Section 5]{HDC-IP}.
\begin{prop}\label{cor:a}
Assume that {\bf (ContCrit)} occurs. There exists $c>0$ such that for every $n\geq 1$,
\begin{equs}\label{prop: one arm}
\frac{c}{n}\leq \varphi^1[0\longleftrightarrow\partial\Lambda_n]\leq \frac1{n^c}.
\end{equs}
In particular, $\varphi^1[0\longleftrightarrow\infty]=0$ and $\varphi^1=\varphi^0$.
\end{prop}
\subsubsection{Exact correspondence of critical behaviour}
The following proposition makes rigorous the intuitively clear statement that {\bf (ContCrit)} or {\bf (DiscontCrit)} can only occur at $p_c$.
\begin{prop}\label{prop: pc behaviour}
Let $a \in (0,1)$. If $p=p_c(a)$, then either {\bf (ContCrit)} or {\bf (DiscontCrit)} occurs. Furthermore, the set of $a\in (0,1)$ for which {\bf (DiscontCrit)} occurs at $p_c(a)$ is open.
\end{prop}
\begin{proof}
The fact that {\bf (ContCrit)} or {\bf (DiscontCrit)} must occur at $p_c(a)$ follows from a finite size criterion. Indeed, if {\bf (SubCrit)} held at $p_c(a)$, then, arguing as in \cite[Lemma 10]{DCT}, there would be no infinite cluster under $\varphi^1_{p_c(a) + \varepsilon}$ for small $\varepsilon>0$, which is a contradiction. Similar considerations rule out {\bf (SupCrit)}. In addition, a standard consequence of the renormalisation inequality in Lemma \ref{lem: renormalisation inequality for strip densities} is that the set of points at which {\bf (DiscontCrit)} holds is open. See \cite[Section 5.4]{DCT}.
\end{proof}
Finally, we establish that {\bf (ContCrit)} and {\bf (DiscontCrit)} correspond exactly to continuous and discontinuous critical points for the Blume-Capel model, respectively.
\begin{prop}\label{correspondence}
Let $\Delta\in \mathbb{R}$ and $a=\frac{2e^{\Delta}}{1+2e^{\Delta}}$. If {\bf (ContCrit)} occurs at $p=p_c(a)$, then $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}=0$. If {\bf (DiscontCrit)} occurs at $p=p_c(a)$, then $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}>0$.
\end{prop}
\begin{proof}
Recall that $\langle \sigma_0 \rangle^+=\varphi^1[0\longleftrightarrow\infty]$ from the Edwards-Sokal coupling. If {\bf (ContCrit)} occurs, then $\varphi^1[0\longleftrightarrow\infty]=0$ by Proposition~\ref{cor:a}. If {\bf (DiscontCrit)} occurs, then $\varphi^1[0\longleftrightarrow\infty]>0$ by Proposition~\ref{cor:3}. This proves the desired result.
\end{proof}
\section{Tricritical point in $d=2$ via crossing probabilities}\label{sec: tric 2}
\subsection{A direct proof from the quadrichotomy}
The importance of Proposition \ref{prop: pc behaviour} is that it allows us to deduce that the two-point function decays to $0$ at $p_c(a)$ for any $a \in (0,1)$. This, together with the mapping described in Section \ref{sec: mapping bc-ising}, implies Theorem \ref{thm: existence} for $\Delta^+ = -\log 2$.
\begin{proof}[Proof of Theorem \ref{thm: existence} in $d=2$ for $\Delta^+ = -\log 2$]
Let $\Delta \geq - \log 2$ and set $a=\frac{2e^\Delta}{1+2e^\Delta}$. By Proposition \ref{prop: pc behaviour}, we know that either {\bf (ContCrit)} or {\bf (DiscontCrit)} occurs at $p_c(a)$. Using Propositions \ref{cor:3} and \ref{cor:a}, we obtain that in any case, $\lim_{|x|\rightarrow \infty} \varphi^0_{p_c(a),a}[0 \longleftrightarrow x] = 0$. As a consequence of Corollary \ref{corollary: correlations} we have that
\begin{equs}
\lim_{|x| \rightarrow \infty}\langle \tau_0 \tau_x \rangle^{{\rm Ising},0}_{J(\beta_c(\Delta), \Delta)}
=
0.
\end{equs}
Hence, by \cite[Proposition 1]{AR}, translation invariance, and mixing,
\begin{equs}
\Big(\langle \tau_0 \rangle^{{\rm Ising},+}_{J(\beta_c(\Delta),\Delta)} \Big)^2
=
\lim_{|x| \rightarrow \infty}\langle \tau_0 \tau_x \rangle^{{\rm Ising},+}_{J(\beta_c(\Delta), \Delta)}
=
\lim_{|x| \rightarrow \infty}\langle \tau_0 \tau_x \rangle^{{\rm Ising},0}_{J(\beta_c(\Delta), \Delta)}
=
0.
\end{equs}
Thus $\langle \sigma_0 \rangle^+_{\beta_c(\Delta), \Delta} = 0$.
\end{proof}
\subsection{A sufficient criterion for continuity}
We give a sufficient criterion to prove continuity in dimension $2$ that avoids the use of the coupling with Ising explained in Section \ref{sec: mapping bc-ising}, thus having the potential to extend beyond $\Delta^+ = - \log 2$. In the next subsection, we show that this is indeed the case.
Let $G=(V,E)$ be a finite subgraph of $\mathbb{Z}^2$, and let $\varepsilon>0$.
We denote by $\mu_{G,\beta,\Delta}^{+,\varepsilon}$ the probability measure defined on spin configurations $\sigma \in \{-1,0,1\}^{V}$ by
\begin{equs}
\mu_{G,\beta,\Delta}^{+,\varepsilon}(\sigma)
=
\frac{1}{Z_{G,\beta,\Delta}^{+,\varepsilon}} e^{-H_{G,\beta,\Delta}^{+,\varepsilon}(\sigma)}
\end{equs}
where
\begin{equs}
H_{G,\beta,\Delta}^{+,\varepsilon}(\sigma)
=
-\beta\sum_{xy \in E} \sigma_x\sigma_y -\Delta\sum_{x \in V} \sigma_x^2 - \varepsilon \sum_{\substack{xy \in \mathbb{E}^d \\x \in V, y \in \mathbb{Z}^d\setminus V }} \sigma_x,
\end{equs}
and $Z_{G,\beta,\Delta}^{+,\varepsilon}$ is the partition function. This corresponds to the Blume-Capel model with $\varepsilon$-boundary conditions.
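For concreteness, the following is a minimal sketch of how $H^{+,\varepsilon}_{G,\beta,\Delta}$ can be evaluated; the encoding of the graph as dictionaries and lists, and all names, are illustrative assumptions of ours and not part of the model's definition.
\begin{verbatim}
def hamiltonian(spins, edges, boundary_pairs, beta, delta, eps):
    # H^{+,eps}(sigma) for the Blume-Capel model on a finite graph.
    # spins: dict mapping each vertex of V to a spin in {-1, 0, 1};
    # edges: pairs (x, y) with both endpoints in V;
    # boundary_pairs: pairs (x, y) with x in V and y outside V.
    # (Illustrative encoding, not taken from the paper.)
    pair_term = -beta * sum(spins[x] * spins[y] for x, y in edges)
    single_term = -delta * sum(s * s for s in spins.values())
    boundary_term = -eps * sum(spins[x] for x, _ in boundary_pairs)
    return pair_term + single_term + boundary_term
\end{verbatim}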
When the phase transition is continuous, $\mu_{G}^{+,\varepsilon}[\sigma_0]$ converges to $\mu^+[\sigma_0]=0$ for every $\varepsilon>0$. On the other hand, when the phase transition is discontinuous, it is unclear whether this property holds as one expects that $\mu^0$ is extremal. Therefore, the following criterion is natural.
\begin{defn}
For $\beta > 0$ and $\Delta \in \mathbb{R}$, we say ${\bf (H)}$ is satisfied if
\begin{equs}
\lim_{n\to\infty} \mu^{+,\varepsilon}_{\Lambda_n,\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]
=
\mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right],
\qquad \forall \varepsilon > 0.
\end{equs}
\end{defn}
Given $\delta\in (0,1)$, we now define a dilute random cluster measure $\varphi^{1,\delta}_{\Lambda,p,a}$, which corresponds to $\mu^{+,\varepsilon}_{\Lambda,\beta,\Delta}$ for $\varepsilon=-\frac{1}{2}\log(1-\delta)$ via the Edwards-Sokal coupling. The measure $\varphi^{1,\delta}_{\Lambda,p,a}$ is defined by
\begin{equs}
\varphi^{1,\delta}_{\Lambda,p,a} [\theta] =
\frac{\mathbf{{1}}[\theta \in \Theta_\Lambda^1]}{Z^{1,\delta}_{\Lambda,p,a}} 2^{k(\theta,\Lambda)}
\prod_{x\in V} \left(\frac{a}{1-a}\right)^{\psi_x}
\prod_{e\in E_{\psi,\Lambda}} r_e\left(\frac{p_e}{1-p_e}\right)^{\omega_e},
\end{equs}
where $Z^{1,\delta}_{\Lambda,p,a}$ is the normalisation constant. Here $r_e=\sqrt{1-p}$ if both endpoints of $e$ belong to $\Lambda$, and $r_e=\sqrt{1-\delta}$ otherwise, while $p_e=p$ if both endpoints of $e$ belong to $\Lambda$, and $p_e=\delta$ otherwise. Note that samples from $\varphi^{1,\delta}_{\Lambda, p, a}$ are supported on $\Theta^1_{\Lambda}$, although we stress that not all vertices in $\Lambda$ are required to be open.
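To fix ideas, here is a minimal sketch of the unnormalised weight in the definition of $\varphi^{1,\delta}_{\Lambda,p,a}$; the configuration encoding, the treatment of vertices outside the dictionary as open (the wired exterior), and the convention that $k(\theta,\Lambda)$ counts open clusters meeting $\Lambda$ are assumptions of this illustration rather than the paper's implementation.
\begin{verbatim}
def open_cluster_count(psi, omega, Lambda):
    # Connected components of open vertices meeting Lambda, with
    # connectivity given by the open edges (assumed convention
    # for k(theta, Lambda)).
    L = set(Lambda)
    open_vs = {x for x, s in psi.items() if s == 1}
    adj = {x: set() for x in open_vs}
    for e, w in omega.items():
        if w == 1 and set(e) <= open_vs:
            x, y = tuple(e)
            adj[x].add(y)
            adj[y].add(x)
    seen, count = set(), 0
    for x in open_vs:
        if x in seen:
            continue
        comp, stack = {x}, [x]
        while stack:
            u = stack.pop()
            for v in adj[u] - comp:
                comp.add(v)
                stack.append(v)
        seen |= comp
        count += bool(comp & L)
    return count

def dilute_rc_weight(psi, omega, Lambda, p, a, delta):
    # psi: dict vertex -> {0,1}; omega: dict edge -> {0,1}, where
    # an edge is a frozenset of two vertices and omega lists the
    # edges of E_{psi,Lambda} together with the boundary edges.
    weight = 2.0 ** open_cluster_count(psi, omega, Lambda)
    for x in Lambda:
        weight *= (a / (1.0 - a)) ** psi[x]
    for e, w in omega.items():
        if any(psi.get(x, 1) == 0 for x in e):
            continue  # only edges with both endpoints open count
        inside = all(x in Lambda for x in e)
        r_e = (1.0 - p) ** 0.5 if inside else (1.0 - delta) ** 0.5
        p_e = p if inside else delta
        weight *= r_e * (p_e / (1.0 - p_e)) ** w
    return weight
\end{verbatim}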
In the next lemma, we obtain a comparison between $\varphi^{1,\delta}_{\Lambda_n,p,a}$ and $\varphi^0_{\Lambda_n,p,a}$ which states that, up to an exponential rewiring cost that is proportional to $n$ and small when $\delta$ is small, we may bound probabilities under $\varphi^0_{\Lambda_n,p,a}$ from below by probabilities under $\varphi^{1,\delta}_{\Lambda_n,p,a}$. We recall that $b(\Lambda_n)$ is the set of edges induced by $\Lambda_n$. We write $\partial_E \Lambda = \{ xy \in \mathbb{E}^d : x \in \Lambda, y \in \mathbb{Z}^d \setminus \Lambda \}$ to denote the edge boundary of $\Lambda$.
\begin{lem}\label{comparison}
For every $\delta\in (0,1)$, $n \in \mathbb{N}$, and event $A$ depending on $(\Lambda_n, b(\Lambda_n))$, we have
\begin{equs}
\varphi^{1,\delta}_{\Lambda_n,p,a}[A]\leq \left(\frac{1}{1-\delta}\right)^{12n+6}\varphi^0_{\Lambda_n,p,a}[A].
\end{equs}
\end{lem}
\begin{proof}
Let $\theta\in \{0,1\}^{\Lambda_n}\times \{0,1\}^{b(\Lambda_n)}$ be a configuration on $\Lambda_n$. Given $\xi \in \Theta^1_{\Lambda_n}\cup \Theta^0_{\Lambda_n}$, we write $\xi \sim_{\Lambda_n} \theta$ if $\xi|_{(\Lambda_n, b(\Lambda_n))} = \theta$, i.e.\ if the two configurations coincide exactly in $(\Lambda_n, b(\Lambda_n))$.
Let $Z^{1,\delta}_{\Lambda_n, p,a}[\theta]$ and $Z^{0}_{\Lambda_n, p, a}[\theta]$ be defined such that $\varphi^{1,\delta}_{\Lambda_n, p,a}[\theta] = \frac{Z^{1,\delta}_{\Lambda_n, p,a}[\theta]}{Z^{1,\delta}_{\Lambda_n, p,a}}$ and $\varphi^{0}_{\Lambda_n, p,a}[\theta] = \frac{Z^{0}_{\Lambda_n, p,a}[\theta]}{Z^{0}_{\Lambda_n, p,a}}$, respectively. Note that
\begin{equs}
\varphi^{1,\delta}_{\Lambda_n, p, a}[\theta]
=
\frac{1}{Z^{1,\delta}_{\Lambda_n,p ,a}} \, \sum_{\substack{\theta^1 \in \Theta^1_{\Lambda_n} \\ \theta^1 \sim_{\Lambda_n} \theta }} Z^{1,\delta}_{\Lambda_n, p, a}[\theta^1], \quad \text{ and } \quad
\varphi^{0}_{\Lambda_n, p, a}[\theta]
=
\frac{1}{Z^{0}_{\Lambda_n,p ,a}} \, \sum_{\substack{\theta^0 \in \Theta^0_{\Lambda_n} \\ \theta^0 \sim_{\Lambda_n} \theta } } Z^{0}_{\Lambda_n, p, a}[\theta^0].
\end{equs}
Thus, the proof is immediate if we can prove:
\begin{equs} \label{eq: expwire claims}
Z^{1,\delta}_{\Lambda_n,p,a}[\theta]
\leq
\left(\frac{1}{1-\delta}\right)^{8n+4} Z^{0}_{\Lambda_n,p,a}[\theta],
\quad \text{ and } \quad
Z^{1,\delta}_{\Lambda_n,p,a}[\theta]
\geq
(1-\delta)^{4n+2} Z^{0}_{\Lambda_n,p,a}[\theta].
\end{equs}
The first inequality in \eqref{eq: expwire claims} follows from three observations. First, note that for every $\theta^1\in \Theta^1_{\Lambda_n}$ and $\theta^0\in \Theta^0_{\Lambda_n}$ that coincide with $\theta$ on $(\Lambda_n, b(\Lambda_n))$, we have $k(\theta^1,\Lambda_n)\leq k(\theta^0,\Lambda_n)$. Second, note that $r_e\leq 1$. Finally, note that
\begin{equs}
\sum_{\substack{\omega \in \{0,1\}^{\partial_E \Lambda_n} \\ \omega|_{b(\Lambda_n)} = \theta_{b(\Lambda_n)} }}\prod_{e\in \partial_E \Lambda_n} \left(\frac{\delta}{1-\delta}\right)^{\omega_e}
\leq
\left(\frac{1}{1-\delta}\right)^{8n+4}
\end{equs}
where $\theta_{b(\Lambda_n)}$ refers to the edge configuration (i.e.\ the projection on the second coordinate) of $\theta$ inside $b(\Lambda_n)$.
For the second inequality in \eqref{eq: expwire claims}, we restrict the sum in $Z^{1,\delta}_{\Lambda_n,p,a}[\theta]$ to those configurations $\theta^1$ whose edge projection is equal to $0$ on every edge in $\partial_E \Lambda_n$. For any such $\theta^1$ and for any $\theta^0 \in \Theta^0_{\Lambda_n}$, we have $k(\theta^1,\Lambda_n)= k(\theta^0,\Lambda_n)$. The estimate follows from this consideration together with the bound
\begin{equs}
\prod_{\substack{x\in \Lambda_n, y\not\in \Lambda_n\\ xy\in E_{\psi^1,\Lambda_n}}} r_{xy}
\geq
(1-\delta)^{\frac{8n+4}2}
=
(1-\delta)^{4n+2}
\end{equs}
where $\psi^1$ is the projection of $\theta^1$ onto its first coordinate.
\end{proof}
We now show that {\bf (H)} is a sufficient condition for continuity.
\begin{prop} \label{prop: h implies continuity}
Let $\Delta \in \mathbb{R}$. Assume that {\bf (H)} is satisfied at $(\beta_c(\Delta), \Delta)$. Then
\begin{equs}
\langle \sigma_0 \rangle^+_{\beta_c(\Delta), \Delta}
=
0.
\end{equs}
\end{prop}
\begin{proof}
Let $(\beta_c(\Delta), \Delta)$ be such that {\bf (H)} is satisfied and assume that we have $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}>0$. By Proposition~\ref{correspondence}, {\bf (DiscontCrit)} occurs at $p=p_c(a)$ for $a=\frac{2e^{\Delta}}{1+2e^{\Delta}}$.
Let $t$ be the constant of Lemma~\ref{dual-circuit}. Then, by Lemma~\ref{comparison}, there exists $\delta>0$ such that
\begin{equs}
\varphi^{1,\delta}_{\Lambda_{2n}}[\mathcal{C}^c_n \mid \psi_0=1]\leq e^{tn/2} \varphi^0_{\Lambda_{2n}}[\mathcal{C}^c_n \mid \psi_0=1]
\end{equs}
for every $n\geq 1$.
Note that
\begin{equs}
\varphi^0_{\Lambda_{2n}}[\mathcal{C}^c_n\mid \psi_0=1]\leq \frac{\varphi^0_{\Lambda_{2n}}[\mathcal{C}^c_n]}{\varphi^0_{\Lambda_{2n}}[\psi_0=1]}\leq C \varphi^0_{\Lambda_{2n}}[\mathcal{C}^c_n]\leq C e^{-tn}
\end{equs}
for some constant $C>0$ independent of $n$, which implies that
\begin{equs}
\varphi^{1,\delta}_{\Lambda_{2n}}[\mathcal{C}^c_n \mid \psi_0=1]\leq Ce^{-tn/2}.
\end{equs}
By the Edwards-Sokal coupling,
\begin{equs}
\mu^{+,\varepsilon}_{\Lambda_{2n}}[\sigma_0 \mid \sigma_0\neq 0]=\varphi^{1,\delta}_{\Lambda_{2n}}[0\longleftrightarrow \Lambda_{2n}^c \mid \psi_0=1] \leq \varphi^{1,\delta}_{\Lambda_{2n}}[\mathcal{C}^c_n \mid \psi_0=1]\leq Ce^{-tn/2},
\end{equs}
where $\varepsilon=-\frac{1}{2}\log(1-\delta)$.
Letting $n$ go to infinity and using {\bf (H)}, we obtain $\mu^{+}[\sigma_0 \mid \sigma_0\neq 0]=0$, hence $\mu^+[\sigma_0]=0$, which contradicts the assumption $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}>0$ and concludes the proof.
\end{proof}
\subsection{$\Delta^+ = -\log 4$ via Lee-Yang theory}
In order to prove Theorem \ref{thm: existence} in two dimensions with $\Delta^+=-\log{4}$, we need the following general statement of Lee-Yang type on the complex zeros of partition functions, which works for any finite graph.
\newcommand{\operatorname{Ising}}{\operatorname{Ising}}
\newcommand{\operatorname{BC}}{\operatorname{BC}}
Let us first define the Blume-Capel model with a magnetic field. Given a finite graph $G=(V, E)$ and a \emph{real} magnetic field $h:V\rightarrow\mathbb{R}$, we denote by $\mu_{G,\beta,\Delta}^{h}$ the probability measure defined on spin configurations $\sigma \in \{-1,0,1\}^{V}$ by
\begin{equs}
\mu_{G,\beta,\Delta}^{h}(\sigma)
=
\frac{1}{Z_{G,\beta,\Delta}^{h}} e^{-H_{G,\beta,\Delta}^{h}(\sigma)} \end{equs}
where
\begin{equs}
H_{G,\beta,\Delta}^{h}(\sigma)
=
-\beta\sum_{xy \in E} \sigma_x\sigma_y -\Delta\sum_{x \in V} \sigma_x^2 - \sum_{x \in V} h_x\sigma_x,
\end{equs}
and $Z_{G,\beta,\Delta}^{h}$ is the partition function.
One can extend the definition of $\mu_{G,\beta,\Delta}^{h}$ to a \emph{complex} magnetic field $h:V\rightarrow\mathbb{C}$, which is in general a complex measure, as long as the partition function $Z_{G,\beta, \Delta}^h$ does not vanish. Our aim is to study the complex zeros of $Z_{G,\beta, \Delta}^h$ and, more generally, the complex zeros of partition functions of events. Given an event $\mathcal E$, we define $Z^{h}_{G,\beta,\Delta}[\mathcal E]$ as
\[
Z_{G,\beta,\Delta}^h[\mathcal E]:=\sum_{\sigma\in \mathcal E} \exp{\left\{\beta\sum_{xy\in E} \sigma_x\sigma_y +\Delta\sum_{x\in V} \sigma_x^2+\sum_{x\in V} h_x\sigma_x\right\}}.
\]
In the following lemma we study the Lee-Yang zeros of the partition function $Z_G^h[\sigma_A\neq 0]$ with complex magnetic field $h$. The proof is based on an adaptation of \cite{dunlop1977zeros}.
\begin{lem}\label{lem:Lee-Yang}
Let $\beta>0$ and $\Delta\geq -\log4$. Consider a finite graph $G=(V, E)$, and let $A\subset V$. Let $h:V\rightarrow\mathbb{C}$ be a magnetic field such that for each $x\in V$, we have $\Re{(h_x)}\geq |\Im{(h_x)}|$. Then $Z^{h}_{G,\beta,\Delta}[\sigma_A\neq 0]\neq 0$.
\end{lem}
\begin{proof}
Let $\mathcal E=\{\sigma_A\neq 0\}$. We first express $\left|Z_{G,\beta,\Delta}^h[\mathcal E]\right|^2$ as a sum over pairs of configurations:
\begin{equs}
\left|Z_{G,\beta,\Delta}^h[\mathcal E]\right|^2=\sum_{\sigma, \sigma'\in \mathcal E}\exp\left\{\beta \sum_{xy\in E}(\sigma_x\sigma_y + \sigma'_x\sigma'_y) + \Delta \sum_{x\in V} (\sigma_x^2+{\sigma'_x}^2) +\sum_{x\in V} (h_x\sigma_x+\overline{h_x}\sigma'_x)\right\}.
\end{equs}
We now express our duplicated system as follows.
For each vertex $x$ we define $\nu_x:=e^{-i\pi/4}\cdot(\sigma_x+i\sigma'_x)\in \mathbb{C}$. We then have
\[
\sigma_x\sigma_y+\sigma'_x\sigma'_y = \frac{ (\sigma_x+i\sigma'_x)(\sigma_y-i\sigma'_y)+(\sigma_x-i\sigma'_x)(\sigma_y+i\sigma'_y)}{2}= \frac{\nu_x\overline{\nu_y}+\overline{\nu_x}\nu_y}{2},
\]
\[\sigma_x^2+{\sigma'_x}^2 = \left| \nu_x \right|^2,\]
and
\begin{align*}
h_x\sigma_x+\overline{h_x}\sigma'_x &= \Re(h_x)(\sigma_x+\sigma'_x)+i\Im(h_x)(\sigma_x-\sigma'_x)=
\Re(h_x)\frac{\nu_x+\overline{\nu_x}}{\sqrt{2}}+\Im(h_x)\frac{\overline{\nu_x}-\nu_x}{\sqrt{2}}
\\& =
\nu_x\frac{\Re(h_x)-\Im(h_x)}{\sqrt{2}}+\overline{\nu_x}\frac{\Re(h_x)+\Im(h_x)}{\sqrt{2}}.
\end{align*}
Putting this together we obtain
\begin{align*}
\left|Z_{G,\beta,\Delta}^h[\mathcal E]\right|^2 &= \sum_{\nu\in S^{V\setminus A}_{\textup{BC}}\times S_{{\rm Ising}}^A} \left(\prod_{xy\in E} \exp\left\{\beta \nu_x\overline{\nu_y}/2\right\}\cdot\exp\left\{\beta \overline{\nu_x}\nu_y/2\right\}
\right)
\cdot
\left(\prod_{x\in V} e^{\Delta |\nu_x|^2}\right)
\\&
\cdot\left(\prod_{x\in V} \exp\left\{\nu_x\frac{\Re(h_x)-\Im(h_x)}{\sqrt{2}}\right\}\cdot\exp\left\{\overline{\nu_x}\frac{\Re(h_x)+\Im(h_x)}{\sqrt{2}}\right\} \right),
\end{align*}
where the sets $S_{\operatorname{Ising}}$ and $S_{\operatorname{BC}}$ are given by
\[
S_{\operatorname{Ising}}:=\sqrt{2}\cdot \{1, i, -1, -i\},
\qquad
S_{\operatorname{BC}}:=\tfrac{1}{\sqrt{2}}\cdot \{0, 2, 2i, -2, -2i, 1+i, -1+i, -1-i, 1-i\}.
\]
For instance, $\sigma_x=\sigma'_x=1$ gives $\nu_x=e^{-i\pi/4}(1+i)=\sqrt{2}\in S_{\operatorname{Ising}}$, while $\sigma_x=1$, $\sigma'_x=0$ gives $\nu_x=e^{-i\pi/4}=\tfrac{1}{\sqrt{2}}(1-i)\in S_{\operatorname{BC}}$.
Now, since both $\Re{(h_x)}\pm \Im{(h_x)}$ are non-negative, when we expand all exponentials, except for $e^{\Delta|\nu_x|^2}$, as $e^s=\sum s^k/k!$, and exchange summations, we get
\begin{equation}\label{eq:partition-squared-expanded}
\left|Z_{G,\beta,\Delta}^h[\mathcal E]\right|^2 = \sum_{n, n':V\rightarrow \mathbb{Z}_{\geq 0}} f(n, n')\cdot \prod_{x\in V} \left(\sum_{\nu_x} \nu_x^{n_x}\overline{\nu_x}^{n'_x} e^{\Delta |\nu_x|^2}\right),
\end{equation}
where $\nu_x$ ranges over $S_{\operatorname{Ising}}$ for $x\in A$ and over $S_{\operatorname{BC}}$ otherwise, and all factors $f(n, n')$ are non-negative with $f(\bold{0}, \bold{0})=1$. We have that for $x\in A$, the sum is equal to
\begin{equs}
\sum_{\nu_x} \nu_x^{n_x}\overline{\nu_x}^{n'_x} e^{\Delta |\nu_x|^2} = 4\cdot \sqrt{2}^{n_x+n'_x}\cdot e^{2\Delta}\mathbf{{1}}_{\{4|n_x-n'_x\}},
\end{equs}
whereas for $x\in V\setminus A$ we have
\begin{equs}
\sum_{\nu_x} \nu_x^{n_x}\overline{\nu_x}^{n'_x} e^{\Delta |\nu_x|^2} = \mathbf{{1}}_{\{n_x=0,n'_x=0\}}+ \mathbf{{1}}_{\{4|n_x-n'_x\}}\cdot \left[ 4\cdot \sqrt{2}^{n_x+n'_x}\cdot e^{2\Delta}+4e^\Delta\cdot (-1)^{\frac{n_x-n'_x}{4}}\right].
\end{equs}
Since $n_x-n'_x\equiv 4\mod 8$ implies $n_x+n'_x\geq 4$, all these sums are non-negative when $\Delta \geq -\log{4}$: the only possibly negative case is $4\cdot \sqrt{2}^{n_x+n'_x}\cdot e^{2\Delta}-4e^{\Delta}=4e^{\Delta}(\sqrt{2}^{n_x+n'_x}e^{\Delta}-1)$, which is non-negative because $\sqrt{2}^{n_x+n'_x}\geq 4\geq e^{-\Delta}$, and which vanishes precisely when $n_x+n'_x=4$ and $\Delta=-\log 4$. This means that all summands on the right-hand side of \eqref{eq:partition-squared-expanded} are non-negative. Since $f(\bold{0},\bold{0})=1$, the contribution from $(n,n'):=(\bold{0},\bold{0})$ is strictly positive, hence $Z_{G,\beta,\Delta}^h[\mathcal E]\not=0$.
\end{proof}
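The single-site sums above are easy to verify numerically. The following short script (our own sanity check, not part of the proof) builds $S_{\operatorname{Ising}}$ and $S_{\operatorname{BC}}$ directly from the change of variables $\nu_x=e^{-i\pi/4}(\sigma_x+i\sigma'_x)$ and confirms that every single-site sum is a non-negative real at the boundary case $\Delta=-\log 4$.
\begin{verbatim}
import numpy as np
from itertools import product

w = np.exp(-1j * np.pi / 4)
S_ISING = [w * (s + 1j * t) for s, t in product([-1, 1], repeat=2)]
S_BC = [w * (s + 1j * t) for s, t in product([-1, 0, 1], repeat=2)]

def site_sum(values, n, m, delta):
    # sum over nu of nu^n * conj(nu)^m * exp(delta * |nu|^2)
    return sum(v**n * np.conj(v)**m * np.exp(delta * abs(v)**2)
               for v in values)

delta = -np.log(4)  # boundary case
for n, m in product(range(10), repeat=2):
    for values in (S_ISING, S_BC):
        s = site_sum(values, n, m, delta)
        # each sum is (numerically) a non-negative real number
        assert abs(s.imag) < 1e-8 and s.real > -1e-8
\end{verbatim}
Replacing $-\log 4$ by any larger value of $\Delta$ makes every sum with $n+m=4$ and $n-m\equiv 4 \mod 8$ strictly positive, in line with the proof.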
In what follows, we condition on $\sigma_0\neq 0$ and show that the convergence in {\bf (H)} holds for every $\Delta\geq -\log{4}$, using the above lemma. For that we use the monotonicity of the conditional measure in $\varepsilon$ and $G$, which follows from the GKS inequality and the FKG inequality.
\begin{prop}\label{weak+limit}
Let $\beta>0$ and $\Delta\geq -\log{4}$. Then {\bf (H)} is satisfied, i.e.
\begin{equs}
\lim_{n\to\infty} \mu^{+,\varepsilon}_{\Lambda_n,\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]
=
\mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right], \qquad \forall \varepsilon > 0.
\end{equs}
\end{prop}
\begin{proof}
Note that for every $\varepsilon\geq \beta$, $\mu^{+,\varepsilon}_{\Lambda_n}\left[\cdot \mid \sigma_0\neq 0\right]$ stochastically dominates the restriction of $\mu^{+}\left[\cdot \mid \sigma_0\neq 0\right]$ to $\Lambda_n$. On the other hand, the restriction of $\mu^{+,\varepsilon}_{\Lambda_n}\left[\cdot \mid \sigma_0\neq 0\right]$ to $\Lambda_{n-1}$ is stochastically dominated by $\mu^+_{\Lambda_{n-1}}\left[\cdot \mid \sigma_0\neq 0\right]$.
This implies that
\begin{equs}
\lim_{n\to \infty} \mu^{+,\varepsilon}_{\Lambda_n,\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]= \mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]
\end{equs}
for every $\varepsilon\geq \beta$. Hence it suffices to show that $\mu^{+,\varepsilon}_{\Lambda_n,\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right] $ converges to an analytic function of $\varepsilon>0$.
Note that
\begin{equs}\label{w-expansion}
\mu^{+,\varepsilon}_{\Lambda_n,\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]
=
\frac{Z^{+,\varepsilon}_{\Lambda_n,\beta,\Delta}[\sigma_0=1]-Z^{+,\varepsilon}_{\Lambda_n,\beta,\Delta}[\sigma_0=-1]}{Z^{+,\varepsilon}_{\Lambda_n,\beta,\Delta}[\sigma_0=1]+Z^{+,\varepsilon}_{\Lambda_n,\beta,\Delta}[\sigma_0=-1]}
=
\frac{1-W^{\varepsilon}_{\Lambda_n,\beta,\Delta}}{1+W^{\varepsilon}_{\Lambda_n,\beta,\Delta}},
\end{equs}
where
\begin{equs}
W^{\varepsilon}_{\Lambda_n,\beta,\Delta}=\frac{Z^{+,\varepsilon}_{\Lambda_n,\beta,\Delta}[\sigma_0=-1]}{Z^{+,\varepsilon}_{\Lambda_n,\beta,\Delta}[\sigma_0=1]}.
\end{equs}
For $\varepsilon\geq \beta$, $W^{\varepsilon}_{\Lambda_n,\beta,\Delta}$ converges to a constant, i.e.
\begin{equs}\label{w-limit}
\lim_{n\to\infty} W^{\varepsilon}_{\Lambda_n,\beta,\Delta}= \frac{1-\mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]}{1+\mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]}.
\end{equs}
We wish to show that $W^{\varepsilon}_{\Lambda_n,\beta,\Delta}$ remains bounded, uniformly in $n$, for $\varepsilon$ in a complex neighbourhood of $(0,\infty)$. For $h,\eta$ with $\Re(h)> |\Im(h)|$ and $\Re(\eta)> |\Im(\eta)|$, let $\mathbf{h}:\Lambda_n\to \mathbb{C}$ be the function which is defined by
\begin{equs}
\mathbf{h}_x=\eta \mathbf{{1}}_{x=0}+\sum_{\substack{y\in \mathbb{Z}^2\setminus \Lambda_n \\ y\sim x}}h\mathbf{{1}}_{x\neq 0}.
\end{equs}
Then we can write
\begin{equs}
Z^{\mathbf{h}}_{\Lambda_n,\beta,\Delta}[\sigma_0\neq 0]=e^{\eta} Z^{+,h}_{\Lambda_n,\beta,\Delta}[\sigma_0=1]+e^{-\eta} Z^{+,h}_{\Lambda_n,\beta,\Delta}[\sigma_0=-1].
\end{equs}
Since $Z^{\mathbf{h}}_{\Lambda_n,\beta,\Delta}[\sigma_0\neq 0]\neq 0$, it follows that $W^{h}_{\Lambda_n,\beta,\Delta}\neq -e^{2\eta}$.
Note that for any $z\in \mathbb{C}$ with $|z|> e^{2\pi}$ there is some $\eta$ with $\Re(\eta)> |\Im(\eta)|$ such that $z=-e^{2\eta}$, hence
$\left\vert W^{h}_{\Lambda_n,\beta,\Delta}\right\vert \leq e^{2\pi}$.
It follows from \eqref{w-limit} and Vitali's theorem that
\begin{equs}
\lim_{n\to\infty} W^{h}_{\Lambda_n,\beta,\Delta}= \frac{1-\mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]}{1+\mu^+_{\beta,\Delta} \left[\sigma_0 \mid \sigma_0\neq 0\right]}
\end{equs}
for every $h\in \mathbb{C}$ with $\Re(h)> |\Im(h)|$. The desired assertion follows readily from \eqref{w-expansion} by taking the limit as $n$ tends to infinity.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: existence} for $\Delta^+ = -\log 4$]
This is a direct corollary of Propositions \ref{prop: h implies continuity} and \ref{weak+limit}.
\end{proof}
\begin{rem}
We expect the overall strategy above to extend to higher dimensions. Given that the tricritical point of the mean-field Blume-Capel model is $\Delta=-\log{4}$, see \cite{EllisOttoTouchette}, we expect that for any $\Delta<-\log{4}$ there is a sequence of graphs whose Lee-Yang zeros have an accumulation point on the positive real line.
\end{rem}
\section{Subcritical sharpness for dilute random cluster on $\mathbb{Z}^d$}\label{sec: subcrit}
\subsection{Weakly monotonic measures and the OSSS inequality}\label{sec: OSSS}
In general, the dilute random cluster measure is not monotonic, as was already observed in \cite[p.12]{GG}. In this section, we show that the dilute random cluster measure satisfies a weaker notion of monotonicity, once we restrict to certain types of boundary conditions. This allows us to prove an OSSS inequality for the dilute random cluster measure.
Let $\Lambda\subset \mathbb{Z}^d$ be a finite set of vertices, and let $n=|\Lambda|+|E_{\Lambda}|$. We define $D=\Lambda\sqcup E_{\Lambda}$, and we call it a \textit{domain}. We also define $S_{\Lambda}$ to be the set of sequences $(d_1,d_2,\ldots,d_k)$, $k\leq n$, such that each element of $D$ appears at most once in $(d_1,d_2,\ldots,d_k)$, and for every edge $e$ appearing in $(d_1,d_2,\ldots,d_k)$, all the endpoints of $e$ lying in $\Lambda$ appear in $(d_1,d_2,\ldots,d_k)$, and they all precede $e$ (if $e\in \partial_E \Lambda$, then only one endpoint lies in $\Lambda$). We write $U_{\Lambda}$ for the corresponding set of unordered objects, i.e.\ $\{d_1,\ldots,d_k\}$ belongs to $U_{\Lambda}$ whenever each element of $D$ appears at most once, and for every edge $e\in \{d_1,\ldots,d_k\}$, all the endpoints of $e$ lying in $\Lambda$ belong to $\{d_1,\ldots,d_k\}$.
A \textit{decision tree} is a pair $T = (d_1, (\phi_t)_{2\leq t\leq n})$, where $d_1\in D$, and for each $t>1$, the function $\phi_t$
takes a pair
$((d_1,\ldots, d_{t-1}), \eta_{(d_1,\ldots,d_{t-1})})$ as an input, where $d_i\in D$ and $\eta\in \{0,1\}^D$, and returns an element $d_t \in D \setminus \{d_1,\ldots, d_{t-1}\}$. For a comprehensive introduction to decision trees, we refer the reader to \cite{OD}. We call a decision tree \textit{admissible} if $d_1\in \Lambda$ and for every $t>1$, if $(d_1,\ldots,d_{t-1})\in S_{\Lambda}$, then $(d_1,\ldots,d_t)\in S_{\Lambda}$. In other words, an admissible decision tree always starts from a vertex, and queries the state of an edge $e$ only if the endpoints of $e$ lying in $\Lambda$ have been queried at previous steps.
Our result applies beyond the dilute random cluster measure to measures satisfying the following property.
We call a measure $\mu$ on $\{0,1\}^D$ \textit{weakly monotonic} if for every $\{d_1,\ldots,d_k\}\in U_{\Lambda}$ and every $\eta^1,\eta^2\in \{0,1\}^{\{d_1,\ldots,d_k\}}$ such that $\eta^1\leq \eta^2$ pointwise,
\begin{equs}
\mu[\eta_{d_0}=1 \mid \eta^1] \leq \mu[\eta_{d_0}=1 \mid \eta^2]
\end{equs}
for every $d_0\in D$.
For an $n$-tuple $d_{[n]}=(d_1,\dots,d_n)$ and $t\leq n$, write $d_{[t]}=(d_1,\dots,d_t)$ and $\eta_{d_{[t]}}=(\eta_{d_1},\dots,\eta_{d_t})$.
Let $T$ be an admissible decision tree and let $f:\{0,1\}^D\to \mathbb{R}$. Given a pair $(d,\eta)$ produced by $T$, we define
\begin{equs}
\tau_f(\eta)=\tau_{f,T}(\eta):=\min\big\{ t\ge1:\forall \eta'\in\{0,1\}^D,\quad \eta'_{d_{[t]}}=\eta_{d_{[t]}}\Longrightarrow f(\eta)=f(\eta') \big\}.
\end{equs}
We can now state the main technical result of this section.
\begin{thm}[OSSS inequality for weakly monotonic measures]\label{OSSS}
Let $\Lambda\subset \mathbb{Z}^d$ be a finite set of vertices, and let $f:\{0,1\}^D\to [0,1]$ be an increasing function. For any weakly monotonic measure $\mu$ on $\{0,1\}^D$ and any admissible decision tree $T$,
\begin{equs}
\textup{Var}_\mu(f)\leq \sum_{d_0 \in D} \delta_{d_0}(f,T) \textup{Cov}_\mu(f, \eta_{d_0}),
\end{equs}
where $\delta_{d_0}(f,T):=\mu[\exists t\leq \tau_f(\eta): d_t=d_0]$ is the revealment (of $f$) for the decision tree $T$.
\end{thm}
\begin{proof}
The proof is the same as that of Theorem 1.1 in \cite{DCRT}.
\end{proof}
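To illustrate the statement, the following self-contained script checks the inequality exactly in a toy case: $\mu$ a uniform product measure on three bits (in particular weakly monotonic), $f$ the majority function, and the decision tree that queries bit $2$ only when bits $0$ and $1$ disagree. The example and all names are our own illustrative choices.
\begin{verbatim}
from itertools import product

configs = list(product([0, 1], repeat=3))
P = 1.0 / len(configs)                  # uniform product measure
f = lambda x: float(sum(x) >= 2)        # increasing, [0,1]-valued

mean = sum(f(x) * P for x in configs)
var = sum((f(x) - mean) ** 2 * P for x in configs)
cov = [sum((f(x) - mean) * (x[i] - 0.5) * P for x in configs)
       for i in range(3)]
# Revealments: bits 0 and 1 are always queried before f is
# determined; bit 2 is queried only when they disagree (prob. 1/2).
delta = [1.0, 1.0, 0.5]
assert var <= sum(d * c for d, c in zip(delta, cov))  # 1/4 <= 5/16
\end{verbatim}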
The following lemma, which is analogous to \cite[Lemma 3.2]{DCRT}, can be viewed as a sharp threshold result for the event $\{0\longleftrightarrow \partial \Lambda_n\}$.
\begin{lem}\label{cov-bound}
Let $\Lambda\subset \mathbb{Z}^d$ be a finite set containing $0$. For every weakly monotonic measure $\mu$ on $\Lambda$ and every $n\geq 1$, we have
\begin{equs}
\sum_{d\in D} \textup{Cov}_\mu(\mathbf{{1}}_{0\longleftrightarrow \partial \Lambda_n},\eta_d)\geq \frac{n}{4dQ_n}\mu[0\longleftrightarrow\partial \Lambda_n]\left(1-\mu[0\longleftrightarrow\partial \Lambda_n]\right),
\end{equs}
where $Q_n:=\max_{x\in \Lambda_n}\sum_{k=0}^{n-1}\mu[x\longleftrightarrow\partial \Lambda_k(x)]$.
\end{lem}
\begin{proof}
For any $k\in \{1,2,\ldots,n\}$, we wish to construct an admissible decision tree $T=T(k)$ determining $\mathbf{{1}}_{0\longleftrightarrow\partial\Lambda_n}$ such that
for every $u \in \Lambda_n$,
\begin{equs}\label{eq:deltau}
\delta_u(T)\leq \mathbf{{1}}_{u\in \partial \Lambda_k}+\mathbf{{1}}_{u\not\in \partial \Lambda_k} \sum_{\substack{x\in \Lambda_n \\ x\sim u}} \mu[x\longleftrightarrow\partial \Lambda_k]
\end{equs}
and for each edge $e=uv$,
\begin{equs} \label{eq:deltae}
\delta_e(T)\le \mu[u\longleftrightarrow \partial\Lambda_k]+\mu[v\longleftrightarrow \partial\Lambda_k].
\end{equs}
Note that for $u\in \Lambda_n$,
\begin{equs}
\sum_{k=1}^n\mu[u\longleftrightarrow \partial\Lambda_k]
\leq \sum_{k=1}^n \mu[u\longleftrightarrow \partial\Lambda_{|k-\lVert u\rVert_\infty |}(u)] \leq 2Q_n
\end{equs}
from which the desired assertion follows by applying Theorem~\ref{OSSS} for each $T(k)$ with $f=\mathbf{{1}}_{0\longleftrightarrow\partial\Lambda_n}$ and then summing over $k\in\{1,2,\ldots,n\}$.
The decision tree $T$ is defined as follows. We first fix an ordering of the vertices and the edges. Then we explore the state of each vertex of $\partial \Lambda_k$, one at a time, according to the ordering, and once we have explored the state of all vertices in $\partial \Lambda_k$, we explore the state of the edges whose endpoints are both open and belong to $\partial \Lambda_k$. Let $F_0$ be the set of vertices explored thus far, namely $\partial \Lambda_k$, and $V_0$ the set of open vertices in $F_0$. Next, we explore the state of all the vertices $u\in \Lambda_n\setminus F_0$ adjacent to $V_0$, one at a time, according to the ordering, and if $u$ is open, then we explore the state of the edges connecting $u$ to $V_0$, according to the ordering. We let $F_1$ be the union of $F_0$ with the set of vertices $u\in \Lambda_n\setminus F_0$ adjacent to $V_0$, and $E_1$ be the set of open edges explored thus far.
We then proceed inductively. Assuming we have defined $F_t$ and $E_t$ for some $t\geq 1$, we then consider the first vertex $u\in \Lambda_n\setminus F_t$ adjacent to an endpoint of an edge in $E_t$, and if $u$ is open, then we explore the state of the edges connecting $u$ to the endpoints of edges in $E_t$, according to the ordering. We let $F_{t+1}=F_t \cup \{u\}$, and $E_{t+1}$ be the union of $E_t$ with the open edges between $u$ and the endpoints of edges in $E_t$. As long as there is an unexplored vertex adjacent to an endpoint of an edge in $E_{t+1}$, we keep exploring the cluster of $\partial \Lambda_k$. Otherwise, the exploration process stops, in which case the event $\{0\longleftrightarrow \partial \Lambda_n\}$ has been determined, and we can define our decision tree after that point arbitrarily.
It is clear from the construction that $T$ is an admissible decision tree and that \eqref{eq:deltae} and \eqref{eq:deltau} are satisfied. This concludes the proof.
\end{proof}
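The exploration above can be summarised by the following sketch, in which \texttt{vertex\_state} and \texttt{edge\_state} are hypothetical query oracles and the encoding is our own; it illustrates the admissible query order (each edge is queried only after its endpoints) rather than reproducing the tree $T(k)$ verbatim, since, for instance, shell-to-shell edges are queried on the fly instead of in a separate first pass.
\begin{verbatim}
from collections import deque

def explore_shell_cluster(k, n, vertex_state, edge_state):
    # Explore the open cluster of the shell dLambda_k inside
    # Lambda_n.  Any path from 0 to dLambda_n crosses dLambda_k,
    # so {0 <-> dLambda_n} is measurable with respect to the
    # queried information once the exploration stops.
    def norm(v):
        return max(abs(v[0]), abs(v[1]))
    shell = [(x, y) for x in range(-k, k + 1)
             for y in range(-k, k + 1) if norm((x, y)) == k]
    queried = set(shell)                       # F_0
    frontier = deque(v for v in shell if vertex_state(v))
    cluster = set(frontier)                    # V_0
    while frontier:
        u = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dx, u[1] + dy)
            if norm(v) > n or v in cluster:
                continue
            queried.add(v)                      # query the vertex...
            if vertex_state(v) and edge_state(u, v):  # ...then the edge
                cluster.add(v)
                frontier.append(v)
    return cluster, queried
\end{verbatim}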
\subsection{A generalisation of the dilute random cluster}
In this section, we introduce the generalised dilute random cluster measure, which depends on three parameters $p,a,r$ and is defined as the standard dilute random cluster measure, except that we do not require $r=\sqrt{1-p}$. We show that for certain values of $r$, the generalised dilute random cluster measure is weakly monotonic, which together with Theorem~\ref{OSSS} will be used in the proof of subcritical sharpness for the two-parameter dilute random cluster measure.
Given $a\in (0,1)$, $p\in (0,1)$, and $r>0$, we let $\varphi^{\xi}_{\Lambda,p,a,r}$ denote the measure on
$\Lambda$ with boundary condition $\xi$, that is,
\begin{equs}
\varphi^{\xi}_{\Lambda,p,a,r} [\theta] =
\frac{\mathbf{{1}}[\theta \in \Theta_\Lambda^\xi]}{Z^{\xi}_{\Lambda,p,a,r}} r^{|E_{\psi,\Lambda}|} 2^{k(\theta,\Lambda)}
\prod_{x\in V} \left(\frac{a}{1-a}\right)^{\psi_x}
\prod_{e\in E_{\psi,\Lambda}} \left(\frac{p}{1-p}\right)^{\omega_e},
\end{equs}
where $Z^{\xi}_{\Lambda,p,a,r}$ is the normalisation constant.
\begin{prop}\label{weakly}
Let $a,p\in (0,1)$ and $r\geq \frac{2(1-p)}{2-p}$. Then the measure $\varphi^{\xi}_{\Lambda,p,a,r}$ is weakly monotonic.
\end{prop}
\begin{proof}
Let $\{d_1,\ldots,d_k\}\in U_{\Lambda}$ and let $\eta^1,\eta^2\in \{0,1\}^{\{d_1,\ldots,d_k\}}$ be such that $\eta^1\leq \eta^2$. We wish to show that the vertex marginal $\Psi_2:=\Psi^{\xi}_{\Lambda,p,a,r}[\cdot \mid \eta^2]$ stochastically dominates the vertex marginal $\Psi_1:=\Psi^{\xi}_{\Lambda,p,a,r}[\cdot \mid \eta^1]$. Then the desired assertion follows as in the proof of Proposition~\ref{free-closed}.
It suffices to verify \eqref{eq:stoch1} and \eqref{eq:stoch2}. To this end, let $\Lambda'=\Lambda\setminus \{d_1,d_2,\ldots,d_k\}$, and let $x\in \Lambda'$. Then, for every configuration $\psi\in \{0,1\}^{\Lambda'}$,
\begin{equs}
\frac{\Psi_2[\psi^{\{x\}}]}{\Psi_2[\psi_{\{x\}}]}
=r^{N^2_x} \frac{a}{1-a} \frac{Z^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi^{\{x\}}}}
{Z^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi_{\{x\}}}}
\quad \text{and} \quad
\frac{\Psi_1[\psi^{\{x\}}]}{\Psi_1[\psi_{\{x\}}]}
=r^{N^1_x} \frac{a}{1-a} \frac{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}}}{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi_{\{x\}}}},
\end{equs}
where $N_x^i$ is the number of neighbours of $x$ in $V_{\psi}\cup(V_{\xi\cup\eta^i}\setminus \Lambda')$. Define $A(x)$ to be the event that all edges incident to $x$ are closed, and note that
\begin{equs}
\frac{Z^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi^{\{x\}}}}
{Z^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi_{\{x\}}}}= \frac{1}{\phi^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi^{\{x\}}}[A(x)]}
\quad \text{and} \quad
\frac{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}}}{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi_{\{x\}}}}=\frac{1}{\phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}}[A(x)]}.
\end{equs}
It remains to show that
\begin{equs}\label{stoch-ineq}
\phi^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi^{\{x\}}}[A(x)]
\leq
r^{N_x^2-N_x^1} \phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}}[A(x)].
\end{equs}
For this, recall that by the finite energy property for the random cluster model \cite[Theorem 3.1]{GGBook},
\begin{equs}
\phi^{\textup{RC},\xi\cup\eta^2}_{\Lambda,\psi^{\{x\}}}[A(x)]
\leq
c_p^{N_x^2-N_x^1} \phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}}[A(x)],
\end{equs}
where $c_p=\max\left\{1-p,1-\frac{p}{p+2(1-p)}\right\}=\frac{2(1-p)}{2-p}$; indeed, $1-\frac{p}{2-p}=\frac{2(1-p)}{2-p}\geq 1-p$ since $2-p\leq 2$.
Since $r\geq \frac{2(1-p)}{2-p}$, it follows that \eqref{stoch-ineq} holds. Thus we have verified \eqref{eq:stoch1}.
We wish to verify \eqref{eq:stoch2} for $\Psi_1$. It is not hard to see that
\begin{equs}
\frac{\Psi_1\left[\psi^{\{x,y\}}\right]\Psi_1\left[\psi_{\{x,y\}}\right]}{\Psi_1\left[\psi^{\{x\}}_{\{y\}}\right]\Psi_1\left[\psi^{\{y\}}_{\{x\}}\right]}
=
r^{\mathbf{{1}}\{xy\in \mathbb{E}^d\}}\frac{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x,y\}}}}{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}_{\{y\}}}}
\frac{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi_{\{x,y\}}}}{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{y\}}_{\{x\}}}}.
\end{equs}
As above, we have
\begin{equs}
\frac{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x,y\}}}}{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x\}}_{\{y\}}}}
\frac{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi_{\{x,y\}}}}{Z^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{y\}}_{\{x\}}}}=\frac{\phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{y\}}_{\{x\}}}[A(y)]}{\phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x,y\}}}[A(y)]}.
\end{equs}
Let $B(x,y)$ be the event that all edges $xu\in \mathbb{E}^d\setminus\{xy\}$ are closed. By the FKG inequality,
\begin{equs}
\phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x,y\}}}[A(y)]\leq \phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x,y\}}}[A(y) \mid B(x,y)].
\end{equs}
Using the finite energy property, we see that closing $xy$ gives
\begin{equs}
\phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{x,y\}}}[A(y) \mid B(x,y)]\leq \left(\frac{2(1-p)}{2-p}\right)^{\mathbf{{1}}\{xy\in \mathbb{E}^d\}} \phi^{\textup{RC},\xi\cup\eta^1}_{\Lambda,\psi^{\{y\}}_{\{x\}}}[A(y)],
\end{equs}
which completes the proof.
\end{proof}
The following result is a standard calculation similar to the case of the classical random cluster model \cite[Theorem 3.12]{GGBook}.
\begin{prop}
Let $a,p\in (0,1)$ and $r>0$. Let also $\Lambda\subset \mathbb{Z}^d$ be finite. For every event $A$ that is $\Lambda$-measurable, we have
\begin{equs}
\frac{d\varphi^{\xi}_{\Lambda,p,a,r}[A]}{dp}=\frac{1}{p(1-p)}\sum_{e\in E_{\Lambda}} \textup{Cov}(A,\eta_e)
\end{equs}
and
\begin{equs}
\frac{d\varphi^{\xi}_{\Lambda,p,a,r}[A]}{da}=\frac{1}{a(1-a)}\sum_{x\in \Lambda} \textup{Cov}(A,\eta_x).
\end{equs}
\end{prop}
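As a sanity check of the first formula, the following script verifies it by exact enumeration on the smallest non-trivial example, a single edge with free boundary conditions; the configuration encoding and the convention that $k$ counts open clusters are illustrative assumptions of this sketch.
\begin{verbatim}
from itertools import product

def weights(p, a, r):
    # theta = (psi_x, psi_y, omega_e) on a single-edge graph; the
    # edge may be open only if both endpoints are open.
    out = {}
    for px, py, w in product([0, 1], repeat=3):
        if w == 1 and not (px and py):
            continue
        clusters = 1 if w == 1 else px + py  # assumed convention
        wt = (r ** (px * py)) * (2.0 ** clusters) \
            * (a / (1 - a)) ** (px + py) * (p / (1 - p)) ** w
        out[(px, py, w)] = wt
    return out

def phi(p, a, r, f):
    wts = weights(p, a, r)
    Z = sum(wts.values())
    return sum(f(t) * wt for t, wt in wts.items()) / Z

p, a, r, h = 0.4, 0.3, 0.7, 1e-6
A = lambda t: float(t[0] == 1)          # the event {psi_x = 1}
omega_e = lambda t: float(t[2] == 1)
numeric = (phi(p + h, a, r, A) - phi(p - h, a, r, A)) / (2 * h)
cov = phi(p, a, r, lambda t: A(t) * omega_e(t)) \
    - phi(p, a, r, A) * phi(p, a, r, omega_e)
assert abs(numeric - cov / (p * (1 - p))) < 1e-5
\end{verbatim}
The identity does not depend on the cluster-counting convention, since only the edge factor $(p/(1-p))^{\omega_e}$ depends on $p$.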
\subsection{Proof of subcritical sharpness}
The main result of this section is the following.
\begin{thm}\label{sharpness}
Let $a\in (0,1)$. For every $p<p_c(a)$ there exists $c=c(p,a)>0$ such that for every $n\geq 0$,
\begin{equs}
\varphi^1_{\Lambda_n,p,a}[0\longleftrightarrow \partial \Lambda_n]\leq e^{-cn}.
\end{equs}
\end{thm}
Before proving this result, we use it to prove Theorem~\ref{thm: exp dec}.
\begin{proof}[Proof of Theorem~\ref{thm: exp dec} assuming Theorem \ref{sharpness}]
For $x\in \mathbb{Z}^d$ with $\lVert x \rVert_{\infty}=n$, we have
\begin{equs}
\langle \sigma_0 \sigma_x \rangle^+_{\beta,\Delta}\leq \varphi^1_{p,a}[0\longleftrightarrow \partial \Lambda_n] \leq \varphi^1_{\Lambda_n,p,a}[0\longleftrightarrow \partial \Lambda_n],
\end{equs}
by the Edwards-Sokal coupling and \eqref{eq:MON}. The assertion follows from Theorem~\ref{sharpness}.
\end{proof}
We are now ready to prove Theorem~\ref{sharpness}.
\begin{proof}[Proof of Theorem~\ref{sharpness}]
Let $a_0\in (0,1)$ and consider $p_0<p_c(a_0)$. We prove the assertion for the pair $(p_0,a_0)$.
Define $c_0=\frac{p_0(1-a_0)}{(1-p_0)a_0}$. For $n\geq 1$, $p\in (0,1)$, $a(p)=\frac{p}{p+c_0(1-p)}$, and $r_0=\sqrt{1-p_0}$, define
\begin{equs}
\mu_n:=\varphi_{\Lambda_{2n},p,a(p),r_0}^1,\quad \theta_k(p):=\mu_k[0\longleftrightarrow \partial\Lambda_k],\quad \text{and} \quad S_n:=\sum_{k=0}^{n-1}\theta_k.
\end{equs}
Note that there exists $\varepsilon>0$ such that $r_0\geq \frac{2(1-p)}{2-p}$ for every $p\geq p_0-\varepsilon$, hence $\mu_n$ is weakly monotonic for every $p\geq p_0-\varepsilon$ by Proposition~\ref{weakly}.
Note also that $a(p_0)=a_0$ and that, differentiating $a(p)=\frac{p}{p+c_0(1-p)}$ directly,
\begin{equs}
a'(p)=\frac{c_0}{(p+c_0(1-p))^2}=\frac{a(p)(1-a(p))}{p(1-p)},
\end{equs}
hence
\begin{equs}
\theta_k'(p)=&\frac{1}{p(1-p)}\sum_{e\in E_{\Lambda}}\textup{Cov}(\mathbf{{1}}_{0\longleftrightarrow \partial \Lambda_k},\omega_e)+\frac{a'(p)}{a(p)(1-a(p))}\sum_{x\in \Lambda}\textup{Cov}(\mathbf{{1}}_{0\longleftrightarrow \partial \Lambda_k},\psi_x)\\ =& \frac{1}{p(1-p)}\sum_{d\in D}\textup{Cov}(\mathbf{{1}}_{0\longleftrightarrow \partial \Lambda_k},\eta_d)\geq \sum_{d\in D}\textup{Cov}(\mathbf{{1}}_{0\longleftrightarrow \partial \Lambda_k},\eta_d).
\end{equs}
Now, the comparison between boundary conditions together with the fact that, for $k \leq n/2$, $\Lambda_{2k}(x)\subset\Lambda_{2n}$, imply that for $x\in\Lambda_n$,
\begin{equs}
\sum_{k=1}^{n-1}\mu_n[x\longleftrightarrow\partial\Lambda_k(x)]\leq 2\sum_{k\leq n/2}\mu_n[x\longleftrightarrow\partial\Lambda_k(x)] \leq 2\sum_{k\leq n/2}\mu_k[0\longleftrightarrow\partial\Lambda_k]\leq 2S_n.
\end{equs}
By Lemma~\ref{cov-bound}, there exists a constant $t>0$ such that
\begin{equs}
\theta_n'(p)\geq t\frac{n}{S_n}\theta_n(p)
\end{equs}
for $p\leq 1-\varepsilon$, where we used that $1-\theta_n(p)\geq 1-\theta_1(p)$, and the latter is bounded away from $0$. Using Lemma 3.1 in \cite{DCRT} for $f_n=\theta_n/t$, we obtain the existence of $p_1>0$ such that the following holds: for every $p_0-\varepsilon\leq p<p_1$ there exists $c>0$ such that
\begin{equs}
\theta_n(p)\leq e^{-cn} \text{ for every } n\geq 0,
\end{equs}
and for every $p_1<p\leq p_0-\varepsilon$,
\begin{equs}
\varphi^1_{p,a(p),r_0}[0\longleftrightarrow\infty]\geq t(p-p_1).
\end{equs}
Note that $\mu_n[0\longleftrightarrow\partial \Lambda_{2n}]\leq \theta_n(p)$, and $\varphi_{\Lambda_{n},p,a(p),r_0}^1[0\longleftrightarrow\partial \Lambda_{n}]$ is decreasing in $n$, hence to conclude it suffices to show that in the above one could take $p_1>p_0$.
To this end, consider some $p_0<p'<p_c(a_0)$. We wish to show that $\varphi_{p',a}^1$ stochastically dominates $\varphi_{p'',a(p''),r_0}^1$ for some $p''>p_0$. This can be easily done by checking conditional probabilities of the form $\varphi^{\xi}_{\Lambda',p,a,r}[A]$ as in the proof of Proposition~\ref{prop:stochastic-domination-2}, noting that $\varphi^{\xi}_{\Lambda',p,a,r}[A]$ is strictly increasing in $p$ for $r=\sqrt{1-p}$ and fixed $a$, and continuous in $a,p$ for $r=r_0$. It follows that $\varphi_{p'',a(p''),r_0}^1[0\longleftrightarrow\infty]=0$, hence $p_1\geq p''>p_0$, as desired.
\end{proof}
\section{Further analysis of critical behaviour in dimension $2$}\label{sec: further}
In this section, we prove Theorem~\ref{thm: truncated}. The statements {\bf (OffCrit)}, {\bf (DiscontCrit)}, {\bf (ContCrit)}, and {\bf (TriCrit)} are almost direct consequences of the quadrichotomy in Proposition \ref{prop:quad}. We therefore first develop the tools to establish {\bf (Perc)} before writing the proof. Let us introduce the following notions. We say that the $\{0,-\}$ spins percolate if with positive probability there is an infinite connected component $\mathscr{C}$ such that $\sigma_x\in \{0,-\}$ for every $x\in \mathscr{C}$. Define
\begin{equs}
\mathcal L_c
=
\{ (\Delta,\beta_c(\Delta)) \in \mathcal L : \{0,-\} \text{ spins do not percolate under } \mu^0\}.
\end{equs}
\subsection{Werner's argument for continuity}
We start by showing the first direction of {\bf (Perc)}. Let us first recall some standard facts. Consider a finite subset $A$ of $\mathbb{Z}^2$, which induces a connected graph. Let $C$ be the set of vertices $x\in A$ for which there exists an adjacent vertex $y\in \mathbb{Z}^2\setminus A$ such that $y$ can be connected to infinity in $\mathbb{Z}^2\setminus A$. In general, $C$ is not connected in $\mathbb{Z}^2$, but it is a {\it *-connected circuit}. The latter means that $C$ can be represented as a walk $w=(w(0),w(1),\ldots, w(n))$ such that $\lVert w(i)-w(i+1)\rVert_{\infty}=1$, $w(0)=w(n)$, and otherwise $w(i)\neq w(j)$ for $i\neq j$. Given a *-connected circuit $C$ surrounding $0$, we write $C^{\rm int}$ for the union of $C$ with the connected component of $\mathbb{Z}^2\setminus C$ that contains $0$.
We now prove the following fact, which is a modification of Werner's argument for the planar Ising model \cite{W09}.
\begin{prop}\label{prop:0-}
For every $(\Delta, \beta_c(\Delta)) \in \mathcal L_c$, $\langle \sigma_0 \rangle^+_{\beta_c(\Delta),\Delta}=0$.
\end{prop}
\begin{proof}
We fix some $(\Delta,\beta_c(\Delta))\in \mathcal L_c$ and drop it from the notation. Let $\mathcal{A}_n$ be the event that there is no path $\Pi$ connecting $0$ to $\partial \Lambda_n$ such that $\sigma_x\in \{0,-\}$ for every vertex $x\in \Pi$. On this event, there is a *-connected circuit of $\{+\}$ spins surrounding $0$ which is contained in $\Lambda_n$. Let $\mathcal C_n$ denote the outermost such circuit in $\Lambda_n$. Then, for every $n\geq 1$
\begin{equs}
\langle \sigma_0 \rangle^0
&=
\langle \sigma_0 \mathbf{{1}}_{\sigma \notin \mathcal{A}_n} \rangle^0 + \langle \sigma_0 \mathbf{{1}}_{\sigma \in \mathcal{A}_n} \rangle^0
=
\langle \sigma_0 \mathbf{{1}}_{\sigma \notin \mathcal{A}_n} \rangle^0 + \sum_C \langle \sigma_0 \mathbf{{1}}_{\mathcal C_n=C} \rangle^0
\\
&=
\langle \sigma_0 \mathbf{{1}}_{\sigma \notin \mathcal{A}_n} \rangle^0 + \sum_C \langle \sigma_0 \rangle_{C^{\rm int}}^+ \mu^0[\mathcal C_n=C]
\geq
\langle \sigma_0 \mathbf{{1}}_{\sigma \notin \mathcal{A}_n} \rangle^0 + \langle \sigma_0 \rangle^+ \mu^0[\mathcal{A}_n].
\end{equs}
In the second line we have partitioned the event $\mathcal{A}_n$ over all possibilities for $\mathcal C_n$. In the third line we have used the spatial Markov property and the fact that the event $\{\mathcal C_n=C\}$ is measurable with respect to the configuration on $C\cup (\mathbb{Z}^2\setminus C^{\rm int})$. In the final line we have used the monotonicity of the $+$ boundary conditions in the domain.
Taking limits as $n \rightarrow \infty$ and using that $\mu^0[\mathcal{A}_n]$ converges to $1$, we obtain that
$0 =\langle \sigma_0 \rangle^0 \geq\langle \sigma_0 \rangle^+$.
Since $\langle \sigma_0 \rangle^+\geq 0$, the desired assertion follows.
\end{proof}
\subsection{Infinite cluster and crossing probabilities}
Our aim now is to relate the existence of an infinite $\{0,-\}$ cluster with crossing probabilities, from which the desired result will follow. In order to do so, we first need to show that $\mu^0$ is mixing at the critical line. We remark that $\mu^0$ is not mixing when $\varphi^0[0 \longleftrightarrow \infty]>0$, since there is probability $1/2$ that there is an infinite cluster of $\{+\}$ spins, and probability $1/2$ that there is an infinite cluster of $\{-\}$ spins. For this reason, our proof utilises that $\varphi^0[0 \longleftrightarrow \infty]=0$ at the critical line.
Let us introduce the following definition. For every $x\in \mathbb{Z}^2$, we define $\mathscr{C}_x=\{x\}\cup\{y\in \mathbb{Z}^2 : x \text{ is connected to } y \text{ by an open path in } \omega \}$ to be the cluster of $x$ in a configuration $(\psi,\omega)$. Note that $x\in \mathscr{C}_x$ even if $\psi_x=0$.
\begin{lem}\label{lem: varphi0 no inf cluster}
Let $\beta>0, \Delta\in \mathbb{R}$ and let $p=1-e^{-2\beta}$, $a=\frac{2e^{\Delta}}{1+2e^{\Delta}}$. Assume that
$\varphi^0_{p,a}[0 \longleftrightarrow \infty]=0$.
Then $\mu^0_{\beta,\Delta}$ is mixing.
\end{lem}
\begin{proof}
We fix $\beta,\Delta,p$ and $a$ and drop them from the notation. It suffices to show that $$\lim_{x\to\infty}\mu^0[A,\tau_x B]=\mu^0[A] \: \mu^0[B]$$
for all events $A, B$ depending on the spins $\sigma_x$, $x\in \Lambda_k$ for some $k\geq 1$.
To this end, let us define $\mathscr{C}_k=\bigcup_{x\in \Lambda_k}\mathscr{C}_x$. Since $\varphi^0[0 \longleftrightarrow \infty]=0$, $\mathscr{C}_k$ is finite almost surely. Note that $\mathscr{C}_k$ spans a connected subgraph of $\mathbb{Z}^2$, hence almost surely there is a dual circuit surrounding $\Lambda_k$.
Fix $\varepsilon>0$ and let $n>k$ be such that $\varphi^0[\mathcal{C}_{k,n}]\geq 1-\varepsilon$, where $\mathcal{C}_{k,n}$ denotes the event that there is a dual circuit surrounding $\Lambda_k$ which is contained in $\Lambda_n$. For every $x\in \mathbb{Z}^2$, we have
\begin{equs}
\mathbf{P}[A,\mathcal C_{k,n}, \tau_x B,\tau_x\mathcal C_{k,n}]\leq \mu^0[A,\tau_x B]
\leq \mathbf{P}[A,\mathcal C_{k,n}, \tau_x B,\tau_x\mathcal C_{k,n}]+2\varepsilon.
\end{equs}
Consider some $\Vert x \Vert_{\infty}\geq 2n+1$, and a pair of configurations $\theta=(\psi,\omega),\theta'=(\psi',\omega')$ living on $\Lambda_n$ such that $\theta,\theta'\in\mathcal C_{k,n}$. Then
\begin{equs}
\mathbf{P}\left[A,\tau_x B, \theta,\tau_x\theta'\right]=\varphi^0\left[\theta,\tau_x\theta'\right]\mathbf{P}[A\mid\theta] \, \mathbf{P}[B \mid \theta'],
\end{equs}
because conditioned on $\theta$ and $\tau_x \theta'$, the spins on the vertices of $\Lambda_k$ and $\tau_x \Lambda_k$ are assigned independently, and the assignment is measurable with respect to $\theta$ in the first case and with respect to $\theta'$ in the second case, due to the existence of the dual circuits.
By the ergodicity of $\varphi^0$, $\varphi^0[\theta,\tau_x \theta']$ converges to $\varphi^0[\theta] \,\varphi^0[\theta']$ as $\lVert x \rVert_{\infty}$ tends to infinity, hence summing over all possible $\theta$ and $\theta'$ we obtain
\begin{equs}
\lim_{x\to\infty} \mathbf{P}[A,\mathcal C_{k,n}, \tau_x B,\tau_x \mathcal C_{k,n}]= \mathbf{P}[A,\mathcal C_{k,n}] \, \mathbf{P}[B,\mathcal C_{k,n}].
\end{equs}
Sending $n$ to infinity and $\varepsilon$ to $0$, we obtain that $\mathbf{P}[A,\mathcal C_{k,n}]$ converges to $\mu^0[A]$, and $\mathbf{P}[B,\mathcal C_{k,n}]$ converges to $\mu^0[B]$. The desired assertion follows readily.
\end{proof}
As a corollary of the above lemma and Propositions~\ref{cor:3} and \ref{cor:a} we obtain the following.
\begin{cor}\label{free mixing}
Let $\Delta\in \mathbb{R}$. Then $\mu^0_{\beta_c(\Delta),\Delta}$ is mixing.
\end{cor}
Let $\mathcal{N}_{0,-}$ be the number of infinite clusters of $\{0,-\}$ spins. Since $\mu^0$ satisfies the finite energy property and the FKG inequality, a Burton-Keane argument implies the following.
\begin{lem}\label{Burton-Keane}
Let $\Delta\in \mathbb{R}$ and $\beta>0$. Assume that $\mu^0_{\beta,\Delta}$ is ergodic. Then either $\mu^0_{\beta,\Delta}[\mathcal{N}_{0,-}=0]=1$ or $\mu^0_{\beta,\Delta}[\mathcal{N}_{0,-}=1]=1$.
\end{lem}
Let $H^{0,-}_n$ be the event that there is a horizontal crossing $\gamma$ in $\Lambda_n$ such that $\sigma_x\in \{0,-\}$ for every vertex $x\in \gamma$. Define $V^{0,-}_n$, $H^+_n$ and $V^+_n$ analogously.
The following result tells us that when $\mu^0_{\beta,\Delta}$ is ergodic and there is a unique infinite cluster of $\{0,-\}$ spins, crossing probabilities go to $1$. It is an adaptation of Zhang's argument and its proof is given in \cite[Proposition 2.1]{HDC-IP}.
\begin{lem}\label{cross_1}
Let $\Delta\in \mathbb{R}$ and $\beta>0$. Assume that $\mu^0_{\beta,\Delta}[\mathcal{N}_{0,-}= 1]=1$. Then
\begin{equs}
\lim_{n\to\infty}\mu_{\beta,\Delta}^0[H^{0,-}_n]=1.
\end{equs}
\end{lem}
\subsection{Proof of Theorem~\ref{thm: truncated}}
\begin{proof}[Proof of Theorem~\ref{thm: truncated}]
Since $\beta,\Delta$ are fixed, we drop them from the notation. Note that {\bf (TriCrit)} is a standard consequence of the renormalisation inequalities of Lemma \ref{lem: renormalisation inequality for strip densities}. We first establish {\bf (OffCrit)} and {\bf (DiscontCrit)}. For $\beta<\beta_c(\Delta)$, the magnetisation vanishes and hence $\langle \sigma_0 ; \sigma_x\rangle^+=\langle \sigma_0 \sigma_x\rangle^+$. In this case, the assertion follows from Theorem~\ref{thm: exp dec}.
Let $\beta\geq \beta_c(\Delta)$ and $\Delta\in \mathbb{R}$ be such that $\langle \sigma_0 \rangle^+>0$. Recall that $\mathcal{C}_n$ is the event that there is an open circuit in $\omega\cap(\Lambda_{2n}\setminus \Lambda_n)$. Also recall that Propositions~\ref{cor:2}, \ref{cor:3} and Propositions~\ref{dual-circuit}, \ref{sup-circuit} state that there exists $c>0$ such that for every $n\geq 1$,
\begin{equs}\label{eq:ineq}
\varphi^1[\Lambda_n\centernot\longleftrightarrow\infty]\leq e^{-cn} \quad \text{and} \quad \varphi^1[\mathcal{C}_n]\geq 1- e^{-cn}.
\end{equs}
On the one hand, for every $\lVert x \rVert_{\infty}>4n$ we have
\begin{equs}
\langle \sigma_0 \sigma_x \rangle^+ \leq \left(\varphi^1_{\Lambda_{2n}}[0\longleftrightarrow \partial \Lambda_{2n}]\right)^2
\end{equs}
by the Edwards-Sokal coupling and \eqref{eq:MON}.
On the other hand, when each of the events $\mathcal{C}_n$, $\{0\longleftrightarrow \mathcal{S}\}$ and $\{\Lambda_n\longleftrightarrow\infty\}$ happens, where $\mathcal{S}$ denotes the outermost circuit in $\omega\cap (\Lambda_{2n}\setminus \Lambda_{n})$, $0$ is connected to infinity. Hence,
\begin{equs}
\langle \sigma_0 \rangle^+=& \varphi^1[0\longleftrightarrow \infty]\geq \varphi^1[\mathcal{C}_n, 0\longleftrightarrow \mathcal{S}, \Lambda_n\longleftrightarrow\infty] \\
\geq& (1-e^{-cn})\varphi^1[0\longleftrightarrow \mathcal{S} \mid \mathcal{C}_n]-e^{-cn} \\ \geq& (1-e^{-cn})\varphi^1_{\Lambda_{2n}}[0\longleftrightarrow \partial \Lambda_{2n}]-e^{-cn}
\end{equs}
by \eqref{eq:ineq}, \eqref{eq:DMPRC} and \eqref{eq:MON}. The desired assertion for $\beta\geq \beta_c(\Delta)$ such that $\langle \sigma_0 \rangle^+>0$ follows.
We now establish {\bf (ContCrit)}. Let $\beta=\beta_c(\Delta)$ and $\langle \sigma_0 \rangle^+=0$. For the upper bound, we use Proposition~\ref{prop: one arm}. For the lower bound, if $\lVert x \rVert_{\infty}=2n$, then one way for $0$ to be connected to $x$ is if both $\mathcal{C}_n$ and $x+\mathcal{C}_n$ happen, $0$ is connected to $\partial \Lambda_{2n}$, and $x$ is connected to $x+\partial \Lambda_{2n}$. Then the assertion follows from Propositions~\ref{prop:quad} and \ref{prop: one arm} and the FKG inequality.
Finally, we prove {\bf (Perc)}. The reverse implication follows from Proposition~\ref{prop:0-}. For the forward implication, let us assume that $\langle \sigma_0 \rangle^+=0$. Then {\bf (ContCrit)} for the dilute random cluster model (i.e.\ in the sense of Proposition \ref{prop:quad}) occurs at $p=p_c(a)$ by Proposition~\ref{correspondence}.
It follows from the monotonicity in the domain and the $\pi/2$ rotational symmetry that there exists $c>0$ such that for every $n\geq 1$, $\varphi^0[V_n]\geq c$.
Since each cluster is coloured with $\{+\}$ spins with probability $1/2$, a vertical crossing receives $\{+\}$ spins with probability at least $1/2$, and it follows that for every $n\geq 1$,
$\mu^0[V^+_n]\geq c/2$.
Note that when $V^+_n$ happens, the event $H^{0,-}_n$ does not happen, hence
\begin{equs}
\mu^0[H^{0,-}_n]\leq 1-\mu^0[V^+_n]\leq 1- c/2.
\end{equs}
If the $\{0,-\}$ spins percolated under $\mu^0$, then by Corollary~\ref{free mixing} and Lemma~\ref{Burton-Keane} there would almost surely be a unique infinite $\{0,-\}$ cluster, and Lemma~\ref{cross_1} would give $\mu^0[H^{0,-}_n]\to 1$, contradicting the above display. Hence the $\{0,-\}$ spins do not percolate under $\mu^0$, as desired.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
Hawking's original paper \cite{hawking1975particle} showed that when quantum effects are considered, black holes radiate a thermal flux of particles. Despite multiple derivations, the physical understanding of Hawking radiation is far from complete; proposed mechanisms include tidal forces on virtual particle-antiparticle pairs analogous to pair creation in an electric field, the splitting of entangled modes as the horizon forms, and quantum tunnelling through the horizon \cite{BROUT1995329,universalitybh}.
\\
Since the Hawking radiation is in a sense universal, independent of details of the collapse phase of matter in the formation of the black hole, it should be possible to understand some of its features by constructing physical models. One such model is the Unruh effect, introduced in \cite{unruh1976notes} and \cite{davies1975scalar}. Essentially the idea is to exploit the analogy between a constantly accelerating observer, whose worldline is confined to the Rindler wedge of Minkowski spacetime, and a near-horizon observer in Schwarzschild spacetime at constant radial distance. It is claimed that both detect radiation with a blackbody spectrum at a temperature proportional to the acceleration of the observer as measured at infinity. Grove \cite{grove1986inertial} was the first to object to this interpretation and suggested instead that the accelerating oscillator emits negative energy with respect to the Minkowski vacuum, which balances out the positive energy emitted by the oscillator as it makes a downward transition, and hence overall there is no net energy flux in the Minkowski vacuum, thereby breaking the analogy.
\\
This argument was extended further in \cite{raine1991does} and \cite{ford2006there}. These authors also consider a quantum oscillator uniformly accelerating in Minkowski spacetime. To operationalise the meaning of radiation in this context they look at the excitation of a distant inertial detector. The authors show that the second order fluctuations induced in the field at the detector by the in-falling oscillator balance the first order perturbation exactly and the detector would therefore register no radiation. The result arises because the oscillator and detector are coupled to the same vacuum field. To be clear, in any start-up phase the accelerated oscillator will emit radiation as it comes into equilibrium with the ambient vacuum, but there is no radiation in the steady state.
\\
It is of obvious interest to extend this model to in-falling quantum systems in a true black hole vacuum. In \cite{scully2018quantum}, the authors examine the transient from a two-level atom falling in the Boulware vacuum. They find a blackbody spectrum arising from the initial response of the atom as a result of the detailed form of the time-dependence along the in-falling trajectory. Because no account is taken of the reaction back on the atom as it decays, the use of first order perturbation theory here is valid on a timescale less than the decay time of an excited state \cite{louisell73quantum}.
\\
In this paper we investigate a model oscillator falling freely into a black hole as it attempts to come into equilibrium with the ambient vacuum of a scalar field. In 1+1 dimensions we can solve this problem exactly. We use Painlev\'{e}-Gullstrand coordinates (as in \cite{schutzhold2001hawking}, \cite{PhysRevLett.85.5042}, \cite{kanai2012hawking}), with a regular future horizon and future region interior to the horizon, but with an incomplete manifold in the past. The coordinates have the useful feature of a global time which is also the proper time of the in-falling oscillator. This allows us to define a global vacuum based on the incoming and outgoing modes, with positive and negative frequencies defined everywhere with respect to the proper time. There is no collapse phase, but the metric is not time-symmetric. We find that in this vacuum the difference between the ingoing and outgoing modes leads to blackbody radiation from the oscillator beyond any contribution from the initial transient. In contrast in the Boulware vacuum, unsurprisingly because it is static, there is no influence on a distant inertial detector beyond the initial transient.
\\
The difference between this case and that of the constantly accelerated atom arises from the lack of time-symmetry. Even though the exterior region is static, the outgoing and ingoing wave modes differ. The infalling atom is sensitive to this difference in that it transforms the ingoing modes into a complex mixture of positive and negative frequency outgoing modes. In this respect it acts like reflection in the origin in the original Hawking calculation. Nevertheless, since the flux at infinity depends on the coupling constant between the atom and the field, this radiation is not the Hawking radiation. Indeed, there is no true Hawking radiation in this set-up.
\\
The model does however possess some interesting features and possibly tantalising hints. First the radiation here is not the result of the Unruh effect. Indeed we can isolate the ``Unruh'' terms in the oscillator Hamiltonian -- those corresponding to excitation of the oscillator accompanied by emission of a quantum of the scalar field -- and show that these are a factor (oscillator period)/(decay time) smaller than the dominant (energy conserving) terms.
\\
We note that the radiation comes from the vicinity of the black hole, but not from within a radial distance from the hole of order $c/\omega$ where $\omega$ is the natural frequency of the oscillator, hence from a distance much greater than the Planck length. Thus, there is no trans-Planckian problem in this model.
\\
We show that as well as emitting to infinity, the infalling oscillator emits negative energy into the black hole. The Hawking radiation proper arises only when we consider the collapse phase in the formation of a black hole. But this phase is nothing other than the in-fall of a collection of radiating atoms (or oscillators). Thus, the infalling matter emits negative energy into the (forming) black hole. In the Hawking picture the negative energy flux into the hole accompanies the positive energy flux to infinity: these are two parts of the same process. For the infalling atom the ingoing flux perturbs the hole in addition to accompanying the outgoing radiation. The situation is therefore similar to the familiar ``burning paper'' \cite{preskill1992black}.
\\
The energy going into the hole will perturb the hole and induce it to emit further radiation which will be correlated with the outgoing flux from the infalling matter. If this all happens on an infall timescale, or on the relaxation timescale of the event horizon, then this is a transient from the collapse that just happens to have a blackbody form at the Hawking temperature and has little directly to do with Hawking radiation. However, it is intriguing to consider that the information paradox might be resolved if black hole emission is a two-(real)-photon process, linking the emission from the collapsing matter with the evaporating horizon. To decide we need a more detailed model of collapse and evaporation that will allow us to calculate the relaxation time in the presence of the external radiation.
\section{Gravitational Collapse in Painlev\'{e}-Gullstrand}
We consider a quantum oscillator falling into a Schwarzschild black hole in $1+1$ dimensions. The Painlev\'{e}-Gullstrand coordinate system provides a convenient framework since the coordinates are regular in the exterior region and across the future horizon; the time coordinate is also conveniently the proper time of the infalling oscillator. We adopt natural units: $\hbar=G=c=1$.
\\
In terms of Schwarzschild coordinates $(t_s, r)$ the Painlev\'{e}-Gullstrand time $\tau$ is given by $\tau =t_s -h(r)$, where the function $h(r)$ is obtained from
\begin{equation}
\label{PG-a}
\frac{dh}{dr} = \mp \left(1-\frac{2M}{|r|} \right)^{-1}\sqrt{\frac{2M}{|r|}}.
\end{equation}
In $1+1$ dimensions the spatial coordinate $r$ ranges over $-\infty < r < +\infty$ but we shall be concerned with only the region $r>0$, so we assume that $r$ is positive throughout. Adopting these coordinates, the 1+1 metric for a black hole of mass $M$ becomes
\begin{equation}
-ds^2=-d\tau^2+\left(dr + \sqrt{\frac{2M}{r}} d\tau\right)^2.
\end{equation}
The equation of motion of a particle falling freely from rest at infinity is {\cite{kanai2012hawking}}
\begin{equation}
r(\tau)=\left(\frac{9M}{2}(-\tau)^2\right)^{1/3},
\end{equation}
where, writing $\tau_s =4M/3$, the exterior region corresponds to $-\infty < \tau \leq - \tau_s $, and the black hole interior to $-\tau_s < \tau <0$.
As in \cite{raine1991does} in order to operationalise the existence of radiation from the infalling oscillator, we place a detector on a world line $r = $constant, at some large distance from the event horizon. The Penrose-Carter diagram showing this scenario is given in figure \ref{Penrose-Carter}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{1D-PG_Penrose_diagram.eps}
\caption{The Penrose-Carter diagram showing the oscillator on a free-fall trajectory in Painlev\'{e}-Gullstrand coordinates.
The detector remains outside the black hole.}
\label{Penrose-Carter}
\end{figure}
The relationship between Schwarzschild time $t_s$ and Painlev\'{e}-Gullstrand coordinates $(\tau,r)$ is:
\begin{equation}
dt_s=d\tau-\frac{\sqrt{2M/r}}{1-2M/r}dr.
\end{equation}
In terms of the usual tortoise coordinate
\begin{equation}
r_*=r+2M\ln\left|\frac{r}{2M}-1 \right|,
\end{equation}
for $r > 0$ we define the \emph{out-going} null coordinates
\begin{equation}
u=t_s-r_*=\tau-\xi(r),
\end{equation}
with
\begin{equation}
\xi(r)=r+2\sqrt{2Mr}+4M\ln\left(\sqrt{\frac{r}{2M}}-1 \right),
\end{equation}
and similarly, the \emph{in-going} null coordinate for $r > 0$,
\begin{equation}
v=t_s+r_*=\tau+\eta(r),
\end{equation}
with
\begin{equation}
\eta(r)=r-2\sqrt{2Mr}+4M\ln\left(\sqrt{\frac{r}{2M}} + 1\right).
\end{equation}
We couple the oscillator to a massless scalar field $\Phi$ satisfying the Klein-Gordon equation,
\begin{equation}
\Box\Phi=0.
\end{equation}
In P-G coordinates in 1+1 dimensions this is
\begin{equation}
(1-f)^{-1}\left[\frac{\partial}{\partial t} + (1-f) \frac{\partial}{\partial r} \right]\left[ (1-f) \frac{\partial}{\partial t} - (1- f^2) \frac{\partial}{\partial r}\right]\Phi = 0
\end{equation}
where $f= \sqrt{2M/r}$.
In the right hand wedge the outgoing field can be expanded in terms of out-going modes $e^{\pm i k u}$
\begin{equation}
\phi_{\mathrm{out}}=\sum_{k\geq 0}e_k \left\{b^{(0)}_k e^{-ik(t-\xi(r))}+b^{(0) \dagger}_{k}e^{ik(t-\xi(r))}\right\}
\end{equation}
with the usual Klein-Gordon normalisation in a box of length $L$, $e_k = \sqrt{\frac{2\pi}{kL}}.$
\noindent Similarly we can express the field in terms of in-going modes $ e^{ \pm i k v}$ in the wedge $r>0$ as
\begin{equation}
\label{Phi in}
\phi_{\mathrm{in}}=\sum_{k\geq 0}e_k\left\{b^{(0)}_{-k}e^{-ik(t+\eta(r))}+b^{(0) \dagger}_{-k}e^{ik(t+\eta(r))}\right\}.
\end{equation}
\noindent In general, the full solution to the Klein-Gordon equation will be a linear combination of the ingoing and
outgoing modes:
\begin{equation}
\Phi=\phi_{\mathrm{in}}+\phi_{\mathrm{out}}.
\end{equation}
We define the vacuum state $|0\rangle$ by
\begin{equation}
b^{(0)}_k|0\rangle = 0 \quad \text{and} \quad b^{(0)}_{-k}|0\rangle=0.
\end{equation}
We shall need the Fourier transforms of these modes as evaluated along the worldline of the infalling oscillator. We define
\begin{equation}
e^{ik(\tau-\xi(\tau))}=\left\{\begin{array}{l l}\displaystyle{\int_{-\infty}^\infty} \alpha_k(k') e^{-ik'\tau} \ dk' & \quad \tau < - \tau_s \\
0 & \quad \tau > - \tau_s\\
\end{array} \right.
\label{alpha}
\end{equation}
and
\begin{equation}
e^{ik(\tau+\eta(\tau))}=\left\{\begin{array}{l l} \displaystyle{\int_{-\infty}^\infty} \beta_k(k')e^{-ik'\tau}\ dk' & \quad \tau < - \tau_s \\
0 & \quad \tau > - \tau_s.\\
\end{array} \right.
\label{beta}
\end{equation}
\noindent Approximate expressions for the Fourier components $\alpha_k(k')$ and $\beta_k(k')$ are obtained in appendix 1. We find
\begin{align}
\alpha_k(k')&= \frac{2M}{\pi}ke^{- 2 \pi Mk}e^{-iM(22k+4k')/3}(4M)^{4iMk}(3k+k')^{4iMk-1}\Gamma(-4iMk)\label{Alpha1}\\
\beta_k(k') &=2 \sqrt{\frac{2 M}{\pi|k|}}e^{i\pi /4}\exp\left\{ iMk\left[-\frac{10}{3}+4 \ln 2\right] +ik'\tau_s-\frac{2iM}{k}(k+2k')^2 \right\} . \label{beta1}
\end{align}
If $3k + k'$ is real, then the branch in the imaginary power is defined by $3k+k' = e^{- i \pi} |3k+k'|$ for $3k+k' < 0.$
\section{The Quantum Langevin Equation}
We now derive the equation of motion for the oscillator. We take a quantum harmonic oscillator of mass $m$ and natural
frequency $\omega$ confined to a free-fall worldline in $r > 0$ with proper time $\tau$. We couple this to a massless scalar field $\Phi$ with a scalar-electrodynamic form for the interaction. Our Hamiltonian is therefore
\begin{equation}
\begin{split}
\mathscr H =\omega a^\dagger a+&\sum_{k>0} \omega_k b^\dagger_kb_k + \sum_{k>0} \omega_k b^\dagger_{-k}b_{-k}\\
&+ig\sqrt{\frac{\omega}{2m}} \sum_{k>0} e_k(a^\dagger-a)(b_ke^{ik\xi}+b^\dagger_ke^{-ik\xi}+b_{-k}e^{-ik\eta}+b^\dagger_{-k}e^{ik\eta})
\end{split}
\label{hamiltonian}
\end{equation}
where $a^\dagger$ is the creation operator for the quantum oscillator, $\omega_k = |k|$, and $g$ is a coupling constant.
\\
We do not make the rotating wave approximation at this point \cite{louisell73quantum} to remove the products that pair creation operators of the field with those of the oscillator (and similarly pairings of annihilation operators), both because keeping them here makes the calculation slightly easier and for comparison with the Unruh radiation in Scully et al. \cite{scully2018quantum} later. We shall impose the rotating wave approximation appropriately below.
\\
We now use Heisenberg's equation of motion to determine the evolution of the oscillator and the scalar field.
From
\begin{equation}
\frac{da}{d\tau}=-i[a,\mathscr H],
\end{equation}
and putting $\lambda = g \sqrt{\omega / 2m} $ for brevity, we obtain the equation of motion for the oscillator
\begin{equation}
\frac{da}{d\tau}=-i\omega a +\lambda \sum_k e_k(b_ke^{ik\xi}+b^\dagger_ke^{-ik\xi}+b_{-k}e^{-ik\eta}+b^\dagger_{-k}e^{ik\eta}).
\label{osc1}
\end{equation}
Similarly, for the scalar field:
\begin{equation}
\frac{db_j}{d\tau}=-i[b_j,\mathscr H],
\end{equation}
from which we obtain
\begin{equation}
\frac{db_k}{d\tau}=-i\omega_k b_k+\lambda e_k(a^\dagger-a)e^{-ik\xi} \ \ \mathrm{and} \ \
\frac{db_{-k}}{d\tau}=-i\omega_k b_{-k}+\lambda e_k(a^\dagger-a)e^{ik\eta}.
\label{field1}
\end{equation}
\noindent We can solve (\ref{field1}) to obtain expressions for the scalar field operators:
\begin{equation}
b_k(\tau)=e^{-ik\tau}b_k^{(0)}+\lambda e^{-ik\tau}e_k\int_{-\infty}^\tau (a^\dagger-a)e^{ik(\tau'-\xi(\tau'))}\ d\tau'
\label{bk}
\end{equation}
and
\begin{equation}
b_{-k}(\tau)=e^{-ik\tau}b_{-k}^{(0)}+\lambda e^{-ik\tau}e_k\int_{-\infty}^\tau (a^\dagger-a)e^{ik(\tau'+\eta(\tau'))}\ d\tau'.
\label{b-k}
\end{equation}
We assume that the interaction is switched on at some distant time $\tau > -\infty$ in the past when $\lambda \rightarrow 0$ and $b_k = b_k^{(0)}.$
\\
We now go on to use (\ref{bk}) and (\ref{b-k}) to derive an expression for the position operator for our oscillator. Direct
substitution into (\ref{osc1}) gives:
\begin{equation}
\begin{split}
\frac{da}{d\tau}=&-i\omega a+\mathcal G_a +\lambda^2\sum_k e_k^2 \bigg\{ \\
&e^{-ik(\tau-\xi)}\int_{-\infty}^\tau(a^\dagger-a)e^{ik(\tau'-\xi')}\ d\tau'
-e^{ik(\tau-\xi)}\int_{-\infty}^\tau(a^\dagger-a)e^{-ik(\tau'-\xi')}\ d\tau'\\
&+e^{-ik(\tau+\eta)}\int_{-\infty}^\tau(a^\dagger-a)e^{ik(\tau'+\eta')}\ d\tau'
-e^{ik(\tau+\eta)}\int_{-\infty}^\tau(a^\dagger-a)e^{-ik(\tau'+\eta')}\ d\tau'\bigg\}
\end{split}
\label{osc2}
\end{equation}
with the function
\begin{equation}
\mathcal G_a(\tau)=\lambda \sum_k e_k\{ b_{k}^{(0)}e^{-ik(\tau-\xi)}+b^{(0) \dagger}_{k}e^{ik(\tau-\xi)}
+b_{-k}^{(0)}e^{-ik(\tau+\eta)}+b^{(0) \dagger}_{-k}e^{ik(\tau+\eta)}\}.
\label{Ga}
\end{equation}
We now remove high frequency behaviour by setting
\begin{equation}
a(\tau)=e^{-i\omega \tau}A(\tau).
\label{A}
\end{equation}
Using this and the Fourier transforms of (\ref{alpha}) and (\ref{beta}) in (\ref{osc2}) gives:
\begin{equation}
\begin{split}
\frac{dA}{d\tau}=&\mathcal G_A+\lambda^2\sum_k e_k^2 e^{i\omega\tau} \bigg\{ \\
&\int_{-\infty}^\infty \alpha_k^*(k'')e^{ik''\tau}\ dk''\int_{-\infty}^\infty\alpha_k(k')\ dk' \int_{-\infty}^\tau\left(
e^{i(\omega-k')\tau'}A^\dagger-e^{-i(\omega+k')\tau' }A\right)\ d\tau' \\
&-\int_{-\infty}^\infty \alpha_k(k')e^{-ik'\tau}\ dk'\int_{-\infty}^\infty\alpha_k^*(k'')\ dk'' \int_{-\infty}^\tau\left(
e^{i(\omega+k'')\tau'}A^\dagger-e^{-i(\omega-k'')\tau' }A\right)\ d\tau' \\
&+\int_{-\infty}^\infty \beta_k^*(k'')e^{ik''\tau}\ dk''\int_{-\infty}^\infty\beta_k(k')\ dk' \int_{-\infty}^\tau\left(
e^{i(\omega-k')\tau'}A^\dagger-e^{-i(\omega+k')\tau' }A\right)\ d\tau' \\
&-\int_{-\infty}^\infty \beta_k(k')e^{-ik'\tau}\ dk'\int_{-\infty}^\infty\beta_k^*(k'')\ dk'' \int_{-\infty}^\tau\left(
e^{i(\omega+k'')\tau'}A^\dagger-e^{-i(\omega-k'')\tau' }A\right)\ d\tau' \bigg\}.
\end{split}
\label{osc-3a}
\end{equation}
The rotating wave approximation now amounts to neglecting the $A^\dagger$ terms since these oscillate like $e^{2i\omega \tau}$. We could subsequently include these terms in an iterative solution, which would amount to taking into account the higher energy levels of the oscillator in the line profile, but this is not crucial for our discussion. (In effect we are treating the oscillator as a two-level atom.)
\\
Next we perform the $\tau'$ integrations using integration by parts:
\begin{equation}
\int_{-\infty}^\tau e^{-i(\omega+k')\tau'}A(\tau')d\tau' =
\left[\frac{i A(\tau')e^{-i(\omega+k')\tau'}}{\omega+k'} \right]_{-\infty}^\tau-i\int_{-\infty}^\tau
\frac{dA}{d\tau'}\frac{e^{-i(\omega+k')\tau'}}{\omega+k'}\ d\tau'
\label{IBP}
\end{equation}
We can neglect
the final (integral) term in (\ref{IBP}) because, from (\ref{osc-3a}), it contributes a correction of order $\lambda^2$ to $\mathcal G_A $ and of order $\lambda^4$ to $dA/d \tau$. This method is equivalent to solving equation (\ref{osc-3a}) via a Laplace transform as in Louisell \cite{louisell73quantum}.
(We demonstrate this equivalence in appendix 5.) The contribution from $\tau' \rightarrow -\infty$ vanishes under our assumption that the interaction is switched off at early times. We are left with
\begin{equation}
\begin{split}
\frac{dA}{d\tau} &= \mathcal G_A+i\lambda^2 A(\tau)\sum_k e_k^2 \bigg\{ \\
&-\int_{-\infty}^\infty \alpha_k^*(k'')e^{ik''\tau}dk'' \int_{-\infty}^\infty \frac{\alpha_k(k')e^{-ik'\tau}dk'}{\omega+k'}
+\int_{-\infty}^\infty \alpha_k(k')e^{-ik'\tau}dk' \int_{-\infty}^\infty \frac{\alpha_k^*(k'')e^{ik''\tau}dk''}{\omega-k''}\\
&-\int_{-\infty}^\infty \beta_k^*(k'')e^{ik''\tau}dk'' \int_{-\infty}^\infty \frac{\beta_k(k')e^{-ik'\tau}dk'}{\omega+k'}
+\int_{-\infty}^\infty \beta_k(k')e^{-ik'\tau}dk' \int_{-\infty}^\infty \frac{\beta_k^*(k'')e^{ik''\tau}dk''}{\omega-k''} \bigg\}.
\end{split}
\label{osc3}
\end{equation}
This expression can be simplified by interchanging $k'$ and $k''$ in the second integrals on each of the last two lines of equation (\ref{osc3}) and using $\alpha_k(k') = \alpha_{-k}^{*}(-k')$ with a similar relation for $\beta_k(k')$ to combine the integrals into a sum over $k$, $-\infty < k < \infty$. Finally, writing the sum over $k$ as an integral (i.e. letting $L \rightarrow \infty$) we get
\begin{equation}
\begin{split}
\frac{dA}{d\tau} &= \mathcal G_A \\
& -i\lambda^2 A(\tau)\int_{-\infty}^{\infty} \frac{dk}{k}\int_{-\infty}^{\infty}dk' \int_{-\infty}^{\infty}dk'' \left[ \alpha_{k}^{*}(k'') \alpha_k(k') + \beta^{*}_k(k'') \beta_k(k') \right] \frac{e^{i(k''-k')\tau}}{\omega+k'}.
\end{split}
\label{q-lang-aa-bb}
\end{equation}
Defining ${\mathcal G_A} = e^{i\omega \tau}\mathcal G_a$, we write the quantum Langevin equation for the oscillator as
\begin{equation}
\frac{dA}{d\tau}=-\left(\frac{\gamma}{2}+ i\Delta \omega\right)A(\tau)+\mathcal G_A,
\label{q-langevin}
\end{equation}
with the friction constant $\gamma$ and the Lamb shift $\Delta \omega$ implicitly given by (\ref{q-lang-aa-bb}).
\\
We show in appendix 3 that
\begin{equation}
\gamma = \frac{ \pi g^2}{m}.
\label{gamma1}
\end{equation}
\noindent The evaluation of the Lamb shift is more subtle and will be pursued elsewhere. Here we simply incorporate it into the definition of $\omega$.
Solving (\ref{q-langevin}) we get
\begin{equation}
A(\tau)=e^{-\gamma\tau/2}\int_{-\infty}^\tau e^{\gamma\tau'/2}\mathcal G_A(\tau') \ d\tau'
\end{equation}
where we have ignored the initial value of $A$ since this gives rise to a transient signal far from the black hole and is consequently of no interest here.
Thus,
\begin{equation}
\begin{split}
A(\tau)& =\\
&\lambda e^{-\gamma \tau/2} \sum_k e_k\left\{
b_{k}^{(0)}\int_{-\infty}^\tau e^{i(\omega-i\gamma/2)\tau'}e^{-ik(\tau'-\xi(\tau'))}d\tau'+b^{(0) \dagger}_{k}\int_{-\infty}^{\tau}
e^{i(\omega-i\gamma/2)\tau'}e^{ik(\tau'-\xi(\tau'))}\ d\tau'\right.\\
&\left. +b_{-k}^{(0)}\int_{-\infty}^\tau e^{i(\omega-i\gamma/2)\tau'}e^{-ik(\tau'+\eta(\tau'))}d\tau'+b^{(0) \dagger}_{-k}\int_{-\infty}^{\tau}
e^{i(\omega-i\gamma/2)\tau'}e^{ik(\tau'+\eta(\tau'))}\ d\tau'\right\}.
\end{split}
\end{equation}
Using the Fourier transforms (\ref{alpha}) and (\ref{beta}) gives:
\begin{equation}
\begin{split}
A(\tau)&=\lambda e^{-\gamma \tau/2} \sum_k e_k \bigg\{ \\
&b_{k}^{(0)}\int_{-\infty}^\infty\int_{-\infty}^\tau e^{i(\omega+k'-i\gamma/2)\tau'}\alpha_k^*(k')d\tau'dk'+
b^{(0) \dagger}_k\int_{-\infty}^\infty\int_{-\infty}^\tau e^{i(\omega-k'-i\gamma/2)\tau'}\alpha_k(k')d\tau'dk'\\
&+b_{-k}^{(0)}\int_{-\infty}^\infty\int_{-\infty}^\tau e^{i(\omega+k'-i\gamma/2)\tau'}\beta_k^*(k')d\tau'dk'+
\left.b^{(0) \dagger}_{-k}\int_{-\infty}^\infty\int_{-\infty}^\tau e^{i(\omega-k'-i\gamma/2)\tau'}\beta_k(k')d\tau'dk' \right\}.
\end{split}
\end{equation}
We define the oscillator susceptibility
\begin{equation}
\chi(k')=\frac{1}{\omega+k'-i\gamma/2}.
\label{chi}
\end{equation}
Performing the $\tau'$ integrations we obtain
\begin{equation}
\begin{split}
A(\tau)&=\\
& -i\lambda e^{i\omega \tau}\sum_k e_k \left\{
b_k^{(0)}\int_{-\infty}^\infty e^{ik'\tau} \alpha_k^*(k')\chi(k')dk'+b^{(0) \dagger}_k\int_{-\infty}^\infty e^{-ik'\tau}\alpha_k(k')\chi(-k')dk'\right.\\
&\left.+b_{-k}^{(0)}\int_{-\infty}^\infty e^{ik'\tau}\beta_{k}^*(k')\chi(k')dk'+b^{(0) \dagger}_{-k}\int_{-\infty}^\infty e^{-ik'\tau}\beta_k(k')\chi(-k')dk'
\right\}.
\end{split}
\end{equation}
In principle, this would allow us to determine how the oscillator comes into equilibrium with its local environment at any distance from the black hole. However, we know the $k'$-dependence of $\alpha_{k}(k')$ and $\beta_{k}(k')$ only on the assumption that $\tau \lesssim -\tau_s$.
\\
We now have everything we require to determine the position operator of the oscillator
\begin{equation}
q=\frac{1}{\sqrt{2\omega m}}(a^\dagger+a)
\end{equation}
near the black hole.
Using $A(\tau)=e^{i\omega\tau}a(\tau)$, and writing
\begin{equation}
\Delta\chi(k') = \chi^*(k') - \chi(-k'),
\label{Delta chi}
\end{equation}
the position operator of the harmonic oscillator, ignoring transients, becomes
\begin{equation}
\begin{split}
q(\tau)=\frac{i\lambda}{\sqrt{2m\omega}}&\sum_k e_k \bigg\{ \\
& b_k^{(0)}\int_{-\infty}^\infty e^{ik'\tau}\alpha_k^*(k')\Delta \chi(-k') dk'
+b^{(0) \dagger}_k\int_{-\infty}^\infty e^{-ik'\tau}\alpha_k(k')\Delta \chi (k')dk'\\
+ & \left. b_{-k}^{(0)}\int_{-\infty}^\infty e^{ik'\tau}\beta_k^*(k')\Delta \chi(-k') dk'
+b^{(0) \dagger}_{-k}\int_{-\infty}^\infty e^{-ik'\tau}\beta_k(k')\Delta \chi(k')dk'\right\}.
\end{split}
\label{q}
\end{equation}
\section{The Solution to the Field Equation}
We now wish to determine the solution to the scalar field equation in the presence of the oscillator. In section 2 we determined that the scalar field
$\Phi$ can be decomposed into a linear sum of in-going and outgoing modes:
\begin{equation}
\Phi=\sum_k e_k \left\{
b_k e^{ik\xi}+b^\dagger_ke^{-ik\xi}+b_{-k}e^{-ik\eta}+b^\dagger_{-k}e^{ik\eta}
\right\}.
\label{Phi1}
\end{equation}
In section 3 we determined expressions for $b_k$ and $b_{-k}$; these are given in (\ref{bk}) and (\ref{b-k}). Thus substituting
into (\ref{Phi1}) we find that the field can be written as
\begin{equation}
\Phi=\Phi_h+\Phi_p,
\end{equation}
where the homogeneous part
\begin{equation}
\Phi_h=\sum_k e_k\left\{
b_k^{(0)}e^{-ik(\tau-\xi)}+b_{k}^{\dagger (0)}e^{ik(\tau-\xi)}+b_{-k}^{(0)}e^{-ik(\tau+\eta)}+b_{-k}^{\dagger (0)}e^{ik(\tau+\eta)} \right\},
\end{equation}
and the particular integral is
\begin{equation}
\begin{split}
\Phi_p=&\lambda \sum_k e_k^2 \Big\{ \\
&e^{-ik(\tau-\xi(\tau))}\int_{-\infty}^\tau(a^\dagger-a)e^{ik(\tau'-\xi(\tau'))} \ d\tau'
-e^{ik(\tau-\xi(\tau))}\int_{-\infty}^\tau(a^\dagger-a)e^{-ik(\tau'-\xi(\tau'))} \ d\tau' \\
&+e^{-ik(\tau+\eta(\tau))}\int_{-\infty}^\tau(a^\dagger-a)e^{ik(\tau'+\eta(\tau'))} \ d\tau'
-e^{ik(\tau+\eta(\tau))}\int_{-\infty}^\tau(a^\dagger-a)e^{-ik(\tau'+\eta(\tau'))} \ d\tau'\Big\}.
\end{split}
\end{equation}
Using the relation between the annihilation and creation operators and the momentum, $p$,
\begin{equation}
p=m\frac{dq}{d\tau} = i\sqrt{\frac{m \omega}{2}} (a^{\dagger} - a),
\end{equation}
we get
\begin{equation}
\begin{split}
\Phi_p=-ig&\sum_ke_k^2\left\{ \int_{-\infty}^\tau \frac{dq}{d\tau'}\left( e^{ik(\tau'-\tau)-ik(\xi(\tau')-\xi(\tau))}-
e^{-ik(\tau'-\tau)+ik(\xi(\tau')-\xi(\tau))}\right)\ d\tau' \right. \\
&+\int_{-\infty}^\tau\frac{dq}{d\tau'}
\left.\left(e^{ik(\tau'-\tau)+ik(\eta(\tau')-\eta(\tau))}-e^{-ik(\tau'-\tau)-ik(\eta(\tau')-\eta(\tau))} \right)\ d\tau'\right\}.
\end{split}
\end{equation}
Combining the exponentials and converting $\sum_k\rightarrow \int dk$ gives
\begin{equation}
\begin{split}
\Phi_p=2g&\left[
\int_0^\infty \int_{-\infty}^\tau \frac 1 k \frac{dq}{d\tau'} \sin[k(\tau'-\tau)-k(\xi(\tau')-\xi(\tau))]\ d\tau'\ dk \right. \\
&\left.+\int_{-\infty}^0 \int_{-\infty}^\tau \frac 1 k \frac{dq}{d\tau'} \sin[k(\tau'-\tau)+k(\eta(\tau')-\eta(\tau))]\ d\tau'\ dk
\right].
\end{split}
\end{equation}
We now set
\begin{equation}
I_1=\int_0^\infty \int_{-\infty}^\tau\frac 1 k \frac{dq}{d\tau'}\sin[k(\tau'-\tau)-k(\xi(\tau')-\xi(\tau))]\ d\tau'dk,
\end{equation}
and
\begin{equation}
I_2=\int_{-\infty}^0 \int_{-\infty}^\tau\frac 1 k \frac{dq}{d\tau'}\sin[k(\tau'-\tau)+k(\eta(\tau')-\eta(\tau))]\ d\tau'dk.
\end{equation}
Evaluating the $k$ integral in $I_1$ first, we see that this is just the retarded Green's function in two dimensions
\cite{raine1991does}, and so:
\begin{equation}
\mathscr G_{\mathrm{ret}}(\tau, \xi, \tau', \xi')=
\left\{ \begin{array}{cc} \pi & |\xi-\xi'|<\tau-\tau', \ \tau>\tau' \\ 0 & \mathrm{otherwise} \end{array} \right.
\end{equation}
which means that
\begin{equation}
I_1=\pi q(\tau_{\mathrm{ret}})
\end{equation}
where $\tau_{\rm ret}$ is given by
\begin{equation}
\label{tau-ret}
\tau_{\rm ret}-\xi(\tau_{\rm ret})= \tau-\xi(r).
\end{equation}
Evaluating $I_2$ we identify the $k$ integral in $I_2$ as being the advanced Green's function, and if we let
$k\rightarrow -k$
\begin{equation}
\mathscr G_{\mathrm{adv}}=\lim_{\epsilon\rightarrow 0} \left\{ \frac{1}{2\pi}
\int_0^\infty \int_{-\infty}^\infty \frac{e^{ik(\eta-\eta')-ik^0(\tau-\tau')}}{(k^0-i\epsilon)^2-k^2} \ dk^0 dk\right\}
\end{equation}
Examining the $k^0$ integral we see that the simple poles are located in the upper half of the complex plane. However, we still require $\tau>\tau'$, so we must close the contour with a semi-circle in the lower half of the complex plane, which
does not enclose the poles. So this integral gives no contribution. Thus we have obtained the
solution to the scalar field equation
solution to the scalar field equation
\begin{equation}
\Phi(\tau,r)=\Phi_h+2\pi g q(\tau_{\mrm{ret}}).
\label{Phi}
\end{equation}
\section{The Energy Flux at the Detector}
We now look at the response of a detector at a large distance from the black hole. This operationalises the meaning of radiation from the infalling oscillator. For the accelerated detector in the Rindler wedge in \cite{raine1991does} we calculated the noise power on the world line of a distant inertial detector, which is directly related to the probability of excitation of the detector. Here we shall look at the closely related, but more familiar, energy flux at the detector.
\\
We shall demonstrate first that the energy flux at the detector has a blackbody form modulated by the impedance of the infalling body. This enables us to explain the difference between the Rindler case and a black hole. We also show that the oscillator emits a negative energy flux into the hole. We then use the explicit form for the impedance function of a harmonic oscillator to derive an explicit form for the energy flux from the infalling oscillator.
\\
Let the P-G coordinates of the oscillator be $(\tau, \xi(\tau))$ and let the coordinates of the detector be $(t ', \xi'(t'))$. For $\xi' > \xi$
\begin{equation}
\tau_{\rm ret} - \xi(\tau_{\rm ret})=t'- \xi',
\end{equation}
and for $\xi' < \xi$
\begin{equation}
\tau_{\rm ret} + \eta(\tau_{\rm ret}) = t' + \eta'.
\end{equation}
In P-G coordinates an orthonormal dyad in the rest frame of the detector is
\begin{equation}
{\bf e}^{0}=(1, 0), \quad {\bf e}^{1} = (f', 1).
\end{equation}
The energy-momentum flux at the detector is
\begin{equation}
F =T^{\hat{1}}_{\hat{0}} = - T_{\hat{0}\hat{1}} =- \langle {\bf e}^{\mu}_{0}{\bf e}^{\nu}_{1}T_{\mu \nu} \rangle = - f'T_{r'r'}+T_{t'r'},
\end{equation}
where the components of $T_{\mu' \nu'}$ are obtained as usual from
\begin{equation}
T_{\mu' \nu'} = \frac {\partial \Phi}{\partial x'^{\mu}}\frac{\partial \Phi^{\dagger}}{\partial x'^{\nu}}-\frac{1}{2}g_{\mu' \nu'} \left(\frac{\partial \Phi}{\partial x'^{\lambda}} \right)^2.
\end{equation}
Using (\ref{Phi}), the expectation value of the energy momentum flux is obtained from
\begin{equation}
\label{noisy}
\begin{split}
\left\langle \frac{\partial \Phi^{\dagger}}{\partial x'^{\mu}} \frac{\partial \Phi}{\partial x'^{\nu}}\right\rangle = &
\left\langle \frac{\partial \Phi_{h}^{\dagger}}{\partial x'^{\mu}} \frac{\partial \Phi_h}{\partial x'^{\nu}}\right\rangle \\
&+ 2 \pi g \left\langle \frac{\partial \Phi_{h}^{\dagger}}{\partial x'^{\mu}} \dot{q}(\tau_{ret}) \frac{\partial \tau_{ret}}{\partial x'^{\nu}}+ \dot{q^{\dagger}}(\tau_{ret}) \frac{\partial \tau_{ret}}{\partial x'^{\mu}} \frac{\partial \Phi_h}{\partial x'^{\nu}}\right\rangle \\
& + 4 \pi g^2 \left\langle \dot{q^{\dagger}}(\tau_{ret}) \frac{\partial \tau_{ret}}{\partial x'^{\mu}} \dot{q}(\tau_{ret}) \frac{\partial \tau_{ret}}{\partial x'^{\nu}}\right\rangle.
\end{split}
\end{equation}
The first term on the right of equation (\ref{noisy}) involves only the unperturbed field and represents the flux present in the absence of the oscillator. The Hawking radiation is obtained by the standard calculation (e.g. \cite {hawking1975particle}) that relates the incoming modes on $\mathscr{I}^{-}$ at $r \rightarrow -\infty$ that do not fall into the horizon (defining the in-vacuum) to the outgoing modes on $\mathscr{I}^{+}$ at $r\rightarrow +\infty$, defining the out-vacuum. In the Painlev\'{e}-Gullstrand manifold here (figure \ref{Penrose-Carter}) there is only one vacuum state and no mixing of modes in the absence of the infalling oscillator. This term is therefore just the zero point flux and will have no influence on the detector. We can confirm this by an explicit calculation.
\\
The normally ordered expression for the unperturbed $\langle : T_{\mu' \nu'}: \rangle$ based on the Painlev\'{e}-Gullstrand modes gives zero contribution. The covariant form can be calculated from the conformal factor, $e^{2\rho}=(1-f^2)$, in the usual way for a 1+1 dimensional metric \cite{bh-evap2005} \cite{davies1976}
\begin{equation}
\langle T_{uu}\rangle \propto \frac{\partial^2 \rho}{\partial u ^2} -\left(\frac{\partial \rho}{\partial u}\right)^2
\end{equation}
with similar expressions for $\langle T_{vv} \rangle$. We obtain (up to a numerical factor)
\begin{equation}
\langle T_{uu}\rangle= \langle T_{vv}\rangle =\frac{M}{2r^3}- \frac{3M^2}{4r^4}
\end{equation}
and
\begin{equation}
\langle T_{uv}\rangle=\frac{\partial^2 \rho}{\partial u \partial v} = - \frac{M}{2r^3}\left( 1 - \frac{2M}{r} \right)
\end{equation}
which vanish at infinity (exactly as in the Boulware vacuum). Thus there is no Hawking flux. (The flux on the horizon is formally non-zero, but this is non-physical since the coordinates do not satisfy the regularity conditions there \cite{christensen1977}.)
Of course, the situation would be different if we were to take into account the collapse phase in the formation of the black hole, when this term would yield the usual Hawking effect.
\\
The term on the second line of (\ref{noisy}) represents the interference between the outgoing emission from the oscillator and the vacuum excitations of the detector. This is the flux we would get from first order perturbation theory treating the scalar field as an external potential. The final term in (\ref{noisy}) represents the direct contribution to the flux arising from the oscillator.
\\
We can tidy up equation (\ref{noisy}) using the fact that the time dependence in $\Phi$ comes through $\tau_{\rm ret}$, since $\Phi_{h}(t'-\xi') = \Phi_{h}(\tau_{\rm ret} - \xi(\tau_{\rm ret}))$ and $q=q(\tau_{\rm ret})$.
Thus the derivatives in $T_{\mu\nu}$ contribute factors of:
\begin{eqnarray*}
\frac{\partial \Phi}{\partial r'} \frac{\partial \Phi^{\dagger}}{\partial r'}&\rightarrow & \langle \dot{\Phi} \dot{\Phi}^{\dagger} \rangle \left(\frac{\partial \tau}{\partial \xi'}\right)^{2} \left( \frac{\partial \xi'}{\partial r'}\right)^2\\
\frac{\partial \Phi}{\partial r'} \frac{\partial \Phi^{\dagger}}{\partial t'}& \rightarrow & \langle \dot{\Phi} \dot{\Phi}^{\dagger}\rangle \left(\frac{\partial \tau}{\partial \xi'}\right) \left( \frac{\partial \xi'}{\partial r'}\right)\left(\frac{\partial \tau}{\partial t'} \right)\\
\frac{\partial \Phi}{\partial t'} \frac{\partial \Phi^{\dagger}}{\partial t'}& \rightarrow & \langle \dot{\Phi} \dot{\Phi}^{\dagger}\rangle \left(\frac{\partial \tau}{\partial t'}\right)^{2}
\end{eqnarray*}
where $\dot{\Phi} = d\Phi/d\tau_{\rm ret}.$
Now, $q(\tau_{\rm ret})$ contributes a factor $\exp(\mp ik' \tau_{\rm ret})$ to $\Phi$ and $\Phi_h = \phi_{\rm out}$ contributes a factor $\exp[\pm ik'(t'-\xi')] = \exp[\pm ik'(\tau_{\rm ret} - \xi (\tau_{\rm ret}))]$. Thus $\langle \dot{\Phi} \dot{\Phi}^{\dagger} \rangle = k'k'' \langle \Phi \Phi^{\dagger} \rangle$.
Writing $f = \sqrt {2M/r} $, $f'= \sqrt{2M/r'}$ as above (for $r>0$, $r'>0$), from the definitions (\ref{tau-ret}) we find
\begin{equation}
\frac{\partial \tau}{\partial \xi'} =- \frac{\partial \tau}{\partial t'}= -(1-f) \quad \text{and} \quad \frac{\partial \xi'}{\partial r'} = \frac{1}{1-f'}. \quad
\end{equation}
Putting this together we find that
\begin{equation}
F = \frac{(1-f)^2}{(1-f')^2} (\langle \mcal J \rangle_\mathrm{dir} + \langle \mcal J \rangle_\mathrm{int})
\label{Fdir}
\end{equation}
where
\begin{equation}
\label{Jdir1}
\begin{split}
&\langle \mcal J \rangle_\mathrm{dir} = 4 \pi g^2 \langle \dot{q}^\dagger \dot{q} \rangle \\
=&\gamma^2 \sum_k e_k^2
\int_{-\infty}^\infty\int_{-\infty}^\infty
k' k'' e^{i(k''-k')\tau_{\mrm{ret}}}\Delta \chi(k') \Delta \chi^*(k'')
[\alpha_k(k')\alpha_k^*(k'')+\beta_k(k')\beta_k^*(k'')]\ dk' dk''.
\end{split}
\end{equation}
with $\gamma = \pi g^2 /m$ (equation (\ref{gamma1})) and $\Delta \chi$ given by (\ref{Delta chi}). The interference term for the detector at $\xi' > \xi$ is given by
\begin{equation}
\label{Jint1}
\begin{split}
\langle \mcal J \rangle_\mathrm{int} = &2 \pi g \langle \dot{q}^{\dagger} \dot{\phi_{\rm out}}+ \dot{q} \dot{\phi}_{\rm out}^{\dagger} \rangle\\
=& -\gamma\sum_ke_k^2 \int_{-\infty}^\infty\int_{-\infty}^\infty
k'k''e^{i(k''-k')\tau_{\mrm{ret}}}\alpha_k(k')\alpha_k^*(k'') [i\Delta \chi^*(k'') + i\Delta \chi(k')] \ dk'\ dk''.
\end{split}
\end{equation}
We can write the redshift in terms of proper time along the world line of the oscillator. We have
\begin{equation}
\bar{\tau} \equiv \tau + \tau_s = \tau_s\left(1- \frac{1}{f^3} \right) = \frac{\tau_s}{f^3}(f-1)(f^2+f+1).
\end{equation}
So the redshift factor in the flux is $\propto \bar{\tau}^2$, where $|\bar{\tau}|$ is the proper time remaining until the horizon is crossed, and so for the oscillator close to the horizon and the detector at infinity ($f \sim 1$, $f' \sim 0$),
\begin{equation}
\label{redshifted}
F = \frac{\bar{\tau}^2}{9\tau_s^2}(\langle \mcal J \rangle_\mathrm{int} + \langle \mcal J \rangle_\mathrm{dir}).
\end{equation}
We now proceed to compare the direct flux and the interference term. In appendix 2 we look at the stationary phase approximation to the $k'$ and $k''$ integrals in equations (\ref{Jdir1}) and (\ref{Jint1}). The result is that the impedance terms, $\Delta\chi$, are evaluated at the stationary points. This allows us to write $\langle \mcal J \rangle_\mathrm{dir}$ as
\begin{equation}
\begin{split}
\langle \mcal J \rangle_\mathrm{dir} = & \gamma^2 \int_{0}^{\infty} \frac{dk}{k} \int_{-\infty}^{\infty} dk'' \int_{-\infty}^{\infty}\ dk' \Big\{k' k'' e^{i(k''-k') \tau_{\rm ret}} \\
& \times \left[\alpha_k^*(k'')\alpha_k(k')|\Delta \chi (k_{\alpha})|^2 +\beta_{k}^*(k'')\beta_{k}(k') |\Delta \chi (k_{\beta})|^2\right]\Big\}.
\end{split}
\label{dir-noise}
\end{equation}
where we have inserted the stationary points
\begin{equation}
k'(k) = k_{\alpha} = \frac{(-3k \tau_{\rm ret})}{\bar{\tau}}\ \ \mathrm{and}\ \ k'(k) = k_{\beta} = -2k
\end{equation}
given in appendix 2. We now compare this with the interference term. Again, we use the stationary phase approximation to justify writing $\langle \mcal J \rangle_\mathrm{int}$ (equation (\ref{Jint1})) as
\begin{equation}
\langle \mcal J \rangle_\mathrm{int}=-2\gamma \int_{0}^{\infty} \frac{dk}{k} \int_{-\infty}^\infty\int_{-\infty}^\infty
k'k''e^{i(k''-k')\tau_{\mrm{ret}}}\alpha_k(k')\alpha_k^*(k'')[i\Delta \chi^*(k_{\alpha}) + i\Delta \chi(k_{\alpha})]\ dk'\ dk''.
\end{equation}
We now use the fluctuation-dissipation theorem (appendix 4) to write this as
\begin{equation}
\langle \mcal J \rangle_\mathrm{int} = -2 \gamma^2 \int_{0}^{\infty} \frac{dk}{k} \int_{-\infty}^\infty\int_{-\infty}^\infty
k'k''e^{i(k''-k')\tau_{\mrm{ret}}}\alpha_k(k')\alpha_k^*(k'')|\Delta \chi(k_{\alpha})|^2\ dk'\ dk''.
\end{equation}
Comparing with equation (\ref{dir-noise}) we see that the total flux (up to the redshift factor) is
\begin{equation}
\label{totnoise}
\begin{split}
\langle \mcal J \rangle_\mathrm{dir}+\langle \mcal J \rangle_\mathrm{int} =& \gamma^2 \int_{0}^{\infty} \frac{dk}{k} \int_{-\infty}^{\infty} dk'' \int_{-\infty}^{\infty}\ dk' \Big\{ k' k'' e^{i(k''-k') \tau_{\rm ret}} \\
& \left[-\alpha_k^*(k'')\alpha_k(k')|\Delta \chi (k_{\alpha})|^2+\beta_{k}^*(k'')\beta_{k}(k')| \Delta \chi (k_{\beta})|^2\right] \Big\}\\
\equiv & - \mathcal F_\alpha + \mathcal F_\beta.
\end{split}
\end{equation}
Note that if it were the case that $\alpha_k(k') = \beta_k(k')$, then $k_{\alpha} = k_{\beta}$ and this expression vanishes. This accords with our result for the case of a constantly accelerated oscillator in flat spacetime.
\\
In the case that $\alpha_k(k') \ne \beta_k(k')$, the flux at the detector does not vanish. If we place the detector closer to the hole than the oscillator, the interference term now involves the ingoing modes. Thus it makes a contribution $-2\mathcal F_\beta$ to the flux which therefore becomes $-F$. Thus, if the energy radiated to infinity by the infalling oscillator is positive, the energy radiated into the hole is negative.
\\
Given our particular forms for $\alpha_k(k')$ and $\beta_k(k')$ corresponding to a freely-falling oscillator we proceed to show that the flux has a blackbody spectrum (modified by the oscillator impedance) as the oscillator approaches the black hole.
\\
We can write the fluxes in equation (\ref{totnoise}) as
\begin{equation}
\label{Ialphabeta}
\mathcal F _\alpha = \gamma^2 \int_{0}^{\infty} \frac{dk}{k} I_{\alpha}(k)I^{*}_{\alpha}(k)
\end{equation}
where
\begin{equation}
\label{J_1}
I_{\alpha}(k) = \int_{-\infty}^{\infty}k' \alpha_k(k')e^{-ik'\tau} \Delta \chi(k')dk'
\end{equation}
with corresponding definitions for $\mathcal F_\beta$ and $I_\beta(k)$. The susceptibility factor is peaked around $k' = \pm \omega$. The stationary phase approximation to the integral for $k>0$ will turn out to require $k' <0.$ Thus we can make the replacement $\Delta \chi(k') = \chi^* (k') - \chi(-k') \approx \chi^*(k').$ This corresponds to keeping the energy conserving terms $a^{\dagger} b + a b^ {\dagger}$ in the Hamiltonian. We shall find that the remaining terms that give rise to the Unruh effect (coming from $a^{\dagger}b^{\dagger} + ab$ in $\mathscr{H}$) give a contribution $\gamma/\omega$ smaller.
\\
We now have
\begin{equation}
I_{\alpha}(k) = \frac{2Mk}{\pi} e^{2\pi Mk} e^{-2 \pi iMk}e^{i (\pi/2 - 2\omega /\gamma)}(4M)^{4iMk} \Gamma(-4iMk)J_1
\end{equation}
where
\begin{equation}
\label{J_1D}
J_1 = \int_{-\infty}^{\infty}dk'e^{-ik'\bar{\tau}}(3k+k')^{4iMk}f(k')
\end{equation}
with
\begin{equation}
f(k') = k'(3k+k')^{-1}\Delta \chi(k').
\end{equation}
Evaluating $J_1$ for large $Mk$ by stationary phase (appendix 2) gives
\begin{equation}
J_1 \sim (2\pi)^{1/2} e^{-4iMk +3ik\tau - i \pi /4} (4Mk)^{4iMk-1/2}\bar{\tau}^{-4iMk} f\left(\frac{-3k\tau}{\bar{\tau}}\right).
\end{equation}
The contribution to the flux from $I_{\alpha}(k)I^{*}_{\alpha}(k) $ is therefore
\begin{equation}
\label{alphaflux}
\begin{split}
-\mathcal F_\alpha = &- 9\gamma^2 \left(\frac{\tau}{\bar{\tau}}\right)^2 \int_0^{\infty}kB(-8\pi Mk) \frac{dk}{(\omega + k'(k))^2 + \gamma^2 /4}\\
=& - \gamma^2 \int_{0}^{-\infty}k'B\left(2\pi \bar{\tau}k'\right)\frac{dk'}{(\omega+k')^2 + \gamma^2/4},
\end{split}
\end{equation}
where we have substituted $k'(k) = -3k\tau / \bar{\tau}$ and taken the limit $\tau \rightarrow -\tau_s,$ and where the black-body function $B(x)$ is:
\begin{equation}
B(x)=\frac{1}{e^x-1}=-B(-x)-1.
\end{equation}
Finally, we can extend the integration to the full range with an error of order $\gamma / \omega$ since
\begin{equation*}
\begin{split}
\int_{0}^{\infty} \frac{dk}{(\omega+k)^2 + \gamma^2/4}=&\int_{\omega}^{\infty}\frac{dx}{x^2 + \gamma^2/4} = \frac{1}{\gamma}\int_{\omega/\gamma}^{\infty}\frac{dy}{y^2+1}\\
\sim &\frac{1}{\gamma} \left(\frac{\gamma}{\omega}\right).
\end{split}
\end{equation*}
This contribution to the flux is therefore
\begin{equation}
-\mathcal F_\alpha = -\gamma^2 \int_{-\infty}^{\infty}k'B\left(2\pi \bar{\tau}k'\right)\frac{dk'}{(\omega+k')^2 + \gamma^2/4}
\end{equation}
where the integration is now over the full range of $k'$.
\\
Two comments are required here. The appearance of the blackbody factor arises from the Fourier analysis of the time dependence of the oscillator trajectory. It may seem strange that approximating the Fourier integral in (\ref{alpha}) and its inverse transform in (\ref{J_1}) introduces a blackbody factor. This arises through our treatment of the gamma functions which for consistency should strictly be evaluated in the asymptotic (large $Mk$) limit. Doing this would lead to the Wien tail of the blackbody emission. However, we shall stick with precedent (dating back to Hawking's original paper) and retain the full blackbody form.
\\
The second comment concerns the equally strange way in which taking the Fourier transform followed by its inverse leads us merely to a form for $k'(k)$ in the oscillator susceptibility, which does not contribute to the phase. In fact, including the phase of the susceptibility in the stationary phase approximation makes no difference to the result to the lowest order in $\gamma/\omega$. An alternative approach is to note that the susceptibility is peaked around $k'=-\omega$ and to expand the integrand about this point. To the accuracy of our approximation this leads to the same final result.
\\
To calculate the contribution from the ingoing ($\beta$) modes we start from (\ref{Ialphabeta}) with
\begin{equation}
I_{\beta} =4 \left(\frac{2M}{\pi k}\right)^{1/2} e^{iMk(-10/3 + 4 \ln 2) + i\pi /4} \int_{-\infty}^{\infty}k' \Delta \chi (k') e^{i[-k'\bar{\tau}+(2M/k)(k+2k')^2]}dk'.
\end{equation}
We evaluate this again by stationary phase. We let
\begin{equation}
\phi(k') = -k'\bar{\tau} + \frac{2M}{k} (2k'+k)^{2}.
\end{equation}
The stationary point is
\begin{equation}
k'(k)=\frac{k}{2}\left(\frac{\bar{\tau}}{8M}-1 \right)
\end{equation}
from which we obtain
\begin{equation}
\int_{-\infty}^{\infty}k' \Delta \chi (k') e^{i[-k'\bar{\tau}+(2M/k)(k+2k')^2]}dk' \sim -\frac{k^{3/2}}{2} \left(\frac{2\pi}{M}\right)^{1/2}\left(1-\frac{ \bar{\tau}}{8M}\right) e^{i[\bar{\tau}k/2 - \bar{\tau}^2 k/(32M)]}\Delta\chi(k'(k)).
\end{equation}
The dominant term in $\Delta \chi$ comes from $\chi^*(k')$. Thus, to lowest order in $\gamma$, as $\bar{\tau} \rightarrow 0$
\begin{equation}
\begin{split}
\mathcal F_\beta=\gamma^2 \int_{0}^{\infty}\frac{dk}{k} I_{\beta}I^{*}_{\beta} & \sim \gamma^2 \int_{0}^{\infty}\frac{k}{16}|\chi(k'(k))|^2 dk \\
& \sim -\gamma^2 \int_{-\infty}^{\infty} \frac{k' dk'}{(\omega + k')^2 + \gamma^2 /4}\\
\end{split}
\end{equation}
Now use the relation $B(x) = -B(-x)-1$ to write (\ref{alphaflux}) as
\begin{equation}
-\mathcal F_\alpha = -\gamma^2\int_{-\infty}^{\infty}k'[-B(2\pi \bar{\tau}k')-1] \frac{dk'}{(\omega + k')^2 + \gamma^2 /4}.
\end{equation}
This gives a positive frequency blackbody term plus the zero point energy that cancels the contribution from $I_{\beta}$. The total flux is therefore
\begin{equation}
\label{dir+int}
\langle \mcal J \rangle_\mathrm{dir}+\langle \mcal J \rangle_\mathrm{int}=\gamma^2\int_{-\infty}^{\infty}k'B(2\pi \bar{\tau}k') \frac{dk'}{(\omega + k')^2 + \gamma^2 /4}.
\end{equation}
As promised, the result is a blackbody spectrum modulated by the oscillator susceptibility.
\\
We now proceed to evaluate the remaining integral over wave number in (\ref{dir+int}). This has the form of a smoothly varying factor multiplied by the susceptibility which (for an under-damped oscillator) is peaked around $k'=-\omega$. (We have $k'(k) = -3k\tau/\bar{\tau} < 0 $ since $k>0$, which justifies the inclusion of only the terms in $k' + \omega$ in (\ref{J_1})). We have
\begin{equation}
\int_{0}^{-\infty} \frac{k'dk'}{(\omega + k')^2 + \gamma^2/4} = \omega\int_{-\infty}^{\infty}\frac{dx}{x^2 + \gamma^2/4}\left(1+\mathcal O (\gamma/\omega)\right) = \frac{2 \pi \omega} { \gamma }\left(1+\mathcal O (\gamma/\omega)\right)
\end{equation}
giving
\begin{equation}
\label{F-flux}
F\sim 2 \pi \frac{\bar{\tau}^2} {9 \tau_s^2} \gamma \omega B(2\pi \omega (-\bar{\tau}))
\end{equation}
for $ |\bar{\tau}| > \omega ^{-1}$ and $|\bar{\tau}| \lesssim M$, where we have re-instated the redshift factor (equation (\ref{redshifted})). The blackbody factor peaks at $\bar{\tau}\omega \sim 1$ and the flux at the peak is of order $ \gamma \omega / (M^2 \omega^2)$ or $ (t_d t_{in})^{-1}(\lambda/R_s)$ where $t_d \sim 1/\gamma$ is the decay (or equilibration) time, $t_{in} \sim M$ is the infall time, $\lambda = 1/\omega$ is the wavelength of the oscillator and $R_s$ is the radius of the black hole.
\\
For $\bar{\tau} < \omega^{-1}$ the oscillator susceptibility is no longer peaked, but we can evaluate the flux as follows. For $\omega \bar{\tau} \rightarrow 0$ we have
\begin{equation}
|\chi(k')|^2 = [(\omega + k'(k))^2 + \gamma^2 /4]^{-1} \sim \bar{\tau}^2 (9k^2 \tau^2 +\bar{\tau}^2 \gamma^2 /4)^{-1}.
\end{equation}
The contribution to the total flux is
\begin{equation}
\begin{split}
F_{0} &= 9\gamma^2\left(\frac{\tau}{\bar{\tau}}\right)^{2}\int_{0}^{\infty}k B(8\pi Mk) \frac{\bar{\tau}^2dk}{9k^2 \tau^2+\bar{\tau}^2 \gamma^2 /4}\\
&\sim \gamma^2\int_{0}^{\infty} B(8\pi Mk) \frac{d(Mk)}{Mk}
\end{split}
\end{equation}
as $\bar{\tau} \rightarrow 0$ and $\tau \rightarrow -\tau_s$. The divergence at the lower limit can be dealt with by insisting that we are considering the case $Mk > 1$ or by a more accurate treatment of the stationary phase, which brings in an extra factor of $4Mk(16M^2 k^2 +1)^{-1}$ (see appendix 3). In either case the contribution is proportional to $\gamma^2$ which is $\gamma /\omega$ smaller than $F$. We can therefore ignore this contribution.
\\
We can estimate the total energy, $E$, emitted by integrating (\ref{F-flux}) over time, $t'$, at the detector. Taking into account the redshift factors from (\ref{Fdir}), and $dt' = \frac{dt'}{d\tau}d\tau = \frac{\tau_s}{\bar{\tau}}d\bar{\tau}$, we have
\begin{equation}
\label{E-B}
\begin{split}
E=&\pi \gamma \omega \int_{-\infty}^{-\omega^{-1}} B(-2\pi\omega \bar{\tau}) \left(\frac{-\bar{\tau}}{\tau_s}\right) d\bar{\tau}\\
=& \frac{3\gamma}{16 \pi M \omega} \int_{2\pi}^{\infty}xB(x) dx.
\end{split}
\end{equation}
We can write this in terms of the infall time $t_{\rm in}$, the decay time of the oscillator $t_{\rm d}$ and the wavelength $\lambda = 1/\omega $ as
\begin{equation}
E\sim \omega \frac{\gamma M}{\omega^2 M^2} \sim \omega \left(\frac{t_{\rm in}}{t_{\rm d}}\right)\left(\frac{\lambda}{R_s}\right)^2
\end{equation}
for $\lambda \lesssim R_s$. Note how the time-dependence of the infall in (\ref{E-B}) spreads the expectation value of the energy from the oscillator at frequency $\omega$ (or more precisely, the renormalised frequency) into a blackbody spectrum.
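For reference, the dimensionless integral appearing in (\ref{E-B}) is easy to evaluate numerically. A minimal sketch in Python, assuming the blackbody factor $B(x)=1/(e^x-1)$ (the form consistent with the identity $B(x)=-B(-x)-1$ used above):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Blackbody occupation factor; satisfies B(x) = -B(-x) - 1
def B(x):
    return 1.0 / np.expm1(x)

# Dimensionless integral \int_{2 pi}^\infty x B(x) dx from the energy estimate
val, err = quad(lambda x: x * B(x), 2 * np.pi, np.inf)
print(val, err)   # a small tail value, since the integrand decays like x e^{-x}
\end{verbatim}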
\section{Discussion}
We can summarise our conclusions as follows.
The infalling oscillator emits positive energy to infinity and negative energy into the black hole. This arises from the difference between the outgoing and ingoing modes (brought about by the cross-term in the metric, which provides the time asymmetry). The flux comes from a distance $ > 1/\omega$ from the horizon. This suggests that as long as the ingoing modes are close to Painlev\'{e}-Gullstrand modes the oscillator radiates even if the true horizon does not form \cite{bardeen2014}.
\\
The dominant effect comes from the energy conserving term in the Hamiltonian. (The atom is de-excited and emits a (scalar) photon.) This is possible because the usual balance between excitation and de-excitation that results in the stability of the ground state is disturbed by the difference between ingoing and outgoing modes. Thus the ground state is no longer stable moment by moment. The Unruh terms in the Hamiltonian in this case yield a blackbody flux to infinity (and a negative flux into the hole), which is a factor $\gamma/\omega $ smaller than the dominant terms. In this model, the Unruh effect would dominate only if we neglected the back-reaction of the field on the oscillator. Indeed, if we replace $\chi(k')$ in (\ref{Jint1}) by $\lim_{\gamma \rightarrow 0} \chi(k') = i \pi \delta (k'+\omega)$ and put $\nu =- 2k/3$ (the minus sign allowing for the energy non-conserving term in $\mathscr{H}$) we obtain the expression for the energy flux equivalent to that in Scully et al.\cite{scully2018quantum} (although in the Boulware vacuum of their choice, this would be cancelled by the contribution from the direct flux).
\\
As a simple model we can imagine a shell of oscillators (or atoms) collapsing to form a black hole. They radiate a blackbody flux to infinity and a negative energy flux into the (putative) black hole. But the emission from the oscillators is not the Hawking radiation: the flux from the oscillators depends on the strength of the matter-field coupling, so is not independent of their material properties; it occurs on a collapse timescale, not the Hawking timescale; and furthermore the flux is independent of the mass of the black hole (although the spectrum depends on $M$ and the total energy radiated depends on $M$ through the infall time). Nevertheless, the model may provide some useful hints.
\\
In a self-consistent picture, the black hole is bathed in both incoming negative energy fluctuations and outflowing positive energy. The fluctuations in this radiation field will perturb the hole and cause it to radiate. Thus, in the fuller picture, the emission can be seen as a two-quantum process. Furthermore, the second quantum carries information about the first. In other words, it is conceivable that information is not lost in the process, in much the same way that it is not lost in the ``burning paper'' illustration. This means we need to consider the reaction of the hole not just to its own radiation, but to that from the infalling matter (which, if nothing else, will be coupled to gravity). This may alter the argument from timescales \cite{mathur2011}.
\\
One final speculation based on the model presented here. The Lamb shift is usually absorbed into the mass of the oscillator by renormalisation and therefore in effect neglected. In a Bohr atom $\omega \propto m$, the electron mass, so the energy radiated (which we found to be proportional to $\omega$) is proportional to the (renormalised) mass. This suggests we need to incorporate a theoretical account of the origin of mass into the theory if we are to understand the quantum mechanics of black holes.
\newpage
\section*{Appendix 1: Fourier Transforms $\alpha_k(k')$ and $\beta_k(k')$ of the Modes}
We have defined $\alpha_k(k')$ as the Fourier transform of the out-going modes evaluated along the worldline of the
oscillator,
\begin{equation}
\alpha_k(k')=\frac{1}{2\pi}\int_{-\infty}^{-\tau_s} e^{ik(\tau-\xi(\tau))}e^{ik'\tau}\ d\tau,
\label{alpha1}
\end{equation}
where $\tau = - \tau_s = - 4M/3$ is the location of the event horizon. The worldline of the oscillator in free-fall is, in
Painlev\'{e}-Gullstrand coordinates,
\begin{equation}
r_s(\tau)=\left(\frac{9M}{2} \right)^{\frac 1 3}(-\tau)^{2/3},
\end{equation}
with the function $\xi(r)$ defined as:
\begin{equation}
\xi(r)=r+2\sqrt{2Mr}+4M \ln\left(\sqrt{\frac{r}{2M}}-1 \right).
\end{equation}
When evaluated on the worldline of the oscillator this becomes:
\begin{equation}
\xi(\tau)=\left(\frac{9M}{2} \right)^{1/3}(-\tau)^{2/3}+(48M^2)^{1/3}(-\tau)^{1/3}+4M\ln\left[
\left(\frac{3}{4M}\right)^{1/3}(-\tau)^{1/3}-1\right]
\label{xi(t)}
\end{equation}
The behaviour of the integral is dominated by the argument of the exponential at the endpoint of the range, namely near the horizon. We therefore expand $\xi(\tau)$ about the horizon with
\begin{equation}
\tau_0=-\tau_s-\epsilon
\end{equation}
where $\epsilon>0$ is small compared to $\tau_s$. Note that this expansion means that we remain in the exterior region. Performing the expansion for each of the terms in (\ref{xi(t)}) yields the asymptotic form
\begin{equation}
\xi(\epsilon)\approx 6M+2\epsilon+4M\ln(\epsilon)-4M\ln(4M),
\end{equation}
and
\begin{equation}
\tau_0-\xi(\epsilon)\approx -\frac{22M}{3}-3\epsilon-4M\ln(\epsilon)+4M\ln(4M).
\end{equation}
Using this expansion in (\ref{alpha1}) means that
\begin{equation}
\alpha_k(k')=\frac{1}{2\pi}e^{-iM(22k+4k')/3}(4M)^{4iMk}\int_0^\infty e^{-i(3k+k')\epsilon}\epsilon^{-4iMk}\ d\epsilon.
\end{equation}
The integral over $\epsilon$ may be converted into a Gamma function. To do this, we use the result from \cite{inttransvol1}:
\begin{equation}
\Gamma(z)=s^z\int_{0}^{\infty\ e^{i\delta}}e^{-st}t^{z-1}\ dt,
\end{equation}
with $-(\pi/2+\delta)<\mathrm{arg}\ s<\pi/2-\delta$, $\Re(z)>0$. This result holds for $\mathrm{arg}\ s+\delta=\pm\pi/2$ provided $0<\Re(z)<1$. Applying this to our $\epsilon$-integral we obtain, after some algebra, that for $(3k+k') > 0$\\
\begin{equation}
\label{alphaplus}
\alpha_k(k')=\frac{2M}{\pi}ke^{-2\pi Mk}e^{-iM(22k+4k')/3}(4M)^{4iMk}(3k+k')^{4iMk-1}\Gamma(-4iMk)
\end{equation}
and for $(3k+k') < 0$
\begin{equation}
\label{alphaminus}
\alpha_k(k')=-\frac{2M}{\pi}ke^{2\pi Mk}e^{-iM(22k+4k')/3}(4M)^{4iMk}(-3k-k')^{4iMk-1}\Gamma(-4iMk).
\end{equation}
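As a numerical sanity check of the Gamma-function identity quoted above, one can verify the parent formula for complex $s$ with $\Re(s)>0$ and let $\mathrm{arg}\ s$ approach $\pi/2$, the rotated-contour case used in the text. A minimal sketch (the parameter values are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check Gamma(z) = s^z \int_0^\infty e^{-s t} t^{z-1} dt for Re(s) > 0.
z = 0.7 + 0.3j                      # 0 < Re(z) < 1, as on the rotated contour
for delta in (1.0, 0.5, 0.1):
    s = delta + 1.0j                # arg(s) -> pi/2 as delta -> 0
    f = lambda t: np.exp(-s * t) * t ** (z - 1.0)
    re = quad(lambda t: f(t).real, 0.0, np.inf, limit=500)[0]
    im = quad(lambda t: f(t).imag, 0.0, np.inf, limit=500)[0]
    print(delta, s ** z * (re + 1j * im), gamma(z))
\end{verbatim}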
We now determine an approximate expression for
the Fourier transform $\beta_k(k')$ of the in-going modes along the worldline of the
oscillator.
\begin{equation}
\begin{split}
\beta_k(k')=&\frac{1}{2\pi}\int_{-\infty}^{-\tau_s} e^{ik(\tau+\eta(\tau))} e^{ik'\tau}\ d\tau \\
=&\frac{1}{2\pi}\int_{-\infty}^{-\tau_s} e^{ikM\phi(\tau)} \ d\tau
\end{split}
\label{beta2}
\end{equation}
with
\begin{equation}
\eta(r)=r-2\sqrt{2Mr}+4M\ln\left(\sqrt{\frac{r}{2M}}+1\right).
\label{eta(r)}
\end{equation}
We are treating $Mk$ as a large parameter in the integrand. Since $\phi(\tau)$ does not have a stationary point in the range (or in $(-\infty, 0)$), we expand about the end point on the horizon. (In fact, our result for the energy flux is independent of the point chosen to the accuracy of the approximation.)
Define $t = -\tau/\tau_s = -3 \tau /(4 M)$. Then, expanding $\phi(\tau)$ to second order gives
\begin{equation}
k\phi(t) = -\frac{10}{3} + 4 \ln 2 -\frac{1}{3} (t-1) + \frac{1}{18}(t-1)^2 + \frac{4}{3} k't.
\end{equation}
In terms of $y=\sqrt{k\tau_s/24} (t-1)$, the integral for $\beta_k(k')$ becomes
\begin{equation}
\beta_k(k') \sim \frac{1}{\pi^2}\left( \frac{6\tau_s}{k}\right)^{1/2} \int_{-\infty}^{\infty} dy \exp \left\{ \mp i \left [y+ \sqrt{\frac{2M}{|k|}(k + 2k')}\right]^2 +\frac{2M}{k} (k+2k')^2 \right\}
\end{equation}
with the signs $\pm$ according as $k > 0$ or $k<0$. We justify the extension of the range to $-\infty$ as follows. We are going to use this expression in the approximate evaluation of the energy flux by stationary phase about the stationary point $2k'+k = -ik(t-1)/3$, with $k>0$, which gives a term $-4y^2$ in the exponential. Thus the integrand converges rapidly as $y \rightarrow +\infty$. We obtain
\begin{equation}
\beta_k(k')= 2 \sqrt{\frac{2 M}{\pi |k|}}e^{i Mk(-10/3 + 4 \ln(2)) \pm i\pi /4 + i\tau_s k' -i\frac{2M}{k}(k+2k')^2}.
\end{equation}
\section*{Appendix 2:
Evaluation of integrals by stationary phase}
We want to evaluate integrals of the form
\begin{equation}
\label{stat-phase}
I(k) = \int_{-\infty}^{\infty} e^{-iu\bar{\tau}+4iMk\ln u}f(u) \frac{du}{u}
\end{equation}
where $u=(3k+k')$ and $\bar{\tau} = \tau + \tau_s < 0$ and $f(u)$ is a smoothly varying function of $u$. We have
\begin{equation}
I(k) = \int_{-\infty}^{\infty}e^{i\phi(u)}f(u) du
\end{equation}
where
\begin{equation}
\phi(u) = -u\bar{\tau}+(4Mk+i)\ln u.
\end{equation}
The stationary point occurs at $u=u_0=\frac{4Mk+ i }{\bar{\tau}}$. We therefore have
\begin{displaymath}
\phi''(u_0)=-\bar{\tau}^2/(4Mk+i).
\end{displaymath}
Thus
\begin{equation}
\begin{split}
I(k) &\sim e^{i\phi(u_0)}f(u_0)\int_{-\infty}^{\infty}\exp \left[\frac{i}{2}\phi''(u_0)(u-u_o)^2\right]du\\
&= e^{-i\pi /4}e^{i\phi(u_0)}f(u_0)\left[\frac{2\pi(4Mk+i)}{\bar{\tau}^2}\right]^{1/2}\\
&= (2\pi)^{1/2}e^{-i\pi /4}e^{(1-4iMk)}\left(4Mk+i\right)^{4iMk-1/2}f((4Mk+i)/\bar{\tau})
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
I(k)I^{*}(k)&=2\pi e^2 (4Mk+i)^{4iMk}(4Mk-i)^{-4iMk} (16M^2 k^2 +1 )^{-1/2}f\left(\frac{4Mk-i}{\bar{\tau}}\right)
f\left(\frac{4Mk+i}{\bar{\tau}}\right)\\
&=2\pi e^2 (16M^2 k^2 +1 )^{-1/2}\exp[{-8Mk \tan^{-1} (4Mk)^{-1}}]\left|f\left(\frac{4Mk-i}{\bar{\tau}}\right)\right|^2.
\end{split}
\end{equation}
In the large $Mk$ limit we have
\begin{equation}
I(k) \sim (2\pi)^{1/2}e^{-i\pi /4}e^{-4iMk}\left(4Mk\right)^{4iMk-1/2}f((4Mk)/\bar{\tau})
\end{equation}
which is the form we would obtain directly from (\ref{stat-phase}) and which we use in the body of the text.
\section*{Appendix 3: Evaluation of the decay rate, $\gamma$}
\label{gamma}
Let $\Gamma = \gamma /2 + i \Delta \omega$, then we have
\begin{equation}
\begin{split}
\Gamma &=i \frac{\omega g^2}{2m} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{dk}{k}\left[ \alpha^{*}_{k}(k'')\alpha_{k}(k')+\beta_k(k')\beta^{*}_k(k'')\right]\frac{e^{i(k''-k')\tau}}{\omega + k'} dk''dk'\\
&= \Gamma_{\alpha} + \Gamma_{\beta},
\end{split}
\end{equation}
where $\alpha_k(k')$ and $\beta_k(k')$ are given by (\ref{Alpha1}) and (\ref{beta1}). We are going to evaluate the $k'$ and $k''$ integrals by stationary phase. It will turn out that the stationary point lies in the region $3k+k' <0$ so we use the corresponding form for the $\alpha_k(k')$. Then, from appendix 2, we have
\begin{equation}
I_1=\int_{-\infty}^{\infty}\alpha_{k}^{*}(k'')e^{ik''\tau} dk'' = -\frac{2eMk}{(i \pi \bar{\tau})^{1/2}} e^{2\pi Mk} \Gamma(4iMk) e^{i \psi}\left(\frac{4Mk-i}{\bar{\tau}}\right)^{-4iMk-1/2}
\end{equation}
where $\psi =4iMk + 22iMk/3-3ik\tau$.
\\
Similarly
\begin{equation}
I_2 = \int_{-\infty}^{\infty}\alpha_{k}(k')e^{-ik'\tau} \frac{dk'}{\omega+k'}=I_{1}^{*}\times (\omega)^{-1}\left (1-\frac{3k\tau}{\omega \bar{\tau}}+\frac{i}{\omega \bar{\tau}}\right)^{-1}
\end{equation}
In the large $Mk$ limit we have
\begin{equation}
(4kM \pm i)^{\pm 4iMk} = (16M^2 k^2 +1)^{1/2} \exp \left[ \pm 4Mk \tan^{-1} (\pm 1/4Mk)\right] \sim e^{-1}.
\end{equation}
Finally we obtain
\begin{equation}
i \frac{\omega g^2}{2m}I_1 I_2 = \frac{i \lambda^2 M}{\omega} \int_{-\infty}^{\infty} \frac{e^{4\pi Mk}}{\sinh (4\pi Mk)} (1+16 M^2k^2)^{-1/2} \left(1-\frac{3k\tau}{\omega \bar{\tau}} + \frac{i}{\omega \bar{\tau}} \right)^{-1} dk
\end{equation}
To evaluate this we consider the large $Mk$ limit as $\tau \rightarrow -\tau_s$. Provided the oscillator is more than a small fraction of $\tau_s$ outside the horizon, this implies that $|\omega \bar{\tau}|$ is large. (Large here means $>\ \mathcal O(1)$ since the integrand is exponentially decreasing as a function of $Mk$.) Thus we can write
\begin{equation}
\frac{1}{ 1-\frac{3k\tau}{\omega \bar{\tau}} + \frac{i}{\omega \bar{\tau}}}=i\pi \delta\left(1-\frac{3k\tau}{\omega \bar{\tau}}\right) + {\rm PP}
\end{equation}
where PP stands for the principal part of the integral. The delta function then restricts $k$ to $k = \omega \bar{\tau}/3 \tau \sim - \omega \bar{\tau}/4M > 0$ and the contribution to $\gamma/2$ from the outgoing modes is
\begin{equation}
\gamma /2 = -\frac{\pi \lambda^2}{2 \omega} B(-2\pi \omega \bar{\tau} \tau_s /\tau) = -\frac{\pi \lambda^2}{2 \omega} [-1 + B(2\pi \omega \bar{\tau})] \rightarrow \pi g^2 /4m
\end{equation}
for $\bar{\tau}$ of order $M$ in the large $Mk$ limit (i.e.\ for $|\bar{\tau}| \gg \omega^{-1}$).
\\
We now have to consider the ingoing modes (the terms in $\beta_k(k')$ in (\ref{beta1})). Note that the contribution to $\gamma$ comes from the zero point energy, so we expect the ingoing modes to make an equal contribution to $\gamma$. We want to evaluate
\begin{equation}
\Gamma_{\beta} = i\lambda^2 \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{dk}{k}\beta_k(k')\beta^{*}_k(k'')\frac{e^{i(k''-k')\tau}}{\omega + k'} dk''dk'.
\end{equation}
To ensure convergence of the Fourier integrals we add a small imaginary part to $k'$. Inserting the expressions for $\beta_k(k')$ from (\ref{beta1}) we find the condition for stationary phase is $k'(k)=-(k/2)(1+\bar{\tau} /M) \rightarrow -k/2$ and the contribution to $\gamma/2 $ is
\begin{equation}
\frac{\gamma}{2}= \lambda^2 \Re\left(i\lim_{\epsilon \rightarrow 0}\left\{ \int \frac{dk}{k} \frac{1}{(\omega - 2k/3 - i \epsilon)}\right\} \right)= \frac{\pi g^2}{4m }.
\end{equation}
(We ignore the infrared divergence from the pole at $k=0$ since we are restricted to large $Mk$; in any case, it is an artefact of 1+1 dimensions.)
\\
Thus, in the vicinity of the black hole, the equal contributions from $\Gamma_{\alpha}$ and $\Gamma_{\beta}$ sum to give
\begin{equation}
\frac{\gamma}{2} = \frac{\pi g^2}{2m}.
\end{equation}
\section*{Appendix 4: The Fluctuation-Dissipation Theorem}
First we determine an expression for $|\chi^*(k)-\chi(-k)|^2$ using the definition given in (\ref{chi}):
\begin{equation}
|\chi^*(k)-\chi(-k)|^2=\left|\frac{1}{\omega+k+i\gamma/2}-\frac{1}{\omega-k-i\gamma/2} \right|^2
\end{equation}
We have that:
\begin{equation}
\Delta \chi(k)= \chi^*(k)-\chi(-k)=\frac{-2k-i\gamma}{(\omega + k +i\gamma/2)(\omega -k - i\gamma/2)}
\end{equation}
Thus:
\begin{equation}
|\chi^{*}(k)-\chi(-k)|^2=\frac{4k^2 +\gamma^2}{(\omega^2-k^2)^2 +\gamma^2 (\omega^2 + k^2)/2 + \gamma^4/16}
\label{mod chis}
\end{equation}
We also have
\begin{equation}
i\chi(k)+i\chi^*(k)=-\frac{\gamma}{2(\omega+k)^2+\gamma^2/2}
\end{equation}
and hence
\begin{equation}
\label{add chis}
(i\chi(k)+i\chi^*(k))-(i\chi(-k)+i\chi^*(-k))=\frac{4\gamma \omega k}{(\omega^2-k^2)^2 +\gamma^2 \omega k +\gamma^4/16}
\end{equation}
The functions (\ref{mod chis}) and (\ref{add chis}) are peaked around $k=\omega.$ Also $\gamma \ll \omega.$ Thus
\begin{equation}
|\chi^*(k)-\chi(-k)|^2 \sim \frac{4\omega^2 }{(\omega^2-k^2)^2 +\gamma^2 \omega^2 }
\end{equation}
and
\begin{equation}
(i\chi(k)+i\chi^*(k))-(i\chi(-k)+i\chi^*(-k)) \sim\frac{4\gamma \omega^2}{(\omega^2-k^2)^2 +\gamma^2 \omega^2}
\end{equation}
giving us the final result
\begin{equation}
i\Delta \chi(k) + i \Delta \chi^{*}(k)=\gamma |\Delta \chi(k)|^2
\end{equation}
which is our fluctuation-dissipation theorem. Note that we use the theorem to establish the relationship between the direct and interference terms at the detector, but we use the exact expression for the impedances in evaluating the integrals over frequency.
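The approximate identity can also be checked numerically from the exact expressions. A minimal sketch (assuming $\chi^*(k)=1/(\omega+k+i\gamma/2)$, as used in the first equation of this appendix, and reading $i\Delta\chi+i\Delta\chi^*$ as $2\Re(i\Delta\chi)$):
\begin{verbatim}
import numpy as np

# Check i*Dchi + c.c. = gamma * |Dchi|^2 near resonance for gamma << omega
omega, gamma = 1.0, 1e-3
chi_star = lambda k: 1.0 / (omega + k + 1j * gamma / 2)  # chi^*(k)
Dchi = lambda k: chi_star(k) - np.conj(chi_star(-k))     # chi^*(k) - chi(-k)

for k in (omega - 5 * gamma, omega, omega + 5 * gamma):
    d = Dchi(k)
    lhs = 2 * np.real(1j * d)        # i*Dchi(k) + its complex conjugate
    rhs = gamma * abs(d) ** 2
    print(k, lhs / rhs)              # ratio -> 1 around the peak k = omega
\end{verbatim}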
\section*{Appendix 5: Derivation of the Langevin Equation}
\label{Louiselle}
The integro-differential equation for the oscillator annihilation operator $A(t)$ derived from the
Hamiltonian (\ref{hamiltonian}) with the rotating wave approximation is \cite{louisell73quantum} (where $\kappa_j \rightarrow \lambda$ in our notation)
\begin{equation}
\frac{dA}{dt}=-\sum_j |\kappa_j|^2\int_0^tA(t')e^{i(\omega_j-\omega)(t'-t)}\ dt'+G_A
\label{L1}
\end{equation}
where
\begin{equation}
G_A=-i\sum_j \kappa_jb_j(0)e^{-i\omega_jt}.
\label{G-L}
\end{equation}
To ensure convergence of the Fourier transform we give $\omega_j$ a small imaginary part, $\omega_j \rightarrow \omega_j - i\epsilon$.
\\
The Wigner-Weisskopf approach to solving this equation involves taking the Laplace transform of both sides, and then applying an approximation, which essentially allows the replacement of (\ref{L1}) by the Langevin equation
\begin{equation}
\label{Langevin2}
\frac{dA}{dt}=-\left(\frac 1 2 \gamma + i \Delta \omega\right)A(t)+G_A(t),
\end{equation}
where
\begin{equation}
\gamma=2\pi g(\omega)|\kappa(\omega)|^2 \ \ \mathrm{and} \ \ \Delta\omega =-\int\frac{g(\omega_j)|\kappa(\omega_j)|^2\
d\omega_j}{\omega_j-\omega}.
\label{L-gamma}
\end{equation}
This approach works because the Laplace transform leads to an equation for $\tilde{A}(s)$, the Laplace transform of $A(t)$. This method is not available to us since the Laplace transform of (\ref{L1}) does not yield an equation for $\tilde{A}(s)$. (We could expand $\tilde{A}$ as a power series in $s$ but it is then difficult to control the approximation.) We now show that the Langevin equation may be obtained using integration by parts. Returning to (\ref{L1}) we integrate by parts with respect to $t'$:
\begin{equation}
\int_0^t A(t')e^{i(\omega_j-\omega)(t'-t)}\ dt'=\frac{A(t)}{i(\omega_j-\omega)}-\frac{A(0)e^{-i(\omega_j - \omega)t}}{i(\omega_j - \omega)}-\int_0^t \frac{dA}{dt'} \frac{e^{i(\omega_j-\omega)(t'-t)}}{i(\omega_j-\omega)}\ dt'.
\label{IBP-L}
\end{equation}
The integral in (\ref{IBP-L}) is of order $g$ smaller than the other terms, so can be neglected. The term in $A(0)$ represents the initial conditions and can be neglected in the differential equation. Thus (\ref{L1}) now becomes:
\begin{equation}
\frac{dA}{dt}=iA(t)\sum_ j\frac{|\kappa(\omega_j)|^2
e^{i(\omega_j-\omega)t}}{\omega_j-\omega}+G_A.
\end{equation}
We now convert the sum over $j$ into an integral over $\omega_j$:
\begin{displaymath}
\sum_j|\kappa(\omega_j)|^2 \rightarrow \int_0^\infty g(\omega_j)|\kappa(\omega_j)|^2 \ d\omega_j,
\end{displaymath}
and so, with $\omega_j\rightarrow \omega_j-i\epsilon$, we obtain:
\begin{equation}
\frac{dA}{dt}=G_A+iA(t)\int_{0}^{\infty}\frac{|\kappa(\omega_j)|^2g(\omega_j)e^{i(\omega_j-\omega-i \epsilon)t}}{\omega_j-\omega-i\epsilon}\ d\omega_j
\label{L2}
\end{equation}
Using the Sokhotski-Plemelj theorem \cite{inteqnsvol1}
\begin{equation}
\begin{split}
\lim_{s\rightarrow 0}\int_{0}^{\infty}\frac{|\kappa(\omega_j)|^2g(\omega_j)e^{i(\omega_j-\omega-is)t}}{\omega_j-\omega-is}\ d\omega_j=&
\mathcal P \left(\int_{0}^{\infty}\frac{|\kappa(\omega_j)|^2g(\omega_j)e^{i(\omega_j-\omega)t}}{\omega_j-\omega}\ d\omega_j\right)\\
&+i\pi \int_0^\infty |\kappa(\omega_j)|^2g(\omega_j)e^{i(\omega_j-\omega)t}\delta(\omega_j-\omega)\ d\omega_j
\end{split}
\end{equation}
Thus, after integrating out the delta function:
\begin{equation}
\frac{dA}{dt}=G_A-(\pi|\kappa(\omega)|^2g(\omega)+i\Delta \omega)A(t)
\end{equation}
which gives (\ref{Langevin2}) with $\gamma$ and $\Delta \omega$ as defined in (\ref{L-gamma}).
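As an illustration that this reduction is quantitatively sensible, one can integrate the original integro-differential equation (\ref{L1}) directly for a discretized flat bath and compare $|A(t)|$ with the Langevin prediction $e^{-\gamma t/2}$. A minimal sketch (the bath parameters are illustrative and not taken from the text; $G_A$ is dropped since we track the mean of $A$):
\begin{verbatim}
import numpy as np

omega = 5.0                       # oscillator frequency
gamma = 0.2                       # target decay rate
wmax, nbath = 10.0, 2000          # flat bath, symmetric about omega
wj = np.linspace(0.0, wmax, nbath)
g = nbath / wmax                  # density of states
kappa2 = gamma / (2 * np.pi * g)  # so that gamma = 2 pi g |kappa|^2

dt, nsteps = 0.01, 2000
t = np.arange(nsteps + 1) * dt
# memory kernel K(tau) = sum_j |kappa|^2 exp(-i (w_j - omega) tau)
K = kappa2 * np.exp(-1j * np.outer(wj - omega, t)).sum(axis=0)

A = np.zeros(nsteps + 1, dtype=complex)
A[0] = 1.0
for n in range(nsteps):   # Euler step of dA/dt = -int_0^t K(t-t') A(t') dt'
    mem = np.dot(K[:n + 1][::-1], A[:n + 1]) * dt
    A[n + 1] = A[n] - dt * mem

for n in (500, 1000, 2000):
    print(t[n], abs(A[n]), np.exp(-gamma * t[n] / 2))
\end{verbatim}
Since the flat band is symmetric about $\omega$, the level shift $\Delta\omega$ vanishes in this setup and only the decay remains.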
\newpage
\section{Introduction}
\label{sec:introduction}
Advanced mathematical methods have been used in finance for a long time to understand the functioning of the market. In this continuously fluctuating environment, probability theory provides the solid basis on which the assessment of present values and risk mitigation techniques can be built. This aspect of the market has become even more pronounced after the crisis in 2008. Since then the market has been more prudent, and collateralization is often applied even for simple products. New, more complicated financial products have appeared, and the use of computers in trading has become more and more widespread. All of these facts increase the role of mathematical methods in finance.
There are numerous well written books on mathematical finance, for example \cite{Hull2006, shreve03,shreve032}. These books, and most of the financial literature, use the phrasing of probability theory that was founded by Kolmogorov \cite{kolmogorov2013} and {It$\hat{\mathrm{o}}$}{} \cite{Ito1986} in the first half of the XX. century. This approach considers the stochastic process as a measure which can be used for integrating a function (an adapted process). This thought nicely fits into the mathematical movements of the early XX. century, namely the rise of measure theory and the Lebesgue integral.
At the same time, however, a different formalism describing probabilistic processes was also born, mainly driven by physicists: Einstein, Langevin, Fokker, Planck, and later Dirac and Feynman. Here we treat the stochastic process as a differential equation (\emph{Langevin-equation}), where in the source term an unusual, fast oscillating function appears, called white noise. The white noise is a normally distributed random function where the correlation between different times is described by a Dirac-delta. In the 1920's, however, it was absolutely unclear how to deal with the Dirac-delta ``function''. It was only in the 1950's that Schwartz gave a mathematically satisfying description \cite{Wikidistribution} as a distribution.
An alternative rephrasing of the Langevin-equations can be given using an integral representation, called a \emph{functional (or path) integral}. This approach was initiated by Wiener in the 1920's, but its full weight was attained through the work of Dirac and Feynman in the 1940's \cite{Wikipathintegral}. With this formulation the same problem appeared as for the Dirac-delta earlier: the continuum limit, except for some elementary cases like the Wiener-integral, seemed to be senseless.
The solution for giving sense to the path integral (and, in fact, to all of quantum field theory) arrived only in the 1970's with the ideas of renormalization (for a summary and references cf.\ \cite{WikiRG}). The main idea is in fact related to the ones used in defining the Lebesgue integral and the Dirac-delta: we approach the continuum limit through some discretization, and we study the change of the results under the change of the discretization. But, unlike in the case of integrals and distributions, the continuum limit is much more complicated in this case, and we must always keep referring to the discretization scale. Actually, although this could seem a flaw in the line of thought, it leads to new, measurable effects (running coupling constants, trace anomaly) \cite{Collins1984}.
This solution gave a huge impact on the development of statistical physics and quantum field theory, disciplines where the formalism strongly relies on the path integral. Present day numerical computations of elementary particle physics mostly use path integral methods in some discretization, and the continuum limit is taken only at the end of the computations. In this way, however, precise numerical results can be obtained (cf.\ for example \cite{Borsanyi:2016ksw}).
Monte Carlo (MC) simulations are used in various fields nowadays, including finance. In the financial sector most models are extensions of the Brownian motion, and so Gaussian MC simulations can be applied to simulate the price movements.
The purpose of this note is to give an introduction to finance in the language of physics. As such, it is part of an ongoing effort to bring the ideas of physics into finance and vice versa \cite{Mantegna2000, Baaquie:2002tt, Schmidt2004,Kakushadze:2014bea, Jovanovic2017, Baaquie2018}.
This note is built up as follows. We define the mathematical space that corresponds to the market (Section \ref{sec:spaceoftrade}), then we discuss the value of a portfolio in Section \ref{sec:valueofportfolio}. In Section \ref{sec:statapproach} we look at the market from the point of view of statistics, and introduce the tools for treating the price changes in a discretized formulation. In Section \ref{sec:continuous} we turn to the possibility of a continuous approximation. In the next section (Section \ref{sec:solutionofstochdiff}) we solve some stochastic differential equations. In Section \ref{sec:risks} we discuss risk mitigation techniques applied in the market, and the way the assumption of risk neutrality leads to the determination of the price of a derivative (Section \ref{sec:pv}). The paper closes with a Summary section (Section \ref{sec:summary}).
\section{The space of trades}
\label{sec:spaceoftrade}
In order to be able to speak about financial products we have to define an abstract space that represents the trades. To understand the logic we recall that trading traditionally stems from the exchange of properties of different people, families, tribes, or later firms. All tradeable property will be called an asset, be it direct material goods like vegetables, cattle or tools, or indirect ones such as land, workpower or even the life of a person (which is traded, for example, when somebody enters the army). Assets can have parameters (for example quality, expiration date, etc.); if they differ in these, we treat them as different assets.
The property of a trader usually consists of several assets. They can have a house, two horses, five and a half barrels of oil, and also three and a half cows if two persons own seven cows together. In the property (we will call it a \emph{portfolio}) thus each asset has some quantity. The property or portfolio is thus the list of all the assets with their available quantities.
The mathematical structure corresponding to this construction is the \emph{vector space}. Let us denote by $A$ the vector space (asset space or portfolio space) whose basis elements are the assets. Although it could be thought to be infinite dimensional (because, for example, the quality forms a continuum), in practice only a finite number of asset types are traded, so we do not lose anything if we think of it as a finite dimensional vector space. We mathematically define the portfolio as an element of the asset space
\[\P\in A.\]
In finance there is a singled-out asset that plays a universal role, and this is money. In economics money has various roles; here we just consider one aspect, its role as the universal exchange tool. We use US dollars as numeraire, and denote the corresponding asset by USD. So if we have ten dollars and two dogs, then our portfolio can be described as $\P=10\mathrm{USD}+2\mathrm{dogs}$.
\subsection{Loans and other promises}
What makes it more interesting is that not only actual goods can be traded in a spot exchange, but other ``financial products'' as well. One of the simplest financial products is a loan. This can be money, but other assets can be lent and borrowed, too.
Whoever has a debt has, in some sense, negative property. If we owe three cows then our portfolio could be written as $-3\mathrm{cow}$. But this is not the most adequate notation, and sometimes it can lead to misunderstandings. The reason is that if we have three cows and owe three cows, the above notation would suggest writing $\P=3\mathrm{cow}-3\mathrm{cow}=0$. But it is not true that we have nothing, because we can use the benefits of the cows, for example we can drink their milk.
Thus, somewhat generalizing the concept of the loan, we will speak about general \emph{promises} or liabilities. A debt can be considered as a promise that we will give (back) a certain asset if we are asked for it. The loan is the opposite: somebody has promised us a payoff at some time. In fact actual assets and promises on actual assets are the main constituents of the more complicated financial products.
Let us denote the promise with $p$, and its argument is the asset that is promised. The loan is a positive promise, because when it is given, one will possess the given asset. This means that if we have three cows and owe three cows, then our property is
\begin{equation}
\P = 3\mathrm{cows} - 3p(\mathrm{cows}).
\end{equation}
We cannot simplify this expression any further; it means exactly what we want it to. $p$ is defined to be a linear map of the asset space
\begin{equation}
p : A\to A,\qquad p(\alpha a + \beta b) = \alpha p(a)+\beta p(b)
\end{equation}
A promise, since it concerns future events, can have several more parameters; that is why it is worth denoting it as a function. A usual parameter is the maturity (or tenor, or expiration time), denoting when the promise is due. If we denote the present time as $t=0$, then
\begin{equation}
\P = 3\mathrm{cows} - 3p(\mathrm{cows},T),\qquad T=1y
\end{equation}
means that we should deliver 3 cows in one year from now. $T$ can be a time interval, discussed later.
\subsection{Common financial products}
In this language we can describe a lot of financial products. For example a loan with notional $X$ USD, paid back in parts, can be described as
\begin{equation}
\label{eq:loan}
\P = X\mathrm{USD} -\sum_{n=1}^Nc_n p(\mathrm{USD},t_n)-X_rp(\mathrm{USD},T),
\end{equation}
where $c_n$ is the interest payment to be made at time $t_n$ (for example $t_n=n\mathrm{m}$ for monthly payoffs), and $X_r$ is the remainder due at expiration time $T$. To determine the values of the parameters $c_n,\,N,\,T$ and $X_r$ at fixed $t_n$, we can use different techniques discussed later. For a fixed rate loan $c_n$ is constant.
Another product is the futures trade, where an asset 'a' is agreed to be bought or sold at a given \emph{strike} price $K$ at maturity time $T$. If we want to buy that asset, we are said to be in the \emph{long position}, and our portfolio consists of
\begin{equation}
\label{eq:longfutures}
\P = p(a,T)-Kp(\mathrm{USD},T).
\end{equation}
If we want to sell the asset, we are said to be in the \emph{short position}, and our portfolio is
\begin{equation}
\label{eq:shortfutures}
\P = -p(a,T)+Kp(\mathrm{USD},T).
\end{equation}
Another interesting parameter of a promise can be its optionality. One of the counterparties may have the right not to fulfill or not to exercise the promise. In this case the two parties are not equivalent. We call the one who possesses the optionality to be in the long position; the other counterparty (who ``sells the optionality'') is in the short position, irrespective of whether the promise is about buying or selling something.
A possible notation for options is to multiply the possible payoffs by a number $\alpha\in\{0,1\}$. When $\alpha=1$, the promise is fulfilled, otherwise it is denied. It is also important who has the right to decide the value of $\alpha$, which we indicate with a $\pm$ index: if the index is $+$, then the portfolio owner has the right to set the value of $\alpha$ (i.e.\ she is in the long position); if the index is $-$, then someone else determines its value (so the portfolio owner is in the short position with respect to the option).
For example if we agreed that trader 'A' has the option to buy a product 'a' at time (or time interval) $T$ for a strike price $K$ from trader 'B' (European option), then their portfolios read
\begin{equation}
\P_A = \alpha_+(p(a,T)-Kp(\mathrm{USD},T)),\qquad \P_B = \alpha_-(-p(a,T)+Kp(\mathrm{USD},T)).
\end{equation}
The exercise date can also be optional: in an American option it is any value in $[0,T]$, while in a Bermudan option there are some fixed dates. Similarly to the previous case, we can denote its optionality by a subscript $\pm$. An American option can be described as
\begin{equation}
\P_A = \alpha_+(p(a,T_+)-Kp(\mathrm{USD},T_+)),\qquad \P_B = \alpha_-(-p(a,T_-)+Kp(\mathrm{USD},T_-))
\end{equation}
where $T_+=T_-\in[0,T].$
We note that the strike price can also be a complicated construction, even depending on the price history. For example we can agree that the buyer of the option has the right to sell a given asset at the average price that was achieved in a given time interval (Asian option), or other, more exotic constructions.
We also note that, although the choice of $\alpha$ is completely up to the trader in the long position, sensible traders choose $\alpha=1$ exactly when it is beneficial to them, as illustrated below. This makes it possible to determine the price of the option, as we will see later.
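In code, the rule ``choose $\alpha=1$ exactly when it is beneficial'' turns the payoff of the long European option of the previous equations into $\max(S_T-K,0)$. A minimal sketch:
\begin{verbatim}
import numpy as np

# Long European call: the holder sets alpha at maturity T,
# so alpha = 1 exactly when S_T - K > 0, giving payoff max(S_T - K, 0).
def call_payoff(S_T, K):
    alpha = (S_T - K > 0).astype(float)  # the holder's optimal decision
    return alpha * (S_T - K)

S_T = np.array([80.0, 100.0, 125.0])
print(call_payoff(S_T, K=100.0))         # [ 0.  0. 25.]
\end{verbatim}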
\section{Value of the portfolio}
\label{sec:valueofportfolio}
By now we can describe what we currently possess. In a trade we exchange two (or more) assets. But the question is: how much is a given asset worth? Clearly no one would bargain away their property, but at the same time everybody wants to achieve the highest price possible.
On the other hand there is no explicit value measure for the goods, in particular because goods may have a hidden advantage for somebody, who is then willing to buy them at a higher price, too. So the only measure of the value of an asset is the price at which it is actually traded. A well informed trader will trade the asset at exactly the price that is adequate at that moment. The lack of information leads to failed trades, or to \emph{arbitrage}, when an asset can be bought from and sold to different parties, realizing a net profit.
If a market is well informed, and there are a lot of vigilant merchants around, then arbitrage cannot persist for a long time. If this were strictly true, then there would be a single price for each asset. But actually it is just an approximation, since nobody knows that value, and so all the trades modify the price somewhat. A momentary excess in demand will raise the price, while a momentary excess of offers will lower it, and this is repeated time and time again. So, if we insist on having a definite price, we have to say that the prices \emph{fluctuate}.
If we sell or buy several assets, then we trade them separately. This means that the price (value of the portfolio) is a linear map from the asset space and time to the real numbers (actually $\bm R_+$). Thus
\begin{equation}
\begin{array}[t]{lrll}
S\,:\, & A\times \bm R& \to \bm R_+ & \qquad\mathrm{linear}\cr
& (a,t) & \mapsto S(a,t)&\cr
\end{array}
\end{equation}
gives the price/value of the asset $a$ at a time $t$.
In a fair business neither of the counterparties loses; both of them give or receive the price which corresponds to the assets they trade. If it is a spot bargain, then both parties know the market price, and this serves as a reference point. But if the payoffs happen in the future, one needs a tool to compute the value of the asset at present. This is the \emph{present value}, and this forms the basis of a fair trade.
\subsection{Discounting a risk free zero coupon bond}
The simplest future payoff is the zero coupon bond, which is $p(\mathrm{USD},T)$, i.e.\ it pays 1 USD at a future time $T$ once. We also assume that it is risk free, meaning that we can count on the payoff with one hundred percent certainty. For example we may think of a US government bond. Our task is to tell its value at time $t$, which is called \emph{discounting} the value of the payoff.
To tell the present value, we have to compare the investment in a zero coupon bond to a bank deposit in a safe bank. If it were more advantageous to invest in a bank deposit, then we would short the zero coupon bond now, and put the money in the bank deposit. At time $T$ the bank deposit would have a higher value, and so we could gain money with zero starting capital. If the investment in the zero coupon bond were more advantageous, we could do the inverse: we borrow money from a bank, put it into the bond, and realize a net profit at time $T$. To avoid these arbitrage possibilities, the present values of a risk free zero coupon bond and a risk free bank deposit must agree.
But the bank pays interest on all deposits. In the simplest case it is a fixed annual interest rate $r_1$. Technically the payment of the interest happens periodically, in each $dt$ time period, with the corresponding interest rate $r_{dt}$. $r_{dt}$ can be determined from the condition that after one year we get the rate $r_1$ (assuming $1/dt$ is an integer)
\begin{equation}
(1+r_{dt})^{[1/dt]}=1+r_1\quad\Rightarrow\quad r_{dt}=(1+r_1)^{dt}-1
\end{equation}
In case $dt\to0$ (called \emph{continuous compounding}) we denote $r_{dt}=dt\,r$. Then
\begin{equation}
(1+r_{dt})^{[t/dt]} = \left(1+\frac{rt\,dt}t\right)^{t/dt}\stackrel{dt\to0}{\longrightarrow} e^{rt}.
\end{equation}
This also means that $r=\ln(1+r_1)$.
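The limit can be checked numerically: with $r_{dt}\approx r\,dt$, the compound factor $(1+r\,dt)^{t/dt}$ approaches $e^{rt}$ as $dt\to0$. A minimal sketch:
\begin{verbatim}
import numpy as np

r1 = 0.05                  # annual interest rate
r = np.log(1 + r1)         # continuously compounded rate
t = 2.0                    # years
for dt in (1.0, 1 / 12, 1 / 365, 1 / 8760):
    print(dt, (1 + r * dt) ** (t / dt))
print("e^{rt} =", np.exp(r * t))
\end{verbatim}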
If we deposited $X$USD in the bank at time $t$, at a later time $T$ it is worth $Xe^{r(T-t)}$ USD. This should be compared to the case, when we buy a zero coupon bond at time $t$ with maturity $T$. In an arbitrage-free fair business both should have a value of 1USD at time $T$, so we require $Xe^{r(T-t)}=1$. Thus the value of the zero coupon bond at time $t$ is
\begin{equation}
X = S(p(\mathrm{USD},T),t) = e^{-r(T-t)} = (1+r_1)^{-(T-t)}.
\end{equation}
This formula makes it possible to determine the value of $c$ for a fixed rate loan. The portfolio was given in \eqref{eq:loan}. In a fair business the value of the portfolio is zero at all times. Let us compute it at time zero (present time), when we have
\begin{equation}
0=S(\P,0)= X - \sum_{n=1}^N c S(p(\mathrm{USD},t_n),0) - X_r S(p(\mathrm{USD},T),0).
\end{equation}
Let us choose $t_n=n\,\Delta t$, $T=(N+1)\Delta t$, and denote the actual interest rate (which is the risk free interest rate plus the spread) by $r$. Then we find
\begin{equation}
X = c \frac{e^{-r\Delta t}-e^{-rT}}{1-e^{-r\Delta t}} + X_r e^{-rT},
\end{equation}
and, correspondingly,
\begin{equation}
c = (X- X_r e^{-rT}) \frac{1-e^{-r\Delta t}}{e^{-r\Delta t}-e^{-rT}}.
\end{equation}
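As a worked example, the fair fixed payment follows directly from the last formula. A minimal sketch with illustrative inputs (a fully amortizing, $X_r=0$, monthly loan):
\begin{verbatim}
import numpy as np

def fixed_payment(X, X_r, r, dt, T):
    """Fair fixed payment c for the loan portfolio (eq:loan)."""
    return (X - X_r * np.exp(-r * T)) * (1 - np.exp(-r * dt)) \
           / (np.exp(-r * dt) - np.exp(-r * T))

# 100 USD notional, 4% continuous rate, monthly payments over 10 years
print(fixed_payment(X=100.0, X_r=0.0, r=0.04, dt=1 / 12, T=10.0))
\end{verbatim}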
Therefore the condition of arbitrage freeness in the absence of risk leads to a definite price for the zero coupon bond, and a definite value of the fixed rate paying.
\subsection{Discounting the price of an asset}
Let us assume that we have a promise that we are given an asset $a$ at time $T$, so our portfolio is $p(a,T)$. What is the value of the portfolio at time $t$?
What we certainly know is that
\begin{equation}
\label{eq:SpaT}
S(p(a,T),T)=S(a,T),
\end{equation}
since the promise is fulfilled then, we obtain the asset, and its price is what is determined by the market at that time. We claim that it is true at other times as well, i.e.
\begin{equation}
S(p(a,T),t)=S(a,t),
\end{equation}
it does not depend on $T$.
The reason is that if $S(p(a,T),t)>S(a,t)$, then we buy the asset now, and at the same time we sell the promise of delivery at time $T$. Therefore we now have the asset $a$, paid its value ($-S(a,t)\,\mathrm{USD}$), we promised a delivery of $a$ at time $T$ (this is $- p(a,T)$), and we obtained the price for the promise $S(p(a,T),t)\,\mathrm{USD}$. Our portfolio therefore reads
\begin{equation}
\P_1(t) = a - S(a,t)\,\mathrm{USD} - p(a,T) + S(p(a,T),t)\,\mathrm{USD}.
\end{equation}
The value of the portfolio is zero at time $t$. Its value at time $T$, if the promise is fulfilled
\begin{equation}
S(\P_1,T) = S(a,T)- S(p(a,T),T) + (S(p(a,T),t)-S(a,t)) S(\mathrm{USD},T).
\end{equation}
But the first two terms cancel each other by equation \eqref{eq:SpaT}, and so what remains is
\begin{equation}
S(\P_1,T) = (S(p(a,T),t)-S(a,t)) S(\mathrm{USD},T) >0.
\end{equation}
Therefore we could gain money. If $S(p(a,T),t)<S(a,t)$, then we build a portfolio
\begin{equation}
\P_2 = -a + S(a,t)\,\mathrm{USD} + p(a,T) - S(p(a,T),t)\,\mathrm{USD},
\end{equation}
for which $S(\P_2,0)=0$ and $S(\P_2,T)= (S(a,t)-S(p(a,T),t)) S(\mathrm{USD},T) >0$ again. To exclude this arbitrage possibility we need to have $S(p(a,T),t)=S(a,t)$, which is what we wanted to demonstrate.
We remark that the two cases are somewhat different. If the price of the promise is larger than the actual price, we can immediately realize a profit without any original capital. The other case is feasible only if we already own the asset; otherwise we cannot realize the $-a$ part of the portfolio. But, if the asset is liquid enough, there are enough assets in the market to forbid this arbitrage.
Using this result we can give the price of a futures trade. The portfolio of a long position is given by \eqref{eq:longfutures}, its price is therefore
\begin{equation}
S(p(a,T)-Kp(\mathrm{USD},T),t) = S(a,t)-Ke^{-r(T-t)}.
\end{equation}
\section{Statistical approach to the market}
\label{sec:statapproach}
In fact the discounting of an asset price is the only calculation which is independent of the way the market operates. Already the calculation of the discount factor of a fixed payoff depends strongly on the details, in this case on the interest rate. A fair business takes into account the market rates which, however, fluctuate in time. Therefore we should understand how the market operates, how the prices are determined, and why and how they fluctuate. This is a very complicated question, and we can just hope that we find a satisfactory approximation.
The first point we have to clarify is the recording of the prices. Although previously we used a continuous time notation, it is an abstraction, an approximation. In reality all the recordings have a time stamp that is not infinitely fine. There is a smallest time difference that can be resolved, say $d\tau=1\mu$sec (as an upper estimate); thus all trades and prices can be characterized by an integer; in particular the price of asset $a$ at time $t=n d\tau$ will be denoted as $S_{na}$. We will use a fixed number $N$ of assets; then the vector of all prices is $S_n = (S_{n1}, S_{n2},\dots,S_{nN})$. Sometimes we will put a comma between the two indices in order to avoid misunderstanding, for example we will write $S_{n+1,a}$.
When we think about a dynamic model of price changes we must pin down that in a \emph{complete model} the price in the future must depend solely on the information available at present. In fact, we cannot make decisions based on past events if they are forgotten. The only way of remembering past events is to make notes (eventually in our memory) about them, and then it is information available in the present. So we may write generally
\begin{equation}
S_{n+1} = S_{n} + {\cal F}(\mathrm{information\ available\ at\ present}).
\end{equation}
The factors determining the evolution of the price are, of course, numerous. Moreover, for a quantitative prediction we would have to know the actual form of $\cal F$. Thus predicting the price in the future seems to be impossible.
Still, we can benefit from the generic form above. We may divide the information available at present into three parts. The first part consists of externalities that do not depend on the status of the market: for example natural events like the weather, new discoveries, inventions, political or military actions. In a market model we do not want to describe their dynamics; we take them as given processes, and as such they can be taken into account as an explicit time dependence. We may hope that these effects are slow (usually they are, but for example the weather can have significant influence in certain areas also on a daily basis).
The second part of the variables describes the market. Among them there are the asset prices, but other market factors can also be present, like forward rates. They appear on both sides of the equation, and we denote them collectively by $S$.
The third part is again (mainly) independent of the status of the market, but these are fast processes. They consist, for example, of the momentary intentions of the participants of the market. Let us denote them as $\xi_i$, where $i$ runs through some (large) index set. These processes are in principle well defined, they follow their own dynamics, but it is impossible to tell their time dependence from the knowledge of the asset prices. All in all we have the equation
\begin{equation}
\label{eq:deteq}
S_{n+1} = S_{n} + {\cal F}_n(S_n, \xi_{in}).
\end{equation}
Were the $\xi_i$ absent from the above equation, we could determine $\cal F$ from the observation of price changes in the past, and eventually recalibrate its form from time to time. But it is hopeless to determine the actual form of the $\xi_i$ functions. What helps us in this situation is that they are numerous, and although they are deterministic one-by-one, their net effect is still something that can be described \emph{statistically}. This means that we \emph{assume} a time dependence for them, solve the above equation for all possible time dependences, and finally we average over the results with some weight. We will assume that these variables are normalized in a way that they fluctuate around zero (their mean is treated as a deterministic effect).
\subsection{Linearization}
Using the fact that the $\xi_i$ effects are small one-by-one, we can power expand the $\cal F$ function to first order
\begin{equation}
S_{n+1} = S_{n} + {\cal F}_n(S_{n},0) + \xi_{in}\frac{\partial{\cal F}_n}{\partial\xi_{in}}\biggr|_{(S_{n},0)}+\dots.
\end{equation}
The last term is a weighted sum of the $\xi_i$ variables at time index $n$. Now we can argue that the distribution of a sum of mostly independent random variables (with bounded variance) is a \emph{Gaussian}. This is the central limit theorem; some conditions need to be fulfilled for it to hold, which we tacitly assume to be the case here. Thus the last term can be substituted by a single term with some generic coefficient:
\begin{equation}
S_{n+1} = S_{n} + {\cal F}_n(S_{n},0) + Z_n(S_{n}) \xi_n,
\end{equation}
where the $\xi_n$ variables are all Gaussian distributed random variables with zero mean and unit variance. We will assume that these random variables are independent for different times: indeed, we can argue that there are different trades throughout the world at random times, and so their interrelation is weak. But we must know that this is again an approximation, because if we do not observe all effects, the effective dynamics of the rest will contain memory effects. What we assume is that these memory effects are small.
Although all the formulae are supposed to be written for multi-component variables, it may be useful to write out the indices explicitly. In the multi-component notation the above equation can be written as
\begin{equation}
\label{eq:multivariateS0}
S_{n+1,a} = S_{na} + {\cal F}_{na}(S_{n},0) + Z_{na}(S_{n}) \xi_{na}.
\end{equation}
The $\xi_{na}$ random variables are not necessarily independent for different assets
\begin{equation}
\label{eq:covariancematrix}
\bm{E}\left(\xi_{na}\xi_{mb}\right) = C_{n,ab} \delta_{nm},
\end{equation}
and so the covariance matrix of the complete noise term reads
\begin{equation}
\label{eq:covariancematrixtotal}
\bm{E}Z_{na}\xi_{na}Z_{mb}\xi_{mb} = \delta_{nm} Z_{na}Z_{nb} C_{n,ab}.
\end{equation}
To simplify the treatment, we diagonalize the correlation matrix (which is a symmetric regular real matrix) as
\begin{equation}
C_{n,ab} = \sum_{k=1}^N \lambda_{nk} v_a^{(nk)}v_b^{(nk)},
\end{equation}
where the $\bm v^{(nk)}$ vectors are eigenvectors of the covariance matrix $\bm C_n$, and they are orthonormal: $\bm v^{(nk)}\bm v^{(n\ell)}=\delta_{k\ell}$. Then we can write \eqref{eq:multivariateS0} as
\begin{equation}
\label{eq:multivariateS}
S_{n+1,a} = S_{na} + {\cal F}_{na}(S_{n},0) + \sum_{k=1}^N Z_{n,ak}(S_{n}) \xi_{nk}
\end{equation}
with the volatility matrix
\begin{equation}
Z_{n,ak} = Z_{na} \sqrt{\lambda_{nk}} v_a^{(nk)},
\end{equation}
and uncorrelated noise terms
\begin{equation}
\bm{E}\left(\xi_{nk}\xi_{m\ell}\right) = \delta_{k\ell} \delta_{nm}.
\end{equation}
Indeed, the correlation of the noise term reads now as
\begin{equation}
\bm E\left(\sum_{k=1}^N Z_{n,ak}\xi_{nk}\right)\left(\sum_{\ell=1}^N Z_{m,b\ell}\xi_{m\ell}\right) = \delta_{nm} Z_{na}Z_{nb} \sum_{k=1}^N\lambda_{nk} v_a^{(k)} v_b^{(k)}
\end{equation}
which is exactly the complete covariance matrix \eqref{eq:covariancematrixtotal}.
All the above means that it is enough to have as many random Gaussian variables as there are assets on the market (originally we had many more). These variables can be thought of as independent, and they appear in the evolution equations multiplied by the volatility matrix $Z_{n,ak}$. Thus the joint distribution of the random variables is
\begin{equation}
{\cal P}(\{\xi\}) = \prod_{nk}{\cal P}_G(\xi_{nk}),\qquad {\cal P}_G(\xi) = \frac1{\sqrt{2\pi}}e^{-\frac{\xi^2}2}.
\end{equation}
From now on we suppress the multidimensional indices, treat $Z$ as a matrix $Z_{n,ak}$, and $\xi$ as a vector $\xi_{nk}$.
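The diagonalization above translates directly into a recipe for generating correlated noise: draw independent unit normals $\xi_{nk}$ and apply the matrix built from the eigensystem of the correlation matrix. A minimal sketch (a sample $3\times3$ correlation matrix, with the per-asset scale $Z_{na}$ set to one):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])         # positive definite correlation matrix
lam, v = np.linalg.eigh(C)              # C = sum_k lam_k v^(k) v^(k)^T

Z = v * np.sqrt(lam)                    # Z_{ak} = sqrt(lam_k) v_a^(k)
xi = rng.standard_normal((3, 200_000))  # independent unit normals xi_k
noise = Z @ xi                          # correlated increments per asset
print(np.cov(noise))                    # recovers C up to sampling error
\end{verbatim}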
\subsection{Scaling under changing of the discretization time}
In the above discussion the value of $d\tau$ could be chosen arbitrarily. Our first guess was $1\mu$sec, but it could just as well be $2\mu$sec or even $0.5\mu$sec. What effect does this have on the form of the dynamic equation?
Let us first assume that we want to work with $dt=2d\tau$. This can be thought of as wanting to obtain $S_{n+2}$ directly from $S_{n}$. When we recursively substitute the equation of $S_{n+1}$ into the equation of $S_{n+2}$ we have a lengthy expression. But the price changes are so small in this time interval that in the arguments of the ${\cal F}$ and $Z$ functions we can use the previous value. This simplifies the discussion to
\begin{equation}
S_{n+2} = S_{n} + 2{\cal F}_n(S_{n},0) + \sqrt{2}Z_n(S_{n})\frac{\xi_n+\xi_{n+1}}{\sqrt2},
\end{equation}
where in the last expression we divided and multiplied by $\sqrt{2}$. The distribution of the sum of independent Gaussian random variables is a Gaussian random variable. The correlation matrix coming from the last expression is
\begin{equation}
\frac12\bm{E}(\xi_{na}+\xi_{n+1,a})(\xi_{n,b}+\xi_{n+1,b}) = \frac12\bm{E}(\xi_{na}\xi_{nb})+\frac12\bm{E}(\xi_{n+1,a}\xi_{n+1,b}) = \delta_{ab}
\end{equation}
which is the same as for $\xi_n$. Thus we may write
\begin{equation}
S_{n+2} = S_{n} + 2{\cal F}_n(S_{n},0) + \sqrt{2}Z_n(S_{n})\xi_n.
\end{equation}
This can be generalized to arbitrary $dt$ (as long as the change of the prices in this time interval is negligible): the first term is multiplied by $dt/d\tau$, the second term, on the other hand, by $\sqrt{dt/d\tau}$:
\begin{equation}
S_{n+dt/d\tau} = S_{n} + \frac{dt}{d\tau}{\cal F}_n(S_{n},0) + \sqrt{\frac{dt}{d\tau}}Z_n(S_{n})\xi_n.
\end{equation}
We may introduce the notations
\begin{equation}
\mu_n(S_{n})=\frac1{d\tau}{\cal F}_n(S_{n},0),\qquad
\sigma_n(S_{n}) = \frac1{\sqrt{d\tau}}Z_n(S_{n}),\qquad
dS_{n}=S_{n+dt/d\tau}-S_{n},
\end{equation}
and then we can write for the $dt$ discretization time
\begin{equation}
\label{eq:dSdisc}
dS_{n} = \mu_n(S_{n})\,dt + \sigma_n(S_{n})\sqrt{dt}\,\xi_n.
\end{equation}
We remark that in the multi-dimensional case $\sigma$ is a matrix in the asset price space.
This form shows that the continuous time limit is not trivial: not all the variables scale like $dt$, and so in the $dt\to0$ limit the above equation does not go to a differential equation. Indeed, the continuous limit is known as a \emph{stochastic differential equation}.
\subsection{Numerical computation of an expectation value}
From the practical point of view, the treatment of \eqref{eq:dSdisc} looks like the following. First we find the solution $S_{n+1}$ depending on the time series $\xi = \{\xi_0,\xi_1,\dots,\xi_n\}$ and on the initial condition $S_0$. Let us denote it
\begin{equation}
S_{n+1}(S_0,\xi).
\end{equation}
Here we have used the fact that $S_{n+1}$ can depend only on the past events. Then we should calculate the expected value of any function of $S_{n+1}$ by averaging over the possible $\xi$ series with independent Gaussian weights. In formulae this reads
\begin{equation}
\label{eq:avr}
\bm E f(S_{n+1}) = \int\limits_{-\infty}^\infty \frac{d^N\xi_0}{(2\pi)^{N/2}}e^{-\frac12\xi^2_0}\dots \frac{d^N\xi_n}{(2\pi)^{N/2}}e^{-\frac12\xi^2_n} f(S_{n+1}(S_0,\xi)).
\end{equation}
The two equations \eqref{eq:dSdisc} and \eqref{eq:avr} provide a well-defined framework to solve any stochastic problem numerically.
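A minimal sketch of this framework in Python (Euler scheme for \eqref{eq:dSdisc} and sample average for \eqref{eq:avr}), here for a single asset with constant $\mu$ and $\sigma$ so that the result can be checked against $\bm{E}S = S_0+\mu T$ and $\mathrm{Var}\,S = \sigma^2 T$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def expect_f(f, S0, mu, sigma, T, nsteps, npaths):
    """Monte Carlo estimate of E f(S_n) via (eq:dSdisc) and (eq:avr)."""
    dt = T / nsteps
    S = np.full(npaths, S0)
    for _ in range(nsteps):
        xi = rng.standard_normal(npaths)
        S = S + mu * dt + sigma * np.sqrt(dt) * xi
    return f(S).mean()

print(expect_f(lambda S: S, 100.0, 5.0, 2.0, 1.0, 250, 100_000))  # ~ 105
print(expect_f(lambda S: (S - 105.0) ** 2,
               100.0, 5.0, 2.0, 1.0, 250, 100_000))               # ~ 4
\end{verbatim}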
Often we use a moment generating function that is defined as
\begin{equation}
\label{eq:MGF}
\bm E e^{JS} = \int\limits_{-\infty}^\infty \frac{d^N\xi_0}{(2\pi)^{N/2}}e^{-\frac12\xi^2_0+J_0S_0}\dots \frac{d^N\xi_n}{(2\pi)^{N/2}}e^{-\frac12\xi^2_n+J_nS_n},
\end{equation}
where $JS=\sum_{a,n} J_{na} S_{na}$ and the $S$ series satisfy \eqref{eq:dSdisc}.
\subsection{Change of variables}
A very interesting consequence of the different scaling properties of the various terms in \eqref{eq:dSdisc} is that, in the case of a change of variables, a nontrivial extra term appears.
Let us assume that we have a new variable $X=f(t,S)$, where $f$ is a smooth function. In the discretized case it reads $X_n=f_n(S_n)$. Let us consider the change in $X$ up to ${\cal O}(dt^{3/2})$:
\begin{equation}
dX_n = f_{n+1}(S_{n+1})-f_n(S_n) = \partial_t f_n(S_n)dt + f_n(S_n + dS_n) - f_n(S_n).
\end{equation}
If all terms in $dS$ scaled as $dt$, then we could power expand $f$ to first order. But the different terms scale in different ways, so we must go up to the second order term:
\begin{equation}
dX_n = \partial_t f_n(S_n) dt+ \partial_S f_n(S_n) dS_n +\frac12 \partial_S^2 f_n(S_n) dS_n^2+{\cal O}(dS_n^3).
\end{equation}
Here we can use \eqref{eq:dSdisc} for the value of $dS_n$. We remark that in $dS_n^2$ there is a single term that is proportional to $dt$, all other terms are of higher order. Thus we shall write
\begin{equation}
dX = \partial_t f dt + \partial_S f \left(\mu dt + \sigma\sqrt{dt}\,\xi\right) +\frac12\sigma^2 \partial_S^2 f dt \xi^2+{\cal O}(dt^{3/2}),
\end{equation}
where we omitted the arguments for brevity (note that $f(t,S)$ is a differentiable function, so it is sensible to speak about $\partial_t f$ even if the time steps are discrete). We rewrite this formula as
\begin{equation}
dX = \partial_t f dt + \partial_S f \mu dt +\frac12\sigma^2 \partial_S^2 f dt + \partial_S f \sigma\sqrt{dt}\,\bar\xi,
\end{equation}
where we introduced a new random variable having zero mean as
\begin{equation}
\bar\xi = \xi + \frac{\sigma \partial_S^2 f}{2\partial_S f }\sqrt{dt}(\xi^2-1).
\end{equation}
As we see, the change of $X$ is not Gaussian distributed, so $X$ is not a Brownian motion any more. But the difference from a Brownian motion vanishes like $\sim \sqrt{dt}$ as $dt\to0$. So in the limit we can omit the difference of $\bar\xi$ and $\xi$. Then we find
\begin{equation}
dX = \left(\partial_t f+ \mu \partial_S f +\frac12\sigma^2 \partial_S^2 f\right) dt + \sigma \partial_Sf \sqrt{dt}\,\xi .
\end{equation}
This is the \emph{{It$\hat{\mathrm{o}}$}{}-formula}. In case of any number of correlated assets it reads
\begin{equation}
dX_c = \left(\partial_t f_c+ \mu_a \frac{\partial f_c}{\partial S_a}+ \frac12(\sigma^T\sigma)_{ab}\frac{\partial^2 f_c}{\partial S_a\partial S_b}\right) dt + \frac{\partial f_a}{\partial S_a}\sigma_{ab} \xi_b\sqrt{dt}.
\end{equation}
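The extra $\frac12\sigma^2 \partial_S^2 f$ drift is easy to see numerically. For the choice $dS = \mu S\,dt + \sigma S\sqrt{dt}\,\xi$ and $X=\ln S$, the formula predicts a drift $\mu-\sigma^2/2$ for $X$. A minimal sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, T, nsteps, npaths = 0.08, 0.3, 1.0, 500, 100_000
dt = T / nsteps

S = np.ones(npaths)
for _ in range(nsteps):
    S *= 1.0 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(npaths)

# Ito: d(ln S) = (mu - sigma^2/2) dt + sigma sqrt(dt) xi
print(np.log(S).mean() / T, mu - sigma ** 2 / 2)   # both ~ 0.035
\end{verbatim}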
\subsection{Evolution equation of the distribution functions}
Let us assume that we have statistical information about the price at present: we know its distribution function ${\cal P}_0(S)$. What will the distribution function be at later times?
To give a formal definition for the distribution function we realize that for any quantity $g(\xi)$ depending on a real valued random variable the expected value can be written with the help of the Dirac-delta
\begin{equation}
\bm E_\xi g(\xi) = \int\limits_{-\infty}^\infty dx \, \bm E \delta(\xi-x) g(x).
\end{equation}
The $g(x)$ does not depend on $\xi$, so we can take it out from the scope of the expected value and obtain
\begin{equation}
\bm E_\xi g(\xi) = \int\limits_{-\infty}^\infty dx \,{\cal P}(x) g(x),
\end{equation}
where ${\cal P}$ is the distribution function
\begin{equation}
{\cal P}(x) = \bm E_\xi \delta(\xi-x).
\end{equation}
The question we want to answer is: what is the distribution function of the prices at time $t=ndt$ if we know the distribution ${\cal P}_m(S)$ at time $t=mdt$? What we have to do is to solve the price motion using equation \eqref{eq:dSdisc}, starting from some $S=S_m$ initial condition at $t=m$, and assuming a given $\xi = \{\xi_m,\dots,\xi_{n-1}\}$. Having obtained a solution $S_n(S_m,\xi)$, we finally have to average over all $\xi$ and $S_m$.
Then we can write for the expected value of any function $f(S_n)$
\begin{equation}
\bm E f(S_n) =\bm E_{S_m}\bm E_\xi f(S_n(S_m,\xi)) =\! \int\limits_{-\infty}^\infty\! dS dS'\bm E_{S_m}\delta(S'-S_m) \bm E_\xi \delta(S-S_n(S',\xi))f(S).
\end{equation}
With the distribution functions we can write this expression as
\begin{equation}
\bm E f(S_n) = \int\limits_{-\infty}^\infty\! dS dS' {\cal P}_m(S') {\cal P}_{mn}(S',S) f(S),
\end{equation}
where
\begin{equation}
\label{eq:Pnss}
{\cal P}_{mn}(S',S) = \bm E_\xi \delta(S-S_n(S',\xi)).
\end{equation}
There are different methods to derive this quantity; here we use the {It$\hat{\mathrm{o}}$}{} formula, applied to the expectation value of $f(S_n)$ above. First let us fix the initial distribution to
\begin{equation}
{\cal P}_m(S') \to \delta(S'-S_m),
\end{equation}
then the $S'$ integral disappears. Now we change $n$, and write up the change in the expected value in two ways. On the one hand we have
\begin{equation}
d(\bm E f(S_n)) = \int\limits_{-\infty}^\infty dS\,d{\cal P}_{mn}(S_m,S)f(S).
\end{equation}
On the other hand, from \eqref{eq:dSdisc} we have
\begin{equation}
\begin{split}
d(\bm E f(S_n)) &= \bm E df(S_n) = \bm E \left(\mu \partial_S f +\frac12\sigma^2 \partial_S^2 f\right)dt = \\
&= \int\limits_{-\infty}^\infty dS\,\left(\mu \partial_S f +\frac12\sigma^2 \partial_S^2 f\right) {\cal P}_{mn}(S_m,S)dt = \\
&= \int\limits_{-\infty}^\infty dS\,f(S) \left(-\partial_S (\mu {\cal P}) +\frac12\partial_S^2 (\sigma^2 {\cal P}) \right)dt,
\end{split}
\end{equation}
where in the last line we integrated by parts and omitted the arguments of ${\cal P}$ for brevity. Since the two expressions are equal for any function $f$, we can conclude
\begin{equation}
\label{eq:dPn}
d{\cal P} = \left(-\partial_S (\mu {\cal P}) +\frac12\partial_S^2 (\sigma^2 {\cal P}) \right)dt.
\end{equation}
In continuous time this leads to a partial differential equation known as the \emph{Fokker-Planck equation} or \emph{Kolmogorov PDE}:
\begin{equation}
\label{eq:FokkerPlanck}
\partial_t{\cal P} = -\partial_S (\mu {\cal P}) +\frac12\partial_S^2 (\sigma^2 {\cal P}).
\end{equation}
If we wanted to write out the indices explicitly we would write
\begin{equation}
\frac\partial{\partial t}{\cal P}_a = -\frac\partial{\partial S_b} (\mu_b {\cal P}_a) +\frac12\frac{\partial^2}{\partial S_b\partial S_c} ((\sigma^T\sigma)_{bc} {\cal P}_a).
\end{equation}
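As a sanity check (not part of the main derivation), the Fokker-Planck equation can be integrated with a crude explicit finite-difference scheme; for constant $\mu$ and $\sigma$ the peak should drift as $\mu t$ and spread as $\sigma^2 t$. All values below are illustrative:
\begin{verbatim}
import numpy as np

# Explicit finite-difference integration of the Fokker-Planck equation
# for constant mu, sigma, starting from a narrow initial peak.
mu, sigma = 0.1, 0.3
xs = np.linspace(-3.0, 3.0, 601)
dx = xs[1] - xs[0]
dt = 0.2 * dx**2 / sigma**2            # small step for numerical stability
P = np.exp(-xs**2 / (2 * 0.02**2))
P /= P.sum() * dx                      # normalized, approximates delta(S)

t = 0.0
while t < 1.0:
    drift = mu * P
    diff = 0.5 * sigma**2 * P
    P = P + dt * (-np.gradient(drift, dx)
                  + np.gradient(np.gradient(diff, dx), dx))
    t += dt

mean = (xs * P).sum() * dx
var = ((xs - mean)**2 * P).sum() * dx
print(mean, mu * t)                    # close to mu*t
print(var, sigma**2 * t)               # close to sigma^2*t
\end{verbatim}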
\subsubsection{Composition rule and dependence on the initial conditions}
We can perform the evaluation of the expected value \eqref{eq:Pnss} in two parts, if we want. We choose an intermediate time $m<k<n$, impose the condition that at time $k$ we arrive at $S=S_k$, and then, starting from this value, proceed from $k\to n$. Formally we can write
\begin{equation}
{\cal P}_{mn}(S',S) = \int\limits_{-\infty}^\infty dS'' \bm E_\xi \delta(S''-S_k(S',\xi))\delta(S-S_n(S'',\xi)),
\end{equation}
where in the last delta function we tacitly assumed that we start the time evolution from $k$. The two Dirac-deltas are independent of each other, because in the first case we have to average only over $\{\xi_m,\dots,\xi_{k-1}\}$, in the second case only over $\{\xi_k,\dots,\xi_{n-1}\}$. Therefore we can write
\begin{equation}
{\cal P}_{mn}(S',S) = \int\limits_{-\infty}^\infty dS'' {\cal P}_{mk}(S',S''){\cal P}_{kn}(S'',S).
\end{equation}
This formula makes it possible to find a differential equation with respect to the initial conditions of the distribution function. Namely, if we change $k$, the left-hand side does not vary. Thus
\begin{equation}
0 = \int\limits_{-\infty}^\infty dS'' \left[d_k{\cal P}_{mk}(S',S'') \right] {\cal P}_{kn}(S'',S) + {\cal P}_{mk}(S',S'')d_k{\cal P}_{kn}(S'',S).
\end{equation}
We can use \eqref{eq:dPn} to write for the first term
\begin{equation}
\int\limits_{-\infty}^\infty dS'' \left(-\partial_{S''} (\mu {\cal P}_{mk}(S',S'')) +\frac12\partial_{S''}^2 (\sigma^2 {\cal P}_{mk}(S',S'')) \right)dt_k {\cal P}_{kn}(S'',S).
\end{equation}
We integrate by parts and substitute the result back into the previous expression. Since this must be true for any ${\cal P}_{mk}(S',S'')$, we conclude
\begin{equation}
d{\cal P}_{kn}(S'',S) = \left(-\mu(S'') \partial_{S''}{\cal P}_{kn}(S'',S) -\frac12\sigma^2(S'')\partial_{S''}^2 {\cal P}_{kn}(S'',S) \right)dt_k.
\end{equation}
In continuous time it reads
\begin{equation}
\label{eq:FokkerPlanck1}
\partial_{t_0}{\cal P} = -\mu \partial_{S_0} {\cal P} -\frac12 \sigma^2\partial_{S_0}^2 {\cal P},
\end{equation}
where the $0$ index denotes the initial conditions.
\subsubsection{Change of variables in the distribution function}
We may also work out the change of the distribution function under a change of its argument. We change the variable from $x$ to $y=Y(x)$, where $Y$ is invertible. Then the distribution of $y$ reads
\begin{equation}
{\cal P}_y(y) = \int dx {\cal P}_x(x) \delta(y-Y(x)).
\end{equation}
Changing to the new variable $y'=Y(x)$, the integral measure changes by the Jacobian, and we find
\begin{equation}
\label{eq:distofnewvariable}
{\cal P}_y(y) = \left|\frac{\partial Y}{\partial x}\right|^{-1} \!\!\! {\cal P}_x(x)\;\biggr|_{x=Y^{-1}(y)}.
\end{equation}
\subsection{Path integral}
In \eqref{eq:avr} we have seen how to compute an expectation value numerically. Here we continue this line of thought, rewriting that formula.
To evaluate \eqref{eq:avr} we also need to know how $S$ is determined, i.e. we need equation \eqref{eq:dSdisc}. We may work out a self-contained formula which contains both the time evolution and the averaging.
The key is that we can represent a recursion through an integral over a Dirac-delta
\begin{equation}
f(S_{m+1}) = \int \delta\left(S_m+g_m(S_m,\xi_m)-S_{m+1}\right) f(S_{m+1})dS_{m+1},
\end{equation}
where in the present case \eqref{eq:dSdisc} corresponds to $g_m(S,\xi)=\mu_m(S)dt + \sigma_m(S)\sqrt{dt}\xi_m$. Applying this form for all $m=1,2,\dots, n$, we obtain
\begin{equation}
\bm{E} f(S_n) = \int \prod_{m=1}^n \left[e^{-\frac12\xi^2_m} \delta(S_{m}-S_{m-1}-g_{m-1}(S_{m-1},\xi_{m-1})) \right]\frac{ f(S_n)\, {\cal D}\xi{\cal D}S}{(2\pi)^{Nn/2}},
\end{equation}
with the initial condition $S_0$ given, and where we also introduced the notation
\begin{equation}
{\cal D}h = dh_1dh_2\dots dh_n
\end{equation}
for $h=\xi$ and $S$.
In order to simplify the formulae, and get rid of the disturbing constant factors, we may introduce
\begin{equation}
\label{eq:exvfsn}
\exv{f(S_n)} = \int \prod_{m=1}^n \left[e^{-\frac12\xi^2_m} \delta(S_{m}-S_{m-1}-g_{m-1}(S_{m-1},\xi_{m-1})) \right] f(S_n)\, {\cal D}\xi{\cal D}S,
\end{equation}
and then
\begin{equation}
\bm E f(S_n) = \frac1{\exv{1}} \exv{f(S_n)}.
\end{equation}
We can also introduce the generator functional
\begin{equation}
Z[S_0;J] = \left.\exv{e^{J S}}\right|_{S_0}
\end{equation}
where we also indicated the initial condition. We usually denote $Z(S_0)=Z[S_0;0]=\exv1$; in physics it is sometimes called the partition function. Then
\begin{equation}
\bm E f(S_n) = \frac1{Z(S_0)} \left.\exv{f(S_n)}\right|_{S_0}.
\end{equation}
In the sequel we will omit all constant factors in all expected values; the division by the corresponding $Z$ will take care of the correct normalization.
We note that the upper limit of the product term in \eqref{eq:exvfsn} can be extended to infinity. The reason is that if the integrand does not depend on the last variable, then the Dirac-delta simply gives one. In this way we can get rid of the last integral unless $f$ depends on it. Finally we have
\begin{equation}
\exv{f(S_n)} = \int \prod_{m=1}^\infty \left[e^{-\frac12\xi^2_m} \delta(S_{m}-S_{m-1}-g_{m-1}(S_{m-1},\xi_{m-1})) \right] f(S_n)\, {\cal D}\xi{\cal D}S.
\end{equation}
The next step is to integrate over the $\xi_m$ variables. This is not difficult, because the $g_m$ are linear in this variable. The master formula is
\begin{equation}
\int\limits_{-\infty}^\infty \frac{d^N\xi}{(2\pi)^{N/2}} e^{-\frac12\xi^2} \delta(A-B\xi) = \frac1{\det B} e^{-\frac12 (B^{-1}A)^2}.
\end{equation}
We then obtain, using \eqref{eq:dSdisc},
\begin{equation}
\exv{f(S_n)} = \int e^{-\frac12 \sum_{m=0}^\infty dt (\dot S_m-\mu_m)C_m^{-1}(\dot S_m-\mu_m)} f(S_n)\,{\cal D}_C S,
\end{equation}
where we denoted
\begin{equation}
\dot S_m = \frac{S_{m+1}-S_m}{dt},\quad C_m=\sigma_m^T\sigma_m,\qquad {\cal D}_C S = \frac{dS_1}{\sqrt{\det C_1}}\dots \frac{dS_n}{\sqrt{\det C_n}}\dots.
\end{equation}
Here we also used that $\det\sigma =\sqrt{\det C}$.
This formula has the big advantage that it does not need any supplementary condition; we can calculate the expectation values simply by performing the integrals.
In physical terms the exponent is called the Hamiltonian (in the statistical physics sense) or the Euclidean action; its integrand is the Euclidean Lagrangian. So we can write
\begin{equation}
L_m = \frac12 (\dot S_m-\mu_m)C_m^{-1}(\dot S_m-\mu_m),
\end{equation}
then
\begin{equation}
\label{eq:PIdisc}
\left.\exv{f(S_n)} \right|_{S_0}= \int \left.e^{-\sum_{m=0}^\infty dt L_m} f(S_n)\,{\cal D}_C S\right|_{S_0},
\end{equation}
which is called the \emph{path integral representation} of the expectation value.
The distribution function is the expected value of the Dirac-delta:
\begin{equation}
{\cal P}(0,S_0;t,S) = \frac1{Z(S_0)} \int e^{-\sum_{m=0}^\infty dt L_m}\delta(S_n-S) {\cal D}_C S \biggr|_{S_0}.
\end{equation}
\section{Continuous approaches}
\label{sec:continuous}
In the previous section we used a discrete representation of the stochastic process. Traditionally, however, continuous descriptions are used. In this section we give an overview of some of them.
\subsection{Langevin equation: a differential equation form}
In physics the usual procedure is to write down a formal differential equation
\begin{equation}
\frac{dS}{dt} = \mu + \sigma \xi,
\end{equation}
known as the \emph{Langevin equation}; the symbols $\mu$ and $\sigma$ denote general functions $f(t,S)$, while $\xi(t)$ is a continuous random variable known as \emph{white noise}.
In order to reproduce the discretized form \eqref{eq:dSdisc} we have to choose the correlation function of these random variables carefully. The correct choice is
\begin{equation}
\bm{E}\xi^{(a)}(t)\xi^{(b)}(t') = C_{ab}\delta(t-t'),
\end{equation}
where $\delta(t)$ is the Dirac-delta distribution. Namely, in this case, by integrating the Langevin equation from $t$ to $t+dt$ we obtain
\begin{equation}
dS = \mu dt + \sigma \int\limits_t^{t+dt}\xi(t')dt'.
\end{equation}
We re-introduce $\xi_n$ as
\begin{equation}
\xi_n = \frac1{\sqrt{dt}}\int\limits_t^{t+dt}\xi(t')dt'.
\end{equation}
The correlation between $\xi_n$ and $\xi_m$ for different $n\neq m$ is zero, and
\begin{equation}
\bm{E}\xi^{(a)}_n\xi^{(b)}_n=\frac1{dt}\int\limits_t^{t+dt} \left[\bm{E}\xi^{(a)}(t')\xi^{(b)}(t'')\right] dt'dt'' = C_{ab}.
\end{equation}
Thus the ``average'' of a stochastic variable must be calculated by dividing by the square root of the time interval, not by the time interval itself.
\subsection{It$\hat{\mathrm{o}}$ calculus: measures}
Equation \eqref{eq:dSdisc} can be thought of as a relation for measures. Then $dt$ serves as an ordinary Riemann measure, while the set $dW=\{\sqrt{dt}\,\xi_n\,|\,n=0,\dots,\infty\}$ is interpreted as a probability measure, usually referred to as Brownian motion. We now discuss the one-dimensional case with $C=1$.
\subsubsection{Probability theory in a nutshell}
This approach needs somewhat more preparation, and we recommend that the interested reader turn to a more detailed description; here we just list the very essence of what we need. The point is that we try to generalize the concept of a random variable to continuous ``indices''. In the discrete version one defines the sample space $\Omega$ that consists of elementary events, such as an actual series of results of a finite number of dice throws (e.g. $(1,3,3,2,4,5)$). The event space $\cal F$ is the power set of $\Omega$, consisting of all of its subsets. Under the union operation (together with complementation) this forms a $\sigma$-algebra.
A probability measure is first defined as a function $P:\Omega\to[0,1]$, but it can be lifted to $P:{\cal F}\to[0,1]$ with $P(E)=\sum_{\omega\in E}P(\omega)$. $P$ must satisfy $P(\Omega)=1$. A random variable is $X:\Omega\to\bm R$. The expected value of a random variable is defined as
\begin{equation}
\bm{E}_PX = \sum_{\omega\in\Omega} X(\omega) P(\omega).
\end{equation}
In the continuous case the problem is that the elementary events (also called atoms) forming $\Omega$ all have zero probability. Therefore the probability measure can be defined only on $\cal F$; it is additive for unions of (countably many) mutually disjoint subsets of $\Omega$:
\begin{equation}
P:{\cal F} \to[0,1],\qquad P(\Omega)=1,\qquad P(\cup_{i\in I} A_i) = \sum_{i\in I} P(A_i),
\end{equation}
where $I$ is a countable index set, and $A_i\cap A_j = \emptyset$ for $i\neq j$. The triple $(\Omega,{\cal F},P)$ is called a probability space.
The generalization of the discrete expected value to the continuous case is a stochastic integral, denoted by
\begin{equation}
\bm{E}_PX = \int_\Omega X(\omega) dP(\omega).
\end{equation}
This is defined as a limiting procedure. First one defines the integral for a step function, i.e. $X=\sum_{i\in I} x_i \bm{I}_{A_i}$, where the $A_i$ are disjoint elements of $\cal F$ and $\bm{I}_{A_i}(\omega)=1$ if $\omega\in A_i$ and 0 otherwise (indicator function). Then
\begin{equation}
\bm{E}_PX = \int_\Omega X(\omega) dP(\omega) = \sum_{i\in I} x_i P(A_i).
\end{equation}
Then this definition can be extended to any function that can be approximated as a limit of step functions.
\subsubsection{The Ito process}
The integral associated with the $dW$ measure is the It$\hat{\mathrm{o}}$ integral. In our approach, fixing the $dt$ time steps, we can integrate a function that is constant during these time steps (i.e. a fine step function; in mathematics it is called a process adapted to the discretization). The result of the integral is then a stochastic variable
\begin{equation}
I_T= \int\limits_0^T \Delta(t) dW(t) = \sum_{n\le T/dt} \Delta(n\,dt) \xi_n \sqrt{dt}.
\end{equation}
This sum is also a Gaussian variable with zero mean and the following variance
\begin{equation}
\bm{E} I_T^2 = \int\limits_0^T \Delta^2(t)dt,
\end{equation}
as can be seen from the square of the sum.
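This variance formula is easy to confirm by direct sampling of the discrete sum; in the sketch below the integrand $\Delta(t)=t$ and all parameters are illustrative, and the variance should approach $\int_0^T t^2 dt = T^3/3$ up to ${\cal O}(dt)$ and Monte-Carlo error:
\begin{verbatim}
import numpy as np

# Sample the discrete Ito integral I_T = sum_n Delta(n dt) xi_n sqrt(dt)
T, dt, n_paths = 1.0, 0.01, 50_000
rng = np.random.default_rng(1)

t = np.arange(0.0, T, dt)              # left endpoints n*dt
xi = rng.standard_normal((n_paths, t.size))
I_T = (t * xi).sum(axis=1) * np.sqrt(dt)

print(I_T.var(), T**3 / 3)             # variance vs int_0^T Delta^2 dt
\end{verbatim}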
It is not hard to see that this definition does not depend on the length of the time intervals, precisely because of the Gaussian nature of the $\xi_n$ variables. So, refining the time mesh, we can approach the integral of any function that can be described as a limit of step functions (measurable functions).
The quadratic variation of the integration measure reads
\begin{equation}
dW\,dW = dt \xi_n\xi_n = dt + dt(\xi_n\xi_n-1) = dt + {\cal O}(dt^{3/2}),
\end{equation}
and the last term vanishes when $dt\to0$. This formula forms the basis of It$\hat{\mathrm{o}}$ calculus.
\subsection{Path integral}
There is also a continuous notation for the path integral. The sum in \eqref{eq:PIdisc} multiplied by $dt$ naturally leads to the integral notation
\begin{equation}
\int\limits_0^\infty L(t) dt = \sum_{m=0}^\infty dt L_m
\end{equation}
with $t=m\,dt$. Then
\begin{equation}
\label{eq:PIcont}
\left\langle f(S(t))\right\rangle = \int {\cal D}_\sigma S\, e^{-\int\limits_0^\infty dt L(t)} f(S(t)),
\end{equation}
where
\begin{equation}
L(t,\dot S, S) = \frac12 (\dot S-\mu)C^{-1}(\dot S-\mu).
\end{equation}
\section{Solutions of some stochastic differential equations}
\label{sec:solutionofstochdiff}
In this section we discuss some stochastic differential equations, and give their distribution functions. We will always start from the initial condition $S(t=0)=S_0$, or ${\cal P}(t=0,S) = \delta(S-S_0)$.
\subsection{The Brownian motion}
The simplest stochastic equation is the one where the drift and the variance are constant. Then we can diagonalize the covariance matrix, and so we may deal with one-dimensional problems. The equation we have to solve, in the discrete notation, reads
\begin{equation}
S_{n+1} = S_n + \mu dt + \sigma \sqrt{dt} \xi_n,
\end{equation}
where the $\xi_n$ are independent Gaussian variables with zero mean and unit variance. The solution of the recursion is very simple:
\begin{equation}
S_n = S_0 + \mu n dt + \sigma \sqrt{dt} \sum_{i=0}^{n-1} \xi_i.
\end{equation}
We introduce
\begin{equation}
\xi = \frac1{\sqrt{n}} \sum_{i=0}^{n-1} \xi_i,
\end{equation}
which is a Gaussian random variable with zero mean and unit variance. So we have
\begin{equation}
S_n = S_0 + \mu t + \sigma \sqrt{t}\xi,
\end{equation}
where $t=ndt$. Thus the distribution function reads
\begin{equation}
\label{eq:BMdist}
{\cal P}_{BM}(t,S) = \frac1{\sqrt{2\pi t \sigma^2}} e^{-\frac{(S-S_0-\mu t)^2}{2t\sigma^2}}.
\end{equation}
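The distribution is easily checked against a direct simulation of the recursion (a minimal sketch with arbitrary parameter values):
\begin{verbatim}
import numpy as np

# Direct simulation of the Brownian recursion vs the analytic Gaussian.
mu, sigma, S0 = 0.1, 0.4, 1.0
dt, n_steps, n_paths = 1e-3, 1000, 100_000    # so t = n_steps*dt = 1
rng = np.random.default_rng(2)

S = np.full(n_paths, S0)
for _ in range(n_steps):
    S += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

t = n_steps * dt
print(S.mean(), S0 + mu * t)           # mean of P_BM
print(S.var(), sigma**2 * t)           # variance of P_BM
\end{verbatim}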
\subsection{Geometric Brownian motion (GBM)}
The most prominent feature of market prices is that it does not matter in which unit we measure them. We can use any currency, the gold price or any other asset price as numeraire; the dynamics of the market is the same. Therefore only the relative price changes can be important. The simplest stochastic differential equation that describes this property is
\begin{equation}
\frac{\dot S}S = \mu + \sigma \xi
\end{equation}
in the Langevin notation.
With the new variable $X=\ln (S/S_0)$, for some $S_0$, we obtain, using the {It$\hat{\mathrm{o}}$}{} formula,
\begin{equation}
\dot X = \mu -\frac12 \sigma^2 + \sigma\xi.
\end{equation}
This is the Brownian motion discussed above. Using this equation it is usual to give the solution of the GBM as
\begin{equation}
\label{eq:GBMsol}
S = S_0\exp \left[\left(\mu -\frac12 \sigma^2\right)t + \sigma\sqrt{t} \xi \right].
\end{equation}
The distribution function of $X$ is the one given in \eqref{eq:BMdist}. The formula \eqref{eq:distofnewvariable}, with Jacobian $\partial S/\partial X=S$, gives the distribution function of $S$:
\begin{equation}
\label{eq:GBMdist}
{\cal P}_{GBM}(t,S) = \frac1S\, \frac1{\sqrt{2\pi t \sigma^2}} \exp\left[-\frac1{2t\sigma^2}\left(\ln \frac S{S_0}-(\mu-\frac12\sigma^2) t\right)^2\right],
\end{equation}
which is a lognormal distribution.
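Both the explicit solution \eqref{eq:GBMsol} and the lognormal statistics can be confirmed by simulating the multiplicative recursion directly (illustrative parameters):
\begin{verbatim}
import numpy as np

# Euler simulation of the GBM recursion vs the exact statistics of ln S.
mu, sigma, S0, t = 0.05, 0.2, 1.0, 1.0
dt, n_paths = 1e-3, 100_000
rng = np.random.default_rng(3)

S = np.full(n_paths, S0)
for _ in range(int(t / dt)):
    S *= 1.0 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

logS = np.log(S / S0)
print(logS.mean(), (mu - 0.5 * sigma**2) * t)   # drift of ln S
print(logS.var(), sigma**2 * t)                 # variance of ln S
\end{verbatim}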
\subsection{Vasicek/Hull-White model}
In finance, mean reversion means that in the long run the random variable fluctuates around a single value. Such a model is the following:
\begin{equation}
\dot S = a(b- S) + \sigma \xi.
\end{equation}
Depending on whether the parameters are constant or time dependent, we call this model the Vasicek or the Hull-White (extended Vasicek) model.
Here we solve the model with constant parameters.
Let us introduce a new variable through $S=e^{-at}R+b$; then
\begin{equation}
\dot S = e^{-at}\dot R -ae^{-at}R = -ae^{-at}R + \sigma \xi,
\end{equation}
therefore
\begin{equation}
\dot R = \sigma e^{at} \xi.
\end{equation}
This equation can be solved for $R$ by a simple integral. So we find for the original variable
\begin{equation}
S= S_0 e^{-at} + b(1-e^{-at}) +\sigma \int\limits_0^t\!ds\, e^{-a(t-s)}\xi(s).
\end{equation}
This describes a Gaussian random variable with mean
\begin{equation}
\bar\mu = S_0 e^{-at} + b(1-e^{-at}),
\end{equation}
and variance
\begin{equation}
{\bar\sigma}^2 = \sigma^2 \int\limits_0^t\!dsds'\, e^{-a(t-s)-a(t-s')}\exv{ \xi(s)\xi(s')}= \sigma^2\frac{1-e^{-2at}}{2a}.
\end{equation}
So the distribution is
\begin{equation}
{\cal P}_{VHW}(t,S) = \frac1{\sqrt{2\pi \bar\sigma^2(t)}} \exp\left[-\frac{(S-\bar\mu(t))^2}{2\bar\sigma^2(t)}\right].
\end{equation}
As we see, the mean goes to $b$ at long times, and the process fluctuates around it with a variance $\sigma^2/(2a)$.
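The analytic mean and variance can again be confirmed by simulating the discretized equation (illustrative parameters):
\begin{verbatim}
import numpy as np

# Discretized Vasicek/Hull-White dynamics vs the analytic mean/variance.
a, b, sigma, S0 = 2.0, 1.0, 0.5, 0.0
dt, t, n_paths = 1e-3, 2.0, 100_000
rng = np.random.default_rng(4)

S = np.full(n_paths, S0)
for _ in range(int(t / dt)):
    S += a * (b - S) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_th = S0 * np.exp(-a * t) + b * (1 - np.exp(-a * t))
var_th = sigma**2 * (1 - np.exp(-2 * a * t)) / (2 * a)
print(S.mean(), mean_th)
print(S.var(), var_th)
\end{verbatim}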
\section{Risk of a portfolio}
\label{sec:risks}
In the previous sections we discussed the general framework of the price dynamics. Now let us think about the evaluation of the present value of an asset.
The most striking question is: if there are two assets with interest rates $r_1>r_2$, why is there no arbitrage opportunity? Indeed, the portfolio
\begin{equation}
{\cal P} = S_2a_1 - S_1a_2
\end{equation}
has zero value at $t=0$, but at $t=T$ it is worth
\begin{equation}
S({\cal P},T) = S_2 S_1(T) - S_1 S_2(T) = \left(e^{r_1T}-e^{r_2T}\right) S_1S_2 >0.
\end{equation}
So it seems worthwhile to realize this portfolio: we gain money from nothing.
The main point that we did not take into account is the risk. Let us assume for example that $a_2$ is practically risk-free, while $a_1$ has an annual default risk $d$, a default meaning the loss of the principal as well. The expected annual rate of $a_1$ is then $(1+r_1)(1-d)-1=r_1-d(1+r_1)$. The risk therefore diminishes the rate.
The first problem here is that it is very hard to tell the exact value of $d$ before a real default occurs. We may give vague estimates, but we can easily miss a factor of two or even ten. As a numerical example, consider the case when $a_1$ pays an interest rate of 20\%, while $a_2$ has a risk-free rate of 10\%. If the default risk of $a_1$ is 5\%, then its expected rate is still $14\%$, so $a_1$ is the better investment. But if the default risk is 10\%, then the expected rate is 8\%, and $a_2$ takes over.
But there is another effect. Let us assume that we can borrow money at a rate $r<r_2<r_1$, and we want to buy the assets from a loan. To be safe we hold back a relative amount $c$ as collateral (usually it is demanded by the bank lending the money, too). So if we have a principal of 1\,USD we can borrow $1/c$\,USD, and after a year we gain
\begin{equation}
1\,\mathrm{USD}\; \to\; \frac{r_2-r}c\,\mathrm{USD}.
\end{equation}
This is the leverage effect, which allows the effective rate of a risk-free investment to be raised very high. In the ideal case $c\to0$, any small difference between the risk-free rate and the bank loan rate makes the effective rate grow to infinity.
The first lesson here is that if there were a risk-free investment possibility with a higher annual rate than another, then this would indeed create a large arbitrage opportunity. Therefore the completely risk-free rate is a unique number.
The second remark is that we can of course leverage the risky investments, too. But then there is a possibility of losing all the money with non-negligible probability, in which case we are left with the debt liability. This means that we must reserve a higher collateral in the risky case, preparing for the worst case. This can easily make the effective rate much lower than that of a risk-free investment.
So the real question is how conservatively, how prudently the banks evaluate and treat the risk. The practice nowadays is that the banks do not tolerate risky investments too well. This has some psychological factors in it; the market could work in different ways. But present-day practice requires the business to be practically risk-free.
\subsection{Risk mitigation by creating indices}
The assets, of course, are not risk-free one by one, so we must make efforts to get rid of the risk. We can do so by combining assets into a portfolio. There are two main techniques for this. The first one is to combine \emph{independent} assets into a single portfolio: these are called \emph{indices}. So we consider the portfolio
\begin{equation}
{\cal P} = \sum_{i=1}^N w_i a_i.
\end{equation}
The value of the portfolio reads:
\begin{equation}
S_P = \sum_{i=1}^N w_i S_i.
\end{equation}
We will assume that the $a_i$ assets follow the equation
\begin{equation}
\dot S_i = S_i(\mu_i+\sigma_i\xi_i),
\end{equation}
where we factored out the price itself, and $\exv{\xi_i\xi_j}=\delta_{ij}$. If $w_i$ are independent of the prices of the underlying assets, then $S_P$ satisfies
\begin{equation}
\dot S_P = \sum_{i=1}^N w_i S_i(\mu_i+\sigma_i\xi_i) = S_P(\bar \mu + \bar\sigma\xi),
\end{equation}
where
\begin{equation}
\bar \mu = \sum_{i=1}^N w_i x_i \mu_i,\qquad \bar\sigma^2 = \sum_{i=1}^N w_i^2 x_i^2 \sigma_i^2,
\end{equation}
where $x_i=S_i/S_P$.
To diminish the effective risk, we should minimize the above expression by choosing the correct weights with the constraint that we should keep the value of the portfolio fixed, i.e.
\begin{equation}
1 = \sum_{i=1}^N w_i x_i.
\end{equation}
Then we have to satisfy
\begin{equation}
\frac\partial{\partial w_i}\sum_{j=1}^N \left(w_j^2x_j^2\sigma_j^2 -\lambda w_j x_j\right) = 0,
\end{equation}
where $\lambda$ is a Lagrange multiplier. This results in
\begin{equation}
w_i =\frac{\lambda}{2x_i\sigma_i^2}.
\end{equation}
The value of $\lambda$ comes from
\begin{equation}
1= \sum_i \frac{\lambda}{2\sigma_i^2} \quad\Rightarrow\quad \lambda = \frac1{\sum_i \frac1{2\sigma_i^2}}.
\end{equation}
Putting all together, after some algebra, we find
\begin{equation}
\frac1{\bar\sigma^2} = \sum_{i=1}^N \frac1{\sigma_i^2}.
\end{equation}
We see that in this way we cannot achieve a completely risk-free portfolio, but we can mitigate the risks of the individual underlying assets.
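A few lines of code confirm the optimal weights and the resulting variance; the volatilities and price fractions below are of course only illustrative:
\begin{verbatim}
import numpy as np

# Minimum-variance index weights w_i = lambda/(2 x_i sigma_i^2).
sigma = np.array([0.1, 0.2, 0.4])      # illustrative asset volatilities
x = np.array([0.5, 0.3, 0.2])          # illustrative fractions S_i/S_P

lam = 1.0 / np.sum(1.0 / (2 * sigma**2))
w = lam / (2 * x * sigma**2)

print(np.sum(w * x))                   # constraint: equals 1
print(np.sum(w**2 * x**2 * sigma**2))  # portfolio variance...
print(1.0 / np.sum(1.0 / sigma**2))    # ...equals 1/sum(1/sigma_i^2)
\end{verbatim}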
While this is simple in theory, in practice it is not easy to make a reliable estimate of the $\sigma_i$ values. It is also a question how many assets we want to include in the index, how we treat the default risk, etc. We can also perform the optimization in a different way, for example fixing a given risk and optimizing the effective interest rate. As a result, there are various indices in the market that differ in how the weights are computed.
\subsection{Risk mitigation by hedging}
The other way to mitigate the risk is to combine assets in a portfolio that have interdependent risks. In the market there are asset classes where the asset prices depend on each other, so there is a correlation between the risks. The simplest example is when we consider an asset and a \emph{derivative} of it. A derivative in this sense is an asset that is built exclusively on the other, underlying asset (e.g. an option, swap or similar product).
So let us assume that we have a portfolio where the underlying asset is $a$, and we add some derivatives $a_i$ to it. So we have
\begin{equation}
{\cal P} = \sum_i \alpha_i a_i - \delta a,
\end{equation}
where the weights $\alpha_i$ and $\delta$ are real numbers. The value of the portfolio is
\begin{equation}
\label{eq:SPval}
S_P = \sum_i \alpha_i f_i(t,S) - \delta S,
\end{equation}
where we have denoted the value of the derivatives at time $t$ and at spot price $S$ as $f_i(t,S)$. Now we think about these functions as prices that can be obtained by observing the market.
What is somewhat more complicated here, compared with the previous case, is that the price of the portfolio may depend non-linearly on the price of the underlying, and so its dynamics must be computed using the {It$\hat{\mathrm{o}}$}{} lemma. So, if
\begin{equation}
\dot S = S\left(\mu + \sigma \xi\right),
\end{equation}
where $\mu$ and $\sigma$ can be $S$ dependent, then we have for the complete portfolio
\begin{equation}
\label{eq:dotSP1}
\dot S_P = \partial_t S_P + \mu S\partial_S S_P + \frac12 \sigma^2 S^2 \partial^2_S S_P + \sigma S\partial_S S_P \xi.
\end{equation}
This expression is risk-free, if the term containing $\xi$ is zero. This leads to
\begin{equation}
\partial_S S_P = 0.
\end{equation}
This would mean, however, that $S_P$ does not depend on $S$, put another way, it is not built on the asset $a$. This contradicts our first equation.
So perfect risk-freeness cannot be achieved in this way either. The best we can do is to ensure a vanishing derivative at a given price of the underlying, in practice at the actual spot price $S=S_0$. Thus we require
\begin{equation}
0= \partial_S S_P\biggr|_{S_0}.
\end{equation}
It is usual to introduce the $\Delta$ risk of the portfolio by the definition
\begin{equation}
\Delta_P = \partial_S S_P\biggr|_{S_0}.
\end{equation}
Risk-freeness at the spot price requires that the delta-risk of the portfolio vanishes
\begin{equation}
\Delta_P=0.
\end{equation}
It is also said that we have a \emph{delta-neutral} portfolio, or that we hedged out the delta risk.
Using our portfolio we have
\begin{equation}
\Delta_P = \sum_i\alpha_i \Delta_i -\delta,
\end{equation}
where
\begin{equation}
\Delta_i = \partial_S f_i(t,S_0).
\end{equation}
A delta-neutral portfolio can be achieved using a single derivative with $\alpha=1$ and the underlying, by choosing
\begin{equation}
\label{eq:Deltaformula}
\delta = \partial_S f(t,S_0).
\end{equation}
\subsubsection{Higher order hedging and the "greeks"}
There are several issues with the hedging strategy described above. One is that we do not really know the relation between the underlying and the derivative prices. We can observe the spot price of the derivative, i.e. $f(t,S_0)$, but to estimate $\partial_S f(t,S)$ we should know it for other prices as well. This cannot be observed directly, thus we need a market model. So, strictly speaking, what we can do is use the estimated present value $\tilde f(t,S, {\cal M})$, which already depends on the market model ${\cal M}$.
In practice the market model has some parameters, first of all the (estimated) volatility parameter $\sigma_0$ of the underlying asset. But, since no market model is perfect, the actual market can be described only with a non-constant volatility parameter. So, in this sense not just the price, but also the model has fluctuations. Now the complete analysis of the previous subsection can be repeated with the substitution $S\to \sigma_0$. What we obtain is that for a risk-free portfolio we need both
\begin{equation}
\partial_S S_P\biggr|_{S_0} = \partial_\sigma S_P\biggr|_{S_0} = 0.
\end{equation}
It is usual to introduce the quantity $\kappa$ (kappa; sometimes it is called vega, ${\cal V}$), the analogue of $\Delta$, describing the price change under a change of the volatility parameter:
\begin{equation}
\kappa = \partial_\sigma S_P(t,S_0,\sigma).
\end{equation}
We require that the kappa value of the complete portfolio be zero (\emph{delta-kappa neutral} position).
Another issue is that we can ensure a risk-free portfolio only at a single price $S=S_0$. As soon as the price moves, the risk will grow. In practice one always has to fine-tune the portfolio by adjusting the $\Delta$ (and $\kappa$) to the actual price. If, however, $\Delta$ depends strongly on the price of the underlying, then a sudden price change is hard to follow. This motivates the introduction of $\Gamma$ as the derivative of $\Delta$ (the second derivative of the present value of the derivative):
\begin{equation}
\Gamma_P = \partial_S\Delta_P(S) = \partial_S^2 S_P(t,S_0,\sigma).
\end{equation}
To ensure the stability of a portfolio, not just the delta but also $\Gamma_P$ should be zero (\emph{delta-gamma neutral} position).
We could continue this analysis and introduce other ``greeks'' to denote the higher derivatives, cf. for example \cite{Wikigreeks}; all of them characterize the sanity of a portfolio. But usually, besides the delta-risk, the kappa and/or the gamma is the most important to hedge out.
For all the greeks, the risk of the portfolio is the weighted sum over the individual assets:
\begin{equation}
\kappa_P = \sum_i\alpha_i \kappa_i,\qquad
\Gamma_P = \sum_i\alpha_i \Gamma_i,\dots.
\end{equation}
If, for example, we have two derivatives, then we can require
\begin{equation}
\delta = \alpha_1 \Delta_1 + \alpha_2 \Delta_2
\end{equation}
to hedge out the delta-risk, and
\begin{equation}
0 = \alpha_1\kappa_1 +\alpha_2 \kappa_2
\end{equation}
to hedge out the kappa-risk. If we want to hedge out the gamma-risk as well, we need a third derivative.
If we continuously monitor the different greeks of the portfolio, we see how sensitive it is to various types of price changes. The best practice is to keep all the risks in a given narrow range.
\section{Present value and pricing}
\label{sec:pv}
As we have argued, the market requires investments to be as risk-free as possible. This also means that single assets are practically never traded one by one, only in portfolios where the risks are mitigated.
But all risk-free portfolios must grow at the same rate, otherwise arbitrage would show up. This means that the rates of the individual assets play no role at all. Being part of a portfolio, all assets must be treated as if they had a common drift factor. In this artificial world, called the risk-neutral world, we find for all derivatives (including the underlying asset)
\begin{equation}
\label{eq:riskneutralf}
\frac d{dt} \exv{f(t,S)}_{rn} = r \exv{f(t,S)}_{rn}
\end{equation}
where $rn$ stands for ``risk-neutral''. The rate itself can be a time-dependent function, but it cannot depend on the individual asset prices.
This equation, in fact, is enough to determine the present value of an asset. We can do this in two equivalent ways, one leading to a differential equation, the other to an integral formula.
\subsection{Black-Scholes-Merton formula}
In this approach we consider a portfolio built on an underlying and one derivative. Its value is
\begin{equation}
S_P = f(t,S)-\delta S.
\end{equation}
If it is in the delta-neutral position, then
\begin{equation}
\delta = \partial_S f(t,S_0).
\end{equation}
Now we express the time derivative of the portfolio in two ways. On the one hand the portfolio is risk-free at $S=S_0$, so we require \eqref{eq:riskneutralf} to hold:
\begin{equation}
\frac d{dt} S_P(t,S_0) = r S_P(t,S_0).
\end{equation}
We find for our portfolio above
\begin{equation}
\frac d{dt} S_P(t,S_0) = rf(t,S_0) - r S_0 \partial_S f(t, S_0).
\end{equation}
On the other hand, if $\partial_S S_P(t,S_0)=0$, then from \eqref{eq:dotSP1} we find
\begin{equation}
\label{eq:rfpdt}
\frac d{dt} S_P(t,S_0) = \partial_t f(t,S_0) + \frac12 \sigma^2 S_0^2 \partial^2_Sf(t,S_0).
\end{equation}
Putting the two equations together we find
\begin{equation}
\partial_t f(t,S_0) + \frac12 \sigma^2 S_0^2 \partial^2_Sf(t,S_0) = r \left(f(t,S_0) - \partial_S f(t, S_0) S_0\right).
\end{equation}
Strictly speaking, the above equation is valid only at $t$ and $S_0$. But as the best approximation for the risk-free portfolio, we can demand that it hold for other $S$ as well. This leads to the \emph{Black-Scholes-Merton differential equation}
\begin{equation}
\label{eq:BlackScholes}
\partial_t f +rS\partial_S f+ \frac12 \sigma^2 S^2 \partial^2_Sf = rf.
\end{equation}
The solution of the Black-Scholes-Merton model requires an initial condition in time and boundary conditions in $S$. The latter are usually omitted, the boundaries being at infinity. The condition in time, on the other hand, is set by the promised payoff in the future:
\begin{equation}
f(T,S) = P(S).
\end{equation}
It is then a final condition, not an initial one, and we should evolve time backwards in order to obtain the derivative price today at $t=t_0$. This gives the present value of the derivative.
\subsection{Integral formula}
We can use a different route to obtain an expression from the condition \eqref{eq:riskneutralf}. First we find
\begin{equation}
\frac d{dt} e^{-\int_{t_0}^t dt'r(t')} \exv{f(t,S)}_{rn} =0.
\end{equation}
This means that the quantity
\begin{equation}
M(t,S) = e^{-\int_{t_0}^t dt'r(t')} f(t,S)
\end{equation}
is a random variable whose expected value under the risk-neutral measure is time independent (i.e. it is a martingale under the risk-neutral measure).
At the present time $t=t_0$ we know the price of the asset, $S=S_0$; thus the price distribution is $\delta(S-S_0)$, and so the expected value is $\exv{M(t_0,S)} = f(t_0,S_0)$. From the time independence of the expected value of $M$ it follows that
\begin{equation}
f(t_0,S_0) = e^{-\int_{t_0}^t dt'r(t')} \exv{ f(t,S)}_{t,rn}.
\end{equation}
If we have a promised payoff $P(S)$ at time $t$, then $f(t,S)=P(S)$ (assuming the promise is fulfilled). Therefore
\begin{equation}
\label{eq:PVexv}
f(t_0,S_0) = e^{-\int_{t_0}^t dt'r(t')} \exv{P(S)}_{t,rn}.
\end{equation}
This formula does not assume any underlying market model, so it can be used in general.
If we write the payoff as an integral over Dirac-deltas, we can write
\begin{equation}
f(t_0,S_0) = e^{-\int_{t_0}^t dt'r(t')} \int\limits_{-\infty}^\infty dS' P(S') \exv{\delta(S-S')}_{t,rn}.
\end{equation}
The last term is the distribution function in the risk-neutral world:
\begin{equation}
f(t_0,S_0) = e^{-\int_{t_0}^t dt'r(t')} \int\limits_{-\infty}^\infty dS' {\cal P}_{rn}(t_0,S_0;t,S')\, P(S') .
\end{equation}
This last formula shows that the Green's function of the present value determination is
\begin{equation}
\label{eq:Greendef}
{\cal G}(t_0,S_0;t,S) = e^{-\int_{t_0}^t dt'r(t')} {\cal P}_{rn}(t_0,S_0; t,S).
\end{equation}
Using \eqref{eq:FokkerPlanck1} we see that, if the underlying follows a Langevin equation, then the Green's function satisfies
\begin{equation}
\partial_{t_0}{\cal G} = r {\cal G}
-\mu \partial_{S_0}{\cal G}
-\frac12 \sigma^2 \partial^2_{S_0}{\cal G},
\end{equation}
which is the Black-Scholes-Merton equation \eqref{eq:BlackScholes}. This shows that $\cal G$ is the Green's function of the Black-Scholes equation, too. It also proves that $f(t_0,S_0)$ satisfies the Black-Scholes equation, so the integral approach is equivalent to the differential equation approach.
Using path integral formula we can write from \eqref{eq:PIcont}
\begin{equation}
{\cal G}(t_0,S_0;t,S) =\frac1{Z(S_0)} \int{\cal D}_C S\,e^{-\int\limits_{t_0}^\infty dt'L(t')} e^{-\int\limits_{t_0}^t dt' r(t')} \delta(S(t)-S)\bigr|_{S_0},
\end{equation}
where
\begin{equation}
Z(S_0) = \int{\cal D}_C S\,e^{-\int\limits_{t_0}^\infty dt'L(t')} \bigr|_{S_0}.
\end{equation}
If there are several payoffs, then the linearity of the above equation tells us that the present values simply add up. So we can generalize the computation of the present value to arbitrary, continuously paid payoffs $p(t,S)$:
\begin{equation}
f(t_0,S_0) = \int\limits_{-\infty}^\infty dt \int\limits_{-\infty}^\infty dS\, {\cal G}(t_0,S_0;t,S) p(t,S).
\end{equation}
A fixed payoff at time $T$ can then be represented as $p(t,S) = \delta(t-T)P(S)$.
\subsection{Option price in the GBM market model}
As an example, we compute the present value of a European call option in the geometric Brownian motion market model. The promised payoff of the call option reads
\begin{equation}
p(t,S) = (S-K)^+\delta(t-T),
\end{equation}
where $x^+=x\Theta(x)$. To determine the present value, we use \eqref{eq:PVexv}. It contains an expected-value calculation, where it is best to use the explicit solution \eqref{eq:GBMsol} with the drift $\mu=r=\mathrm{const}$. Then we find, with $\xi\to-\xi$:
\begin{equation}
f(0,S) = e^{-rt} \int\frac{d\xi}{\sqrt{2\pi}} e^{-\frac12 \xi^2 }\left(S e^{(r-\frac12\sigma^2)t - \sigma \xi\sqrt t}-K\right)^+.
\end{equation}
The condition of positivity is $\xi < d_-$, where
\begin{equation}
d_-= \frac1{\sigma\sqrt t}\left(\ln \frac {S}K + (r-\frac12\sigma^2)t\right).
\end{equation}
Thus we have
\begin{equation}
f(0,S) = \int\limits_{-\infty}^{d_-} \frac{d\xi}{\sqrt{2\pi}} e^{-\frac12 \xi^2 }\left(S e^{-\frac12\sigma^2 t - \sigma \xi\sqrt t}-Ke^{-rt} \right).
\end{equation}
The negative of the exponent in the first term is
\begin{equation}
\frac12 \xi^2 + \frac12 \sigma^2 t +\sigma\sqrt t\, \xi = \frac12(\xi+\sigma\sqrt t)^2.
\end{equation}
We can change variable in the first term to $\xi' = \xi +\sigma\sqrt t \in (-\infty, d_+]$; then the upper limit of the integration is
\begin{equation}
d_+ = \frac1{\sigma\sqrt t}\left(\ln \frac {S}K + (r+\frac12\sigma^2)t\right).
\end{equation}
In both terms we then recognize the cumulative normal distribution function, and we finally arrive at the \emph{Black-Scholes formula}
\begin{equation}
f(0,S) = S \Phi(d_+) - Ke^{-rt}\Phi(d_-),
\end{equation}
where
\begin{equation}
\Phi(x) = \int\limits_{-\infty}^x \frac{d\xi}{\sqrt{2\pi}} e^{-\frac12\xi^2}.
\end{equation}
A different form for it reads
\begin{equation}
\frac{f(0,S)}{K e^{-rt}} = e^m \Phi(\frac m z +\frac z2) - \Phi(\frac m z - \frac z2),
\end{equation}
where
\begin{equation}
m = \ln\frac{S}{K e^{-rt}},\qquad z = \sigma\sqrt t.
\end{equation}
The quantity $m$ is sometimes called the moneyness; $m=0$ (at $t=0$ this means $K=S$) corresponds to the at-the-money (ATM) trade.
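For reference, a direct implementation of the formula (using the normal CDF from scipy; the parameter values in the example call are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, t):
    # Black-Scholes price of a European call, f(0,S) above
    d_minus = (np.log(S / K) + (r - 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d_plus = d_minus + sigma * np.sqrt(t)
    return S * norm.cdf(d_plus) - K * np.exp(-r * t) * norm.cdf(d_minus)

# Example call with arbitrary numbers: S = K = 100, r = 3%, sigma = 20%, t = 1
print(bs_call(100.0, 100.0, 0.03, 0.2, 1.0))
\end{verbatim}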
From the closed form above we can also calculate the greeks, for example
\begin{equation}
\begin{split}
\Delta & = \partial_S f = \Phi(d_+) + \frac1{\sigma\sqrt t}\left( {\cal N}(d_+)-\frac{Ke^{-rt}}S{\cal N}(d_-)\right) \\
\kappa & = \partial_\sigma f = S \frac{\partial d_+}{\partial \sigma} {\cal N}(d_+) - Ke^{-rt}\frac{\partial d_-}{\partial \sigma}{\cal N}(d_-),
\end{split}
\end{equation}
where ${\cal N}$ denotes the standard normal density function, and
\begin{equation}
\frac{\partial d_\pm}{\partial \sigma} = -\frac1{\sigma^2\sqrt t}\left( \ln \frac SK + rt\right) \pm \frac12 \sqrt t.
\end{equation}
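We note that, using the identity $S{\cal N}(d_+)=Ke^{-rt}{\cal N}(d_-)$, the bracket in the expression for $\Delta$ vanishes, so that $\Delta=\Phi(d_+)$. This is easy to verify against a finite-difference derivative of the price; the sketch below repeats the pricing function for self-containedness, with arbitrary parameter values:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, t):
    d_minus = (np.log(S / K) + (r - 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d_plus = d_minus + sigma * np.sqrt(t)
    return S * norm.cdf(d_plus) - K * np.exp(-r * t) * norm.cdf(d_minus)

# Finite-difference Delta vs the closed form Delta = Phi(d_+).
S, K, r, sigma, t, h = 100.0, 100.0, 0.03, 0.2, 1.0, 1e-4
delta_fd = (bs_call(S + h, K, r, sigma, t)
            - bs_call(S - h, K, r, sigma, t)) / (2 * h)

d_plus = (np.log(S / K) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
print(delta_fd, norm.cdf(d_plus))      # the two values agree
\end{verbatim}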
\section{Summary}
\label{sec:summary}
The goal of this note was to summarize the ideas used in financial practice in the language of physics. We have used a discrete-time description of the time evolution, which fits best the philosophy of the renormalization group.
This note is far from comprehensive; many details are missing. Moreover, most of the material discussed is known and has been written up in various books, even in a more elaborate way. What makes this note somewhat different is that it puts emphasis on topics that are not usually discussed (such as the discrete-time formalism or the path integral).
\section*{Acknowledgment}
The author gratefully acknowledges useful discussions with K. Cziszter, G. Fath and Z. Foris. This research was supported by the Hungarian Research Fund under the contract K104292.
\section{Introduction} \label{sec: introduction}
The interaction between a two-level system (qubit) and a harmonic oscillator (resonator) has been widely studied, originally as one of the simplest systems to study light-matter interaction~\cite{Rabi1937, Jaynes1963}, and later as a platform for quantum optics~\cite{Walls2008,Gleyzes2007,Goppl2008} and quantum information processing~\cite{Blais2004, Blais2007,Billangeon2015,Billangeon2015a}.
The deep strong coupling (DSC) regime of the qubit-resonator (Q-R) interaction, where the coupling strength $g$ is comparable to or even larger than the transition energies of the qubit ($\Delta$) and the resonator ($\omega_{\rm r}$), has recently been achieved using artificial atoms in superconducting circuit QED systems~\cite{Yoshihara2017,Yoshihara2017a,Yoshihara2018} and THz metamaterials coupled to the cyclotron resonance of a 2D electron gas~\cite{Bayer2017}, as reviewed in Refs.~\cite{FriskKockum2019,Forn-Diaz2019}.
In the DSC regime, the ground state of the Q-R system is quite different from that for weaker coupling~\cite{Ashhab2010,Rossatto2017}.
First, it is an entangled state between qubit and resonator.
Second, such a Schr\"odinger's cat-like state has a nonzero expectation value of the photon number, $\braket{n}=|g/\omega_{\rm r}|^2$.
These photons are referred to as virtual photons, since the system is in the ground state and therefore the photons cannot be spontaneously emitted.
Such nonclassical properties of the ground state are proposed to be useful for quantum metrology~\cite{Facon2016} and the preparation of nonclassical states of photons~\cite{Gheeraert2017,Leroux2017, Gu2017}.
Any quantum system realized in an actual experimental setup, however, is coupled to external degrees of freedom, intentionally or unintentionally.
Particularly, in a superconducting circuit QED system, transmission lines are usually attached to the resonators and the qubits for control and measurement.
A standard prescription for treating an open quantum system in the DSC regime is the Lindblad master equation in the dressed state picture~\cite{Beaudoin2011}, in which the system relaxes to the ground state of the system Hamiltonian at zero temperature.
However, the master equation is based on the Born-Markov approximation, which assumes that the system-environment interaction is sufficiently weak that the system and the environment remain in a product state and the state of the environment does not change during the time evolution.
When there is a nonzero interaction between the system and the environment, however, the total ground state can be entangled~\cite{Ashhab2006} and its reduced density matrix for the system is not necessarily the ground state of the system Hamiltonian.
It is not clear to what extent the Born-Markov approximation is valid, or how robust the ground-state properties of the DSC system are.
In particular, the energy gap between the ground state and the first excited state becomes exponentially small as $\Delta\expo{-2g^2/\omega_{\rm r}^2}$ with increasing $g$~\cite{Nataf2010a}.
So the nonclassicality of the ground state is expected to be fragile against the coupling to an environment in the DSC regime, although the average number of virtual photons has been shown to be only quantitatively affected by losses~\cite{DeLiberato2017}.
By increasing $g$, there is a competition between two effects: the increase of virtual photons, which enhances the nonclassicality, and the exponential decrease of the energy gap ($\sim\Delta\expo{-2g^2/\omega_{\rm r}^2}$), which degrades the nonclassicality due to the increase of fragility, in any realistic setting.
Because of this competition, it is not clear whether just increasing the coupling strength $g$ is helpful to obtain the maximum nonclassicality in the presence of an environment, even at zero temperature.
In this paper, we investigate the ground state properties of an open DSC system.
For this purpose, we propose a variational ground state for the enlarged qubit-resonator-environment system, which we call coherent variational state (CVS), by extending the qubit-state-dependent coherent state~[Eq.~\eref{cat state}] to the total system including the environmental degrees of freedom.
Based on the analysis with the CVS and the numerical diagonalization of the truncated total Hamiltonian, we find that the effect of the coupling to the environment strongly depends on how the system is coupled to the environment, i.e., inductively or capacitively.
This strong dependence results from the fact that the ground state of the Q-R system in the DSC regime breaks rotational symmetry around the origin in the phase space.
When the resonator couples to the qubit and the environment in the same way (for instance, both inductively), we find that the average number of virtual photons increases, and that the quantum superposition realized in the Q-R system is partially degraded.
Furthermore, the ground state superposition tends to be more fragile when the Q-R coupling $g$ is larger, so that the nonclassicality of the resonator state, measured by the metrological power~\cite{Kwon2019}, is maximized at a moderate strength of the Q-R coupling.
When the resonator couples to the qubit inductively and to the environment capacitively, on the other hand, we did not observe any peak of the metrological power in the parameter region that we investigated, so that an enhancement of the metrological power is achievable.
This paper is organized as follows.
In Sec.~\ref{sec: model}, we describe the theoretical model and present examples of superconducting circuits realizing the Hamiltonian.
In Sec.~\ref{sec: CVS}, we outline the variational method based on the CVS.
In Sec.~\ref{sec: numeric}, by using the CVS and the numerical diagonalization, we numerically evaluate the number of virtual photons, the purity in the Q-R system, and the degree of nonclassicality measured by the metrological power.
In Sec.~\ref{sec: stability}, we discuss how the coupling-type dependence arises in the DSC system.
We conclude the paper in Sec.~\ref{sec: conclusion}.
In \ref{sec: circuit hamiltonian}, we describe the detailed derivation of the Hamiltonian from the circuit model.
In \ref{sec: spin-boson model}, we discuss the relation to the spin-boson model.
In \ref{sec: symmetry}, we discuss how the symmetry of the total Hamiltonian restricts the form of the CVS.
In \ref{sec: stationary}, we show that the stationary equations for the CVS can be substantially simplified by introducing a collective variable.
In \ref{sec: validity}, we check the validity of the CVS in the inductive coupling case by comparing the result from the numerical diagonalization.
\section{Model} \label{sec: model}
In this section, we describe the model considered in this paper.
The relationship among the system, the environment, and the total system is summarized in Fig.~\ref{EnergyDiagram}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.6\columnwidth]{EnergyDiagram.pdf}
\caption{Energy diagrams of the qubit-resonator-waveguide system.
(a) The Q-R system (described by $H_{\rm S}$) for $g >\omega_{\rm r}/2$ and the environmental waveguide system (described by $H_{\rm E}$) are coupled with $H_{\rm SE}$.
(b) We denote the enlarged ground state of $H_{\rm tot}=H_{\rm S}+H_{\rm E}+H_{\rm SE}$ (qubit-resonator-waveguide) by $\ket{\rm EGS}=\ket\psi$.
Its reduced density matrix on the Q-R space is referred to as the zero-temperature state and is denoted by $\rho_{ZTS}={\rm tr}_{\rm E}\big[ \ket\psi\bra\psi \big]$.
\label{EnergyDiagram}}
\end{center}
\end{figure}
\subsection{Qubit and resonator}
We consider the quantum Rabi model, which is described by the Hamiltonian
\eq{
H_{\rm S}&=\omega_{\rm r}a^\dagger a + \frac{\Delta}{2}\sigma_x + g\sigma_z (a+a^\dagger). \label{system Hamiltonian}
}
Here, $a$ $(a^\dagger)$ is the annihilation (creation) operator of a resonator photon with energy $\omega_{\rm r}$, $\sigma_j$ ($j=x,y,z$) is the Pauli operator of the qubit with transition energy $\Delta(>0)$, and $g$ is the coupling strength between them.
The Planck constant $\hbar$ is set to unity throughout this paper.
Here, we assume that $X_I=a+a^\dagger$ is proportional to the flux operator, so that the Q-R coupling is inductive.
When the coupling is capacitive, $X_I$ is replaced with $X_C=(a-a^\dagger)/i$ as
\eq{
H_{\rm S}'&=\omega_{\rm r}a^\dagger a + \frac{\Delta}{2}\sigma_x -i g\sigma_z (a-a^\dagger),
}
after choosing an appropriate basis for qubit states.
Both $H_{\rm S}$ and $H_{\rm S}'$ are unitary equivalent through a gauge transformation $a\rightarrow ia$, where the unitary operator is explicitly given by $U=\expo{\pm i\pi a^\dagger a/4}$.
We note that there is a subtlety in the coupling type (inductive/capacitive) because of the gauge ambiguity~\cite{DeBernardis2018a,DiStefano2019,Roth2019}.
Here, the inductive/capacitive coupling refers to the Hamiltonian after choosing the optimal gauge with which the truncated Hamiltonian approximates the original gauge-invariant Hamiltonian well.
When $\omega_{\rm r} \gg \Delta$ or $g \gg \omega_{\rm r},\Delta$, the low-lying eigenstates of the quantum Rabi model are approximately described by~\cite{Ashhab2010, Rossatto2017}
\eq{
\ket{\phi_n^{(\pm)}(\alpha)} = \frac{1}{\sqrt{2}}\left( \ket\uparrow\otimes D(-\alpha)\ket n \pm \ket\downarrow\otimes D(\alpha)\ket n \right), \label{approx eigenstate}
}
which is also denoted as $\ket{\phi_n^{(\pm)}}$.
Here, $\ket\uparrow$ ($\ket\downarrow$) is the qubit eigenstate corresponding to $\sigma_z=1$ ($-1$), $\alpha=g/\omega_{\rm r}$ is the amplitude of displacement in the resonator, and $D(\alpha)=e^{\alpha a^\dagger-\alpha^* a}$ is the displacement operator.
These parameter regions are referred to as the adiabatic oscillator limit~\cite{Ashhab2010, Irish2005,Irish2007,Albert2011} or the perturbative DSC regime~\cite{Forn-Diaz2019, Rossatto2017}, in the sense that the perturbation in terms of $\Delta$ is valid.
The eigenenergy is then, up to first order in $\Delta/\omega_{\rm r}$, given by
\eq{
E_n^{\pm}\simeq n\omega_{\rm r}-\frac{g^2}{\omega_{\rm r}}\pm\frac{\Delta}{2}\braket{n|D(2\alpha)|n} =n\omega_{\rm r}-\frac{g^2}{\omega_{\rm r}}\pm\frac{\Delta \ex^{-2\alpha^2}}{2}L_n(4\alpha^2),
}
where $L_n(x)$ is the Laguerre polynomial of the $n$-th order.
We note that the ground state,
\eq{
\ket{{\rm GS}}=\ket{\phi_0^{(-)}}=\frac{1}{\sqrt{2}}\left( \ket\uparrow\otimes \ket{-\alpha} - \ket\downarrow\otimes\ket{\alpha} \right), \label{cat state}
}
is the superposition of two coherent states displaced in opposite directions depending on the qubit state.
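As a numerical illustration (with an arbitrary Fock-space cutoff and parameter values chosen in the perturbative DSC regime), the quality of this approximation can be probed by diagonalizing $H_{\rm S}$ in a truncated Fock basis and comparing with $E_0^-$ and $\ket{\phi_0^{(-)}}$:
\begin{verbatim}
import numpy as np
from math import factorial

# Diagonalize H_S = wr a^+a + (Delta/2) sx + g sz (a + a^+) with cutoff N.
wr, Delta, g, N = 1.0, 0.5, 1.5, 60
alpha = g / wr

a = np.diag(np.sqrt(np.arange(1, N)), 1)         # annihilation operator
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

H = (np.kron(np.eye(2), wr * a.T @ a)
     + 0.5 * Delta * np.kron(sx, np.eye(N))
     + g * np.kron(sz, a + a.T))
E, V = np.linalg.eigh(H)

E0_pert = -g**2 / wr - 0.5 * Delta * np.exp(-2 * alpha**2)  # n=0, minus branch
print(E[0], E0_pert)

# Overlap with the cat state (|up,-alpha> - |down,+alpha>)/sqrt(2).
n = np.arange(N)
facts = np.array([float(factorial(k)) for k in n])
def coh(al):                                     # coherent state, Fock basis
    return np.exp(-al**2 / 2) * al**n / np.sqrt(facts)

cat = (np.kron([1.0, 0.0], coh(-alpha))
       - np.kron([0.0, 1.0], coh(alpha))) / np.sqrt(2)
print(abs(cat @ V[:, 0]))                        # close to 1 in the DSC regime
\end{verbatim}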
\subsection{Coupling to the environment}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.8\columnwidth]{Circuit.pdf}
\caption{Circuit diagrams of a Q-R system coupled to a waveguide.
The blue shaded area is an arbitrary circuit representing the qubit.
The resonator-waveguide coupling is mediated by (a) a capacitance $C_c$ or (b) an inductance $L_c$.
\label{Circuit}}
\end{center}
\end{figure}
The environment can be modeled by an ensemble of harmonic oscillators~\cite{Leggett1987} as
\eq{
H_{\rm E}&=\sum_k\omega_kb_k^\dagger b_k,
}
where $b_k$ $(b_k^\dagger)$ is the annihilation (creation) operator of the $k$-th mode with energy $\omega_k$.
Each mode of the environment interacts with a system operator $X$ with strength $\xi_k$ as
\eq{
H_{\rm SE}&=\sum_k\xi_kX(b_k+b_k^\dagger). \label{interaction}
}
The total system is then described by the Hamiltonian $H_{\rm tot}=H_{\rm S} + H_{\rm E} + H_{\rm SE}$.
As a concrete model of an open DSC system, we investigate a DSC system coupled to a waveguide through the resonator (Fig.~\ref{Circuit}).
The shaded area can be an arbitrary circuit constituting a qubit, such as a Cooper pair box or a flux qubit.
The waveguide is coupled to the Q-R system through (a) a capacitance $C_c$ or (b) an inductance $L_c$.
When the interaction is mediated by an inductance (a capacitance), the system operator $X$ is the quadrature operator of the resonator, given as $X_I=a+a^\dagger$ ($X_C=(a-a^\dagger)/i$).
We note that in the case of the capacitive coupling, the interaction Hamiltonian is given by $H_{\rm SE}=-\sum_k\xi_k(a-a^\dagger)(b_k-b_k^\dagger)$. This is equivalent to Eq.~\eref{interaction} under the unitary transformation $b_k\rightarrow -ib_k$.
While the phase of the quadrature operator for $b_k$ does not affect the physical result because of the gauge invariance of $H_{\rm E}$, the phase of the quadrature operator for $a$ has a physical influence, since this operator couples the resonator not only to the waveguide but also to the qubit.
In the following, we always assume that the Q-R coupling is inductive, and we analyze the cases where the resonator-waveguide (R-W) coupling is inductive or capacitive, except for Sec.~\ref{sec: stability}, where we discuss the relativity of the coupling.
Assuming a finite length $L$ of the waveguide, the wavenumber $k$ of the waveguide modes is discretized as $k=n\pi/L$ ($n\in\mathbb N$), and the energy is given by $\omega_k=vk$, where $v$ is the speed of microwave fields in the waveguide.
The coupling constant $\xi_k$ is given by
\eq{
\xi_k=\xi_0 \sqrt{\frac{\omega_k}{1+(\omega_k/\omega_{\rm cutoff})^2}\times \frac{\pi}{L}}\ \label{interaction spectrum}
}
in both the capacitive~\cite{Bamba2014} and inductive coupling case.
We do not have to insert the cutoff factor by hand, since it is naturally included in the Hamiltonian derived from the circuit, and the cutoff energy $\omega_{\rm cutoff}$ is determined by the circuit parameters [Eqs.~\eref{cutoff_I} and~\eref{cutoff_C}].
In the small frequency region ($\omega_k \ll \omega_{\rm cutoff}$), the squared coupling strength $|\xi_k|^2$ is proportional to $\omega_k$, which corresponds to the Ohmic case in the spin-boson model (See~\ref{sec: spin-boson model} for details).
The loss rate $\kappa$ of a bare resonator photon into the waveguide is determined by Fermi's golden rule as
\eq{
\frac{\kappa}{2\pi}= \xi_0^2\frac{\omega_{\rm r}}{1+(\omega_{\rm r}/\omega_{\rm cutoff})^2}. \label{loss rate formula}
}
\section{Coherent variational state} \label{sec: CVS}
In this section, we introduce the CVS and analyze the ground state of $H_{\rm tot}$.
In analogy to the approximate ground state of the quantum Rabi model [Eq.~\eref{cat state}],
we define the CVS of the total system as
\eq{
\ket{\psi_C(\alpha,\{\beta_k\})}=\frac{1}{\sqrt{2}}(\ket\uparrow\otimes\ket{-\alpha;\{-\beta_k\} } - \ket\downarrow\otimes\ket{\alpha;\{\beta_k\} }), \label{Def:CoherentVariationalState}
}
where $\ket{\alpha;\{\beta_k\}}$ is the product of coherent states of the resonator and waveguide modes, satisfying $a\ket{\alpha;\{\beta_k\}}=\alpha\ket{\alpha;\{\beta_k\}}$ and $b_k\ket{\alpha;\{\beta_k\}}=\beta_k\ket{\alpha;\{\beta_k\}}$ for each $k$.
The variational parameters for the CVS are $\alpha$ and $\beta_k$.
We note that a more general form $c_0\ket\uparrow\otimes\ket{\alpha;\{\beta_j\} }+ c_1\ket\downarrow \otimes\ket{\alpha';\{\beta'_j\} }$ leads to the same results as the simpler form in Eq.~\eref{Def:CoherentVariationalState} due to the parity symmetry of the quantum Rabi Hamiltonian, as discussed in \ref{sec: symmetry}.
We also note that the renormalization of the qubit energy and the Rabi oscillation were analyzed using a similar ansatz by performing a polaron transformation~\cite{Zueco2019}.
The total energy for the CVS is given by
\eq{
E_{\rm CVS}&=& \braket{{\psi_C(\alpha,\{\beta_k\})}| H_{\rm tot}|{\psi_C(\alpha,\{\beta_k\})}} \nonumber\\
&=&\omega_{\rm r}|\alpha|^2 - g(\alpha+\alpha^*) + \sum_k \omega_k |\beta_k|^2 \nonumber\\
&&\pm \sum_k\xi_k(\alpha\pm\alpha^*)(\beta_k\pm\beta_k^*) - \frac{\Delta}{2}\expo{-2(|\alpha|^2 +\sum_k|\beta_k|^2)},
}
where the plus (minus) sign represents the inductive (capacitive) coupling.
The approximate ground state of the total system is the CVS $\ket{\psi_C(\bar{\alpha},\{\bar{\beta_k}\})}$, where $\bar{\alpha}$ and $\bar{\beta_k}$'s are the variational parameters that minimize the total energy $E_{\rm CVS}$.
Although there are a large number of degrees of freedom due to the numerous waveguide modes, the problem can be simplified into a stationary state problem with only two unknown parameters, $\alpha$ and $S=\sum_k|\beta_k|^2,$ as discussed in \ref{sec: stationary}.
Here, $S$ is a collective variable for the waveguide modes, representing the total number of virtual photons in the waveguide modes.
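For a small illustrative set of waveguide modes, the minimization can also be performed directly over all the variational parameters instead of using the two-parameter reduction. A minimal Python sketch, assuming real variational parameters, the inductive (plus) sign, and toy values of $\omega_{\rm r}$, $\Delta$, $g$, $\omega_k$ and $\xi_k$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy parameters in units of omega_r (purely illustrative)
omega_r, Delta, g = 1.0, 0.2, 1.0
omega_k = np.array([0.8, 1.7, 2.5, 3.3])   # a handful of waveguide modes
xi_k = 0.05 * np.sqrt(omega_k)             # Ohmic-like couplings

def E_cvs(p):
    """E_CVS for real (alpha, beta_k) and the inductive (plus) sign."""
    alpha, beta = p[0], p[1:]
    return (omega_r * alpha**2 - 2 * g * alpha
            + np.sum(omega_k * beta**2)
            + 4 * alpha * np.sum(xi_k * beta)
            - 0.5 * Delta * np.exp(-2 * (alpha**2 + np.sum(beta**2))))

p0 = np.concatenate(([g / omega_r], np.zeros_like(omega_k)))  # bare displacement
res = minimize(E_cvs, p0)
alpha_bar, beta_bar = res.x[0], res.x[1:]
S_bar = np.sum(beta_bar**2)   # collective waveguide variable S
C = np.exp(-2 * S_bar)        # coherence of the zero-temperature state
\end{verbatim}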
Once $\bar{\alpha}$ and $\bar{S}=\sum_k |\bar\beta_k|^2$ are obtained, the reduced density operator for the system, $\rho_{\scalebox{0.65}{ZTS}}={\rm tr}_{\rm E}[\ket{\psi_C(\bar{\alpha},\{\bar{\beta_k}\})}\bra{\psi_C(\bar{\alpha},\{\bar{\beta_k}\})}]$, which we call the zero-temperature state, is completely characterized by two parameters $\bar{\alpha}$ and $C=\expo{-2\bar{S}}$.
It is explicitly expressed as
\eq{
\rho_{\scalebox{0.65}{ZTS}}=&\frac{1+C}{2}\ket{\phi_0^{(-)}(\bar{\alpha})}\bra{\phi_0^{(-)}(\bar{\alpha})} + \frac{1-C}{2}\ket{\phi_0^{(+)}(\bar{\alpha})}\bra{\phi_0^{(+)}(\bar{\alpha})}. \label{reduced density matrix}
}
This equation implies that the R-W coupling has two effects.
First, the displacement $\bar{\alpha}$ is modified, which means that the average number of virtual photons $|\bar{\alpha}|^2$ is changed.
In fact, as we will see in Sec.~\ref{sec: numeric}, the number of virtual photons increases as the R-W coupling increases.
Second, $\rho_{\scalebox{0.65}{ZTS}}$ includes the first excited state of $H_{\rm S}$ with a fraction $P_e=\frac{1-C}{2}$.
A similar behavior can be found in the case of a single two-level system coupled to an environment~\cite{Leggett1987}.
The quantity $C$ serves as a measure of coherence, which is a real quantity within the range of $0\le C\le 1$.
If $\beta_k=0$ and hence $C=1$, the system is in a pure state.
On the other hand, if $C=0$, the quantum superposition is completely destroyed and the reduced density matrix $\rho_{\scalebox{0.65}{ZTS}}$ is maximally mixed.
When we represent $\rho_{\scalebox{0.65}{ZTS}}$ in the basis of $\ket\uparrow\otimes\ket{-\bar{\alpha}}$ and $\ket\downarrow\otimes\ket{\bar{\alpha}}$, the quantity $C$ appears in the off-diagonal element:
\eq{
\rho_{\scalebox{0.65}{ZTS}}=\frac{1}{2}\left(
\begin{array}{cc}
1 & -C \\
-C & 1
\end{array}
\right).
}
In this sense, the R-W coupling reduces the coherence realized in this basis.
As for the validity of the coherent variational state, we note that it gives the exact ground state of the total Hamiltonian if the qubit energy $\Delta$ and the counter-rotating terms $\xi_k( ab_k + a^\dagger b_k^\dagger )$ are neglected.
In Ref.~\cite{Yoshihara2017}, it is argued that the Q-R state $\ket{\phi_0^{(-)}}$ gives a rather accurate description of the ground state of the quantum Rabi Hamiltonian even when the qubit energy $\Delta$ is finite.
Furthermore, we compare the result of the CVS in the inductive coupling case with that of the numerical diagonalization for a case with a few waveguide modes in \ref{sec: validity}, and show that the CVS describes not only the virtual photons but also the nonclassical properties well in the presence of the R-W coupling.
\section{Numerical calculations} \label{sec: numeric}
In this section, we numerically investigate the properties of $\rho_{\scalebox{0.65}{ZTS}}$ based on the CVS.
We adopt the bare resonator photon loss rate $\kappa$ [Eq.~\eref{loss rate formula}] as a measure of the R-W coupling strength.
The other parameters are set to be $\omega_{\rm r}/2\pi=6$ GHz and $\Delta/2\pi=1.2$ GHz.
See~\ref{sec: circuit hamiltonian} for details.
We again note that the Q-R coupling is assumed to be inductive in this section, and we refer to the coupling between the resonator and the waveguide when we mention the inductive or capacitive coupling.
\subsection{Inductive R-W coupling}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.8\columnwidth]{Inductive_VP.pdf}
\caption{The average virtual photon number (a) and the purity (b) plotted against the bare loss rate $\kappa$ in the inductive coupling case.
The Q-R coupling is $g/2\pi=$ 3 GHz (dashed line) and 6 GHz (solid line).
\label{Inductive_VP}}
\end{center}
\end{figure}
We first calculate the average number of virtual photons $|\bar{\alpha}|^2$ in the inductive coupling case.
Figure~\ref{Inductive_VP} (a) shows that, in the inductive coupling case, the number of virtual photons increases as the R-W coupling $\kappa$ increases.
Indeed, the interaction term $\sum_k\xi_k(a+a^\dagger)(b_k+b_k^\dagger)$ acts as a shifter on the resonator phase space in the real direction.
Furthermore, we can show that the energy gradient $ \partial E/\partial\alpha$ takes a negative value at $\alpha=\tilde\alpha$, where $\tilde\alpha$ is the stationary solution in the absence of the R-W coupling, which implies that the average number of virtual photons always increases due to the R-W coupling, independent of the details of the interaction spectrum $\xi_k$.
Next, we calculate the purity of $\rho_{\scalebox{0.65}{ZTS}}$, defined by
\eq{
\gamma=\tr[\rho_{\scalebox{0.65}{ZTS}}^2].
}
Figure~\ref{Inductive_VP} (b) shows that, in the inductive coupling case, the purity decreases as the loss rate $\kappa$ increases.
By comparing the results for $g/2\pi=$ 3 and 6 GHz, we see that the purity also decreases as the Q-R coupling $g$ increases.
In other words, in the DSC regime, the quantum coherence of the ground state becomes fragile when the Q-R interaction $g$ is extremely large.
It implies that, even though the exact ground state of the quantum Rabi model becomes increasingly useful with increasing $g$, for instance, for quantum metrological tasks~\cite{Facon2016}, the maximum performance is achieved at a moderate strength of the interaction $g$ when the coupling to the environment is taken into account.
Indeed, the nonclassicality, measured by the metrological power~\cite{Kwon2019}, has the maximum at a certain value of $g$, as we discuss below.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.8\columnwidth]{WignerFunc.pdf}
\caption{The Wigner functions of the post-measurement state $\rho_{\sigma_x=-1}$ in the inductive coupling case for $g/2\pi=6$ GHz. (a) $\kappa/2\pi=1$ MHz. The state possesses sufficient coherence ($C=0.88$) for the Wigner function to take a negative value around the origin in phase space. (b) $\kappa/2\pi= 40$ MHz. The state is almost decohered ($|C|<10^{-4}$), and the Wigner function is that of the mixture of two coherent states.
\label{WignerFunc}}
\end{center}
\end{figure}
Let us evaluate the ``quantumness'' of the ground state of the system.
There are many ways of defining and quantifying the quantumness~\cite{Kwon2019,Genoni2010,Takagi2018,Albarelli2018,Yadin2018}.
Here, we project the state onto the eigenstates of some qubit operator, and calculate the quantumness from the reduced density matrix of the resonator, by exploiting the resource theory of the nonclassicality in continuous variable systems.
One of the measures of the nonclassicality is the metrological power~\cite{Kwon2019}, which quantifies the maximum achievable quantum enhancement in displacement metrology based on the quantum Fisher information~\cite{Helstrom1968,Holevo2011}.
For a resonator state with the spectral decomposition $\rho_{\rm R}=\sum_i\lambda_i\ket i\bra i$, the elements of the quantum Fisher information matrix for quadrature operators are defined as
\eq{
F_{kl}=2\sum_{i,j}\frac{(\lambda_i-\lambda_j)^2}{\lambda_i+\lambda_j}\braket{i|R^{(k)}|j}\braket{j|R^{(l)}|i} , \ (k,l=1,2),
}
where $R^{(1)}=(a+a^\dagger)/\sqrt{2}$ and $R^{(2)}=(a-a^\dagger)/(\sqrt{2}\,i)$ are the quadrature operators.
Then the metrological power of the resonator state $\rho_{\rm R}$ is given by
\eq{
{\mathcal M}(\rho_{\rm R})=\max\left\{ \frac{\lambda_{\rm max}(F)-1}{2} ,0 \right\},
}
where $\lambda_{\rm max}(F)$ is the maximum eigenvalue of the quantum Fisher information matrix.
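A numerical sketch of this quantity for a resonator state given as a density matrix in a truncated Fock basis is given below; the truncation size and the example states are arbitrary choices:
\begin{verbatim}
import numpy as np
from scipy.special import factorial

def metrological_power(rho, tol=1e-12):
    """Metrological power of a resonator state rho (Fock-basis matrix),
    following the quantum Fisher information formula above."""
    N = rho.shape[0]
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator
    quads = [(a + a.conj().T) / np.sqrt(2),          # R^(1)
             (a - a.conj().T) / (np.sqrt(2) * 1j)]   # R^(2)
    lam, vecs = np.linalg.eigh(rho)                  # spectral decomposition
    R = [vecs.conj().T @ q @ vecs for q in quads]    # <i|R|j> in the eigenbasis
    den = lam[:, None] + lam[None, :]
    w = np.where(den > tol,
                 (lam[:, None] - lam[None, :])**2
                 / np.where(den > tol, den, 1.0),
                 0.0)
    F = np.array([[2 * np.sum(w * Rk * Rl.T) for Rl in R] for Rk in R])
    return max((np.linalg.eigvalsh(F).max() - 1) / 2, 0.0)  # F is Hermitian

# Example: a coherent state versus an odd cat state (alpha, N arbitrary)
N, alpha = 40, 1.5
n = np.arange(N)
coh = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt(factorial(n))
cat = coh - np.exp(-alpha**2 / 2) * (-alpha)**n / np.sqrt(factorial(n))
for psi in (coh / np.linalg.norm(coh), cat / np.linalg.norm(cat)):
    print(metrological_power(np.outer(psi, psi.conj())))
\end{verbatim}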
We consider a projective measurement of a qubit operator
\eq{
\sigma_{\theta,\phi}=\sigma_x\sin\theta\cos\phi+\sigma_y\sin\theta\sin\phi+\sigma_z\cos\theta.
}
The measurement outcome $\sigma_{\theta,\phi}=\pm 1$ is obtained with probability
\eq{
{\rm Prob}[\sigma_{\theta,\phi}=\pm1]= {\rm tr}\left[ P_{\sigma_{\theta,\phi}=\pm 1} \rho_{\scalebox{0.65}{ZTS}} \right],
}
and the post-measurement state for the resonator is
\eq{
\rho_{\sigma_{\theta,\phi}=\pm 1}\propto {\rm tr}_{\rm qubit}\left[ P_{\sigma_{\theta,\phi}=\pm 1} \rho_{\scalebox{0.65}{ZTS}} \right],
}
where $P_{\sigma_{\theta,\phi}=\pm 1} $ is the projection operator.
Physically, the post-measurement resonator state is a partially decohered cat state possessing interference fringes around the origin in the phase space representation.
These fringes become clearer as $\alpha$ and $|C|$ increase, which enables us to measure the displacement more precisely than with coherent states (Fig.~\ref{WignerFunc}).
We can define the average metrological power as
\eq{
{\mathcal M}^{\rm av}(\rho_{\scalebox{0.65}{ZTS}}, \sigma_{\theta,\phi})=\sum_{a=\pm1} {\rm Prob}[\sigma_{\theta,\phi}=a]{\mathcal M}(\rho_{\sigma_{\theta,\phi}=a}).
}
Finally we define the metrological power of the Q-R system state by optimizing the measurement axis of the qubit as
\eq{
{\mathcal M}(\rho_{\scalebox{0.65}{ZTS}})=\max_{\theta,\phi} {\mathcal M}^{\rm av}(\rho_{\scalebox{0.65}{ZTS}}, \sigma_{\theta,\phi}). \label{MP for system}
}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.8\columnwidth]{Inductive_MP.pdf}
\caption{
Metrological power ${\mathcal M}$ [Eq.~\eref{MP for system}] plotted against (a) the loss rate $\kappa$ and (b) the Q-R coupling $g$ in the inductive coupling case.
(a) The Q-R coupling is $g/2\pi=$ 3 GHz (dashed line) and 6 GHz (solid line).
(b) The bare loss rate is $\kappa/2\pi=$ 0 (black solid line), 2 MHz (black dashed line), 10 MHz (green dashed line), and 50 MHz (blue dashed line).
\label{Inductive_MP}}
\end{center}
\end{figure}
In Fig.~\ref{Inductive_MP}, the metrological power of $\rho_{\scalebox{0.65}{ZTS}}$~[Eq.~\eref{MP for system}] is plotted against (a) the loss rate $\kappa$ and (b) the Q-R coupling $g$.
In our setting, the average metrological power is found to be maximized at $\theta=\phi=\pi/2$, corresponding to the measurement of $\sigma_y$.
Figure~\ref{Inductive_MP} (a) shows that for each value of $g$, the metrological power rapidly decreases to zero when $\kappa$ becomes larger than a certain value.
This critical value of $\kappa$ becomes smaller as the Q-R coupling $g$ increases.
Figure~\ref{Inductive_MP} (b) shows that the average metrological power has a maximum at some finite value of $g$.
This maximum is achieved when the loss rate $\kappa$ is comparable to the energy gap $\Delta\ex^{-2g^2/\omega_{\rm r}^2}$, or $g_{\rm opt} \sim\omega_{\rm r}\sqrt{\log(\Delta/\kappa)/2}$, so that the optimal coupling strength $g_{\rm opt}$ increases only logarithmically as $\kappa$ decreases or $\Delta$ increases.
In practice, the loss rate $\kappa$ cannot be too small because measurement and control require a time duration $T\sim1/\kappa$, during which decoherence occurs.
Therefore, this result implies that it is important to design a circuit to have a proper strength of the Q-R coupling $g$ and the loss rate $\kappa$ to obtain an optimal metrological advantage.
\subsection{Capacitive R-W coupling}
The capacitive coupling affects the system much less than the inductive coupling does, as we see below.
Indeed, in the capacitive coupling case, the CVS cannot capture the effect of the R-W coupling, since it gives exactly the same result as the noninteracting case regardless of the coupling strength, as proved in~\ref{sec: stationary}.
Therefore, we compare the results from the CVS and the numerical diagonalization in the capacitive coupling case, together with the CVS result in the inductive coupling case.
To perform the numerical calculation, the total Hamiltonian is truncated as follows.
We take 14 photons and 3 photons into account for the resonator mode and each waveguide mode, respectively.
As for the waveguide modes, we consider 4 modes with energies $\omega_k/2\pi=5,10,15,20$ GHz.
Although this truncation is not sufficient to quantitatively discuss the effect of the coupling to the environment, it shows the difference between the inductive coupling and the capacitive coupling.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.8\columnwidth]{Capacitive_VP.pdf}
\caption{The average virtual photon number (a) and the purity (b) plotted against the bare loss rate $\kappa$, calculated from the numerical diagonalization (black) and the CVS (blue) in the capacitive coupling case, and from the CVS in the inductive coupling case (red).
The Q-R coupling is $g/2\pi=$ 3 GHz (dashed line) and 6 GHz (solid line).
The results of the numerical diagonalization for the inductive coupling case are omitted here, and are shown in~\ref{sec: validity}.
\label{Capacitive_VP}}
\end{center}
\end{figure}
In Fig.~\ref{Capacitive_VP}, the average virtual photon number (a) and the purity (b) are plotted against the loss rate $\kappa$.
In the capacitive coupling case, the average number of virtual photons is much less sensitive to the R-W coupling compared to the inductive coupling case.
This fact can be qualitatively understood as follows.
The coupling operator $X_C=(a-a^\dagger)/i$ acts as a shifter of the displacement $\alpha$ in the imaginary direction.
However, since $\alpha=g/\omega_{\rm r}$ is real without the environment, the amplitude $|\alpha|^2$ is much less sensitive to the imaginary shift than the real shift.
We also see that in Fig.~\ref{Capacitive_VP} (b), the purity is less affected by the capacitive coupling to the waveguide compared to the inductive coupling case.
We will discuss the origin of this stability against the capacitive coupling to the waveguide in Sec.~\ref{sec: stability}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width = 0.8\columnwidth]{Capacitive_MP.pdf}
\caption{Metrological power ${\mathcal M}$ [Eq.~\eref{MP for system}] plotted against (a) the bare loss rate $\kappa$ and (b) the Q-R coupling $g$ calculated from the numerical diagonalization (black) and the CVS (blue) in the capacitive coupling case, and from the CVS in the inductive coupling case (red).
(a) The Q-R coupling is $g/2\pi=$ 3 GHz (dashed line) and 6 GHz (solid line).
(b) The loss rate is $\kappa/2\pi=$ 1000 MHz.
\label{Capacitive_MP}}
\end{center}
\end{figure}
In Fig.~\ref{Capacitive_MP}, the average metrological power is plotted against (a) the bare loss rate $\kappa$ and (b) the Q-R coupling $g$.
Since the effect of the R-W coupling is significantly underestimated due to the truncation and the small number of environmental modes, we choose a rather large value of $\kappa/2\pi=1000$ MHz, which is formally obtained from Eq.~\eref{loss rate formula}.
We see that in the capacitive coupling case, the metrological power calculated from the CVS and that from the numerical diagonalization agree very well, and also that the nonclassicality is hardly affected by the capacitive coupling to the waveguide, compared to the inductive coupling case.
\section{Stability in the R-W capacitive coupling case} \label{sec: stability}
In this section, we discuss the origin of the stability of the ground state when the R-W coupling is capacitive.
To obtain some physical insight, let us first consider the case where a bare resonator is coupled to a waveguide.
The fraction of the first excited state $\ket 1$ contained in the total ground state is proportional to $|\braket{1|X|0}|^2$ at the lowest order, where $X$ is a quadrature operator.
This transition amplitude is completely insensitive to the type of coupling, i.e., inductive $X_I=a+a^\dagger$ or capacitive $X_C=(a-a^\dagger)/i$, as
\eq{
|\braket{1|X_I|0}|^2=|\braket{1|X_C|0}|^2=1.
}
This is due to the fact that the resonator Hamiltonian $H=\omega_{\rm r}a^\dagger a$ and its energy eigenstates $\ket n$ are invariant under rotations around the origin in phase space, represented by the unitary transformation $U=\exp\kakko{i\theta a^\dagger a}$.
On the other hand, since the eigenstates of a DSC system are not invariant under this rotation, the transition amplitude strongly depends on the type of coupling as
\eq{
|\braket{\phi_0^{(+)}(\alpha)|X_I|\phi_0^{(-)}(\alpha)}|^2&=4|{\rm Re}[\alpha]|^2=4\kakko{\frac{g}{\omega_{\rm r}}}^2, \\
|\braket{\phi_0^{(+)}(\alpha)|X_C|\phi_0^{(-)}(\alpha)}|^2&=4|{\rm Im}[\alpha]|^2=0. \label{matrix element capacitive}
}
Therefore, when the Q-R coupling is mediated by the inductance, i.e., $\alpha$ is real, the system is stable against capacitive coupling to the waveguide.
A similar argument is applied in Ref.~\cite{Nataf2011} to protect a qubit from noise based on the fact that the transition amplitude of the qubit operator becomes exponentially small in $g$.
In contrast, in our case, the transition amplitude is exactly zero when the R-W coupling is capacitive.
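These matrix elements can be checked numerically in a truncated Fock basis; in the sketch below the truncation size and the value of $\alpha$ are arbitrary:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 60                                      # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

def coherent(alpha):
    """Coherent state |alpha> built with the displacement operator."""
    vac = np.zeros(N, dtype=complex)
    vac[0] = 1.0
    return expm(alpha * a.conj().T - np.conj(alpha) * a) @ vac

alpha = 1.3                                 # plays the role of g/omega_r (real)
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_m = (np.kron(up, coherent(-alpha)) - np.kron(dn, coherent(alpha))) / np.sqrt(2)
phi_p = (np.kron(up, coherent(-alpha)) + np.kron(dn, coherent(alpha))) / np.sqrt(2)

X_I = np.kron(np.eye(2), a + a.conj().T)          # inductive quadrature
X_C = np.kron(np.eye(2), (a - a.conj().T) / 1j)   # capacitive quadrature

print(abs(phi_p.conj() @ X_I @ phi_m)**2)   # -> 4 (g/omega_r)^2 = 6.76
print(abs(phi_p.conj() @ X_C @ phi_m)**2)   # -> 0
\end{verbatim}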
To see this coupling-type-dependence more directly, we perform a numerical diagonalization of the truncated total Hamiltonian of the qubit-resonator-waveguide system.
The truncation is the same as in the previous section.
Figure~\ref{circuit inductive} shows the fraction of the excited states of the Q-R system contained in the ground state of $H_{\rm tot}$.
In the inductive coupling case, the most dominant excitation is the first excited state $\ket{\phi_0^{(+)}}$.
On the other hand, the fraction of $\ket{\phi_0^{(+)}}$ is not dominant in the capacitive coupling case, as is expected from Eq.~\eref{matrix element capacitive}.
Instead, the most dominant excited state is $\ket{\phi_1^{(-)}}$, which is the only excited state with a nonzero transition amplitude as $|\braket{\phi_1^{(-)}(\alpha)|X_C|\phi_0^{(-)}(\alpha)}|^2=1$.
The reason why the system is not changed in the capacitive coupling case in the CVS analysis in Sec.~\ref{sec: numeric} is that the CVS considers only the two lowest eigenstates $\ket{\phi_0^{(-)}}$ and $\ket{\phi_0^{(+)}}$.
From the analysis performed in Sec.~\ref{sec: numeric}, we cannot determine whether the metrological power monotonically increases as a function of $g$ or peaks at a certain value of $g$ in the capacitive coupling case.
However, the peak, if it exists, is expected to occur at a much larger value of $g$ than in the inductive coupling case, so that a higher metrological power is achievable in the capacitive coupling case.
We should be careful before concluding from our results that the capacitive R-W coupling is superior to the inductive coupling, since there is a tradeoff between the gate speed and the relaxation time~\cite{Koshino2020}.
In the capacitive coupling case, this tradeoff relation indicates that a long relaxation time implies slow control between the ground state and the first excited state.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width =0.9\columnwidth]{ExcitedStates.pdf}
\caption{The fraction of the excited states of the isolated system [Eq.~\eref{system Hamiltonian}] ($\ket{\phi_0^{(+)}}$ (solid line), $\ket{\phi_1^{(-)}}$ (dashed line) and $\ket{\phi_1^{(+)}}$ (dotted line)) contained in $\rho_{\scalebox{0.65}{ZTS}}$, calculated from the numerical diagonalization, plotted against the loss rate $\kappa$.
The R-W coupling is (a) capacitive or (b) inductive.
\label{circuit inductive}}
\end{center}
\end{figure}
Finally, we stress that only the relative phase between the resonator quadrature operators coupled to the qubit and the waveguide is relevant to this stability.
When the Q-R coupling is assumed to be capacitive, the system is sensitive to the capacitive coupling to the waveguide and insensitive to the inductive coupling.
These results are summarized in Table~\ref{Correspondence}.
\begin{table}[htbp]
\begin{center}
\caption{The transition amplitude between the ground state and the first excited state for several system Hamiltonians and different types of the R-W coupling. Pairs of columns connected by arrows are unitarily equivalent.
\label{Correspondence}}
\qquad \includegraphics[width = 0.8\columnwidth]{Correspondence.pdf}
\end{center}
\end{table}
\section{Conclusion} \label{sec: conclusion}
In this paper, by analyzing the ground state of a qubit-resonator-waveguide system, we have investigated the effect of an environment on the ground state of the quantum Rabi model in the DSC regime.
We have introduced the qubit-state-dependent coherent variational state (CVS)~[Eq.~\eref{Def:CoherentVariationalState}].
This variational ansatz is easy to analyze and is consistent with the result from numerical diagonalization.
We have shown that the zero-temperature state $\rho_{\scalebox{0.65}{ZTS}}$ strongly depends on the type of the R-W coupling because of the broken rotational symmetry in the eigenstates of the DSC system.
When the resonator couples to the qubit and the waveguide in the same way (for instance, both are inductive), the number of virtual photons increases due to the R-W coupling, which might be advantageous to detect virtual photons experimentally~\cite{Lolli2015,Munoz2018}.
We have also shown that, even at zero temperature, the Q-R system is a mixed state and contains the excited states of the quantum Rabi Hamiltonian, which implies the fragility of the quantum superposition realized in the ground state.
As a result, the nonclassicality of the resonator system, measured by the metrological power, is maximized at a certain coupling strength $g$, when the environment is taken into account.
We note that the analysis based on the multi-polaron expansion~\cite{Bera2014} suggests that the CVS underestimates the coherence in the system.
To obtain a more accurate result in the inductive coupling case, we may modify the CVS to include more than one polaron.
On the other hand, when the resonator couples to the qubit and the waveguide in different ways (for instance, one is inductive and the other is capacitive), the system is almost unaffected, so that a higher metrological power than in the same-type coupling case is achievable in the presence of the environment.
It is worth considering a better variational ansatz that can quantitatively describe the ground state in such cases.
Our results offer guiding principles to obtain a better metrological advantage when we design superconducting circuit QED systems.
Since it is necessary to perform projective measurements on the qubit to exploit this metrological advantage, our results also demonstrate the advantages of achieving dynamically controllable coupling between qubit and resonator.
\ack
We would like to thank S. Masuda, R. Takagi, and I. Iakoupov for fruitful discussions.
This work was supported by Japan Science and Technology Agency (JST) Core Research for Evolutionary Science and Technology (CREST) Grant Number \mbox{JPMJCR1775} and JST Precursory Research for Embryonic Science and Technology (PRESTO) Grant number \mbox{JPMJPR1767}, Japan.
\section{Keywords:} Urban segregation, Spatial heterogeneity, Syn\-chro\-ni\-za\-tion dynamics, Diffusion dynamics, phase-oscillators}
\end{abstract}
\maketitle
\section{Introduction}
The expansion of urbanization and the progressive increase of the population in cities have intensified the concern over the many dimensions of segregation (e.g., school, economic or ethnic) that have a tangible impact on the health, education and equal opportunities of citizens \cite{kennedy1998income,Elliott1999,collins2000residential,ross2001income,mayer2002economic,acevedo2003residential,wheeler2006urban,owens2018income}. In fact, quantifying the extent of segregation and identifying economically and socially isolated neighborhoods has been a topic of wide interest that first led to the development of global metrics, later extended to spatial ones \cite{Cliff1981,Dawkins2004,Brown2006,Dawkins2006,Wong2011,Rey2013}. Most of the initial spatial measures were limited to first-neighbor indices, which motivated the development of multi-scalar indices that provide a more nuanced picture of segregation \cite{Farber2012,Louf2016,chodrow2017structure,Olteanu2019,sousa2020quantifying,bassolas2021first,bassolas2021diffusion}, yet understanding the role played by each of the scales and their interplay still remains a challenge.
Dynamical processes in general, and in particular diffusion \cite{gomez2013diffusion,sole2013spectral,de2013mathematical,li2013influence,delvenne2015diffusion,de2017diffusion,masuda2017random,cencetti2019diffusive,bertagnolli2021diffusion} and synchronization \cite{arenas2006bsynchronization,arenas2006synchronization,gomez2007paths,gomez2007synchronizability,arenas2008synchronization} dynamics, have been widely studied in complex networks on account of their relation with the spread of diseases and information \cite{gomez2018critical,zhang2016dynamics} and real-world phenomena in social or economic systems \cite{pluchino2005changing,calderon2007trade,erola2012modeling}. Interestingly, they provide insights on the topological scales and structure of networks and reveal the existence of functional meso-scale structures \cite{de2017diffusion,bertagnolli2021diffusion,arenas2006synchronization,gomez2007synchronizability,motter2005network}.
Here we use previous knowledge on diffusion and synchronization dynamics to assess the multi-scale patterns of residential segregation. By moving the focus from the network topology and organization to the node states, we are able to measure how well distributed a population with a certain characteristic is, using the time needed to reach the homogeneous (absorbing) state.
Our framework thus requires the implementation of a population dynamics that drives the system towards the homogeneous state, in our case diffusion and synchronization dynamics. Neither of them constitutes an attempt to model or predict the changes in the spatial distribution of a population characteristic; they are highly stylized simplifications of its evolution that allow us to measure the time needed to attain the homogeneous state, which we consider to be the non-segregated scenario. Dynamical approaches are thus introduced here not because they provide a realistic approximation to the evolution of population dynamics but because they offer a significant advantage when measuring multi-scale correlations, as they do not require taking distance explicitly into account. Moreover, the assumption that cities converge towards uniformity is rather unrealistic without a heavy external driver, and is only a means to construct our measures.
As case studies we provide an analysis of the distribution of citizens of a certain income category in cities from the United States, and of the distribution of a set of socioeconomic indicators in the city of Paris throughout an average day (see Supplementary Material Section~2 and Supplementary Figs.~S8-S10). The analysis of the spatial organization of income categories reveals that the most deprived and affluent sectors display higher diffusion and synchronization times linked to a higher heterogeneity, and allows us to split the cities into two groups depending on the difference in the level of segregation. Finally, we evaluate the level of synchronization at the neighborhood level, which allows us to spot the most sensitive places in a city.
\section{Results}
\subsection{Diffusion dynamics and income segregation}
Citizens exhibit a huge diversity of characteristics usually captured by socioeconomic indicators such as education level, income or ethnicity, and they are often heterogeneously distributed in space: individuals with similar characteristics tend to live close to each other. To assess the heterogeneity of a population with a characteristic $k\in K$, we consider a graph $G(V,E)$ with adjacency matrix $A=\{a_{ij}\}$ in which the spatial units are represented as a set of nodes $V$ connected by a set of edges $E$. The adjacency matrix $A$ we have considered takes $a_{ij}=1$ when spatial units $i$ and $j$ are adjacent and $a_{ij}=0$ otherwise, which is the traditional connectivity matrix used to capture residential segregation. Still, other types of (weighted) matrices could be considered to assess, for example, the impact of mobility on segregation. The state of a node $x_i^k$ is given by the fraction of citizens living in node $i$ that belong to socioeconomic category (or class) $k$, written as
\begin{equation}
x_i^k=\frac{n_i^k}{\sum\limits_{k'} n_i^{k'}},
\end{equation}
where $n_i^k$ is the total number of citizens in unit~$i$ that belong to category~$k$. As extreme cases, $x_i^k=0$ when there are no citizens of category~$k$ living in $i$, and $x_i^k=1$ when all the citizens in node~$i$ belong to category~$k$. Of course, the normalization condition
\begin{equation}
\sum_{k\in K} x_i^k = 1
\end{equation}
is fulfilled for all nodes~$i$.
To measure the multi-scalar patterns of segregation, our assumption is that cities suffering from stronger residential segregation are further from the stationary state where the citizens of category $k$ are homogeneously distributed in space. Although cities are in continuous change and most likely far from equilibrium, similar approaches such as the long-standing Schelling and the Alonso-Muth-Mills models have been able to draw relevant conclusions from the equilibrium state \cite{schelling1971dynamic,fujita1989urban}.
By adopting diffusion dynamics we do not deny the high complexity of population dynamics influenced by a wide variety of demographic, economic, political, and behavioral factors \cite{zhang2004dynamic,clark2009changing,zhang2011tipping,deluca2013segregating}, but avoid introducing further parameters and factors that could hinder our aim of characterizing the segregation of a particular population category. Bear in mind that our final goal is by no means to assess real-world migration processes but to construct a multi-scalar measure of segregation that does not explicitly include the distance, avoiding the use of more complex and realistic approaches that would complicate the interpretation of the results. Diffusion constitutes one of the most basic approximations to how information, or any other characteristic, is transmitted through a system. Although far from the real behavior, it provides one of the simplest scenarios where the flow of population follows a gradient.
In fact, we focus on one of the best-case scenarios where the values of $x_i^k$ converge towards equilibrium following a gradient, which could be interpreted as the change of residence of citizens of category $k$ to regions where they are less abundant.
\begin{table}[t!]
\centering
\begin{tabular}{lc}
\hline
\hline
Class & Income (\$) \\
\hline
1 & Less than 10,000 \\
2 & 10,000 -- 14,999 \\
3 & 15,000 -- 19,999 \\
4 & 20,000 -- 24,999 \\
5 & 25,000 -- 29,999 \\
6 & 30,000 -- 34,999 \\
7 & 35,000 -- 39,999 \\
8 & 40,000 -- 44,999 \\
9 & 45,000 -- 49,999 \\
10 & 50,000 -- 59,999 \\
11 & 60,000 -- 74,999 \\
12 & 75,000 -- 99,999 \\
13 & 100,000 -- 124,999 \\
14 & 125,000 -- 149,999 \\
15 & 150,000 -- 199,999 \\
16 & 200,000 or more \\
\hline
\end{tabular}
\caption{Income range (in US dollars) corresponding to each category (or class).} \label{Table1}
\end{table}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.95\textwidth]{Fig1.pdf}
\end{center}
\caption{\textbf{Diffusion dynamics as a measure for income segregation.} (A) Diffusion time $\widetilde{\tau}_{\rm diff}(k)$ for each of the $16$ income categories in Boston, Cleveland, Denver and Detroit. (B) Median value of $\widetilde{\tau}_{\rm diff}(k)$ across income categories as a function of its variance. (C) Ranking for the median value of $\widetilde{\tau}_{\rm diff}(k)$ for the studied set of US cities. }
\label{fig1}
\end{figure*}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.999\columnwidth]{Fig2.pdf}
\end{center}
\caption{\textbf{Average temporal evolution of the abundance of households within the lowest and highest income.} Temporal evolution of centroids after performing a k-means clustering on the normalized abundance of households with category~$k$, $P(x_i^k,t)$, as a function of time~$t$ for the lower (A) and higher (B) income categories.}
\label{fig2}
\end{figure}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.90\textwidth]{Fig3.pdf}
\end{center}
\caption{\textbf{Synchronization time as a measure for income segregation.} (A) Synchronization time for each of the $16$ income categories in Boston, Cleveland, Denver and Detroit. (B) Ranking for the median value of $\widetilde{\tau}_{\rm sync}(k)$ for the studied set of US cities. (C) Average value of $P(\widetilde{\tau}_{\rm sync}(k))$ as a function of each income category~$i$ for the two main clusters detected. (D) Location and cluster assignment for each of the analyzed cities.}
\label{fig3}
\end{figure*}
We focus on the economic segregation in the metropolitan areas of the United States with more than $1$~million inhabitants and analyze a dataset containing the number of households within an income interval~$k$ residing in each census tract (see Table~\ref{Table1}).
Once we have the set of initial node states $x_i^k$, their evolution through time is determined by the diffusion dynamics
\begin{equation}
\frac{dx_i^k}{dt}=\frac{1}{s_i}\sum\limits_{j=1}^N a_{ij}(x_j^k-x_i^k),
\end{equation}
where
\begin{equation}
s_i = \sum\limits_{j=1}^N a_{ij}
\end{equation}
is the degree of node $i$. For simplicity, we have opted to use normalized diffusion dynamics, with a diffusion strength equal to~1. Note that we have independent diffusion processes for each category~$k$.
The diffusion dynamics last until the stationary state, $x_i^k=\avg{x^k}$ $\forall i$, is reached, and we denote the spanned time as $\tau_{\rm diff}(k)$. Since the time to reach the stationary state can be infinitely large, we consider that it is reached when the variance of $x_i^k$ across spatial units becomes lower than $0.0001$. We hypothesize that lower values of $\tau_{\rm diff}(k)$ are related to a more homogeneous distribution of the population within a category~$k$, and higher values to a more heterogeneous one. In the extreme case in which all units have the same initial value of $x_i^k$, the diffusion time $\tau_{\rm diff}(k)$ would attain its minimum value. As we aim to compare cities with different characteristics, we control for confounding factors, such as the particular distribution of $x^k$ or the topology of the graph, by running the same diffusion dynamics on the same graph but with reshuffled values of $x^k$, thus defining the average null-model diffusion time $\tau^{\rm null}_{\rm diff}(k)$, calculated over $500$ reshuffling realizations. The relative diffusion time we will use throughout this manuscript can then be written as
\begin{equation}
\widetilde{\tau}_{\rm diff}(k) = \frac{\tau_{\rm diff}(k)}{\tau^{\rm null}_{\rm diff}(k)}.
\end{equation}
A relative diffusion time equal to one means that
it is compatible with the null model, i.e., there are no remarkable spatial dependencies, while a greater value suggests that spatial heterogeneities delay the arrival to the stationary state.
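As an illustration of the procedure, a minimal Python sketch for a generic adjacency matrix could read as follows; it assumes a simple Euler integration with the variance threshold quoted above, a connected graph, and an adjustable number of null-model realizations:
\begin{verbatim}
import numpy as np

def diffusion_time(A, x, dt=0.01, tol=1e-4):
    """Time for dx_i/dt = (1/s_i) sum_j a_ij (x_j - x_i) to bring the
    variance of x across nodes below tol (Euler integration)."""
    s = A.sum(axis=1)
    t = 0.0
    while np.var(x) >= tol:
        x = x + dt * (A @ x / s - x)
        t += dt
    return t

def relative_diffusion_time(A, x, n_null=500, seed=0):
    """tau_diff / tau_diff^null, the null model being a reshuffle of x."""
    rng = np.random.default_rng(seed)
    tau = diffusion_time(A, x.copy())
    tau_null = np.mean([diffusion_time(A, rng.permutation(x))
                        for _ in range(n_null)])
    return tau / tau_null
\end{verbatim}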
We analyze the normalized diffusion times $\widetilde{\tau}_{\rm diff}(k)$ by running simulations for all US cities above $1$ million inhabitants and each of the $16$ income categories~$k$ as a proxy for how heterogeneously distributed the population is; we have excluded New York City, whose adjacency network does not provide an accurate picture of residential segregation due to the particular geography of Manhattan. In Fig.~\ref{fig1}(A) we display $\widetilde{\tau}_{\rm diff}(k)$ in Boston, Cleveland, Detroit and Denver, observing a common qualitative behavior: smaller values for middle-income categories, and higher ones for the categories in the extremes of the income distribution. Our results suggest that the wealthiest and most deprived citizens suffer from stronger segregation and display a more clustered spatial distribution. More interestingly, category~$9$ seems to be the most homogeneously distributed across space, in agreement with the results observed in \cite{Bassolas2020b} and with the mean and standard deviation of $x^k$ as well as the Moran's I (see Supplementary Material Section~1 and Supplementary Fig.~S1). Still, there are strong quantitative differences, with Cleveland and Detroit displaying higher values for most of the categories, in contrast to Boston and Denver.
Since $\widetilde{\tau}_{\rm diff}(k)$ takes a set of $16$~values for each city, we calculate their median and variance over all categories to ease the comparison between the set of cities studied. While the median value provides information on the segregation across all economic categories, its variance reports the variability among them. Figure~\ref{fig1}(B) shows this median value of $\widetilde{\tau}_{\rm diff}(k)$, $\mbox{med}(\widetilde{\tau}_{\rm diff}(k))$, as a function of its variance, $\mbox{var}(\widetilde{\tau}_{\rm diff}(k))$. The aforementioned cities appear ordered as Detroit, Cleveland, Boston and Denver, although the variance is very similar for Cleveland and Boston, likely due to the high values observed for low-income categories in Boston. Finally, we provide in Fig.~\ref{fig1}(C) the ranking of the selected US cities according to $\mbox{med}(\widetilde{\tau}_{\rm diff}(k))$, as a measure of the overall segregation in cities. At the top of it, we find cities such as Milwaukee or Detroit, which have been reported to suffer from economic and ethnic segregation \cite{adelman2004neighborhood,thomas2015race,florida2015segregated}.
By applying diffusion dynamics we implicitly assume that $x^k$ evolves homogeneously towards consensus, which, more than a realistic scenario, is a means to calculate the time needed to reach consensus and obtain a measure of segregation. To further inspect the actual change of $x_i^k$ between 2011 and 2019 in each of the spatial units~$i$, we first construct the normalized time-series for each spatial unit across those years as
\begin{equation}
P(x_i^k,t)=\frac{x_i^k(t)}{\sum\limits_{t'} x_i^k(t')},
\end{equation}
and then cluster, for each category~$k$, the temporal profiles of all the nodes. For the clustering, we have made use of the k-means algorithm \cite{hartigan1979algorithm,likas2003global}, grouping together those units with a similar temporal evolution, and setting the number of clusters to~3. The resulting time-series of the corresponding centroids for the highest and lowest income categories are depicted in Fig.~\ref{fig2}, where a non-monotonic behavior is observed in most of the cases, with oscillatory behaviors through time of varying amplitude.
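This clustering step can be reproduced with any standard k-means implementation. A sketch using scikit-learn, where the hypothetical input array \texttt{X} collects the yearly values of $x_i^k$ for all spatial units of a city and a fixed category~$k$:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def cluster_profiles(X, n_clusters=3, seed=0):
    """Cluster the normalized temporal profiles P(x_i^k, t) with k-means.
    X has shape (n_units, n_years)."""
    P = X / X.sum(axis=1, keepdims=True)       # normalize each time series
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(P)
    return km.labels_, km.cluster_centers_     # centroids as plotted in Fig. 2
\end{verbatim}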
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.85\textwidth]{Fig4.pdf}
\end{center}
\caption{\textbf{Local synchronization time as a measure for income segregation.} Normalized synchronization time for each census tract in Denver (A--C) and Detroit (E--G) for three different income categories: (A, E) class~1 (low income); (B, F) class~8 (middle income); (C, G) class~16 (high income). We provide as a reference the median income of each census tract in (D) Denver and (H) Detroit.}
\label{fig4}
\end{figure*}
\subsection{Synchronization dynamics and income segregation}
According to the oscillations in the temporal evolution of $x^k_i$ (Fig.~\ref{fig2}), diffusion dynamics appear to be a rather simplistic approach to assess the time needed to converge. Even though we do not aim to mimic the real evolution of $x^k_i$, we seek a dynamical process that at least resembles its real behavior in a qualitative way. Thus, despite still constituting a stylized approximation, a dynamical process with an oscillatory behavior, like a system of coupled Kuramoto oscillators, appears to be a better way to assess the spatial heterogeneity of socioeconomic indicators across cities. To analyze segregation in terms of synchronization dynamics, we treat each of the spatial units~$i$ as an individual Kuramoto oscillator, with an initial phase $\theta_i^k(0)$ that is set by distributing the fraction of population in node $i$ that belongs to a category~$k$ within the range $[0,\pi]$ as
\begin{equation}
\theta_i^k(0)=x_i^k\pi.
\end{equation}
The interaction between spatial units is given by the Kuramoto model
\begin{equation}
\dot{\theta}_i^k(t)=\omega_i^k+\frac{1}{s_i}\sum\limits_{j=1}^N a_{ij} \sin\left(\frac{\theta^k_j(t)-\theta_i^k(t)}{2}\right),
\end{equation}
where we have modified the traditional interaction term between oscillators by dividing the angle difference by two, allowing for the interaction between regions displaying extreme values of $x_i^k$. Additionally, to facilitate the global synchronization of the system, we set all the individual natural frequencies of the oscillators to the same value, i.e., $\omega_i=1,\ \forall i$. In order to account separately for the segregation of each category $k$, our approach assumes that there is no interaction between categories and, thus, the phases $\theta_i^k$ synchronize independently for each $k$.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.80\textwidth]{Fig5.pdf}
\end{center}
\caption{\textbf{Synchronization time and median income.} Normalized synchronization time as a function of the median income averaged over bins of \$5,000 for class~1 (A), class~8 (B) and class~16 (C).}
\label{fig5}
\end{figure*}
We use the standard order parameter $|z^k|$ to assess the global level of synchronization for a category $k$ in a city, where
\begin{equation}
z^k=\frac{1}{N}\sum\limits_{j=1}^N e^{i\theta_j^k},
\end{equation}
and $N$ is the total number of spatial units or Kuramoto oscillators \cite{arenas2008synchronization}. We consider that a city has reached the synchronized state when $|z^k|>0.999$. As in the case of diffusion, we assess how the distribution of initial phases determines the synchronization of the system, a city in our case, by measuring the time $\tau_{\rm sync}(k)$ required to reach the synchronized state. The more heterogeneously distributed the initial phases are, the higher the time the system requires to synchronize. To distinguish between the effect produced by the spatial distribution $x_i^k$ from its overall distribution as well as the topology of the graph, we also measure the average time the system needs to synchronize when the same phases are redistributed at random, $\tau^{\rm null}_{\rm sync}(k)$. The normalized synchronization time of the system is then given by the ratio
\begin{equation}
\widetilde{\tau}_{\rm sync}(k)=\frac{\tau_{\rm sync}(k)}{\tau^{\rm null}_{\rm sync}(k)}.
\end{equation}
Like for diffusion, a synchronization time close to one means that the spatial distribution of phases is compatible with the null model, and a larger value indicates that spatial heterogeneities delay the appearance of a synchronized state.
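The synchronization time can be estimated with the same structure as the diffusion sketch above, replacing the linear dynamics by the modified Kuramoto model; the integration step and the null-model sample size are again adjustable choices:
\begin{verbatim}
import numpy as np

def sync_time(A, x, dt=0.01, z_thr=0.999):
    """Time for the modified Kuramoto dynamics, with theta_i(0) = pi * x_i
    and identical natural frequencies, to reach |z| > z_thr."""
    s = A.sum(axis=1)
    theta = np.pi * np.asarray(x, dtype=float)
    t = 0.0
    while abs(np.mean(np.exp(1j * theta))) <= z_thr:
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta = theta + dt * (1.0 + (A * np.sin(diff / 2)).sum(axis=1) / s)
        t += dt
    return t

def relative_sync_time(A, x, n_null=100, seed=0):
    """tau_sync / tau_sync^null with reshuffled initial phases."""
    rng = np.random.default_rng(seed)
    tau = sync_time(A, x)
    tau_null = np.mean([sync_time(A, rng.permutation(x)) for _ in range(n_null)])
    return tau / tau_null
\end{verbatim}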
In Fig.~\ref{fig3}(A) we inspect the normalized synchronization time in Boston, Cleveland, Detroit and Denver when spatial units interact through Kuramoto-like dynamics. All four of them share similar features, with central classes displaying smaller synchronization times compared to the most disadvantaged and wealthier ones. This is an expected result, since those individuals in the extremes of the income distribution tend to be more isolated and clustered together compared to middle-income citizens. Despite sharing qualitative features, the cities shown display sharp quantitative differences. Almost all categories appear to be significantly more isolated in Detroit and Cleveland compared to Denver and Boston, where $\widetilde{\tau}_{\rm sync}(k)$ looks much flatter. Overall, the synchronization results are compatible with the diffusion ones, likely because both dynamical processes share common features. We have further checked that the mean $\avg{x^k}$ does not directly determine the normalized synchronization times in Supplementary Fig.~S1.
Likewise with diffusion, we calculate the median and variance of $\widetilde{\tau}_{\rm sync}(k)$ over all categories to be able to compare between analyzed cities (see Supplementary Fig.~S2 for the individual rankings of $\widetilde{\tau}_{\rm sync}(k)$ for the categories $k=1$ and $k=16$). The ranking is shown in Fig.~\ref{fig3}(B) and has cities such as Detroit, Cleveland, Milwaukee or Memphis close to the top, which are well-known for being among the most economically segregated cities in the United States. The location in the ranking of the cities in Fig.~\ref{fig3}(A) is consistent with our observations, with Boston and Denver on the bottom of the ranking and Detroit and Cleveland on the top of it.
Our index is given by the median value of the normalized synchronization times, yet depending on the dimension of segregation we aim to capture, we can also construct an index based on a population-weighted average. Whereas the median gives equal weight to each economic category, focusing on the segregation suffered by residents of category $k$, the weighted average provides an overall picture of segregation taking the population of each category into account. We show the ranking obtained for the weighted average index and its relation with $\mbox{med}(\widetilde{\tau}_{\rm sync}(k))$ in Supplementary Figs.~S6 and~S7. Additionally, we show in Supplementary Figs.~S3 and~S4 how $\widetilde{\tau}_{\rm sync}(k)$ significantly correlates with the traditional Moran's I \cite{moran1948interpretation} as well as with a multi-scale quantity based on class mean first passage times developed in \cite{bassolas2021first,bassolas2021diffusion}, reinforcing the idea that synchronization (and diffusion) dynamics indeed capture the patterns of residential segregation. Although the dynamics we have used are stylized versions of the real behavior of the quantity $x^k_i$ and do not capture the full complexity of its temporal evolution, they are able to capture segregation with values comparable to other segregation indicators.
Although $\widetilde{\tau}_{\rm sync}(k)$ is larger for extreme categories in most of the cities, some of them like Denver display smaller variations than others such as Detroit and, therefore, it might be of interest to group cities according to the change in synchronization times. By running a k-means algorithm on the normalized value of $P(\widetilde{\tau}_{\rm sync}(k))$ so that $\sum_{k} P(\widetilde{\tau}_{\rm sync}(k))=1$, we can split the cities under study into those with larger and those with smaller differences in $\widetilde{\tau}_{\rm sync}(k)$, see Fig.~\ref{fig3}(C). In Fig.~\ref{fig3}(D), we display the cluster assigned to each metropolitan area, where no strong spatial pattern is observed. Still, the cities in the Midwest, which are known for being economically segregated, fall into the red cluster, together with other cities such as Baltimore or Los Angeles. If, instead, we focus on the blue cluster, we have cities such as Sacramento or Washington D.C. Among the cities discussed in Fig.~\ref{fig3}(A), Denver falls into the group with more homogeneous segregation (in blue) and the rest into the one with more unequal segregation patterns (in red).
Beyond the global quantification of segregation, we can also evaluate the local level of segregation of a concrete census tract $i$ at a given time step $t$ by computing
\begin{equation}
\rho^k_i(t)=\cos(\theta^k_i(t)-\Phi^k(t)),
\end{equation}
where $\theta^k_i(t)$ is the phase of unit $i$ at time $t$ and $\Phi^k(t)$ is the average phase of all the oscillators in a city at a given time $t$ \cite{arenas2006synchronization}. When $\rho^k_i(t)>0.999$ we consider that oscillator $i$ has synchronized, from which we can obtain $\tau^{\rm loc}_i(k)$. However, given that $\rho_i(\tau^{\rm loc}_i(k))$ can oscillate through time, we only consider that a unit $i$ has reached the global synchronized state at a time $\tau^{\rm loc}_i(k)$ when $\rho_i(t>\tau^{\rm loc}_i(k))$ does not go below $0.999$ anymore; otherwise our methodology could fail to capture long-range correlations. In order to provide a metric for each spatial unit, simulations last until all the spatial units have fulfilled the synchronization criterion. Normalizing $\tau^{\rm loc}_i(k)$ by its null-model counterpart yields $\widetilde{\tau}^{\rm loc}_i(k)$, a measure of the local synchronization time.
Figure~\ref{fig4} displays the normalized synchronization times for each of the census tracts in Denver and Detroit, focusing on three very distinct income categories: low income, Fig.~\ref{fig4}(A,E); middle income, Fig.~\ref{fig4}(B,F); and high income, Fig.~\ref{fig4}(C,G). To ease the comparison between income categories, the range of values is common for all the maps, evincing the strong differences between Detroit and Denver, especially for the low and high-income categories. The shape of the segregation in Detroit can be outlined by the lower-income downtown and the richer suburbs, being the most segregated parts, and a less-segregated region in-between. In the case of Denver, we only slightly see high values for the low-income category in the North of the city and the high-income category in the South.
As we detail in Supplementary Fig.~S5, the spatial patterns of segregation produced by the synchronization dynamics are significantly different from those obtained from first-neighbor quantities such as the Moran's I. Instead of focusing on those regions whose proportion of citizens is high (or low) compared to their neighbors, our methodology highlights those with a ratio of population within a category~$k$ distinct from the average, either because it is high or low, and spatially isolated from those regions with average values. In other words, a region with a high proportion of residents of category $k$ might not show a large local spatial correlation if its neighbors have similar values but could, instead, produce high values of $\widetilde{\tau}^{\rm loc}_i(k)$ if it is isolated from those regions displaying a proportion of citizens closer to the city average. Like the majority of spatial measures, our approach can also suffer from the so-called modifiable areal unit problem \cite{fotheringham1991modifiable}. However, given that our methodology captures mid- and long-range correlations instead of local differences, it might be less affected by such small local changes.
Finally, we inspect whether the synchronization time of a region displays any type of connection with its actual income. To do so, we plot in Fig.~\ref{fig5} the normalized local synchronization time as a function of the median income averaged over all the census tracts within bins of \$5,000 in four US cities. Again we see that segregation is much stronger in Detroit, followed by Cleveland and Boston. High-income regions are more segregated in Boston compared to Cleveland. In general terms, the census tracts with a median income between \$50,000 and \$80,000 seem to be the least segregated ones, as they synchronize faster for both low and high-income categories. These results are in agreement with the cluster assignment of the previous cities, with Detroit, Cleveland and Boston in the red cluster, where low and high-income categories need more time to synchronize, and Denver in the blue cluster, where only the high-income categories need more time to synchronize.
\section{Discussion}
Traditional spatial segregation indicators that focus on the local scale of segregation fail in most cases to capture the presence of long-range correlations, thus highlighting the need for multi-scale indices \cite{Farber2012,Louf2016,chodrow2017structure,Olteanu2019,bassolas2021first,sousa2020quantifying,bassolas2021diffusion}. Our framework does not consider any specific scale, but uses a dynamical approach that captures the patterns of segregation across multiple scales. We have revealed how categories in the extremes of the income distribution are more heterogeneously distributed in space than the middle classes, displaying larger diffusion and synchronization times. This approach has also allowed us to group together those cities that display common features of segregation. In this context, it is important to note that our work does not attempt to model the evolution of income segregation, nor can it be used as a forecasting tool; rather, it makes modeling assumptions to assess the level of segregation that a distribution of population exhibits.
Although the main manuscript focuses on economic segregation, our methodology can be used to assess the heterogeneity in the spatial distribution of any characteristic. Moreover, it can go beyond the spatial component of segregation by including in the analysis other types of graphs, e.g., the daily mobility network of citizens. In this way, we could assess how citizens of diverse socioeconomic environments interact through mobility \cite{xu2019quantifying,toth2021inequality,bokanyi2021universal,moro2021mobility}.
Summarizing, we show how diffusion and synchronization dynamics can be used in some systems to assess the heterogeneity in the distribution of node features. While the present work focuses on the initial phases of oscillators and their synchronization time, node metadata could also be understood as an internal frequency and provide further insights on feature correlation across topological scales.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
A.B.\ performed the research. A.B.\, S.G.\ and A.A.\ designed the research and wrote the manuscript.
\section*{Acknowledgments}
A.B.\ acknowledges financial support from the Ministerio de Ciencia e Innovaci\'on under the Juan de la Cierva program (FJC2019-038958-I). We acknowledge support by Ministerio de Econom\'{\i}a y Competitividad (PGC2018-094754-BC21, FIS2017-90782-REDT and RED2018-102518-T), Generalitat de Catalunya (2017SGR-896 and 2020PANDE00098), and Universitat Rovira i Virgili (2019PFR-URV-B2-41). A.A.\ acknowledges also ICREA Academia and the James S.\ McDonnell Foundation (220020325).
\section*{Data Availability Statement}
The income data analyzed in the present text can be found at \cite{income}.
\clearpage
\renewcommand\theequation{{S\arabic{equation}}}
\renewcommand\thetable{{Supplementary S\Roman{table}}}
\renewcommand{\figurename}{Supplementary Figure}
\renewcommand\thefigure{{S\arabic{figure}}}
\renewcommand\thesection{{Section S\arabic{section}}}
\setcounter{section}{0}
\setcounter{table}{0}
\setcounter{figure}{0}
\setcounter{equation}{0}
\onecolumngrid
\section{Supplementary results for diffusion and synchronization dynamics and economic segregation}
We provide here supplementary results related to the study of income segregation in US cities. Figure~\ref{figS1} reports (A) the mean and (B) the standard deviation of $x_i^{k}$ in Boston, Cleveland, Detroit and Denver. Both of them reach minimum values between classes 8 and 10.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.8\textwidth]{FigS01.pdf}
\end{center}
\caption{(A) Mean of $x^k$ for each income category in Boston, Cleveland, Detroit and Denver. (B)~Standard deviation $\sigma(x^k)$ for each income category in Boston, Cleveland, Detroit and Denver. (C)~Moran's I for each income category in Boston, Cleveland, Detroit and Denver. (D)~Scatter plot of the mean of $x^k$ as a function of $\widetilde{\tau}_{\rm sync}(k)$.}
\label{figS1}
\end{figure}
The fact that classes 8--10 appear to be the least segregated is also supported by the Moran's I, as Fig.~\ref{figS1}(C) shows. To further assess that the mean $\avg{x^{k}}$ does not strongly determine the values of $\widetilde{\tau}_{\rm sync}(k)$, we plot both quantities in Fig.~\ref{figS1}(D), where no strong pattern is observed. Categories with low $\avg{x^{k}}$ display high variability in $\widetilde{\tau}_{\rm sync}(k)$ and vice versa.
In Fig.~\ref{figS2} we provide the ranking of the selected US cities according to the value of $\widetilde{\tau}_{\rm sync}(k)$ for the lowest and highest income categories~1 and 16, respectively. As can be seen, there are significant variations in the ranking depending on which economic category is shown; for example, Cleveland is close to the top for category 1 but far apart for 16, and the other way around for Seattle.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{l}
A\hfill Class 1 \hfill\mbox{} \\
\includegraphics[width=0.95\textwidth]{FigS02a.pdf} \\
B\hfill Class 16 \hfill\mbox{} \\
\includegraphics[width=0.95\textwidth]{FigS02b.pdf}
\end{tabular}
\end{center}
\caption{Ranking of the selected US cities according to the value of $\widetilde{\tau}_{\rm sync}(k)$, for income class~1 (A) and income class~16 (B).}
\label{figS2}
\end{figure}
\clearpage
\section{Comparison with other segregation measures}
In this section we assess how the normalized synchronization time $\widetilde{\tau}^k_{\rm sync}$ relates to other segregation measures. In particular we focus on the widely used Moran's I \cite{moran1948interpretation}, which focuses on local correlations, and one obtained from class mean first passage times (CMFPT) developed in \cite{bassolas2021first,bassolas2021diffusion}, which captures long-range spatial correlations.
For each city and category $k$ the Moran's I can be written as
\begin{equation}
I^k =
\frac{\displaystyle\frac{1}{W}\sum_{i=1}^n \sum_{j=1}^n w_{ij}(x^k_i - \bar{x}^k)(x^k_j -
\bar{x}^k)}{\displaystyle\frac{1}{n}\sum_{i=1}^n (x^k_i - \bar{x}^k)^2},\label{eq:morani}
\end{equation}
where $x^k_i$ is the fraction of population in $i$ that belongs to category $k$, $\bar{x}^k$ is its mean across all spatial units, the weights $w_{ij}$ correspond in our case to the spatial adjacency matrix $a_{ij}$, and $W=\sum_{i=1}^n\sum_{j=1}^n w_{ij}$ is the total weight.
As an index to assess the long-range correlations in the spatial distribution of the income categories, we will use the class mean first passage times between classes. In this methodology \cite{bassolas2021first,bassolas2021diffusion}, random walkers start from each of the spatial units in a system and move through the spatial adjacency graph until they have visited the $16$~classes at least once. For this, each location is assigned to a class with probability proportional to its corresponding fraction of population. By averaging the number of steps that a walker needs to reach class $j$ across all the units that belong to category $i$ and for multiple realizations, we can obtain the class mean first passage times $\tau_{ij}$, which encapsulate the average number of steps needed to reach a unit of category $j$ when a walker departs from a unit of category $i$. After normalizing by a null-model in which colors are uniformly reshuffled at random to compensate for uneven class abundances, we finally obtain the normalized class mean first passage times $\widetilde{\tau}_{ij}$. The quantity $\widetilde{\tau}_{ij}$ provides thus information on how much time you need to reach category $j$ when a walker departs from a unit of category $i$ as compared to the null-model, values below $1$ mean that two categories are closer than in the null-model and vice-versa for values above $1$. To summarize the segregation of category $k$ in a city we will use the CMFPT index, i.e., the $\mbox{med}(\widetilde{\tau})_k$ given by the median value of $\tau_{jk}$ $\forall j$.
For each city included in our analysis, we measure the Pearson correlation coefficient $r_p$ between each of the additional segregation quantities and $\widetilde{\tau}^k_{\rm sync}$ for all the $16$ categories $k$. More specifically, for each city $r_p$ is calculated over a set of $16$ points. The distribution of $r_p$ across cities is shown in Fig.~\ref{figrp} for the Moran's I (A) and $\mbox{med}(\widetilde{\tau})_k$ (B), where a skewness towards high values is clearly observed. Most of the cities display correlations above $0.8$ with the Moran's I and $0.7$ with the CMFPT index. Additionally, we also show in Fig.~\ref{figsig} the significance of the correlations observed in each of the cities, which are also below $0.001$ in most of the cases.
\begin{figure}[bh!]
\begin{center}
\includegraphics[width=0.45\textwidth]{FigS03a.pdf}
\includegraphics[width=0.45\textwidth]{FigS03b.pdf}
\end{center}
\caption{\textbf{Correlation between $\widetilde{\tau}^k_{\rm sync}$ and the additional segregation indicators.} For each city in our study, we calculate the Pearson correlation coefficient $r_p$ between $\widetilde{\tau}^k_{\rm sync}$ and the additional segregation metrics over the $16$ income categories. The correlation coefficient for a city is thus obtained from a set of $16$ points, one per category. (A) Distribution of $r_p$ between Moran's I and $\widetilde{\tau}^k_{\rm sync}$ across cities. (B) Distribution of $r_p$ between the segregation calculated through normalized CMFPT $\mbox{med}(\widetilde{\tau})_k$ and $\widetilde{\tau}^k_{\rm sync}$ across cities.}
\label{figrp}
\end{figure}
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.45\textwidth]{FigS04a.pdf}
\includegraphics[width=0.45\textwidth]{FigS04b.pdf}
\end{center}
\caption{\textbf{Significance of the Pearson correlation coefficients between $\widetilde{\tau}^k_{\rm sync}$ and the other segregation indicators.} For each of the additional indices, we display the significance of the correlations across cities. (A) Significance of correlations between Moran's I and $\widetilde{\tau}^k_{\rm sync}$. (B) Significance of correlations between the segregation calculated through normalized class mean first passage times $\mbox{med}(\widetilde{\tau})_i$ and $\widetilde{\tau}^k_{\rm sync}$ . The correlation coefficient and significance for each city is obtained by comparing the segregation values for the $16$ income categories. The significance values are depicted as * for $\mbox{p-value}<0.05$, ** for $\mbox{p-value}<0.01$, and *** for $\mbox{p-value}<0.001$.}
\label{figsig}
\end{figure}
\newpage
In the main text we discuss the potential of our methodology to assess the multiscale patterns of segregation in front of traditional first-neighbor approaches. In Fig.~\ref{figcities} we further investigate this fact by plotting for Boston, Cleveland, Denver and Detroit the local normalized synchronization times, the local Moran's $I^{\rm loc}_i(k)$, and the raw ratio of population of category $k$ in each of the census tracts.
Although the segregation hotspots detected by our methodology and the local Moran's I seem similar, the patterns detected are significantly different. Whereas $I^{\rm loc}_i(k)$ captures strong differences between neighboors, $\widetilde{\tau}_i^{\rm loc}(k)$ highlights isolated regions even if the differences with their first-neighboors is low; most likely, this is because they are far apart from regions displaying ratios of population closer to the city average and require more time to reach the global synchronized state. In fact, the areas highlighted by synchronization dynamics have a larger scale and allow us to identify common mesoscale patterns of segregation across cities: a downtown that displays high values, a ring around it with low values, and finally the suburbs with high values again. By focusing on Detroit, we can see that not only the poorer downtown appears highlighted but also the suburbs due to their very low ratio of population of category $1$. Similar patterns can also be observed in Cleveland and Denver.
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.90\textwidth]{FigS05a-Boston.pdf} \\
\includegraphics[width=0.90\textwidth]{FigS05b-Cleveland.pdf} \\
\includegraphics[width=0.90\textwidth]{FigS05c-Denver.pdf} \\
\includegraphics[width=0.90\textwidth]{FigS05d-Detroit.pdf}
\end{center}
\caption{\textbf{Comparison of local segregation indicators in Boston, Cleveland, Denver and Detroit.} (A) Normalized synchronization time, (B) Local Moran correlation, and (C) proportion of citizens for each census tract and income class~1 (most deprived).}
\label{figcities}
\end{figure}
The segregation index developed in the main text is calculated as the median of $\widetilde{\tau}^k_{\rm sync}$ which confers an equal weight to each of the income categories, disregarding the amount of population in each category. However, we can also construct a weighted index $\bar{\widetilde{\tau}}_{\rm sync}$ that can be built as
\begin{equation}
\bar{\widetilde{\tau}}_{\rm sync}=\frac{\displaystyle\sum_k P_k \widetilde{\tau}_{\rm sync}(k)}{\displaystyle\sum_k P_k},
\end{equation}
where $P_k$ is the total number of citizens that belong to category $k$ in a given city. The ranking of cities according to the value of $\bar{\widetilde{\tau}}_{\rm sync}$ (Fig.~\ref{figrank}) displays only slight changes with, for example, Philadelphia and Los Angeles closer to the top of the ranking. We test the relation between both indices in Fig.~\ref{figcomp}, where a clear relationship between both quantities is revealed.
\begin{figure}[th!]
\begin{center}
\includegraphics[width=0.95\textwidth]{FigS06.pdf}
\end{center}
\caption{\textbf{Segregation in US cities according to an index calculated through a weighted average.} Ranking of cities according to the weighted index of segregation $\bar{\widetilde{\tau}}_{\rm sync}$.}
\label{figrank}
\end{figure}
\begin{figure}[th!]
\begin{center}
\includegraphics[width=0.55\textwidth]{FigS07.pdf}
\end{center}
\caption{\textbf{Comparison between segregation indicators obtained through synchronization dynamics.} Comparison between the weighted index of segregation $\bar{\widetilde{\tau}}_{\rm sync}$ and the index $\mbox{med}(\widetilde{\tau}_{\rm diff}(k))$ used in the main text.}
\label{figcomp}
\end{figure}
\clearpage
\section{Beyond economic segregation: Paris around the clock}
Besides only economic segregation, our methodology can be used to assess the spatial heterogeneity of any other quantity, and to exemplify it, we assess in this section the segregation of the population in Paris according to a wide set of socioeconomic indicators. The data compiles the fraction of population per district within a certain category at each hour of the day in French cities; in this work, we focus on Paris \cite{lecomte2018mobiliscope,julie2021,vallee2021}. The list of indicators and categories analyzed can be found in Table~\ref{TableS1}.
\begin{table}[ht!]
\centering
\begin{tabular}{llllll}
\hline \hline
Indicator & Categories \\
\hline
Activity type & At home & At work & Studying & Shopping & Leisure \\
Age & 16-24 & 25-34 & 35-64 & 65 and more & \\
Educational level & Low & Middle-low & Middle-high & High & \\
Socioprofessional status & Inactive & Low & Middle-low & Middle-high & High \\
Last travel mode & Pub. trans. & Private motor & Soft mobility & & \\
Occupational status & Active & Student & Unemployed & Retired & Inactive \\
Sex & Male & Female & & & \\ \hline
\end{tabular}
\caption{Socio-economic indicators and activity types analyzed for Paris.} \label{TableS1}
\end{table}
For each indicator or category, we have a certain distribution of population per spatial unit and hour of the day, thus we can compute how the quantity $\widetilde{\tau}_{\rm sync}(k)$
varies during the day, as we show in Fig.~\ref{figS3}(A,B) for the five activity types, and the five socio-professional status; the patterns of synchronization through time turn out to be very distinct.
For example, the level of synchronization remains basically constant throughout the day for low, middle and high socio-professional status, while it increases (decreases) between 8am and 8pm for inactive (high) socio-professional status. If we focus instead on the ranking of $\widetilde{\tau}_{\rm sync}(k)$ at 10am and 10pm, see Fig.~\ref{figS3}(C), the lower occupational and socio-professional status seem to be the most segregated indicators as they are on top of the ranking at both times of the day. Other categories that should be uniformly distributed across the city, such as sex, are very close to $1$, thus indicating no segregation.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{FigS08.pdf}
\end{center}
\caption{\textbf{Synchronization around the clock in Paris.} (A) Normalized synchronization time for the distribution of population performing each of the five types of activities. (B) Synchronization time for the distribution of population of each socio-professional status. (C) Change of synchronization times for all of the indicators at 10am (green) and 10pm (red).}
\label{figS3}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.55\textwidth]{FigS09.pdf}
\end{center}
\caption{\textbf{Clustering analysis of segregation around the clock in Paris.} (A) Pattern of synchronization times $P$ for each of the four main groups detected with the K-Means algorithm. (B) Cluster assignment for each of the indicators analyzed.}
\label{figS4}
\end{figure}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.80\textwidth]{FigS10.pdf}
\end{center}
\caption{\textbf{Local synchronization around the clock in Paris.} (A, B) Normalized synchronization time for each Paris district for the population performing leisure activities at 10am and 10pm. (C, D) Normalized synchronization time for each Paris district for the population with inactive socio-professional status at 10am and 10pm. For visualization purposes the color range is common to all four maps.}
\label{figS5}
\end{figure*}
The hourly patterns of each metric allow for the grouping of indicators behaving similarly as we did for US cities. As before, we focus more on the time-series profile rather than the specific values taken by bon $\widetilde{\tau}^h_{\rm sync}(k)$, thus analyzing the normalized $P(\widetilde{\tau}^h_{\rm sync}(k))$ for each hour of the day $h$. The k-means clustering reveals four distinct clusters (see Fig.~\ref{figS4}) which correspond to: those increasing during workings, those decreasing, those remaining almost constant, and those with a more characteristic behavior with a peak during midday and at the end of the day, roughly around the lunch and dinner times.
Finally, we assess the local segregation of districts by measuring their local normalized synchronization time. In particular, we show an example in Fig.~\ref{figS5} for the population performing leisure activities and those with inactive socio-professional status.
In agreement with the temporal pattern shown in Fig.~\ref{figS3}, the segregation is much higher at 10pm compared to 10am, especially concentrated in the centre of the city; a not so surprising result given that most of the leisure activities are concentrated in that part of the city. In the case of the population with inactive socio-professional status, the hotspots seem to be concentrated in the northern part of the city, a region known for suffering a thriving inequality.
\section{Keywords:} Urban segregation, Spatial heterogeneity, Syn\-chro\-ni\-za\-tion dynamics, Diffusion dynamics, Phase oscillators
\end{abstract}
\maketitle
\section{Introduction}
The expansion of urbanization and the progressive increase of the population in cities have intensified the concern over the many dimensions of segregation ---e.g., school, economic or ethnic--- that have a tangible impact on the health, education and equal opportunities of citizens \cite{kennedy1998income,Elliott1999,collins2000residential,ross2001income,mayer2002economic,acevedo2003residential,wheeler2006urban,owens2018income}. In fact, quantifying the extent of segregation and identifying economically and socially isolated neighborhoods has been a topic of wide interest that first led to the development of global metrics, later extended to spatial ones \cite{Cliff1981,Dawkins2004,Brown2006,Dawkins2006,Wong2011,Rey2013}. Most of the initial spatial measures were limited to first-neighbor indices, a limitation that motivated the development of multi-scalar indices providing a more nuanced picture of segregation \cite{Farber2012,Louf2016,chodrow2017structure,Olteanu2019,sousa2020quantifying,bassolas2021first,bassolas2021diffusion}, yet understanding the role played by each of the scales and their interplay still remains a challenge.
Dynamical processes in general, and diffusion \cite{gomez2013diffusion,sole2013spectral,de2013mathematical,li2013influence,delvenne2015diffusion,de2017diffusion,masuda2017random,cencetti2019diffusive,bertagnolli2021diffusion} and synchronization \cite{arenas2006bsynchronization,arenas2006synchronization,gomez2007paths,gomez2007synchronizability,arenas2008synchronization} dynamics in particular, have been widely studied in complex networks because of their relation to the spread of diseases and information \cite{gomez2018critical,zhang2016dynamics} and to real-world phenomena in social or economic systems \cite{pluchino2005changing,calderon2007trade,erola2012modeling}. Interestingly, they provide insights into the topological scales and structure of networks and reveal the existence of functional meso-scale structures \cite{de2017diffusion,bertagnolli2021diffusion,arenas2006synchronization,gomez2007synchronizability,motter2005network}.
Here we build on previous knowledge of diffusion and synchronization dynamics to assess the multi-scale patterns of residential segregation. By moving the focus from the network topology and organization to the node states, we are able to measure how evenly a population with a certain characteristic is distributed through the time needed to reach the homogeneous (absorbing) state.
Our framework thus requires a dynamical process that drives the system towards the homogeneous state; here we use diffusion and synchronization dynamics. Neither of them constitutes an attempt to model or predict the changes in the spatial distribution of a population characteristic: they are highly stylized simplifications of its evolution that allow us to measure the time needed to attain the homogeneous state, which we consider to be the non-segregated scenario. Dynamical approaches are thus introduced here not because they provide a realistic approximation to the evolution of population dynamics but because they offer a significant advantage for measuring multi-scale correlations, as they do not require taking distance explicitly into account. Moreover, the assumption that cities converge towards uniformity is rather unrealistic without a heavy external driver, and is only a means to construct our measures.
As case studies we provide an analysis of the distribution of citizens of a certain income category in cities from the United States, and of the distribution of a set of socioeconomic indicators in the city of Paris throughout an average day (see Supplementary Material Section~2 and Supplementary Figs.~S8-S10). The analysis of the spatial organization of income categories reveals that the most deprived and affluent sectors display higher diffusion and synchronization times linked to a higher heterogeneity, and allows us to split the cities into two groups depending on the differences in the level of segregation across categories. Finally, we evaluate the level of synchronization at the neighborhood level, which allows us to spot the most sensitive places in a city.
\section{Results}
\subsection{Diffusion dynamics and income segregation}
Citizens exhibit a huge diversity of characteristics, usually captured by socioeconomic indicators such as education level, income or ethnicity, and these are often heterogeneously distributed in space: individuals with similar characteristics tend to live close to one another. To assess the heterogeneity of a population with a characteristic $k\in K$, we consider a graph $G(V,E)$ with adjacency matrix $A=\{a_{ij}\}$ in which the spatial units are represented as a set of nodes $V$ connected by a set of edges $E$. The adjacency matrix $A$ we have considered takes $a_{ij}=1$ when spatial units $i$ and $j$ are adjacent and $a_{ij}=0$ otherwise, which is the traditional connectivity matrix used to capture residential segregation. Still, other types of (weighted) matrices could be considered to assess, for example, the impact of mobility on segregation. The state of a node $x_i^k$ is given by the fraction of citizens living in node $i$ that belong to socioeconomic category (or class) $k$, written as
\begin{equation}
x_i^k=\frac{n_i^k}{\sum\limits_{k'} n_i^{k'}},
\end{equation}
where $n_i^k$ is the total number of citizens in unit~$i$ that belong to category~$k$. As extreme cases, $x_i^k=0$ when there are no citizens of category~$k$ living in $i$, and $x_i^k=1$ when all the citizens in node~$i$ belong to category~$k$. Of course, the normalization condition
\begin{equation}
\sum_{k\in K} x_i^k = 1
\end{equation}
is fulfilled for all nodes~$i$.
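As an illustration, the node states can be obtained directly from the raw counts. The following minimal sketch (with made-up counts rather than the actual census data) computes $x_i^k$ and verifies the normalization condition:
\begin{verbatim}
import numpy as np

# Made-up counts n_i^k: rows are spatial units i, columns are categories k.
n = np.array([[120, 30, 50],
              [ 10, 80, 10],
              [ 60, 60, 80]])

# x_i^k = n_i^k / sum_k' n_i^k'
x = n / n.sum(axis=1, keepdims=True)

# The normalization condition sum_k x_i^k = 1 holds for every node i.
assert np.allclose(x.sum(axis=1), 1.0)
\end{verbatim}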
To measure the multi-scalar patterns of segregation, our assumption is that cities suffering from stronger residential segregation are further from the stationary state where the citizens of category $k$ are homogeneously distributed in space. Although cities are in continuous change and most likely far from equilibrium, similar approaches such as the long-standing Schelling and the Alonso-Muth-Mills models have been able to draw relevant conclusions from the equilibrium state \cite{schelling1971dynamic,fujita1989urban}.
By adopting diffusion dynamics we do not deny the high complexity of population dynamics, influenced by a wide variety of demographic, economic, political, and behavioral factors \cite{zhang2004dynamic,clark2009changing,zhang2011tipping,deluca2013segregating}, but avoid introducing further parameters and factors that could hinder our aim of characterizing the segregation of a particular population category. Bear in mind that our final goal is by no means to assess real-world migration processes but to construct a multi-scalar measure of segregation that does not explicitly include distance; more complex and realistic approaches would complicate the interpretation of the results. Diffusion constitutes one of the most basic approximations to how information, or any other characteristic, is transmitted through a system. Although far from the real behavior, it provides one of the simplest scenarios where the flow of population follows a gradient.
In fact, we focus on one of the best-case scenarios where the values of $x_i^k$ converge towards equilibrium following a gradient, which could be interpreted as the change of residence of citizens of category $k$ to regions where they are less abundant.
\begin{table}[t!]
\centering
\begin{tabular}{lc}
\hline
\hline
Class & Income (\$) \\
\hline
1 & Less than 10,000 \\
2 & 10,000 -- 14,999 \\
3 & 15,000 -- 19,999 \\
4 & 20,000 -- 24,999 \\
5 & 25,000 -- 29,999 \\
6 & 30,000 -- 34,999 \\
7 & 35,000 -- 39,999 \\
8 & 40,000 -- 44,999 \\
9 & 45,000 -- 49,999 \\
10 & 50,000 -- 59,999 \\
11 & 60,000 -- 74,999 \\
12 & 75,000 -- 99,999 \\
13 & 100,000 -- 124,999 \\
14 & 125,000 -- 149,999 \\
15 & 150,000 -- 199,999 \\
16 & 200,000 or more \\
\hline
\end{tabular}
\caption{Income range (in US dollars) corresponding to each category (or class).} \label{Table1}
\end{table}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.95\textwidth]{Fig1.pdf}
\end{center}
\caption{\textbf{Diffusion dynamics as a measure for income segregation.} (A) Synchronization time for each of the $16$ income categories in Boston, Cleveland, Denver and Detroit. (B) Median value of $\widetilde{\tau}_{\rm diff}(k)$ across income categories as a function of its variance. (C) Ranking for the median value of $\widetilde{\tau}_{\rm diff}(k)$ for the studied set of US cities. }
\label{fig1}
\end{figure*}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.999\columnwidth]{Fig2.pdf}
\end{center}
\caption{\textbf{Average temporal evolution of the abundance of households within the lowest and highest income.} Temporal evolution of centroids after performing a k-means clustering on the normalized abundance of households with category~$k$, $P(x_i^k,t)$, as a function of time~$t$ for the lower (A) and higher (B) income categories.}
\label{fig2}
\end{figure}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.90\textwidth]{Fig3.pdf}
\end{center}
\caption{\textbf{Synchronization time as a measure for income segregation.} (A) Synchronization time for each of the $16$ income categories in Boston, Cleveland, Denver and Detroit. (B) Ranking for the median value of $\widetilde{\tau}_{\rm sync}(k)$ for the studied set of US cities. (C) Average value of $P(\widetilde{\tau}_{\rm sync}(k))$ as a function of each income category~$i$ for the two main clusters detected. (D) Location and cluster assignment for each of the analyzed cities.}
\label{fig3}
\end{figure*}
We focus on economic segregation in the metropolitan areas of the United States with more than $1$~million inhabitants and analyze a dataset containing the number of households within an income interval~$k$ residing in each census tract (see Table~\ref{Table1}).
Once we have the set of initial node states $x_i^k$, their evolution through time is determined by the diffusion dynamics
\begin{equation}
\frac{dx_i^k}{dt}=\frac{1}{s_i}\sum\limits_{j=1}^N a_{ij}(x_j^k-x_i^k),
\end{equation}
where
\begin{equation}
s_i = \sum\limits_{j=1}^N a_{ij}
\end{equation}
is the degree of node $i$. For simplicity, we use normalized diffusion dynamics with diffusion strength equal to~1. Note that we have independent diffusion processes for each category~$k$.
The diffusion dynamics last until the stationary state, $x_i^k=\avg{x^k}$ $\forall i$, is reached, and we denote the spanned time as $\tau_{\rm diff}(k)$. Since the time to reach the stationary state can be infinitely large, we consider that it is reached when the variance of $x_i^k$ over the nodes falls below $0.0001$. We hypothesize that lower values of $\tau_{\rm diff}(k)$ are related to a more homogeneous distribution of the population within a category~$k$, and the other way around when it is higher. In the extreme case in which all units have the same initial value of $x_i^k$, the diffusion time $\tau_{\rm diff}(k)$ would attain its minimum value. As we aim to compare cities with different characteristics, we control for confounding factors, such as the particular distribution of $x^k$ or the topology of the graph, by running the same diffusion dynamics on the same graph but with the values of $x^k$ reshuffled, thus defining the average null-model diffusion time $\tau^{\rm null}_{\rm diff}(k)$, calculated over $500$ reshuffling realizations. The relative diffusion time we will use throughout this manuscript can then be written as
\begin{equation}
\widetilde{\tau}_{\rm diff}(k) = \frac{\tau_{\rm diff}(k)}{\tau^{\rm null}_{\rm diff}(k)}.
\end{equation}
A relative diffusion time equal to one means that the observed time is compatible with the null model, i.e., there are no remarkable spatial dependencies, while a greater value suggests that spatial heterogeneities delay the arrival to the stationary state.
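For concreteness, a minimal numerical sketch of this procedure is given below; it is not the code used in our study, and the Euler scheme, time step and number of reshufflings are illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def diffusion_time(A, x0, dt=0.01, tol=1e-4):
    """Integrate dx_i/dt = (1/s_i) sum_j a_ij (x_j - x_i) with Euler steps
    until the variance of x over the nodes drops below tol."""
    s = A.sum(axis=1)
    x, t = x0.astype(float).copy(), 0.0
    while np.var(x) >= tol:
        x += dt * (A @ x - s * x) / s
        t += dt
    return t

def normalized_diffusion_time(A, x0, n_null=500):
    """tau_diff divided by its average over n_null reshuffled null models."""
    tau = diffusion_time(A, x0)
    tau_null = np.mean([diffusion_time(A, rng.permutation(x0))
                        for _ in range(n_null)])
    return tau / tau_null
\end{verbatim}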
We analyze the normalized diffusion times $\widetilde{\tau}_{\rm diff}(k)$, as a proxy for how heterogeneously the population is distributed, by running simulations for all US cities above $1$~million inhabitants and each of the $16$ income categories~$k$; we have excluded New York City, whose adjacency network does not provide an accurate picture of residential segregation due to the particular geography of Manhattan. In Fig.~\ref{fig1}(A) we display $\widetilde{\tau}_{\rm diff}(k)$ in Boston, Cleveland, Detroit and Denver, observing a common qualitative behavior: smaller values for middle-income categories, and higher ones for the categories at the extremes of the income distribution. Our results suggest that the wealthiest and most deprived citizens suffer from stronger segregation and display a more clustered spatial distribution. More interestingly, category~$9$ seems to be the most homogeneously distributed across space, in agreement with the results observed in \cite{Bassolas2020b} and with the mean and standard deviation of $x^k$ as well as the Moran's I (see Supplementary Material Section~1 and Supplementary Fig.~S1). Still, there are strong quantitative differences, with Cleveland and Detroit displaying higher values for most of the categories, in contrast to Boston and Denver.
Since $\widetilde{\tau}_{\rm diff}(k)$ takes a set of $16$~values for each city, we calculate their median and variance over all categories to ease the comparison between the set of cities studied. While the median value provides information on the segregation across all economic categories, the variance reports the variability among them. Figure~\ref{fig1}(B) shows the median value of $\widetilde{\tau}_{\rm diff}(k)$, $\mbox{med}(\widetilde{\tau}_{\rm diff}(k))$, as a function of its variance, $\mbox{var}(\widetilde{\tau}_{\rm diff}(k))$. The four cities discussed above appear ordered as Detroit, Cleveland, Boston and Denver, although the variance is very similar for Cleveland and Boston, likely due to the high values observed for low-income categories in Boston. Finally, we provide in Fig.~\ref{fig1}(C) the ranking of the selected US cities according to $\mbox{med}(\widetilde{\tau}_{\rm diff}(k))$, as a measure of the overall segregation in cities. At the top of the ranking, we find cities such as Milwaukee or Detroit, which have been reported to suffer from economic and ethnic segregation \cite{adelman2004neighborhood,thomas2015race,florida2015segregated}.
By applying diffusion dynamics we implicitly assume that $x^k$ evolves homogeneously towards consensus, which, rather than a realistic scenario, is a means to calculate the time needed to reach consensus and obtain a measure of segregation. To further inspect the actual change of $x_i^k$ between 2011 and 2019 in each of the spatial units~$i$, we first construct the normalized time-series for each spatial unit across those years as
\begin{equation}
P(x_i^k,t)=\frac{x_i^k(t)}{\sum\limits_{t'} x_i^k(t')},
\end{equation}
and then cluster, for each category~$k$, the temporal profiles of all the nodes. For the clustering, we have made use of the k-means algorithm \cite{hartigan1979algorithm,likas2003global}, grouping together those units with a similar temporal evolution and setting the number of clusters to~3. The resulting time-series of the corresponding centroids for the highest and lowest income categories are depicted in Fig.~\ref{fig2}, where a non-monotonic behavior is observed in most of the cases, with oscillations through time of varying amplitude.
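A sketch of this clustering step, assuming scikit-learn's \texttt{KMeans} and a placeholder array of yearly values per census tract, could read:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = rng.random((300, 9))   # placeholder: 300 tracts, 9 years (2011-2019)

# Normalized temporal profiles P(x_i^k, t), one row per census tract.
P = series / series.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(P)
centroids = km.cluster_centers_   # centroid time-series as in Fig. 2
labels = km.labels_               # cluster assignment of each tract
\end{verbatim}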
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.85\textwidth]{Fig4.pdf}
\end{center}
\caption{\textbf{Local synchronization time as a measure for income segregation.} Normalized synchronization time for each census tract in Denver (A--C) and Detroit (E--G) for three different income categories: (A, E) class~1 (low income); (B, F) class~8 (middle income); (C, G) class~16 (high income). We provide as a reference the median income of each census tract in (D) Denver and (H) Detroit.}
\label{fig4}
\end{figure*}
\subsection{Synchronization dynamics and income segregation}
Given the oscillations in the temporal evolution of $x^k_i$ (Fig.~\ref{fig2}), diffusion dynamics appear to be a rather simplistic approach to assess the time needed to converge. Even though we do not aim to mimic the real evolution of $x^k_i$, we seek a dynamical process that at least resembles its real behavior qualitatively. Thus, despite still constituting a stylized approximation, a dynamical process with an oscillatory behavior, like a system of coupled Kuramoto oscillators, appears to be a better way to assess the spatial heterogeneity of socioeconomic indicators across cities. To analyze segregation in terms of synchronization dynamics, we treat each of the spatial units~$i$ as an individual Kuramoto oscillator, with an initial phase $\theta_i^k(0)$ that is set by mapping the fraction of population in node $i$ that belongs to category~$k$ onto the range $[0,\pi]$ as
\begin{equation}
\theta_i^k(0)=x_i^k\pi.
\end{equation}
The interaction between spatial units is given by the Kuramoto model
\begin{equation}
\dot{\theta}_i^k(t)=\omega_i^k+\frac{1}{s_i}\sum\limits_{j=1}^N a_{ij} \sin\left(\frac{\theta^k_j(t)-\theta_i^k(t)}{2}\right),
\end{equation}
where we have modified the traditional interaction term between oscillators by dividing the angle difference by two, allowing for the interaction between regions displaying extreme values of $x_i^k$. Additionally, to facilitate the global synchronization of the system, we set all the individual natural frequencies of the oscillators to the same value, i.e., $\omega_i=1,\ \forall i$. In order to account separately for the segregation of each category $k$, our approach assumes that there is no interaction between categories and, thus, the phases synchronize independently for each $k$.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.80\textwidth]{Fig5.pdf}
\end{center}
\caption{\textbf{Synchronization time and median income.} Normalized synchronization time as a function of the median income averaged over bins of \$5,000 for class~1 (A), class~8 (B) and class~16 (C).}
\label{fig5}
\end{figure*}
We use the standard order parameter $|z^k|$ to assess the global level of synchronization for a category $k$ in a city, where
\begin{equation}
z^k=\frac{1}{N}\sum\limits_{j=1}^N e^{i\theta_j^k},
\end{equation}
and $N$ is the total number of spatial units or Kuramoto oscillators \cite{arenas2008synchronization}. We consider that a city has reached the synchronized state when $|z^k|>0.999$. As in the case of diffusion, we assess how the distribution of initial phases determines the synchronization of the system, a city in our case, by measuring the time $\tau_{\rm sync}(k)$ required to reach the synchronized state. The more heterogeneously the initial phases are distributed, the longer the system requires to synchronize. To distinguish the effect produced by the spatial arrangement of $x_i^k$ from that of its overall distribution, as well as from the topology of the graph, we also measure the average time the system needs to synchronize when the same phases are redistributed at random, $\tau^{\rm null}_{\rm sync}(k)$. The normalized synchronization time of the system is then given by the ratio
\begin{equation}
\widetilde{\tau}_{\rm sync}(k)=\frac{\tau_{\rm sync}(k)}{\tau^{\rm null}_{\rm sync}(k)}.
\end{equation}
As for diffusion, a normalized synchronization time close to one means that the spatial distribution of phases is compatible with the null model, while a larger value indicates that spatial heterogeneities delay the appearance of a synchronized state.
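A minimal sketch of the synchronization-time measurement, again with an illustrative Euler integration rather than the code used in our study, could be:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sync_time(A, x0, dt=0.01, z_thr=0.999):
    """Integrate the half-angle Kuramoto model with omega_i = 1 until the
    order parameter |z| exceeds z_thr; return the elapsed time."""
    s = A.sum(axis=1)
    theta, t = np.pi * x0, 0.0                  # theta_i(0) = pi * x_i^k
    while np.abs(np.exp(1j * theta).mean()) <= z_thr:
        diff = theta[None, :] - theta[:, None]  # theta_j - theta_i
        coupling = (A * np.sin(diff / 2.0)).sum(axis=1) / s
        theta += dt * (1.0 + coupling)
        t += dt
    return t

def normalized_sync_time(A, x0, n_null=500):
    tau = sync_time(A, x0)
    tau_null = np.mean([sync_time(A, rng.permutation(x0))
                        for _ in range(n_null)])
    return tau / tau_null
\end{verbatim}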
In Fig.~\ref{fig3}(A) we inspect the normalized synchronization time in Boston, Cleveland, Detroit and Denver when spatial units interact through Kuramoto-like dynamics. All four of them share similar features, with central classes displaying smaller synchronization times compared to the most disadvantaged and wealthiest ones. This is expected, since individuals at the extremes of the income distribution tend to be more isolated and clustered together compared to middle-income citizens. Despite sharing qualitative features, the cities shown display sharp quantitative differences. Almost all categories appear to be significantly more isolated in Detroit and Cleveland compared to Denver and Boston, where $\widetilde{\tau}_{\rm sync}(k)$ looks much flatter. Overall, the synchronization results are compatible with the diffusion ones, likely because both dynamical processes share common features. We have further checked that the mean $\avg{x^k}$ does not directly determine the normalized synchronization times in Supplementary Fig.~S1.
As with diffusion, we calculate the median and variance of $\widetilde{\tau}_{\rm sync}(k)$ over all categories to be able to compare the analyzed cities (see Supplementary Fig.~S2 for the individual rankings of $\widetilde{\tau}_{\rm sync}(k)$ for the categories $k=1$ and $k=16$). The ranking is shown in Fig.~\ref{fig3}(B) and has cities such as Detroit, Cleveland, Milwaukee or Memphis close to the top, which are well known for being among the most economically segregated cities in the United States. The location in the ranking of the cities in Fig.~\ref{fig3}(A) is consistent with our observations, with Boston and Denver at the bottom of the ranking and Detroit and Cleveland at the top.
Our index is given by the median value of the normalized synchronization times, yet depending on the dimension of segregation we aim to capture, we can also construct an index based on a population-weighted average. Whereas the median gives equal weight to each economic category, focusing on the segregation suffered by residents of category $k$, the weighted average provides an overall picture of segregation taking the population of each category into account. We show the ranking obtained for the weighted average index and its relation with $\mbox{med}(\widetilde{\tau}_{\rm sync}(k))$ in Supplementary Figs.~S6 and~S7. Additionally, we show in Supplementary Figs.~S3 and~S4 how $\widetilde{\tau}_{\rm sync}(k)$ significantly correlates with the traditional Moran's I \cite{moran1948interpretation} as well as with a multi-scale quantity based on class mean first passage times developed in \cite{bassolas2021first,bassolas2021diffusion}, reinforcing the idea that synchronization (and diffusion) dynamics indeed capture the patterns of residential segregation. Although the dynamics we have used are stylized versions of the real behavior of the quantity $x^k_i$ and do not capture the full complexity of its temporal evolution, they are able to capture segregation with values comparable to other segregation indicators.
Although $\widetilde{\tau}_{\rm sync}(k)$ is larger for extreme categories in most of the cities, some of them, like Denver, display smaller variations than others, such as Detroit; it might therefore be of interest to group cities according to the change in synchronization times. By running a k-means algorithm on the normalized value of $P(\widetilde{\tau}_{\rm sync}(k))$, so that $\sum_{k} P(\widetilde{\tau}_{\rm sync}(k))=1$, we can split the cities under study into those with larger and smaller differences in $\widetilde{\tau}_{\rm sync}(k)$, see Fig.~\ref{fig3}(C). In Fig.~\ref{fig3}(D), we display the cluster assigned to each metropolitan area, where no strong spatial pattern is observed. Still, the cities in the Midwest, which are known for being economically segregated, fall into the red cluster, together with other cities such as Baltimore or Los Angeles. If, instead, we focus on the blue cluster, we find cities such as Sacramento or Washington D.C. Among the cities discussed in Fig.~\ref{fig3}(A), Denver falls into the group with more homogeneous segregation (in blue) and the rest into the one with more unequal segregation patterns (in red).
Beyond the global quantification of segregation, we can also evaluate the local level of segregation of a specific census tract $i$ at a given time step $t$ by computing
\begin{equation}
\rho^k_i(t)=\cos(\theta^k_i(t)-\Phi^k(t)),
\end{equation}
where $\theta^k_i(t)$ is the phase of unit $i$ at time $t$ and $\Phi^k(t)$ is the average phase of all the oscillators in a city at time $t$ \cite{arenas2006synchronization}. When $\rho^k_i(t)>0.999$ we consider that oscillator $i$ has synchronized, from which we can obtain $\tau^{\rm loc}_i(k)$. However, given that $\rho^k_i(t)$ can oscillate through time, we only consider that a unit $i$ has reached the global synchronized state at a time $\tau^{\rm loc}_i(k)$ when $\rho^k_i(t)$ does not go below $0.999$ anymore for $t>\tau^{\rm loc}_i(k)$; otherwise our methodology could fail to capture long-range correlations. In order to provide a metric for each spatial unit, simulations last until all spatial units have fulfilled the synchronization criterion. Normalizing $\tau^{\rm loc}_i(k)$ by its null-model counterpart yields $\widetilde{\tau}^{\rm loc}_i(k)$, a measure of the local synchronization time.
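A sketch of how $\tau^{\rm loc}_i(k)$ could be extracted from a stored trajectory of phases (an array \texttt{theta} of shape $(T,N)$, which is an assumption of this illustration) is:
\begin{verbatim}
import numpy as np

def local_sync_times(theta, dt=0.01, rho_thr=0.999):
    """theta: array of shape (T, N) with the phase of every unit over time.
    Returns the time after which rho_i(t) never dips below rho_thr again."""
    z = np.exp(1j * theta).mean(axis=1)   # global order parameter z(t)
    Phi = np.angle(z)                     # average phase Phi(t)
    rho = np.cos(theta - Phi[:, None])    # rho_i(t)
    below = rho < rho_thr
    T = below.shape[0]
    # Index of the last time step at which each unit was below threshold.
    last_below = np.where(below.any(axis=0),
                          T - 1 - np.argmax(below[::-1], axis=0),
                          -1)
    return (last_below + 1) * dt
\end{verbatim}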
Figure~\ref{fig4} displays the normalized synchronization times for each of the census tracts in Denver and Detroit, focusing on three very distinct income categories: low income, Fig.~\ref{fig4}(A,E); middle income, Fig.~\ref{fig4}(B,F); and high income, Fig.~\ref{fig4}(C,G). To ease the comparison between income categories, the range of values is common to all the maps, evincing the strong differences between Detroit and Denver, especially for the low and high-income categories. The shape of segregation in Detroit can be outlined by the lower-income downtown and the richer suburbs, which are the most segregated parts, with a less-segregated region in between. In the case of Denver, we only see moderately high values for the low-income category in the north of the city and for the high-income category in the south.
As we detail in Supplementary Fig.~S5, the spatial patterns of segregation produced by the synchronization dynamics are significantly different from those obtained from first-neighbor quantities such as the Moran's I. Instead of focusing on those regions whose proportion of citizens is high (or low) compared to their neighbors, our methodology highlights those with a ratio of population within a category~$k$ distinct from the average, either because it is high or low, and spatially isolated from those regions with average values. In other words, a region with a high proportion of residents of category $k$ might not show a large local spatial correlation if its neighbors have similar values but could, instead, produce high values of $\widetilde{\tau}^{\rm loc}_i(k)$ if it is isolated from those regions displaying a proportion of citizens closer to the city average. Like the majority of spatial measures, our approach can also suffer from the so-called modifiable areal unit problem \cite{fotheringham1991modifiable}. However, given that our methodology captures mid- and long-range correlations instead of local differences, it might be less affected by such small local changes.
Finally, we inspect whether the synchronization time of a region displays any type of connection with its actual income. To do so, we plot in Fig.~\ref{fig5} the normalized local synchronization time as a function of the median income, averaged over all the census tracts within bins of \$5,000, in four US cities. Again we see that segregation is much stronger in Detroit, followed by Cleveland and Boston. High-income regions are more segregated in Boston than in Cleveland. In general terms, the census tracts with a median income between \$50,000 and \$80,000 seem to be the least segregated ones, as they synchronize faster for both low and high-income categories. These results are in agreement with the cluster assignment of the previous cities, with Detroit, Cleveland and Boston in the red cluster, where low and high-income categories need more time to synchronize, and Denver in the blue cluster, where only the high-income categories need more time to synchronize.
\section{Discussion}
Traditional spatial segregation indicators that focus on the local scale of segregation fail in most cases to capture the presence of long-range correlations, thus highlighting the need for multi-scale indices \cite{Farber2012,Louf2016,chodrow2017structure,Olteanu2019,bassolas2021first,sousa2020quantifying,bassolas2021diffusion}. Our framework does not consider any specific scale, but uses a dynamical approach that captures the patterns of segregation across multiple scales. We have revealed how categories at the extremes of the income distribution are more heterogeneously distributed in space compared to middle classes, displaying larger diffusion and synchronization times. This approach has also allowed us to group together those cities that display common features of segregation. In this context, it is important to note that our work does not attempt to model the evolution of income segregation, nor can it be used as a forecasting tool; rather, it takes modeling assumptions to assess the level of segregation that a distribution of population exhibits.
Although the main manuscript focuses on economic segregation, our methodology can be used to assess the heterogeneity in the spatial distribution of any characteristic. Moreover, it can go beyond the spatial component of segregation by including in the analysis other types of graphs, e.g., the daily mobility network of citizens. In this way, we could assess how citizens of diverse socioeconomic environments interact through mobility \cite{xu2019quantifying,toth2021inequality,bokanyi2021universal,moro2021mobility}.
In summary, we have shown how diffusion and synchronization dynamics can be used to assess the heterogeneity in the distribution of node features. While the present work focuses on the initial phases of oscillators and their synchronization time, node metadata could also be understood as an internal frequency and provide further insights on feature correlation across topological scales.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
A.B.\ performed the research. A.B., S.G.\ and A.A.\ designed the research and wrote the manuscript.
\section*{Acknowledgments}
A.B.\ acknowledges financial support from the Ministerio de Ciencia e Innovaci\'on under the Juan de la Cierva program (FJC2019-038958-I). We acknowledge support by Ministerio de Econom\'{\i}a y Competitividad (PGC2018-094754-BC21, FIS2017-90782-REDT and RED2018-102518-T), Generalitat de Catalunya (2017SGR-896 and 2020PANDE00098), and Universitat Rovira i Virgili (2019PFR-URV-B2-41). A.A.\ acknowledges also ICREA Academia and the James S.\ McDonnell Foundation (220020325).
\section*{Data Availability Statement}
The income data analyzed in the present text can be found at \cite{income}.
\clearpage
\renewcommand\theequation{{S\arabic{equation}}}
\renewcommand\thetable{{Supplementary S\Roman{table}}}
\renewcommand{\figurename}{Supplementary Figure}
\renewcommand\thefigure{{S\arabic{figure}}}
\renewcommand\thesection{{Section S\arabic{section}}}
\setcounter{section}{0}
\setcounter{table}{0}
\setcounter{figure}{0}
\setcounter{equation}{0}
\onecolumngrid
\section{Supplementary results for diffusion and synchronization dynamics and economic segregation}
We provide here supplementary results related to the study of income segregation in US cities. Figure~\ref{figS1} reports (A) the mean and (B) standard deviation of $x_i^{k}$ in Boston, Cleveland, Detroit and Denver. Both reach their minimum values for categories 8--10.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.8\textwidth]{FigS01.pdf}
\end{center}
\caption{(A) Mean of $x^k$ for each income category in Boston, Cleveland, Detroit and Denver. (B)~Standard deviation $\sigma(x^k)$ for each income category in Boston, Cleveland, Detroit and Denver. (C)~Moran's I for each income category in Boston, Cleveland, Detroit and Denver. (D)~Scatter plot of the mean of $x^k$ as a function of $\widetilde{\tau}_{\rm sync}(k)$.}
\label{figS1}
\end{figure}
The fact that classes 8--10 appear to be the least segregated is also supported by the Moran's I, as Fig.~\ref{figS1}(C) shows. To further assess that the mean $\avg{x^{k}}$ does not strongly determine the values of $\widetilde{\tau}_{\rm sync}(k)$, we plot both quantities in Fig.~\ref{figS1}(D), where no strong pattern is observed. Categories with low $\avg{x^{k}}$ display high variability in $\widetilde{\tau}_{\rm sync}(k)$ and vice versa.
In Fig.~\ref{figS2} we provide the ranking of the selected US cities according to the value of $\widetilde{\tau}_{\rm sync}(k)$ for the lowest and highest income categories, 1 and 16, respectively. As can be seen, there are significant variations in the ranking depending on which economic category is considered; for example, Cleveland is close to the top for category 1 but far from it for category 16, and the other way around for Seattle.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{l}
A\hfill Class 1 \hfill\mbox{} \\
\includegraphics[width=0.95\textwidth]{FigS02a.pdf} \\
B\hfill Class 16 \hfill\mbox{} \\
\includegraphics[width=0.95\textwidth]{FigS02b.pdf}
\end{tabular}
\end{center}
\caption{Ranking of the selected US cities according to the value of $\widetilde{\tau}_{\rm sync}(k)$, for income class~1 (A) and income class~16 (B).}
\label{figS2}
\end{figure}
\clearpage
\section{Comparison with other segregation measures}
In this section we assess how the normalized synchronization time $\widetilde{\tau}^k_{\rm sync}$ relates to other segregation measures. In particular, we focus on the widely used Moran's I \cite{moran1948interpretation}, which captures local correlations, and on an index obtained from class mean first passage times (CMFPT) developed in \cite{bassolas2021first,bassolas2021diffusion}, which captures long-range spatial correlations.
For each city and category $k$ the Moran's I can be written as
\begin{equation}
I^k =
\frac{\displaystyle\frac{1}{W}\sum_{i=1}^n \sum_{j=1}^n w_{ij}(x^k_i - \bar{x}^k)(x^k_j -
\bar{x}^k)}{\displaystyle\frac{1}{n}\sum_{i=1}^n (x^k_i - \bar{x}^k)^2},\label{eq:morani}
\end{equation}
where $x^k_i$ is the fraction of population in $i$ that belongs to category $k$, $\bar{x}^k$ is its mean across all spatial units, the weights $w_{ij}$ correspond in our case to the spatial adjacency matrix $a_{ij}$, and $W=\sum_{i=1}^n\sum_{j=1}^n w_{ij}$ is the total weight.
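This expression translates directly into code; a compact, illustrative version with the adjacency matrix as weights is:
\begin{verbatim}
import numpy as np

def morans_I(A, x):
    """Moran's I of node values x with weights w_ij = a_ij, as in the
    equation above."""
    d = x - x.mean()
    W = A.sum()
    return (d @ A @ d / W) / (d @ d / len(x))
\end{verbatim}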
As an index to assess the long-range correlations in the spatial distribution of the income categories, we will use the class mean first passage times between classes. In this methodology \cite{bassolas2021first,bassolas2021diffusion}, random walkers start from each of the spatial units in a system and move through the spatial adjacency graph until they have visited the $16$~classes at least once. For this, each location is assigned to a class with probability proportional to its corresponding fraction of population. By averaging the number of steps that a walker needs to reach class $j$ across all the units that belong to category $i$ and for multiple realizations, we obtain the class mean first passage times $\tau_{ij}$, which encapsulate the average number of steps needed to reach a unit of category $j$ when a walker departs from a unit of category $i$. After normalizing by a null model in which colors are uniformly reshuffled at random to compensate for uneven class abundances, we finally obtain the normalized class mean first passage times $\widetilde{\tau}_{ij}$. The quantity $\widetilde{\tau}_{ij}$ thus provides information on how much time a walker needs to reach category $j$ when departing from a unit of category $i$ as compared to the null model; values below $1$ mean that two categories are closer than in the null model, and vice versa for values above $1$. To summarize the segregation of category $k$ in a city we will use the CMFPT index, i.e., $\mbox{med}(\widetilde{\tau})_k$, given by the median value of $\widetilde{\tau}_{jk}$ $\forall j$.
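A compact, unoptimized sketch of this random-walk procedure (the number of realizations is an illustrative choice) could read:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def cmfpt(A, x, n_real=100):
    """x: (N, K) array of per-unit class fractions (rows sum to 1).
    Returns tau[i, j], the mean number of steps to first reach class j
    for walkers starting at units of class i."""
    N, K = x.shape
    neighbors = [np.flatnonzero(A[i]) for i in range(N)]
    steps = np.zeros((K, K))
    counts = np.zeros((K, K))
    for _ in range(n_real):
        for start in range(N):
            c0 = rng.choice(K, p=x[start])   # class of the starting unit
            first = np.full(K, -1)
            first[c0] = 0
            node, t = start, 0
            while (first < 0).any():
                node = rng.choice(neighbors[node])   # one random-walk step
                t += 1
                c = rng.choice(K, p=x[node])         # class drawn at the unit
                if first[c] < 0:
                    first[c] = t
            steps[c0] += first
            counts[c0] += 1
    return steps / counts
\end{verbatim}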
For each city included in our analysis, we measure the Pearson correlation coefficient $r_p$ between each of the additional segregation quantities and $\widetilde{\tau}^k_{\rm sync}$ over all the $16$ categories $k$; that is, for each city $r_p$ is calculated over a set of $16$ points. The distribution of $r_p$ across cities is shown in Fig.~\ref{figrp} for the Moran's I (A) and $\mbox{med}(\widetilde{\tau})_k$ (B), where a skew towards high values is clearly observed. Most of the cities display correlations above $0.8$ with the Moran's I and above $0.7$ with the CMFPT index. We also show in Fig.~\ref{figsig} the significance of the correlations observed in each of the cities, with p-values below $0.001$ in most of the cases.
\begin{figure}[bh!]
\begin{center}
\includegraphics[width=0.45\textwidth]{FigS03a.pdf}
\includegraphics[width=0.45\textwidth]{FigS03b.pdf}
\end{center}
\caption{\textbf{Correlation between $\widetilde{\tau}^k_{\rm sync}$ and the additional segregation indicators.} For each city in our study, we calculate the Pearson correlation coefficient $r_p$ between $\widetilde{\tau}^k_{\rm sync}$ and the additional segregation metrics over the $16$ income categories. The correlation coefficient for a city is thus obtained from a set of $16$ points, one per category. (A) Distribution of $r_p$ between Moran's I and $\widetilde{\tau}^k_{\rm sync}$ across cities. (B) Distribution of $r_p$ between the segregation calculated through normalized CMFPT $\mbox{med}(\widetilde{\tau})_k$ and $\widetilde{\tau}^k_{\rm sync}$ across cities.}
\label{figrp}
\end{figure}
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.45\textwidth]{FigS04a.pdf}
\includegraphics[width=0.45\textwidth]{FigS04b.pdf}
\end{center}
\caption{\textbf{Significance of the Pearson correlation coefficients between $\widetilde{\tau}^k_{\rm sync}$ and the other segregation indicators.} For each of the additional indices, we display the significance of the correlations across cities. (A) Significance of correlations between Moran's I and $\widetilde{\tau}^k_{\rm sync}$. (B) Significance of correlations between the segregation calculated through normalized class mean first passage times $\mbox{med}(\widetilde{\tau})_k$ and $\widetilde{\tau}^k_{\rm sync}$. The correlation coefficient and significance for each city is obtained by comparing the segregation values for the $16$ income categories. The significance values are depicted as * for $\mbox{p-value}<0.05$, ** for $\mbox{p-value}<0.01$, and *** for $\mbox{p-value}<0.001$.}
\label{figsig}
\end{figure}
\newpage
In the main text we discuss the potential of our methodology to assess the multi-scale patterns of segregation compared to traditional first-neighbor approaches. In Fig.~\ref{figcities} we further investigate this by plotting, for Boston, Cleveland, Denver and Detroit, the local normalized synchronization times, the local Moran's $I^{\rm loc}_i(k)$, and the raw ratio of population of category $k$ in each of the census tracts.
Although the segregation hotspots detected by our methodology and by the local Moran's I seem similar, the patterns detected are significantly different. Whereas $I^{\rm loc}_i(k)$ captures strong differences between neighbors, $\widetilde{\tau}_i^{\rm loc}(k)$ highlights isolated regions even if the differences with their first neighbors are low; most likely, this is because they are far apart from regions displaying ratios of population closer to the city average and require more time to reach the global synchronized state. In fact, the areas highlighted by synchronization dynamics have a larger scale and allow us to identify common mesoscale patterns of segregation across cities: a downtown that displays high values, a ring around it with low values, and finally the suburbs with high values again. Focusing on Detroit, we can see that not only the poorer downtown appears highlighted but also the suburbs, due to their very low ratio of population of category $1$. Similar patterns can also be observed in Cleveland and Denver.
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.90\textwidth]{FigS05a-Boston.pdf} \\
\includegraphics[width=0.90\textwidth]{FigS05b-Cleveland.pdf} \\
\includegraphics[width=0.90\textwidth]{FigS05c-Denver.pdf} \\
\includegraphics[width=0.90\textwidth]{FigS05d-Detroit.pdf}
\end{center}
\caption{\textbf{Comparison of local segregation indicators in Boston, Cleveland, Denver and Detroit.} (A) Normalized synchronization time, (B) Local Moran correlation, and (C) proportion of citizens for each census tract and income class~1 (most deprived).}
\label{figcities}
\end{figure}
The segregation index developed in the main text is calculated as the median of $\widetilde{\tau}^k_{\rm sync}$, which confers an equal weight to each of the income categories, disregarding the amount of population in each category. However, we can also construct a weighted index $\bar{\widetilde{\tau}}_{\rm sync}$, defined as
\begin{equation}
\bar{\widetilde{\tau}}_{\rm sync}=\frac{\displaystyle\sum_k P_k \widetilde{\tau}_{\rm sync}(k)}{\displaystyle\sum_k P_k},
\end{equation}
where $P_k$ is the total number of citizens that belong to category $k$ in a given city. The ranking of cities according to the value of $\bar{\widetilde{\tau}}_{\rm sync}$ (Fig.~\ref{figrank}) displays only slight changes, with, for example, Philadelphia and Los Angeles closer to the top of the ranking. We test the relation between both indices in Fig.~\ref{figcomp}, where a clear correlation between the two quantities is revealed.
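In code, this weighted average reduces to a one-liner (with illustrative placeholder values):
\begin{verbatim}
import numpy as np

tau_sync = np.array([1.4, 1.1, 1.0, 1.2])   # placeholder per-category times
P_k = np.array([500, 2000, 1800, 700])      # placeholder populations

# Weighted index: sum_k P_k * tau_k / sum_k P_k
tau_weighted = np.average(tau_sync, weights=P_k)
\end{verbatim}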
\begin{figure}[th!]
\begin{center}
\includegraphics[width=0.95\textwidth]{FigS06.pdf}
\end{center}
\caption{\textbf{Segregation in US cities according to an index calculated through a weighted average.} Ranking of cities according to the weighted index of segregation $\bar{\widetilde{\tau}}_{\rm sync}$.}
\label{figrank}
\end{figure}
\begin{figure}[th!]
\begin{center}
\includegraphics[width=0.55\textwidth]{FigS07.pdf}
\end{center}
\caption{\textbf{Comparison between segregation indicators obtained through synchronization dynamics.} Comparison between the weighted index of segregation $\bar{\widetilde{\tau}}_{\rm sync}$ and the index $\mbox{med}(\widetilde{\tau}_{\rm diff}(k))$ used in the main text.}
\label{figcomp}
\end{figure}
\clearpage
\section{Beyond economic segregation: Paris around the clock}
Beyond economic segregation alone, our methodology can be used to assess the spatial heterogeneity of any other quantity. To exemplify this, we assess in this section the segregation of the population in Paris according to a wide set of socioeconomic indicators. The data compile the fraction of population per district within a certain category at each hour of the day in French cities; in this work, we focus on Paris \cite{lecomte2018mobiliscope,julie2021,vallee2021}. The list of indicators and categories analyzed can be found in Table~\ref{TableS1}.
\begin{table}[ht!]
\centering
\begin{tabular}{llllll}
\hline \hline
Indicator & \multicolumn{5}{l}{Categories} \\
\hline
Activity type & At home & At work & Studying & Shopping & Leisure \\
Age & 16-24 & 25-34 & 35-64 & 65 and more & \\
Educational level & Low & Middle-low & Middle-high & High & \\
Socio-professional status & Inactive & Low & Middle-low & Middle-high & High \\
Last travel mode & Pub. trans. & Private motor & Soft mobility & & \\
Occupational status & Active & Student & Unemployed & Retired & Inactive \\
Sex & Male & Female & & & \\ \hline
\end{tabular}
\caption{Socio-economic indicators and activity types analyzed for Paris.} \label{TableS1}
\end{table}
For each indicator or category, we have a distribution of population per spatial unit and hour of the day; thus, we can compute how the quantity $\widetilde{\tau}_{\rm sync}(k)$ varies during the day, as we show in Fig.~\ref{figS3}(A,B) for the five activity types and the five socio-professional statuses; the patterns of synchronization through time turn out to be very distinct.
For example, the level of synchronization remains basically constant throughout the day for the low and middle socio-professional statuses, while it increases (decreases) between 8am and 8pm for the inactive (high) status. If we focus instead on the ranking of $\widetilde{\tau}_{\rm sync}(k)$ at 10am and 10pm, see Fig.~\ref{figS3}(C), the lower occupational and socio-professional statuses appear to be the most segregated categories, as they are at the top of the ranking at both times of the day. Other categories that should be uniformly distributed across the city, such as sex, are very close to $1$, thus indicating no segregation.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.90\textwidth]{FigS08.pdf}
\end{center}
\caption{\textbf{Synchronization around the clock in Paris.} (A) Normalized synchronization time for the distribution of population performing each of the five types of activities. (B) Synchronization time for the distribution of population of each socio-professional status. (C) Change of synchronization times for all of the indicators at 10am (green) and 10pm (red).}
\label{figS3}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.55\textwidth]{FigS09.pdf}
\end{center}
\caption{\textbf{Clustering analysis of segregation around the clock in Paris.} (A) Pattern of synchronization times $P$ for each of the four main groups detected with the K-Means algorithm. (B) Cluster assignment for each of the indicators analyzed.}
\label{figS4}
\end{figure}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.80\textwidth]{FigS10.pdf}
\end{center}
\caption{\textbf{Local synchronization around the clock in Paris.} (A, B) Normalized synchronization time for each Paris district for the population performing leisure activities at 10am and 10pm. (C, D) Normalized synchronization time for each Paris district for the population with inactive socio-professional status at 10am and 10pm. For visualization purposes the color range is common to all four maps.}
\label{figS5}
\end{figure*}
The hourly patterns of each metric allow for the grouping of indicators that behave similarly, as we did for US cities. As before, we focus on the time-series profile rather than on the specific values taken by $\widetilde{\tau}^h_{\rm sync}(k)$, thus analyzing the normalized $P(\widetilde{\tau}^h_{\rm sync}(k))$ for each hour of the day $h$. The k-means clustering reveals four distinct clusters (see Fig.~\ref{figS4}), which correspond to: those increasing during working hours, those decreasing, those remaining almost constant, and those with a more characteristic behavior displaying peaks at midday and at the end of the day, roughly around lunch and dinner times.
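A minimal sketch of this clustering step in Python, where we assume the normalized hourly profiles are stacked in a matrix with one row per indicator (the synthetic data below is only a placeholder; the original implementation may differ):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# profiles: one row per indicator, 24 hourly values of the normalized
# P(tau^h_sync(k)); filled with synthetic data here as a placeholder
rng = np.random.default_rng(0)
profiles = rng.random((25, 24))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(profiles)   # cluster label per indicator
centers = kmeans.cluster_centers_       # four representative patterns
\end{verbatim}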
Finally, we assess the local segregation of districts by measuring their local normalized synchronization time. In particular, we show an example in Fig.~\ref{figS5} for the population performing leisure activities and those with inactive socio-professional status.
In agreement with the temporal pattern shown in Fig.~\ref{figS3}, the segregation is much higher at 10pm than at 10am and is especially concentrated in the centre of the city; a not so surprising result, given that most of the leisure activities take place in that part of the city. In the case of the population with inactive socio-professional status, the hotspots seem to be concentrated in the northern part of the city, a region known to suffer from severe inequality.
\section{Introduction}
\subsection{Background}
The inverse Schr\"{o}dinger potential problem arises from electrical impedance tomography (EIT) \cite{C1980} and has attracted much attention both theoretically and computationally. In a general setting, we can formulate the following Schr\"{o}dinger equation
\begin{equation}\label{eq_generalInverSchrodinger}
\left\{~
\eqalign{
\Delta u + k^{2} u - c(x) u = 0 &\quad \textrm{in\ } \Omega \subset \mathbb{R}^{n}, \\
u = g_{0} &\quad \textrm{on\ } \partial\Omega,
}
\right.
\end{equation}
where, throughout the article, $\Omega \subset \mathbb{R}^{n}$ is assumed to be a bounded open domain with smooth boundary $\partial\Omega$ and dimension $n \geq 2$. The inverse Schr\"{o}dinger potential problem is to identify the unknown potential function $c(x)$ from many boundary measurements or the Dirichlet-to-Neumann map defined below. A classical result in \cite{A1988} shows that if the wavenumber $k = 0$ in \eref{eq_generalInverSchrodinger}, then the stability of the inverse Schr\"{o}dinger potential problem is logarithmic. When the wavenumber is sufficiently large, increasing stability with respect to the wavenumber $k$ has been observed and well documented, starting with \cite{I2011} and with many further results given in \cite{IW2014, ILW2016, ILX2020} for \eref{eq_generalInverSchrodinger} or its linearized form. These results are often stated as stability estimates involving a H\"{o}lder term and a logarithmic term which goes to zero as the wavenumber goes to infinity. An alternative way to observe increasing stability is to note that one can determine the Fourier transform of the unknown coefficient in a stable way for a range of frequencies, and that this range increases with the wavenumber. We note that these increasing stability results have also been verified both theoretically and numerically in other inverse source, obstacle or medium problems, for which we refer to \cite{BLT2010, BT2010, NUW2013, BG2015, BLLT2015, BLRX2015, CIL2016, ZZ2017, IL2018, KKK2018, BLZ2020, BT2020} and references therein.
There have also been several recent works on inverse problems for nonlinear elliptic equations. In such problems, it has been observed that higher order linearizations of the nonlinear Dirichlet-to-Neumann map carry information about the unknown coefficients. This method allows one to exploit nonlinear effects in order to obtain better results than those that are currently known for corresponding linear equations. The higher order linearization method goes back to \cite{KLU2018} in the hyperbolic case and to \cite{FO2020, LLLS_I} in the elliptic case. The method has been further applied to more general equations and partial data problems. See \cite{KU2019, KKU2020, LLST2020, CFKKU2021, LLLS_II} for a selection of recent results.
This article studies possible improvements in stability properties of inverse problems for nonlinear Schr\"{o}dinger type equations with a large wavenumber.
More specifically, we study the inverse Schr\"{o}dinger potential problem with an arbitrary power type nonlinearity term and discuss its unique determination, increasing stability and numerical reconstruction algorithms. In particular, we consider the problem of recovering the potential function $c(x)$, defined in $\Omega \subset \mathbb{R}^{n}$, in the following nonlinear Schr\"{o}dinger equation, with an integer $m \geq 2$ denoting the nonlinearity index,
\begin{equation}\label{eq_HelmholtzEqmain}
\left\{~
\eqalign{
\Delta u + k^{2} u - c(x) u^{m} = 0 &\quad \textrm{in\ } \Omega, \\
u = g_{0} &\quad \textrm{on\ } \partial\Omega,
}
\right.
\end{equation}
from many boundary measurements. Here, we assume that the squared wavenumber $k^{2}$ is sufficiently large, $0$ is not a Dirichlet eigenvalue of $\Delta + k^{2}$ in $\Omega$, and the Dirichlet boundary data $g_{0}$ is sufficiently small. Meanwhile, by assuming that $c(x)$ is compactly supported in $\Omega$, the well-posedness of the forward problem \eref{eq_HelmholtzEqmain} can be verified following the variational framework developed in \cite[Theorem 1]{EW2014}. Thus, the boundary measurements can be given by the nonlinear Dirichlet-to-Neumann (DtN) map
\begin{equation}\label{eq_DtN}
\Lambda_{c}: g_{0} \mapsto \partial_{\nu} u \quad \textrm{on\ } \partial\Omega.
\end{equation}
The precise definition of $\Lambda_{c}$ and its two linearized forms $D^{m}_{0} \Lambda_{c}$, $\Lambda'_{c}$ will be specified later.
When the wavenumber $k = 0$ in (\ref{eq_HelmholtzEqmain}), unique identification of the potential function $c(x)$ has been provided in \cite{LLLS_II} by measurement of the DtN map in (\ref{eq_DtN}) and its linearized form $D^{m}_{0} \Lambda_{c}$. In the current work, we particularly focus on the increasing stability estimate for (\ref{eq_HelmholtzEqmain}) under two different linearized forms of $\Lambda_{c}$.
\subsection{Linearization approaches}\label{se1.2}
To solve the nonlinear inverse Schr\"{o}dinger potential problem stably, we implement linearization approaches and discuss the recovery of the potential function by the corresponding linearized DtN maps. In this subsection, we briefly review two linearization approaches, with respect to small boundary data and to a small potential function, which have been studied in linear and nonlinear elliptic inverse problems, for instance in \cite{C1980, LLLS_I}, when $k = 0$ in \eref{eq_generalInverSchrodinger} and \eref{eq_HelmholtzEqmain}.
To treat elliptic equations with power type nonlinearities, a novel linearization approach with respect to small boundary data has recently been discussed in \cite{FO2020, LLLS_I} and extended to a fractional nonlinearity index in \cite{LLST2020}.
We briefly introduce its extension to the nonlinear Schr\"{o}dinger potential problem \eref{eq_HelmholtzEqmain} below. Assume that $c \in C^{\alpha}(\overline{\Omega})$ for some $\alpha$ with $0 < \alpha < 1$, and $0$ is not a Dirichlet eigenvalue of $\Delta + k^{2}$ in $\Omega$. By \cite[Proposition 2.1]{LLST2020}, we can find a constant $\tau > 0$ such that for any Dirichlet boundary value $f$ in $U_{\tau} := \{ f \in C^{2,\alpha}(\partial\Omega) \,:\, \|f\|_{C^{2,\alpha}(\partial\Omega)} \leq \tau \}$, there is a unique small solution $u \in C^{2,\alpha}(\overline{\Omega})$ and $u|_{\partial\Omega} = f$. The nonlinear DtN map in the H\"{o}lder spaces is defined by
\begin{equation*}
\Lambda_{c}: U_{\tau} \subset C^{2,\alpha}(\partial\Omega) \to C^{1,\alpha}(\partial\Omega), \qquad
f \mapsto \partial_{\nu} u |_{\partial\Omega}.
\end{equation*}%
Let $\varepsilon = (\varepsilon_{1}, \ldots, \varepsilon_{m})$ where each $\varepsilon_{j} > 0$ is small, and consider the solution $u_{\varepsilon}$ corresponding to the Dirichlet boundary value
\begin{equation*}
f_{\varepsilon} = \varepsilon_{1} f_{1} + \ldots + \varepsilon_{m} f_{m}.
\end{equation*}
By \cite[Proposition 2.1]{LLST2020} the solution $u_{\varepsilon}$ depends smoothly on the parameters $\varepsilon_{j}$. We may thus differentiate the equation
\begin{equation}\label{eq_Linearization1}
\Delta u_{\varepsilon} + k^{2} u_{\varepsilon} - c(x) u_{\varepsilon}^{m} = 0 \quad \textrm{in\ } \Omega, \qquad
u_{\varepsilon} |_{\partial\Omega} = f_{\varepsilon}
\end{equation}
with respect to the parameters $\varepsilon_{j}$. Writing $v_{j} = \partial_{\varepsilon_{j}} u_{\varepsilon} |_{\varepsilon = 0}$, we observe that $v_{j}$ is the unique solution of
\begin{equation*}
\Delta v_{j} + k^{2} v_{j} = 0 \quad \textrm{in\ } \Omega, \qquad
v_{j} |_{\partial\Omega} = f_{j}.
\end{equation*}
Similarly, applying $\partial_{\varepsilon_{1}} \cdots \partial_{\varepsilon_{m}}$ to the equation \eref{eq_Linearization1} and setting $\varepsilon = 0$, we can define $w = \partial_{\varepsilon_{1}} \cdots \partial_{\varepsilon_{m}} u_{\varepsilon} |_{\varepsilon = 0}$ which solves the equation
\begin{equation}\label{mth_equation_difference}
\Delta w + k^{2} w = (m!) c(x) v_{1} \cdots v_{m} \quad \textrm{in\ } \Omega, \qquad
w |_{\partial\Omega} = 0.
\end{equation}
Moreover, the Neumann boundary data can be obtained in the form of
\begin{equation}\label{eq_HighOrderLinearDtN}
\eqalign{
\partial_{\nu} w |_{\partial\Omega}
&= \partial_{\varepsilon_{1}} \cdots \partial_{\varepsilon_{m}} ( \partial_{\nu} u_{\varepsilon} |_{\partial\Omega} ) |_{\varepsilon = 0}
= \partial_{\varepsilon_{1}} \cdots \partial_{\varepsilon_{m}} \Lambda_{c} (f_{\varepsilon}) |_{\varepsilon = 0} \\
&= D^{m}_{0} \Lambda_{c} (f_{1}, \ldots, f_{m})
}
\end{equation}
where $D^{m}_{0}$ denotes the $m$th Fr\'{e}chet derivative at $0$ considered as an $m$-linear form. If we integrate the equation \eref{mth_equation_difference} against another function $v_{m+1}$ solving
\begin{equation*}
\Delta v_{m+1} + k^{2} v_{m+1} = 0 \quad \textrm{in\ } \Omega, \qquad
v_{m+1} |_{\partial\Omega} = f_{m+1},
\end{equation*}
we obtain a Calder\'{o}n or Alessandrini type identity
\begin{equation}\label{eq_nonlinearsmallboundary_equality}
(m!) \int_{\Omega} c(x) v_{1} \cdots v_{m} v_{m+1} ~\mathrm{d} x
= \int_{\partial\Omega} D^{m}_{0} \Lambda_{c} (f_{1}, \ldots, f_{m}) f_{m+1} ~\mathrm{d} S
\end{equation}
which will be revisited later.
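For instance, in the quadratic case $m = 2$, the above reads as follows: $w = \partial_{\varepsilon_{1}} \partial_{\varepsilon_{2}} u_{\varepsilon} |_{\varepsilon = 0}$ solves $\Delta w + k^{2} w = 2 c(x) v_{1} v_{2}$ in $\Omega$ with $w|_{\partial\Omega} = 0$, and the identity \eref{eq_nonlinearsmallboundary_equality} becomes
\begin{equation*}
2 \int_{\Omega} c(x) v_{1} v_{2} v_{3} ~\mathrm{d} x
= \int_{\partial\Omega} D^{2}_{0} \Lambda_{c} (f_{1}, f_{2}) f_{3} ~\mathrm{d} S .
\end{equation*}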
Noticing that the $m$th Fr\'{e}chet derivative $D^{m}_{0} \Lambda_{c} (f_{1}, \ldots, f_{m})$ is numerically hard to obtain, we further consider the case when $c(x)$ is small compared to the wavenumber and study the linearization approach with respect to the potential function as investigated in the linear Schr\"{o}dinger potential problem in \cite{ILX2020}.
Taking the asymptotic expansion with respect to the potential function $c(x)$, we have
\begin{equation}\label{eq_asymILX}
u = u_{0} + u_{1} + u_{2} + \ldots
\end{equation}
where the remaining ``$\ldots$'' denotes the higher order terms, and the following subproblems are satisfied:
\begin{equation*}
\eqalign{
& \Delta u_{0} + k^{2} u_{0} = 0, \\
& \Delta u_{1} + k^{2} u_{1} = c(x) u_{0}^{m}, \\
& \Delta u_{2} + k^{2} u_{2} = m c(x) u_{0}^{m-1} u_{1}.
}
\end{equation*}
This shows that $u_{0}$ satisfies the Helmholtz equation $\Delta u_{0} + k^{2} u_{0} = 0$ in $\Omega$ and the first-order expansion term $u_{1}$ satisfies
\begin{equation}\label{eq_quadratic_u1}
\Delta u_{1} + k^{2} u_{1} = c(x) u_{0}^{m} \quad \textrm{in\ } \Omega.
\end{equation}
When $u_{0} |_{\partial\Omega} = g_{0}$ and $u_{1} |_{\partial\Omega} = g_{1} \equiv 0$, the linearized DtN map $\Lambda'_{c}$ is formally defined by
\begin{equation}\label{eq_LinearDtN}
\Lambda'_{c}: g_{0} \mapsto \partial_{\nu} u_{1} \quad \textrm{on\ } \partial\Omega.
\end{equation}
Note that $\Lambda'_{c}$ is actually a nonlinear map, since it corresponds to linearization with respect to the potential.
Multiplying both sides of the above equation \eref{eq_quadratic_u1} by another $\varphi$ solving $\Delta \varphi + k^{2} \varphi = 0$ in $\Omega$ and integrating, we thus obtain another Calder\'{o}n or Alessandrini type identity
\begin{equation}\label{eq_calderonILX}
\int_{\Omega} c(x) u_{0}^{m} \varphi ~\mathrm{d} x
= \int_{\partial\Omega} \partial_{\nu} u_{1} \varphi ~\mathrm{d} S
\end{equation}
which will also be revisited later.
In the current article, we consider the following problem:
\begin{quote}
{\bf Recover the potential function $c(x)$ from the linearized DtN map $D^{m}_{0} \Lambda_{c}$ or $\Lambda'_{c}$}.
\end{quote}
The first main result shows that from the knowledge of the $m$th Fr\'{e}chet derivative $D^{m}_{0} \Lambda_{c}$, one can determine the Fourier transform $\mathcal{F}[c](\xi)$ of $c$ in a stable way for frequencies $|\xi| \leq (m+1)k$. Thus the range of frequencies that can be determined stably increases both with respect to the wavenumber $k$ and the nonlinearity index $m$. However, determining $D^{m}_{0} \Lambda_{c}$ from $\Lambda_{c}$ becomes numerically very difficult when $m$ increases. The second main result considers the case where the potential function is small compared to the wavenumber. In this case we consider the linearization $\Lambda'_{c}$. We show that in the quadratic case where $m = 2$, from the knowledge of $\Lambda'_{c}$ one can stably determine $\mathcal{F}[c](\xi)$ for frequencies $|\xi| \leq 3k$. This is in contrast with the linear case, where one can only determine frequencies $|\xi| \leq 2k$ stably \cite{ILX2020}. Thus in both main results above, the nonlinearity leads to improved stability properties in a certain sense. The theoretical stability results are confirmed by numerical results given at the end of the article.
The article is organized as follows. In \Sref{se2} we show that the linearized DtN map $D^{m}_{0} \Lambda_{c}$ provides a uniform increasing stability where the range of frequencies that can be determined stably increases with respect to $k$ and $m$. On the other hand, $\Lambda'_{c}$ only yields the uniqueness of the potential function $c(x)$ in the general setting $m \geq 2$.
In \Sref{se3} we further explore the linearized DtN map $\Lambda'_{c}$ for the inverse Schr\"{o}dinger potential problem with a quadratic nonlinearity term. By calibrating the identity \eref{eq_calderonILX} carefully, we verify an improved increasing stability for the specific inverse Schr\"{o}dinger potential problem with a quadratic nonlinearity term.
Noticing that both linearized DtN maps $D^{m}_{0} \Lambda_{c}$ and $\Lambda'_{c}$ can be numerically approximated, we extend the reconstruction algorithm in \cite{ILX2020} to the inverse Schr\"{o}dinger potential problem with quadratic and general nonlinearity terms in \Sref{se4}, respectively. We note that one of these reconstruction algorithms is realized by the linearized DtN map $\Lambda'_{c}$ with multiple wavenumbers. In the same \Sref{se4} we provide some numerical examples and extended discussion verifying the efficiency of our proposed algorithms.
\section{Linearized inverse Schr\"{o}dinger potential problem with an arbitrary power type nonlinearity term}\label{se2}
In this section, we investigate the linearized inverse Schr\"{o}dinger potential problem with an arbitrary power type nonlinearity term provided with the linearized DtN map $D^{m}_{0} \Lambda_{c}$ or $\Lambda'_{c}$. The analysis is based on the Calder\'{o}n type identities \eref{eq_nonlinearsmallboundary_equality} and \eref{eq_calderonILX}.
For the map $D^{m}_{0} \Lambda_{c}$ with $k = 0$, \cite{LLLS_I} has verified that, by linearizing at small boundary data, the stability estimate for the inverse potential problem is logarithmic, which is consistent with the classical result in EIT \cite{A1988}. In the current section, we verify that, by constructing an appropriate set of complex exponential solutions, there is improved stability when the wavenumber is large.
Recall the identity \eref{eq_nonlinearsmallboundary_equality},
\begin{equation*}
(m!) \int_{\Omega} c(x) v_{1} \cdots v_{m} v_{m+1} ~\mathrm{d} x
= \int_{\partial\Omega} D^{m}_{0} \Lambda_{c} (f_{1}, \ldots, f_{m}) f_{m+1} ~\mathrm{d} S.
\end{equation*}
Here $v_{j}$ solve $\Delta v_{j} + k^{2} v_{j} = 0$ in $\Omega$ with $v_{j} |_{\partial\Omega} = f_{j}$. To derive the stability estimate, we rely on the above identity and observe that
\begin{equation}\label{eq_linearization1_estimate}
\left| \int_{\Omega} c(x) v_{1} \cdots v_{m} v_{m+1} ~\mathrm{d} x \right|
\leq \frac{\epsilon}{m!} \left( \prod_{j=1}^{m} \| f_{j} \|_{C^{2,\alpha}(\partial\Omega)} \right) \| f_{m+1} \|_{L^{2}(\partial\Omega)}
\end{equation}
where we define $\epsilon := \sup_{\|f_{j}\|_{C^{2,\alpha}(\partial\Omega)} \leq 1} \left\|D^{m}_{0} \Lambda_{c} (f_{1}, \ldots, f_{m})\right\|_{L^{2}(\partial\Omega)}$. Thus \eref{eq_linearization1_estimate} further yields the inequality
\begin{equation}\label{mth_derivative_identity_estimate}
\left| \int_{\Omega} c(x) v_{1} \cdots v_{m+1} ~\mathrm{d} x \right|
\leq \frac{\epsilon}{m!} \prod_{j=1}^{m+1} \| v_{j} \|_{C^{2,\alpha}(\overline{\Omega})}.
\end{equation}
Let $\mathcal{F}[c](\xi)$ denote the Fourier transform of $c$ (extended by zero outside $\Omega$) at a frequency $\xi \in \mathbb{R}^{n}$. The following result shows that frequencies $|\xi| \leq (m+1)k$ can be recovered in a Lipschitz stable way from the knowledge of the linearized map $D^{m}_{0} \Lambda_{c}$.
\begin{thm}
Let $m \geq 2$ be an integer, $k \geq 1$, and assume that $|\xi| \leq (m+1)k$. Then
\begin{equation*}
| \mathcal{F}[c](\xi) |
\leq \frac{\epsilon}{m!} \left( 3(1+k^6) \right)^{\frac{m+1}{2}}.
\end{equation*}
\end{thm}
\begin{proof}
We first claim that if $\ell \geq 2$ is an integer, then for any $\eta \in \mathbb{R}^{n}$ with $|\eta| \leq \ell$ there are unit vectors $\omega_{1}, \ldots, \omega_{\ell} \in \mathbb{R}^{n}$ such that
\begin{equation*}
\sum_{j=1}^{\ell} \omega_{j} = \eta.
\end{equation*}
This can be proved by induction. When $\ell = 2$ and $|\eta| \leq 2$, we may choose
\begin{equation*}
\omega_{1} = \frac{\eta}{2} + \sqrt{1-\frac{|\eta|^{2}}{4}}\, \omega, \qquad
\omega_{2} = \frac{\eta}{2} - \sqrt{1-\frac{|\eta|^{2}}{4}}\, \omega,
\end{equation*}
where $\omega$ is any unit vector orthogonal to $\eta$. We make the induction hypothesis that the claim holds for some $\ell \geq 2$. Let $\eta$ be a vector with $|\eta| \leq \ell+1$. We can write $\eta = \eta_{0} + \tilde{\omega}$, where $\eta_{0}$ and $\tilde{\omega}$ are parallel to $\eta$ and $|\eta_{0}| \leq \ell$, $|\tilde{\omega}| = 1$. Applying the induction hypothesis to $\eta_{0}$ gives unit vectors $\omega_{1}, \ldots, \omega_{\ell}$ that add up to $\eta_{0}$. The induction step is completed by setting $\omega_{\ell+1} = \tilde{\omega}$.
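The constructive argument above translates directly into a short routine. The following Python sketch (illustrative only, not part of the numerics of this article) decomposes a given $\eta \in \mathbb{R}^{n}$ with $|\eta| \leq \ell$ into $\ell$ unit vectors; applied to $\eta = \xi/k$ with $\ell = m+1$, it produces the directions used below.
\begin{verbatim}
import numpy as np

def unit_vector_sum(eta, ell):
    """Return ell unit vectors in R^n (n >= 2) summing to eta, |eta| <= ell."""
    eta = np.asarray(eta, dtype=float)
    if ell == 2:
        # base case: split eta symmetrically along an orthogonal unit vector
        w = np.zeros_like(eta)
        w[int(np.argmin(np.abs(eta)))] = 1.0
        w -= (w @ eta) / max(eta @ eta, 1e-30) * eta
        w /= np.linalg.norm(w)
        r = np.sqrt(1.0 - (eta @ eta) / 4.0)
        return [eta / 2 + r * w, eta / 2 - r * w]
    # induction step: peel off one unit vector parallel to eta
    norm = np.linalg.norm(eta)
    w_last = eta / norm if norm > 0 else np.eye(len(eta))[0]
    return unit_vector_sum(eta - w_last, ell - 1) + [w_last]

omegas = unit_vector_sum(np.array([1.2, 0.5]), 3)
assert np.allclose(sum(omegas), [1.2, 0.5])
assert all(np.isclose(np.linalg.norm(w), 1.0) for w in omegas)
\end{verbatim}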
To prove the theorem we choose special solutions of $\Delta v_{j} + k^{2} v_{j} = 0$ in $\Omega$ ($j=1,\dots,m+1$) having the form
\begin{equation*}
v_{j} = \mathrm{e}^{\mathbf{i} \zeta_{j} \cdot x}
\end{equation*}
where $\zeta_{j} \in \mathbb{C}^{n}$ satisfy $\zeta_{j} \cdot \zeta_{j} = k^{2}$. Since $|\frac{\xi}{k}| \leq m+1$, the claim above shows that we can find unit vectors $\omega_{1}, \ldots, \omega_{m+1}$ such that
\begin{equation*}
\sum_{j=1}^{m+1} \omega_{j} = \frac{\xi}{k}.
\end{equation*}
Thus, choosing $\zeta_{j} = k \omega_{j}$, we have
\begin{equation*}
\sum_{j=1}^{m+1} \zeta_{j} = \xi.
\end{equation*}
It follows that $v_{1} \cdots v_{m+1} = \mathrm{e}^{\mathbf{i} \xi \cdot x}$. Now \eref{mth_derivative_identity_estimate} shows that
\begin{equation*}
| \mathcal{F}[c](\xi) |
\leq \frac{\epsilon}{m!} \prod_{j=1}^{m+1} \| v_{j} \|_{C^{3}(\overline{\Omega})}.
\end{equation*}
The proof is completed upon observing that $\| v_{j} \|_{C^{3}(\overline{\Omega})}^2 \leq 3(1+k^{6})$ when $k \geq 1$.
\end{proof}
The assumption $|\xi| \leq (m+1) k$ ensured that we could choose solutions $v_{j} = \mathrm{e}^{\mathbf{i} \zeta_{j} \cdot x}$ with $\zeta_{j}$ purely real in the proof. When $|\xi| > (m+1) k$ this will no longer be possible, and there will be a logarithmic component in the increasing stability estimate. We will next prove such an estimate for the linearized DtN map $D^{m}_{0} \Lambda_{c}$ by making a more careful choice of the vectors $\zeta_{j}$. Without loss of generality we assume that $0 \in \Omega$ and denote $D := 2 \sup_{x \in \Omega} \left| x \right|$.
\begin{thm}\label{thm_holder}
Let $D \leq 1$, $\|c\|_{C^{1}(\overline{\Omega})} \leq M_{1}$, $k > 1$ and $\epsilon < 1$. Then the following estimate holds:
\begin{equation*}
\|c\|_{L^{2}(\Omega)}^{2}
\leq C k^{n+6(m+1)} \epsilon^{2} +C E^{n+6(m+1)} \epsilon
+ \frac{M_{1}^{2}}{1 + m^{2} k^{2} + E^{2}}
\end{equation*}
for the linearized system \eref{mth_equation_difference}, with $E = -\ln\epsilon$ and the constant $C$ depending on the domain $\Omega$, the nonlinearity index $m$ and the dimension $n$.
\end{thm}
\begin{proof}
To prove the stability estimate, we shall choose the complex exponential solutions $v_{j}$ in \eref{mth_derivative_identity_estimate} carefully. Let $\xi \in \mathbb{R}^{n}$ with $\xi \neq 0$ and choose an orthonormal basis $\left\{ e_{1} := \frac{\xi}{|\xi|}, e_{2}, \ldots, e_{n} \right\}$ of $\mathbb{R}^{n}$, $n \geq 2$. Let $v_{j} = \mathrm{e}^{\mathbf{i} \zeta_{j} \cdot x}$ be solutions of the Helmholtz equation, where the complex vectors $\zeta_{j} \in \mathbb{C}^{n}$, $j = 1,2,\ldots,m+1$, satisfy $\zeta_{j} \cdot \zeta_{j} = k^{2}$ and $\sum_{j=1}^{m+1} \zeta_{j} = \xi$.
We carry out the proof by distinguishing cases according to the parity of the nonlinearity index $m$.
\begin{description}
\item[Case 1: even $m$. ]
The complex exponential solutions $v_{j} = \mathrm{e}^{\mathbf{i} \zeta_{j} \cdot x}$ are constructed below by
\begin{equation}\label{eq_thm1_CExsol1}
\left\{~
\eqalign{
\zeta_{1} &= \frac{1}{m} (-k+|\xi|)e_{1} +\frac{1}{m} \sqrt{(m^{2}-1) k^{2} + 2k|\xi| - |\xi|^{2}} e_{2}, \\
\zeta_{2} &= \frac{1}{m} (-k+|\xi|)e_{1} -\frac{1}{m} \sqrt{(m^{2}-1) k^{2} + 2k|\xi| - |\xi|^{2}} e_{2}, \\
\zeta_{3} &= \zeta_{1}, \\
\zeta_{4} &= \zeta_{2}, \\
\ldots \\
\zeta_{m-1} &=\zeta_{1}, \\
\zeta_{m} &=\zeta_{2}, \\
\zeta_{m+1} &= k e_{1}.
}
\right.
\end{equation}
Denote $\Xi := \sqrt{|\xi|^{2} - 2k|\xi|- (m^{2}-1) k^{2}}$. If $k \geq \frac{|\xi|}{m+1}$ and $k > 1$, then we obtain
\begin{equation*}
\| v_{j} \|^{2}_{C^{3}(\overline{\Omega})} \leq 3(1+k^{6}), \quad j = 1,\ldots,m+1.
\end{equation*}
If $k < \frac{|\xi|}{m+1}$ and $k > 1$, for $j = 1,\ldots,m$, we derive
\begin{equation*}
\| v_{j} \|^{2}_{C^{3}(\overline{\Omega})} \leq 3(1+k^{6}) \sup| \mathrm{e}^{\mathbf{i} \zeta_{j} \cdot x}|^{2}
\leq 3(1+k^{6}) \mathrm{e}^{D\frac{\Xi}{m}}
\end{equation*}
and for $j = m+1$
\begin{equation*}
\| v_{m+1} \|^{2}_{C^{3}(\overline{\Omega})} \leq 3(1+k^{6}).
\end{equation*}
Recalling the identity \eref{eq_nonlinearsmallboundary_equality} and the inequality \eref{mth_derivative_identity_estimate} we have
\begin{equation*}
| \mathcal{F}[c](\xi) |^{2}
= \left| \int_{\Omega} c(x) v_{1} \cdots v_{m+1} ~\mathrm{d} x \right|^{2}
\leq \frac{\epsilon^{2}}{(m!)^{2}} \prod_{j=1}^{m+1} \| v_{j} \|^{2}_{C^{3}(\overline{\Omega})}.
\end{equation*}
Thus it is straightforward to obtain, for $k \geq \frac{|\xi|}{m+1}$ and $k>1$, that
\begin{equation*}
|\mathcal{F}[c](\xi)|^{2} \leq \frac{3^{m+1}}{(m!)^{2}} \epsilon^{2} (1+k^{6})^{m+1}
\end{equation*}
and for $k < \frac{|\xi|}{m+1}$, that
\begin{equation*}
|\mathcal{F}[c](\xi)|^{2} \leq \frac{3^{m+1}}{(m!)^{2}} \epsilon^{2} (1+k^{6})^{m+1} \mathrm{e}^{D \Xi}.
\end{equation*}
Now we let $E := -\ln\epsilon > 0$ by assuming $\epsilon < 1$ and consider two situations such that
\begin{description}
\item[a)] $k > E$ (i.e.\ $\epsilon = \mathrm{e}^{-E} > \mathrm{e}^{-k}$) and
\item[b)] $k \leq E$ (i.e.\ $\epsilon = \mathrm{e}^{-E} \leq \mathrm{e}^{-k}$).
\end{description}
In the situation of a), we directly obtain, with a generic constant $C := C(\Omega,m,n)$,
\begin{equation*}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
& = \int |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
= \int_{k \geq \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
+ \int_{k < \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
& \leq C (1+k^{6})^{m+1} (m+1)^{n} k^{n} \epsilon^{2} + \frac{M_{1}^{2}}{1+(m+1)^{2}k^{2}} \\
& \leq C k^{n+6(m+1)} \epsilon^{2} + \frac{M_{1}^{2}}{1 + m^{2} k^{2} + E^{2}}.
}
\end{equation*}
In the situation of b), we let $\rho := k + \sqrt{m^{2} k^{2}+\left(\frac{E}{D}\right)^{2}}$ such that $\sqrt{\rho^{2} - 2k\rho- (m^{2}-1)k^{2}} = \frac{E}{D}$ and split
\begin{equation}\label{eq_thm1_c}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
& = \int_{k \geq \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
+ \int_{k < \frac{|\xi|}{m+1} < \frac{\rho}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
&\quad + \int_{\rho \leq |\xi|} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi.
}
\end{equation}
Meanwhile, we bound, noticing $\rho \leq (m+1)k + \frac{E}{D}$ and $k \leq E$,
\begin{equation}\label{eq_thm1_interval}
\eqalign{
\int_{k < \frac{|\xi|}{m+1} < \frac{\rho}{m+1}} ~\mathrm{d} \xi
& = \sigma_{n} \left( \rho^{n} - (m+1)^{n} k^{n} \right) \\
& \leq \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left(1+(m+1)k\frac{D}{E}\right)^{n} - \left((m+1)k\frac{D}{E}\right)^{n} \right] \\
& \leq \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left(1+(m+1)D\right)^{n} - \left((m+1)D\right)^{n} \right]
}
\end{equation}
where $\sigma_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$.
Then the first two terms in \eref{eq_thm1_c} can be bounded by
\begin{equation*}
\int_{k \geq \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
\leq C k^{n+6(m+1)} \epsilon^{2} \leq C E^{n+6(m+1)} \epsilon^{2},
\end{equation*}
\begin{equation*}
\eqalign{
\int_{k < \frac{|\xi|}{m+1} < \frac{\rho}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi & \leq C k^{6(m+1)} \epsilon^{2} \mathrm{e}^E \int_{k < \frac{|\xi|}{m+1} < \frac{\rho}{m+1}} ~\mathrm{d} \xi \\
& \leq C E^{n+6(m+1)} \epsilon,
}
\end{equation*}
noticing $\int_{k < \frac{|\xi|}{m+1} < \frac{\rho}{m+1}} ~\mathrm{d} \xi \leq C E^{n}$ by \eref{eq_thm1_interval} and $k \leq E$.
We thus obtain
\begin{equation*}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
& \leq C E^{n+6(m+1)} \epsilon^{2}+ C E^{n+6(m+1)} \epsilon + \frac{M_{1}^{2}}{1+m^{2}k^{2}+\frac{E^{2}}{D^{2}}} \\
& \leq C E^{n+6(m+1)} \epsilon + \frac{M_{1}^{2}}{1+m^{2}k^{2}+E^{2}}
}
\end{equation*}
since $\rho \geq \sqrt{m^{2} k^{2}+\left(\frac{E}{D}\right)^{2}}$, $\epsilon<1$ and $D \leq 1$.
\item[Case 2: odd $m$. ]
In this case, we could construct
\begin{equation}\label{eq_thm1_CExsol2}
\left\{~
\eqalign{
\zeta_{1} &= \frac{1}{m+1} |\xi| e_{1} + \frac{1}{m+1}\sqrt{(m+1)^{2} k^{2} -|\xi|^{2}} e_{2}, \\
\zeta_{2} &= \frac{1}{m+1} |\xi| e_{1} - \frac{1}{m+1}\sqrt{(m+1)^{2} k^{2} -|\xi|^{2}} e_{2}, \\
\ldots \\
\zeta_{m} &= \zeta_{1}, \\
\zeta_{m+1} &= \zeta_{2}.
}
\right.
\end{equation}
The analysis is similar to {\bf Case 1} by replacing $\Xi := \sqrt{|\xi|^{2}-(m+1)^{2} k^{2}}$ and $\rho := \sqrt{(m+1)^{2} k^{2} + \left(\frac{E}{D}\right)^{2}}$. If $k > E$, we obtain
\begin{equation*}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
&= \int_{k \geq \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
+ \int_{k < \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
& \leq C (1+k^{6})^{m+1} (m+1)^{n} k^{n} \epsilon^{2} + \frac{M_{1}^{2}}{1+(m+1)^{2}k^{2}} \\
& \leq C k^{n+6(m+1)} \epsilon^{2} + \frac{M_{1}^{2}}{1 + m^{2} k^{2} + E^{2}}.
}
\end{equation*}
If $k \leq E$ and $D \leq 1$, we have
\begin{equation*}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
&= \int_{k \geq \frac{|\xi|}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
+ \int_{k < \frac{|\xi|}{m+1} < \frac{\rho}{m+1}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
&\quad + \int_{\rho \leq |\xi|} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
& \leq C E^{n+6(m+1)} \epsilon^{2} + C E^{n+6(m+1)} \epsilon + \frac{M_{1}^{2}}{1+(m+1)^{2}k^{2}+E^{2}}.
}
\end{equation*}
\end{description}
\end{proof}
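As a quick sanity check of the constructions \eref{eq_thm1_CExsol1} and \eref{eq_thm1_CExsol2}, the following Python sketch (illustrative only) verifies numerically, for the even case, that $\zeta_{j} \cdot \zeta_{j} = k^{2}$ and $\sum_{j=1}^{m+1} \zeta_{j} = \xi$; the complex square root covers both regimes $k \geq \frac{|\xi|}{m+1}$ and $k < \frac{|\xi|}{m+1}$.
\begin{verbatim}
import numpy as np

def zetas_even(k, xi, m):
    """Vectors of (eq_thm1_CExsol1) for even m, in the plane span{e1, e2}."""
    s = np.linalg.norm(xi)
    e1 = xi / s
    e2 = np.array([-e1[1], e1[0]])                 # orthonormal in 2D
    root = np.sqrt((m**2 - 1) * k**2 + 2 * k * s - s**2 + 0j)
    z1 = ((-k + s) * e1 + root * e2) / m
    z2 = ((-k + s) * e1 - root * e2) / m
    return [z1, z2] * (m // 2) + [k * e1 + 0j * e2]

k, m, xi = 4.0, 2, np.array([9.0, 3.0])
zs = zetas_even(k, xi, m)
assert np.allclose(sum(zs), xi)                    # sum of zetas equals xi
assert all(np.isclose(z @ z, k**2) for z in zs)    # zeta . zeta = k^2
\end{verbatim}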
\begin{rem}
We shall mention that the treatment of the identity \eref{eq_nonlinearsmallboundary_equality} in the current work is quite different from that in \cite{LLLS_I}. More precisely, in \cite{LLLS_I} the authors consider an inverse problem for elliptic equations where $v_{j}$ are solutions of Laplace equations. Since any constant is a trivial solution there, the uniqueness in \cite{LLLS_I} can be obtained based on the classic arguments in \cite{C1980}. On the other hand, in the current work, $v_{j}$ are solutions of Helmholtz equations and we have to choose them very carefully, as shown in the above proof.
\end{rem}
Despite the profound theoretical justification via the linearized DtN map $D^{m}_{0} \Lambda_{c}$, it is not easy to approximate such a linearized DtN map numerically, as will be shown in \Sref{se4.3}. In particular, the small boundary data yield a solution with small values which is easily contaminated by numerical differentiation errors. To further study the linearized inverse Schr\"{o}dinger potential problem of \eref{eq_HelmholtzEqmain}, it is worthwhile to consider the linearized DtN map $\Lambda'_{c}$ corresponding to the case where $c$ is small compared to the wavenumber $k$. In particular, we prove below the uniqueness for the linearized inverse Schr\"{o}dinger potential problem with an arbitrary power type nonlinearity term, given the linearized DtN map $\Lambda'_{c}$ at a fixed wavenumber $k > 0$.
\begin{thm}\label{thm_unique_m}
Let $c_{1}$ and $c_{2}$ be two functions in $L^{\infty}(\Omega)$. If the two linearized DtN maps in \eref{eq_LinearDtN} obey $\Lambda'_{c_{1}} = \Lambda'_{c_{2}}$, then $c_{1} = c_{2}$ in $\Omega$.
\end{thm}
\begin{proof}
The proof is similar to the seminal work by Calder\'{o}n \cite{C1980}, but one needs to choose appropriate complex exponential solutions. To this end, we let $\xi \in \mathbb{R}^{n}$ with $\xi \neq 0$ and choose an orthonormal basis $\left\{ e_{1} := \frac{\xi}{|\xi|}, e_{2}, \ldots, e_{n} \right\}$ of $\mathbb{R}^{n}$, $n \geq 2$. Then we can define the following vectors $\mu_{\ell} \in \mathbb{C}^{n}$, $\ell = 1,2$, such that
\begin{equation}\label{eq_exposol_first}
\left\{~
\eqalign{
\mu_{1} & = \frac{+(m^{2}-1) k^{2} + |\xi|^{2}}{2m|\xi|} e_{1} - \frac{\sqrt{-(m^{2}-1)^{2} k^{4} + 2(m^{2}+1) k^{2} |\xi|^{2} -|\xi|^{4}}}{2m|\xi|} e_{2}, \\
\mu_{2} & = \frac{-(m^{2}-1) k^{2} + |\xi|^{2}}{2 |\xi|} e_{1} + \frac{\sqrt{-(m^{2}-1)^{2} k^{4} + 2(m^{2}+1) k^{2} |\xi|^{2} -|\xi|^{4}}}{2 |\xi|} e_{2}.
}
\right.
\end{equation}
We end the proof by assigning, in \eref{eq_calderonILX},
\begin{equation}\label{eq_exposol_m}
u_{0}(x) = \mathrm{e}^{\mathbf{i} \mu_{1} \cdot x}, \qquad
\varphi(x) = \mathrm{e}^{\mathbf{i} \mu_{2} \cdot x},
\end{equation}
such that $u_{0}^{m}(x) \varphi(x) = \mathrm{e}^{\mathbf{i} \xi \cdot x}$.
\end{proof}
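As a numerical illustration (again only a sketch, not part of the article's code), one can check that the vectors in \eref{eq_exposol_first} indeed satisfy $\mu_{\ell} \cdot \mu_{\ell} = k^{2}$ and $m \mu_{1} + \mu_{2} = \xi$, so that $u_{0}^{m} \varphi = \mathrm{e}^{\mathbf{i} \xi \cdot x}$:
\begin{verbatim}
import numpy as np

def mu_vectors(k, xi, m):
    """Vectors mu_1, mu_2 of (eq_exposol_first), in the plane span{e1, e2}."""
    s = np.linalg.norm(xi)
    e1, e2 = xi / s, np.array([-xi[1], xi[0]]) / s
    disc = np.sqrt(-(m**2 - 1)**2 * k**4
                   + 2 * (m**2 + 1) * k**2 * s**2 - s**4 + 0j)
    mu1 = (((m**2 - 1) * k**2 + s**2) * e1 - disc * e2) / (2 * m * s)
    mu2 = ((-(m**2 - 1) * k**2 + s**2) * e1 + disc * e2) / (2 * s)
    return mu1, mu2

k, m, xi = 5.0, 3, np.array([12.0, 5.0])     # |xi| = 13 in [(m-1)k, (m+1)k]
mu1, mu2 = mu_vectors(k, xi, m)
assert np.allclose(m * mu1 + mu2, xi)        # hence u0^m * phi = e^{i xi.x}
assert np.isclose(mu1 @ mu1, k**2) and np.isclose(mu2 @ mu2, k**2)
\end{verbatim}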
\begin{rem}\label{rem_uniqueComSol}
For $m = 1$, namely, when the power-type nonlinearity term reduces to a linear one, the complex exponential solutions in \eref{eq_exposol_m} are exactly those used in \cite{ILX2020}. Nevertheless, it is somewhat disappointing that when $m \geq 2$, \eref{eq_exposol_m} does not easily give a stability estimate.
In particular, the failure is caused precisely by the behavior of the complex vectors in \eref{eq_exposol_first}. It is easy to verify that
\begin{equation*}
\eqalign{
&-(m^{2}-1)^{2} k^{4} + 2(m^{2}+1) k^{2} |\xi|^{2} - |\xi|^{4} \\
&\quad = - \big(|\xi|+(m+1)k\big) \big(|\xi|+(m-1)k\big) \big(|\xi|-(m-1)k\big) \big(|\xi|-(m+1)k\big)
\geq 0,
}
\end{equation*}
for $|\xi| \in [(m-1)k,(m+1)k]$. If $|\xi| \in \big(0,(m-1)k\big)$ or $|\xi| > (m+1)k$, the complex exponential solution $u_{0}(x) = \mathrm{e}^{\mathbf{i} \mu_{1} \cdot x}$ or $\varphi(x) = \mathrm{e}^{\mathbf{i} \mu_{2} \cdot x}$ blows up in the $e_{2}$ (or $-e_{2}$) direction as $|x|$ increases.
\end{rem}
Though it is not straightforward to obtain increasing stability via the linearized DtN map $\Lambda'_{c}$ for an arbitrary choice of the nonlinearity index $m$, as shown in Remark \ref{rem_uniqueComSol}, we can still stably reconstruct the Fourier coefficients of the unknown potential function $c(x)$ within an interval for any fixed wavenumber $k > 0$. This observation allows us to design a reconstruction algorithm if the linearized DtN maps $\Lambda'_{c}$ at multiple wavenumbers are provided. We will discuss this in \Sref{se4.2}.
On the other hand, if one chooses a specific power-type nonlinearity term, for instance a quadratic one with $m = 2$, we could regain the increasing stability by calibrating the identity \eref{eq_calderonILX} carefully. This result will be given in the coming \Sref{se3}.
\section{Linearized inverse Schr\"{o}dinger potential problem with a quadratic nonlinearity term}\label{se3}
To obtain a stability estimate of the linearized inverse Schr\"{o}dinger potential problem with a power type nonlinearity term by the linearized DtN map $\Lambda'_{c}$, the construction of the complex exponential solutions is essential, and the standard approach in \Sref{se2} fails in view of the discussion in Remark \ref{rem_uniqueComSol}. To successfully prove the stability estimate, we may have to treat the nonlinearity term separately, and the linearized inverse Schr\"{o}dinger potential problem with a quadratic nonlinearity term ($m = 2$) will be extensively investigated in the current section. The situation of a general nonlinearity index $m > 2$ is left as future work and the result will be reported elsewhere.
To proceed further, we are inspired by the idea of small boundary data discussed above and consider three solutions of \eref{eq_HelmholtzEqmain}, denoted by $u$, $v$ and $w$, with appropriate Dirichlet boundary conditions. By assuming that the potential function $c(x)$ is small or the squared wavenumber $k^{2}$ is sufficiently large, and recalling the asymptotic expansion of these solutions as in \eref{eq_asymILX}, we have
\begin{equation*}
\eqalign{
u &= u_{0} + u_{1} + \ldots, \\
v &= v_{0} + v_{1} + \ldots, \\
w &= w_{0} + w_{1} + \ldots,
}
\end{equation*}
where the remaining ``$\ldots$'' are the higher order terms of these solutions. In fact, we can obtain that
\begin{equation}\label{eq_quadratic_uv}
\left\{~
\eqalign{
\Delta u_{0} + k^{2} u_{0} = 0, \quad
\Delta u_{1} + k^{2} u_{1} = c(x) u_{0}^{2} &\quad \textrm{in\ } \Omega, \\
\Delta v_{0} + k^{2} v_{0} = 0, ~\quad
\Delta v_{1} + k^{2} v_{1} = c(x) v_{0}^{2} &\quad \textrm{in\ } \Omega,
}
\right.
\end{equation}
for $u$, $v$ and for $w$,
\begin{equation}\label{eq_quadratic_w}
\Delta w_{0} + k^{2} w_{0} = 0, \quad
\Delta w_{1} + k^{2} w_{1} = c(x) w_{0}^{2} \quad \textrm{in\ } \Omega.
\end{equation}
The linearized DtN map $\Lambda'_{c}$ can be defined accordingly for these three solutions as in \eref{eq_LinearDtN}.
Denoting the Dirichlet boundary conditions of $u_{0}$, $v_{0}$ by $u_{0}|_{\partial\Omega}$ and $v_{0}|_{\partial\Omega}$, we define the boundary condition of $w_{0}$ by
\begin{equation*}
w_{0}|_{\partial\Omega} := u_{0}|_{\partial\Omega} + v_{0}|_{\partial\Omega}.
\end{equation*}
By the linearity of the Helmholtz equation, we know
\begin{equation*}
w_{0} = u_{0} + v_{0} \quad \textrm{in\ } \Omega.
\end{equation*}
We now take a close look at the asymptotic expansion of $w = w_{0} + w_{1} + \ldots$ in \eref{eq_quadratic_w} and choose $\varphi$ to be another solution of the Helmholtz equation $\Delta \varphi + k^{2} \varphi = 0$ in $\Omega$. Then, since $w_{1}|_{\partial\Omega} = 0$, we have
\begin{equation*}
\int_{\Omega} c(x) w_{0}^{2} \varphi ~\mathrm{d} x
= \int_{\partial\Omega} \partial_{\nu} w_{1} \varphi ~\mathrm{d} S.
\end{equation*}
Noticing that $w_{0}^{2} = u_{0}^{2} + v_{0}^{2} + 2 u_{0} v_{0}$ in $\Omega$, we thus obtain
\begin{equation*}
2 \int_{\Omega} c(x) u_{0} v_{0} \varphi ~\mathrm{d} x
= \int_{\Omega} c(x) w_{0}^{2} \varphi ~\mathrm{d} x
- \left( \int_{\Omega} c(x) u_{0}^{2} \varphi ~\mathrm{d} x
+ \int_{\Omega} c(x) v_{0}^{2} \varphi ~\mathrm{d} x \right).
\end{equation*}
Recalling the asymptotic expansions of $u$ and $v$, since $u_{1}|_{\partial\Omega} = 0$ and $v_{1}|_{\partial\Omega} = 0$, we derive
\begin{equation}\label{eq_Calderonformula}
2 \int_{\Omega} c(x) u_{0} v_{0} \varphi ~\mathrm{d} x
= \int_{\partial\Omega} \partial_{\nu} w_{1} \varphi ~\mathrm{d} S
- \left( \int_{\partial\Omega} \partial_{\nu} u_{1} \varphi ~\mathrm{d} S
+ \int_{\partial\Omega} \partial_{\nu} v_{1} \varphi ~\mathrm{d} S \right).
\end{equation}
The identity \eref{eq_Calderonformula} then allows us to carry out the stability estimate and reconstruction algorithm of the linearized inverse Schr\"{o}dinger potential problem with a quadratic nonlinearity term.
Similar to Theorem \ref{thm_holder}, we again assume $0 \in \Omega$, $D = 2 \sup_{x\in\Omega} |x|$, and denote by the same variable $\epsilon := \sup_{\|\tilde{g}_{0}\|_{C^{2}(\partial\Omega)} = 1} \|\Lambda'_{c} \tilde{g}_{0}\|_{L^{\frac{3}{2}}(\partial\Omega)}$ the operator norm of $\Lambda'_{c}$ defined in \eref{eq_LinearDtN}. Then, for any $g_{0} \in C^{2}(\partial\Omega)$, we set $G := \|g_{0}\|_{C^{2}(\partial\Omega)}$ and define $\tilde{u}_{0}$ to be the solution of
\begin{equation*}
\left\{~
\eqalign{
\Delta \tilde{u}_{0} + k^{2} \tilde{u}_{0} = 0 &\quad \textrm{in\ } \Omega, \\
\tilde{u}_{0} = g_{0}/G &\quad \textrm{on\ } \partial\Omega,
}
\right.
\end{equation*}
so that the solution $u_{1}$ of \eref{eq_quadratic_u1} satisfies
\begin{equation*}
\Delta u_{1} + k^{2} u_{1} = c(x) G^2 \tilde{u}_{0}^{2} \quad \textrm{in\ } \Omega,
\end{equation*}
and consequently $\|\partial_{\nu} u_{1}\|_{L^{\frac{3}{2}}(\partial\Omega)} = \|\Lambda'_{c} g_{0}\|_{L^{\frac{3}{2}}(\partial\Omega)} \leq \epsilon \|g_{0}\|_{C^{2}(\partial\Omega)}^{2}$. The main stability estimate is presented below.
\begin{thm}\label{thm_quadratic}
Let $D \leq 1$, $\|c\|_{H^{1}(\Omega)} \leq M_{1}$, $k > 1$ and $\epsilon < 1$. Then the following estimate holds:
\begin{equation*}
\|c\|_{L^{2}(\Omega)}^{2}
\leq C \left( k^{n+8} + E^{n+8} \right) \epsilon^{2}
+ C E^{n+8} \epsilon
+ \frac{M_{1}^{2}}{1 + 4 k^{2} + E^{2}}
\end{equation*}
for the linearized system \eref{eq_quadratic_uv}, \eref{eq_quadratic_w}, with $E = -\ln\epsilon$ and the constant $C$ depending on the domain $\Omega$ and the dimension $n$.
\end{thm}
\begin{proof}
Let $\xi \in \mathbb{R}^{n}$ with $\xi \neq 0$ and choose an orthonormal basis $\left\{ e_{1} := \frac{\xi}{|\xi|}, e_{2}, \ldots, e_{n} \right\}$ of $\mathbb{R}^{n}$, $n \geq 2$. Then we can choose the following $\zeta_{\ell} \in \mathbb{C}^{n}$, $\ell = 1,2,3$, such that
\begin{equation*}
\left\{~
\eqalign{
\zeta_{1} & = \frac{1}{2} (-k+|\xi|) e_{1} - \frac{1}{2} \sqrt{3k^{2}+2k|\xi|-|\xi|^{2}} e_{2}, \\
\zeta_{2} & = \frac{1}{2} (-k+|\xi|) e_{1} + \frac{1}{2} \sqrt{3k^{2}+2k|\xi|-|\xi|^{2}} e_{2}, \\
\zeta_{3} & = k e_{1}.
}
\right.
\end{equation*}
We assign
\begin{equation}\label{eq_staproof_ComSol}
u_{0}(x) = \mathrm{e}^{\mathbf{i} \zeta_{1} \cdot x}, \qquad
v_{0}(x) = \mathrm{e}^{\mathbf{i} \zeta_{2} \cdot x}, \qquad
\varphi(x) = \mathrm{e}^{\mathbf{i} \zeta_{3} \cdot x}.
\end{equation}
Then
\begin{equation*}
u_{0} v_{0} \varphi = \mathrm{e}^{\mathbf{i} \xi \cdot x}
\end{equation*}
and the identity \eref{eq_Calderonformula} yields
\begin{equation*}
2 \mathcal{F}[c](\xi) = 2 \int_{\Omega} c(x) \mathrm{e}^{\mathbf{i} \xi \cdot x} ~\mathrm{d} x
= \int_{\partial\Omega} \partial_{\nu} w_{1} \varphi ~\mathrm{d} S
- \left( \int_{\partial\Omega} \partial_{\nu} u_{1} \varphi ~\mathrm{d} S
+ \int_{\partial\Omega} \partial_{\nu} v_{1} \varphi ~\mathrm{d} S \right).
\end{equation*}
Hence we obtain, since $w_{0} = u_{0} + v_{0}$ in $\Omega$,
\begin{equation*}
\eqalign{
|\mathcal{F}[c](\xi)|^{2}
& \leq \frac{1}{4} \left(
\|\partial_{\nu} w_{1}\|_{L^{\frac{3}{2}}(\partial\Omega)}^{2}
+ \|\partial_{\nu} u_{1}\|_{L^{\frac{3}{2}}(\partial\Omega)}^{2}
+ \|\partial_{\nu} v_{1}\|_{L^{\frac{3}{2}}(\partial\Omega)}^{2}
\right) \|\varphi\|_{L^{3}(\partial\Omega)}^{2} \\
& \leq \frac{1}{4} \epsilon^{2} \left(
\|w_{0}|_{\partial\Omega}\|_{C^{2}(\partial\Omega)}^{4}
+ \|u_{0}|_{\partial\Omega}\|_{C^{2}(\partial\Omega)}^{4}
+ \|v_{0}|_{\partial\Omega}\|_{C^{2}(\partial\Omega)}^{4}
\right) \|\varphi\|_{L^{3}(\partial\Omega)}^{2} \\
& \leq C \epsilon^{2} \left(
\|w_{0}\|_{C^{2}(\overline{\Omega})}^{4}
+ \|u_{0}\|_{C^{2}(\overline{\Omega})}^{4}
+ \|v_{0}\|_{C^{2}(\overline{\Omega})}^{4}
\right) \|\varphi\|_{L^{\infty}(\Omega)}^{2} \\
& \leq C \epsilon^{2} \left(
\|u_{0}\|_{C^{2}(\overline{\Omega})}^{4}
+ \|v_{0}\|_{C^{2}(\overline{\Omega})}^{4}
\right)
}
\end{equation*}
with a generic constant $C$ depending on the domain $\Omega$, and $\|\varphi\|_{L^{\infty}(\Omega)}^{2} = \|\mathrm{e}^{\mathbf{i} k e_{1} \cdot x}\|_{L^{\infty}(\Omega)}^{2} \leq 1$.
Noting that $|\zeta_{\ell}|^{2} = k^{2}$, $\ell = 1,2,3$, we thus obtain, if $k \geq \frac{|\xi|}{3}$,
\begin{equation*}
\|u_{0}\|_{C^{2}(\overline{\Omega})}^{4}
= \|v_{0}\|_{C^{2}(\overline{\Omega})}^{4}
\leq C \left(1+k^{8}\right).
\end{equation*}
Then, there holds
\begin{equation*}
|\mathcal{F}[c](\xi)|^{2}
\leq C \epsilon^{2} \left(1+k^{8}\right), \qquad \textrm{for\ } k \geq \frac{|\xi|}{3}.
\end{equation*}
If $k < \frac{|\xi|}{3}$, by denoting $\Xi := \sqrt{|\xi|^{2}-2k|\xi|-3k^{2}}$ we then derive the following bounds
\begin{equation*}
\|u_{0}\|_{C^{2}(\overline{\Omega})}^{4}
= \|v_{0}\|_{C^{2}(\overline{\Omega})}^{4}
\leq C \left(1+k^{8}\right) \|\mathrm{e}^{\mathbf{i} \zeta_{1} \cdot x}\|_{L^{\infty}(\Omega)}^{4}
\leq C \left(1+k^{8}\right) \mathrm{e}^{D\Xi}.
\end{equation*}
Consequently, we derive
\begin{equation*}
|\mathcal{F}[c](\xi)|^{2}
\leq C \epsilon^{2} \left(1+k^{8}\right) \mathrm{e}^{D\Xi}, \qquad \textrm{for\ } k < \frac{|\xi|}{3}.
\end{equation*}
Letting $E := -\ln\epsilon > 0$ with $k > 1$ and $\epsilon < 1$, we again consider two cases:
\begin{itemize}
\item[a)] $k > E$ (i.e.\ $\epsilon = \mathrm{e}^{-E} > \mathrm{e}^{-k}$), and
\item[b)] $k \leq E$ (i.e.\ $\epsilon = \mathrm{e}^{-E} \leq \mathrm{e}^{-k}$).
\end{itemize}
In the case a), we have
\begin{equation*}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
& = \int |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
= \int_{k \geq \frac{|\xi|}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
+ \int_{k < \frac{|\xi|}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
& \leq C \epsilon^{2} \left(1+k^{8}\right) \sigma_{n} (3k)^{n} + \frac{M_{1}^{2}}{1+(3k)^{2}} \\
& \leq C k^{n+8} \epsilon^{2} + \frac{M_{1}^{2}}{1+8k^{2}+E^{2}}
}
\end{equation*}
where $\sigma_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$, and the constant $C$ depends on the domain $\Omega$ and the dimension $n$.
In the case b), we let $\rho := k + \sqrt{4k^{2}+\left(\frac{E}{D}\right)^{2}}$ such that $\sqrt{\rho^{2} - 2k \rho - 3k^{2}} = \frac{E}{D}$ and split
\begin{equation}\label{eq_proofthm31_c}
\eqalign{
\|c\|_{L^{2}(\Omega)}^{2}
& = \int_{k \geq \frac{|\xi|}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
+ \int_{k < \frac{|\xi|}{3} < \frac{\rho}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi \\
&\quad + \int_{\rho \leq |\xi|} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi.
}
\end{equation}
The first term in the right-hand side of \eref{eq_proofthm31_c} can be bounded by
\begin{equation*}
\int_{k \geq \frac{|\xi|}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
\leq C k^{n+8} \epsilon^{2}
\leq C E^{n+8} \epsilon^{2},
\end{equation*}
noticing $k \leq E$.
We focus on the second term in the right-hand side of \eref{eq_proofthm31_c} and estimate
\begin{equation*}
\int_{k < \frac{|\xi|}{3} < \frac{\rho}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
\leq C \epsilon^{2} k^{8} \left( \int_{k < \frac{|\xi|}{3} < \frac{\rho}{3}} \mathrm{e}^{D\Xi} ~\mathrm{d} \xi \right)
\leq C \epsilon k^{8} \left( \int_{k < \frac{|\xi|}{3} < \frac{\rho}{3}} ~\mathrm{d} \xi \right)
\end{equation*}
since $\mathrm{e}^{D\Xi} \leq \mathrm{e}^{E} = \epsilon^{-1}$ when $k < \frac{|\xi|}{3} < \frac{\rho}{3}$.
Meanwhile, noticing $\rho \leq 3k + \frac{E}{D}$ and $k \leq E$, we bound
\begin{equation*}
\eqalign{
\int_{k < \frac{|\xi|}{3} < \frac{\rho}{3}} ~\mathrm{d} \xi
& = \sigma_{n} \left( \rho^{n} - (3k)^{n} \right) \\
& \leq \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left(1+3k\frac{D}{E}\right)^{n} - \left(3k\frac{D}{E}\right)^{n} \right] \\
& \leq \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left(1+3D\right)^{n} - \left(3D\right)^{n} \right]
}
\end{equation*}
where $\sigma_{n}$ is the volume of the unit ball in $\mathbb{R}^{n}$. The above inequalities yield
\begin{equation*}
\int_{k < \frac{|\xi|}{3} < \frac{\rho}{3}} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
\leq C \epsilon k^{8} \left( \sigma_{n} \frac{E^{n}}{D^{n}} \left[ \left(1+3D\right)^{n} - \left(3D\right)^{n} \right] \right)
\leq C E^{n+8} \epsilon.
\end{equation*}
Furthermore, since $\rho > \sqrt{4k^{2}+\left(\frac{E}{D}\right)^{2}}$, we bound the third term in the right-hand side of \eref{eq_proofthm31_c} by
\begin{equation*}
\int_{\rho \leq |\xi|} |\mathcal{F}[c](\xi)|^{2} ~\mathrm{d} \xi
\leq \frac{M_{1}^{2}}{1+\rho^{2}}
\leq \frac{M_{1}^{2}}{1+4k^{2}+\frac{E^{2}}{D^{2}}}.
\end{equation*}
We have thus proved the proposed bound in both cases.
\end{proof}
\begin{rem}\label{rem_se3}
The stability estimate in Theorem \ref{thm_quadratic} above, when $k$ is sufficiently large, is similar to that in \cite[Theorem 2.1]{ILX2020}, where the following linear elliptic equation is investigated, i.e.\
\begin{equation*}
\left\{~
\eqalign{
\Delta u + k^{2} u - c(x) u = 0 &\quad \textrm{in\ } \Omega, \\
u = g_{0} &\quad \textrm{on\ } \partial\Omega.
}
\right.
\end{equation*}
Clear numerical evidence will be provided in \Sref{se4}: one can stably recover the Fourier coefficients with $|\xi| \leq 3k$, whereas in \cite{ILX2020} one can only recover those with $|\xi| \leq 2k$. Such a gain depends strongly on the carefully selected complex exponential solutions and the modified identity \eref{eq_Calderonformula} considered above. It can be viewed as the advantage of the quadratic nonlinearity term when we solve the linearized inverse problem \eref{eq_HelmholtzEqmain} with $m = 2$.
\end{rem}
\section{Reconstruction algorithm and numerical examples}\label{se4}
In this section, we provide two reconstruction algorithms stably recovering the unknown potential function by the linearized DtN map $\Lambda'_{c}$, and a vanilla reconstruction algorithm by the linearized DtN map $D^{m}_{0} \Lambda_{c}$. In view of the quadratic nonlinearity term, we rely on the theoretical discussion in \Sref{se3} and deliver the first algorithm, where boundary measurements at a single (large) wavenumber can offer a high resolution. Meanwhile, the second algorithm focuses on the high-order nonlinearity term discussed in \Sref{se2}, and the linearized DtN maps $\Lambda'_{c}$ at multiple wavenumbers are included to recover sufficiently many Fourier coefficients of the unknown potential function. Finally, a vanilla reconstruction algorithm by the (approximated) linearized DtN map $D^{m}_{0} \Lambda_{c}$ is presented to verify the feasibility of the proposed linearization, which, to the best of our knowledge, is the first attempt to realize the linearized DtN map $D^{m}_{0} \Lambda_{c}$ numerically.
We shall emphasize that by implementing the linearized DtN maps $\Lambda'_{c}$ and $D^{m}_{0} \Lambda_{c}$, the range of stably reconstructed Fourier modes $\mathcal{F}[c](\xi)$ is expanded to $|\xi| \leq (m+1)k$, as shown in Theorem \ref{thm_holder} for an arbitrary finite integer $m$ and Theorem \ref{thm_quadratic} for $m = 2$. This truncation value $(m+1)k$ can be viewed as a regularization for stably recovering the unknown potential function $c(x)$.
\subsection{Reconstruction algorithm by $\Lambda'_{c}$ for a quadratic nonlinearity term}\label{se4.1}
Noticing that the linearized DtN map $\Lambda'_{c}$ can be numerically approximated, see \cite[Eq.(4.3)]{ILX2020}, we present the first reconstruction algorithm based on the identity \eref{eq_Calderonformula}. As an illustration, we focus on the two-dimensional case $n = 2$.
By selecting the complex exponential solutions \eref{eq_staproof_ComSol} in the proof of Theorem \ref{thm_quadratic}, we know that the left-hand side of \eref{eq_Calderonformula} reflects a Fourier coefficient of the potential function $c(x)$. Then, by choosing $\xi \in \mathbb{R}^{n}$ and recalling Remark \ref{rem_se3}, we aim to recover all the Fourier coefficients $\mathcal{F}[c](\xi)$ of the potential function $c(x)$ satisfying $|\xi| \leq 3k$. The larger the wavenumber $k$, the more Fourier coefficients can be recovered.
To further address the reconstruction algorithm, we need the following discrete sets of lengths and angles of the vectors in the phase space. The discrete and finite length set is defined by
\begin{equation*}
\{\kappa_{i}\}_{i=1}^{I} \subset (0, L k \,]
\quad \textrm{for any fixed } k.
\end{equation*}
Here we choose $L \geq 3$ and $L k$ is the maximum length of the vector $\xi$. Two angle sets are defined by
\begin{equation*}
\{\hat{y}_{s}\}_{s=1}^{S} \subset \mathbb{S}^{n-1}
\quad \textrm{and} \quad \{\hat{z}_{s}\}_{s=1}^{S} \subset \mathbb{S}^{n-1},
\end{equation*}
which satisfy $\hat{y}_{s} \cdot \hat{z}_{s} = 0$.
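As an illustration (the parameter values below are ours, chosen only for concreteness), these discrete sets can be generated in two dimensions as follows:
\begin{verbatim}
import numpy as np

k, L, I, S = 10.0, 3, 30, 64                # L >= 3; L*k is the max |xi|
kappa = np.linspace(L * k / I, L * k, I)    # lengths kappa_i in (0, L*k]
theta = np.linspace(0.0, 2 * np.pi, S, endpoint=False)
y_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # directions y_s
z_hat = np.stack([-np.sin(theta), np.cos(theta)], axis=1)  # y_s . z_s = 0
\end{verbatim}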
The vector $\xi^{\langle i;s \rangle} := \kappa_{i} \hat{y}_{s}$ and the following vectors $\zeta_{\ell}^{\langle i;s \rangle} \in \mathbb{C}^{n}$, $\ell = 1,2,3$, are chosen as
\begin{equation*}
\left\{~
\eqalign{
\zeta_{1}^{\langle i;s \rangle} &:= \frac{1}{2} (-k+\kappa_{i}) \hat{y}_{s} - \frac{1}{2} \sqrt{3k^{2}+2k\kappa_{i}-\kappa_{i}^{2}} \hat{z}_{s}, \\
\zeta_{2}^{\langle i;s \rangle} &:= \frac{1}{2} (-k+\kappa_{i}) \hat{y}_{s} + \frac{1}{2} \sqrt{3k^{2}+2k\kappa_{i}-\kappa_{i}^{2}} \hat{z}_{s}, \\
\zeta_{3}^{\langle i;s \rangle} &:= k \hat{y}_{s},
}
\right.
\end{equation*}
which are then assigned to the complex exponential solutions as in \eref{eq_staproof_ComSol} below:
\begin{equation*}
u_{0}(x) = \mathrm{e}^{\mathbf{i} \zeta_{1}^{\langle i;s \rangle} \cdot x}, \qquad
v_{0}(x) = \mathrm{e}^{\mathbf{i} \zeta_{2}^{\langle i;s \rangle} \cdot x}, \qquad
\varphi(x) = \mathrm{e}^{\mathbf{i} \zeta_{3}^{\langle i;s \rangle} \cdot x}
\end{equation*}
for $i = 1,2,\cdots,I$ and $s = 1,2,\cdots,S$. Here, the superscript notation $\cdot^{\langle i;s \rangle}$ refers to the vector $\xi^{\langle i;s \rangle}$ with the $i$th length $\kappa_{i}$ and the $s$th angle $\hat{y}_{s}$. Finally, for the inverse Fourier transform, a numerical quadrature rule can be constructed by a suitable choice of the weights $\sigma^{\langle i;s \rangle}$ associated with the points $\xi^{\langle i;s \rangle}$.
We summarize our reconstruction algorithm below, which is similar to that in \cite{ILX2020}, except that one has to solve the nonlinear Schr\"{o}dinger potential problem three times at each iteration because of the quadratic nonlinearity term.
\vspace{10pt}
\hrule\hrule
\vspace{8pt}
{\parindent 0pt \bf Algorithm 1: Reconstruction Algorithm for the Linearized Schr\"{o}dinger Potential Problem, the quadratic nonlinearity term} %
\vspace{5pt}
\hrule
\vspace{8pt}
{\parindent 0pt \bf Input:} %
$k$, %
$\{\kappa_{i}\}_{i=1}^{I}$, %
$\{\hat{y}_{s}\}_{s=1}^{S}$, $\{\hat{z}_{s}\}_{s=1}^{S}$ and %
$\{\sigma^{\langle i;s \rangle}\}$; \\[5pt]%
{\parindent 0pt \bf Output:} %
Approximated Potential $c^{\langle I+1;1 \rangle}$. \\[-15pt]%
\begin{enumerate}
\item[1:] $\,$ Set $c^{\langle 1;1 \rangle} := 0$; %
\item[2:] $\,$ {\bf For} $i = 1,2,\dots,I$ (length~updating) %
\item[3:] $\,$ \quad {\bf For} $s = 1,2,\dots,S$ (angle~updating) %
\item[4:] $\,$ \quad \quad Choose $u_{0} := \exp \{ \mathbf{i} \zeta_{1}^{\langle i;s \rangle} \cdot x \}$, $v_{0} := \exp \{ \mathbf{i} \zeta_{2}^{\langle i;s \rangle} \cdot x \}$ and $w_{0} := u_{0} + v_{0}$; %
\item[5:] $\,$ \quad \quad Measure the Neumann boundary data $\partial_{\nu} u$, $\partial_{\nu} v$, $\partial_{\nu} w$ of the forward problem \eref{eq_HelmholtzEqmain} %
\item[] $\,$ \quad \qquad while the Dirichlet boundary data $u_{0}|_{\partial\Omega}$, $v_{0}|_{\partial\Omega}$, $w_{0}|_{\partial\Omega}$ are given; %
\item[6:] $\,$ \quad \quad Calculate the approximated linearized Neumann boundary data %
\item[] $\,$ \quad \qquad $g_{u}^{\,\prime} := (\partial_{\nu} u - \partial_{\nu} u_{0})|_{\partial\Omega}$, \quad $g_{v}^{\,\prime} := (\partial_{\nu} v - \partial_{\nu} v_{0})|_{\partial\Omega}$, \quad $g_{w}^{\,\prime} := (\partial_{\nu} w - \partial_{\nu} w_{0})|_{\partial\Omega}$; %
\item[7:] $\,$ \quad \quad Choose $\varphi := \exp \{ \mathbf{i} \zeta_{3}^{\langle i;s \rangle} \cdot x \}$ and $\gamma := \big[ u_{0} v_{0} \varphi \big]^{-1} = \exp \{ -\mathbf{i} \xi^{\langle i;s \rangle} \cdot x \}$; %
\item[8:] $\,$ \quad \quad Compute $\mathcal{F}[c](\xi^{\langle i;s \rangle}) \approx \frac{1}{2} \int_{\partial\Omega} ( g_{w}^{\,\prime} - g_{u}^{\,\prime} - g_{v}^{\,\prime} ) \, \varphi \,\mathrm{d} S$; %
\item[9:] $\,$ \quad \quad Update $c^{\langle i;s+1 \rangle} := c^{\langle i;s \rangle} + \mathcal{F}[c](\xi^{\langle i;s \rangle}) \, \gamma \sigma^{\langle i;s \rangle}$, \quad if $\kappa_{i} \leq 3k$; %
\item[10:] $\,$ \quad {\bf End}; %
\item[11:] $\,$ \quad Set $c^{\langle i+1;1 \rangle} := c^{\langle i;S+1 \rangle}$; %
\item[12:] $\,$ {\bf End}. %
\end{enumerate}
\vspace{8pt}
\hrule\hrule
\vspace{10pt}
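To make the data flow of \textbf{Algorithm 1} explicit, we also include a minimal Python sketch of its loop structure. The callbacks \texttt{neumann} and \texttt{neumann0} are hypothetical placeholders for a nonlinear Helmholtz solver (returning the Neumann traces of the forward problem \eref{eq_HelmholtzEqmain} and of the free problem); the boundary nodes, quadrature weights and reconstruction grid are likewise assumed to be supplied by the surrounding code, and \texttt{zeta\_vectors} is the helper from the previous sketch.
\begin{verbatim}
import numpy as np

def algorithm1(k, kappas, angles, sigma,
               neumann, neumann0, bdry_x, bdry_w, xs):
    """Sketch of Algorithm 1 (quadratic case).  neumann(g0) and
    neumann0(g0) are solver placeholders returning Neumann traces at
    the boundary nodes bdry_x for Dirichlet data g0 sampled there;
    bdry_w are boundary quadrature weights, xs the inversion grid."""
    c = np.zeros(len(xs), dtype=complex)             # c^{<1;1>} := 0
    for i, kappa in enumerate(kappas):               # length updating
        for s, (y_hat, z_hat) in enumerate(angles):  # angle updating
            z1, z2, z3 = zeta_vectors(k, kappa, y_hat, z_hat)
            u0 = np.exp(1j * (bdry_x @ z1))          # Step 4
            v0 = np.exp(1j * (bdry_x @ z2))
            w0 = u0 + v0
            # Steps 5-6: three forward solves, linearized Neumann data
            gu, gv, gw = (neumann(g) - neumann0(g)
                          for g in (u0, v0, w0))
            # Steps 7-8: boundary integral gives a Fourier coefficient
            phi = np.exp(1j * (bdry_x @ z3))
            Fc = 0.5 * np.sum((gw - gu - gv) * phi * bdry_w)
            # Step 9: inverse-Fourier update, restricted to |xi| <= 3k
            if kappa <= 3 * k:
                c += Fc * np.exp(-1j * (xs @ (kappa * y_hat))) \
                     * sigma[i][s]
    return c
\end{verbatim}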
In fact, the linearized Neumann boundary data $\partial_{\nu} w_{1}$ depends on the unknown potential function $c(x)$, as seen from \eref{eq_quadratic_w}. As mentioned in \cite[Eq.(4.3)]{ILX2020}, we utilize
\begin{equation}\label{eq_se4linearizedNeumann}
g_{w}^{\,\prime} := (\partial_{\nu} w - \partial_{\nu} w_{0})|_{\partial\Omega}
\end{equation}
to approximate the non-measurable data $\partial_{\nu} w_{1}|_{\partial\Omega}$. Similar approximations $g_{u}^{\,\prime}$ and $g_{v}^{\,\prime}$ are employed for the linearized Neumann data $\partial_{\nu} u_{1}|_{\partial\Omega}$ and $\partial_{\nu} v_{1}|_{\partial\Omega}$, respectively.
As one can observe, the computational cost of \textbf{Algorithm 1} is quite high because of the nonlinearity term in the forward problem, see e.g. \cite{FT2005, WZ2018, XB2010, YL2017}. In particular, in Steps 5--6 of \textbf{Algorithm 1}, we must solve the nonlinear elliptic equation \eref{eq_HelmholtzEqmain} three times in order to obtain the Neumann traces that are needed to compute the Fourier coefficient in Step 8.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{./fig/Xi.png}\\
\caption{The sampling points $\xi = (\xi_{1},\xi_{2})$ in frequency domain.}
\label{fig:Xi}
\end{figure}
To numerically test \textbf{Algorithm 1}, we consider the domain $\Omega = B_{0.5}(0)$ in the square $[-0.5,0.5]^{2}$. To avoid the inverse crime, we use a fine grid ($200 \times 200$ equal-distance points) for the forward problem and a coarse grid ($90 \times 90$ equal-distance points) for the inversion. The sampling points $\xi = (\xi_{1},\xi_{2})$ in the frequency domain are shown in \Fref{fig:Xi}, marked by blue ``$\ast$''; the Fourier coefficients are recovered at these points. In \Fref{fig:Fc_inv}, the horizontal axis shows the length $|\xi|$ of all $\xi$, and the vertical axis shows the absolute value $|\mathcal{F}[c](\xi)|$ of the Fourier coefficients at the sampling points. By comparing the exact (top) and recovered (bottom) Fourier coefficients in each sub-figure of \Fref{fig:Fc_inv}: (a) $k = 5$ and (b) $k = 10$, we conclude that, as $k$ becomes larger, more Fourier modes can be recovered stably, i.e.\ $\mathcal{F}[c](\xi)$ with $|\xi| \leq 3k$.
\begin{figure}[htbp]
\centering
\,\hfill \textbf{Quadratic case}: \hfill\,\\
\,\hfill \textbf{(a)} $k = 5$ \hfill\,\hfill \textbf{(b)} $k = 10$ \hfill\,\\
\includegraphics[width=0.49\textwidth]{./fig/Fc_0050.png}
\includegraphics[width=0.49\textwidth]{./fig/Fc_0100.png}\\
\caption{The exact (Top) and recovered (Bottom) Fourier coefficients $\mathcal{F}[c](\xi)$ in each sub-figure: (a) $k=5$ and (b) $k=10$. Here, the horizontal axis shows the length $|\xi|$ of $\xi$; the vertical axis shows the absolute value $|\mathcal{F}[c](\xi)|$ of Fourier coefficients.}
\label{fig:Fc_inv}
\end{figure}
Then, by using all the recovered Fourier coefficients $\mathcal{F}[c](\xi)$ with $|\xi| \leq 3k$, we implement the inverse Fourier transform in Step 9 to reconstruct the potential function $c(x)$. In \Fref{fig:Ic_inv}, we present the exact and reconstructed potential functions $c(x)$ with different wavenumbers: (a) $k = 5$ and (b) $k = 10$, respectively. \reviewA{These results numerically verify the increasing stability in Theorem \ref{thm_quadratic} as $k$ becomes large. The point-wise absolute errors between the exact (left) and recovered (middle) potential functions are shown in \Fref{fig:Ic_inv}, and \textbf{Algorithm 1} reduces the maximum absolute error from $0.5$ to $0.08$ when $k$ increases from $5$ to $10$.}
\begin{figure}[htbp]
\centering
\,\hfill \textbf{Quadratic case}: \hfill\,\\
\,\hfill \textbf{(a)} $k = 5$ \hfill\,\\
\includegraphics[width=0.6\textwidth]{./fig/Ic_0050.png}
\includegraphics[clip,trim={0.3in 0 3.6in 0},width=0.3\textwidth]{./fig/Ic_0050_err.png}\\
\,\hfill \textbf{(b)} $k = 10$ \hfill\,\\
\includegraphics[width=0.6\textwidth]{./fig/Ic_0100.png}
\includegraphics[clip,trim={0.3in 0 3.6in 0},width=0.3\textwidth]{./fig/Ic_0100_err.png}\\
\caption{\reviewA{The exact (Left) and recovered (Middle) potential functions $c(x)$ together with the point-wise absolute error (Right) when (a) $k=5$ and (b) $k=10$. Here, we use the Fourier coefficients $\mathcal{F}[c](\xi)$ in the range $|\xi| \leq 3k$.}}
\label{fig:Ic_inv}
\end{figure}
\clearpage
\subsection{Reconstruction algorithm by $\Lambda'_{c}$ for high-order nonlinearity terms with multiple wavenumbers}\label{se4.2}
In this subsection, we show that the uniqueness result Theorem \ref{thm_unique_m} in \Sref{se2} indeed provides a stable reconstruction algorithm for the nonlinear inverse Schr\"{o}dinger potential problem whose nonlinearity index is an arbitrary finite integer $m \geq 2$, provided that the linearized DtN map $\Lambda'_{c}$ at multiple wavenumbers is available.
As highlighted in Remark \ref{rem_uniqueComSol}, the complex exponential solutions constructed in the proof of Theorem \ref{thm_unique_m} have a stable interval $\big[(m-1)k,(m+1)k\big]$ for any fixed $k$. Suppose that the same discrete (phase space) length and angle sets of the vectors in \Sref{se4.1} can be used, i.e.\ $\{\kappa_{i}\}_{i=1}^{I}$, $\{\hat{y}_{s}\}_{s=1}^{S}$, $\{\hat{z}_{s}\}_{s=1}^{S}$, and the following vectors $\mu_{\ell}^{\langle i;s \rangle} \in \mathbb{C}^{n}$, $\ell = 1,2$ are chosen:
\begin{equation*}
\left\{~
\eqalign{
\mu_{1}^{\langle i;s \rangle} &:= \frac{+(m^{2}-1)k^{2}+\kappa_{i}^{2}}{2m\kappa_{i}} \hat{y}_{s} - \frac{\sqrt{-(m^{2}-1)^{2}k^{4}+2(m^{2}+1)k^{2}\kappa_{i}^{2}-\kappa_{i}^{4}}}{2m\kappa_{i}} \hat{z}_{s}, \\
\mu_{2}^{\langle i;s \rangle} &:= \frac{-(m^{2}-1)k^{2}+\kappa_{i}^{2}}{2\kappa_{i}} \hat{y}_{s} + \frac{\sqrt{-(m^{2}-1)^{2}k^{4}+2(m^{2}+1)k^{2}\kappa_{i}^{2}-\kappa_{i}^{4}}}{2\kappa_{i}} \hat{z}_{s},
}
\right.
\end{equation*}
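As in the quadratic case, one can verify numerically that this construction behaves as intended. A short sketch (same Python conventions and hypothetical names as before) checks that $\mu_{\ell} \cdot \mu_{\ell} = k^{2}$ for $\ell = 1,2$ and that $m \mu_{1} + \mu_{2} = \kappa_{i} \hat{y}_{s} = \xi^{\langle i;s \rangle}$, so that $u_{0}^{m} \varphi$ oscillates at $\xi^{\langle i;s \rangle}$; the square root is real precisely when $\kappa_{i} \in \big[(m-1)k,(m+1)k\big]$.
\begin{verbatim}
import numpy as np

def mu_vectors(m, k, kappa, y_hat, z_hat):
    """mu_1, mu_2 for nonlinearity index m; the discriminant is
    nonnegative for (m-1)k <= kappa <= (m+1)k."""
    disc = (-(m**2 - 1)**2 * k**4
            + 2*(m**2 + 1) * k**2 * kappa**2 - kappa**4)
    root = np.sqrt(disc)
    mu1 = (((m**2 - 1)*k**2 + kappa**2)/(2*m*kappa)*y_hat
           - root/(2*m*kappa)*z_hat)
    mu2 = ((-(m**2 - 1)*k**2 + kappa**2)/(2*kappa)*y_hat
           + root/(2*kappa)*z_hat)
    return mu1, mu2

m, k, kappa = 3, 10.0, 25.0        # kappa in [(m-1)k, (m+1)k]
y_hat, z_hat = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mu1, mu2 = mu_vectors(m, k, kappa, y_hat, z_hat)
assert np.isclose(mu1 @ mu1, k**2) and np.isclose(mu2 @ mu2, k**2)
assert np.allclose(m*mu1 + mu2, kappa*y_hat)  # u0^m * phi ~ e^{i xi.x}
\end{verbatim}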
Similar to \textbf{Algorithm 1}, we summarize below a plain reconstruction algorithm for high-order nonlinearity terms with a fixed wavenumber $k$, according to the identity \eref{eq_calderonILX}.
\vspace{10pt}
\hrule\hrule
\vspace{8pt}
{\parindent 0pt \bf Algorithm 2: Reconstruction Algorithm for the Linearized Schr\"{o}dinger Potential Problem, the high-order nonlinearity term} %
\vspace{5pt}
\hrule
\vspace{8pt}
{\parindent 0pt \bf Input:} %
$m$, $k$, %
$\{\kappa_{i}\}_{i=1}^{I}$, %
$\{\hat{y}_{s}\}_{s=1}^{S}$, $\{\hat{z}_{s}\}_{s=1}^{S}$ and %
$\{\sigma^{\langle i;s \rangle}\}$; \\[5pt]%
{\parindent 0pt \bf Output:} %
Approximated Potential $c^{\langle I+1;1 \rangle}$. \\[-15pt]%
\begin{enumerate}
\item[1:] $\,$ Set $c^{\langle 1;1 \rangle} := 0$; %
\item[2:] $\,$ {\bf For} $i = 1,2,\dots,I$ (length~updating) %
\item[3:] $\,$ \quad {\bf For} $s = 1,2,\dots,S$ (angle~updating) %
\item[4:] $\,$ \quad \quad Choose $u_{0} := \exp \{ \mathbf{i} \mu_{1}^{\langle i;s \rangle} \cdot x \}$; %
\item[5:] $\,$ \quad \quad Measure the Neumann boundary data $\partial_{\nu} u$ of the forward problem \eref{eq_HelmholtzEqmain} %
\item[] $\,$ \quad \qquad while the Dirichlet boundary data $u_{0}|_{\partial\Omega}$ are given; %
\item[6:] $\,$ \quad \quad Calculate the approximated linearized Neumann boundary data %
\item[] $\,$ \quad \qquad $g_{u}^{\,\prime} := (\partial_{\nu} u - \partial_{\nu} u_{0})|_{\partial\Omega}$; %
\item[7:] $\,$ \quad \quad Choose $\varphi := \exp \{ \mathbf{i} \mu_{2}^{\langle i;s \rangle} \cdot x \}$ and $\gamma := \big[ u_{0}^{m} \varphi \big]^{-1} = \exp \{ -\mathbf{i} \xi^{\langle i;s \rangle} \cdot x \}$; %
\item[8:] $\,$ \quad \quad Compute $\mathcal{F}[c](\xi^{\langle i;s \rangle}) \approx \int_{\partial\Omega} g_{u}^{\,\prime} \, \varphi \,\mathrm{d} S$; %
\item[9:] $\,$ \quad \quad Update $c^{\langle i;s+1 \rangle} := c^{\langle i;s \rangle} + \mathcal{F}[c](\xi^{\langle i;s \rangle}) \, \gamma \sigma^{\langle i;s \rangle}$, \quad if $\kappa_{i} \in \big[(m-1)k,(m+1)k\big]$; %
\item[10:] $\,$ \quad {\bf End}; %
\item[11:] $\,$ \quad Set $c^{\langle i+1;1 \rangle} := c^{\langle i;S+1 \rangle}$; %
\item[12:] $\,$ {\bf End}. %
\end{enumerate}
\vspace{8pt}
\hrule\hrule
\vspace{10pt}
Furthermore, if we could measure the boundary data at appropriately chosen multiple wavenumbers, we could reconstruct sufficiently many Fourier coefficients of the unknown potential function. Starting with a small $k_{1}$ and fixing a threshold value $K$ as the maximum wavenumber, we choose a discrete set of multiple wavenumbers, namely
\begin{equation}\label{eq_numerMultiwavenumber}
\{ k_{j} \}_{j=1}^{J} \subset (0, K \,],
\end{equation}
which satisfies $k_{j+1} = \frac{m+1}{m-1} k_{j}$. Below we present an updated version of \textbf{Algorithm 2} for the linearized Schr\"{o}dinger potential problem with a high-order nonlinearity term, i.e.\ the nonlinearity index $m \geq 2$, when the linearized DtN map $\Lambda'_{c}$ at multiple wavenumbers can be obtained.
\vspace{10pt}
\hrule\hrule
\vspace{8pt}
{\parindent 0pt \bf Algorithm 2*: Reconstruction Algorithm for the Linearized Schr\"{o}dinger Potential Problem with a high-order nonlinearity term (Multiple wavenumbers)} %
\vspace{5pt}
\hrule
\vspace{8pt}
{\parindent 0pt \bf Input:} %
$m$, %
$\{ k_{j} \}_{j=1}^{J}$, %
$\{\kappa_{i}\}_{i=1}^{I}$, %
$\{\hat{y}_{s}\}_{s=1}^{S}$, $\{\hat{z}_{s}\}_{s=1}^{S}$ and %
$\{\sigma^{\langle i;s \rangle}\}$; \\[5pt]%
{\parindent 0pt \bf Output:} %
Approximated Potential $c_{\rm inv} := \sum\limits_{j=1}^{J} c^{\langle I+1;1 \rangle}_{j}$. \\[-15pt]%
\begin{enumerate}
\item[1:] $\,$ {\bf For} $j = 1,2,\dots,J$ (wavenumber~updating) %
\item[2:] $\,$ \quad Compute the approximated potential $c^{\langle I+1;1 \rangle}_{j}$ by using \textbf{Algorithm 2} and a fixed $k_{j}$; %
\item[3:] $\,$ {\bf End}. %
\end{enumerate}
\vspace{8pt}
\hrule\hrule
\vspace{10pt}
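For instance, a few lines of Python (a sketch; the function name is ours) generate the geometric wavenumber ladder and reproduce the set used in the cubic experiment below: for $m = 3$ the ratio is $(m+1)/(m-1) = 2$, so $k_{1} = 1.25$ and $K = 10$ yield $\{1.25, 2.5, 5, 10\}$.
\begin{verbatim}
def wavenumber_set(m, k1, K):
    """Wavenumber ladder k_{j+1} = (m+1)/(m-1) * k_j up to K."""
    ratio, ks = (m + 1) / (m - 1), [k1]
    while ks[-1] * ratio <= K:
        ks.append(ks[-1] * ratio)
    return ks

print(wavenumber_set(3, 1.25, 10.0))   # [1.25, 2.5, 5.0, 10.0]
\end{verbatim}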
As an illustration, we consider the linearized Schr\"{o}dinger potential problem with a cubic nonlinear term ($m = 3$). The wavenumber set in \eref{eq_numerMultiwavenumber} is generated with $k_{1} = 1.25$ and $K = 10$, so that we recover the Fourier coefficients $\mathcal{F}[c](\xi)$ with the $4$ wavenumbers $k \in \{ 1.25, 2.5, 5, 10 \}$. In \Fref{fig:3_Fc_inv_cmb}, the red region indicates the Fourier coefficients within $[2k_{1},4k_{1}) = [2.5,5)$, the green region those within $[2k_{2},4k_{2}) = [5,10)$, the blue region those within $[2k_{3},4k_{3}) = [10,20)$, and the cyan region those within $[2k_{4},4k_{4}) = [20,40)$.
\begin{figure}[htbp]
\centering
\,\hfill \textbf{Cubic case}: \hfill\,\\
\,\hfill \textbf{multiple wavenumbers} $k \in \{ 1.25, 2.5, 5, 10 \}$ \hfill\,\\
\includegraphics[width=0.6\textwidth]{./fig/3_Fc_0013n0025n0050n0100.png}\\
\caption{(Cubic case, $m = 3$) The exact (Top) and recovered (Bottom) Fourier coefficients $\mathcal{F}[c](\xi)$ with multiple wavenumbers $k \in \{ 1.25, 2.5, 5, 10 \}$. Here, the horizontal axis shows the length $|\xi|$ of $\xi$; the vertical axis shows the absolute value $|\mathcal{F}[c](\xi)|$ of Fourier coefficients.}
\label{fig:3_Fc_inv_cmb}
\end{figure}
By using Fourier coefficients $\mathcal{F}[c](\xi)$ within $|\xi| \in \bigcup_{j=1}^{J} \big[(m-1)k_{j},(m+1)k_{j}\big) = [(m-1)k_{1},(m+1)k_{J})$, we implement the inverse Fourier transform to reconstruct the potential function $c(x)$. In \Fref{fig:3_Ic_inv_cmb}, we present the exact (left) and reconstructed (right) potential functions $c(x)$ with $4$ wavenumbers $k \in \{ 1.25, 2.5, 5, 10 \}$. It can be seen that, by including the boundary measurements of four wavenumbers, we have obtained a good approximation of the unknown potential function in \eref{eq_HelmholtzEqmain} with a cubic nonlinear term $m = 3$.
\begin{figure}[htbp]
\centering
\,\hfill \textbf{Cubic case}: \hfill\,\\
\,\hfill \textbf{multiple wavenumbers} $k \in \{ 1.25, 2.5, 5, 10 \}$ \hfill\,\\
\includegraphics[width=0.6\textwidth]{./fig/3_Ic_0013n0025n0050n0100.png}
\includegraphics[clip,trim={0.3in 0 3.6in 0},width=0.3\textwidth]{./fig/3_Ic_0013n0025n0050n0100_err.png}\\
\caption{(Cubic case, $m = 3$) \reviewA{The exact (Left) and recovered (Middle) potential functions $c(x)$ together with the point-wise absolute error (Right) when multiple wavenumbers $k \in \{ 1.25, 2.5, 5, 10 \}$ are considered.}}
\label{fig:3_Ic_inv_cmb}
\end{figure}
\subsection{Vanilla reconstruction algorithm by $D^{m}_{0}\Lambda_{c}$}\label{se4.3}
Noticing that the linearized DtN map $\Lambda'_{c}$ can be approximated by ignoring the high-order terms, i.e.\ \eref{eq_se4linearizedNeumann}, we adopt the same idea to design a vanilla reconstruction algorithm for another linearized DtN map $D^{m}_{0}\Lambda_{c}$.
As shown in \Sref{se1.2}, the identity \eref{eq_nonlinearsmallboundary_equality} plays a key role in recovering the unknown potential function $c(x)$ with respect to the small boundary data $f_{\varepsilon}$. More precisely, it relies sensitively on the boundary data $\partial_{\nu} w |_{\partial\Omega} = D^{m}_{0} \Lambda_{c}(f_{1},\ldots,f_{m})$. Thus, in this subsection, we provide some numerical tests to study the consequences of using the linearized DtN map $D^{m}_{0} \Lambda_{c}$, and the following formulae are employed to approximate the $m$th Fr\'{e}chet derivative $D^{m}$ with $m = 2$ and $3$:
\begin{equation}\label{eq_se4appro}
\left\{~
\eqalign{
\partial_{\varepsilon_{1}} \partial_{\varepsilon_{2}} \Lambda_{c} (f_{\varepsilon}) &\approx \frac{1}{\varepsilon_{1}\varepsilon_{2}} \Big( \Lambda_{c} (\varepsilon_{1} f_{1}+\varepsilon_{2} f_{2}) - \Lambda_{c} (\varepsilon_{2} f_{2}) - \Lambda_{c} (\varepsilon_{1} f_{1}) + \Lambda_{c} (0) \Big), \\
\partial_{\varepsilon_{1}} \partial_{\varepsilon_{2}} \partial_{\varepsilon_{3}} \Lambda_{c} (f_{\varepsilon}) &\approx \frac{1}{\varepsilon_{1}\varepsilon_{2}\varepsilon_{3}} \Big(
\Lambda_{c} (\varepsilon_{1} f_{1}+\varepsilon_{2} f_{2}+\varepsilon_{3} f_{3}) \\
&\quad - \Lambda_{c} (\varepsilon_{1} f_{1}+\varepsilon_{2} f_{2})
- \Lambda_{c} (\varepsilon_{1} f_{1}+\varepsilon_{3} f_{3})
- \Lambda_{c} (\varepsilon_{2} f_{2}+\varepsilon_{3} f_{3}) \\
&\quad + \Lambda_{c} (\varepsilon_{3} f_{3})
+ \Lambda_{c} (\varepsilon_{2} f_{2})
+ \Lambda_{c} (\varepsilon_{1} f_{1})
- \Lambda_{c} (0) \Big),
}
\right.
\end{equation}
where each $\varepsilon_{j}$, $j=1,2,3$, is small enough and chosen appropriately. Here we mention that $\Lambda_{c} (0) = 0$.
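The difference quotients \eref{eq_se4appro} can be spelled out for a black-box map as in the following sketch (Python; \texttt{Lam} stands for an evaluation of $\Lambda_{c}$ and would wrap a nonlinear forward solver, which we do not specify). Since $\Lambda_{c}(0) = 0$, the corresponding terms drop out, and the toy checks at the end confirm that a pure quadratic or cubic scalar map returns $m!$ exactly.
\begin{verbatim}
import numpy as np

def D2_Lambda(Lam, f1, f2, e1, e2):
    """Mixed second difference approximating D^2_0 Lambda_c(f1,f2);
    the Lambda_c(0) term vanishes since Lambda_c(0) = 0."""
    return (Lam(e1*f1 + e2*f2) - Lam(e2*f2) - Lam(e1*f1)) / (e1*e2)

def D3_Lambda(Lam, f1, f2, f3, e1, e2, e3):
    """Mixed third difference approximating D^3_0 Lambda_c(f1,f2,f3)."""
    t = (Lam(e1*f1 + e2*f2 + e3*f3)
         - Lam(e1*f1 + e2*f2) - Lam(e1*f1 + e3*f3)
         - Lam(e2*f2 + e3*f3)
         + Lam(e1*f1) + Lam(e2*f2) + Lam(e3*f3))
    return t / (e1*e2*e3)

# toy checks on scalar maps: pure monomials are recovered exactly
assert np.isclose(D2_Lambda(lambda f: f**2, 1., 1., .1, .1), 2.0)
assert np.isclose(D3_Lambda(lambda f: f**3, 1., 1., 1., .1, .1, .1), 6.0)
\end{verbatim}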
We note that one can modify {\bf Algorithm 1} carefully to design a reconstruction algorithm for the linearized DtN map $D^{m}_{0}\Lambda_{c}$, provided that appropriate complex exponential solutions \eref{eq_thm1_CExsol1} or \eref{eq_thm1_CExsol2} in the proof of Theorem \ref{thm_holder} are chosen and the above derivative approximation schemes \eref{eq_se4appro} are implemented. To save space, we skip the pseudocode of the algorithm but present the reconstructed potential function $c(x)$ and its Fourier coefficients $\mathcal{F}[c](\xi)$ in \Fref{fig:MS_Ic_inv} for different nonlinearity indices $m = 2,3$. In both cases, we have chosen $\varepsilon_{j} = 0.1$, $j=1,2,3$, as an illustration. In principle, one can extend the derivative approximation formulae \eref{eq_se4appro} to the more general case $m > 3$ and tune the small parameters $\varepsilon_{j}$ carefully to obtain better resolution. But this is beyond the scope of the current work and will be considered in future work.
\begin{figure}[htbp]
\centering
\,\hfill \textbf{Linearized DtN map $D^{2}_{0} \Lambda_{c}$ (Quadratic case)}: \hfill\,\\
\includegraphics[width=0.45\textwidth]{./fig/MS_2_Fc_0100.png}
\includegraphics[width=0.45\textwidth]{./fig/MS_2_Ic_0100.png}\\
\,\hfill \textbf{Linearized DtN map $D^{3}_{0} \Lambda_{c}$ (Cubic case)}: \hfill\,\\
\includegraphics[width=0.45\textwidth]{./fig/MS_3_Fc_0100.png}
\includegraphics[width=0.45\textwidth]{./fig/MS_3_Ic_0100.png}\\
\caption{Left: The recovered Fourier coefficients $\mathcal{F}[c](\xi)$ with $k = 10$ by linearized DtN map $D^{m}_{0} \Lambda_{c}$. \reviewA{Middle: The recovered potential $c(x)$ with $k = 10$ and $|\xi| \leq (m+1)k$ by linearized DtN map $D^{m}_{0} \Lambda_{c}$. Right: the point-wise absolute error between the exact and recovered potential functions.} Here $m=2$ (Top) and $m=3$ (Bottom).}
\label{fig:MS_Ic_inv}
\end{figure}
\subsection{\reviewC{The noise propagation}}
\reviewC{Finally, we consider the noise propagation in both linearized Neumann boundary data $\partial_{\nu} w |_{\partial\Omega} = D^{m}_{0} \Lambda_{c}(f_{1},\ldots,f_{m})$ in \eref{eq_HighOrderLinearDtN} and $\partial_{\nu} u_{1}|_{\partial\Omega} = \Lambda'_{c} g_{0}$ in \eref{eq_LinearDtN}.
Assume that there exists a (relative) noise level $\delta$ such that the uniformly bounded noise between the exact and noisy linearized Neumann boundary data satisfies
\begin{equation*}
\frac{\left\| (\partial_{\nu} w)^{\delta} - \partial_{\nu} w \right\|_{L^{\infty}(\partial\Omega)}}{\left\| \partial_{\nu} w \right\|_{L^{\infty}(\partial\Omega)}} \leqslant \delta,
\qquad
\frac{\left\| (\partial_{\nu} u_{1})^{\delta} - \partial_{\nu} u_{1} \right\|_{L^{\infty}(\partial\Omega)}}{\left\| \partial_{\nu} u_{1} \right\|_{L^{\infty}(\partial\Omega)}} \leqslant \delta
\end{equation*}
where $(\partial_{\nu} w)^{\delta}$ and $(\partial_{\nu} u_{1})^{\delta}$ denote the noisy Neumann boundary data, respectively.
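For reproducibility, bounded noise consistent with this assumption can be generated as in the following sketch (uniform noise is one possible choice satisfying the stated $L^{\infty}$ bound; the specific distribution is an assumption of this illustration, not necessarily the one used in our experiments).
\begin{verbatim}
import numpy as np

def add_relative_noise(g, delta, rng=None):
    """Return noisy data g^delta with relative L^inf error <= delta."""
    rng = np.random.default_rng(0) if rng is None else rng
    bound = delta * np.max(np.abs(g))
    return g + bound * rng.uniform(-1.0, 1.0, size=g.shape)
\end{verbatim}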
We present the recovered Fourier coefficients (left) and potential function (middle) together with the point-wise absolute error (right) in each sub-figure of Figure \ref{fig:noisy_Ic_inv}, where $k = 10$ and $\delta = 0.1$. Though the recovered Fourier coefficients become rough when noise appears, the recovered potential retains good resolution under both linearized DtN maps $\Lambda'_{c}$ and $D^{m}_{0} \Lambda_{c}$.
More specifically, the results in \Fref{fig:noisy_Ic_inv}, recovered by the noisy measurements, can be compared with the corresponding noiseless results in \Fref{fig:Fc_inv}(b), \Fref{fig:Ic_inv}(b) for the quadratic case $m = 2$ by $\Lambda'_{c}$, and in \Fref{fig:MS_Ic_inv} (bottom) for the cubic case $m = 3$ by $D^{m}_{0} \Lambda_{c}$.
Indeed, it can be observed that, with the chosen truncation value $(m+1)k$, the reconstructed results are robust with respect to the noise propagation, owing to our choice of complex exponential solutions in both cases.}
\begin{figure}[htbp]
\centering
\,\hfill \textbf{Quadratic case with noise (linearized DtN map $\Lambda'_{c}$)}: \hfill\,\\
\includegraphics[width=0.45\textwidth]{./fig/noisy_Fc_0100.png}
\includegraphics[width=0.45\textwidth]{./fig/noisy_Ic_0100.png}\\
\,\hfill \textbf{Cubic case with noise (linearized DtN map $D^{m}_{0} \Lambda_{c}$)}: \hfill\,\\
\includegraphics[width=0.45\textwidth]{./fig/noisy_MS_3_Fc_0100.png}
\includegraphics[width=0.45\textwidth]{./fig/noisy_MS_3_Ic_0100.png}\\
\caption{\reviewC{Left: The recovered Fourier coefficients $\mathcal{F}[c](\xi)$ with $k = 10$ and $\delta = 0.1$. Middle: The recovered potential $c(x)$ with $k = 10$, $|\xi| \leq (m+1)k$ and $\delta = 0.1$. Right: The point-wise absolute error between the exact and recovered potential functions. Here, the quadratic case $m=2$ by linearized form $\Lambda'_{c}$ (Top) and the cubic case $m=3$ by linearized form $D^{m}_{0} \Lambda_{c}$ (Bottom).}}
\label{fig:noisy_Ic_inv}
\end{figure}
\clearpage
\section{Introduction}
\label{section:introduction}
In multiple-input
multiple-output (MIMO) communication systems, where low power and low cost are key requirements, it is desirable to reduce the ADC resolution in order to save power and chip area \cite{schreier}. In fact, in high speed systems the sampling/conversion power may reach values in the order of the processing power. Therefore, coarse analog-to-digital converters (ADCs) may be a cost-effective solution for such applications, especially when the array size becomes very large or when the sampling rate becomes very high (in the GHz range) \cite{wentzloff}. Naturally, this generates a need for developing new detection and estimation algorithms operating on quantized data.
\par An early work on the subject of estimating unknown parameters based on quantized observations can be found in \cite{curry}. In \cite{lok,ivrlac}, the authors studied channel estimation based on a single-bit quantizer (comparator). In this work, a more general setting for parameter estimation based on quantized observations is studied, which covers many processing tasks, e.g. channel estimation, synchronization, delay estimation, Direction Of Arrival (DOA) estimation, etc. An Expectation Maximization (EM) based algorithm is proposed to solve the Maximum a Posteriori Probability (MAP) estimation problem. Besides, the Cram\'er-Rao Bound (CRB) is derived to analyze the estimation performance and its behavior with respect to the signal-to-noise ratio (SNR). The presented results treat both cases: pilot-aided and non-pilot-aided estimation. We extensively deal with the extreme case of single-bit quantization (comparator), which simplifies the sampling hardware considerably. We also focus on MIMO channel estimation and delay estimation as application areas of the presented approach. Among others, the estimation of a 2$\times$2 channel using 1-bit ADCs is considered, which shows that reliable estimation may still be possible even when the quantization is very coarse. In order to ease the theoretical derivations, we restrict
ourselves to real-valued systems. However,
the results can be easily extended and applied to
complex-valued channels, as we will do in Section~\ref{section:GNSS}.
\par Our paper is organized as follows. Section \ref{section:scmodel} describes the general system model. In Section \ref{em_algo}, the EM algorithm operating on quantized data is derived and the estimation performance limit based on the Cram\'er-Rao Bound (CRB) is analyzed. In Section \ref{section:ExampleI}, we deal with the single-input single-output (SISO) channel estimation problem as a first application; we then generalize the analysis to the multiple-antenna (MIMO) case in Section \ref{section:MIMO}. Finally, we handle the problem of signal quantization in the context of Global Navigation Satellite Systems (GNSS) in Section \ref{section:GNSS}.
\par \textit{Notation:} Vectors and matrices are denoted by lower and
upper case italic bold letters. The operators $(\bullet)^\mathrm {T}$, $(\bullet)^\mathrm {H}$, $\textrm{tr}(\bullet)$,
$(\bullet)^*$, $\text{Re}(\bullet)$ and $\text{Im}(\bullet)$ stand for transpose, Hermitian transpose, trace of a matrix, complex conjugate, real and imaginary parts of a complex number, respectively. $\textbf{\rmfamily{I}}_{M}$ denotes the ($M \times M$) identity matrix. $\boldsymbol{x}_i$ is the $i$-th column of a matrix $\B{X}$ and $x_{i,j}$ denotes its ($i$th, $j$th) element.
The operator $\textrm{E}_{s|q}[\bullet]$ stands for expectation with respect to the random variable $s$ given $q$. The functions $p(s,q)$ and $p(s|q)$ symbolize the joint distribution and the conditional distribution of $s$ and $q$, respectively. Unless otherwise noted, all integrals are taken from $-\infty$ to $+\infty$. Finally, $\stackrel{\rm 1-bit}=$ symbolizes that the equality holds for the single bit case.
\section{System Model}
\label{section:scmodel}
As mentioned before, we start from a general signal model, described by:
\begin{equation}
\B{r}=Q(\B{y}), ~~{\rm with}
\end{equation}
\begin{equation}
\B{y}= \B{f}(\B{x},\B{\theta})+\B{\eta},
\end{equation}
where $\B{y}$ is the unquantized receive vector of dimension $N$, $\B{f}(\cdot,\cdot)$ is a general multidimensional system function of the unknown parameter vector $\B{\theta}$ to be estimated and of the known or unknown data vector $\B{x}$, while $\B{\eta}$ is i.i.d. Gaussian noise with variance $\sigma_\eta^2$ in each dimension. We assume that the noise variance $\sigma_\eta^2$ is known, although this part of the work can be easily extended to the case where $\sigma_\eta^2$ is part of $\B{\theta}$. The operator $Q(\cdot)$ represents the quantization process, where each component $y_i$ is mapped to a quantized value from a finite set of code words as follows
\begin{equation}
r_i=Q(y_i), ~~{\rm if}~~y_i\in [r_i^{\rm lo}, r_i^{\rm up}).
\end{equation}
Thereby, $r_i^{\rm lo}$ and $r_i^{\rm up}$ are the lower and upper limits associated with the quantized value $r_i$. Additionally, we denote the prior distribution of the parameter vector by $p_{\theta} (\B{\theta})$ when available. Similarly, the prior $p_x(\B{x})$ is also assumed known, and can for instance be obtained from the extrinsic information of the decoder output. The joint probability density function involving all random system variables consequently reads as
\begin{equation}
\begin{aligned}
p(\B{r},\B{y},\B{x},\B{\theta})&= \mathbb{I}_{D(\B{r})}(\B{y})\frac{1}{(2\pi)^{\frac{N}{2}} \sigma_\eta^{N}} {\rm e}^{-\frac{\left\| \B{y}-\B{f}(\B{\theta},\B{x})\right\|^2_2}{2\sigma_\eta^2}}p_x(\B{x})p_\theta(\B{\theta}),
\end{aligned}
\label{joint_pdf}
\end{equation}
where $\mathbb{I}$ denotes the indicator function taking one if
\begin{equation}
\begin{aligned}
\B{y} \in D(\boldsymbol{r})=\left\{\boldsymbol{y}\in \mathbb {R}^N |r_i^{\rm lo}\leq y_{i} \leq r_i^{\rm up}; \forall i\in \{1,\ldots, N\}\right\},
\end{aligned}
\end{equation}
and 0 otherwise. Note that this special factorization of the joint density function is crucial for solving and analyzing the estimation problem. A factor graph representation of the joint probability density is given in Fig.~\ref{channel_graph} to illustrate this property. Each random variable is represented by a circle, referred to as variable node, and each factor of the global function (\ref{joint_pdf}) corresponds to a square, called functional node or factor node.
\begin{figure}[h]
\centerline{
\psfrag{ber}[c][c]{\small{Uncoded BER}}
\psfrag{pn}[c][c]{$\!\!\!\!\!\!\!\!\!\!\!\!\!\mathcal N(0,\sigma_\eta^2)$}
\psfrag{px}[c][c]{$p(\B{x})$}
\psfrag{N()}[c][c]{}
\psfrag{th}[c][c]{$\B{\theta}$}
\psfrag{x}[c][c]{$\B{x}$}
\psfrag{y}[c][c]{$\B{y}$}
\psfrag{z}[c][c]{$\B{r}$}
\psfrag{n}[c][c]{$\B{\eta}$}
\psfrag{pth}[c][c]{$p_\theta(\B{\theta})$}
\psfrag{Q}[c][c]{$ \mathbb{I}_{D(\B{r})}(\B{y})$}
\psfrag{f(x,th)}[c][c]{$f(\B{x},\B{\theta})$}
\epsfig{file=./dependencies/channel_graph.eps, width =7.5cm}}
\caption{Factor graph representation.}
\label{channel_graph}
\end{figure}
\section{Construction of the Estimation Algorithm and Performance Bound}
\label{em_algo}
Given the quantized observation $\B{r}$, and the log-likelihood function
\begin{equation}
\begin{aligned}
\mathcal{L}(\B{\theta})=\ln \int \int p(\B{r},\B{y},\B{x},\B{\theta}) {\rm d}\B{x} {\rm d}\B{y}=\ln p(\B{r},\B{\theta}),
\end{aligned}
\end{equation}
our goal is to find the MAP estimate $\hat{\B{\theta}}$ given by
\begin{equation}
\begin{aligned}
\hat{\B{\theta}}=\argmax_{\B{\theta}} \mathcal{L}(\B{\theta}).
\end{aligned}
\label{MAP_MAX}
\end{equation}
Naturally, the MAP solution $\hat{\B{\theta}}$ has to satisfy
\begin{equation}
\begin{aligned}
\nabla_{\B{\theta}}\mathcal{L}(\B{\theta})=0.
\end{aligned}
\end{equation}
This condition can be written as:
\begin{align}
\nabla_{\B{\theta}}\mathcal{L}(\B{\theta})&= \frac{\nabla_{\B{\theta}} p(\B{r},\B{\theta}) }{ p(\B{r},\B{\theta})} \nonumber \\
&=\int \int \frac{\nabla_{\B{\theta}} p(\B{r},\B{y},\B{x},\B{\theta}) }{ p(\B{r},\B{\theta})} {\rm d}\B{x} {\rm d}\B{y} \nonumber \\
&=\int \int \frac{\nabla_{\B{\theta}} p(\B{r},\B{y},\B{x},\B{\theta}) }{ p(\B{r},\B{\theta})} \cdot \frac{p(\B{x},\B{y}|\B{r},\B{\theta})}{p(\B{x},\B{y}|\B{r},\B{\theta})} {\rm d}\B{x} {\rm d}\B{y} \nonumber \\
&=\int \int \frac{\nabla_{\B{\theta}} p(\B{r},\B{y},\B{x},\B{\theta}) }{ p(\B{r},\B{y},\B{x},\B{\theta})} \cdot p(\B{x},\B{y}|\B{r},\B{\theta}) {\rm d}\B{x} {\rm d}\B{y} \nonumber \\
&= {\rm E}_{\B{x},\B{y}|\B{r},\B{\theta}} \left[ \nabla_{\B{\theta}} \ln p(\B{r},\B{y},\B{x},\B{\theta}) \right] \stackrel{!}=0.
\label{KKT_cond}
\end{align}
There is also another way to write the optimality condition, by first integrating out the variable $\B{y}$ to obtain the conditional probability of the quantized received vector
\begin{equation}
\begin{aligned}
p(\B{r}|\B{x},\B{\theta})&= \int_{r_i^{\rm lo}}^{r_i^{\rm up}} \frac{1}{(2\pi)^{\frac{N}{2}} \sigma_\eta^{N}} {\rm e}^{-\frac{\left\| \B{y}-\B{f}(\B{\theta},\B{x})\right\|^2_2}{2\sigma_\eta^2}}{\rm d}\B{y} \\
&=\prod_i ( \Phi(\frac{r_i^{\rm up}-f_i(\B{x}, \hat{\B{\theta}})}{\sigma_\eta} )-\Phi(\frac{r_i^{\rm lo}-f_i(\B{x}, \hat{\B{\theta}})}{\sigma_\eta} )),
\end{aligned}
\end{equation}
where $\Phi(x)$ represents the cumulative Gaussian distribution reading as
\begin{equation}
\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^x\exp(-t^2/2) \rm{d}t.
\end{equation}
Therefore, we also obtain an alternative condition as
\begin{align}
\nabla_{\B{\theta}}\mathcal{L}(\B{\theta})
&=\int \frac{\nabla_{\B{\theta}} \int p(\B{r},\B{y},\B{x},\B{\theta}){\rm d}\B{y} }{ p(\B{r},\B{\theta})} {\rm d}\B{x} \nonumber \\
&= \int \frac{\nabla_{\B{\theta}} p(\B{r},\B{x},\B{\theta}) }{ p(\B{r},\B{\theta})} \cdot \frac{p(\B{x}|\B{r},\B{\theta})}{p(\B{x}|\B{r},\B{\theta})} {\rm d}\B{x} \nonumber \\
&=\int \frac{\nabla_{\B{\theta}} p(\B{r},\B{x},\B{\theta}) }{ p(\B{r},\B{x},\B{\theta})} \cdot p(\B{x}|\B{r},\B{\theta}) {\rm d}\B{x} \nonumber \\
&= {\rm E}_{\B{x}|\B{r},\B{\theta}} \left[ \nabla_{\B{\theta}} \ln p(\B{r},\B{x},\B{\theta}) \right] \stackrel{!}=0,
\label{KKT_cond2}
\end{align}
which can be explicitly written as,
\begin{equation}
\begin{aligned}
&\!\!\!\!\!\sum_i\! {\rm E}_{\B{x}|\B{r},\hat{\B{\theta}}}\!\!\!\left[\!\frac{({\rm e}^{-\frac{(r_i^{\rm up}-f_i(\B{x}, \hat{\B{\theta}}))^2}{2\sigma_\eta^2} }\!\!-\!{\rm e}^{-\frac{(r_i^{\rm lo}-f_i(\B{x}, \hat{\B{\theta}}))^2}{2\sigma_\eta^2} })\nabla_{\!\B{\theta}}\! f_i(\B{x}, \hat{\B{\theta}}) }{ \sqrt{2\pi} \sigma_\eta ( \Phi(\frac{r_i^{\rm up}-f_i(\B{x}, \hat{\B{\theta}})}{\sigma_\eta} )-\Phi(\frac{r_i^{\rm lo}-f_i(\B{x}, \hat{\B{\theta}})}{\sigma_\eta} ))}\!\right]\\
&\hspace{6.0cm}-\frac{\nabla_{\B{\theta}}p_\theta(\hat{\B{\theta}})}{p_\theta(\hat{\B{\theta}})}=0 \\
&\!\!\!\!\!\stackrel{\rm 1-bit}=-\sum_i\! {\rm E}_{\B{x}|\B{r},\hat{\B{\theta}}}\!\!\!\left[r_i \frac{{\rm e}^{-\frac{f_i(\B{x}, \hat{\B{\theta}})^2 }{2 \sigma_\eta^2}}\nabla_{\!\B{\theta}}\! f_i(\B{x}, \hat{\B{\theta}})}{\sqrt{2\pi}\sigma_\eta\Phi(\frac{r_if_i(\B{x}, \hat{\B{\theta}}) }{ \sigma_\eta})}\right]\!-\frac{\nabla_{\B{\theta}}p_\theta(\hat{\B{\theta}})}{p_\theta(\hat{\B{\theta}})}=0,
\end{aligned}
\label{KKT}
\end{equation}
where the last step holds for the single bit case, i.e. $r_i \in \{\pm 1\}$.
\subsection{EM-Based MAP Solution}
\label{subsec:EM}
In general, solving (\ref{KKT}) is intractable; thus we resort to the popular Expectation Maximization (EM) algorithm as an iterative procedure for solving the condition (\ref{KKT_cond}) in the following recursive way
\begin{equation}
\begin{aligned}
{\rm E}_{\B{x},\B{y}|\B{r},\B{\theta}^l} \left[ \nabla_{\B{\theta}} \ln p(\B{r},\B{y},\B{x},\B{\theta}^{l+1}) \right] =0.
\end{aligned}
\end{equation}
Thus, at each iteration $l$ the following two steps are performed: \vspace{0.15cm} \\
\underline{E-step:} Compute the expectation\\
\begin{equation}
\begin{aligned}
&g(\B{r}, \B{\theta} ,\hat{\B{\theta}}^{l})=
{\rm E}_{\B{x},\B{y}|\B{r},\hat{\B{\theta}}^{l}} [\ln p(\B{r},\B{y},\B{x},\B{\theta})] + {\rm const} \\
&~~={\rm E}_{\B{x},\B{y}|\B{r},\hat{\B{\theta}}^{l}} \Big[2\B{y}^{\rm T}\B{f}(\B{\theta},\B{x})\!-\!\left\| \B{f}(\B{\theta},\B{x})\right\|^2_2 \Big]/(2\sigma_\eta^2) \!+ \!\ln p_\theta(\B{\theta}) \\
&~~={\rm E}_{\B{x}|\B{r},\hat{\B{\theta}}^{l}} \Big[2( f_i(\hat{\B{\theta}}^l,\B{x}) +{\rm E} [\B{\eta}|\B{x},\B{r},\hat{\B{\theta}}^{l}] ) ^{\rm T}\B{f}(\B{\theta},\B{x})- \\
&~~~~~~\left\| \B{f}(\B{\theta},\B{x})\right\|^2_2 \Big]/(2\sigma_\eta^2) + \ln p_\theta(\B{\theta}),
\end{aligned}
\label{estep}
\end{equation}
where
\begin{equation}
\begin{aligned}
{ \rm E}[\eta_i|\B{x},\B{r},\hat{\B{\theta}}^{l}]&= -\frac{\sigma_\eta}{\sqrt{2\pi}}\cdot \frac{{\rm e}^{-\frac{(r_i^{\rm up}-f_i(\B{x}, \hat{\B{\theta}}^{l}))^2 }{ 2\sigma_\eta^2}}-{\rm e}^{-\frac{(r_i^{\rm lo}-f_i(\B{x}, \hat{\B{\theta}}^{l}) )^2 }{ 2\sigma_\eta^2 }}}{\Phi(\frac{r_i^{\rm up}-f_i(\B{x}, \hat{\B{\theta}}^{l}) }{ \sigma_\eta})-\Phi(\frac{r_i^{\rm lo}-f_i(\B{x}, \hat{\B{\theta}}^{l}) }{ \sigma_\eta})} \\
&\stackrel{\rm 1-bit}= r_i\frac{\sigma_\eta}{\sqrt{2\pi}}\cdot \frac{{\rm e}^{-\frac{f_i(\B{x}, \hat{\B{\theta}}^{l})^2 }{2 \sigma_\eta^2}}}{\Phi(\frac{r_if_i(\B{x}, \hat{\B{\theta}}^{l}) }{ \sigma_\eta})}.
\nonumber
\end{aligned}
\end{equation}
\underline{M-step:} Solve the maximization
\begin{equation}
\begin{aligned}
\hat{\B{\theta}}^{l+1}=\argmax_{\B{\theta}} g(\B{r}, \B{\theta} ,\hat{\B{\theta}}^{l}).
\end{aligned}
\label{mstep}
\vspace{0.25cm}
\end{equation}
In many cases, this maximization is much easier than (\ref{MAP_MAX}), as we will see in the examples considered later.
\subsection{Standard Cram\'er-Rao Bound (CRB)}
\label{SEC_CRB}
The standard CRB\footnote{ The standard CRB, in contrast to the Bayesian CRB, holds for a deterministic parameter, i.e. the prior $p_\theta(\B{\theta})$ is not taken into account.} is the lower bound on the estimation error for any unbiased estimator, that can be obtained from the Fisher information matrix $\B{J}(\B{\theta})$ under certain conditions
\begin{equation}
\begin{aligned}
{\rm E}[(\B{\theta}-\hat{\B{\theta}})(\B{\theta}-\hat{\B{\theta}})^{\rm T}] \succeq (\B{J}(\B{\theta}))^{-1}.
\end{aligned}
\end{equation}
Hereby, the Fisher information matrix reads as \cite{papoulis}
\begin{equation}
\begin{aligned}
\! \B{J}\!&= {\rm E}_{\B{r}|\B{\theta}} [\nabla_{\B{\theta}}\mathcal{L}(\B{\theta})\nabla_{\B{\theta}}^{\rm T}\mathcal{L}(\B{\theta})] \\
&= {\rm E}_{\B{r}|\B{\theta}} \Big[ {\rm E}_{\B{x}|\B{r},\B{\theta}} \left[ \nabla_{\B{\theta}} \ln p(\B{r},\B{x},\B{\theta}) \right] \cdot \\
&~~~~~~~~~~{\rm E}_{\B{x}|\B{r},\B{\theta}} \left[ \nabla_{\B{\theta}}^{\rm T} \ln p(\B{r},\B{x},\B{\theta}) \right] \Big]\\
&= \!\!\!\sum_i\!{\rm E}_{\B{r}|\B{\theta}} \!\left[ \! {\rm E}_{\B{x}|\B{r},\B{\theta}} \!\Bigg[\!\frac{{\rm e}^{-\frac{(r_i^{\rm up}\!-\!f_i(\B{x}, \B{\theta}))^2}{2\sigma_\eta^2} }\!\!\!-\!{\rm e}^{-\frac{(r_i^{\rm lo}\!-\!f_i(\B{x}, \B{\theta}))^2}{2\sigma_\eta^2} } }{ \Phi(\frac{r_i^{\rm up}\!-\!f_i(\B{x}, \B{\theta})}{\sigma_\eta} )\!-\!\Phi(\frac{r_i^{\rm lo}\!-\!f_i(\B{x}, \B{\theta})}{\sigma_\eta} )}\! \nabla_{\B{\theta}}^{\rm T} \!f_i(\B{x},\B{\theta}) \! \Bigg] \! \cdot \right. \\
&~~\left.{\rm E}_{\B{x}|\B{r},\B{\theta}} \!\Bigg[\!\frac{{\rm e}^{-\frac{(r_i^{\rm up}\!-\!f_i(\B{x}, \B{\theta}))^2}{2\sigma_\eta^2} }\!\!\!-\!{\rm e}^{-\frac{(r_i^{\rm lo}\!-\!f_i(\B{x}, \B{\theta}))^2}{2\sigma_\eta^2} } }{ \Phi(\frac{r_i^{\rm up}\!-\!f_i(\B{x}, \B{\theta})}{\sigma_\eta} )\!-\!\Phi(\frac{r_i^{\rm lo}\!-\!f_i(\B{x}, \B{\theta})}{\sigma_\eta} )} \nabla_{\B{\theta}}^{\rm T} f_i(\B{x},\B{\theta})\! \Bigg] \right]\frac{1}{2\pi\sigma_\eta^2}.
\end{aligned}
\end{equation}
In the pilot-based estimation case ($\B{x}$ is known), it simplifies to
\begin{equation}
\begin{aligned}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\B{J}\!
&= \sum_{i,r_i} \!\frac{\!({\rm e}^{-\frac{(r_i^{\rm up}\!-\!f_i(\B{x}, \B{\theta}))^2}{2\sigma_\eta^2} }\!\!-\!{\rm e}^{-\frac{(r_i^{\rm lo}\!\!-\!f_i(\B{x}, \B{\theta}))^2}{2\sigma_\eta^2} })^2\nabla_{\!\B{\theta}}\! f_i(\B{x}, \B{\theta}) \nabla_{\!\B{\theta}}^{\rm T}\! f_i(\B{x}, \B{\theta})\!}{2 \pi \sigma_\eta^2 ( \Phi(\frac{r_i^{\rm up}-f_i(\B{x}, \B{\theta})}{\sigma_\eta} )-\Phi(\frac{r_i^{\rm lo}-f_i(\B{x}, \B{\theta})}{\sigma_\eta} ))}\\
& \stackrel{\rm 1-bit}= \sum_i \frac{\!{\rm e}^{-\frac{(f_i(\B{x}, \B{\theta}))^2}{\sigma_\eta^2} }\!\!\nabla_{\!\B{\theta}}\! f_i(\B{x}, \B{\theta}) \nabla_{\!\B{\theta}}^{\rm T}\! f_i(\B{x}, \B{\theta})\!}{ 2\pi \sigma_\eta^2 \Phi(\frac{f_i(\B{x}, \B{\theta})}{\sigma_\eta} )\Phi(-\frac{f_i(\B{x}. \B{\theta})}{\sigma_\eta} )}.
\end{aligned}
\label{CRLB}
\end{equation}
Additionally, in the low SNR regime ($\sigma_\eta \gg |f_i(\B{x}, \B{\theta})|$), (\ref{CRLB}) can be approximated by
\begin{equation}
\begin{aligned}
\B{J}\approx \frac{\rho_Q}{ { \sigma_\eta^2}} \sum_{i} \nabla_{\!\B{\theta}}\! f_i(\B{x}, \B{\theta}) \nabla_{\!\B{\theta}}^{\rm T}\! f_i(\B{x}, \B{\theta}),
\end{aligned}
\label{fisher_low}
\end{equation}
where the factor
\begin{equation}
\begin{aligned}
\rho_Q = \frac{1}{2 \pi} \sum_{r} \!\frac{\!({\rm e}^{-\frac{(r^{\rm up})^2}{2\sigma_\eta^2} }\!\!-\!{\rm e}^{-\frac{(r^{\rm lo})^2}{2\sigma_\eta^2} })^2 \!}{ \Phi(\frac{r^{\rm up}}{\sigma_\eta} )-\Phi(\frac{r^{\rm lo}}{\sigma_\eta} )}
\leq 1,
\end{aligned}
\label{rho_Q}
\end{equation}
depends only on the quantizer characteristic (here assumed to be the same for all dimensions) and represents the information loss at low SNR compared to the unquantized case, in the pilot-based estimation setting. For the single-bit case, i.e., $(r^{\rm lo},r^{\rm up})\in\{(-\infty,0),(0,\infty)\}$, the Fisher information loss $\rho_Q$ is equal to $2/\pi$, which coincides with the result found in \cite{mezghaniisit2007,mezghaniisit2009} in terms of the Shannon mutual information of the channel. For the case that we use a uniform symmetric mid-riser type quantizer \cite{proaksis}, the quantized receive alphabet is given by
\begin{equation}
r_{i}\in \{ (-\frac{2^b}{2}-\frac{1}{2}+k)\Delta;\textrm{ } k=1,\cdots,2^b\}=\mathcal{R},
\end{equation}
where $\Delta$ is the quantizer step-size and $b$ is the number of quantizer bits. Hereby the lower and upper quantization thresholds are
\begin{equation*}
r_{i}^{\rm lo }=
\begin{cases}
r_{i}-\frac{\Delta}{2} & \mathrm{for} \quad r_{i}\geq -\frac{\Delta}{2}(2^b-2)\\
-\infty & \mathrm{otherwise,}
\end{cases}
\end{equation*}
and
\begin{equation*}
r_{i}^{\rm up }=
\begin{cases}
r_{i}+\frac{\Delta}{2} & \mathrm{for} \quad r_{i}\leq \frac{\Delta}{2}(2^b-2)\\
+\infty & \mathrm{otherwise}.
\end{cases}
\end{equation*}
In order to optimize the Fisher information at low SNR (\ref{fisher_low}) and get close to the full-precision estimation performance, we need to maximize $\rho_Q$ from (\ref{rho_Q}) with respect to the quantizer characteristic. Table~\ref{uniform} shows the optimal step size $\Delta_{\rm opt}$ (normalized by $\sigma_\eta^2$) of the uniform quantizer described above, which maximizes $\rho_Q$ for $b\in\{1,2,3,4\}$. If we do not restrict the characteristic to be uniform, then we get the optimal quantization thresholds which maximize $\rho_Q$ in Table~\ref{nonuniform}. We note that the obtained uniform/non-uniform quantizer optimized in terms of the estimation performance is not equivalent to the optimal quantizer that we would get when minimizing the distortion for a Gaussian input \cite{proaksis}. In addition, contrary to quantization for minimum distortion, the performance gap between uniform and non-uniform quantization is quite insignificant in our case, as we can see from both tables.
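As a numerical companion to (\ref{rho_Q}), the following Python sketch (helper names are ours; SciPy supplies $\Phi$) evaluates $\rho_Q$ for an arbitrary threshold set and, in particular, for the mid-riser quantizer described above. For $b=1$ it reproduces the value $2/\pi$; the optimal step size for a given $b$ can then be located by a simple one-dimensional search.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def midriser_thresholds(b, step):
    """Decision thresholds {-inf, inner, +inf} of the uniform
    symmetric mid-riser quantizer with 2^b cells."""
    inner = step * (np.arange(1, 2**b) - 2**(b - 1))
    return np.concatenate(([-np.inf], inner, [np.inf]))

def rho_Q(thresholds, sigma=1.0):
    """Low-SNR Fisher information loss factor of a threshold
    quantizer; thresholds are sorted and include infinite ends."""
    t = np.asarray(thresholds) / sigma
    num = (np.exp(-t[1:]**2 / 2) - np.exp(-t[:-1]**2 / 2))**2
    den = norm.cdf(t[1:]) - norm.cdf(t[:-1])
    return np.sum(num / den) / (2 * np.pi)

assert np.isclose(rho_Q(midriser_thresholds(1, 1.0)), 2/np.pi)  # 1-bit
\end{verbatim}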
\begin{table}[thp]
\caption {Optimal Uniform Quantizer.}
\label{uniform}\centering
\begin{tabular}{ccc}
\hline
$b$ & $\Delta_{\rm opt}$ & $\rho_Q$ \\
\hline
1 & - & $2/\pi$ \\\hline
2 &0.704877 & 0.825763 \\\hline
3 &0.484899 & 0.945807 \\\hline
4 &0.294778 & 0.984735 \\\hline
\end {tabular}
\end {table}
\begin{table}[thp]
\caption {Optimal non-Uniform Quantizer.}
\label {nonuniform}\centering
\begin{tabular}{ccc}
\hline
$b$ & Optimal thresholds & $\rho_Q$ \\
\hline
1 & 0 & $2/\pi$ \\\hline
2 &0;$\pm$0.704877 & 0.825763 \\\hline
3 &0;$\pm$0.306654;$\pm$0.895107;$\pm$1.626803 & 0.956911 \\\hline
4 &0;$\pm$0.143878;$\pm$0.4204768;$\pm$0.708440;$\pm$1.017896; & 0.989318 \\
& $\pm$1.364802;$\pm$1.780058;$\pm$2.346884 &
\\\hline
\end {tabular}
\end {table}
\par In the following, the theoretical findings will be applied to the channel estimation problem and to a GNSS problem with quantized observations.
\section{Example 1: SISO channel estimation}
\label{section:ExampleI}
We first review the simple problem of SISO channel estimation considered in \cite{ivrlac}.
\subsection{Pilot-based single-bit estimation (one-tap channel)}
The SISO one-tap channel model is given by
\begin{equation}
r_i=\textrm{sign}(y_i)=\textrm{sign}(h x_i + \eta_i),\textrm{ for } i\in\{1,\ldots,N\},
\end{equation}
where $N$ is the pilot length and $x_i\in\{-1,1\}$ is the transmitted pilot sequence with normalized power. The channel coefficient $h\in\mathbb{R}$ is here our unknown parameter, i.e. $\B{\theta}=[h] $. \\
It can be shown by solving the optimality condition (\ref{KKT}) (with uniform prior $p_{\B{\theta}}(\B{\theta})$) that the ML-estimate of the scalar channel from the single-bit outputs $r_i$ is given by \cite{ivrlac}
\begin{equation}
\hat{h}=\sqrt{2\sigma_\eta^2}\textrm{erf}^{-1}(\frac{\B{r}^{\rm T}\B{x}}{N}).
\end{equation}
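This estimator is simple enough to validate with a short Monte Carlo sketch (self-contained Python; the parameter values are arbitrary illustrations). It relies on the fact that ${\rm E}[r_i x_i]={\rm erf}(h/(\sqrt{2}\sigma_\eta))$, so the sample mean of $r_i x_i$ converges to the argument of ${\rm erf}^{-1}$ above.
\begin{verbatim}
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(1)
h, sigma, N = 1.0, 2.0, 200          # true channel, noise std, pilots
x = rng.choice([-1.0, 1.0], size=N)  # binary pilot sequence
r = np.sign(h * x + sigma * rng.standard_normal(N))  # 1-bit outputs
h_hat = np.sqrt(2 * sigma**2) * erfinv(np.mean(r * x))
print(h_hat)                         # close to h for large N
\end{verbatim}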
Besides, the Fisher information (\ref{CRLB}) becomes in this case
\begin{align}
J(h)= \frac{N{\rm e}^{-\frac{h^2}{\sigma_\eta^2} }}{ 2\pi \sigma_\eta^2 \Phi(\frac{h}{\sigma_\eta} )\Phi(-\frac{h}{\sigma_\eta} )}.
\end{align}
This expression of the Fisher information is shown in Fig.~\ref{fish} for $N=200$ as a function of $h^2/\sigma_\eta^2$. In Fig.~\ref{siso} the CRB, i.e. $1/J$, and the exact relative \emph{mean square error} (MSE) of the ML-estimate from $N=200$ observations, both normalized by $h^2$, are depicted as functions of the SNR$=h^2/\sigma_\eta^2$. Interestingly, we observe that above a certain SNR the estimation performance degrades, which means that noise may be favorable at a certain level, contrary to the unquantized channel. This phenomenon is known as stochastic resonance, which occurs when dealing with such nonlinearities. We can naturally seek the optimal SNR that maximizes the normalized Fisher information, i.e. minimizes CRB$/h^2$:
\begin{align}
\left.\frac{h^2}{\sigma_\eta^2}\right|_{\rm opt} \!\!= \!\argmax_ {\gamma} \! \frac{N\gamma{\rm e}^{-\gamma }}{ 2\pi \Phi(\sqrt{\gamma})\Phi(-\sqrt{\gamma} )}= 2.4807 \equiv 3.9458{\rm dB}.
\end{align}
These results, obtained by numerical optimization of the Fisher information, coincide with the results found in \cite{ivrlac} through observations at asymptotically large $N$.
\vspace{-0.3cm}
\begin{figure}[ht]
\centerline{
\psfrag{Fisher Information}[c][c]{\footnotesize Fisher Information$\cdot h^2$}
\psfrag{SNR}[c][c]{ \footnotesize $h^2/\sigma_\eta^2$ (linear)}
\epsfig{file=./dependencies/fisher.eps, width =8.5cm}}
\caption{Fisher Information vs. $h^2/\sigma_\eta^2$ for a SISO channel, $b=1$, $N=200$.}
\label{fish}
\end{figure}
\vspace{-0.5cm}
\begin{figure}[ht]
\centerline{
\psfrag{CRLB, MSE}[c][c]{\footnotesize (CRB, MSE)/$h^2$}
\psfrag{SNR}[c][c]{ \footnotesize $h^2/\sigma_\eta^2$ (linear)}
\epsfig{file=./dependencies/CRLB.eps, width =8.5cm}}
\caption{Estimation error vs. $h^2/\sigma_\eta^2$ for a SISO channel, $b=1$, $N=200$.}
\label{siso}
\end{figure}
\subsection{Pilot-based estimation (two-tap channel)}
\label{section:twotap}
Now let us consider a more general setting with a two-tap inter-symbol-interference (ISI) channel
\begin{equation}
r_i=\textrm{sign}(y_i)=\textrm{sign}(h_0 x_i + h_1 x_{i-1}+ \eta_i),\textrm{ for } i\in\{1,\ldots,N\},
\end{equation}
where $h_0$ and $h_1$ are the channel taps. Again we utilize a binary amplitude pilot sequence
$\B{x} \in \{-1, 1\}^N$ and we try to find the ML-estimate of the parameter vector $\B{\theta}=[h_0, h_1]^{\rm T}$ in closed form. Ignoring the first output $r_1$, the ML-condition (\ref{KKT}) becomes ($p_\theta(\B{\theta})=1$)
\begin{equation}
\begin{aligned}
\sum_{i=2}^N r_ix_i\frac{{\rm e}^{-\frac{(h_0x_i+h_1x_{i-1})^2 }{2 \sigma_\eta^2}}}{\Phi(\frac{r_i(h_0x_i +h_1x_{i-1} ) }{ \sigma_\eta})}&=0, \\
\sum_{i=2}^N r_ix_{i-1}\frac{{\rm e}^{-\frac{(h_0x_i+h_1x_{i-1})^2 }{ 2\sigma_\eta^2}}}{\Phi(\frac{r_i(h_0x_i +h_1x_{i-1} ) }{ \sigma_\eta})}&=0. \\
\end{aligned}
\end{equation}
Taking the sum and the difference of these equations delivers respectively
\begin{equation}
\begin{aligned}
\sum_{i=2}^N r_i(x_i+x_{i-1})\frac{{\rm e}^{-\frac{(h_0x_i+h_1x_{i-1})^2 }{2 \sigma_\eta^2}}}{\Phi(\frac{r_i(h_0x_i +h_1x_{i-1} ) }{ \sigma_\eta})}&=0, \\
\sum_{i=2}^N r_i(x_i-x_{i-1})\frac{{\rm e}^{-\frac{(h_0x_i+h_1x_{i-1})^2 }{ 2\sigma_\eta^2}}}{\Phi(\frac{r_i(h_0x_i +h_1x_{i-1} ) }{ \sigma_\eta})}&=0. \\
\end{aligned}
\end{equation}
Next, we multiply the numerator and denominator of each equation by $\Phi(-\frac{r_i(h_0x_i +h_1x_{i-1} ) }{ \sigma_\eta})$ to get
\begin{equation}
\begin{aligned}
\sum_{i=2}^N r_i(x_i+x_{i-1})\frac{\Phi(-r_ix_i\frac{h_0 +h_1 }{ \sigma_\eta})}{\Phi(\frac{h_0 +h_1 }{ \sigma_\eta})\Phi(-\frac{h_0 +h_1 }{ \sigma_\eta})}&=0, \\
\sum_{i=2}^N r_i(x_i-x_{i-1})\frac{\Phi(-r_ix_i\frac{h_0 -h_1 }{ \sigma_\eta})}{\Phi(\frac{h_0 -h_1 }{ \sigma_\eta})\Phi(-\frac{h_0 -h_1 }{ \sigma_\eta})}&=0. \\
\end{aligned}
\end{equation}
Then, using the fact that
\begin{equation}
\begin{aligned}
2\Phi(-r_ix_i\frac{h_0 +h_1 }{ \sigma_\eta})=1- r_ix_i {\rm erf}(\frac{h_0 +h_1 }{\sqrt{2} \sigma_\eta}),
\end{aligned}
\end{equation}
where ${\rm erf}(\cdot)$ denotes the Gaussian error function, we get
\begin{equation}
\begin{aligned}
\sum_{i=2}^N (x_i+x_{i-1})r_i&=(N-1+\sum_{i=2}^N x_ix_{i-1}) {\rm erf} (\frac{h_0+h_1}{\sqrt{2}\sigma_\eta}), \\
\sum_{i=2}^N (x_i-x_{i-1})r_i&=(N-1-\sum_{i=2}^N x_ix_{i-1}) {\rm erf} (\frac{h_0-h_1}{\sqrt{2}\sigma_\eta}).
\end{aligned}
\end{equation}
Finally, solving the last equations with respect to $h_0$ and $h_1$, we get the ML solution.
\begin{equation}
\begin{aligned}
\!\!\!\hat{h}_{0} \!= \!\sqrt{\!\frac{\sigma_\eta^2}{2}}\! \!\left(\! \! {\rm erf}^{-1} \!\!\left[ \! \frac{\sum\limits_{i=2}^N \! (x_i\!+\!x_{i-1})r_i}{N\!-\!1\!+\!\sum\limits_{i=2}^N x_ix_{i-1} } \!\right] \! \!+ \! {\rm erf}^{-1} \! \!\left[ \! \frac{\sum\limits_{i=2}^N \! (x_i\!-\!x_{i-1})r_i}{N\!-\!1\!-\!\sum\limits_{i=2}^N x_ix_{i-1} } \!\right] \!\right)\!,
\end{aligned}
\nonumber
\end{equation}
\begin{equation}
\begin{aligned}
\!\!\!\hat{h}_{1} \!= \!\sqrt{\!\frac{\sigma_\eta^2}{2}}\! \!\left(\! \! {\rm erf}^{-1} \!\!\left[ \! \frac{\sum\limits_{i=2}^N \! (x_i\!+\!x_{i-1})r_i}{N\!-\!1\!+\!\sum\limits_{i=2}^N x_ix_{i-1} } \!\right] \! \!+ \! {\rm erf}^{-1} \! \!\left[ \! \frac{\sum\limits_{i=2}^N \! (x_{i-1}\!-\!x_i)r_i}{N\!-\!1\!-\!\sum\limits_{i=2}^N x_ix_{i-1} } \!\right] \!\right)\!.
\end{aligned}
\nonumber
\end{equation}
The solution consists of quite simple computations (apart from the final application of ${\rm erf}^{-1}$), since we only have to deal with binary data ($r_i,x_i \in\{\pm 1\}$).
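A direct Monte Carlo check of these closed-form expressions reads as follows (a sketch in the same style as before; note the denominators $N-1\pm\sum_i x_i x_{i-1}$ from the preceding derivation).
\begin{verbatim}
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(2)
h0, h1, sigma, N = 1.0, 0.4, 1.5, 100000
x = rng.choice([-1.0, 1.0], size=N)
r = np.sign(h0 * x[1:] + h1 * x[:-1]
            + sigma * rng.standard_normal(N - 1))
S = np.sum(x[1:] * x[:-1])                      # sum of x_i x_{i-1}
A = np.sum((x[1:] + x[:-1]) * r) / (N - 1 + S)  # ~ erf((h0+h1)/..)
B = np.sum((x[1:] - x[:-1]) * r) / (N - 1 - S)  # ~ erf((h0-h1)/..)
h0_hat = np.sqrt(sigma**2 / 2) * (erfinv(A) + erfinv(B))
h1_hat = np.sqrt(sigma**2 / 2) * (erfinv(A) - erfinv(B))
print(h0_hat, h1_hat)   # -> approximately (1.0, 0.4)
\end{verbatim}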
\subsection{Non-Pilot Aided (Blind) Estimation}
\label{section:ExampleIII}
Suppose now that an unknown binary symbol sequence $x_i \in \{+1,-1\}$ is transmitted
over an additive white Gaussian noise (AWGN) channel with an unknown
real gain $h$. The analog channel output is
\begin{equation}
y_i=h\cdot x_i+ \eta_i,
\end{equation}
where the variance $\sigma_\eta^2$ of the noise $\eta_i$ is also unknown. Additionally, the receiver is unaware of the transmitted symbols $x_i$. Based on $N$ quantized observations $r_i = Q(y_i)$,
we wish to estimate the parameter vector $\B{\theta}=[h,\sigma_\eta]^{\rm T}$. We note an inherent ambiguity in the
problem: the sign of the gain $h$ and the sign of $x_i$ cannot be determined
individually. We also note that at least 2 bits are needed in this case, because a single-bit output does not contain any information about $h$. Since the ML problem is intractable in closed form, we resort to the EM approach. The EM-update for $h$ can be obtained from the general expressions in (\ref{estep}) and (\ref{mstep}) as
\begin{equation}
\begin{aligned}
&\hat{h}^{l+1}= \frac{1}{N}\sum_{i,x\in\{+1,-1\}}\\
&\frac{ \hat{h}^{l}[\Phi(\frac{r_i^{\rm up}\!-\!x \hat{h}^{l} }{ \hat{\sigma}_\eta^l})\!-\!\Phi(\frac{r_i^{\rm lo}\!-\!x \hat{h}^{l} }{ \hat{\sigma}_\eta^l})]\!-\!\frac{x\hat{\sigma}_\eta^l}{\sqrt{2\pi}} ({{\rm e}^{-\frac{(r_i^{\rm up}-x\hat{h}^{l})^2 }{ 2\hat{\sigma}_\eta^{l,2}}}\!\!-\!{\rm e}^{-\frac{(r_i^{\rm lo}\!-\!x \hat{h}^{l} )^2 }{ 2\hat{\sigma}_\eta^{l,2} }}} )}{\Phi(\frac{r_i^{\rm up}\!-\!\hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}- \hat{h}^{l} }{ \hat{\sigma}_\eta^l})+ \Phi(\frac{r_i^{\rm up}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})}\\
&~~~~=\hat{h}^{l}-\frac{\hat{\sigma}_\eta^l}{\sqrt{2\pi}} \frac{1}{N} \sum_i\\
&~~~\frac{ {{\rm e}^{-\frac{(r_i^{\rm up}-\hat{h}^{l})^2 }{ 2\hat{\sigma}_\eta^{l,2}}}-{\rm e}^{-\frac{(r_i^{\rm lo}- \hat{h}^{l} )^2 }{ 2\hat{\sigma}_\eta^{l,2} }}} - {{\rm e}^{-\frac{(r_i^{\rm up}+\hat{h}^{l})^2 }{ 2\hat{\sigma}_\eta^{l,2}}}+{\rm e}^{-\frac{(r_i^{\rm lo}+ \hat{h}^{l} )^2 }{2 \hat{\sigma}_\eta^{l,2} }}} }{\Phi(\frac{r_i^{\rm up}-\hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}- \hat{h}^{l} }{ \hat{\sigma}_\eta^l})+ \Phi(\frac{r_i^{\rm up}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})},
\end{aligned}
\end{equation}
while the update for the noise variance follows from the expectation
\begin{equation}
\begin{aligned}
&\hat{\sigma}^{l+1,2}_\eta=\frac{1}{N} \sum_i {\rm E}_{\B{x},\eta_i|r_i,h^l,\hat{\sigma}^{l,2}}[\eta_i^2]=\hat{\sigma}^{l,2}_\eta-\frac{\sqrt{2}\hat{\sigma}_\eta^l}{N\sqrt{\pi}}\sum_{i}\Bigg[\\
&\frac{ {(r_i^{\rm up}-\hat{h}^{l}){\rm e}^{-\frac{(r_i^{\rm up}-\hat{h}^{l})^2 }{ 2\hat{\sigma}_\eta^{l,2}}}-(r_i^{\rm lo}- \hat{h}^{l} ){\rm e}^{-\frac{(r_i^{\rm lo}- \hat{h}^{l} )^2 }{ 2\hat{\sigma}_\eta^{l,2} }}} }{\Phi(\frac{r_i^{\rm up}-\hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}- \hat{h}^{l} }{ \hat{\sigma}_\eta^l})+ \Phi(\frac{r_i^{\rm up}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})}+\\
&\frac{ (r_i^{\rm up}+\hat{h}^{l}){\rm e}^{-\frac{(r_i^{\rm up}+\hat{h}^{l})^2 }{ 2\hat{\sigma}_\eta^{l,2}}}-(r_i^{\rm lo}+ \hat{h}^{l} ){\rm e}^{-\frac{(r_i^{\rm lo}+ \hat{h}^{l} )^2 }{ 2\hat{\sigma}_\eta^{l,2} }} }{\Phi(\frac{r_i^{\rm up}-\hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}- \hat{h}^{l} }{ \hat{\sigma}_\eta^l})+ \Phi(\frac{r_i^{\rm up}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})-\Phi(\frac{r_i^{\rm lo}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l})} \Bigg],
\end{aligned}
\end{equation}
where we used the conditional distribution
\begin{equation}
\begin{aligned}
\! p(\eta_i|r_i,\hat{\sigma}_\eta^l,\hat{h}^l)\!=\! \frac{ \mathbb{I}_{D(r_i)}\!(\eta_i\!+\!\hat{h}^{l})\frac{{\rm e}^{-\frac{\eta_i^2 }{ 2\hat{\sigma}_\eta^{l,2}}}}{\sqrt{\!2\pi\hat{\sigma}_\eta^{l,2}}}\!+\!\mathbb{I}_{D(r_i)}\!(\!\eta_i\!-\! \hat{h}^{l} )\frac{{\rm e}^{-\frac{\eta_i^2 }{ 2\hat{\sigma}_\eta^{l,2} }} }{\sqrt{\!2\pi\hat{\sigma}_\eta^{l,2}} } }{\Phi(\! \frac{r_i^{\rm up}-\hat{h}^{l} }{ \hat{\sigma}_\eta^l}\!)\!-\!\Phi(\!\frac{r_i^{\rm lo}- \hat{h}^{l} }{ \hat{\sigma}_\eta^l}\!)\!+\! \Phi(\!\frac{r_i^{\rm up}+ \hat{h}^{l} }{\hat{\sigma}_\eta^l}\!)\!-\!\Phi(\!\frac{r_i^{\rm lo}+ \hat{h}^{l} }{ \hat{\sigma}_\eta^l}\!)}\!.
\end{aligned}
\end{equation}
The Cram\'er-Rao Bound, which can be easily obtained from the likelihood function
\begin{equation}
\begin{aligned}
\mathcal{L}(h,\sigma_\eta)\!&= \\
&\!\!\!\!\!\sum_i\ln \! \left(\!\Phi(\! \frac{r_i^{\rm up}-{h} }{ \sigma_\eta}\!)\!-\!\Phi(\!\frac{r_i^{\rm lo}- {h} }{ \sigma_\eta}\!)\!+\! \Phi(\!\frac{r_i^{\rm up}+ {h} }{ \sigma_\eta}\!)\!-\!\Phi(\!\frac{r_i^{\rm lo}+ {h} }{ \sigma_\eta}\!) \!\right)\!\!,
\end{aligned}
\end{equation}
as well as the MSE of the estimates $\hat{h}$ and $\hat{\sigma}_\eta$ found by Monte Carlo simulation, both normalized by $h^2$, are depicted in Fig.~\ref{fig:blind} as functions of $h^2/\sigma_\eta^2$, where the observation length is $N=100$ and the quantizer resolution is $b=3$. Clearly, the MSE of the estimate $\hat{\sigma}_\eta$ also exhibits the non-monotonic behavior mentioned before with respect to the SNR.
\begin{figure}[h]
\centerline{
\psfrag{CRB, MSE}[c][c]{\footnotesize (MSE, CRB)/$h^2$}
\psfrag{SNR}[c][c]{ \footnotesize $h^2/\sigma_\eta^2$ (linear)}
\epsfig{file=./dependencies/blind_est.eps, width =9cm}}
\caption{MSE and CRB of the blind estimates of $h$ and $\sigma_\eta$ vs. $h^2/\sigma_\eta^2$ for a SISO channel, $b=3$, $N=1000$.}
\label{fig:blind}
\end{figure}
\section{Example 2: Pilot-based MIMO channel estimation}
\label{section:MIMO}
Now, we consider the MIMO case. We begin with the problem of estimating a $2\times 2$ MIMO channel from single-bit outputs, since it can also be solved in closed form, as shown below.
\subsection{Single-bit estimation of a $2\times 2$ MIMO channel}
As example, let us consider a pilot-based estimation of a real valued $2\times2$ channel matrix assuming a single-bit quantizer
\begin{equation}
\begin{aligned}
&\B{r}_i= {\rm sign}( h_{i1}\B{x}_1+ h_{i2}\B{x}_2 + \B{\eta}_i),
\end{aligned}
\end{equation}
where $\B{x}_1$, $\B{x}_2\in \{-1,1\}^N$ are the pilot vectors transmitted from each Tx antenna, while $\B{r}_1$, $\B{r}_2\in \{-1,1\}^N$ are the received vectors at each Rx antenna. The maximum likelihood (ML) channel estimate $\hat{\B{\theta}}=[\hat{h}_{11},\hat{h}_{12},\hat{h}_{21},\hat{h}_{22}]^{\rm T}$ can be found by
solving (\ref{KKT}) in closed form, similarly to the 2-tap SISO channel (see Subsection~\ref{section:twotap}), as
\begin{equation}
\begin{aligned}
\hat{h}_{ij} \!= \!\sqrt{\frac{\sigma_\eta^2}{2}} \!\left( \! {\rm erf}^{-1} \!\!\left[ \frac{(\B{x}_1+\B{x}_2)^{\rm T} \B{r}_i}{N+\B{x}_1^{\rm T}\B{x}_2} \!\right] \!+ \! {\rm erf}^{-1} \! \!\left[ \! \frac{(\B{x}_j-\B{x}_{\bar j})^{\rm T} \B{r}_i}{N-\B{x}_1^{\rm T}\B{x}_2} \!\right] \!\right),
\end{aligned}
\end{equation}
with $i,j\in \{1,2\}$, ${\bar j}=3-j$ and $ {\rm erf}^{-1}$ the inverse of the error function. We can see that the hardware implementation of the estimation task is still considerably simple, since only shift registers, counters and a look-up table for $ {\rm erf}^{-1}$ would be necessary.
Fig.~\ref{CRLB_fig} shows the Monte Carlo simulation and the CRB of the estimation error $\sum_{ij}{\rm E}[(h_{ij}-\hat{h}_{ij})^2]$ for a given $2\times2$ channel as a function of the noise variance $\sigma_\eta^2$. Thereby, the Fisher information matrix is block diagonal, and for each receive antenna $i$ (writing $h_1=h_{i1}$ and $h_2=h_{i2}$) the corresponding $2\times 2$ block can be obtained from (\ref{CRLB}) as
\begin{equation}
\begin{aligned}
\B{J}=& \frac{(N+\B{x}_1^{\rm T}\B{x}_2){\rm e}^{-\frac{(h_1+h_2)^2}{\sigma_\eta^2} }}{ 4\pi \sigma_\eta^2 \Phi(\frac{h_1+h_2}{\sigma_\eta} )\Phi(-\frac{h_1+h_2}{\sigma_\eta} )}
\left[\begin{array}{ll}
1&1 \\
1&1
\end{array}\right] + \\
& \frac{(N-\B{x}_1^{\rm T}\B{x}_2){\rm e}^{-\frac{(h_1-h_2)^2}{\sigma_\eta^2} }}{ 4\pi \sigma_\eta^2 \Phi(\frac{h_1-h_2}{\sigma_\eta} )\Phi(-\frac{h_1-h_2}{\sigma_\eta} )}
\left[\begin{array}{rr}
1&-1 \\
-1&1
\end{array}\right].
\end{aligned}
\end{equation}
As an example, we took the specific channel matrix
\begin{equation}
\begin{aligned}
\B{H}=\left[
\begin{array}{cc}
2 & 1.5 \\
0.5 & -1
\end{array}
\right].
\end{aligned}
\end{equation}
Fig.~\ref{CRLB_fig} also shows the MSE when using the analog (unquantized) output, which is exactly
\begin{equation}
\left.{\rm MSE}\right|_{b\rightarrow\infty}= \sigma_\eta^2 {\rm tr} \left(\left([\B{x}_1,\B{x}_2]^{\rm T}[\B{x}_1,\B{x}_2]\right)^{-1}\right).
\end{equation}
Clearly, the estimation error under quantization does not increase monotonically with $\sigma_\eta^2$, as already observed in the SISO case.
\begin{figure}[h]
\centerline{
\psfrag{-log(ber)}[c][c]{\small{$-\mathrm{log}_\mathrm{e}$(BER)}}
\psfrag{Estimation Error}[c][c]{\footnotesize MSE}
\epsfig{file=./dependencies/CRB_2x2.eps, width =9cm}}
\caption{Estimation error vs. $\sigma_\eta^2$ for a 2$\times$2 real valued channel, $b=1$, $N=200$.}
\label{CRLB_fig}
\end{figure}
\subsection{Pilot-based MIMO channel estimation of arbitrary size}
\label{section:ExampleII}
Let us now consider the more general setting of a quantized linear MIMO system
\begin{equation}
\B{y}={\rm vec} [\B{H}\B{X}']+\B{\eta},
\end{equation}
and
\begin{equation}
\B{r}=Q(\B{y}),
\end{equation}
with a channel matrix $\B{H}\in \mathbb{R}^{L \times M}$, while $\B{X}' \in \mathbb{R}^{M\times N}$ contains $N$ pilot vectors of dimension $M$. Here, we stack the unquantized, quantized, and noise signals into the vectors $\B{y}$, $\B{r}$ and $\B{\eta}$, respectively.
Our unknown parameter vector is therefore $\B{\theta}=\B{h}={\rm vec}[\B{H}]$, and we have the system function
\begin{equation}
\B{f}(\B{h},\B{X})= \B{X}\B{h},
\end{equation}
where the new matrix $\B{X} \in \mathbb{R}^{{L\cdot N}\times {M\cdot L}}$ again contains the pilot vectors, arranged appropriately (namely $\B{X} = \B{X}'^{\rm T} \otimes {\bf I}_L$, since ${\rm vec}[\B{H}\B{X}']=(\B{X}'^{\rm T} \otimes {\bf I}_L)\,{\rm vec}[\B{H}]$). Furthermore, contrary to the previous cases, we assume that a prior distribution $p(\B{h})$ is given according to $\B{h} \sim \mathcal{N} (\B{0}, \B{R}_h)$. With these definitions, the EM iterations (\ref{estep}) and (\ref{mstep}) read in this case as \vspace{0.5cm} \\
\underline{E-step:} Compute for $i=1,\ldots,L\cdot N$
\begin{equation}
\begin{aligned}
b^{l}_i &= -\frac{\sigma_\eta}{\sqrt{2\pi}}\cdot \frac{{\rm e}^{-\frac{(r_{i}^{\rm up}- [\B{X}\hat{\B{h}}^{l}]_i )^2 }{ 2\sigma_\eta^2}}-{\rm e}^{-\frac{(r_{i}^{\rm lo}-[\B{X}\hat{\B{h}}^{l}]_i)^2 }{2 \sigma_\eta^2 }}}{\Phi(\frac{r_{i}^{\rm up}-[\B{X}\hat{\B{h}}^{l}]_i }{ \sigma_\eta})-\Phi(\frac{r_{i}^{\rm lo}-[\B{X}\hat{\B{h}}^{l}]_i }{ \sigma_\eta})}
\end{aligned}
\end{equation}
\underline{M-step:}
\begin{equation}
\begin{aligned}
\hat{\B{h}}^{l+1}=(\B{X}^{\rm T}\B{X}+ \sigma_\eta^2 \B{R}^{-1}_h)^{-1}\cdot \B{X}^{\rm T} (\B{X} \hat{\B{h}}^{l} +\B{b}^{l} ).
\end{aligned}
\end{equation}
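A compact implementation of this E-/M-iteration might look as follows; the threshold vectors \texttt{r\_lo} and \texttt{r\_up} hold the lower and upper quantizer boundaries associated with each observed output, and all names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def em_quantized(X, r_lo, r_up, sigma, Rh_inv, iters=50):
    h = np.zeros(X.shape[1])
    A = np.linalg.inv(X.T @ X + sigma**2 * Rh_inv) @ X.T
    for _ in range(iters):
        z = X @ h
        num = (np.exp(-(r_up - z)**2/(2*sigma**2))
               - np.exp(-(r_lo - z)**2/(2*sigma**2)))
        den = norm.cdf((r_up - z)/sigma) - norm.cdf((r_lo - z)/sigma)
        b = -sigma/np.sqrt(2*np.pi) * num/den        # E-step
        h = A @ (z + b)                              # M-step
    return h

# Single-bit example: thresholds (0, +inf) for r=+1 and (-inf, 0) for r=-1.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(400, 4))
h_true = rng.standard_normal(4)
y = X @ h_true + 0.5*rng.standard_normal(400)
r_lo = np.where(y > 0, 0.0, -np.inf)
r_up = np.where(y > 0, np.inf, 0.0)
print(em_quantized(X, r_lo, r_up, 0.5, np.eye(4)))
\end{verbatim}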
Let us at this point verify that the EM algorithm converges to the unique optimal solution. To this end, we write the penalized log-likelihood (MAP objective) explicitly
\begin{align}
\mathcal{L}(\B{\theta})=\sum_i \!\ln \!\left( \! \Phi(\frac{r_i^{\rm up}\!-\!\B{x}_i^{\rm T} \B{h}}{\sigma_\eta} )\!-\!\Phi(\frac{r_i^{\rm lo}-\B{x}_i^{\rm T} \B{h}}{\sigma_\eta} ) \! \right) \!- \frac{1}{2}\B{h}^{\rm T} \B{R}_h^{-1} \B{h}.
\end{align}
This objective is a smooth, strictly concave function of $\B{\theta}$. This follows from the log-concavity of
\begin{equation}
\Phi(b-z )-\Phi(a-z),
\end{equation}
$b>a$, with respect to $z$, since it is obtained from the convolution of the Gaussian density with a normalized boxcar function supported between $a$ and $b$, both of which are log-concave \cite{boyd_convex}.
Therefore, the stationary point of the EM iteration fulfilling condition (\ref{KKT}) is the unique optimal solution.
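The log-concavity argument can also be checked numerically; the following quick sketch (not a proof) verifies that $\ln(\Phi(b-z)-\Phi(a-z))$ has a non-positive discrete second derivative on a grid, for one illustrative choice of $a<b$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

a, b = -0.5, 1.0
z = np.linspace(-4, 4, 2001)
g = np.log(norm.cdf(b - z) - norm.cdf(a - z))
print(np.all(np.diff(g, 2) <= 1e-9))   # True: g is concave in z
\end{verbatim}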
\begin{figure}[h]
\centerline{
\psfrag{SNR}[c][c]{\small{$-10\mathrm{log}_{10}(\sigma_\eta^2)$}}
\psfrag{b} [c][c]{\small{$b$}}
\epsfig{file=./dependencies/4x4MIMO_est.eps, width =8.5cm}}
\caption{Estimation error vs. $\sigma_\eta^2$ for a real valued 4$\times$4 MIMO channel, $b=1$, $N=1000$, $\B{R}_h={\bf I}_{16}$, $x_{i,j}\in\{-1,+1\}$.}
\label{4x4MIMO_est}
\end{figure}
Fig.~\ref{4x4MIMO_est} illustrates the average MSE, defined by
\begin{align}
{\rm MSE}={\rm E}\left[\left\| \B{h}-\hat{\B{h}}\right\|_2^2\right],
\end{align}
for different bit resolutions for a 4$\times$4 MIMO channel with i.i.d.\ unit-variance entries.
Here, we chose an orthogonal pilot sequence, i.e., $\B{X}^{\rm T}\B{X}=\B{R}_h={\bf I}_{16}$ with $x_{i,j}\in\{-1,+1\}$. The estimation error in the unquantized case, which is given by
\begin{align}
{\rm MSE}_{b \rightarrow \infty}= \sigma_\eta^2 {\rm tr}\left( (\B{X}^{\rm T}\B{X}+ \sigma_\eta^2 \B{R}^{-1}_h)^{-1}\right)
\end{align}
is also shown for comparison. Obviously, at medium and low SNR, the coarse quantization does not affect the estimation performance considerably.
\section{Example 3: Quantization of GNSS Signals}
\label{section:GNSS}
The quality of the data provided by a GNSS receiver depends largely on the synchronization error with the signal transmitted by the
GNSS satellite (navigation signal), that is, on the accuracy in the propagation time-delay estimation of the direct signal
(line-of-sight signal, LOSS). In the following we study the effect of quantization in terms of simulations and the CRB derived in Section~\ref{SEC_CRB}. We assume an optimal uniform quantizer as given in Table~\ref{uniform}. We first assess the accuracy of a standard one-antenna GNSS receiver when no multipath is present. Secondly, we assess the behavior of array synchronization signal processing in a multipath scenario, applying the derivation of the EM algorithm from Section~\ref{subsec:EM}. This assessment is based on the work presented in \cite{AnNoSeSw2009}. In the following we assume a GPS C/A code signal with a chip duration $T_c=977.52$ ns, a code length of $1$ ms, and a bandwidth of $B=1.023$ MHz. The received signal is sampled at the sampling frequency $f_s=2B$. We use only one code period as observation time, during which the channel is assumed constant.\par
The synchronization of a navigation signal is usually performed by a Delay Lock Loop (DLL), which, in case no multipath signals are present, efficiently implements a maximum likelihood estimator (MLE) for the time-delay of the LOSS, $\tau_1$.
\begin{figure}[h!]
\centerline{
\psfrag{CRLB [Meter]}[B][c]{\small $\sqrt{\rm {CRB} (\hat{\tau}_1)}\cdot c_0$ [meter]}
\psfrag{SNR} [c][c]{\small SNR [dB]}
\psfrag{b=1} [c][c]{\tiny $b=1$}
\psfrag{b=2} [c][c]{\tiny $b=2$}
\psfrag{b=3} [c][c]{\tiny $b=3$}
\psfrag{b=4} [c][c]{\tiny $b=4$}
\epsfig{file=./dependencies/CRLB_vs_P_bit.eps, width =8.5cm}}
\caption{CRB-based lower bound on the RMSE of $\hat{\tau}_1\cdot c_0$ vs. bit resolution and SNR with one antenna, $f_s=2.046$ MHz. One code period is used for estimation.}
\label{SISO_GNSS_del}
\end{figure}
In Fig.~\ref{SISO_GNSS_del}, the lower bound on the RMSE of $\tau_1$ for different numbers of bits is given in terms of $\sqrt{\rm {CRB} (\hat{\tau}_1)}$ in meters, where $c_0$ denotes the speed of light. A nominal SNR for a GPS C/A signal is approximately $-20$ dB. In Fig.~\ref{SISO_GNSS_del} one can observe that $\sqrt{\rm {CRB} (\hat{\tau}_1)}$ does not decrease significantly beyond 3 bits; thus a rather simple hardware implementation is sufficient for such a GNSS receiver.\par
Now, we assess the EM algorithm as derived in Section \ref{subsec:EM} with $p_{\theta}(\B{\theta})$ being a uniform distribution, hence considering an ML estimator. We consider a two-path scenario where the LOSS and one reflective multipath signal are received by a uniform linear antenna array (ULA) with $M=8$ isotropic sensor elements. We define
\begin{equation}
\B{\theta}=[\rm{Re}\{\B{\gamma}\}^{\rm T},\rm{Im}\{\B{\gamma}\}^{\rm T},\B{\tau}^{\rm T},\B{\nu}^{\rm T},\B{\phi}^{\rm T}]^{\rm T},
\end{equation}
with the vector of complex amplitudes $\B{\gamma}=[\gamma_{1},\gamma_{2}]^{\rm{T}}$, the vector of azimuth angles $\B{\phi}=[\phi_{1},\phi_{2}]^{\rm{T}}$, the vector of time-delays
$\B{\tau}=[\tau_{1},\tau_{2}]^{\rm{T}}$, and the vector of Doppler frequencies
$\B{\nu}=[\nu_{1},\nu_{2}]^{\rm{T}}$. The parameters with subscript 1 refer to the LOSS, and those with subscript 2 to the reflection. The reflected multipath and the LOSS are considered to be in-phase, i.e., $\arg (\gamma_1) = \arg (\gamma_2)$, and the signal-to-multipath ratio (SMR) is $5$ dB. The signal-to-noise ratio (SNR) denotes the LOSS-to-noise ratio, and we assume ${\rm SNR}=-22.8$ dB. The DOAs of the LOSS and the multipath are $\phi_{1}=-30^{\circ}$ and $\phi_{2}=62^{\circ}$, respectively. Further, we define the relative time-delay between the LOSS and the multipath as $\Delta\tau=|\tau_1-\tau_2|=0.3\,T_c$ and the relative Doppler as $\Delta \nu=|\nu_1-\nu_2|=0$ Hz. In Fig.~\ref{8ant_GNSS_del} the RMSE of $\hat{\tau}_{1}$ and $\hat{\tau}_{2}$ vs. the bit resolution is depicted.
\begin{figure}[h]
\centerline{
\psfrag{RMSE, CRLB}[B][c]{\small RMSE$\cdot c_0$, $\sqrt{\rm {CRB}}\cdot c_0$ [meter]}
\psfrag{bits} [c][c]{\small{$b$}}
\psfrag{CRB, LOSS}[c][c]{\tiny $\sqrt{\rm {CRB}(\hat{\tau}_1)}$}
\psfrag{CRB, MULT.}[c][c]{\tiny $\sqrt{\rm {CRB}(\hat{\tau}_2)}$}
\psfrag{RMSE, LOSS}[c][c]{\tiny $\rm{RMSE}(\hat{\tau}_1)$}
\psfrag{RMSE, MULT.}[c][c]{\tiny $\rm{RMSE}(\hat{\tau}_2)$}
\epsfig{file=./dependencies/RMSE_delay.eps, width =9cm}}
\caption{RMSE of $\hat{\tau}_{1}\cdot c_0$ and $\hat{\tau}_{2}\cdot c_0$ (in meters) vs. bit resolution for $M=8$, $\phi_{1}=-30^\circ$, $\phi_{2}=62^\circ$,
$\Delta \tau=0.3\,T_c$, ${\rm SNR}=-22.8$ dB, ${\rm SMR}=5$ dB, $\Delta \nu=0$ Hz. One code period is used for estimation.}
\label{8ant_GNSS_del}
\end{figure}
In Fig.~\ref{8ant_GNSS_azi} the RMSE of $\hat{\phi}_{1}$ and $\hat{\phi}_{2}$ vs. the bit resolution is shown.
\begin{figure}[h]
\centerline{
\psfrag{RMSE, CRLB}[B][c]{\small RMSE, $\sqrt{\rm CRB}$ [$^{\circ}$]}
\psfrag{bits} [c][c]{\small{$b$}}
\psfrag{CRLB, LOSS}[c][c]{\tiny $\sqrt{\rm {CRB}(\hat{\phi}_1)}$}
\psfrag{CRLB, MULT.}[c][c]{\tiny $\sqrt{\rm {CRB}(\hat{\phi}_2)}$}
\psfrag{RMSE, LOSS}[c][c]{\tiny $\rm{RMSE}(\hat{\phi}_1)$}
\psfrag{RMSE, MULT.}[c][c]{\tiny $\rm{RMSE}(\hat{\phi}_2)$}
\epsfig{file=./dependencies/RMSE_azi.eps, width =9cm}}
\caption{RMSE of $\hat{\phi}_{1}$ and $\hat{\phi}_{2}$ (in degrees) vs. bit resolution for $M=8$, $\phi_{1}=-30^\circ$, $\phi_{2}=62^\circ$, $\Delta \tau=0.3\,T_c$, ${\rm SNR}=-22.8$ dB, ${\rm SMR}=5$ dB, $\Delta \nu=0$ Hz. One code period is used for estimation.}
\label{8ant_GNSS_azi}
\end{figure}
Based on the results presented in Fig.~\ref{8ant_GNSS_del} and Fig.~\ref{8ant_GNSS_azi}, one can conclude that 4 bits are sufficient for high-resolution estimates under the considered channel conditions.
\section{Conclusion}
A general EM-based approach for optimal parameter estimation based on quantized channel outputs has been presented and applied to channel estimation and GNSS synchronization. In addition, the performance limit given by the Cram\'er-Rao bound (CRB) has been discussed, as well as the effects of quantization and the optimal choice of the ADC characteristic. It turns out that the gap to the ideal (infinite-precision) case in terms of estimation performance is relatively small, especially at low SNR. This holds independently of whether the quantizer is uniform or not. Additionally, we observed that additive noise might, at a certain level, be favorable when operating on quantized data, since the MSE curves that we obtained were not monotonic in the SNR. This is an interesting phenomenon that could be investigated in future work.
\bibliographystyle{IEEEbib}
\setlength{\textheight}{16.5 cm}
\section{Introduction}
Continual learning refers to the ability of a model to learn from a stream of incoming data sequentially over time, while retaining knowledge acquired from previous data. Continual learning is a vital component of machine learning. It enables a model to generalise in situations where the stream of data may be non-stationary, with unavailable data during training, or when new information is incrementally made available to the model over time~\citep{kirkpatrick2017overcoming}.
A phenomenon called catastrophic forgetting hinders continual learning in many artificial neural networks (ANNs)~\citep{howard2018universal, schak2019study}. Catastrophic forgetting refers to the model losing knowledge of previous datasets or tasks as the model is trained sequentially on information relevant to a new dataset or task~\citep{kirkpatrick2017overcoming}. Catastrophic forgetting is also called catastrophic interference in older works~\citep{MCCLOSKEY1989109}.
If a model, e.g. an ANN, has very robust memory that is not susceptible to catastrophic forgetting, then such a model may be susceptible to overfitting (i.e. memorisation of the training set). Overfitting leads to poor generalisation. Humans, however, have both reasonable (albeit imperfect) memory retention and good generalisation, so in principle it should be possible to build a model with similar desirable properties.
Splines are piece-wise defined functions. The application of splines to mitigate catastrophic forgetting was absent in most of the reviewed literature. Due to the piece-wise definition, each spline parameter only affects the function on some small region while keeping the rest of the function unchanged, thus making it a good candidate for continual learning. Cubic B-splines are considered in this paper.
Catastrophic forgetting is typically mitigated with two broad strategies: carefully designed and parameterised models, or through augmented training and regularisation techniques ~\citep{robins1995catastrophic,shin2017continual}. This paper attempts to address catastrophic forgetting through the following novel contributions:
\begin{itemize}
\item A novel Spline Additive Model (SAM) with guaranteed robustness to catastrophic forgetting is proposed, which is useful for many applications. However, it is not a universal function approximator.
\item The Kolmogorov-Arnold Spline Additive Model (KASAM) is proposed, which is a novel architecture that combines SAMs with the Kolmogorov-Arnold representation theorem to create a universal function approximator.
\end{itemize}
These goals demand reviewing the fundamentals of function approximation in one variable, with B-spline functions that are resistant to catastrophic forgetting. The paper proceeds to build multi-variable function approximators with single-variable B-spline functions and the Kolmogorov-Arnold representation theorem.
The rest of the paper is structured as follows: Section~\ref{sec:previous} provides an overview of the related literature. Section~\ref{sec:catastrophic} illustrates the concept of catastrophic forgetting. Section~\ref{sec:singleVar} lists the properties of splines in the context of single-variable function approximation. Section~\ref{sec:sam} discusses SAMs as function approximators. Section~\ref{sec:KASAM} introduces the KASAM architecture. Section~\ref{sec:pseudo} describes the pseudo-rehearsal technique employed in this study. Section~\ref{sec:method} details the methodology. Section~\ref{sec:exp} presents the empirical results. Section~\ref{sec:conclusions} concludes the paper, and Section~\ref{sec:opportunities} proposes some directions for future research.
\section{Relevant Studies}\label{sec:previous}
It has been hypothesized that overlapping, dense and distributed representations in ANNs lead to catastrophic forgetting~\citep{kaushik2021understanding}. Catastrophic forgetting occurs when many parameter estimates that store knowledge for one task change during sequential learning to meet the objectives of another task~\citep{kirkpatrick2017overcoming, mcrae1993catastrophic}. If the same parameter is shared or overlaps with many inputs, then it is more susceptible to catastrophic forgetting. Any gradient-based updates would affect the same shared parameter and thus, the parameter value would be more likely to change between tasks. Catastrophic forgetting can be ameliorated with models that are parameterised in such a way that weight sharing or overlap is minimised over all inputs.
Training techniques to counteract catastrophic forgetting include identifying and protecting key parameters for different tasks. Parameter regularisation to penalise adjusting parameters from their initial values is one approach. Retraining over all training data for all tasks can also be done, although this scales poorly as the amount of training data increases.
Data augmentation techniques can also be used to counteract catastrophic forgetting, some of which are referred to as rehearsal techniques. There are many suggested rehearsal and pseudo-rehearsal techniques~\citep{robins1995catastrophic}. Pseudo-rehearsal works quite well for low-dimensional problems, but there is some experimental evidence and there are practical use cases suggesting that pseudo-rehearsal performs worse on high-dimensional problems. It is hypothesised that the concentration of measure in high-dimensional probability distributions requires more complexity to be modelled for acceptable results. As a consequence, some researchers considered estimating the data's distribution using a Generative Adversarial Network (GAN) for higher-dimensional problems.
However, training GANs requires a lot of computational resources, and might not be a scalable solution for all problems and use-cases.
Using splines in ANNs has been studied to some extent. \citet{scardapane2017learning} studied learnable activation functions parameterised by splines, introduced vectorised gradient-descent-based methods to train the parameters of their architecture, and tested its function approximation ability. \citet{douzette2017b} made use of spline networks with trainable and non-uniform sub-intervals, and developed algorithms for evaluating splines and their derivatives with respect to all their parameters. That research concluded that splines with non-uniform partitions that vary during training achieved state-of-the-art results for spline-based architectures. The accuracy of non-uniform splines compared well against conventional neural networks. However, allowing the partitions of sub-intervals to vary had counter-productive effects: intermediate or hidden layers could take on values outside the support interval of the splines. Since splines are zero outside their sub-intervals, this could lead to increased training times~\citep{douzette2017b}. To the best of the authors' knowledge, catastrophic forgetting was not considered in the context of splines.
A lot of research has taken place on the neural network approximations of the Kolmogorov-Arnold representation theorem~\citep{funahashi1989approximate,scarselli1998universal,igelnik2003kolmogorov,braun2009constructive,guliyev2018approximation,sannai2019universal, schmidt2021kolmogorov,shen2021neural}. \citet{shen2021neural} gives a good overview of the direction of research, and has shown that the focus has been on reducing the number and width of the hidden layers required to approximate the theorem with a neural network, at the trade-off of increasing the bounded approximation error. The difference between the well-known universal function approximation theorems for arbitrarily deep or arbitrarily wide neural network and the Kolmogorov-Arnold representation theorem is striking.
ANN architectures using splines and the Kolmogorov-Arnold representation theorem have also been explored~\citep{igelnik2003kolmogorov,lane1991multi}. \citet{igelnik2003kolmogorov} put forth a Kolmogorov Spline Network architecture, which was also based on the Kolmogorov-Arnold representation theorem. However, it differs from our work in the manner in which the weights are applied~\citep{igelnik2003kolmogorov}. Furthermore, no prototype was constructed, and no experimental work was performed.
The NeurIPS (originally NIPS) paper by \citet{lane1991multi} explored a Kolmogorov-based multi-layer perceptron architecture which employed B-splines in the nodes. The architecture was trained with gradient-descent-based techniques, its function approximation ability was tested, and a fast rate of convergence of the network's parameters was reported. Their work is most relevant to this paper. KASAM has a skip-connection to the output and can more easily represent functions that are sum-decomposable into a sum of single-variable functions. This paper focuses on memory retention and catastrophic forgetting, whereas \citet{lane1991multi} does not consider catastrophic forgetting.
The research by Numenta and Jeffrey Hawkins is focused on neuroscience and reverse-engineering the computational mechanisms employed in the brain~\citep{htm_neocortex}. Their research is based on discrete and stochastic models like hierarchical temporal memory with update rules that appear otherworldly compared to the familiar differentiable models that can be trained with gradient descent algorithms. One aspect that is missing is a universal function approximation theorem proving the expressive power of their models. One component of their technology is Sparse Distributed Representations (SDRs) that encode real numbers with sparse vectors \citep{Hinton1990DistributedR}. SDRs are similar to zeroth order B-splines, although the connection has not been explicitly shown in the reviewed literature.
The potential use of B-spline functions to mitigate catastrophic forgetting was not thoroughly investigated in any of the reviewed literature. The convergence rate and numerical stability of B-splines require more thorough analysis and empirical study.
\section{Susceptibility to Catastrophic Forgetting}\label{sec:catastrophic}
Model parameters that are shared over the entire input-domain of a function approximator make models such as ANNs or linear functions susceptible to catastrophic forgetting. A parameter that affects a model over all inputs is a globally shared parameter, and not a localised parameter. Parameters that are localised only affect a model's output over a small region of the input-domain. Catastrophic forgetting is easily demonstrated with a simple linear function approximator for a single-variable scalar function.
A linear model is trained on the data sampled from the distribution for the first task in Figure~\ref{fig:fig_label_linear_functions}. The model is thereafter trained on the second task. Without any additional guidance, such as revision of the first task or weight regularisation to prevent catastrophic forgetting, the model promptly unlearns the first task.
\begin{figure}[h!]
\centering
\noindent
\includegraphics[width=0.7\textwidth]{KASAM_Theory/Linear_functions_catastrophic_interference.png}
\caption{\small A linear model with only two parameters is trained on data for the first task. The same model is then trained on data from a second task, and will adapt to the new task. The model abruptly forgets about the first task without revision. Trainable parameters that are globally shared across the input-domain make the model susceptible to catastrophic forgetting.}
\label{fig:fig_label_linear_functions}
\end{figure}
If the linear model had been trained on both tasks simultaneously, then it would have found a reasonable solution for both tasks. The individual tasks have different data distributions, which affects the out-of-sample performance and the potential for continual or incremental learning. The joint distribution for both tasks has a non-linear optimal function (which is piece-wise linear for this specific example).
The seemingly disparate ideas of catastrophic forgetting, distribution shift, out-of-sample performance, continual learning, and the non-linearity of a target function are related. The relationship is not a simple analytical rule, but it is worth investigating.
\section{Splines for Single Variable Function Approximation}\label{sec:singleVar}
Single-variable function approximation on the unit interval is typically done with the use of single-variable basis functions. Examples include the polynomial basis (Legendre polynomials), or the Fourier basis. Using a large density (i.e. number) of basis functions can lead to high-variance models with extreme oscillations between the finitely many training data points. It is an open question if there are regularisation methods that attenuate such oscillations for any choice of basis functions. The specific basis functions considered in this paper are uniform cubic B-splines, illustrated in Figure~\ref{fig:fig_label_uniform_cubic_spline_basis_functions}.
\begin{figure}[h]
\centering
\noindent
\includegraphics[width=0.7\textwidth]{KASAM_Theory/uniform_cubic_b_spline_basis_functions.png}
\caption{\small Uniform cubic B-spline basis functions with a density of 8. Note that the basis functions have been plotted for input values outside the unit interval to clearly see all 8 basis functions. Each basis function is zero almost everywhere except on a small sub-interval. Each basis function is a scaled and translated version of the same function such that $ S_{i}(x) = S(w_{i}x + b_{i})$. }
\label{fig:fig_label_uniform_cubic_spline_basis_functions}
\end{figure}
The use of B-splines is a flexible and expressive method of function approximation. The analysis of B-splines and characterising their general behaviour is simple compared to other basis functions. Note that the number of basis functions is fixed and chosen independently of the training data. The model considered in this paper does not necessarily interpolate between consecutive data points. Exact interpolation of training data is ill-advised for tasks with noisy training data, since the resulting model would have a lot of variance and oscillate wildly. The order of a B-spline (e.g. zeroth or cubic) is different to the number of basis functions. The number or density of basis functions is sometimes referred to as sub-intervals or knots in the literature on splines and B-splines.
Each basis function for a uniform cubic B-spline $S_{i}$ can be obtained by scaling and translating the input of the following activation function:
$$ S(x) =\begin{cases}
\frac{1}{6} x^{3} & 0 \leq x < 1\\
\frac{1}{6} \left[-3(x-1)^{3} +3(x-1)^{2} +3(x-1) + 1 \right] & 1 \leq x < 2\\
\frac{1}{6} \left[3(x-2)^{3} -6(x-2)^{2} + 4 \right] & 2 \leq x < 3\\
\frac{1}{6} ( 4-x ) ^{3} & 3 \leq x < 4\\
0 & \text{otherwise}
\end{cases}
$$
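A direct transcription of this activation function into Python/NumPy (the language used for all code sketches in this paper) reads:
\begin{verbatim}
import numpy as np

def S(x):
    """Uniform cubic B-spline activation, supported on [0, 4]."""
    x = np.asarray(x, dtype=float)
    return np.select(
        [(0 <= x) & (x < 1), (1 <= x) & (x < 2),
         (2 <= x) & (x < 3), (3 <= x) & (x < 4)],
        [x**3 / 6,
         (-3*(x-1)**3 + 3*(x-1)**2 + 3*(x-1) + 1) / 6,
         (3*(x-2)**3 - 6*(x-2)**2 + 4) / 6,
         (4 - x)**3 / 6],
        default=0.0)
\end{verbatim}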
The initial impetus for considering B-splines followed from the observation that Numenta's Sparse Distributed Representations (SDRs) \citep{Hinton1990DistributedR} are mathematically similar to zeroth-order B-splines. Zeroth-order splines incidentally resemble lookup tables, and are not differentiable over their entire domain. It is also known that lookup tables are very robust to catastrophic forgetting \citep{look_up_tables_are_robust}. Replacing zeroth-order B-splines with cubic B-splines yields a differentiable model that is smooth and amenable to gradient-descent-based learning methods. If the unit interval is uniformly partitioned, as in Figure~\ref{fig:fig_label_uniform_cubic_spline_basis_functions}, then all the resulting basis functions have the same shape, as illustrated in Figure~\ref{fig:fig_label_uniform_cubic_spline_activation}.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=0.8\textwidth]{KASAM_Theory/uniform_cubic_b_spline_activation_function.png}
\caption{\small The activation function used to implement uniformly spaced cubic B-splines. In the uniform case all basis functions are the same shape with only the input or argument being scaled and translated with a fixed linear function for each basis function.}
\label{fig:fig_label_uniform_cubic_spline_activation}
\end{figure}
Each cubic B-spline basis function is simply a re-scaled or translated version of any other cubic B-spline basis function, as seen in Figures~\ref{fig:fig_label_uniform_cubic_spline_basis_functions} and~\ref{fig:fig_label_uniform_cubic_spline_activation}. This symmetry can promote reuse. In a neural network context, uniform cubic B-splines can be modelled using a two-layer neural network with an activation function corresponding to the shape of each basis function, and a pre-defined set of untrainable weights and biases. A final trainable linear layer multiplies each basis function with its coefficient parameter and sums together the results. A single variable function approximator for uniform B-splines is given by:
$$
f(x)
= \sum_{i=1}^{K} \theta_{i} S_{i}(x)
= \sum_{i=1}^{K} \theta_{i} S(w_{i}x + b_{i})
$$
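The following sketch implements $f(x)=\sum_i \theta_i S(w_i x + b_i)$ on the unit interval, reusing $S$ from the sketch above; the particular knot placement ($w_i = K-3$, $b_i = 3-i$) is one reasonable uniform choice, not necessarily the paper's exact convention.
\begin{verbatim}
import numpy as np   # S(x) as defined in the previous sketch

K = 32                           # density of basis functions (K >= 4)
w = K - 3                        # uniform scaling
b = 3.0 - np.arange(K)           # per-basis translation

def basis(x):
    """Rows of K basis values; at most 4 entries are non-zero per input."""
    return S(w * np.asarray(x)[..., None] + b)

theta = np.zeros(K)              # trainable coefficients

def f(x):
    return basis(x) @ theta

# Quick demonstration: least-squares fit to noisy samples of sin(2*pi*x).
rng = np.random.default_rng(0)
x_tr = rng.uniform(size=500)
y_tr = np.sin(2*np.pi*x_tr) + 0.05*rng.standard_normal(500)
theta[:] = np.linalg.lstsq(basis(x_tr), y_tr, rcond=None)[0]
\end{verbatim}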
The construction presented above is similar to that used for other basis expansions. The Fourier series is composed of basis functions that are orthogonal to each other, and there are many analytical and practical reasons for considering mutually orthogonal basis functions. The Fourier basis for functions on the unit interval is:
$$
f(x)
= c + \sum_{n=1}^{K} a_{n} \sin(2 \pi n x) + b_{n} \cos(2 \pi n x)
$$
Polynomial functions can also constitute a set of basis functions. The relatively simple monomial basis is non-orthogonal and of the form:
$$
f(x)
= c + \sum_{n=1}^{K} a_{n} x^{n}
$$
In many applications it is preferable to use orthogonal basis functions. Legendre polynomials are orthogonal polynomial basis functions. A function approximator in terms of Legendre basis functions $P_{n}(x)$ is given by:
$$
f(x)
= \sum_{n=0}^{K} a_{n} P_{n}(x)
$$
The choice of B-splines seems arbitrary, but it is a critical design choice. The problem with trigonometric and polynomial basis functions is that they are non-zero almost everywhere; each basis function is zero at only finitely many points. This means that each parameter could affect a model's output almost everywhere and lead to catastrophic forgetting. B-splines are zero almost everywhere and do not suffer such off-target effects: each parameter affects only a small region. Cubic B-splines have three properties that are atypical of most function approximators, namely sparsity, bounded gradients, and orthogonal distal gradients. These properties hold for arbitrarily many basis functions. The listed properties are specified below for cubic B-splines, but similar properties hold for B-spline functions of any fixed order.
\subsection*{Property 1: Sparsity of the Gradient Vector}
The maximum number of non-zero cubic B-spline basis functions for any input is four, regardless of the number of basis functions $K$:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f(x)}_{0}
:= \sum_{i=1}^{K} d_{Hamming} \left(\frac{\partial f}{\partial \theta_{i}}(x),0 \right)
\leq 4
\; \forall x \in D(f)
$$
Sparsity is related to the robustness of a model to catastrophic forgetting. Uniform cubic B-splines require a minimum of four basis functions over the unit interval, so $K\geq 4$. For very large models with a high density of basis functions, the gradient vector is zero for nearly all trainable parameters, as shown in Figure~\ref{fig:Proof_properties_1_2}.
\subsection*{Property 2: Gradient Flow Attenuation}
The gradient vector has a bounded L1 norm for any number of B-spline basis functions, where $U$ is an upper bound on the magnitude of a single basis function:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f(x)}_{1}
= \sum_{i=1}^{K} \left| \frac{\partial f}{\partial \theta_{i}} (x) \right|
< 4U
\; \forall x \in D(f)
$$
Gradient flow attenuation affects the stability of training a model with many trainable parameters. If the gradient vector is bounded, then the flow of gradient updates during training is attenuated and numerically stable. A visual proof of the boundedness of the gradient vector is given in Figure~\ref{fig:Proof_properties_1_2}.
\begin{figure}[!ht]
\centering
\noindent
\includegraphics[width=0.6\textwidth]{KASAM_Theory/SAM_properties_1_2_Proof.png}
\caption{\small Plot showing which basis functions are active for a single input. At most four basis functions are non-zero for any given input to a uniform cubic B-spline function, which gives the property of sparsity. Each of the four active basis functions is bounded, so the sum of their absolute values is also bounded. Thus the $L_{1}$ norm of the gradient vector with respect to the trainable parameters is bounded.}
\label{fig:Proof_properties_1_2}
\end{figure}
\subsection*{Property 3: Distal Orthogonality}
For any uniform cubic B-spline function approximator of one variable there exists a $\delta>0$ such that:
$$
|x - y| > \delta
\implies \langle \grad_{\vec{\mathbf{\theta}}} f(x),\grad_{\vec{\mathbf{\theta}}} f(y) \rangle = 0
\; \forall x,y \in D(f)
$$
If two input points are sufficiently far apart, then the gradient vectors with respect to the trainable parameters at each input are orthogonal to each other. Thus, only points within the same neighbourhood have non-orthogonal gradient vectors that can influence one another. A visual proof of distal orthogonality can be done with a diagram given in Figure~\ref{fig:Proof_properties_3}.
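Since $f$ is linear in $\vec{\mathbf{\theta}}$, the parameter gradient at $x$ is simply the row of basis values, so Properties 1 and 3 can be illustrated numerically with the sketch above (assuming \texttt{basis} as defined there):
\begin{verbatim}
gx, gy = basis(0.1), basis(0.9)  # gradients w.r.t. theta at x=0.1, 0.9
print((gx != 0).sum(), (gy != 0).sum())  # 4 and 4: sparsity (Property 1)
print(float(gx @ gy))                    # 0.0: distal orthogonality
\end{verbatim}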
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=0.7\textwidth]{KASAM_Theory/SAM_properties_3_Proof.png}
\caption{\small Plot showing which basis functions are active for two distant input-points.
If two input-points are sufficiently distant from each other,
then there are no overlapping basis functions.
The gradients with respect to trainable parameters are zero almost everywhere,
except for the active basis functions. Thus, the inner-product of such gradient vectors must be zero. This property is best described as distal orthogonality.}
\label{fig:Proof_properties_3}
\end{figure}
Distal orthogonality guarantees memory retention for single-variable function approximators based on cubic B-splines. Gradient flow attenuation guarantees bounded gradient vectors, which means cubic B-splines are numerically stable during training, making optimisation much easier and well-posed. Sparsity of the gradient vector and sparse activation mean it is possible to implement very efficient models and training procedures that only compute the non-zero values. The above properties lead to a single-variable function approximator that is efficient, easily trained, and robust to catastrophic forgetting with intrinsic memory retention, at least in theory.
To the authors' knowledge, few other function approximators come with such guarantees.
\subsection*{Stratification}
Uniform cubic B-splines with a large density of basis functions can exhibit what is best described as stratification, illustrated in Figure~\ref{fig:fig_label_uniform_cubic_spline_stratification}. Stratification can be considered to be a special form of overfitting, where the data points are learned exactly, and the model is not adjusted for any regions not explicitly represented by the training data. The models in Figure~\ref{fig:fig_label_uniform_cubic_spline_stratification} are initialised to zero. Regions with no training data are never modified and retain their initial values. The gaps between data-points can thus lead to stratification, since only the regions with training data are updated. Therefore, over-parameterised B-spline models can easily memorise all training data.
\begin{figure}[h]
\centering
\noindent
\includegraphics[width=0.6\textwidth]{KASAM_Theory/uniform_cubic_b_spline_stratification.png}
\caption{\small Visualising stratification. The training data was sampled from the sine function $\sin(2 \pi x)$ to illustrate the effect of increasing the number of basis functions (denoted $K$). The uniform cubic B-spline models were initialised to zero everywhere prior to training.}
\label{fig:fig_label_uniform_cubic_spline_stratification}
\end{figure}
There are a few qualitative differences between stratification and overfitting. Regions with no training data have predictable values and do not exhibit the oscillations seen in the Runge phenomenon with other kinds of basis functions~\cite{RungeBOYD199257,RungeFORNBERG2007379,RungeDEMARCHI2020112347}. Over-fitting is task-specific: if the target function were known to be zero almost everywhere except at finitely many points, then an over-parametrised model would perform well. Anomaly detection is an ideal application of such over-parametrised B-spline models. On the other hand, over-parametrised B-spline models would perform poorly on regression problems like estimating a sine function from few training data points, as seen in Figure~\ref{fig:fig_label_uniform_cubic_spline_stratification}.
Stratification could be detrimental or advantageous, depending on the application. Manifold Mixup regularisation and data augmentation could be ideal strategies for counteracting stratification when necessary ~\cite{mixupBeyondEmpiricalRiskMinimization, ManifoldMixupBetterRepresentations, MixUpasLocallyLinearOut-Of-ManifoldRegularization}.
\section{Spline Additive Model (SAM)}\label{sec:sam}
Extending single-variable function approximators to multi-variable function approximation is non-trivial. One possibility is to map a higher dimensional input to the unit interval using space-filling curves (e.g. fractals like Hilbert curves), or some other projection technique based on path integrals. In this paper, the choice was made to create a function approximator that is a sum of single variable functions of each input variable, called the Spline Additive Model (SAM) and illustrated in Figure~\ref{fig:SAM_implementation}.
Consider any target function $y$ that can be expressed as the sum of continuous single variable functions $y_{j}$ in each of the input variables $x_{j}$ given by:
$$
y(\vec{\mathbf{x}})
= y(x_{1},...,x_{n})
= \sum^{n}_{j=1} y_{j}( x_{j} )
$$
SAM uses a uniform cubic B-spline function approximator $f_{j}$ with $K$ trainable parameters to approximate each single-variable function $y_{j}$. There are $K$ basis functions for each $f_{j}$, and there are $n$ input variables. The total number of trainable parameters is $nK$ for the entire model $f(\vec{\mathbf{x}})$. SAM inherits some of the properties discussed in Section~\ref{sec:singleVar}. SAM is given by the sum of $n$ single-variable B-spline functions:
$$
f(\vec{\mathbf{x}})
= \sum^{n}_{j=1} f_{j}( x_{j} )
= \sum^{n}_{j=1} \sum_{i=1}^{K} \theta_{i,j} S_{i,j}( x_{j} )
= \sum^{n}_{j=1} \sum_{i=1}^{K} \theta_{i,j} S(w_{i,j}x_{j} + b_{i,j})
$$
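A minimal NumPy sketch of SAM in $n$ variables, one independent uniform cubic B-spline function per input variable summed at the output, might read as follows (it assumes the activation $S$ from the earlier sketch; class and method names are ours):
\begin{verbatim}
import numpy as np   # assumes S(x) from the earlier sketch

class SAM:
    def __init__(self, n, K=32):
        self.w = K - 3
        self.b = 3.0 - np.arange(K)
        self.theta = np.zeros((n, K))        # trainable coefficients

    def features(self, X):
        """(batch, n, K) basis activations; sparse for each input."""
        return S(self.w * np.asarray(X)[..., None] + self.b)

    def __call__(self, X):
        return np.einsum('bnk,nk->b', self.features(X), self.theta)

    def sgd_step(self, X, y, lr=0.1):
        """One gradient step on the mean squared error."""
        F = self.features(X)
        err = np.einsum('bnk,nk->b', F, self.theta) - y
        self.theta -= lr * np.einsum('b,bnk->nk', err, F) / len(y)
\end{verbatim}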
\begin{figure} [!h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Models/graph_SAM_fully_connected.txt.png}
\caption{SAM with all weights}
\label{fig:SAM_dense}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Models/graph_SAM.txt.png}
\caption{Non-zero or trainable weights}
\label{fig:sam_dense}
\end{subfigure}
\caption{\small
Structure of SAM. The weights and biases for the first layer are constant and specified at model creation to correctly implement B-splines. The nodes in the black rectangle apply the cubic B-spline activation function. The output layer is a trainable linear layer with no bias term. The resulting model is a sum of single-variable functions of each input variable.}
\label{fig:SAM_implementation}
\end{figure}
\subsection*{Property 1: Sparsity of the gradient vector}
For a fixed number of variables $n$, the gradient vector has a maximum of $4n$ non-zero entries for any number of basis functions $K \geq 4$:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f(\vec{\mathbf{x}} )}_{0}
= \sum_{i=1}^{Kn} d_{Hamming} \left(\frac{\partial f}{\partial \theta_{i}} (\vec{\mathbf{x}}),0 \right)
\leq 4 n
\; \forall \; \vec{\mathbf{x}} \in D(f) \subset R^{n}
$$
The sparsity of the multi-variable SAM model follows from the sparsity of each single-variable function used. SAM is robust to catastrophic forgetting.
\subsection*{Property 2: Gradient flow attenuation}
For a fixed number of variables $n$, the gradient vector has bounded L1 norm for any number of basis functions $K \geq 4$:
$$
\norm{ \grad_{\vec{\mathbf{\theta}}} f( \vec{\mathbf{x}} )}_{1}
= \sum_{i=1}^{Kn} \left| \frac{\partial f}{\partial \theta_{i}} ( \vec{\mathbf{x}} ) \right|
< 4Un
\; \forall \; \vec{\mathbf{x}} \in D(f)
$$
The bounded norm for SAM follows from the single-variable case. SAM is numerically stable during training and well-behaved in the limit of arbitrarily many basis functions.
\subsection*{Property 3: Distal orthogonality}
For any spline additive model, there exists a $\delta>0$ such that:
$$
\min_{j=1, \dots , n }
\{ |x_{j} - y_{j}| \} > \delta
\implies
\langle
\grad_{\vec{\mathbf{\theta}}} f(\vec{\mathbf{x}})
,
\grad_{\vec{\mathbf{\theta}}} f(\vec{\mathbf{y}})
\rangle
= 0 \;
\forall \; \vec{\mathbf{x}},\vec{\mathbf{y}} \in D(f) \subset R^{n}
$$
The distal orthogonality follows from the single-variable case. Two points that sufficiently differ in each input-variable have orthogonal parameter gradients. It is worth mentioning that the condition resembles a cross-like region in two variables, and planes that intersect in higher dimensions. Distal orthogonality means SAM is reasonably robust to catastrophic forgetting.
\subsection*{General Overview of SAM}
SAMs also exhibit stratification and inherent memory retention that is robust, but not perfect, since the overlapping regions are not as localised as in the single-variable case. SAMs can be implemented as neural networks, as illustrated in Figure~\ref{fig:SAM_implementation}. SAMs are not a universal function approximation scheme: there are continuous multi-variable functions that cannot be expressed as a sum of single-variable functions. However, for problems where the manifold hypothesis is true and data lies on a very low-dimensional manifold, SAMs can be sufficient. Additionally, SAMs could be well-suited for use with kernel techniques, or the Fourier transform of the input. Recurrent neural networks and reservoir computers may also benefit from the robust nature of SAMs.
\section{KASAM: Universal Function Approximation}\label{sec:KASAM}
The Kolmogorov-Arnold representation shows how any multi-variable function can be represented with single-variable functions~\cite{kolmogorov1957representation,kolmogorovRevisited}. The theorem states that any continuous multi-variable function $y$ on the unit hyper-cube with $n$ input-variables $x_{p}$ can be exactly represented with continuous single-variable functions $\Phi_{q}$, $\phi_{q,p}$ of the form:
$$
y(\vec{\mathbf{x}})
= y(x_{1},...,x_{n})
= \sum^{2n}_{q=0} \Phi_{q} \left( \sum^{n}_{p=1} \phi_{q,p}( x_{p} ) \right)
$$
The representation theorem is not an approximation - it is exact~\cite{kolmogorov1957representation,kolmogorovRevisited}. Furthermore, there is no mention of the computability or learnability of the representation in the theorem. It is also notable that the summation in the representation is finite, and not arbitrarily large or infinite as with Taylor series. It is also unlike the Universal Function Approximation theorems for arbitrarily wide and arbitrarily deep neural networks, which also correspond to summing together arbitrarily many terms. The core building block for the Kolmogorov-Arnold representation theorem is continuous single-variable functions.
The approximation scheme for KASAM stems from the arbitrary density of basis functions used to approximate each single-variable function. A sum of single-variable functions $g_{j}$ is added to the expression. Inspiration was taken from ResNet architectures to make optimisation easier~\citep{resnetpaper}. The Kolmogorov-Arnold representation theorem can be extended with a ResNet-style residual skip-connection for some constant $\lambda$ to obtain the definition of KASAM. The function approximator $f$ replaces the summation over $2n$ with a summation over some $N$. The functions $h_{q,p}$ and $g_{j}$ are uniform B-spline functions. The outer functions are given by $H_{q}(z) = s_{q} ( \sigma(z))$, where $\sigma$ is the sigmoid function that maps any real number to the unit interval and the functions $s_{q}$ are uniform B-spline functions. The overall expression for the Kolmogorov-Arnold Spline Additive Model (KASAM) is given by:
$$
f(\vec{\mathbf{x}})
= \sum^{N}_{q=1} H_{q} \left( \sum^{n}_{p=1} h_{q,p}( x_{p} ) \right)
+ \lambda \sum^{N}_{q=1}
\left( \sum^{n}_{p=1} h_{q,p}( x_{p} ) \right)
+ \sum^{n}_{j=1} g_{j}( x_{j} )
$$
There is a one-to-one correspondence between the exact representation and the structure of KASAM (assuming $N = 2n$). Keep in mind that for $\lambda \neq 0$ one can choose the functions $g_{j}$ such that they cancel the residual term, yielding the original representation theorem.
If an arbitrary density of basis functions is used, then one can approximate the exact target function. It is hoped that using cubic B-splines and SAM gives rise to a model that is easy to implement and train. Unfortunately, the analytical guarantees that SAM possesses do not hold for KASAM in general.
A more compact vector notation can be given, replacing $\lambda$ with a constant matrix or linear projection $T$:
$$
f(\vec{\mathbf{x}})
= H \left( \vec{\mathbf{h}}(\vec{\mathbf{x}}) \right)
+ T \vec{\mathbf{h}}(\vec{\mathbf{x}})
+ g(\vec{\mathbf{x}})
$$
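In this vector form, KASAM can be sketched directly on top of the SAM class above; the dimensions, the constant projection $T$, and all names below are illustrative choices rather than the exact prototype in the repository.
\begin{verbatim}
import numpy as np   # assumes the SAM class from the previous sketch

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class KASAM:
    def __init__(self, n, N=3, K=32, lam=1.0):
        self.inner = [SAM(n, K) for _ in range(N)]  # h_q(x)
        self.outer = SAM(N, K)                      # s_q, after sigmoid
        self.skip = SAM(n, K)                       # g(x)
        self.T = lam * np.ones(N)                   # constant projection

    def __call__(self, X):
        Hvec = np.stack([sam(X) for sam in self.inner], axis=-1)
        return (self.outer(sigmoid(Hvec))   # sum_q s_q(sigma(h_q(x)))
                + Hvec @ self.T             # residual skip-connection
                + self.skip(X))             # sum_j g_j(x_j)
\end{verbatim}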
KASAM uses SAM modules throughout to approximate all sums of single-variable functions, as shown in Figure~\ref{fig:fig_graph_KASAM_stucture}. The TensorFlow implementation and prototype is a proof of concept and can be seen in the linked \href{https://github.com/hpdeventer/KASAM}{GitHub repository}\footnote{\href {https://github.com/hpdeventer/KASAM}{https://github.com/hpdeventer/KASAM}}. The focus was on developing a working prototype, and not computational efficiency. Future research into more efficient implementations would be ideal.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/graph_KASAM.png}
\caption{\small Structure of KASAM. The computational graph represents layer(s) as nodes. The SAM module is used extensively in KASAM. One branch is passed through a sigmoid function, and is then passed to another SAM layer (implementing the exterior functions). There is a residual skip-connection implemented with a constant linear layer between the interior SAM module and the output of the model.}
\label{fig:fig_graph_KASAM_stucture}
\end{figure}
\subsection*{General Overview of KASAM}
KASAM is a universal function approximator. KASAM can be implemented as a special type of artificial neural network with specifically chosen weights and activation functions, and trainable weights that can be optimised with gradient descent algorithms. The KASAM neural network that is constructed from SAM might inherit some memory retention, although it is not as predictably and reliably robust to catastrophic forgetting as SAMs. Stratification could hinder KASAM's generalisation on certain tasks, and it is not obvious how to manage this potential weakness. To reduce catastrophic forgetting, a pseudo-rehearsal training technique can be used. Further implementation details can be found in Section~\ref{sec:method} and the Github repository.
\section{Pseudo-Rehearsal Training Techniques}\label{sec:pseudo}
Numerous revision, rehearsal and pseudo-rehearsal techniques exist~\citep{robins1995catastrophic}. Some use generative models like GANs~\citep{shin2017continual}. The expected risk has the familiar form:
$$ R(f) = \int \ell (f(x),y) \mathrm{d}P(x,y)$$
Pseudo-rehearsal explicitly refers back to the model's previous state, and is similar to ideas related to generative replay~\citep{shin2017continual}. The distribution $\mathcal{P}(z)=\mathcal{P}_{D(f)}$ of the input data with $z \sim \mathcal{P}_{D(f)}$ could be given by a generative model or by a uniform distribution $z \sim U_{D(f)}$ over the domain of $f$, denoted $D(f)$. Using a uniform distribution is less computationally demanding than training a generative model. The second integral below enforces memory retention, while the mixing coefficient $\rho \in \left[0,1 \right]$ trades off novel learning against memory retention. This technique is given by the discrete-time functional of the form:
$$
R(f_{t+1}) =
\rho \int \ell (f_{t+1}(x),y) \mathrm{d}P(x,y) + (1-\rho)\int \ell (f_{t+1}(z),f_{t}(z))\mathrm{d}\mathcal{P}(z)
$$
The key idea is to minimise the loss on new data, subject to the constraint that the model retains its input-output mapping over the rest of its domain. The same functional can be seen in the paper~\citep{shin2017continual}. This could mitigate catastrophic forgetting and is an iterative process that may or may not converge depending on the choice of loss function $\ell$, model $f$, and the target values. A simple version of pseudo-rehearsal is demonstrated in Figure~\ref{fig:fig_label_reveries}.
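A sketch of this augmentation, as used later for KASAM+PR, is given below: uniform samples over the domain are labelled by the model itself and mixed with the new task data (function and parameter names are ours).
\begin{verbatim}
import numpy as np

def pseudo_rehearsal_batch(model, X_new, y_new, n_dim, rho=0.5, rng=None):
    rng = rng or np.random.default_rng()
    m = len(X_new)
    Z = rng.uniform(size=(m, n_dim))      # uniform samples over D(f)
    y_mem = model(Z)                      # the model's own current outputs
    keep_new = rng.uniform(size=m) < rho  # mixing coefficient rho
    X = np.where(keep_new[:, None], X_new, Z)
    y = np.where(keep_new, y_new, y_mem)
    return X, y
\end{verbatim}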
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=0.6\textwidth]{KASAM_Theory/pseudo_rehearsal.png}
\caption{\small Pseudo-rehearsal is demonstrated. The training data is augmented with data sampled from the model's initial input-output values. Pseudo-rehearsal exploits the model's memory of the previous tasks to retain memory and change only in regions where the new data is present.}
\label{fig:fig_label_reveries}
\end{figure}
\section{Methodology}\label{sec:method}
Four models were considered for experimental evaluation. The first was a SAM-based model with no additional regularisation, called \textit{SAM}. The second model was a feed-forward ANN with the same structure and activation functions as KASAM, but with all model parameters trainable and randomly initialised instead of set to predefined constants, further referred to simply as \textit{ANN}. The third was KASAM, with some parameters trainable and others fixed to correctly implement cubic B-splines, named \textit{KASAM}. (Note: the ANN is capable of learning an identical clone of KASAM, but such an eventuality is unlikely given the random initialisation of the ANN.) The fourth model was KASAM in combination with the pseudo-rehearsal (PR) data-augmentation technique, referred to as \textit{KASAM+PR}.
\subsubsection*{SAM}
The SAM model is a scalar-valued function that maps a two-dimensional input to a one-dimensional output. A density of 32 basis functions was chosen for each input variable. The SAM model chosen implements:
$$
f(\vec{\mathbf{x}})
= \sum^{2}_{j=1} f_{j}( x_{j} )
= \sum^{2}_{j=1} \sum_{i=1}^{32} \theta_{i,j} S_{i,j}( x_{j} )
= \sum^{2}_{j=1} \sum_{i=1}^{32} \theta_{i,j} S(w_{i,j}x_{j} + b_{i,j})
$$
SAM has a reasonably simple structure, shown in Figure~\ref{fig:fig_SAM_model_specifics}. The inputs were two-dimensional and the output was one-dimensional. The density of basis functions for each input variable is 32, so there are 64 neural units in the hidden layer. The activation function and weights were chosen to correctly implement B-spline basis functions. The final linear layer is trainable: each basis function is multiplied by its trainable parameter, and the results are summed together to give the output of SAM.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/SAM_model_specifics.png}
\caption{\small Structure of SAM.}
\label{fig:fig_SAM_model_specifics}
\end{figure}
\subsubsection*{KASAM}
The KASAM model is a scalar-valued function that maps a two-dimensional input to a one-dimensional output. It was decided to use three exterior functions such that:
$$
f(\vec{\mathbf{x}})
= \sum^{3}_{q=1} H_{q} \left( \sum^{2}_{p=1} h_{q,p}( x_{p} ) \right)
+ \lambda \sum^{3}_{q=1}
\left( \sum^{2}_{p=1} h_{q,p}( x_{p} ) \right)
+ \sum^{2}_{j=1} g_{j}( x_{j} )
$$
The structure of KASAM is a generalisation of SAM; one branch is the same as SAM. For technical and practical reasons, to make optimisation easier, it was necessary to use a mixture of different densities of basis functions: densities of 4, 8, 16 and 32 basis functions for each input variable. The maximum density of 32 corresponds to the expressive power of the SAM model above, which also has a density of 32 basis functions. Adding together all the densities of basis functions in KASAM gives a total of 60 basis functions for each variable. The two input variables thus have 120 basis functions or neural units in total. The three hidden variables have 180 basis functions or neural units in total, implementing three exterior functions, each with its own input.
Most of the connections represent fully connected dense layers. The only exception is the activation layer, which applies an element-wise sigmoid to each input value, with no trainable parameters. The layers with trainable parameters are indicated with solid black arrows; the constant parameters or weights are indicated with non-solid arrows.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/KASAM_model_specifics.png}
\caption{\small Structure of KASAM.}
\label{fig:fig_SAM_Two_var}
\end{figure}
KASAM is fully capable of representing the SAM model exactly, if the appropriate parameters are chosen or zeroed.
\subsubsection*{KASAM+PR}
KASAM+PR has the same structure as KASAM. The only difference is that a pseudo-rehearsal data augmentation is utilised during training. The training data for the second task is mixed with input-output values of the model after it was trained on Task 1 (but before being trained on Task 2). The rehearsal dataset is constructed from $10\,000$ two-dimensional points sampled uniformly from the unit square. The target values for rehearsal are predicted by the model itself, using the memory of the previous data stored in the model. The augmented dataset consists of $10\,000$ points sampled randomly, with $50\%$ probability of choosing either a rehearsal data point or a Task 2 training data point.
\subsubsection*{ANN}
The ANN model has the same structure as KASAM, and is closer to commonly used initialisation methods: its parameters are randomly initialised and all trainable, whereas KASAM has many specifically chosen parameters that are not trainable. It is possible for the ANN to implement KASAM, but it is unlikely. The ANN does not neatly decompose into single-variable functions, because some of the parameters in the dense layers are not zero. All that remains is the structure seen in Figure~\ref{fig:fig_ANN_model_specifics}, but all parameters are trainable and randomly initialised with the default values of TensorFlow.
\begin{figure}[!h]
\centering
\noindent
\includegraphics[width=\textwidth]{KASAM_Models/ANN_model_specifics.png}
\caption{\small Structure of the ANN model.}
\label{fig:fig_ANN_model_specifics}
\end{figure}
The ANN model was chosen to show how randomly initialised parameters compare to the specifically chosen parameters of KASAM. Given appropriate random initialisation, hyper-parameters, optimizer, and sufficient data, it is possible to get the same performance from the ANN as from KASAM. Most ANNs are very sensitive to many externally chosen values, and require a lot of fine-tuning for acceptable performance.
\subsection{Experiments}
The target functions for Experiments A, B and C were chosen to demonstrate the expressive power and limitations of all four models. The target function for Experiment A is a sum of single-variable functions, which is easily expressed (in theory) by the SAM, KASAM and KASAM+PR models. The target function for Experiment B is a modified version of a difficult-to-learn two-variable function that is the sum of single-variable and multi-variable functions~\citep{malan_cleghorn}. The target function for Experiment C is a product of periodic functions, which is impossible for SAM to represent.
All models and experiments were implemented in Python and TensorFlow. The loss function chosen for training and evaluation in all presented experiments is the mean absolute error (MAE). The training and test sets in all experiments had $10\,000$ data points each. Gaussian noise of variance $0.05$ was added to all training and test target values. The test set was also used as a validation set to quantify the test error during training. All models were trained with a learning rate of $0.001$ using the Adam optimizer, with batch sizes of 100.
\subsubsection*{Task 1}
The training and test sets were sampled uniformly from the Task 1 target functions over the domain $\left[0,1 \right]^{2}$, with Gaussian noise added to the target values. All models were trained for $200$ epochs. The Task 1 target functions for Experiments A, B, and C are given by:
\begin{equation*} \label{eq1}
\begin{split}
Y_{A}(x_{1},x_{2}) &= \cos{(4\pi x_{1})}\exp(-(2x_{1}-1)^{2})+\sin{(\pi x_{2})} \\
Y_{B}(x_{1},x_{2}) &= 2\exp(-\sum_{i=1}^{2} (10 x_{i}-5)^{2}) + \sum_{i=1}^{2}\sin^{2}(10x_{i}-5) \\
Y_{C}(x_{1},x_{2}) &= 1 + \cos(20x_{1}-10) \cos(20x_{2}-10)
\end{split}
\end{equation*}
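For concreteness, the Task 1 targets can be transcribed directly into Python with NumPy; this is a sketch of the definitions above, not necessarily the authors' implementation:
\begin{verbatim}
import numpy as np

def Y_A(x1, x2):
    return np.cos(4*np.pi*x1) * np.exp(-(2*x1 - 1)**2) + np.sin(np.pi*x2)

def Y_B(x1, x2):
    return (2*np.exp(-((10*x1 - 5)**2 + (10*x2 - 5)**2))
            + np.sin(10*x1 - 5)**2 + np.sin(10*x2 - 5)**2)

def Y_C(x1, x2):
    return 1 + np.cos(20*x1 - 10) * np.cos(20*x2 - 10)
\end{verbatim}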
\subsubsection*{Task 2}
The test sets were sampled uniformly from the Task 2 target functions over the domain $\left[0,1\right]^{2}$, with Gaussian noise added to the target values. The training sets were sampled uniformly over the domain $\left[0.45,0.55\right]^{2}$, with target values of zero plus added Gaussian noise. All models were trained for $20$ epochs. The Task 2 target functions for experiments A, B, and C are given by:
$$ Y'_{i}(x_{1},x_{2}) =\begin{cases}
0 & 0.45 < x_{1} < 0.55 \text{ and } 0.45 < x_{2} < 0.55, \\
Y_{i}(x_{1},x_{2}) & \text{otherwise,}
\end{cases}
\qquad i \in \{A,B,C\}.
$$
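Since all three Task 2 targets only zero out the same central square, a single helper suffices; this minimal sketch assumes the \texttt{Y\_A}, \texttt{Y\_B}, \texttt{Y\_C} definitions given earlier:
\begin{verbatim}
import numpy as np

def Y_task2(Y, x1, x2):
    inside = (0.45 < x1) & (x1 < 0.55) & (0.45 < x2) & (x2 < 0.55)
    return np.where(inside, 0.0, Y(x1, x2))

# e.g. Y_task2(Y_A, x1, x2) evaluates Y'_A.
\end{verbatim}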
\section{Empirical Results}\label{sec:exp}
\subsection{Experiment A}
The mean and standard deviation of the test MAE over thirty independent trials for each model are shown in Table~\ref{table:A_results_averaged}. The null hypothesis for each pair-wise comparison between models is that the two models have indistinguishable test errors (significance threshold $p<0.0001$). The p-values were calculated from the raw per-trial test errors.
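As an illustration of such a comparison (the specific statistical test is not named here, so the use of Welch's t-test below is an assumption):
\begin{verbatim}
from scipy import stats

def compare(errors_a, errors_b, threshold=1e-4):
    # errors_a, errors_b: per-trial test MAEs of two models (30 trials each).
    _, p = stats.ttest_ind(errors_a, errors_b, equal_var=False)
    return p, (p < threshold)  # True means the null hypothesis is rejected
\end{verbatim}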
For Task 1, KASAM and KASAM+PR had statistically indistinguishable test errors, and the null hypothesis could not be rejected ($p=0.1822$). All other pair-wise comparisons for Task 1 indicated distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$). Rounding the Task 1 test MAE to a few decimal places shows that SAM, KASAM, and KASAM+PR have practically the same test error, as shown in Table~\ref{table:A_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:A_task_1_training_validation_plot}; the test sets were used for validation as well. The SAM, KASAM and KASAM+PR models learned the target function easily and reasonably quickly. The ANN model struggled to learn the Task 1 target function for Experiment A, as seen in Figure~\ref{fig:A_task_1_training_validation_plot}.
For Task 2, all four models had distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$) for each pair-wise comparison. KASAM+PR had the best test MAE, indicating the benefit of using pseudo-rehearsal techniques with KASAM, as shown in Table~\ref{table:A_results_averaged}. The SAM model had the second-best performance on Task 2, with some memory retention. The KASAM model alone had the third-best performance on Task 2, indicating marginal memory retention. The ANN model had the worst performance and suffered catastrophic forgetting that severely impeded its performance compared to the other models in Table~\ref{table:A_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:A_task_2_training_validation_plot}; the test sets were used for validation as well. All models had similar training loss curves in Figure~\ref{fig:A_task_2_training_validation_plot}, while the validation loss curves displayed interesting dynamics during training (Figure~\ref{fig:A_task_2_training_validation_plot}). KASAM+PR performed the best of all models: pseudo-rehearsal limited catastrophic forgetting and allowed the model to improve after initially degrading in performance. The SAM model degraded in performance and plateaued with little variance. The KASAM and ANN models had the worst performance, with a lot of variance in validation MAE. The ANN validation loss curves exhibited oscillatory behaviour, possibly due to the use of Adam as an optimizer, as shown in Figure~\ref{fig:A_task_2_training_validation_plot}.
\begin{table}[!h]
\centering
\begin{tabular}{|c c c|}
\hline
Model & Task 1 MAE & Task 2 MAE \\ [0.5ex]
\hline\hline
SAM & \bf{0.040 (0.000)} & 0.334 (0.004) \\
ANN & 0.169 (0.029) & 1.279 (0.107) \\
KASAM & 0.042 (0.001) & 0.977 (0.055) \\
KASAM+PR & 0.042 (0.001) & \bf{0.051 (0.001)} \\ [0.5ex]
\hline
\end{tabular}
\caption{Experiment A: final test mean absolute error (MAE) for Task 1 and Task 2, averaged over 30 trials and rounded to three decimal places.}
\label{table:A_results_averaged}
\end{table}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/training_time_task_1.png}
\caption{Training loss curve.}
\label{fig:A_task_1_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/validation_task_1.png}
\caption{Validation loss curve.}
\label{fig:A_task_1_validation_plot}
\end{subfigure}
\caption{\small
Experiment A: Task 1 training and validation loss during training. All models were trained on $10\,000$ training data points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:A_task_1_training_validation_plot}
\end{figure}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/training_time_task_2.png}
\caption{Training loss curve.}
\label{fig:A_task_2_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/validation_task_2.png}
\caption{Validation loss curve.}
\label{fig:A_task_2_validation_plot}
\end{subfigure}
\caption{\small
Experiment A: Task 2 training and validation loss during training. All models were trained on $10\,000$ training data points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:A_task_2_training_validation_plot}
\end{figure}
The outputs of the models and the target functions are visualised in Figure~\ref{fig:visualisation_experiment_A} with grid-sampled points. Each row corresponds to a different model, from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model, from left to right, are: the Task 1 target function; the model output after training on Task 1; the Task 2 target function; the model output after training on Task 2; and the absolute difference between the model's first and second outputs. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.
For Task 1, all versions of SAM and KASAM could easily fit the target function, as visualised in Figure~\ref{fig:visualisation_experiment_A}. The ANN model struggled to fit the first target function.
In Task 2, SAM exhibited intrinsic memory retention, except within a cruciform region of overlap between Task 1 and Task 2, as visualised in Figure~\ref{fig:visualisation_experiment_A} and consistent with the developed theory. The ANN suffered catastrophic forgetting while training to output zero in the central region: it fitted the training data very well, but ruined global memory retention. KASAM did not exhibit perfect memory retention on its own, as seen in Figure~\ref{fig:visualisation_experiment_A}. KASAM+PR, which used pseudo-rehearsal, yielded the best memory retention of all four models and achieved nearly perfect performance on the second target function, as seen in Figure~\ref{fig:visualisation_experiment_A}.
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_SAM_0.png}
\caption{SAM}
\label{fig:A_inspection_SAM}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_ANN_0.png}
\caption{ANN}
\label{fig:A_inspection_ANN}
\end{subfigure}
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_KASAM_0.png}
\caption{KASAM}
\label{fig:A_inspection_KASAM}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_A/Inspection_KASAM_PR_0.png}
\caption{KASAM+PR}
\label{fig:A_inspection_KASAM_PR}
\end{subfigure}
\hfill
\caption{\small
Experiment A: outputs of the models and the target functions. Each row corresponds to a different model, from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model, from left to right, are: the Task 1 target function; the model output after training on Task 1; the Task 2 target function; the model output after training on Task 2; and the absolute difference between the model's first and second outputs. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.}
\label{fig:visualisation_experiment_A}
\end{figure}
\subsection{Experiment B}
The mean and standard deviation of the test MAE over thirty independent trials for each model are shown in Table~\ref{table:B_results_averaged}. The null hypothesis for each pair-wise comparison between models is that the two models have indistinguishable test errors (significance threshold $p<0.0001$). The p-values were calculated from the raw per-trial test errors.
For Task 1, KASAM and KASAM+PR had statistically indistinguishable test errors, and the null hypothesis could not be rejected ($p=0.0598$). All other pair-wise comparisons for Task 1 indicated distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$). Rounding the Task 1 test MAE to a few decimal places shows that KASAM and KASAM+PR have the same test error, as shown in Table~\ref{table:B_results_averaged}. SAM had slightly worse performance than KASAM, and the ANN model was the worst-performing. The training and validation MAE during training are shown in Figure~\ref{fig:B_task_1_training_validation_plot}; the test sets were used for validation as well. The KASAM and KASAM+PR models learned the target function easily and reasonably quickly. The SAM model could not represent the Task 1 target function. The ANN model struggled to learn the Task 1 target function for Experiment B, as seen in Figure~\ref{fig:B_task_1_training_validation_plot}.
For Task 2, all four models had distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$) for each pair-wise comparison. KASAM+PR had the best test MAE, indicating the benefit of using pseudo-rehearsal techniques with KASAM, as shown in Table~\ref{table:B_results_averaged}. The SAM model had the second-best performance on Task 2, with some memory retention. The KASAM model alone had the third-best performance on Task 2, indicating marginal memory retention. The ANN model had the worst performance and suffered catastrophic forgetting that severely impeded its performance compared to the other models in Table~\ref{table:B_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:B_task_2_training_validation_plot}; the test sets were used for validation as well. All models had similar training loss curves in Figure~\ref{fig:B_task_2_training_validation_plot}, while the validation loss curves displayed peculiar dynamics during training (Figure~\ref{fig:B_task_2_training_validation_plot}). KASAM+PR performed the best of all models: pseudo-rehearsal limited catastrophic forgetting and allowed the model to improve after initially degrading in performance. The SAM model degraded in performance and plateaued with little variance. The KASAM and ANN models had the worst performance, with a lot of variance in validation MAE, as shown in Figure~\ref{fig:B_task_2_training_validation_plot}.
\begin{table}[!h]
\centering
\begin{tabular}{|c c c|}
\hline
Model & Task 1 MAE & Task 2 MAE \\ [0.5ex]
\hline\hline
SAM & 0.092 (0.002) & 0.144 (0.003) \\
ANN & 0.311 (0.005) & 1.465 (0.080) \\
KASAM & \bf{0.042 (0.001)} & 0.875 (0.376) \\
KASAM+PR & \bf{0.045 (0.007)} & \bf{0.061 (0.008)} \\ [0.5ex]
\hline
\end{tabular}
\caption{Experiment B: final test mean absolute error (MAE) for Task 1 and Task 2, averaged over 30 trials and rounded to three decimal places.}
\label{table:B_results_averaged}
\end{table}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/training_time_task_1.png}
\caption{Training loss curve.}
\label{fig:B_task_1_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/validation_task_1.png}
\caption{Validation loss curve.}
\label{fig:B_task_1_validation_plot}
\end{subfigure}
\caption{\small
Experiment B: Task 1 training and validation loss during training. All models were trained on $10\,000$ training data points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:B_task_1_training_validation_plot}
\end{figure}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/training_time_task_2.png}
\caption{Training loss curve.}
\label{fig:B_task_2_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/validation_task_2.png}
\caption{Validation loss curve.}
\label{fig:B_task_2_validation_plot}
\end{subfigure}
\caption{\small
Experiment B: Task 2 training and validation loss during training. All models were trained on $10\,000$ training data points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:B_task_2_training_validation_plot}
\end{figure}
The outputs of the models and the target functions are visualised in Figure~\ref{fig:visualisation_experiment_B} with grid-sampled points. Each row corresponds to a different model, from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model, from left to right, are: the Task 1 target function; the model output after training on Task 1; the Task 2 target function; the model output after training on Task 2; and the absolute difference between the model's first and second outputs. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.
For Task 1, KASAM could easily fit the target function, as visualised in Figure~\ref{fig:visualisation_experiment_B}. The SAM and ANN models struggled to fit the first target function.
In Task 2, SAM exhibited intrinsic memory retention, except within a cruciform region of overlap between Task 1 and Task 2, as visualised in Figure~\ref{fig:visualisation_experiment_B} and consistent with the developed theory. The ANN suffered catastrophic forgetting while training to output zero in the central region: it fitted the training data very well, but ruined global memory retention. KASAM did not exhibit perfect memory retention on its own, as seen in Figure~\ref{fig:visualisation_experiment_B}. KASAM+PR, which used pseudo-rehearsal, yielded the best memory retention of all four models and achieved nearly perfect performance on the second target function, as seen in Figure~\ref{fig:visualisation_experiment_B}.
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_SAM_0.png}
\caption{SAM}
\label{fig:B_inspection_SAM}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_ANN_0.png}
\caption{ANN}
\label{fig:B_inspection_ANN}
\end{subfigure}
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_KASAM_0.png}
\caption{KASAM}
\label{fig:B_inspection_KASAM}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_B/Inspection_KASAM_PR_0.png}
\caption{KASAM+PR}
\label{fig:B_inspection_KASAM_PR}
\end{subfigure}
\hfill
\caption{\small
Experiment B: outputs of the models and the target functions. Each row corresponds to a different model, from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model, from left to right, are: the Task 1 target function; the model output after training on Task 1; the Task 2 target function; the model output after training on Task 2; and the absolute difference between the model's first and second outputs. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.}
\label{fig:visualisation_experiment_B}
\end{figure}
\subsection{Experiment C}
The mean and standard deviation of the test MAE over thirty independent trials for each model are shown in Table~\ref{table:C_results_averaged}. The null hypothesis for each pair-wise comparison between models is that the two models have indistinguishable test errors (significance threshold $p<0.0001$). The p-values were calculated from the raw per-trial test errors.
The Task 1 test MAE indicated that KASAM and KASAM+PR have statistically indistinguishable test errors, and the null hypothesis could not be rejected ($p=0.6018$); these two models had the best performance. The SAM and ANN models also had statistically indistinguishable test errors ($p=0.0007$, above the threshold), and the worst performance. All other pair-wise comparisons for Task 1 indicated distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$). The Task 1 test MAE, rounded to a few decimal places, is presented in Table~\ref{table:C_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:C_task_1_training_validation_plot}; the test sets were used for validation as well. The KASAM and KASAM+PR models learned the target function easily and reasonably quickly. The SAM and ANN models struggled to learn the Task 1 target function for Experiment C, as seen in Figure~\ref{fig:C_task_1_training_validation_plot}.
For Task 2, all four models had distinguishable test errors, and the null hypothesis was rejected ($p<0.0001$) for each pair-wise comparison. KASAM+PR had the best test MAE, indicating the benefit of using pseudo-rehearsal techniques with KASAM, as shown in Table~\ref{table:C_results_averaged}. The SAM model had the second-best performance on Task 2, with some memory retention. The KASAM model alone had the third-best performance on Task 2, indicating marginal memory retention. The ANN model had the worst performance and suffered catastrophic forgetting that severely impeded its performance compared to the other models in Table~\ref{table:C_results_averaged}. The training and validation MAE during training are shown in Figure~\ref{fig:C_task_2_training_validation_plot}; the test sets were used for validation as well. All models had similar training loss curves in Figure~\ref{fig:C_task_2_training_validation_plot}, while the validation loss curves displayed interesting dynamics during training (Figure~\ref{fig:C_task_2_training_validation_plot}). KASAM+PR performed the best of all models: pseudo-rehearsal limited catastrophic forgetting and allowed the model to improve after initially degrading in performance. The SAM model degraded in performance and plateaued with little variance. The KASAM and ANN models had the worst performance, with a lot of variance in validation MAE, as shown in Figure~\ref{fig:C_task_2_training_validation_plot}.
\begin{table}[!h]
\centering
\begin{tabular}{|c c c|}
\hline
Model & Task 1 MAE & Task 2 MAE \\ [0.5ex]
\hline\hline
SAM & 0.430 (0.003) & 0.467 (0.004) \\
ANN & 0.427 (0.003) & 1.004 (0.021) \\
KASAM & \bf{0.042 (0.001)} & 0.748 (0.137) \\
KASAM+PR & \bf{0.042 (0.001)} & \bf{0.063 (0.008)} \\ [0.5ex]
\hline
\end{tabular}
\caption{Experiment C: final test mean absolute error (MAE) for Task 1 and Task 2, averaged over 30 trials and rounded to three decimal places.}
\label{table:C_results_averaged}
\end{table}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/training_time_task_1.png}
\caption{Training loss curve.}
\label{fig:C_task_1_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/validation_task_1.png}
\caption{Validation loss curve.}
\label{fig:C_task_1_validation_plot}
\end{subfigure}
\caption{\small
Experiment C: Task 1 training and validation loss during training. All models were trained on $10\,000$ training data points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:C_task_1_training_validation_plot}
\end{figure}
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/training_time_task_2.png}
\caption{Training loss curve.}
\label{fig:C_task_2_training_plot}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/validation_task_2.png}
\caption{Validation loss curve.}
\label{fig:C_task_2_validation_plot}
\end{subfigure}
\caption{\small
Experiment C: Task 2 training and validation loss during training. All models were trained on $10\,000$ training data points. The initial loss before training is shown at the zeroth epoch.}
\label{fig:C_task_2_training_validation_plot}
\end{figure}
The outputs of the models and the target functions are visualised in Figure~\ref{fig:visualisation_experiment_C} with grid-sampled points. Each row corresponds to a different model, from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model, from left to right, are: the Task 1 target function; the model output after training on Task 1; the Task 2 target function; the model output after training on Task 2; and the absolute difference between the model's first and second outputs. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.
For Task 1, both KASAM and KASAM+PR could easily fit the target function, as visualised in Figure~\ref{fig:visualisation_experiment_C}. The SAM and ANN models struggled to fit the first target function.
In Task 2, SAM exhibited intrinsic memory retention, except within a cruciform region of overlap between Task 1 and Task 2, as visualised in Figure~\ref{fig:visualisation_experiment_C} and consistent with the developed theory. The ANN suffered catastrophic forgetting while training to output zero in the central region: it fitted the training data very well, but ruined global memory retention. KASAM did not exhibit perfect memory retention on its own, as seen in Figure~\ref{fig:visualisation_experiment_C}. KASAM+PR, which used pseudo-rehearsal, yielded the best memory retention of all four models and achieved nearly perfect performance on the second target function, as seen in Figure~\ref{fig:visualisation_experiment_C}.
\begin{figure} [!h]
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_SAM_0.png}
\caption{SAM}
\label{fig:C_inspection_SAM}
\end{subfigure}
\hfill
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_ANN_0.png}
\caption{ANN}
\label{fig:C_inspection_ANN}
\end{subfigure}
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_KASAM_0.png}
\caption{KASAM}
\label{fig:C_inspection_KASAM}
\end{subfigure}
\hfill
\centering
\begin{subfigure}[!h]{0.9\textwidth}
\centering
\includegraphics[width=\textwidth]{KASAM_Experiment_C/Inspection_KASAM_PR_0.png}
\caption{KASAM+PR}
\label{fig:C_inspection_KASAM_PR}
\end{subfigure}
\hfill
\caption{\small
Experiment C: outputs of the models and the target functions. Each row corresponds to a different model, from top to bottom: SAM, ANN, KASAM, and KASAM+PR. The images for each model, from left to right, are: the Task 1 target function; the model output after training on Task 1; the Task 2 target function; the model output after training on Task 2; and the absolute difference between the model's first and second outputs. A model with nearly perfect memory retention would only differ in the small square region $\left[0.45,0.55 \right] \times \left[0.45,0.55 \right]$ in the centre of the unit square.}
\label{fig:visualisation_experiment_C}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
This paper contributes in three ways: theoretically, technically, and empirically. Catastrophic forgetting was analysed, and theoretical models, namely SAM and KASAM, were developed to combat it. The developed models were implemented in TensorFlow and released for public use. The models were analysed on a simple problem to empirically demonstrate their effectiveness.
The paper introduced and implemented Spline Additive Models (SAMs), demonstrating that SAM is robust to catastrophic forgetting and that, while SAM itself is not a universal function approximator, it is still useful for many potential applications. The Kolmogorov-Arnold Spline Additive Model (KASAM) was introduced and implemented. KASAM is shown to be a universal function approximator that is expressive, but more susceptible to catastrophic forgetting than SAM. It is unknown whether tractable models exist with better guaranteed bounds on catastrophic forgetting.
The empirical scope of the paper was limited to target functions of two variables, mainly for demonstration purposes. The statistical analysis and detailed inspection of the results show that SAM exhibits intrinsic memory retention that is robust to catastrophic forgetting. The memory retention of SAM is not perfect: SAM exhibits cross-shaped regions of overlapping interference in higher-dimensional models. SAM is also shown not to be a universal function approximator. Extending SAM to a universal function approximator leads to a more expressive model, KASAM, which has limited intrinsic memory retention but is a universal function approximator.
Conventional neural networks based on affine or linear transformations can be susceptible to catastrophic forgetting. A more typical artificial neural network with the exact same structure and activation functions as KASAM, but with randomly initialised and trainable parameters, performed significantly worse than KASAM. The feed-forward artificial neural network (ANN) had the capacity to implement KASAM and spline functions, but it did not exploit this potential in any appreciable way during training and evaluation. KASAM exhibited superior performance with some weights fixed to chosen constants, compared to having all of its parameters randomly initialised and trainable.
KASAM in combination with other regularisation, data-augmentation and training techniques can mitigate catastrophic forgetting. KASAM in combination with pseudo-rehearsal had the best performance of all models considered for the chosen regression problem. Pseudo-rehearsal works well for memory retention in low-dimensional problems.
\section{Opportunities for Further Research}\label{sec:opportunities}
This paper explored a few models and ideas to combat catastrophic forgetting. It remains an open question whether there are other effective approaches.
Future research can explore: more efficient implementations; letting the sub-intervals vary during training instead of using uniform partitions; experimenting with different target functions; experimenting with higher-dimensional problems; increasing the density of basis functions during training; the incorporation of B-spline functions into other models such as recurrent neural networks, LSTMs, GRUs or reservoir computers; evaluating the bias-variance decomposition for over-parameterised B-spline functions; fitness landscape analysis of B-spline functions; and different initialisations of parameters for KASAM.
\section{Introduction}
With the rapid integration of internet technology and traditional finance,
more and more financial transactions and activities, such as third-party payment
and online lending, have been digitalized. Taking online payment as an example,
it served over 2.8 billion users with a transaction value of almost 3.1 trillion
US dollars worldwide in 2018.
Alongside this growth, financial frauds tend to become more subtle and diversified.
According to the Nilson report \cite{nilson2019fraud}, fraudulent activities cost about 11.2 billion US dollars worldwide in 2012, and the number increased by almost $150\%$, up to 27.85 billion US dollars, in 2018.
Currently, financial agencies defend against fraud attacks
by implementing decision-making engines using expert-defined rules,
which are based on expert experience and analysis of
existing frauds. This approach has been widely used in the above financial
scenarios and has achieved good results. In practice, however,
such expert-defined rule systems suffer from two fundamental issues:
(i) it is difficult to learn effective rules due to the lack of fraud samples;
(ii) owing to the delayed nature of fraud detection, the rules inevitably
are not updated in time, and the systems suffer from high false alarm rates
and expensive maintenance costs; see \cite{bolton2002statistical}.
Meanwhile, traditional machine learning models have similar issues \cite{bhattacharyya2011data,ngai2011the}. To improve the predictive ability of the model,
federated learning frameworks have recently been proposed, even
at the cost of considerable training time due to the relatively heavy
encryption and communication required for privacy preservation. Federated learning \cite{bonawitz2017practical,yang2019federated} is an emerging frontier field studying privacy-preserving collaborative machine learning while leaving data instances at their providers locally. It enables multiple agencies to collaboratively learn a shared model while keeping all the training data stored in their own private databases.
In order to take advantage of both rule systems and federated learning, we propose an F-score based ensemble model for automatic rule extraction (FEARE) and implement it
in a federated learning framework (Fed-FEARE).
In the process of building a tree in FEARE,
we employ maximization of the F-score as the loss function, or partition criterion,
at each node in a recursive manner.
The combination of the partition logic of the child nodes then forms a rule
from the built tree. Next, the data covered by the rule are removed.
Repeating the above tree-building process on the remaining data
finally results in the formation of a set of rules from the ensemble of trees.
It should be noted that the rule-extraction method of FEARE
is quite different from that of traditional decision trees \cite{Breiman:1984jka,quinlan1993c}.
For comparison, there are three major differences:
(1) the loss function or partition criterion:
F-score $vs.$ Gini index or gain ratio; (2) rules are learned progressively rather than simultaneously; (3) each rule is extracted from a single tree with high quality.
With Fed-FEARE, we train an ensemble model and evaluate its performance
on large-scale real-world data sets from both
a nation-wide joint-stock commercial bank (\textbf{BANK})
and
a cloud payment platform (\textbf{CLOUD PAY}),
two separate legal entities in a China nation-wide financial holdings group,
each with a certain number of fraud cases. The experimental results show that recall
is greatly improved, with high precision, compared to that without Fed-FEARE.
We also apply horizontal Fed-FEARE to precision marketing.
As expected, both precision and lift gain show obvious improvements.
The rest of this manuscript is organized as follows. In Sect. \ref{sec:Alg},
the F-score based ensemble model for automatic rule set extraction is proposed, and the vertical and horizontal federated learning frameworks for the model are discussed in detail. Sects. \ref{sec:er} and \ref{sec:fm} give the details of our experimental
results for horizontal and vertical Fed-FEARE, respectively.
Conclusions are presented in Sect. \ref{sec:con}.
\section{Algorithm for F-score based Ensemble Model for Automatic Rule Extraction and Its
Federated Learning Framework}\label{sec:Alg}
This section consists of two parts. Firstly, we propose an F-score based ensemble tree model for automatic risk-rule extraction. Secondly, we implement it in both the vertical and horizontal federated learning frameworks, including the encryption and communication required for privacy preservation.
\subsection{Loss Function, Pruning and Automatic Rule Set Extraction}
In general, a rule can be evaluated by its precision and recall.
For a given class-labeled data set $D$ with $n_{target}$ target samples,
let $n_{cover}$ and $n_{correct}$ be the numbers of instances covered by the rule
and correctly classified by it, respectively. Precision and recall
are then defined as
\begin{equation}\label{Pre}
\begin{split}
precision &=\frac{n_{correct}}{n_{cover}},\\
recall &= \frac{n_{correct}}{n_{target}}.
\end{split}
\end{equation}
Ideally, we hope to increase recall as much as possible
under the condition of high precision.
In the anti-fraud scenario, however, it is often unreliable to simply use precision or recall as a rule measurement, since high precision and high recall are always difficult to achieve simultaneously.
For instance, suppose rule one correctly classifies 80 of the 100 samples it covers,
while rule two correctly classifies both of the two samples it covers. Though rule two
has obviously higher precision, it is still not a better rule, due to its small coverage.
Likewise, coverage alone cannot be used as a measure of a rule.
Therefore, it is necessary to construct other measures to evaluate rules.
The F-score ($\mbox{F}_{\beta}\mbox{-score}$), a weighted average of precision and recall,
can be treated as such a rule measurement. Thus, it acts as the attribute (feature) splitting criterion:
\begin{equation}\label{F-score}
\resizebox{.7\linewidth}{!}{$
\displaystyle
\mbox{F}_{\beta}\mbox{-score}= (1 + \beta^2)\frac{precision \cdot recall}{\beta^2\cdot precision + recall}.
$}
\end{equation}
For $\beta = 1$, precision and recall have the same weight.
When the importance of precision is higher than that of recall,
$\beta < 1$ can be set, and vice versa.
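In code, the rule measurement reduces to a few lines; the following sketch follows the definitions above directly:
\begin{verbatim}
def f_beta(n_correct, n_cover, n_target, beta=1.0):
    # Precision and recall of a rule, then their weighted average.
    if n_correct == 0:
        return 0.0
    precision = n_correct / n_cover
    recall = n_correct / n_target
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
\end{verbatim}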
The F-score gain is defined as the difference in F-score before and after
splitting the data set on an attribute.
Accordingly, the attribute with the highest F-score gain
is chosen as the best splitting attribute of a child node.
To achieve this, we need to calculate and find the best splitting point of each attribute.
A numerical attribute is sorted in descending order by value;
for categorical attributes, suitable encoding methods are adopted to convert them to numerical type.
The averages of each pair of adjacent values of an attribute with $n$ values form $n-1$ candidate splitting points. For each attribute, the point with the highest F-score gain is taken as its best partition point.
Furthermore, the best splitting attribute, with the highest F-score gain overall, is found by traversing all attributes. For this attribute, the instance space is divided into two
sub-spaces at the best splitting point. Note that the top–down, recursive partition
continues as long as there is an attribute
that explains the target with statistical significance.
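A minimal sketch of this search for one numerical attribute is given below; for simplicity it scores each candidate half-space with \texttt{f\_beta} directly, rather than computing the gain relative to the parent node:
\begin{verbatim}
import numpy as np

def best_split(x, y, beta=1.0):
    # x: values of one attribute; y: binary labels (1 = target class).
    n_target = int(y.sum())
    values = np.unique(x)                        # sorted distinct values
    midpoints = (values[:-1] + values[1:]) / 2   # the n-1 candidate points
    best = (None, None, -np.inf)                 # (point, side, score)
    for s in midpoints:
        for side, mask in (("<=", x <= s), (">", x > s)):
            score = f_beta(int(y[mask].sum()), int(mask.sum()), n_target, beta)
            if score > best[2]:
                best = (s, side, score)
    return best
\end{verbatim}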
\begin{algorithm}[htbp]
\caption{Learning A "IF-THEN" Classification Rule of A Single Tree
}
\label{single:algorithm}
\textbf{Input}: D, the given class-labeled data set;\\
\textbf{Parameter}: max$\_$depth, $\beta$, pruning$\_$min\\
\textbf{Output}: a "IF-THEN" classification rule
\begin{algorithmic}[1]
\STATE \textbf{Set} Rule$\_$Single = [], Max$\_$F-score = 0.0
\STATE \textbf{Set} Add\_Rule = \textbf{True}
\WHILE{depth $\leq$ max$\_$depth \textbf{and} Add\_Rule}
\STATE \textbf{Set} Keep = \{ \}, Best$\_$Split = \{ \}
\STATE depth $\leftarrow $ depth + 1
\STATE Add\_Rule = \textbf{False}
\FOR{feature in features}
\STATE Keep[feature] = F-score$\_$Cal(D, feature, $\beta$)
\ENDFOR
\FOR{feature in Keep}
\IF {feature's best F-score \textgreater Max\_F-score + pruning\_min}
\STATE Max$\_$F-score = feature's best F-score
\STATE \textbf{Add} Keep[feature] \textbf{to} Best$\_$Split
\STATE Add\_Rule = \textbf{True}
\ELSE
\STATE continue
\ENDIF
\ENDFOR
\STATE \textbf{Add} Best$\_$Split \textbf{to} Rule$\_$single
\STATE D $\leftarrow $ D $\setminus$ \{
Samples covered by Rule$\_$single\}
\ENDWHILE
\STATE \textbf{return} Rule$\_$single
\end{algorithmic}
\end{algorithm}
In the tree-building process, due to noise and outliers in the data set, many branches merely represent abnormal points, resulting in model overfitting.
Pruning can often effectively deal with this problem, that is, using statistics to cut off unreliable branches. Since none of the pruning methods is essentially better than the others, we use a relatively simple pre-pruning: if the F-score gain is less than a threshold, the node partition
stops. Thus, a smaller and simpler tree is constructed after pruning.
Naturally, decision-makers prefer less complex rules, since they may be considered
more comprehensible and robust from a business perspective.
Algorithm \ref{single:algorithm} presents a typical algorithmic framework for the top–down induction of a rule tree using growing and pruning. It uses a greedy depth-first strategy,
constructing the tree in a recursive manner. In each iteration,
the algorithm considers splitting the training data set using the F-score gain as the partition criterion,
and removes from the data set those instances that are not covered by the node logic.
As a result, each child node hierarchically subdivides the training data set into smaller subsets, until the stopping criterion is satisfied.
As can be seen from Algorithm \ref{single:algorithm}, F-score$\_$Cal is a function that calculates the F-score before and after a child node partition and finds the best split of a single attribute (feature). $max\_depth$ and $pruning\_min$ are the depth of the tree and the threshold of the F-score gain, respectively.
$\beta$ is the parameter in Eq.~\ref{F-score}, where $\beta=1$ corresponds to the
F$_{1}$-score. $Keep$ records the F-score based partition gain and the calculation logic symbol for all attributes, and Rule$\_$Single is the rule formed by a single tree. Tracing the path from the root to a child node in the tree, an "IF-THEN" classification rule is thus extracted.
\begin{algorithm}[htbp]
\caption{Learning a Set of "IF-THEN" Classification Rules}
\label{multi:algorithm}
\textbf{Input}: D, the given class-labeled data set;\\
\textbf{Parameter}: tree$\_$number, max$\_$depth, $\beta$\\
\textbf{Output}: a set of "IF-THEN" classification rules
\begin{algorithmic}[1]
\STATE \textbf{Set} Rule$\_$Set = \{\}, number = 0
\WHILE{number $\leq$ tree$\_$number}
\STATE rule = Single$\_$Risk$\_$Rule(D,max$\_$depth,$\beta$)\\
\STATE \textbf{Add} rule \textbf{to} Rule$\_$Set
\STATE D $\leftarrow $ D $\setminus$ data set covered by rule
\STATE number $\leftarrow$ number + 1
\ENDWHILE
\STATE \textbf{return} Rule$\_$Set
\end{algorithmic}
\end{algorithm}
Next, the instances covered by the extracted rule are removed, and the remaining instances constitute the new training set.
Repeating the above tree-building process on the remaining data,
a set of rules is automatically extracted from the built trees.
As shown in Algorithm \ref{multi:algorithm}, Rule$\_$Set contains and
returns the set of rules.
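In Python, the ensemble loop of Algorithm \ref{multi:algorithm} can be sketched as follows, assuming \texttt{single\_rule} implements Algorithm \ref{single:algorithm} and \texttt{covers(rule, D)} returns a boolean mask of the instances covered by the rule (both are placeholders):
\begin{verbatim}
def extract_rule_set(D, tree_number, max_depth, beta):
    rule_set = []
    for _ in range(tree_number):
        rule = single_rule(D, max_depth, beta)  # Algorithm 1
        rule_set.append(rule)
        D = D[~covers(rule, D)]                 # drop the covered instances
    return rule_set
\end{verbatim}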
\subsection{FEARE in Federated Learning Framework}
Federated learning first focused on the horizontal setting, in which each node has a subset of
data instances with complete data attributes. There are also many studies of the vertical
federated learning setting, where the data set is vertically partitioned and owned by different data providers,
each holding a disjoint subset of attributes for all data instances.
For both horizontal and vertical federated learning, the goal is to learn a machine learning model
collaboratively without transferring any data from one data provider to another. In our security definition,
all parties are honest-but-curious.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\linewidth]{figs/frame_work_flow_vertical.jpg}
\caption{ A Vertical Federated Learning Framework for the F-score Calculation and the Best Split Search of One Feature from the Passive Party.}
\label{fig:VFL}
\end{figure}
The main challenge in Fed-FEARE is how to calculate the F-score. The key tools for addressing this challenge
are partially homomorphic encryption schemes.
Paillier encryption \cite{Paillier1999} allows any party to encrypt its
data with a public key, while the private key for decryption
is owned by the third party. With this additively
homomorphic encryption we can compute the sum
of two encrypted numbers, as well as the product of an
unencrypted number and an encrypted one, which can be denoted
as $[\![u]\!] + [\![v]\!] = [\![u + v]\!]$ and $v[\![u]\!] = [\![vu]\!]$, using $[\![\cdot]\!]$ for the encryption operation. Moreover, another advantage of Paillier encryption is that the results of successive encryptions of the same $u$ are different. Therefore, the encrypted labels $[\![y_i]\!]$, where $y_i \in \{0,1\}$, do not lead to information leakage.
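These two homomorphic properties can be checked with, for instance, the open-source python-paillier (\texttt{phe}) library; the choice of library here is purely illustrative and is an assumption, not a statement about our implementation:
\begin{verbatim}
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()
u, v = 3, 4
u_enc = public_key.encrypt(u)
v_enc = public_key.encrypt(v)

# Additive homomorphism: [[u]] + [[v]] = [[u + v]]
assert private_key.decrypt(u_enc + v_enc) == u + v
# Plaintext scalar multiplication: v[[u]] = [[vu]]
assert private_key.decrypt(v * u_enc) == v * u
\end{verbatim}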
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.62\textwidth]{figs/frame_work_flow_horizontal.jpg}
\caption{ A Horizontal Federated Learning Framework of F-score Calculation and the Best Split Search of One Feature.}
\label{fig:HFL}
\end{figure*}
In \textbf{Vertical Federated Learning}, we follow the notation in \cite{cheng2019secureboost:}. The data set is vertically
partitioned and distributed between two honest-but-curious
private parties, \textbf{Party A} (the passive data provider, with only features)
and \textbf{Party B} (the active data provider, with
features and labels $y_i$). The F-score calculation for the features from \textbf{Party B} is the same problem as in the non-federated case. For the features from \textbf{Party A}, the F-score calculation is still feasible: with the help of Paillier encryption, the vertical federated learning framework for the F-score calculation in FEARE can be designed as in Figure \ref{fig:VFL}. Since the labels belong to \textbf{Party B}, the rule set is obtained by \textbf{Party B}. However, the value information of each feature from \textbf{Party A} is encoded by $(j,S_j)$, and the splitting point found by \textbf{Party B} is in the form of $j$ instead of $S_j$. Therefore, this framework is secure for the passive data provider, even in the case of multiple passive parties.
In \textbf{Horizontal Federated Learning}, we follow the notation in \cite{yang2019Quasi}. The data set is horizontally
partitioned and distributed over at least two honest-but-curious
private parties, \textbf{Party A} (a guest data provider with features and labels $y^A_i$)
and \textbf{Party B} (a guest data provider with
features and labels $y^B_i$). The F-score calculation for the data from \textbf{Party A\&B} is more complex than in the vertical case: because the value information of each feature is shared by \textbf{Party A\&B}, the histogram of each party must not be shared with any other party. For the purpose of privacy preservation, we designed the horizontal federated learning framework shown in Figure \ref{fig:HFL}. An honest-but-curious third party, the \textbf{coordinator}, is introduced here. For Paillier encryption, the private key for decryption is owned by the \textbf{coordinator}, and any party can encrypt its data with the public key. After receiving the value information $S^A_i$ and $S^B_j$ from the guest parties, the \textbf{coordinator} sends an encrypted random histogram of one feature to one party
(\textbf{Party B} in Figure \ref{fig:HFL}). The calculation of the F-score is then accomplished by the \textbf{coordinator} (Step 6 in Figure \ref{fig:HFL}), once the encrypted histogram of the feature comes back. In this framework, each guest party only knows the histogram of a feature based on its own data, while the \textbf{coordinator} knows the histograms of features based on the whole data. The final rule set is shared by all parties.
\section{Financial Anti-Fraud within Horizontal Fed-FEARE Framework}\label{sec:er}
This section is organized in three parts. Firstly, we describe the parameter assignment used in training the model. Secondly, we introduce the data sets tested in our horizontal Fed-FEARE framework. Finally, we present the results of our experiments.
\subsection{Parameter Assignment}
There are only four parameters in Fed-FEARE: $max\_depth$, $tree\_{number}$, the pruning threshold ($pruning\_min$) and the weight factor $\beta$. Two main factors, business logic and the difficulty of online deployment, determine the parameter assignment. Due to the requirements of model generalization and interpretability, $max\_depth$ is set to 3, so that the logic of a rule always contains at most three conditions.
Furthermore, $tree\_{number}$ is set to 3, meaning that
a set of no more than three rules is used to solve a specific anti-fraud problem. This avoids insufficient coverage due to too few trees, low accuracy due to too many trees, and online maintenance issues.
The pruning threshold ($pruning\_min$) is fixed at the constant value 0.01.
Moreover, $\beta = 1.0$ is often used, indicating that recall and precision
are of equal importance;
it can be adjusted according to whether the business target is high precision or high recall.
\subsection{Data set Description}
For risk-management reasons, the variable names of the relevant data in the financial agencies will not be disclosed.
Using horizontal Fed-FEARE,
\textbf{BANK}
trains an ensemble tree model together with
\textbf{CLOUD PAY},
which holds China's convenience payment data.
Note that the target variable is coded as 1 to indicate default (according to a default or fraud definition chosen by the bank) and 0 to indicate non-default. From the \textbf{BANK}, there are 75,295 non-defaults and only 20 defaults (events or frauds) in the data set.
Combining these horizontally with 60 defaults from \textbf{CLOUD PAY},
a training data set of 75,375 observations is formed.
This corresponds to a highly imbalanced ratio of positive to negative
instances of about 1:940.
The data set consists of 25 variables.
Unlike a traditional tree model, Fed-FEARE maintains the model at both
agencies, without any private data being transferred.
\subsection{Results and Discussions}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.33\textwidth]{figs/fig1.pdf}
\includegraphics[width=0.33\linewidth]{figs/fig2.pdf}
\includegraphics[width=0.33\linewidth]{figs/fig3.pdf}
\caption{ Precision (left, bars), cp (left, lines), recall (middle, bars), cr (middle, lines) and F-score (right) within (blue) and without (orange) our horizontal Fed-FEARE.}
\label{fig:PR}
\end{figure*}
Among these variables, 15 are characterized as identity traits,
consumption grade, net assets, loan amount, etc. The remaining variables
represent payments of apartment utilities (including electricity,
water, gas and property charges) and mobile communication
costs.
\begin{table}[htbp]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{llll}
\hline
rule number &"1" &"2" &"3"\\
\hline
node logic &var$\_$12 $\leq$ -4.8 &var$\_$17 $\leq$ -3.0 &var$\_$14 $\leq$ -4.7 \\
node logic &var$\_$1 $>$ -26.6 &var$\_$14 $>$ 1.3 &var$\_$10 $\leq$ -2.27\\
node logic &null &var$\_$5 $>$ -3.3 &var$\_$2 $\leq$ 3.2 \\
pi &0.08$\%$ &0.011$\%$ &0.009$\%$ \\
cpi &0.08$\%$ &0.091$\%$ &0.10$\%$ \\
F-score &0.75 &0.44 &0.52 \\
precision &83.0$\%$ &90.0$\%$ &88.8$\%$ \\
recall &68.0$\%$ &29.0$\%$ &36.3$\%$ \\
cp &83.0$\%$ &84.0$\%$ &84.6$\%$ \\
cr &68.0$\%$ &72.5$\%$ &82.5$\%$ \\
\hline
\end{tabular}
}
\caption{Rule set and its corresponding proportion of instances (pi), cumulative proportion of instances (cpi), F-score, precision, recall, cumulative precision (cp) and cumulative recall (cr) within our horizontal Fed-FEARE framework (Financial Anti-Fraud).}
\label{tab:plain11}
\end{table}
\begin{table}[htbp]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{llll}
\hline
rule number &"1" &"2" &"3"\\
\hline
node logic &var$\_$12 $\leq$ -5.6 &var$\_$14 $\leq$ -10.2 &var$\_$18 $>$ 3.4 \\
node logic &var$\_$21 $>$ 30073 &var$\_$20 $\leq$ 0.99 &var$\_$1 $>$ 0.98\\
node logic &var$\_$2 $>$ 2.25 &null &null \\
pi &0.0172$\%$ &0.0027$\%$ &0.0013$\%$ \\
cpi &0.0172$\%$ &0.0199$\%$ &0.021$\%$ \\
F-score &0.72 &0.4 &0.28 \\
precision &92.3$\%$ &100.0$\%$ &100.0$\%$ \\
recall &60.0$\%$ &25.0$\%$ &16.6$\%$ \\
cp &92.3$\%$ &93.3$\%$ &93.7$\%$ \\
cr &60.0$\%$ &70.0$\%$ &75.0$\%$ \\
\hline
\end{tabular}
}
\caption{Rule set and its corresponding statistical indicators using only bank data (Financial Anti-Fraud).}
\label{tab:plain12}
\end{table}
According to Table \ref{tab:plain11} and Table \ref{tab:plain12},
the rule sets have changed remarkably.
Rules "1" and "2" evolve from var$\_$12 $\leq$ -5.6 $\&$ var$\_$21 $>$ 30073 $\&$ var$\_$2 $>$ 2.25 and var$\_$14 $\leq$ -10.2 $\&$ var$\_$20 $\leq$ 0.99
to var$\_$12 $\leq$ -4.8 $\&$ var$\_$1 $>$ -26.6 and
var$\_$17 $\leq$ -3.0 $\&$ var$\_$14 $>$ 1.3 $\&$ var$\_$5 $>$ -3.3, respectively.
Likewise for rule "3", var$\_$18 $>$ 3.4 $\&$ var$\_$1 $>$ 0.98 becomes var$\_$14 $\leq$ -4.7 $\&$ var$\_$10 $\leq$ -2.27 $\&$ var$\_$2 $\leq$ 3.2.
This leads to significant changes in the child node logic of the two rule sets for identifying fraud cases.
Within our horizontal Fed-FEARE framework, the F-scores of rules "1" and "2" have an average increment of about 7$\%$, while the F-score of rule "3" reaches 0.52 from 0.28.
There are also evident changes in the cumulative precision (cp) and cumulative recall (cr): the former decreases from 93.7$\%$ to 84.6$\%$, while the latter increases from 75.0$\%$ to 82.5$\%$.
Moreover, a new data set is used to verify the generalization of the two rule sets.
It consists of 10,000 non-defaults and 67 defaults in total.
We can see clearly from Figure \ref{fig:PR} that
no data instance is covered by rule "3" without federated learning.
As a result, cp tends to be equal for the two rule sets.
Moreover, the F-score of each rule with federated learning is greatly improved due to data enrichment:
the F-scores increase from 0.46, 0.12 and 0.0 to 0.69, 0.29 and 0.48, respectively.
Eventually, with approximately equal cp, cr in our horizontal Fed-FEARE framework reaches 74.6$\%$, up from 34.3$\%$,
an evident increment of more than 117$\%$.
Therefore, based on horizontal Fed-FEARE, frauds and non-frauds can be classified more easily
by our rule system, with a clear business explanation.
\section{Precision Marketing within Vertical Fed-FEARE Framework}\label{sec:fm}
We further extend our algorithmic framework to precision marketing for new customer activation.
Taking advantage of this framework,
\textbf{BANK}
trains a Fed-FEARE model with
\textbf{CLOUD PAY} vertically.
\begin{table}[htbp]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{llll}
\hline
rule number &"1" &"2" &"3"\\
\hline
node logic &var$\_$0 $>$ 0 &var$\_$7 $>$ 36.4 &var$\_$0 $>$ 0 \\
node logic &var$\_$0 $\leq$ 1 &null &var$\_$9 $>$ 990\\
node logic &null &null &null \\
pi &5.6$\%$ &7.4$\%$ &2.1$\%$ \\
cpi &5.6$\%$ &13$\%$ &15.1$\%$ \\
F-score &0.14 &0.07 &0.09 \\
precision &4.7$\%$ &1.99$\%$ &3.7$\%$ \\
recall &27.8$\%$ &21.2$\%$ &15.3$\%$ \\
cp &4.7$\%$ &3.1$\%$ &3.2$\%$ \\
cr &27.8$\%$ &43.1$\%$ &51.8$\%$ \\
cl &4.96 &3.3 &3.4\\
\hline
\end{tabular}
}
\caption{Rule set and statistics indicators within our vertical Fed-FEARE framework (Precision Marketing).}
\label{tab:plain21}
\end{table}
There are 5,438,267 observations in the data set. Note that the target variable is coded as 1 and 0 to indicate activation and non-activation, respectively. There are 51,203 activations and 5,387,064 non-activations in the data set, corresponding to a positive-to-negative ratio of about 1:105. The total data set consists of 10 variables from the above two agencies.
In this business scenario, $\beta = 0.5$ is set, indicating that precision is relatively more important than recall; the other parameters are the same as above.
\begin{table}[htbp]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{llll}
\hline
rule number &"1" &"2" &"3"\\
\hline
node logic &var$\_$0 $>$ 0 &var$\_$0 $>$ 0 &var$\_$3 $<$ 24 \\
node logic &var$\_$0 $\leq$ 1 &null &null\\
node logic &null &null &null \\
pi &5.6$\%$ &18.2$\%$ &22.7$\%$ \\
cpi &5.6$\%$ &23.8$\%$ &46.5$\%$ \\
F-score &0.14 &0.07 &0.03 \\
precision &4.7$\%$ &1.89$\%$ &0.7$\%$ \\
recall &27.8$\%$ &48.4$\%$ &45.8$\%$ \\
cp &4.7$\%$ &2.5$\%$ &1.6$\%$ \\
cr &27.8$\%$ &48.5$\%$ &79.8$\%$ \\
cl &4.96 &2.64 &1.7\\
\hline
\end{tabular}
}
\caption{Rule set and its corresponding statistical indicators using only bank data (Precision Marketing).}
\label{tab:plain22}
\end{table}
According to Table \ref{tab:plain21} and Table \ref{tab:plain22},
rule "1" is exactly the same in the two cases, while rules "2" and "3" change remarkably:
they evolve from var$\_$0 $>$ 0
and var$\_$3 $<$ 24 to var$\_$7 $>$ 36.4 and var$\_$0 $>$ 0 $\&$ var$\_$9 $>$ 990, respectively.
This leads to a significant change in the business logic. Moreover,
the performance in terms of cp and cumulative lift (cl) is greatly improved due to data enrichment.
Compared to the rule set
using only
bank data, the cp of the vertical Fed-FEARE model increases by more than 100$\%$, reaching 3.2$\%$.
Correspondingly, the cl of 3.4 represents an evident improvement, a rise of 100$\%$.
This means that more target customers can be converted with fewer marketing resources.
As a result, we are able to
perform precision marketing on the target customers identified by our rule system.
Next, we will cooperate with more agencies, such as insurance companies, trust companies, travel agencies,
and e-commerce platforms, in multiple business scenarios with the help of Fed-FEARE.
\section{Conclusion}\label{sec:con}
This manuscript proposes an F-score based ensemble tree model in
federated learning for automatic rule set extraction,
Fed-FEARE in short. It is applicable to multiple business scenarios,
including anti-fraud, precision marketing, etc.
Compared with models trained without Fed-FEARE,
the measures evaluating model performance
are highly improved.
Fed-FEARE not only has the characteristics of fast calculation and strong portability, but also ensures interpretability and robustness.
\section*{Acknowledgements}
The authors gratefully acknowledge Haiying Han, Yiming Cheng, Yu Wang, Hua Zou, Xinzhu Yang and Hongkun Hao, from Bank and Cloud Payment, for our valuable discussions on business understanding and inspirations for the application design. The computing was executed on Everbright Data Haven (EDH), so we would like to express the deepest gratitude to the substantial help from EDH.
\section{Introduction}
BPS solutions to supergravity theories have played, and continue to play,
an important role in string theory developments. Supersymmetric black holes
represent perhaps one of the most notable examples of this: in the presence of a
sufficient amount of supersymmetry, non-renormalization
theorems allow one to extrapolate an entropy computation at weak string coupling
(when the system is generically described by a configuration of strings and branes)
to the strong-coupling regime, where a description in terms of a black hole is
valid \cite{Strominger:1996sh}. These entropy calculations have been essential for
our current understanding of black hole microstates.
It is therefore important to have at one's disposal a systematic classification of BPS
solutions, which allows one to construct such backgrounds without the need to
guess suitable ans\"atze. Of particular interest in this context are gauged
supergravities, which are related to supersymmetric field theories by
the AdS/CFT correspondence. While we know by now a broad landscape of BPS
solutions to ungauged supergravities, including many different types of
black holes and black rings \cite{Elvang:2004rt}, only a few of their analogues
in gauged supergravity have been constructed\footnote{Note that some
of these analogues might not exist \cite{Kunduri:2006uh}.}. For instance,
in four dimensions, there should exist rotating black holes in gauged ${\cal N}=8$
supergravity (that admits a truncation to ${\cal N}=2$ gauged supergravity coupled
to three abelian vector multiplets \cite{Cvetic:1999xp}) with four independent
electromagnetic charges. Until now, the only known solutions of this type are the
Kerr-Newman AdS black holes, which correspond to setting the four charges equal,
and the black holes in SO$(4)$ gauged ${\cal N}=4$ supergravity with two pairwise
equal charges \cite{Chong:2004na}.
In this paper, we consider the theory of ${\cal N}=2$, $D=4$ gauged
supergravity coupled to an arbitrary number of abelian vector multiplets,
but with no hypermultiplets (so-called Fayet-Iliopoulos gauging). The
constraints obeyed by backgrounds admitting at least one timelike Killing
spinor were given in \cite{Cacciatori:2008ek}, generalizing the results
for minimal gauged supergravity \cite{Caldarelli:2003pb}.
Although the equations determining the BPS geometries are rather involved,
some explicit solutions of them describing static black holes with nontrivial
scalars turned on have been obtained in \cite{Cacciatori:2009iz}. These black holes
provide a new ground to test the AdS/CFT correspondence: In principle
it should be possible to compute their microscopic entropy using the
recently discovered Chern-Simons-matter theories \cite{Aharony:2008ug}, and to
compare it then with the macroscopic Bekenstein-Hawking result.
Here we go one step further with respect to \cite{Cacciatori:2008ek} and
impose the existence of at least two Killing spinors, so we want to determine
the most general half-supersymmetric configurations\footnote{In five
dimensions, this was done in \cite{Gutowski:2007ai} and \cite{Grover:2008ih}
for the timelike and null cases respectively. Maximally supersymmetric
solutions to four-dimensional ${\cal N}=2$ gauged supergravity were
classified in \cite{Hristov:2009uj}.}. There are several
reasons motivating this:
First of all, it is of special interest to address cases of the AdS$_4$/CFT$_3$
correspondence with less than maximal supersymmetry. For instance, supergravity
vacua with lower supersymmetry may have an interpretation on the CFT side as
vacua with non-zero expectation values of certain operators (spontaneous symmetry
breaking), or as deformations of the CFT (explicit symmetry breaking).
The second point is the attractor mechanism \cite{Ferrara:1995ih,Strominger:1996kf,
Ferrara:1996dd,Ferrara:1996um,Ferrara:1997tw}. While
the BPS attractor flow has been studied extensively for asymptotically
flat black holes, the AdS case was considered only
recently \cite{Cacciatori:2009iz}\footnote{For an analysis of the attractor
mechanism in ${\cal N}=2$, $D=4$ supergravity with SU$(2)$ gauging
cf.~\cite{Huebscher:2007hj}.}.
In order to explore the BPS attractor flow in AdS, one needs the
near-horizon geometry of (possibly rotating) AdS black holes with scalar
fields turned on. In the asymptotically flat case, such near-horizon
geometries are typically fully supersymmetric, whereas, as we shall
see below, in AdS they generically break one half of the supersymmetries.
Furthermore, in gauged supergravity, interesting mathematical structures
appear in the base manifolds of reduced holonomy, over which supersymmetric
spacetimes are fibered. For instance, one can have U$(1)$ holonomy with
torsion \cite{Cacciatori:2008ek} (the torsion coming from the gauging),
Einstein-Weyl spaces \cite{Grover:2009ms} or hyper-K\"ahler torsion
manifolds \cite{Grover:2008jr}, and one might ask how these structures are
modified if one imposes the existence of more than one Killing spinor.
Finally, in minimal ${\cal N}=2$, $D=4$ gauged supergravity, the equations
determining the BPS solutions reduce, under some assumptions, to the equations
of motion following from the gravitational Chern-Simons action \cite{Cacciatori:2004rt}.
While the deeper reason for this remains obscure, it indicates that the
full set of equations actually might be integrable, i.e., it should be possible to
construct a Lax pair for them. Requiring additional supersymmetries can help
to better understand the integrability structure of this system.
The remainder of this paper is organized as follows:
In section \ref{FIsugra}, we briefly review the theory of ${\cal N}=2$,
$D=4$ supergravity with Fayet-Iliopoulos gauging. After that, in \ref{1/2susy},
we impose the existence of a second Killing spinor, obtain the linear system
into which the Killing spinor equations turn, and derive the time-dependence of
this second covariantly constant spinor. Subsequently, the linear system is
solved under some relatively mild assumptions, and the spacetime geometry,
the fluxes as well as a scalar flow equation are obtained.
The reader who is interested only in the final results can skip the technical
details and immediately jump to the summaries in sections \ref{summary},
\ref{X-Xb=0}, \ref{X-Xbneq0} and \ref{2=12=0}.
\section{${\cal N}=2$, $D=4$ supergravity with Fayet-Iliopoulos gauging}
\label{FIsugra}
We consider ${\cal N}=2$, $D=4$ gauged supergravity coupled to $n_V$ abelian
vector multiplets \cite{Andrianopoli:1996cm}\footnote{Throughout this paper,
we use the notations and conventions of \cite{Vambroes}.}.
Apart from the vierbein $e^a_{\mu}$, the bosonic field content includes the
vectors $A^I_{\mu}$ enumerated by $I=0,\ldots,n_V$, and the complex scalars
$z^{\alpha}$ where $\alpha=1,\ldots,n_V$. These scalars parametrize
a special K\"ahler manifold, i.~e.~, an $n_V$-dimensional
Hodge-K\"ahler manifold that is the base of a symplectic bundle, with the
covariantly holomorphic sections
\begin{equation}
{\cal V} = \left(\begin{array}{c} X^I \\ F_I\end{array}\right)\,, \qquad
{\cal D}_{\bar\alpha}{\cal V} = \partial_{\bar\alpha}{\cal V}-\frac 12
(\partial_{\bar\alpha}{\cal K}){\cal V}=0\,, \label{sympl-vec}
\end{equation}
where ${\cal K}$ is the K\"ahler potential and ${\cal D}$ denotes the
K\"ahler-covariant derivative. ${\cal V}$ obeys the symplectic constraint
\begin{equation}
\langle {\cal V}\,,\bar{\cal V}\rangle = X^I\bar F_I-F_I\bar X^I=i\,.
\end{equation}
To solve this condition, one defines
\begin{equation}
{\cal V}=e^{{\cal K}(z,\bar z)/2}v(z)\,,
\end{equation}
where $v(z)$ is a holomorphic symplectic vector,
\begin{equation}
v(z) = \left(\begin{array}{c} Z^I(z) \\ \frac{\partial}{\partial Z^I}F(Z)
\end{array}\right)\,.
\end{equation}
$F$ is a homogeneous function of degree two, called the prepotential,
whose existence is assumed in order to obtain the last expression.
The K\"ahler potential is then
\begin{equation}
e^{-{\cal K}(z,\bar z)} = -i\langle v\,,\bar v\rangle\,.
\end{equation}
The matrix ${\cal N}_{IJ}$ determining the coupling between the scalars
$z^{\alpha}$ and the vectors $A^I_{\mu}$ is defined by the relations
\begin{equation}\label{defN}
F_I = {\cal N}_{IJ}X^J\,, \qquad {\cal D}_{\bar\alpha}\bar F_I = {\cal N}_{IJ}
{\cal D}_{\bar\alpha}\bar X^J\,.
\end{equation}
The bosonic action reads
\begin{eqnarray}
e^{-1}{\cal L}_{\text{bos}} &=& \frac 1{16\pi G}R + \frac 14(\text{Im}\,
{\cal N})_{IJ}F^I_{\mu\nu}F^{J\mu\nu} - \frac 18(\text{Re}\,{\cal N})_{IJ}\,e^{-1}
\epsilon^{\mu\nu\rho\sigma}F^I_{\mu\nu}F^J_{\rho\sigma} \nonumber \\
&& -g_{\alpha\bar\beta}\partial_{\mu}z^{\alpha}\partial^{\mu}\bar z^{\bar\beta}
- V\,, \label{action}
\end{eqnarray}
with the scalar potential
\begin{equation}
V = -2g^2\xi_I\xi_J[(\text{Im}\,{\cal N})^{-1|IJ}+8\bar X^IX^J]\,,
\label{scal-pot}
\end{equation}
that results from U$(1)$ Fayet-Iliopoulos gauging. Here, $g$ denotes the
gauge coupling and the $\xi_I$ are constants. In what follows, we define
$g_I=g\xi_I$.
The supersymmetry transformations of the gravitini $\psi^i_{\mu}$ ($i=1,2$)
and gaugini $\lambda^{\alpha}_i$ are\footnote{They result from the expressions
given in \cite{Vambroes} by taking ${\vec P}_I=\vec e\,\xi_I$ for the moment
maps (FI gauging), where $\vec e$ denotes a unit vector that can be
chosen to point in the 3-direction without loss of generality. The antiselfdual
parts $F^{-I}$ of the fluxes as well as the $\sigma$-matrices and the
K\"ahler-covariant derivatives $\cal D$ are also given in \cite{Vambroes}.}
\begin{equation}
\delta\psi^i_{\mu} = D_{\mu}(\omega)\epsilon^i + ig_IX^I\gamma_{\mu}
{\sigma_3}^{ij}\epsilon_j + \frac14\gamma_{ab}F^{-Iab}\epsilon^{ij}\gamma_{\mu}
\epsilon_j(\text{Im}\,{\cal N})_{IJ}X^J\,, \label{delta-psi}
\end{equation}
\begin{equation}
\delta\lambda^{\alpha}_i = -\frac12 g^{\alpha\bar\beta}{\cal D}_{\bar\beta}\bar X^I(\text{Im}\,{\cal N})_{IJ}
F^{-J}_{\mu\nu}\gamma^{\mu\nu}\epsilon_{ij}\epsilon^j + \gamma^{\mu}\partial_{\mu}z^{\alpha}
\epsilon_i - 2ig_I{\sigma_3}_{ij} g^{\alpha\bar\beta}{\cal D}_{\bar\beta}\bar X^I\epsilon^j\,,
\label{delta-lambda}
\end{equation}
where
\begin{equation}
D_{\mu}(\omega)\epsilon^i = (\partial_{\mu} + \frac14\omega^{ab}_{\mu}\gamma_{ab})\epsilon^i
+ \frac i2A_{\mu}\epsilon^i + ig_IA^I_{\mu}{\sigma_{3j}}^i\epsilon^j\,.
\end{equation}
Here, $A_{\mu}$ is the gauge field of the K\"ahler U$(1)$,
\begin{equation}
A_{\mu} = -\frac i2(\partial_{\alpha}{\cal K}\partial_{\mu}z^{\alpha} -
\partial_{\bar\alpha}{\cal K}\partial_{\mu}{\bar z}^{\bar\alpha})\,. \label{KaehlerU(1)}
\end{equation}
The most general timelike supersymmetric background of the theory described
above was constructed in \cite{Cacciatori:2008ek}, and is given by
\begin{equation}
ds^2 = -4|b|^2(dt+\sigma)^2 + |b|^{-2}(dz^2+e^{2\Phi}dwd\bar w)\ ,
\end{equation}
where the complex function $b(z,w,\bar w)$, the real function $\Phi(z,w,\bar w)$
and the one-form $\sigma=\sigma_wdw+\sigma_{\bar w}d\bar w$, together with the
symplectic section \eqref{sympl-vec}\footnote{Note that also $\sigma$ and
$\cal V$ are independent of $t$.} are determined by the equations
\begin{equation}
\partial_z\Phi = 2ig_I\left(\frac{{\bar X}^I}b-\frac{X^I}{\bar b}\right)\ ,
\label{dzPhi}
\end{equation}
\begin{eqnarray}
&&\qquad 4\partial\bar\partial\left(\frac{X^I}{\bar b}-\frac{\bar X^I}b\right) + \partial_z\left[e^{2\Phi}\partial_z
\left(\frac{X^I}{\bar b}-\frac{\bar X^I}b\right)\right] \label{bianchi} \\
&&-2ig_J\partial_z\left\{e^{2\Phi}\left[|b|^{-2}(\text{Im}\,{\cal N})^{-1|IJ}
+ 2\left(\frac{X^I}{\bar b}+\frac{\bar X^I}b\right)\left(\frac{X^J}{\bar b}+\frac{\bar X^J}b\right)\right]\right\}= 0\,,
\nonumber
\end{eqnarray}
\begin{eqnarray}
&&\qquad 4\partial\bar\partial\left(\frac{F_I}{\bar b}-\frac{\bar F_I}b\right) + \partial_z\left[e^{2\Phi}\partial_z
\left(\frac{F_I}{\bar b}-\frac{\bar F_I}b\right)\right] \nonumber \\
&&-2ig_J\partial_z\left\{e^{2\Phi}\left[|b|^{-2}\text{Re}\,{\cal N}_{IL}(\text{Im}\,{\cal N})^{-1|JL}
+ 2\left(\frac{F_I}{\bar b}+\frac{\bar F_I}b\right)\left(\frac{X^J}{\bar b}+\frac{\bar X^J}b\right)\right]\right\}
\nonumber \\
&&-8ig_I e^{2\Phi}\left[\langle {\cal I}\,,\partial_z {\cal I}\rangle-\frac{g_J}{|b|^2}\left(\frac{X^J}{\bar b}
+\frac{\bar X^J}b\right)\right] = 0\,, \label{maxwell}
\end{eqnarray}
\begin{equation}
2\partial\bar\partial\Phi=e^{2\Phi}\left[ig_I\partial_z\left(\frac{X^I}{\bar b}-\frac{\bar X^I}b\right)
+\frac2{|b|^2}g_Ig_J(\text{Im}\,{\cal N})^{-1|IJ}+4\left(\frac{g_I X^I}{\bar b}+\frac{g_I \bar X^I}b
\right)^2\right]\,, \label{Delta-Phi}
\end{equation}
\begin{equation}
d\sigma + 2\,\star^{(3)}\!\langle{\cal I}\,,d{\cal I}\rangle - \frac i{|b|^2}g_I\left(\frac{\bar X^I}b
+\frac{X^I}{\bar b}\right)e^{2\Phi}dw\wedge d\bar w=0\,. \label{dsigma}
\end{equation}
Here $\star^{(3)}$ is the Hodge star on the three-dimensional base with metric\footnote{Whereas
in the ungauged case, this base space is flat and thus has trivial holonomy, here we have U(1)
holonomy with torsion \cite{Cacciatori:2008ek}.}
\begin{equation}
ds_3^2 = dz^2+e^{2\Phi}dwd\bar w\ ,
\end{equation}
and we defined $\partial=\partial_w$, $\bar\partial=\partial_{\bar w}$, as well as
\begin{equation}
{\cal I} = \text{Im}\left({\cal V}/\bar b\right)\ .
\end{equation}
Given $b$, $\Phi$, $\sigma$ and $\cal V$, the fluxes read
\begin{eqnarray}
F^I&=&2(dt+\sigma)\wedge d\left[bX^I+\bar b\bar X^I\right]+|b|^{-2}dz\wedge d\bar w
\left[\bar X^I(\bar\partial\bar b+iA_{\bar w}\bar b)+({\cal D}_{\alpha}X^I)b\bar\partial z^{\alpha}-
\right. \nonumber \\
&&\left. X^I(\bar\partial b-iA_{\bar w}b)-({\cal D}_{\bar\alpha}\bar X^I)\bar b\bar\partial\bar z^{\bar\alpha}
\right]-|b|^{-2}dz\wedge dw\left[\bar X^I(\partial\bar b+iA_w\bar b)+\right. \nonumber \\
&&\left.({\cal D}_{\alpha}X^I)b\partial z^{\alpha}-X^I(\partial b-iA_w b)-({\cal D}_{\bar\alpha}\bar X^I)
\bar b\partial\bar z^{\bar\alpha}\right]- \nonumber \\
&&\frac 12|b|^{-2}e^{2\Phi}dw\wedge d\bar w\left[\bar X^I(\partial_z\bar b+iA_z\bar b)+({\cal D}_{\alpha}
X^I)b\partial_z z^{\alpha}-X^I(\partial_z b-iA_z b)- \right.\nonumber \\
&&\left.({\cal D}_{\bar\alpha}\bar X^I)\bar b\partial_z\bar z^{\bar\alpha}-2ig_J
(\text{Im}\,{\cal N})^{-1|IJ}\right]\,. \label{fluxes}
\end{eqnarray}
If the constraints \eqref{dzPhi}-\eqref{dsigma} are satisfied, the solution admits the Killing spinor
$(\epsilon^1,\epsilon_2)=(1,be_2)$ (cf.~appendix \ref{spinors} for a summary of the essential
information needed to realize spinors in terms of forms).
Before we continue, a short comment on K\"ahler-covariance is in order. Under a K\"ahler
transformation
\begin{equation}
{\cal K}\mapsto {\cal K}+f(z^{\alpha})+\bar f(\bar z^{\bar\alpha})\ ,
\end{equation}
the Killing spinors transform as
\begin{equation}
\epsilon^i\mapsto e^{(\bar f-f)/4}\epsilon^i\ , \qquad \epsilon_i\mapsto e^{-(\bar f-f)/4}\epsilon_i\ .
\end{equation}
On the other hand, under a U$(1)$ gauge transformation
\begin{equation}
A^I_{\mu}\mapsto A^I_{\mu}+\partial_{\mu}\chi^I\ ,
\end{equation}
we have
\begin{equation}
\epsilon^1\mapsto e^{-ig_I\chi^I}\epsilon^1\ , \qquad \epsilon_2\mapsto e^{-ig_I\chi^I}\epsilon_2\ .
\end{equation}
Under a combined K\"ahler/U$(1)$ transformation with $ig_I\chi^I=(\bar f-f)/4$, the Killing spinor
representative $(\epsilon^1,\epsilon_2)=(1,be_2)$ is form-invariant; it goes over into $(1,b'e_2)$, with
$b'=e^{-(\bar f-f)/2}b$. One easily checks that the eqns.~\eqref{dzPhi}-\eqref{dsigma} are covariant
under K\"ahler transformations if $b$ is replaced by $b'$. In what follows we sometimes use the
K\"ahler-covariant derivatives of $b$ defined by
\begin{equation}
D_{\mu}b = (\partial_{\mu}-iA_{\mu})b\ , \qquad D_{\mu}\bar b = (\partial_{\mu}+iA_{\mu})\bar b\ ,
\end{equation}
as well as $D\equiv D_w$, $\bar D\equiv D_{\bar w}$. These satisfy
$D'_{\mu}b'=e^{-(\bar f-f)/2}D_{\mu}b$.
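To verify the last relation explicitly (a short check added for the reader's convenience),
note that \eqref{KaehlerU(1)} implies $A_{\mu}\mapsto A_{\mu}-\frac i2\,\partial_{\mu}(f-\bar f)$
under a K\"ahler transformation, so that
\begin{displaymath}
D'_{\mu}b'=\left(\partial_{\mu}-iA_{\mu}+\tfrac 12\partial_{\mu}(\bar f-f)\right)
\left(e^{-(\bar f-f)/2}b\right)=e^{-(\bar f-f)/2}\left(\partial_{\mu}-iA_{\mu}\right)b
=e^{-(\bar f-f)/2}D_{\mu}b\ ,
\end{displaymath}
the derivative of the prefactor cancelling the shift of the connection.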
\section{Half-supersymmetric backgrounds}
\label{1/2susy}
Let us now investigate the additional conditions satisfied by
half-supersymmetric vacua in the timelike class.
As the stability subgroup of the first
Killing spinor was already used in \cite{Cacciatori:2008ek}
to obtain the eqns.~\eqref{dzPhi}-\eqref{dsigma}, the second one cannot
be simplified anymore, and is thus of the general form
\begin{equation}
\epsilon^1=\alpha1+\beta e_{12}\ ,\qquad \epsilon^2=\gamma1+\delta e_{12}\ ,\qquad
\epsilon_1=\bar\alpha e_1-\bar\beta e_2\ ,\qquad \epsilon_2=\bar\gamma e_1-\bar
\delta e_2\ , \label{2nd-spinor}
\end{equation}
where $\alpha,\beta,\gamma,\delta$ are complex-valued functions.
The conditions coming from an additional Killing spinor are easily obtained by
plugging \eqref{2nd-spinor} into \eqref{delta-psi} and \eqref{delta-lambda} (with
$\delta\psi^i_{\mu}=\delta\lambda^{\alpha}_i=0$), and taking into account the constraints
on the bosonic fields implied by the first Killing spinor $(\epsilon^1,\epsilon_2)=(1,be_2)$,
given in \cite{Cacciatori:2008ek}. This will be done in the following subsection.
\subsection{The linear system}
From the vanishing of the gaugini supersymmetry transformations
(\ref{delta-lambda}) we get
\begin{eqnarray}
\label{htg1}(\bar\beta-b\gamma)\partial_z z^\alpha+2
e^{-\Phi}\sqrt{\frac b{\bar b}}(\bar b\bar\alpha+\delta)\partial z^\alpha
&=&4ig^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar X^Ig_I\gamma\ ,\\
\label{htg2}(\bar b\bar\alpha+\delta)
\partial_zz^\alpha-2e^{-\Phi}\sqrt{\frac{\bar b}b}(\bar\beta-b\gamma)
\bar\partial z^\alpha&=&0\ ,\\
\label{htg3}(b\alpha+\bar\delta)\partial_z
z^\alpha-2e^{-\Phi}\sqrt{\frac b{\bar b}}(\beta-\bar b\bar\gamma)
\partial z^\alpha&=&0\ ,\\
\label{htg4}(\beta-\bar b\bar\gamma)
\partial_zz^\alpha+2e^{-\Phi}\sqrt{\frac{\bar b}b}(b\alpha+\bar\delta)
\bar\partial z^\alpha&=&-\frac{4i}b g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}
\bar X^Ig_I\beta\ ,
\end{eqnarray}
while the gravitini variations (\ref{delta-psi}) yield
\begin{eqnarray}
\partial_t\alpha&=&-i\bar b\Omega_z(b\alpha+\bar\delta)
+2ie^{-\Phi}|b|\Omega_w(\beta-\bar b\bar\gamma)\ ,\nonumber\\
\partial_t\beta&=&2ie^{-\Phi}\bar b|b|\Omega_{\bar w}(b\alpha+\bar\delta)
+ib\bar b\Omega_z(\beta-\bar b\bar\gamma)
+4i(bX\!\!\cdot\!g+\bar b{\bar X}\!\!\cdot\!g)\beta-4ib\bar bX\!\!\cdot\!g\bar\gamma\ ,\nonumber\\
\partial_t\gamma&=&2i|b|e^{-\Phi}\Omega_w(\bar b\bar\alpha+\delta)+i\bar b
\Omega_z(\bar\beta-b\gamma)+4iX\!\!\cdot\!g\bar\beta-4i(bX\!\!\cdot\!g+\bar b{\bar X}\!\!\cdot\!g)\gamma\ ,\nonumber\\
\label{htgr1}\partial_t\delta&=&ib\bar b\Omega_z(\bar b\bar\alpha+\delta)-
2ie^{-\Phi}\bar b|b|\Omega_{\bar w}(\bar\beta-b\gamma)\ ,
\end{eqnarray}
\begin{align}
\partial_z\alpha&=-\frac{i\Omega_z}{2b}(b\alpha+\bar\delta)
-\frac{ie^{-\Phi}}{|b|}\Omega_w(\beta-\bar b\bar\gamma)\ ,\nonumber \displaybreak[0] \\
\partial_z\beta&=i\sqrt{\frac{\bar b}b}e^{-\Phi}\Omega_{\bar w}(b\alpha+\bar
\delta)-\frac i2\Omega_z(\beta-\bar b\bar\gamma)
+\beta\partial_z\ln|b|+2iX\!\!\cdot\!g\bar\gamma\ ,\nonumber \displaybreak[0] \\
\partial_z\gamma&=-\frac{ie^{-\Phi}}{|b|}\Omega_w(\bar b\bar\alpha+\delta)+
\frac i{2b}\Omega_z(\bar\beta-b\gamma)+\frac{2iX\!\!\cdot\!g}{b\bar b}\bar\beta-
\frac{\gamma}2\partial_z\ln\frac b{\bar b}\ ,\nonumber \displaybreak[0] \\
\label{htgr2}\partial_z\delta&=-ie^{-\Phi}\sqrt{\frac{\bar b}b}
\Omega_{\bar w}(\bar\beta-b\gamma)-\frac i2\Omega_z(\bar b\bar\alpha+\delta)
+\delta\partial_z\ln\bar b\ ,
\end{align}
\begin{eqnarray}
\partial\alpha&=&-\frac ib(\Omega_w+b\bar b\Omega_z\sigma_w)(b\alpha+\bar
\delta)+2ie^{-\Phi}|b|\Omega_w\sigma_w(\beta-\bar b\bar\gamma)\ ,\nonumber\\
\partial\beta&=&-\frac{ie^{\Phi}}2\sqrt{\frac{\bar b}b}\left(\Omega_z-
4e^{-2\Phi}b\bar b\Omega_{\bar w}\sigma_w+\frac{4X\!\!\cdot\!g}{\bar b}\right)(b\alpha+\bar
\delta)-\beta\partial(\Phi-\ln|b|)\nonumber\\
&&+ib\bar b\Omega_z\sigma_w(\beta-\bar b\bar\gamma)+4i(bX\!\!\cdot\!g+\bar b{\bar X}\!\!\cdot\!g)\sigma_w
\beta-4ib\bar bX\!\!\cdot\!g\sigma_w\bar\gamma\ ,\nonumber\\
\partial\gamma&=&\frac ib(\Omega_w+b\bar b\Omega_z\sigma_w)(\bar\beta-b\gamma)
+\gamma\partial\left(\Phi-\frac12\ln\frac b{\bar b}\right)\nonumber\\
&&+2i|b|e^{-\Phi}\Omega_w\sigma_w(\bar b\bar\alpha+\delta)+4iX\!\!\cdot\!g\sigma_w
\bar\beta-4i(bX\!\!\cdot\!g+\bar b{\bar X}\!\!\cdot\!g)\sigma_w\gamma\ ,\nonumber\\
\label{htgr3}\partial\delta&=&ib\bar b\Omega_z\sigma_w(\bar b\bar\alpha+\delta)
+\frac{ie^\Phi}2\sqrt{\frac{\bar b}b}(\Omega_z-4e^{-2\Phi}b\bar b\Omega_{\bar w}
\sigma_w)(\bar\beta-b\gamma)\nonumber\\
&&-2iX\!\!\cdot\!g e^\Phi\sqrt{\frac b{\bar b}}\gamma+\delta\partial\ln\bar b\ ,
\end{eqnarray}
\begin{eqnarray}
\bar\partial\alpha&=&-i\bar b\Omega_z\sigma_{\bar w}(b\alpha+\bar\delta)+
\frac{2iX\!\!\cdot\!g e^\Phi}{\bar b|b|}\beta
+\frac{ie^\Phi}{2|b|}(\Omega_z+4b\bar b e^{-2\Phi}\Omega_w\sigma_{\bar w})
(\beta-\bar b\bar\gamma)\ ,\nonumber\\
\bar\partial\beta&=&-i(\Omega_{\bar w}-b\bar b\Omega_z\sigma_{\bar w})
(\beta-\bar b\bar\gamma)+\beta\bar\partial(\Phi+\ln|b|)\nonumber\\
&&+2ie^{-\Phi}\bar b|b|\Omega_{\bar w}\sigma_{\bar w}(b\alpha+\bar\delta)+
4i(bX\!\!\cdot\!g+\bar b{\bar X}\!\!\cdot\!g)\sigma_{\bar w}\beta-4ib\bar bX\!\!\cdot\!g\sigma_{\bar w}\bar\gamma\ ,
\nonumber\\
\bar\partial\gamma&=&\frac{ie^\Phi}{2|b|}\left(\Omega_z+4b\bar b e^{-2\Phi}
\Omega_w\sigma_{\bar w}+\frac{4X\!\!\cdot\!g}{\bar b}\right)(\bar b\bar\alpha+\delta)-
\gamma\bar\partial\left(\Phi+\frac12\ln\frac b{\bar b}\right)\nonumber\\
&&+i\bar b\Omega_z\sigma_{\bar w}(\bar\beta-b\gamma)+4iX\!\!\cdot\!g\sigma_{\bar w}
\bar\beta-4i(bX\!\!\cdot\!g+\bar b{\bar X}\!\!\cdot\!g)\sigma_{\bar w}\gamma\ ,\nonumber\\
\label{htgr4}\bar\partial\delta&=&-i\left(\Omega_{\bar w}-b\bar b\Omega_z
\sigma_{\bar w}\right)(\bar b\bar\alpha+\delta)-2ie^{-\Phi}\bar b|b|
\Omega_{\bar w}\sigma_{\bar w}(\bar\beta-b\gamma)+\delta\bar\partial\ln\bar b\ ,
\end{eqnarray}
where $X\!\!\cdot\!g=X^Ig_I$ and $\Omega_\mu=A_\mu-i\partial_\mu\ln\bar b$.
To proceed it is convenient to set $b=re^{i\varphi}$ and to introduce the new
basis\footnote{Note that the first Killing spinor has components $(1,0,0,0)$
in this basis.}
\begin{equation}
\vec\psi=\left(
\begin{array}{c}
\psi_0\\
\psi_1\\
\psi_2\\
\psi_{12}
\end{array}
\right)=\left(
\begin{array}{c}
\alpha\\
-r^2\alpha-\bar b\bar\delta\\
re^{-\Phi}\bar b\bar\gamma\\
re^{-\Phi}\beta
\end{array}
\right)\ , \label{basis-psi}
\end{equation}
in which the gaugini conditions (\ref{htg1})-(\ref{htg4}) become
\begin{align}
\label{htgIII1}\bar\psi_-\partial_zz^\alpha+2e^{-2\Phi}\bar\psi_1\partial
z^\alpha&=-\frac{4i}b g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar X^Ig_I
\bar\psi_2\ ,\\
\label{htgIII2}\bar\psi_1\partial_zz^\alpha-2\bar\psi_-\bar\partial
z^\alpha&=0\ ,\\
\label{htgIII3}\psi_1\partial_zz^\alpha-2\psi_-\partial z^\alpha&=0\ ,\\
\label{htgIII4}\psi_-\partial_zz^\alpha+2e^{-2\Phi}\psi_1\bar\partial z^\alpha
&=\frac{4i}b g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar X^Ig_I\psi_{12}\ ,
\end{align}
with $\psi_{\pm}=\psi_2\pm\psi_{12}$.
In general the Killing spinor equations do not readily provide
information and one has to resort to their integrability conditions.
Rewriting the linear system \eqref{htgr1}-\eqref{htgr4} in the basis
\eqref{basis-psi}, and defining $Q=e^{-2\Phi}\bar bD\bar b$, $P=e^{-2\Phi}bDb$,
one finds that the $t$-$w$ integrability condition implies
\begin{align}
\label{httw1} -\frac 12 \left(D_zQ-ie^{-2\Phi}\bar b^2F_{zw}\right)\psi_1
+\left(DQ\right)\psi_-&=0\ ,\\
\label{httw2} -\frac 12 \left(D_zP+ie^{-2\Phi}b^2F_{zw}\right)\psi_1
+\left(DP\right)\psi_-&=0\ ,\\
\label{httw3}f_A\psi_1+f_B\psi_--2i\partial(bX\!\!\cdot\!g)\psi_2&=0\ ,\\
\label{httw4}f_C\psi_1+f_D\psi_-+2i\partial(\bar b{\bar X}\!\!\cdot\!g)\psi_{12}&=0\ ,
\end{align}
where $F_{\mu\nu}$ denotes the field strength of the K\"ahler U$(1)$ \eqref{KaehlerU(1)}, and
\begin{align}
f_A=&\frac{\bar b}{2b}\left[-2e^{-2\Phi}Db\bar D b+2e^{-2\Phi}bD\bar D b
-(D_zb)^2+6i{\bar X}\!\!\cdot\!g D_zb+8({\bar X}\!\!\cdot\!g)^2\right]\ ,\nonumber\\
f_B=&\frac{\bar b}{2b}e^{2\Phi}(D_z P+ie^{-2\Phi}b^2F_{zw})-2i[X\!\!\cdot\!g Db+\bar bD{\bar X}\!\!\cdot\!g]\ ,
\nonumber\\
f_C=&-\frac b{2\bar b}\left[-2e^{-2\Phi}D\bar b\bar D\bar b+2e^{-2\Phi}\bar bD\bar D
\bar b-(D_z\bar b)^2-6iX\!\!\cdot\!g D_z\bar b+8(X\!\!\cdot\!g)^2\right]\ ,\nonumber\\
f_D=&-\frac b{2\bar b}e^{2\Phi}(D_z Q-ie^{-2\Phi}\bar b^2F_{zw})-2i[{\bar X}\!\!\cdot\!g D\bar b+bDX\!\!\cdot\!g]\ .\nonumber
\end{align}
\subsection{Time-dependence of second Killing spinor}
\label{time-dep-Kill}
In this subsection we will make use of the Killing spinor equations
\eqref{htgr1}-\eqref{htgr4} and the integrability conditions
(\ref{httw1})-(\ref{httw4}) to derive the time-dependence of the second
Killing spinor. Let us define $\texttt{g}(t,z,w,\bar w)$ by
\[
\psi_-=\frac12\texttt{g}(t,z,w,\bar w)(D_zP+ie^{-2\Phi}b^2F_{zw})\ .
\]
Plugging this into (\ref{httw2}), one gets under the assumption
$D_zP+ie^{-2\Phi}b^2F_{zw}\neq 0$\footnote{The case
$D_zP+ie^{-2\Phi}b^2F_{zw}=0$ will be considered in appendix \ref{DzP}.}
\[
\psi_1=\texttt{g}DP\ .
\]
Using this form of $\psi_-$ and $\psi_1$, the integrability
condition (\ref{httw3}) becomes
\begin{equation}
\label{httw3II}f_A\texttt{g}DP+f_B\frac{\texttt{g}}2
(D_zP+ie^{-2\Phi}b^2F_{zw})-2i\psi_2\partial(bX\!\!\cdot\!g)=0\ .
\end{equation}
Now, if $\texttt{g}=0$ the gravitini equations
(\ref{htgr1})-(\ref{htgr4}) imply that $X\!\!\cdot\!g=0$. If we exclude for
the time being this degenerate subcase, we have
$\texttt{g}\neq0$ and thus $\texttt{g}=:e^\texttt{G}$.
Dividing (\ref{httw3II}) by $\texttt{g}$ and differentiating with respect
to $t$ yields $\partial_t(\psi_2/\texttt{g})=0$ (if $\partial(bX\!\!\cdot\!g)\neq 0$)
and hence
\[
\psi_2=e^\texttt{G}\tilde{\psi}_2(z,w,\bar w)\ .
\]
It is then clear that $\partial_t\psi_\texttt{i}=\psi_\texttt{i}\partial_t\texttt{G}$,
$\texttt{i}=1,2,12$. The Killing spinor equations are of the form
$\partial_\mu\psi_\texttt{i}=\mathcal{M}_{\mu\texttt{i}\texttt{j}}\psi_\texttt{j}$,
for some time-independent matrices $\mathcal{M}_\mu$. Taking the
derivative of this with respect to $t$, one gets
$\partial_\mu\partial_t\texttt{G}=0$, and therefore
\[
\texttt{G}=\texttt{G}_0t+\tilde{\texttt{G}}(z,w,\bar w)\ ,
\]
with $\texttt{G}_0\in \mathbb{C}$ constant. We have thus
\begin{equation}
\partial_t\psi_{\texttt{i}}=\texttt{G}_0\psi_{\texttt{i}}\ . \label{time-der-psi}
\end{equation}
Furthermore the time-dependence of $\psi_0$ can be easily deduced
from the Killing spinor equations for $\psi_0$,
\begin{align}
\label{httda0}\partial_t\psi_0=&i\Omega_z\psi_1-2i\Omega_w\psi_-\ ,\\
\label{htzda0}\partial_z\psi_0=&\frac i{2r^2}\Omega_z\psi_1+
\frac i{r^2}\Omega_w\psi_-\ ,\\
\label{htwda0}\partial\psi_0=&\left(\frac i{r^2}\Omega_w+i\Omega_z\sigma_w
\right)\psi_1-2i\Omega_w\sigma_w\psi_-\ ,\\
\label{htwbda0}\bar\partial\psi_0=&i\Omega_z\sigma_{\bar w}\psi_1
-\left(\frac{ie^{2\Phi}}{2r^2}\Omega_z+2i\Omega_w\sigma_{\bar w}\right)\psi_-
+\frac{2iX\!\!\cdot\!g e^{2\Phi}}{\bar b r^2}\psi_{12}\ .
\end{align}
Differentiating \eqref{httda0}-\eqref{htwbda0} with respect to $t$ and taking into
account \eqref{time-der-psi}, one obtains
$\partial_t\partial_\mu\psi_0=\texttt{G}_0\partial_\mu\psi_0$.
Hence $\partial_t\psi_0=\texttt{G}_0\psi_0+\lambda$ where
$\lambda$ is an arbitrary constant. If $\texttt{G}_0\neq 0$, this implies
\begin{equation}
\psi_0 = -\frac{\lambda}{\texttt{G}_0} + \tilde\psi_0(z,w,\bar w)
e^{\texttt{G}_0t}\ .
\end{equation}
In that case one can set $\lambda=0$ without loss of generality, because
a nonvanishing $\lambda$ simply corresponds to adding a multiple of the
first Killing spinor to the second. The time-dependence of $\psi_0$ is thus
of the same exponential form as that of the other components of the second
Killing spinor,
\[
\psi_0=\tilde{\psi}_0(z,w,\bar w)e^{\texttt{G}_0t}\ ,\qquad
\psi_{\texttt{i}}=\tilde{\psi}_{\texttt{i}}(z,w,\bar w)e^{\texttt{G}_0t}\ .
\]
If $\texttt{G}_0$ vanishes we have
\begin{equation}
\psi_0 = \lambda t + \breve{\psi}_0(z,w,\bar w)\ , \qquad
\psi_{\texttt{i}} = \breve{\psi}_{\texttt{i}}(z,w,\bar w)
\end{equation}
(so that one cannot choose $\lambda=0$ in this case).
Plugging this time-dependence into the subsystem of the Killing
spinor equations not containing $\psi_0$ one obtains the following
reduced system for $\psi_{\texttt{i}}$:
\begin{align}
\label{thpsi01}\partial_z\psi_1&+\left(\frac{\texttt{G}_0}{2b\bar b}-
\frac{\partial_zb}b+iA_z\right)\psi_1+2\left(\frac{\partial b}b-iA_w\right)
\psi_-=0\ ,\\
\label{thpsi02}\partial_z\psi_2&+\left(\frac{\texttt{G}_0}{2b\bar b}-
\frac{\partial_z\bar b}{\bar b}-4i\frac{X\!\!\cdot\!g}{\bar b}-iA_z\right)\psi_2
-\left(\frac{\partial_zb}b-4i\frac{{\bar X}\!\!\cdot\!g}b-iA_z\right)\psi_{12}=0\ ,\\
\label{thpsi03}\partial_z\psi_{12}&+2e^{-2\Phi}\left(\frac{\bar\partial\bar b}
{\bar b}+iA_{\bar w}\right)\psi_1
+\left(\frac{\texttt{G}_0}{2b\bar b}-\frac{\partial_zb}b-\frac{\partial_z
\bar b}{\bar b}-4i\frac{X\!\!\cdot\!g}{\bar b}\right)\psi_{12}=0\ ,
\end{align}
\begin{align}
\label{thpsi04}\partial_z\psi_1&-\left(\frac{\texttt{G}_0}{2b\bar b}+
\frac{\partial_z\bar b}{\bar b}+iA_z\right)\psi_1
+2\left(\frac{\partial\bar b}{\bar b}+iA_w\right)\psi_-=0\ ,\\
\label{thpsi05}\partial_z\psi_2&-2e^{-2\Phi}\left(\frac{\bar\partial b}b
-iA_{\bar w}\right)\psi_1
-\left(\frac{\texttt{G}_0}{2b\bar b}+\frac{\partial_zb}b+\frac{\partial_z
\bar b}{\bar b}-4i\frac{{\bar X}\!\!\cdot\!g}b\right)\psi_2=0\ ,\\
\label{thpsi06}\partial_z\psi_{12}&-\left(\frac{\partial_z\bar b}{\bar b}
+4i\frac{X\!\!\cdot\!g}{\bar b}+iA_z\right)\psi_2
-\left(\frac{\texttt{G}_0}{2b\bar b}+\frac{\partial_zb}b-4i\frac{{\bar X}\!\!\cdot\!g}b-iA_z
\right)\psi_{12}=0\ ,
\end{align}
\begin{align}
\label{thpsi07}\partial\psi_1&-\texttt{G}_0\sigma_w\psi_1=0\ ,\\
\label{thpsi08}\partial\psi_2&+\left(\frac{\partial_zb}{2b}-2i\frac{{\bar X}\!\!\cdot\!g}b-
\frac i2A_z\right)\psi_1
-\left(\texttt{G}_0\sigma_w+\frac{\partial b}b+\frac{\partial\bar b}{\bar b}
-2\partial\Phi\right)\psi_2=0\ ,\\
\label{thpsi09}\partial\psi_{12}&-\left(\frac{\partial_z\bar b}{2\bar b}+
2i\frac{X\!\!\cdot\!g}{\bar b}+\frac i2A_z\right)\psi_1
-\left(\texttt{G}_0\sigma_w+\frac{\partial b}b+\frac{\partial\bar b}{\bar b}
-2\partial\Phi\right)\psi_{12}=0\ ,
\end{align}
\begin{align}
\label{thpsi10}\bar\partial\psi_1&-\left(\texttt{G}_0\sigma_{\bar w}+
\frac{\bar\partial b}b+\frac{\bar\partial\bar b}{\bar b}\right)\psi_1\nonumber\\
&-e^{2\Phi}\left[\left(\frac{\partial_zb}{2b}+\frac{\partial_z\bar b}{2\bar b}
\right)\psi_--2i\left(\frac{{\bar X}\!\!\cdot\!g}b\psi_2+\frac{X\!\!\cdot\!g}{\bar b}\psi_{12}\right)
\right]=0\ ,\\
\label{thpsi11}\bar\partial\psi_2&-\left(\texttt{G}_0\sigma_{\bar w}+
\frac{\bar\partial\bar b}{\bar b}+iA_{\bar w}\right)\psi_2
-\left(\frac{\bar\partial b}b-iA_{\bar w}\right)\psi_{12}=0\ ,\\
\label{thpsi12}\bar\partial\psi_{12}&-\left(\frac{\bar\partial\bar b}
{\bar b}+iA_{\bar w}\right)\psi_2-\left(\texttt{G}_0\sigma_{\bar w}+
\frac{\bar\partial b}b-iA_{\bar w}\right)\psi_{12}=0\ .
\end{align}
From the difference of eqns.~(\ref{thpsi02})-(\ref{thpsi06}) and
(\ref{thpsi11})-(\ref{thpsi12}) one gets respectively
\begin{equation}
\partial_z\psi_-=-\frac{\texttt{G}_0}{2b\bar b}\psi_+\ ,\qquad
\bar\partial\psi_-=\texttt{G}_0\sigma_{\bar w}\psi_-\ . \label{deriv-psi_-}
\end{equation}
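To illustrate how these combinations arise (an intermediate step we spell out), in the
difference of \eqref{thpsi11} and \eqref{thpsi12} the terms involving
$\bar\partial\bar b/\bar b+iA_{\bar w}$ and $\bar\partial b/b-iA_{\bar w}$ cancel pairwise, leaving
\begin{displaymath}
\bar\partial(\psi_2-\psi_{12})-\texttt{G}_0\sigma_{\bar w}(\psi_2-\psi_{12})=0\ ,
\end{displaymath}
which is precisely the second equation of \eqref{deriv-psi_-}.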
Furthermore, $[(\ref{thpsi05})-(\ref{thpsi03})-2e^{-2\Phi}(\ref{thpsi10})]$
yields
\begin{equation}
\bar\partial\psi_1=\frac{e^{2\Phi}}2\partial_z\psi_--\texttt{G}_0
\left(\frac{e^{2\Phi}}{4b\bar b}\psi_+-\sigma_{\bar w}\psi_1\right)\ .
\label{deriv-psi_1}
\end{equation}
Obviously for $\texttt{G}_0=0$, the equations (\ref{thpsi01})-(\ref{thpsi12})
simplify significantly. Let us now study this particular case under
the additional assumption $\psi_-\neq0$ and $\psi_1\neq0$.
\subsection {Case $\texttt{G}_0=0$, $\psi_-\neq0$ and $\psi_1\neq0$}
\label{psi_-neq0}
For $\texttt{G}_0=0$ one gets from \eqref{thpsi07}, \eqref{deriv-psi_-} and
\eqref{deriv-psi_1}
\[
\psi_1=\psi_1(z)\ ,\qquad \psi_-=\psi_-(w)\ .
\]
Assuming $\psi_-\neq0$, the gaugini equations \eqref{htgIII1}-\eqref{htgIII4}
imply
\begin{eqnarray}
\label{htgIV1}\partial_zz^\alpha&=&-\frac{4i}b g^{\alpha\bar\beta}
\mathcal{D}_{\bar\beta}\bar X^Ig_I\frac{\psi_-\bar\psi_2}{\psi_-\bar\psi_-+
e^{-2\Phi}\psi_1\bar\psi_1}\ ,\\
\label{htgIV2}\partial z^\alpha&=&\frac{\psi_1}{2\psi_-}\partial_zz^\alpha\ ,\\
\label{htgIV3}\bar\partial z^\alpha&=&\frac{\bar\psi_1}{2\bar\psi_-}\partial_z
z^\alpha\ ,\\
\label{htgIV4}0&=&g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar
X^Ig_I\left(\psi_2\bar\psi_2-\psi_{12}\bar\psi_{12}\right)\ .
\end{eqnarray}
From eqns.~(\ref{htgIV2}) and (\ref{htgIV3}) we obtain
\begin{equation}
A_z\psi_1-2A_w\psi_-=0\ . \label{AzAw}
\end{equation}
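Indeed (an intermediate step we make explicit), inserting \eqref{htgIV2} and the complex
conjugate of \eqref{htgIV3} into the definition \eqref{KaehlerU(1)} yields
\begin{displaymath}
A_w=-\frac i2\,\frac{\psi_1}{2\psi_-}\left(\partial_{\alpha}{\cal K}\,\partial_z z^{\alpha}
-\partial_{\bar\alpha}{\cal K}\,\partial_z\bar z^{\bar\alpha}\right)
=\frac{\psi_1}{2\psi_-}\,A_z\ ,
\end{displaymath}
which is equivalent to \eqref{AzAw}.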
(\ref{thpsi01})$+$(\ref{thpsi04}) and (\ref{thpsi03})$-$(\ref{thpsi05}) yield
respectively
\begin{align}
\label{thG01}&\partial_z\psi_1=\psi_1\partial_z\ln|b|-2\psi_-\partial\ln|b|\ ,\\
\label{thG02}&0=\psi_-\partial_z\ln|b|+2e^{-2\Phi}\psi_1\bar\partial\ln|b|
-2i\left(\frac{{\bar X}\!\!\cdot\!g}b\psi_2+\frac{X\!\!\cdot\!g}{\bar b}\psi_{12}\right)\ .
\end{align}
Using (\ref{thG01}) and (\ref{thG02}) it is easy to show that
\begin{equation}
\label{thG03}\bar\psi_1\partial_z\psi_1-\psi_1\partial_z\bar\psi_1=2ie^{2\Phi}
\left(\frac{X\!\!\cdot\!g}{\bar b}+\frac{{\bar X}\!\!\cdot\!g}b\right)(\psi_2\bar\psi_2-
\psi_{12}\bar\psi_{12})\ .
\end{equation}
Because we are interested only in the case in which
$g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar X^Ig_I\neq0$\footnote{One readily
shows that $g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar X^Ig_I=0$ leads to
$\partial_{\bar\beta}V=0$, where $V$ is the scalar potential \eqref{scal-pot}.
Unless there are flat directions in the potential, these equations
completely fix the moduli which are thus constant.}, (\ref{htgIV4}) implies
$|\psi_2|=|\psi_{12}|$ and thus from (\ref{thG03}) one gets
\begin{equation}
\label{thG04}\bar\psi_1\partial_z\psi_1-\psi_1\partial_z\bar\psi_1=0\ .
\end{equation}
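Indeed, writing $\psi_1=\zeta e^{i\theta}$ with $\zeta$ and $\theta$ real, the left-hand
side of \eqref{thG04} equals $2i\zeta^2\partial_z\theta$, so the phase of $\psi_1$ must be
constant.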
Hence $\psi_1=\zeta(z)e^{i\theta_0}$ where $\theta_0$ is a constant
and $\zeta(z)$ is a real function. By rescaling
$\psi_{\texttt{i}}\rightarrow e^{-i\theta_0}\psi_{\texttt{i}}$ we can take
$\psi_1$ real and positive without loss of generality. By assumption both
$\psi_1$ and $\psi_-$ are non-vanishing, which allows us to introduce new
coordinates $Z$, $W$ and $\bar W$ such that
\[
dZ=-\frac{2dz}{\psi_1(z)}\ ,\qquad dW=\frac{dw}{\psi_-(w)}\ ,\qquad d\bar
W=\frac{d\bar w}{\bar\psi_-(\bar w)}\ .
\]
Note that one can set $\psi_-=1$ using the residual gauge invariance
$w\mapsto W(w)$,
$\Phi\mapsto\Phi-\frac12\ln(dW/dw)-\frac12\ln(d\bar W/d\bar w)$ leaving
invariant the metric $e^{2\Phi}dwd\bar w$. We can thus take $W=w$ in the
following. (\ref{thpsi01}) and (\ref{thpsi04}) are then equivalent to
\[
(\partial_Z+\partial)\varphi=0\ ,\qquad
\partial_Z\ln\psi_1-(\partial_Z+\partial)\ln r=0\ .
\]
From the real part of the first equation one has
\[
\varphi=\varphi(Z-w-\bar w)\ .
\]
Using $\psi_1=\psi_1(Z)$, the second equation implies
\begin{equation}
(\partial_Z+\partial)\frac{r}{\psi_1}=0\ , \label{r/psi_1}
\end{equation}
and therefore
\[
\frac{r}{\psi_1}=\rho(Z-w-\bar w)\ .
\]
The function $b$ must thus have the form
\[
b(Z,w,\bar w)=\psi_1(Z)B(Z-w-\bar w)\ ,
\]
where $B(Z-w-\bar w)=\rho(Z-w-\bar w)e^{i\varphi(Z-w-\bar w)}$. Taking into
account \eqref{dzPhi} and \eqref{r/psi_1}, the difference between
(\ref{thpsi08}) and (\ref{thpsi09}) yields
\[
(\partial_Z+\partial)(\ln\psi_1-\Phi)=0\ ,
\]
so that $\ln\psi_1-\Phi=-H(Z-w-\bar w)$ with $H$ real. This gives
\[
e^{2\Phi}=\psi_1^2e^{2H}
\]
for the conformal factor. The conditions (\ref{htgIV1})-(\ref{htgIV4})
coming from the gaugino variations boil down to
\begin{align}
\label{htgV1}&\partial_Z
z^\alpha=\frac iBg^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}\bar X^Ig_I
\frac{1-\psi_+}{1+e^{-2H}}\ ,\\
\label{htgV2}&\partial z^\alpha=\bar\partial z^\alpha=-\partial_Z z^\alpha\ ,\\
\label{htgV3}&\bar\psi_+=-\psi_+\ .
\end{align}
From equation (\ref{htgV2}) we obtain that $z^{\alpha}=z^{\alpha}(Z-w-\bar w)$.
In terms of the new coordinate $Z$, \eqref{dzPhi} reads
\[
\partial_Z\Phi+i\left(\frac{{\bar X}\!\!\cdot\!g}{B}-\frac{X\!\!\cdot\!g}{\bar B}\right)=0\ .
\]
Using the definition of $H$ we get
\begin{equation}
\label{thpsic}\partial_Z\ln\psi_1=-\dot{H}-i\left(\frac{{\bar X}\!\!\cdot\!g}B-\frac{X\!\!\cdot\!g}{\bar B}
\right)\ ,
\end{equation}
where a dot denotes a derivative w.r.t.~$Z-w-\bar w$. As the lhs depends only
on $Z$ and the rhs depends only on $Z-w-\bar w$, we can conclude that
$\partial_Z\ln\psi_1=\kappa$ with some real constant $\kappa$, i.e.,
$\psi_1(Z)=\psi_1^{(0)}e^{\kappa Z}$. By shifting $Z$ one can set
$\psi_1^{(0)}=1$. The only remaining nontrivial equations in the system
(\ref{thpsi01})-(\ref{thpsi12}) read
\begin{align}
\label{thpsiII1}&\partial_Z\psi_+-2\left(\frac{\dot{\rho}}{\rho}-\dot{H}\right)
\psi_++2i\left(\dot{\varphi}-A_Z\right)+2i\left(\frac{{\bar X}\!\!\cdot\!g}B+\frac{X\!\!\cdot\!g}{\bar B}
\right)=0\ ,\\
\label{thpsiII2}&\partial_Z\psi_+-\left(2\frac{\dot{\rho}}{\rho}-\dot{H}+
\kappa\right)\psi_+-2ie^{-2H}\left(\dot{\varphi}-A_Z\right)-i\left(\frac{{\bar X}\!\!\cdot\!g}B
+\frac{X\!\!\cdot\!g}{\bar B}\right)=0\ ,\\
\label{thpsiII3}&\partial\psi_++2\left(\frac{\dot{\rho}}{\rho}-\dot{H}\right)
\psi_+-2i\left(\dot{\varphi}-A_Z\right)-2i\left(\frac{{\bar X}\!\!\cdot\!g}B+\frac{X\!\!\cdot\!g}{\bar B}
\right)=0\ ,\\
\label{thpsiII4}&\bar\partial\psi_++2\frac{\dot{\rho}}{\rho}\psi_+-
2i\left(\dot{\varphi}-A_Z\right)=0\ ,\\
\label{thpsiII5}&i\left(\frac{{\bar X}\!\!\cdot\!g}B+\frac{X\!\!\cdot\!g}{\bar B}\right)\psi_++
2\left(1+e^{-2H}\right)\frac{\dot{\rho}}{\rho}-\dot{H}+\kappa=0\ .
\end{align}
From (\ref{thpsiII1})+(\ref{thpsiII3}) and
(\ref{thpsiII1})+(\ref{thpsiII4}) we obtain respectively
\begin{align}
\label{thpsiIIc1}&\left(\partial_Z+\partial\right)\psi_+=0\ ,\\
\label{thpsiIIc2}&\left(\partial_Z+\bar\partial\right)\psi_+=-2\dot{H}\psi_+
-2i\left(\frac{{\bar X}\!\!\cdot\!g}B+\frac{X\!\!\cdot\!g}{\bar B}\right)\ .
\end{align}
Since $\psi_+$ is imaginary (cf.~\eqref{htgV3}), \eqref{thpsiIIc1} implies
$\psi_+=\psi_+(Z-w-\bar w)$ so that \eqref{thpsiIIc2} yields
\begin{equation}
\label{thpsiIIc3}\dot{H}\psi_++i\left(\frac{{\bar X}\!\!\cdot\!g}B+\frac{X\!\!\cdot\!g}{\bar B}\right)=0\ .
\end{equation}
Using this information, eqns.~(\ref{thpsiII1})-(\ref{thpsiII5}) reduce
further to
\begin{align}
\label{thpsiIII1}\left[\left(1+e^{2H}\right)\frac{\psi_+}{\rho^2}
\right]^{\!\text{\large{$\cdot$}}}-\kappa e^{2H}\frac{\psi_+}{\rho^2}&=0\ ,\\
\label{thpsiIII2}\left(\frac{\psi_+}{\rho^2}\right)^{\!\text{\large{$\cdot$}}}
+2i\frac{\dot{\varphi}-A_Z}{\rho^2}&=0\ ,\\
\label{thpsiIII3}\dot{H}\left(1+\psi_+^2\right)-2\frac{\dot{\rho}}{\rho}
\left(1+e^{-2H}\right)&=\kappa\ .
\end{align}
Eliminating $\dot{\rho}/\rho$ from \eqref{thpsiIII1} and \eqref{thpsiIII3}
leads to
\begin{equation}
\label{thpsiIV1-3b}\dot{H}\psi_+(1-\psi_+^2)+(1+e^{-2H})\dot{\psi}_+=0\ ,
\end{equation}
that can be integrated to give
\begin{equation}
\label{thpsi+}\psi_+=\frac{ia}{\sqrt{1+e^{2H}-a^2}}\ ,
\end{equation}
where $a$ is a real integration constant.
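This can be verified directly (a short check we include): writing $\psi_+=is$ with $s$ real,
\eqref{thpsiIV1-3b} becomes $\dot Hs(1+s^2)+(1+e^{-2H})\dot s=0$, and for
$s=a(1+e^{2H}-a^2)^{-1/2}$ one has
\begin{displaymath}
1+s^2=\frac{1+e^{2H}}{1+e^{2H}-a^2}\ ,\qquad
\dot s=-\frac{a\,e^{2H}\dot H}{(1+e^{2H}-a^2)^{3/2}}\ ,
\end{displaymath}
so that the two terms cancel. To proceed we observe that from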
\eqref{thpsic} and \eqref{thpsiIIc3} one obtains for the function $B$,
\begin{equation}
\label{thB} B=-\frac{2i{\bar X}\!\!\cdot\!g}{\dot H(1+\psi_+)+\kappa}\ ,
\end{equation}
and thus for its absolute value $\rho$ and phase $\varphi$
\begin{align}
\label{thBph}\rho^{-2}&=\frac{(\kappa+\dot{H})^2-\dot{H}^2\psi_+^2}{4X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ ,\\
\label{thBmo}\tan\varphi&=i\frac{(X\!\!\cdot\!g+{\bar X}\!\!\cdot\!g)(\kappa+\dot{H})+(X\!\!\cdot\!g-{\bar X}\!\!\cdot\!g)(\dot{H}
\psi_+)}{(X\!\!\cdot\!g-{\bar X}\!\!\cdot\!g)(\kappa+\dot{H})+(X\!\!\cdot\!g+{\bar X}\!\!\cdot\!g)(\dot{H}\psi_+)}\ .
\end{align}
Using \eqref{thBph}, \eqref{thpsiIII3} yields a relation between $H$ and $X\!\!\cdot\!g$,
\begin{eqnarray}
\label{thpsiIV1+3}0&=&2\left(1+e^{-2H}\right)\ddot{H}+\dot{H}^2\left(1+3\psi_+^2\right)-\kappa^2\nonumber\\
&&-\frac{\left(\dot{H}+\kappa\right)^2-\dot{H}^2\psi_+^2}{\dot{H}\left(1-\psi_+^2\right)+\kappa}
\left(1+e^{-2H}\right)\left(\frac{\dot{X}\!\!\cdot\!g}{X\!\!\cdot\!g}+\frac{{\dot{\bar X}}\!\!\cdot\!g}{{\bar X}\!\!\cdot\!g}\right)\ ,
\end{eqnarray}
while (\ref{thpsiIII2}) gives $A_Z$,
\begin{align}\label{thH1}
&A_Z=\frac i2\left\{(1+\psi_+)\frac{\dot{X}\!\!\cdot\!g}{X\!\!\cdot\!g}-(1-\psi_+)\frac{{\dot{\bar X}}\!\!\cdot\!g}{{\bar X}\!\!\cdot\!g}\right. \\
&\left.-\frac{\dot{H}\psi_+\left(1-\psi_+^2\right)\left(1+e^{-2H}\right)^{-1}}{\left(\dot{H}+\kappa\right)^2-\dot{H}^2\psi_+^2}\left[2\left(1+e^{-2H}\right)\ddot{H}+\dot{H}^2\left(1+3\psi_+^2\right)-\kappa^2\right]\right\}\ . \nonumber
\end{align}
Making use of \eqref{thpsiIV1+3}, this boils down to
\begin{equation}\label{thAZ}
A_Z=-\left[\dot{H}\left(1-\psi_+^2\right)+\kappa\right]^{-1}
\mbox{Im}\left\{\left[\dot{H}\left(1-\psi_+\right)+\kappa\right](1+\psi_+)\frac{\dot{X}\!\!\cdot\!g}{X\!\!\cdot\!g}\right\}\ .
\end{equation}
The condition \eqref{Delta-Phi} is then automatically satisfied: Plugging the relation
\begin{displaymath}
\dot{X}\!\!\cdot\!g+iA_ZX\!\!\cdot\!g=\dot{z}^{\alpha}{\cal D}_{\alpha}X\!\!\cdot\!g=\frac iB g^{\alpha\bar\beta}{\cal D}_{\alpha}X\!\!\cdot\!g
{\cal D}_{\bar\beta}{\bar X}\!\!\cdot\!g\frac{1-\psi_+}{1+e^{-2H}}\ ,
\end{displaymath}
(where we used \eqref{htgV1} in the second step) into
\begin{displaymath}
-\frac12(\text{Im}\,{\cal N})^{-1|IJ}g_Ig_J=X\!\!\cdot\!g{\bar X}\!\!\cdot\!g+g^{\alpha\bar\beta}{\cal D}_{\alpha}X\!\!\cdot\!g
{\cal D}_{\bar\beta}{\bar X}\!\!\cdot\!g\ ,
\end{displaymath}
that follows from special geometry \cite{Vambroes}, one gets
\begin{displaymath}
(\mbox{Im}\,\mathcal{N})^{-1|IJ}g_Ig_J=-2X\!\!\cdot\!g{\bar X}\!\!\cdot\!g+\frac{4{\bar X}\!\!\cdot\!g}{\dot{H}(1+\psi_+)+\kappa}
\frac{1+e^{-2H}}{1-\psi_+}\left(\dot{X}\!\!\cdot\!g+iA_ZX\!\!\cdot\!g\right)\ .
\end{displaymath}
Inserting this into \eqref{Delta-Phi}, the latter becomes
\begin{eqnarray}
\label{thF=dA}0&=&2\left(1+e^{-2H}\right)\ddot{H}+\dot{H}^2\left(1+3\psi_+^2\right)-\kappa^2\nonumber\\
&&-2\left[\dot{H}(1-\psi_+)+\kappa\right]\frac{1+e^{-2H}}{1-\psi_+}
\left(\frac{\dot{X}\!\!\cdot\!g}{X\!\!\cdot\!g}+i A_Z\right)\ ,
\end{eqnarray}
which coincides with (\ref{thpsiIV1+3}) once we substitute in it the
expression (\ref{thAZ}) for $A_Z$.
The Bianchi identities (\ref{bianchi}) and Maxwell equations \eqref{maxwell}
can be integrated once, with the result
\begin{eqnarray}\label{thbianchi}
&&(1+e^{2H})\left(\frac{X^I}{\bar B}-\frac{\bar
X^I}B\right)^{\!\text{\large{$\cdot$}}}-\kappa e^{2H} \left(\frac{X^I}{\bar
B}-\frac{\bar X^I}B\right)\nonumber\\
&+&ie^{2H}\left[\frac{\left(\mbox{Im}\,\mathcal{N}\right)^{-1|IJ}g_J}{B\bar
B}+2i\dot{H}\psi_+\left(\frac{X^I}{\bar B}+\frac{\bar X^I}B\right)\right]=ip^I\ ,
\end{eqnarray}
\begin{eqnarray}\label{thmaxwell}
&&(1+e^{2H})\left(\frac{F_I}{\bar B}-\frac{\bar
F_I}B\right)^{\!\text{\large{$\cdot$}}}-\kappa e^{2H} \left(\frac{F_I}{\bar
B}-\frac{\bar F_I}B\right)-g_Ie^{2H}\frac{\psi_+}{\rho^2}\nonumber\\
&+&ie^{2H}\left[\frac{\mbox{Re}\,\mathcal{N}_{IL}\left(\mbox{Im}\,\mathcal{N}\right)^{-1|JL}g_J}{B\bar
B}+2i\dot{H}\psi_+\left(\frac{F_I}{\bar B}+\frac{\bar F_I}B\right)\right]=iq_I\ ,
\end{eqnarray}
where $p^I,q_I$ are integration constants. It is straightforward to show that \eqref{thbianchi}
and \eqref{thmaxwell} are implied by \eqref{htgV1}, \eqref{thpsiIII2}-\eqref{thpsiIV1-3b} and
\eqref{thB} iff $p^I=q_I=0$\footnote{This does not mean that all the fluxes vanish.}.
Finally, the shift vector $\sigma$ follows from \eqref{dsigma} that simplifies to
\begin{equation}
\label{thsigma}\partial_Z\sigma_w=\frac{e^{-\kappa
Z}}4\left(\frac{\psi_+}{\rho^2}\right)^{\!\text{\large{$\cdot$}}}\ , \qquad
\partial\sigma_{\bar w}-\bar\partial\sigma_w=-\frac{e^{-\kappa
Z}}2\left(e^{2H}\frac{\psi_+}{\rho^2}\right)^{\!\text{\large{$\cdot$}}}\ ,
\end{equation}
whose solution is
\begin{equation}{\label{thsigmaint}}
\sigma=-\frac{e^{-\kappa Z}}4e^{2H}\frac{\psi_+}{\rho^2}(dw-d\bar w)\ .
\end{equation}
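As a consistency check (added here), \eqref{thsigmaint} indeed solves \eqref{thsigma}:
since $e^{2H}\psi_+/\rho^2$ depends only on $Z-w-\bar w$,
\begin{displaymath}
\partial_Z\sigma_w=\frac{e^{-\kappa Z}}4\left[\kappa\,e^{2H}\frac{\psi_+}{\rho^2}
-\left(e^{2H}\frac{\psi_+}{\rho^2}\right)^{\!\text{\large{$\cdot$}}}\right]
=\frac{e^{-\kappa Z}}4\left(\frac{\psi_+}{\rho^2}\right)^{\!\text{\large{$\cdot$}}}\ ,
\end{displaymath}
where the second equality uses \eqref{thpsiIII1}, while the second equation of
\eqref{thsigma} follows from $\partial f=\bar\partial f=-\dot f$ for any function $f$
of $Z-w-\bar w$.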
Note that in the case $\kappa\neq0$ one can always set $\kappa=1$ by
rescaling the coordinates.
The missing component $\psi_0$ of the second Killing spinor is determined
by the system \eqref{httda0}-\eqref{htwbda0} that can be integrated straightforwardly.
This yields (after going back to the original basis)
\begin{align}
\alpha&=\hat\alpha-2\kappa t-\frac{\psi_1}{2b\bar b}-\frac{e^{2\Phi}\psi_+}{2\psi_1b\bar b}\ , \qquad
\beta=-\frac{e^{\Phi}}{2|b|}\left(1-\psi_+\right)\ , \nonumber\\
\gamma&=-\frac{\beta}b\ , \qquad \delta=-\bar b\bar\alpha-\frac{\psi_1}b
\end{align}
for the second Killing spinor. Here, $\hat\alpha$ denotes an integration constant.
As is clear from \eqref{delta-psi} and \eqref{delta-lambda}, $C(\epsilon^1,\epsilon_2)$,
with $C\in\mathbb{C}$ an arbitrary constant, is again Killing if $(\epsilon^1,\epsilon_2)$ is.
This means that multiplication of $\alpha$ and $\beta$ by $C$ and of $\gamma$ and $\delta$
by $\bar C$ gives again a solution of the Killing spinor equations.
Choosing $\hat\alpha=1/C$, in order to obtain the first Killing spinor when
$C\rightarrow0$, the norm squared of the associated Killing vector
$V_{\mu}=A(\epsilon^i,\gamma_{\mu}\epsilon_i)$ (with $A$ given in \eqref{Majorana})
turns out to be
\begin{align}
V^2=&-4|b|^2\left\{|1-2\kappa C t|^2-\left[\frac{|C|\psi_1\left(1+e^{2H}\right)}{2|b|^2}\right]^2
\frac{1-a^2}{1+e^{2H}-a^2}\right.\nonumber\\
&\left.+\frac{\psi_1e^{2H}}{|b|^2}\frac{a\mbox{Im}C}{\sqrt{1+e^{2H}-a^2}}\right\}^2
-\left(\frac{2\psi_1\mbox{Im}C}{|b|}\right)^2\ .
\end{align}
For $V^2=0$ the solution belongs also to the null class considered
in \cite{Klemm:2009uw}. This happens for $\mbox{Im}C=0$, $\kappa=0$,
$a^2<1$ and
\begin{equation}\label{htV2=0}
\dot{H}=\sqrt{\frac{8X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}{|C|(1-a^2)^{1/2}}}\frac{(1+e^{2H}-a^2)^{3/4}}
{1+e^{2H}}\ .
\end{equation}
(\ref{htV2=0}) is actually the general form of $\dot{H}$ in the case $\kappa=0$.
To see this, observe that (\ref{thpsiIII1}) implies
\begin{equation}
(1+e^{2H})\frac{\psi_+}{\rho^2}=ih_0\ ,
\end{equation}
if $\kappa=0$, where $h_0$ is a real integration constant. Using the expressions
(\ref{thpsi+}) and (\ref{thBph}) for $\psi_+$ and $\rho^2$ we obtain
exactly (\ref{htV2=0}), with $h_0$ and $C$ related by
$h_0|C|(1-a^2)^{1/2}=2a$. Plugging the expression for $\dot H$ into
(\ref{htgV1}) we find that the scalars have to satisfy the flow equation
\begin{equation}\label{thg0z}
\dot{z}^\alpha=-\left(\frac{h_0X\!\!\cdot\!g}{a{\bar X}\!\!\cdot\!g}\right)^{1/2}
\frac{g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}{\bar X}\!\!\cdot\!g}{\left(1+e^{-2H}\right)
\left(1+e^{-2H}-a^2\right)^{1/4}}\ .
\end{equation}
Using $w=x+iy$ and $dZ=\frac{dH}{\dot H}+2dx$, the metric reads
\begin{equation}
ds^2 = -4\rho^2\left[dt-e^{2H}\frac{i\psi_+}{2\rho^2}dy\right]^2
+ \frac 1{4\rho^2}\left(\frac{dH}{\dot H}+2dx\right)^2 + \frac{e^{2H}}{\rho^2}
(dx^2+dy^2)\ , \label{metr-kappa=0}
\end{equation}
where $\psi_+$, $\rho^2$ and $\dot H$ are given by \eqref{thpsi+},
\eqref{thBph} and \eqref{htV2=0} respectively. As a check, let us show
that this solution does indeed coincide with one of the 1/2 BPS lightlike
cases classified in \cite{Klemm:2009uw}. To this end, consider the
coordinate transformation
\begin{displaymath}
u = \frac{2a}{h_0}(1-a^2)^{-1/2}t + x + \mu(\chi)\ , \qquad
v = \frac t{\sqrt 2} - \frac{h_0}{2\sqrt 2a}(1-a^2)^{1/2}x +
\nu(\chi)\ ,
\end{displaymath}
\begin{displaymath}
\Psi = 4a\left(\frac a{h_0}\right)^{1/2}(1-a^2)^{-1/4}t
- 2\left(\frac{h_0}a\right)^{1/2}(1-a^2)^{3/4}y\ ,
\end{displaymath}
\begin{displaymath}
\coth\chi = (1-a^2)^{-1/2}(1+e^{2H}-a^2)^{1/2}\ ,
\end{displaymath}
with
\begin{displaymath}
\frac{d\nu}{d\chi} = \frac{(\tanh\chi)^{1/2}}{8\sqrt 2(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g)^{1/2}
(1-a^2)^{1/4}}\left(\frac{h_0}a\right)^{1/2}\ , \qquad
\frac{d\mu}{d\chi} = -\frac{2\sqrt 2a}{h_0}(1-a^2)^{-1/2}
\frac{d\nu}{d\chi}\ .
\end{displaymath}
Then, the metric \eqref{metr-kappa=0}, the fluxes \eqref{fluxes} and the flow
equation \eqref{thg0z} become
\begin{equation}
ds^2 = -2\sqrt2\coth\chi\, du\, dv + \frac{d\chi^2}{16\sinh^2\!\chi\,X\!\!\cdot\!g\,{\bar X}\!\!\cdot\!g} +
\frac{d\Psi^2}{2\sinh2\chi}\ ,
\end{equation}
\begin{equation}
F^I = \frac{(\text{Im}\,{\cal N})^{-1|IJ}g_J}{4\cosh^2\!\chi(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g
\tanh\chi)^{1/2}}d\Psi\wedge d\chi\ , \qquad
\frac{dz^{\alpha}}{d\chi} = \frac{g^{\alpha\bar\beta}{\cal D}_{\bar\beta}{\bar X}\!\!\cdot\!g}
{{\bar X}\!\!\cdot\!g\sinh 2\chi}\ , \label{flow-kappa=0}
\end{equation}
which are exactly the eqns.~(5.33), (5.34) and (5.24) of \cite{Klemm:2009uw}.
We also see that in this case, $a$ can be eliminated by a diffeomorphism,
and thus is not really a parameter of the solution.
\subsubsection{Summary}
\label{summary}
In the case $D_zP+ie^{-2\Phi}b^2F_{zw}\neq0$ and $\texttt{G}_0=0$ and under
the additional assumptions $\psi_-\neq0$ and $\psi_1\neq0$, the
fields are given in terms of the solutions of the system
\begin{equation}
\dot{z}^\alpha=-\left[\dot{H}(1+\psi_+)+\kappa\right]\frac{1-\psi_+}{1+e^{-2H}}
\frac{g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}{\bar X}\!\!\cdot\!g}{2{\bar X}\!\!\cdot\!g} \label{flow-gen}
\end{equation}
and \eqref{thpsiIV1+3}, where $\kappa=0,1$, the scalars $z^\alpha$ and the
real function $H$ depend only on the combination $Z-w-\bar w$, and $\psi_+$ is
given by \eqref{thpsi+}, with $a\in\mathbb{R}$ an arbitrary constant.
Furthermore, a dot denotes a derivative w.r.t.~$Z-w-\bar w$. Once a solution
($z^{\alpha},H$) is determined, one defines $\rho$ by \eqref{thBph}.
Then, the metric and the fluxes read respectively
\begin{equation}
ds^2=-4\rho^2e^{2\kappa Z}\left[dt-e^{2H-\kappa Z}\frac{\psi_+}{4\rho^2}
(dw-d\bar w)\right]^2+\frac1{\rho^2}\left(\frac{dZ^2}4+e^{2H}dwd\bar w\right)\ ,
\end{equation}
\begin{align}
F^I=&8\kappa e^{\kappa Z}\mbox{Im}\left[\frac{{\bar X}\!\!\cdot\!g
X^I}{\dot{H}(1+\psi_+)+\kappa}\right]dt\wedge dZ\nonumber\\
&+\frac{2ie^{\kappa Z}}{1+e^{-2H}}\left\{\psi_+\left(\mbox{Im}\mathcal{N}
\right)^{-1|IJ}g_J\right.\nonumber\\
&\left.+4i\kappa\mbox{Im}\left[\frac{(1+\psi_+){\bar X}\!\!\cdot\!g
X^I}{\dot{H}(1+\psi_+)+\kappa}\right]\right\}dt\wedge d(Z-w-\bar w)\nonumber\\
&+\frac{i\left[\left(\dot{H}+\kappa\right)^2-\dot{H}^2\psi_+^2\right]
\left(1+e^{2H}\psi_+^2\right)}{4X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\left(1+e^{-2H}\right)}
\left\{\left(\mbox{Im}\mathcal{N}\right)^{-1|IJ}g_J\right.\nonumber\\
&\left.+4\kappa\mbox{Re}\left[\frac{{\bar X}\!\!\cdot\!g X^I}{\dot{H}(1+\psi_+)+\kappa}\right]
\right\}\left[\frac{dZ}2\wedge(dw-d\bar w)+dw\wedge d\bar w\right]\ .
\end{align}
\subsubsection{Explicit solutions}
We shall now give some explicit solutions for the simple model determined by
the prepotential $F=-iZ^0Z^1$ that has $n_V=1$ (one vector multiplet), and thus
just one complex scalar $\tau$. Choosing $Z^0=1$, $Z^1=\tau$
(cf.~\cite{Vambroes}), the symplectic vector $v$ reads
\begin{equation}
v = \left(\begin{array}{c} 1 \\ \tau \\ -i\tau \\ -i\end{array}\right)\ .
\end{equation}
The K\"ahler potential, metric and kinetic matrix for the vectors
are given respectively by
\begin{equation} e^{-{\cal K}} = 2(\tau + \bar\tau)\ ,
\qquad g_{\tau\bar\tau} = \partial_\tau\partial_{\bar\tau}{\cal K} = (\tau +
\bar\tau)^{-2}\ ,
\end{equation}
\begin{equation}
{\cal N} = \left(\begin{array}{cc} -i\tau & 0 \\ 0 & -\frac i\tau
\end{array}\right)\ .
\end{equation}
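As an illustration (spelling out the first of these relations), the symplectic pairing of
$v$ with its conjugate gives
\begin{displaymath}
e^{-{\cal K}}=-i\langle v\,,\bar v\rangle
=-i\left[(1)(i\bar\tau)+\tau(i)-(-i\tau)(1)-(-i)\bar\tau\right]=2(\tau+\bar\tau)\ .
\end{displaymath}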
Note that positivity of the kinetic terms in the action requires
${\mathrm{Re}}\tau>0$. For the scalar potential one obtains
\begin{equation}
V = -\frac4{\tau+\bar\tau}(g_0^2 + 2g_0g_1\tau + 2g_0g_1\bar\tau +
g_1^2\tau\bar\tau)\ ,
\end{equation}
which has an extremum at $\tau=\bar\tau=|g_0/g_1|$. In what
follows we assume $g_I>0$. The K\"ahler U(1) is
\begin{equation} A_{\mu} = \frac i{2(\tau+\bar\tau)}\partial_{\mu}(\tau-\bar\tau)\ .
\end{equation}
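As a quick check of the extremum quoted above (added for completeness), restricting to
real $\tau$ the potential reduces to $V=-2(g_0^2/\tau+4g_0g_1+g_1^2\tau)$, whence
\begin{displaymath}
\frac{dV}{d\tau}=-2\left(g_1^2-\frac{g_0^2}{\tau^2}\right)=0\quad\Longrightarrow\quad
\tau=\left|\frac{g_0}{g_1}\right|\ ,\qquad V\Big|_{\mathrm{extr}}=-12\,g_0g_1\ ,
\end{displaymath}
corresponding (for $g_I>0$) to an AdS$_4$ vacuum.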
In order to proceed we shall take $\tau=\bar\tau$ (this includes the
extremum of the potential and thus the AdS vacuum). Then $A=0$ and
equation (\ref{thAZ}) imposes $\kappa\psi_+=0$ if $\dot{X}\!\!\cdot\!g\neq0$.
The case $\kappa=0$ was considered in generality above, and an explicit
solution of the flow equation \eqref{flow-kappa=0} for the prepotential of this
paragraph can be found in section 4.5 of \cite{Klemm:2009uw} (put
${\cal G}=0$ there). Thus, we shall focus on the case $\psi_+=0$ in the
following. Then, eqns.~(\ref{thpsiIV1+3}) and (\ref{flow-gen}) boil down to
\begin{equation}
\label{thscalarIp}2(1+e^{-2H})\ddot{H}+\dot{H}^2-\kappa^2+\left(1+e^{-2H}\right)
(\dot{H}+\kappa)\frac{g_0-g_1\tau}{g_0+g_1\tau}\frac{\dot{\tau}}{\tau}=0\ ,
\end{equation}
\begin{equation}
\label{thscalarIIp}\frac{\dot{\tau}}{\tau}=\frac{\dot{H}+\kappa}{1+e^{-2H}}
\frac{g_0-g_1\tau}{g_0+g_1\tau}\ .
\end{equation}
Plugging \eqref{thscalarIIp} into \eqref{thscalarIp} yields an expression
for $\tau$ in terms of $H$ and its derivatives. Reinserting this into
\eqref{thscalarIIp} gives a third order differential equation for $H$ only,
\begin{equation}
\left(1+e^{-2H}\right)^2\dddot{H}+\left[\left(3-2e^{-2H}\right)\left(1+e^{-2H}
\right)\ddot{H}+\dot{H}^2-\kappa^2\right]\dot{H}=0\ ,
\end{equation}
that can be integrated twice, with the result
\begin{equation}
\dot{H}=\frac1{\left(1+e^{2H}\right)^{1/4}}\sqrt{2E_1+\frac{E_2}{2\left(1+
e^{2H}\right)^{1/2}}+\kappa^2\left(1+e^{2H}\right)^{1/2}}\ ,
\end{equation}
where $E_1$ and $E_2$ are two integration constants. If ${\dot H}\neq0$,
we can use the function $H$ in place of $w+\bar w$ as a new coordinate.
Using $w=x+iy$, in the coordinate system $\{t,H,y,Z\}$ the solution is given by
\begin{align}
ds^2=&-\left[\frac{2(g_0+g_1\tau)}{\sqrt\tau\left(\dot{H}+\kappa\right)}
\right]^2e^{2\kappa Z}dt^2\nonumber\\
&+\left[\frac{2(g_0+g_1\tau)}{\sqrt\tau\left(\dot{H}+\kappa\right)}\right]^{-2}
\left[dZ^2+e^{2H}\left(dZ-\frac{dH}{\dot{H}}\right)^2+4e^{2H}dy^2\right]\ ,
\label{metr-expl}
\end{align}
\begin{align}
F^0=&-\frac{\left(\dot{H}+\kappa\right)\left(\kappa g_1\tau-g_0\dot{H}\right)}
{\dot{H}\left(g_0+g_1\tau\right)^2\left(1+e^{-2H}\right)}dH\wedge dy\ ,
\nonumber\\
F^1=&-\frac{\tau\left(\dot{H}+\kappa\right)\left(\kappa g_0-g_1\dot{H}\tau
\right)}{\dot{H}\left(g_0+g_1\tau\right)^2\left(1+e^{-2H}\right)}dH\wedge dy\ ,
\end{align}
\begin{equation}
\tau=\frac{g_0}{g_1}\frac{\sqrt2(\dot H+\kappa)\left(1+e^{2H}\right)^{1/2}-
\sqrt{E_2}}{\sqrt2(\dot H+\kappa)\left(1+e^{2H}\right)^{1/2}+\sqrt{E_2}}\ .
\end{equation}
Asymptotically for $H\to\infty$ the scalar field goes to its critical value,
$\tau\to g_0/g_1$, and the metric approaches AdS$_4$.
A more detailed analysis of the geometry \eqref{metr-expl} will be
presented elsewhere.
\subsection{$\texttt{G}_0=\psi_-=0$}
\label{G_0=psi_-=0}
For $\texttt{G}_0=\psi_-=0$ one has $\psi_1=\psi_1(z)$ by virtue of \eqref{thpsi07}
and \eqref{deriv-psi_1}. Moreover, the sum of \eqref{thpsi01} and \eqref{thpsi04}
yields
\begin{equation}
\psi_1=r\chi(w,\bar w)\ , \label{psi_1=rchi}
\end{equation}
with $\chi(w,\bar w)$ an arbitrary function, while the difference of \eqref{thpsi01}
and \eqref{thpsi04} implies $A_z=\partial_z\varphi$. Subtracting \eqref{thpsi09} from
\eqref{thpsi08} leads to
\begin{equation}
\partial_z\ln r + 2i\left(\frac{X\!\!\cdot\!g}{\bar b}-\frac{{\bar X}\!\!\cdot\!g}b\right) = 0\ . \label{dzlnr}
\end{equation}
Plugging this into \eqref{thpsi02}, one gets $\partial_z\psi_2=0$.
Using eqn.~\eqref{dzPhi} in \eqref{dzlnr}, we obtain $\partial_z\Phi=\partial_z\ln r$,
and thus
\begin{equation}
e^{\Phi} = r\Lambda(w,\bar w)\ , \label{PhiLambda}
\end{equation}
where $\Lambda$ is again an arbitrary function. \eqref{thpsi12}, together with
$\partial_z\psi_2=0$, gives
\begin{equation}
\psi_2 = \frac{r^2}{\psi_1^2}\nu(w)\ , \label{psi_2}
\end{equation}
with $\nu(w)$ holomorphic. Note that \eqref{psi_1=rchi}, combined with $\psi_1=\psi_1(z)$,
forces the phase $\theta$ of $\psi_1$ to be constant. By rescaling all the $\psi_{\texttt{i}}$'s
with $e^{-i\theta}$ we can thus choose $\psi_1$ real without loss of generality.
From the gaugino equations \eqref{htgIII1}-\eqref{htgIII4} one has
\begin{equation}
\partial_z z^{\alpha}=0\ , \qquad \psi_2\partial z^{\alpha} + \bar\psi_2\bar\partial
z^{\alpha} = 0\ , \label{dxz}
\end{equation}
and hence $z^{\alpha}=z^{\alpha}(w,\bar w)$, $A_z=0=\partial_z\varphi$. In order to
proceed, it is convenient to distinguish two subcases, namely
$X\!\!\cdot\!g e^{i\varphi}-{\bar X}\!\!\cdot\!g e^{-i\varphi}=0$ and $X\!\!\cdot\!g e^{i\varphi}-{\bar X}\!\!\cdot\!g e^{-i\varphi}\neq 0$.
\subsubsection{$X\!\!\cdot\!g e^{i\varphi}-{\bar X}\!\!\cdot\!g e^{-i\varphi}=0$}
\label{X-Xb=0}
If $X\!\!\cdot\!g e^{i\varphi}-{\bar X}\!\!\cdot\!g e^{-i\varphi}=0$, \eqref{dzlnr} implies $r=r(w,\bar w)$.
Plugging this into \eqref{psi_1=rchi} and taking into account that $\psi_1=\psi_1(z)$,
we find that $\psi_1$ must be constant. By rescaling the $\psi_{\texttt{i}}$'s one can
then choose $\psi_1=1$ without loss of generality. Notice that \eqref{PhiLambda}
gives $\partial_z\Phi=0$ in this case, which is compatible with \eqref{dzPhi}.
From the sum of eqns.~\eqref{thpsi03} and \eqref{thpsi05} we get
\begin{equation}
A_w = \partial\varphi\ , \qquad A_{\bar w} = \bar\partial\varphi\ , \label{Avarphi}
\end{equation}
whereas their difference leads to
\begin{equation}
\psi_2^{-1}e^{-2\Phi}\bar\partial\ln r = i\left(\frac{X\!\!\cdot\!g}{\bar b}+\frac{{\bar X}\!\!\cdot\!g}b\right)\ .
\label{dbarlnr}
\end{equation}
Taking the sum of \eqref{dbarlnr} and its complex conjugate, and using \eqref{psi_2},
one obtains
\begin{equation}
(\bar\nu(\bar w)\bar\partial+\nu(w)\partial)r = 0\ . \label{dxr}
\end{equation}
Let us first consider the subcase $\psi_2\neq 0$, i.e., $\nu(w)\neq 0$. (The
case $\psi_2=0$ will be dealt with in section \ref{2=12=0}.) This allows us to
introduce new coordinates $W,\bar W$ such that $\nu\partial=\partial_W$,
$\bar\nu\bar\partial=\partial_{\bar W}$. Using the residual gauge invariance
$w\mapsto W(w)$, $\Phi\mapsto\Phi-\frac12\ln(dW/dw)-\frac12\ln(d\bar W/d\bar w)$
leaving invariant the metric $e^{2\Phi}dwd\bar w$, one can set $\nu(w)=1$ and hence $w=W$
without loss of generality. Then, eqns.~\eqref{dxz} and \eqref{dxr} boil down to
\begin{equation}
\partial_z z^{\alpha} = \partial_x z^{\alpha} = \partial_x r = 0\ ,
\end{equation}
where $x$ is defined by $w=x+iy$. Thus, $r=r(y)$, $z^{\alpha}=z^{\alpha}(y)$, $A_x=0$, and
from \eqref{Avarphi} also $\partial_x\varphi=0$ so that $\varphi=\varphi(y)$.
\eqref{dbarlnr} simplifies to
\begin{equation}
e^{-2\Phi}\partial_y r - 2r^2(X\!\!\cdot\!g e^{i\varphi}+{\bar X}\!\!\cdot\!g e^{-i\varphi}) = 0\ . \label{dyr}
\end{equation}
Plugging this into the sum of \eqref{thpsi08} and \eqref{thpsi09} yields
\begin{equation}
\partial e^{2\Phi} = \frac i{2r^5}\partial_y r\ , \label{partialPhi}
\end{equation}
which implies $(\partial+\bar\partial)\Phi=0$, and thus $\Phi=\Phi(y)$. Integration
of \eqref{partialPhi} then gives
\begin{equation}
e^{2\Phi} = \frac1{4r^4}+L\ ,
\end{equation}
with $L$ a real constant. In what follows, we shall use $r$ as a new coordinate in
place of $y$\footnote{This is possible as long as $X\!\!\cdot\!g\neq 0$, cf.~\eqref{dyr}.}.
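As a quick consistency check, this integration can be reproduced symbolically. The following sympy sketch (the symbol names are ours) verifies that \eqref{partialPhi}, written for $x$-independent functions where $\partial=-\tfrac i2\partial_y$, is solved by $e^{2\Phi}=1/(4r^4)+L$:
\begin{verbatim}
import sympy as sp

y, L = sp.symbols('y L', real=True)
r = sp.Function('r', positive=True)(y)

# (partialPhi) for x-independent functions, with d/dw = -(i/2) d/dy:
#   d(e^{2 Phi})/dy = -r^{-5} dr/dy
lhs = sp.diff(sp.Rational(1, 4)/r**4 + L, y)
rhs = -r**(-5)*sp.diff(r, y)
print(sp.simplify(lhs - rhs))   # -> 0
\end{verbatim}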
The only nontrivial gaugino equation of the system \eqref{htgIII1}-\eqref{htgIII4}
becomes
\begin{equation}
r\frac{dz^{\alpha}}{dr} = \frac{g^{\alpha\bar\beta}{\cal D}_{\bar\beta}{\bar X}\!\!\cdot\!g}{{\bar X}\!\!\cdot\!g}\ .
\label{flow-psi_-=0}
\end{equation}
One also has to check whether the equations \eqref{bianchi}-\eqref{Delta-Phi} for
the first Killing spinor are satisfied. The Bianchi identities \eqref{bianchi} and
Maxwell equations \eqref{maxwell} can be integrated once, with the result
\begin{equation}
\partial_y\left(\frac{X^I}{\bar b}-\frac{\bar X^I}b\right) = ip^I\ , \qquad
\partial_y\left(\frac{F_I}{\bar b}-\frac{\bar F_I}b\right) - \frac{ig_I}{r^4} = iq_I\ ,
\label{bianchi-max-psi_-=0}
\end{equation}
where $p^I,q_I$ are integration constants. Using the flow equation \eqref{flow-psi_-=0}
together with the special geometry relation \cite{Vambroes}
\begin{equation}
-\frac12(\text{Im}\,{\cal N})^{-1|IJ} = \bar X^IX^J + g^{\alpha\bar\beta}{\cal D}_{\alpha}
X^I{\cal D}_{\bar\beta}\bar X^J\ , \label{ImN^-1}
\end{equation}
one finds that \eqref{bianchi-max-psi_-=0}, as well as \eqref{Delta-Phi}, indeed hold,
if $p^I=0$, $q_I=4Lg_I$.
Finally, the shift vector $\sigma$ follows from \eqref{dsigma}, which implies
\begin{displaymath}
\sigma = \frac{dx}{4r^4}\ .
\end{displaymath}
Then the metric and the fluxes read respectively
\begin{equation}
ds^2 = -4r^2\left(dt+\frac{dx}{4r^4}\right)^2 + \frac{dz^2}{r^2} + \left(\frac1{4r^4}
+L\right)\frac{dx^2}{r^2} + \frac{dr^2}{16r^6X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\left(\frac1{4r^4}+L\right)}\ ,
\label{metr-psi_-=0}
\end{equation}
\begin{equation}
F^I = -\frac2{\sqrt{X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}}(\text{Im}\,{\cal N})^{-1|IJ}g_J dt\wedge dr\ .
\label{fluxes-psi_-=0}
\end{equation}
Actually the solutions with $L\neq 0$ can be cast into a simpler form by the
coordinate transformation
\begin{displaymath}
Lx = t-\psi\ , \qquad \zeta = |L|^{1/2}z\ , \qquad \rho^2 = \frac1{|L|r^2}\ .
\end{displaymath}
Defining also $q^2\equiv 4/|L|$, we get for $L>0$
\begin{equation}
ds^2 = -\left(\rho^2+\frac{q^2}{\rho^2}\right)dt^2 + \frac{d\rho^2}{4X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\left(\rho^2
+\frac{q^2}{\rho^2}\right)} + \rho^2(d\zeta^2+d\psi^2)\ , \label{naked}
\end{equation}
and for $L<0$
\begin{equation}
ds^2 = \left(\rho^2-\frac{q^2}{\rho^2}\right)dt^2 + \frac{d\rho^2}{4X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\left(\rho^2
-\frac{q^2}{\rho^2}\right)} + \rho^2(d\zeta^2-d\psi^2)\ . \label{bubble}
\end{equation}
In both cases, the fluxes and the flow equation \eqref{flow-psi_-=0} become
\begin{equation}
F^I = \frac q{\rho^2\sqrt{X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}}(\text{Im}\,{\cal N})^{-1|IJ}g_J dt\wedge d\rho\ ,
\qquad -\rho\frac{dz^{\alpha}}{d\rho} = \frac{g^{\alpha\bar\beta}{\cal D}_{\bar\beta}{\bar X}\!\!\cdot\!g}
{{\bar X}\!\!\cdot\!g}\ .
\end{equation}
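The coordinate transformation can be checked symbolically. In the following sympy sketch (symbol names ours; $G$ stands for $X\!\!\cdot\!g{\bar X}\!\!\cdot\!g$, which enters the two line elements only as a common factor and is therefore kept as a constant symbol), the $L>0$ substitution turns \eqref{metr-psi_-=0} into \eqref{naked}:
\begin{verbatim}
import sympy as sp

rho, L, G, r = sp.symbols('rho L G r', positive=True)
dt, dx, dz, dr, dzeta, dpsi, drho = sp.symbols(
    'dt dx dz dr dzeta dpsi drho')
q = 2/sp.sqrt(L)                      # q^2 = 4/|L| with L > 0

W = sp.Rational(1, 4)/r**4 + L        # line element (metr-psi_-=0)
ds2 = (-4*r**2*(dt + dx/(4*r**4))**2 + dz**2/r**2
       + W*dx**2/r**2 + dr**2/(16*r**6*G*W))

# L x = t - psi,  zeta = sqrt(L) z,  rho^2 = 1/(L r^2):
ds2 = ds2.subs({dx: (dt - dpsi)/L, dz: dzeta/sp.sqrt(L),
                dr: -drho/(sp.sqrt(L)*rho**2)}).subs(r, 1/(sp.sqrt(L)*rho))

f = rho**2 + q**2/rho**2              # target line element (naked)
target = -f*dt**2 + drho**2/(4*G*f) + rho**2*(dzeta**2 + dpsi**2)
print(sp.simplify(sp.expand(ds2 - target)))   # -> 0
\end{verbatim}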
\eqref{naked} represents a generalization of the naked singularity solution to minimal
gauged supergravity found in \cite{Caldarelli:1998hg} with nontrivial scalars turned on.
Its double analytic continuation $t\mapsto it$, $\psi\mapsto i\psi$, $q\mapsto -iq$
yields \eqref{bubble}, which has the interpretation of a bubble of
nothing \cite{Witten:1981gj}: In order to avoid the conical singularity at
$\rho^2=q\equiv\rho_{\text s}^2$ in the $(t,\rho)$-hypersurface, we must compactify $t$
such that\footnote{We assumed that $\lim_{\rho\to\rho_{\text s}}g_IX^I(\rho)
\equiv X_{\text s}\neq 0$.}
\begin{displaymath}
t \sim t + \frac{\pi}{2\rho_{\text s}|X_{\text s}|}\ .
\end{displaymath}
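The period follows from the standard cone-angle argument; a minimal sympy sketch (with $X\!\!\cdot\!g{\bar X}\!\!\cdot\!g$ frozen to $|X_{\text s}|^2$ at the bubble locus, which suffices to leading order) reproduces it:
\begin{verbatim}
import sympy as sp

rho, rho_s, Xs, R = sp.symbols('rho rho_s X_s R', positive=True)
f = rho**2 - rho_s**4/rho**2      # g_tt of (bubble), with q = rho_s^2

# f vanishes linearly at the bubble locus, f ~ 4 rho_s (rho - rho_s):
fp = sp.diff(f, rho).subs(rho, rho_s)
print(sp.simplify(fp - 4*rho_s))                      # -> 0

# Proper radial coordinate R = sqrt(rho - rho_s)/(2 Xs sqrt(rho_s)),
# i.e. rho - rho_s = 4 Xs^2 rho_s R^2, turns the (t, rho) part into
# dR^2 + (4 Xs rho_s)^2 R^2 dt^2:
f_near = fp*(4*Xs**2*rho_s*R**2)
print(sp.simplify(f_near - (4*Xs*rho_s)**2*R**2))     # -> 0

# Smoothness at R = 0 then requires t ~ t + 2 pi/(4 Xs rho_s):
print(sp.simplify(2*sp.pi/(4*Xs*rho_s) - sp.pi/(2*rho_s*Xs)))  # -> 0
\end{verbatim}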
Note that the limit $L\to0$ is naively singular in the coordinates $t,\rho,\zeta,\psi$,
because the charge $q$ diverges, but it can be taken if we perform a Penrose
limit \cite{Penrose}: Start for instance from the $L>0$ solution and set
\begin{displaymath}
\psi-t = -\epsilon^2X^+\ , \qquad \psi+t = 2X^-\ , \qquad \rho = \frac1{\epsilon R}\ ,
\qquad \zeta = \epsilon Z\ , \qquad q = \frac 2{\epsilon}\ .
\end{displaymath}
Then, the limit $\epsilon\to 0$ leads to the regular solution
\begin{displaymath}
ds^2 = -4R^2 dX^{-2} - \frac2{R^2}dX^-dX^+ + \frac{dR^2}{4R^2X\!\!\cdot\!g{\bar X}\!\!\cdot\!g} + \frac{dZ^2}{R^2}\ ,
\end{displaymath}
\begin{displaymath}
F^I = -\frac2{\sqrt{X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}}(\text{Im}\,{\cal N})^{-1|IJ}g_J dX^-\wedge dR\ ,
\end{displaymath}
which is nothing else than \eqref{metr-psi_-=0} and \eqref{fluxes-psi_-=0} for $L=0$.
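This statement can again be verified symbolically; the sketch below (symbols ours, with $G=X\!\!\cdot\!g{\bar X}\!\!\cdot\!g$ treated as a constant) performs the Penrose limit on \eqref{naked} and recovers the metric just displayed:
\begin{verbatim}
import sympy as sp

eps, R, G = sp.symbols('epsilon R G', positive=True)
dt, dpsi, dzeta, drho = sp.symbols('dt dpsi dzeta drho')
dXm, dXp, dR, dZ = sp.symbols('dXm dXp dR dZ')

q, rho = 2/eps, 1/(eps*R)
f = rho**2 + q**2/rho**2
ds2 = -f*dt**2 + drho**2/(4*G*f) + rho**2*(dzeta**2 + dpsi**2)

# psi - t = -eps^2 X^+,  psi + t = 2 X^-,  rho = 1/(eps R),  zeta = eps Z:
ds2 = ds2.subs({dt: dXm + eps**2*dXp/2, dpsi: dXm - eps**2*dXp/2,
                drho: -dR/(eps*R**2), dzeta: eps*dZ})

target = -4*R**2*dXm**2 - 2*dXm*dXp/R**2 + dR**2/(4*R**2*G) + dZ**2/R**2
print(sp.simplify(sp.limit(sp.expand(ds2), eps, 0) - target))  # -> 0
\end{verbatim}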
Integration of the system \eqref{httda0}-\eqref{htwbda0} yields
\begin{displaymath}
\psi_0 = \hat\psi_0 - \frac1{2r^2}\ ,
\end{displaymath}
with $\hat\psi_0$ a complex constant. The second Killing spinor is thus
\begin{equation}
\epsilon^1 = \left(\hat\psi_0-\frac1{2r^2}\right)1 + re^{\Phi}e_{12}\ , \qquad
\epsilon^2 = e^{\Phi-i\varphi}1 - \left(\frac1{2b}+\bar b\bar{\hat\psi}_0\right)e_{12}\ .
\label{2ndKill-psi_-=0}
\end{equation}
For $\hat\psi_0=0$, the norm squared of the associated Killing vector
$V_{\mu}=A(\epsilon^i,\gamma_{\mu}\epsilon_i)$ (with $A$ given in \eqref{Majorana}) reads
\begin{equation}
V^2 = -4r^2L^2\ ,
\end{equation}
which vanishes for $L=0$, so that in this case the solution belongs to the null class
as well. To understand what happens for $L\neq 0$, we have to consider a general linear
combination of the two Killing spinors. As was explained earlier, the rescaling
$(\epsilon^1,\epsilon^2)\mapsto(C\epsilon^1,\bar C\epsilon^2)$, with $C\in\bb{C}$ an
arbitrary constant, gives again a Killing spinor. If we apply this to \eqref{2ndKill-psi_-=0}
and choose $\hat\psi_0=1/C$ (in order to recover the first covariantly constant spinor for
$C\to 0$), the associated Killing vector has norm squared
\begin{equation}
V^2 = -4r^2\left[(1+L|C|^2)^2 + \frac{\text{Im}^2C}{r^4}\right]\ .
\end{equation}
This is zero iff $\text{Im}C=0$, $L=-1/|C|^2$, i.e., $L<0$. In conclusion, the half-BPS
solutions of this subsection also belong to the lightlike class for $L\le 0$. They must
therefore correspond to some of the geometries of \cite{Klemm:2009uw}, where the
half-supersymmetric null case was classified. This is indeed the case: Take the 1/2-BPS
solutions with $d\chi=0$ in section 5.2 of \cite{Klemm:2009uw}. Consider there the subcase
$d=\bar bX\!\!\cdot\!g/{\bar X}\!\!\cdot\!g$, equ.~(5.49). In order to solve the equations for half-supersymmetry, make
the additional assumption that the function $H$, the scalars $z^{\alpha}$ and the wave profile
$\cal G$ depend on $w-\bar w$ only. Moreover, choose $m_J=g_J$ and $l^J=0$ in the expression
(5.67) that determines the fluxes. As a solution of the eqns.~(5.59), (5.62) for the wave
profile take ${\cal G}=-1/(4\rho^4)$. Finally, set $u=-2\sqrt2 t$, $v=-x/8$, $w+\bar w=\sqrt2 z$
and $\rho=1/r$. This yields the solution \eqref{flow-psi_-=0}, \eqref{metr-psi_-=0},
\eqref{fluxes-psi_-=0} with $L=0$. Note that for constant scalars, the $L=0$ solution reduces
to a subclass of the charged generalization of the Kaigorodov spacetime found in
\cite{Cai:2003xv}.
If one starts instead from the half-BPS null case with $d\chi\neq 0$, eqns.~(5.24), (5.33),
(5.34) in \cite{Klemm:2009uw}, and sets
\begin{displaymath}
u = A(t-Lx) + \frac z{\sqrt2 A}\ , \qquad v = A(t-Lx) - \frac z{\sqrt2 A}\ ,
\end{displaymath}
\begin{displaymath}
\Psi = -2^{7/4}At\ , \qquad \tanh\chi = \frac{\sqrt2 r^2}{A^2}\ ,
\end{displaymath}
where $A=(2|L|)^{-1/4}$, one obtains the $L<0$ solution. Notice that the geometry described
by eqns.~(5.24), (5.33) and (5.34) of \cite{Klemm:2009uw} appeared also in subsection
\ref{psi_-neq0}.
\subsubsection{$X\!\!\cdot\!g e^{i\varphi}-{\bar X}\!\!\cdot\!g e^{-i\varphi}\neq 0$}
\label{X-Xbneq0}
For $X\!\!\cdot\!g e^{i\varphi}-{\bar X}\!\!\cdot\!g e^{-i\varphi}\neq 0$, taking into account that the scalar fields
$z^{\alpha}$ and the phase $\varphi$ are independent of $z$, integration of \eqref{dzlnr}
yields
\begin{equation}
r = 2iz({\bar X}\!\!\cdot\!g e^{-i\varphi}-X\!\!\cdot\!g e^{i\varphi})\ , \label{r}
\end{equation}
where a possible integration constant has been eliminated by shifting $z$. Using this
in \eqref{psi_1=rchi} and keeping in mind that $\psi_1$ depends on $z$ only, one gets
$\psi_1=cz$, with $c$ a real integration constant that we can set equal to one without
loss of generality by rescaling the $\psi_{\texttt{i}}$'s.
Plugging \eqref{r} into \eqref{PhiLambda}, we have $e^\Phi=ze^H$, with the real
function $H(w,\bar w)$ given by
\begin{displaymath}
e^H = 2i({\bar X}\!\!\cdot\!g e^{-i\varphi} - X\!\!\cdot\!g e^{i\varphi})\Lambda(w,\bar w)\ .
\end{displaymath}
From \eqref{psi_2} one obtains
\begin{displaymath}
\psi_2=-4\nu\left({\bar X}\!\!\cdot\!g e^{-i\varphi} - X\!\!\cdot\!g e^{i\varphi}\right)^2\ .
\end{displaymath}
In what follows, it is convenient to introduce the real function $Y=Y(w,\bar w)$,
\begin{equation}
Y=-i\frac{e^{i\varphi}X\!\!\cdot\!g+e^{-i\varphi}{\bar X}\!\!\cdot\!g}{e^{i\varphi}X\!\!\cdot\!g-e^{-i\varphi}{\bar X}\!\!\cdot\!g}\ ,
\end{equation}
which is related to the phase $\varphi$ of $b$ by
\[
e^{2i\varphi}=-\frac{1+iY}{1-iY}\frac{{\bar X}\!\!\cdot\!g}{X\!\!\cdot\!g}\ .
\]
In terms of $Y$, the expressions for $\psi_2$ and $b$ simplify to
\begin{equation}
\psi_2=\frac{16X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}{1+Y^2}\nu\ , \qquad b=\frac{4i{\bar X}\!\!\cdot\!g}{1-iY}z\ .
\end{equation}
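These simplifications are elementary to verify; a short sympy check (symbols ours, with \texttt{Xg} and \texttt{Xbg} standing for $X\!\!\cdot\!g$ and ${\bar X}\!\!\cdot\!g$) confirms the expression for the phase and for $\psi_2$:
\begin{verbatim}
import sympy as sp

phi = sp.symbols('varphi', real=True)
Xg, Xbg, nu = sp.symbols('Xg Xbg nu')

u = sp.exp(sp.I*phi)*Xg                # e^{i varphi} X.g
v = sp.exp(-sp.I*phi)*Xbg              # e^{-i varphi} Xbar.g
Y = -sp.I*(u + v)/(u - v)

# e^{2 i varphi} = -(1 + iY)/(1 - iY) * Xbar.g/X.g:
print(sp.simplify(sp.exp(2*sp.I*phi)
                  + (1 + sp.I*Y)/(1 - sp.I*Y)*Xbg/Xg))          # -> 0

# psi_2 = -4 nu (v - u)^2 = 16 X.g Xbar.g nu/(1 + Y^2):
print(sp.simplify(-4*nu*(v - u)**2 - 16*Xg*Xbg*nu/(1 + Y**2)))  # -> 0
\end{verbatim}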
The system \eqref{thpsi01}-\eqref{thpsi12} boils down to
\begin{eqnarray}
\label{gravI}e^{2H}\nu&=&-\frac{i}{8X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\left[\bar\partial
Y-\frac{1+Y^2}{2Y}\bar\partial\ln\left(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\right)\right]\ ,\\
\label{gravII}\partial\left(e^{2H}\nu\right)&=&-\frac{ie^{2H}Y(1+Y^2)}{32X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ ,
\end{eqnarray}
together with
\begin{displaymath}
A_w=\frac1{2Y}\left[\left(1+iY\right)\partial\ln
(X\!\!\cdot\!g)+\left(1-iY\right)\partial\ln ({\bar X}\!\!\cdot\!g)\right]\ .
\end{displaymath}
Equ.~\eqref{Delta-Phi} becomes
\begin{equation}
2\partial\bar\partial H=e^{2H}\left[\frac
12+Y^2+\frac{1+Y^2}{8X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\left(\mbox{Im}\,\mathcal{N}\right)^{-1|IJ}g_Ig_J\right]\ .
\label{Delta-H}
\end{equation}
Using
\[
\left(\mbox{Im}\,\mathcal{N}\right)^{-1|IJ}g_Ig_J=-2X\!\!\cdot\!g{\bar X}\!\!\cdot\!g+\frac{i\left(1+Y^2\right)}
{8e^{2H}Y\bar\nu}\partial\ln (X\!\!\cdot\!g{\bar X}\!\!\cdot\!g)\ ,
\]
which follows from \eqref{ImN^-1}, it is easy to show that \eqref{Delta-H} is
automatically satisfied if \eqref{gravI} and \eqref{gravII} hold.
The case $\nu=0$ (and thus $\psi_2=\psi_{12}=0$) will be considered in section \ref{2=12=0}.
In the remaining part of this subsection we shall assume $\nu\neq0$, which allows us to
define new coordinates $W$, $\bar W$ such that
\begin{displaymath}
\partial_W=\nu\partial\ , \qquad \partial_{\bar W}=\bar\nu\bar\partial\ .
\end{displaymath}
Making use of the residual gauge invariance $w\mapsto W(w)$,
$\Phi\mapsto\Phi-\frac12\ln(dW/dw)-\frac12\ln(d\bar W/d\bar w)$ leaving invariant
the metric $e^{2\Phi}dwd\bar w$, one can set $\nu(w)=1$ and hence $w=W$ without loss of
generality. The gaugino eqns.~\eqref{htgIII1} and \eqref{htgIII4} reduce to
\begin{equation}
(\partial+\bar\partial)z^\alpha = 0\ , \qquad
\partial z^\alpha = -\frac{8e^{2H}X\!\!\cdot\!g }{1+iY}g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}{\bar X}\!\!\cdot\!g\ ,
\label{gaug}
\end{equation}
which imply that $z^\alpha=z^\alpha(w-\bar w)$. Note also that from \eqref{gravII}
it follows that the functions $H$, $Y$ depend on $w-\bar w$ only.
The Bianchi identities (\ref{bianchi}) and Maxwell equations
\eqref{maxwell} are automatically satisfied. Finally, integration of \eqref{dsigma}
gives the shift vector
\begin{equation}
\sigma=\frac{e^{2H}}{2z}(dw+d\bar w )\ .
\end{equation}
Denoting with a dot the derivative w.r.t.~$i(w-\bar w)$, \eqref{gaug}, \eqref{gravI}
and \eqref{gravII} become
\begin{eqnarray}
\label{scal1}\dot z^\alpha&=&\frac{8ie^{2H}X\!\!\cdot\!g }{1+iY}
g^{\alpha\bar\beta}\mathcal{D}_{\bar\beta}{\bar X}\!\!\cdot\!g\ , \\
\label{Y1}e^{2H}&=&-\frac1{8X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\left\{\dot Y-\frac{1+Y^2}{2Y}
\left[\ln\left(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\right)\right]^{\!\text{\large{$\cdot$}}}\right\}\ , \\
\label{Y2}\dot H&=&-\frac{Y(1+Y^2)}{64X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ .
\end{eqnarray}
Combining (\ref{Y1}) and (\ref{Y2}) yields
\begin{eqnarray}
\left[\frac{\dot Y}{X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\right]^{\!\text{\large{$\cdot$}}}&=&-\frac{Y\left(1+Y^2\right)}
{32\left(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\right)^2}\left\{\dot Y-\frac{1+Y^2}{2Y}\left[\ln\left(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\right)
\right]^{\!\text{\large{$\cdot$}}}\right\}\nonumber\\
&&+\left\{\frac{1+Y^2}{2Y}\frac{\left[\ln\left(X\!\!\cdot\!g{\bar X}\!\!\cdot\!g\right)\right]^{\!\text{\large{$\cdot$}}}}
{X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\right\}^{\!\text{\large{$\cdot$}}}\ ,
\end{eqnarray}
which, integrated once, gives
\begin{equation}
\left(\ln\frac{X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}{1+Y^2}\right)^{\!\text{\large{$\cdot$}}} = \frac{Y(1+Y^2)}{64X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}
- \frac{64YLX\!\!\cdot\!g{\bar X}\!\!\cdot\!g}{1+Y^2}\ ,
\end{equation}
where $L$ is a real integration constant. Let us define
\begin{displaymath}
e^{\xi} = \frac{64X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}{1+Y^2}\ ,
\end{displaymath}
and use $\xi$ as a new coordinate instead of $w-\bar w$. Then, the flow equation
\eqref{scal1} becomes
\begin{equation}
\frac{dz^{\alpha}}{d\xi} = \frac i{2{\bar X}\!\!\cdot\!g Y}(1-iY)g^{\alpha\bar\beta}{\cal D}_{\bar\beta}{\bar X}\!\!\cdot\!g\ ,
\label{dzdxi}
\end{equation}
with $Y$ given by $Y^2=64e^{-\xi}X\!\!\cdot\!g{\bar X}\!\!\cdot\!g-1$. Setting $x=(w+\bar w)/2$, the metric and the
fluxes read respectively
\begin{eqnarray}
ds^2 &=& -z^2e^{\xi}\left[dt+4(e^{-2\xi}-L)\frac{dx}z\right]^2 + 4e^{-\xi}\frac{dz^2}{z^2}
\nonumber \\
&& \qquad +16e^{-\xi}(e^{-2\xi}-L)dx^2 + \frac{4e^{-2\xi}d\xi^2}{Y^2(e^{-\xi}-Le^{\xi})}\ ,
\label{metr-Y}
\end{eqnarray}
\begin{eqnarray}
F^I&=&8i\left(\frac{{\bar X}\!\!\cdot\!g X^I}{1-iY}-\frac{X\!\!\cdot\!g \bar X^I}{1+iY}\right)dt\wedge dz \\
&&+\frac 4Y\left[\frac{2{\bar X}\!\!\cdot\!g X^I}{1-iY}+\frac{2X\!\!\cdot\!g\bar X^I}{1+iY}+\left(\mbox{Im}\,\mathcal{N}
\right)^{-1|IJ}g_J\right](zdt-4Ldx)\wedge d\xi\ . \nonumber
\end{eqnarray}
For $L>0$, the line element \eqref{metr-Y} can be cast into the simple form
\begin{eqnarray}
ds^2 &=& 4e^{-\xi}\left(-z^2d{\hat t}^2 + \frac{dz^2}{z^2}\right) + 16L(e^{-\xi}-Le^{\xi})
\left(dx - \frac z{2\sqrt L}d\hat t\right)^2 \nonumber \\
&& \qquad + \frac{4e^{-2\xi}d\xi^2}{Y^2(e^{-\xi}-Le^{\xi})}\ , \label{near-hor}
\end{eqnarray}
where $\hat t\equiv t/(2\sqrt L)$. \eqref{near-hor} is of the form (3.3) of
\cite{Astefanesei:2006dd}, and describes the near-horizon geometry of extremal
rotating black holes. From \eqref{dzdxi} it is clear that the scalar fields have
a nontrivial dependence on the horizon coordinate $\xi$ unless ${\cal D}_{\alpha}X\!\!\cdot\!g=0$.
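The recasting of \eqref{metr-Y} into \eqref{near-hor} is a purely algebraic identity, which can be confirmed with the following sympy sketch (symbols ours):
\begin{verbatim}
import sympy as sp

z, xi, L, Y = sp.symbols('z xi L Y', positive=True)
dt, dx, dz, dxi, dthat = sp.symbols('dt dx dz dxi dthat')

f = sp.exp(-2*xi) - L
h = sp.exp(-xi) - L*sp.exp(xi)     # note f = exp(-xi)*h

# Line element (metr-Y):
ds2 = (-z**2*sp.exp(xi)*(dt + 4*f*dx/z)**2 + 4*sp.exp(-xi)*dz**2/z**2
       + 16*sp.exp(-xi)*f*dx**2 + 4*sp.exp(-2*xi)*dxi**2/(Y**2*h))

# (near-hor), with hat t = t/(2 sqrt(L)), i.e. dt = 2 sqrt(L) dthat:
target = (4*sp.exp(-xi)*(-z**2*dthat**2 + dz**2/z**2)
          + 16*L*h*(dx - z*dthat/(2*sp.sqrt(L)))**2
          + 4*sp.exp(-2*xi)*dxi**2/(Y**2*h))

print(sp.simplify(sp.expand(
    ds2.subs(dt, 2*sp.sqrt(L)*dthat) - target)))   # -> 0
\end{verbatim}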
While the generic hairy black holes with the near-horizon geometry \eqref{near-hor}
are still to be discovered, the solution with constant scalars is actually known:
Start from the rotating generalization of the hyperbolic black hole solution
to minimal gauged supergravity, given by \cite{Caldarelli:1998hg}
\begin{displaymath}
ds^2 = -\frac{\Delta_r}{\rho^2}\left[dt + \frac a{\Xi}\sinh^2\!\theta d\phi\right]^2
+ \frac{\rho^2}{\Delta_r}dr^2 + \frac{\rho^2}{\Delta_{\theta}}d\theta^2 +
\frac{\Delta_{\theta}\sinh^2\!\theta}{\rho^2}\left[adt - \frac{r^2+a^2}{\Xi}d\phi\right]^2\ ,
\end{displaymath}
\begin{displaymath}
A = -\frac{q_{\text e}r}{\rho^2}\left[dt + \frac a{\Xi}\sinh^2\!\theta d\phi\right] -
\frac{q_{\text m}\cosh\theta}{\rho^2}\left[adt - \frac{r^2+a^2}{\Xi}d\phi\right]\ ,
\end{displaymath}
with
\begin{displaymath}
\Delta_r = (r^2+a^2)\left(-1+\frac{r^2}{\ell^2}\right) - 2mr + q_{\text e}^2 +
q_{\text m}^2\ , \qquad \Delta_{\theta} = 1 + \frac{a^2}{\ell^2}\cosh^2\!\theta\ ,
\end{displaymath}
\begin{displaymath}
\rho^2 = r^2 + a^2\cosh^2\!\theta\ , \qquad \Xi = 1 + \frac{a^2}{\ell^2}\ .
\end{displaymath}
Here, $a$, $m$, $q_{\text e}$ and $q_{\text m}$ denote the rotation parameter, the mass
parameter and the electric and magnetic charges respectively, and $\ell$ is related to
the cosmological constant by $\Lambda=-3/\ell^2$. This black hole is both extremal and
supersymmetric iff \cite{Caldarelli:1998hg}
\begin{equation}
m = q_{\text e} = 0\ , \qquad q_{\text m} = \pm\frac{\ell}2\Xi\ ,
\end{equation}
which leaves a one-parameter family of solutions, with
horizon at $r^2=r_{\text h}^2=(\ell^2-a^2)/2$. In order to obtain the
near-horizon limit, we introduce new coordinates $z$, $\hat t$, $\hat\phi$
according to
\begin{equation}
r = r_{\text h} + \epsilon r_0z\ , \qquad t = \frac{\hat t r_0}{\epsilon}\ , \qquad
\phi = \hat\phi + \Omega\frac{\hat t r_0}{\epsilon}\ ,
\end{equation}
where $\Omega=a\Xi/(r_{\text h}^2+a^2)$ is the angular velocity of the horizon, and $r_0$
is defined by
\begin{displaymath}
r_0^2 = \frac{\ell^2(r_{\text h}^2+a^2)}{4r_{\text h}^2}\ .
\end{displaymath}
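As a consistency check (a minimal sympy sketch, symbols ours), the BPS values indeed make $\Delta_r$ a perfect square in $u=r^2$, with double root at $r_{\text h}^2=(\ell^2-a^2)/2$:
\begin{verbatim}
import sympy as sp

u, a, ell = sp.symbols('u a ell', positive=True)   # u = r^2
Xi = 1 + a**2/ell**2
qm = ell*Xi/2                                      # m = q_e = 0

# ell^2 * Delta_r as a polynomial in u:
poly = sp.expand((u + a**2)*(u - ell**2) + ell**2*qm**2)
print(sp.factor(poly))               # -> (2*u + a**2 - ell**2)**2/4

rh2 = (ell**2 - a**2)/2
print(sp.simplify(poly.subs(u, rh2)))              # -> 0
print(sp.simplify(sp.diff(poly, u).subs(u, rh2)))  # -> 0 (double root)
\end{verbatim}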
After taking the limit $\epsilon\to 0$, the metric becomes
\begin{equation}
ds^2 = \frac{\ell^2\rho_{\text h}^2}{4r_{\text h}^2}\left[-z^2d{\hat t}^2 +
\frac{dz^2}{z^2}\right] + \frac{\rho_{\text h}^2}{\Delta_{\theta}}d\theta^2
+ \frac{\Delta_{\theta}\sinh^2\!\theta}{\rho_{\text h}^2\,\Xi^2}(r_{\text h}^2+a^2)^2(d\hat\phi
+ kzd\hat t)^2\ , \label{near-hor-hyp}
\end{equation}
with
\begin{displaymath}
\rho_{\text h}^2 = r_{\text h}^2 + a^2\cosh^2\!\theta\ , \qquad
k = \frac{2r_{\text h}r_0^2\Omega}{r_{\text h}^2+a^2}\ .
\end{displaymath}
If we set
\begin{displaymath}
e^{-\xi} = \frac{\ell^2\rho_{\text h}^2}{16r_{\text h}^2}\ , \qquad x =
-\frac{32r_{\text h}^3(r_{\text h}^2+a^2)}{\ell^6\,\Xi^2a}\hat\phi\ , \qquad
L = \frac{\ell^8\Xi^2}{1024r_{\text h}^4}\ , \qquad X\!\!\cdot\!g{\bar X}\!\!\cdot\!g = \frac1{4\ell^2}\ ,
\end{displaymath}
\eqref{near-hor} reduces precisely to the near-horizon geometry \eqref{near-hor-hyp}.
Let us now come back to the case of arbitrary $L$.
The missing component $\psi_0$ of the second Killing spinor is determined by the
system \eqref{httda0}-\eqref{htwbda0}, which simplifies to
\begin{equation}
\partial_t\psi_0=1\ , \qquad \partial_z\psi_0=\frac{1+Y^2}{32z^2X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ , \qquad
\partial\psi_0=-\bar\partial\psi_0=\frac{ie^{2H}Y}{2z}\ . \label{psi_0Y}
\end{equation}
Integration of \eqref{psi_0Y} yields (after going back to the original basis)
\begin{align}
\alpha&=\hat\alpha+t-\frac{1+Y^2}{32zX\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ , \qquad
\beta=-\frac{4iX\!\!\cdot\!g e^H}{1+iY}e^{i\varphi}\ , \nonumber\\
\gamma&=\frac{e^H}{z}e^{-i\varphi}\ , \qquad
\delta=\frac{4iX\!\!\cdot\!g}{1+iY}z\left(\bar{\hat\alpha}+t\right)-\frac{1-iY}{8i{\bar X}\!\!\cdot\!g}\ ,
\end{align}
where $\hat\alpha\in\bb{C}$ denotes an integration constant. As before, we rescale
$\alpha,\beta$ by $C$ and $\gamma,\delta$ by $\bar C$, with $C\in\bb{C}$ constant,
and choose $\hat\alpha=1/C$ in order to obtain the first Killing spinor for $C\to 0$.
Then, the norm squared of the associated Killing vector turns out to be
\begin{align}
V^2=&-4|b|^2\left[|1+C
t|^2+\left(\frac{e^{2H}}{z^2}-\frac{z^2}{4|b|^4}\right)|C|^2\right]^2
-\left(\frac{2z\mbox{Im}C}{|b|}\right)^2\ ,
\end{align}
which is always negative, so that the solutions considered here do not belong
to the null class\footnote{Of course, the choice $\hat\alpha=1/C$ does not cover
the case $\hat\alpha=0$, which has to be treated separately. It is easy to show
that the result is again a timelike vector.}.
Notice that in minimal supergravity, the analogues of eqns.~\eqref{Y1} and
\eqref{Y2} follow from the dimensionally reduced gravitational Chern-Simons
action \cite{Cacciatori:2007vn}. It would be interesting to see if something similar
happens here. For instance, \eqref{scal1}-\eqref{Y2} might be related to the
gravitational Chern-Simons system coupled to scalar fields. We hope to come
back to these points in a future publication.
\subsubsection{$\psi_2=0$}\label{2=12=0}
In sections \ref{X-Xb=0} and \ref{X-Xbneq0} we assumed $\nu\neq0$, that is, $\psi_2\neq0$.
Let us now consider the case $\texttt{G}_0=0$ and $\psi_2=\psi_{12}=0$. The gaugino
equations \eqref{htgIII1}-\eqref{htgIII4} imply that the scalars $z^\alpha$ are constant, while the
system \eqref{httda0}-\eqref{htwbda0} and \eqref{thpsi01}-\eqref{thpsi12} reduces to
\begin{eqnarray}\label{2=12=0psi0}
\partial_t\psi_0&=&-\frac{4iX\!\!\cdot\!g}{\bar b}\psi_1\ , \qquad
\partial_z\psi_0=\frac{\partial_t\psi_0}{2r^2}\ , \qquad
\partial\psi_0=\sigma_w\partial_t\psi_0\ , \\
\bar\partial\psi_0&=&\sigma_{\bar w}\partial_t\psi_0\ , \qquad
\psi_1=\psi_1(z)\ , \qquad \partial_z\psi_1=\frac{4i{\bar X}\!\!\cdot\!g}{b}\psi_1\ ,
\end{eqnarray}
together with
\begin{equation}\label{betaeq}
\partial_zr=-4iX\!\!\cdot\!g e^{i\varphi}\ , \qquad
\partial r=\partial\varphi=\partial_z\varphi=0\ , \qquad
e^{i\varphi}X\!\!\cdot\!g+e^{-i\varphi}{\bar X}\!\!\cdot\!g=0\ .
\end{equation}
From \eqref{dsigma} one gets $\sigma=0$, and (\ref{2=12=0psi0})-(\ref{betaeq}) give
\begin{equation}
\psi_0=\hat{\alpha}+t-\frac1{32zX\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ , \qquad \psi_1=z\ , \qquad
b=4i{\bar X}\!\!\cdot\!g z\ ,
\end{equation}
where $\hat{\alpha}\in\bb{C}$ is an integration constant. It is straightforward to show
that the Killing vector associated to a general linear combination of the two
Killing spinors is always timelike. Integration of (\ref{dzPhi}) yields $e^\Phi=ze^H$,
with $H=H(w,\bar w)$ a real function satisfying
\begin{equation}
8\partial\bar\partial H=e^{2H} \label{Liouville-H}
\end{equation}
due to \eqref{Delta-Phi}. \eqref{Liouville-H} is the Liouville equation and implies
that the two-dimensional metric $e^{2H}dwd\bar w$ has constant negative curvature.
Note that the Bianchi identities \eqref{bianchi} and Maxwell equations
\eqref{maxwell} are automatically satisfied.
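For illustration (the particular solution below is ours, not part of the classification), the upper half-plane metric with $e^{2H}=2/y^2$, $w=x+iy$, solves \eqref{Liouville-H}; with $\partial\bar\partial=\frac14(\partial_x^2+\partial_y^2)$ the equation reads $2(\partial_x^2+\partial_y^2)H=e^{2H}$:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', positive=True)
H = sp.log(sp.sqrt(2)/y)        # e^{2H} = 2/y^2 on the upper half-plane

lap = sp.diff(H, x, 2) + sp.diff(H, y, 2)
print(sp.simplify(2*lap - sp.exp(2*H)))   # -> 0
\end{verbatim}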
The metric and fluxes read respectively
\begin{eqnarray}
ds^2&=&-64X\!\!\cdot\!g{\bar X}\!\!\cdot\!g z^2dt^2+\frac{dz^2}{16X\!\!\cdot\!g{\bar X}\!\!\cdot\!g z^2}+\frac{e^{2H}dwd\bar
w}{16X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\ ,\\
F^I&=&-16{\text{Im}}({\bar X}\!\!\cdot\!g X^I)dt\wedge dz + \frac{ie^{2H}}{16X\!\!\cdot\!g{\bar X}\!\!\cdot\!g}\left[4{\text{Re}}
({\bar X}\!\!\cdot\!g X^I)+g_J\left({\text{Im}}\,{\cal N}\right)^{-1|IJ}\right]dw\wedge d\bar w\ .
\nonumber
\end{eqnarray}
We thus have a product spacetime AdS$_2$ $\times$ H$^2$, with constant electric
flux on AdS$_2$ and magnetic flux on H$^2$. This is the near-horizon geometry of
static supersymmetric black holes, like the ones discovered in \cite{Cacciatori:2009iz}.
\subsection{Case $\texttt{G}_0\neq 0$}
For $\texttt{G}_0\neq 0$, the gaugino eqns.~\eqref{htgIII1}-\eqref{htgIII4}
suggest defining new coordinates $Z,W,\bar W$ according to
\begin{equation}
z=z(Z,W,\bar W)\ , \qquad w=W\ , \qquad \bar w=\bar W\ ,
\end{equation}
where
\begin{equation}
\frac{\partial z}{\partial W} = -\frac{\psi_1}{2\psi_-}\ . \label{dz/dW}
\end{equation}
Then, \eqref{htgIII2} and \eqref{htgIII3} simplify to
\begin{equation}
\partial_{\bar W}z^{\alpha} = \partial_Wz^{\alpha} = 0\ ,
\end{equation}
so that the scalars depend on $Z$ only. The integrability conditions
\begin{displaymath}
\frac{\partial^2z}{\partial\bar W\partial W} =
\frac{\partial^2z}{\partial W\partial\bar W}\ ,
\end{displaymath}
of \eqref{dz/dW} and its complex conjugate read
\begin{equation}
\partial_{\bar W}\frac{\psi_1}{\psi_-} = \partial_W\frac{\bar\psi_1}
{\bar\psi_-}\ . \label{int-cond-W}
\end{equation}
Remarkably, it can be shown that \eqref{int-cond-W} is implied by the
Killing spinor eqns.~\eqref{thpsi01}-\eqref{thpsi12}. Unfortunately,
the system \eqref{thpsi01}-\eqref{thpsi12} does not seem to simplify
much after the introduction of the coordinates $Z,W,\bar W$, at least not
in an obvious way, so that we were unable to solve it in general in the
case $\texttt{G}_0\neq 0$. For minimal ${\cal N}=2$ gauged supergravity,
all known 1/2-BPS solutions either have $\texttt{G}_0=0$ or are related
to the case $\texttt{G}_0=0$ by a diffeomorphism \cite{Cacciatori:2007vn}.
This might be a general feature, and hold in the matter-coupled case as
well, but we know of no way to show this in general.
\acknowledgments
This work was partially supported by INFN and MIUR-PRIN contract 20075ATT78.
\normalsize
\section{Introduction}
Let $X$ be a smooth complex projective variety and $\cO_X(1)$ be an ample invertible sheaf on $X$. We consider the moduli of (isomorphism classes of) complexes of sheaves on $X$, or equivalently moduli of $Q$-sheaves over $X$ where $Q$ is the quiver
\[ \bullet \rightarrow \bullet \rightarrow \cdots \rightarrow \bullet \rightarrow \bullet \]
with relations imposed to ensure the boundary maps square to zero. Moduli of quiver sheaves have been studied in \cite{ac,acgp,gothen_king,schmitt05}. There is a construction of moduli spaces of S-equivalence classes of \lq semistable' complexes due to Schmitt \cite{schmitt05} as a geometric invariant theory quotient of a reductive group $G$ acting on a parameter space $\mathfrak{T}$ for complexes with fixed invariants. The notion of semistability is determined by a choice of stability parameters and
the motivation comes from physics; it is closely related to a notion of semistability coming from a Hitchin--Kobayashi correspondence for quiver bundles due to \'{A}lvarez-C\'onsul and Garc\'ia-Prada \cite{acgp}. The stability parameters are also used to determine a linearisation of the action. The notion of S-equivalence is weaker than isomorphism and arises from the GIT construction of these moduli spaces which results in some orbits being collapsed.
As the notion of stability depends on a choice of parameters, we can ask if certain parameters reveal information about the cohomology sheaves of a complex. We show that there is a collection of stability parameters which can be used to study the cohomology sheaves of a complex. Analogously to the case of sheaves, every unstable complex has a unique maximally destabilising filtration known as its Harder--Narasimhan filtration. In this paper we give a collection of stability parameters indexed by a rational parameter $\epsilon >0$ and show that for all sufficiently small values of $\epsilon$ the Harder--Narasimhan filtration of a given complex with respect to these parameters encodes the Harder--Narasimhan filtrations of the cohomology sheaves in this complex. We then go on to study a stratification of the parameter space $\mathfrak{T}$ associated to these stability parameters.
Given an action of a reductive group $G$ on a projective scheme $B$ with respect to an ample linearisation $\cL$, there is an associated stratification $\{ S_\beta : \beta \in \cB \}$ of $B$ into $G$-invariant locally closed subschemes for which the open stratum is the geometric invariant theory (GIT) semistable set $B^{ss}$ \cite{hesselink,kempf_ness,kirwan}. When $B$ is a smooth variety, this stratification comes from a Morse type stratification associated to the norm square of the moment map for this action. This stratification has a completely algebraic description which can be extended to the above situation of a linearised action on a projective scheme (cf. \cite{hoskinskirwan} $\S$4). We give a brief summary of the algebraic description given in \cite{kirwan} as this will be used later on.
If we choose a compact maximal torus $T$ of $G$ and positive Weyl chamber $\mathfrak{t}_+$ in the Lie algebra $\mathfrak{t}$ of $T$, then the index set $\cB$ can be identified with a finite set of rational weights in $\mathfrak{t}_+$ as follows. By fixing an invariant inner product on the Lie algebra $\mathfrak{K}$ of the maximal compact subgroup $K$ of $G$, we can identify characters and cocharacters as well as weights and coweights. There are a finite number of weights for the action of $T$ on $B$ and the index set $\mathcal{B}$ can be identified with the set of rational weights in $\mathfrak{t}_+$ which are the closest points to $0$ of the convex hull of a subset of these weights.
We say a map $\lambda : \CC^* \rightarrow G$ (which is not necessarily a group homomorphism) is a rational one-parameter subgroup if $\lambda( \CC^*)$ is a subgroup of $G$ and there is an integer $N$ such that $\lambda^N$ is a one-parameter subgroup (1-PS) of $G$. Associated to $\beta $ there is a parabolic subgroup $P_\beta \subset G$, a rational 1-PS $\lambda_\beta : \CC^* \rightarrow T_{\CC}$ and a rational character $\chi_\beta : T_{\CC} \rightarrow \CC^*$ which extends to a character of $P_\beta$.
Let $Z_\beta$ be the components of the fixed point locus of $\lambda_\beta$ acting on $B$ on which $\lambda_\beta$ acts with weight $|| \beta ||^2$ and $Z_\beta^{ss}$ be the GIT semistable subscheme for the action of the reductive part $\mathrm{Stab} \: \beta$ of $P_\beta$ on $Z_\beta$ with respect to the linearisation $\mathcal{L}^{\chi_{-\beta}}$ (which is the original linearisation $\cL$ twisted by the character $\chi_{-\beta} : \mathrm{Stab} \: \beta \rightarrow \CC^*$). Then $Y_\beta$ (resp. $Y_\beta^{ss}$) is defined to be the subscheme of $B$ consisting of points whose limit under the action of $\lambda_\beta(t)$ as $t \to 0$ lies in $Z_\beta$ (resp. $Z_\beta^{ss}$). There is a retraction $p_\beta : Y_\beta \rightarrow Z_\beta$ given by taking a point to its limit under $\lambda_\beta$.
By \cite{kirwan}, for $\beta \neq 0 $ we have
\[ S_\beta = G Y_\beta^{ss} \cong G \times^{P_\beta} Y_\beta^{ss}. \]
The definition of $S_\beta$ makes sense for any rational weight $\beta$, although $S_\beta$ is nonempty if and only if $\beta$ is an index.
This stratification has a description in terms of Kempf's notion of adapted 1-PSs due to Hesselink \cite{hesselink}. Recall that the Hilbert--Mumford criterion states that a point $b \in B$ is semistable if and only if it is semistable for every 1-PS $\lambda$ of $G$; that is, $\mu^{\cL}(b, \lambda) \geq 0$ where $\mu^{\cL}(b,\lambda) $ is equal to minus the weight of the $\CC^*$-action induced by $\lambda$ on the fibre of $\cL$ over $\lim_{t \to 0} \lambda(t) \cdot b$. In \cite{kempf} Kempf defines a non-divisible 1-PS to be adapted to an unstable point $b \in B - B^{ss}$ if it minimises the normalised Hilbert--Mumford function:
\[ \frac{\mu^{\cL}(b, \lambda)}{||\lambda||} = \min_{\lambda'} \frac{\mu^{\cL}(b, \lambda')}{|| \lambda'||}. \]
Hesselink used these adapted 1-PSs to stratify the unstable locus and this stratification agrees with the stratification described above. In fact if $\beta$ is a nonzero index, then the associated 1-PS $\lambda_\beta$ is a 1-PS which is adapted to every point in $Y_\beta^{ss}$.
In this paper we study the stratification obtained in this way from a suitable action of a group $G$ on a parameter space for complexes using the above collection of stability parameters (for very small $\epsilon$) which are related to cohomology. We show that for a given Harder--Narasimhan type $\tau$, the set-up of the parameter scheme can be chosen so that all complexes with Harder--Narasimhan type $\tau$ are parametrised by a locally closed subscheme $R_\tau$ of the parameter space $\mathfrak{T}$. Moreover, $R_\tau$ is a union of connected components of a stratum $S_{\beta(\tau)}$ in the associated stratification. The scheme $R_\tau$ has the nice property that it parametrises complexes whose cohomology sheaves are of a fixed Harder--Narasimhan type.
The layout of this paper is as follows. In $\S$\ref{schmitt construction} we give a summary of the construction of Schmitt of moduli spaces of complexes and study the action of 1-PSs of $G$. In $\S$\ref{sec on stab} we give the collection of stability conditions indexed by $\epsilon >0$ and show that the Harder--Narasimhan filtration of a complex (for very small $\epsilon$) encodes the Harder--Narasimhan filtration of the cohomology sheaves. Then in $\S$\ref{sec on strat} we study the associated GIT stratification of the parameter space for complexes and relate this to the stratification by Harder--Narasimhan types. Finally, in $\S$\ref{sec on quot} we consider the problem of taking a quotient of the $G$-action on a Harder--Narasimhan stratum $R_\tau$.
\subsection*{Notation and conventions}
Throughout we let $X$ be a smooth complex projective variety and $\cO_X(1)$ be an ample invertible sheaf on $X$. All Hilbert polynomials of sheaves over $X$ will be calculated with respect to $\cO_X(1)$.
We use the term complex to mean a bounded cochain complex of torsion free sheaves. We say a complex $\mathcal{E}^\cdot$ is concentrated in $[m_1,m_2]$ if $\mathcal{E}^i = 0$ for $i < m_1 $ and $i>m_2$.
\subsection*{Acknowledgements}
I am very grateful to my thesis supervisor Frances Kirwan for her support and guidance over the last few years.
\section{Schmitt's construction}\label{schmitt construction}
In this section we give a summary of the construction due to Schmitt \cite{schmitt05} of moduli space of S-equivalence classes of semistable complexes over $X$. We also make some important calculations about the weights of $\CC^*$-actions (some details of which can also be found in \cite{schmitt05} Section 2.1).
If we have an isomorphism between two complexes $\mathcal{E}^\cdot$ and $\cxF$, then for each $i$ we have an isomorphism between the sheaves $\mathcal{E}^i$ and $\cF^i$ and thus an equality of Hilbert polynomials $P(\mathcal{E}^i) = P(\cF^i)$. Therefore we can fix a collection of Hilbert polynomials $P = (P^i)_{i \in \ZZ}$ such that $P^i=0$ for all but finitely many $i$ and study complexes with these invariants. In fact we can assume $P$ is concentrated in $[m_1,m_2]$ and write $P = (P^{m_1},\cdots,P^{m_2})$.
\subsection{Semistability}\label{stab defn}
The moduli spaces of complexes only parametrise a certain collection of complexes with invariants $P$ and this collection is determined by a notion of (semi)stability. Schmitt introduces a notion of (semi)stability for complexes which depends on a collection of stability parameters $(\us, \uc)$ where $\uc := \delta \ue$ and
\begin{itemize}
\item $\us=(\sigma_i \in \ZZ_{>0})_{i \in\ZZ}$,
\item $\ue=(\eta_i \in \mathbb{Q})_{i \in\ZZ}$,
\item $\delta$ is a positive rational polynomial such that $\deg \delta =\max(\dim X-1,0)$.
\end{itemize}
\begin{defn}
The reduced Hilbert polynomial of a complex $\cxF$ with respect to the parameters $(\us, \uc)$ is defined as
\[P_{\us, \uc}^{\mathrm{red}}(\cxF):= \frac{\sum_{i \in \ZZ} \left(\sigma_i P(\cF^i) - \chi_i \rk \cF^i\right)}{\sum_{i \in \ZZ} \sigma_i \rk \cF^i} \]
where $P(\cF^i)$ and $ \rk{\cF^i}$ are the Hilbert polynomial and rank of the sheaf $\cF^i$.
We say a nonzero complex $\cxF$ is $(\us, \uc)$-semistable if for any nonzero proper subcomplex $\mathcal{E}^\cdot \subset \cxF$ we have an inequality of polynomials
\[P_{\us,\uc}^{\mathrm{red}}(\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot) \leq P_{\us, \uc}^{\mathrm{red}}(\cxF). \]
By an inequality of polynomials $R \leq Q$ we mean $R(x) \leq Q(x)$ for all $x >\!> 0$. We say the complex is $(\us, \uc)$-stable if this inequality is strict for all such subcomplexes.
\end{defn}
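To make the comparison of reduced Hilbert polynomials concrete, here is a minimal Python/sympy sketch (the function names and toy data are ours, not part of \cite{schmitt05}); polynomials are compared for $x>\!>0$, that is, from the leading coefficient down:
\begin{verbatim}
import sympy as sp

n = sp.symbols('n')

def reduced_hilbert(P, rk, sigma, chi):
    # sum_i (sigma_i P^i - chi_i rk^i) / sum_i sigma_i rk^i
    num = sum(s*p - c*r for p, r, s, c in zip(P, rk, sigma, chi))
    den = sum(s*r for r, s in zip(rk, sigma))
    return sp.expand(num/den)

def leq(R, Q):
    # R <= Q iff R(x) <= Q(x) for x >> 0: inspect the coefficients of
    # Q - R from the top degree down.
    for c in sp.Poly(sp.expand(Q - R), n).all_coeffs():
        if c != 0:
            return c > 0
    return True

# Toy data: a two-term complex of line bundles on a curve, P^i(n) = n + d_i,
# rk^i = 1, sigma = (1,1), chi = (0,0), with vanishing boundary map so that
# (F^0 -> 0) is a subcomplex.
whole = reduced_hilbert([n + 3, n + 1], [1, 1], [1, 1], [0, 0])  # n + 2
sub = reduced_hilbert([n + 3], [1], [1], [0])                    # n + 3
print(leq(sub, whole))   # -> False: this subcomplex destabilises
\end{verbatim}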
\begin{rmk}\label{normalise} Observe that for any rational number $C$ if we let $\ue' = \ue - C\us$ and $\uc' = \delta \ue'$, then the notions of $(\us, \uc)$-semistability and $(\us, \uc')$-semistability are equivalent. For invariants $P=(P^{m_1},\cdots,P^{m_2})$ and any stability parameters $(\us,\uc)$, we can let \[C = \frac{\sum_{i=m_1}^{m_2} \eta_i r^i}{\sum_{i=m_1}^{m_2} \sigma_i r^i}\] where $r^i$ is the rank determined by the leading coefficient of $P^i$ and consider the associated stability parameters $(\us,\uc')$ for $P$ which satisfy $\sum_{i=m_1}^{m_2} \eta_i' r^i = 0$. As we have fixed $P$ in this section, we may assume we have stability parameters which satisfy $\sum_{i=m_1}^{m_2} \eta_i r^i = 0$.
\end{rmk}
\subsection{The parameter space}\label{param sch const}
The set of sheaves occurring in a $(\us,\uc)$-semistable complex $\mathcal{E}^\cdot$ with invariants $P$ is bounded by the usual arguments (see \cite{simpson}, Theorem 1.1) and so we may choose $n>\!>0$ so that all these sheaves are $n$-regular.
Fix complex vector spaces $V^i$ of dimension $P^i(n)$ and let $Q^i$ be the open subscheme of the quot scheme $\mathrm{Quot}(V^i \otimes \cO_X(-n), P^i)$ consisting of torsion free quotient sheaves $q^i: V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i$ such that $H^0(q^i(n))$ is an isomorphism. The parameter scheme $\mathfrak{T}$ for $(\us,\uc)$-semistable complexes with invariants $P$ is constructed as a locally closed subscheme of a projective bundle $\fD$ over the product $Q:=Q^{m_1} \times \cdots \times Q^{m_2}$.
Given a $(\us,\uc)$-semistable complex $\mathcal{E}^\cdot$ with Hilbert polynomials $P$ we can use the evaluation maps
\[ H^0(\mathcal{E}^i(n)) \otimes \cO_X(-n) \rightarrow \mathcal{E}^i \]
along with a choice of isomorphism $V^i \cong H^0(\mathcal{E}^i(n))$ to parametrise the $i$th sheaf $\mathcal{E}^i$ by a point $q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i $ in $Q^i$. From the boundary morphisms $d^i : \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$ we can construct a homomorphism
\[ \psi:=H^0(d(n)) \circ (\oplus_i H^0(q^i(n))) : \oplus_i V^i \rightarrow \oplus_i H^0(\mathcal{E}^i(n)) \]
where $d : \oplus_i \mathcal{E}^i \rightarrow \oplus_i\mathcal{E}^i$ is the morphism determined by the boundary maps $d^i$.
Such homomorphisms $\psi$ correspond to points in the fibres of the sheaf
\[ \cR:=(\oplus_i V^i)^\vee \otimes p_* \left(\mathcal{U} \otimes (\pi_X^{Q \times X})^* \cO_X(n)\right) \]
over $Q$ where $p : Q \times X \rightarrow Q$ is the projection and $\oplus_iV^i \otimes (\pi_X^{Q \times X})^*\cO_X(-n) \rightarrow \mathcal{U}$ is the quotient sheaf over $Q \times X$ given by taking the direct sum of the pullbacks of the universal quotients $V^i \otimes (\pi_X^{Q^i \times X})^* \cO_X(-n) \rightarrow \mathcal{U}^i$ on $Q^i \times X$ to $Q \times X$. Note that $\cR$ is locally free for $n$ sufficiently large and so we can consider the projective bundle $\fD :=\PP(\cR \oplus \cO_Q)$ over $Q$. A point of $\fD$ over $q=(q^i:V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i \in Q$ is given by a pair $(\psi: \oplus_i V^i \rightarrow \oplus_i H^0(\mathcal{E}^i(n)),\zeta \in \CC)$ defined up to scalar multiplication. The parameter scheme $\mathfrak{T}$ consists of points $(q,[\psi : \zeta])$ in $\fD$ such that:
\begin{enumerate}
\renewcommand{\labelenumi}{\roman{enumi})}
\item $\psi =H^0(d(n)) \circ (\oplus_i H^0(q^i(n)))$ where $d : \oplus_i \mathcal{E}^i \rightarrow \oplus_i\mathcal{E}^i$ is given by morphisms $d^i : \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$ which satisfy $d^i \circ d^{i-1} = 0$,
\item $ \zeta\neq 0$.
\end{enumerate}
The conditions given in i) are all closed (they are cut out by the vanishing locus of homomorphisms of locally free sheaves) and condition ii) is open; therefore $\mathfrak{T} $ is a locally closed subscheme of $\fD$. We let $\fD'$ denote the closed subscheme of $\fD$ given by points which satisfy condition i). We will write points of $\mathfrak{T}$ as $(q,d)$ where $q=(q^i:V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i \in Q$ and $d$ is given by $d^{i} : \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$ which satisfy $d^{i} \circ d^{i-1}=0$.
\begin{rmk} The construction of the parameter scheme $\mathfrak{T}$ depends on the choice of $n$ and the Hilbert polynomials $P$. We write $\mathfrak{T}_{P}$ or $\mathfrak{T}(n)$ if we wish to emphasise its dependence on $P$ or $n$.
\end{rmk}
\subsection{The group action}\label{gp act}
For $m_1 \leq i \leq m_2$ we have fixed vector spaces $V^i$ of dimension ${P^i(n)}$. The reductive group $\Pi_i \mathrm{GL}(V^i)$ acts on both $Q$ and $\fD$: if $g = (g_{m_1}, \dots, g_{m_2}) \in \Pi_i \mathrm{GL}(V^i)$ and $z = ((q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i,[\psi : \zeta]) \in \fD$, then
\[ g \cdot z = ((g_i \cdot q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i, [g \cdot \psi : \zeta]) \]
where
\[\xymatrix@1{
g_i \cdot q^i : & V^i \otimes \cO_X(-n) \ar[r]^{g_i^{-1 }\cdot} & V^i \otimes \cO_X(-n) \ar[r]^>>>>>{q^i} & \mathcal{E}^i }\]
and
\[\xymatrix@1{
g \cdot \psi : & \oplus_i V^i \ar[r]^{g^{-1 }\cdot} & \oplus_i V^i \ar[r]^>>>>>{q^i} & \oplus_i H^0(\mathcal{E}^i(n))}.\]
If instead we consider $\tilde{\psi}:= \oplus_i H^0(q^i(n))^{-1} \circ \psi : \oplus_iV^i \rightarrow \oplus_iV^i$ then this action corresponds to conjugating $\tilde{\psi}$ by $g$; that is,
\[ g \circ \tilde{\psi} \circ g^{-1} = \widetilde{g \cdot \psi} .\]
This action preserves the parameter scheme $\mathfrak{T}$ and the orbits correspond to isomorphism classes of complexes. As the subgroup $\CC^* ( I_{V_{m_1}}, \dots ,I_{V_{m_2}})$ acts trivially on $\fD$, we are really interested in the action of $(\Pi_i \mathrm{GL}(V^i))/ \CC^* $. Given integers $\us =(\sigma_{m_1} , \dots , \sigma_{m_2})$ we can define a character
\[ \begin{array}{cccc} \det_{\us} : &\Pi_i \mathrm{GL}(V^i) & \rightarrow & \CC^* \\ &(g_i) & \mapsto & \Pi_i \det g_i^{\sigma_i} \end{array}\]
and instead consider the action of the group $G=G_{\us}:= \ker \det_{\us}$ which maps with finite kernel onto $(\Pi_i \mathrm{GL}(V^i))/ \CC^* $.
\subsection{The linearisation}\label{linearisation schmitt}
Schmitt uses the stability parameters $(\us,\uc):=(\us,\delta\ue)$ to determine a linearisation of the $G$-action on the parameter space $\mathfrak{T}$ in three steps. The first step is given by using the parameters $\us$ to construct a proper injective morphism from $\fD$ to another projective bundle $\fB_{\us}$ over $Q$. The parameters $\us$ are used to associate to each point $z = (q,[\psi : \zeta]) \in \fD$ a nonzero decoration
\[ \varphi_{\us}(z) : ( V_{\us}^{\otimes r_{\us}})^{\oplus 2} \otimes \cO_X(-r_{\us}n) \rightarrow \det \mathcal{E}_{\us} \]
(defined up to scalar multiplication) where $r_{\us} = \sum_i \sigma_i r^i$ and $V_{\us}:= \oplus_i (V^{i})^{\oplus \sigma_i}$ and $\mathcal{E}_{\us}:= \oplus_i (\mathcal{E}^{i})^{\oplus \sigma_i}$. The fibre of $\fB_{\us}$ over $q \in Q$ parametrises such homomorphisms $\varphi_{\us}$ up to scalar multiplication and the morphism $\fD \rightarrow \fB_{\us}$ is given by sending $z = (q,[\psi : \zeta]) \in \fD$ to $(q, [\varphi_{\us}(z)]) \in \fB_{\us}$. The group $G \cong\SL(V_{\us}) \cap \Pi_i \mathrm{GL}(V^i)$ acts on $\fB_{\us}$ by acting on $Q$ and $V_{\us}$ and $\fD \rightarrow \fB_{\us}$ is equivariant with respect to this action.
The second step is given by constructing a projective embedding $\fB_{\us} \rightarrow B_{\us}$. This embedding is essentially given by taking the projective embedding of each $Q^i$ used by Gieseker \cite{gieseker_sheaves}. Recall that Gieseker gave an embedding of $Q^i$ into a projective bundle $B_i$ over the components $R_i$ of the Picard scheme of $X$ which contain the determinant of a sheaf $\mathcal{E}^i$ parametrised by $Q^i$. This embedding is given by sending a quotient sheaf $q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i$ to a homomorphism $\wedge^{r^i} V^i \rightarrow H^0(\det \mathcal{E}^i(r^in))$ which represents a point in a projective bundle $B_i$ over $R_i$. The group $\SL(V^i)$ acts naturally on $B_i$ by acting on the vector space $\wedge^{r^i} V^i$ and the morphism $Q^i \rightarrow B_i$ is equivariant with respect to this action. In a similar way Schmitt also constructs an equivariant morphism $\fB_{\us} \rightarrow B'_{\us}$ where $B_{\us}'$ is a projective bundle over the product $\Pi_i R_i$. Let $B_{\us} = B_{m_1} \times \cdots \times B_{m_2} \times B'_{\us} $; then the map $ \fB_{\us} \rightarrow B_{\us}$ is an equivariant, injective and proper morphism (cf. \cite{schmitt05} Section 2.1).
The final step is given by choosing a linearisation on $B_{\us}$ and pulling this back to the parameter scheme $\mathfrak{T}$ via
\[ \mathfrak{T} \hookrightarrow \fD \hookrightarrow \fB_{\us} \hookrightarrow B_{\us} = B_{m_1} \times \cdots \times B_{m_2} \times B'_{\us} .\]
The schemes $B_i$ and $B'_{\us}$ have natural ample linearisations given by $\cL_i:=\cO_{B_i}(1)$ and $\cL':=\cO_{B'_{\us}}(1)$. The linearisation on $B_{\us}$ is given by taking a weighted tensor product of these linearisations and twisting by a character $\rho$ of $G=G_{\us}$. The character $\rho : G \rightarrow \CC^*$ is the character determined by the rational numbers
\[ c_i := \left[ \sigma_i \left( \frac{P_{\us}(n)}{r_{\us} \delta (n)} - 1\right) \left( \frac{r_{\us}}{P_{\us}(n)} - \frac{r^i}{P^i(n)}\right) - \frac{ r^i \eta_i}{P^i(n)} \right] \]
where $P_{\us}:= \sum_i \sigma_i P^i$;
that is, if these are integral we define
\[ \rho (g_{m_1},\cdots,g_{m_2})= \Pi_{i=m_1}^{m_2} \det g_i^{c_i} \]
and if not we can scale everything by a positive integer so that they become integral. We assume $n$ is sufficiently large so that $a_i = \sigma_i ({P_{\us}(n)} - r_{\us} \delta (n))/ r_{\us} \delta (n) + \eta_i $ is positive; these positive rational numbers $\underline{a}=(a_{m_1}, \dots , a_{m_2}, 1)$ are used to define a very ample linearisation
\[ \cL_{\underline{a}}:=\bigotimes_i \cL_i^{\otimes a_i} \otimes \cL \]
on $B_{\us}$ (where again if the $a_i$ are not integral we scale everything so that this is the case). The linearisation $\cL=\cL(\us,\uc)$ on $\mathfrak{T}$ is equal to the pullback of the very ample linearisation $\cL_{\underline{a}}^{\rho}$ on $B_{\us}$ where $\cL_{\underline{a}}^{\rho}$ denotes the linearisation obtained by twisting $\cL_{\underline{a}}$ by the character $\rho$. Of course this can also be viewed as a linearisation on the schemes $\fD'$ and $\fD$ too.
\subsection{Jordan-H\"older filtrations and S-equivalence}\label{sequiv sect}
The moduli space of ($\underline{\sigma}, \underline{\chi}$)-semistable complexes with invariants $P$ is constructed as an open subscheme of the projective GIT quotient
\[ \fD' /\!/_{\cL} G \]
given by the locus where $\zeta \neq 0$ (by definition $\mathfrak{T}$ is the open subscheme of $\fD'$ given by this condition). Recall that the GIT quotient is topologically the semistable set modulo S-equivalence where two orbits are S-equivalent if their orbit closures meet in the semistable locus. This notion can be expressed in terms of Jordan-H\"older filtrations as follows:
\begin{defn}\label{sequiv}
A Jordan--H\"{o}lder filtration of a $(\us,\uc)$-semistable complex $\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot$ is a filtration by subcomplexes
\[ 0_\cdot= \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[0]} \subsetneqq \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[1]} \subsetneqq \cdots \subsetneqq \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[k]} = \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot \]
such that the successive quotients $\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[i]}/ \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[i-1]}$ are $(\us,\uc)$-stable and
\[ P^\mathrm{red}_{\us,\uc}(\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[i]}/ \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[i-1]}) =P^\mathrm{red}_{\us,\uc}(\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot) .\]
This filtration is in general not canonical but the associated graded object
\[ \mathrm{gr}_{ (\underline{\sigma}, \underline{\chi})}(\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot) := \bigoplus_{j=1}^k \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[j]}/ \mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot_{[j-1]}\]
is canonically associated to $\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot$ up to isomorphism. We say two ($\underline{\sigma}, \underline{\chi}$)-semistable complexes are {S-equivalent} if their associated graded objects with respect to ($\underline{\sigma}, \underline{\chi}$) are isomorphic.
\end{defn}
Jordan--H\"{o}lder filtrations of ($\underline{\sigma}, \underline{\chi}$)-semistable complexes exist in exactly the same way as they do for semistable sheaves (for example, see \cite{gieseker_sheaves}).
\subsection{The moduli space}
We are now able to state one of the main results of \cite{schmitt05} for us: the existence of moduli spaces of $(\us,\uc)$-semistable complexes. Recall that there is a parameter scheme $\mathfrak{T}=\mathfrak{T}_{P}$ (which is the open subscheme of $\fD'$ cut out by the condition $\zeta \neq 0$) with an action by a reductive group $G=G_{\us}$ such that the orbits correspond to isomorphism classes of complexes and the stability parameters determine a linearisation $\cL$ of this action. The moduli space is given by taking the open subscheme of the projective GIT quotient $\fD'/\!/_{\mathcal{L}} G$ given by $\zeta \neq 0$.
\begin{thm}(\cite{schmitt05}, p3)\label{schmitt theorem}
Let $X$ be a smooth complex manifold, $P$ be a collection of Hilbert polynomials of degree $\dim X$ and $(\us,\uc)$ be stability parameters. There is a quasi-projective coarse moduli space \[M^{(\underline{\sigma}, \underline{\chi})-ss}(X,{P})\] for S-equivalence classes of $(\us,\uc)$-semistable complexes over $X$ with Hilbert polynomials $P$.
\end{thm}
\subsection{The Hilbert-Mumford criterion}\label{calc HM for cx}
The Hilbert-Mumford criterion allows us to determine GIT semistable points by studying the actions of one-parameter subgroups (1-PSs); that is, nontrivial homomorphisms $\lambda : \CC^* \rightarrow G$. In this section we give some results about the action of 1-PSs of $G=G_{\us}$ on the parameter space $\mathfrak{T}$ for complexes (see also \cite{schmitt05} Section 2.1).
We firstly study the limit of a point $z = (q,[\psi : 1]) \in \mathfrak{T}$ under the action of a 1-PS $\lambda : \CC^* \rightarrow G$. For this limit to exist we need to instead work with a projective completion $\overline{\mathfrak{T}}$ of $\mathfrak{T}$. We take a projective completion which is constructed as a closed subscheme of a projective bundle $\overline{\fD}$ over the projective scheme $\overline{Q} := \Pi_i \overline{Q}^i$ where $\overline{Q}^i$ is the closure of $Q^i$ in the relevant quot scheme. The points of $\overline{\fD}$ over $q=(q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i \in \overline{Q}$ are nonzero pairs $[ \psi : \zeta]$ defined up to scalar multiplication where $\psi : \oplus_i V^i \rightarrow \oplus_i H^0(\mathcal{E}^i(n))$ and $\zeta \in \CC$. Then $\overline{\mathfrak{T}}$ is the subscheme of points $(q,[\psi : \zeta]) \in \overline{\fD}$ such that $\psi =H^0(d(n)) \circ (\oplus_i H^0(q^i(n)))$ where $d : \oplus_i \mathcal{E}^i \rightarrow \oplus_i \mathcal{E}^i $ is given by $d^i : \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$ which satisfy $d^i \circ d^{i-1} =0$. It is clear that the group action and linearisation $\mathcal{L}$ extend to this projective completion.
Recall that $G \cong \SL(V_{\us}) \cap \Pi_i \mathrm{GL}(V^i)$ where $V_{\us} = \oplus_i (V^i)^{\oplus \sigma_i}$ and so a 1-PS $\lambda : \CC^* \rightarrow G$ is given by a collection of 1-PSs $\lambda_i : \CC^* \to \mathrm{GL}(V^i)$ which satisfy \[\Pi_i \mathrm{det}\, \lambda_i(t)^{\sigma_i}= 1.\]
A 1-PS $\lambda_i$ of $\mathrm{GL}(V^i)$ gives rise to a weight space decomposition $V^i = \oplus_{j=1}^s V^i_j$ indexed by a finite collection of integers $k_1 > \cdots > k_s$ where $V^i_j=\{ v \in V^i : \lambda_i(t) \cdot v =t^{k_j} v \}$. This gives a filtration
\[ 0 \subsetneq V^i_{(1)}\subsetneq \cdots \subsetneq V^i_{(s)} = V^i\]
where $V^i_{(j)} := V^i_1 \oplus \cdots \oplus V^i_j$ and if we take a basis of $V^i$ which is compatible with this filtration then
\[ \lambda_i(t) = \left( \begin{array}{ccc}t^{k_1} I_{V^i_1} & & \\ & \ddots & \\ && t^{k_s}I_{V^i_s} \end{array}\right) \] is diagonal. We can diagonalise each of these 1-PSs $\lambda_i$ simultaneously so there is a decreasing sequence of integers $k_1 > \cdots > k_s$ and for each $i$ we have a decomposition $V^i = \oplus_{j=1}^s V^i_j$ (where we may have $V_j^i =0$), and a filtration
\[ 0 \subset V^i_{(1)} \subset V^i_{(2)} \subset \cdots \subset V^i_{(s)} = V^i \]
for which $\lambda_i$ is diagonal.
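In this diagonal form the condition that the $\lambda_i$ define a 1-PS of $G$ can be made completely explicit; we record this elementary reformulation for later use. Since $\det \lambda_i(t) = t^{\sum_{j=1}^s k_j \dim V^i_j}$, the condition $\Pi_i \mathrm{det}\, \lambda_i(t)^{\sigma_i}= 1$ is equivalent to
\[ \sum_i \sigma_i \sum_{j=1}^s k_j \dim V^i_j = 0. \]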
Let $z=(q, [\psi : 1])$ be a point in $\mathfrak{T}$ where $q = (q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i \in Q$; then we can consider its limit
\[ \overline{z} := \lim_{t \to 0} \lambda(t) \cdot z \]
under the 1-PS $\lambda$. By \cite{huybrechts} Lemma 4.4.3,
\[\overline{q}^i:= \lim_{t \to 0} \lambda_i(t) \cdot q^i = \oplus_{j=1}^s q_j^i : \oplus_{j=1}^s V_j^i \otimes \cO_X(-n) \rightarrow \oplus_{j=1}^s \mathcal{E}^i_j \]
where $\mathcal{E}^i_j$ are the successive quotients in the filtration
\[ 0 \subset \mathcal{E}^i_{(1)} \subset \cdots \subset \mathcal{E}^i_{(j)} := q^i (V^i_{(j)} \otimes \mathcal{O}_X(-n) ) \subset \cdots \subset \mathcal{E}^i_{(s)}= \mathcal{E}^i \]
induced by $\lambda_i$. For each $i$ we have a filtration of the corresponding sheaf $\mathcal{E}^i$ induced by $\lambda_i$, and the boundary maps may or may not preserve these filtrations. If $d^i(\mathcal{E}^i_{(j)}) \subseteq \mathcal{E}^{i+1}_{(j)}$ for all $i$ and $j$, then we say $\lambda$ induces a filtration of the point $z$ (or of the corresponding complex $\mathcal{E}^\cdot$) by subcomplexes. It is easy to check that the limit depends on whether $\lambda$ induces a filtration by subcomplexes or not:
\begin{lemma}\label{lemma on fixed pts}
Let $z=(q, [\psi : 1])$ be a point in $\mathfrak{T}$ and $\lambda$ be a 1-PS of $G$ as above with weights $k_1> \cdots > k_s$. Then the limit
\[ \overline{z}:=\lim_{t \to 0} \lambda(t) \cdot z =(\overline{q}, [\overline{\psi}: \overline{\zeta}])\]
is given by $\overline{q}$ as above and $\overline{\psi} =H^0( \overline{d}(n)) \circ (\oplus_i H^0(\overline{q}^i(n)))$ where $\overline{d}$ is given by $\overline{d}^i : \oplus_j \mathcal{E}^i_j \rightarrow \oplus_j \mathcal{E}^{i+1}_j$. Moreover:
\begin{enumerate}
\renewcommand{\labelenumi}{\roman{enumi})}
\item If $\lambda$ induces a filtration by subcomplexes, then $\overline{\zeta} = 1$ and $\overline{d}^{i} = \oplus_{j=1}^s (d^{i}_j: \mathcal{E}^i_j \rightarrow \mathcal{E}^{i+1}_j)$. In particular $\overline{z} \in \mathfrak{T}$ and the corresponding complex is the graded complex associated to the filtration induced by $\lambda$.
\item If $\lambda$ does not induce a filtration by subcomplexes, let \[N := \min_{i,j,l} \{ k_l - k_j : d^{i}(\mathcal{E}^i_{(j)}) \nsubseteq \mathcal{E}^{i+1}_{(l-1)} \} < 0.\] Then $\overline{\zeta} =0$ and we have $\overline{d}^{i}(\mathcal{E}^i_j) \cap \mathcal{E}^{i+1}_l = 0$ unless $k_l - k_j = N$. In particular, the limit $\overline{z}=(\overline{q},[\overline{\psi}:0])$ is not in the parameter scheme $\mathfrak{T}$.
\end{enumerate}
\end{lemma}
\begin{rmk}\label{rmk on fixed pts}
Let $z=(q, [\psi : \zeta])$ be a point in $\overline{\mathfrak{T}}$ given by $q=(q^i : V^i \otimes \cO_X(-n) \rightarrow \mathcal{E}^i)_i \in \overline{Q}$ and $\psi = H^0({d}(n)) \circ \oplus_i H^0({q}^i(n))$ where $d$ is defined by homomorphisms $d^i : \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$. If $z$ is fixed by $\lambda$, then the quotient sheaves are direct sums $q^i = \oplus_j q^i_j : V^i \otimes \cO_X(-n) \rightarrow \oplus_j \mathcal{E}^i_j$ and so the boundary map $d^i$ can be written as $d^i = \oplus_{j,l} d^i_{l,j}$ where $d^i_{l,j}: \mathcal{E}^i_j \rightarrow \mathcal{E}^{i+1}_l$. The fixed point locus of a 1-PS $\lambda : \CC^* \rightarrow G$ acting on $\overline{\mathfrak{T}}$ decomposes into three pieces (each piece being a union of connected components), given as follows:
\begin{itemize}
\item A diagonal piece consisting of points $z$ where $d^i=\oplus_j d^i_{j,j} $ is diagonal for all $i$ and $\zeta \in \CC$.
\item A strictly lower triangular piece consisting of points $z$ where $d^i = \oplus_{j<l} d^i_{l,j} $ is strictly lower triangular for all $i$ and $\zeta = 0$.
\item A strictly upper triangular piece consisting of points $z$ where $d^i = \oplus_{j>l} d^i_{l,j} $ is strictly upper triangular for all $i$ and $\zeta = 0$.
\end{itemize}
Note that by Lemma \ref{lemma on fixed pts} above, if we have a point $z \in \mathfrak{T}$ its limit under $\lambda(t)$ as $t \to 0$ is in either the diagonal or strictly lower triangular piece. In fact, we have $\lim_{t \to 0} \lambda(t) \cdot z \in \mathfrak{T}$ if and only if $\lambda$ induces a filtration of $z$ by subcomplexes.
\end{rmk}
Now that we understand the limit points of $\CC^*$-actions, we can compute the weight of the $\CC^*$-action on fixed points. By definition the Hilbert--Mumford function
\[ \mu^{\cL}(z,\lambda) = \mu^{\cL}(\lim_{t \to 0} \lambda(t) \cdot z , \lambda)\]
is equal to minus the weight of the $\lambda(\CC^*)$-action on the fibre of $\cL$ over $\overline{z}:=\lim_{t \to 0} \lambda(t) \cdot z$. By the construction of $\mathcal{L}$ we have
\begin{equation}\label{form1 for HM}
\mu^\mathcal{L}({z}, \lambda)=\mu^{\mathcal{L}'}(\varphi_{\us}({z}), \lambda)+ \sum_{i=m_1}^{m_2} a_i \mu^{\mathcal{L}_i}({q}^i, \lambda_i) - \rho \cdot \lambda
\end{equation}
where $\varphi_{\us}({z})$ is the decoration associated to ${z}$, and $a_i$ and $\rho$ are the rational numbers and the character used to define $\cL$ (cf. $\S$\ref{linearisation schmitt}). We let $P_{\us} = \sum_i \sigma_i P^i$ and $r_{\us} = \sum_i \sigma_i r^i$.
\begin{lemma}(\cite{schmitt05}, Section 2.1)\label{HM prop}
Let $\lambda$ be a 1-PS of $G$ which corresponds to integers $k_1> \cdots > k_s$ and decompositions $V^i = \oplus_{j=1}^s V^i_j$ for $m_1 \leq i \leq m_2$ as above. Let $z=(q,[\psi : 1]) \in \mathfrak{T}$ where $q=(q^i : V^i \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{E}^i)_i \in Q$; then
\begin{enumerate}
\renewcommand{\labelenumi}{\roman{enumi})}
\item If $\lambda$ induces a filtration of $z$ by subcomplexes
\[ {\mu^{\cL}(z, \lambda )} = \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \left( \sigma_i \frac{P_{\us}(n)}{r_{\us}\delta(n)} + \eta_i \right) \rk \mathcal{E}^i_j \]
where $ \mathcal{E}^i_j =\mathcal{E}^i_{(j)} / \mathcal{E}^i_{(j-1)}$ and $\mathcal{E}^i_{(j)} = q^i( V^i_{(j)} \otimes \cO_X(-n))$.
\item If $\lambda$ does not induce a filtration of $z$ by subcomplexes
\[ {\mu^{\mathcal{L}}(z, \lambda )} = \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \left( \sigma_i \frac{P_{\us}(n)}{r_{\us}\delta(n)} + \eta_i \right) \rk \mathcal{E}^i_j - N \]
where $N$ is the negative integer given in Lemma \ref{lemma on fixed pts}.
\end{enumerate}
\end{lemma}
\begin{proof}
The weight of the action of $\lambda_i$ on $Q^i$ with respect to $\cL_i$ was calculated by Gieseker:
\[\mu^{\mathcal{L}_i}(q^i,\lambda_i) =\sum_{j=1}^s k_j \left(\rk\mathcal{E}^i_j - \dim V^i_j \frac{r^i}{P^i(n)} \right) .\]
We can insert this into the formula (\ref{form1 for HM}) along with the exact values of $a_i$ and $c_i$ and use the fact that $\lambda$ is a 1-PS of $\SL(\oplus_i (V^i)^{\oplus \sigma_i})$ to reduce this to
\[ \mu^{\cL}(z, \lambda) =\mu^{\cL'}(\varphi_{\us}(z),\lambda) + \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \left( \sigma_i \frac{P_{\us}(n)}{r_{\us}\delta(n)} -\sigma_i + \eta_i \right) \rk \mathcal{E}^i_j . \]
Finally, by studying the construction of the decoration $\varphi_{\us}(z)$ associated to $z$ (for details see \cite{schmitt05}), we see that
\[\mu^{\cL'}(\varphi_{\us}(z),\lambda) = \left\{ \begin{array}{ll} \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \sigma_i \rk \mathcal{E}^i_j & \mathrm{if} \: \lambda \mathrm{\:induces \: a \: filtration \: by \: subcomplexes} \\ \sum_{i=m_1}^{m_2} \sum_{j=1}^s k_j \sigma_i \rk \mathcal{E}^i_j - N & \mathrm{otherwise} \end{array} \right.\]
where $N$ is the negative integer of Lemma \ref{lemma on fixed pts}.
\end{proof}
\begin{rmk}\label{only need to worry about subcxs}
Schmitt observes that we can rescale the stability parameters by picking a sufficiently large integer $K$ and replacing $(\delta,\ue)$ with $(K \delta,\ue / K)$, so that for GIT semistability we need only worry about 1-PSs which induce filtrations by subcomplexes (cf. \cite{schmitt05}, Theorem 1.7.1). This explains why the test objects for (semi)stability in Definition \ref{stab defn} are subcomplexes rather than weighted sheaf filtrations.
\end{rmk}
\section{Stability conditions relating to cohomology}\label{sec on stab}
In this section we study these notions of stability for complexes in greater depth. As we are now studying complexes with varying invariants $P$ we do not impose any condition on $\ue$ (such as $\sum_i \eta_i r^i = 0$). An important property of these stability conditions is that we can describe any complex (of torsion free sheaves) as a finite sequence of extensions of semistable complexes by studying its Harder--Narasimhan filtration.
In this section we describe a collection of stability conditions indexed by a small positive rational number $\epsilon$ which can be used to study the cohomology sheaves of a given complex.
The stability parameters we are interested in are of the form $(\underline{1}, \delta \ue/\epsilon)$ where $\underline{1}$ is the constant vector and $\eta_i$ are strictly increasing rational numbers. For a given complex $\cxF$ with torsion free cohomology sheaves
\[ \cH^i(\cxF) := \ker d^i / \im d^{i-1},\] we show that the Harder--Narasimhan filtration of this complex encodes the Harder--Narasimhan filtrations of its cohomology sheaves provided $\epsilon>0$ is sufficiently small.
\subsection{Harder--Narasimhan filtrations}
Given a choice of stability parameters $(\us, \uc)$ every complex has a unique maximal destabilising filtration known as its Harder--Narasimhan filtration:
\begin{defn}\label{HN filtr}
Let $\cxF$ be a complex and $(\us,\uc)$ be stability parameters. A {Harder--Narasimhan filtration} for $\cxF$ with respect to $(\us,\uc)$ is a filtration by subcomplexes
\[ 0_\cdot= \cxF_{(0)} \subsetneqq \cxF_{(1)} \subsetneqq \cdots \subsetneqq \cxF_{(s)} = \cxF \]
such that the successive quotients $\cxF_j=\cxF_{(j)} / \cxF_{(j-1)}$ are complexes of torsion free sheaves which are $(\us,\uc)$-semistable and have decreasing reduced Hilbert polynomials with respect to $(\us,\uc)$:
\[ P_{\us, \uc}^{\mathrm{red}}(\cxF_{1}) > P_{\us, \uc}^{\mathrm{red}}(\cxF_2) > \cdots > P_{\us, \uc}^{\mathrm{red}}(\cxF_s) .\]
The Harder--Narasimhan type of $\cxF$ with respect to $(\us, \uc )$ is given by $\tau =({P}_{1}, \dots, {P}_{s})$ where ${P}_{j} = (P_j^i)_{i \in \ZZ}$ is the tuple of Hilbert polynomials of the complex $\cxF_j$ so that
\[ P^i_j := P(\cF^i_j)=P(\cF^i_{(j)}/\cF^i_{(j-1)}). \]
\end{defn}
The Harder--Narasimhan filtration can be constructed inductively from the maximal destabilising subcomplex:
\begin{defn}
Let $\cxF$ be a complex and $(\us,\uc)$ be stability parameters. A subcomplex $\cxF_1 \subset \cxF$ is a {maximal destabilising subcomplex} for $\cxF$ with respect to $(\us,\uc)$ if
\begin{enumerate}
\renewcommand{\labelenumi}{\roman{enumi})}
\item The complex $\cxF_1$ is $(\us,\uc)$-semistable,
\item For every subcomplex $\mathcal{E}^\cdot $ of $\cxF$ such that $\cxF_1 \subsetneq \mathcal{E}^\cdot$ we have
\[ P_{\us,\uc}^{\mathrm{red}}(\cxF_1) > P_{\us, \uc}^{\mathrm{red}}(\mathcal{E}^\cdot). \]
\end{enumerate}
\end{defn}
The existence and uniqueness of the maximal destabilising subcomplex follows in exactly the same way as the original proof for vector bundles of Harder and Narasimhan \cite{harder}.
\subsection{The limit as $\epsilon$ tends to zero }
Recall that we are interested in studying the collection of parameters $(\underline{1}, \delta \ue/\epsilon) $ indexed by a small positive rational number $\epsilon$ where $\underline{1}$ is the constant vector, $\eta_i$ are strictly increasing rational numbers and $\delta$ is a positive rational polynomial of degree $\max(\dim X -1 , 0)$. In this section we study the limit as $\epsilon$ tends to zero.
Observe that
\[ P_{\underline{1}, \delta \ue/\epsilon}^{\mathrm{red}}(\mathcal{E}^\cdot) \leq P_{\underline{1}, \delta \ue/\epsilon}^{\mathrm{red}}(\cxF) \]
is equivalent to
\[ \epsilon \frac{\sum_i P(\mathcal{E}^i)}{\sum_i \rk \mathcal{E}^i} - \delta \frac{\sum_i \eta_i \rk \mathcal{E}^i }{\sum_i \rk \mathcal{E}^i} \leq \epsilon \frac{\sum_i P(\cF^i)}{\sum_i \rk \cF^i} - \delta \frac{\sum_i \eta_i \rk \cF^i }{\sum_i \rk \cF^i}\]
and if we take the limit as $\epsilon \to 0$ and then divide by the positive polynomial $\delta$ (which reverses the inequality because of the minus signs), we get
\begin{equation}\label{eps is zero ineq} \frac{\sum_i \eta_i \rk \mathcal{E}^i}{\sum_i \rk\mathcal{E}^i} \geq \frac{\sum_i \eta_i \rk \cF^i}{\sum_i \rk\cF^i}. \end{equation}
We say $\cxF$ is $(\underline{0}, \delta \ue)$-semistable if all nonzero proper subcomplexes $\mathcal{E}^\cdot \subset \cxF$ satisfy the inequality (\ref{eps is zero ineq}).
This is a slight generalisation of the parameters considered by Schmitt in \cite{schmitt05}, as we now allow $\sigma_i$ to be zero. These generalised stability parameters will no longer define an ample linearisation on the parameter space (cf. $\S$\ref{linearisation schmitt}), but we can still study the corresponding notion of semistability.
\begin{lemma}\label{ss for sigma0}
Suppose $\eta_i < \eta_{i+1}$ for all integers $i$; then the only $(\underline{0}, \delta \ue)$-semistable complexes (of torsion free sheaves) are shifts of (torsion free) sheaves and complexes which are isomorphic to a shift of the cone on the identity morphism of a (torsion free) sheaf.
\end{lemma}
\begin{proof}
If $\cxF$ is a shift of a torsion free sheaf $\cF^k$, then a subcomplex $\mathcal{E}^\cdot$ of $\cxF$ is just a subsheaf and it is trivial to verify it is $(\underline{0}, \delta \ue)$-semistable. If $\cxF$ is isomorphic to a shift of the cone on the identity morphism of a torsion free sheaf, then there is an integer $k$ such that $d^k$ is an isomorphism and $\cF^i= 0$ unless $i = k$ or $k+1$. A subcomplex $\mathcal{E}^\cdot$ of $\cxF$ is either concentrated in position $k+1$ or concentrated in $[k,k+1]$. In the second case we must have $\rk \mathcal{E}^k \leq \rk \mathcal{E}^{k+1}$, and in both cases it is easy to verify the inequality for $(\underline{0}, \delta \ue)$-semistability using the fact that the $\eta_i$ are strictly increasing.
Now suppose $\cxF$ is $(\underline{0}, \delta \ue)$-semistable. If all the boundary morphisms $d^i$ are zero, then each nonzero sheaf $\cF^k$ is both a subcomplex and quotient complex and so by semistability
\[ \eta_k = \frac{\sum_i \eta_i \rk \cF^i}{\sum_i \rk\cF^i}. \]
As the $\eta_i$ are strictly increasing, there can be at most one $k$ such that $\cF^k$ is nonzero. If there is a nonzero boundary map $d^k$ then the image of this boundary map can be viewed as a quotient complex (in position $k$) and a subcomplex (in position $k+1$) so that
\begin{equation}\label{eqn for eta 1} \eta_{k} \leq \frac{\sum \eta_i \rk \cF^i}{\sum \rk\cF^i} \leq \eta_{k+1} .\end{equation}
As the $\eta_i$ are strictly increasing, there can be at most one $k$ such that $d^k$ is nonzero. From above, we see that $\cF^i = 0$ unless $i =k$ or $ k+1$. As the $\eta_i$ are increasing, we see that the inequalities of (\ref{eqn for eta 1}) must be strict. We can consider the kernel and cokernel of $d^k$ as a subcomplex and quotient complex respectively and by comparing the inequalities obtained from semistability with (\ref{eqn for eta 1}), we see that $d^k$ must be an isomorphism and so $\cxF$ is isomorphic to a shift of the cone on the identity morphism of $\cF^{k+1}$.
\end{proof}
\begin{lemma}\label{HN filtr for sigma0}
Suppose $\eta_i < \eta_{i+1}$ for all integers $i$ and $\cxF $ is a complex. Let $k$ be the minimal integer for which $\cF^k$ is nonzero. Then the maximal destabilising subcomplex $\cxF_{(1)}$ of $\cxF $ with respect to $(\underline{0},\delta \ue)$ is
\[ \cxF_{(1)} = \left\{ \begin{array}{cccccccccl} \cdots \rightarrow & 0 & \rightarrow & \ker d^k & \rightarrow & 0 &\rightarrow & 0 & \rightarrow \cdots & \quad \mathrm{if} \: \ker d^k \neq 0, \\ \cdots \rightarrow & 0 & \rightarrow &\cF^k & \rightarrow & \im d^k &\rightarrow & 0 & \rightarrow \cdots & \quad \mathrm{if} \: \ker d^k = 0. \end{array} \right. \]
\end{lemma}
\begin{proof}
By Lemma \ref{ss for sigma0} these complexes are both $(\underline{0},\delta \ue)$-semistable. In order to prove this gives the maximal destabilising subcomplex we need to show that if $\cxF_{(1)} \subsetneq \mathcal{E}^\cdot \subset \cxF$, then
\[ \frac{\sum_i \eta_i \rk \mathcal{E}^i}{\sum \rk \mathcal{E}^i} > \frac{\sum_i \eta_i \rk \cF_{(1)}^i}{\sum \rk \cF_{(1)}^i}. \]
As $\mathcal{E}^\cdot \neq \cxF_{(1)}$, the set
\[ I:= \{ i \in \mathbb{Z} : \mathcal{E}^i \neq \cF^i_{(1)} \} \]
is nonempty. We note that if $i \in I$, then $\mathcal{E}^i \neq 0 $.
Suppose $\ker d^k \neq 0$. If $k \in I$, then also $k+1 \in I$ as $\ker d^k \subsetneq \mathcal{E}^k$ and so $0 \neq d(\mathcal{E}^k) \subset \mathcal{E}^{k+1}$. As the $\eta_i$ are strictly increasing we have $ \eta_{k} \rk \mathcal{E}^k + \eta_{k+1} \rk \mathcal{E}^{k+1} > \eta_k(\rk \mathcal{E}^{k} + \rk \mathcal{E}^{k+1} ). $ If $i > k+1$ and belongs to $I$, then $ \eta_{i} \rk \mathcal{E}^i > \eta_k\rk \mathcal{E}^{i} $. So
\[ \sum_{i \in I} \eta_i \rk \mathcal{E}^i > \sum_{i \in I} \eta_k \rk \mathcal{E}^{i} \quad \mathrm{and} \quad \sum_{i \notin I} \eta_i \rk \mathcal{E}^i = \sum_{i \notin I} \eta_k \rk \mathcal{E}^{i}; \]
hence
\[ \frac{\sum \eta_i \rk \mathcal{E}^i}{\sum \rk \mathcal{E}^i} >\eta_k = \frac{\sum \eta_i \rk \cF_{(1)}^i}{\sum \rk \cF_{(1)}^i} .\]
The case when $\ker d^k = 0$ is proved in the same way.
\end{proof}
\begin{cor}\label{HNF corr coh rmk}
If $\cxF$ has torsion free cohomology sheaves, then its Harder--Narasimhan filtration with respect to these stability parameters picks out the kernels and images of the boundary maps successively:
\[\begin{array}{cccccccccc}
\cxF_{(1)} : \quad & \cdots \rightarrow 0 & \rightarrow & \ker d^k & \rightarrow & 0 & \rightarrow & 0 & \rightarrow & 0 \cdots \\
& & & \cap & & \cap & & \cap & & \\
\cxF_{(2)} : \quad & \cdots \rightarrow 0 & \rightarrow & \cF^k & \rightarrow & \im d^k & \rightarrow & 0 & \rightarrow & 0 \cdots \\
& & & \cap & & \cap & & \cap & & \\
\cxF_{(3)} : \quad & \cdots \rightarrow 0 & \rightarrow & \cF^k & \rightarrow & \ker d^{k+1} & \rightarrow & 0 & \rightarrow & 0 \cdots \\
& & & \cap & & \cap & & \cap & & \\
\cxF_{(4)} : \quad & \cdots \rightarrow 0 & \rightarrow & \cF^k & \rightarrow & \cF^{k+1} & \rightarrow & \im d^{k+1} & \rightarrow & 0 \cdots \\
& & & \vdots & & \vdots & & \vdots & &
\end{array} \]
In particular, the successive quotients are $\cH^i(\cxF)[-i]$ or isomorphic to $\mathrm{Cone}(\mathrm{Id}_{\im d^i})[-(i+1)] $.
\end{cor}
\subsection{Semistability with respect to $(\underline{1}, \delta \ue/\epsilon)$ }
Recall that a torsion free sheaf $\cF$ is semistable (in the sense of Gieseker) if for all subsheaves $\mathcal{E} \subset \cF$ we have
\[ \frac{P(\mathcal{E})}{\rk \mathcal{E}} \leq \frac{P(\cF)}{\rk \cF}. \]
A torsion free sheaf can be viewed as a complex (by placing it in any position $k$) and it is easy to see that $(\us, \uc)$-semistability of the associated complex is equivalent to (Gieseker) semistability of the sheaf. For $\epsilon$ a small positive rational number we consider the stability parameters $(\underline{1}, \delta \ue/\epsilon)$ where $\underline{\eta}$ are strictly increasing.
\begin{lemma}\label{lemma X} Suppose $\cxF$ is a complex for which there is an $\epsilon_0>0$ such that for all positive rational $\epsilon < \epsilon_0$ this complex is $(\underline{1}, \delta \ue/\epsilon)$-semistable. Then $\cxF$ is either a shift of a Gieseker semistable torsion free sheaf or isomorphic to a shift of the cone on the identity morphism of a semistable torsion free sheaf.
\end{lemma}
\begin{proof}
By studying the limit as $\epsilon$ tends to zero we see that $\cxF$ is $(\underline{0}, \delta \ue)$-semistable and so $\cxF$ is either a shift of a torsion free sheaf or isomorphic to a shift of the cone on the identity morphism of a torsion free sheaf by Lemma \ref{ss for sigma0}. If $\cxF$ is the shift of a sheaf, then $(\underline{1}, \delta \ue/\epsilon)$-semistability for any $0<\epsilon < \epsilon_0 $ implies this sheaf must be Gieseker semistable. If $\cxF$ is the cone on the identity morphism of a torsion free sheaf $\cF$, then $\cF$ must be semistable as for any subsheaf $\mathcal{F}' \subset \mathcal{F}$ we can consider $\mathrm{Cone}(\mathrm{id}_{\cF'})$ as a subcomplex and so
\[ \frac{P(\cF')}{\rk \cF'} - \delta \frac{\eta_k + \eta_{k+1}}{2\epsilon} \leq \frac{P(\mathcal{F})}{\rk \mathcal{F}} - \delta \frac{\eta_k + \eta_{k+1}}{2 \epsilon} \]
by $(\underline{1}, \delta \ue/\epsilon)$-semistability for any $0 < \epsilon < \epsilon_0$.
\end{proof}
\begin{rmk} \label{rmk on sigma0}
Conversely, a shift of a semistable torsion free sheaf or a shift of a cone on the identity morphism of a semistable torsion free sheaf is $(\underline{1}, \delta \ue/\epsilon)$-semistable for any $\epsilon >0$.
\end{rmk}
\begin{rmk}
As $(\underline{1}, \delta \ue/\epsilon)$-semistability of a complex associated to a torsion free sheaf $\cF$ is equivalent to (Gieseker) semistability of $\cF$, it follows that the Harder--Narasimhan filtration of the associated complex with respect to $(\underline{1}, \delta \ue/\epsilon)$ is given by the Harder--Narasimhan filtration of the sheaf. Similarly we see that the Harder--Narasimhan filtration of $\mathrm{Cone}(\mathrm{id}_{\cF})$ with respect to $(\underline{1}, \delta \ue/\epsilon) $ is given by taking cones on the identity morphism of each term in the Harder--Narasimhan filtration of the sheaf $\cF$.
\end{rmk}
We have seen that the Harder--Narasimhan filtration of $\cxF$ with respect to $(\underline{0}, \delta \ue)$ picks out the successive kernels and images of each boundary map. In particular, the successive quotients are either of the form $\cH^i(\cxF)[-i]$ or isomorphic to $\mathrm{Cone}(\mathrm{Id}_{\im d^i})[-(i+1)] $.
\begin{thm}\label{HN filtrations for epsiloneta}
Let $\cxF$ be a complex concentrated in $[m_1,m_2]$ with torsion free cohomology sheaves. There is an $\epsilon_0 >0$ such that for all rational $0<\epsilon < \epsilon_0$ the Harder--Narasimhan filtration of $\cxF$ with respect to $(\underline{1}, \delta \ue/\epsilon)$ is given by refining the Harder--Narasimhan filtration of $\cxF$ with respect to $(\underline{0}, \delta \ue)$ by the Harder--Narasimhan filtrations of the cohomology sheaves $\cH^i(\cxF)$ and image sheaves $\im d^i$.
\end{thm}
\begin{proof} Firstly, we note that if $\dim X = 0$ every sheaf is Gieseker semistable (all sheaves have the same reduced Hilbert polynomial) and so any choice of $\epsilon_0$ will work. Therefore, we assume $d = \dim X >0$. Let $\cH^i(\cxF)_j$ for $1 \leq j \leq s_i$ (resp. $\im d^i_j$ for $1 \leq j \leq t_i$) denote the successive quotient sheaves in the Harder--Narasimhan filtration of $\cH^i(\cxF)$ (resp. $\im d^i$). The successive quotients in this filtration are either shifts of $\cH^i(\cxF)_j$ or isomorphic to shifts of the cone on the identity morphism of $\im d^i_j$ and so by Remark \ref{rmk on sigma0} these successive quotients are $(\underline{1}, \delta \ue/\epsilon)$-semistable for any rational $\epsilon >0$. Thus it suffices to show there is an $\epsilon_0$ such that for all $0 <\epsilon <\epsilon_0$ we have inequalities
\[ \frac{P(\cH^{m_1}(\cxF)_1)}{\rk \cH^{m_1}(\cxF)_1} - \delta \frac{\eta_{m_1}}{\epsilon} > \dots > \frac{P(\cH^{m_1}(\cxF)_{s_{m_1}})}{\rk \cH^{m_1}(\cxF)_{s_{m_1}}} - \delta \frac{\eta_{m_1}}{\epsilon} > \frac{P(\im d^{m_1}_1)}{\rk \im d^{m_1}_1} - \delta \frac{ \eta_{m_1} + \eta_{m_1 +1}}{2\epsilon} > \cdots \]
\[> \frac{P(\im d^{m_1}_{t_{m_1}})}{\rk \im d^{m_1}_{t_{m_1}}} - \delta \frac{ \eta_{m_1} + \eta_{m_1 +1}}{2\epsilon} > \frac{P(\cH^{m_1+1}(\cxF)_1)}{\rk \cH^{m_1 +1}(\cxF)_1} - \delta \frac{\eta_{m_1 +1}}{\epsilon} > \cdots > \frac{P(\cH^{m_2}(\cxF)_{s_{m_2}})}{\rk \cH^{m_2}(\cxF)_{s_{m_2}}} - \delta \frac{\eta_{m_2}}{\epsilon}. \]
Since we know that the reduced Hilbert polynomials of the successive quotients in the Harder--Narasimhan filtrations of the cohomology and image sheaves are decreasing, it suffices to show for $m_1 \leq i < m_2-1$ that:
\begin{equation*}\label{defining eps0} \begin{split}
1) \: \: & \epsilon \frac{P(\cH^i(\cxF)_{s_i})}{\rk \cH^i(\cxF)_{s_i}} - \delta \eta_i > \epsilon \frac{P(\im d^i_1)}{\rk \im d^i_1} - \delta \frac{ \eta_i + \eta_{i+1}}{2} \: \: \mathrm{and}\\
2) \: \: & \epsilon \frac{P(\im d^i_{t_i})}{\rk \im d^i_{t_i}} - \delta \frac{ \eta_{i} + \eta_{i+1}}{2} > \epsilon \frac{P(\cH^{i+1}(\cxF)_1)}{\rk \cH^{i+1}(\cxF)_1} - \delta \eta_{i+1}.
\end{split}
\end{equation*}
These polynomials all have the same top coefficient and we claim we can pick $\epsilon_0$ so if $0 < \epsilon < \epsilon_0$ we have strict inequalities in the second to top coefficients. Let $\mu(\mathcal{A})$ denote the second to top coefficient of the reduced Hilbert polynomial of $\mathcal{A}$ which is (up to multiplication by a positive constant) the slope of $\mathcal{A}$ and let $\delta^{\mathrm{top}} >0$ be the coefficient of $x^{d-1}$ in $\delta$. For $m_1 \leq i < m_2 -1$ let
\[ M_i := \max\left\{\mu(\im d^i_{1}) - \mu(\mathcal{H}^i(\cxF)_{s_i}), \mu(\mathcal{H}^{i+1}(\cxF)_{1}) - \mu(\im d^i_{t_i}) \right\}. \]
We pick $\epsilon_0 >0$ so that if $M_i >0$ then $ \epsilon_0 < \delta^{\mathrm{top}} ({\eta_{i+1} - \eta_i})/{2M_i} $.
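To spell out why this choice of $\epsilon_0$ suffices (an elementary check which we include for the reader's convenience): since the polynomials in question share the same top coefficient, inequalities $1)$ and $2)$ hold provided the corresponding inequalities between the coefficients of $x^{d-1}$ hold strictly. For $1)$ this coefficient inequality reads
\[ \epsilon \left( \mu(\im d^i_1) - \mu(\mathcal{H}^i(\cxF)_{s_i}) \right) < \delta^{\mathrm{top}} \, \frac{\eta_{i+1} - \eta_i}{2}, \]
and for $2)$ it reads $\epsilon ( \mu(\mathcal{H}^{i+1}(\cxF)_1) - \mu(\im d^i_{t_i}) ) < \delta^{\mathrm{top}} ({\eta_{i+1} - \eta_i})/{2}$. If $M_i \leq 0$ both hold for every $\epsilon > 0$, as the right hand sides are positive; if $M_i > 0$ both hold for all $0 < \epsilon < \epsilon_0$ by the choice of $\epsilon_0$.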
\end{proof}
\section{The stratification of the parameter space}\label{sec on strat}
In the introduction we described a stratification of a projective $G$-scheme $B$ (with respect to an ample linearisation $\mathcal{L}$) by $G$-invariant subvarieties $\{S_\beta : \beta \in \mathcal{B} \}$ which is described in \cite{hesselink,kempf_ness,kirwan}. If we fix a compact maximal torus $T$ of $G$ and positive Weyl chamber $\mathfrak{t}_+$ in the Lie algebra $\mathfrak{t}$ of $T$, then the indices $\beta$ can be viewed as rational weights in $\mathfrak{t}_+$.
Associated to $\beta $ there is a parabolic subgroup $P_\beta \subset G$, a rational 1-PS $\lambda_\beta : \CC^* \rightarrow} \newcommand{\la}{\leftarrow T_{\CC}$ and a rational character $\chi_\beta : T_{\CC} \rightarrow} \newcommand{\la}{\leftarrow \CC^*$ which extends to a character of $P_\beta$.
By definition $Z_\beta$ is the union of the components of the fixed point locus of $\lambda_\beta$ acting on $B$ on which $\lambda_\beta$ acts with weight $|| \beta ||^2$, and $Z_\beta^{ss}$ is the GIT semistable subscheme for the action of the reductive part $\mathrm{Stab} \: \beta$ of $P_\beta$ on $Z_\beta$ with respect to the linearisation $\mathcal{L}^{\chi_{-\beta}}$. Then $Y_\beta$ (resp. $Y_\beta^{ss}$) is defined to be the subscheme of $B$ consisting of points whose limit under $\lambda_\beta(t)$ as $t \to 0$ lies in $Z_\beta$ (resp. $Z_\beta^{ss}$).
By \cite{kirwan} for $\beta \neq 0$ we have $S_\beta = G Y_\beta^{ss} \cong G \times^{P_\beta} Y_\beta^{ss}$.
In this section we study the stratifications of the parameter space $\mathfrak{T}_P(n)$ for complexes associated to the collection of stability conditions $(\underline{1}, \delta \ue/\epsilon)$ given in $\S$\ref{sec on stab}. We relate these stratifications to the natural stratification by Harder--Narasimhan types (see Theorem \ref{HN strat is strat} below).
\subsection{GIT set up}\label{GIT set up}
We consider the collection of stability parameters $(\underline{1}, \delta \ue/\epsilon)$ indexed by a small positive rational parameter $\epsilon$, where $\delta$ is a positive rational polynomial and $\ue$ are strictly increasing rational numbers. In $\S$\ref{sec on stab} we studied semistability and Harder--Narasimhan filtrations with respect to these parameters when $\epsilon$ is very small. The Harder--Narasimhan filtration of a complex with torsion free cohomology sheaves with respect to $(\underline{1}, \delta \ue/\epsilon)$ tells us about the Harder--Narasimhan filtrations of the cohomology sheaves of this complex provided $\epsilon >0$ is chosen sufficiently small (see Theorem \ref{HN filtrations for epsiloneta}).
Recall that the parameter space $\mathfrak{T} = \mathfrak{T}_{P}(n)$ for $(\underline{1},\delta \ue /\epsilon)$-semistable complexes with invariants $P$ is a locally closed subscheme of a projective bundle $\fD$ over a product $Q=Q^{m_1} \times \cdots \times Q^{m_2}$ of open subschemes $Q^i$ of quot schemes. There is an action of a reductive group $G$ on $\mathfrak{T}$ where
\[ G = \SL(\oplus_i V^i) \cap \Pi_i \mathrm{GL}(V^i) \]
and $V^i$ are fixed vector spaces of dimension $P^i(n)$. The linearisation of this action is determined by the stability parameters (see $\S$\ref{linearisation schmitt} or \cite{schmitt05} for details). In $\S$\ref{calc HM for cx} we also described a natural projective completion $\overline{\mathfrak{T}}$ of $\mathfrak{T}$ which is a closed subscheme of a projective bundle $\overline{\fD}$ over a projective scheme $\overline{Q}$. Note that the group $G$ and parameter scheme $\mathfrak{T}$ depend on the choice of a sufficiently large integer $n$. For any $n \gg 0$ and $\epsilon >0$, associated to this action we have a stratification of $\overline{\mathfrak{T}}$ into $G$-invariant locally closed subschemes such that the open stratum is the GIT semistable subscheme. As we are primarily interested in complexes with torsion free cohomology sheaves, which form an open subscheme ${\mathfrak{T}}^{tf}$ of the parameter space $\mathfrak{T}$, we look at the restriction of this stratification to the closure $\overline{\mathfrak{T}}^{tf}$ of ${\mathfrak{T}}^{tf}$ in $\overline{\mathfrak{T}}$:
\begin{equation}\label{snail} \overline{\mathfrak{T}}^{tf} = \bigsqcup_{\beta \in \mathcal{B}} S_\beta. \end{equation}
Every point in $\mathfrak{T}^{tf}$ represents a complex which has a unique Harder--Narasimhan type with respect to $(\underline{1}, \delta \ue/\epsilon)$ and so we can write
\[ \mathfrak{T}^{tf} = \bigsqcup_{\tau} R_\tau \]
where the union is over all Harder--Narasimhan types $\tau$.
Let us fix a complex $\cxF$ with torsion free cohomology sheaves and invariants $\underline{P}$. We can assume we have picked $\epsilon$ sufficiently small as given by Theorem \ref{HN filtrations for epsiloneta}, so that the successive quotients appearing in its Harder--Narasimhan filtration with respect to $(\underline{1}, \delta \ue/\epsilon)$ are defined using the successive quotients in the Harder--Narasimhan filtrations of $\cH^i(\cxF)$ and $\im d^i$. Let $\tau$ be the Harder--Narasimhan type of $\cxF$ with respect to these parameters; we assume this is a nontrivial Harder--Narasimhan type (i.e. $\cxF$ is unstable with respect to $(\underline{1}, \delta \ue/\epsilon)$).
Let $H_{i,j}$ (resp. $I_{i,j}$) denote the Hilbert polynomial of the $j$th successive quotient $\cH^i(\cxF)_j$ (resp. $\im d^i_j$) in the Harder--Narasimhan filtration of the sheaf $\cH^i(\cxF)$ (resp. $\im d^i$) for $1 \leq j \leq s_i$ (resp. $1 \leq j \leq t_i$). We also let ${H}_{i,j} =( H^{k}_{i,j})_{k \in \ZZ}$ and ${I}_{i,j} =(I^{k}_{i,j})_{k \in \ZZ}$ denote the collection of Hilbert polynomials given by
\[ H^{k}_{i,j} = \left\{ \begin{array}{ll} H_{i,j} & \mathrm{if} \: k = i, \\ 0 & \mathrm{otherwise}, \end{array} \right. \quad \mathrm{and} \quad I^{k}_{i,j}= \left\{ \begin{array}{ll} I_{i,j} & \mathrm{if} \: k = i,i+1, \\ 0 & \mathrm{otherwise}. \end{array} \right.\]
Then the Harder--Narasimhan type of $\cxF$ is given by
\[ \tau = ( {H}_{m_1,1}, \dots, {H}_{m_1,s_{m_1}}, {I}_{m_1,1}, \dots, {I}_{m_1,t_{m_1}}, {H}_{m_1+1,1}, \dots, {H}_{m_2,s_{m_2}} ) \]
which we will frequently abbreviate to $\tau = ({H}, {I})$ where ${H}=(H^{k}_{i,j})_{i,j,k \in \ZZ}$ and ${I}=(I^{k}_{i,j})_{i,j,k \in \ZZ}$.
\begin{ass}\label{ass on epsilon}
We may also assume $\epsilon$ is sufficiently small so that the only $(\underline{1}, \delta \ue/\epsilon)$-semistable complexes with Hilbert polynomials ${I}_{i,j}$ are isomorphic to cones on the identity morphism of a torsion free semistable sheaf.
\end{ass}
\subsection{Boundedness}
We first give a general boundedness result for complexes of fixed Harder--Narasimhan type:
\begin{lemma}\label{cxs HN type bdd}
The set of sheaves occurring in a complex of torsion free sheaves with Harder--Narasimhan type $({P}_{1}, \dots ,{P}_{s})$ with respect to $(\underline{\sigma}, \underline{\chi})$ is bounded.
\end{lemma}
\begin{proof}
This follows from a result of Simpson (see \cite{simpson} Theorem 1.1) which states that a collection of torsion free sheaves on $X$ of fixed Hilbert polynomial is bounded if the slopes of their subsheaves are bounded above by a fixed constant. Recall that the slope of a sheaf is (up to multiplication by a positive constant) the second to top coefficient in its reduced Hilbert polynomial. Let $\mathcal{E}^\cdot$ be a complex with this Harder--Narasimhan type; then for any subcomplex $\cxG$ of $\mathcal{E}^\cdot$ we have
\[ P_{\underline{\sigma}, \underline{\chi}}^{\mathrm{red}}(\cxG) \leq P_{\underline{\sigma}, \underline{\chi}}^{\mathrm{red}}(\mathcal{E}^\cdot_{1}) = \frac{\sum_i \sigma_i P^i_1 - \delta \sum_i \eta_i r^i_1}{\sum_i \sigma_i r^i_1}=:R\]
where $\mathcal{E}^\cdot_1$ is the maximal destabilising subcomplex of $\mathcal{E}^\cdot$, which has Hilbert polynomials specified by ${P}_1$. Suppose $\cG$ is a subsheaf of $\mathcal{E}^i$ and consider the subcomplex
\[ \cxG : \quad 0 \to \cdots \to 0 \to \cG \to \mathcal{E}^{i+1} \to \cdots \to \mathcal{E}^{m_2}; \]
then we have an inequality of polynomials
\[ \frac{\sigma_i P(\cG) - \delta \eta_i \rk \cG+\sum_{j >i} \sigma_j P^j_1 - \delta \sum_{j>i} \eta_j r^j_1}{\sigma_i \rk \cG+ \sum_{j>i} \sigma_j r^j_1} \leq R. \]
The top coefficients agree and so we have an inequality
\begin{equation*} \frac{\deg \cG}{ \rk \cG}\leq
\frac{\delta^{\mathrm{top}}}{\sigma_i} \left(\eta_i + \frac{ \sum_{j>i} \eta_j r^j_1}{ \rk \cG}\right) - \frac{\sum_{j>i}\sigma_j d^j_1 }{\sigma_i \rk \cG} + \left( 1 + \frac{\sum_{j>i}\sigma_j r^j_1}{\sigma_i \rk \cG}\right) \left(\frac{ \sum_j \sigma_j d^j_1 - \delta^{\mathrm{top}} \sum_j \eta_j r^j_1}{\sum_j \sigma_j r^j_1} \right) \end{equation*}
where $d^j_1$ is (up to multiplication by a positive constant) the second to top coefficient of $P^j_1$ and $\delta^\mathrm{top}$ is (up to multiplication by a positive constant) the leading coefficient of $\delta$. Since the rank of $\cG$ is bounded, it follows that the slopes of subsheaves $\cG \subset \mathcal{E}^i$ are bounded above.
\end{proof}
As the set of sheaves occurring in complexes of a given Harder--Narasimhan type is bounded, we can pick $n$ so that they are all $n$-regular and thus parametrised by $\mathfrak{T}(n)$.
\begin{cor}\label{how to pick n cx}
Let $({P}_{1}, \dots ,{P}_{s})$ be a Harder--Narasimhan type with respect to $(\underline{\sigma}, \underline{\chi})$. Then we can choose $n$ sufficiently large so that for $1\leq i_{1} < \dots < i_k \leq s$ all the sheaves occurring in a complex of torsion free sheaves with Harder--Narasimhan type $({P}_{i_1}, \dots ,{P}_{i_k})$ are $n$-regular.
\end{cor}
\begin{ass}\label{assum on n}
Let $\tau=({H}, {I})$ be the Harder--Narasimhan type of the complex $\cxF$ we fixed in $\S$\ref{GIT set up}. We assume $n$ is sufficiently large so that the statement of Corollary \ref{how to pick n cx} holds for this Harder--Narasimhan type. In particular this means every complex $\mathcal{E}^\cdot$ with Harder--Narasimhan type $\tau$ is parametrised by $\mathfrak{T}^{tf}$.
\end{ass}
\subsection{The associated index}
In this section we associate to the complex $\cxF$ with Harder--Narasimhan type $\tau$ a rational weight $\beta(\tau,n)$ which we will show later on is an index for an unstable stratum in the stratification
defined at (\ref{snail}) when $n$ is sufficiently large.
Let $z=(q,[\psi : 1]) $ be a point in $\mathfrak{T}^{tf}$ which parametrises the fixed complex $\cxF$ with Harder--Narasimhan type $\tau$.
As mentioned in the introduction, the stratification can also be described by using Kempf's notion of adapted 1-PSs, and so rather than searching for a rational weight $\beta$ we look for a (rational) 1-PS $\lambda_{\beta}$ which is adapted to $z$. By definition, a 1-PS $\lambda$ is adapted to $z$ if it minimises the normalised Hilbert--Mumford function:
\[ \frac{\mu^{\cL} (z, \lambda)}{|| \lambda ||} = \min_{\lambda'} \frac{ \mu^{\cL}(z, \lambda')}{|| \lambda'||}\]
and is therefore most responsible for the instability of $z$. It is natural to expect that $\lambda$ should induce a filtration of $\cxF$ which is most responsible for the instability of this complex; that is, its Harder--Narasimhan filtration. To distinguish between the cohomology and image parts of this Harder--Narasimhan filtration we write the Harder--Narasimhan filtration of $\cxF$ as
\[ 0 \subsetneq \mathcal{A}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{A}^\cdot_{m_1,(s_{m_1})} \subsetneq \mathcal{B}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{B}^\cdot_{m_1,(t_{m_1})} \subsetneq \mathcal{A}^\cdot_{m_1 +1,(1)} \subsetneq \cdots \subsetneq \mathcal{A}^\cdot_{m_2,(s_{m_2})}=\cxF \]
where the quotient $\mathcal{A}^\cdot_{k,j}$ (resp. $\cB^\cdot_{k,j}$) of $\mathcal{A}^{\cdot}_{k,(j)}$ (resp. $\cB^\cdot_{k,(j)}$) by its predecessor is isomorphic to $\cH^k(\cxF)_j[-k]$ (resp. $\mathrm{Cone}(\mathrm{id}_{\im d^k_j})[-(k+1)]$).
Such a filtration induces filtrations of the vector space $V^i$ for $m_1 \leq i \leq m_2$:
\begin{equation}\label{filtr of Vi} 0 \subset V^i_{m_1,(1)} \subset \cdots \subset V^i_{m_1,(s_{m_1})} \subset W^i_{m_1,(1)} \subset \cdots \subset W^i_{m_1,(t_{m_1})} \subset \dots \subset V^i_{m_2,(s_{m_2})} = V^i \end{equation}
where \[V^i_{k,(j)} := H^0(q^i(n))^{-1}H^0(\mathcal{A}^i_{k,(j)}(n)) \quad \mathrm{and} \quad W^i_{k,(j)} := H^0(q^i(n))^{-1}H^0(\mathcal{B}^i_{k,(j)}(n)).\] Let $V^i_{k,j}$ (respectively $W^i_{k,j}$) denote the quotient of $V^i_{k,(j)}$ (respectively $W^i_{k,(j)}$) by its predecessor in this filtration. Note that by the construction of the Harder--Narasimhan filtration (see Theorem \ref{HN filtrations for epsiloneta}) we have that $V^i_{k,j} =0$ unless $k=i$ and $W^i_{k,j} =0$ unless $k = i,i-1$ and we also have an isomorphism $W^i_{i,j} \cong W^{i+1}_{i,j}$.
Given integers $a_{k,j}$ for $m_1\leq k \leq m_2$ and $1 \leq j \leq s_k$ and integers $b_{k,j}$ for $m_1\leq k \leq m_2 -1$ and $1 \leq j \leq t_k$ which satisfy
\begin{equation}\begin{split}\label{decr weights} i) & \quad a_{m_1,1} > \dots > a_{m_1,s_{m_1}} > b_{m_1,1} > \dots > b_{m_1,t_{m_1}} > a_{m_1 +1,1} > \dots > a_{m_2, s_{m_2}} \\ ii) & \quad \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \dim V^i_{i,j} + 2 \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \dim W^i_{i,j} =0, \end{split}\end{equation}
we can define a 1-PS $\lambda(\underline{a}, \underline{b})$ of $G$ as follows. Let $V^i_k = \oplus_{j=1}^{s_k} V^i_{k,j}$ and $W^i_k = \oplus_{j=1}^{t_k} W^i_{k,j}$; then define 1-PSs $\lambda_{k}^{H,i} : \mathbb{C}^* \rightarrow \mathrm{GL}(V^i_k)$ and $\lambda_{k}^{I,i} : \mathbb{C}^* \rightarrow \mathrm{GL}(W^i_k)$ by
\[ \lambda^{H,i}_k (t)= \left( \begin{array}{ccc} t^{a_{k,1}}I_{V^i_{k,1}} & & \\ & \ddots & \\ & & t^{a_{k,s_k}}I_{V^i_{k,s_k}} \end{array} \right) \quad \lambda_{k}^{I,i} (t)= \left( \begin{array}{ccc} t^{b_{k,1}}I_{W^i_{k,1}} & & \\ & \ddots & \\ & & t^{b_{k,t_k}}I_{W^i_{k,t_k}} \end{array} \right) \]
Then $\lambda(\underline{a}, \underline{b}):= (\lambda_{m_1}, \dots, \lambda_{m_2})$ is given by
\begin{equation}\label{1ps def} \lambda_i(t): = \left( \begin{array}{ccc} \lambda_{i-1}^{I,i}(t) & & \\ & \lambda^{H,i}_i(t) & \\ & & \lambda^{I,i}_i(t) \end{array} \right) \in \mathrm{GL}(V^i)=\mathrm{GL}(W^i_{i-1} \oplus V^i_i \oplus W^i_i). \end{equation}
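Note that condition $ii$) of (\ref{decr weights}) is precisely the condition that $\lambda(\underline{a}, \underline{b})$ is a 1-PS of $G$; indeed, using the isomorphisms $W^i_{i,j} \cong W^{i+1}_{i,j}$ noted above, the sum of the weights of $\lambda(\underline{a}, \underline{b})$ acting on $\oplus_i V^i$ is
\[ \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \dim V^i_{i,j} + 2 \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \dim W^i_{i,j}, \]
which vanishes exactly when $ii$) holds.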
For all pairs $(\underline{a}, \underline{b})$ the associated 1-PS $\lambda(\underline{a}, \underline{b})$ of $G$ induces the Harder--Narasimhan filtration of $\cxF$ and so by Lemma \ref{HM prop}
\begin{equation*}\begin{split} {\mu^{\mathcal{L}}(z, \lambda(\underline{a}, \underline{b}) )} = & \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{H}^i(\cxF)_j \\ & + \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}_i' + {\eta}'_{i+1}}{\epsilon} \right) \rk \im d^i_j \end{split}\end{equation*}
where $P_{\underline{1}} = \sum_i P^i$ and $r_{\underline{1}} = \sum_i r^i$ and $(\underline{1}, \delta \ue'/\epsilon)$ are the stability parameters associated to $(\underline{1}, \delta \ue/\epsilon)$ which satisfy $\sum_i \eta_i' r^i = 0$ (cf. Remark \ref{normalise}).
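To see how this formula follows from Lemma \ref{HM prop}, note that in degree $i$ the graded pieces of the filtration induced by $\lambda(\underline{a}, \underline{b})$ are the sheaves $\cH^i(\cxF)_j$ with weight $a_{i,j}$, together with the sheaves $\im d^{i-1}_j$ and $\im d^i_j$ with weights $b_{i-1,j}$ and $b_{i,j}$ respectively. Applying Lemma \ref{HM prop} with $\sigma_i = 1$ and $\eta_i$ replaced by $\eta'_i/\epsilon$, each cone piece $\mathrm{Cone}(\mathrm{id}_{\im d^i_j})$ contributes in both degrees $i$ and $i+1$ with the same weight $b_{i,j}$ and the same rank $\rk \im d^i_j$, and these two contributions combine to give the term
\[ b_{i,j} \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i + {\eta}'_{i+1}}{\epsilon} \right) \rk \im d^i_j. \]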
We define
\[ a_{i,j} := \frac{1}{\delta(n)} -\left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \frac{\rk(H_{i,j})}{H_{i,j}(n)} \quad \mathrm{and} \quad b_{i,j} := \frac{1}{\delta(n)} -\left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} + \frac{{\eta}'_i+{\eta}'_{i+1}}{2\epsilon} \right) \frac{\rk(I_{i,j})}{I_{i,j}(n)} \]
where $\rk(H_{i,j})$ and $\rk(I_{i,j})$ are the ranks determined by the leading coefficients of the polynomials $H_{i,j}$ and $I_{i,j}$.
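With these definitions condition $ii$) of (\ref{decr weights}) is automatically satisfied; we sketch the verification for the reader's convenience. Since the sheaves involved are $n$-regular we have $\dim V^i_{i,j} = H_{i,j}(n)$ and $\dim W^i_{i,j} = I_{i,j}(n)$, so
\begin{equation*}\begin{split} \sum_{i,j} a_{i,j} H_{i,j}(n) + 2 \sum_{i,j} b_{i,j} I_{i,j}(n) = & \frac{1}{\delta(n)} \left( \sum_{i,j} H_{i,j}(n) + 2\sum_{i,j} I_{i,j}(n) \right) \\ & - \frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} \left( \sum_{i,j} \rk H_{i,j} + 2 \sum_{i,j} \rk I_{i,j} \right) - \frac{1}{\epsilon} \sum_i \eta'_i r^i. \end{split}\end{equation*}
As $\sum_j H_{i,j} + \sum_j I_{i-1,j} + \sum_j I_{i,j} = P^i$, and similarly for the ranks, the first two terms are each equal to $P_{\underline{1}}(n)/\delta(n)$ and cancel, while the last term vanishes by the normalisation $\sum_i \eta'_i r^i = 0$.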
The rational numbers $(\underline{a}, \underline{b})$ defined above are those which minimise the normalised Hilbert--Mumford function subject to condition $ii$) of (\ref{decr weights}) where $z \in \mathfrak{T}$ is the point which represents the complex $\cxF$ with Harder--Narasimhan type $\tau$. The choice of $\epsilon$ given by Theorem \ref{HN filtrations for epsiloneta} ensures that $(\underline{a}, \underline{b})$ also satisfy the inequalities $i$) of (\ref{decr weights}) for all sufficiently large $n$.
We choose a maximal torus $T_i$ of the maximal compact subgroup $\mathrm{U}(V^i)$ of $\mathrm{GL}(V^i)$ as follows. Take a basis of $V^i$ which is compatible with the filtration of $V^i$ defined at (\ref{filtr of Vi}) and define $T_i$ to be the maximal torus of $\mathrm{U}(V^i)$ given by taking diagonal matrices with respect to this basis. We
pick the positive Weyl chamber
\[ \mathfrak{t_i}_+ := \{ i \mathrm{diag}(a_1, \dots, a_{\dim V^i}) \in \mathfrak{t_i}: a_1 \geq \dots \geq a_{\dim V^i} \} .\]
Let $T$ be the maximal torus of the maximal compact subgroup of $G$ determined by the maximal tori $T_i$ and let $\mathfrak{t}_+$ be the positive Weyl chamber associated to the $\mathfrak{t_i}_+$.
\begin{defn}\label{defn of beta cxs}
We define $\beta = \beta(\tau,n) \in \mathfrak{t}_+$ to be the point defined by the rational weights
\[\beta_i = i\, \mathrm{diag} (b_{i-1,1}, \dots, b_{i-1,t_{i-1}}, a_{i,1}, \dots, a_{i,s_i}, b_{i,1}, \dots, b_{i,t_i}) \in \mathfrak{t_i}_+\] where $a_{i,j}$ appears $H_{i,j}(n)$ times and $b_{k,j}$ appears $I_{k,j}(n)$ times. This rational weight defines a rational 1-PS $ \lambda_{\beta}$ of $G$ by $\lambda_{\beta} = \lambda(\underline{a}, \underline{b})$.
\end{defn}
\subsection{Describing components of $Z_\beta$}
By Remark \ref{rmk on fixed pts}, the $\lambda_{\beta}(\CC^*)$-fixed point locus of $\overline{\mathfrak{T}}^{tf}$ decomposes into three pieces: a diagonal piece, a strictly upper triangular piece and a strictly lower triangular piece. Each of these pieces decomposes further in terms of the Hilbert polynomials of the direct summands of each sheaf in this complex. We are interested in the component(s) of the diagonal part which may contain the graded object
associated to the Harder--Narasimhan filtration of a complex $\mathcal{E}^\cdot} \def\cxF{\mathcal{F}^\cdot} \def\cxG{\mathcal{G}^\cdot$ of Harder--Narasimhan type $\tau$.
We consider the closed subscheme $F_\tau$ of $\overline{\mathfrak{T}}^{tf}$ consisting of $z = (q, [\psi : \zeta])$ where as usual $\psi$ is determined by boundary maps $d^i$ and we have decompositions
\[ q^i = \bigoplus_{j=1}^{t_{i-1}} p^i_{i-1,j} \oplus \bigoplus_{j=1}^{s_i} q^i_{i,j} \oplus \bigoplus_{j=1}^{t_i} p^i_{i,j} \quad \mathrm{and} \quad d^i = \oplus_{j=1}^{t_i}d^i_j\]
where $q^i_{i,j} : V^i_{i,j} \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{E}^i_{i,j}$ is a point in $\mathrm{Quot}(V^i_{i,j} \otimes \mathcal{O}_X(-n), H^i_{i,j})$ and $p^i_{k,j}: W^i_{k,j} \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{G}^i_{k,j}$ is a point in $\mathrm{Quot}(W^i_{k,j} \otimes \mathcal{O}_X(-n), I^i_{k,j})$ and $d^i_j : \mathcal{G}^i_{i,j} \rightarrow \mathcal{G}_{i,j}^{i+1}$.
Following the discussion above we have:
\begin{lemma}\label{fixed pt locus}
$F_\tau$ is a union of connected components of the fixed point locus of $\lambda_{\beta}$ acting on $\overline{\mathfrak{T}}^{tf}$ which is contained completely in the diagonal part of this fixed point locus.
\end{lemma}
\begin{rmk}\label{descr of F and Ttau for cx}
Every point in $\mathfrak{T}_{(\tau)}:=F_\tau \cap \mathfrak{T}^{tf}$ is a direct sum of complexes with Hilbert polynomials specified by $\tau$ and hence we have an isomorphism
\[ \mathfrak{T}_{{H}_{m_1,1}} \times \cdots \times \mathfrak{T}_{{H}_{m_1,s_{m_1}}} \times \mathfrak{T}^{tf}_{{I}_{m_1,1}} \times \cdots \times \mathfrak{T}^{tf}_{{I}_{m_1,t_{m_1}}} \times \mathfrak{T}_{{H}_{m_1 +1,1}}\times \cdots \times \mathfrak{T}_{{H}_{m_2,s_{m_2}}} \cong \mathfrak{T}_{(\tau)}.\]
\end{rmk}
Let $Z_\beta$ and $Y_\beta$ denote the subschemes of the stratum $S_\beta$ as defined in the introduction.
\begin{lemma}\label{descr of Zbeta}
Let $ z \in \mathfrak{T}_{(\tau)}:=F_\tau \cap \mathfrak{T}^{tf} $; then $z \in Z_\beta$.
\end{lemma}
\begin{proof}
Let $z = ( q, [\psi: 1]) $ be a point of $\mathfrak{T}_{(\tau)}$ as above. The weight of the $\lambda_{\beta}$-action on the fibre of $\mathcal{L}$ over $z$ is equal to $-\mu^{\mathcal{L}}(z, \lambda_\beta )$ and by Lemma \ref{HM prop} we have
\begin{equation*} {\mu^{\mathcal{L}}(z, \lambda_\beta )} = \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} a_{i,j} \left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{E}^i_{i,j} + \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} b_{i,j} \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i + {\eta}'_{i+1}}{\epsilon} \right) \rk \cG^i_{i,j}. \end{equation*}
By definition $Z_\beta$ is the union of the connected components of the fixed point locus for the action of $\lambda_{\beta}$ on which $\lambda_{\beta}$ acts with weight $|| \beta||^2$, so it remains to check that the choice of rational numbers $(\underline{a}, \underline{b})$ ensures that $|| \beta ||^2=-\mu^{\mathcal{L}}(z, \lambda_\beta )$.
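The check is a short computation, which we sketch under the assumption that the norm on $\mathfrak{t}$ is the standard one, for which $|| \beta ||^2 = \sum_{i,j} a_{i,j}^2 H_{i,j}(n) + 2\sum_{i,j} b_{i,j}^2 I_{i,j}(n)$. Since $z \in \mathfrak{T}_{(\tau)}$ we have $\rk \mathcal{E}^i_{i,j} = \rk H_{i,j}$ and $\rk \cG^i_{i,j} = \rk I_{i,j}$, and by the definition of $a_{i,j}$ and $b_{i,j}$,
\[ \left( \frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i}{\epsilon} \right) \rk H_{i,j} = \frac{H_{i,j}(n)}{\delta(n)} - a_{i,j}H_{i,j}(n) \quad \mathrm{and} \quad \left( 2\frac{P_{\underline{1}}(n)}{r_{\underline{1}}\delta(n)} + \frac{{\eta}'_i + {\eta}'_{i+1}}{\epsilon} \right) \rk I_{i,j} = \frac{2I_{i,j}(n)}{\delta(n)} - 2b_{i,j}I_{i,j}(n). \]
Substituting these into the formula above gives
\[ \mu^{\mathcal{L}}(z, \lambda_\beta ) = \frac{1}{\delta(n)} \left( \sum_{i,j} a_{i,j}H_{i,j}(n) + 2\sum_{i,j} b_{i,j}I_{i,j}(n) \right) - || \beta ||^2 = -|| \beta ||^2, \]
where the first bracket vanishes by condition $ii$) of (\ref{decr weights}). This completes the proof.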
\end{proof}
\begin{cor}
The scheme $ \mathfrak{T}_{(\tau)} $ is a union of connected components of $Z_\beta \cap \mathfrak{T}^{tf}$.
\end{cor}
Let $F$ be the union of connected components of $Z_\beta$ meeting $\mathfrak{T}_{(\tau)}$; then $F $ is a closed subscheme of $\overline{\mathfrak{T}}^{tf}$ which is completely contained in the diagonal part of $Z_{\beta}$. Consider the subgroup
\[ \mathrm{Stab} \: \beta = \left( \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} \mathrm{GL}(V^i_{i,j}) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} \mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) \right) \cap \mathrm{SL}(\oplus_{i=m_1}^{m_2} V^i) \]
of $G$ which is the stabiliser of $\beta$ under the adjoint action of $G$.
\begin{lemma}\label{descr of Ghat}
$ \mathrm{Stab} \: \beta$ has a central subgroup
\[ \hat{G}= \left\{ (u_{i,j}, w_{i,j}) \in (\mathbb{C}^*)^{\sum_{i=m_1}^{m_2} s_i +\sum_{i=m_1}^{m_2-1} t_i} : \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} (u_{i,j})^{H_{i,j}(n)} \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (w_{i,j})^{2I_{i,j}(n)} =1 \right\} \]
which fixes every point of $F$. This subgroup acts on the fibre of $\mathcal{L}$ over any point of $F$ by a character ${\chi}_F : \hat{G} \rightarrow \mathbb{C}^* $ given by \[ {\chi}_F(u_{i,j},w_{i,j})=\prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} (u_{i,j})^{- \rk H_{i,j} \left(\frac{P_{\underline{1}}(n)}{r_{\underline{1}} \delta (n)} + \frac{\eta'_i}{\epsilon} \right) } \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (w_{i,j})^{- \rk I_{i,j} \left(\frac{2P_{\underline{1}}(n)}{r_{\underline{1}} \delta (n)} + \frac{\eta'_i + \eta'_{i+1}}{\epsilon} \right) }.\]
\end{lemma}
\begin{proof}
The inclusion of $\hat{G}$ into $ \mathrm{Stab} \: \beta$ is given by
\[(u_{i,j} \: , \: w_{i,j}) \mapsto (u_{i,j} I_{V^i_{i,j}} \: , \: w_{i,j} I_{W^i_{i,j}} \: , \: w_{i,j} I_{W^{i+1}_{i,j}}) .\]
Let $z = (q, [ \psi : \zeta]) $ be a point of $F$; then we have a decomposition
\[ q^i = \bigoplus_{j=1}^{t_{i-1}} p^i_{i-1,j} \oplus \bigoplus_{j=1}^{s_i} q^i_{i,j} \oplus \bigoplus_{j=1}^{t_i} p^i_{i,j} . \]
A copy of $\mathbb{C}^*$ acts trivially on each quot scheme and so the central subgroup $\hat{G}$ fixes this quotient sheaf. The boundary maps are of the form $d^{i} = \oplus_{j=1}^{t_i} d^{i}_j$ where $d^{i}_j : \mathcal{G}^i_{i,j} \rightarrow \mathcal{G}^{i+1}_{i,j}$. As $ (u_{i,j} \: , \: w_{i,j}) \in \hat{G}$ acts on both $\mathcal{G}^i_{i,j}$ and $\mathcal{G}^{i+1}_{i,j}$ by multiplication by $w_{i,j}$, the boundary maps are also fixed by the action of $\hat{G}$.
To calculate the character ${\chi}_F : \hat{G} \rightarrow \mathbb{C}^* $ with which this torus acts we fix $(u_{i,j} \: , \: w_{i,j}) \in \hat{G}$ and calculate the weight of the action of this element on the fibre over a point $z \in F$ by modifying the calculations for $\mathbb{C}^*$-actions in $\S$\ref{calc HM for cx} to general torus actions.
\end{proof}
Let $\cL^{\chi_{-\beta}}$ denote the linearisation of the $\mathrm{Stab} \: \beta$ action on $Z_\beta$ given by twisting the original linearisation $\mathcal{L}$ by the character $\chi_{-\beta}$ associated to $-\beta$. Recall that $Z_\beta^{ss}$ is the GIT semistable set with respect to this linearisation. Consider the subgroup
\begin{equation*} \begin{aligned}
G' &:= \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} \SL(V^i_{i,j}) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \SL(W^i_{i,j} \oplus W^{i+1}_{i,j})
\\ &= \left\{ (g^i_j, h^i_{i,j}, h^{i+1}_{i,j} ) \in \mathrm{Stab} \: \beta : \det g^i_j = 1 \: \mathrm{and} \: \det h^i_{i,j} \det h^{i+1}_{i,j} =1 \right\}. \end{aligned}\end{equation*}
\begin{prop}\label{descr of Fss}
Let $F$ be the union of connected components of $Z_\beta$ which meet $\mathfrak{T}_{(\tau)}$; then
\[ F^{\mathrm{Stab} \: \beta-ss}(\mathcal{L}^{\chi_{-\beta}}) = F^{G'-ss}(\mathcal{L}). \]
\end{prop}
\begin{proof}
There is a surjective homomorphism $\Phi $ from $ \mathrm{Stab} \: \beta $ to the central subgroup $ \hat{G} $ defined in Lemma \ref{descr of Ghat}:
\[ \Phi( g_{j}^i, h^{i}_{i,j}, h^{i+1}_{i,j} ) = ( (\det g^{i}_j)^{D/H_{i,j}(n)} , (\det h^i_{i,j} \det h^{i+1}_{i,j})^{D/2I_{i,j}(n)} ) \]
where $D= \Pi_{i=m_1}^{m_2} \Pi_{j=1}^{s_i} H_{i,j}(n) \times \Pi_{i=m_1}^{m_2-1} \Pi_{j=1}^{t_i} 2I_{i,j}(n)$. The composition of the inclusion of $\hat{G}$ into $\mathrm{Stab} \: \beta$ with $\Phi$ is
\[ (u_{i,j}, w_{i,j}) \mapsto ( u_{i,j}^D, w_{i,j}^D). \] Therefore, $\ker \Phi \times \hat{G}$ surjects onto $\mathrm{Stab} \: \beta$ with finite kernel, and since GIT semistability is unchanged under surjections with finite kernel we have
$ F^{\mathrm{Stab} \: \beta -ss} (\mathcal{L}^{\chi_{-\beta}}) = F^{\ker \Phi \times \hat{G} -ss} (\mathcal{L}^{\chi_{-\beta}}) .$
Observe that the restriction of $\chi_\beta$ to the central subgroup $\hat{G}$
\[ \chi_\beta (u_{i,j},w_{i,j}) = \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} u_{i,j}^{a_{i,j} H_{i,j}(n)} \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} w_{i,j}^{b_{i,j} 2I_{i,j}(n)} \]
is equal to the character $\chi_F : \hat{G} \rightarrow \mathbb{C}^*$ defined in Lemma \ref{descr of Ghat}. As we are considering the action of $\ker \Phi \times \hat{G}$ on $F$ linearised by $\mathcal{L}^{\chi_{-\beta}}$, the effects of the action of $\hat{G}$ and the modification by the character corresponding to $-\beta$ cancel so that
$ F^{\ker \Phi \times \hat{G} -ss} (\mathcal{L}^{\chi_{-\beta}}) = F^{\ker \Phi -ss} (\mathcal{L}) .$
Finally note that $G'$ injects into $\ker \Phi$ with finite cokernel and so $F^{\ker \Phi -ss} (\mathcal{L}) = F^{G' -ss} (\mathcal{L})$ which completes the proof.
\end{proof}
\subsection{A description of the stratification}\label{sec with thm}
Consider the semistable subscheme \[\mathfrak{T}_{(\tau)}^{ss} := \mathfrak{T}_{(\tau)}^{\mathrm{Stab} \: \beta -ss}(\mathcal{L}^{\chi_{-\beta}})\] for the $\mathrm{Stab} \: \beta$-action on $ \mathfrak{T}_{(\tau)}$ with respect to $\mathcal{L}^{\chi_{-\beta}}$.
Recall from Remark \ref{descr of F and Ttau for cx} that we have an isomorphism
\[\mathfrak{T}_{(\tau)} \cong \mathfrak{T}_{{H}_{m_1,1}} \times \cdots \times \mathfrak{T}_{{H}_{m_1,s_{m_1}}} \times \mathfrak{T}^{tf}_{{I}_{m_1,1}} \times \cdots \times \mathfrak{T}^{tf}_{{I}_{m_1,t_{m_1}}} \times \mathfrak{T}_{{H}_{m_1 +1,1}}\times \cdots \times \mathfrak{T}_{{H}_{m_2,s_{m_2}}}.\]
Let
\[z = \bigoplus_{i=m_1}^{m_2} \bigoplus_{j=1}^{s_i} z_{i,j} \oplus \bigoplus_{i=m_1}^{m_2-1} \bigoplus_{j=1}^{t_i} y_{i,j} \]
be a point in $\mathfrak{T}_{(\tau)}$; that is, $z_{i,j}=(q^i_{i,j}, [0:1])$ is a point in $\mathfrak{T}_{{H}_{i,j}}$ corresponding to a complex $\mathcal{H}^\cdot_{i,j}$ concentrated in degree $i$, and $y_{i,j}=(p^i_{i,j}, p^{i+1}_{i,j}, [\varphi^{i}_j:1])$ is a point in $\mathfrak{T}^{tf}_{{I}_{i,j}}$ corresponding to a complex $\mathcal{I}^\cdot_{i,j}$ concentrated in degrees $i$ and $i+1$. By Proposition \ref{descr of Fss}, we have \[\mathfrak{T}_{(\tau)}^{ss} =\mathfrak{T}_{(\tau)}^{G'-ss}(\mathcal{L}|_{\mathfrak{T}_{(\tau)}});\] therefore, $z$ is in $\mathfrak{T}_{(\tau)}^{ss}$ if and only if $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$ for every 1-PS $\lambda$ of $G'$. A 1-PS $\lambda$ of $G'$ is given by
\begin{itemize} \item 1-PSs $\lambda^H_{i,j}$ of $\mathrm{SL}(V^i_{i,j})$ and
\item 1-PSs $\lambda^I_{i,j} =(\lambda^{I,i}_{i,j}, \lambda^{I,i+1}_{i,j})$ of $(\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \mathrm{SL}(W^i_{i,j} \oplus W^{i+1}_{i,j})$.
\end{itemize}
\begin{lemma}\label{lemma 1}
Suppose $n$ is sufficiently large. Then for any $z \in \mathfrak{T}_{(\tau)}$ as above for which a direct summand $\mathcal{H}^\cdot_{i,j}$ or $\mathcal{I}^\cdot_{i,j}$ is $(\underline{1}, \delta \ue /\epsilon)$-unstable, there is a 1-PS $\lambda$ of $G'$ such that $\mu^{\mathcal{L}}(z,{\lambda}) < 0$.
\end{lemma}
\begin{proof}
We suppose $n$ is sufficiently large so that Gieseker semistability of a torsion free sheaf with Hilbert polynomial $H_{i,j}$ (respectively $I_{i,j}$) is equivalent to GIT-semistability of a point in the relevant quot scheme representing this sheaf with respect to the linearisation given by Gieseker. We may also assume $n$ is sufficiently large so that for $m_1 \leq i \leq m_2-1$ and $1 \leq j \leq t_i$, we have that $(\underline{1}, \delta \underline{\eta}/\epsilon)$-semistability of a complex with Hilbert polynomials ${I}_{i,j}$ is equivalent to GIT semistability of a point in $\mathfrak{T}_{{I}_{i,j}}$ for the linearisation defined by these stability parameters.
Firstly suppose $\mathcal{H}^{i}_{i,j}$ is unstable for some $i$ and $j$; then there exists a subsheaf $\mathcal{H}^{i,1}_{i,j} \subset \mathcal{H}^{i}_{i,j}$ such that
\begin{equation}\label{eqn H} \frac{\dim H^0(\mathcal{H}^{i,1}_{i,j}(n))}{\rk \mathcal{H}^{i,1}_{i,j}} > \frac{H_{i,j}(n)}{\rk H_{i,j}} .\end{equation}
We construct a 1-PS $\lambda= (\lambda^H_{i,j}, \lambda^I_{i,j})$ of $G'$ with three weights $\gamma_1 > \gamma_2 = 0 > \gamma_3 $. Let \[V^{i,1}_{i,j}= H^0(q^i_{i,j}(n))^{-1}H^0(\mathcal{H}^{i,1}_{i,j}(n))\] and let $V^{i,3}_{i,j}$ be an orthogonal complement to $V^{i,1}_{i,j} \subset V^{i}_{i,j}$. Define \[\lambda^H_{i,j} = \left( \begin{array}{cc} t^{\gamma_1} I_{V^{i,1}_{i,j}} & \\ & t^{\gamma_3} I_{V^{i,3}_{i,j}} \end{array} \right)\]
and define all the other parts of $\lambda$ to be trivial (the weights $\gamma_1$ and $\gamma_3$ should be chosen so $\lambda^H_{i,j}$ has determinant 1). Then by Proposition \ref{HM prop}
\[ \mu^{\mathcal{L}}(z,\lambda) = \left(\frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} + \frac{\eta'_i}{\epsilon}\right) \left[ H_{i,j}(n) \rk \mathcal{H}^{i,1}_{i,j} - \dim H^0(\mathcal{H}^{i,1}_{i,j}(n)) \rk H_{i,j} \right] < 0 .\]
Secondly suppose $\mathcal{I}^\cdot_{i,j}$ is unstable with respect to $(\underline{1}, \delta\underline{\eta}/\epsilon)$; then by our assumption on $\epsilon$ it is not isomorphic to the cone on the identity map of a semistable sheaf. Let $d: \mathcal{I}^{i}_{i,j} \rightarrow \mathcal{I}_{i,j}^{i+1}$ denote the boundary morphism of this complex. If $d = 0$, then we can choose the 1-PS $\lambda$ to pick out the subcomplex $\mathcal{I}^{i}_{i,j} \rightarrow 0$. For example, let $\lambda$ have three weights $1 > 0 > -1$ and let the only nontrivial part be \[\lambda^I_{i,j} = (tI_{W^i_{i,j}},t^{-1}I_{W^{i+1}_{i,j}});\] then $\mu^{\mathcal{L}}(z,\lambda) < 0 $. If $d \neq 0$ but has nonzero kernel, then consider the reduced Hilbert polynomial of this kernel. If the kernel has reduced Hilbert polynomial strictly larger than that of $\mathcal{I}^i_{i,j}$, then choose $\lambda$ to pick out the subcomplex $\ker d \rightarrow 0$. If the kernel has reduced Hilbert polynomial strictly smaller than that of $\mathcal{I}^i_{i,j}$, then choose $\lambda$ to pick out the subcomplex $0 \rightarrow \im d$. If the kernel has reduced Hilbert polynomial equal to $I_{i,j}/ \rk I_{i,j}$, then choose $\lambda$ to pick out the subcomplex $\mathcal{I}^i_{i,j} \rightarrow \im d$. In all three cases we see that $\mu^{\mathcal{L}}(z,\lambda) < 0 $. Finally, if $d$ is an isomorphism but $\mathcal{I}^i_{i,j}$ is not Gieseker semistable, then let $\mathcal{I}^{i,1}_{i,j}$ be its maximal destabilising subsheaf. A 1-PS which picks out the subcomplex $\mathcal{I}^{i,1}_{i,j} \rightarrow d^i_j(\mathcal{I}^{i,1}_{i,j})$ will destabilise $z$.
\end{proof}
\begin{lemma}\label{lemma 2}
Suppose $n$ is sufficiently large and let $z$ be a point in $\mathfrak{T}_{(\tau)}$ such that all the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ are semistable with respect to $(\underline{1}, \delta \ue /\epsilon)$. If $\lambda$ is a 1-PS of $G'$ which induces a filtration of $z$ by subcomplexes, then $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$.
\end{lemma}
\begin{proof}
We suppose $n$ is chosen as in Lemma \ref{lemma 1}. The 1-PS $\lambda$ of $G'$ is given by 1-PSs $\lambda_{i,j}^H$ of $\mathrm{SL}(V^i_{i,j})$ and $\lambda^I_{i,j} =(\lambda^{I,i}_{i,j}, \lambda^{I,i+1}_{i,j})$ of $(\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \mathrm{SL}(W^i_{i,j} \oplus W^{i+1}_{i,j})$. We can diagonalise these 1-PSs simultaneously to get decreasing integers $\gamma_1 > \cdots > \gamma_u$ and decompositions $V^i_{i,j} = V^{i,1}_{i,j} \oplus \cdots \oplus V^{i,u}_{i,j}$ and similarly $W^i_{i,j} = W^{i,1}_{i,j} \oplus \cdots \oplus W^{i,u}_{i,j}$ and $W^{i+1}_{i,j}= W^{i+1,1}_{i,j} \oplus \cdots \oplus W^{i+1,u}_{i,j}$ such that
\[\lambda^H_{i,j}(t) = \left( \begin{array}{ccc} t^{\gamma_1} I_{V^{i,1}_{i,j}} & & \\ & \ddots & \\ & & t^{\gamma_u} I_{V^{i,u}_{i,j}} \end{array} \right)\]
and similarly for $\lambda^I_{i,j}$. The corresponding filtrations of these vector spaces give rise to filtrations of the sheaves $\mathcal{H}^i_{i,j}$, $\mathcal{I}^i_{i,j}$ and $\mathcal{I}^{i+1}_{i,j}$ and we let $\mathcal{H}^{i,k}_{i,j}$, $\mathcal{I}^{i,k}_{i,j}$ and $\mathcal{I}^{i+1,k}_{i,j}$ denote the successive quotients. As $\lambda$ induces a filtration by subcomplexes we have from Proposition \ref{HM prop} that
\begin{equation}\begin{split} \mu^{\mathcal{L}}(z,\lambda) = \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \sum_{k=1}^u \gamma_k &\left[\left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{I}^{i,k}_{i,j} + \left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_{i+1}}{\epsilon} \right) \rk \mathcal{I}^{i+1,k}_{i,j} \right] \\ &+ \sum_{i=m_1}^{m_2} \sum_{j=1}^{s_i} \sum_{k=1}^u \gamma_k \left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_i}{\epsilon} \right) \rk \mathcal{H}^{i,k}_{i,j}. \end{split} \end{equation}
By construction of the linearisation (cf. the definition of $a_i$ in $\S$\ref{linearisation schmitt}), the numbers $P_{\underline{1}}(n)/r_{\underline{1}}\delta(n) + \eta_i'/\epsilon$ are positive. As $\mathcal{H}^i_{i,j}$, $\mathcal{I}^i_{i,j}$ and $\mathcal{I}^{i+1}_{i,j}$ are Gieseker semistable sheaves,
\[ \sum_{k=1}^u \gamma_k \rk \mathcal{H}^{i,k}_{i,j} \geq 0 \quad \mathrm{and} \quad \sum_{k=1}^u \gamma_k \left( \rk \mathcal{I}^{l,k}_{i,j} - \frac{\rk I_{i,j}}{I_{i,j}(n)} \dim W^{l,k}_{i,j} \right) \geq 0 \quad \mathrm{for } \: l = i, i+1. \]
Therefore
\begin{equation*} \begin{split} \mu^{\mathcal{L}}(z,\lambda) & \geq \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \frac{\rk I_{i,j}}{I_{i,j}(n)} \sum_{k=1}^u \gamma_k \left[\left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_i}{\epsilon} \right)\dim W^{i,k}_{i,j} + \left( \frac{ P_{\underline{1}}(n)}{r_{\underline{1}} \delta(n)} +\frac{{\eta}'_{i+1}}{\epsilon} \right) \dim W^{i+1,k}_{i,j} \right] \\ & = \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \frac{\rk I_{i,j}}{I_{i,j}(n)} \sum_{k=1}^u \gamma_k \left(\frac{{\eta}'_i}{\epsilon} \dim W^{i,k}_{i,j} + \frac{{\eta}'_{i+1}}{\epsilon} \dim W^{i+1,k}_{i,j} \right) \end{split} \end{equation*}
where the equality comes from the fact that $\lambda^I_{i,j}$ is a 1-PS of $ \mathrm{SL}(W^i_{i,j} \oplus W^{i+1}_{i,j})$ and so the weights satisfy
$ \sum_{k=1}^u \gamma_k(\dim W^{i,k}_{i,j} + \dim {W}^{i+1,k}_{i,j}) =0.$
As $\lambda$ induces a filtration by subcomplexes, \[\dim (W^{i,1}_{i,j} \oplus \dots \oplus W^{i,k}_{i,j}) \leq \dim (W^{i+1,1}_{i,j} \oplus \dots \oplus W^{i+1,k}_{i,j})\]
and it follows that $- \sum_{k=1}^u \gamma_k\dim W^{i,k}_{i,j} =\sum_{k=1}^u \gamma_k\dim W^{i+1,k}_{i,j} \geq 0$. Therefore
\[ \mu^{\mathcal{L}}(z,\lambda) \geq \sum_{i=m_1}^{m_2-1} \sum_{j=1}^{t_i} \frac{\rk I_{i,j}}{I_{i,j}(n)} \frac{({\eta}'_{i+1} - {\eta}'_{i}) }{\epsilon}\sum_{k=1}^u \gamma_k \dim W^{i+1,k}_{i,j} \geq 0 .\]
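For completeness, we spell out the positivity of $\sum_{k=1}^u \gamma_k \dim W^{i+1,k}_{i,j}$ asserted above; this is a standard Abel summation argument, written under the assumption (implicit in the construction above) that $\dim W^{i}_{i,j} = \dim W^{i+1}_{i,j}$. Setting $a_k := \dim W^{i+1,k}_{i,j} - \dim W^{i,k}_{i,j}$, the displayed inequality between the filtrations says that $\sum_{l=1}^{k} a_l \geq 0$ for all $k$, while $\sum_{l=1}^{u} a_l = 0$, so
\[ 2\sum_{k=1}^u \gamma_k \dim W^{i+1,k}_{i,j} = \sum_{k=1}^u \gamma_k a_k = \sum_{k=1}^{u-1} \left(\gamma_k - \gamma_{k+1}\right) \sum_{l=1}^{k} a_l \geq 0 \:, \]
where the first equality uses $\sum_{k=1}^u \gamma_k (\dim W^{i,k}_{i,j} + \dim W^{i+1,k}_{i,j}) = 0$ and the final inequality uses that the weights $\gamma_k$ are strictly decreasing.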
\end{proof}
Let $\mathfrak{T}^{ss}_{{H}_{i,j}} $ (resp. $\mathfrak{T}^{ss}_{{I}_{i,j}} $) be the subscheme of $\mathfrak{T}_{{H}_{i,j}} $ (resp. $\mathfrak{T}^{tf}_{{I}_{i,j}} $) which parametrises $(\underline{1}, \delta\ue /\epsilon)$-semistable complexes with Hilbert polynomials ${H}_{i,j}$ (resp. ${I}_{i,j}$).
\begin{prop}\label{descr of Ttauss}
For $n$ sufficiently large and by replacing $(\delta, \ue)$ by $(K\delta , \ue/K)$ for a sufficiently large integer $K$ we have an isomorphism
\[\mathfrak{T}_{(\tau)}^{ss} \cong \mathfrak{T}^{ss}_{{H}_{m_1,1}} \times \cdots \times \mathfrak{T}^{ss}_{{H}_{m_1,s_{m_1}}} \times \mathfrak{T}^{ss}_{{I}_{m_1,1}} \times \cdots \times \mathfrak{T}^{ss}_{{I}_{m_1,t_{m_1}}} \times \mathfrak{T}^{ss}_{{H}_{m_1 + 1,1}}\times \cdots \times \mathfrak{T}^{ss}_{{H}_{m_2,s_{m_2}}}.\]
\end{prop}
\begin{proof}
We suppose $n$ is chosen as in Lemma \ref{lemma 1} and let $z$ be a point in $\mathfrak{T}_{(\tau)}$. By Proposition \ref{descr of Fss}
\[ \mathfrak{T}_{(\tau)}^{ss} := \mathfrak{T}_{(\tau)}^{\mathrm{Stab} \: \beta -ss}(\mathcal{L}^{\chi_{-\beta}}) = \mathfrak{T}_{(\tau)}^{G'-ss}(\mathcal{L}|_{\mathfrak{T}_{(\tau)}}) \]
and so $z$ is in $\mathfrak{T}_{(\tau)}^{ss}$ if and only if $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$ for every 1-PS $\lambda$ of $G'$.
If a direct summand $\mathcal{H}^\cdot_{i,j}$ or $\mathcal{I}^\cdot_{i,j}$ of $z$ is $(\underline{1}, \delta \ue /\epsilon)$-unstable, then $z \notin \mathfrak{T}_{(\tau)}^{ss}$ by Lemma \ref{lemma 1}. By Lemma \ref{lemma 2} we have seen that if each of the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ of $z$ is $(\underline{1}, \delta \ue /\epsilon)$-semistable and $\lambda$ induces a filtration by subcomplexes, then $\mu^{\mathcal{L}}(z,{\lambda}) \geq 0$. It follows from \cite{schmitt05} Theorem 1.7.1 (see also Remark \ref{only need to worry about subcxs}) that by rescaling $(\delta, \ue)$ to $(K\delta, \ue/K)$ for $K$ a large integer, we can verify GIT-semistability by only checking for 1-PSs which induce filtrations by subcomplexes. It follows that if each of the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ of $z$ is $(\underline{1}, \delta \ue /\epsilon)$-semistable, then $z \in \mathfrak{T}_{(\tau)}^{ss}$. Therefore, $z \in \mathfrak{T}^{ss}_{(\tau)}$ if and only if all the direct summands $\mathcal{H}^\cdot_{i,j}$ and $\mathcal{I}^\cdot_{i,j}$ of $z$ are $(\underline{1}, \delta \ue /\epsilon)$-semistable. In particular, the above isomorphism comes from restricting the isomorphism given in Remark \ref{descr of F and Ttau for cx} to $\mathfrak{T}^{ss}_{(\tau)}$.
\end{proof}
Recall that there is a retraction $p_\beta : Y_\beta \rightarrow Z_\beta$ where
\[ p_\beta(y) = \lim_{t \to 0} \lambda_{\beta}(t) \cdot y. \]
\begin{lemma}\label{descr of pbeta inv of nice sch}
Let $F^{ss}$ denote the union of connected components of $Z_\beta^{ss}$ meeting $\mathfrak{T}_{(\tau)}^{ss}$; then for $n$ sufficiently large
\[ p_\beta^{-1}(F^{ss}) \cap \mathfrak{T}^{tf} = p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}). \]
\end{lemma}
\begin{proof}
Let $n$ be chosen as in Proposition \ref{descr of Ttauss}. Let $ y \in p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$ so that \[p_\beta (y)=\lim_{t \to 0} \lambda_{\beta}(t) \cdot y \in \mathfrak{T}^{ss}_{(\tau)} \subset F^{ss}.\] If $y \notin \mathfrak{T}^{tf}$, then for all $t \neq 0$ we have $\lambda_{\beta}(t) \cdot y \notin \mathfrak{T}^{tf}$ which would contradict the openness of $\mathfrak{T}^{tf} \cap F^{ss}$ in $F^{ss}$.
Conversely suppose $y = (q^{m_1}, \dots , q^{m_2}, [\varphi :1]) \in \mathfrak{T}^{tf}$ and $z=p_\beta(y) \in F^{ss}$ where $q^i : V^i \otimes \mathcal{O}_X(-n) \rightarrow \mathcal{E}^i$ and $\varphi$ is given by $d^{i}: \mathcal{E}^i \rightarrow \mathcal{E}^{i+1}$. The scheme $F^{ss}$ is contained in the diagonal components of $Z_\beta^{ss}$; therefore the 1-PS $\lambda_\beta$ induces a filtration of $y$ by subcomplexes and the associated graded point is
\[z=(\oplus_{i,j} z_{i,j}) \oplus (\oplus_{i,j} y_{i,j} )\] where $ z_{i,j}=(q^i_{i,j},[0:1])$ and $y_{i,j} = (p^i_{i,j}, p^{i+1}_{i,j}, [d^{i}_j : 1])$ both represent complexes (cf. Lemma \ref{lemma on fixed pts}) and so $z = p_\beta (y) \in \mathfrak{T}_{(\tau)}$. By Proposition \ref{descr of Fss}, the limit $z$ is in the GIT semistable set for the action of $G'$ on $Z_\beta$ with respect to $\mathcal{L}$. We can apply the arguments used in the proof of Lemma \ref{lemma 1} to show that $z_{i,j} \in \mathfrak{T}^{ss}_{{H}_{i,j}} $ and $y_{i,j} \in \mathfrak{T}^{ss}_{{I}_{i,j}}$.
\end{proof}
Recall that we have fixed Schmitt stability parameters $(\underline{1}, \delta \underline{\eta} / \epsilon)$ for complexes over $X$ where $\epsilon > 0$ is a rational number, $\eta_i$ are strictly increasing rational numbers indexed by the integers and $\delta$ is a positive rational polynomial such that $\deg \delta = \max (\dim X -1,0)$. We may also assume that $\delta$ is sufficiently large (so that the scaling of Proposition \ref{descr of Ttauss} above has been done). We have assumed $\epsilon$ is very small and that $\tau$ is the Harder--Narasimhan type with respect to $(\underline{1}, \delta \underline{\eta} / \epsilon)$ of a complex $\cxF$ with torsion free cohomology sheaves and Hilbert polynomials $P=(P^{m_1}, \dots, P^{m_2})$. Let $\beta = \beta(\tau,n)$ be the rational weight given in Definition \ref{defn of beta cxs}. Provided $n$ is sufficiently large, all complexes with Harder--Narasimhan type $\tau$ may be represented by points in the scheme $\mathfrak{T}^{tf}=\mathfrak{T}^{tf}(n)$. We defined $R_\tau$ to be the set of points in $\mathfrak{T}^{tf}$ which parametrise complexes with this Harder--Narasimhan type $\tau$ and the following theorem provides $R_\tau$ with a scheme structure. There is an action of \[G = \Pi_{i=m_1}^{m_2} \mathrm{GL}(P^i(n)) \cap \mathrm{SL}(\Sigma_{i=m_1}^{m_2} P^i(n))\]
on this parameter scheme and the stability parameters determine a linearisation $\mathcal{L}$ of this action. Associated to this action there is a stratification $\{ S_\beta : \beta \in \mathcal{B} \}$ of the projective completion $\overline{\mathfrak{T}}^{tf}$ indexed by a finite set $\mathcal{B}$ of rational weights.
\begin{thm}\label{HN strat is strat}
For $n$ sufficiently large we have:
\begin{enumerate}
\renewcommand{\labelenumi}{\roman{enumi})}
\item $\beta = \beta(\tau,n)$ belongs to the index set $\mathcal{B}$ for the stratification $\{ S_\beta : \beta \in \mathcal{B} \}$ of $\overline{\mathfrak{T}}^{tf}$,
\item $R_\tau = G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$, and
\item the subscheme $R_\tau = G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$ of the parameter scheme $\mathfrak{T}^{tf}$ parametrising complexes with Harder--Narasimhan type $\tau$ is a union of connected components of $S_\beta \cap \mathfrak{T}^{tf}$.
\end{enumerate}
\end{thm}
\begin{proof}
Suppose $n$ is sufficiently large as in Proposition \ref{descr of Ttauss}. We defined $\beta$ by fixing a point $z = (q^{m_1}, \dots, q^{m_2} , [\varphi:1]) \in R_\tau$ corresponding to the complex $\mathcal{F}^\cdot$ with Harder--Narasimhan type $\tau$. We claim that $\overline{z} :=p_\beta(z) \in Z_\beta^{ss}$ which implies i). The 1-PS $\lambda_{\beta}$ induces the Harder--Narasimhan filtration of $\cxF$ and
$\overline{z}= \lim_{t \to 0} \lambda_\beta(t) \cdot z $ is the graded object associated to this filtration. By Proposition \ref{descr of Ttauss} it suffices to show that each summand in the associated graded object is $(\underline{1}, \delta \underline{\eta} / \epsilon)$-semistable, but this follows by definition of the Harder--Narasimhan filtration.
In fact the above argument shows that $p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}) \subset R_\tau$ and since $R_\tau$ is $G$-invariant we have $G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}) \subset R_\tau$. To show ii) suppose $y = (q^{m_1}, \dots, q^{m_2} ,[\varphi : 1]) \in R_\tau$ corresponds to a complex $\mathcal{E}^\cdot$ with Harder--Narasimhan filtration
\[ 0 \subsetneq \mathcal{H}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{H}^\cdot_{m_1,(s_{m_1})} \subsetneq \mathcal{I}^\cdot_{m_1,(1)} \subsetneq \cdots \subsetneq \mathcal{I}^\cdot_{m_1,(t_{m_1})} \subsetneq \cdots \subsetneq \mathcal{H}^\cdot_{m_2,(s_{m_2})}=\mathcal{E}^\cdot \]
of type $\tau$. Then this filtration induces a filtration of each vector space $V^i$ and we can choose a change of basis matrix $g$ which takes this filtration to the filtration of $V^i$ given in (\ref{filtr of Vi}) used to define $\beta$. Then $g \cdot y \in p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss}) $ which completes the proof of ii).
Since $F^{ss}$ is a union of connected components of $Z_\beta^{ss}$, the scheme $Gp_\beta^{-1}(F^{ss}) $ is a union of connected components of $S_\beta$. Therefore, $Gp_\beta^{-1}(F^{ss}) \cap \mathfrak{T}^{tf} $ is a union of connected components of $S_\beta \cap \mathfrak{T}^{tf}$. By ii) and Lemma \ref{descr of pbeta inv of nice sch}
\[R_\tau = G p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})= Gp_\beta^{-1}(F^{ss}) \cap \mathfrak{T}^{tf} \]
which proves iii).
\end{proof}
\section{Quotients of the Harder--Narasimhan strata}\label{sec on quot}
In the previous section we saw that, for $\epsilon$ very small and a fixed Harder--Narasimhan type $\tau$ with respect to $(\underline{1}, \delta \ue/\epsilon)$, there is a parameter space $R_\tau$ for complexes of this Harder--Narasimhan type, and $R_\tau$ is a union of connected components of a stratum $S_{\beta(\tau)} \cap \mathfrak{T}^{tf}(n)$ when $n$ is sufficiently large. The action of $G$ on $\mathfrak{T}$ restricts to an action on $R_\tau$ such that the orbits correspond to isomorphism classes of complexes of Harder--Narasimhan type $\tau$. In this section we consider the problem of constructing a quotient of the $G$-action on this Harder--Narasimhan stratum $R_\tau$. If a suitable quotient did exist, then it would provide a moduli space for complexes of this Harder--Narasimhan type. In particular, it would have the desirable property that for two complexes to represent the same point it is necessary that their cohomology sheaves have the same Harder--Narasimhan type.
By \cite{hoskinskirwan} Proposition 3.6, any stratum in a stratification associated to a linearised $G$-action on a projective scheme $B$ has a categorical quotient. We can apply this to our situation and produce a categorical quotient of the $G$-action on $R_\tau$.
\begin{prop}
The categorical quotient of the $G$-action on $R_\tau$ is isomorphic to the product
\[ \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i}M^{(\underline{1}, \delta \ue/\epsilon)-ss}(X,{H}_{i,j}) \times \prod_{i=m_1}^{m_2 -1} \prod_{j=1}^{t_i}M^{(\underline{1}, \delta \ue/\epsilon)-ss}(X,{I}_{i,j}) \]
where $M^{(\underline{1}, \delta \ue/\epsilon)-ss}(X,{P})$ denotes the moduli space of $(\underline{1}, \delta \ue/\epsilon)$-semistable complexes with invariants $P$. Moreover:
\begin{enumerate}
\item A complex with invariants $H_{i,j}$ is just a shift of a sheaf and it is $(\underline{1}, \delta \ue/\epsilon)$-semistable if and only if the corresponding sheaf is Gieseker semistable.
\item A complex with invariants $I_{i,j}$ is concentrated in degrees $[i,i+1]$ and it is $(\underline{1}, \delta \ue/\epsilon)$-semistable if and only if it is isomorphic to a shift of the cone on the identity morphism of a Gieseker semistable sheaf.
\end{enumerate}
\end{prop}
\begin{proof}
It follows from \cite{hoskinskirwan} Proposition 3.6, that the categorical quotient is equal to the GIT quotient of $\mathrm{Stab} \: \beta$ acting on $\mathfrak{T}_{(\tau)}$ with respect to the twisted linearisation $\cL^{\chi_{-\beta}}$. It follows from Proposition \ref{descr of Fss} that this is the same as the GIT quotient of \[G'= \prod_{i=m_1}^{m_2} \prod_{j=1}^{s_i} \SL(V^i_{i,j}) \times \prod_{i=m_1}^{m_2-1} \prod_{j=1}^{t_i} (\mathrm{GL}(W^i_{i,j}) \times \mathrm{GL}(W^{i+1}_{i,j}) )\cap \SL(W^i_{i,j} \oplus W^{i+1}_{i,j})\] acting on $\mathfrak{T}_{(\tau)}$ with respect to $\cL$. By Theorem \ref{schmitt theorem}, this is the product of moduli spaces of $(\underline{1}, \delta \ue/\epsilon)$-semistable complexes with invariants given by $\tau$. The final statement follows from Lemma \ref{lemma X}, Remark \ref{rmk on sigma0} and the assumption on $\epsilon$ (cf. Assumption \ref{ass on epsilon}).
\end{proof}
In general this categorical quotient has lower dimension than expected and so is not a suitable quotient of the $G$-action on $R_\tau$. Instead, we suggest the quotient should be taken with respect to a perturbation of the linearisation used to provide the categorical quotient. However, as discussed in \cite{hoskinskirwan}, finding a way to perturb this linearisation and get an ample linearisation is not always possible. As $R_\tau = GY_{(\tau)}^{ss} \cong G \times^{P_\beta} Y_{(\tau)}^{ss}$ where $Y_{(\tau)}^{ss} :=p_\beta^{-1}(\mathfrak{T}_{(\tau)}^{ss})$, a categorical quotient of $G$ acting on $R_\tau$ is equivalent to a categorical quotient of $P_\beta$ acting on $Y_{(\tau)}^{ss}$. If we instead consider $P_\beta$ acting on $Y_{(\tau)}^{ss}$, then there are perturbed linearisations which are ample, although $P_\beta$ is not reductive. A possible future direction is to follow the ideas of \cite{hoskinskirwan} and take a quotient of the reductive part $\mathrm{Stab} \: \beta$ of $P_\beta$ acting on $Y_{(\tau)}^{ss}$ with respect to an ample perturbed linearisation, and so obtain a moduli space for complexes of Harder--Narasimhan type $\tau$ together with some additional data.
\bibliographystyle{amsplain}
\section{Introduction}
High-dimensional and multi-way data processing have received considerable attention in recent years given the ever-increasing amount of data with diverse modalities generated from different kinds of sensors, networks and systems. Since tensors are algebraic objects that can be represented as multi-dimensional arrays (generalizing scalars, vectors and matrices), they have marked ability to characterize multi-way (high order) data and capture intrinsic correlations across its different dimensions. This fact justifies their wide use and efficacy in numerous applications of computer vision \cite{dian2017hyperspectral,zhang2019computational}, pattern recognition \cite{zhao2019multi,he2006tensor} and signal processing \cite{sidiropoulos2017tensor,cichocki2015tensor}.
\subsection{Related work}
Similar to matrices, the data represented by tensors may contain redundant information, which is referred to as the low rank property of tensors. To exploit the underlying low-rank structure of high order tensors, several low rank tensor models have been proposed based on different tensor decompositions, including CANDECOMP/PARAFAC (CP) decomposition \cite{kiers2000towards}, Tucker decomposition \cite{tucker1966some}, tensor ring decomposition \cite{zhao2016tensor} and tensor singular value decomposition (t-SVD) \cite{kilmer2011factorization}.
Tensor completion, a generalization of the popular matrix completion problem \cite{candes2009exact,candes2010matrix}, is the task of filling in the missing entries of a partially observed tensor, typically by exploiting the low-rank property of the tensor. There exist several tensor completion algorithms tailored to different low-rank tensor models, such as the CANDECOMP/PARAFAC decomposition-based alternating minimization algorithm \cite{kiers2000towards,jain2014provable}, Tucker decomposition-based tensor completion using the Riemannian manifold approach \cite{tucker1966some,kasai2016low} and alternating minimization \cite{xu2015parallel}, the t-SVD-based completion algorithm using alternating minimization \cite{zhou2017tensor,liu2019low}, completion using Grassmannian optimization \cite{gilman2020grassmannian} and algorithms that use convex relaxation \cite{zhang2016exact}.
In the real world, the data observed could be perturbed by different kinds of noise originating from human errors and/or signal interference. Existing algorithms largely utilize the second-order statistics as their error measure, which works well in certain noisy settings, such as with noise from a Gaussian distribution. However, when the data is contaminated with large outliers,
traditional algorithms do not perform satisfactorily in general. This motivated the development of robust algorithms for low-rank tensor recovery that are not unduly affected by the outliers \cite{goldfarb2014robust,han2018generalized,inoue2009robust}. While many such algorithms presume that all the entries of the tensor data are observed,
several algorithms were designed to deal with situations wherein some of the entries may be missing or grossly corrupted, which is the main focus of this work.
In \cite{huang2014provable}, a robust completion method that uses the Sum of Nuclear Norms (SNN) is proposed. SNN is the (weighted) sum of the nuclear norms of the tensor unfoldings, which is a convex relaxation of the Tucker rank. A robust low-Tucker-rank tensor completion algorithm that uses block coordinate descent is developed in \cite{yang2015robust}. The algorithm utilizes the tensor mixture model \cite{tomioka2010estimation} and introduces two M-estimators that use the Welsch loss and the Cauchy loss as error measures. Further, by modeling the corrupted tensor as the sum of a low-rank component and a sparse component, several robust tensor completion algorithms based on the tubal rank and the tensor ring rank were proposed \cite{jiang2019robust,wang2019robust,huang2020robust}. Table \ref{tb:algorithm} lists existing robust tensor completion algorithms along with the tensor rank model they adopt and the corresponding objective functions.
\begin{table}[tb]
\renewcommand\arraystretch{1.5}
\centering
\caption{Objective functions of robust tensor completion algorithms}
\label{tb:algorithm}
\resizebox{0.5\textwidth}{!}
{%
\begin{tabular}{ccc}
\hline
Algorithm & Rank model & Objective function
\\ \hline
\begin{tabular}[c]{@{}c@{}}SNN-L1\\\cite{goldfarb2014robust}\end{tabular} & Tucker & \begin{tabular}[c]{@{}c@{}}$\min\limits_{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{S}}}\sum\limits_{i=1}^{N}\left\|\boldsymbol{X}_{i,(i)}\right\|_{*}+\lambda\|\boldsymbol{\mathcal{S}}\|_{1}$\\ $\text { s.t. } \boldsymbol{\mathcal{P}}\circ\left(\sum\limits_{i=1}^{N} \boldsymbol{\mathcal{X}}_{i}+\boldsymbol{\mathcal{S}}\right)=\boldsymbol{\mathcal{P}}\circ{\boldsymbol{\mathcal{M}}}$\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}SNN-WST\\\cite{yang2015robust}\end{tabular} & Tucker & \begin{tabular}[c]{@{}c@{}}$\min\limits_{\boldsymbol{\mathcal{X}}} \!\!\sum\limits_{i,j,k}\!\!\mathcal{P}_{ijk}\sigma^2(1\!-\!G_{\sigma}({\mathcal{M}}_{ijk}\!-\!(\!\!\sum\limits_{m=1}^{N} \!\!\boldsymbol{\mathcal{X}}_{m}\!)_{ijk})\!)$ \\ $+\lambda \sum\limits_{j=1}^{N}\left\|\boldsymbol{X}_{j,(j)}\right\|_{*}$\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}SNN-WHT\\\cite{yang2015robust}\end{tabular} & Tucker & \begin{tabular}[c]{@{}c@{}}$\min\limits_{\boldsymbol{\mathcal{X}}} \!\!\sum\limits_{i,j,k}\!\!\mathcal{P}_{ijk}\sigma^2(1\!-\!G_{\sigma}({\mathcal{M}}_{ijk}\!-\!(\!\!\sum\limits_{m=1}^{N} \!\!{\boldsymbol{\mathcal{X}}_{m}\!)_{ijk})}\!)$ \\ $+ \sum\limits_{j=1}^{N}\delta_{M_{r_j}}\!\!\left(\boldsymbol{X}_{j,(j)}\right)$\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}TRNN-L1\\\cite{huang2020robust}\end{tabular} & Tensor ring & \begin{tabular}[c]{@{}c@{}}$\min\limits_{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{S}}} \sum\limits_{d=1}^{N} w_{d}\left\|\boldsymbol{X}_{\{d, L\}}\right\|_{*}+\lambda_{d}\|\boldsymbol{\mathcal{S}}\|_{1}$\\ $\text { s.t. } \boldsymbol{\mathcal{P}}\circ\left(\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{S}}\right)=\boldsymbol{\mathcal{P}}\circ{\boldsymbol{\mathcal{M}}}$\end{tabular}
\\ \hline
\begin{tabular}[c]{@{}c@{}}TNN-L1\\\cite{jiang2019robust}\end{tabular} & Tubal & \begin{tabular}[c]{@{}c@{}}$\min\limits_{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{S}}}\frac{1}{n_{3}} \sum\limits_{i=1}^{n_{3}}\left\|\bar{\boldsymbol{X}}^{(i)}\right\|_{*}+\lambda\|\boldsymbol{\mathcal{S}}\|_{1}$\\ $\text { s.t. } \boldsymbol{\mathcal{P}}\circ\left(\boldsymbol{\mathcal{X}}+\boldsymbol{\mathcal{S}}\right)=\boldsymbol{\mathcal{P}}\circ{\boldsymbol{\mathcal{M}}}$\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}HQ-TCTF\\ /HQ-TCASD\\
(This paper)\end{tabular} & Tubal & $\min\limits_{{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}}} \!\sum\limits_{i,j,k}\! \!{{\cal{P}}_{ijk}\sigma^2\!\left({\!1\!-\!G_{\sigma}\!{{\left( {{{\cal{M}}_{ijk}} \!-\! {\left( {{\boldsymbol{\cal{X}}\!*\!\boldsymbol{{\cal{Y}}} }}\right)_{ijk}}}\! \right)}}}\! \right)}$ \\ \hline
\end{tabular}%
}
\end{table}
\subsection{Contributions}
In this work, we propose a novel robust tensor completion method using tensor factorization and the maximum correntropy criterion \cite{liu2007correntropy,chen2016generalized}. Tensor factorization \cite{kilmer2013third} has been recently introduced in the context of tensor completion.
The algorithms based on tensor factorization were shown to be efficient and to yield accurate performance \cite{zhou2017tensor,liu2019low}. Correntropy, also known as Welsch's function, is an information-theoretic
non-linear similarity measure which can provably handle the negative effect of large outliers \cite{he2010maximum,chen2017maximum,he2019robust}. By introducing correntropy as our error measure, we propose a novel correntropy-based objective function for robust low-tubal-rank tensor completion. To efficiently solve the completion problem, we first leverage a half-quadratic optimization technique \cite{nikolova2005analysis} to transform the non-convex problem into a weighted tensor factorization problem. Then, two efficient and simple algorithms based on alternating minimization and alternating steepest descent are developed.
Also, we analytically establish the convergence of both algorithms. We also propose an adaptive kernel width selection strategy to further improve the convergence rate and accuracy. The main contributions of the work are summarized below.
\textbf{1.} We propose a novel objective function for robust low-tubal-rank tensor completion, which uses tensor factorization to capture the low-rank structure and correntropy as the error measure to give robustness against outliers.
As shown in Table \ref{tb:algorithm}, the tubal rank we adopt is different from the rank model used by all existing \emph{robust} tensor completion algorithms, with the exception of TNN-L1 \cite{jiang2019robust} which uses an altogether different objective function. Further, unlike all existing \emph{robust} tensor completion algorithms which need to perform multiple SVD computations,
our approach imposes the low-rank structure through factorization, which can greatly reduce the computational cost.
\textbf{2.} We reformulate the complex correntropy-based optimization problem as a weighted tensor factorization by leveraging the half-quadratic minimization technique (Section \ref{sec:HC}). We develop two efficient algorithms (HQ-TCTF and HQ-TCASD) for robust tensor completion (See Section \ref{sec:HQ-TCTF} and \ref{sec:HQ-TCASD}). The algorithms utilize alternating minimization and alternating steepest descent, which avoid the costly computation of the SVD operations and are amenable to distributed implementation. We also analyze the convergence and computational complexity of the algorithms proposed.
\textbf{3.} We demonstrate the robust and efficient performance of the proposed algorithms through extensive numerical experiments performed with both synthetic and real data.
The paper is organized as follows. In Section \ref{sec:bkgnd} we introduce our notation and provide some preliminary background on tensor properties, tensor completion, and the maximum correntropy criterion. In Section \ref{sec:methods}, we propose the new correntropy-based tensor completion cost and develop two HQ-based algorithms. In Section \ref{sec:results}, we present experimental results to demonstrate the reconstruction performance. Finally, conclusions are given in Section \ref{sec:conc}.
\section{Preliminaries}
\label{sec:bkgnd}
\subsection{Definitions and notation}
\label{sec:prelim}
In this section, we review some important definitions and introduce notation used throughout the paper. Boldface uppercase script letters are used to denote tensors (e.g., $\boldsymbol{\cal{X}}$), and boldface letters to denote matrices (e.g., ${\boldsymbol{X}}$). Unless stated otherwise, we focus on third order tensors, i.e., ${\boldsymbol{\cal{X}}}\in{\mathbb{C}}^{n_1\times n_2\times n_3}$ where $n_1,n_2,n_3$ are the dimensions of each way of the tensor. The notation ${\boldsymbol{\cal{X}}}(i,:,:),{\boldsymbol{\cal{X}}}(:,i,:),{\boldsymbol{\cal{X}}}(:,:,i)$ denotes the horizontal, lateral, and frontal slices of $\boldsymbol{\cal{X}}$, respectively, and ${\boldsymbol{\cal{X}}}(i,j,:),{\boldsymbol{\cal{X}}}(:,j,k),{\boldsymbol{\cal{X}}}(i,:,k)$ denote the mode-3, mode-1, and mode-2 tubes, while ${\cal{X}}_{ijk}$ denotes the $(i,j,k)$-th entry of tensor $\boldsymbol{\cal{{X}}}$. The Frobenius norm of a tensor is defined as $\|{\boldsymbol{\cal{X}}}\|_F=\sqrt{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\sum_{k=1}^{n_3}|{{\cal{X}}_{ijk}}|^2}$.
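The slice and tube notation corresponds directly to \texttt{numpy} indexing; the following is a minimal sketch (the array values are arbitrary):
\begin{verbatim}
import numpy as np

X = np.arange(24).reshape(2, 3, 4)  # an n1 x n2 x n3 = 2 x 3 x 4 tensor

X[0, :, :]    # horizontal slice  X(1,:,:)
X[:, 0, :]    # lateral slice     X(:,1,:)
X[:, :, 0]    # frontal slice     X(:,:,1)
X[0, 1, :]    # mode-3 tube       X(1,2,:)

fro = np.sqrt((np.abs(X) ** 2).sum())        # Frobenius norm
assert np.isclose(fro, np.linalg.norm(X.ravel()))
\end{verbatim}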
In the frequency domain, $\bar{\boldsymbol{\cal{X}}}$ denotes the Fourier transform along the third mode of $\boldsymbol{\cal{X}}$. We use the convention $\bar{\boldsymbol{\cal{X}}}=\operatorname{fft}({\boldsymbol{\cal{X}}},[\:],3)$ to denote the Fourier transform along the third dimension. Similarly, we use ${\boldsymbol{\cal{X}}}=\operatorname{ifft}({\bar{\boldsymbol{\cal{X}}}},[\:],3)$ for the inverse transform. We also define the matrix ${\bar{\boldsymbol{X}}}\in{\mathbb{C}}^{n_1n_3\times n_2n_3}$
\[
\bar{\boldsymbol{X}}=\operatorname{bdiag}(\bar{\boldsymbol{\cal{X}}})=\left[\begin{array}{cccc}
\bar{\boldsymbol{X}}^{(1)} & & \\
& \bar{\boldsymbol{X}}^{(2)} & \\
& & \ddots & \\
& & & \bar{\boldsymbol{X}}^{\left(n_3\right)}
\end{array}\right]
\]
where $\bar{\boldsymbol{X}}^{(i)}:=\bar{\boldsymbol{\cal{X}}}(:,:,i)$ (and, in general, ${\boldsymbol{X}}^{(i)}:={\boldsymbol{\cal{X}}}(:,:,i)$ for any tensor), and $\operatorname{bdiag}(\cdot)$ denotes the operator that maps the tensor $\bar{\boldsymbol{\cal{X}}}$ to the block diagonal matrix $\bar{\boldsymbol{X}}$. The block circulant operator $\operatorname{bcirc}(\cdot)$ is defined as
\[
\operatorname{bcirc}({\boldsymbol{\cal{X}}})=\left[\begin{array}{cccc}
{\boldsymbol{X}}^{(1)} & {\boldsymbol{X}}^{\left(n_3\right)} & \cdots & {\boldsymbol{X}}^{(2)} \\
{\boldsymbol{X}}^{(2)} & {\boldsymbol{X}}^{(1)} & \cdots & {\boldsymbol{X}}^{(3)} \\
\vdots & \vdots & \ddots & \vdots \\
{\boldsymbol{X}}^{\left(n_3\right)} & {\boldsymbol{X}}^{\left(n_3-1\right)} & \cdots & {\boldsymbol{X}}^{(1)}
\end{array}\right]\:.
\]
Therefore, the following relation holds,
\begin{equation}
\left(\boldsymbol{F}_{n_{3}} \otimes \boldsymbol{I}_{n_{1}}\right) \operatorname{bcirc}(\boldsymbol{\mathcal { X }}) \left(\boldsymbol{F}_{n_{3}}^{-1} \otimes \boldsymbol{I}_{n_{2}}\right)=\bar{\boldsymbol{X}}\:,
\label{eq:prelim_1}
\end{equation}
where $\boldsymbol{F}_{n_{3}}\in\mathbb{C}^{n_3\times n_3}$ is the Discrete Fourier Transform (DFT) matrix, $\otimes$ is the Kronecker product and $\boldsymbol{I}_{n_1}\in\mathbb{R}^{n_1\times n_1}$ is the identity matrix. Further, $\boldsymbol{F}_{n_{3}}^{-1}$ can be computed as $\boldsymbol{F}_{n_3}^{-1}=\boldsymbol{F}_{n_3}^{*}/n_3$, where $\boldsymbol{X}^*$ denotes the Hermitian transpose of $\boldsymbol{X}$.
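Relation \eqref{eq:prelim_1} can be checked numerically. The following is a small \texttt{numpy} sketch; the helper names \texttt{bcirc} and \texttt{bdiag\_fft} are ours and simply implement the two operators defined above:
\begin{verbatim}
import numpy as np

def bcirc(X):
    # Block circulant matrix: block (r, c) is the frontal
    # slice X^{((r - c) mod n3) + 1}.
    n1, n2, n3 = X.shape
    C = np.zeros((n1 * n3, n2 * n3), dtype=complex)
    for r in range(n3):
        for c in range(n3):
            C[r*n1:(r+1)*n1, c*n2:(c+1)*n2] = X[:, :, (r - c) % n3]
    return C

def bdiag_fft(X):
    # bdiag of the DFT of X taken along the third mode.
    n1, n2, n3 = X.shape
    Xbar = np.fft.fft(X, axis=2)
    D = np.zeros((n1 * n3, n2 * n3), dtype=complex)
    for k in range(n3):
        D[k*n1:(k+1)*n1, k*n2:(k+1)*n2] = Xbar[:, :, k]
    return D

rng = np.random.default_rng(0)
n1, n2, n3 = 3, 4, 5
X = rng.standard_normal((n1, n2, n3))
F = np.fft.fft(np.eye(n3))        # DFT matrix F_{n3}
Finv = np.conj(F).T / n3          # F_{n3}^{-1} = F_{n3}^* / n3
lhs = np.kron(F, np.eye(n1)) @ bcirc(X) @ np.kron(Finv, np.eye(n2))
assert np.allclose(lhs, bdiag_fft(X))
\end{verbatim}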
To define the tensor-tensor product (t-product), we first define the unfold operator $\operatorname{unfold}(\cdot)$, which maps the tensor ${\boldsymbol{\cal{X}}}$ to a matrix $\tilde{\boldsymbol{X}}\in\mathbb{C}^{n_1n_3\times n_2}$,
\[
\tilde{\boldsymbol{X}}=\operatorname{unfold}(\boldsymbol{\mathcal{X}})=\left[\begin{array}{c}
{\boldsymbol{X}}^{(1)} \\
{\boldsymbol{X}}^{(2)} \\
\vdots \\
{\boldsymbol{X}}^{\left(n_{3}\right)}
\end{array}\right]
\]
and its inverse operator $\operatorname{fold}(\cdot)$ is defined as
\[\operatorname{fold}(\tilde{\boldsymbol{X}})=\boldsymbol{\mathcal{X}}\:.
\]
We can readily state the definition of the t-product.
\begin{mydef}
[t-product \cite{kilmer2011factorization}] The t-product $\boldsymbol{\mathcal{A}} * \boldsymbol{\mathcal{B}}$ of $\boldsymbol{\mathcal{A}} \in$ $\mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ and $\boldsymbol{\mathcal{B}} \in \mathbb{R}^{n_{2} \times n_{4} \times n_{3}}$ is the tensor of size $n_1\times n_4 \times n_3$ given by
\[
\boldsymbol{\mathcal{A}} * \boldsymbol{\mathcal{B}}=\operatorname{fold}(\operatorname{bcirc}(\boldsymbol{\mathcal{A}}) \cdot \operatorname{unfold}(\boldsymbol{\mathcal{B})})\]
\label{def:tprod}
\end{mydef}
Further, we will need the following lemma from \cite{kilmer2011factorization}.
\begin{lemma}
\cite{kilmer2011factorization} Suppose that $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}, \boldsymbol{\mathcal{B}} \in$
$\mathbb{R}^{n_{2} \times n_{4} \times n_{3}}$ are two arbitrary tensors. Let $\boldsymbol{\mathcal{F}}=\boldsymbol{\mathcal{A}} *\boldsymbol{\mathcal{B}} .$ Then,
the following properties hold.
(1) $\|\boldsymbol{\mathcal{A}}\|_{F}^{2}=\frac{1}{n_{3}}\|\bar{\boldsymbol{A}}\|_{F}^{2}$
(2) $\boldsymbol{\mathcal{F}}=\boldsymbol{\mathcal{A}} * \boldsymbol{\mathcal{B}}$ and $\bar{\boldsymbol{F}}=\bar{\boldsymbol{A}} \bar{\boldsymbol{B}}$ are equivalent.
\label{lem:t-product}
\end{lemma}
According to the second property in Lemma \ref{lem:t-product}, the t-product is equivalent to matrix multiplication in the frequency domain.
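A minimal \texttt{numpy} sketch of the t-product, computed both directly from Definition \ref{def:tprod} and slice-wise in the Fourier domain via Lemma \ref{lem:t-product} (reusing the \texttt{bcirc} helper from the previous sketch):
\begin{verbatim}
import numpy as np

def t_product_fft(A, B):
    # t-product via Lemma 1, property 2: multiply matching frontal
    # slices of the DFTs along the third mode, then invert.
    Abar = np.fft.fft(A, axis=2)
    Bbar = np.fft.fft(B, axis=2)
    Cbar = np.einsum('irk,rjk->ijk', Abar, Bbar)
    return np.real(np.fft.ifft(Cbar, axis=2))

def t_product_def(A, B):
    # t-product via the definition: fold(bcirc(A) @ unfold(B)).
    n1, n2, n3 = A.shape
    n4 = B.shape[1]
    unfoldB = B.transpose(2, 0, 1).reshape(n2 * n3, n4)
    C = bcirc(A) @ unfoldB          # bcirc from the sketch above
    return np.real(C.reshape(n3, n1, n4).transpose(1, 2, 0))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
assert np.allclose(t_product_fft(A, B), t_product_def(A, B))
\end{verbatim}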
Next, we state the definitions of the Tensor Singular Value Decomposition (t-SVD) and the tubal rank.
\begin{theorem}
[t-SVD
\cite{kilmer2011factorization}] The tensor $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ can be factorized as $\boldsymbol{\mathcal{A}}=\boldsymbol{\mathcal{U}} * \boldsymbol{\mathcal{S}} * \boldsymbol{\mathcal{V}}^{*}$, where $\boldsymbol{\mathcal{U}} \in \mathbb{R}^{n_{1} \times n_{1} \times n_{3}}, \boldsymbol{\mathcal{V}} \in \mathbb{R}^{n_{2} \times n_{2} \times n_{3}}$ are orthogonal, and $\boldsymbol{\mathcal{S}} \in$
$\mathbb{R}^{n_{1} \times n_{2} \times n_{3}}$ is an $f$-diagonal tensor, i.e., each of the frontal slices of $\boldsymbol{\mathcal{S}}$ is a diagonal matrix. The entries in $\boldsymbol{\mathcal{S}}$ are called the singular values of $\boldsymbol{\mathcal{A}}$.
\end{theorem}
\begin{mydef}
[Tensor tubal-rank \cite{kilmer2013third}] For any $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}},$ the tensor tubal-rank, rank$_{t}(\boldsymbol{\mathcal{A}}),$ is the number of non-zero singular tubes of $\boldsymbol{\mathcal{S}}$ from the t-SVD, i.e.,
\[
\operatorname{rank}_{t}(\boldsymbol{\mathcal{A}})=\#\{i: \boldsymbol{\mathcal{S}}(i, i,:)\neq0\}\:.
\]
\end{mydef}
We will also need the following lemma and definition.
\begin{lemma}
[Best tubal rank-r approximation \cite{kilmer2013third}] Let the t-SVD of $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be $\boldsymbol{\mathcal{A}}=\boldsymbol{\mathcal{U}} * \boldsymbol{\mathcal{S}} * \boldsymbol{\mathcal{V}}^{*}$. Given a tubal rank $r$, define $\boldsymbol{\mathcal{A}}_{r}=\sum_{s=1}^{r} \boldsymbol{\mathcal{U}}(:, s,:) * \boldsymbol{\mathcal{S}}(s, s,:) * \boldsymbol{\mathcal{V}}^{*}(:, s,:)$.
Then $\boldsymbol{\mathcal{A}}_{r}=\underset{\check{\boldsymbol{\mathcal{A}}} \in \mathbb{A}}{\arg \min }\|\boldsymbol{\mathcal{A}}-\check{\boldsymbol{{\mathcal{A}}}}\|_{F},$ where $\mathbb{A}:=\left\{\boldsymbol{\mathcal{X}} * \boldsymbol{\mathcal{Y}} \mid \boldsymbol{\mathcal{X}} \in\mathbb{R}^{n_1 \times r \times n_3}, \boldsymbol{\mathcal{Y}} \in \mathbb{R}^{r \times n_2 \times n_3}\right\}$.
\end{lemma}
\begin{mydef}
[Tensor Multi-Rank \cite{kilmer2013third}] For any tensor $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{n_{1} \times n_{2} \times n_{3}},$ its multi-rank rank$_{m}(\boldsymbol{\mathcal{A}})$ is a vector
defined as $\boldsymbol{r}=\left(\operatorname{rank}(\bar{\boldsymbol{A}}^{(1)}) ; \cdots ; \operatorname{rank}(\bar{\boldsymbol{A}}^{\left(n_{3}\right)})\right). $ Further, we have
\[
\operatorname{rank}_{t}(\boldsymbol{\mathcal{A}})=\max \left(r_{1}, \cdots, r_{n_{3}}\right)\:,
\]
where $r_i$ is the $i$-th element of $\boldsymbol{r}$.
\end{mydef}
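Both ranks can be computed by slice-wise SVDs in the Fourier domain. A sketch (reusing \texttt{t\_product\_fft} from the previous sketch; the threshold \texttt{tol} for the numerical rank is our choice):
\begin{verbatim}
import numpy as np

def tubal_ranks(A, tol=1e-10):
    # Slice-wise SVDs of the DFT of A give the t-SVD; the multi-rank
    # collects the ranks of the frontal slices of A_bar, and the
    # tubal rank is their maximum.
    n1, n2, n3 = A.shape
    Abar = np.fft.fft(A, axis=2)
    multi_rank = np.array([
        int((np.linalg.svd(Abar[:, :, k], compute_uv=False) > tol).sum())
        for k in range(n3)
    ])
    return multi_rank, multi_rank.max()

# A low-tubal-rank tensor built as a t-product of two small factors:
rng = np.random.default_rng(2)
r = 2
A = t_product_fft(rng.standard_normal((6, r, 4)),
                  rng.standard_normal((r, 5, 4)))
multi_rank, tubal_rank = tubal_ranks(A)
print(multi_rank, tubal_rank)   # each entry <= 2; tubal rank 2
\end{verbatim}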
\subsection{Low-tubal-rank tensor completion}
\label{sec:LTR_TC}
Tensor completion is the task of recovering a tensor $\boldsymbol{\cal{M}}\in\mathbb{R}^{n_1\times n_2\times n_3}$ from a subset of its entries by leveraging the low rank property of the tensor. When using the tubal rank as the definition of the rank, the low-tubal-rank property amounts to $\operatorname{rank}_{t}(\boldsymbol{\mathcal{M}})\ll \max\{n_1,n_2\}$. Specifically, by defining the observed subset of entries $\boldsymbol{\Omega}\subseteq[n_1]\times[n_2]\times[n_3]$ and its indicator tensor $\boldsymbol{\cal{P}}$,
\begin{equation}
{\cal{P}}_{ijk}=\left\{\begin{array}{cl}
1, & \text {if }(i,j,k) \in \Omega \\
0, & \text {otherwise}
\end{array}\right.
\end{equation}
the low-tubal-rank tensor completion problem can be formulated through the minimization,
\begin{equation}
\underset{\boldsymbol{\mathcal{Z}} \in \mathbb{R}^{n_1 \times n_2 \times n_3}}{\min }\operatorname{rank}_{t}(\boldsymbol{\mathcal{Z}}) \text {, s.t. } {\boldsymbol{\cal{P}}}\circ(\boldsymbol{\mathcal{Z}}- \boldsymbol{\mathcal{M}})=\boldsymbol{0}\:,
\label{eq:MCorigin}
\end{equation}
where $\circ$ denotes the Hadamard (element-wise) product of the two same-size tensors.
It is known that \eqref{eq:MCorigin} is NP-hard. To address this problem, several methods were proposed, which fall into two main categories:
1) Convex relaxation \cite{zhang2016exact}: In this approach, \eqref{eq:MCorigin} is relaxed to obtain a convex optimization problem.
Specifically,
by defining the tensor nuclear norm (TNN) $$\|\boldsymbol{\mathcal{A}}\|_{TNN}=\frac{1}{n_{3}} \sum_{i=1}^{n_{3}}\left\|\bar{\boldsymbol{A}}^{(i)}\right\|_{*}$$ where $\|\cdot\|_{*}$ denotes the matrix nuclear norm, \eqref{eq:MCorigin} can be relaxed to
\begin{equation}
\underset{\boldsymbol{\mathcal{Z}} \in \mathbb{R}^{n_1 \times n_2 \times n_3}}{\min }\sum_{i=1}^{n_{3}}\left\|\bar{\boldsymbol{Z}}^{(i)}\right\|_{*} \text {, s.t. } {\boldsymbol{\cal{P}}}\circ(\boldsymbol{\mathcal{Z}}- \boldsymbol{\mathcal{M}})=\boldsymbol{0}\:.
\end{equation}
The iterative solvers for the nuclear norm-based relaxation have to compute an SVD at each iteration, which incurs a high computational cost for large scale high-dimensional data.
2) Tensor factorization: Similar to the Powerfactorization method proposed for matrix completion \cite{haldar2009rank}, a low-tubal-rank tensor can be represented as the t-product of two smaller tensors \cite{kilmer2011factorization}. Specifically, the recovered tensor $\boldsymbol{\cal{M}}\in\mathbb{R}^{n_1\times n_2 \times n_3}$ can be factorized into the t-product of two tensors $\boldsymbol{\cal{X}}\in\mathbb{R}^{n_1\times r\times n_3}$ and $\boldsymbol{\cal{Y}}\in\mathbb{R}^{r\times n_2 \times n_3}$, where $r$ is the tubal rank of $\boldsymbol{\cal{M}}$ \cite{zhou2017tensor}. The tensor factorization then solves tensor completion by utilizing the objective function
\begin{equation}
\min_{{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}}} J({\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}}):=\left\|{\boldsymbol{\cal{P}}}\circ(\boldsymbol{\mathcal{X}} * \boldsymbol{\mathcal{Y}}-\boldsymbol{\mathcal{M}})\right\|_{F}^{2}\:.
\label{eq:TF}
\end{equation}
Tensor factorization can avoid the high complexity associated with performing the SVD,
and the complexity is reduced due to the inherent low-rank property. Two algorithms based on tensor factorization were proposed, namely, Tubal-Altmin \cite{liu2019low} and TCTF \cite{zhou2017tensor}.
\subsection{Maximum Correntropy Criterion (MCC)}
Correntropy is a local and nonlinear similarity measure between two random variables within a ``window'' in the joint space determined by the kernel width.
Given two random variables $X$ and $Y$, the correntropy is defined as \cite{liu2007correntropy}
\begin{equation}
V (X, Y)= \mathbb{E}[\kappa_{\sigma} (X, Y)]=\int \kappa_{\sigma} (x, y)dF_{XY} (x, y)\:,
\end{equation}
where $\kappa_\sigma$ is a shift-invariant Mercer kernel with kernel width $\sigma$, $F _{XY} (x, y)$ denotes the joint probability distribution function of $X$ and $Y$, and $\mathbb{E}[\cdot]$ is the expectation operator.
Given a finite number of samples $ \{x_i, y_i \} _{i=1}^N$, and using the Gaussian kernel, $G_\sigma(x)=\exp (-\frac{x^2}{2\sigma^2})$, as the kernel function, the correntropy can be approximated by
\begin{equation}
\hat{V} (X, Y)= \frac{1}{N} \sum_{i=1}^N \exp (-\frac{e_i^2}{2\sigma^2})\:,
\end{equation}
where $e_i=x_i-y_i$.
\par
Compared with the $l_2$-norm based second-order statistic of the error, the correntropy involves all the even moments of the difference between $X$ and $Y$ and is insensitive to outliers. Replacing the second-order measure with the correntropy measure leads to the maximum correntropy criterion (MCC) \cite{Singh2009Using}. The MCC solution is obtained by maximizing the following utility function
\begin{equation}
\label{MCC}
J_{\mathrm{mcc}} = \mathbb{E}\left[ G_{\sigma}\left( e(i) \right)\right]\:.
\end{equation}
Moreover, in practice, the MCC can also be formulated as minimizing the following correntropy-induced loss (C-loss) function \cite{singh2014c}
\begin{equation}
\label{eq:C-loss}
J_{C\text{-}loss}= \frac{1}{M}\sum_{i=1}^M\sigma^2\left(1-G_{\sigma}\left( e(i) \right)\right)\:.
\end{equation}
The cost function above is closely related to Welsch's cost function, originally introduced in \cite{dennis1978techniques}.
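A small numerical illustration of this insensitivity (the residual values below are arbitrary): the mean squared error is dominated by a single gross outlier, while each C-loss term is bounded by $\sigma^2$.
\begin{verbatim}
import numpy as np

def c_loss(e, sigma):
    # Correntropy-induced (Welsch) loss: sigma^2 * (1 - G_sigma(e)).
    return sigma**2 * (1.0 - np.exp(-e**2 / (2.0 * sigma**2)))

e = np.array([0.1, -0.2, 0.05, 50.0])  # last residual is a gross outlier
sigma = 1.0
print(np.mean(e**2))              # ~625    -- dominated by the outlier
print(np.mean(c_loss(e, sigma)))  # ~0.257  -- outlier capped at sigma^2
\end{verbatim}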
\section{Proposed Methods}
\label{sec:methods}
\subsection{Correntropy-based tensor completion}
Before we state our objective function for tensor completion, we first rewrite \eqref{eq:TF} as
\begin{equation}
\min_{{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}}} J({\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}}):=\sum\limits_{i = 1}^{n_1} \sum\limits_{j = 1}^{n_2} \sum\limits_{k = 1}^{n_3}{{{{\cal{P}} _{ijk}}{{\left( {{{\cal{M}}_{ijk}} - {\left( {{\boldsymbol{\cal{X}}*\boldsymbol{{\cal{Y}}} }}\right)_{ijk}}} \right)}^2}} }\:.
\end{equation}
When the observed entries $\boldsymbol{\cal{M}}_{ijk}$ are corrupted or contain large outliers, the $l_2$ error measure can bias the optimization, which degrades the performance of tensor completion. To enhance robustness, in this work we utilize the correntropy as the error measure. By replacing the $l_2$ error measure with correntropy, we obtain the new optimization problem
\begin{equation}
\label{eq:GTF}
\begin{aligned}
&\min_{{\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}}} J_{G_{\sigma}}({\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}})\\
&:=\!\sum\limits_{i = 1}^{n_1} \sum\limits_{j = 1}^{n_2} \sum\limits_{k = 1}^{n_3} \!{{\cal{P}}_{ijk}\sigma^2\left({1-G_{\sigma}{{\left( {{{\cal{M}}_{ijk}} - {\left( {{\boldsymbol{\cal{X}}*\boldsymbol{{\cal{Y}}} }}\right)_{ijk}}} \right)}}} \right)}
\end{aligned}
\end{equation}
The formulation in \eqref{eq:GTF}
generalizes the correntropy-based formulation in \cite{he2019robust} for matrix completion. In particular, for the special case where $n_3=1$, the optimization in \eqref{eq:GTF} reduces to correntropy-based matrix completion. Of course, since tensor algebra is substantially different from matrix algebra (even the definition of tensor rank is not unique), the solution in \cite{he2019robust} is no longer suitable for tensor completion, a fact which will also be verified in Section \ref{sec:results}. Thus, here we seek new approaches to solve \eqref{eq:GTF}.
\subsection{Optimization via half-quadratic minimization}
\label{sec:HC}
In general, \eqref{eq:GTF} is non-convex and difficult to optimize directly. To tackle this difficulty, we utilize the half-quadratic (HQ) optimization technique to optimize the correntropy-based cost function. According to half-quadratic optimization theory \cite{nikolova2005analysis},
there exists a convex conjugated function $\varphi$ such that
\begin{equation}
\label{eq:HQ1}
G _\sigma (e)=\max_{t}\left(\frac{e^2t}{\sigma^2}-\varphi(t)\right)\:,
\end{equation}
where $t\in\mathbb{R}$ and the maximum is reached at $t=-G _\sigma (e)$. Eq. \eqref{eq:HQ1} can be rewritten as
\begin{equation}
\sigma^2(1-G _\sigma (e))=\min_{t}\left(-e^2t+\sigma^2\varphi(t)\right)\:.
\end{equation}
By defining $s=-t$ and $\phi(s)=\sigma^2\varphi(-s)$, the above equation can be written as
\begin{equation}
\label{eq:HQ2}
\min_{e}\sigma^2(1-G _\sigma (e))=\min_{e,s}\left(e^2s+\phi(s)\right)\:.
\end{equation}
Thus, minimizing the non-convex C-loss function in terms of $e$ is equivalent to minimizing an augmented cost function in an enlarged parameter space $\{e,s\}$.
Therefore, by substituting \eqref{eq:HQ2} in \eqref{eq:GTF}, the correntropy-based objective function $J_{G_{\sigma}}({\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}})$ can be expressed as
\begin{align}
\begin{aligned}
J_{G_{\sigma}}\!({\boldsymbol{\mathcal{X}}, \boldsymbol{\mathcal{Y}}})\!=\mathop {\min}\limits_{\boldsymbol{\cal{W}}} \sum\limits_{i = 1}^{n_1}\! \sum\limits_{j = 1}^{n_2} \!\sum\limits_{k = 1}^{n_3}\! &
\Bigg({{{{\cal{W}}_{ijk}}{{\cal{P}} _{ijk}}{{\left({\cal{M}}_{ijk}\! -\! \left( \boldsymbol{{\cal{X}}}\!*\!\boldsymbol{{\cal{Y}}} \right)_{ijk} \right)}^2}}
\\
&
+ {{\cal{P}} _{ijk}} \phi \left( {{{\cal{W}}_{ijk}}} \right)\Bigg)
\end{aligned}
\end{align}
Further, by defining the augmented cost function
\begin{equation}
\label{eq:HQ3}
J_{HQ}({\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}},\boldsymbol{\cal{W}})\!=\!{\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}\!-\!\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)\|_{F}^2}+\phi_{\boldsymbol{\cal{P}}} \left( {\boldsymbol{\cal{W}}}\right)
\end{equation}
where $\phi_{\boldsymbol{\cal{P}}} \left( {\boldsymbol{\cal{W}}}\right)=\sum\nolimits_{i = 1}^{n_1} \sum\nolimits_{j = 1}^{n_2} \sum\nolimits_{k = 1}^{n_3}{{\cal{P}} _{ijk}}\,\phi \left( {{{\cal{W}}_{ijk}}} \right)$, we have the following relation
\begin{equation}
\label{HQG}
\min_{\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}}J_{G_{\sigma}}({\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}})=\min_{\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}},\boldsymbol{\cal{W}}}J_{HQ}({\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}},\boldsymbol{\cal{W}}})\:.
\end{equation}
Therefore, the correntropy-based optimization problem is formulated as a half-quadratic based optimization.
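Before stating the procedure, the following scalar sketch may help fix ideas: estimating a location parameter by alternating the closed-form HQ weight update with a weighted least-squares step (a Welsch M-estimator; the data and kernel width are arbitrary choices of ours).
\begin{verbatim}
import numpy as np

def hq_location(x, sigma, iters=50):
    # Alternate the HQ weight update w_i = G_sigma(x_i - m) with the
    # weighted least-squares minimiser m = sum(w_i x_i) / sum(w_i).
    m = np.median(x)  # robust initialisation
    for _ in range(iters):
        w = np.exp(-(x - m) ** 2 / (2.0 * sigma**2))
        m = np.sum(w * x) / np.sum(w)
    return m

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(1.0, 0.1, 100), np.full(10, 100.0)])
print(np.mean(x))           # ~10: the sample mean is ruined by outliers
print(hq_location(x, 0.5))  # ~1.0: outliers receive near-zero weight
\end{verbatim}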
We propose the following alternating minimization procedure to solve the optimization problem \eqref{eq:HQ3}:
\subsubsection{Optimizing $\boldsymbol{\cal{W}}$}
According to \eqref{eq:HQ1} and \eqref{eq:HQ2}, given a certain $e$, the minimum is reached at $s=G_{\sigma}(e)$. Therefore, given the fixed $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$, the optimal solutions of ${\cal{W}}_{ijk}$ for $(i,j,k)\in \boldsymbol{\Omega}$ can be obtained as
\begin{equation}
\label{eq:HQW}
{\cal{W}}_{ijk}=G_{\sigma}{{\left( {{{\cal{M}}_{ijk}} - {\left( {{\boldsymbol{\cal{X}}*\boldsymbol{{\cal{Y}}} }}\right)_{ijk}}} \right)}}, (i,j,k)\in\boldsymbol{\Omega}\:.
\end{equation}
Since the values ${\cal{W}}_{ijk}$ for $(i,j,k)\notin\boldsymbol{\Omega}$ do not affect the solution of \eqref{eq:GTF}, owing to the multiplication with $\boldsymbol{\cal{P}}$, henceforth
we write ${\cal{W}}_{ijk}$ for all the entries to simplify the expressions.
\subsubsection{Optimizing $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$}
Given a fixed $\boldsymbol{\cal{W}}$, \eqref{eq:HQ3} becomes a weighted tensor completion problem
\begin{equation}
\label{eq:WTF}
\min_{\boldsymbol{{\cal{X}}},\boldsymbol{\cal{Y}}}{\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{{\cal{P}}} \circ \left(\boldsymbol{\cal{M}}-\boldsymbol{\cal{X}*\cal{Y}}\right)\|_{F}^2}\:.
\end{equation}
The weighting tensor $\boldsymbol{\cal{W}}$ assigns a different weight to each observed entry based on its error residual. Given the nature of the Gaussian function, a large error leads to a small weight, so that the negative impact of large outliers on the error statistics is greatly reduced. In the following, we propose and develop two algorithms to solve \eqref{eq:WTF}.
\subsection{Alternating minimization-based algorithm}
\label{sec:HQ-TCTF}
Inspired by TCTF \cite{zhou2017tensor}, we first propose an alternating minimization-based approach to solve \eqref{eq:WTF}. By introducing an auxiliary tensor variable $\boldsymbol{\cal{Z}}$, \eqref{eq:WTF} can be rewritten as
\begin{equation}
\begin{aligned}
\label{eq:TCTF_J}
\min_{\boldsymbol{{\cal{X}}},\boldsymbol{\cal{Y}},\boldsymbol{\cal{Z}}} J({\boldsymbol{{\cal{X}}},\boldsymbol{\cal{Y}},\boldsymbol{\cal{Z}}}):=&\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}-\boldsymbol{\cal{Z}}\right)\|_{F}^2\\
&+\beta\|\boldsymbol{\cal{X}*\cal{Y}}-\boldsymbol{\cal{Z}}\|_{F}^2\:,
\end{aligned}
\end{equation}
where $\beta$ is the regularization parameter. One can alternate between updating $\boldsymbol{\cal{Z}}$, $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$. Specifically, by fixing $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$, we can update $\boldsymbol{\cal{Z}}$ as
\begin{equation}
\label{eq:TCTF_Z}
\boldsymbol{\cal{Z}}=\arg\min_{\boldsymbol{\cal{Z}}}{\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}-\boldsymbol{\cal{Z}}\right)\|_{F}^2+\beta\|\boldsymbol{\cal{X}*\cal{Y}}-\boldsymbol{\cal{Z}}\|_{F}^2}
\end{equation}
To solve \eqref{eq:TCTF_Z}, we set the first derivative of $J({\boldsymbol{{\cal{X}}},\boldsymbol{\cal{Y}},\boldsymbol{\cal{Z}}})$ with respect to $\boldsymbol{\cal{Z}}$ to zero, i.e.,
\begin{equation}
\label{eq:TCTF_pZ}
\frac{\partial J}{\partial {\boldsymbol{\cal{Z}}}}=2\left( \boldsymbol{\cal{W}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{Z}}-\boldsymbol{\cal{M}}\right)+\beta\boldsymbol{\cal{Z}}-\beta\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)=\boldsymbol{0}
\end{equation}
Eq. \eqref{eq:TCTF_pZ} is equivalent to the requirement that
\begin{equation}
\left\{\begin{array}{cl}
\boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{W}} \circ\boldsymbol{\cal{Z}}-\boldsymbol{\cal{W}} \circ\boldsymbol{\cal{M}}+\beta\boldsymbol{\cal{Z}}-\beta\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)=\boldsymbol{0} \\
(\boldsymbol{{1}}-\boldsymbol{\cal{P}})\circ(\boldsymbol{\cal{Z}}-\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}})=\boldsymbol{0}
\end{array}\right.
\end{equation}
Thus, $\boldsymbol{\cal{Z}}$ can be obtained in closed form as
\begin{equation}
\begin{aligned}
\boldsymbol{\cal{Z}}&=\boldsymbol{\cal{P}}\circ\boldsymbol{\cal{Z}}+(\boldsymbol{{1}}-\boldsymbol{\cal{P}})\circ\boldsymbol{\cal{Z}}\\
&=\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}+\frac{\boldsymbol{\cal{W}}}{\beta\boldsymbol{1}+\boldsymbol{\cal{W}}}\circ\boldsymbol{\cal{P}}\circ(\boldsymbol{\cal{M}}-\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}})
\end{aligned}
\label{eq:TCTF_upZ}
\end{equation}
where $\boldsymbol{1}$ denotes the tensor of all ones, and the division is element-wise.
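For concreteness, a minimal NumPy sketch of the closed-form update \eqref{eq:TCTF_upZ} is given below; the element-wise division mirrors the notation above, and all names are illustrative.
\begin{verbatim}
import numpy as np

def update_Z(M, XY, P, W, beta):
    # Closed-form Z update, cf. (eq:TCTF_upZ); a sketch only.
    # XY is the current t-product X * Y; operations are element-wise.
    return XY + (W / (beta + W)) * P * (M - XY)
\end{verbatim}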
Further, by fixing $\boldsymbol{\cal{Z}}$, \eqref{eq:TCTF_J} reduces to the following minimization:
\begin{equation}
\min_{\boldsymbol{{\cal{X}}},\boldsymbol{\cal{Y}}}{\|\boldsymbol{\cal{X}*\cal{Y}}-\boldsymbol{\cal{Z}}\|_{F}^2}\:.
\end{equation}
According to Lemma \ref{lem:t-product}, we have
\begin{equation}
{\|\boldsymbol{\cal{X}*\cal{Y}}-\boldsymbol{\cal{Z}}\|_{F}^2}=\frac{1}{n_3}{\|\boldsymbol{\bar{X}}\boldsymbol{\bar{Y}}-\boldsymbol{\bar{Z}}\|_{F}^2}\:.
\end{equation}
Given the block structure of $\boldsymbol{\bar{X}}$, $\boldsymbol{\bar{Y}}$ and $\boldsymbol{\bar{Z}}$, the above minimization problem is equivalent to solving the $n_3$ subproblems
\begin{equation}
\min_{\boldsymbol{\bar{X}}^{(k)},\boldsymbol{\bar{Y}}^{(k)}}{\|\boldsymbol{\bar{X}}^{(k)}\boldsymbol{\bar{Y}}^{(k)}-\boldsymbol{\bar{Z}}^{(k)}\|_{F}^2},k=1,\ldots,n_3\:.
\end{equation}
For each $k$, we can alternate between least-squares solutions to $\boldsymbol{\bar{X}}^{(k)}$ and $\boldsymbol{\bar{Y}}^{(k)}$, i.e.,
\begin{equation}
\begin{aligned}
\label{eq:TCTF_XY}
\boldsymbol{\bar{X}}^{(k)}&={\boldsymbol{\bar{Z}}}^{(k)}\left(\bar{\boldsymbol{Y}}^{(k)}\right)^{*}\left(\bar{\boldsymbol{Y}}^{(k)}\left(\bar{\boldsymbol{Y}}^{(k)}\right)^{*}\right)^{\dagger}\\
\boldsymbol{\bar{Y}}^{(k)}&=\left(\left(\bar{\boldsymbol{X}}^{(k)}\right)^{*}\bar{\boldsymbol{X}}^{(k)}\right)^{\dagger}\left(\bar{\boldsymbol{X}}^{(k)}\right)^{*}{\boldsymbol{\bar{Z}}}^{(k)}
\end{aligned}
\end{equation}
where $\boldsymbol{A}^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of matrix $\boldsymbol{A}$. Therefore, to solve \eqref{eq:HQ3}, we alternate between the updates in \eqref{eq:HQW}, \eqref{eq:TCTF_upZ} and \eqref{eq:TCTF_XY} until convergence. We name this algorithm `Half-Quadratic based Tensor Completion by Tensor Factorization' (HQ-TCTF). The pseudocode of HQ-TCTF is summarized in Algorithm \ref{alg:HQ-TCTF}. Note that in step 3 of the algorithm we use an adaptive kernel width to enhance the rate of convergence. More details about this strategy are discussed in Section \ref{sec:stopping}.
Note that the $n_3$ subproblems in each alternating minimization step are independent of each other. Thus, the solution to these subproblems can be parallelized to further speed up computation.
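A minimal sketch of the per-slice least-squares updates \eqref{eq:TCTF_XY} is given below. The slice-first array layout and the function name are illustrative assumptions; the arrays hold the frontal slices in the Fourier domain, and the loop over $k$ could be parallelized as noted above.
\begin{verbatim}
import numpy as np

def update_XY_fourier(Xbar, Ybar, Zbar):
    # Per-slice least squares, cf. (eq:TCTF_XY); a sketch only.
    # Xbar: (n3, n1, r), Ybar: (n3, r, n2), Zbar: (n3, n1, n2) hold
    # the frontal slices after an FFT along the third mode.
    for k in range(Zbar.shape[0]):           # independent subproblems
        Yk = Ybar[k]
        Xbar[k] = Zbar[k] @ Yk.conj().T @ np.linalg.pinv(Yk @ Yk.conj().T)
        Xk = Xbar[k]
        Ybar[k] = np.linalg.pinv(Xk.conj().T @ Xk) @ Xk.conj().T @ Zbar[k]
    return Xbar, Ybar
\end{verbatim}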
\begin{algorithm}
\caption{HQ-TCTF for robust tensor completion}
\begin{algorithmic}[1]
\REQUIRE $\boldsymbol{\cal{P}}$, $\boldsymbol{\cal{P}}\circ\boldsymbol{\cal{M}}$, $\beta$ and $r$
\STATE Initialize tensors $\boldsymbol{\cal{X}}^0$ and $\boldsymbol{\cal{Y}}^0$; set $t=0$\\
\REPEAT
\STATE compute $\sigma^{t+1}$ using \eqref{sigmaout} and $\boldsymbol{\cal{W}}^{t+1}$ using \eqref{eq:HQW}.
\STATE compute $\boldsymbol{\cal{Z}}^{t+1}$ using \eqref{eq:TCTF_upZ}.
\FOR {$k=1,...,n_3$}
\STATE compute $\bar{\boldsymbol{X}}^{(k),t+1}$ and $\bar{\boldsymbol{Y}}^{(k),t+1}$ using \eqref{eq:TCTF_XY}
\ENDFOR
\STATE $t=t+1$
\UNTIL stopping criterion is satisfied
\ENSURE $\boldsymbol{\cal{X}}^{t}*\boldsymbol{\cal{Y}}^{t}$
\end{algorithmic}
\label{alg:HQ-TCTF}
\end{algorithm}
\begin{remark}
One can observe that as $\sigma\rightarrow\infty$, $G_{\sigma}(e)$ approaches $1$, thus all the entries of $\boldsymbol{\cal{W}}$ become $1$. In this special case, one does not need to optimize $\boldsymbol{\cal{W}}$ in \eqref{eq:HQW}, and \eqref{eq:HQ3} reduces to
\begin{equation}
\min_{\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}}\|\boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}-\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)\|_{F}^2
\:,
\end{equation}
which is the tensor completion problem in \eqref{eq:TF}. Further, by setting $\beta=0$ in \eqref{eq:TCTF_upZ}, the updates of $\boldsymbol{\cal{Z}}$, $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$ will be the same as in TCTF.
\end{remark}
\begin{remark}
The adaptive tubal rank estimation method developed for TCTF \cite{zhou2017tensor} can be naturally applied to HQ-TCTF. Specifically, the scalar rank parameter $r$ in Algorithm \ref{alg:HQ-TCTF} can be replaced with a multi-rank vector $\boldsymbol{r}=[r_1,\ldots,r_{n_3}]$
and the adaptive approach in \cite{xu2015parallel,zhou2017tensor} iteratively estimates the rank of the tensor.
\end{remark}
The following proposition establishes convergence guarantees for HQ-TCTF.
\begin{proposition} Define the cost function
\begin{equation}
\begin{aligned}
J({\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}},\boldsymbol{\cal{Z}},\boldsymbol{\cal{W}})=&{\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}\!-\!\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)\|_{F}^2}\\
&+\!\beta\|\boldsymbol{\cal{X}*\cal{Y}}\!-\!\boldsymbol{\cal{Z}}\|_{F}^2+\boldsymbol{\cal{P}} \circ\phi \left( {\boldsymbol{\cal{W}}}\right)\:.
\end{aligned}
\end{equation}
The sequence $\{J({\boldsymbol{\cal{X}}^t,\boldsymbol{\cal{Y}}}^t,\boldsymbol{\cal{Z}}^t,\boldsymbol{\cal{W}}^t),t=1,2,\ldots\}$ generated by Algorithm \ref{alg:HQ-TCTF} converges.
\end{proposition}
\begin{proof}
Since $\boldsymbol{\cal{W}}$ and $\boldsymbol{\cal{Z}}$ are optimal solutions to \eqref{eq:HQW} and \eqref{eq:TCTF_Z}, respectively, we have
\begin{equation}
\label{eq:TCTF_p1_J}
J({\boldsymbol{\cal{X}}^{t+1},\boldsymbol{\cal{Y}}}^{t+1},\boldsymbol{\cal{Z}}^{t+1},\boldsymbol{\cal{W}}^{t+1})\leq J({\boldsymbol{\cal{X}}^{t+1},\boldsymbol{\cal{Y}}}^{t+1},\boldsymbol{\cal{Z}}^{t},\boldsymbol{\cal{W}}^{t}) \:.
\end{equation}
Then, from Lemma 3 in the supplementary material of \cite{zhou2017tensor}, one can obtain that for each pair of matrices
${\boldsymbol{\bar{X}}^{(k)},\boldsymbol{\bar{Y}}^{(k)}},k=1,\ldots,n_3$ generated from \eqref{eq:TCTF_XY}, the following inequality holds
\begin{equation}
{\|\boldsymbol{\bar{X}}^{(k),t+1}\boldsymbol{\bar{Y}}^{(k),t+1}-\boldsymbol{\bar{Z}}^{(k)}\|_{F}^2}\leq{\|\boldsymbol{\bar{X}}^{(k),t}\boldsymbol{\bar{Y}}^{(k),t}-\boldsymbol{\bar{Z}}^{(k)}\|_{F}^2}\:.
\end{equation}
From Lemma \ref{lem:t-product}, we have ${\|\boldsymbol{\cal{X}}^{t}*\boldsymbol{\cal{Y}}^{t}-\boldsymbol{\cal{Z}}\|_{F}^2}=\frac{1}{n_3}\sum_{k=1}^{n_3}{\|\boldsymbol{\bar{X}}^{(k),t}\boldsymbol{\bar{Y}}^{(k),t}-\boldsymbol{\bar{Z}}^{(k)}\|_{F}^2}$. Thus the following inequality holds
\begin{equation}
\label{eq:TCTF_p1_XY}
{\|\boldsymbol{\cal{X}}^{t+1}*\boldsymbol{\cal{Y}}^{t+1}-\boldsymbol{\cal{Z}}\|_{F}^2}\leq{\|\boldsymbol{\cal{X}}^{t}*\boldsymbol{\cal{Y}}^{t}-\boldsymbol{\cal{Z}}\|_{F}^2}
\end{equation}
Combining \eqref{eq:TCTF_p1_J} and \eqref{eq:TCTF_p1_XY} we have
\begin{equation}
J({\boldsymbol{\cal{X}}^{t+1},\boldsymbol{\cal{Y}}}^{t+1},\boldsymbol{\cal{Z}}^{t+1},\boldsymbol{\cal{W}}^{t+1})\leq J({\boldsymbol{\cal{X}}^{t},\boldsymbol{\cal{Y}}}^{t},\boldsymbol{\cal{Z}}^{t},\boldsymbol{\cal{W}}^{t})
\end{equation}
It can also be verified that $J({\boldsymbol{\cal{X}}^{t},\boldsymbol{\cal{Y}}}^{t},\boldsymbol{\cal{Z}}^{t},\boldsymbol{\cal{W}}^{t})$ is bounded below for arbitrary $t$. Thus, $\{J({\boldsymbol{\cal{X}}^{t},\boldsymbol{\cal{Y}}}^{t},\boldsymbol{\cal{Z}}^{t},\boldsymbol{\cal{W}}^{t}),t=1,2,\ldots\}$ converges.
\end{proof}
\subsection{Alternating steepest descent-based algorithm}
\label{sec:HQ-TCASD}
In the context of matrix completion, alternating steepest descent (ASD) was introduced to efficiently solve the completion problem \cite{tanner2016low}. ASD has a lower per-iteration complexity than PowerFactorization, and can recover high rank matrices. In this section, we introduce the ASD method for tensor completion and develop an efficient robust tensor completion algorithm.
As mentioned in Section \ref{sec:HC}, we first optimize $\boldsymbol{\cal{W}}$ using \eqref{eq:HQW}. Then, instead of directly optimizing \eqref{eq:WTF}, we gradually update $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$ using gradient descent. For convenience, we first add a multiplicative factor of $\frac{1}{2}$ to \eqref{eq:WTF} such that the minimization problem becomes
\begin{equation}
\min_{\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}}\frac{1}{2}{\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}-\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)\|_{F}^2}\:.
\label{eq:ASD_J}
\end{equation}
Then, using the relation \eqref{eq:prelim_1} and Definition \ref{def:tprod} in Section \ref{sec:prelim}, \eqref{eq:ASD_J} can be rewritten as
\begin{equation}
\min_{\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}}\frac{1}{2}{\|\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left(\tilde{\boldsymbol{M}}-\operatorname{bcirc}(\boldsymbol{\cal{X}})\tilde{\boldsymbol{Y}}\right)\|_{F}^2}
\label{eq:ASD_J2}\:.
\end{equation}
Based on the block-circulant diagonalization \cite{kilmer2013third}, we have
\begin{equation}
\begin{aligned}
\operatorname{bcirc}(\boldsymbol{\cal{X}})\tilde{\boldsymbol{Y}}&=\left(\boldsymbol{F}_{n_{3}}^{-1} \otimes \boldsymbol{I}_{n_{1}}\right) \bar{\boldsymbol{X}}{\hat{\boldsymbol{Y}}}\\
&=\boldsymbol{F}^{-1}\bar{\boldsymbol{X}}{\hat{\boldsymbol{{ Y }}}}\\
&=\boldsymbol{U}{\hat{\boldsymbol{{ Y }}}}
\end{aligned}
\end{equation}
where $\boldsymbol{F}^{-1}=\boldsymbol{F}_{n_{3}}^{-1} \otimes \boldsymbol{I}_{n_{1}}$ (so that $\boldsymbol{F}^{H}=n_3\boldsymbol{F}^{-1}$ for the unnormalized DFT matrix $\boldsymbol{F}_{n_3}$), $\boldsymbol{U}=\boldsymbol{F}^{-1}\bar{\boldsymbol{X}}$ and ${\hat{\boldsymbol{A}}}=\operatorname{unfold}(\bar{\boldsymbol{\cal{A}}})$. Finally, \eqref{eq:ASD_J2} can be reformulated as
\begin{equation}
\min_{\boldsymbol{U},\hat{\boldsymbol{Y}}} J({\boldsymbol{U},{{\hat{\boldsymbol{Y}}}}}):=\frac{1}{2}{\left\|{\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left({\tilde{\boldsymbol{M}}}-{\boldsymbol{U}{\hat{{\boldsymbol{Y}}}}}\right)}\right\|_{F}^2}\:.
\end{equation}
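This diagonalization is also what makes the t-product cheap to evaluate in practice. The following sketch (with illustrative names) computes $\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}$ by slice-wise matrix products in the Fourier domain, which is equivalent to applying $\operatorname{bcirc}(\boldsymbol{\cal{X}})$ to the unfolding of $\boldsymbol{\cal{Y}}$.
\begin{verbatim}
import numpy as np

def t_product(X, Y):
    # t-product via the FFT; a sketch of the diagonalization above.
    # X: n1 x r x n3, Y: r x n2 x n3.
    Xf = np.fft.fft(X, axis=2)
    Yf = np.fft.fft(Y, axis=2)
    Zf = np.einsum('irk,rjk->ijk', Xf, Yf)   # slice-wise products
    return np.real(np.fft.ifft(Zf, axis=2))
\end{verbatim}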
Using the matrix derivatives, the partial derivative of $J({\boldsymbol{U},\hat{\boldsymbol{Y}}})$ with respect to $\boldsymbol{U}$ can be computed as
\begin{equation}
\boldsymbol{g}_{\boldsymbol{U}}=\frac{\partial J}{\partial {\boldsymbol{U} }}=-{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left(\tilde{{\boldsymbol{M}}}-{\boldsymbol{U}{{\hat{\boldsymbol{Y}}}}}\right){{\hat{\boldsymbol{Y}}}}^* \:.
\end{equation}
Note that ${\bar{\boldsymbol{X}}}=\boldsymbol{F}\boldsymbol{U}$ is a block diagonal matrix. Following the method in \cite{gilman2020grassmannian}, we force the update of ${\bar{\boldsymbol{X}}}$ at each iteration to be block diagonal. Specifically, by defining the operator $\operatorname{bdiagz}(\cdot)$ which sets the non-block-diagonal entries of a matrix to zero, the updated gradient can be obtained as
\begin{equation}
\boldsymbol{g}'_{\boldsymbol{U}}=\boldsymbol{F}^{-1}\operatorname{bdiagz}(\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})\:.
\label{eq:ASD_gU}
\end{equation}
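A sketch of the $\operatorname{bdiagz}(\cdot)$ operator follows; we assume the input is partitioned into an $n_3\times n_3$ grid of $n_1\times r$ blocks, of which only the diagonal blocks are kept.
\begin{verbatim}
import numpy as np

def bdiagz(A, n1, r, n3):
    # Zero out the non-block-diagonal entries of A; a sketch only.
    out = np.zeros_like(A)
    for k in range(n3):
        out[k*n1:(k+1)*n1, k*r:(k+1)*r] = A[k*n1:(k+1)*n1, k*r:(k+1)*r]
    return out
\end{verbatim}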
The steepest descent step size $\mu'_{\boldsymbol{U}}$ along ${\boldsymbol{U}}$ is obtained from the following exact line search
\begin{equation}
\begin{aligned}
\mu'_{{{\boldsymbol{U}}}}&=\arg\min_{\mu}{\left\|{\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left({\tilde{\boldsymbol{M}}}-({\boldsymbol{U}-\mu\boldsymbol{g}'_{\boldsymbol{U}}){\hat{{\boldsymbol{Y}}}}}\right)}\right\|_{F}^2}\\
&=\frac{\|\boldsymbol{g}'_{{{\boldsymbol{U}}}}\|_F^2}{\left\|\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ\left( {\boldsymbol{g}}'_{{\boldsymbol{U}}}{{\hat{\boldsymbol{Y}}}}\right)\right\|_F^2}
\end{aligned}
\end{equation}
and the matrix ${\boldsymbol{U}}$ can be updated as
\begin{equation}
\label{eq:ASD_U}
\boldsymbol{U}^{t+1}=\boldsymbol{U}^{t}-\mu_{\boldsymbol{U}}'^t{\boldsymbol{g}}'^t_{\boldsymbol{U}}\:.
\end{equation}
Similarly, by fixing $\boldsymbol{U}$, the partial derivative of $J$ w.r.t. $\hat{\boldsymbol{Y}}$ can be obtained as
\begin{equation}
\boldsymbol{g}_{\hat{\boldsymbol{Y}}}=\frac{\partial J}{\partial {\hat{\boldsymbol{Y}}}}=-{\boldsymbol{U}}^*\left({\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left(\tilde{{\boldsymbol{M}}}-{\boldsymbol{U}{{\hat{\boldsymbol{Y}}}}}\right)\right)\:.
\label{eq:ASD_gY}
\end{equation}
The corresponding step size $\mu_{{\hat{\boldsymbol{Y}}}}$ will be
\begin{equation}
\begin{aligned}
{\mu}_{{\hat{\boldsymbol{Y}}}}&=\frac{\|\boldsymbol{g}_{{\hat{\boldsymbol{Y}}}}\|_F^2}{\left\|\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left({\boldsymbol{U}}{\boldsymbol{g}}_{{\hat{\boldsymbol{Y}}}}\right)\right\|_F^2}\:.
\end{aligned}
\end{equation}
Similar to ASD, the foregoing update process suffers from a slow rate of convergence when directly applied to image and video completion tasks. To tackle this problem, following a Newton-like method for Scaled ASD \cite{tanner2016low}, we scale the gradient descent direction for $\hat{\boldsymbol{Y}}$ in \eqref{eq:ASD_gY} by $(\boldsymbol{U}^*\boldsymbol{U})^{-1}$, i.e.,
\begin{equation}
\boldsymbol{g}'_{\hat{\boldsymbol{Y}}}=(\boldsymbol{U}^*\boldsymbol{U})^{-1}\boldsymbol{g}_{\hat{\boldsymbol{Y}}}\:,
\label{eq:ASD_gY2}
\end{equation}
and the corresponding step size $\mu'_{{\hat{\boldsymbol{Y}}}}$ with exact line-search is
\begin{equation}
\begin{aligned}
{\mu}'_{{\hat{\boldsymbol{Y}}}}&=\frac{\langle\boldsymbol{g}_{\hat{\boldsymbol{Y}}},\boldsymbol{g}'_{\hat{\boldsymbol{Y}}}\rangle}{\left\|\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}} \circ \left({\boldsymbol{U}}{\boldsymbol{g}}'_{{\hat{\boldsymbol{Y}}}}\right)\right\|_F^2}\:,
\end{aligned}
\end{equation}
where $\langle \boldsymbol{A}, \boldsymbol{B}\rangle := \sum_{1 \leq i, j \leq n} a^*_{i, j} b_{i, j}$. Therefore, the matrix $\hat{\boldsymbol{Y}}$ at the $t$-th iteration can be updated by combining \eqref{eq:ASD_gY} and \eqref{eq:ASD_gY2}, i.e.,
\begin{equation}
\label{eq:ASD_Y}
{\hat{\boldsymbol{Y}}}^{t+1}={\hat{\boldsymbol{Y}}}^{t}-(1-\lambda)\mu_{{\hat{\boldsymbol{Y}}}}^t\boldsymbol{g}_{{\hat{\boldsymbol{Y}}}}^t-\lambda\mu'^t_{{\hat{\boldsymbol{Y}}}}\boldsymbol{g}'^t_{{\hat{\boldsymbol{Y}}}}\:,
\end{equation}
where $0\leq\lambda\leq1$ is a free parameter to be chosen.
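To make the update concrete, the sketch below performs one $\hat{\boldsymbol{Y}}$ step combining the plain and scaled directions as in \eqref{eq:ASD_Y}. Here \texttt{Q2} stands for the element-wise weights $\tilde{\boldsymbol{W}}\circ\tilde{\boldsymbol{P}}$, and all names are illustrative; the $\boldsymbol{U}$ step is analogous, with the additional block-diagonal projection \eqref{eq:ASD_gU}.
\begin{verbatim}
import numpy as np

def update_Y_hat(U, Y_hat, M_t, Q2, lam):
    # One Y-hat update, cf. (eq:ASD_Y); a sketch only.
    R = Q2 * (M_t - U @ Y_hat)                # weighted residual
    g = -U.conj().T @ R                       # gradient, cf. (eq:ASD_gY)
    gp = np.linalg.solve(U.conj().T @ U, g)   # scaled dir., cf. (eq:ASD_gY2)
    sq = np.sqrt(Q2)
    mu = np.linalg.norm(g)**2 / np.linalg.norm(sq * (U @ g))**2
    mup = np.real(np.vdot(g, gp)) / np.linalg.norm(sq * (U @ gp))**2
    return Y_hat - (1 - lam) * mu * g - lam * mup * gp
\end{verbatim}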
Therefore, the matrices ${\boldsymbol{U}}$ and ${{\hat{\boldsymbol{Y}}}}$ can be alternately updated using \eqref{eq:ASD_U} and \eqref{eq:ASD_Y} until convergence. We term the above algorithm `Half-Quadratic based Tensor Completion by Alternating Steepest Descent' (HQ-TCASD).
Similar to HQ-TCTF, adaptive selection of the kernel width $\sigma$ is used to improve the rate of convergence and the performance of HQ-TCASD. HQ-TCASD is summarized in Algorithm \ref{alg:HQ-TCASD}. We remark that the matrices ${\boldsymbol{U}}$ (equivalently $\bar{\boldsymbol{X}}$) and ${{\hat{\boldsymbol{Y}}}}$ have a block structure, so the matrix computations can be processed block-by-block. Also, since we have $\boldsymbol{F}\tilde{\boldsymbol{A}}=\operatorname{unfold}(\operatorname{fft}(\boldsymbol{\cal{A}},[\,],3))$ for a tensor $\boldsymbol{\cal{A}}$, the conventional FFT operation can be used in \eqref{eq:ASD_gU} instead of matrix multiplication to further accelerate the computation.
\begin{algorithm}
\caption{HQ-TCASD for robust tensor completion}
\begin{algorithmic}[1]
\REQUIRE $\boldsymbol{\cal{P}}$, $\boldsymbol{\cal{P}}\circ\boldsymbol{\cal{M}}$, $r$ and $\lambda$
\STATE Initialize matrices $\boldsymbol{U}^0$ and $\hat{\boldsymbol{Y}}^0$; set $t=0$\\
\REPEAT
\STATE compute $\sigma^{t+1}$ using \eqref{sigmaout} and $\boldsymbol{\cal{W}}^{t+1}$ using \eqref{eq:HQW}.
\STATE compute $\boldsymbol{U}^{t+1}$ using \eqref{eq:ASD_U}.
\STATE compute $\hat{\boldsymbol{Y}}^{t+1}$ using \eqref{eq:ASD_Y}.
\STATE $t=t+1$
\UNTIL stopping criterion is satisfied
\ENSURE $\operatorname{fold}(\boldsymbol{U}^{t}{\hat{\boldsymbol{Y}}}^{t})$
\end{algorithmic}
\label{alg:HQ-TCASD}
\end{algorithm}
The following proposition verifies the convergence of the proposed HQ-TCASD.
\begin{proposition}
Define the cost function
\begin{equation}
\begin{aligned}
J({\boldsymbol{\cal{X}},\boldsymbol{\cal{Y}}},\boldsymbol{\cal{W}})=&\frac{1}{2}{\|\sqrt{\boldsymbol{\cal{W}}} \circ \boldsymbol{\cal{P}} \circ \left(\boldsymbol{\cal{M}}\!-\!\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}\right)\|_{F}^2}\\
&+\frac{1}{2}\boldsymbol{\cal{P}} \circ\phi \left( {\boldsymbol{\cal{W}}}\right)\:.
\end{aligned}
\end{equation}
The sequence $\{J({\boldsymbol{\cal{X}}^t,\boldsymbol{\cal{Y}}}^t,\boldsymbol{\cal{W}}^t),t=1,2,\ldots\}$ generated by Algorithm \ref{alg:HQ-TCASD} converges.
\end{proposition}
\begin{proof}
See Appendix A.
\end{proof}
\begin{remark}
As $\sigma\rightarrow\infty$ (i.e., for the standard tensor completion cost function in \eqref{eq:TF}), one can set $\boldsymbol{\cal{W}}$ to be the all-ones tensor and alternately update \eqref{eq:ASD_U} and \eqref{eq:ASD_Y}. This is itself a new algorithm, which we term TCASD. It can be used for tensor completion in noise-free settings or with Gaussian noise.
\end{remark}
\subsection{Stopping criterion and adaptive kernel width selection}
\label{sec:stopping}
The relative error between iterations can be used to measure the speed of convergence and develop a stopping criterion. Specifically, the residual error tensor at the $t$-th iteration $\boldsymbol{\cal{E}}^t$ is defined as
\begin{equation}
\boldsymbol{\mathcal{E}}^t=\sqrt{\boldsymbol{\cal{W}}^t}\circ\boldsymbol{\cal{P}}\circ(\boldsymbol{\mathcal{M}}-\boldsymbol{\mathcal{X}^t}*\boldsymbol{\mathcal{Y}}^t)\:.
\end{equation}
If $\left|\|\boldsymbol{\cal{E}}^{t}\|_F-\|\boldsymbol{\cal{E}}^{t-1}\|_F\right|$ falls below a sufficiently small value $\varepsilon$, the algorithm is considered to have converged to a local minimum, and the iterative procedure terminates.
To further improve performance and achieve a faster rate of convergence, we use an adaptive kernel width selection strategy. Specifically, the kernel width at the $t$-th iteration is determined by
\begin{equation}
\label{sigmaout}
\sigma^{t}=\max\left(\eta\left(\max({\boldsymbol{e}_{\Omega}^t}_{(0.25)},{\boldsymbol{e}_{\Omega}^t}_{(0.75)})\right),\sigma_{min}\right)
\end{equation}
where $\boldsymbol{e}^{t}_{\Omega}\in\mathbb{R}^{|\Omega|\times 1}$ denotes the vector composed of all non-zero entries of $\boldsymbol{\mathcal{E}}^{'t}=\boldsymbol{\cal{P}}\circ(\boldsymbol{\mathcal{M}}-\boldsymbol{\mathcal{X}^t}*\boldsymbol{\mathcal{Y}}^t)$, and $\boldsymbol{y}_{(q)}$ denotes the $q$-th quantile of $\boldsymbol{y}$. The parameter $\eta$ controls the kernel width, and $\sigma_{min}$ is a lower bound on $\sigma$.
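A sketch of the kernel width selection \eqref{sigmaout} follows, where \texttt{e\_obs} collects the residuals on the observed entries and \texttt{eta}, \texttt{sigma\_min} are the tuning parameters above.
\begin{verbatim}
import numpy as np

def select_sigma(e_obs, eta, sigma_min):
    # Adaptive kernel width, cf. (sigmaout); a sketch only.
    q25, q75 = np.quantile(e_obs, [0.25, 0.75])
    return max(eta * max(q25, q75), sigma_min)
\end{verbatim}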
\subsection{Complexity analysis}
\label{sec:complexity}
We first present a complexity analysis of HQ-TCTF. Computing $\sigma$ involves computing $\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}$ and finding the quantiles of $e_{\Omega}$, whose time complexities are ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_2n_3)$ and ${\cal{O}}(n_1n_2n_3)$, respectively. The complexities of computing $\boldsymbol{\cal{W}}$ and $\boldsymbol{\cal{Z}}$ are both ${\cal{O}}(n_1n_2n_3)$ since $\boldsymbol{\cal{X}}*\boldsymbol{\cal{Y}}$ has already been computed. Then, the cost of updating $\boldsymbol{\cal{X}}$ and $\boldsymbol{\cal{Y}}$ is ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_2n_3)$. Therefore, the overall complexity of HQ-TCTF is ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_2n_3)$.
For HQ-TCASD, similar to HQ-TCTF, the complexity of computing $\sigma$ is ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_2n_3)$. Computing $\boldsymbol{g}'_{\boldsymbol{U}}$ using FFT has complexity ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_3\max(n_2,n_3))$, and calculation of $\mu_{\boldsymbol{U}},\boldsymbol{g}_{\hat{\boldsymbol{Y}}}$ and $\mu_{\hat{\boldsymbol{Y}}}$ is of complexity ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_2n_3)$. Therefore, the overall complexity of HQ-TCASD is ${\cal{O}}(r(n_1+n_2)n_3\log n_3+rn_1n_3\max(n_2,n_3))$. Further, one can observe that if $n_2 > n_3$, both HQ-TCTF and HQ-TCASD have the same order of complexity.
\section{Experiments}
\label{sec:results}
In this section, we thoroughly evaluate the performance of the proposed algorithms HQ-TCTF, HQ-TCASD and TCASD using both synthetic and real data. We compare against existing tensor completion algorithms, including TCTF \cite{zhou2017tensor}, TAM \cite{liu2019low} and TNN \cite{zhang2016exact}, and robust tensor completion algorithms, including SNN-L1 \cite{goldfarb2014robust}, SNN-WHT \cite{yang2015robust}, SNN-WST \cite{yang2015robust}, TRNN-L1 \cite{huang2020robust} and TNN-L1 \cite{jiang2019robust}. For a fair comparison, the adaptive kernel width selection method is also applied to SNN-WHT and SNN-WST in the experiments. Further, the correntropy-based robust matrix completion algorithm \cite{he2019robust} is also included in the comparisons, where the tensor is treated as $n_3$ matrices of dimension $n_1\times n_2$. In the experiments, we refer to this matrix-completion-based method as HQ-MCASD. All the algorithms were implemented using MATLAB R2019b on a standard PC with a 2.6-GHz processor and 16-GB memory.
In the experiments, the observed entries of the tensor are perturbed by additive noise generated from the standard two-component Gaussian mixture model (GMM). The probability density function is given by $(1-c)N(0,{\sigma_A^2})+cN(0,\sigma_B^2)$, where $N(0,{\sigma_A^2})$ represents the general Gaussian noise disturbance with variance ${\sigma_A^2}$, and $N(0,\sigma_B^2)$ with a large variance $\sigma_B^2$ captures the outliers. The variable $c$ controls the occurrence probability of outliers.
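For reproducibility, the following sketch draws samples from this two-component mixture; the generator interface is an illustrative choice.
\begin{verbatim}
import numpy as np

def gmm_noise(shape, c, var_a, var_b, seed=None):
    # GMM noise: N(0, var_a) w.p. 1-c, N(0, var_b) w.p. c; a sketch.
    rng = np.random.default_rng(seed)
    outlier = rng.random(shape) < c
    std = np.where(outlier, np.sqrt(var_b), np.sqrt(var_a))
    return std * rng.standard_normal(shape)
\end{verbatim}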
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{Syn_new.eps}
\caption{Curves of average relative error under different noise environments. Left: $c=0$. Middle: $\sigma_A^2=0.01$, $c=0.1$. Right: $\sigma_A^2=0.01$, $\sigma_B^2=1$.}
\label{fig:syn_GMM}
\end{figure*}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Syn_rank.eps}
\caption{Average relative error as a function of the rank parameter $r$ with Gaussian noise.}
\label{fig:syn_rank}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Syn_size.eps}
\caption{Average relative error (left) and average running time (right) as a function of $n_1$ under GMM noise.}
\label{fig:syn_size}
\end{figure}
In all simulations, the maximum number of iterations of all algorithms is set to $500$ unless explicitly mentioned. The parameter $\eta$ in \eqref{sigmaout} for adaptive kernel width selection is set to $6$ and $2$ for synthetic data and real data, respectively. The lower bound $\sigma_{min}$ for kernel width selection is experimentally set to $0.3$ for synthetic data and $0.15$ for real data. The threshold $\varepsilon$ for the stopping criterion is set to $10^{-9}$ for synthetic data and $10^{-5}$ for real data. The regularization parameter $\beta$ for HQ-TCTF is set to $1$. For real data, the $\lambda$ for HQ-TCASD is fixed to $0.2$. Other parameters for each algorithm are tuned to achieve the best performance in each task. Note that the parameters of the different algorithms are not adapted across different noise settings in each simulation. Fixing the parameters is important since the noise properties could change and may not be measurable in practice.
\subsection{Synthetic Data}
In this section, we verify the performance of the proposed algorithms using synthetic data. The dimensions of the tensor are set to $n_1=n_2=200, n_3=20$. The low-tubal-rank tensor $\boldsymbol{\cal{M}}$ with tubal rank $\bar{r}$ is obtained by the t-product of two tensors whose entries are generated from a zero mean Gaussian distribution with unit variance. The indicator tensor $\boldsymbol{\cal{P}}$ with observation fraction $p$ is generated by randomly and uniformly assigning $p\times 100\%$ of the entries of $\boldsymbol{\cal{P}}$ the value $1$. The performance of an instance of tensor completion is evaluated using the relative error
\begin{equation}
\label{NMSE}
rel.err = \frac{{{{\left\|{\hat {\boldsymbol{\cal{M}}}}-\boldsymbol{\cal{M}} \right\|}_F}}}{{{\| \boldsymbol{\cal{M}} \|}_F}}\:,
\end{equation}
where $\hat{\boldsymbol{\cal{M}}}$ is the recovered tensor. The performance is evaluated by taking the ensemble average of the relative error over $T$ independent Monte Carlo runs with different instances of $\boldsymbol{\cal{P}}$ and the noise. In this section, we only compare the performance of the proposed algorithms to TNN, TNN-L1, TAM and TCTF, since the other algorithms use different definitions of the tensor rank.
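The relative error \eqref{NMSE} is straightforward to compute; a minimal sketch:
\begin{verbatim}
import numpy as np

def rel_err(M_hat, M):
    # Relative recovery error, cf. (NMSE); a sketch only.
    return np.linalg.norm(M_hat - M) / np.linalg.norm(M)
\end{verbatim}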
\begin{table*}[]
\centering
\caption{Completion Performance (PSNR) Comparison on four images from the DAVIS 2016 dataset}
\label{tb:image}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccccc}
\hline
Image & $c$ & SNN-L1 & SNN-WST & SNN-WHT & TRNN-L1 & TNN-L1 & HQ-MCASD & HQ-TCTF & HQ-TCASD \\ \hline
\multirow{4}{*}{bus} & 0 & 25.37 & 26.19 & 25.81 & 27.34 & {\ul 28.10} & 26.37 & \textbf{28.12} & 27.04 \\
& 0.1 & 25.03 & 25.73 & 25.23 & 26.20 & {\ul 26.72} & 25.85 & \textbf{27.60} & 26.25 \\
& 0.2 & 24.16 & 24.85 & 24.28 & 24.64 & 24.82 & 24.68 & \textbf{26.83} & {\ul 25.13} \\
& 0.3 & 23.32 & {\ul 23.98} & 22.90 & 22.40 & 21.83 & 20.11 & \textbf{25.55} & 23.39 \\ \hline
\multirow{4}{*}{paragliding} & 0 & 27.90 & 29.53 & 29.17 & 30.50 & 30.34 & 30.45 & \textbf{31.07} & {\ul 30.59} \\
& 0.1 & 29.01 & 28.99 & 28.46 & {\ul 29.17} & 28.92 & 29.07 & \textbf{30.06} & 29.15 \\
& 0.2 & 28.18 & {\ul 28.41} & 26.91 & 27.61 & 27.03 & 25.25 & \textbf{28.66} & 27.34 \\
& 0.3 & 26.48 & \textbf{27.70} & 24.52 & 24.93 & 24.01 & 19.40 & {\ul 26.56} & 24.78 \\ \hline
\multirow{4}{*}{dance} & 0 & 23.18 & 29.72 & 27.76 & 29.54 & {\ul 29.78} & 28.76 & \textbf{30.04} & 29.42 \\
& 0.1 & 11.85 & {\ul 28.60} & 26.42 & 27.95 & 28.06 & 27.86 & \textbf{29.38} & 28.40 \\
& 0.2 & 26.15 & {\ul 27.34} & 24.87 & 25.83 & 25.74 & 26.19 & \textbf{28.22} & 26.77 \\
& 0.3 & 24.97 & {\ul 25.91} & 22.77 & 23.10 & 22.33 & 20.60 & \textbf{26.56} & 24.47 \\ \hline
\multirow{4}{*}{drift} & 0 & 25.95 & 29.47 & 28.14 & 29.21 & {\ul 29.49} & 28.45 & \textbf{29.95} & 29.18 \\
& 0.1 & 26.88 & {\ul 28.34} & 26.44 & 27.73 & 27.85 & 27.33 & \textbf{29.12} & 28.02 \\
& 0.2 & 25.87 & {\ul 27.13} & 24.61 & 25.86 & 25.72 & 24.30 & \textbf{27.79} & 26.35 \\
& 0.3 & 24.87 & \textbf{25.73} & 21.25 & 23.28 & 22.50 & 19.11 & {\ul 25.64} & 23.65 \\ \hline
\end{tabular}%
}
\end{table*}
We first investigate the performance of the algorithms under different settings for the noise. The observation fraction $p$ is set to $0.5$ and the tubal rank $\bar{r}$ of $\boldsymbol{\cal{M}}$ is set to $10$. The rank parameter for all the algorithms is set to the true value, i.e. $r=10$. For each noise distribution, we average over $20$ Monte Carlo runs. The average relative error under different noise distributions is shown in Fig.~\ref{fig:syn_GMM}. One can observe that for Gaussian noise (i.e., $c=0$), all algorithms except TNN and TNN-L1 achieve the same favorable performance; however, for GMM noise with $c\neq 0$, the proposed robust algorithms HQ-TCTF and HQ-TCASD outperform all the other algorithms. Also, HQ-TCASD is shown to slightly outperform HQ-TCTF.
In many practical situations, the actual rank $\bar{r}$ may not be known. Therefore, we study the performance under different settings of $r$. Again, the observation fraction $p$ is set to $0.5$ and the actual tubal rank $\bar{r}= 10$. We use a Gaussian noise distribution with $\sigma_A^2=0.01$. For all factorization-based algorithms,
we gradually change the rank parameter $r$ between $5$ and $50$. Note that TNN and TNN-L1 do not require setting the rank since they use convex relaxation as described in Section \ref{sec:LTR_TC}. The other parameters are set as in the previous simulation. For HQ-TCTF, an additional algorithm with adaptive rank estimation (namely HQ-TCTF-RE) is also included for comparison. The average relative error under different rank parameters $r$ is shown in Fig.~\ref{fig:syn_rank} for the different algorithms. As shown, HQ-TCASD is still able to successfully complete the tensor $\boldsymbol{\cal{M}}$ with low relative error even when $r$ is set larger than the actual $\bar{r}$.
Finally, we compare the performance of the proposed algorithms and TNN-L1 with different tensor sizes under the GMM noise model. Here, we only compare to TNN-L1 since it is the only algorithm other than the proposed methods that can yield successful recovery under the GMM noise, as shown in Fig.~\ref{fig:syn_GMM}. The tensor size is set to $n_1 = n_2$ and $n_3=20$. The parameters of the GMM noise are set to $c=0.1, \sigma_A^2=0.01$ and $\sigma_B^2=10$. The rank $\bar{r}$ is set to $n_1\times0.05$. The rank of HQ-TCASD with $\lambda=1$ is set to $\bar{r}+5$ for fast completion speed. We gradually increase $n_1$ from $100$ to $1000$ and average the relative error over 20 Monte Carlo runs. The average relative error and average running time are shown in Fig.~\ref{fig:syn_size}. One can observe that the proposed algorithms always yield lower relative error and smaller computation time than TNN-L1. Further, HQ-TCASD with $\lambda=1$ incurs the shortest running time.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{image_time.eps}
\caption{Average running time for each image.}
\label{fig:image_time}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{example_image_new_large.eps}
\caption{Recovered images of different algorithms under GMM noise with $c=0.2$.}
\label{fig:image_large}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{example_image_new_small.eps}
\caption{Enlarged regions of the recovered images by different algorithms.}
\label{fig:image_small}
\end{figure*}
\subsection{Image Inpainting}
In this section, we evaluate the proposed HQ-TCTF and HQ-TCASD algorithms, along with other state-of-the-art robust completion algorithms, on the color image inpainting task. The performance evaluation metric is the peak signal-to-noise ratio (PSNR) defined as
\[
PSNR=10\log_{10}\frac{I_{max}^2n_1n_2n_3}{{{\left\| {{\hat {\boldsymbol{\mathcal{M}}}}}-\boldsymbol{\mathcal{M} }\right\|}_F^2}}\:,
\]
where $I_{max}$ denotes the largest value of the pixels of the image data. A higher PSNR signifies better recovery performance.
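A minimal sketch of the PSNR computation (using $I_{max}^{2}$ in the numerator, consistent with the definition above):
\begin{verbatim}
import numpy as np

def psnr(M_hat, M, I_max):
    # PSNR over the whole tensor; a sketch only.
    mse = np.mean(np.abs(M_hat - M)**2)
    return 10.0 * np.log10(I_max**2 / mse)
\end{verbatim}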
We choose four color images from the Densely Annotated Video Segmentation (DAVIS) 2016 dataset\footnote{https://davischallenge.org/davis2016/code.html} \cite{perazzi2016benchmark} to evaluate the completion performance of the different algorithms. Each color image can be naturally regarded as a $1920\times1080\times3$ tensor.
We set the observation fraction $p$ to 0.5, that is, we independently and randomly select $0.5n_1n_2$ pixels from each channel. Then, GMM noise with $\sigma_A^2=0.001$ and $\sigma_B^2=1$ is added to the observed pixels with probability $c$. The rank parameter for HQ-TCASD is set to $100$. For HQ-TCTF, the adaptive rank estimation procedure is utilized. The initial multi-rank vector $\boldsymbol{r}^0$ is set to $[100,100,100]$ and the minimum $\boldsymbol{r}$ is constrained to $[100,25,25]$. The average PSNR values for the four images are reported in Table \ref{tb:image} for different values of the noise parameter $c$, and the average running time for each image is shown in Fig.~\ref{fig:image_time}. One can observe that HQ-TCTF obtains the highest PSNR for most of the images and values of $c$. Examples of the recovered full and partially enlarged images are shown in Fig.~\ref{fig:image_large} and Fig.~\ref{fig:image_small}, respectively. As shown, the proposed methods yield visually smoother texture and more accurate color than the other methods.
\begin{table*}[htbp]
\caption{Completion Performance (PSNR) Comparison on four gray-scale videos from the DAVIS 2016 dataset}
\label{tb:video}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccccc}
\hline
Video & $c$ & SNN-L1 & SNN-WST & SNN-WHT & TRNN-L1 & TNN-L1 & HQ-MCASD & HQ-TCTF & HQ-TCASD \\ \hline
\multirow{4}{*}{scooter} & 0 & 27.61 & 24.33 & 22.51 & 28.05 & 27.56 & 28.28 & {\ul 28.44} & \textbf{28.74} \\
& 0.1 & 25.71 & 23.82 & 21.26 & 25.89 & 25.77 & \textbf{27.36} & 25.08 & {\ul 27.23} \\
& 0.2 & 19.83 & 22.77 & 19.16 & 22.26 & 22.94 & {\ul 25.28} & 21.68 & \textbf{25.33} \\
& 0.3 & 19.58 & {\ul 20.88} & 13.04 & 17.99 & 18.28 & 19.83 & 17.87 & \textbf{22.69} \\ \hline
\multirow{4}{*}{surf} & 0 & 28.59 & 27.80 & 25.05 & 28.88 & 28.75 & {\ul 29.37} & 27.76 & \textbf{29.49} \\
& 0.1 & 26.14 & {\ul 27.55} & 22.30 & 27.52 & 27.36 & 26.84 & 24.36 & \textbf{28.05} \\
& 0.2 & 25.84 & \textbf{26.99} & 20.26 & 25.86 & 25.18 & 21.59 & 19.58 & {\ul 26.24} \\
& 0.3 & 22.00 & \textbf{25.77} & 15.53 & 22.03 & 20.81 & 17.10 & 19.82 & {\ul 23.90} \\ \hline
\multirow{4}{*}{train} & 0 & 25.49 & 22.97 & 21.16 & \textbf{25.90} & {\ul 25.79} & 23.70 & 24.93 & 25.51 \\
& 0.1 & 23.89 & 22.55 & 20.49 & 23.88 & 23.99 & 23.09 & {\ul 24.04} & \textbf{24.50} \\
& 0.2 & 21.90 & 21.65 & 17.44 & 21.13 & 21.37 & 21.89 & {\ul 22.48} & \textbf{23.17} \\
& 0.3 & 19.20 & {\ul 20.18} & 11.59 & 17.81 & 17.55 & 18.30 & 19.71 & \textbf{21.69} \\ \hline
\multirow{4}{*}{flamingo} & 0 & 24.48 & 21.78 & 21.60 & 24.98 & 25.39 & 24.39 & {\ul 25.65} & \textbf{25.93} \\
& 0.1 & 23.56 & 21.90 & 20.68 & 23.76 & {\ul 24.14} & 24.10 & 23.47 & \textbf{24.57} \\
& 0.2 & {\ul 22.15} & 21.50 & 18.53 & 21.80 & 22.02 & 21.56 & 20.96 & \textbf{23.07} \\
& 0.3 & 20.02 & {\ul 20.27} & 13.27 & 18.40 & 18.11 & 19.12 & 18.12 & \textbf{21.04} \\ \hline
\end{tabular}%
}
\end{table*}
\subsection{Video data completion}
In this section, we evaluate the performance of the algorithms using video data. Four gray-scale video sequences from the DAVIS 2016 dataset are used for testing completion performance. Due to computer memory limitations, the resolution of each video is scaled down to $1280\times 720$ from the original $1920\times 1080$ resolution, and the first 30 frames of each sequence are selected, such that each video sequence forms a tensor of size $1280 \times 720 \times 30$. For each sequence, $0.5n_1n_2n_3$ pixels are randomly and uniformly selected as the observed data. Then, the observed data is perturbed with noise from the GMM model with the same settings as in the previous experiment. The rank for HQ-TCASD is set to $80$. The initial multi-rank vector $\boldsymbol{r}^0$ for HQ-TCTF is set to $[80, 80,\ldots, 80]$ and the minimum $\boldsymbol{r}$ is constrained to $[80, 60,\ldots, 60]$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{video_time.eps}
\caption{Average running time for each video.}
\label{fig:video_time}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\textwidth]{example_video_new_large.eps}
\caption{Frames recovered by different algorithms under the GMM noise model with $c=0.2$.}
\label{fig:video_large}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1\textwidth]{example_video_new_small.eps}
\caption{Enlarged regions (red rectangles in Fig.~\ref{fig:video_large}) of the frames recovered by the different algorithms.}
\label{fig:video_small}
\end{figure*}
We compare performance under different occurrence probabilities for the outliers. The average PSNR is shown in Table \ref{tb:video} for different values of $c$. The corresponding average running times are depicted in Fig.~\ref{fig:video_time}. The proposed HQ-TCASD algorithm achieves the highest PSNR values in most situations with low computation time. To shed more light on performance, examples of recovered video frames from four video sequences are illustrated in Fig.~\ref{fig:video_large}. The small regions surrounded by red rectangles are shown enlarged in Fig.~\ref{fig:video_small}.
It can be seen that HQ-TCASD yields frames that are less noisy and with better contrast than the ones recovered by the other methods.
\subsection{Traffic data prediction}
In this section, we further evaluate the performance of the algorithms using traffic data. The traffic data is generated from the Large-scale PeMS traffic speed dataset\footnote{https://doi.org/10.5281/zenodo.3939793} \cite{mallick2020transfer}. The dataset comprises traffic speed time series from $11160$ sensors over $4$ weeks with $288$ time points per day (i.e., 5-min frequency) in California, USA. Thus it forms a $11160\times288\times28$ tensor. Each value of the data is normalized such that all data are in the range $[0,1]$. In this experiment, we randomly and uniformly selected $50\%$ of the data points as the observed data. The noise parameter $\sigma_A^2$ is set to zero and the outliers have $\sigma_B^2=1$. For HQ-TCASD, the rank $r$ is set to $20$. For HQ-TCTF, the elements of the multi-rank vector are all fixed at $20$. We perform $20$ Monte Carlo runs for each value of $c$ with different selections of observed data and noise. The values of the average relative error under different simulation settings are reported in Table \ref{tb:traffic}. HQ-TCASD achieves the best performance for $c=0.2$ and $0.3$. To better illustrate the recovery performance, an example of the data recovered from sensor No. $9960$ on the $26$th day under $c=0.3$ is depicted in Fig.~\ref{fig:traffic}. It can be seen that the proposed HQ-TCASD outperforms the other algorithms.
\renewcommand\arraystretch{1.2}
\begin{table}[htbp]
\caption{Completion performance (relative error) comparison on traffic data.}
\label{tb:traffic}
\resizebox{0.48\textwidth}{!}
{%
\begin{tabular}{C{1.5cm}C{0.6cm}C{0.7cm}C{0.6cm}C{0.7cm}C{0.6cm}C{0.7cm}}
\hline
\multirow{2}{*}{Methods} & \multicolumn{2}{c}{$c=0.1$} & \multicolumn{2}{c}{$c=0.2$} & \multicolumn{2}{c}{$c=0.3$} \\ \cline{2-7}
& ~rel.err & ~~time & ~rel.err & ~~time & ~rel.err & ~~time \\ \hline
SNN-L1 & 0.0673 & 14437.7 & 0.0692 & 10440.9 & 0.0726 & 13092.0 \\
SNN-WST & 0.0705 & 8145.53 & 0.0728 & 9423.65 & 0.0755 & 10932.2 \\
SNN-WHT & 0.0753 & 6891.92 & 0.0809 & 6745.36 & 0.0935 & 6328.92 \\
TRNN-L1 & 0.0489 & 12208.6 & 0.0511 & 13850.3 & 0.0536 & 14832.2 \\
TNN-L1 & 0.0436 & ~568.65 & 0.0466 & ~710.64 & {\ul 0.0527} & 1011.82 \\
HQ-MCASD & \textbf{0.0377} & ~189.78 & 0.0575 & ~311.89 & 0.1179 & ~618.94 \\
HQ-TCTF & {\ul 0.0385} & ~316.56 & {\ul 0.0451} & ~351.35 & 0.0560 & ~396.08 \\
HQ-TCASD & 0.0397 & ~984.19 & \textbf{0.0423} & ~817.58 & \textbf{0.0476} & ~738.43 \\ \hline
\end{tabular}%
}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{traffic.eps}
\caption{Examples of the recovered missing signals of traffic data.}
\label{fig:traffic}
\end{figure}
\section{Conclusion}
\label{sec:conc}
In this paper, we proposed a novel robust tensor completion method that utilizes tensor factorization to impose a low-tubal-rank structure, thereby avoiding computation of the SVD. The correntropy measure is introduced to alleviate the impact of large outliers. Based on a half-quadratic minimization technique, two efficient robust tensor completion algorithms, HQ-TCTF and HQ-TCASD, were developed and their convergence was analyzed. Experiments on both synthetic and real datasets demonstrate the superior performance of the proposed methods compared to existing state-of-the-art algorithms.
\begin{appendices}
\section{Proof of Proposition 2}
Since $\boldsymbol{\cal{W}}$ is an optimal solution for \eqref{eq:HQW}, we have
\begin{equation}
J({\boldsymbol{\cal{X}}^{t+1},\boldsymbol{\cal{Y}}}^{t+1},\boldsymbol{\cal{W}}^{t+1})\leq J({\boldsymbol{\cal{X}}^{t+1},\boldsymbol{\cal{Y}}}^{t+1},\boldsymbol{\cal{W}}^{t}) \:.
\end{equation}
By fixing $\boldsymbol{\cal{W}}$ and defining $\boldsymbol{Q}=\sqrt{\tilde{\boldsymbol{W}}} \circ \tilde{\boldsymbol{P}}$, we obtain the following \begin{equation}
\begin{aligned}
\label{eq:ASD_p1_g}
&2J(\boldsymbol{U}^{t+1},\hat{\boldsymbol{Y}}^{t})-2J(\boldsymbol{U}^{t},\hat{\boldsymbol{Y}}^{t})\\
&={\left\|{\boldsymbol{Q} \circ \left({\tilde{\boldsymbol{M}}}-{\boldsymbol{U}^{t+1}{\hat{{\boldsymbol{Y}}}^{t}}}\right)}\right\|_{F}^2}-{\left\|{\boldsymbol{Q} \circ \left({\tilde{\boldsymbol{M}}}-{\boldsymbol{U}^{t}{\hat{{\boldsymbol{Y}}}^{t}}}\right)}\right\|_{F}^2}\\
&={\left\|{\boldsymbol{Q} \!\circ\! \left({\tilde{\boldsymbol{M}}}\!-\!({\boldsymbol{U}^{t}-\mu_{\boldsymbol{U}}^{'t}{\boldsymbol{g}}^{'t}_{\boldsymbol{U}}){\hat{{\boldsymbol{Y}}}^{t}}}\right)}\right\|_{F}^2}\!-\!{\left\|{\boldsymbol{Q} \!\circ\! \left({\tilde{\boldsymbol{M}}}\!-\!{\boldsymbol{U}^{t}{\hat{{\boldsymbol{Y}}}^{t}}}\right)}\right\|_{F}^2}\\
&=\!(\mu_{\boldsymbol{U}}^{'t})^2{\left\|{\boldsymbol{Q} \!\circ\! \left({{\boldsymbol{g}}^{'t}_{\boldsymbol{U}}{\hat{{\boldsymbol{Y}}}^{t}}}\right)}\right\|_{F}^2}\!+2\mu_{\boldsymbol{U}}^{'t}\!\left<{\!\boldsymbol{Q} \!\circ\! \left({\tilde{\boldsymbol{M}}}\!-\!{\boldsymbol{U}^{t}{\hat{{\boldsymbol{Y}}}^{t}}}\right)}, {\boldsymbol{Q} \!\circ\!\left({{\boldsymbol{g}}^{'t}_{\boldsymbol{U}}{\hat{{\boldsymbol{Y}}}^{t}}}\right)}\right>\\
&=\frac{(\|\boldsymbol{g}^{'t}_{{{\boldsymbol{U}}}}\|_F^2)^2}{\left\|\boldsymbol{Q}\circ\left( {\boldsymbol{g}}^{'t}_{{\boldsymbol{U}}}{{\hat{\boldsymbol{Y}}}^{t}}\right)\right\|_F^2}-2\mu_{\boldsymbol{U}}^{'t}\left<{\boldsymbol{g}}^{t}_{\boldsymbol{U}}, {{\boldsymbol{g}}^{'t}_{\boldsymbol{U}}}\right>
\end{aligned}
\end{equation}
We can further simplify $\langle{\boldsymbol{g}}_{\boldsymbol{U}}, {{\boldsymbol{g}}^{'}_{\boldsymbol{U}}}\rangle$ as
\begin{equation}
\begin{aligned}
\label{eq:ASD_p2_g1}
\left<{\boldsymbol{g}}_{\boldsymbol{U}}, {{\boldsymbol{g}}'_{\boldsymbol{U}}}\right>=&\operatorname{tr}\left(\boldsymbol{g}^{H}_{\boldsymbol{U}}\boldsymbol{F}^{-1}\operatorname{bdiagz}(\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})\right)\\
=&\frac{1}{n_3}\operatorname{tr}\left((\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})^{H}\operatorname{bdiagz}(\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})\right)\\
=&\frac{1}{n_3}\left\|\operatorname{bdiagz}(\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})\right\|_F^2
\end{aligned}
\end{equation}
where $\operatorname{tr}(\cdot)$ denotes the trace operator. Further, $\|\boldsymbol{g}'_{{{\boldsymbol{U}}}}\|_F^2$ can be simplified as
\begin{equation}
\begin{aligned}
\label{eq:ASD_p2_g2}
\|\boldsymbol{g}'_{{{\boldsymbol{U}}}}\|_F^2=\|\boldsymbol{F}^{-1}\operatorname{bdiagz}(\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})\|_F^2=\frac{1}{n_3}\|\operatorname{bdiagz}(\boldsymbol{F}\boldsymbol{g}_{\boldsymbol{U}})\|_F^2
\end{aligned}
\end{equation}
where we use the fact that $\boldsymbol{F}^{H}\boldsymbol{F}=n_3\boldsymbol{I}$, i.e., $\boldsymbol{F}^{-1}=\frac{1}{n_3}\boldsymbol{F}^{H}$. Therefore, according to \eqref{eq:ASD_p2_g1} and \eqref{eq:ASD_p2_g2} we have
\begin{equation}
\begin{aligned}
\|\boldsymbol{g}'_{{{\boldsymbol{U}}}}\|_F^2=\langle{\boldsymbol{g}}_{\boldsymbol{U}}, {{\boldsymbol{g}}^{'}_{\boldsymbol{U}}}\rangle
\end{aligned}
\end{equation}
and \eqref{eq:ASD_p1_g} can be written as
\begin{equation}
\label{eq:ASD_p2_g3}
J(\boldsymbol{U}^{t+1},\hat{\boldsymbol{Y}}^{t})-J(\boldsymbol{U}^{t},\hat{\boldsymbol{Y}}^{t})=-\frac{(\|\boldsymbol{g}^{'t}_{{{\boldsymbol{U}}}}\|_F^2)^2}{2\left\|\boldsymbol{Q}\circ\left( {\boldsymbol{g}}^{'t}_{{\boldsymbol{U}}}{{\hat{\boldsymbol{Y}}}}\right)\right\|_F^2}\leq 0
\end{equation}
Similarly, using the convexity of $J$ in $\hat{\boldsymbol{Y}}$, we can obtain
\begin{equation}
\begin{aligned}
\label{eq:ASD_p2_g4}
&J(\boldsymbol{U}^{t+1},\hat{\boldsymbol{Y}}^{t+1})-J(\boldsymbol{U}^{t+1},\hat{\boldsymbol{Y}}^{t})\\
&\leq-(1\!-\!\lambda)\frac{(\|\boldsymbol{g}^t_{{\hat{\boldsymbol{Y}}}}\|_F^2)^2}{2\left\|{\boldsymbol{Q}} \!\circ\! \left({\boldsymbol{U}^{t+1}}{\boldsymbol{g}}^t_{{\hat{\boldsymbol{Y}}}}\right)\right\|_F^2}\!-\!\lambda\frac{|\langle\boldsymbol{g}^t_{{\hat{\boldsymbol{Y}}}},\boldsymbol{g}'^t_{{\hat{\boldsymbol{Y}}}}\rangle|^2}{2\left\|{\boldsymbol{Q}} \!\circ\! \left({\boldsymbol{U}^{t+1}}{\boldsymbol{g}}'^t_{{\hat{\boldsymbol{Y}}}}\right)\right\|_F^2}\\
&\leq 0
\end{aligned}
\end{equation}
Inequalities \eqref{eq:ASD_p2_g3} and \eqref{eq:ASD_p2_g4} imply that
\begin{equation}
J(\boldsymbol{U}^{t+1},\hat{\boldsymbol{Y}}^{t+1})\leq J(\boldsymbol{U}^{t},\hat{\boldsymbol{Y}}^{t})
\end{equation}
Thus, according to the relation between $\boldsymbol{U}$ and $\boldsymbol{\cal{X}}$, ${\hat{\boldsymbol{{ Y }}}}$ and $\boldsymbol{\cal{Y}}$, we have that
\begin{equation}
J({\boldsymbol{\cal{X}}^{t+1},\boldsymbol{\cal{Y}}}^{t+1},\boldsymbol{\cal{W}}^{t+1})\leq J({\boldsymbol{\cal{X}}^{t},\boldsymbol{\cal{Y}}}^{t},\boldsymbol{\cal{W}}^{t})\:.
\end{equation}
It can also be verified that $J({\boldsymbol{\cal{X}}^{t},\boldsymbol{\cal{Y}}}^{t},\boldsymbol{\cal{W}}^{t})$ is bounded below for arbitrary $t$. Thus, $\{J({\boldsymbol{\cal{X}}^{t},\boldsymbol{\cal{Y}}}^{t},\boldsymbol{\cal{W}}^{t}),t=1,2,\ldots\}$ converges.
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
A historical reconstruction of some attempts to determine the cosmic curvature radius by means of astronomical observations is presented in the following. The framework is that of the early phase of the science of the universe in the modern accepted meaning, from the beginning of relativistic cosmology in 1917 to the diffusion of theoretical models describing the expanding universe in 1930. Some leading scientists, who participated in the debate at that time, investigated the measurable properties of the universe, which was assumed to be static. In the light of the relativistic interpretation of the curvature of space-time, they speculated on the finite dimension of the universe, and, in this sense, attempted to extrapolate the size of the universe, i.e. the value of the cosmic curvature radius. After mentioning the very first suggestion in relativistic cosmology offered by Albert Einstein (1879-1955) on the value of the world radius, we focus on the analysis carried out by Willem de Sitter (1872-1934), who obtained a rough measure of the curvature radius of space-time. We then examine the different works in this field by Ludwik Silberstein (1872-1948) and Knut Lundmark (1889-1958). Finally, we give an overview of further investigations on the topic carried out before the discovery of the expanding universe. The variety of methods to obtain the curvature radius and corresponding estimates is summarized in table \ref{final}. The aim of this paper is to illustrate how the ideas, methods, and results proposed to measure the world radius can be read as noteworthy contributions to the first interplay between some of the speculative predictions of modern cosmology and the increasing amount of pertinent empirical evidence from the observable part of the universe.
The first two pioneering and rival cosmological solutions of relativistic field equations, formulated in 1917 by Einstein and by de Sitter, were the main focus of interest during the early phase of relativistic cosmology. These solutions were intended to represent a static universe with finite curvature radius. The Einstein universe was spherical and filled with matter, whereas the hyperboloidal (or equivalently hyperspherical) model of de Sitter was empty of matter and radiation. The possibility of extrapolating the radius of the universe (hereafter denoted by $R$) from observations was recognized as one of the interesting astronomical consequences of general relativity. However, such a quest represented an issue of secondary importance with respect to the investigation of the different properties of Einstein's and de Sitter's models in that period.
Actually, during the Twenties, the first link between theoretical cosmology and observational astronomy was mainly concerned with the interpretation of the displacement of wavelengths measured in spectra of stars, globular clusters, and especially spiral nebul{\ae}. The discovery of the expanding universe arose from the search for a suitable relation between observable quantities such as spectral shifts, distances, and apparent diameters. It was especially de Sitter's empty universe which attracted interest among scientists since it offered a (rather puzzling) interpretation of spectral displacements measured in stars and spiral nebul{\ae}. With regard to the content of the universe, Edwin Hubble (1889-1953) proved in 1925 the very existence of spiral nebul{\ae} as truly extragalactic stellar systems. The subsequent consensus about the status of spirals thus marked the change from the conception of a universe made of stars and nebul{\ae} to the picture of a universe filled with galaxies (nowadays, galaxy clusters and superclusters are regarded as the fundamental pieces which contribute to the matter content of the universe). The transition to the expanding universe took place when Hubble himself, in collaboration with Milton Humason (1891-1972), provided the empirical evidence that distant galaxies receded from each other: in 1929 Hubble confirmed that a linear relation, later known as the ``Hubble law'', existed between redshift and distance of extragalactic nebul{\ae}. From 1930 on, such a discovery subsequently allowed the acceptance and the diffusion of the relativistic non-static and non-empty models of the universe, which had been formulated already in 1922 by Aleksandr Friedmann (1888-1925), and independently in 1927 by Georges Lema\^{i}tre (1894-1966). Eventually, it was in 1930 that the cosmological interpretation of redshift in spirals as due to the expansion of the universe, first proposed by Lema\^{i}tre in 1927, officially entered modern cosmology.
After the discovery of the expanding universe, the concept of a finite and constant world radius was superseded by the notion of the curvature radius depending on time $R(t)$, which later evolved to the present notion of the time-dependent expansion parameter $a(t)$, also called cosmic scale factor\footnote{In modern cosmology the expansion parameter $a(t)$ is related to the Gaussian curvature $C_{G}=\frac{k}{a^{2}}$. The parameter $k$ determines the constant curvature of spatial sections. It can be negative ($k=-1$), null ($k=0$), or positive ($k=+1$), yielding respectively an open universe (3-dimensional hyperbolical space), a flat universe (Euclidean space) or a closed universe (3-dimensional spherical space). In fact, the curvature parameter can be scaled in such a way to assume only the values $k$ = (1, 0, -1). The parameter $a(t)$ thus represents the radius of spatial curvature, which in cosmology describes the modulus of Gaussian curvature radius $R_{G}=C_{G}^{-1/2}=\frac{a}{\sqrt{|k|}}$ \cite[pp. 9-13]{Coles-Lucchin 2002}.}.
In this picture, the efforts made in the period 1917-1930 to specify the spatial extent of the universe represent a short but significant chapter in the history of the early development of relativistic cosmology. The present paper intends to show the intrinsic interest and the reactions that the quest for the size of the universe stimulated in those years\footnote{The present work is mainly based on chapters 5 and 6 of: Realdi, Matteo. 2009. \textit{Cosmology at the turning point of relativity revolution. The debates during the 1920s on the `de Sitter effect'} (PhD Thesis, University of Padova), from which some parts have been taken, and here adapted.}. De Sitter, Silberstein, and Lundmark ventured the path, now in the new framework of relativistic cosmology, of directly estimating $R$ by using several astronomical objects, which were intended to play the role of distance indicators. Going beyond Einstein's first attempt, these authors used either observations of stars, or assumptions on the mean density of matter, or velocities of globular clusters and spiral nebul{\ae} in order to determine the value of $R$. Indeed, the study of distant galaxies during the Twenties marked a turning point in the empirical approach to cosmology. In this context, the use of distance indicators such as stars and globular clusters was soon after recognized as not relevant to search for the curvature of space and to distinguish between cosmological models. Nevertheless, the attempts made by de Sitter, Silberstein, and Lundmark influenced the early debate about the observational tests of the first relativistic world models. Moreover, such attempts can be considered as valuable aspects of the broader scientific aim to interpret empirical evidence on astronomical scales through the laws of physics.
The analysis given in the following pages will highlight the different approaches and the significance such authors attached to the science of the universe. On the one hand, we shall see, de Sitter inaugurated in his works the \textit{systematic} attempt to relate astronomical observations from available data to the geometry of the entire universe in the framework of general relativity. He studied different methods to determine the world radius of both Einstein's universe and of his empty model. Nonetheless, de Sitter clearly argued for the very speculative meaning of investigating the universe as a whole, claiming that all conclusions drawn beyond observations had to be considered pure extrapolations. On the other hand, Silberstein addressed in 1924 the question of the determination of the curvature radius of de Sitter's space-time by means of globular clusters. He firmly denied the general cosmic recession of test particles in de Sitter's universe, which had been predicted in 1923 both by Arthur Eddington (1882-1944) and independently by Hermann Weyl (1885-1955). Silberstein formulated a theoretical linear relation between shift and distance, which he applied to the observed receding and approaching motions of globulars. Lundmark showed that the result obtained by Silberstein was not correct. In his detailed empirical analysis on some classes of stars and spiral nebul{\ae}, Lundmark proposed several values of the curvature radius of de Sitter's universe, and, in agreement with the cosmology of Carl Charlier (1862-1934), suggested the picture of a hierarchical distribution of stars and nebul{\ae}.
With its focus on the specific question of early attempts to estimate the value of the cosmic curvature radius before the discovery of the expanding universe, the present analysis offers an additional point of view supplementing those given in the vast scientific literature, which already exists on the history of the early developments of modern cosmology (see for instance \cite{Ellis 1989,Ellis 1990,Hetherington 1996,Kerszberg 1989,Kragh-Smith 2003,North 1965,Nussbaumer-Bieri 2009,Osterbrock 1990,Smith 1982,Smith 2009}).
\section{Einstein, the cosmological constant, and the curvature radius}
In 1917, as mentioned above, Einstein and de Sitter proposed two different cosmological solutions of the relativistic field equations. Incidentally, it was in the same year that the Hooker 100-inch (2.5 m) telescope, the instrument which Hubble used during the following years for his seminal contributions to observational cosmology, saw ``first light'' on Mount Wilson, California\footnote{For a historical account of the Mount Wilson Observatory, see \cite{Sandage 2004}.}.
The origin of inertia can be viewed as the main question which actually led Einstein and de Sitter to the formulation of their respective models of the universe\footnote{The debate between Einstein and de Sitter, which marked the beginning of relativistic cosmology, is analyzed in: \textit{The Einstein - de Sitter - Weyl - Klein debate}, in \cite[pp. 351-357]{CPAE 1998}. The authors of the present paper reconstructed part of the Einstein - de Sitter correspondence in \cite{Realdi-Peruzzi 2009}.}. In his famous paper \textit{Kosmologische Betrachtungen zur allgemeinen Relativit\"{a}tstheorie} (\textit{Cosmological considerations in the general theory of relativity}), which appeared in February 1917, Einstein proposed a finite and unbounded universe, where the spatial sections at constant time were spherical, with constant positive curvature radius $R$. Accounting for the complete material origin of inertia, as inspired by some ideas of Ernst Mach (1838-1916), such a closed universe implied that inertia was uniquely determined by the interaction between masses. In this way, Einstein achieved what he called the relativity of inertia, i.e. he avoided the necessity of assuming that some independent property of space was at the origin of inertia. Furthermore, by means of the spatial closedness he overcame the difficulty of prescribing values of the gravitational potentials (identified by the symbols $g_{\mu\nu}$) which at infinity would be invariant under all transformations.
Einstein disregarded local non-homogeneous distributions of matter (like stars and planets), and introduced in his model an extremely small density of matter which hypothetically was uniformly and homogeneously distributed through space. It is worth noting that the assumption of global average properties of matter, when considering cosmological scales, turned out to be one of the typical features of the modern approach to cosmology. The condition of homogeneity and isotropy is now referred to in the literature as the ``cosmological principle'': matter and radiation are assumed to be uniformly distributed through space on very large scales, with neither privileged directions, nor privileged positions. In the present picture of the expanding universe, the cosmological principle asserts that the universe exhibits the same properties at any given cosmic time, i.e. that the 3-space surfaces of constant cosmic time are homogeneous and isotropic.
Einstein introduced in his field equations the so-called cosmological term, $\lambda g_{\mu\nu}$, which, in his intention, acted like an anti-gravity term and accounted for the supposed static equilibrium of the universe. At the beginning of the last century, in fact, astronomical observations did not (yet) reveal any large-scale systematic velocity fields. With the cosmological constant, the relativistic field equations, which relate the space-time geometry (on the left-hand side) to the energy-matter content of space-time (on the right-hand side), took the form:
\begin{equation}
G_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\mathcal{G}-\lambda g_{\mu\nu}=-\kappa
T_{\mu\nu}.
\end{equation}
Here $G_{\mu\nu}$ is the Ricci tensor, i.e. the contracted Riemann
curvature tensor, $\mathcal{G}$ is the Ricci scalar, i.e. the scalar curvature obtained
from $\sum\,g^{\mu\nu}G_{\mu\nu}$, and $T_{\mu\nu}$ is the energy-momentum
tensor; $\kappa$ is a constant equal to $\frac{8\pi G}{c^{4}}$, where $c$ is the speed of light and $G$ is the gravitational constant. In this way, the metric of a static and closed universe now emerged as a coherent solution of the modified field equations. This was the ``rough and winding road'' \cite[p. 423]{Einstein 1917}, which Einstein acknowledged he followed in order to show that the theory of general relativity led to a system free of contradictions.
The value of the cosmological constant $\lambda$ was related to $R$, and to the mean density
of world matter $\rho$ \cite[p. 431]{Einstein 1917}:
\begin{equation}
\lambda=\frac{1}{R^{2}}=\frac{\kappa\rho c^{2}}{2}\,.
\end{equation}
By means of such a relation, Einstein derived the first estimate of $R$ in the framework of relativistic cosmology. This estimate was not reported in his 1917 cosmological paper, but can be found in some correspondence \cite[docs. 298, 300, 306, 308, 311]{CPAE 1998}. In these letters, Einstein reported that, from star counts, the spatial density of matter was of the order of $\rho\simeq 10^{-22}$ g/cm$^{3}$. Therefore, from the previous relation the world radius of his model was $R=10^{7}$ light-years ($6\cdot10^{11}$ AU)\footnote{The astronomical unit (AU) and the light-year are units of distance in astronomy and cosmology. The astronomical unit corresponds to nearly $1.49\cdot10^{11}$ m, whereas the light-year is approximately $9.46\cdot10^{15}$ m, or roughly 63,241 AU. Another unit of distance is the parsec (pc), equal to $3.09\cdot10^{16}$ m (1 pc = 206,264.8 AU $\simeq$ 3.26 light-years).}. The farthest visible stars, Einstein remarked, were estimated to lie at a distance of $10^{4}$ light-years ($6\cdot10^{8}$ AU).
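Although the arithmetic is not spelled out in these letters, the order of magnitude is easily checked (the following reconstruction is ours, not Einstein's). Substituting $\kappa=\frac{8\pi G}{c^{4}}$ into the previous relation gives $R=c/\sqrt{4\pi G\rho}$; with $\rho\simeq10^{-22}$ g/cm$^{3}=10^{-19}$ kg/m$^{3}$,
\[
R\simeq\frac{3\cdot10^{8}\ \mathrm{m/s}}{\sqrt{4\pi\cdot6.7\cdot10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}}\cdot10^{-19}\ \mathrm{kg/m^{3}}}}\simeq3\cdot10^{22}\ \mathrm{m},
\]
i.e. a few million light-years, in order-of-magnitude agreement with the value quoted by Einstein.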
Later on, Einstein showed interest in the possibility of determining the value of the cosmological constant. Although such an estimate would directly reveal the value of the curvature radius, Einstein was mainly interested in the confirmation of the \textit{existence} in nature of the cosmological constant, which, in Einstein's own words, ``is not justified by our actual knowledge of gravitation'' \cite[p. 432]{Einstein 1917}, and which he had introduced in 1917 only to account for a static universe. A hint about this question can be found in some considerations which Einstein proposed in 1921 on the application of the Newtonian law of gravitation to globular clusters, and on the stationary equilibrium of such stellar systems \cite[pp. 394-395]{Einstein 1921a}, \cite{Einstein 1921b}. In this framework, Einstein referred to some observations made by Erwin Freundlich\footnote{We refer to \cite{Crelinsten 2006} for a historical reconstruction of the attempts made by astronomers to test Einstein's theory of relativity; see \cite{Hentschel 1994} for the role played by Freundlich in testing relativity theory.} (1885-1964). In fact, the comparison between the observed stellar velocities and the average theoretical velocity of stars (the latter obtained through the virial theorem in the case of Newtonian forces) would possibly have revealed the presence of a non-zero cosmological constant, which acted like an anti-gravity term and was thus able to maintain the equilibrium of globular clusters. However, such an attempt, which is thoroughly analyzed in the related notes of \cite{Einstein 1921b}, did not give reliable results\footnote{In 1931, confronted with the empirical evidence that galaxies were receding from one another, Einstein abandoned the cosmological constant which he had introduced in 1917 in order to express in general relativity the static nature of the universe. In fact, this hypothesis was contradicted by the observed recession of spiral nebul{\ae}, which now supported the interpretation of the expanding universe \cite{Einstein 1931}. For a historical account of the cosmological constant, see \cite{Earman 2001}.}.
\section{The size of the universe according to de Sitter}
The second relativistic model of the universe was proposed by de Sitter right after Einstein's 1917 cosmological paper was published. As discussed above, in Einstein's universe the world matter, uniformly distributed through space, was solely responsible for the origin of inertia, and the cosmological constant was necessary for a static world model. On the contrary, in his own model, de Sitter assumed that the contribution of the density of matter could be disregarded on the largest scales. The Dutch astronomer retained in his solution the $\lambda$-term, which was now responsible for the curvature of space-time and for the inertia of a hypothetical test particle inserted in such a world free of matter.
In fact, de Sitter developed a suggestion by Paul Ehrenfest (1880-1933), and extended Einstein's hypothesis of a finite 3-dimensional space to a finite 4-dimensional space-time of positive constant curvature. This model corresponded to a hypersphere embedded in Euclidean space, or, equivalently, to a hyperboloid embedded in Minkowski space-time. In such a model, the curvature radius was related to the cosmological constant through the relation \cite[doc. 313]{CPAE 1998}:
\begin{equation}
\lambda=\frac{3}{R^{2}}\,.
\end{equation}
De Sitter took into account the cosmological term in order to satisfy what he called the \textit{mathematical} postulate that at infinity the potentials were invariant under all transformations, a postulate which, according to de Sitter himself, did not have any real \textit{physical} meaning. It is for this purpose that he proposed a cosmological solution where all $g_{\mu\nu}$ vanished at infinity, i.e. an empty model of the universe. As de Sitter wrote in 1932, the cosmological constant was ``a name without any meaning, which (...) appeared to have something to do with the constitution of the universe; but it must not be inferred that, since we have given it a name, we know what it means. (...) It is put in the equations in order to give them the greatest possible degree of mathematical generality'' \cite[p. 121]{de Sitter 1932b}. On the one hand, as a matter of fact, de Sitter clearly acknowledged the importance of the \textit{mathematical} solutions that general relativity now offered for studying the universe as a whole. On the other hand, however, he pointed out that such investigations were built upon pure \textit{hypotheses} which could never be proven by empirical evidence, since they referred to unobservable parts of the universe, and thus corresponded to \textit{extrapolations} beyond our neighborhood which ``can not be decided by physical arguments, but must depend on metaphysical or philosophical considerations'' \cite[p. 1222]{de Sitter 1917a}.
An interesting hint of this feature of de Sitter's approach to the cosmological question can be found in the original manuscripts written by the Dutch astronomer at the very early stages of relativistic cosmology. We mention for instance some notes on general relativity reported by de Sitter after conversations in Leiden between Einstein, de Sitter himself, Ehrenfest, and Gunnar Nordstr\"{o}m (1881-1923) on September 28-29, 1916. Actually, this first exchange on the problem of boundary conditions (which some months later Einstein solved by introducing a finite universe, whereas de Sitter solved it by considering that all potentials at infinity should be zero) marked the beginning of the debate between Einstein and de Sitter on the relativistic description of the universe as a whole. De Sitter wrote in late 1916 that:
\begin{quote}
Einstein wants the \underline{hypothesis of the closedness} of the world. He means by that that he makes the \underline{hypothesis} (conscious that it is a hypothesis which cannot be proven) that at infinity (that is at very large, \underline{mathematically} finite, distance, but further than any observable material object (...)) there are such masses (...) that the $g_{\mu\nu}$ at infinity assume \underline{certain degenerate values} (these have not to be 0, that is a priori not to be said), \underline{the same} in \underline{all} systems. (...) He is even prepared to give up the complete freedom of transformation (...). If it is possible to find a set of degenerate values of the $g_{\mu\nu}$, that is invariant for a not too restricted group of transformations, is a question that can be solved mathematically. Is the answer \underline{no} (what Ehrenfest and I expect), then Einstein's hypothesis of the closedness is untrue. Is the answer yes, then the hypothesis is not incompatible with the relativity theory. However, I \underline{even then} maintain my opinion that it is incompatible with the \underline{spirit} of the principle of relativity. And Einstein admits that I have the right to do so. Also the rejection of the hypothesis is completely admissible in the relativity theory\footnote{``Einstein wil de \underline{hypothese van de afgeslotenheid} der wereld. Hij verstaat daaronder dat hij de \underline{hypothese} maakt (bewust dat het een onbewijsbare hypothese is) dat er in het oneindige (d.i. op zeer groote, \underline{mathematisch} eindige, afstand, maar verder dan eenig waarneembaar materieel object, (...)) zoodanige massa's zijn (...) dat de $g_{ij}$ in het oneindige \underline{bepaalde ontaarde waarden} aannemen (deze hoeven niet 0 te zijn, dat is a priori niet te zeggen), \underline{dezelfde} in \underline{alle} co\"{o}rdinatensystemen. (...) Hij is dan ook bereid de volkomen vrijheid van transformaties op te geven (...). Of het mogelijk is een stel ontaarde waarden der $g_{ij}$ te vinden die invariant is voor een niet te erg beperkte groep van transformaties, is een vraag die mathematisch uit te maken is. Luidt het antwoord \underline{neen} (wat Ehrenfest en ik verwachten) dan is Einstein's hypothese der afgeslotenheid onwaar. Luidt het antwoord ja, dan is de hypothese niet in strijd met de relativiteitstheorie. Evenwel houd ik \underline{ook dan} vol dat zij wel in strijd is met den \underline{geest} van het relativiteitsprincipe. En Einstein geeft toe dat ik daartoe het recht heb. Ook verwerping der hypothese is volkomen geoorloofd in de relativiteitstheorie''. English translation by Jan Guichelaar.} \cite[AFA-FC-WdS-132]{de Sitter Archive}.
\end{quote}
In addition to these considerations, some 1917 correspondence with Jacobus Kapteyn (1851-1922), whom de Sitter asked for advice during his debate with Einstein, reveals the great importance de Sitter attached to the actual picture drawn by the astronomical investigation of the observable part of the universe, which, as we shall see later, was helpful when comparing the different astronomical consequences of his own model and of Einstein's. De Sitter was interested in the most recent advances on parallax measures and estimates of the total mass of the Milky Way. Furthermore, he paid special attention to whether observations of stars and nebul{\ae} revealed a nearly \textit{systematic} redshift. ``These are hard nuts - Kapteyn replied to de Sitter in June 1917 - you are giving me to crack'' \cite[p. 96]{van der Kruit-van Berkel 2000}.
In his papers de Sitter studied the properties of Einstein's universe, which he labeled ``system A'', and those of his own empty model, labeled ``system B''. For the sake of comparison, he also took into account the properties of the solution of the field equations without $\lambda$, i.e. of the line element of the special theory of relativity (``system C''). De Sitter proposed several forms of the line element of his cosmological solution, depending on the choice of coordinates. In one of these, which became known as the ``static form'' of de Sitter's universe, the metric was the same as in Einstein's spherical model, except for the potential term referring to the time coordinate $t$. In Einstein's world such a potential was $g_{tt}=1$, whereas in the static form of de Sitter's universe it depended on the variable $r$ (interpreted as a radial coordinate) and on the radius $R$ \cite[p. 230]{de Sitter 1917b}:
\begin{equation}
g_{tt}=\cos^{2}\frac{r}{R}.
\end{equation}
Rather than a spherical space, de Sitter assumed for his universe that the physical (closed) space was elliptical. Such a choice, in fact, avoided the existence of the antipodal point which one observes in the spherical space, where any two geodesics intersect each other in two points. To imagine the difference between these two spaces, one can consider on the one hand the analogy between spherical space and the surface of a sphere, on the other hand the analogy between elliptical space and the surface of a hemisphere \cite[p. 202]{Harrison 2000}. The volume of the elliptical space ($\pi^{2}R^{3}$) is half of the volume of the spherical one, and both of these spaces approximate the Euclidean space for small values of $r$ compared to $R$, the curvature radius\footnote{By projecting the elliptical space on the Euclidean one through the transformation of coordinates $\mathbf{r}=R\,\tan\frac{r}{R}$, the potential term of the time coordinate became in this case:
$g_{tt}=\left(1+\frac{\mathbf{r}^{2}}{R^{2}}\right)^{-1}$ \cite[p. 232]{de Sitter 1917b}. Here the symbol $\mathbf{r}$, used by de Sitter in his papers, represents a spatial coordinate, not a vector. }.
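The equivalence of these two expressions of the potential term may be made explicit (the following remark is ours): since $\mathbf{r}=R\,\tan\frac{r}{R}$, one has
\[
g_{tt}=\cos^{2}\frac{r}{R}=\frac{1}{1+\tan^{2}\frac{r}{R}}=\left(1+\frac{\mathbf{r}^{2}}{R^{2}}\right)^{-1},
\]
so that the projected form of the metric contains the same information as the static form quoted above.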
It is clear that, in investigating the astronomical consequences of general relativity, de Sitter showed great interest in determining the curvature radius of space-time by means of observational evidence.
The first difference between the two relativistic cosmological models A and B emerged by considering trigonometric parallax\footnote{In system B the trigonometric parallax $p$ of a star at the distance $r$ from the Sun was:
$p\simeq\frac{a}{R\,\sin\frac{r}{R}}=\frac{a}{\mathbf{r}}\sqrt{1+\frac{\mathbf{r}^{2}}{R^{2}}}$,
$a$ being the average distance between the Sun and the Earth. Thus here the parallax $p$ was never zero, and reached
its minimum value at the distance from the Sun $r=\frac{\pi}{2}\,R$ \cite[p. 13]{de Sitter
1917c}. In system A such a parallax was $p\simeq\frac{a}{R}\cot\frac{r}{R}=\frac{a}{\mathbf{r}}$,
so that $p$ diminished to zero as the distance increased up to $r=\frac{\pi}{2}\,R$, and for larger distances it became negative \cite[p. 233]{de Sitter 1917b}.}. In the framework of the use of parallax to measure the curvature of space, a lower limit of the curvature radius, $R>4\cdot10^{6}$ AU, had been proposed already in 1900 by Karl Schwarzschild (1873-1916). In fact, in his pre-relativistic analysis of curved spaces\footnote{See \cite{Schemmel 2005} for an analysis of the work of Schwarzschild on the measure of the curvature of space by means of parallax.}, Schwarzschild had assumed that some stars had trigonometric annual parallax of the order of $p=0''.05$ \cite[p. 2541]{Schwarzschild 1900}. As confirmed by Kapteyn in a letter to de Sitter, such a limit for annual parallax was still in 1917 the best result obtained by direct observations \cite[p. 96]{van der Kruit-van Berkel 2000}. As remarked by de Sitter:
\begin{quote}
The meaning [of observed parallaxes] is of course actually measured parallaxes, not parallaxes derived by the formula $p=a/r$ from a distance which is determined from other sources (comparison of radial and transversal velocity, absolute magnitude, etc.). Schwarzschild assumes that there are certainly stars having a parallax of $0''.05$. All parallaxes measured since then [1900] are \textit{relative} parallaxes, and consequently we must at the present time still use the same limit \cite[p. 234]{de Sitter 1917b}.
\end{quote}
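The origin of Schwarzschild's limit can easily be reconstructed (the following sketch is ours): since in a closed space such as system B the parallax never falls below its minimum value $p_{\min}\simeq a/R$, the existence of stars whose measured parallax is as small as $0''.05$ (with many stars showing no measurable parallax at all) requires $a/R<0''.05$, i.e.
\[
R>\frac{206\,265''}{0''.05}\ \mathrm{AU}\simeq4\cdot10^{6}\ \mathrm{AU},
\]
$a=1$ AU being the Sun-Earth distance and $206\,265''$ the number of arcseconds in a radian.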
De Sitter proposed alternative ways to determine the cosmic curvature radius.
With regard to Einstein's spherical universe, an estimate of $R$ was obtained by means of the relation between the angular (apparent) diameter $\delta$ and the linear (absolute) diameter $d$ of an object at the distance $r$ from the Sun.
``It is very probable - de Sitter wrote in 1917 - that at least
some of the spiral nebul{\ae} or globular clusters are galactic
systems comparable with our own in size'' \cite[p. 24]{de Sitter 1917c}. By taking, for example, for these systems $d=10^{9}$ AU and
$\delta=5'$, de Sitter asserted that the radius of system A was $R\geq10^{12}$ AU.
Furthermore, the size of Einstein's universe was obtained by some assumptions on the density of world matter $\rho$, i.e. by applying the same method first suggested by Einstein \cite[p. 24]{de Sitter 1917c}. If such a density was assumed to be the same as the star density at the center of the galactic system (80 stars/1000 pc$^{3}$, or $\rho=10^{-17}$ g/cm$^{3}$), then the curvature radius of system A turned out to be $R=9\cdot10^{11}$ AU, not so different from the estimate suggested some months before by Einstein. From assumptions on the number of galactic systems ($7\cdot10^{6}$), on their average shortest distance ($10^{10}$ AU), and on the total mass of world matter sufficient to fill the whole universe with galaxies, the density turned out to be $\rho=\frac{1}{3}\cdot10^{-20}$ g/cm$^{3}$, so that the resulting radius was $R\leq5\cdot10^{13}$ AU.
In addition, as mentioned above, in system A one should expect to see the antipodal image of the Sun. Since this was not observed, light should have been absorbed in its travel around the world. According to de Sitter, an absorption of 40 magnitudes, which had been proposed already in 1900 by Schwarzschild, was sufficient to produce such an effect along the distance $\pi\,R$ of the spherical space. Therefore, by adopting the result obtained by Harlow Shapley (1885-1972) of an absorption of 0.0001 mag/10 pc, de Sitter derived the value $R>\frac{1}{4}\cdot10^{12}$ AU for the curvature radius of Einstein's universe \cite[p. 25]{de Sitter 1917c}.
With regard to the empty universe, estimates of $R$ could not be obtained by using the fact
that the ``back of the Sun'' was not observed. In fact, at the distance $r=\frac{\pi}{2}\,R$, i.e. at the largest possible distance in elliptical space, it was $g_{tt}=0$. Therefore, as de Sitter pointed out,
in such a system ``light requires an infinite time for the voyage round the
world'' \cite[p. 26]{de Sitter 1917c}. Moreover, the relation between
apparent and linear diameter of spirals could not be applied in system B, since at this distance the apparent angular diameter $\delta$ was also zero.
However, the empty universe showed an interesting feature, allowing de Sitter to propose a value of the curvature radius of system B by means of the displacement of wavelengths measured in spectra of some stars and spiral nebul{\ae}.
Since in the static form of system B the $g_{tt}$ potential term diminished with increasing $r$,
the frequency of light diminished
with increasing distances from the observer at rest at the origin of
coordinates: ``the lines in the spectra of
very distant objects - de Sitter wrote - must appear displaced towards the red''
\cite[p. 235]{de Sitter 1917b}.
As usual in those years, the spectral shift $z$ was mainly interpreted as a Doppler shift, i.e. as originated by relative motions through space between the observer and the observed object. The classic Doppler formula, used when the relative velocity $v$ is small compared to the speed of light $c$, is:
\begin{equation}
z\equiv\frac{\lambda_{o}-\lambda_{e}}{\lambda_{e}}=\frac{v}{c},
\end{equation}
where $\lambda_{o}$ and $\lambda_{e}$ denote, respectively, the wavelength measured by the observer and the one emitted by the source. The redshift (respectively, blueshift) is due to an increase (decrease) in wavelength, and is interpreted as a receding (approaching) motion of the source, with velocity $v$.
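As a numerical illustration (ours, not drawn from the sources under discussion): a radial velocity of $v=+600$ km/sec, of the order of those then measured in spiral nebul{\ae}, corresponds to
\[
z=\frac{v}{c}=\frac{600}{3\cdot10^{5}}=2\cdot10^{-3},
\]
i.e. to a displacement of about 10~{\AA} for a spectral line at 5000~{\AA}.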
One of the relevant phenomena described in
Einstein's new theory of gravitation was the gravitational shift, i.e. the
contribution to the shift of spectral lines produced by stars themselves. In fact,
spectral lines originating in a strong
gravitational field, for example on the star's surface, would be
displaced towards the red to an observer in a weaker gravitational
field\footnote{For a star of mass $M_{\star}$ and density $\rho_{\star}$
(with solar mass and solar density $M_{\odot}=\rho_{\odot}=1$), such a
gravitational contribution to the measured spectral shift was equal to $0.634\,M_{\star}^{2/3}\,\rho_{\star}^{1/3}$ \cite[p. 719]{de Sitter 1916}.}.
In his analysis of the gravitational shift de Sitter referred to some observations by William Campbell (1862-1938). Already in 1911 Campbell had found that a constant $K$ term, initially interpreted as a systematic error, had to be taken into account in order to determine the solar motion from stellar velocities of 35 groups of $B$ stars. ``An error of obscure source - Campbell wrote in 1911 - causes the radial velocities of Class $B$ stars to be observed too great by a quantity, $K$, amounting to several kilometers'' \cite[p. 105]{Campbell 1911}. As suggested by Kapteyn to de Sitter, such an average systematic shift to the red for $B$ stars corresponded to a receding velocity of the order of $v=+ 4.3$ km/sec \cite[p. 97]{van der Kruit-van Berkel 2000}, up to nearly $v=+ 4.5$ km/sec \cite[p. 719]{de Sitter 1916}. De Sitter assumed that almost one third of this shift could be interpreted as the proper gravitational shift due to the source, i.e. at the star's surface. The remaining $v=+ 3$ km/sec corresponded to a ``spurious positive radial velocity'', in the sense that such a displacement of spectral lines was produced by the inertial field in system B, and in particular by the diminution of $g_{tt}$ in de Sitter's universe \cite[p. 26]{de Sitter 1917c}. Then $R$ could be obtained by considering the field produced by a fixed star\footnote{The relation between $R$, the star's distance $r$, and the star's spurious velocity $v$ (obtained by means of the Doppler formula) was in this case: $g_{tt}=\cos^{2}\frac{r}{R}\simeq\,1-2\frac{v}{c}=1-2\cdot10^{-5}$ \cite[p. 235]{de Sitter 1917b}.}. Since the average distance of $B$ stars was $r=3\cdot10^{7}$ AU, the curvature radius of the empty model turned out to be $R=\frac{2}{3}\cdot10^{10}$ AU \cite[p. 27]{de Sitter 1917c}.
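The numerical step leading to this value can be made explicit (our reconstruction, based on the relation quoted in the footnote above): for distances small compared to $R$, $\cos^{2}\frac{r}{R}\simeq1-\left(\frac{r}{R}\right)^{2}$, so that
\[
\frac{r}{R}\simeq\sqrt{\frac{2v}{c}}=\sqrt{2\cdot10^{-5}}\simeq4.5\cdot10^{-3},
\qquad
R\simeq\frac{3\cdot10^{7}\ \mathrm{AU}}{4.5\cdot10^{-3}}\simeq\frac{2}{3}\cdot10^{10}\ \mathrm{AU},
\]
in agreement with the value quoted by de Sitter.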
De Sitter further developed such an analysis in relation to some nebul{\ae}.
Referring to data from the 1917 Council of the Royal Astronomical
Society \cite{Eddington 1917}, de Sitter
pointed out that the Small Magellanic Cloud was
estimated to be at $r>6\cdot10^{9}$ AU, with a radial
velocity $v\simeq+150$ km/sec. Consequently the
curvature radius of system B was $R>2\cdot10^{11}$ AU \cite[p. 27]{de
Sitter 1917c}. By some independent observations, three
nebul{\ae} (NGC 4594, NGC 1068 and the Andromeda nebula) showed
very large radial velocities compared with the usual velocities of stars in the solar neighborhood (see table \ref{desittervel}). Taking $v=+600$ km/sec as the mean of their radial velocities, and the
curvature radius as $R=\frac{2}{3}\cdot10^{10}$ AU, the minimum distance of these
nebul{\ae} was $r=4\cdot10^{8}$ AU \cite[p. 236]{de
Sitter 1917b}. Alternatively, by assuming for these objects
a mean distance of about $r=2\cdot10^{10}$ AU, the
curvature radius of system B was
$R=3\cdot10^{11}$ AU \cite[p. 28]{de Sitter 1917c}.
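The same relation employed for the $B$ stars accounts for these figures (again, the following check is ours): for the Small Magellanic Cloud, $\frac{2v}{c}=\frac{300}{3\cdot10^{5}}=10^{-3}$ gives $\frac{r}{R}\simeq\sqrt{10^{-3}}\simeq0.032$, whence $R>\frac{6\cdot10^{9}}{0.032}\ \mathrm{AU}\simeq2\cdot10^{11}$ AU.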
\begin{table}
\caption{Radial velocities of the nebul{\ae} studied by de Sitter in 1917 \cite[p. 236]{de Sitter 1917b}. The average values which de Sitter used for his calculation are listed in the last column \cite[p. 27]{de Sitter 1917c}.}\label{desittervel}
\begin{center}
\begin{tabular}{llll}
\hline
nebula & observer & velocity & average\\
& & (km/sec) & (km/sec) \\
\hline
NGC 4594 & Pease & +1180 & +1185\\
& Slipher & +1190 \\
NGC 1068 & Pease & +765 & +925\\
& Slipher & +1100 \\
& Moore & +910 \\
Andromeda & Wright & --304 & --311\\
& Pease & --329 \\
& Slipher & --300 \\
\hline
\end{tabular}
\end{center}
\end{table}
De Sitter remarked that the estimates of the curvature radius of system A were very uncertain. However, he also acknowledged the poorer reliability of the estimates of the size of his own universe obtained through the radial velocities of nebul{\ae}. Nevertheless, in the face of the different physical consequences of these world models, he suggested that the future discovery of a systematic receding radial motion of spirals would allow a discrimination between system A and system B. His model, as a matter of fact, had the great advantage that it required an apparent positive radial velocity for distant objects, and the empirical confirmation of such a recession would suggest adopting model B rather than A. This was the concluding remark which de Sitter wrote in 1920, when he noted that now, thanks to the work of Vesto Slipher (1875-1969), the radial velocities of 25 spiral nebul{\ae} were known, showing a mean receding motion of $v=+ 560$ km/sec. However, de Sitter added that ``the decision between these two systems must, I fear, for a long time be left to personal predilection'' \cite[p. 868]{de Sitter 1920}.
The Dutch astronomer, who was the director of the Leiden Observatory from 1919 to 1934, did not deal with the cosmological question in other published papers up to 1930, and did not participate in the cosmological debates which took place during the Twenties. In the words of Eddington, de Sitter was ``the man who discovered a universe and forgot about it'' \cite[p. 925]{Eddington 1934}. Nevertheless, further considerations by the Dutch astronomer on the determination of the size of the universe can be found in some correspondence of the late Twenties between him and Frank Schlesinger (1871-1943). In December 1929, now in view of Hubble's confirmation of the existence of a linear redshift-distance relation among galaxies, de Sitter wrote to Schlesinger that:
\begin{quote}
Now there comes a most unexpected and curious coincidence. If we suppose the whole world to be filled with spiral nebul{\ae} (...), the density becomes $2\cdot10^{-28}$ in c.g.s. units, which on first sight be thought not to differ much from emptiness. But if we now take Einstein's own solution A of his field equations, i.e. the solution for a \underline{full} world, then there is a relation between the radius of the universe and the density, and taking for the density $2\cdot10^{-28}$, the radius is found to be $2\cdot\frac{1}{4}\cdot10^{9}$ light-years ($3\cdot10^{13}$ AU), i.e. practically the same as for the empty world\footnote{It is interesting to note the remarkable difference between the estimate of the density of world matter suggested in 1917 by Einstein, which was $\rho\simeq 10^{-22}$ g/cm$^{3}$, and such a value which de Sitter took into account in 1929.}! In the solution A, however, there is \underline{no} radial velocity (...). We are thus confronted with the mathematical problem: what becomes of the empty world B if you fill it with matter. I have not yet been able to solve this problem completely but I have reasons to expect that the solution will be intermediate between the solution A and B \cite[AFA-FC-WdS-52]{de Sitter Archive}.
\end{quote}
Actually, such an intermediate solution, which interested de Sitter and, as we shall see, also Eddington, had been formulated by Lema\^{i}tre by 1927, but had passed unnoticed. Its important consequences were fully recognized by de Sitter and Eddington in 1930, when Lema\^{i}tre himself drew the attention of the two scientists to his own cosmological solution. In fact, in 1927 Lema\^{i}tre had proposed as a solution of relativistic field equations a non-empty, homogeneous, and isotropic universe where the spatial curvature radius increased in time. As shown by Lema\^{i}tre, it was the expansion of the universe which produced the cosmological redshift of galaxies.
\subsection{The debates on the empty universe and the ``de Sitter effect''}
Before 1930 it was the universe of de Sitter which stimulated debates and controversies, both for its puzzling properties, and for its actual relation to astronomical evidence. On the one hand, the static form of the de Sitter empty model showed a singularity: on the ``equator'' at the distance $r=\frac{\pi}{2}\,R$, as seen above, the potential term $g_{tt}$ was zero. Such a feature was criticized by Einstein, who did not accept the model formulated by de Sitter as a physical solution, and tried to discard it since it represented a counterexample to the relativity of inertia. The empty universe was a sort of anti-Machian world model, where the inertia of a test particle was not determined by the world matter. Whereas de Sitter considered that such a surface was physically inaccessible to test particles, Einstein advocated the idea that it represented a ``mass-horizon''. According to Einstein, the universe of de Sitter was not really empty, but had matter concentrated at this equator. It was Felix Klein (1849-1925) who clarified this question, showing that this was a singularity due to the choice of coordinates, not a true physical singularity\footnote{We refer to \cite[pp. 351-357]{CPAE 1998} and \cite{Earman-Eisenstaedt 1999} for critical analysis of such discussions on the singularity in de Sitter's cosmological model. For the analysis of horizons in de Sitter's universe, see \cite{Rindler 1956}; for a description of horizons in modern cosmology, see \cite[chapter 21]{Harrison 2000}.}.
On the other hand, the de Sitter universe attracted the attention of scientists because, despite its lack of matter content, it offered more advantages than Einstein's one with respect to the astronomical consequences. In particular, de Sitter's static model offered an interpretation of the spectral displacement measured in distant objects by means of a twofold contribution. Namely, from de Sitter's analysis it emerged that such a cosmological solution predicted a spurious positive velocity of test particles due to the inertial field, superimposed on a real relative (Doppler) velocity. The latter contribution resulted from the geodesic equations, and admitted both receding and approaching motions\footnote{By using the radial coordinate $\mathbf{r}$, the first contribution led to a quadratic redshift-distance relation, while the latter involved a linear dependence between spectral shift $z$ and distance $\mathbf{r}$ with no preference in sign: $z\simeq\frac{1}{2}\left(\frac{\mathbf{r}}{R}\right)^{2}\pm\frac{\mathbf{r}}{R}$. See \cite[pp. 195-196]{de Sitter 1933} for an analysis of such an effect given by de Sitter himself after the discovery of the expanding universe.}. It was such a twofold property, later known as the ``de Sitter effect'', which actually played during the Twenties the linking role between the relativistic description of the universe as a whole and the increasing amount of observational evidence of relevant radial velocities of spiral nebul{\ae}. As a matter of fact, in 1929 Hubble too referred to the possibility that the linear redshift-distance relation, revealed by his observations of galaxies, could actually represent the de Sitter effect, ``and hence that numerical data may be introduced into discussions of the general curvature of space'' \cite[p. 173]{Hubble 1929}.
The de Sitter universe can be regarded as the precursor of the non-static solutions to which attention was drawn after 1930. In the words of Merleau-Ponty:
\begin{quote}
Static without being so, empty but not neutral, virtually acting upon any matter one might wish to place in it, the result of a trompe-l'{\oe}il symmetry, a bastard solution of a bastard equation, de Sitter's universe was thus a curious complex of ambiguities, which nevertheless carried the future of cosmological thought\footnote{``Statique sans l'\^{e}tre, vide mais non neutre, virtuellement actif sur toute mati\`{e}re qu'on voudrait y mettre, r\'{e}sultat d'une sym\'{e}trie en trompe-l'{\oe}il, solution b\^{a}tarde d'une \'{e}quation b\^{a}tarde, l'univers de de Sitter \'{e}tait donc un curieux complexe d'\'{e}quivoques, qui cependant portait l'avenir de la pens\'{e}e cosmologique''. English translation ours.} \cite[p. 61]{Merleau Ponty 1965}.
\end{quote}
Since the geometry of de Sitter's universe is not uniquely specified\footnote{For clear descriptions of the many faces of the empty universe of de Sitter, see \cite{Schroedinger 1957,Ellis 1990}.}, during the Twenties many representations of this solution appeared, which complicated the actual interpretation of such a universe. Non-static versions of this empty model were formulated in 1922 by Kornel Lanczos (1893-1974) \cite{Lanczos 1922}, in 1923 by Weyl \cite{Weyl 1923a}, in 1925 by Lema\^{i}tre \cite{Lemaitre 1925}, and in 1928 by Howard Robertson (1903-1961) \cite{Robertson 1928}. In retrospect, these descriptions of de Sitter's universe correspond to truly expanding empty models, where the geometry of spatial sections at constant time is, respectively, positive ($k=+1$; Lanczos) and null ($k=0$; Weyl, Lema\^{i}tre, and Robertson) \cite[p. 100]{Ellis 1990}.
Each of these authors considered the theoretical redshift-distance relation, and looked for a proper formulation of the de Sitter effect. In particular, in his 1923 analysis of the hyperboloidal version of de Sitter's universe, Weyl obtained a relation which was roughly linear for small distances compared to $R$ \cite[p. 230]{Weyl 1923b}. Weyl proposed that in de Sitter's hyperboloid the world lines of test particles belonged to a pencil diverging from the past towards the future direction, which involved a general cosmic recession of nebul{\ae} in such a universe. This assumption on the choice of geodesics and their causal connection became later known as the ``Weyl principle''\footnote{We refer to \cite{Bergia-Mazzoni 1999,Ehlers 1988,Goenner 2001} for further readings on Weyl principle.}. A similar cosmic recession was suggested in the same year also by Eddington in his famous book \textit{The mathematical theory of relativity}, a compendium which later Einstein himself acknowledged as ``the finest presentation of the subject in any language'' \cite[p. 100]{Douglas 1956}. Eddington took into account the properties of the static form of de Sitter's universe, and found that a test particle could not remain at rest, but was accelerated away because of the presence of the cosmological constant. The preponderance of positive radial velocities of spirals, as revealed by some new observations of Slipher reported in Eddington's book, favored the model of de Sitter in comparison to Einstein's universe, which had matter but not motion. However, according to Eddington, these two rival solutions were ``two limiting cases, the circumstances of the actual world being intermediate between them. De Sitter's empty world is obviously intended as a limiting case; and the presence of stars and nebul{\ae} must modify it, if only slightly, in the direction of Einstein's solution'' \cite[p. 160]{Eddington 1923}.
An attempt to confirm such a cosmic recession was made by Carl Wirtz (1876-1939) with regard to the studies of the additional $K$ term found by Campbell for $B$ stars. Already in 1916 George Paddock (1879-1955) had investigated the determination of the direction of the solar motion by using the radial motions of spiral nebul{\ae}, obtaining a $K$ term ranging from + 248 to + 295 km/sec \cite[p. 114]{Paddock 1916}. Wirtz further developed such an analysis of the velocities of spirals, and found the notable value of $K=+840\pm141$ km/sec \cite[p. 351]{Wirtz 1922}. In 1924, Wirtz introduced de Sitter's cosmology to account for such a large value of the additional $K$ term. He assumed that spirals had approximately the same linear diameter, so that their observed apparent diameter could be used as a distance indicator. Wirtz analyzed data of 42 spiral nebul{\ae}, for which both the apparent diameter and the radial velocity were known. He found a linear relation between the velocity and the logarithm of the apparent diameter, which implied that the radial motion of spiral nebul{\ae} increased markedly with increasing distance, as predicted by de Sitter's model \cite[pp. 23-24]{Wirtz 1924}.
However, in the same year the supposed general recession in the empty universe was strongly criticized by Silberstein, whose focus of interest was the determination of the curvature radius of de Sitter's space-time.
\section{Silberstein on globular clusters and de Sitter's universe}
Silberstein, a Polish-American physicist, maintained a sceptical attitude towards many aspects of general relativity, which earned him the role of one of the main critics of Einstein's theory\footnote{For some analysis of Silberstein's approach, see \cite{Desmet 2007,Flin-Duerbeck 2006,Sanchez Ron 1992}.}.
The determination of the curvature radius of de Sitter's universe was the subject of many papers written by Silberstein. Some considerations on the first two cosmological models were proposed by Silberstein already during the very early response to Einstein's new theory of gravitation. For instance, in the paper \textit{General relativity without the equivalence hypothesis}, which appeared in 1918, Silberstein acknowledged that the universe of de Sitter was particularly interesting because it did not involve any hypothetical world matter, which on the contrary was unavoidable in Einstein's model \cite[p. 105]{Silberstein 1918}. In this paper Silberstein remarked that the general covariance was one of the very strong points of Einstein's theory of gravitation. The cosmological solution proposed by de Sitter was therefore preferable to Einstein's universe, since it fully achieved this requirement, and was perfectly homogeneous and isotropic, as explicitly stated by Silberstein some years later \cite[p. 67]{Silberstein 1930}.
The objection to the world matter was repeated by Silberstein also in his 1922 book on the theory of general relativity and gravitation. Here Silberstein noticed that, for a density of matter of the order of ``some thousand suns per cubic parsec'', the curvature radius of Einstein's universe could not be smaller than $10^{12}$ AU, and consequently one should admit the existence of $10^{10}$ galaxies filling this space. On the contrary, the possibility of interpreting spectral shifts as predicted by de Sitter represented an ``attractive piece of reasoning'' \cite[p. 137]{Silberstein 1922}.
In order to determine the size of de Sitter's universe, Silberstein used in 1924 an approximately linear relation between spectral shift and distance.
However, he polemically criticized the general tendency of
particles to scatter in de Sitter's universe, and proposed a
theoretical relation which was valid for both receding and approaching objects.
According to Silberstein, the recession in de Sitter's universe formulated by Weyl in 1923 was
``an arbitrary hypothesis'' \cite[p. 909]{Silberstein 1924c}; furthermore,
the ``mythical'' assumption \cite[p. 350]{Silberstein 1924a}
that the world lines belonged to a pencil of geodesics diverging
towards the future was a ``sublime guess, entirely undesirable'' \cite[p. 909]{Silberstein
1924c}. Eddington's suggestion on a
universal scattering of test particles was also considered by Silberstein
``a fallacy based upon a hasty analysis'' \cite[p. 350]{Silberstein
1924a}.
In fact, the recession advocated by Eddington and Weyl was
contradicted by the negative velocities of some spirals.
Among them, the blueshift measured in the spectrum of the Andromeda
nebula revealed a relevant approaching motion with a velocity of the order of $v\simeq\,-316$ km/sec.
Therefore, Silberstein based his analysis on the observations of globular clusters.
Actually, such objects showed receding and approaching motions in roughly equal numbers. Furthermore, Silberstein acknowledged that the estimates of the radial velocity of globular clusters were affected by smaller probable errors than those of spirals. Moreover, despite the attempts made, among others, by Lundmark in 1919 \cite{Lundmark 1919} and by Ernst \"{O}pik (1893-1985) in 1922 \cite{Opik 1922} to determine the distance of the Andromeda nebula, there was not (yet) a general consensus on reliable estimates of the distance of spirals.
In his papers on the size of the universe, Silberstein referred to the works by Shapley on the observations of globular clusters and on the estimate of the size of the Milky Way. In this respect, the historical reconstruction proposed in \cite[p. 144]{Smith 1979} is useful to reveal the role played by Shapley, who initially encouraged Silberstein to investigate the de Sitter effect, but later showed less interest in his results.
It is worth recalling that, some years before, Shapley had given a fundamental contribution to the comprehension of the structure of the Milky Way. Shapley used statistical parallaxes in order to determine the absolute magnitude of some RR Lyr{\ae} stars, i.e. pulsating variable stars, like Cepheids, which change in brightness with a regular period. Shapley was able to estimate the distance of these stars observed in globular clusters by means of the period-luminosity relation discovered in 1912 by Henrietta Leavitt (1868-1921). In 1919, Shapley set the diameter of our Galaxy at about 300,000 light-years ($19\cdot10^{9}$ AU). The center of the Galaxy, according to Shapley, was 65,000 light-years ($4\cdot10^{9}$ AU) from the Sun, in the direction of Sagittarius\footnote{The main features of our Galaxy have not changed much since those proposed in 1936 by John Plaskett (1865-1941). According to Plaskett, the Milky Way is a flat rotating disk, with a diameter of about 100,000 light-years ($6\cdot10^{9}$ AU), surrounded by a spherical halo of globular clusters \cite{Plaskett 1936}.}. Furthermore, Shapley furnished several arguments against the extragalactic interpretation of spiral nebul{\ae}, which he proposed in 1920 during the so-called ``Great Debate'', the famous discussion between himself and Heber Curtis (1872-1942), focused on the size of the Milky Way and the nature of spiral nebul{\ae}\footnote{We refer to \cite{Hoskin 1976} for a historical reconstruction of the ``Great Debate''.}. On the one hand, Curtis advocated the theory that spirals were truly external galaxies \cite{Curtis 1920}. On the other hand, Shapley remarked that ``we have no evidence that somewhere in space there are not other galaxies; we can only conclude that the most distant sidereal organizations now recognized (globular clusters, Magellanic Clouds, spiral nebul{\ae}) cannot successfully maintain their claims to galactic structure and dimensions'' \cite[p. 268]{Shapley 1919}\footnote{It is worth noting that Silberstein did not agree with Shapley's conclusion: ``it would certainly be foolish - Silberstein wrote in 1922 - to deny the possibility (...) of the existence of many more island universes'' \cite[p. 134]{Silberstein 1922}.}.
It is worth mentioning that the determination of the distance of Cepheid stars represents an important step in the cosmological distance ladder, the construction of which is fundamental in astronomy as well as in cosmology. In the present picture, the cosmic ladder is made of distinct steps, obtained by using different methods: from trigonometric parallax and kinematic methods for distances within the Galaxy, to primary and secondary indicators for extragalactic distances, as, for instance, RR Lyr{\ae} and Cepheid variable stars, Nov{\ae}, Supergiants, Supernov{\ae}, globular clusters, $HII$ regions (i.e. clouds of ionized hydrogen), brightest cluster galaxies\footnote{For further readings on the cosmological distance ladder, see \cite{Webb 1999}. For a review of the methods to determine extragalactic distances, see \cite{Freedman-Madore 2010}.}. In the investigation of astronomical distances, some steps were identified after the discovery of the expanding universe. In 1930, for instance, Robert Trumpler (1886-1956) unambiguously confirmed the existence of the interstellar absorption of light affecting astronomical observations \cite{Trumpler 1930}. In 1934, Walter Baade (1893-1960) and Fritz Zwicky (1898-1974) suggested the use of Supernov{\ae} as potential distance indicators \cite{Baade-Zwicky 1934}. In 1952, Baade provided evidence for a new calibration of the extragalactic distance scale, based on his discovery in 1944 of the existence of two stellar populations, presenting two types of Cepheid variable stars \cite{Baade 1944,Baade 1952}\footnote{For a description of the early history of the period-luminosity relation, see \cite{Fernie 1969}. For Baade's contributions to astrophysics, see \cite{Osterbrock 2001}.}.
Silberstein showed great interest in the possibility of obtaining further empirical observations. This aspect is revealed for instance by the 1924 correspondence between himself and Walter Adams (1876-1956), one of the members of the American section of the Committee on stellar radial velocities, whom Silberstein (unsuccessfully) asked to obtain velocities of 74 globular clusters. In June 1924, Silberstein wrote to Adams that:
\begin{quote}
The knowledge of radial velocities of remote objects of ascertainable distance became an urgent need, and the (spectroscopic) measurement of these velocities should, in my opinion, be incorporated into the programme of your Committee. Actually, since the spiral nebul{\ae} baffle all attempts at estimating their distance, the objects in question are the globular clusters \cite[62.108]{Adams Archive}.
\end{quote}
Silberstein considered the line element of the static form of de Sitter's universe. Despite his critical remarks on Weyl's conclusion about the alleged cosmic recession, Silberstein used the same general principle formulated in 1923 by Weyl himself, who related the spectral shift $z$ to the ratio of the proper time $ds$ of the observer to the proper time $ds'$ of the source: $z=\frac{ds}{ds'}-1$. Silberstein obtained for what he called ``the complete Doppler effect'', i.e. the de Sitter effect, the relation:
\begin{equation}
z=\gamma\left[1\pm\sqrt{1-\frac{\cos^{2}\frac{r}{R}}{\gamma^{2}}}\right]-1,
\end{equation}
where $\gamma=\left(1-\frac{v^{2}}{c^{2}}\right)^{-1/2}$. The positive sign corresponded to receding objects, while the
negative sign to approaching ones \cite[p. 912]{Silberstein 1924c}.
In such a general formula of the Doppler effect there were
two terms: a term depending on the velocity $v$,
which was dominant near the observer, and a second term depending
upon $\frac{r}{R}$, which, according to Silberstein, was significant
for very remote celestial objects. For nearby stars, the velocity effect reduced to the special relativistic one.
On the contrary, for the most distant celestial objects, the relation became \cite[p. 351]{Silberstein 1924a}:
\begin{equation}
z\simeq\pm\frac{r}{R}.
\end{equation}
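The passage from the general formula to this linear one can be made explicit (our reconstruction): for objects whose peculiar velocity is negligible one has $\gamma\to1$, and the general expression reduces to
\[
z=\pm\sqrt{1-\cos^{2}\frac{r}{R}}=\pm\sin\frac{r}{R}\simeq\pm\frac{r}{R}
\]
for distances small compared with $R$.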
Silberstein used such a linear relation in order to
determine the value of the curvature radius of de Sitter's world. He took into account
radial velocities, both positive and negative, and distances of
seven globular clusters, and obtained a mean value of
$R=6\cdot10^{12}$ AU \cite[p. 351]{Silberstein 1924a}. Such a result was almost confirmed by using velocity
and distance of the two Magellanic Clouds \cite[p. 363]{Silberstein
1924b}.
The most distant spiral which was known at that time, NGC 584,
showed a radial velocity of about $v=+1800$ km/sec.
Therefore it followed that such an object was placed at the distance
$r=3.6\cdot10^{10}$ AU. ``Huge as this may seem -
Silberstein noted - it will be remembered that Shapley's latest
estimate of the semi-diameter of our galaxy is only four times
smaller. (...) Whether these estimates will or will not fit into the
general scheme of modern galactic and extra-galactic astronomy, is
not known to me and must be left to the scrutiny of specialists''
\cite[pp. 916-917]{Silberstein 1924c}.
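The distance of NGC 584 follows directly from the linear relation (again, the following check is ours): $z=\frac{v}{c}=\frac{1800}{3\cdot10^{5}}=6\cdot10^{-3}$, so that, with $R=6\cdot10^{12}$ AU, one finds $r=zR\simeq3.6\cdot10^{10}$ AU.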
Later on, Silberstein showed that now, from the velocity and
distance of ten objects, i.e. eight clusters and the Magellanic Clouds,
a linear relation was actually confirmed by plotting, as suggested to him
by Henry N. Russell (1877-1957), the modulus of the redshift: $r=|z|\,R$.
Silberstein, however, discarded data belonging to three other globular clusters (NGC 5904, NGC 6626, NGC 7089), whose velocities were ``suspiciously small'' and did not give a constant curvature radius
\cite[p. 602]{Silberstein 1924d}. From data of these ten objects,
the size of the universe, by using the general Doppler
formula, was of the order of $R\geq\,9.1\cdot10^{12}$ AU, while the approximate linear
formula led to a world radius of de Sitter's universe not exceeding
$R=8\cdot10^{12}$ AU \cite[p. 819]{Silberstein 1924e}. Silberstein also employed and further elaborated
a statistical formula in order to get $R$ in terms of the mean $z$ and $r$ of two groups of objects for which the mean velocity was the same:
\begin{equation}
\bar{z}_{2}^{2}-\bar{z}_{1}^{2}=\frac{2}{3R^{2}}(\bar{r}_{2}^{2}-\bar{r}_{1}^{2}).
\end{equation}
The bars denote average values, and the suffixes the different groups. By splitting thirteen objects into two groups of, respectively, seven and six objects, such an analysis gave $R=7.2\cdot10^{12}$ AU \cite[p. 627]{Silberstein 1924f}. Such a method was criticized by Eddington, who pointed out that the derivation of such a formula disagreed with the Lorentz transformation \cite[p. 747]{Eddington 1924}.
As we shall see in the next section, the Swedish astronomer Lundmark disapproved of Silberstein's analysis, which he found objectionable both for the choice of a \textit{selected} number of globular clusters, and for the supposed linear correlation between shift and distance. Already in 1924, Lundmark showed that the methods and the results of Silberstein were wrong, so that they held little appeal for the scientists involved in the early debates on relativistic cosmology. Nevertheless, the effort made by Silberstein stimulated further investigations of de Sitter's model, as revealed for instance by the 1925 work on this subject by Lema\^{i}tre, and later by the contributions of Robertson and Richard Tolman (1881-1948), which appeared, respectively, in 1928 and 1929. In this respect, it is worth quoting part of the draft of an obituary for Robertson written by Lema\^{i}tre in 1963. Here Lema\^{i}tre returned to his 1925 interpretation of the non-static feature of de Sitter's universe, and acknowledged that:
\begin{quote}
I was better prepared to accept it following an opinion expressed by Eddington. (...) The
errors by Silberstein have been very stimulating. I had myself had a
long discussion with him in 1924 at a British
Association Conference in Toronto and my work, as possibly later on
the work of Robertson, results as a large part as a reaction against
some unsound aspects of Silberstein's theories \cite[D32]{Lemaitre Archive}.
\end{quote}
The interest of Silberstein in this topic culminated in his book \textit{The size of the universe}, which was written in 1929 and published in 1930, i.e. just at the turning point of the discovery of the expanding universe \cite{Silberstein 1930}. Here Silberstein collected his considerations on relativistic cosmology. He maintained his objection to the general tendency of particles to scatter suggested by Weyl, and criticized some measurements made by Hubble of the distance of extragalactic nebul{\ae}. Silberstein pursued his analysis of the static metric of de Sitter's universe, and continued to advocate a theoretical relation valid for both red and blue shifts in order to obtain a constant curvature radius (see table \ref{final} for further estimates of $R$ reported by Silberstein in his book). Eddington sharply stated that the views held by Silberstein on a finite and static universe were obsolete, being now superseded by the much more interesting proposal of the expanding universe made by Lema\^{i}tre \cite[p. 850]{Eddington 1930}. Silberstein's book was later criticized also by Robertson. According to Robertson, Silberstein had not been able to account for the ``overwhelming preponderance of redshifts'' revealed by the works of Hubble and Humason \cite[p. 603]{Robertson 1932}.
\section{The analysis proposed by Lundmark}
In August 1924 Lundmark wrote a paper on the determination of the curvature radius of de Sitter's space-time. The detailed analysis given by Lundmark is a clear example of the empirical approach adopted by the Swedish astronomer, based on the accurate review of available data, on the systematic comparison of different independent observations, and on the prompt use of working hypotheses.
Lundmark started his analysis by carefully examining the question of the nature of measured shifts, and showed that the spectral displacement was nearly constant for 16 lines in the Andromeda nebula. Such a shift, according to Lundmark, was thus a Doppler shift, as was the shift measured in globular clusters. However, he claimed that the origin of such displacements was still uncertain.
Lundmark was sceptical about the alleged possibility that the motion of globular clusters showed any effect of the curvature of space-time, and criticized the method proposed by Silberstein, who ``has not given, and will probably not be able to give, any justification for the use of the velocities of the globular clusters for a determination of $R$'' \cite[p. 750]{Lundmark 1924}. A small $K$ term resulted from the analysis of 18 globular clusters, while a larger value was obtained by treating velocities of 43 spiral nebul{\ae}. Therefore Lundmark asserted that globulars were nearer than spirals. As a consequence, spirals could presumably be affected by the curvature of space-time, whereas the motion of globular clusters was a real phenomenon, and could not be interpreted as the spurious velocity predicted for distant objects by the de Sitter effect. Moreover, Silberstein's result was objectionable because of the selected choice of those radial velocities of globular clusters which gave a constant value of the curvature radius. Lundmark made use of his own observations of 18 globulars, and compared his data to Shapley's. He claimed that his own analysis superseded the one presented by Silberstein, and concluded that there was not any definite correlation between velocity and distance. In addition, by hypothetically admitting the validity of Silberstein's linear relation, the mean value of the curvature radius turned out to be $R\simeq\,19.7\cdot10^{12}$ AU, nearly three times larger than the radius calculated by Silberstein, and in any case a still larger radius was likely to be expected\footnote{As later noted by Silberstein, in such a 1924 paper Lundmark erroneously reported the unit of distance in km rather than in AU \cite[p. 285]{Silberstein 1925}.}.
In the second part of his paper Lundmark dealt with further possibilities to determine the radius $R$ by means of several classes of stars, which he used as distance indicators. He reviewed data of velocity and distance belonging to 30 Cepheid stars, 8 Nov{\ae}, 27 $O$ stars, 29 $R$ stars, 25 $N$ stars, and 31 Eclipsing variables. With regard to the distance of Cepheid stars, Lundmark followed the derivation proposed by Shapley, who studied their proper motion together with the period-luminosity law. Distances of Nov{\ae} were determined by assuming that the mean absolute maximum magnitude had an almost constant value. When the radial velocities were plotted against the corresponding distances, there seemed to be no progression, while, on the contrary, a progression should be expected from Silberstein's analysis of the constant curvature radius. The average values of the curvature radius found by Lundmark (by applying Silberstein's formula to the different classes of objects mentioned above) were, respectively, 7.5, 41, 4.0, 6.7, 2.3, and 2.7, in units of $10^{12}$ AU \cite[pp. 756-763]{Lundmark 1924}.
The role played by spiral nebul{\ae} as distance indicators was the subject of the last part of Lundmark's paper, where the Swedish scientist proposed a pioneering empirical analysis of the relation between velocity and distance for 44 spiral nebul{\ae}. Already in 1919, as mentioned before, Lundmark had estimated the distance of the Andromeda nebula at 200,000 pc ($4\cdot10^{10}$ AU) by means of the Nov{\ae} maximum brightness method. He now used this value as the unit of the distance scale. The parallax of spirals was obtained by using the working hypothesis ``that the apparent angular dimensions and the total magnitudes of the spiral nebul{\ae} are only dependent on the distance'' \cite[p. 767]{Lundmark 1924}. Lundmark also applied the statistical method developed by Silberstein to two groups of 23 and 18 objects, respectively, which gave $R=2.4 \cdot10^{12}$ AU \cite[p. 769]{Lundmark 1924}. However, the conclusion reached by Lundmark was that the values of the curvature radius derived from each single spiral were exceedingly different, and thus inconsistent. Nevertheless, he found that there seemed to be a relation between radial velocity and distance, ``although not a very definite one'' \cite[p. 768]{Lundmark 1924}.
In a subsequent paper, published in 1925, Lundmark offered a complete review of the direct and indirect methods to estimate the distance of spiral nebul{\ae}. It was at the end of this paper that the Swedish astronomer returned to the question of the extension of the universe. Here he analyzed some of his observations in the light of the static, infinite and hierarchical model of the universe supported in those years by Charlier, another Swedish astronomer. Recalling some ideas of Johann Heinrich Lambert (1728-1777), Charlier proposed that celestial bodies formed spherical galaxies of gradually increasing size:
\begin{itemize}
\item $N_{1}$ stars formed galaxy $G_{1}$, of order 1 and radius $R_{1}$
\item $N_{2}$ galaxies $G_{1}$ formed galaxy $G_{2}$,
of order 2 and radius $R_{2}$
\end{itemize}
and so forth... \cite[p. 186]{Charlier 1925}.
By means of counts of stars and nebul{\ae}, and of the
apparent dimension of the latter, Charlier obtained the relation:
\begin{equation}
\frac{R_{i}}{R_{i-1}}>\sqrt{N_{i}}.
\end{equation}
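Although not spelled out in this form in the sources quoted here, the meaning of the inequality can be made explicit (our own remark, following the standard reading of the Lambert-Charlier hierarchy): if $\rho_{i}$ denotes the mean density of a system of order $i$, then
\begin{equation}
\frac{\rho_{i}}{\rho_{i-1}}=N_{i}\left(\frac{R_{i-1}}{R_{i}}\right)^{3}<N_{i}\cdot N_{i}^{-3/2}=\frac{1}{\sqrt{N_{i}}},
\end{equation}
so that the average density of matter decreases at each order of the hierarchy; this is precisely the property which allowed the infinite Lambert-Charlier universe to escape the photometric and gravitational paradoxes of Olbers and Seeliger.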
A slightly altered version of such a relation was useful to estimate the distance of the Andromeda nebula (NGC 224), since, according to Charlier, spirals were galaxies of the second order \cite[p. 892]{Lundmark 1925}:
\begin{equation}
\frac{R_{2}}{R_{1}}=\sqrt{N_{2}}.
\end{equation}
Lundmark found a rough agreement between Charlier's result on the Andromeda distance (28 times the diameter of the galactic system) and his own result (32 times the galactic diameter). ``Our present knowledge - Lundmark thus emphasized - as to the space-distribution of the stars and the spirals can be summed up in the statement: \textit{our stellar system and the system of spiral nebul{\ae} are constructed according to the conceptions expressed in the Lambert-Charlier cosmogony}'' \cite[p. 893]{Lundmark 1925}.
\section{Further determinations of the curvature radius before the expanding universe}
As seen in the previous sections, at the beginning of relativistic cosmology scientists such as de Sitter, Silberstein, and Lundmark showed great interest in the determination of $R$. In addition to their systematic analyses in terms of astronomical observations, further considerations on the size of the universe appeared in different frameworks. For instance, Weyl and Eddington took into account the curvature radius of space-time in their speculative attempts to connect macro-systems with micro-systems, in particular in the perspective of large-number coincidences\footnote{On the large numbers hypothesis, see for instance \cite{Barrow 1990}. We refer to \cite{Gorelik 2002} for an analysis of Weyl's considerations on large numbers in relativistic cosmology.}. In fact, Weyl considered the relation between the world radius, the radius of the electron, and the gravitational radius associated with a mass $m$. Eddington, who already in August 1917 had emphasized in a letter to de Sitter that ``it is very interesting that you can get a determination of the necessary order of magnitude of $R$'' \cite[AFA-FC-WdS-11]{de Sitter Archive}, followed Weyl in this analysis. In 1920, in the light of Weyl's approach to the unification of electricity and gravitation, Eddington argued that $R$ of Einstein's world was of the order of $2\cdot10^{11}$ pc ($4\cdot10^{16}$ AU), ``which - Eddington noted - though somewhat larger than the provisional estimates made by de Sitter, is within the realm of possibility'' \cite[p. 179]{Eddington 1920}. Later on, a numerical value of $R$ was reported by Weyl in the appendix of the fifth edition of his \textit{Raum, Zeit, Materie}. Here Weyl, in the already mentioned investigation of the hyperboloidal version of de Sitter's universe, referred to Lundmark's result on the distance of the Andromeda nebula, and found that $R=10^{9}$ AU: the curvature radius was $10^{40}$ times the radius of the electron, which was the same as the ratio of the latter to the gravitational radius of the electron \cite[p. 323]{Weyl 1923a}.
It is worth mentioning Eddington's further attempts to relate the cosmological problem to the atomic one. Eddington considered the cosmological constant as one of the fundamental entities in nature, together with the fine structure constant, the number of particles expected in an expanding universe, and the ratio of electrostatic and gravitational forces. For instance, in 1931 Eddington suggested that an estimate of $\lambda$ could be obtained by means of the wave equation for an electron, in which both the number of electrons in the universe and the time-dependent world radius should appear \cite{Eddington 1931}. Some years later, Paul Dirac (1902-1984) likewise investigated, within the cosmological framework, the mathematical relations which, according to him, connected the large dimensionless numbers occurring in nature \cite{Dirac 1938}.
As a matter of fact, the authoritative contribution on the determination of $R$ proposed by Lundmark in 1924 revealed that a common criterion for the choice of distance indicators was missing. The failure of Silberstein's analysis influenced the cosmological debate of the late twenties, in the sense that the interest shown in the size of the universe gradually decreased, while the attention of modern cosmologists was mainly focused on the nature of spiral nebul{\ae} and their role in testing cosmological models. Gustaf Str\"{o}mberg (1882-1962) gave in 1925 a comprehensive analysis of the velocities of globular clusters and ``non-galactic nebul{\ae}''. Str\"{o}mberg confirmed that the interpretation of the relevant redshifts and the form of the redshift-distance relation represented an issue still to be clarified \cite{Stromberg 1925}. In a 1925 summary of the different attempts to measure the size of the universe, Archibald Henderson (1877-1963) concluded that:
\begin{quote}
If, as now appears probable, the spirals are isolated systems, this recession must be explained, it appears, either as a wholesale error or else as a relativistic effect (...). Much additional data will be required and many further researches made before it will be possible categorically to decide between the infinite, limitless, Euclidean universe of Newton, and the finite, unbounded, non-Euclidean universe of Einstein and de Sitter \cite[p. 223]{Henderson 1925}.
\end{quote}
In this framework, the contributions proposed by Hubble marked a second renewal of cosmology. On the one hand, the determination in 1925 of the distance of the Andromeda nebula by means of Cepheid variables disclosed the depth of the realm of the galaxies \cite{Hubble 1925}\footnote{As noted above, the measurements of the distance of galaxies made by Hubble were reconsidered after Baade's discovery of the existence of two different stellar populations.}. On the other hand, the linear redshift-distance relation, formulated in 1929, provided evidence of a nearly \textit{systematic} recession of distant galaxies \cite{Hubble 1929}.
Actually, between 1925 and 1930, only a few suggestions on the dimension of the universe appeared in scientific papers. Among them, Hubble proposed a value of $R$ for Einstein's universe in the last section of his 1926 paper devoted to the general classification of extragalactic nebul{\ae}. Here Hubble calculated that the mean density of world matter was $\rho=1.5\cdot10^{-31}$ g/cm$^{3}$, which implied that $R=2.7\cdot10^{10}$ pc ($5.6\cdot10^{15}$ AU) \cite[p. 369]{Hubble 1926}.
An estimate of $R$, as noted in \cite[p. 151]{Kragh 2007}, was found in the same year by Wilhelm Lenz (1888-1957), now in relation to thermodynamic equilibrium. Lenz applied to the volume of the Einstein universe the 1925-26 analysis proposed by Otto Stern (1888-1969) on the relation between the energy density of matter, the energy density of black body radiation, and the temperature of space. The result, which Lenz acknowledged to be of a ``fascinating simplicity'', was that, at equilibrium, the matter energy was equal to the radiation energy. The relation proposed by Lenz between the temperature $T$ and $R$ was:
\begin{equation}
T^{2}=\frac{1}{R}\left(\frac{2c^{2}}{a\kappa}\right)^{1/2}\simeq\frac{10^{31}}{R},
\end{equation}
where $a$ is the Stefan-Boltzmann constant, and $R$ is expressed in cm. As Lenz noted, a density of world matter of the order of $10^{-26}$ g/cm$^{3}$ led to $R=10^{26}$ cm ($6.7\cdot10^{12}$ AU), and consequently the black body temperature was too high, about $T = 300$ K. On the contrary, by assuming in such a relation a radiation temperature of 1 K, $R$ turned out to be about $10^{31}$ cm ($6.7\cdot10^{17}$ AU) \cite[p. 644]{Lenz 1926}.
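Lenz's figures are easy to verify; the following sketch (ours, not part of the original paper, using the rounded constant $10^{31}$ quoted above together with the modern value $1\,\textrm{AU}\simeq1.496\cdot10^{13}$ cm) reproduces both estimates:
\begin{verbatim}
# Sanity check of Lenz's relation T^2 ~ 1e31 / R (R in cm, T in K).
AU_CM = 1.496e13                      # 1 AU in cm (modern rounded value)

def lenz_temperature(R_cm):
    return (1e31 / R_cm) ** 0.5       # T from R

def lenz_radius(T_K):
    return 1e31 / T_K ** 2            # R from T

print(lenz_temperature(1e26))         # ~316 K: "too high", as Lenz noted
print(lenz_radius(1.0))               # 1e31 cm for T = 1 K
print(lenz_radius(1.0) / AU_CM)       # ~6.7e17 AU, as quoted
\end{verbatim}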
Two years later, Robertson considered the distances of spirals reported in Hubble's 1926 paper, in relation to Slipher's radial velocity data reported in Eddington's 1923 book on relativity. According to Robertson, these observations were able to confirm a nearly linear redshift-distance relation which Robertson had derived from his own non-static version of the metric of de Sitter's universe. By means of such data, Robertson found the curvature radius of the empty universe to be $R=2\cdot10^{27}$ cm ($1.3\cdot10^{14}$ AU) \cite[p. 845]{Robertson 1928}.
In 1929, Tolman offered a very detailed analysis of the static form of de Sitter's line element, in which he investigated the properties of a formula for the de Sitter effect more general than the one previously found by Silberstein. Tolman assumed that $R$ would be at least ten times the range of the most distant galaxies: $R\geq2\cdot10^{8}$ light-years ($1.2\cdot10^{13}$ AU) \cite[p. 271]{Tolman 1929}. Furthermore, in order to account for the general tendency to scatter in de Sitter's universe, Tolman introduced the hypothesis of continuous entry (and even continuous formation), namely that ``nebul{\ae} are continually entering, as well as leaving the range of observation'', from which he concluded that $R=2\cdot10^{9}$ light-years ($1.2\cdot10^{14}$ AU) \cite[p. 272]{Tolman 1929}.
A further estimate of $R$ had already been proposed in 1927, but in a different context, i.e. within the considerations on the time-dependent world radius which appeared in the paper on the expanding universe written in that year by Lema\^{i}tre. As a matter of fact, it was Friedmann who first showed in 1922 that Einstein's and de Sitter's solutions were ``special cases of more general assumptions'', and then demonstrated ``the possibility of a world in which the curvature of space is independent of the three spatial coordinates but does depend on time'' \cite[p. 49]{Friedmann 1922}. However, Friedmann did not relate his theoretical predictions to astronomical observations. On the contrary, this was done in 1927 by Lema\^{i}tre, who took empirical data into account in order to calculate $R(t)$. In Lema\^{i}tre's paper, a partial English translation of which was circulated in 1931, the Belgian scientist pointed out that ``in order to find a solution combining the advantages of those of Einstein and de Sitter, we are led to consider an Einstein universe where the radius of space or of the universe is allowed to vary in an arbitrary way'' \cite[p. 484]{Lemaitre 1931}. The radius $R\equiv\,R(t)$ asymptotically increased with time, starting from $R_{0}=\lambda^{-1/2}$, a value which depended on the cosmological constant and was the radius at $t=-\infty$ \cite[p. 94]{Lemaitre 1927}. In fact, it was Lema\^{i}tre who offered in his 1927 paper a solution to the puzzling interpretation of the relevant redshifts. He clearly stated that such spectral displacements in spirals were a cosmical effect due to the variation of $R$, i.e. to the expansion of the universe. The redshift-distance relation valid for near objects was \cite[p. 96]{Lemaitre 1927}:
\begin{equation}
z=\frac{v}{c}\simeq\frac{R_{2}-R_{1}}{R_{1}}=\frac{R'}{R}r,
\end{equation}
where $R_{1}$ and $R_{2}$ were, respectively, the radius of the universe at the time of emission of a light signal and that at the epoch of reception, and $R'$ referred to the derivative of $R$ with respect to time. Lema\^{i}tre calculated the distance of 43 nebul{\ae} by assuming, as Hubble had done in 1926, that they had the same absolute magnitude. The average distance he found was $r=10^{6}$ pc ($2\cdot10^{11}$ AU) \cite[p. 96]{Lemaitre 1927}. With regard to the radial velocity, the Belgian scientist referred to the observations collected in 1925 by Str\"{o}mberg and in 1926 by Hubble, and assumed $v$ = 625 km/sec as the average velocity at this distance\footnote{Actually, this can be seen as the first suggestion of the value of what later became known as the ``Hubble constant'', in the sense that here Lema\^{i}tre stated that, by assuming the proportionality between $v$ and $r$, a galaxy observed at the distance of 1 Mpc would recede with a velocity of 625 km/sec. However, the section containing these values and the calculation proposed by Lema\^{i}tre in 1927 was not reported in the 1931 English translation. In 1929, Hubble, unaware of Lema\^{i}tre's result, obtained for this constant, i.e. for the $K$ term in his linear relation $v=Kr$, a value ranging from +465 to +530 km/sec \cite[pp. 170-172]{Hubble 1929}. We refer to \cite{Trimble 1996} for a historical reconstruction of the determination of the Hubble constant from 1925 to 1975.}, while the range was between +575 and +670 km/sec \cite[p. 97]{Lemaitre 1927}. In this way Lema\^{i}tre found:
\begin{equation}
\frac{R'}{R}\equiv\,y=0.68\cdot10^{-27}\,\textrm{cm}^{-1},
\end{equation}
from which it followed:
\begin{equation}
R=R_{A}\sqrt{y}=6\cdot10^{9}\,\textrm{pc}\,(\simeq1.2\cdot10^{15}\,\textrm{AU}),
\end{equation}
\begin{equation}
R_{0}=R_{A}\,y^{3/2}=2.7\cdot10^{8}\,\textrm{pc}\,(\simeq5.6\cdot10^{13}\,\textrm{AU}).
\end{equation}
Here $R_{A}$ was the constant radius of the Einstein universe, for which Lema\^{i}tre used the value determined by Hubble in 1926 ($R_{A}=2.7\cdot10^{10}$ pc\,$\simeq5.6\cdot10^{15}$ AU) \cite[p. 98]{Lemaitre 1927}.
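The quoted value of $y$ follows directly from the two averages; the short sketch below (our own verification, not part of Lema\^{i}tre's paper, using the modern rounded conversions $1\,\textrm{pc}\simeq3.086\cdot10^{18}$ cm and $c\simeq2.998\cdot10^{10}$ cm/sec) recovers his number:
\begin{verbatim}
# Reproduce Lemaitre's y = R'/R from v ~ 625 km/sec at r ~ 1e6 pc.
C_CM_S = 2.998e10          # speed of light in cm/sec
PC_CM  = 3.086e18          # 1 pc in cm (modern rounded value)

v = 625e5                  # 625 km/sec in cm/sec
r = 1e6 * PC_CM            # 10^6 pc in cm

y = (v / C_CM_S) / r       # z / r, with z = v/c
print(y)                   # ~6.8e-28 cm^-1, i.e. 0.68e-27 as quoted
\end{verbatim}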
The relativistic field equations in the form derived by Friedmann and independently by Lema\^{i}tre, known as the ``Friedmann-Lema\^{i}tre equations'', related the time-dependent world radius to the world matter content and the cosmological constant, and were able to describe the evolution of the expanding universe (later on, in such equations $R(t)$ was substituted by $a(t)$, which refers to the expansion parameter, or cosmic scale factor). Finally, in view of the cosmic recession of galaxies from each other, empirically confirmed by Hubble's observations, the rediscovery in 1930 of the dynamical models of the expanding universe formulated by Friedmann and Lema\^{i}tre inaugurated a new phase in the modern understanding of the universe as a whole.
\begin{table}
\caption{Summary of the estimates of the size of the universe from 1917 to 1930 ($R_{A}$ and $R_{B}$ refer to, respectively, the model of the universe of Einstein, and that of de Sitter). One of the currently accepted estimates of the curvature scale $R_{c}$ of the universe is $R_{c}>42$ Gpc\, ($\simeq87\cdot10^{14}$ AU). It is worth noting that $R_{c}$ is now expressed as a function of the Hubble constant and the energy-matter density of the universe (for details, see \cite{Vardanyan-Trotta-Silk 2011}).}\label{final}
\begin{center}
\begin{tabular}{llllr}
\hline
author & year & method or astronomical objects & radius & AU\\
\hline
Einstein & 1917 & matter density (stars) & $R_{A}=$ & $6\cdot10^{11}$\\
de Sitter & 1917 & galaxy apparent diameter & $R_{A}\geq$ & $10^{12}$\\
& & matter density (stars) & $R_{A}=$ & $9\cdot10^{11}$\\
& & matter density (galaxies) & $R_{A}\leq$ & $5\cdot10^{13}$\\
& & light absorption & $R_{A}>$ & $ 1/4\cdot10^{12}$\\
& & $B$ stars & $R_{B}=$ & $2/3\cdot10^{10}$\\
& & Small Magellanic Cloud & $R_{B}>$ & $2\cdot10^{11}$\\
& & 3 galaxies & $R_{B}=$ & $3\cdot10^{11}$\\
Eddington & 1920 & large numbers hypothesis & $R_{A}=$ & $4\cdot10^{16}$\\
Silberstein & 1922 & matter density (galaxies) & $R_{A}\geq$ & $10^{12}$\\
Weyl & 1923 & Andromeda galaxy & $R_{B}=$ & $10^{9}$\\
Silberstein & 1924 & 7 globular clusters & $R_{B}=$& $6\cdot10^{12}$\\
& & 8 globular clusters + Magellanic Clouds & $R_{B}\geq$ & $9.1\cdot10^{12}$\\
& & 11 globular clusters + Magellanic Clouds& $R_{B}=$ & $7.2\cdot10^{12}$\\
Lundmark & 1924 & 18 globular clusters & $R_{B}=$ & $19.7\cdot10^{12}$\\
& & Cepheid stars & $R_{B}=$ & $7.5\cdot10^{12}$\\
& & Nov{\ae} stars & $R_{B}=$ & $41\cdot10^{12}$\\
& & $O$ stars & $R_{B}=$ & $4\cdot10^{12}$\\
& & $R$ stars & $R_{B}=$ & $6.7\cdot10^{12}$\\
& & $N$ stars & $R_{B}=$ & $2.3\cdot10^{12}$\\
& & Eclipsing variable stars & $R_{B}=$ & $2.7\cdot10^{12}$\\
& & 41 galaxies & $R_{B}=$ & $2.4\cdot10^{12}$\\
Hubble & 1926 & matter density (galaxies) & $R_{A}=$ & $5.6\cdot10^{15}$\\
Lenz & 1926 & radiation temperature & $R_{A}=$ & $6.7\cdot10^{17}$\\
Lema\^{i}tre& 1927 & 43 galaxies & $R(t)$= & $1.2\cdot10^{15}$\\
Robertson & 1928 & 42 galaxies & $R_{B}=$ & $1.3\cdot10^{14}$\\
Tolman & 1929 & galaxies & $R_{B}\geq$ & $1.2\cdot10^{13}$\\
& & continuous entry hypothesis & $R_{B}=$ & $1.2\cdot10^{14}$\\
de Sitter & 1929 & matter density (galaxies) & $R_{A}=$ & $3\cdot10^{13}$\\
Silberstein & 1930 & 18 globular clusters + Magellanic Clouds & $R_{B}=$ & $7.4\cdot10^{12}$\\
& & 29 Cepheid stars & $R_{B}=$ & $3\cdot10^{11}$\\
& & 35 $O$ stars & $R_{B}=$ & $3.2\cdot10^{11}$\\
& & 459 stars & $R_{B}=$ & $4\cdot10^{11}$\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
The period ranging from Einstein's static cosmological model of 1917 to the official entrance, in 1930, of the proposal of the expanding universe was characterized by a variety of ideas, discoveries, and controversies.
In those years new perspectives were opened in the far-reaching and still ongoing challenge of comprehending the universe by means of the laws of physics. The leading scientists dealt with several issues which emerged from the first tortuous, but at the same time fruitful, interplay between the predictions of general relativity and astronomical observations. It is noteworthy that some topics faced in those years are still present in the cosmological debates. We mention, for instance, issues such as the postulate of the homogeneity and isotropy of space, the existence of visual horizons, the cosmological interpretation of spectral shifts, and the use of distance indicators for extragalactic objects.
Certainly, the interest in the determination of the size of the universe faded away as the expanding universe entered modern cosmology. Nonetheless, as shown in the present analysis, the contributions given from 1917 to 1930 to this specific question can be seen as an interesting example of the efforts that were made to achieve a coherent picture of the universe by understanding, in agreement with the legacy of Galileo, its \textit{mathematical language} in the light of the \textit{sensible experiences}.
``The theory of today - de Sitter wrote in 1932 - is not the theory of tomorrow. (...) Science is developing so very rapidly nowadays, that it would be preposterous to think that we had reached a final state in any subject. The whole of physical science, including astronomy, is in a state of transition and rapid evolution. Theories are continually being improved and adapted to new observed facts. It would certainly not be right to suppose at the present time that we had reached any state of finality. We are, however, certainly on the right track'' \cite[pp. 103-104]{de Sitter 1932a}.
\begin{acknowledgements}
We express our gratitude to Jan Guichelaar, Roberto Lalli, Dan Lewis, Liliane Moens, Laura Rigoli, Sofia Talas, Frans van Lunteren, Luciano Vanzo. We are very grateful to Tilman Sauer for his invaluable comments and suggestions. This work has been supported in part by the Ateneo Research Projects, University of Padova.
\end{acknowledgements}
\section{Introduction}
Remote communication plays a crucial role in modern societies.
An essential aspect of effective remote communication is co-presence, in which multiple participants can see and interact with each other in a shared virtual environment, a setting known as \emph{Virtual co-presence (VCP)}. VCP is the sense of being with others in a virtual world, such that people are psychologically connected, and available and accessible to others \cite{bulu2012place}. VCP environments can be more engaging than text- or voice-based chat in communication, collaboration, and training.
Recently, such a virtual co-presence has been studied in remote education~\cite{Mustufa12VRForDistanceEdu}, training simulations \cite{hooper2019virtual,schmidt2019heidelberg}, therapy treatments~\cite{WiederholdVRET05Book} and social interaction venues \cite{hudson2019or}.
A VCP environment can be created using a Virtual Reality (VR) platform. A well-known approach to representing a human character in VR environments is to use ``avatar'' models. An avatar is a digital form of a human character that can be represented in 2D or 3D. 3D avatars have some advantages over 2D avatars, such as being more human-like, having realistic motions, and offering an immersive experience in VR and VCP environments.
In this paper, we investigate different technologies designed toward creating VCP environments based on digital human avatar models. We evaluate how the evolution of technologies in Artificial Intelligence (AI) and Computer Graphics affects the quality of human representation in VCP environments.
We mainly study two types of research: (1) works that create or use digital avatar models in VCP environments, and (2) works that build avatar models or pipelines that are highly beneficial for developing VCP environments.
For both types of research, we explain and compare their advantages, limitations, and applications.
We also briefly discuss the methods that used forms of human character representation other than digital human avatars to create VCP environments.
Fig. \ref{Fig:survey} shows our methodology for categorizing the works carried out in the literature to construct VCP environments. As can be seen in the figure, we classify non-avatar-based methods into three categories: \emph{Robotics}, \emph{Mobile systems}, and \emph{Images and videos}. We also compare non-avatar-based approaches based on their applications, advantages, and disadvantages.
On the other hand, we classify the avatar-based approaches into two main types: \emph{Direct-motion retargeting} and \emph{Pre-rendered motions}. In Direct-motion retargeting methods, the motion is transferred directly from users to digital avatars. In contrast, in approaches that exploit Pre-rendered motions, the motions are pre-defined by developers.
Direct-motion retargeting methods provide authentic motions resembling users' actions. Although some of these strategies are not used directly in VCP environments, their algorithm pipelines or avatar models are highly beneficial in creating prospective VCP environments based on digital human avatars.
Next, we further divide the Direct-motion retargeting category into two classes: \emph{Image-based} and \emph{Sensor-based} approaches. In the image-based strategies, the input commonly consists of RGB images, video, or depth images.
On the other hand, in the sensor-based approaches, the input is obtained from wearable sensor devices, tracking sensors, and tracking markers.
Both Pre-rendered motion and Sensor-based categories mostly include similar application-based approaches. As a result, we compare these two categories based on their applications, benefits, and drawbacks.
The Image-based category consists of two groups: \emph{Offline motion retargeting} and \emph{Online motion retargeting}. Offline motion retargeting approaches often offer more accurate results than online methods by reconstructing high-quality body shape, pose, and textures. Online motion retargeting is, however, faster and more suitable for real-time VCP or VR experiments. Furthermore, the online motion retargeting approaches consist of two types: (1) \emph{3D model reconstruction} methods, which reconstruct the human character model or scene, and (2) methods that use pre-designed \emph{Rigged avatar} models.
In this paper, we compare different Image-based strategies based on their applications, advantages, and disadvantages in VCP environments. Since the image-based methods commonly make technical contributions, we will also illustrate the gradual evolution of technology in each sub-category. This illustration explains how new approaches resolve previous issues.
There has been little research reviewing the different approaches to building VCP environments. Kristoffersson et al. \cite{kristoffersson2013review} are the only researchers who conducted a comprehensive study on VCP environments. However, they discussed only the strategies that utilized non-avatar/robotics methods. In our research, on the other hand, we focus on the use of digital human avatars, which are the state-of-the-art form of human representation in VCP environments.
Moreover, there has also been a lack of thorough research on human avatars in general. Hudson et al. \cite{hudson2016avatar} conducted a short study on general applications of digital human avatars. Our literature review, however, investigates the strategies toward building VCP environments. We also carried out a significantly more comprehensive survey than \cite{hudson2016avatar} by categorizing different methods, investigating their advantages, limitations, and applications, comparing various approaches and categories, and evaluating the evolution of this new technology.
\begin{figure*}[h!tbp]
\centering
\includegraphics[width=1.0\textwidth]{images/overview.jpeg}
\caption{Our categorization methodology of different methods in the literature toward creating Virtual co-presence (VCP) environments. ~\label{Fig:survey}}
\end{figure*}
\FloatBarrier
The \textbf{contributions} of this research are as follows. To the best of our knowledge, this literature review is the first to investigate the different approaches toward building VCP environments upon digital human avatar models. Notably, the categorization methodology of avatar-based methods in this paper is new.
Moreover, we conducted a comprehensive analysis of the different categories and methods based on their advantages, limitations, applications, and the evolution of the technologies used in VCP environments.
\section{Non-Avatar-based Approaches}
\label{Sec:non-avatar}
Non-avatar-based approaches usually rely on physical hardware to create a VCP environment. The hardware can be an interactive robot, a mobile system installed on a moving platform, or traditional video-based telepresence. While this research's primary focus is on avatar-based approaches, we briefly explain non-avatar methods as follows:
\subsection{Robotics}
Robotics has been used for various VCP/telepresence purposes, such as communication, environmental visualization, and training physicians.
However, its most widespread usage has been to improve the interactions of a social VCP environment.
For example, in \cite{minato2012development} Minato et al. suggested a portable human-like robot through which users can feel others' presence. This human-like sensation is achieved by using a human-like voice, appearance, and touch, so that users can communicate and talk to others remotely. The experiments showed that people quickly started conversations with the robot and were impressed by its shape and feel.
In \cite{yoon2015control}, Yoon et al. proposed a robotic telepresence system with some modern features, such as a projector and a head-tracker system. These features make the communication between the user and the robot more interactive. The suggested features are unique in telepresence applications and led the system to be more effective than traditional robotic systems.
Robotics has also been used for environmental visualization. For instance, in \cite{macharet2012collaborative} Macharet et al. designed a telepresence robot to visualize the environment for users. The robot is controlled by a remote human operator and can smoothly navigate through a house, with the capability of handling complicated situations such as narrow corridors and doors. The results showed that the proposed method helps to reduce the number of environmental collisions while navigating with the robot.
The main drawbacks of using robots are the need for regular maintenance, the robot's limited ability to perform human-like interactions, the difficulty of adding new features, and users' discomfort in using robots.
\subsection{Mobile Systems}
A mobile system is a set of interactive tools integrated on moving frames to create a VCP environment. Compared to conventional robots, mobile systems are more focused on mobility, accessibility, and practical applications \cite{parker2016SHR}.
As an example, in ~\cite{beer2011mobile} Beer et al. designed a mobile system consisting of several modules, such as a touch screen installed on a phone frame, a microphone, a web camera, and speakers, to help elders interact with visitors. The experimental results showed that elders had a good experience of interactivity and visibility with the designed mobile system, which can reduce traveling costs and social isolation.
In ~\cite{lee2011now} Lee et al. suggested a mobile system to enable remote workers to live and work with local coworkers similarly to how they would do it physically in real life. They exploited a Texai Alpha prototype \cite{texai} mobile system for this purpose. The experimental results obtained from the surveys indicated that the remote pilots had an experience similar to that of working with real local coworkers.
While mobile systems can be equipped with state-of-the-art electronic tools, they share similar problems with robotic systems. Moreover, they strictly rely on the manufacturer's available development kit tools for adding any new features \cite{hening2013ICPTAE}.
\subsection{Images and Videos}
Using images and videos is the traditional way of creating a co-presence environment. As an example of such an approach, in \cite{noda2015implementation} Noda et al. proposed a telecommunication system to connect different users using a configurable tile display. The proposed scheme includes several features to offer a realistic sensation of co-presence, such as life-size processing, background subtraction, and the use of multiple cameras. The designed VCP system provides a higher sense of presence compared to traditional video-based approaches.
An advanced video-based VCP environment called ``real-world'', where users can view each other from different perspectives, is designed in \cite{maeda2004real}. The real-world implementation is accomplished by using a multiview video-capturing system and eighteen PCs in a cylindrical chamber. The experimental results showed that the suggested system can be delivered efficiently to users, who watch each other in real time.
Compared to robotics and mobile systems, using images and videos for creating VCP environments has some advantages and drawbacks. For example, while large displays can provide more human-like sensation and require minimum maintenance, they lack interactivity and a real 3D experience.
Table \ref{Tab:nonavatar} summarizes the different non-avatar-based methods that have been suggested to create VCP environments.
\begin{figure*}[h!tbp]
\centering
\begin{tabular}{c c c}
\includegraphics[height=0.23\textwidth]{images/robot.jpeg} &
\includegraphics[height=0.23\textwidth]{images/mobile.jpeg} &
\includegraphics[height=0.23\textwidth]{images/video.jpeg} \\
(a) & (b) & (c) \\
\end{tabular}
\caption{Examples of (a) robotics \cite{minato2012development}, (b) mobile system ~\cite{lee2011now} and (c) image- and video-based \cite{noda2015implementation} VCP environments.
\label{Fig:non-avatar}}
\end{figure*}
\begin{table}[h!tbp]
\centering
\caption{Summary of non-avatar methods.
~\label{Tab:nonavatar}}
\begin{tabularx}{\linewidth}{LLLL}
\hline
Category & Applications & Advantages & Disadvantages\\ \hline
Robotics & telepresence \cite{minato2012development,yoon2015control} and remote navigation~\cite{macharet2012collaborative}
& compactness and configurable modular systems & regular maintenance, limited interactions and features and users' discomfort \\ \hline
Mobile Systems & elderly healthcare assistance~\cite{beer2011mobile} and remote working~\cite{lee2011now} & mobility, accessibility and practical applications & regular maintenance and limited features and interactions \\ \hline
Images and Videos & tele-communication \cite{noda2015implementation,maeda2004real} & large displays and minimum maintenance & lack of interactivity and real 3D experience
\\ \hline
\end{tabularx}
\end{table}
\section{Avatar-based Approaches}
\label{Sec:avatar}
In avatar-based approaches, a digital 3D human model commonly represents real humans in VCP environments \cite{hasler2013CHB}. Compared to non-avatar methods, using an avatar does not require any special maintenance and offers more human-like sensations and interactions in a 3D world.
Moreover, in contrast to non-avatar approaches that rely on hardware, avatar-based strategies depend on software to create a VCP environment. Hence, they are more flexible in adding new features and can be upgraded based on state-of-the-art AI and Computer Graphics algorithms to fulfill users' needs.
In this paper, we first categorize the avatar-based approaches into two types: \emph{Pre-rendered motions}, where the animations are pre-defined, and \emph{Direct motion retargeting}, where users directly control the avatars. We explain the aforementioned categories in more detail as follows:
\subsection{Pre-Rendered Motions}
\label{Sec:pre-render}
A 3D avatar in a VCP environment can be animated by pre-designed motions and scenes. For example, in \cite{pazour2018virtual}, Pazour et al. simulated a conference room with user-defined avatars that can communicate with each other remotely. The primary goal of this simulation is to evaluate users' realistic feeling of co-presence in a virtual environment. Two experiments were conducted, in which head motions are controlled in two scenarios: (1) tracking by a Head-Mounted Device (HMD), the HTC Vive, and (2) using mouse movements on a desktop PC. The avatars' upper and lower body are animated using pre-rendered animations developed with Mixamo~\cite{Mixamo}. The experimental results showed that using the VR headset outperforms the desktop (mouse movement) in terms of feeling other users' presence.
Pre-rendered motions have also been used to simulate metropolitan structures such as a college \cite{monahan2008virtual}, an airport \cite{li2016virtual}, and a museum \cite{mu2009implementation}. For example, in \cite{monahan2008virtual}, Monahan et al. simulated a college environment with students and teachers. In the designed scenario, students select a unique 3D character during the registration process to represent them onscreen in the 3D university environment. The avatars are human-like and can perform a variety of pre-designed actions associated with their roles. The results illustrated the effectiveness of the proposed technique and users' realistic feeling of others' activities and presence.
In another example, in \cite{li2016virtual}, Li et al. simulated the takeoff of an aircraft on an airport runway. Five users can interact with each other while wearing VR goggles to see the other users' avatars. The users can take the role of a pilot, a support officer, a tractor guide, a carrier aircraft guide, or a tractor driver. The surveys carried out illustrated that the proposed type of exhibit is exciting and attractive for users.
In \cite{mu2009implementation} Mu et al. designed a VR environment for multi-user learning in a museum. The primary mode of interaction between users is pre-defined gestures, which users can customize through a Graphical User Interface (GUI). Participants can select their appearance and clothing before entering the VR environment. The results indicated that the method was effective in transferring users' motions to the virtual location.
Pre-rendered motions have also been used for psychological testing purposes. For example, in ~\cite{brown2017coordinating}, Brown et al. presented a narrative story in a VR environment. Users can play the game over a network connection wearing a head-mounted display (Oculus Rift). The goal is to study a set of guided camera and gaze-distracting techniques to determine how to attract unfocused individuals to the same story. The results provided a better understanding of the factors that attract users' attention to a narrative story.
Using pre-rendered motions and scenes can be suitable for scenarios in which specific animations satisfy users' needs. However, in most cases, a more authentic approach that creates movements directly from users' actions is desirable. To achieve this, many researchers directly retarget motions from users to 3D avatars.
\subsection{Direct-Motion Retargeting}
In this category, the motion data is directly transferred from humans to 3D avatars. The surrounding environment can consist of pre-designed \cite{shapiro2014automatic,li20193d,beck2013immersive} or reconstructed scenes \cite{newcombe2015dynamicfusion,orts2016holoportation}. We divide the Direct-motion retargeting strategies into two types: \emph{Image-based} methods, where the inputs are image, video, or depth data, and \emph{Sensor-based} approaches, where the inputs are obtained from sensor equipment.
While some of the works in this category are directly implemented in VCP environments, others suggested avatar modeling pipelines that are profoundly beneficial toward creating VCP environments.
\begin{figure*}[h!tbp]
\centering
\begin{tabular}{c c}
\includegraphics[height=0.28\textwidth]{images/airport.jpeg} &
\includegraphics[height=0.28\textwidth]{images/psy.jpeg} \\
(a) & (b) \\
\end{tabular}
\caption{Pre-rendered motions used for creating VCP environments to simulate an airport~\cite{li2016virtual} and a psychological test~\cite{brown2017coordinating}.
\label{Fig:pre-rendered}}
\end{figure*}
\textbf{Sensor-based Approaches}
\label{Sec:sensor}
To animate human avatars, different sensors have been used, such as wearable sensors \cite{lifkooee2019real,han2017simulating,andreadis2010real}, head-mounted sensors \cite{herder2019avatars,wang2019effect,rauter2019augmenting}, and motion capturing systems with tracking markers~\cite{camporesi2015effects}.
There have been different applications for sensor-based human avatar animation generation. Some researchers utilized sensors to evaluate the effectiveness of human avatars \cite{wang2019effect,camporesi2015effects,han2017simulating}. As an example, in \cite{wang2019effect} Wang et al. adopted HMD sensors (HTC Vive) to investigate the performance of different segment levels of a human avatar, such as partial hands, full hands, and full body. The experimental results indicated that the full-body avatar leads to the highest performance and satisfaction when HMD sensors are used.
As another example, in \cite{camporesi2015effects} Camporesi et al. evaluated the performance of avatar and non-avatar techniques using a motion capturing system with ten cameras that can track users' head and body based on trackable markers. The results indicated that exploiting avatar-based techniques can improve the quality and speed with which users finish their assigned tasks.
In \cite{han2017simulating} Han et al. simulated humans' upper-body motions using head and hand motion capturing sensors whose data are transferred over two different channels. The experimental results showed an 80\% consistency rate between real human actions and digital avatar motions.
In \cite{herder2019avatars}, Herder et al. created a VCP environment to simulate a large factory with industrial machines. Users take the role of new workers and are trained based on a basic tutorial. Participants are tracked and animated in real-time using HMD sensors and development kits provided by HMD manufacturers. The results indicated that using a human avatar helps stimulate communication between users through highly immersive interactions and engagement.
Sensor-based animated avatars have also been used to simulate social events \cite{andreadis2010real,de2019watching}.
For example, in \cite{andreadis2010real} Andreadis et al. suggested a VCP environment to simulate a theater containing actors and other scenery subjects that are streamed to a multi-screen display. While the main actors' interactions are captured using a real-time motion capturing system and wearable magnetic sensors, other subjects such as animals are animated using automatic AI-assisted motions. The effectiveness of the proposed VCP environment was validated by positive feedback received from both regular audiences and experts.
As another example, in \cite{de2019watching}, De et al. utilized Facebook Spaces, a pre-designed commercial software, to evaluate users' interaction. Participants interact with other human-like avatars by talking to them and listening to their conversation while watching movies. The final results showed that users have similar experiences of watching movies together when using 3D avatars and when using a traditional video-based environment.
Another application of sensor-based animated avatars is to analyze human behavior and motions in different scenarios~\cite{park2019investigation,rauter2019augmenting}.
As an example, in \cite{park2019investigation} Park et al. created a system to capture and analyze users' walking-in-place movement in VR environments. The motion sensors are installed on users' lower legs, and the avatar's lower leg is animated when any motion is detected. They concluded that the avatar's movement is natural and accurate.
In \cite{rauter2019augmenting}, a mixed-reality environment is designed to investigate the interaction between users and objects. The authors combined the real world and virtual objects using depth sensors and an HTC Vive Pro HMD. The experimental results indicated that creating such a scenario, in which users interact with virtual near-real objects in a mixed-reality environment, is highly viable for real-world applications. As another instance, in \cite{maloney2019dc} Maloney suggested a VR environment where users can embody avatars of different races to measure their racial bias. Users need to shoot human-like targets that are described as aliens invading the Earth. The author concluded that the proposed strategy is successful in decreasing users' implicit bias against different races.
Compared to pre-rendered animations, generating motions by using sensors is more flexible, allowing users to perform various actions. However, using sensors has some limitations, such as the need for proper maintenance, calibration, and training, and the difficulty of wearing or using the sensor devices.
Both sensor-based and pre-rendered motion-based approaches are more focused on applications than on techniques. Therefore, in Table \ref{Tab:avatar} we compare these two categories based on their applications, advantages, and disadvantages.
\begin{figure*}[h!tbp]
\centering
\begin{tabular}{c c}
\includegraphics[height=0.32\textwidth]{images/factory.jpeg} &
\includegraphics[height=0.32\textwidth]{images/facebook.jpeg} \\
(a) & (b) \\
\end{tabular}
\caption{Sensor-based VCP environments created to (a) simulate a large factory~\cite{herder2019avatars} and (b) watch movies together~\cite{de2019watching}.
\label{Fig:sensors}}
\end{figure*}
\begin{table}[h!tbp]
\centering
\caption{Comparison between Pre-rendered motion and Sensor-based (direct motion retargeting) approaches
~\label{Tab:avatar}}
\begin{tabularx}{\linewidth}{LLLL}
\hline
Category & Applications & Advantages & Disadvantages\\ \hline
Pre-rendered motions & virtual conference room \cite{pazour2018virtual}, simulating metropolitan structures: university~\cite{monahan2008virtual}, airport~\cite{li2016virtual} and museum~\cite{mu2009implementation}, psychological test \cite{brown2017coordinating} & minimal motion transfer error, ease of design based on the application, no need for maintenance and calibrations or wearing extra equipment & limited number of interactions and lack of authentic motion directly transferred from users\\ \hline
Sensor-based direct motion retargeting & evaluating the effectiveness of human avatars~\cite{wang2019effect,camporesi2015effects,han2017simulating}, simulating social events~\cite{andreadis2010real,de2019watching}, analyzing human behavior and motions~\cite{park2019investigation,rauter2019augmenting} & authentic motion transfer directly from users, no limitation on variety of motions & need for regular maintenance, calibration and training and difficulty of using wearable devices\\ \hline
\end{tabularx}
\end{table}
\textbf{Image-based Approaches}
\label{Sec:image}
Images (including RGB and depth) have been widely used as the primary inputs for motion synthesis and retargeting \cite{wang2019generative,yang2020transmomo,lifkooee2018image,doersch2019sim2real}. As a result, images can be utilized as the inputs to animate human avatars in a VCP environment.
In contrast to wearing or using sensors, capturing images does not require specific maintenance, training, or calibration. Moreover, users have more freedom to perform any action, without the limitations caused by wearable devices.
Image-based strategies can be further divided into Offline and Online retargeting approaches. In Offline methods, the retargeting process can be computationally expensive and might not be suitable for real-time applications. On the other hand, such methods might offer better accuracy by reconstructing the human body shape, pose, face, and textures. Nevertheless, Online motion retargeting methods are capable of being used in real-time, since they are optimized and sometimes implemented with simplified realizations~\cite{orts2016holoportation}.
We summarize the comparison between the different image-based strategies in Table \ref{Tab:image} and then explain each sub-category in detail as follows.
\begin{table}[h!tbp]
\centering
\caption{Brief comparison of different Image-based Direct motion retargeting approaches
~\label{Tab:image}}
\begin{tabularx}{\linewidth}{LLLL}
\hline
Category & Applications & Advantages & Disadvantages\\ \hline
Image-based Offline motion retargeting & non-real-time applications such as creating pre-rendered avatars, repairing avatar model damage caused by scanning errors & accurate and high-resolution geometry reconstruction and texturing, accurate offline inpainting to fix the 3D model damage & expensive computational cost and difficulty of being implemented in real-time applications \\ \hline
Image-based Online motion retargeting (Rigged avatar) & VCP environments with generic avatars such as workers in training environments, systems with simple setup & low computational cost, simple system setup, flexibility of selecting the avatars based on the application & lack of an authentic avatar that resembles the user's appearance, motion retargeting error caused by different skeleton structures \\ \hline
Image-based Online motion retargeting (3D model reconstruction) & VCP environments with avatars resembling users: tele-presence, tele-conference and meeting & authentic avatar resembling the user's appearance, capability of reconstructing the surrounding environment in real-time & high computational cost, complex system design and setup\\ \hline
\end{tabularx}
\end{table}
\textbf{Offline Motion Retargeting}
\label{Sec:offline}
Offline motion retargeting is mainly used to reconstruct high-quality human body shape, pose and textures \cite{bogo2015detailed,cui2012kinectavatar,korban20193d,lim2014rapid,malleson2017rapid,shapiro2014rapid,zhang2014quality}.
As an example, in \cite{lim2014rapid} Lim et al. fused several approaches, such as KinectFusion \cite{pagliari2014kinect} and an ICP-based registration algorithm, to generate the 3D avatar. The proposed method can create a human avatar with textures automatically. They enforce positional constraints to avoid motion artifacts. However, the ICP-based registration algorithm might not be suitable for highly deformable subjects such as humans.
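To make the role of such registration concrete, the following minimal sketch (ours, not the implementation of \cite{lim2014rapid}) shows the rigid alignment step at the core of ICP, assuming point correspondences between two scans are already known; a full ICP pipeline alternates this step with a nearest-neighbour correspondence search. The rigidity of the recovered transform is exactly why plain ICP struggles with deformable subjects such as the human body:
\begin{verbatim}
import numpy as np

def rigid_align(P, Q):
    """Kabsch/SVD step of ICP: find rotation R and translation t
    minimizing ||R @ P + t - Q|| for corresponding 3xN point sets
    P (source scan) and Q (target scan)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
\end{verbatim}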
To overcome such non-rigid deformation in human body geometry, in \cite{cui2012kinectavatar} Cui et al. suggested an approach based on a non-rigid registration algorithm to reconstruct 3D human body geometry. First, several images are captured using a depth camera (Microsoft Kinect). Then, multiple stages, mainly including a super-resolution-based algorithm and Poisson mesh reconstruction \cite{kazhdan2006poisson}, are applied to reconstruct the human body geometry and textures. The proposed method can automatically reconstruct non-rigid subjects such as humans smoothly and with high accuracy owing to the added color constraints. Nevertheless, the suggested approach might be sensitive to issues such as holes or missing regions and soft tissue deformation, since it utilizes low-dimensional statistical models.
To handle the soft tissue deformation and hole-filling problems, in \cite{bogo2015detailed} Bogo et al. exploited both high- and low-dimensional models to reconstruct human body shape and pose from RGB-D video sequences. While the low-dimensional model is used for initial pose estimation, the high-dimensional model is utilized to repose and accurately reconstruct the geometry. The reconstruction process is done by displacement mapping between the local and global geometries. The results showed that their method is reliable even in challenging situations such as varying resolution and soft tissue deformation. Still, the facial details cannot be encoded accurately because the statistical model is built upon human body landmarks.
There have been some methods that developed strategies to encode human facial features to be used in VR environments. These methods have been based on recognizing action units \cite{vicente2019development,lifkooee2019video}, motion tracking of facial landmarks \cite{kegel2020dynamic}, and bone marker tools \cite{el2019open}. Yet, these approaches only render faces, while most VCP environments require full human body avatar models.
To include effective facial features in full human body avatar models, in \cite{malleson2017rapid} Malleson et al. designed a system to create full-body avatars replicating the person's body shape, face, and textures. While the body shape is created using blendshapes based on the body dimensions obtained from the depth images, the face is reconstructed using blendweights and the facial landmarks obtained from images. The experimental results illustrated that the reconstructed avatars look real enough for users to feel others' real presence in a VR environment. However, the computational cost of the whole reconstruction process is high (around 10 seconds).
We summarize the evolution of technology (as mentioned above) in the Image-based Offline motion retargeting category in Table \ref{Tab:offline}.
\begin{table}[h!tbp]
\centering
\caption{Gradual evolution of technology in the Image-based Offline motion retargeting approaches
~\label{Tab:offline}}
\begin{tabularx}{\linewidth}{LLLL}
\hline
Paper/Year & Solved issue(s) from previous paper(s) & Proposed method to solve the issue(s) & Other used algorithms\\ \hline
Cui et al.~\cite{cui2012kinectavatar} 2012 & highly deformable human body &
non-rigid registration algorithm & super-resolution-based algorithm, Poisson-based mesh reconstruction \\ \hline
Bogo et al.~\cite{bogo2015detailed} 2015 & soft tissue deformation ~\cite{cui2012kinectavatar} & high dimensional models & low dimensional model for initial pose estimation, displacement mapping \\ \hline
Malleson et al.~\cite{malleson2017rapid} 2017 & lack of encoding facial landmarks~\cite{bogo2015detailed} & blendweights and facial landmarks & blendshapes for body shape reconstruction \\ \hline
\end{tabularx}
\end{table}
\textbf{Online Motion Retargeting}
\label{Sec:online}
Compared to Offline motion retargeting, Online methods are optimized and can be utilized in real-time. We categorize the Online motion retargeting strategies into two types: Rigged-avatar-based motion retargeting and 3D avatar model reconstruction. We explain each of them as follows:
\textbf{Rigged-Avatar-Based Motion Retargeting}:
In the Rigged-avatar-based motion retargeting category, the main focus is on the quality of the motion transfer rather than the quality of the 3D avatar models. In this group, pre-designed 3D avatar models, chosen according to the application, are often used. As a result, these methods are more suitable for VCP environments that do not require human avatars resembling users' appearance. They can also be implemented more cheaply and quickly.
For example, in \cite{jo2014avatar} Jo et al. created a 3D teleconference in an Augmented Reality (AR) environment using a generic rigged avatar model whose motion information is obtained by the Microsoft Kinect. They preserve the spatial properties of objects, so that users can sit on digital chairs similarly to a real environment. The surveys obtained indicated that users have a more impressive and realistic experience than with traditional video-based communication approaches. Still, they evaluated their AR system based on limited motions, while complex movements are inevitable in a real-time VCP environment.
This limitation of \cite{jo2014avatar} is addressed in \cite{lugrin2015avatar} and \cite{choi2019effects} by considering various motions to obtain a more reliable interactive environment. Specifically, in \cite{lugrin2015avatar}, Lugrin et al. suggested a strategy to evaluate the impact of different types of avatars on the performance of fitness training. They exploited the Microsoft Kinect to capture the motion data in real-time and transfer it to human-like rigged avatars.
As another example, in \cite{choi2019effects} Choi et al. evaluated the impact of different types of motions on users' feelings of body ownership and presence. They utilized an OptiTrack motion capturing system \cite{opt} with six cameras for real-time motion tracking and retargeting.
The experimental results indicated the effectiveness of these approaches \cite{lugrin2015avatar,choi2019effects} for training and testing purposes, respectively. However, the Kinect or motion capturing systems cannot extract facial features effectively because of the limited resolution of commercial motion cameras.
To tackle the lack of facial features in \cite{lugrin2015avatar,choi2019effects}, Roth et al.~\cite{roth2017socially} designed an immersive environment that uses a tracking system (OptiTrack sensors \cite{opt}), eye gaze tracking, and facial expression tracking (the Faceshift software) to evaluate users' social interactions. The proposed strategy offers various interactive features that help users' immersion in the environment based on a stereoscopic projected display. Still, the real-time tracking errors caused by complex motions are not evaluated, while these errors can negatively affect a high-quality VCP experience.
To reduce the artifacts caused by complex motions such as turning around or partial views, Kang et al.~\cite{kang2019real} suggested an adjustable filter integrated with multiple Inverse Kinematics (IK) constraints. The motion information is obtained from a Kinect device and is transferred to a rigged avatar using quaternion calculations.
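To illustrate the kind of filtering involved, the sketch below is a minimal, hypothetical Python example (not the implementation of \cite{kang2019real}): it low-pass filters a stream of per-joint quaternions with spherical linear interpolation (slerp), with a single adjustable parameter trading smoothness against latency.
\begin{verbatim}
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

def filter_joint_rotations(raw_quats, alpha=0.3):
    """Low-pass filter a per-frame stream of joint quaternions.

    alpha in (0, 1] is the adjustable knob: smaller values give
    smoother but laggier motion.
    """
    smoothed = [raw_quats[0] / np.linalg.norm(raw_quats[0])]
    for q in raw_quats[1:]:
        q = q / np.linalg.norm(q)
        smoothed.append(slerp(smoothed[-1], q, alpha))
    return np.array(smoothed)
\end{verbatim}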
We summarize the aforementioned evolution of technology in the Rigged-avatar-based Online motion retargeting group in Table \ref{Tab:rigged}.
\begin{figure*}[h!tbp]
\centering
\begin{tabular}{c c}
\includegraphics[height=0.5\textwidth]{images/conference.jpeg} \\
\end{tabular}
\caption{Rigged avatars used to simulate a 3D tele-conference \cite{jo2014avatar}
\label{Fig:rigged}}
\end{figure*}
\begin{table}[h!tbp]
\centering
\caption{Gradual evolution of technology in the Rigged-avatar-based Online motion retargeting approaches
~\label{Tab:rigged}}
\begin{tabularx}{\linewidth}{LLLL}
\hline
Paper/Year & Solved issue(s) from previous paper & Proposed method to solve the issue(s) & Application \\ \hline
Jo et al.~\cite{jo2014avatar} 2014 & preserving spatial properties & global and local motion adaptation & tele-conference \\ \hline
Lugrin et al.~\cite{lugrin2015avatar} 2015 & limited motions~\cite{jo2014avatar} &
advanced tracking & virtual fitness training \\ \hline
Roth et al.~\cite{roth2017socially} 2017 & lack of facial features~\cite{lugrin2015avatar} & Faceshift software & virtual social gathering \\ \hline
Kang et al.~\cite{kang2019real} 2019 & artifacts in complex motions~\cite{roth2017socially} & adjustable filters & virtual fitness training\\ \hline
\end{tabularx}
\end{table}
\textbf{3D Model Reconstruction:}
3D reconstruction of avatar models has some advantages over rigged avatars, such as generating avatars that resemble the users' body shape and face, and reconstructing the environmental scene. In this category, some researchers suggested approaches to reconstruct the human 3D model \cite{shapiro2014automatic,li20193d,beck2013immersive} or the whole scene \cite{newcombe2015dynamicfusion,orts2016holoportation}. The image-based 3D model reconstruction category often includes the most advanced state-of-the-art algorithms for human avatar modelling.
As an example, in \cite{shapiro2014automatic} Shapiro et al. combined several methods such as KinectFusion \cite{newcombe2011kinectfusion} and the Poisson surface reconstruction algorithm \cite{kazhdan2006poisson} to reconstruct the human body geometry and animation based on key-poses and super-resolution range scans. Although they can reconstruct the whole human geometry in a few minutes, the scanning process and auto-rigging of the reconstructed mesh are performed offline. As a result, the entire motion retargeting process cannot be implemented in real-time.
To resolve the offline scanning issue of \cite{shapiro2014automatic}, in \cite{newcombe2015dynamicfusion} Newcombe et al. proposed a scene reconstruction algorithm that can reconstruct the whole scene in real-time. The suggested approach is based on a DynamicFusion scheme consisting of three stages: parameter estimation, fusion, and structure adaptation, to fuse the RGBD data and reconstruct the subjects. The suggested strategy can handle reconstructing highly deforming subjects such as the human body in real-time. Nevertheless, they did not suggest any solution to add textures or to handle the tracking errors caused by complex geometry, partial views, or occluded or shadowed parts.
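As a building block underlying such RGBD fusion pipelines, the short Python sketch below (a generic illustration under a pinhole camera model, not code from \cite{newcombe2015dynamicfusion}) back-projects a depth image into the 3D point cloud that is then fused frame by frame.
\begin{verbatim}
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W), in metres, to an (N, 3)
    point cloud using pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop invalid (zero-depth) pixels
\end{verbatim}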
To resolve the tracking errors and real-time texture mapping issues of \cite{shapiro2014automatic,newcombe2015dynamicfusion}, Orts et al. \cite{orts2016holoportation} developed a system called Holoportation that can handle occlusion using a temporal reconstruction algorithm. The proposed system includes multiple RGB and infrared cameras to capture and transmit the moving human body's dynamic 3D geometry and the surrounding scene. While the method can reconstruct high-quality human body 3D models with textures, there are some drawbacks, such as an expensive and sophisticated setup due to using multiple RGB and depth cameras to reconstruct the whole scene.
To overcome the difficulty of a complex setup \cite{orts2016holoportation}, in \cite{li20193d} Li et al. proposed a strategy to reconstruct the human body model and textures from a single RGB image. The suggested approach includes multiple stages, including human body segmentation, fitting the segmented body to a parametric model, and warping the initial geometry to the final model based on a dense correspondence and silhouette. They suggested a new network called InferGAN to infer the textures of the invisible parts of the user's body. Yet, they reported some limitations, such as a limited camera view and sensitivity to occluded body limbs. We summarize the aforementioned evolution of technology in the 3D model reconstruction-based online motion retargeting category in Table \ref{Tab:model}.
\begin{table}[h!tbp]
\centering
\caption{Gradual evolution of technology in the 3D model reconstruction-based online motion retargeting approaches
~\label{Tab:model}}
\begin{tabularx}{\linewidth}{LLLL}
\hline
Paper/Year & Solved issue(s) from previous paper & Proposed method to solve the issue(s) & Other used algorithms \\ \hline
Shapiro et al.~\cite{shapiro2014automatic} 2014 & slow human geometry reconstruction~\cite{lim2014rapid} & representative frames (key-poses) & Poisson surface reconstruction, super-resolution range scanning algorithm \\ \hline
Newcombe et al.~\cite{newcombe2015dynamicfusion} 2015 & offline scanning ~\cite{shapiro2014automatic} & DynamicFusion (online-scan-streaming) & parameters estimation, fusion and structure adaptation \\ \hline
Orts et al.~\cite{orts2016holoportation} 2016 & real-time tracking errors and texture mapping~\cite{newcombe2015dynamicfusion,shapiro2014automatic} & temporal reconstruction algorithm and texture compression & dynamic 3D geometry reconstruction \\ \hline
Li et al.~\cite{li20193d} 2019 & expensive and complex setup~\cite{orts2016holoportation} & single RGB image-based human body reconstruction and texturing & InferGAN to infer the textures of invisible parts, parametric model \\ \hline
\end{tabularx}
\end{table}
\begin{figure*}[h!tbp]
\centering
\begin{tabular}{c c c}
\includegraphics[height=0.23\textwidth]{images/holo1.jpeg} &
\includegraphics[height=0.23\textwidth]{images/holo2.jpeg} \\
\end{tabular}
\caption{Online Image-based 3D reconstruction of avatar and scene models~\cite{orts2016holoportation}
\label{Fig:holo}}
\end{figure*}
\FloatBarrier
\section{Conclusion}
In this survey paper, we reviewed methods that create and use human avatar models for designing VCP environments. After a short discussion of the non-avatar strategies, we discussed the avatar-based techniques, their advantages and disadvantages, and the gradual advancement of technology for each method and category.
We conclude that the recent advancement in computer graphics/vision algorithms has profoundly improved the quality of human avatar representation in VCP environments.
\bibliographystyle{unsrt}
\section*{Paragraph}
In some applications, virtual co-presence is implemented without involving digital avatars, for example using robots~\cite{macharet2012collaborative,minato2012development,yoon2015control}, mobile devices~\cite{beer2011mobile,lee2011now}, or video conferencing~\cite{noda2015implementation,maeda2004real}. These designs can be effective in their target applications, but in general, they do not provide a realistic immersive 3D environment for VR-based motor learning.
In contrast, realizing a shared virtual environment by integrating digital avatars with 3D virtual scenes makes a fully immersive experience possible and has attracted greater attention recently.
An approach to creating 3D avatars in a VCP environment is using avatars with pre-designed motions \cite{monahan2008virtual, li2016virtual, mu2009implementation}.
Such pre-designed, virtual-gaming-like experiences can be effective in some training tasks (e.g., routines in pilot training for taking off/landing aircraft~\cite{li2016virtual}), but cannot be used to simulate spontaneous user behaviors and their interactions.
In order to support virtual co-presence that reflects multiple users' actions and their interactions, we need to build digital avatar models that can mimic the user's actions in real-time. The key to building such action-mimicking avatars relies on two components: (1) action acquisition, and (2) action transfer and reconstruction.
To capture the user's real-time actions, some studies use wearable sensors~\cite{wang2019effect,han2017simulating, rauter2019augmenting,park2019investigation,de2019watching} and many recent studies use regular cameras~\cite{bogo2015detailed, lim2019rapid, malleson2017rapid, shapiro2014rapid, zhang2014quality}.
While sensor-based acquisition usually offers reliable and occlusion-tolerant motion capturing, compared with image-based acquisition it has limitations of higher cost, less accessibility, the need for calibration and wearing, and lower localization accuracy.
Image-based acquisition has therefore attracted greater attention recently.
However, most such acquisition systems are designed for reconstructing relatively static contents through an off-line procedure~\cite{bogo2015detailed, lim2019rapid, malleson2017rapid, shapiro2014rapid, zhang2014quality}.
While a few real-time image-based action acquisition systems have been designed recently~\cite{orts2016holoportation, klauck2016collaborative, newcombe2015dynamicfusion, gunkel2019dc,roth2017socially, herder2019avatars, lugrin2015avatar}, their reliability and efficiency are still insufficient for our VCP task.
\section{Introduction}
Remote communication plays a crucial role in modern societies.
An important aspect of effective remote communication is co-presence, where multiple participants can see and interact with each other in a shared virtual environment (SVE). Virtual co-presence (VCP) environments can be more engaging than text or voice-based chat in communications, collaborations, and training.
Recently, such virtual co-presence has been studied in remote education~\cite{Mustufa12VRForDistanceEdu}, training simulations \cite{hooper2019virtual, schmidt2019heidelberg}, therapy treatments~\cite{WiederholdVRET05Book}, and social interaction venues \cite{hudson2019or}. A VCP environment can be created using a Virtual Reality (VR) platform. An approach to representing human characters in VR environments is using ``avatar'' models. An avatar is a digital form of a human character that can be represented in 2D or 3D. 3D avatars have some advantages, such as being more human-like, having realistic motions, and offering an immersive experience in VR and VCP environments. In this research we survey the studies that use avatar models in VCP environments. Fig. \ref{Fig:survey} shows our methodology for categorizing the works done on VCP environments, with a focus on the avatar-based approaches. As can be seen in the figure, non-avatar-based methods are classified into three categories: Robotics, Mobile Systems, and Images and Videos. On the other hand, the avatar-based approaches are categorized into two types: Direct-Motion Retargeting and Pre-rendered Motions. In the Direct-Motion Retargeting methods the motions are transferred directly from the user, while in the Pre-rendered Motions approaches the motions are pre-defined by the developer. The Direct-Motion Retargeting category is further divided into two classes: Image-based and Sensor-based approaches. In the image-based approaches the inputs are commonly RGB images, videos, or depth images, while the sensor-based approaches use motion or tracking sensors together with tracking markers. The Image-based category in turn consists of two categories: Offline retargeting and Online retargeting. Offline retargeting approaches usually offer more accurate results than online methods by reconstructing high-quality body shape, pose, and textures. Online motion retargeting, however, is faster and more suitable for real-time VCP or VR experiments. The methods in each category are explained in the following sections.
\begin{figure*}[h!tbp]
\centering
\includegraphics[width=\textwidth]{images/survey.jpg}
\caption{Virtual co-presence (VCP) environments categories. ~\label{Fig:survey}}
\end{figure*}
\FloatBarrier
\section{Non-Avatar-based Approaches}
Non-avatar-based approaches usually rely on physical hardware to create a VCP environment. The hardware can be an interactive robot, a mobile system installed on a moving platform, or a traditional video-based tele-presence. These methods are explained in this section as follows:
\subsection{Robotics}
Robotics has been used for various VCP/telepresence purposes such as communication, environmental visualization, and training physicians.
Some researchers used robots to improve the interactions of a social VCP environment. For example, in \cite{macharet2012collaborative} Macharet et al. designed a telepresence robot to visualize the environment for the users. The robot is controlled by a remote human operator and is able to smoothly navigate through a house. Specifically, it can handle complicated situations such as narrow corridors and doors. Thus, by decreasing the delay time and avoiding obstacles, the robot is able to give a smooth feeling of telepresence to users. Results show that the use of the proposed collaborative control helped reduce the number of robot-environment collisions. The experiments also showed that people quickly adapt to conversing with the robot and are impressed by the robot's shape and feel. As another example, in \cite{minato2012development} Minato et al. designed a portable human-like robot. The robot is able to communicate and talk to other users remotely. The robot is designed in such a way that users can feel other users' presence through voice, human-like appearance, and touch. Another instance can be seen in \cite{yoon2015control}, where Yoon et al. proposed a tele-presence robotic system with a variety of features to make the communication between the user and the robot more interactive, for example, a projector to show the users' finger points and a tracker system that can detect and turn to face the user's head. The researchers claim that the proposed method has some unique features compared to traditional tele-presence robotic systems.
Robots have also been used for training in a VCP environment. For example, in ~\cite{vespa2007intensive} robotic telepresence is used to improve physicians' response for unstable patients. They designed an experimental setup with multiple episodes and different patients, and the results were satisfactory, decreasing the physicians' response delay.
\subsection{Mobile Systems}
Some researchers designed a mobile system integrated on a moving frame to create a VCP environment.
As an example, in ~\cite{beer2011mobile} Beer et al. designed a mobile system to help elders interact with visitors.
The designed system consists of several modules, such as a touch screen installed on a phone frame, a microphone, a web camera, and speakers. The phone base has an active caster with two passive wheels, a computer, and a large battery. In this study, participants interacted with a visitor who uses the mobile system. The results show that the elders reported a good experience of visibility in the designed mobile system. As a result, while the designed system can reduce travel cost and social isolation, it maintains an efficient interactive environment.
In ~\cite{lee2011now} Lee et al. designed a mobile system to enable remote workers to live and work with local coworkers similar to the way they would if they were physically there. They exploited a Texai Alpha prototype \cite{texai} (a pre-manufactured mobile system) to work more effectively with a remote coworker. They claimed that the results obtained from surveys showed that the mobile system allowed remote pilots to work with local coworkers almost as if they were there physically.
\subsection{Images and Videos}
Using images and video is the classical way of creating a co-presence environment. As an example of such an approach, in \cite{noda2015implementation} Noda et al. suggested an architecture for tele-communication between different users. The proposed system is based on a configurable tile display that includes several features to offer a realistic sensation of co-presence: (1) life-size processing that can adjust the size and resolution of the display based on the user's position and angle; (2) background subtraction to give an enhanced view of the other users; (3) multiple cameras to show the users from different points of view. They claimed that their method offers a higher sense of presence compared to traditional video-based approaches.
An advanced video-based VCP environment is designed in \cite{maeda2004real}, which is called ``real-world'' video. In this method, the users can be viewed from different perspectives. The multiview video capturing is done by using eighteen PCs in a cylindrical chamber. The experimental results showed that the delay time was acceptable for the watching users.
\section{Avatar-based Approaches}
In contrast to non-avatar approaches, avatar-based approaches rely more on software to create a VCP environment. Hence, compared to non-avatar-based approaches, they are more flexible and can be updated based on the application and state-of-the-art algorithms. In this research we categorize the avatar-based approaches into two types: pre-rendered motions, where the animations are pre-defined, and direct motion retargeting, where the avatars are directly controlled by the users. We explain each in the following.
\subsection{Pre-Rendered Motions Approaches}
An approach to creating 3D avatars in a VCP environment is using pre-designed motions and scenes. As an example, in \cite{pazour2018virtual}, Pazour et al. simulated a conference room with user-defined avatars that can communicate with each other remotely. The main goal of this simulation was to evaluate the realistic feeling of co-presence in a virtual environment. Two experiments were conducted: one with headsets, where head motions are tracked by an HTC Vive, and one on a desktop, where motions are defined by mouse movements. The avatars' upper and lower bodies are animated using pre-rendered animations developed with Mixamo \footnote{Homepage https://www.mixamo.com/}. The experimental results show that using the headset outperforms the desktop in terms of the feeling of other users' presence.
Pre-rendered motions have been commonly used to simulate specific structures such as a college \cite{monahan2008virtual}, an airport \cite{li2016virtual}, and a museum \cite{mu2009implementation}. For example, in \cite{monahan2008virtual}, Monahan et al. simulate a college environment with students and teachers. In the designed scenario, during the registration process students select a unique 3D character, known as an avatar, to represent them onscreen in the 3D university. The avatars are human-like and are able to perform a variety of pre-designed actions associated with people. The results show the effectiveness of the technique in terms of user presence and a real feeling of other users' actions.
Moreover, in \cite{li2016virtual}, Li et al. simulate the take-off of an aircraft in an airport runway environment. Five users can interact with each other while wearing VR goggles to see the other users' avatars. The users can take the roles of a pilot, a support officer, a tractor guide, a carrier aircraft guide, and a tractor driver. The surveys show that the proposed type of exhibit was interesting and attractive for users.
In \cite{mu2009implementation} Mu et al. designed a VR environment for multi-user learning in a museum. The main means of interaction between users is pre-defined gestures that can be customized by users through a Graphical User Interface (GUI). The users can select their appearance and clothing before entering the VR environment. The results show that the method was effective for users to transfer their motions to a virtual location.
Pre-rendered motions have also been used for psychological testing purposes. For example, in ~\cite{brown2017coordinating}, Brown et al. designed a narrative story in a VR environment. The users can play the game over a network connection wearing a head-mounted display (Oculus Rift). The goal is to study a set of guided camera techniques and a set of gaze-distracting techniques to determine how best to attract disparate users to the same story. The results give a better understanding of the factors that cause users' attraction to a narrative story.
\subsection{Direct-Motion Retargeting Approaches}
Direct-motion retargeting is the main focus of this research. In this category, the motion data is directly transferred from the user to the avatar. It can be divided into two types: image-based approaches, where the inputs are image, video, or depth data, and sensor-based approaches. They are explained in the following.
\subsubsection{Image-based Approaches}
Images (including RGB and depth) have been widely used for motion retargeting to create a VCP environment. Image-based approaches can be further divided into offline and online retargeting approaches. In the offline methods, the process of retargeting is too long to be useful in real-time. However, they usually offer better accuracy by reconstructing the users' body shape, pose, and textures. Online motion retargeting, however, is capable of being used in real-time as it is optimized, and sometimes minimized, for a faster system.
\textbf{Online Retargeting}
Although pre-defined motions can satisfy users in specific scenarios, having the avatar reproduce the user's motion authentically in a data-driven way is important, because body motion is the main means of communication for the users in the VR environment. A direct way to achieve this is through real-time motion capturing, such as the system developed in \cite{orts2016holoportation}, which is called Holoportation. It uses multiple RGB and infrared cameras to capture and transmit the dynamic 3D geometry of the moving human body and the surrounding scene.
Some researchers used online motion retargeting to reconstruct the 3D avatar based on scan data \cite{shapiro2014automatic, klauck2016collaborative, newcombe2015dynamicfusion}. For example, in \cite{shapiro2014automatic} Shapiro et al. fused several methods to reconstruct the human body geometry and animation. To do that, first a Kinect camera captures the key-poses from several views. Then, for each key-pose the frames are merged using KinectFusion \cite{newcombe2011kinectfusion}. In the next stage, the fused frames are projected to the image plane to form a super-resolution range scan. Next, the super-resolution range scans are aligned using a contour-based method. Finally, the mesh is generated using the Poisson surface reconstruction algorithm \cite{kazhdan2006poisson}. The results show that the proposed method can be exploited in real-time VR environments. As another example, \cite{klauck2016collaborative} suggests a prototype platform that can collect face data in a point cloud and send it through a Virtual Reality Peripheral Network (VRPN) to other applications. This data can be used to reconstruct the facial expressions of avatars in VR. The proposed system includes an infrared tracking system to detect the location of the head and a depth camera to process the point cloud. The results show that the proposed system can be used in real-time with a frame rate of 30 frames per second (FPS). In \cite{newcombe2015dynamicfusion} a scene reconstruction approach is proposed that can handle deforming subjects such as humans. They used a DynamicFusion method to fuse the RGBD data and reconstruct the subjects in real-time. Their approach includes three steps: (1) volumetric model-to-frame parameter estimation, (2) fusion of the live depth map into the canonical space, and (3) warp-field structure adaptation for deforming geometries. The authors claim that the suggested method can be used in different applications.
Online motion retargeting has also been used to create social VCP environments \cite{beck2013immersive,jo2014avatar, gunkel2019dc, roth2017socially}. As an example, in \cite{beck2013immersive} Beck et al. designed a co-presence environment for multiple users to interact and communicate with each other in a shared location. Two Kinect clusters are used to capture the motion at two sites with two groups of people that are remotely connected over a network. The users can see each other using a projected display and 3D glasses. The Kinect data are rendered as reconstructed meshes textured with real images. To improve the texture mapping, they used an intrinsic and depth calibration to map the depth pixels to RGB pixels more accurately. As another example, in \cite{jo2014avatar} Jo et al. created a 3D tele-conference in an Augmented Reality (AR) environment. To have a more realistic environment, they preserve the spatial properties of the objects so that users can sit on digital chairs as they do in reality. The users are tracked by a Kinect and their joints are mapped to a 3D digital avatar. The results show that the users had a more impressive and realistic experience compared to video-based communication approaches. In \cite{gunkel2019dc} Gunkel et al. suggested a social VR environment that is claimed to be able to accommodate a large number of users. The avatar in the VR scene is represented as realistic human body geometry, which is segmented from the background and then reconstructed by depth cameras. However, the capability of running in real-time is not discussed in this research. Moreover, they did not provide details about the transition from a VR to a social VR environment. Finally, in \cite{roth2017socially} Roth et al. designed an immersive environment to evaluate users' social interactions. The system includes a tracking system (OptiTrack sensors), eye gaze tracking, and facial expression tracking (the Faceshift software). Users can see the environment using a stereoscopic projected display which is called a ``fish tank''. To transmit the facial information, the facial expression blendshapes are streamed through a network to a Unity platform to animate the avatar's face. The skeleton data captured by the tracking system are also transmitted over the network to animate the avatar's body. The results show that they largely meet the requirements for an effective immersive social VR environment.
As another application, online motion retargeting has been used for training purposes such as training new workers in a factory \cite{herder2019avatars}, fitness training \cite{lugrin2015avatar}, and driving \cite{zhao2013realistic}. As an example, in \cite{herder2019avatars}, Herder et al. created a co-presence environment to simulate a large factory with machines. The users take the roles of new workers that are trained based on a basic tutorial. The users are tracked and animated in real-time using the sensors and the platform provided by the head-mounted device developers. The results show that the avatar-based approach improved and stimulated the communication between users through highly immersive interactions and engagement. As another instance, in \cite{lugrin2015avatar} Lugrin et al. suggested a method to evaluate the impact of different types of avatars on fitness training performance. In their experiments, they used two types of avatars, human-like and non-human-like. The user's motion is captured in real-time using a Microsoft Kinect and is transferred to an avatar rendered in Unreal Engine. The main task in the experiments is to touch the target as fast as possible. The experimental results show that a same-gender human-like avatar led to the highest performance during the training. Finally, in \cite{zhao2013realistic} Zhao et al. evaluate the performance of users with different body sizes doing certain tasks in a VR environment. The main task is to work with a virtual steering wheel and mirror while the user wears an HMD. To do real-time motion retargeting, the joint angles of the tracked user are mapped to the avatar. The results were satisfactory based on the participants' ratings and also on evaluating the time assigned to each task.
Another usage of online motion retargeting is for testing social or psychological hypotheses \cite{choi2019effects, maloney2019dc}. As an example, in \cite{choi2019effects} Choi et al. evaluated the impact of different types of motions on users' feelings of body ownership and presence. They used a mo-cap system based on the OptiTrack motion tracking system with six cameras for real-time motion retargeting, as well as pre-rendered motions, to investigate the avatars' effects on the users in a VR environment. They collected some statistical results based on this experimental setup.
As another instance, in \cite{maloney2019dc} Maloney designed a VR environment where users can embody avatars of different races. The goal of the project is to measure racial bias. The users are placed in a room with a mirror and weapons. Users need to shoot human-like targets that are explained to be aliens invading the earth. The results show that the method was successful in decreasing users' implicit bias against different races.
\textbf{Offline Retargeting}
Offline motion retargeting is mainly used to reconstruct high-quality human body shape, pose, and textures \cite{bogo2015detailed, cui2012kinectavatar, lim2019rapid, malleson2017rapid, shapiro2014rapid, zhang2014quality}. As an example of offline motion retargeting, in \cite{bogo2015detailed} Bogo et al. proposed a method to reconstruct human body shape and pose from an RGB-D video sequence of a moving human. They can generate a 3D avatar and capture both the 2D mesh and textures offline. Depth images are obtained using a Microsoft Kinect One and then, along with RGB images, are processed through three stages: first, after segmenting the video into short intervals, the pose and shape are initially estimated in a low-dimensional model. Next, the appearance is refined based on an optimization approach. Finally, they repose the point cloud by displacement mapping between the local and global geometries. The results show the method is reliable even in challenging situations such as varying resolutions and soft tissue deformation.
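The displacement-mapping stage can be illustrated by the following minimal Python sketch (our own simplification, not the authors' code): fine-scale detail is recovered by pushing each vertex of the smooth, low-dimensional body model along its unit normal by a per-vertex scalar offset.
\begin{verbatim}
import numpy as np

def displace_along_normals(verts, normals, offsets):
    """Displacement mapping: push each vertex along its unit normal.

    verts:   (V, 3) smooth base geometry (the fitted body model)
    normals: (V, 3) unit vertex normals
    offsets: (V,)   per-vertex scalar displacements carrying
                    the fine-scale detail
    """
    return verts + offsets[:, None] * normals
\end{verbatim}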
In \cite{cui2012kinectavatar}, Cui et al. suggested an approach to reconstruct 3D human body geometry and textures using a depth camera (Kinect). The reconstruction process includes: (1) capturing frames from different views (a ``chunk''), (2) generating a super-resolved depth scan using the chunk, (3) aligning the super-resolved scans of different frames, (4) reconstructing the mesh using Poisson mesh reconstruction \cite{kazhdan2006poisson}, and (5) texture mapping the mesh using the Kinect color data. The suggested method is able to resolve issues such as low resolution and non-rigid movement by a super-resolution algorithm and rigid/non-rigid alignment, respectively.
In \cite{lim2019rapid} Lim et al. used several approaches to generate a 3D avatar automatically. To do that, after scanning the human body, the human 3D model is reconstructed based on KinectFusion \cite{pagliari2014kinect} and a patch-based optimization approach \cite{loper2015smpl}. Then, for face registration, the facial landmarks are obtained from the key-frames extracted from the KinectFusion stage. The results show that the generated avatars are realistic enough to represent the users.
In \cite{malleson2017rapid} Malleson et al. designed a system to create full-body avatars replicating the person's body shape, face, and textures. The body shape is created using blendshapes based on the body dimensions obtained from the depth images. The face is reconstructed using blendweights and the facial landmarks obtained from the images. They used a warping method to texture-map the images to the face based on the facial landmarks. The results show that the users can feel the presence of other users in the VR environment.
In \cite{shapiro2014rapid} a method is suggested to generate a 3D static human model using a single hardware device such as a Kinect. To do that, first four key-poses are captured from four profile views. Then, for each key-pose a super-resolution range scan is created by moving the Kinect on a motor. In the next phase, an alignment is performed for the four super-resolution scans. Next, they used a contour-based registration approach that minimizes the distance between the predicted and observed data in the point cloud. Finally, they used the Poisson surface reconstruction algorithm to post-process the key-poses. The experimental results show that the whole algorithm can be run by a non-expert in a few minutes.
In \cite{zhang2014quality}, Zhang et al. suggested a method to generate pose-driven 3D avatars automatically. First, they capture depth scans of the user from different views. Then, they complete each incomplete scan. After registration, they use the SCAPE model to generate different poses. The experimental results were satisfactory for a dynamic human model.
\subsubsection{Sensor-based Approaches}
Some researchers used sensors to evaluate or compare different types of designed systems \cite{wang2019effect, camporesi2015effects, han2017simulating}. As an example, in \cite{wang2019effect} Wang et al. drew a comparison between different types of avatars: partial hand, full hand, and full body. The users wear an HMD (HTC Vive) and are tracked by the HMD sensors. To conduct the experiments, users are instructed to follow specific tasks. The results show that the full-body avatar led to the highest performance. As another example, in \cite{camporesi2015effects} Camporesi et al. evaluated the performance of avatar-based VR environments compared with non-avatar-based environments. To create the environment, they used a motion capturing system including 10 cameras. The motion capturing system can track the users' head and body based on the installed markers. To perform the interactions, users use Nintendo Wii-mote controllers.
The experimental setup is conducted based on categories defined by several conditions: avatar or non-avatar, stereo vision or non-stereo vision, desktop or multi-tile display. The experimental results show that the avatar-based categories can improve the quality and speed of performing the assigned tasks. In \cite{han2017simulating} Han et al. investigate the simulation of upper-body motions using the head and hand motion information. To do that, they suggested an approach in which they separately send the head and hand data in two channels from the motion capturing device to the rendering platform and modify the avatar motion based on those channels. The experimental results show an 80\% consistency rate between the real and digital motions.
Sensors have also been used to simulate social events \cite{andreadis2010real, de2019watching}. For example, in \cite{andreadis2010real} Andreadis et al. simulate a theater with its actors and other scenery subjects, which are streamed to a multi-screen display. The main actors' interactions are captured using a real-time motion capturing system with wearable magnetic sensors. For the other subjects, such as background actors and animals, they used Artificial Intelligence (AI) to create the motions. After testing the environment, they received positive feedback from both regular audiences and experts. As another example, in \cite{de2019watching} De et al. used Facebook Spaces, a pre-designed commercial software, to evaluate the interaction between users while they are watching movie trailers. The avatars are human-like, and users can listen to others' conversations while watching the movie. The experimental results show that there is no difference between the 3D avatar-based environment and the traditional video-based environment when people watch movies.
Improving the features of a VR system is another purpose of using sensors \cite{park2019investigation, rauter2019augmenting}. As an example, in \cite{park2019investigation} Park et al. created a system to capture the walking-in-place movement of users in VR environments. To do that, they installed motion sensors on the users, so that when motion is detected on the lower part of the legs by the sensors, the motion of the lower part of the avatar is updated. The avatar moves in the direction of the pelvis bone, which is linked to a sensor on the same bone. The experimental results show that the avatar motion was natural and accurate. As another example, in \cite{rauter2019augmenting} a mixed-reality environment is designed to investigate the interaction between users and objects. The real world is combined with virtual objects using depth sensors and an HTC Vive Pro HMD. The objects are segmented and rendered for both eyes, providing stereo vision. The results show the high possibility of creating such a scenario to interact with virtual near-real objects in a mixed-reality environment.
\bibliographystyle{unsrt}
\section{Introduction}
The study of anisotropic flow in relativistic heavy ion collisions has provided important information on the properties of the produced quark-gluon plasma (QGP). In earlier studies, the large elliptic flow observed in experiments at the BNL Relativistic Heavy Ion Collider (RHIC) for non-central collisions was found to be describable by ideal hydrodynamics. This has led to the conclusion that the produced QGP is an ideal fluid and thus a strongly interacting matter~\cite{Adams:2005dq,Adcox:2004mh,Arsene:2004fa,Back:2004je}. More recent studies indicate that the experimental data on anisotropic flows could be better understood using viscous hydrodynamics~\cite{Romatschke:2007mq,Song:2010mg,Schenke:2010rr} with a specific viscosity that is only about a factor of two larger than the theoretically predicted lower bound~\cite{Kovtun:2004de}. In particular, the larger triangular flow observed in experiments not only puts a more stringent constraint on the specific viscosity of the QGP~\cite{Gale:2012rq} but also reveals the importance of initial spatial fluctuations in heavy ion collisions~\cite{PhysRevC.81.054905}. To study the
effect of initial-state fluctuations and that of final-state interactions with better precision, new observables based on $n$-particle correlations have been proposed~\cite{PhysRevC.84.034910}, since the ratio between these correlations and corresponding anisotropic flows can provide information on initial fluctuations. In the present study, we use a multiphase transport (AMPT) model~\cite{PhysRevC.72.064901} to study the three-particle correlations and compare the results with recent experimental measurements by the STAR Collaboration~\cite{Adamczyk:2017hdl,Adamczyk:2017byf}.
The paper is organized as follows. In the next section, we briefly describe the AMPT model and the parameters used in our calculations. In Sec. III, both two-particle and three-particle correlations are described. Results on anisotropic flows and three-particle correlations obtained from the AMPT model are presented and compared with experimental data in Sec. IV. Finally, a summary is given in Sec. V.
\section{The AMPT model}
The AMPT model is a hybrid model consisting of four stages of heavy ion collisions at ultrarelativistic energies: initial conditions, parton scatterings, conversion from the partonic matter to hadronic matter, and hadron scatterings~\cite{PhysRevC.72.064901}. There are two versions of the AMPT model, the default AMPT model and the AMPT model with string melting. In both versions, the initial conditions are generated from the heavy ion jet interaction generator (HIJING) model~\cite{PhysRevD.44.3501}. In the default version, only minijet partons from HIJING are included in the partonic stage via Zhang's parton cascade (ZPC)~\cite{ZHANG1998193}. After their scatterings, minijet partons are recombined with their parent strings to form excited strings, which are then converted to hadrons through the Lund string fragmentation model. In the string melting version, all hadrons produced from HIJING are converted to partons according to their valence quark flavors and spin structures, and these partons are evolved via the ZPC. At the end of their scatterings, quarks and antiquarks are converted to hadrons via a simple coalescence model. Specifically, the nearest quark and antiquark are combined into a meson, and the three nearest quarks (antiquarks) are combined into a baryon (antibaryon), with their species determined by the flavor and invariant mass of the coalescing quarks and antiquarks. Scatterings among hadrons in both the default and string melting versions are described by a relativistic transport (ART) model~\cite{PhysRevC.52.2037} until kinetic freeze-out.
In the present study, we use the string melting version of the AMPT model with the parameter set B of
Ref.~\cite{PhysRevC.84.014903}, i.e., using the values $a=0.5$ and $b=0.9$~GeV$^{-2}$ in the Lund string fragmentation function $f(z)\propto z^{-1}(1-z)^a\exp{(-bm_\perp^2/z)}$, where $z$ is the light-cone momentum fraction of the produced hadron of transverse mass $m_\perp$ with respect to that of the fragmenting string; and the values $\alpha_s=0.33$ and $\mu=3.2$ fm$^{-1}$ in the parton scattering cross section $\sigma\approx 9\pi\alpha_s^2/(2\mu^2)$. This parameter set has been shown to give a better description of the charged particle multiplicity density, transverse momentum spectrum, and elliptic flow in heavy ion collisions at RHIC.
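As a quick numerical check of this parameter set, the following Python snippet evaluates the quoted cross-section formula, using $1~{\rm fm}^2=10$ mb:
\begin{verbatim}
import numpy as np

alpha_s = 0.33    # strong coupling constant (parameter set B)
mu = 3.2          # screening mass in fm^-1

sigma = 9 * np.pi * alpha_s**2 / (2 * mu**2)  # cross section in fm^2
print("sigma = %.3f fm^2 = %.2f mb" % (sigma, 10 * sigma))
# prints: sigma = 0.150 fm^2 = 1.50 mb
\end{verbatim}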
\section{Two- and three-particle correlations}
The two-particle correlation of particles in certain rapidity and transverse momentum range in heavy ion collisions is defined by~\cite{PhysRevC.83.044913}
\begin{eqnarray}\label{twoa}
c_n\{2\}&=&\left\langle\negmedspace\left\langle\frac{\sum_{i\ne j}\cos(n(\phi_i-\phi_j))}{M(M-1)}\right\rangle\negmedspace\right\rangle,
\end{eqnarray}
where the sum is over all possible particle pairs $i$ and $j$ in a single event with $M$ particles in that rapidity and transverse momentum range, $\phi_i$ and $\phi_j$ are azimuthal angles of their transverse momenta in the transverse plane of the collision, and the $\langle\langle\cdot\rangle\rangle$ denotes the average over events. Using the identity $\sum_{i\ne j}=\sum_{i,j}-\sum_{i=j}$, the numerator in the above equation can be written as
\begin{eqnarray}\label{twob}
&&\sum_{i\ne j}\cos(n(\phi_i-\phi_j))\nonumber\\
&&=M^2\left(\langle\cos n\phi\rangle^2+\langle\sin n\phi\rangle^2-\frac{1}{M}\right),
\end{eqnarray}
where $\langle\cdot\rangle$ denotes the average over all particles in a single event. In the two-particle cumulant method~\cite{PhysRevC.83.044913}, anisotropic flow coefficients are simply given by the square root of the two-particle correlation, i.e., $v_n\{2\}=\sqrt{c_n\{2\}}$.
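As an illustration of how Eqs.~(\ref{twoa}) and (\ref{twob}) are used in practice, the following Python sketch (our own minimal implementation; the event container format is an assumption) computes $c_n\{2\}$ from per-event lists of azimuthal angles, with $v_n\{2\}$ obtained as its square root when positive.
\begin{verbatim}
import numpy as np

def c_n_2(events, n):
    """Two-particle correlation c_n{2} from the reduced identity above.

    events: iterable of 1-D NumPy arrays; each array holds the
            azimuthal angles phi of the M particles of one event
            (M >= 2 is assumed).
    """
    per_event = []
    for phi in events:
        M = len(phi)
        c = np.cos(n * phi).mean()
        s = np.sin(n * phi).mean()
        numerator = M * M * (c * c + s * s - 1.0 / M)
        per_event.append(numerator / (M * (M - 1)))
    return np.mean(per_event)    # average over events

# v_n{2} = sqrt(c_n_2(events, n)) whenever the correlation is positive
\end{verbatim}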
Similarly, the three-particle correlation, denoted as $C_{m,n,m+n}$, is defined by~\cite{PhysRevC.84.034910,Adamczyk:2017hdl,Adamczyk:2017byf}
\begin{eqnarray}\label{three}
&&C_{m,n,m+n}=\nonumber\\
&&\left\langle\negmedspace\left\langle\frac{\sum_{i\ne j\ne k}\cos(m\phi_i+n\phi_j-(m+n)\phi_k)}{M(M-1)(M-2)}\right \rangle\negmedspace\right\rangle.
\end{eqnarray}
Using the identity $\sum_{i\ne j \ne k}=\sum_{i,j,k}-\sum_{j=i,k}-\sum_{k=i,j}-\sum_{i,k=j}+2\sum_{i=j=k}$
~\cite{DiFrancesco:2016srj}, the numerator can also be written as
\begin{eqnarray}
&&\sum_{i\ne j\ne k}\cos(m\phi_i+n\phi_j-(m+n)\phi_k)=\nonumber\\
&&\sum_{i,j,k}\cos(m\phi_i+n\phi_j-(m+n)\phi_k)\nonumber\\
&&-\sum_{i,j}\cos(m(\phi_i-\phi_j))-\sum_{i,j}\cos(n(\phi_i-\phi_j))\nonumber\\
&&-\sum_{i,j}\cos((m+n)(\phi_i-\phi_j))+2M\nonumber\\
&&=M^3(\left \langle \cos m\phi \right \rangle\left \langle \cos n\phi \right \rangle\left \langle \cos(m+n)\phi \right \rangle\nonumber\\
&&-\left \langle \sin m\phi \right \rangle\left \langle \sin n\phi \right \rangle\left \langle \cos(m+n)\phi \right \rangle\nonumber\\
&&+\left \langle \sin m\phi \right \rangle\left \langle \cos n\phi \right \rangle\left \langle \sin(m+n)\phi \right \rangle\nonumber\\
&&+\left \langle \cos m\phi \right \rangle\left \langle \sin n\phi \right \rangle\left \langle \sin(m+n)\phi \right \rangle)\nonumber\\
&&-M^2(\left \langle \cos m\phi \right \rangle^2+\left \langle \sin m\phi \right \rangle^2)\nonumber\\
&&-M^2(\left \langle \cos n\phi \right \rangle^2+\left \langle \sin n\phi \right \rangle^2)\nonumber\\
&&-M^2(\left \langle \cos(m+n)\phi \right \rangle^2+\left \langle \sin(m+n)\phi \right \rangle^2)+2M,
\end{eqnarray}
which shows that the number of terms in the three-particle correlation can be reduced from $M^3$ to the order of $M$, thus improving significantly the efficiency of the calculation when $M$ is large.
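The following Python sketch (again our own minimal illustration) evaluates the per-event three-particle correlator with this reduced ${\cal O}(M)$ expression; averaging the returned values over events yields $C_{m,n,m+n}$ of Eq.~(\ref{three}).
\begin{verbatim}
import numpy as np

def c_mnk_event(phi, m, n):
    """Per-event three-particle correlator C_{m,n,m+n}, evaluated
    with the O(M) reduction above; phi is a 1-D array of azimuthal
    angles of one event (M >= 3 assumed). Averaging the returned
    values over events gives the full correlator."""
    M = len(phi)
    cm, sm = np.cos(m * phi).mean(), np.sin(m * phi).mean()
    cn, sn = np.cos(n * phi).mean(), np.sin(n * phi).mean()
    ck, sk = np.cos((m + n) * phi).mean(), np.sin((m + n) * phi).mean()
    num = (M**3 * (cm * cn * ck - sm * sn * ck
                   + sm * cn * sk + cm * sn * sk)
           - M**2 * (cm**2 + sm**2 + cn**2 + sn**2 + ck**2 + sk**2)
           + 2 * M)
    return num / (M * (M - 1) * (M - 2))
\end{verbatim}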
\section{Results}
In this section, we show the anisotropic flow of charged particles and their three-particle mixed harmonic correlations obtained from the AMPT model in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV at RHIC and compare them with experimental data measured by the STAR Collaboration.
\subsection{Charged particle anisotropic flow}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{vn.eps}
\caption{(Color online) Participant number $N_{\rm part}$ or centrality dependence of $N_{\rm part}v_{n}\{2\}^2$ for mid-pseudorapidity ($|\eta|<1$) charged particles of transverse momentum $p_T>0.2$ GeV/$c$ in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV. Open circles are experimental data from the STAR Collaboration~\cite{Adamczyk:2017hdl,PhysRevLett.116.112302}.}
\label{vn}
\end{figure}
In Fig. \ref{vn}, we show the participant number $N_{\rm part}$ or centrality dependence of anisotropic flows from $n=1$ to 4 for mid-pseudorapidity ($|\eta|<1$) charged particles of transverse momentum $p_T>0.2$ GeV/$c$ in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV. In particular, the participant numbers chosen in our calculations are those in collisions at impact parameters of $2.2$, $4.1$, $5.8$, $7.5$, $8.8$, and $10.0$ fm, corresponding, respectively, to the centrality bins of 0-5\%, 5-10\%, 10-20\%, 20-30\%, 30-40\% and 40-50\% in the STAR experiment. We calculate the anisotropic flows from the two-particle cumulant method~\cite{PhysRevC.83.044913}. Note that we do not include the anisotropic flow for $n=5$ because of the large uncertainty in both the experimental data~\cite{Adamczyk:2017byf} and our results. It is seen that the results from the AMPT model agree qualitatively with the experimental data~\cite{Adamczyk:2017hdl,PhysRevLett.116.112302}. Quantitatively, the AMPT model slightly overestimates the measured elliptic flow $v_2\{2\}$ and triangular flow $v_3\{2\}$ for the most-central collisions and underestimates them for peripheral collisions. On the other hand, our results for the directed flow $v_1\{2\}$ underestimate the data for the most-central collisions and overestimate them for peripheral collisions. For the quadrupolar flow, our results are slightly smaller than the data for all centralities.
In both the experimental data and the results from the AMPT model, $v_1\{2\}^2$ is negative for more peripheral collisions. This can be understood from Eq.~(\ref{twob}). In the case of including all particles in a collision, the first term $\langle\cos\phi\rangle^2+\langle\sin\phi\rangle^2$ should be identically zero due to the conservation of total transverse momentum. Since only mid-pseudorapidity ($|\eta|<1$) particles of transverse momentum $p_T>0.2$ GeV/$c$ are included in calculating $v_1\{2\}^2$,
the value of the first term can be nonzero but small. This can lead to positive values of $v_1\{2\}^2$ for large $M$. With decreasing particle number $M$ for more peripheral collisions, the first term in Eq.(\ref{twob}) can become smaller than the second term $1/M$, resulting in a negative value for $v_1\{2\}^2$.
\subsection{Three-particle correlations}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Ctot.eps}
\caption{(Color online) Centrality dependence of $C_{m,n,m+n}\times N_{\rm part}^2$ for mid-pseudorapidity ($|\eta|<1$) charged particles of transverse momentum $p_T>0.2$ GeV/$c$ in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV. Open circles are experimental data from Ref.~\cite{Adamczyk:2017byf}.}
\label{corr}
\end{figure}
For the three-particle correlations, we have calculated $C_{112}$, $C_{123}$, $C_{224}$ and $C_{235}$, and compared them to the experimental results~\cite{Adamczyk:2017byf}. Figure \ref{corr} shows $C_{m,n,m+n}\times N_{\rm part}^2$ for the four cases as functions of the number of participant nucleons. For $C_{112}$ in the upper left panel of Fig.~\ref{corr}, our results show good agreement with the experimental data, although there are some discrepancies in more central collisions. Values of $C_{112}$ from both our calculations and the experimental measurement are negative for all centralities. Besides possible non-flow effects from momentum conservation in the AMPT simulations, this could imply that the angles $\Psi_1$ and $\Psi_2$ of the reaction planes for directed and elliptic flows are likely to be perpendicular to each other. Our results on $C_{123}$, shown in the upper right panel of Fig. \ref{corr}, are seen to agree with the experimental data within their error bars for most-central collisions but are smaller than the experimental data for mid-central collisions. Their essentially zero values indicate that the directed and triangular flows or the angles $\Psi_1$ and $\Psi_3$ of their reaction planes are not sufficiently correlated in the AMPT model for mid-central collisions. For $C_{224}$ shown in the lower left panel, our results agree extremely well with the experimental data for all centralities, although there are small differences between our results on the elliptic and quadrupolar flows and those measured in experiments as shown in Fig.~\ref{vn}. Their large values further indicate that there is a strong correlation between the angles $\Psi_2$ and $\Psi_4$ of their reaction planes. The lower right panel shows our results for $C_{235}$, which are seen to show a similar trend and magnitude as the experimental data, although overestimating the data in most-central collisions and underestimating them in mid-central collisions. Since the AMPT model reproduces reasonably well the various anisotropic flows measured in experiments as shown in Fig.~\ref{vn}, the above results on its reasonable success in describing also the measured three-particle correlations clearly indicate that the initial states in the AMPT model are quite similar to those generated in heavy ion collisions.
\subsection{Relative pseudorapidity dependence of $C_{m,n,m+n}$}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{eta.eps}
\caption{(Color online) Relative pseudorapidity $|\Delta \eta|$ dependence of $C_{m,n,m+n}$ for mid-pseudorapidity ($|\eta|<1$) charged particles of transverse momentum $p_T>0.2$ GeV/$c$ in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV. Open circles are experimental data from Ref.~\cite{Adamczyk:2017byf}.}
\label{eta}
\end{figure}
In this section, we study the relative pseudorapidity $|\Delta \eta|$ dependence of $C_{m,n,m+n}$ for mid-pseudorapidity ($|\eta|<1$) charged particles of transverse momentum $p_T>0.2$ GeV/$c$ in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV and centrality 20--30\%. In particular, we consider the pseudorapidity difference between the first and the second particle ($\eta_1-\eta_2$) or between the first and the third particle ($\eta_1-\eta_3$). The upper left panel of Fig.~\ref{eta} shows that $C_{123}$ from the AMPT model changes only slightly with $|\eta_1-\eta_2|$, as in the experimental data. These results imply that there is negligible breaking of boost invariance as seen in terms of the pseudorapidity dependence of the angle $\Psi_2$ of the elliptic flow. The $|\eta_1-\eta_3|$ dependence of the results from the AMPT model, shown in the upper right panel, shows, on the other hand, a strong decrease with increasing $|\eta_1-\eta_3|$, as in the data, indicating that the azimuthal angles $\Psi_1$ and $\Psi_3$ of the reaction planes for the directed and triangular flows change strongly with rapidity. Since our results for $C_{123}$ are smaller than the experimental data for small values of $|\eta_1-\eta_2|$ and $|\eta_1-\eta_3|$, the reaction planes for the directed and triangular flows in the AMPT model are thus less correlated than measured in experiments. This is probably due to the Hanbury-Brown-Twiss interference of identical particles at small $\Delta\eta$~\cite{HanburyBrown:1956bqd}, which is not included in the AMPT.
The lower two panels show the $|\eta_1-\eta_2|$ and $|\eta_1-\eta_3|$ dependence of $C_{224}$, and both are seen to agree with the experimental data very well. As for $C_{123}$ in the upper left panel, $C_{224}$ in the lower left panel also changes little with $|\eta_1-\eta_2|$, indicating that the reaction plane for the elliptic flow has a weak dependence on rapidity. The lower right panel shows that $C_{224}$ decreases slightly with increasing $|\eta_1-\eta_3|$, implying that the reaction plane for the quadrupolar flow changes with rapidity and thus slightly breaks boost invariance.
\section{Summary}
Using the AMPT model with parameters for the Lund string fragmentation and parton scattering taken from Ref.~\cite{PhysRevC.84.014903}, we have calculated the centrality dependence of various anisotropic flows in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV with the two-particle cumulant method. The obtained results are seen to agree with the experimental data from the STAR Collaboration in both trend and magnitude. We have found that the square of the directed flow, $v_{1}\{2\}^2$, can be negative in more peripheral collisions as in experiments, and this has been attributed to the net total transverse momentum of the particles included in the evaluation and the small number of particles in more peripheral collisions.
We have also used the AMPT model to study various three-particle correlations in Au+Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV as functions of centrality, which contain information on both the flow harmonics and the correlations among their reaction planes. We have found that our results for $C_{112}$, $C_{224}$ and $C_{235}$ generally agree with the experimental data both in their magnitude and in their dependence on the participant number of the collisions. In particular, our results for $C_{224}$ agree very well with the data, although our results for the elliptic and quadrupolar flows differ slightly from the data. For $C_{123}$, our results show that for mid-central collisions there is a weaker correlation between the angles of the reaction planes for the directed, elliptic and triangular flows in the AMPT model than in the experimental data. We have further studied the dependence of the three-particle correlations on the relative pseudorapidities $|\eta_1-\eta_2|$ and $|\eta_1-\eta_3|$ between the first and second particles as well as between the first and third particles. Our results are seen to agree with the experimental data for $C_{123}$ and $C_{224}$, and indicate that boost invariance is weakly broken in the angles of the reaction planes for the elliptic and quadrupolar flows but strongly broken in those for the directed and triangular flows. These results have led us to conclude that the AMPT model, with its fluctuating initial conditions and strong partonic scatterings, can capture the essential collision dynamics of relativistic heavy ion collisions as revealed in the measured anisotropic flows and three-particle correlations.
\section*{ACKNOWLEDGEMENTS}
We thank Prithwish Tribedy for discussions that led to the present study and for his critical reading of the manuscript. This work was supported in part by the US Department of Energy under Contract No. DE-SC0015266 and the Welch Foundation under Grant No. A-1358.
\section{Introduction}
High energy particles of cosmic origin interact with our atmosphere and create cascades of secondary particles known as extensive air showers (EASs). These EASs can be observed by recording the fluorescence, Cherenkov, and radio emission produced while propagating through the atmosphere and/or detecting the secondary particles that reach ground level. In order to predict and understand EAS observations, they must be compared to Monte Carlo simulations of the interactions within the atmosphere, commonly performed with software packages like CORSIKA \cite{corsika}. This simulation package bundles electromagnetic and several selectable hadronic interaction packages to calculate the development of the particle cascades. The hadronic interaction event generators are separated into low and high energy models, and in the simulation package one or the other is selected depending on the energy of the interaction being considered. By default, the energy at which the switch between high and low energy models occurs is 80\,GeV.
In a recent study \cite{Sys_Diff_HIM}, it was found that for cosmic-ray protons with an energy just above the typical switching energy, the properties of the simulated EASs have a strong dependency on the selection of the hadronic interaction model. With increasing energy of the incident proton (up to 100\,TeV), the average differences in air shower properties between the models seemed to decrease. In this follow-up study, we focus on the switching energy domain, where the differences between the models are most prominently exposed (up to a $60\%$ difference in the ground level observables). We compare in detail the nature of the early shower development and its relation to ground-level observables for different hadronic interaction models.
The differences in ground-level observables at 100\,GeV are driven by the early shower development and can therefore expose intrinsic differences between the hadronic interaction models. Therefore, low energy showers serve as a laboratory for inferring differences in the first interactions from the ground level outcome. Another domain where the first interaction will play an important role is in the estimation of background rates for gamma-ray astronomy. A fraction of hadronic first interactions will generate an energetic $\pi^0$, which will subsequently generate an electromagnetic cascade, mimicking a gamma-ray induced air shower. The rates of these events generated by different hadronic interaction models have a significant impact on sensitivity studies for gamma-ray observatories \cite{CTA_HADR}. Ideally, the different models should be directly compared to measurements made at dedicated experiments at particle accelerators (for example \cite{Dembinski_proton_Oxigen}).
\section{Simulation methodology}
All simulations for this study were performed with the Monte Carlo air shower event generator CORSIKA v7.64. The hadronic models tested were EPOS-LHC \cite{EPOS-LHC}, QGSJetII-04 \cite{QGSJetII-04}, SIBYLL 2.3c \cite{Sibyll2.3c,SIBYLL2.3-2020} and UrQMD \cite{UrQMD}.
The transition energy is not an a priori defined parameter and can flexibly be set up to a few hundred GeV. Nevertheless, it was set to 80\,GeV for all high energy models (the default value in CORSIKA). UrQMD, in contrast, is a low energy model and governs the hadronic interactions in air showers below the transition energy.
The main goal of this research is to understand the hadronic interaction models' deviations in the sub-TeV regime, where they have been seen to significantly disagree \cite{Sys_Diff_HIM}. We chose an initial proton energy of 100\,GeV (including the rest mass) to perform all simulations.
Since the energy range studied lies within the validity region of both model classes around the transition energy, agreement was expected between using only the low energy model and using the combination of high- and low-energy models. Full shower simulations with UrQMD alone were carried out by setting the transition energy to values larger than the initial proton energy.
In order to remove the effect of model intrinsic cross-section differences (given in Appendix \ref{Cross-sectionSection}) and track the particles produced in the first interaction, all simulations were performed with a fixed first interaction altitude. We set the altitude of the first interaction to $17.55$\,km, which corresponds to the average cross-section of the interaction models. The atmospheric model chosen was GDAS/May, corresponding to an atmospheric depth of 85\,g\,cm$^{-2}$ at the initial altitude. Moreover, the collision setup consisted of a fixed-target nitrogen nucleus and an incoming proton at zero degree zenith angle. The nucleon-nucleon centre-of-mass energy is $\sqrt{s_{NN}} \approx 14$\,GeV.
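As a quick cross-check of this value (a rough estimate treating the target nucleon as free and at rest, with $m_N \approx 0.938$\,GeV, and neglecting nuclear binding and Fermi motion):
\begin{equation*}
\sqrt{s_{NN}} = \sqrt{2\,m_N E_p + 2\,m_N^2} \approx \sqrt{2 \times 0.938 \times 100 + 2 \times 0.938^2}~\text{GeV} \approx 13.8~\text{GeV}.
\end{equation*}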
The most important feature in the simulations was the possibility of observing, event by event, the final state immediately after the first interaction and at ground level. It allowed us to compare ground-level observables from different models for fixed classes of first interactions. Two observation levels were defined: the first, 1\,cm below the initial interaction point, to register the initial particle production; and the second, at the ground level of the HAWC gamma-ray observatory at an altitude of 4100\,m \cite{HAWC_CRAB}. We used the lateral distribution functions (LDF) of the muon and electromagnetic (EM) components as the main observables at ground level.
\section{First Interaction Contributions to the Lateral Distribution Functions}
\label{sec:FirstInteraction}
\subsection{First interaction classification}
To relate the initial collisions with ground level observables, the different first interaction scenarios will be characterised. As a wide variety of particles can be produced in the hadronic interaction, we group particles into four different ``families'' (see Table \ref{particlefamiliestable}). The classification is motivated by each family's impact on the shower development and the preservation of a causal correlation between first interaction products and ground level observables; for example, the \emph{muonic family} contains muons and particles which typically decay to produce muons. The \emph{other hadrons} family was defined to contain particles which have decay channels that simultaneously contribute to multiple components. It is important to separate these hadrons in order to avoid mistaken links between the post-first interaction particle spectrum and the particle types observed at ground level.
\begin{table}[hbt!]
\centering
\caption{Definition of particle families.}
\begin{tabular}{|l|l|}
\hline
\textbf{Nucleons } & $p$, $\bar{p}$, $n$, $\bar{n}$ \\ \hline
\textbf{Muonic family } & $\mu^{+}$, $\mu^{-}$, $\pi^{+}$, $\pi^{-}$, $K^{0}_{L}$, $K^{+}$, $K^{-}$, $K^{0}_{S}$ \\ \hline
\textbf{EM component } & $\gamma$, $e^{-}$, $e^{+}$\\ \hline
\textbf{Other hadrons } & $\Lambda$, $\Sigma^{+}$, $\Sigma^{-}$, $\overline{\Sigma}^{-}$, $\overline{\Sigma}^{+}$, $\Xi^{0}$, $\Xi^{-}$, $\Omega^{-}$, $\bar{\Lambda}$, ... \\ \hline
\end{tabular}
\label{particlefamiliestable}
\end{table}
A common parameter used to characterise the energy distributions in an interaction is the inelasticity,
\begin{equation}
\kappa=1-\frac{E_{\text{LP}}}{E_{\text{FI}}} \approx 1 - x^{\text{LP}}_F,
\end{equation}
where $E_{\text{FI}}$ is the total shower energy (sum of all particle energies) after the first interaction and $E_{\text{LP}}$ the leading particle energy. The inelasticity $\kappa$ is closely related to Feynman's scaling variable $x^{\text{LP}}_F$ and is a measure for how much energy is available for the production of secondary particles. The leading particle jointly with the variable $\kappa$ allows one to identify similar types of interactions within the different models.
Note that the total shower energy after the first interaction is used in the definition of the inelasticity to ensure that $\kappa$ is in the range $[0,1]$. Normalisation to the energy of the initial system (proton plus target) would violate this constraint, since in all four models some violation of energy-momentum conservation was observed.
This was additionally checked at generator level using the CRMC software package \cite{CRMC} for EPOS-LHC, QGSJetII-04 and SIBYLL 2.3c. Small differences in the amount of energy violation between the two software packages are present, caused by CORSIKA's energy cutoff in tracked particles. QGSJetII-04, SIBYLL 2.3c and UrQMD show a similar behaviour after the first interaction, deviating by approximately $\pm 5$\,GeV from the initial proton energy. In EPOS-LHC's case, events deviating by up to $\pm 15$\,GeV were registered. Currently, these anomalies are under investigation in collaboration with the models' authors.
To further constrain the initial interaction scenarios, we define three inelasticity bands: elastic and diffractive ($\kappa_1\approx[0,0.2]$), intermediate-inelastic ($\kappa_2 \approx[0.2,0.4]$) and highly inelastic collisions ($\kappa_3 \approx[0.4,1]$); a minimal classification sketch is given after the list below.
\begin{itemize}
\item The $\kappa_1$ regime encompasses events with leading particles carrying energies above the threshold defined by the transition from the high to the low energy hadronic model ($E^{\text{LP}}_{\kappa_{1}} \in [80, E_{p,\text{FI}}]$\,GeV, where $E_{p,\text{FI}}$ is the proton energy at the first interaction). These showers will undergo at least one additional interaction governed by the high energy hadronic model, potentially leading to a larger impact on the ground level outcome. Furthermore, as most energy is concentrated in a single particle, the branching of the shower is very weak. We will refer to events with $\kappa<0.05$, a sub-regime of $\kappa_1$, as highly diffractive events. In this sub-regime, less than $5\%$ of the total energy is available for particle production in the first interaction. Moreover, these showers are expected to replicate a scaled version of the first interaction spectrum in the subsequent interaction.
\item $\kappa_2$ showers have leading particles in the energy range $E^{\text{LP}}_{\kappa_{2}} \in [60, 80]$\,GeV. The next interaction these showers undergo is governed by the low energy model; therefore, the ground level observables will reflect the difference in the spectrum of particles that accompany the leading particle and their normalisation.
\item Finally, in the $\kappa_3$ region leading particles have energies in the range $E^{\text{LP}}_{\kappa_{3}} \in [0, 60]$\,GeV. These events correspond to highly inelastic first interactions in which the model's initial particle production spectrum plays a decisive role.
\end{itemize}
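The following minimal Python sketch (illustrative only; the band boundaries are those defined above, which in practice are smeared by the event-by-event energy violation discussed earlier) computes the inelasticity of a first interaction from the energies of its secondaries and assigns the event to a band:
\begin{verbatim}
def inelasticity(secondary_energies):
    """kappa = 1 - E_LP / E_FI, with E_FI the total energy after
    the first interaction and E_LP the leading (most energetic)
    particle energy."""
    E_FI = sum(secondary_energies)
    E_LP = max(secondary_energies)
    return 1.0 - E_LP / E_FI

def kappa_band(kappa):
    """Assign an event to one of the inelasticity regimes."""
    if kappa < 0.05:
        return "highly diffractive (sub-regime of kappa_1)"
    if kappa < 0.2:
        return "kappa_1 (elastic and diffractive)"
    if kappa < 0.4:
        return "kappa_2 (intermediate-inelastic)"
    return "kappa_3 (highly inelastic)"
\end{verbatim}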
\begin{figure}[hbt!!]
\centering
\includegraphics[width=1.0\columnwidth]{Figures/EPOS_LP_ALL_pev_v6_labeled.pdf}
\caption{EPOS-LHC's inelasticity distributions for each leading particle family. The two shaded regions indicate
the systematic uncertainties in the definition of $\kappa$ due to event-by-event
fluctuations in the amount of energy violation.}
\label{EPOS_LP_ALL}
\end{figure}
In Figure \ref{EPOS_LP_ALL}, EPOS-LHC's leading particle inelasticities are shown for each family. In most cases the leading particle is a nucleon; only at very large inelasticities do particles from the muonic family dominate. Showers with leading EM particles or ``other hadrons'' contribute less than $7.5\%$ in all models.
\begin{figure*}[htp!]
\centering
\includegraphics[width=1.0\textwidth]{Figures/MuEM_LDF_kiprotons_separated_pev_v2_FINAL_Sameyaxis_labeled.pdf}
\caption{Breakdown of the muon (left panels) and EM component (right panels) LDFs into the contributions from the three inelasticity regimes in nucleon led events. To have leading particles with similar properties, here energy ranges are taken as a proxy for the respective $\kappa$-regions, the boundaries of which are smeared due to event-by-event fluctuations in the energy violation of the models.}
\label{MUEMLDF_kiprotons_divided_abs_rel}
\end{figure*}
\subsection{Lateral distribution functions} \label{LDFinelasticitybands}
To investigate the source of the disagreements between the models we study the ground level outcome arising from events with similar first interactions, focusing on the two leading contributions as explained in Appendix \ref{LDFAppSection}. In Figure \ref{MUEMLDF_kiprotons_divided_abs_rel} the four models' muon and EM LDFs in the three classes are presented for events with a leading nucleon. As each $\kappa$-regime encloses broadly the same varieties of first interactions, we expect to obtain similar LDFs in each contribution.
The largest differences between the muon LDFs appear in the largest contributors, the $\kappa_1$ and $\kappa_3$ regimes. Initial collisions with large $\kappa$ produce abundant particle content (mostly muonic family particles) along with the leading particle, which later decay and contribute to the number of muons at ground level. In events where the initial interaction is diffractive, higher inelasticity interactions occur deeper in the atmosphere, where the production of secondary particles from the muonic family will more likely affect the outcome at ground level. For the three high energy models, the largest ground level muon source originates from $\kappa_3$ events (highly inelastic interactions) while for UrQMD it is from $\kappa_1$ (diffractive interactions).
In the $\kappa_3$ regime, the LDFs provide interesting information on the particle spectrum in the first interaction. Considering the integral over the LDF, SIBYLL 2.3c produces the most muons ($1.85$ per shower of this type), followed by EPOS-LHC ($1.79$), QGSJetII-04 ($1.70$) and lastly UrQMD ($1.45$). The relative difference between SIBYLL 2.3c's and EPOS-LHC's average muon numbers is apparent from the shapes of the LDFs. As also seen in \cite{Sys_Diff_HIM}, QGSJetII-04 concentrates its smaller average number of muons at low core distances due to a lower muon transverse momentum (to be discussed in the next section).
In the LDF of the EM component, the largest contribution to the energy flow is present in the $\kappa_1$ nucleon led events. As in the muon LDF, a strong contribution from UrQMD's diffractive peak can be seen. A large contribution from $\kappa_2$ showers is not expected as the number of events in this regime is significantly lower $\left(\left. \frac{\text{N}_{\kappa_{2}}}{\text{N}_{\kappa_{3}}} \right\vert_{\text{EPOS-LHC}}\approx 0.29\right)$,
but also in highly inelastic events the ground level contribution to the EM component is small. This leads to the conclusion that the ground level EM component is driven by diffractive interactions. Physically this effect is understood, as at the energy of this study an EM component produced high in the atmosphere is very likely to die out before reaching ground level, while lower $\kappa$ interactions produce showers that develop deeper in the atmosphere and hence produce EM showers more likely to reach the ground.
\begin{figure}[hbt!]
\centering
\includegraphics[width=\columnwidth]{Figures/muLDF_LPpions2.pdf}
\caption{Muon LDF contribution from events led by muonic family particles.}
\label{MULDF_LPPions}
\end{figure}
Separately, in Figure \ref{MULDF_LPPions} the muon LDF from events led by muonic family particles is presented, integrated over all $\kappa$ ranges. In such events, a large fraction of the shower energy is assigned to multiple particles from the muonic family in the first interaction; therefore, the large differences are expected to be caused by dissimilarities in the production of this type of secondary particles. For QGSJetII-04 the muon production at short distances ($r \lesssim 200$\,m) is approximately three times larger than for the other models. Similarly to the case of the $\kappa_3$ nucleon led muon LDF, the integral over the LDF does not indicate a sizeable difference in the absolute number of muons reaching ground level. QGSJetII-04's absolute number of muons from this type of scenario is only $4\%$ larger than UrQMD's, and $20\%$ larger than SIBYLL 2.3c's and EPOS-LHC's. The observed excess at distances relevant for experimental purposes is therefore not expected to come from a larger number of events but from a different character of the muonic family particles.
Concluding this section, we have seen that a breakdown of the ground level particle distributions into contributions from different types of initial interactions reveals striking differences between the models considered in this study. As the inelasticity ranges represent roughly similar physical events, a better agreement was expected.
\newpage
\section{First interaction analysis and sources of disagreement unravelled}
\subsection{First interaction rates}
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{Figures/LP_protons_pions_pev_FINAL_3_labeled.pdf}
\caption{Inelasticity distributions for the four studied models, for events led by particles from the two dominant families: nucleons and muonic particles.}
\label{LP_proton_pion}
\end{figure*}
In order to find the cause of the observed ground level differences, we continue by studying the phenomenology of the first interaction. To understand the source of the dissimilarities in the different inelasticity bands, we begin by analysing the leading particle inelasticity distributions for the two families that contribute most, shown in Figure \ref{LP_proton_pion}.
Within the diffractive regime, the number of UrQMD events falling into the diffractive peak ($\kappa<0.05$) comprises over $21\%$ of the total number of showers ($67\%$ of the $\kappa_1$ events and more than five times EPOS-LHC's number of diffractive events). This abundance is responsible for the observed excess in both $\kappa_1$ LDFs (Figure \ref{MUEMLDF_kiprotons_divided_abs_rel}). As the shower development is spatially delayed, lower energy muonic family particles produced in secondary interactions will be more likely to contribute to the ground level products. Additionally, the EM component is directly enhanced as the first interaction is only weakly inelastic and, therefore, the EM shower maximum will be reached deeper in the atmosphere. Compared to the other models, UrQMD compensates for the high probability of low inelasticity events (Figure \ref{LP_proton_pion}) with a larger cross-section (Table \ref{crosssections} in Appendix \ref{Cross-sectionSection}), i.e., more interactions take place in UrQMD showers but they produce fewer particles on average.
Approximately $5\%$ of the total number of EPOS-LHC's and SIBYLL 2.3c's nucleon led events fall in the highly diffractive peak ($\kappa < 0.05$). However, it is necessary to recall the consequences of energy violation in the first interaction. In EPOS-LHC's case, the number of highly diffractive events decreases significantly if the first interaction is not considered to violate energy conservation. This implies that highly diffractive events lose considerable amounts of the shower energy and, therefore, the leading particle energy is lower.
For QGSJetII-04 the $\kappa_1$ regime shows a difference in behaviour when compared to the other models. Its diffractive peak is localised and composed of only a small fraction of the total number of events. While the rest of the models show a minimum around $\kappa\sim0.1$, QGSJetII-04 maintains a constant level as soon as energy for particle production is available. This means that QGSJetII-04 has a higher rate of events with particle production than EPOS-LHC and SIBYLL 2.3c in the $\kappa_1$-regime.
Such behaviour accounts for QGSJetII-04's domination in the absolute number of muons and in the EM component energy shown in Figure \ref{MUEMLDF_kiprotons_divided_abs_rel} (top panels).
Furthermore, the dissimilar fractions of events at $\kappa \sim 0.1-0.2$ suggest an important source of disagreement for the event rate and particle production for primaries below the TeV scale. At higher energies, a better agreement and a smoother transition in this regime are reached, significantly reducing the relative differences in the ground level $\kappa_1$ contributions.
In the $\kappa_2$ regime, the distribution shapes broadly agree between models and only differ by a normalisation factor. This is also seen in the middle panels of Figure \ref{MUEMLDF_kiprotons_divided_abs_rel}, where the larger number of events results in a larger number of ground level particles rather than in a change in the shape of the LDF. These events exhibit the best average ground level agreement, as the low energy model governs most of the shower behaviour. The accompanying particle spectra are determined by the available energy, and the leading particles of the different models all place the subsequent shower development under the same low energy model description. With increasing inelasticity, more of the particle production happens in the first interaction within the high energy model, thereby again reducing the importance of the low-energy model for the description of the ground-level observables.
The $\kappa_3$ nucleon led distributions present large differences in the normalisation of event rates. For example, SIBYLL 2.3c strongly enhances large $\kappa$ collisions led by nucleons over muonic family particles, presenting no inelasticity regimes where the most likely leading particle is a muonic particle.
The large variations seen in the $\kappa_3$ distributions show how the models' relative rates correlate with the observed differences in the muon number density in the $\kappa_3$ LDFs. It follows that even in inelastic events, the ground level differences arise from incompatibilities in the same first interaction scenarios.
To close this section, we have seen that the number of events falling in each inelasticity regime correlates with the hierarchy of the interaction products at ground level observed in the LDFs of Section \ref{LDFinelasticitybands}. This correspondence shows that a large part of the ground level differences originates from the models' dissimilar event rates. In other words, the normalisation differences emerge from dissimilar nucleon-nucleon cross-sections. Although a certain compensation between scenarios is reasonable (e.g., $\kappa_3$ nucleon led and muonic family particle led), the strong disagreement around $\kappa\sim0.1$ reflects intrinsically large model incompatibilities.
\subsection{Accompanying particle production}
\begin{figure}[htb!]
\centering
\includegraphics[width=\columnwidth]{Figures/muGL_K_LP_protons_all_rebin_3.pdf}
\caption{Average number of ground level muons per event as a function of the first interaction inelasticity for nucleon led and all types of events.}
\label{GLmu_vs_KLPprotons}
\end{figure}
In addition to the differences in the inelasticity distributions and consequently in the event rates, other characteristics have been found to directly affect the ground level muon LDF. Figure \ref{GLmu_vs_KLPprotons} shows the average ground level muon number per event as a function of the first interaction inelasticity, both for all event types and for nucleon led events. Regardless of the inelasticity distributions, some of the models generate a noticeably different number of muons at ground level per event.
For example, QGSJetII-04's events produce the largest average number of ground level muons, both for all events and for the selection of nucleon led events. The greater average number of ground level muons, combined with the enhancement of the rate of events, results in an overall larger muon number. Comparing both curves, it can be observed that although EPOS-LHC's and SIBYLL 2.3c's muon production in nucleon led events shows slight differences, these vanish when all events are considered. This illustrates how these two models reach a similar ground level impact through a compensation between the contributions of both leading families.
Furthermore, the larger number of muons in the $\kappa_1$ regime reveals two interesting points: first, a stronger influence of the high energy model results in a larger number of muons; second, there is a disagreement between the high and low energy models.
Due to the transition from the high energy model to the low energy model, there is a reduction of the muon number at $\kappa\sim0.2$, above which all secondaries from the first interaction are handled by the low energy model. The drop is most pronounced for QGSJetII-04. This illuminates the general difference between high and low energy models and the impact of the secondary interactions. As we are plotting a ground level quantity in terms of a first interaction parameter, the distributions shown are sensitive to the late shower development. At these inelasticities, the influence of the low energy model on the shower is at its largest, as the transition energy was set to 80\,GeV and the leading particle's next interaction will be determined by the UrQMD model. QGSJetII-04's sudden change in slope shows how events in which the leading particle is governed by the low energy model produce a lower number of muons at ground level (as UrQMD overall does). Such convergence towards UrQMD's behaviour shows how showers are built from dissimilar model approaches and how sensitive the ground level number of muons is to the model switch.
In order to explain the general muon excess from QGSJetII-04 (compared to the other models), it is not enough to only look at the leading particle in the first interaction; the accompanying particles also need to be studied. The early production of particles from the muonic family will directly impact the ground level muons.
The minimum energy for 50\% of the muons to travel approximately $17$\,km without decaying is $\sim5$\,GeV. We will use this energy as a threshold to select the first interaction muonic family particles that contribute directly to the ground level muon number.
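As a rough consistency check (decay only, neglecting ionisation losses; $c\tau_\mu \approx 659$\,m, $m_\mu c^2 \approx 0.106$\,GeV): a relativistic muon of energy $E$ survives a path $d$ with probability
\begin{equation*}
P_{\rm surv} \approx \exp\!\left(-\frac{d\, m_\mu c^2}{E\, c\tau_\mu}\right),
\end{equation*}
so requiring $P_{\rm surv}=0.5$ over $d \approx 17$\,km gives $E \approx d\, m_\mu c^2/(c\tau_\mu \ln 2) \approx 4$\,GeV; ionisation losses in the atmosphere push this estimate towards the quoted $\sim5$\,GeV.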
\begin{figure}[htb!]
\centering
\includegraphics[width=\columnwidth]{Figures/LPprotons_pFI_pions_Cutoff_OnOff_o5GeV_E100_AllRelDiff_5GeVRelDiff_v3_labeled.pdf}
\caption{Muonic family particle production ratios with respect to EPOS-LHC considering all particle energies (top panel) and above an energy threshold of 5\,GeV (bottom panel).
}
\label{averageNpions_PFI_vsK}
\end{figure}
In Figure \ref{averageNpions_PFI_vsK}, the average numbers of muonic family particles produced in nucleon led events are studied. In the top panel, the ratio between the average numbers of all accompanying muonic particles is presented, and in the lower panel the cut of 5\,GeV is applied. Considering all muonic family particles produced, QGSJetII-04's larger average ground level muon number is not evident, as it does not show an increased abundance over the other models. However, when considering high energy muonic family particles, QGSJetII-04 exhibits an average effective muonic particle surplus over the other models, accounting for the average shower muon excess at ground level.
The investigation in \cite{Sys_Diff_HIM} of the muonic family particle production and ground level muon number at higher primary energies shows that a better agreement between the models is not found by increasing the primary energy. This means that the discontinuity in the description of hadronic physics when switching from the high-energy to the low-energy model is not reduced. A consistent description of particle production would require a well defined transition energy where the physics implemented in the models agrees.
\subsection{Transverse momenta of first interaction muonic family particles and distribution of ground level muons}
\begin{figure}[hbt!]
\centering
\includegraphics[width=1\columnwidth]{Figures/Pt_muons_pions_vsK_LPALL_RELDIFF_v2.pdf}
\caption{Ratios with respect to EPOS-LHC of the average transverse momenta of (top panel) first interaction muonic family particles with energy above 5\,GeV and (bottom panel) ground level muons, as functions of the first interaction inelasticity.
\label{MM_muons_pt_K}
\end{figure}
As discussed in Section~\ref{sec:FirstInteraction}, the number of muons at small distances ($r<200$\,m) from the shower core produced in events with large inelasticity ($\kappa_3$) and with a leading nucleon or particle from the muonic family does not reflect the total number of produced muons. The difference seen can instead be understood from the larger number of events falling in this regime and the average muonic family particle production. An explanation for the muon concentration (QGSJetII-04's case) and the spread (UrQMD's case) is important for ground-level observations. A parameter directly related to this spread is the transverse momentum of the ground level particles; lower $p_{\text{t}}$ muons concentrate at short distances while large $p_{\text{t}}$ muons are spread away from the shower axis. In Figure \ref{MM_muons_pt_K}, the mean transverse momenta of energetic muonic family particles in the first interaction and of ground level muons are shown. From the top panel it can be concluded that as the inelasticity increases and more energy is assigned to muonic particles, the transverse momenta significantly diverge between models. This disagreement in the first interaction is directly correlated with the muon $p_{\text{t}}$ at ground level, resulting in the discussed differences in Figures \ref{MUEMLDF_kiprotons_divided_abs_rel} (bottom left panel) and \ref{MULDF_LPPions}. The large differences between models in the transverse momentum spectra of muonic family particles (mostly produced in large $\kappa$ events) and their extrapolation to ground level muons were already pointed out in \cite{Sys_Diff_HIM}, where the higher energy muonic particles were shown to diverge between models.
Worthy of note are the slight reduction of the transverse momentum of ground level muons in low inelasticity collisions and the abrupt change at $\kappa=0.2$. As commented on for Figure \ref{GLmu_vs_KLPprotons}, this is caused by the switching from the high to the low energy model. It is most clearly seen between the models with the most different muonic family particle production and properties, e.g., QGSJetII-04 and UrQMD.
\section{Model Summary}\label{ModelSummarySection}
In this section, the main characteristics and differences of the models are summarised:
\begin{itemize}
\item EPOS-LHC's ground level components present a relative deficit in the contribution from $\kappa_1$ events, as shown in the muon and EM component LDFs. This deficit is a consequence of the weak impact of EPOS-LHC's diffractive collisions, as their leading particles are not as energetic as in the other models. Moreover, this model is the most affected by the energy violation in the first interaction. Variations of over 10\,GeV were registered, which influenced the computation of the first interaction inelasticity. The diffractive peak at $\kappa<0.05$ would be completely smeared out, as the leading particles from highly diffractive events have on average lower energies in EPOS-LHC than in the other models.
Moreover, EPOS-LHC's production of accompanying high energy muonic family particles is slightly lower than in the other models but compensated by a larger production of secondary high energy nucleons.
\item QGSJetII-04 has been shown to present a significantly larger number of muons at ground level. Moreover, the excess is critical
at short core distances where simulations are most relevant for experimental purposes.
The overproduction arises from a different energy spectrum of secondary particles that does not cause an overall larger number of muonic family particles but an increase in the amount of those carrying energies over 5\,GeV. Additionally, highly diffractive events are suppressed while at larger inelasticities, where more energy is available for particle production, the rates are enhanced.
QGSJetII-04's muon concentration at short core distances is caused by the lower transverse momenta of the muonic family particles produced. It is clearest at large $\kappa$ values, where muonic particles carry large fractions of the shower energy. The muonic family particles' $p_{\text{t}}$ is transferred in their decay to the muons, which reach ground level closer to the shower axis.
\item SIBYLL 2.3c's ground level spectrum is similar to EPOS-LHC's. Despite this agreement, both models have been shown to be very different; the event rate at large $\kappa$ values is significantly different as well as the accompanying particle production. In the $\kappa>0.4$ regime, events led by nucleons with a strong accompaniment of particles from the muonic family are enhanced over events that are led directly by muonic particles. In contrast, EPOS-LHC's $\kappa_3$-nucleon led events have a larger fraction of accompanying nucleons but result in a subtle difference at ground level.
\item UrQMD's proton cross-section is shown to be significantly larger than that of the other high energy models in Appendix \ref{Cross-sectionSection}. This has a severe impact on the development of showers initiated by 100\,GeV primaries if the height of the first interaction is not fixed. Furthermore, we have shown that over 20\% of the events are highly diffractive, a marked difference from the other models. Although a compensation between the two effects could be expected in non-fixed first interactions, it has been shown in \cite{Sys_Diff_HIM} that this does not occur, causing an overall deficit of the ground level components.
In comparison to the high energy models and specifically to QGSJetII-04, UrQMD's high energy muonic family particles show large transverse momenta. Consequently, its muon LDF is spread over a larger surface, resulting in the opposite behaviour to QGSJetII-04.
UrQMD's observed disagreement in the production of muonic particles ($>5$\,GeV) and in their transverse momentum calls the setting of the transition energy into question. It has been shown that in the $\kappa_2$ regime all high energy model predictions blend together due to the large influence of UrQMD's physics in secondary interactions. In contrast, all models seem to diverge in regimes where the low energy model does not dominate; this can be seen in $\kappa<0.2$ and $\kappa >0.5$ events.
\end{itemize}
\section{Impact of model choice and transition energy setting on air shower development}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Figures/RelDiff_QGSII_SIBYLL_EPOS_4.pdf}
\caption{On the left y-axis, the ratio of the hadronic interaction distributions in PeV proton showers is shown for each high energy model with respect to EPOS-LHC. Labelled in green and on the right y-axis, EPOS-LHC's hadronic interaction distribution is shown for a PeV proton initiated event.
}
\label{DiffswEHEmodels}
\end{figure}
So far, the main focus has been on the interaction of 100\,GeV protons with atmospheric nitrogen. An air shower can be viewed as a superposition of cascades initiated by particles with a fraction of the initial particle energy. Therefore, it is expected that the model differences observed for interactions with 100\,GeV primaries might also manifest themselves throughout the development of air showers that were initiated by primaries with energies well above 100\,GeV. Moreover, as the event generators select the hadronic interactions by the centre-of-mass energy and the participants involved, the correspondence between the studied first interactions and the mid-shower development is ensured. It is not practical to track the impact of each individual sub-shower generated by a hadronic interaction on the air shower as a whole. However, to get an idea of the impact of the hadronic interaction model choice and the transition energy between low and high energy models, the CORSIKA data access tools (COAST) package \cite{corsika} was used to track the rate of hadronic interactions throughout the whole shower development. This analysis contextualises the studied differences at higher energies and for more commonly studied primaries. Shifting the transition energy therefore reveals whether the studied differences enhance the disagreement or bring the models into agreement.
In Figure \ref{DiffswEHEmodels} we display the hadronic interaction ratios in PeV proton initiated showers (with default transition energy). Firstly, EPOS-LHC's events undergo a considerably larger number of interactions below the 10\,TeV scale, over 10\% more than QGSJetII-04 and SIBYLL 2.3c at energies at which muon production peaks. This reflects how, for the same initial conditions, models generate different shower scenarios which agree in their ground level product \cite{Sys_Diff_HIM}. Consequently, the differences in particle production and shower dynamics between models can also be inferred here. Worthy of note is QGSJetII-04's and SIBYLL 2.3c's high energy interaction rate. Although the likelihood of these collisions is well suppressed, the whole shower is affected by a larger number of interactions carrying large fractions of the initial energy.
We show in Appendix \ref{TransEHadProf} the effect of modifying the transition energy on the hadronic interaction profiles. There it is shown how an increase or decrease of the transition energy results in opposite effects for EPOS-LHC and the other models. Strengthening the role of the low energy model causes a decrease in the number of interactions under 10\,TeV in EPOS-LHC and an increase in QGSJetII-04's and SIBYLL 2.3c's. This can be understood as UrQMD's hadronic interaction profile lying in between EPOS-LHC's and SIBYLL 2.3c's.
In the previous sections we discussed how the models, high or low energy, do not agree at the default transition energy value. While a better compatibility in the inelasticity distributions can be found at higher energy, increasing the transition energy could mute the properties of the individual high energy models and threaten the consistency of events at higher energy. This inconsistency and abrupt change in the secondary interaction physics calls for a better agreement between models and/or a redefinition of the transition scales.
\section{Conclusions}
Through a detailed investigation of events initiated by 100\,GeV protons, we cast light on the sources of disagreement between hadronic interaction models \cite{Sys_Diff_HIM}. By a simultaneous analysis of the first interaction and ground level particles, we can conclude that the models present an intrinsically different behaviour in the low energy regime. An outline of each model's characteristics is presented in Section \ref{ModelSummarySection}.
Phenomenological variables were used to correlate differences in ground level observables with properties of the first interaction for some commonly employed high energy models. Discrepancies in the ground level particle number emerged from event rate deviations that were drawn from the study of first interaction inelasticity distributions. Within the same event types (same leading particle and inelasticity), the production of accompanying particles (e.g., muonic family particles) disagrees, causing differences in the ground-level muon component. The transverse momentum spectra of muonic particles that are produced in the first interaction have also been shown to differ between models. This disagreement becomes worse as their energy increases.
The focus in this study was on the differences between models in the early shower development; however, the model differences are of course not restricted to the early development stage. When increasing the primary energy, the number of interactions around the switching energy will increase; however, the differences in the average shower parameters are washed away by the dominance of particle production by the low energy model. When selecting air showers with special properties, like proton induced air showers that mimic gamma-ray induced air showers, differences between the modelling of
hadronic interactions might be exposed again \cite{CTA_HADR}, also at higher primary energies (TeV domain).
Additionally, we have shown how the event generators diverge by studying the number of hadronic interactions in showers initiated at high energies. Large differences have been found, showing how the models produce very dissimilar showers from the same initial conditions. Along this line, we have shown how sensitive the shower development is to modifications of the transition energy. The low energy model shows a clear domination of the shower (governing over 85\% of the interactions in a PeV shower with the default transition energy, with an increasing percentage for higher energy primaries) while the high energy models set the initial conditions. This points to low energy model investigations as a possible important contribution to the solution of the hadronic interaction model puzzles \cite{muondeficit,HansMuonPro,Dembinski:2019uta}.
As the energy regime studied falls within the validity range of all models, a reasonable agreement would be expected between all high energy models and UrQMD. The large differences found call for convergent model tuning to existing and/or future accelerator data, or for a redefinition of the transition regime from high to low energy hadronic interaction models that ensures consistency with the high energy partners.
\section{Acknowledgements}
The authors thank the MPIK non-thermal astrophysics group for fruitful discussions about the paper, in particular J.A. Hinton for providing helpful comments.
\bibliographystyle{elsarticle-num_etal}
\section{Introduction}
\label{section:introduction}
Deep learning approaches allow for end-to-end training of neural network models that transform inputs into desired outputs. These inputs and outputs are typically image, text, and/or audio. Most successful deep learning models involve stacking a number of layers that convert inputs into continuous space vectors and then into outputs. Typically, such layers consist of either recurrent (mostly for text and audio), convolutional (mostly for image), or self-attentional feed-forward (image or text) components.
It has been empirically demonstrated that stacking of layers leads to an improvement in performance, especially in high-resource scenarios, since it allows for better representation learning owing to additional parameters
and/or more applications of non-linear transformations. However, it also increases both space complexity, e.g., the number of parameters (or simply ``size'') of the model, and time complexity, e.g., the time consumed during training and inference, by a significant amount.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}\centering
\includegraphics*[scale=.45]{fig-vanilla.pdf}
\caption{Vanilla model}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}\centering
\includegraphics*[scale=.45]{fig-tied.pdf}
\caption{RS model (unfold)}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}\centering
\includegraphics*[scale=.45]{fig-RS.pdf}
\caption{RS model (fold)}
\end{subfigure}
\caption{Comparison of vanilla and recurrently stacked (RS) models: (a) vanilla model has different sets of parameters in different layers, whereas (b) RS model uses the same parameters multiple times. (c) is a simplified version of (b).}
\label{figure:vanilla-vs-RS}
\end{figure*}
This paper focuses on compact model architectures through simplification of existing architectures. Compact models are desirable in practical scenarios, especially when only machines with low memory are available for deployment.
The search for compact neural network models with performance comparable to their bulkier counterparts has led to the development of \emph{knowledge distillation} approaches \citep{DBLP:journals/corr/HintonVD15,kim-rush-2016-sequence,DBLP:journals/corr/FreitagAS17}. In this approach, teacher\footnote{In literature related to transfer learning by fine-tuning, teacher and student models are commonly known as parent and child models, respectively. Even though we consider that these terms are interchangeable, in this paper, we use the former terms in order to be consistent with the teacher-student terminology used in knowledge distillation.} models with a large number of parameters are first trained, and then student models with significantly fewer parameters are trained to mimic the teacher models. This mimicking takes place in the form of matching the probability distributions of the teacher models. As a result, a single student model with a small number of parameters can approach the performance of a teacher model with a significantly larger number of parameters. Additionally, such a student model can also be taught to perform as well as an ensemble of larger teacher models, which outperforms the individual teacher models.
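As a reference point, a minimal PyTorch-style sketch of the word-level distillation objective is given below (illustrative only; the exact loss, temperature handling, and scaling differ across the cited works):
\begin{verbatim}
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=1.0):
    """Cross-entropy between the teacher's softened output
    distribution and the student's, encouraging the student
    to mimic the teacher's probability distribution."""
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    student_logp = F.log_softmax(student_logits / T, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()
\end{verbatim}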
As an alternative approach for model compression, in this paper, we propose \emph{recurrent stacking} (RS)\footnote{RS can mean ``recurrently stacked'' or ``recurrent stacking,'' both of which have the same implication.} of layers. \Fig{vanilla-vs-RS} compares vanilla and RS models with $N$ layers. Unlike vanilla models, where each of the $N$ stacked layers has its own parameters, RS models use the same parameters across all the $N$ stacked layers, significantly reducing the size of the model compared to the vanilla model with $N$ layers. Our approach was motivated by works on parameter sharing architectures \cite{TACL1081,DBLP:conf/acl/DongWHYW15,kaiser:17} where the same model parameters are reused for several diverse tasks without strong negative effects on the performance of those tasks. We rationalized that if sharing parameters between tasks is not detrimental to those tasks' performance, then sharing parameters between components of a model for a single task should not be detrimental to that task's performance either.
The concept of RS does not assume characteristics of specific neural network models and tasks, and thus it should be applicable to any neural network architectures in general. In this paper, we empirically evaluate its utility, taking a case study of neural machine translation (NMT) \citep{DBLP:journals/corr/ChoMGBSB14,DBLP:journals/corr/BahdanauCB14:original}. Following the recent advances in NMT, we work with the \textit{Transformer} architecture \citep{NIPS2017_7181}, an instance of encoder-decoder neural networks, which are prevalently used for sequence-to-sequence (seq2seq) modeling \citep{DBLP:journals/corr/SutskeverVL14}.
Through experiments on three different datasets, we clarify whether it is the additional parameters or the additional non-linear transformations that are responsible for improvement in performance as a result of stacking layers.
These experiments reveal that RS models always have better performance than the 1-layer vanilla models, i.e., the models with exactly the same number of parameters. However, when compared to the 6-layer vanilla models, RS models tend to suffer from reduced performance.
To obtain compact models with a simple architecture but with comparable performance to vanilla models, we also investigate the utility of RS models as student models in a transfer learning setting. More specifically, we train RS models through two types of transfer learning techniques: a simple layer transfer followed by fine-tuning \citep{DBLP:conf/emnlp/ZophYMK16:original} and sequence-level knowledge distillation \citep{kim-rush-2016-sequence}.
For the first approach, we initialize the layer of an RS model with a particular layer from a pre-trained vanilla model and then perform fine-tuning of the parameters. For the second, we generate pseudo-training data with the pre-trained vanilla model and then train the RS model on the resultant data. The former is an explicit transfer learning method whereas the latter is an implicit transfer learning method.
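For reference, a minimal PyTorch-style sketch of the first method is given below (the module layout, i.e., the \texttt{encoder.layers} list and the \texttt{shared\_layer} attribute, is an illustrative assumption rather than the actual interface of our implementation):
\begin{verbatim}
def init_rs_from_vanilla(rs_model, vanilla_model, layer_index):
    """Layer transfer: copy the parameters of one layer of a
    pre-trained vanilla model into the single shared layer of
    an RS model, which is then fine-tuned on the task data.
    Works because the shared layer has the same architecture
    as one vanilla layer."""
    src = vanilla_model.encoder.layers[layer_index]
    rs_model.encoder.shared_layer.load_state_dict(src.state_dict())
\end{verbatim}
For the second method, no weights are copied: the pre-trained vanilla teacher decodes the training inputs, and the RS model is trained from scratch on the resulting pseudo-parallel data.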
Through additional experiments, we confirm that transfer learning methods can compensate for the performance drops that the direct training of the RS models suffer from.
This paper is an extended version of our previous work on RS layer models \cite{DBLP:conf/aaai/DabreF19} with additional novel content. The rest of this paper is organized as follows. In \Sec{relwork}, we review existing methods that focus on the sizes of deep neural network models in general.
Then, the following three contributions of this paper are presented.
\begin{description}
\item[Sections~\ref{section:proposed} and \ref{section:experiments}:] We propose RS of layers and empirically evaluate it using NMT as an application. (our previous work + minor new work + refinements)
\item[\Sec{transfer_learning}:] We also propose to train RS models through transfer learning to bridge the performance gaps between RS and vanilla models. (completely new work)
\item[\Sec{analysis}:] By visualizing the attentions across different layers of various RS models, we unveil the adaptive power of their attention mechanisms. (our previous work + refinements)
\end{description}
Finally, \Sec{conclusion} concludes this paper, mentioning future research directions.
\section{Related Work}
\label{section:relwork}
There are three major bodies of work that deal with reducing the sizes of deep neural network models: (a)~parameter sharing, (b)~transfer learning, and (c)~implementation tricks.
The simplest way to reduce the number of parameters in a neural network model is to find parts that can be shared. This \textbf{parameter sharing} has been examined in a wide variety of scenarios.
The work on zero-shot NMT \citep{TACL1081} revealed that a single encoder-decoder NMT model can host multiple language pairs without an appreciable loss in translation quality,
thereby avoiding the need to train separate models per translation direction. For languages that share orthographies \citep{sennrich-haddow-birch:2016:P16-12} or have orthographies that can be mapped to a common script \cite{kunchukuttan-etal-2018-leveraging}, using a shared embedding layer can help reduce the model size significantly, also enabling cognate sharing.
Universal Transformer \citep{univtrans} shows that feeding the output of the multi-layer encoder (and decoder) to itself repeatedly leads to an improvement in quality for English-to-German translation. The idea of RS is similar to this in the sense that both use the same parameters recurrently. However, whereas \citet{univtrans} have focused on improving the state-of-the-art, the aim of RS is to obtain neural network models with significantly fewer parameters. Eventually, the RS models have the same size as that of a 1-layer model.
Another approach is to share the parameters between the encoder and the decoder \cite{DBLP:conf/aaai/XiaHTTHQ19,DBLP:conf/aaai/DabreF19}.
We consider that this approach is orthogonal to our RS and will examine their combination in our future work.
Another common way to reduce the size of a neural network model is \textbf{transfer learning} through knowledge distillation \citep{DBLP:journals/corr/HintonVD15,kim-rush-2016-sequence,DBLP:journals/corr/FreitagAS17}, which transfers the knowledge learned by the larger teacher model(s) into a smaller student model.
It requires training one or more teacher models and thus is time-consuming.
In this paper, we examine the combination of RS and knowledge distillation to yield compact neural network models without performance loss. Additionally, unlike most existing work, we examine transfer learning by fine-tuning pre-trained models \cite{DBLP:conf/emnlp/ZophYMK16:original}.
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & Train & Dev & Test & Vocab & Training iteration \\\hline
GCP & 400k & 2,000 & 2,000 & 16k & 60k/120k\\
KFTT & 440k & 1,166 & 1,160 & 8k & 160k\\
ASPEC & 1.50M & 1,790 & 1,812 & 32k & 400k\\\hline
\end{tabular}
\caption{Datasets (number of sentence pairs) and model settings.}
\label{table:datasets}
\end{table*}
Finally, the third orthogonal body of work involves \textbf{implementation tricks}. Pruning of pre-trained models \cite{see-etal-2016-compression} makes it possible to discard around 80\% of the smallest weights of a model without deterioration in performance. Such models do require re-training and hyper-parameter tuning after pruning to avoid loss in performance. Currently, most deep learning implementations use 32-bit floating point representations, but 16-bit floating point representations can be an alternative without any loss in performance \cite{pmlr-v37-gupta15,ott-etal-2018-scaling}.
There is even more aggressive work that uses binarization \cite{Courbariaux:16} as a way to compress models. These methods can naturally be used with the RS approach and lead to a speed-up in decoding due to faster computations. The RS approach could also be combined with the binary code prediction method \cite{oda-etal-2017-neural}, which specifically aims to speed up the softmax layer of neural networks. On a related topic, compact models are usually faster to decode; studies on quantization \cite{Lin:2016:FPQ:3045390.3045690} and average-attention networks \citep{DBLP:conf/acl/XiongZS18} address the topic of fast decoding in detail.
\section{Recurrent Stacking of Layers}
\label{section:proposed}
In this paper, we propose to share the parameters across layers by \emph{recurrent stacking} (RS), as illustrated in \Fig{vanilla-vs-RS}.
Assume that $X$ is the input to the neural network model consisting of $N$ layers. Let $Y$ be the output of the topmost layer, $L_{i}$ be the $i$-th layer, and $L_{i}(\cdot)$ indicates that $L_{i}$ transforms the argument into a hidden representation. \Eq{vanilla-eq} shows the hidden layer computation of a 6-layer vanilla model. In contrast, as shown in \Eq{rs-eq}, a 6-layer RS model uses the same layer 6 times.
\begin{eqnarray}
Y &= &L_{6}(L_{5}(L_{4}(L_{3}(L_{2}(L_{1}(X))))))
\label{eq:vanilla-eq}\\
Y &= &L_{1}(L_{1}(L_{1}(L_{1}(L_{1}(L_{1}(X))))))
\label{eq:rs-eq}
\end{eqnarray}
This parameter sharing results in a single-layer deep neural network model, massively reducing the model size. During training, RS of layers forces the model to repeatedly revise the hidden states, which leads to more complex representations that should help in improving translation quality.
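To make this concrete, the following PyTorch-style sketch contrasts vanilla stacking with RS. It is purely illustrative (our actual implementation builds on tensor2tensor), and \texttt{make\_layer} stands for any layer constructor, e.g., one returning a Transformer layer.
\begin{verbatim}
import torch.nn as nn

class VanillaStack(nn.Module):
    # N distinct layers; parameters grow linearly with depth
    def __init__(self, make_layer, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [make_layer() for _ in range(num_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

class RecurrentStack(nn.Module):
    # one shared layer applied num_layers times;
    # parameter count equals that of a 1-layer model
    def __init__(self, make_layer, num_layers):
        super().__init__()
        self.layer = make_layer()
        self.num_layers = num_layers

    def forward(self, x):
        for _ in range(self.num_layers):
            x = self.layer(x)
        return x
\end{verbatim}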
Let us consider an application of RS to encoder-decoder models consisting of (a)~an encoder comprising an embedding layer for the input and $N$ stacked transformation layers, and (b)~a decoder comprising an embedding layer, $M$ stacked transformation layers, and a softmax layer for the output, where $M$ is often set to $N$.
RS is applied to each of the encoder and the decoder. Besides this approach, there are two well-known orthogonal approaches for sharing parameters of encoder-decoder models. One involves using a shared input-output vocabulary; the same parameters are used for the embeddings of both the encoder and the decoder and the softmax layer of the decoder. The other is to share the parameters between the encoder and the decoder \citep{DBLP:conf/aaai/XiaHTTHQ19}.
\section{Machine Translation Experiments}
\label{section:experiments}
We empirically evaluated the utility of our proposed method through its application to NMT, using the Transformer model \citep{NIPS2017_7181}.
Even though our method is independent of implementation and model as formulated in \Sec{proposed}, we chose the Transformer model, since it is the current state-of-the-art NMT model.
In the case of the Transformer models, each layer consists of self-attention, cross-attention (decoder only), and feed-forward sub-layers with layer normalization and residual connections for each sub-layer.
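For reference, a rough PyTorch sketch of one such decoder layer is given below; it uses the post-norm arrangement of the original Transformer and omits masks and dropout for brevity, so it is not a faithful reproduction of the tensor2tensor implementation.
\begin{verbatim}
import torch.nn as nn

class DecoderLayer(nn.Module):
    # self-attention, cross-attention, and feed-forward
    # sub-layers, each wrapped with a residual connection
    # and layer normalization
    def __init__(self, d_model=512, nhead=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead)
        self.cross_attn = nn.MultiheadAttention(d_model, nhead)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff),
                                nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norms = nn.ModuleList(
            [nn.LayerNorm(d_model) for _ in range(3)])

    def forward(self, y, enc_out):
        y = self.norms[0](y + self.self_attn(y, y, y)[0])
        y = self.norms[1](
            y + self.cross_attn(y, enc_out, enc_out)[0])
        return self.norms[2](y + self.ff(y))
\end{verbatim}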
To clarify the contribution of our RS method, we trained and evaluated the following two types of NMT models, where the same numbers of encoder and decoder layers were used.
\begin{description}
\item[(a) Vanilla NMT:] $k$-layer ($k\in\{1,2,6\}$) models without any shared parameters across layers.
\item[(b) RS-NMT:] $k$-layer ($k\in\{1,2,3,4,5,6\}$) models with parameters shared across all layers. Note that the 1-layer model is identical to the 1-layer vanilla NMT model.
\end{description}
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Model Type} & \multicolumn{2}{c|}{GCP} & \multicolumn{2}{c|}{KFTT} & \multicolumn{2}{c}{ASPEC} \\ \cline{2-7}
& \unidir{Ja}{En} & \unidir{En}{Ja} & \unidir{Ja}{En} & \unidir{En}{Ja} & \unidir{Ja}{En} & \unidir{En}{Ja} \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}RS-NMT and\\ 1-layer vanilla NMT\end{tabular}}& \multirow{2}{*}{30.4M} & \multirow{2}{*}{33.2M} & \multirow{2}{*}{20.2M} & \multirow{2}{*}{20.3M} & \multirow{2}{*}{57.7M} & \multirow{2}{*}{57.7M} \\
& & & & & & \\
\hline
2-layer vanilla NMT & 39.5M & 40.4M & 28.4M & 28.6M & 65.0M & 65.2M \\ \hline
6-layer vanilla NMT & 69.8M & 70.2M & 57.7M & 57.9M & 94.4M & 96.2M \\ \hline
\end{tabular}
\caption{The numbers of parameters for different NMT models that we trained.}
\label{table:model-parameters}
\end{table*}
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c|}{GCP} & \multicolumn{2}{c|}{KFTT} & \multicolumn{2}{c}{ASPEC} \\ \cline{2-7}
& \unidir{Ja}{En} & \unidir{En}{Ja} & \unidir{Ja}{En} & \unidir{En}{Ja} & \unidir{Ja}{En} & \unidir{En}{Ja} \\ \hline
\multicolumn{7}{l}{\textbf{(a) Vanilla NMT}}\\\hline
1 & 21.95 & 23.89 & 21.64 & 25.00 & 23.28 & 32.19 \\
2 & 24.23{$^{\dagger}$} & 25.62 & 24.14 & 30.05 & 28.06 & 38.91 \\
6 & \textbf{24.67} & \textbf{26.22} & \textbf{27.19} & \textbf{32.72} & \textbf{28.77} & \textbf{41.32} \\ \hline
\multicolumn{7}{l}{\textbf{(b) RS-NMT}}\\\hline
1 & 21.95 & 23.89 & 21.64 & 25.00 & 23.28 & 32.19 \\
2 & 23.24 & 24.47 & 24.50 & 28.53 & 27.84 & 38.54 \\
3 & 23.42 & 25.02 & 25.84 & 29.90 & 28.05 & 39.26 \\
4 & 24.33 & 25.28 & 26.23 & 30.36 & 28.08 & 39.31 \\
5 & 23.95 & 25.38 & 26.42 & 30.78 & 28.02 & 38.86 \\
6 & 24.36{$^{\dagger}$} & 25.84{$^{\dagger}$} & 26.51 & 30.83 & 27.20 & 40.04 \\ \hline
\end{tabular}
\caption{BLEU scores obtained in our experiments. The definitions of the numbers in the leftmost column are as follows: (a)~the number of distinct encoder and decoder layers and (b)~the number of recurrences. Scores in bold are the highest BLEU scores in each translation task, whereas scores marked with ``{$^{\dagger}$}'' are those that are not statistically significantly (bootstrap re-sampling with $p<0.05$) worse than the corresponding 6-layer vanilla model.}
\label{table:direct-results}
\end{table*}
\subsection{Settings}
We used three different datasets for Japanese--English translation for both directions (\unidir{Ja}{En} and \unidir{En}{Ja}):
the Global Communication Plan (GCP) corpus\footnote{The splits provided by \citet{W18-2707}.} \cite{IMAMURA18.104}, the Kyoto free translation task (KFTT) corpus,\footnote{\url{http://www.phontron.com/kftt}}
and the Asian Scientific Paper Excerpt Corpus (ASPEC)\footnote{\url{http://orchid.kuee.kyoto-u.ac.jp/ASPEC/} We used the cleaner half of this corpus.} \citep{NAKAZAWA16.621}.
\Tab{datasets} gives the number of parallel sentences for all the datasets.
We tokenized the Japanese sentences in the KFTT and ASPEC corpora using the JUMAN morphological analyzer.\footnote{\url{http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN}} We tokenized and lowercased the English sentences of these corpora using the \textit{tokenizer.perl} and \textit{lowercase.perl} scripts in Moses.\footnote{\url{https://github.com/moses-smt/mosesdecoder}} The GCP corpus was available to us in a pre-tokenized and lowercased form.
We implemented our method on top of an open-source implementation of the Transformer model \citep{NIPS2017_7181} in the version 1.6 branch of \textit{tensor2tensor}.\footnote{\url{https://github.com/tensorflow/tensor2tensor}} For training, we used the default model setting corresponding to \textit{transformer\_base\_single\_gpu} in the implementation, except that we varied the number of sub-words and training iterations depending on the dataset.
We used the tensor2tensor internal sub-word segmenter for simplicity. The settings of sub-word vocabularies and training iterations are given in \Tab{datasets}. Note that for the GCP task, we chose the vocabulary size used by \citet{W18-2707}, and trained \unidir{En}{Ja} models for 60k iterations whereas we trained \unidir{Ja}{En} models for 120k iterations.
We averaged the last 10 check-points and decoded the test sentences with a beam size of 4 and length penalty of $\alpha=0.6$ for the KFTT \unidir{Ja}{En} task and $\alpha=1.0$ for the rest. We evaluated our models using the BLEU metric \citep{Papineni:2002:BMA:1073083.1073135} implemented in tensor2tensor as \textit{t2t\_bleu}: case-insensitive and tokenized BLEU.
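Checkpoint averaging simply averages the parameters element-wise over the saved checkpoints; tensor2tensor ships its own utility for this, and the following generic sketch (assuming PyTorch-style checkpoint files) conveys the idea.
\begin{verbatim}
import torch

def average_checkpoints(paths):
    # element-wise average of parameters over checkpoints
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone()
                   for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}
\end{verbatim}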
\subsection{Reduction in Model Size}
\label{section:model_size}
\Tab{model-parameters} compares the number of parameters in the vanilla and RS models in all the translation tasks. A 1-layer RS model is identical to a 1-layer vanilla model.
For the same dataset, the number of parameters for the two translation directions can be different due to the implementation of the sub-word mechanism in tensor2tensor, which does not produce vocabularies of exactly the same sizes as we specified (\Tab{datasets}).
Whereas the numbers of parameters of vanilla NMT models increase with the number of layers, those of RS models do not, leading to a significant difference.
We observed higher percentages of parameter reduction for the settings with smaller vocabularies; compared to the 6-layer vanilla NMT, approximately 65\%, 55\%, and 40\% of the parameters were reduced for the KFTT, GCP, and ASPEC tasks, respectively. This is because the bulk of the parameters in NMT is used for the mapping between sub-words and hidden representations at the embedding and softmax layers.
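A back-of-the-envelope estimate makes this explicit. The sketch below counts only the dominant weight matrices (ignoring biases, layer normalization, and any embedding tying), so the numbers are approximations rather than the exact counts in \Tab{model-parameters}.
\begin{verbatim}
def approx_params(vocab, d=512, d_ff=2048,
                  layers=6, rs=False):
    # untied input embeddings + softmax projection
    embed_softmax = 2 * vocab * d
    enc_layer = 4 * d * d + 2 * d * d_ff  # self-attn + FFN
    dec_layer = 8 * d * d + 2 * d * d_ff  # + cross-attn
    n = 1 if rs else layers               # RS shares one layer
    return embed_softmax + n * (enc_layer + dec_layer)
\end{verbatim}
Because the \texttt{embed\_softmax} term is unaffected by RS while the layer term shrinks by a factor of \texttt{layers}, the relative saving is larger when the vocabulary, and hence the embedding term, is smaller.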
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics*[width=\textwidth]{curve-gcp-enja.pdf}
\caption{GCP \unidir{En}{Ja} task.}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics*[width=\textwidth]{curve-aspec-enja.pdf}
\caption{ASPEC \unidir{En}{Ja} task.}
\end{subfigure}
\caption{BLEU scores of the 6-layer models trained on different sizes of data.}
\label{figure:corpus-reduction}
\end{figure*}
\subsection{Translation Quality}
\label{section:quality}
\Tab{direct-results} summarizes the translation quality in BLEU for all the datasets.
In what follows, we first compare the vanilla and RS models, and then describe three additional experiments: a learning-curve experiment, training with back-translations, and an exploration of a much deeper model.
\subsubsection{Recurrently Stacked Models vs. Vanilla Models}
\label{section:main_results}
In general, no matter which dataset was used, the translation quality improved as the same parameters were recurrently used in a depth-wise fashion. In all the translation tasks, except ASPEC \unidir{Ja}{En}, the 6-layer RS-NMT model was better than the corresponding 2-layer vanilla NMT model that has more parameters, meaning that our RS-NMT models can compensate for the lack of parameters. However, even though the performance of a 6-layer RS-NMT model approaches that of the 6-layer vanilla NMT model, there are still significant differences in the two KFTT tasks and the two ASPEC tasks.
For the resource-richest ASPEC task, the 6-layer RS-NMT models were better than the 1-layer vanilla NMT models by 3.92 and 7.85 BLEU points, respectively, and they were worse than the 6-layer vanilla NMT models by only 1.57 and 1.28 BLEU points, respectively. Similar observations hold for the GCP and KFTT tasks. Even though a drop in translation quality is undesirable, in our opinion, these drops are not catastrophic given the reduction in the number of parameters.
Most existing studies assume that the improvement in performance of deep stacked models comes from the large number of parameters and hence the complex representations learned. However, even though an RS-NMT model has exactly the same number of parameters as a 1-layer vanilla NMT model, its representational power seems to be much higher. It is clear that the continuous representations undergo a number of non-linear transformations (linear transformations and non-linear activations) thanks to the RS of layers. The same happens in vanilla NMT models except that more parameters are involved per transformation at each layer. Thus, our comparison suggests that it is the non-linear transformations that are responsible for the bulk of the improvement in deep stacked models. In our opinion, this is an important observation in deep learning research and deserves further exploration.
\subsubsection{Effect of Corpora Sizes on Translation Quality}
\label{section:corpus_size}
\Fig{corpus-reduction} shows the difference in the performance between the RS-NMT and the vanilla NMT when we vary the size of the training data for the same datasets. We plot the BLEU scores for the GCP \unidir{En}{Ja} and ASPEC \unidir{En}{Ja} tasks. Whereas it is obvious that reducing the size of training data deteriorates the BLEU scores, the trends in the performance difference between RS-NMT and vanilla NMT remain almost the same: i.e., the 6-layer RS model is much better than the 1-layer vanilla model, even though it is quite clear that the reduction in the number of parameters can come with a drop in translation quality compared to the 6-layer vanilla model.
As such, we can safely say that our proposed method is independent of corpus size and domain.
\subsubsection{Training with Back-Translation}
Since back-translation is one of the most reliable ways to boost translation quality \cite{sennrich-haddow-birch:2016:P16-11}, we evaluated its impact on our proposed method.
We experimented with the GCP tasks, since 1.55M sentences in the Japanese and English monolingual corpora for the same domain \cite{W18-2707} are available as the source to generate back-translations.
First, we generated pseudo-parallel data by back-translating the monolingual sentences using the 1-layer models for the opposite translation direction.\footnote{For example, we used the 1-layer \unidir{Ja}{En} model in order to translate the Japanese monolingual sentences for training new \unidir{En}{Ja} models. Even though our original intention to use the 1-layer model is its fast decoding, as the 1-layer RS-NMT model is identical to the 1-layer vanilla NMT model, this eventually makes a fair comparison: all the models for the same translation direction are trained on exactly the same pseudo-parallel data.} We then trained, from scratch, the 6-layer vanilla NMT and up to 6-layer RS-NMT models on the mixture of the pseudo-parallel and the original parallel data. To compensate for the additional data, we trained both the \unidir{Ja}{En} and the \unidir{En}{Ja} models for 200k iterations (cf. \Tab{datasets}).
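The overall procedure can be summarized by the following sketch, where \texttt{reverse\_model.translate} is a hypothetical interface standing for decoding with the 1-layer model of the opposite translation direction.
\begin{verbatim}
def backtranslation_corpus(mono_tgt, reverse_model, parallel):
    # translate target-language monolingual sentences into
    # the source language with the reverse-direction model
    pseudo = [(reverse_model.translate(y), y)
              for y in mono_tgt]
    # train from scratch on the mixture of pseudo-parallel
    # and original parallel data
    return parallel + pseudo
\end{verbatim}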
\Tab{backtransresults} provides the results. Despite no increase in the number of parameters, the presence of back-translated data improved the translation quality for all the translation tasks. The 2-layer RS-NMT trained using additional back-translated data already outperformed the 6-layer vanilla NMT models trained only on the original parallel corpus. Note that we used 1-layer models to generate the back-translated data, and it is natural to expect that using deeper models will give better back-translated data, leading to further improvements in translation quality.
\begin{table}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Model}
& \multicolumn{2}{c|}{\unidir{Ja}{En}}
& \multicolumn{2}{c}{\unidir{En}{Ja}}\\\cline{2-5}
& No & Yes & No & Yes \\\hline
\multicolumn{5}{l}{\textbf{(a) Vanilla NMT}}\\\hline
6 & 24.67 & \textbf{25.91} & 26.22 & \textbf{28.75} \\\hline
\multicolumn{5}{l}{\textbf{(b) RS-NMT}}\\\hline
1 & 21.95 & \textbf{23.90} & 23.89 & \textbf{25.47} \\
2 & 23.24 & \textbf{24.79} & 24.47 & \textbf{26.56} \\
3 & 23.42 & \textbf{24.79} & 25.02 & \textbf{26.66} \\
4 & 24.33 & \textbf{25.17} & 25.28 & \textbf{27.31} \\
5 & 23.95 & \textbf{24.92} & 25.38 & \textbf{27.08} \\
6 & 24.36 & \textbf{25.82} & 25.84 & \textbf{27.55} \\
\hline
\end{tabular}
\caption{BLEU scores obtained using back-translated data for the GCP tasks. The ``Yes'' and ``No'' columns indicate the involvement of back-translated data. The highest BLEU score in each configuration is in bold.}
\label{table:backtransresults}
\end{table}
\subsubsection{Limits of Recurrent Stacking}
\label{section:limits}
In \Sec{main_results}, we observed that non-linear transformations account for the bulk of the improvement when testing with up to 6-layer models. However, similarly to vanilla models, we expect that increasing the number of recurrences does not always bring improvement, i.e., there must be a limit.
To determine if there is a depth-benefit trade-off in RS-NMT modeling, we trained a 24-layer RS-NMT model for the GCP \unidir{En}{Ja} task. We obtained a BLEU score of 26.04, a non-significant 0.20 point improvement over the 6-layer RS-NMT model, which has a BLEU score of 25.84. One possible reason for this is that the gradients vanish during back-propagation due to the extreme depth. We leave the application of sophisticated training techniques for future work. Note that a very deep RS-NMT model with high translation quality is more valuable than its vanilla counterpart, because the size of an RS-NMT model always remains the same whereas the size of the vanilla NMT model grows with the number of layers. Work on identifying the equilibrium of RS models \cite{DBLP:conf/nips/BaiKK19} has shown that it is possible to predict the outputs of extremely deep RS layers. This means that it should be possible to simulate extremely deep RS-NMT models and their improved performance without being slowed down by their computational complexity.
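For intuition, such equilibrium models seek a fixed point $z^{*}$ of the shared layer rather than unrolling it a fixed number of times. The following naive sketch illustrates the idea with plain fixed-point iteration; \citet{DBLP:conf/nips/BaiKK19} instead use quasi-Newton root solvers, and \texttt{layer(z, x)} is a hypothetical input-injected RS layer.
\begin{verbatim}
def rs_equilibrium(layer, x, z, max_iter=50, tol=1e-4):
    # iterate z <- layer(z, x) until (approximate)
    # convergence to z* = layer(z*, x)
    for _ in range(max_iter):
        z_next = layer(z, x)
        if (z_next - z).abs().max().item() < tol:
            return z_next
        z = z_next
    return z
\end{verbatim}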
\section{Training Recurrent Stacking Models through Transfer Learning}
\label{section:transfer_learning}
Our empirical experiments presented in \Sec{experiments} revealed that the RS models, despite having the same computational complexity\footnote{Both vanilla and RS models perform the same number of matrix multiplications.} as vanilla models, tend to suffer from reduced performance due to the significant reduction in the number of parameters.
To obtain compact models with a simple architecture but with comparable performance to vanilla models, this section examines ways to train RS models through transfer learning from a pre-trained vanilla model instead of merely relying on the parallel data as in \Sec{experiments}.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.32\textwidth}\centering
\includegraphics*[scale=.45]{fig-preparation.pdf}
\caption{Preparation}
\label{figure:preparation}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}\centering
\includegraphics*[scale=.45]{fig-transfer.pdf}
\caption{Layer transfer}
\label{figure:transfer}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}\centering
\includegraphics*[scale=.45]{fig-distillation.pdf}
\caption{Sequence distillation}
\label{figure:distillation}
\end{subfigure}
\caption{Overview of techniques used for bridging the gap between RS and vanilla models. (a)~Training a ``vanilla model'' and using it to generate ``train.out (distilled)'' for the inputs of the training data. (b)~Training an RS model using the pre-trained parameters of the ``vanilla model'' for initialization. (c)~Training an RS model from scratch on the pair of the inputs of the training data and decoded outputs, i.e., ``train.out (distilled).''}
\label{figure:overview-methodologies}
\end{figure*}
\subsection{Transfer Learning Methods}
Transfer learning for neural network models involves utilizing strong pre-trained teacher model(s) to transfer their capabilities to a student model that is either small or has less training data. The underlying principle is to leverage the priors from the pre-trained model(s). In this paper, we explore two ways of transferring knowledge: a simple layer transfer followed by fine-tuning and sequence-level knowledge distillation. The former is an explicit way of transferring knowledge, whereas the latter is an implicit one. We also explore the effects of combining both methods.
\Fig{overview-methodologies} illustrates an overview of the procedure.
\subsubsection{Layer transfer and fine-tuning}
\citet{DBLP:conf/emnlp/ZophYMK16:original} have proposed fine-tuning as a way to improve NMT in a low-resource scenario, such as Hausa-to-English translation. First, a teacher model is trained on a high-resource language pair like French-to-English. The parameters of this model are then used to initialize those of a student Hausa-to-English model, and the training is resumed on the Hausa--English parallel corpus.
In our case, as depicted in \Fig{transfer}, the teacher and student models are vanilla and RS models, respectively, but the parallel corpus is the same. Whereas the embeddings and softmax layers of the RS model can be initialized with those of the vanilla model, the RS model has a single encoder and decoder layer unlike the vanilla model, which has six. It is unclear a priori which particular layer in the vanilla model is most useful, and we can exploit only one layer each from the encoder and decoder of the vanilla model to initialize the corresponding encoder and decoder layers of the RS model. As such, we explore all possible models with different choices of layers in the encoder and decoder of the vanilla model. After initialization, we continue training on the parallel corpus until convergence.
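A minimal sketch of this initialization is given below; the parameter names are hypothetical and merely indicate which tensors are copied where.
\begin{verbatim}
def init_rs_from_vanilla(rs_model, vanilla_state, l):
    # copy embeddings/softmax plus the l-th encoder and
    # decoder layers of a pre-trained vanilla model
    rs_state = rs_model.state_dict()
    for name in rs_state:
        if "embedding" in name or "softmax" in name:
            rs_state[name] = vanilla_state[name]
        elif name.startswith("encoder.layer."):
            tail = name[len("encoder.layer."):]
            rs_state[name] = vanilla_state[
                "encoder.layers.%d.%s" % (l, tail)]
        elif name.startswith("decoder.layer."):
            tail = name[len("decoder.layer."):]
            rs_state[name] = vanilla_state[
                "decoder.layers.%d.%s" % (l, tail)]
    rs_model.load_state_dict(rs_state)
\end{verbatim}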
\subsubsection{Knowledge Distillation}
Knowledge distillation \citep{DBLP:journals/corr/HintonVD15} uses the probability distribution generated by a teacher model
as a label for training a student model.
The rationale is that a probability distribution as a smoothed label enables a student model to learn as well as a teacher model that has already been trained using one-hot labels. However, this gives minimal benefits for sequences, because sequence models traditionally use teacher-forcing\footnote{For clarity, teacher-forcing is an input-feeding technique for training or decoding neural network models, whereas teacher models are models whose behavior is to be learned by student models. As such, teacher-forcing can be used to train models regardless of whether they are teacher or student.} during training \cite{journals/corr/BengioVJS15}. Teacher-forcing implies the use of the gold label instead of the predicted label as an input to the model in order to predict the next label. As such, using teacher-forcing prevents the student model from learning decoding-time behavior of the teacher model and the result is distillation being done at the word level. This also involves loading two models into memory and several implementation changes.
The problems caused by teacher-forcing can be mitigated by greedy or beam search decoding of the teacher model while training the student model. However, this can slow down training substantially. Note that once the teacher model has been trained, its decoding behavior will remain the same, and hence it is faster to generate distilled sequences before training the student model.
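In other words, the pseudo-training data is generated once, offline, as in the following sketch, where \texttt{teacher.translate} is a hypothetical decoding interface; the student is then trained from scratch on the resulting pairs.
\begin{verbatim}
def distillation_data(teacher, train_src):
    # decode each training input once with the pre-trained
    # teacher; the (source, teacher output) pairs form the
    # student's training data
    return [(x, teacher.translate(x, beam_size=4))
            for x in train_src]
\end{verbatim}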
In our context, we regard vanilla and RS models as teacher and student models, respectively, as depicted in Figures~\ref{figure:preparation} and \ref{figure:distillation}.
Note that sequence-level distillation is a forward translation technique, which has already been used for self-learning of machine translation models \citep{ueffing:07}.
Combining word- and sequence-level distillation methods \cite{kim-rush-2016-sequence} has been shown to improve the performance of smaller models, even though the bulk of the improvement comes from sequence-level distillation. We leave the combination for our future work.
\subsection{Experiments}
\label{section:transfer_results}
We conducted experiments to confirm the efficacy of the two transfer learning methods. We first compare the impact of these methods on an experiment for all translation tasks (\Sec{impact}). We then determine whether they are complementary, taking an example task (\Sec{complementarity}).
This is followed by focused analyses of our approaches on different sets of tasks, as applicable, where we
\begin{itemize}
\item determine when and where sequence distillation may or may not be useful (\Sec{limits-sd}),
\item discuss the impact of sequence distillation on decoding efficiency (\Sec{speedup}), and
\item perform a cost-benefit analysis of translation quality, model size, and decoding speed to determine when and where RS models should be preferred over vanilla models (\Sec{costbenefitanalysis}).
\end{itemize}
\subsubsection{Impact of Transfer Learning on Translation Quality}
\label{section:impact}
To confirm that the transfer learning methods can help us train compact RS models with performance comparable to vanilla models, we trained and evaluated the following three new types\footnote{Since our aim is to obtain a compact model with the RS approach, we do not evaluate the configurations where RS and vanilla models are regarded as teacher and student models, respectively.} of NMT models for all translation tasks with the same settings as those in \Sec{experiments}.
\begin{description}
\item[(c)~RS-NMT fine-tuned:] 6-layer\footnote{Note that we avoided training models with fewer layers (recurrences) for this specific scenario, as the number of results becomes too large to present.} RS-NMT models where the encoder and decoder layers were initialized with the $l$-th ($l\in\{1,2,6\}$)\footnote{Note that we do not assume that the same depth of the encoder and decoder layers in the vanilla model must be used to initialize those in the RS model.} encoder and decoder layers of the pre-trained 6-layer vanilla NMT model, respectively.
\item[(d)~RS-NMT distilled:] $k$-layer ($k\in\{1,2,6\}$) RS-NMT models trained on the pseudo-parallel data generated by the pre-trained 6-layer vanilla NMT model.
\item[(e)~Vanilla-NMT distilled:] $k$-layer ($k\in\{1,2,6\}$) vanilla NMT models trained on the pseudo-parallel data generated by the pre-trained 6-layer vanilla NMT model.
\end{description}
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c|}{GCP} & \multicolumn{2}{c|}{KFTT} & \multicolumn{2}{c}{ASPEC} \\ \cline{2-7}
& \unidir{Ja}{En} & \unidir{En}{Ja} & \unidir{Ja}{En} & \unidir{En}{Ja} & \unidir{Ja}{En} & \unidir{En}{Ja} \\ \hline
\multicolumn{7}{l}{\textbf{(a) Vanilla NMT}}\\\hline
1 & 21.95 & 23.89 & 21.64 & 25.00 & 23.28 & 32.19 \\
2 & 24.23 & 25.62 & 24.14 & 30.05 & 28.06 & 38.91 \\
6 & 24.67 & 26.22 & 27.19{$^{\dagger}$} & \textbf{32.72}{{$^{\dagger}$}} & 28.77{{$^{\dagger}$}} & \textbf{41.32}{{$^{\dagger}$}} \\ \hline
\multicolumn{7}{l}{\textbf{(b) RS-NMT scratch}}\\\hline
1 & 21.95 & 23.89 & 21.64 & 25.00 & 23.28 & 32.19 \\
2 & 23.24 & 24.47 & 24.50 & 28.53 & 27.84 & 38.54 \\
6 & 24.36 & 25.84 & 26.51 & 30.83 & 27.20 & 40.04 \\ \hline
\multicolumn{7}{l}{\textbf{(c) RS-NMT fine-tuned (only 6-layer models)}}\\\hline
1 & 23.86 & 25.33 & 25.89 & 30.93 & 27.44 & 39.16 \\
2 & 24.10 & 25.70 & 26.63 & 31.68{{$^{\dagger}$}} & 27.52 & 40.07 \\
6 & 22.95 & 24.63 & 26.09 & 30.58 & 28.69{{$^{\dagger}$}} & 39.75 \\ \hline
\multicolumn{7}{l}{\textbf{(d) RS-NMT distilled}}\\\hline
1 & 23.25 & 24.20 & 23.23 & 26.47 & 27.81 & 35.71 \\
2 & 24.52 & 25.78 & 25.39 & 30.51 & 29.62{{$^{\dagger}$}} & 38.86 \\
6 & 25.20{{$^{\dagger}$}} & 26.94{{$^{\dagger}$}} & 27.11{{$^{\dagger}$}} & 31.52{{$^{\dagger}$}} & 29.18{{$^{\dagger}$}} & 40.43 \\ \hline
\multicolumn{7}{l}{\textbf{(e) Vanilla NMT distilled}}\\\hline
1 & 23.25 & 24.20 & 23.23 & 26.47 & 27.81 & 35.71 \\
2 & 24.91 & 26.38{{$^{\dagger}$}} & 26.61 & 30.26 & 29.23{$^{\dagger}$} & 40.11 \\
6 & \textbf{25.75}{{$^{\dagger}$}} & \textbf{27.44}{{$^{\dagger}$}} & \textbf{27.53}{{$^{\dagger}$}} & 32.58{{$^{\dagger}$}} & \textbf{29.68}{{$^{\dagger}$}} & 40.92{{$^{\dagger}$}} \\ \hline
\end{tabular}
\caption{BLEU scores obtained in our experiments. The definitions of the numbers in the leftmost column are as follows: (a) and (e)~the number of distinct encoder and decoder layers, (b) and (d)~the number of recurrences, and (c)~the depth of the encoder and decoder layers of the pre-trained 6-layer vanilla NMT model used for initializing the RS-NMT model. Scores in bold are the highest BLEU scores for each translation direction. Scores marked with ``{$^{\dagger}$}'' are those that are statistically significantly (bootstrap re-sampling with $p<0.05$) better than all the RS-NMT models trained from scratch, i.e., (b).}
\label{table:all-results}
\end{table*}
\Tab{all-results} shows the results of these models along with some excerpts for models (a) and (b) from \Tab{direct-results}. Unless mentioned otherwise, we compare models that use 6-layer vanilla or RS layers.
It is clear that transfer learning helps improve the performance of an RS-NMT model compared to when it is trained from scratch, i.e., models (b). Models (d) trained via sequence distillation gave better results than models (c) trained via fine-tuning for all the tasks, except for the KFTT \unidir{En}{Ja} task. Furthermore, models (d) outperformed the vanilla NMT models for the two GCP tasks and the ASPEC \unidir{Ja}{En} task.
We also trained vanilla NMT models on the distillation data, i.e., models (e), for comparison, and noticed that they sometimes improve over vanilla NMT models trained on the original data, i.e., models (a). They are slightly to significantly better (approximately 1.0 BLEU in some tasks) than their corresponding distilled RS-NMT models, i.e., models (d), owing to the much larger number of parameters (\Tab{model-parameters}). We will discuss this in further detail in \Sec{costbenefitanalysis}.
\subsubsection{Complementarity of Two Methods}
\label{section:complementarity}
To determine whether the two transfer learning methods are complementary, we trained and evaluated another type of RS-NMT model.
\begin{description}
\item[(f)~RS-NMT fine-tuned+distilled:] $k$-layer ($k\in\{1,2,3,4,5,6\}$) RS-NMT models trained on the pseudo-parallel data generated by the pre-trained 6-layer vanilla NMT model, with the encoder and decoder RS layers initialized with the $l$-th ($l\in\{1,2,3,4,5,6\}$) encoder and decoder layers of the same vanilla NMT model, respectively.
\end{description}
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
$l{\backslash}k$ & 1 &2 &3 &4 &5 &6 &(c) \\ \hline
1 & 24.60 & 25.68 & 25.62 & 26.75 & 26.15 & 26.20 & 25.33 \\
2 & 24.70 & 26.33 & 25.64 & 26.57 & 26.53 & 26.23 & 25.70 \\
3 & 24.70 & 26.02 & 26.48 & 26.33 & 26.68 & 26.67 & 25.66 \\
4 & 24.51 & 26.06 & 26.21 & 26.38 & 26.45 & 26.49 & 26.22 \\
5 & 24.64 & 26.20 & 25.98 & 25.93 & 25.89 & 26.24 & 24.97 \\
6 & 24.57 & 26.03 & 26.28 & 25.86 & 26.51 & 26.16 & 24.63 \\
\hline
(d) & 24.20 & 25.78 & 25.97 & 26.27 & 26.00 & 26.94 \\
\cline{1-7}
\end{tabular}
\caption{BLEU scores for the GCP \unidir{En}{Ja} task achieved by combining the two transfer learning methods, i.e., fine-tuning and sequence distillation, for training RS-NMT models. $k$ denotes the number of recurrences of the RS-NMT model and $l$ the depth of the encoder and decoder layers of the pre-trained 6-layer vanilla NMT model used for initializing the RS-NMT model. The rightmost column and the last row respectively show, for reference, the results of model (c), i.e., 6-layer RS-NMT models trained with parameter initialization using one of the 6 layers but without sequence distillation, and model (d), i.e., 1- to 6-layer RS-NMT models trained via sequence distillation without any parameter initialization.}
\label{table:mixed-ft-distillation-results}
\end{table*}
\Tab{mixed-ft-distillation-results} gives the results for the GCP \unidir{En}{Ja} task, which we take as a representative example. When the number of recurrences ($k$) is small, initializing the RS layer yields slight performance gains over the corresponding model (d) without initialization; however, 6-layer RS models no longer benefit from parameter initialization. Over the models (c) with the same number of recurrences ($k=6$), sequence distillation brings consistent improvements.
Consider the case of $k=2$ and $l=2$: a 2-layer RS-NMT model initialized with the 2nd encoder/decoder layers and fine-tuned on the data for sequence distillation.
This model achieves a BLEU of 26.33 which is better than its non-initialized counterpart, i.e., model (d), with a BLEU of 25.78. This shows that using the combination of sequence distillation and fine-tuning can enable one to train shallower RS models with performance that is significantly better than deeper RS models that do not use any distillation or transfer learning.
Even though we do not report the results for all translation tasks due to the sheer number of combinations, the results so far confirm that the combination of these two transfer learning methods is beneficial and that they complement each other.
\subsubsection{Limits of Sequence Distillation}
\label{section:limits-sd}
To investigate the reasons why sequence distillation is not always better than fine-tuning, we measured, using BLEU, the similarity between (i)~translations of the training data generated as the pseudo-target sentences used for sequence distillation and (ii)~reference translations in the original training data. \Tab{distillation} shows the BLEU scores of a random sample of 10,000 sentences from the training data that we translated for sequence distillation. In the ASPEC \unidir{En}{Ja} task, the BLEU score was almost 60. For reference, the BLEU scores for all other settings were significantly lower. The higher the BLEU score, the higher the similarity of the translated sentences with the references. One of the reasons why sequence distillation can enable smaller models to perform better, according to \citet{kim-rush-2016-sequence}, is that the translated sentences are supposed to be simplified and hence inaccurate versions of the references. As it is easier for smaller models to learn simpler data, the increased similarity between the translations and references prevents smaller models from performing better. Consequently, it seems logical that sequence distillation would not help the performance of RS-NMT models in this particular setting. Despite ASPEC being a high-resource setting, the vanilla NMT model seems to learn from the training data much better than for the other datasets. We suppose that stronger regularization during training can mitigate this, and will explore this and its impact on sequence distillation in the future.
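The measurement itself is straightforward; we used \textit{t2t\_bleu} in our experiments, but the following sketch with the sacrebleu library conveys the procedure.
\begin{verbatim}
import random
import sacrebleu

def distillation_similarity(pairs, n=10000, seed=0):
    # pairs: (reference, teacher translation) over the
    # training data; BLEU of a random sample of n pairs
    random.seed(seed)
    sample = random.sample(pairs, n)
    refs = [[r for r, _ in sample]]
    hyps = [h for _, h in sample]
    return sacrebleu.corpus_bleu(hyps, refs).score
\end{verbatim}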
\begin{table}[t]
\centering
\small
\begin{tabular}{c|c|c}
\hline
Dataset & \unidir{Ja}{En} & \unidir{En}{Ja}\\\hline
GCP & 45.54 & 51.70 \\
KFTT & 37.91 & 43.42 \\
ASPEC & 51.14 & 59.70 \\
\hline
\end{tabular}
\caption{BLEU scores for 10,000 random samples in the training data, as the similarity between original parallel data and the pseudo parallel data used for sequence distillation.}
\label{table:distillation}
\end{table}
Another observation is that, due to distillation, the performance no longer improves drastically when using two or more layers. This applies to both vanilla and RS NMT models. We also confirmed one of the observations of \citet{kim-rush-2016-sequence}: the distilled models can give better performance than their non-distilled counterparts. Distillation benefits both vanilla and RS NMT models, and distilled vanilla NMT models are the best, although distilled RS-NMT models are better than or as good as vanilla NMT models trained from scratch. In the end, the gap in translation quality between vanilla and RS NMT models is approximately 1.0 BLEU. However, in the big picture, we do not think that this is a serious matter. Given that RS-NMT models are more compact than vanilla NMT models, they are more attractive in our opinion.
\subsubsection{Faster Decoding Through Transfer Learning}
\label{section:speedup}
The RS-NMT models trained via transfer learning are typically around 1.0 to 2.0 BLEU points better than their counterparts trained without transfer learning, irrespective of the number of RS layers. This enables us to save decoding time through the following two approaches.
\paragraph{Use of greedy decoding:}
Note that all BLEU scores so far presented are for translations derived via beam search with a beam size of 4. \Tab{beam-vs-greedy} compares BLEU scores and decoding times of two sets of RS-NMT models, taking the GCP \unidir{En}{Ja} as an example task.\footnote{For the sake of presentation, we do not give the results for all tasks, but \Tab{all-results} suggests that the trends for the GCP \unidir{En}{Ja} task are applicable for most settings.} One is RS-NMT models trained on the original data, i.e., models (b), used for beam search decoding, and the other is RS-NMT models trained via sequence distillation, i.e., models (d), used for beam search decoding and greedy decoding. Whereas there is no significant difference between the first and the last configurations regarding BLEU scores, beam search takes twice as long as greedy search. This means that we can save a substantial amount of time by using greedy search, thanks to sequence distillation.
\paragraph{Use of shallower models:}
According to \Tab{beam-vs-greedy}, for the GCP \unidir{En}{Ja} task, beam search with the 6-layer RS-NMT model trained on the original data, i.e., model (b), takes 83.29s to generate translations with a BLEU of 25.84. Thanks to sequence distillation, a shallower 2-layer RS-NMT model (d) is now able to generate translations of comparable quality (BLEU of 25.78) in only 39.50s with the same beam width, which is less than half the time of the 6-layer model.
\begin{table*}[t]
\centering
\small
\begin{tabular}{l|cc|cc|cc}
\hline
\multirow{3}{*}{$k$} &\multicolumn{2}{c|}{Model (b)} &\multicolumn{2}{c|}{Model (d)} &\multicolumn{2}{c}{Model (d)}\\
&\multicolumn{2}{c|}{w/ beam search} &\multicolumn{2}{c|}{w/ beam search} &\multicolumn{2}{c}{w/ greedy search}\\\cline{2-7}
&BLEU &Time (s) &BLEU &Time (s) &BLEU &Time (s)\\
\hline
1 &23.89 &36.96 &24.20 &36.96 &23.66 &18.41\\
2 &24.47 &39.50 &25.78 &39.50 &24.52 &23.78\\
6 &25.84 &83.29 &26.94 &83.29 &25.57 &46.22\\
\hline
\end{tabular}
\caption{Comparison of (b) RS-NMT models directly trained and (d) RS-NMT models trained via sequence distillation used with different search methods for the GCP \unidir{En}{Ja} task. Decoding time includes model loading time and writing time. $k$ denotes the number of recurrences.}
\label{table:beam-vs-greedy}
\end{table*}
\subsubsection{Vanilla vs. RS: Cost-Benefit Analysis}
\label{section:costbenefitanalysis}
We now present a cost-benefit analysis of vanilla and RS models in various model size settings. We already know that in the absence of transfer learning methods, such as sequence distillation or fine-tuning, vanilla models always outperform RS models. We thus focus on comparing these two types of models when using sequence distillation as a transfer learning method.\footnote{We chose sequence distillation and not fine-tuning, mainly because it is more straightforward to apply sequence distillation than fine-tuning, since the latter involves determining optimal vanilla model layers prior to fine-tuning.} For this analysis, we chose the ASPEC \unidir{Ja}{En} task, because it is one of the cases where the RS model performs on par\footnote{Note that in the results so far, the relative gap between RS and vanilla models is significantly smaller in high-resource settings, such as ASPEC.} with the vanilla model as in \Tab{all-results}. Our objective is to identify precise conditions under which RS should be preferred over vanilla models.
We trained (whenever possible) vanilla and RS models using 1 to 6 encoder-decoder layers with hidden layer dimensionalities of 64, 128, 256, 512, 1,024, and 2,048. In all these configurations, the filter size was consistently set to four times the hidden layer size, as in the Transformer Base model.
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Hidden-Filter & 64-256 & 128-512 & 256-1,024 & 512-2,048 & 1,024-4,096 & 2,048-8,192 \\
\hline\hline
Layer & \multicolumn{6}{c}{Model Sizes}\\\hline
1, RS
& 6.4M & 13.0M & 27.0M & 57.7M & 130.1M & 319.0M \\\hline
2 & 6.5M & 13.5M & 28.8M & 65.0M & 159.5M & n/a \\\hline
3 & 6.6M & 14.0M & 30.7M & 72.4M & 188.9M & n/a \\\hline
4 & 6.7M & 14.4M & 32.5M & 79.7M & 218.3M & n/a \\\hline
5 & 6.8M & 14.9M & 34.4M & 87.1M & 247.7M & n/a \\\hline
6 & 7.0M & 15.3M & 36.2M & 94.4M & n/a & n/a \\\hline
\hline
Layer & \multicolumn{6}{c}{BLEU Scores}\\\hline
1
& 14.61/16.04 & 20.61/21.95 & 24.81/25.94 & 26.99/27.81 & 27.68/28.31 & 27.74/28.17 \\\hline
\multirow{2}{*}{2}
& 16.50/18.53 & 24.70/25.52 & 27.86/28.46 & \bf 29.11/29.62 & \bf 29.39/29.74 & \multirow{2}{*}{n/a} \\
& 20.54/21.23 & 26.49/27.20 & 28.58/28.92 & 28.88/29.23 & 29.12/29.47 \\\hline
\multirow{2}{*}{3}
& 17.80/18.67 & 25.80/26.49 & 28.18/28.83 & \bf 29.14/29.57 & 29.24/29.54 & \multirow{2}{*}{n/a} \\
& 22.00/23.12 & 27.34/27.91 & 29.31/29.36 & 28.95/29.45 & 29.56/29.88 \\\hline
\multirow{2}{*}{4}
& 19.86/21.26 & 26.54/27.21 & 28.35/28.74 & 28.97/29.39 & 28.90/29.28 & \multirow{2}{*}{n/a} \\
& 24.27/25.09 & 27.73/28.02 & 29.21/29.48 & 29.22/29.70 & 29.25/29.65 \\\hline
\multirow{2}{*}{5}
& 20.68/21.98 & 26.74/27.56 & 28.53/29.00 & 28.68/29.28 & \bf 29.56/30.14 & \multirow{2}{*}{n/a} \\
& 24.22/25.29 & 28.26/28.65 & 29.33/29.62 & 29.17/29.68 & 29.50/29.97 \\\hline
\multirow{2}{*}{6}
& 21.20/22.69 & 27.06/27.52 & 28.68/28.96 & 28.70/29.18 & 29.05/29.69 & \multirow{2}{*}{n/a} \\
& 25.47/26.13 & 27.91/28.55 & 28.98/29.40 & 29.27/29.68 & n/a \\\hline
\hline
Layer
& \multicolumn{6}{c}{Decoding Times}\\\hline
1
& 37.4/38.9 & 32.5/38.5 & 31.2/39.8 & 32.4/47.2 & 37.1/74.7 & 59.4/162.8 \\\hline
\multirow{2}{*}{2}
& 41.1/55.1 & {\bf 36.2}/48.4 & {\bf 36.9}/53.0 & 41.3/56.8 & \bf 46.9/94.6 & \multirow{2}{*}{n/a} \\
& 38.0/48.9 & 36.7/46.8 & 38.1/51.7 & 41.2/55.1 & 51.0/111.3 \\\hline
\multirow{2}{*}{3}
& 48.4/68.1 & 45.5/60.1 & 48.4/67.1 & {\bf 50.0}/74.3 & \bf 59.5/121.9 & \multirow{2}{*}{n/a} \\
& 45.5/61.4 & 43.6/59.4 & 46.4/63.7 & 51.1/73.9 & 62.7/141.0 \\\hline
\multirow{2}{*}{4}
& 58.3/78.7 & {\bf 54.2}/76.0 & 54.4/{\bf 78.0} & 63.8/{\bf 91.9} & 78.6/183.5 & \multirow{2}{*}{n/a} \\
& 54.2/73.7 & 55.6/73.2 & 53.0/80.2 & 62.9/92.1 & 77.9/175.6 \\\hline
\multirow{2}{*}{5}
& 65.3/90.5 & {\bf 62.6}/87.1 & 65.8/96.7 & 76.3/{\bf 117.7} & \bf 77.9/162.4 & \multirow{2}{*}{n/a} \\
& 65.3/90.1 & 64.0/86.2 & 63.1/96.0 & 75.7/118.1 & 90.4/210.5 \\\hline
\multirow{2}{*}{6}
& {\bf 75.2}/105.2 & 76.7/110.8 & \bf 72.0/101.4 & {\bf 77.0}/117.4 & 88.4/186.6 & \multirow{2}{*}{n/a} \\
& 79.5/99.8 & 74.8/110.7 & 80.8/111.6 & 81.7/115.7 & n/a \\\hline
\end{tabular}
\caption{Model sizes (top block), BLEU scores (middle block), and decoding times (bottom block) for the ASPEC \unidir{Ja}{En} task. Results are for 1- to 6-layer RS and vanilla models with hidden-filter sizes of 64-256, 128-512, 256-1,024, 512-2,048 (default), 1,024-4,096, and 2,048-8,192. In the middle and bottom blocks, each cell contains 2 pairs of 2 scores, except for the identical 1-layer RS and vanilla models. The first pair of scores is for the RS model and the second one is for the vanilla model. Each pair of scores separated by a ``/'' indicates greedy-search and beam-search BLEU scores or decoding speeds. Entries marked ``n/a'' are those for which we were unable to train and hence decode models due to lack of computational capacity (GPU memory). Numbers marked bold indicate that they are superior to the corresponding vanilla models shown in the second line.}
\label{table:costbenefitaspec}
\end{table*}
The results are shown in \Tab{costbenefitaspec}. Before delving into details, we would like to emphasize that models with larger hidden-filter sizes (1,024-4,096\footnote{Note that these hidden-filter sizes are the same as used in the Transformer Big setting. Big models are usually better than Base models in high-resource settings \cite{NIPS2017_7181}, and RS models, which tend to be faster to decode than vanilla models in these situations, are definitely worthwhile.} and above) tend to show better performance despite being used in shallow models. Furthermore, RS models barely differ in translation quality from vanilla models despite their reduced capacity.
With regard to translation quality, when using small hidden-filter sizes (64-256 and 128-512), vanilla models always outperform RS models, especially when shallow layers are used. However, for the hidden-filter size of 128-512, the deepest 6-layer vanilla and RS models are within approximately 1.0 BLEU of each other. Given that we lose up to approximately 1.0 BLEU point for a 20\% reduction in parameters, we consider that 6-layer RS models with 128-512 hidden-filter sizes can be the smallest viable NMT models that can be used in place of vanilla models. When using hidden-filter sizes of 256-1,024 and above, the gaps between vanilla and RS models reduce even further.
In some configurations marked bold in the table, RS models outperform corresponding vanilla models in terms of BLEU score.
In other cases, the BLEU score differences are not statistically significant. In these settings, RS models help reduce the number of parameters by 10\% to 50\%. We consider that this is reason enough to prefer RS models over vanilla models.
When focusing on efficiency in terms of decoding speed, the number of encoder-decoder layers has a larger impact than the hidden-filter sizes in most cases. In fact, the decoding speeds for hidden-filter sizes up to 512-2,048 are mostly similar when the number of encoder-decoder layers is fixed. However, when using larger hidden-filter sizes, the decoding times increase. Furthermore, when using smaller hidden-filter sizes, the decoding speeds of vanilla and RS models do not differ by much, but when using hidden-filter sizes larger than 512-2,048, RS models tend to be faster. We speculate that this is because larger hidden-filter sizes end up utilizing the caches and CUDA cores of the GPUs to their limit; RS models are thus more effective because the number of parameters needed for caching is smaller. This observation is in line with previous work \cite{Li2019}. Given that the RS models tend to give better performance on average when using larger hidden-filter sizes (1,024-4,096 and above) while being slightly faster to decode than vanilla models for this ASPEC \unidir{Ja}{En} task, we consider that RS models are certainly a viable alternative to vanilla models. In practice, we recommend an in-depth exploration like ours before choosing a model that satisfies all cost-benefit requirements.
While we were unable to train vanilla models with 2 or more layers and hidden-filter sizes of 2,048-8,192 due to our GPU infrastructure, our observations should be generalizable to these and even larger hidden-filter size settings. We performed this cost-benefit analysis for the GCP \unidir{En}{Ja} task as well. Our observations related to decoding speeds were mostly similar to those when using wide hidden sizes (1,024 or 2,048) for the ASPEC \unidir{Ja}{En} task. Similarly, the large gaps in translation quality for smaller hidden sizes (up to 128) were also observed. However, the vanilla model tended to give slightly better translations than RS models regardless of the model size setting. We would like the readers to note an important point: the GCP tasks are relatively low-resource compared to ASPEC, and using wider layers involves the possibility of over-fitting, which can make the scores rather unreliable unless additional analysis involving regularization is done. Nevertheless, we have shown that there exist settings (corpora sizes and model sizes) where a vanilla model can certainly be replaced by an RS model.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics*[width=\textwidth]{kftt-direct.pdf}
\caption*{(b)~RS-NMT scratch.}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics*[width=\textwidth]{kftt-distilled.pdf}
\caption*{(d)~RS-NMT distilled.}
\end{subfigure}
\caption{BLEU scores of the RS-NMT models for the KFTT \unidir{En}{Ja} task with a variable number of recurrences of decoder layers during decoding. The number of encoder recurrences is the same as during training.}
\label{figure:recurrent-decoding}
\end{figure*}
\section{Analyzing Recurrently Stacked NMT Models}
\label{section:analysis}
The above results and analyses show that RS models are best trained with transfer learning methods, leading to compact and fast models that should be extremely useful in real-time, resource-constrained, and latency-sensitive scenarios. We now focus on understanding the internal workings of RS models by analyzing their decoding behavior and visualizing parts of the models' inner components.
In particular, we answer the following two questions.
\begin{description}
\item[(a)] Do RS-NMT models memorize the number of recurrences?
\item[(b)] Do RS-NMT models behave differently from vanilla NMT models internally?
\end{description}
\subsection{Number of Recurrences Memorized by RS-NMT}
Any deep stacked model trained with $N$ layers can theoretically be used for decoding with fewer than $N$ layers. Non-RS models cannot be used with more than $N$ layers, because the parameters for deeper layers do not exist. In contrast, RS models have the same parameters regardless of the number of layers and recurrences and thus can be used for decoding more flexibly with fewer or more layers. As we have seen in \Tab{direct-results}, the performance of RS-NMT models seems to stabilize with the increasing number of RS layers. As such, we expected that the representations obtained by deep RS should be robust and thus enable the use of fewer layers during decoding than those used during training. To confirm whether this holds or not, we trained several RS-NMT models and used them for decoding with different numbers of recurrences, taking the KFTT \unidir{En}{Ja} task as an example.
When performing decoding, we fixed\footnote{In our previous work \cite{DBLP:conf/aaai/DabreF19}, we have shown that using the same number of recurrences between training and decoding is important for the encoder. Reducing the number of encoder layers does not have a large impact, because the encoding process of the Transformer architecture is highly parallelizable. In contrast, reduction of the number of decoder layers is valuable, because its auto-regressive process is time-consuming.} the number of recurrences for the encoder to what was used during training, and varied only the number of recurrences for the decoder from 1 to 8. For example, a trained 6-layer RS-NMT model was tested with 6 times of RS for the encoder (same as training) but with 1 to 8 times of RS for the decoder. We evaluated two types of RS-NMT models: ones trained from scratch and the others trained via sequence distillation. We did not do this for the fine-tuned models for two reasons. First, there are a number of fine-tuned models, which would be difficult to analyze. Second, fine-tuned models are trained using the same data as those trained from scratch, and hence there is no reason to expect any fundamental change in their behavior regarding recurrence memorization.
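Because the decoder layer is shared, varying the number of recurrences at decoding time requires no change to the parameters, as the following sketch shows; \texttt{dec\_layer(y, enc\_out)} is a hypothetical signature for the shared decoder layer.
\begin{verbatim}
def rs_decode_states(dec_layer, y, enc_out, k):
    # apply the single shared decoder layer k times;
    # k may differ from the depth used during training
    for _ in range(k):
        y = dec_layer(y, enc_out)
    return y
\end{verbatim}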
\newcommand{{\textit{Ippan'ni wa Dogen Zenji to yoba reru .}}}{{\textit{Ippan'ni wa Dogen Zenji to yoba reru .}}}
\newcommand{{He is generally called Dogen Zenji .}}{{He is generally called Dogen Zenji .}}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics*[width=1.0\hsize]{vanilla_encdec_cross_1st.png}
\caption{Vanilla NMT.}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics*[width=1.0\hsize]{rsnmt_encdec_cross_1st.png}
\caption{RS-NMT scratch.}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics*[width=1.0\hsize]{vann_init_encdec_cross_1st.png}
\caption{The best RS-NMT using fine-tuning (initialized by the 3rd layer of the vanilla NMT model).}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics*[width=1.0\hsize]{distill_encdec_cross_1st.png}
\caption{RS-NMT distilled.}
\end{subfigure}
\caption{Visualization of the cross-attention of the four different types of NMT models trained for the KFTT \unidir{Ja}{En} task. This visualization shows only a single sub-word out of the entire sequence in the target language for an input sentence, ``{\textit{Ippan'ni wa Dogen Zenji to yoba reru .}}'' ({He is generally called Dogen Zenji .}). The x-axis indicates the input sentence, i.e., sequence of sub-words, for each of the 8 attention heads (marked by different prefix of each sub-word and different colors), whereas the y-axis indicates the 6 layers with 0-to-5 in suffix representing 1st-to-6th layer from top to the bottom, respectively. Each cell shows the cross-attention probability given by a head at a layer to a source sub-word, when generating the target sub-word, where darker color means stronger attention with higher probability.}
\label{figure:attvis}
\end{figure*}
\Fig{recurrent-decoding} summarizes the BLEU scores achieved by the above two types of models.
Irrespective of the use of sequence distillation, regarding each model shown in a column of the figures, the number of recurrences during training is not precisely memorized, and the deeper models have a higher level of flexibility. For example, the 6-layer models used with 4--7 times of recurrences achieved comparable BLEU scores. In contrast, there is no guarantee about the performance with a smaller number of recurrences, as evidenced by the sharp drop of BLEU score when used with 1--3 decoder layers. BLEU scores also dropped when we used the models with more recurrences than what had been used for training. These observations indicate that the computation of the most useful and hence the most reliable features takes place around the deepest layer used during training.
As such, we conclude that RS-NMT, in its current form, is unable to generalize over the variation in the number of recurrences between training and decoding. However, we can say that the RS-NMT models can weakly memorize the number of recurrences.
Note that sequence distillation yielded better RS-NMT models than those trained from scratch. To be even more precise, the 4-layer RS-NMT model obtained via sequence distillation was as good as or better than the best RS-NMT model (6-layer) trained from scratch. As such, we can avoid training deeper models if we use sequence distillation, and this can also save decoding time.
\subsection{Visualizing Recurrently Stacked Models}
To acquire a deeper understanding of what happens in RS-NMT models, we visualize the attentions across all attention heads for each stacked layer. In particular, we consider all the types of models trained for the KFTT \unidir{Ja}{En} task, except the distilled vanilla NMT one, and visualize their encoder's self-attentions, the decoder's self-attentions, and the decoder's cross-attentions. For the sake of simplicity, we only show the visualizations for the cross-attentions of a target sub-word with the entire source sentence. See \App{visualization} for the sentence-level cross-attention visualizations for the same example.
\Fig{attvis} displays the heat-maps for the 8-head encoder-decoder cross-attention across all 6 layers
when generating the first translated sub-word, ``The,'' for an input Japanese sentence consisting of eight tokens (ten sub-words), ``{\textit{Ippan'ni wa Dogen Zenji to yoba reru .}}'' ({He is generally called Dogen Zenji .}).
Note that the RS-NMT models have eight attention heads that are shared across the 6 recurrently stacked layers, whereas the vanilla model has 48 (8$\times$6) distinct attention heads.
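To make the parameter-sharing scheme concrete, the following is a minimal PyTorch-style sketch of the two parameterizations; the module names and dimensions are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
import torch.nn as nn

class VanillaEncoder(nn.Module):
    """Six independently parameterized layers: roughly 6x the parameters."""
    def __init__(self, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads)
             for _ in range(n_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

class RecurrentlyStackedEncoder(nn.Module):
    """One shared layer applied n_recurrences times: the parameter count
    of a 1-layer model with the depth of an n-layer one."""
    def __init__(self, d_model=512, n_heads=8, n_recurrences=6):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(d_model, n_heads)
        self.n_recurrences = n_recurrences

    def forward(self, x):
        for _ in range(self.n_recurrences):
            x = self.shared_layer(x)
        return x
\end{verbatim}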
As shown in the figure, the attention mechanism behaves differently for the vanilla NMT and RS-NMT models, whereas there is no noticeable difference among the three RS-NMT models themselves. There is no consistency in the sharpness of attention as we go deeper in the vanilla NMT model. In contrast, as we go deeper in the RS-NMT models, the attention tends to stabilize itself. To be more precise, it seems to stabilize around the 3rd or 4th layer and barely changes in deeper layers. This is a possible explanation as to why the BLEU scores do not vary greatly despite using a different number of recurrences during decoding than during training, as we have seen in \Fig{recurrent-decoding}.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.4\columnwidth}
\includegraphics*[width=.9\columnwidth]{vanilla_encdec_cross_entr.png}
\caption{Vanilla NMT.}
\end{subfigure}
\qquad
\begin{subfigure}[t]{0.4\columnwidth}
\includegraphics*[width=.9\columnwidth]{rsnmt_encdec_cross_entr.png}
\caption{RS-NMT scratch.}
\end{subfigure}
\\
\begin{subfigure}[t]{0.4\columnwidth}
\includegraphics*[width=.9\columnwidth]{vann_init_encdec_cross_entr.png}
\caption{The best RS-NMT using fine-tuning (initialized by the 3rd layer of the vanilla NMT model).}
\end{subfigure}
\qquad
\begin{subfigure}[t]{0.4\columnwidth}
\includegraphics*[width=.9\columnwidth]{distill_encdec_cross_entr.png}
\caption{RS-NMT distilled.}
\end{subfigure}
\caption{Visualization of attention entropies of the four different types of NMT models (same as in \Fig{attvis}) for the same input sentence, ``\mbox{{\textit{Ippan'ni wa Dogen Zenji to yoba reru .}}}'' ({He is generally called Dogen Zenji .}). The x-axis indicates the attention heads, whereas the y-axis indicates the depth of the layer (top: 1st layer, bottom: 6th layer). A darker color means a higher entropy value and a more uniform distribution of attention probability.}
\label{figure:entrvis}
\end{figure}
For a further understanding, in \Fig{entrvis}, we also produce heat-maps, one per model, each displaying the entropy of the attention distribution for every attention head in every layer.
Higher entropy (darker color) means that the attention head spreads its focus over more sub-words, whereas lower entropy (lighter color) indicates that the head attends to fewer sub-words.
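As a minimal sketch (assuming the cross-attention probabilities of one target sub-word have been extracted into a NumPy array), the per-head entropies shown in the heat-maps can be computed as follows; a head attending uniformly to $L$ sub-words attains the maximum entropy $\log L$, while a head attending to a single sub-word has entropy 0.
\begin{verbatim}
import numpy as np

def attention_entropy(attn):
    """attn: array of shape (n_layers, n_heads, src_len) holding the
    cross-attention probabilities of one target sub-word; each
    (layer, head) row sums to 1. Returns (n_layers, n_heads) entropies."""
    eps = 1e-12  # avoid log(0) for zero probabilities
    return -(attn * np.log(attn + eps)).sum(axis=-1)

# Sanity check: uniform attention over L sub-words has entropy log(L).
L = 10
uniform = np.full((1, 1, L), 1.0 / L)
assert np.allclose(attention_entropy(uniform), np.log(L))
\end{verbatim}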
In the vanilla NMT model, the entropies of the attention heads are often very close to zero as a result of intensive attention to a single sub-word, such as in the 3rd head of the 3rd layer (Head 2 of Layer 2 in \Fig{entrvis}). Otherwise, the entropy value can be very high, indicating that the head attends to almost all the sub-words nearly uniformly, as seen in the 1st head of the 2nd layer (Head 0 of Layer 1 in \Fig{entrvis}). We could not observe any layer-wise tendency, presumably owing to the independent nature of the attention heads.
In contrast, in the RS-NMT models, the attention entropy of more than half of the attention heads is non-zero, meaning that each of these heads attends to more than one sub-word; at the same time, these heads are more conservative than those of the vanilla model, focusing on fewer sub-words.
It is noteworthy that the attention entropy tends to gradually stabilize as we move towards deeper layers, which has also been seen in \Fig{attvis}.
In the RS-NMT models, both the attention values and the entropies seem to calibrate themselves. This means that the attention mechanism somehow learns to adjust itself and dynamically considers the contributions from more or fewer sub-words by giving them similar or dissimilar weights. More often than not, the entropies tend to grow, which means that deeper layers prefer to focus on more sub-words. This ability to self-calibrate possibly comes from the fact that the same parameters are used across all RS layers, meaning that all computed representations lie in the same vector space. In contrast, the parameters of the attention heads across layers in vanilla NMT models are initialized independently, and hence the computed representations lie in separate vector spaces. Consider the work on average attention networks \citep{DBLP:conf/acl/XiongZS18}, where the self-attention weights are statically set to $1/L$, $L$ being the length of the sequence against which the attention is computed when generating a target sub-word. Average attention has the highest possible entropy because each word receives the same amount of attention. As RS models tend to have higher entropies in deeper layers, we suppose that the attention mechanism calibrates itself to attend to more words as the depth increases. This also suggests that attending equally to more words is beneficial. As such, our observation that RS models attend to more words equally might explain why average attention networks, which attend to all words equally, work well despite forgoing an ordinary attention mechanism.
We can also relate this to the principle of maximum entropy\footnote{\url{https://en.wikipedia.org/wiki/Principle_of_maximum_entropy}} which states that the probability distribution that best represents the data is the one with the maximum entropy. Given that RS models tend to have higher attention entropy across all heads in the deeper layers, we hypothesize that the attention models are indirectly trained to satisfy a maximum entropy objective in order to better represent the context used during decoding. We will explore this hypothesis in the future.
Another probable explanation lies in the understanding of what recurrent neural networks (RNNs), such as long short-term memories (LSTMs) \citep{Hochreiter:1997:LSM:1246443.1246450}, do. At each time-step, an RNN refines the representation of a sequence as it processes each new sub-word with the same parameters. As such, we speculate that the deeper layers of an RS-NMT model are forced to learn more abstract representations by refining those in the shallower layers. The parameter sharing, and thus the reduction in representational power, most likely causes all artificial neurons to work in unison and uniformly distributes the labor of generating reliable representations. This would mean that the learned representation for each sub-word is rather generic, which in turn causes the attention mechanism to draw on more sub-words so that the decoder can compute reliable representations.
Since the attentions in RS-NMT and vanilla NMT models differ greatly, they might have different applications. For example, RS-NMT attentions might be better suited for phrase-to-phrase matching due to the wide attention spans they tend to exhibit. It would be worthwhile to visualize the word- and sentence-level representations to explore our hypotheses.
\section{Conclusion}
\label{section:conclusion}
In this paper, aiming to obtain a compact neural network model, we have proposed recurrent stacking (RS) of layers.
In the neural machine translation (NMT) tasks, we confirmed that the RS-NMT models trained directly on the given parallel data consistently achieve much higher BLEU scores than 1-layer vanilla models with an identical number of parameters, whereas they still underperform the 6-layer vanilla models. Nevertheless, despite having a number of parameters equivalent to that of the 1-layer vanilla models, their performance can approach or even surpass that of the 6-layer vanilla models when trained through transfer learning, such as knowledge distillation and/or layer transfer followed by fine-tuning. Furthermore, we showed that transfer learning approaches help reduce decoding time by reducing the need for beam search and/or deep layers. An important observation we made is that, in high-resource settings, shallow RS models using wide hidden layers (512 or 1,024) trained via transfer learning tend to outperform vanilla models (also trained via transfer learning) in terms of translation quality, model size and, in some cases, decoding speed.
Through varying the number of decoder layers of the RS-NMT models, we show that the number of recurrences used for training is weakly memorized by the trained model itself and that it is best to use the same number of recurrences during training and decoding, although deep RS models can be used with slightly fewer decoder layers. Our analysis of the attention mechanism also reveals that the parameter sharing in RS leads to a self-calibrating attention behavior: the attention values tend to converge as we go deeper. Furthermore, more heads are involved in attending to the source and target sub-words, as evidenced by higher entropies in deeper layers, indicating intensive use of the few parameters in the RS models.
The following methods can augment our proposed method and help in either improving the translation quality or compressing the model further.
\begin{description}
\item[Extremely deep and compact models:] Vanilla models that do not employ any parameter sharing between layers suffer from the risk of parameter explosion. RS models can approach the performance of multi-layer vanilla models while retaining a number of parameters equivalent to that of 1-layer vanilla models. In our experiments, however, RS-NMT models reached their performance limit at 6 layers, similarly to vanilla NMT models. This is presumably because of so-called gradient flow issues \cite{bapna-etal-2018-training}. Advances in training deeper models should help train extremely deep RS models with improved performance. It is also possible that the representations of RS layers converge, and mathematical analyses of such convergence behavior should give us useful insights.
\item[Flexible Decoding of NMT models:] Our analysis has revealed that a trained RS-NMT model can be used for decoding with fewer layers than used during training without an appreciable loss. It would be useful to have a method to train NMT models that can be used with any number of layers without the sharp loss in translation quality that existing models suffer from. Such a method should be useful for both vanilla NMT and RS-NMT models. Note that flexible decoding can have a larger impact if we can combine it with the insights gained from studying the convergence behavior of RS layers.
\item[Limits of sharing parameters:] Concurrent studies have shown that sharing the self-attention and feed-forward layer parameters between the encoder and decoder is possible without a great loss in performance \cite{DBLP:conf/aaai/XiaHTTHQ19}. However, its combination with RS performs badly. Sequence distillation should help mitigate this problem. Additionally, it is unclear how multilingual models that rely on parameter sharing will behave when combined with RS. The ultimate question in this direction is to find additional parts of neural network models to share.
\item[Filling the capacity with external data:] Using back-translation is a prominent way to improve the decoder using monolingual data of the target language \citep{sennrich-haddow-birch:2016:P16-11}, and we have proven its effectiveness in improving RS-NMT models, indicating that RS-NMT models can afford to manage more information. We have also shown that RS-NMT benefits from sequence-level knowledge distillation and can achieve results comparable to vanilla NMT models. This is another indicator of its ability to incorporate more information. Forward translation and filtering \citep{ueffing:07} is a type of knowledge distillation and has been proven effective in improving translation performance in the form of self-learning. It should be possible to combine both these methods to incorporate into RS-NMT models more information about the source and target languages, in addition to the limited parallel data.
\end{description}
\section*{Acknowledgments}
A part of this work was conducted under the program ``Research and Development of Enhanced Multilingual and Multipurpose Speech Translation System'' of the Ministry of Internal Affairs and Communications (MIC), Japan.
\bibliographystyle{acl_natbib}
\section{Disappointing Issues}
\label{sec:dis}
In this section, we present in-depth discussions of the disappointing issues derived from the survey results, supported by justifiable evidence.
\subsection{Unjustified Bias on the Selection of Search Algorithms}
The long history of the Computational Optimization community has witnessed a wide range of effective search algorithms. Work on other SBSE problems has also explored various search algorithms over the last decade~\cite{DBLP:journals/tse/RamirezRS19}. However, as discussed in Section~\ref{sec:rq2}, it is disappointing to find that, when investigating SBSE for SASs, there is an unjustified bias in the selection of search algorithms, i.e., GA and ES predominate in the single- or aggregated-objective case, while NSGA-II is significantly more common in Pareto-based multi-objective cases.
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\includestandalone[width=\columnwidth]{tikz/femosaa-moead}
\end{subfigure}
\caption{Optimization results by MOEA/D-STM and NSGA-II for an example SAS over all runs~\cite{DBLP:journals/tosem/ChenLBY18}.}
\label{fig:search-alg}
\end{figure}
\begin{figure}
\centering
\includestandalone[width=0.5\columnwidth]{tikz/single-better-ga}
\caption{Optimization results by Memetic Algorithm (MA) and GA for an example SAS over all runs~\cite{DBLP:journals/tsc/LeitnerHD13}.}
\label{fig:search-alg-single}
\end{figure}
Indeed, certain search algorithms are `better known' than others, but such a large bias is what we did not expect. In fact, the selections have not been well justified: undoubtedly, ES will not work for those widely used subject SASs with an explosion of the search space, as shown in Section~\ref{sec:rq6}. While GA and NSGA-II are arguably among the most popular algorithms in their respective categories, there is no scientific evidence to prove that they are universally fit for all contexts, especially considering that dynamics and uncertainty are heavily involved in the search problems of SASs. In particular, given the extremely active research within the computational optimization domain, it is possible that alternative algorithms perform much better for SAS optimization, which is not uncommon in other SBSE problems. For example, in the software product line optimization problem, IBEA~\cite{Zitzler2004} has been shown to find solutions with better convergence and diversity than NSGA-II. This is also highly plausible for SAS problems. For example, a recent work on SBSE for SASs~\cite{DBLP:journals/tosem/ChenLBY18} has shown that MOEA/D-STM~\cite{DBLP:journals/tec/LiZKLW14}, a relatively new search algorithm, tends to achieve better results than NSGA-II in terms of optimizing two conflicting objectives on different SASs that require configuration-based self-adaptation. An example result is shown in Figure~\ref{fig:search-alg}, where points closer to the top-left region represent better adaptation solutions. Another work, by Leitner et al.~\cite{DBLP:journals/tsc/LeitnerHD13}, has also demonstrated that, when configuring different SASs, GA is inferior to MA, an even older search algorithm introduced in 1999, as shown in the example in Figure~\ref{fig:search-alg-single}.
\begin{figure}[!t]
\centering
\includestandalone[width=0.6\columnwidth]{tikz/alg-number-pie}
\caption{Distribution on the number of search algorithms considered (with reported results) in the primary studies.}
\label{fig:alg-pie}
\end{figure}
In general,
there are two issues leading to this disappointment.
The first is that the number of search algorithms considered (with reported results) in each study is rather small,
as can be seen in Figure~\ref{fig:alg-pie}.
In particular,
we see that as many as 63\% of the primary studies consider and present the results for only one search algorithm,
and a further 15\% of the studies discuss two.
These already constitute more than three quarters of the work surveyed.
Such numbers can hardly be considered a sufficiently wide exploration covering the full spectrum of the possibilities of SBSE for SASs.
In fact,
it is essential to `try' a good range of alternative search algorithms and report the results,
especially when the knowledge of the problem is not very clear.
In the Computational Optimization and Evolutionary Computation community,
it is not uncommon to investigate between four and six algorithms in the majority of the work\footnote{Excluding empirical studies, where the number of compared algorithms may easily be over a dozen.},
in order to fully evaluate the effectiveness of the proposed approach and justify the choice.
This has also become a standard practice in SBSE.
For example,
on the test case generation~\cite{pradhan2018cbga} and refactoring~\cite{DBLP:journals/tse/LuWYAN19} problems, existing work (excluding empirical studies) has also studied and presented results based on three or more competitive search algorithms. Sometimes, there can be as many as nine search algorithms~\cite{DBLP:journals/jss/ZhangAY19}.
Another (probably more important) issue is to select search algorithms for comparison according to the properties of the problem considered.
It is known that every algorithm has its own ``comfort zone''.
But people tend to work by analogy;
that is, using the algorithm which was (widely) used before.
This carries the risk of not being fully aware of an algorithm's limitations.
For example,
ES is apparently not workable on large-scale SASs;
GA may not be suitable for time-critical SASs;
NSGA-II typically does not work on SAS problems with four or more objectives~\cite{Purshouse2007}.
Therefore,
selecting an algorithm suitable for the considered SAS problem is crucial,
which demands a good understanding of both the problem and the algorithm.
This, however, can emerge as a future opportunity on SBSE for SASs,
as we will discuss in Section~\ref{sec:alg-sel}.
We therefore urge the researchers and practitioners in SBSE for SASs to consider a much wider and deeper investigation of different search algorithms, as well as a more scientifically justifiable selection of algorithms based on an understanding of both the SAS problems and the search algorithms where possible.
\subsection{Limited Synergy between Domain Knowledge and Search Algorithm}
Given the very nature of the optimization involved in engineering SASs, exploring SBSE, or search algorithms in a more generic sense, for SASs is certainly not new. Yet, disappointingly, from Section~\ref{sec:rq3}, we have failed to see how the advances of SBSE for SASs can be distinguished from ``yet another application domain of the search algorithms'', as 79\% of the primary studies merely specialize the search algorithm with information about the nature of the problem.
Since SBSE is relatively new to SAS research, this result is predictable, but we did not expect such a significant discrepancy. The key challenge that underpins this disappointment is how to exploit the expertise and domain knowledge of engineers to tailor particular aspects of the search algorithms, in order to reach better specialized and improved optimization results. This is perhaps a more general issue in the wider context of SBSE problems, but there have been some very successful attempts at capturing and synergizing such knowledge into the algorithms in other SBSE research. For instance, for the software testing problems addressed by SBSE, the search algorithms are often manipulated to work with various types of seeds, which were specifically designed to match the contexts of the code under test~\cite{DBLP:conf/icst/FraserA12}. There is also a considerable amount of work that focuses on specializing the operators of search algorithms to better serve the context~\cite{DBLP:conf/gecco/WuWHJK15}\cite{DBLP:journals/asc/XueZT0CC016}.
Ignoring the strong domain knowledge of engineers is a non-trivial issue in SBSE for SASs and can be an unwise waste of such valuable knowledge. For example, a recent work by Chen et al.~\cite{DBLP:journals/infsof/ChenLY19} on SBSE for SASs has shown that, instead of letting the algorithm search from scratch, seeding the search algorithm with high-quality seeds, selected based on engineers' expertise, can largely improve the optimization result (i.e., the HV value) when configuring 10 different service-based SASs. An example\footnote{The work assumes that there is no preference towards any of the objectives, and thus hypervolume serves as a fair indicator of the overall quality of the solution set.} is illustrated in Figure~\ref{fig:search-knowledge}.
\begin{figure}
\begin{subfigure}[t]{0.45\columnwidth}
\includestandalone[width=\columnwidth]{tikz/w7-hseed-HV-nsgaii}
\subcaption{NSGA-II}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\columnwidth}
\includestandalone[width=\columnwidth]{tikz/w7-hseed-HV-ibea}
\subcaption{IBEA}
\end{subfigure}
\caption{Mean hypervolume values for an example SAS with and without additional domain knowledge over all runs~\cite{DBLP:journals/infsof/ChenLY19}.}
\label{fig:search-knowledge}
\end{figure}
As shown in Section~\ref{sec:rq3}, we do see a few very good example works (e.g., \cite{DBLP:journals/tosem/ChenLBY18}\cite{DBLP:conf/icse/CailliauL17}\cite{DBLP:journals/infsof/ChenLY19}\cite{DBLP:conf/gecco/HaraldssonWBS17}) on better synergizing domain expertise and search algorithms when exploring SBSE for SASs, which constitute the remaining 21\% of the primary studies. This, as shown in those 21\% of the works, is in fact a win-win strategy: on the one hand, the search algorithm can potentially be made more controllable and explainable; on the other hand, the strong domain knowledge can serve as strong guidance to steer the search, achieving results that would otherwise be difficult to obtain. Further, the complex nature of SASs can actually provide more opportunities to design better-tailored and specialized search algorithms for the context.
We therefore seek to increase that 21\% of primary studies and urge the community to carefully consider a better and more thorough synergy between domain knowledge and the search algorithms when investigating SBSE for SASs. This disappointing issue also raises the need for highly specialized search algorithms that consider the characteristics of the algorithm itself, and for human-centric SBSE for SASs, in which the humans (engineers) are permitted to control the search algorithms, rendering the results more explainable. We will further discuss these new opportunities for future research in Sections~\ref{sec:alg-sel} and~\ref{sec:hc}, respectively.
\subsection{Limited and Inaccurate Definition of Multi-Objective Search for SAS}
\label{sec:dis3}
Multi-objective search, in the context of the Computational Optimization community and other SBSE problems, refers to simultaneously optimizing multiple (usually conflicting) objectives, thus resulting in a set of trade-off solutions, called Pareto optimal solutions. Searching for all Pareto optimal solutions (or a good approximation of them) provides the engineers not only with diverse choices from which they can choose their preferred one, but also with knowledge of the optimization problem (e.g., correlation between the objectives, actual dimensionality of the Pareto front, and the location of knee points) for a better understanding of the problem.
However, as shown in Section~\ref{sec:rq4}, our review discovers that in SBSE for SASs, multi-objective search can refer to the fact that there are multiple conflicting objectives but they are optimized in some form of aggregation (i.e., weighted sum or product). We were disappointed to see that this form of optimization is predominantly referred to as multi-objective search and accounts for 52 primary studies, compared with 20 others that rely on the alternative, and perhaps more accurate, form of multi-objective search using the Pareto-based relation.
\begin{figure}
\begin{subfigure}[t]{0.45\columnwidth}
\includestandalone[width=\columnwidth]{tikz/single-op}
\subcaption{GA for equal weights}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\columnwidth}
\includestandalone[width=\columnwidth]{tikz/multi-op}
\subcaption{NSGA-II}
\end{subfigure}
\caption{Comparing results by search with equally weighted objective aggregation (GA) and Pareto-based multi-objective search (NSGA-II) on the bi-objective DTLZ2 function.}
\label{fig:single-vs-mutiple}
\end{figure}
In essence, the aggregation of objectives implies that certain preferences over the objectives are available and can be precisely quantified using weights; indeed, we showed that the majority of these studies either assume the weights can be provided by the engineers or weight the objectives equally by default. If this is the case, then the aggregation-based search can be ideal. However, it is not uncommon that a precise quantification of the weights is very difficult, if not impossible, especially given the complexity of SASs. In particular, despite being adopted in 20 primary studies, it is in fact an inaccurate assumption that setting equal weights on the objectives implies equal importance and thereby leads to a fair solution, because the search pressure guided by the weighted fitness and a single-objective search algorithm may not be able to discover certain solutions in the first place. For example, in Figure~\ref{fig:single-vs-mutiple}, we compare the case of searching with an equally weighted sum of objectives (GA) and the Pareto-based multi-objective search (NSGA-II) on DTLZ2, a well-known bi-objective test function from the Evolutionary Computation community. Clearly, after very few generations (fewer than 20), the GA converges to a single point while the NSGA-II produces a much more diverse set of Pareto optimal solutions, finally representing the full spectrum of the trade-off surface of the problem. It is interesting to note that, even though equal weights have been given to the two objectives under the same scale, the final solutions are far from balanced. In fact, they shift towards the extreme where one objective is preferred more, i.e., close to the boundary points of the problem's Pareto front. This apparently contradicts the general belief that equally weighted objectives represent fair importance. The fundamental causes are that (i) the boundary points have better aggregated fitness than inner points due to the concave shape of the Pareto front, and (ii) the genetic drift phenomenon in GA easily leads the population to converge to one point.
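The contrast between the two formulations can be stated compactly. The following sketch, with hypothetical bi-objective minimization vectors, shows the weighted-sum fitness used by the aggregated search and the Pareto-dominance test that underlies NSGA-II-style ranking; it also reproduces, in miniature, the drift towards boundary points on a concave front.
\begin{verbatim}
def weighted_sum(objs, weights):
    """Aggregated fitness: collapses a multi-objective vector into one
    scalar; requires the preferences to be quantified as weights."""
    return sum(w * o for w, o in zip(weights, objs))

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if a is no worse
    on every objective and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Two hypothetical solutions on a concave front: with equal weights, the
# boundary point wins even though neither solution dominates the other.
a, b = (0.05, 0.999), (0.707, 0.707)  # boundary vs balanced trade-off
print(weighted_sum(a, (0.5, 0.5)) < weighted_sum(b, (0.5, 0.5)))  # True
print(dominates(a, b), dominates(b, a))  # False False
\end{verbatim}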
The minority of studies using Pareto-based multi-objective search in SBSE for SASs, on the other hand, do not require weight specification during the search, which precisely addresses the shortcoming of the weighted sum and product of objectives. However, one challenge of Pareto-based multi-objective search is how to select a single solution from the produced nondominated trade-off set. When no preference information is available, it is ideal to provide the engineers with all nondominated solutions found to keep them informed in the decision-making process. When certain preferences exist, the selection of the single solution can be tailored accordingly. Yet, unlike the case of weighted aggregation, such preferences do not require explicit quantification. As we have shown in Section~\ref{sec:rq4}, this can be achieved with human intervention when possible (14 studies), or automatically by selecting a solution from certain regions, e.g., knee point selection.
In our review, we have also observed that certain work applies both, e.g.,~\cite{DBLP:conf/icse/Garcia-GalanPTC14}\cite{DBLP:journals/tmc/PaolaFGRD17}, in which the search is conducted in a Pareto-based multi-objective way while the final solution is selected using a single weighted aggregation. This is an approach that we particularly advise against, as it raises a clear contradiction: the use of weighted aggregation implies that certain preferences over the objectives are available and can be quantified, which contradicts the fact that Pareto-based multi-objective search is usually conducted when such a clear quantification does not exist. Sadly, those studies have not explicitly discussed the rationale behind the choice.
As a result, Pareto-based multi-objective search is what should be investigated more, at least with an amount of attention comparable to that given to weighted aggregation, when exploiting SBSE for SASs. In this regard, a future direction on how to capture and model preferences in SBSE for SASs has emerged as a new research opportunity, as we will discuss in Section~\ref{sec:cap-pref}.
\subsection{Unjustified Quality Indicator Selection under Multiple Objectives}
\begin{table}[t!]
\begin{threeparttable}
\caption{The prerequisite and characteristics of the top 5 quality indicators considered in the primary studies.}
\label{table:single-multi}
\begin{tabularx}{\columnwidth}{P{2.2cm}|P{1.72cm}|Y|Y}\hline
& \textbf{\emph{utility}} & \textbf{\emph{objective value}} & \textbf{\emph{HV, IGD, $\epsilon$ and GD}}\\\hline
\textbf{\emph{Prerequisite}}& Preference on objectives& None & None \\\hline
\textbf{\emph{Comparing the solution(s) on all objectives?}} & Yes & No & Yes \\\hline
\textbf{\emph{Comparing solution sets?}} & No & No & Yes \\\hline
\end{tabularx}
\end{threeparttable}
\end{table}
When multiple conflicting objectives of the SAS are involved, we have shown in Section~\ref{sec:rq5} that using utility or the value of each objective individually are the predominant ways to assess the quality of solutions. Despite their popularity, it is disappointing that our review finds no systematic justification for the choice of utility and objective value as quality indicators for multiple objectives. This is important because, as shown in Table~\ref{table:single-multi}, the utility can collectively assess all objectives of a single solution only under the assumption that preferences are available and can be precisely quantified. However, as mentioned, it is not uncommon in SAS optimization that the preferences are not available or are too difficult to quantify. The objective value, on the other hand, is exempt from such a prerequisite but only provides an assessment of a single objective at a time. This is apparently not accurate, as it ignores the quality of the solutions on the other objectives. When there is a need to assess a solution set, such a way of considering objectives separately may lead to a situation in which a solution set is evaluated as better than another on each objective individually, yet the engineer would never prefer it under any circumstance. The other quality indicators, e.g., HV, IGD, $\epsilon$-indicator and GD, are designed exactly to compare solution sets as a whole and have no prerequisite on preferences, but they have, disappointingly, attracted much less attention compared with the utility and objective value.
In the Computational Optimization and Evolutionary Computation community, hundreds of quality indicators have been proposed to assess solution sets \cite{DBLP:journals/csur/LiY19}. Yet, we are disappointed to see that the work on SBSE for SASs has only explored a very limited set of them, as discussed in Section~\ref{sec:rq5}. It is even more disappointing to discover that their selection also lacks a systematic justification, being driven instead by the fact that other work has used the same ones. The issue here, however, is that each quality indicator has its own unique preferences. An indicator that fits well in other situations may not be suitable for the SAS problem. Even two indicators designed for assessing the same quality aspect of a solution set can have very different preferences~\cite{DBLP:journals/csur/LiY19}. For example, both HV and IGD are used to provide a comprehensive evaluation of a solution set in terms of convergence, spread, uniformity and cardinality, but HV clearly prefers knee points of the Pareto front while IGD prefers uniformly distributed solutions. Therefore, a careful selection and use of quality indicators has to be made when evaluating/comparing solution sets, since quality indicators can easily mislead the decision maker, returning evaluation results opposite to the actual preferences of the decision maker~\cite{DBLP:journals/csur/LiY19}\cite{DBLP:conf/icse/Li0Y18}.
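For concreteness, the following is a minimal sketch of two of these set-level indicators, GD and IGD, for bi-objective minimization; the reference front is a hypothetical sample of the true Pareto front. It also illustrates the differing preferences just discussed: a well-converged but clustered set scores perfectly on GD yet poorly on IGD, since it misses parts of the front.
\begin{verbatim}
import numpy as np

def gd(solutions, reference):
    """Generational Distance: mean distance from each obtained solution
    to its nearest point on the reference front (convergence only)."""
    d = np.linalg.norm(solutions[:, None, :] - reference[None, :, :],
                       axis=-1)
    return d.min(axis=1).mean()

def igd(solutions, reference):
    """Inverted GD: mean distance from each reference point to its
    nearest obtained solution; also penalizes poor spread."""
    return gd(reference, solutions)

# Hypothetical concave front sampled at 50 points; a clustered subset.
ref = np.array([[np.cos(t), np.sin(t)]
                for t in np.linspace(0.0, np.pi / 2, 50)])
clustered = ref[:5]
print(gd(clustered, ref))   # 0.0: all points lie on the front
print(igd(clustered, ref))  # > 0: much of the front is not covered
\end{verbatim}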
For example, when reasoning about adaptation solutions, two common but conflicting objectives to be optimized can be the dependency compliance and the incurred cost\footnote{The actual meaning of cost depends on the context; e.g., it could be the time or resources required to realize adaptation.}: an adaptation solution with lower cost is likely to have a larger violation of the dependency. Let us assume two solution sets \texttt{A} and \texttt{B} in Figure~\ref{fig:qi-example} returned by two search algorithms. To compare these two sets, a typical practice is to consider one or several commonly used quality indicators. Here, the eight most commonly used quality indicators in SBSE are given in Table~\ref{table:taxonomy}, along with the quality aspects they cover. As we can see, there are some indicators, or combinations thereof, which can (partially) cover all quality aspects of solution sets and are thus expected to give a reliable evaluation. However, the fact is quite the opposite --- all eight indicators evaluate \texttt{A} as better than \texttt{B}, but the decision maker may typically prefer the latter. This is because, in this particular context, the dependency needs to reach 100\% compliance, or otherwise an adaptation solution is considered invalid and cannot be used to adapt the software system at all. The solution ($\beta$) of \texttt{B} reaches 100\% and has lower cost than the corresponding one in \texttt{A}, which should have made it the ideal solution. This is a typical example that indicates the risk of directly using existing quality indicators without considering the nature/preferences of the problem~\cite{DBLP:conf/icse/Li0Y18}.
In fact, the lack of systematic justification for the selection of quality indicators when using SBSE to multi-objectively optimize SASs is also an overwhelming problem in the entire SBSE domain~\cite{DBLP:conf/icse/WangAYLL16}\cite{DBLP:conf/icse/Li0Y18}, but it is of particular importance in SBSE for SASs, given the wide variety of possible objectives to be optimized.
Based upon the above discussion, we urge the community in this thread of research to carefully consider the selection of multi-objective quality indicators, which itself could potentially be a promising research area. This disappointing issue is again related to the new research opportunity of how to capture and model preferences in SBSE for SASs, which we will discuss in Section~\ref{sec:cap-pref}.
\subsection{Weak Generalization of Results across the Subject SASs}
In Section~\ref{sec:rq6}, we have shown that a variety of subject SASs are considered in SBSE for SASs. Yet, disappointingly, our review has indicated that 65\% of the primary studies consider only one SAS, and 87\% consider fewer than three. It becomes even more disappointing if we differentiate the SASs based on their domains, as discussed in Section~\ref{sec:rq6}, where the portion of studies that consider one SAS increases to three quarters, and the number of studies that consider fewer than three subject SASs increases to 97\%. In research work on other SBSE problems, it is not uncommon to see a wide range of subject software systems being used (even excluding empirical studies); for example, SBSE for software product line engineering often involves more than 10 distinct subject systems~\cite{DBLP:journals/asc/XueZT0CC016}. In the Computational Optimization and Evolutionary Computation community, the number of test functions used to evaluate a search algorithm is also most commonly more than 10 (excluding empirical studies). Comparatively, the number of subject SASs considered in current work on SBSE for SASs is rather limited, even when pure empirical studies are included.
\begin{table}[t!]
\begin{threeparttable}
\caption{The median GD (IQR) produced by NSGA-II and SPEA2 when searching on different SASs from distinct domains over all runs~\cite{DBLP:journals/jss/PascualLPFE15}.}
\label{table:subject-example}
\begin{tabularx}{\columnwidth}{Y|Y|Y}\hline
&\textbf{\emph{NSGA-II}}&\textbf{\emph{SPEA2}} \\\hline
\textbf{\emph{x264}} &\cellcolor{yellow}0.00{\footnotesize(0.00)}&\cellcolor{yellow}0.00{\footnotesize(0.00)} \\\hline
\textbf{\emph{Wget}} &0.91{\footnotesize(2.7E+2)}&\cellcolor{yellow}0.74{\footnotesize(2.6E+2)} \\\hline
\textbf{\emph{Berkeley}} &0.22{\footnotesize(5.6E+2)}&\cellcolor{yellow}0.00{\footnotesize(4.4E+2)} \\\hline
\textbf{\emph{Sensor}} &\cellcolor{yellow}1.23{\footnotesize(1.9E+3)}&1.29{\footnotesize(1.4E+3)}\\\hline
\textbf{\emph{Game}} &2.41{\footnotesize(1.7E+3)}&\cellcolor{yellow}2.19{\footnotesize(1.4E+3)}\\\hline
\textbf{\emph{Tank War}} &\cellcolor{yellow}2.08{\footnotesize(6.7E+2)}&2.09{\footnotesize(6.3E+2)}\\\hline
\textbf{\emph{Media}} &2.96{\footnotesize(1.2E+3)}&\cellcolor{yellow}2.90{\footnotesize(1.1E+3)} \\\hline
\textbf{\emph{Guide}} &\cellcolor{yellow}1.58{\footnotesize(3.8E+2)}&1.72{\footnotesize(3.7E+2)} \\\hline
\end{tabularx}
\begin{tablenotes}
\footnotesize
\item The better one is highlighted.
\end{tablenotes}
\end{threeparttable}
\end{table}
This can be a crucial issue when conducting research on SBSE for SASs, as it weakens the generalization of the conclusions drawn. An example from Pascual et al.'s work~\cite{DBLP:journals/jss/PascualLPFE15} is shown in Table~\ref{table:subject-example}, in which the GD results produced by the search algorithms when configuring 8 different SASs are compared. It is unclear whether NSGA-II or SPEA2 generally outperforms the other, as they are competitive across all the SASs. However, if only a limited number of subject SASs were considered, it is possible that \texttt{Sensor}, \texttt{Game} and \texttt{Guide} would be chosen, which would lead to the inappropriate conclusion that NSGA-II is generally better. Therefore, we urge the community to consider more subject SASs in future work on SBSE for SASs. Of course, arguably the setup and deployment of working SASs can be more complex than the systems in other SBSE problems, but those subject SASs need not be real software systems; they can well be simulators, e.g., the prototyped artifacts from the SEAMS artifacts track\footnote{https://www.hpi.uni-potsdam.de/giese/public/selfadapt/ exemplars/}.
\section{Opportunities}
\label{sec:opp}
Despite several disappointing and overwhelming issues, SBSE for SASs is still a vital research direction that also bears many opportunities. In this section, we specify those opportunities and outline their promising research directions, as well as the related challenges to be addressed.
\subsection{Justifiably Selecting Search Algorithm according to Problem Instances on SBSE for SASs}
\label{sec:alg-sel}
An interesting phenomenon in SAS as well as SBSE research is that, in spite of the considerably large number of search algorithms currently available,
the primary studies adopt/compare just very few of them, as we have shown in Section~\ref{sec:rq2}.
Several algorithms are clearly preferred,
such as GA and ES for a generic optimization case and
NSGA-II for the multi-objective optimization case.
Indeed,
every search algorithm has its own merits,
which make it well-suited to a particular class of problems.
For a generic optimization case,
it is clear that if the scale of the problem is small enough to allow enumerating all solutions (with a given time budget),
then ES can be a good option as it can guarantee the optimal solution.
Some algorithms are designed for a particular class of problems,
and it is expected that these algorithms may produce better solutions than algorithms designed for generic problems,
for example Branch and Bound (BB) for some combinatorial problems.
The algorithms RS, Greedy Search (GS), HC and GA cannot guarantee an optimal solution.
As its name suggests,
RS visits the space randomly and the final solution can be rather different in each run of the algorithm.
As opposed to RS, every iteration (step) in GS is deterministic (making the currently optimal choice),
in the hope that these locally optimal choices lead to an overall optimal solution of the entire problem.
HC may be deemed as a combination of RS and GS,
as it starts from a random initial point in the search space and
iteratively moves to the best neighbor of the current solution.
Compared to RS, GS and HC,
GA (and other metaheuristics) is a sophisticated optimization algorithm.
Its population-based search strategy with diversity preservation can, to some extent, help the search jump out of local optima.
A particular benefit of such population-based search is dealing with multi-objective problems, where each individual in the population can be used to approximate a unique trade-off between objectives.
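Before turning to the multi-objective case, the following is a minimal sketch of HC as just described (random start, iterated best-neighbor moves); the neighborhood and fitness functions are assumptions supplied by the caller.
\begin{verbatim}
def hill_climb(random_solution, neighbors, fitness, max_steps=1000):
    """Start from a random point and repeatedly move to the best
    neighbor; stops at a local optimum (no improving neighbor)."""
    current = random_solution()
    for _ in range(max_steps):
        best = max(neighbors(current), key=fitness, default=None)
        if best is None or fitness(best) <= fitness(current):
            return current  # local optimum reached
        current = best
    return current
\end{verbatim}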
For the multi-objective case,
as shown in Figure~\ref{fig:alg-count2},
most of the algorithms considered belong to GA or its variants.
Consequently,
the range of problems that the algorithms are suitable for is narrow and overlapping.
Nevertheless,
different multi-objective GAs do have their own ``comfort zone''.
For example,
the algorithms which compare solutions by Pareto dominance and density, such as NSGA-II~\cite{DBLP:journals/tec/DebAPM02} and SPEA2~\cite{Zitzler2002},
typically do not work well on many-objective problems where the number of objectives is larger than three~\cite{Purshouse2007}.
The decomposition-based algorithms (e.g., MOEA/D~\cite{Zhang2007} and its variants MOEA/D-STM~\cite{DBLP:journals/tec/LiZKLW14} and NSGA-III~\cite{Deb2014}) scale up well in terms of objective dimensionality,
but may struggle on problems with an irregular Pareto front shape (e.g., degenerate, disconnected or highly nonlinear)~\cite{Ishibuchi2017}.
The indicator-based algorithms (e.g., IBEA~\cite{Zitzler2004}) are often insensitive to the Pareto front shape,
but may suffer from dominance-resistant solutions (i.e., solutions with an extremely poor value on at least one of the objectives but with (near-)optimal values on the others~\cite{Ikeda2001}), causing their solutions to concentrate on the boundaries of the Pareto front~\cite{Li2018}.
Overall,
it is important to understand the behavior and advantages/drawbacks of search algorithms available,
which, together with the problem knowledge, will certainly help to choose an appropriate algorithm that is well-suited to the problem at hand.
To achieve that,
there are several challenging research questions that we need to address.
\begin{itemize}
\item What kind of information/knowledge of the SAS problem can be obtained/extracted, and how.
\item What kind of search algorithm can make better use of such information/knowledge, which can greatly inform the algorithm selection.
\item Or further, how to generically develop a specialized search algorithm specifically for a SAS problem.
\end{itemize}
\subsection{Capturing Preferences of the Engineers in SBSE for SASs}
\label{sec:cap-pref}
Preferences refer to one's satisfaction with certain aspects of the SAS, and they can come from the software and system engineers, who have specific software engineering expertise and make various decisions in engineering SASs. While preferences exist even in the single-objective case, they are of even greater importance when there are multiple conflicting objectives for the SAS. According to their software engineering expertise, the engineers might only be interested in a handful of selected solutions that best meet their preferences, instead of the entire set of solutions obtained by a Pareto-based multi-objective search algorithm. As a matter of fact, this has been a long-standing topic in the Computational Optimization community for over half a century.
As we discussed in Section~\ref{sec:rq4}, 14 primary studies on SBSE for SASs, which constitute the majority, have captured preferences in an \textit{a posteriori} manner, where a set of widely spread trade-off alternatives is obtained by a search algorithm before being presented to the engineers. However, this obviously increases the engineers' workload and incorporates much irrelevant or even noisy information into the decision-making process. To alleviate the cognitive burden on the engineers, some pre-screening techniques have been introduced to help select a handful of representative solutions before handing them over to the engineers. These, as shown in Section~\ref{sec:rq4}, include selecting only the knee point(s) or solutions that are significantly superior on certain preferred objectives. Indeed, the \textit{a posteriori} manner mainly captures preferences based on the natural cognition of engineers rather than their software engineering expertise, as the captured preferences cannot influence the search process.
The above raises the research opportunity of handling preferences in SBSE for SASs in an \textit{a priori} or even an \textit{interactive} manner. In this regard, the former means that the preference information is elicited \textit{a priori} and used as a criterion to evaluate the fitness, driving the search towards the region of interest along a pre-defined preferred direction. For example, the preference information can be represented as one or more reference points, each dimension of which represents the engineers' expectation on the corresponding objective~\cite{DebSBC06,LiCMY18} (a minimal sketch of such a reference-point-based scalarization is given after the list below). Such information can be formally extracted from some software engineering design notations, such as the goal model. However, capturing the preferences then only occurs at the beginning of the search. The latter, i.e., the \textit{interactive} manner, permits continuous refinement of the preferences, thus allowing a more accurate capture. Such an interactive preference capture is a point we will further elaborate on in Section~\ref{sec:hc}. To achieve an \textit{a priori} or \textit{interactive} capture of preferences, there are several challenging research questions that need to be addressed:
\begin{itemize}
\item How to automatically extract the stakeholders' preference information in an efficient and cost-effective manner.
\item How to structure the preferences in a way that is understandable by the search algorithm.
\item Which parts of the stakeholders' preferences, expressed in some software engineering representations, can be correlated to which quality aspects of the solution.
\end{itemize}
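As referenced in the discussion above, the following is a minimal sketch of a reference-point-based achievement scalarizing function, which turns \textit{a priori} preference information into a fitness criterion for a minimization problem; the reference point and weights below are illustrative assumptions.
\begin{verbatim}
def achievement_scalarizing(objs, reference_point, weights):
    """Smaller is better: drives the search towards the engineers'
    reference point (aspiration levels) in a minimization problem."""
    return max(w * (f - z)
               for f, z, w in zip(objs, reference_point, weights))

# Hypothetical bi-objective example: aspiration levels (0.2, 0.5)
# with equal weights.
print(achievement_scalarizing((0.3, 0.4), (0.2, 0.5), (0.5, 0.5)))  # 0.05
\end{verbatim}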
\subsection{Effective and Efficient Fitness Evaluation in SBSE for SASs}
A crucial part of SBSE is how the fitness of a solution is evaluated, which serves as the key driver of the search process. This, in the context of SASs, is often related to how the behaviors of the systems change with different adaptation solutions. In certain scenarios, it is possible to profile the SAS at design time, or at runtime where the profiling only affects certain aspects of the SAS rather than changing the whole system~\cite{DBLP:conf/icse/Gerostathopoulos18}. Most commonly, however, such profiling is expensive and time consuming. In contrast, surrogate models based on machine learning have been explored as an alternative, given that they make fitness evaluation relatively cheap as the search proceeds~\cite{DBLP:conf/icse/Chen19b}. Yet, this comes at the cost of high complexity in building such models, which may still lack accuracy or fail to capture up-to-date changes of the SAS. Further, the amount of examples required to train the model can also hinder the effectiveness of the search.
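A minimal sketch of the surrogate-assisted evaluation loop discussed above, assuming a scikit-learn-style regressor; the \texttt{profile} function is a stand-in for the expensive measurement of the running SAS. The idea is to trade a few expensive measurements for many cheap model predictions.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def profile(config):
    """Stand-in for the expensive measurement of the running SAS; in
    practice this would deploy the configuration and observe the system."""
    return sum(x * x for x in config)  # synthetic response, illustration only

class SurrogateFitness:
    """Evaluate most candidates with a cheap learned model and only
    occasionally measure the real system to keep the model up to date."""
    def __init__(self, measure_every=20):
        self.model = RandomForestRegressor(n_estimators=50)
        self.X, self.y = [], []
        self.calls = 0
        self.measure_every = measure_every

    def __call__(self, config):
        self.calls += 1
        if len(self.X) < 10 or self.calls % self.measure_every == 0:
            fitness = profile(config)  # expensive but accurate
            self.X.append(list(config))
            self.y.append(fitness)
            self.model.fit(np.array(self.X), np.array(self.y))
            return fitness
        return self.model.predict(np.array([config]))[0]  # cheap estimate
\end{verbatim}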
This situation raises the research opportunity of investigating effective and efficient fitness evaluation in SBSE for SASs. In particular, the key difficulty lies in how to keep the overhead of fitness evaluation low while maintaining a reasonable accuracy and cost of building the surrogate model. A promising direction here is the research area of incremental online learning, where the model can be learned from limited data samples and efficiently updated as new data is collected, while providing adequate accuracy~\cite{DBLP:conf/icse/Chen19b}. Another possible direction is to explore so-called novelty models that do not require observing the behaviors of SASs when using SBSE~\cite{DBLP:conf/kbse/RamirezJCK11}. Such a model mimics the natural phenomenon whereby evolution is never guided solely by explicit objectives, but also by the biological novelty of the individuals. In such a way, the fitness can be assessed without the need to affect or acquire data from the SASs, thus mitigating expensive evaluation. However, more research questions need to be addressed in order to better incorporate online learning with SBSE for SASs, such as the following:
\begin{itemize}
\item Whether the frequency of model updates could have an impact on the search results.
\item How to handle the trade-off between the cost of model building and the accuracy (or relevance) of the model, if any.
\item What are the correlations between the accuracy (or relevance) of a model and the improvement of SBSE for SASs.
\end{itemize}
\subsection{Just-in-Time Handling of Changes in SBSE for SASs}
SASs inevitably face changes in their requirements, environment or internal states, either at design time or at runtime. Despite the fact that SBSE is capable of naturally handling dynamics to some extent, the more fundamental problem is how often the optimization should run in order to ensure that the results can cope with the new changes. Current research on SBSE for SASs has almost ignored this point, or simply assumes that the search algorithm can be re-triggered when there is a need (e.g., at a fixed frequency or upon the occurrence of changes). Yet, such a strategy suffers from the limitation that no changes can be captured during the run of the search algorithm.
To this end, recent advances in so-called dynamic optimization~\cite{DBLP:journals/swevo/NguyenYB12} and dynamic SBSE~\cite{harman2012dynamic} are a promising but under-explored solution for SASs. Here, the key idea is to allow the search algorithm to automatically pick up any new changes during the search process, so that the new information can be used to steer the search, or old and useless information can be discarded in order to prevent misleading the search (a minimal sketch of this idea is given after the following list). This very nature is a perfect fit for the various `changes' faced by modern SASs. However, there are some crucial challenges in this particular direction of research on SBSE for SASs, for example:
\begin{itemize}
\item What are the mappings between the changes on SASs and the changes with respect to the search algorithm.
\item Which changes can be handled while the search is in progress, and how they can be fed into the search.
\item Whether it is possible to generically consolidate any given search algorithm.
\end{itemize}
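As referenced above, the following is a minimal sketch of a dynamic evolutionary loop; the change-detection predicate, evaluation and variation functions are assumptions supplied by the caller. Upon a detected change, stale fitness values are recomputed and part of the population is reseeded to re-inject diversity.
\begin{verbatim}
import random

def dynamic_search(init_pop, evaluate, environment_changed, variation,
                   generations=1000, reseed_fraction=0.2):
    """A (mu+1)-style loop for minimization that reacts to changes
    detected during the run of the search."""
    pop = [(ind, evaluate(ind)) for ind in init_pop]
    for _ in range(generations):
        if environment_changed():
            # Discard stale fitness values and re-evaluate everyone.
            pop = [(ind, evaluate(ind)) for ind, _ in pop]
            # Replace a fraction of the population with fresh variants.
            k = int(len(pop) * reseed_fraction)
            for i in random.sample(range(len(pop)), k):
                fresh = variation(random.choice(pop)[0])
                pop[i] = (fresh, evaluate(fresh))
        parent = min(pop, key=lambda p: p[1])[0]
        child = variation(parent)
        pop.sort(key=lambda p: p[1])
        pop[-1] = (child, evaluate(child))  # replace the worst individual
    return min(pop, key=lambda p: p[1])
\end{verbatim}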
\subsection{Human-Centric SBSE for SASs}
\label{sec:hc}
The main purpose of engineering SASs is to reduce the level of human intervention in running software systems. However, it has been shown that there are scenarios where human involvement is essential~\cite{DBLP:conf/icse/CamaraMG15}, or where human knowledge has been proven able to greatly improve the behaviors of SASs. Similarly, SBSE is motivated by the same origin: to automate the software engineering process and thus free the software engineers from tedious and error-prone tasks. Recently, there is an ongoing demand to engineer human-centric SBSE, such that the search approach does not make the final decision on its own, but serves as an assistant that adds insight for humans to make decisions. These two facts, together, imply a perfect match between the two fields in terms of the human-centric aspect.
In particular, `human' can refer to a wide range of engineers with certain software engineering expertise around SASs, including but not limited to developers, requirements analysts, architects and testers. Among others, interactive SBSE, in which the human's domain knowledge and expertise can be used to explicitly control the search process with explainable outcomes, is a promising research opportunity for this thread of research.
Specifically, interactive SBSE is an emerging paradigm for decision making in software engineering problems on the fly. Combined with the preference capture discussed in Section~\ref{sec:cap-pref}, interactive SBSE enables the human to progressively learn and understand the characteristics of the SAS problem at hand and adjust towards a more appropriate capture of the preference information. As a result, solutions can be gradually driven towards the region of interest~\cite{LiCSY18}, allowing the human to have more control over the search algorithm using their software engineering expertise~\cite{DebSKW10}. This would also create more inspiration to build specialized search algorithms, which should work best in the SAS where the knowledge lies. Yet, the challenges can be related to:
\begin{itemize}
\item What forms of human knowledge/expertise can explicitly influence which aspects of SBSE for SASs.
\item How humans can be placed in the loop in order to facilitate timely interaction with SBSE for SASs.
\item How to ensure the information provided by humans is reliable, i.e., how to prevent immature inputs.
\end{itemize}
\subsection{Incorporating SBSE with Other Approaches for SASs}
SBSE will never be the sole approach for tackling problems in SASs. In fact, given the nature of ``optimization'' implied in SBSE, there is a variety of opportunities to incorporate SBSE with other approaches for SASs, such as control theory, verification, machine learning and so forth. Our review has witnessed a few successful works that specifically incorporate SBSE with other approaches; for example, Gerasimou et al.~\cite{DBLP:journals/ase/GerasimouCT18} have adopted SBSE, guided by a probabilistic verification model, to search for optimal adaptation solutions, and Maggio et al.~\cite{DBLP:conf/sigsoft/MaggioPFH17} have applied control-theoretic adaptation that is tuned using SBSE. However, there is a lack of general guidelines about the possible forms of incorporation. This is important, especially given the wide applicability of SBSE and other approaches for engineering SASs. In particular, challenges can be raised by the following new directions of research:
\begin{itemize}
\item What are the patterns involved when incorporating SBSE with the other approaches for engineering SASs.
\item Whether a ``symbiotic'' relation could exist between SBSE and another approach, i.e., whether both SBSE and the other approach can benefit from each other, collaborating to improve the SAS.
\item How to codify a generic methodology that guides the practitioners of SASs in incorporating SBSE with other approaches.
\end{itemize}
\section{Review Protocol Overview}
\label{sec:review}
\rev{As shown in Figure~\ref{fig:slr}, our literature review protocol exploits automatic search to obtain a set of 66,786 studies from various sources (see Section~\ref{sec:scope}). Starting from stage 1, we removed duplicates by automatically matching their titles\footnote{Patents, citation entries, inaccessible papers, and any other non-English documents were also eliminated.}, leading to 3,740 \textbf{\emph{searched studies}}. Next, we filtered the searched studies according to their titles and abstracts. A study was ruled out if it met either of the two filtering criteria below:}
\begin{itemize}
\item \rev{The paper is not relevant to SAS.}
\item \rev{The paper does not conduct research in the context of software or system engineering.}
\end{itemize}
\rev{The filtering process resulted in a much smaller and more concise set of 378 \textbf{\emph{candidate studies}}. We then conducted a manual search using iterative forward snowballing, as suggested by Felizardo et al.~\cite{DBLP:conf/esem/FelizardoMKSV16}, where the newly included studies (after filtering) were placed into the next snowballing round. Note that we did not perform backward snowballing because the studies searched in our work are restricted to the last decade; backward snowballing would therefore too easily violate this requirement of timeliness. To avoid a complicated snowballing process, we relied on Google Scholar as the single source therein, following the best practice for software engineering surveys~\cite{DBLP:journals/tse/GalsterWTMA14}. The snowballing process stopped when no new studies could be found, leading to 409 candidate studies, after which the full-text review procedure began.}
\rev{At stage 2, we reviewed all the 409 studies and temporarily kept some of them using the inclusion criteria from Section~\ref{sec:in-ex}, which resulted in 199 candidate studies. We then applied the exclusion criteria (see Section~\ref{sec:in-ex}) to the temporarily included studies, leading to 92 candidate studies. Using the cleaning criteria specified in Section~\ref{sec:in-ex}, a further cleaning process was conducted to prune studies that essentially report on the same work, e.g., journal papers extended from a conference version. All these processes finally produced 74 \textbf{\emph{primary studies}} ready for data collection and analysis.}
\rev{For all primary studies at stage 3, we conducted a systematic and pragmatic data collection process that consists of three iterations, for which we elaborate in Sections~\ref{sec:item-class} and~\ref{sec:data-collection}.}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/SLR.pdf}
\caption{Systematic literature review protocol.}
\label{fig:slr}
\end{figure}
\subsection{Search String}
\label{sec:scope}
\rev{From 16th to 30th Sep 2019, we conducted an automatic search over a wide range of scientific literature sources, including ACM Library, IEEE Xplore, ScienceDirect, SpringerLink, Google Scholar, DBLP and the SBSE repository maintained by the CREST research group at UCL\footnote{http://crestweb.cs.ucl.ac.uk/resources/sbse\_repository}.}
The search string was designed to cover a variety of computational search techniques applied in the context of SASs. Synonyms and keywords were linked via logical connectors (AND, OR) to build the search term. The final search string is shown below:
\begin{displayquote}
\textit{(``optimization" OR ``search algorithm" OR ``search based" OR ``multi-objective") AND (``adaptive software" OR ``adaptive system" OR ``dynamic software product line" OR ``autonomic")}
\end{displayquote}
The first set of terms before the AND connector consists of the common keywords for SBSE surveys~\cite{DBLP:journals/csur/HarmanMZ12,Sayyad2013b}, while the second contains terms that commonly appear in SAS-related surveys~\cite{DBLP:conf/c3s2e/WeynsIIA12,DBLP:conf/refsq/YangLJC14}. Noteworthily, we explicitly placed \textit{``dynamic software product line''} in the string because, as far as we are aware, this is the only domain that has been formally acknowledged as highly relevant to both the SBSE~\cite{DBLP:conf/splc/HarmanJKLPZ14} and SAS~\cite{DBLP:journals/computer/BencomoHA12,baresi2014self,classen2008modelling} communities. In this way, we retain a high degree of coverage, as evidenced by the number of returned results in Figure~\ref{fig:slr}.
\rev{Using the above string, we conducted a full-text search on ACM Library, IEEE Xplore, ScienceDirect, SpringerLink, and Google Scholar, but relied on title-only search for DBLP and UCL's SBSE repository, due to their restricted search features. Since DBLP does not work on the whole search string, we paired each term in the first bracket with each one from the second bracket and collected the results of all pairs. For a similar reason, for the UCL's SBSE repository, we searched each term from the second bracket independently and collected all returned results, as it is known that all the studies in this source are SBSE related.}
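As a concrete illustration, the pairing for DBLP amounts to taking the Cartesian product of the two term sets; a minimal sketch follows (the exact query syntax accepted by DBLP is simplified here):
\begin{verbatim}
from itertools import product

sbse_terms = ["optimization", "search algorithm",
              "search based", "multi-objective"]
sas_terms = ["adaptive software", "adaptive system",
             "dynamic software product line", "autonomic"]

# One title query per (SBSE term, SAS term) pair; all results are unioned.
queries = ['"%s" "%s"' % pair for pair in product(sbse_terms, sas_terms)]
assert len(queries) == 16
\end{verbatim}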
\subsection{Inclusion, Exclusion, and Cleaning Criteria}
\label{sec:in-ex}
For the selected candidate studies, we first identified the primary ones using the inclusion criteria below; studies meeting all of the criteria were temporarily chosen:
\begin{enumerate}
\item The study specifies the design, or application, of a computational search algorithm as a major part of the solution to a problem of engineering SASs. If this is not the case, the paper should at least present a formulation of the SAS problem that can be subject to computational search.
\item The study investigates problems related to SAS runtime, or it addresses a design-time problem whose discussion can provide significant insights for runtime self-adaptation of SASs.
\item The study explicitly or implicitly discusses, or at least makes assumptions about, the generality of the problem and solution for engineering SASs in the wider context, even though it may focus on a particular domain of SAS (e.g., Cloud, Services, Internet-of-Things and Cyber-Physical Systems).
\item The problem in the study to be solved by SBSE is derived from a software or system engineering perspective.
\item The study includes quantitative experimental results with clear instructions on how the results were obtained.
\item The study uses at least one method or quality indicator to evaluate the experimental results.
\end{enumerate}
Subsequently, studies meeting any of the exclusion criteria below are ruled out:
\begin{enumerate}
\item The study neither explicitly nor implicitly mentions SBSE where computational search is the key, or the search problem is not considered an important part of the approach.
\item The study is not ``highly visible'' or widely followed. We used the citation information from Google Scholar as a single metric to (partially) assess the impact of a study\footnote{Admittedly, no single metric can well quantify the impact of a paper. Nevertheless, the citation count can tell something about a paper, e.g., its popularity.}. In particular, we follow a pragmatic strategy: a study with at least 5 citations per year since its year of publication is counted in; e.g., a 2010 study would be expected to have at least 45 citations\footnote{All the citations were counted by 30th September 2019.} (a minimal sketch of this rule is given after this list). The only exception is work published in the year of writing this article (i.e., 2019), where we consider any published or pre-press work that has not yet been given an issue number, regardless of its citation count. The reasons behind this setting are threefold:
\rev{\hspace{1em} (a) Our aim is to emphasize the major trends in how SBSE has been used for SASs. This is important, as any issue discovered would be particularly prevalent across the most visible studies, which are of even higher impact. It, therefore, makes sense to ``sample'' the literature for the most ``representative'' work. This approach was adopted by many studies, such as~\cite{DBLP:journals/infsof/FuMS16}, where the citation count from Google Scholar was used as a threshold to select studies for review, as we did in this work.}
\rev{\hspace{1em} (b) It is not uncommon to see software engineering surveys conducted using some metric to measure the ``impact'' of a work. For example, some restrict their scope to what the authors believe to be premium venues~\cite{DBLP:journals/tse/GalsterWTMA14}, while others use a threshold on the impact factors of the published journals, e.g., Cai and Card~\cite{cai2008analysis} used $0.55$, and Zou et al.~\cite{8466000} used $2.0$. In our case, it may not be best practice to apply a metric at the venue level, as the work on SASs often cuts across different fields (as we will show in Table~\ref{tb:papers-count}) --- it is difficult to quantify the ``impact'' across communities. We have therefore taken a measurement at the paper level based on the citation counts from Google Scholar, which have been used as a metric to differentiate between studies in prior work~\cite{ten-year-sbse,DBLP:journals/tse/GalsterWTMA14,DBLP:journals/infsof/FuMS16}.}
\rev{\hspace{1em} (c) Indeed, there is no rule for setting the citation threshold. Ours may seem very high at first glance, but is in fact reasonable for two reasons: (i) by publication date, we mean the official date on which the work appears on the publisher's webpage (for journal work, this means it has been given an official issue number). Yet, it is not uncommon for studies to be made citable as pre-prints before the actual publication, e.g., ICSE often has around a 6-month gap between notification and official publication, and the gap is even larger for some journals. This helps to accumulate citations. (ii) Google Scholar counts the citations made by any publicly available documents, including self-citations, which can still be part of the impact but implies that the citation count may be higher than one purely made by peer-reviewed publications. Nevertheless, this could indeed pose a threat to construct validity, which we will discuss in Section~\ref{sec:tov}.}
\item The study is a short or work-in-progress paper, i.e., shorter than 8 pages (double column) or 15 pages (single column).
\item The study is a review, survey, tutorial, or purely empirical work.
\item The study is published in a non-peer-reviewed public venue, e.g., arXiv.
\end{enumerate}
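The citation-based inclusion rule above (exclusion criterion 2) can be summarized as the following minimal sketch, assuming the cutoff date of 30th September 2019:
\begin{verbatim}
def is_highly_visible(pub_year, citations, cutoff_year=2019):
    # Work published in the year of writing (2019) is kept regardless
    # of citations, including pre-press work without an issue number.
    if pub_year >= cutoff_year:
        return True
    # Otherwise require 5 citations per year since publication,
    # e.g., a 2010 study needs at least 5 * (2019 - 2010) = 45.
    return citations >= 5 * (cutoff_year - pub_year)
\end{verbatim}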
Finally, if multiple studies of the same research work are found, we applied the following cleaning criteria to determine whether they should all be considered. The same procedure was applied if the same authors published different studies for the same SBSE approach, so that only the significant contributions are analyzed for the review.
\begin{itemize}
\item All studies are considered if they report on the same problem but have different solutions.
\item All studies are considered if they report on the same problem and solutions, but have different assumptions about the nature of the problem or have new findings.
\item When the above two criteria do not hold, only the latest version or the extended journal version is considered.
\end{itemize}
\input{tables/data-items}
\subsection{Data Items and Classification}
\label{sec:item-class}
\rev{The key items to be collected when reviewing the details of the primary studies are shown in Table~\ref{tb:items}. We now describe their design rationales and the procedure to extract and classify the data from each item.}
\rev{The data for $I_1$ to $I_4$ is merely used as the meta-information of the primary studies. $I_5$ and $I_6$, which answer \textbf{RQ1}, aim to identify the most widely used search algorithms for SAS and the justifications of their choices. In general, the essential ways to justify the choice of a search algorithm lie in two forms: (i) it is theoretically justified by discussing why the characteristics of the search algorithm(s) align well with the requirements of the problem, with or without contrast to the applicable alternatives. This includes, e.g., how its pros and/or cons fit with the SAS problem, or how its success in other cases can be carried over to the current problem; (ii) it is experimentally justified by comparing with at least one other applicable alternative algorithm in some aspects, e.g., optimality, convergence trajectory, or landscape coverage. Note that for theoretical justification, it is reasonable that a search algorithm is chosen because previous work has shown it to be the best for the SAS problem considered. In this case, however, we look for evidence or assertions justifying that the SAS problem currently studied is identical (or at least shares many similarities) to those from the previous work. Therefore, simply stating that a search algorithm is chosen \textit{because it has been widely used in previous work} is not a theoretical justification considered in this work. To understand whether such justifications are reasonable, we classified the data item into the following levels:}
\begin{itemize}
\item \rev{$L_1$: Both theoretical and experimental justifications are available.}
\item \rev{$L_2$: Only theoretical justifications are discussed.}
\item \rev{$L_3$: Only experimental justifications are presented.}
\item \rev{$L_4$: Neither theoretical nor experimental justifications is available.}
\end{itemize}
\rev{Clearly, $L_1$ is the most ideal situation, and $L_4$ would imply a lack of justification. We also regard theoretical justification as more important than experimental comparison because, when justifying the algorithm choice in SBSE for SASs, the former can guide the design of the latter but rarely the other way around (we will discuss this in more detail in Section~\ref{sec:rq1}). $I_8$ provides detailed information for both \textbf{RQ1} and \textbf{RQ2}, classified by the common categories of SAS problems~\cite{DBLP:journals/taas/SalehieT09,DBLP:conf/dagstuhl/LemosGMSA}. In particular, it additionally contains:}
\begin{itemize}
\item[---] \rev{Managing or managed system.}
\item[---] \rev{MAPE step(s) that involves search~\cite{DBLP:journals/computer/KephartC03}.}
\item[---] \rev{Self-adaptation purpose(s), e.g., self-configuration or self-optimization~\cite{DBLP:journals/taas/SalehieT09}.}
\item[---] \rev{Search objective(s).}
\item[---] \rev{Search constraint(s).}
\end{itemize}
\rev{$I_9$ and $I_{10}$ are useful for \textbf{RQ2}. Specifically, in $I_9$, we recorded any reasons why a particular objective formulation of the search for SASs (e.g., single-objective, Pareto, and weighted) was chosen; otherwise, it was marked as \textit{Unknown}. $I_{10}$ seeks to understand what treatments have been assumed as required by a certain formulation, for example, how to select a final solution under Pareto search, or how to set the weight vector for weighted search. $I_{11}$ provides data for \textbf{RQ3}, including the quality indicators used for Pareto search on SAS, and specifically the justifications of the generic quality indicators considered, e.g., HV~\cite{Zitzler1998} and IGD~\cite{Coello2004}. Again, we classify whether the justification is reasonable using the levels below:}
\begin{itemize}
\item \rev{$L_1$: The generic quality indicator is justified by referring to which quality aspects~\cite{DBLP:conf/icse/Li0Y18,DBLP:journals/csur/LiY19} it covers with respect to the preferences assumed in the SAS problem. By preferences in this work, we refer to the favored shift in the trade-off between different objectives.}
\item \rev{$L_2$: The generic quality indicator is justified by referring only to which quality aspects it covers.}
\item \rev{$L_3$: Neither the quality aspects covered nor the preference of the SAS problem is discussed.}
\end{itemize}
\rev{$L_1$ represents a well-justified case while $L_3$ can be questionable. Indeed, as discussed by Li et al.~\cite{DBLP:conf/icse/Li0Y18}, in SBSE each quality indicator may only cover certain quality aspects (e.g., convergence and diversity), and therefore its choice needs to be justified accordingly and aligned with the preferences of the SAS problem, e.g., whether one objective is naturally more preferred than the others.}
\rev{$I_{12}$ and $I_{13}$ answer \textbf{RQ4} by revealing what domain information of SAS (e.g., variation points, objectives, and model) has been used to specialize which aspect of a search algorithm, such as representation, fitness, and operator. We also collected the reason for leveraging a particular form of domain information. In particular, we classify the domain information from $I_{12}$ into two categories as proposed by Chen et al.~\cite{DBLP:journals/pieee/ChenBY20}:}
\begin{itemize}
\item \rev{\textbf{Problem nature} refers to commonly known basic properties and characteristics of the problem domain with which the search algorithms have to comply in order to be used appropriately. This may, for example, include the type/range of the variables, the sparsity of the values, and the forms of the equality and inequality constraints. Directly applying a standard search algorithm is often considered as exploiting only the problem nature without further specialization, due primarily to the generality of these algorithms~\cite{DBLP:journals/pieee/ChenBY20}.}
\item \rev{\textbf{Domain expertise} is represented as, or produced by, typical SE/SAS methods, practices, and models involved in the engineering process. Most commonly, the SE/SAS knowledge of domain expertise is not naturally intuitive from the problem context but can be extracted through engineering practices, skills, and tools, for example, design models, formatted documents, or even concepts.}
\end{itemize}
\rev{$I_{14}$ was designed for \textbf{RQ5} and it contains several additional data items:}
\begin{itemize}
\item[---] \rev{Type (real system, simulator or dataset) and domain.}
\item[---] \rev{Search space.}
\item[---] \rev{\# variation points.}
\item[---] \rev{Types of environment changes, e.g., workload, signal or service availability.}
\item[---] \rev{Reasons for the selected subject SAS(s).}
\item[---] \rev{\# subject SASs (from different settings or domains).}
\item[---] \rev{\# subject SASs (from different domains only).}
\end{itemize}
\subsection{Data Collection Process}
\label{sec:data-collection}
\rev{For each primary study identified, the data items from Table~\ref{tb:items} were collected and classified based on the coding from Section~\ref{sec:item-class}. To this end, the first author of this paper and two other researchers acted as the investigators and reviewed the primary studies independently. The data and classifications extracted by one were checked by the others. Disagreements and discrepancies were resolved through discussion among the investigators or by consulting other authors. In this work, we adopted three iterations for the data collection process, following the recommendation from a recent survey~\cite{8466000}:}
\rev{\textit{\underline{Iteration 1:}} This iteration aims to conduct an initial data collection to summarize the data and perform a preliminary classification. In particular, for those data items that do not have clearly pre-defined categories (e.g., $I_9$ and $I_{10}$), each investigator proposed their own categories without consulting the others.}
\rev{\textit{\underline{Iteration 2:}} In this iteration, all investigators checked the data and classifications from each other to ensure consistency. A study was discussed during the process if there was any discrepancy in (i) the classification; (ii) the self-defined categories; or (iii) the data itself. All the concerned studies and their data items were examined in order to reach an agreement. Further reading to understand the root cause of a discrepancy was conducted when necessary. Overall, 32 studies were discussed, with $I_{12}$ being the data item involved in most of the discussions, perhaps because many studies contain a mix of different forms of domain information when using SBSE for SASs.}
\rev{\textit{\underline{Iteration 3:}} The process of the final iteration is similar to that of \textit{Iteration 1}, but its goal is to eliminate any typos, missing labels, and errors.}
\section{Other Opportunities}
\label{sec:opp}
Apart from the opportunities discussed under each of the RQs, we have also identified other opportunities that are promising for promoting SBSE for SASs but are unfortunately under-explored. In what follows, we elaborate on these opportunities in detail.
\subsection{Effective and Efficient Fitness Evaluation in SBSE for SASs}
A crucial part of SBSE is how the fitness of a solution can be evaluated, which serves as the key to driving the search process. This, in the context of SASs, is often related to how the behaviors of the systems change with different adaptation solutions. In certain scenarios, it is possible to profile the SAS at design-time, or at runtime where the profiling only affects the SAS in certain aspects rather than changing the whole system~\cite{DBLP:conf/icse/Gerostathopoulos18}. Most commonly, however, such profiling is expensive and time-consuming. In contrast, surrogate models based on machine learning have been explored as an alternative, given that they are relatively cheap in terms of fitness evaluation as the search proceeds~\cite{DBLP:conf/icse/Chen19b}. Yet, this comes at the cost of high complexity in building such a model, which may still lack accuracy or struggle to capture the up-to-date changes of SASs. Further, the number of examples required to train the model can also hinder the effectiveness of the search.
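To illustrate the surrogate idea, the sketch below trains a regressor on a handful of profiled configurations and then serves predictions as a cheap fitness function during search; the random training data is purely a placeholder for real profiling measurements.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder profiling data: 50 binary configurations over
# 10 variation points with their measured latencies (s).
rng = np.random.default_rng(0)
X_profiled = rng.integers(0, 2, size=(50, 10))
y_profiled = rng.random(50)

surrogate = RandomForestRegressor(n_estimators=100).fit(X_profiled,
                                                        y_profiled)

def cheap_fitness(config):
    # One model prediction instead of one expensive profiling run.
    return surrogate.predict(np.asarray(config).reshape(1, -1))[0]
\end{verbatim}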
The situation raises the research opportunity of investigating effective and efficient fitness evaluation in SBSE for SASs. In particular, the key difficulty lies in the question of how to keep the overhead of fitness evaluation low, while maintaining a reasonable accuracy and cost of building the surrogate model. A promising direction on this is the research area of incremental online learning, where the model can be learned with limited data samples and efficiently updated as new data is collected, while providing adequate accuracy~\cite{DBLP:conf/icse/Chen19b}. Another possible direction is to explore the so-called novelty model that does not require observing the behaviors of SASs when using SBSE~\cite{DBLP:conf/kbse/RamirezJCK11}. Such a model mimics the natural phenomenon whereby evolution is never solely guided by explicit objectives, but also by the biological novelty of the individuals. In this way, the fitness can be assessed without the need to affect or acquire data from the SASs, thus mitigating expensive evaluation. However, more research questions need to be addressed in order to better incorporate online learning with SBSE for SASs, such as the following:
\begin{itemize}
\item Whether the frequency of model updates could have an impact on the search results.
\item How to handle the trade-off between the cost of model building and the accuracy (or relevance) of the model, if any.
\item What are the correlations between the accuracy (or relevance) of a model and the improvement of SBSE for SASs.
\end{itemize}
\subsection{Just-in-Time Handling of Changes in SBSE for SASs}
SASs inevitably face changes in the requirements, environment, or their internal states, either at design-time or at runtime. Despite the fact that SBSE is capable of naturally handling dynamics to some extent, the more fundamental problem is how often the optimization should run in order to ensure that the results can cope with the new changes. Current research on SBSE for SASs has almost ignored this point or simply assumes that the search algorithm can be re-triggered when there is a need (e.g., according to a fixed frequency or upon the occurrence of changes). Yet, such a strategy suffers from the limitation that no changes can be captured during the run of the search algorithm.
To this end, recent advances in so-called dynamic optimization~\cite{DBLP:journals/swevo/NguyenYB12} and dynamic SBSE~\cite{harman2012dynamic} are a promising but under-explored solution for SASs. Here, the key idea is to allow the search algorithm to automatically pick up any new changes during the search process, so that the new information can be used to steer the search, or old and useless information can be discarded in order to prevent misleading the search. This very nature is a perfect fit for the various problems with ``changes'' faced by modern SASs. However, there are some crucial challenges in this particular direction of research on SBSE for SASs, for example:
\begin{itemize}
\item What are the mappings between the changes in SASs and the changes with respect to the search algorithm.
\item What changes can be handled while the search is in progress, and how they can be fed into the search.
\item Whether it is possible to generically consolidate any given search algorithm to handle such changes.
\end{itemize}
\subsection{Incorporating SBSE with Other Approaches for SASs}
SBSE would never be the sole approach for tackling problems in SASs. In fact, given the nature of ``optimization'' implied in SBSE, there is a variety of opportunities to incorporate SBSE with other approaches for SASs, such as control theory, verification, machine learning, and so forth. Our review has witnessed a few successful works that specifically incorporate SBSE with other approaches. For example, Gerasimou et al.~\cite{DBLP:journals/ase/GerasimouCT18} have adopted SBSE, guided by a probabilistic verification model, to search for optimal adaptation solutions, while Maggio et al.~\cite{DBLP:conf/sigsoft/MaggioPFH17} have applied control-theoretic adaptation whose internal control signals are optimized using SBSE. In general, however, there is a lack of generic guidelines about the possible forms of incorporation. This is important, especially given the wide applicability of SBSE and other approaches for engineering SASs. In particular, challenges can be raised by the following new directions of research:
\begin{itemize}
\item What are the patterns involved when incorporating SBSE with the other approaches for engineering SASs.
\item Whether a ``symbiotic'' relation could exist between SBSE and another approach, i.e., both SBSE and the other approach benefit from each other, collaborating to improve the SAS.
\item How to codify a generic methodology that guides the practitioners of SASs on incorporating SBSE with the other approaches.
\end{itemize}
\section{Research Methodology}
\label{sec:method}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/method.pdf}
\caption{Overall research methodology.}
\label{fig:research-method}
\end{figure}
The overview of our research methodology in this work is shown in Figure~\ref{fig:research-method}. As can be seen, to understand the state of the art of exploring SBSE when engineering SASs, we first conducted a systematic literature review covering the papers published from 2009 to 2019. The reason we chose 2009 as the starting year of the review is that it is the last year covered by two well-known surveys of the SBSE~\cite{DBLP:journals/csur/HarmanMZ12} and SAS~\cite{DBLP:journals/taas/SalehieT09} domains, respectively. This work therefore seeks to close such a gap and, for the first time, uniquely focuses on how SBSE has been involved in the research on SASs over the last decade.
The review methodology follows the best practice of systematic literature reviews for software engineering~\cite{DBLP:journals/infsof/KitchenhamBBTBL09}, consisting of a clear search protocol, inclusion/exclusion criteria, a pragmatic classification of data items, and a formal data collection process. In brief, the review has two goals: (i) to provide summative statistics with respect to the aforementioned RQs; (ii) to identify the sources from which we derive our discussion on the disappointments and opportunities for this particular research field.
For each RQ, we discuss the results, identify disappointments (if any) together with their theoretical and/or experimental justification, provide suggestions and outline the future research opportunities that could potentially mitigate those disappointments. In addition to these, we discuss other opportunities that are currently under-explored in SBSE for SASs in general.
\section{Discussions on Results, Disappointments and Opportunities}
\label{sec:rq}
\input{tables/venues}
To provide an overview, we show all the 74 primary studies and their publication venues in Table~\ref{tb:papers-count}, from which it is clear that the identified studies come from a variety of well-known conferences and journals\footnote{We omitted the venues that did not yield any primary study.}. Figure~\ref{fig:paper-count} illustrates the evolution of the study count with respect to the publication year. We note that the number of studies increases at a steady pace, achieving a 5$\times$ increase in 2019 (by September) compared with 2009. This implies an increasing popularity of research on SBSE for SASs.
It is worth noting that the primary studies contain not only work published in top Software Engineering venues, but also relevant work published in Systems Engineering conferences/journals as well as Computational Optimization venues, as long as it relates to problems in engineering SASs and complies with the inclusion/exclusion criteria. This evidences the fact that work on SBSE for SASs is often interdisciplinary, spanning different communities.
In what follows, we present and analyze the survey results with respect to our RQs, together with the disappointments, the justifications of the likely issues, and the opportunities to mitigate them. All the data of our survey and experiments is publicly available at our repository\footnote{\url{https://github.com/taochen/sbse-for-sas}}.
\begin{figure}
\centering
\includestandalone[width=0.6\columnwidth]{tikz/paper-count}
\caption{Number of primary studies identified per year.}
\label{fig:paper-count}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.515\columnwidth}
\includestandalone[width=\columnwidth]{tikz/alg-single-evolution}
\subcaption{Single/aggregated objective search}
\label{fig:alg-s-evo}
\end{subfigure}
\hspace{-0.2cm}
\begin{subfigure}[t]{0.485\columnwidth}
\includestandalone[width=\columnwidth]{tikz/alg-multi-evolution}
\subcaption{Pareto search}
\label{fig:alg-m-evo}
\end{subfigure}
\caption{Popularity evolution of search algorithms and their levels of justification for SAS (12 studies use more than one algorithm).}
\label{fig:alg-evo}
\end{figure}
\input{tables/alg-context}
\subsection{RQ1: Search Algorithms for SASs}
\label{sec:rq1}
\subsubsection{Significance}
Since the search algorithm is one of the most important parts of SBSE, understanding what, where, and why search algorithms are chosen in SBSE for SASs is essential.
\subsubsection{Findings}
From Figure~\ref{fig:alg-evo}, we can clearly see that the most popular search algorithms used each year have been similar, i.e., Exhaustive Search (ES)~\cite{DBLP:journals/tse/CalinescuGKMT11,DBLP:conf/icse/CailliauL17}, Genetic Algorithm (GA)~\cite{DBLP:conf/icac/RamirezKCM09,DBLP:journals/ase/GerasimouCT18} and Integer Programming (IP) solvers\footnote{We have seen a variety of solvers used in SBSE for SASs, e.g., CPLEX (\url{https://www.ibm.com/analytics/cplex-optimizer}), LINDO (\url{https://www.lindo.com/}) and SCIP (\url{https://scip.zib.de/})~\cite{DBLP:journals/tse/EsfahaniEM13,DBLP:journals/tse/WangHYY18}.} for single/aggregated objective search, and NSGA-II~\cite{DBLP:journals/tec/DebAPM02} for Pareto search in SAS~\cite{DBLP:journals/tosem/ChenLBY18,DBLP:journals/jss/CalinescuCGKP18}. A more important message from the results is that $L_4$ (e.g.,~\cite{DBLP:conf/icac/RamirezKCM09,DBLP:journals/soca/HuberHKBK14,DBLP:journals/isci/NascimentoL17}) and $L_3$ (e.g.,~\cite{DBLP:journals/jss/PascualLPFE15,DBLP:journals/taas/Garcia-GalanPTC16,DBLP:journals/tsc/ChenB17}) are the most common levels of justification of the algorithm choice for the single/aggregated objective search and Pareto search cases, respectively. This trend shows no sign of changing over the years. In addition to the studies where more than one search algorithm has been experimentally compared (but only one is chosen), we also found 12 studies (e.g.,~\cite{DBLP:conf/icse/Garcia-GalanPTC14,DBLP:journals/jss/PascualLPFE15,DBLP:journals/tosem/ChenLBY18}) that have chosen multiple search algorithms because the proposed approach is algorithm-agnostic.
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/alg-count1-v}
\caption{Top 10 selected single and aggregated objective search algorithms and their levels of justification in SBSE for SASs over years.}
\label{fig:alg-count1}
\end{figure}
However, it is not yet clear whether the overall levels of justification are biased by one or two particular search algorithms. To understand this, Figures~\ref{fig:alg-count1} and~\ref{fig:alg-count2} provide clearer views of the top 10 most popularly used search algorithms and the justifications of their choice. For the single/aggregated objective search case, a variety of algorithms have been chosen, ranging from exact search, e.g., ES and IP solvers, to stochastic search, e.g., GA and Random Search (RS). Apparently, ES, GA, and IP solvers share similar popularity and are more predominant than the rest. In the Pareto search case, NSGA-II is significantly more popular than the others --- a trend inherited from SBSE~\cite{DBLP:journals/csur/HarmanMZ12,Sayyad2013b}. In particular, we confirm that the observation of $L_3$ or $L_4$ being the most common justification level for choosing the search algorithms was not biased by a particular algorithm, but is a prevalent phenomenon across all.
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/alg-count2-v}
\caption{Top 10 selected Pareto search algorithms and their levels of justification in SBSE for SASs over years (MOEA/D stands for Multi-Objective Evolutionary Algorithm based on Decomposition).}
\label{fig:alg-count2}
\end{figure}
Table~\ref{tb:alg-context} shows in what contexts the top 10 search algorithms have been used. They clearly span different SAS problems, parts of the SAS, MAPE-K phases, and self-adaptation purposes. In general, however, we see a clear trend in which some aspects are overwhelmingly targeted: the SAS configuration problems; the managing system; the \textit{Plan} phase; and the self-optimization/-configuration purposes. In particular, for Pareto search, searching for three objectives is the most common case, but NSGA-II has been applied to up to six objectives~\cite{DBLP:conf/icse/Garcia-GalanPTC14}.
Overall, our findings for \textbf{RQ1} conclude that:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Findings 1:} In SBSE for SASs, ES, GA, and IP solvers are the top three most popular algorithms for single/aggregated objective search, while NSGA-II is the predominant one for Pareto search. \\
\textbf{Findings 2:} $L_3$ and $L_4$ are the most common levels of justification when choosing a search algorithm.\\
\textbf{Findings 3:} SBSE for SASs has been used in different contexts, of which the most common is to search on the managing system at the \textit{Plan} phase for self-optimizing/-configuring in the SAS configuration problem.
\end{tcolorbox}
\subsubsection{Disappointments}
\label{sec:rq1-di}
Indeed, certain search algorithms are ``better known'' than others, but such a large bias is something we did not expect. In fact, it is disappointing to see that, when choosing search algorithms for SASs, the majority of the studies give no justification at all ($L_4$, especially for single/aggregated objective search) or rely on purely experimental comparisons ($L_3$, most commonly in Pareto search). We have also shown that this finding was biased neither by a particular search algorithm nor by the year of study, but overwhelmingly holds for most algorithms used in the field over the past decade --- as shown in Figure~\ref{fig:alg-evo}, out of the 97 cases, only 12 and 9 cases in total qualify as $L_1$ and $L_2$, respectively. To give a good example of $L_1$, Kinneer et al.~\cite{DBLP:conf/icse/KinneerCWGG18} state that GP is chosen because the SAS problem studied has a large search space and a complex landscape and, at the same time, sub-optimal results or premature convergence are acceptable. This fits precisely with the major pros and cons of GP, and it is then supported by experimental comparisons with an alternative algorithm, i.e., ES.
Admittedly, the aim of studies in SBSE for SASs may not be to find the ``best'' search algorithm for the problem. However, in whichever case, our conjecture is that the choice of search algorithm should be justifiable, i.e., ideally at $L_1$, or at least at $L_2$ if resources are rather limited, but definitely not $L_4$, because it is known that every search algorithm has its own ``comfort zone''~\cite{DBLP:journals/tec/DolpertM97}. In fact, among the $L_3$ and $L_4$ cases, a considerable number of the studies we found tend to work by analogy, i.e., one of the most common reasons used solely to choose an algorithm is that \textit{``it has been widely used before or in other problems''}, e.g.,~\cite{DBLP:journals/tse/CardelliniCGIPM12,DBLP:conf/icse/Garcia-GalanPTC14,DBLP:journals/ase/GerasimouCT18}; or, mostly, no reasons (nor experiments) are mentioned at all~\cite{DBLP:journals/isci/NascimentoL17,DBLP:journals/fgcs/BarakatML18,DBLP:journals/ase/DellAnnaDD19}. Indeed, it makes sense that a search algorithm is chosen because previous work has shown that it is the best for the SAS problem considered. In this case, however, evidence is required to justify that the SAS problem currently studied is identical (or at least shares many similarities) to those from the previous work. This is what we did not find in the primary studies under these cases.
All the above suggests a lack of sufficient justification for the choice of search algorithms for SAS. This implies a high risk of not being fully aware of their suitability for the SAS(s) studied and the problem at hand, resulting in an immediate threat to the conclusions drawn. For example, ES (or similar exact search algorithms) is apparently not workable on large-scale SASs~\cite{DBLP:journals/tosem/ChenLBY18}; GA may not be suitable for time-critical SASs~\cite{DBLP:journals/tsc/LeitnerHD13}; NSGA-II typically does not scale well on SAS problems with four or more objectives~\cite{Purshouse2007}. Even if the proposed approach is algorithm-agnostic, a limited justification ($L_3$ or $L_4$) can still cause misleading conclusions, as we will show. Therefore, justifiably selecting an algorithm suitable for the considered SAS problem is crucial, which demands a good understanding of both the problem and the algorithm.
Our first disappointment is thus:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Disappointment 1:} Unjustified bias on the choice of search algorithms.
\end{tcolorbox}
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\columnwidth]{tikz/exp/no-exp.pdf}
\caption{The convergence of latency (s) by Hill Climbing (HC, denoted as \textcolor{blue}{---}) and Random Search (RS, denoted as \textcolor{red}{- - -}) on the three SASs under a workload change (each point is the mean over 100 runs). This is an example of the possible consequences when theoretical justification is not supported by experimental justification, where the theory may be affected by overlooked factors (e.g., the search budget) of the problem.}
\label{fig:rq1-no-exp}
\end{figure}
\subsubsection{Justification on the Likely Issues}
To justify the possible issues raised from $L_2$, we first compare Hill Climbing (HC) and Random Search (RS) (as they are used in the studies and exhibit distinct characteristics) on directly profiling three SASs, namely \textsc{SQLite}, \textsc{BDBC} and \textsc{BDBJ}, because using three different subjects, even from the same domain, covers wider scenarios than 65\% of existing work, as we will show in Section~\ref{sec:rq5}. These SASs were chosen because (i) they are real-world software that has been used in prior work~\cite{DBLP:conf/icse/SiegmundKKABRS12} and (ii) they are expensive to evaluate and difficult to model thoroughly, so design-time profiling can provide important insights for designing the policies for runtime self-adaptation. The aim is to tune latency by adjusting various variation points, e.g., \texttt{SQLITE\_OMIT\_BTREECOUNT} on \textsc{SQLite}, under a workload change. The corresponding adaptation tactic under such a workload condition can then be drawn. Both HC and RS are run using the same search budget; their settings, together with details of the SASs, can be found in the supplementary material.
In general, one may theoretically justify that HC tends to be suitable for such a SAS problem (or better than the common baseline RS~\cite{DBLP:conf/icse/ArcuriB11}) because HC is explicitly guided and it is known that these SASs do not contain difficult local optima. As we show in Figure~\ref{fig:rq1-no-exp} (each point is the mean over 100 runs), such a theoretical justification indeed holds: given a sufficient search budget, HC converges well and would eventually be better than RS. However, as can be seen from the experimental results, when both HC and RS converge prematurely under a restricted budget, RS can actually be a better fit for the problem. This implies that the above theoretical justification overlooks the fact that the available search budget may not be ``sufficient'' enough to allow HC to converge better --- a highly likely case for this SAS problem, as the profiling process can be expensive, e.g., it could take minutes to evaluate a single solution. Now, suppose that the requirement threshold lies right in the performance gap between HC and RS; then one would likely mark ``no satisfactory solution'' under the workload and encode this as part of the adaptation policies. This would, of course, not be ideal, as a satisfactory solution could have been found if RS were simply used instead. The above is a typical case where $L_2$ may still fail to justify the choice of the search algorithm, due to the lack of experimental comparison that may reveal overlooked factors in SBSE for SASs. Yet another example is from Leitner et al.~\cite{DBLP:journals/tsc/LeitnerHD13}, who chose GA as a candidate because it is theoretically understandable that GA is less sensitive to local optima than local search, and hence can potentially lead to better results. In their experimental comparison, however, GA was in fact inferior to local search, because the local optima of the SAS problem are not difficult to escape from.
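To make the budget argument concrete, the following minimal sketch contrasts the two strategies under an identical evaluation budget; the binary encoding of variation points and the black-box \texttt{fitness} callable are simplifying assumptions for illustration (the actual settings are in the supplementary material).
\begin{verbatim}
import random

def random_search(fitness, n_bits, budget):
    best, best_f = None, float("inf")
    for _ in range(budget):                    # one evaluation per step
        x = [random.randint(0, 1) for _ in range(n_bits)]
        f = fitness(x)
        if f < best_f:
            best, best_f = x, f
    return best, best_f

def hill_climbing(fitness, n_bits, budget):
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best_f, used = fitness(x), 1
    while used < budget:                       # same budget as RS
        y = x[:]
        i = random.randrange(n_bits)           # flip one variation point
        y[i] = 1 - y[i]
        f = fitness(y)
        used += 1
        if f < best_f:                         # keep improving neighbors
            x, best_f = y, f
    return x, best_f
\end{verbatim}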
Next, we showcase the likely issues of $L_3$ by running NSGA-II and IBEA (the two most common search algorithms with distinct ``comfort zones'') on synthetic service systems for runtime self-adaptation to a service change using the QWS data~\cite{DBLP:conf/smc/Al-MasriM09a}, because this is one of the most widely used types of SASs, as we will show in Section~\ref{sec:rq5}. We experiment on three workflows, each with a different structure and number of abstract services, ranging from 10 to 15, as used in~\cite{DBLP:journals/infsof/ChenLY19}. Again, using three different subjects (from the same domain) already achieves better coverage than 65\% of existing studies, as will be shown in Section~\ref{sec:rq5}. The aim is to self-adapt the SAS by re-composing the concrete services so as to tune different objectives upon service quality/availability changes. HV is chosen as the quality indicator because we aim to assess the overall quality of the solution set produced under no specific preferences, and it covers all quality aspects of a solution set~\cite{DBLP:conf/icse/Li0Y18}. To achieve efficient search, the fitness is evaluated using a well-defined analytical model~\cite{DBLP:conf/gecco/0001LY18,DBLP:journals/infsof/ChenLY19}. We run three- and five-objective cases, as these are what have been used with NSGA-II in Table~\ref{tb:alg-context}. Both NSGA-II and IBEA use an identical search budget; their settings and the specifications of the service-based SASs can be found in the supplementary material.
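For reference, in the bi-objective (minimization) case HV reduces to summing the rectangles dominated by the nondominated points with respect to a reference point, as in the sketch below; three or more objectives require dedicated algorithms (e.g., WFG), so this is for exposition only.
\begin{verbatim}
def hypervolume_2d(front, ref):
    # Keep the nondominated points (minimization), sorted by f1.
    nd, best_f2 = [], float("inf")
    for f1, f2 in sorted(set(map(tuple, front))):
        if f2 < best_f2:
            nd.append((f1, f2))
            best_f2 = f2
    # Sum the rectangles between consecutive points and the reference.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:                # assumes every point dominates ref
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

print(hypervolume_2d([(1, 2), (2, 1)], ref=(3, 3)))  # -> 3.0
\end{verbatim}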
\input{tikz/exp/no-theory-rq1}
As can be seen clearly from Table~\ref{tb:mo-rq1}, over 30 repeated runs, NSGA-II is slightly better in the three-objective case while IBEA is better in the five-objective case, where the volume differences are relatively large. This is mainly due to the fact that the Pareto-dominance-guided search in NSGA-II-like algorithms does not scale well beyond three objectives, which has been theoretically analyzed in a large body of existing work, e.g.,~\cite{Li2015}. However, if the experimental comparison had not been guided by such a theoretical understanding, it is likely that only the three-objective case would be compared. This could result in a misleading conclusion that \textit{``the experiments show that NSGA-II is better than IBEA on the SAS problem considered and thus it is chosen to drive the subsequent study''} without taking the number of objectives into account, hence wrongly implying that the same applies to any number of objectives under the SAS problem. In this case, the addition of theoretical justification could easily motivate the need to validate cases beyond three objectives, or to explicitly state that the conclusion may not be applicable to other cases with a different number of objectives. This is the possible issue we observed in some studies, such as~\cite{DBLP:conf/icsa/CalinescuCGKP17,DBLP:journals/ase/GerasimouCT18,DBLP:conf/saso/FredericksGK019}, where NSGA-II has been used with experimental justification only, and the conclusion implies that the approach (based on NSGA-II) works equally well on SASs with more than three objectives. Another likely issue of $L_3$ is that, even when more than three objectives are considered, the lack of theoretical justification can lead to a comparison between similar and equally unfitted algorithms, such as FastPGA and NSGA-II compared in~\cite{DBLP:journals/taas/Garcia-GalanPTC16}. Since both suffer from the issue of Pareto dominance when the number of objectives is greater than three, one may draw the conclusion that NSGA-II works better and thus should be used for SAS in such a case, which is even more misleading.
All the above has shown that a lack of justification for choosing the search algorithm can have serious consequences for the field.
In fact, to justify the choice of the algorithm in SBSE for SASs, theoretical justification can often easily guide the design of experimental justification, but the opposite is difficult unless extensive empirical studies have been conducted. From the above, what we have shown is that choosing an algorithm based only on theoretical understanding may still cause issues, as some factors could be overlooked. This is, nevertheless, less serious than choosing one based on experimental comparisons without theoretical justification, in which case misleading conclusions can easily be drawn. These are the reasons why they are ranked as $L_2$ and $L_3$, respectively. Clearly, $L_4$ is the worst case that should be avoided, and its consequences could be dreadful. For example, a very recent work in SBSE for SASs~\cite{DBLP:journals/corr/abs-2004-11793}, which is ranked as $L_4$, has wrongly adapted NSGA-II to optimize a single-objective problem for SAS.
We would like to stress that our goal here is not to show an algorithm can be better than another in general, but to demonstrate the likely issues when choosing a search algorithm without proper justification in SBSE for SASs.
\subsubsection{Suggestion and Opportunity}
Our suggestion is, therefore, simple:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Suggestion 1:} If permitted, theoretically justify the algorithm choice supported by experimental comparison ($L_1$); at the very least, theoretical justification is a must ($L_2$). In all cases, avoid omitting justification or making the choice solely by analogy, such as ``this algorithm is widely used'' ($L_4$).
\end{tcolorbox}
The intrinsic reason behind the disappointment from Section~\ref{sec:rq1-di} is the lack of guidelines on choosing search algorithms for SASs. This raises a promising research opportunity:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Opportunity 1:} Generic guidance on justifiably choosing search algorithms according to the requirements of the particular SAS problem studied.
\end{tcolorbox}
Indeed, every search algorithm has its own merits, which make it well-suited to a particular class of SAS problems. For example, HC starts from a random initial point in the search space and iteratively finds the best neighbor of the current solution, which can fit well with the planning problem for most service-based SASs, as it is straightforward to design the `neighbor' based on the service providers. This feature helps it converge fast if the search goes in the ``right'' direction, but may also cause it to get trapped in local optima easily. In contrast, a population-based search algorithm with diversity preservation, such as GA, can help jump out of local optima. It can be desirable for SASs with a large search space and complex types of variation points, e.g., planning for \textsc{RUBiS} and \textsc{SQLite}. However, such relatively random exploration can cause slow convergence when there is a strict requirement on planning time.
The same applies to the Pareto search case. For example, algorithms that compare solutions by Pareto dominance and density, such as NSGA-II~\cite{DBLP:journals/tec/DebAPM02} and SPEA2~\cite{Zitzler2002}, typically do not work well on many-objective problems where the number of objectives is larger than three~\cite{Purshouse2007}, which is not uncommon for SASs~\cite{DBLP:conf/icse/Garcia-GalanPTC14,DBLP:journals/taas/Garcia-GalanPTC16}. The decomposition-based algorithms (e.g., MOEA/D~\cite{Zhang2007} and its variants MOEA/D-STM~\cite{DBLP:journals/tec/LiZKLW14} and NSGA-III~\cite{Deb2014}) scale up well in terms of objective dimensionality, but may struggle on problems with an irregular Pareto front shape (e.g., degenerate, disconnected, or highly nonlinear)~\cite{Ishibuchi2017} --- a typical case between the throughput and cost objectives of SASs~\cite{DBLP:conf/gecco/0001LY18,DBLP:journals/infsof/ChenLY19}. The indicator-based algorithms (e.g., IBEA~\cite{Zitzler2004}) are often insensitive to the Pareto front shape, but may suffer from dominance resistance solutions (i.e., solutions with an extremely poor value on at least one of the objectives and (near-)optimal values on the others~\cite{Ikeda2001}), their solutions thereby concentrating on the boundaries of the Pareto front~\cite{Li2018}. This may produce undesired effects when penalty terms exist in the requirements of the SAS.
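A minimal sketch of the dominance notions used above (minimization assumed; the numeric points are hypothetical):
\begin{verbatim}
def dominates(a, b):
    # a Pareto-dominates b: no worse on every objective, better on one.
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def nondominated(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# A dominance resistance solution: extremely poor on one objective,
# (near-)optimal on another, so it survives dominance comparison.
print(nondominated([(0.2, 0.3), (0.5, 0.5), (0.0, 99.0)]))
# -> [(0.2, 0.3), (0.0, 99.0)]
\end{verbatim}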
Linking the characteristics of search algorithms to the requirements of SAS problems is the unique challenge of this research opportunity. Such a linkage lies at the heart of the guidance needed to enable well-justified choices of search algorithms, supported by both theoretical and experimental justifications. To achieve that, several research questions need to be addressed.
\begin{itemize}
\item What kinds/forms of requirements from the SAS problem can be important to the search process, such as the planning time, the diversity of solutions, and the self-adaptation purpose.
\item What characteristics of a search algorithm can better fit with such requirements, which can greatly inform the algorithm choice.
\item Or, further, to determine in what context a specialized search algorithm is a must.
\end{itemize}
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/mo-vs-so}
\caption{Popularity evolution of the formulation of search in SBSE for SASs (seven studies use more than one type of formulation).}
\label{fig:mo-rel}
\end{figure}
\subsection{RQ2: Objective Formulation in the Search for SASs}
\label{sec:rq2}
\subsubsection{Significance}
When engineering SBSE for SASs, another fundamental aspect is to determine the objective to be searched. As a result, understanding what, how, and why objectives are defined and handled during the search is crucial in SBSE for SASs, especially in the presence of multiple conflicting objectives.
\subsubsection{Findings}
\label{sec:rq2-findings}
As illustrated in Figure~\ref{fig:mo-rel}, our review reveals that, over the years, only a relatively small proportion of the studies consider the single-objective case. For the majority that take multiple objectives into account, weighted search, which combines all objectives via a certain form of weighting strategy that effectively turns the problem into a single-objective one\footnote{We found weighted sum and weighted product.} (a.k.a. utility-driven search), is the most predominant way to formulate the objectives in the search for SAS, such as~\cite{DBLP:conf/icac/RamirezKCM09,DBLP:conf/IEEEscc/MiWYZSY10,DBLP:conf/icse/FredericksDC14,DBLP:conf/saso/PodolskiyMKGP19,DBLP:journals/tse/WangHYY18}. Pareto search is ranked second, e.g.,~\cite{DBLP:journals/tosem/ChenLBY18,DBLP:journals/jss/CalinescuCGKP18,DBLP:journals/tsc/ChenB17,DBLP:journals/taas/Garcia-GalanPTC16}, while hierarchical search, i.e., explicitly searching one objective before another, which is another form of objective aggregation, forms the minority~\cite{DBLP:journals/tse/RosaRLHS13,DBLP:conf/re/PengCYZ10,DBLP:conf/sigsoft/FilieriHM15}. There is indeed a tendency for the number of studies considering Pareto search to increase gradually since 2014. It nevertheless remains much less common than its weighted counterpart, especially in 2019. We also found seven studies, e.g.,~\cite{DBLP:conf/wosp/0001BWY18,DBLP:conf/icse/KinneerCWGG18,DBLP:journals/ase/GerasimouCT18}, that exploit multiple types of objective formulation in the search, mostly because they are used on different problems/aspects or contexts of the SAS.
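To make the two formulations concrete: in the minimization case, the weighted-sum variant (the weighted-product variant multiplies $f_i(x)^{w_i}$ instead) collapses the $k$ objectives $f_1,\dots,f_k$ into a single scalar,
\[
\min_{x \in \mathcal{X}} \; F(x) = \sum_{i=1}^{k} w_i f_i(x), \qquad w_i \geq 0, \quad \sum_{i=1}^{k} w_i = 1,
\]
where equal importance corresponds to $w_i = 1/k$, whereas Pareto search keeps the objectives separate and returns a set of nondominated trade-off solutions.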
Since considering multiple objectives is more pragmatic, and weighted search and Pareto search are the two most popular (yet alternative) ways of formulating SBSE for SASs, in Figure~\ref{fig:alg-s-m-evo} we summarize the reasons behind the choice of these two formulations. Clearly, for the weighted case, the majority of the choices give no clear reasons, e.g.,~\cite{DBLP:conf/sigsoft/MaggioPFH17}. For those that do, the most common reason is that \textit{``the weights can flexibly allow one to specify the relative importance between objectives''}~\cite{DBLP:conf/sigsoft/MaggioPFH17}. Pareto search, in contrast, is often chosen with clear reasons. For example, Gerasimou et al.~\cite{DBLP:journals/ase/GerasimouCT18} explain that Pareto search is chosen because it reveals richer information about the trade-offs between multiple QoS requirements, leading to better-informed decision making for SASs. Table~\ref{tb:mo-so} shows the treatments and assumptions applied to these two formulations of search under multi-objectivity. For weighted search, the weights are often left to the engineers to provide a priori (28 studies), such as~\cite{DBLP:journals/fgcs/BarakatML18}; or equal weights are assumed by default to reflect equal importance (16 studies), which is believed to achieve a balanced outcome~\cite{DBLP:conf/sigsoft/EsfahaniKM11}, such as~\cite{DBLP:conf/IEEEscc/MiWYZSY10}. For Pareto search, the final trade-off solution is most commonly left to the engineers (e.g.,~\cite{DBLP:conf/kbse/GerasimouTC15,DBLP:conf/saso/FredericksGK019}), while three studies~\cite{DBLP:journals/tsc/ChenB17,DBLP:conf/wosp/0001BWY18,DBLP:journals/tosem/ChenLBY18} automatically select the knee solution --- the solution that achieves a balanced result without explicit weights. Interestingly, some studies, such as Gal{\'{a}}n et al.~\cite{DBLP:journals/taas/Garcia-GalanPTC16}, apply an additional weighted function to select the final solution from the set produced by Pareto search, but the reason for this has not been discussed.
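One common heuristic for automatic knee selection is to pick the point closest to the ideal point after normalization, as in the minimal sketch below; this is only one of several knee definitions in the literature, and the numeric front is hypothetical.
\begin{verbatim}
import math

def knee_solution(front):
    # Normalize each (minimized) objective to [0, 1] over the front,
    # then return the point closest to the ideal point (0, ..., 0).
    k = len(front[0])
    lo = [min(p[i] for p in front) for i in range(k)]
    hi = [max(p[i] for p in front) for i in range(k)]
    def dist_to_ideal(p):
        norm = [(v - l) / (h - l + 1e-12) for v, l, h in zip(p, lo, hi)]
        return math.dist(norm, [0.0] * k)
    return min(front, key=dist_to_ideal)

print(knee_solution([(0.0, 1.0), (0.4, 0.5), (1.0, 0.0)]))  # -> (0.4, 0.5)
\end{verbatim}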
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.525\columnwidth}
\includestandalone[width=\columnwidth]{tikz/weight-evolution}
\subcaption{Weighted search}
\label{fig:weight-evo}
\end{subfigure}
\hspace{-0.2cm}
\begin{subfigure}[t]{0.475\columnwidth}
\includestandalone[width=\columnwidth]{tikz/pareto-evolution}
\subcaption{Pareto search}
\label{fig:pareto-evo}
\end{subfigure}
\caption{Evolution of whether reasons have been provided when using weighted search and Pareto search for SAS.}
\label{fig:alg-s-m-evo}
\end{figure}
As for the actual search objectives, Table~\ref{tb:objectives} summarizes the most common ones. Overall, latency and cost are the most overwhelmingly targeted objectives individually, whilst in terms of objective combinations on SASs, latency together with cost is also the most predominantly targeted case. It is worth noting that some combinations are clearly conflicting, such as latency and cost, or latency and power. Others tend to be more harmonic, such as latency and throughput. Constraints are also sometimes considered, among which the most common ones are thresholds (e.g., a threshold on latency requirements)~\cite{DBLP:journals/jss/BashariBD18} and dependencies between variation points~\cite{DBLP:conf/icse/PascualPF13,DBLP:journals/tosem/ChenLBY18}. For example, \texttt{cache\_mode} cannot be changed until the \texttt{cache} option has been enabled. We would like to stress that a constraint may also be treated as an objective depending on the search algorithms used; for example, Chen et al.~\cite{DBLP:journals/tosem/ChenLBY18} consider dependency as a constraint, but Gal{\'{a}}n et al.~\cite{DBLP:journals/taas/Garcia-GalanPTC16} treat it as an objective. In summary, our findings for \textbf{RQ2} include:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Findings 4:} The multi-objective case is much more common than the single-objective assumption in SBSE for SASs, within which weighted search is predominantly used. \\
\textbf{Findings 5:} The choice of weighted search is often given no reasons, while that of Pareto search is usually discussed in detail.\\
\textbf{Findings 6:} A vast set of objectives and their combinations have been targeted; a constraint in one study may also be used as an objective in other studies.
\end{tcolorbox}
\input{tables/mo-so}
\input{tables/objective}
\subsubsection{Disappointments}
\label{sec:rq2-di}
When dealing with multiple objectives in SASs, it is disappointing to find that weighted search is much more commonly used than its Pareto counterpart in SBSE for SASs, even though the latter is regarded as the better option for offering an understanding of the search problem in the SBSE community~\cite{DBLP:journals/csur/HarmanMZ12}. We also found that such a trend is prevalent across the years with little change, as shown in Figure~\ref{fig:mo-rel}.
In essence, the aggregation of objectives implies that certain preferences between the objectives are available and can be precisely quantified using weights. We showed that the majority of the studies have either assumed the weights can be provided by the engineers or used equal weights by default. However, as widely recognized in the literature~\cite{DBLP:journals/csur/HarmanMZ12,DBLP:journals/tmc/PaolaFGRD17,DBLP:journals/tosem/ChenLBY18}, it is not uncommon that a clear and precise quantification of the weights is very difficult, if not impossible, especially given the complexity of SASs. This is what should have been justified when assuming weighted search for SASs, which unfortunately we have failed to see in most studies.
Of course, if the search space is small and the evaluation is cheap on a given SAS problem, then it does not really matter which formulation to use, as all the solutions can be identified and searched easily. However, this needs to be discussed explicitly to justify that the choice of objective formulation in the search has no impact. Further evidence for this disappointment is that the majority of the studies that adopt weighted search offer no discussion of the reasons behind it --- only 7 out of 43 studies have explained their reasons, such as that the weights can be explicitly given because of the special characteristics of the SAS problem/subject domain considered~\cite{DBLP:conf/icac/RamirezKCM09}. This is in contrast to the 12 cases (out of 18) supported with reasons when formulating the search in a Pareto manner. In fact, a considerable number of studies~\cite{DBLP:journals/tmc/PaolaFGRD17,DBLP:journals/tosem/ChenLBY18,DBLP:conf/gecco/0001LY18} have provided reasons for using Pareto search by explicitly comparing it with weighted search, but we found no similar case when weighted search is chosen. The wide adoption of weighted search without justification, especially given its clear limitations, can threaten the validity and applicability of the work in SBSE for SASs. Our disappointment is, therefore:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Disappointment 2:} Unjustified and limited formulation of the multi-objective search for SASs.
\end{tcolorbox}
\subsubsection{Justification of the Likely Issues}
A clear advantage of using Pareto search on multiple SAS objectives is the fact that it does not require weight specification. In addition, its search setting of approximating the entire Pareto front may make it possible to discover irregular search regions that would otherwise be difficult to find with weighted search.
To justify why it could be important to consider Pareto search, as opposed to the current trend of SBSE for SASs where weighted search predominates, we experimentally compare how NSGA-II and GA perform when optimizing SASs, as representatives of Pareto search and weighted search under equal weights (with normalization), respectively. We chose these two because of their algorithmic similarity, which lets us focus on the formulation of the search they rely upon. Both use the same parameter settings, e.g., population size and number of evaluations, details of which can be found in the supplementary material. We run them on three SASs, \textsc{LLVM}, \textsc{Trimesh} and a service-based system (\textsc{SBS}), under the scenario of design-time profiling. Three subject SASs from distinct domains achieve better coverage than 87\% of the current studies (as will be shown in Section~\ref{sec:rq5}) and have been used in prior work~\cite{DBLP:journals/corr/abs-1801-02175,DBLP:journals/tosem/ChenLBY18}. Details of the SASs can also be found in the supplementary material. In particular, these SASs are chosen because they involve two objectives to be tuned and their objective spaces are diverse, with different shapes and densities of the trade-off surface. A total of 100 runs have been conducted.
The resulting objective space of one representative example, which is common across all the runs, is shown in Figure~\ref{fig:w-vs-p}. From this, we can make the following observations:
\begin{enumerate}
\item All three subject SASs reveal that weighted search, albeit driven by a population-based algorithm like GA, converges to a single point; Pareto search, by contrast, approximates the whole Pareto front.
\item The \textsc{Trimesh} and \textsc{SBS} cases reveal that the sole solution produced by weighted search can be dominated by solutions found by Pareto search. This explains why certain studies~\cite{DBLP:conf/icse/Garcia-GalanPTC14,DBLP:journals/taas/Garcia-GalanPTC16} apply both searches to the same SAS problem and context: the weighted search may be over-constrained by the given weights, hence struggling to find some good solutions in the first place.
\item \textsc{SBS} implies that, albeit being assumed in 16 studies, equal weights may not lead to a balanced outcome, depending on the shape of the Pareto front. In fact, they could largely bias the result towards a certain objective, which contradicts the most common reason for using equally weighted search for SASs under weights/utility theory~\cite{DBLP:conf/sigsoft/EsfahaniKM11} (a minimal numeric sketch after this list illustrates the effect).
\end{enumerate}
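To make the last observation concrete, the following minimal sketch (in Python, with hypothetical normalized objective values that merely mimic a concave trade-off front; the numbers are illustrative, not taken from our experiments) shows how an equally weighted sum can select an extreme rather than the balanced solution:
\begin{verbatim}
import numpy as np

# Hypothetical normalized bi-objective values (both minimized),
# lying on a concave trade-off front.
front = np.array([[0.00, 1.00],
                  [0.31, 0.95],
                  [0.71, 0.71],   # the balanced solution
                  [0.95, 0.31],
                  [1.00, 0.00]])

# Equally weighted aggregation [0.5, 0.5].
scores = front @ np.array([0.5, 0.5])
print(front[scores.argmin()])  # -> [0. 1.]: an extreme point
\end{verbatim}
Here the balanced solution in fact has the worst aggregated score of all, illustrating how equal weights can fully bias the outcome towards one objective on a concave front.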
The above has, therefore, justified that the overwhelming adoption of weighted search for SASs, especially when there is a lack of justification, is problematic for the field.
Indeed, one challenge of Pareto multi-objective search is how to select a single solution from the produced non-dominated trade-off set. When no preference information is available for a design-time problem, it is ideal to provide the engineers with all non-dominated solutions found, to keep them informed in the decision-making process. When certain preferences exist (or at runtime), the selection of a single solution can be tailored to them, or to assumptions about the preferences in the case of a runtime problem. Yet, unlike the case of weighted search, such preferences do not require explicit quantification. As we have shown in Section~\ref{sec:rq2-findings}, the selection can be automated by picking a solution from certain regions, e.g., the knee point.
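As a minimal sketch of such automated selection (one common heuristic among several; the values below are illustrative), the knee of a bi-objective non-dominated set can be approximated as the solution farthest from the line joining the two extreme solutions:
\begin{verbatim}
import numpy as np

def knee_point(front):
    # Index of the knee of a bi-objective non-dominated set: the
    # solution farthest from the line joining the two extremes.
    f = np.asarray(front, dtype=float)
    f = (f - f.min(axis=0)) / (np.ptp(f, axis=0) + 1e-12)  # normalize
    a, b = f[f[:, 0].argmin()], f[f[:, 0].argmax()]        # extremes
    d = b - a
    # Perpendicular distance of every point to the extreme line
    # (up to a constant factor, which does not affect the argmax).
    dist = np.abs(d[0] * (f[:, 1] - a[1]) - d[1] * (f[:, 0] - a[0]))
    return int(dist.argmax())

front = [[0.0, 1.0], [0.2, 0.45], [0.5, 0.3], [1.0, 0.0]]
print(knee_point(front))  # -> 1, the solution bulging towards the ideal
\end{verbatim}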
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{tikz/exp/w-vs-p-rq2.pdf}
\caption{Examples of the common results between Pareto (NSGA-II denoted as \copy\aMark) and equally weighted search [0.5,0.5] with normalization (GA denoted as \copy\bMark) for three SASs under a workload, job, or service quality/availability change (\# iterations and throughput are to be maximized while others are to be minimized).}
\label{fig:w-vs-p}
\end{figure}
\subsubsection{Suggestion and Opportunity}
According to the above, our suggestion to overcome the disappointment from Section~\ref{sec:rq2-di} in the presence of multiple SAS objectives is apparent:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Suggestion 2:} When dealing with multiple objectives on SASs, always consider Pareto search as a possible alternative, regardless of whether weights can be explicitly set.
\end{tcolorbox}
A particular opportunity that is currently under-explored in SBSE for SASs is:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Opportunity 2:} Pareto many-objective search for SASs.
\end{tcolorbox}
Conventionally, Pareto many-objective search targets the case where the number of objectives is greater than three. Like the classic Pareto multi-objective search, such a paradigm is free from tedious weight specification, but it additionally aims to explicitly overcome the limitations of Pareto-dominance guided search in high dimensions. This fits well with the requirements of SASs, for which the current treatment relies on weighted search.
Unlike some other SBSE problems, a unique property of a SAS problem with different objectives is that the relations between these objectives may not necessarily be conflicting, or may only be partially conflicting depending on the environment. For example, as we have shown, widely used pairs of objectives, such as latency and throughput, tend to be more harmonic. When the number of objectives is small, this may not be an issue, as the dimensionality does not challenge the selection pressure too much. For Pareto many-objective search, however, such a unique property of SAS problems could be better exploited and specialized in the algorithm.
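A rough but cheap way to expose such relations before (or during) the search is to correlate objective values over sampled configurations; the sketch below (in Python, with made-up measurements) is a heuristic only, since the relation may well change with the environment:
\begin{verbatim}
import numpy as np

# Hypothetical measurements over sampled configurations; columns are
# objectives, all transformed so that lower is better (e.g., latency,
# 1/throughput, cost). The values are illustrative only.
samples = np.array([[10.0, 11.0, 3.0],
                    [20.0, 19.0, 2.0],
                    [30.0, 32.0, 1.0],
                    [40.0, 41.0, 0.5]])

# Strongly positive correlation suggests harmonic objectives;
# strongly negative suggests conflict.
print(np.corrcoef(samples, rowvar=False).round(2))
\end{verbatim}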
There are already readily available Pareto many-objective search algorithms~\cite{DBLP:conf/cec/IshibuchiTN08}, but it is not yet clear how they can be specialized for SASs to better meet the requirements of a SAS problem. To this end, the key challenges of this research opportunity are, therefore:
\begin{itemize}
\item Which SAS problems, MAPE-K phases, and self-adaptation purposes fit well with the pros and/or cons of Pareto many-objective search.
\item How to consider SAS problem-specific objective relations (conflicting or harmonic) in the search and solution selection, especially when there is a high-dimensional objective space.
\item How to mitigate the rapidly increasing search cost (i.e., space and time) to fit the timeliness requirements of certain SAS problems at runtime.
\end{itemize}
\subsection{RQ3: Evaluating the Pareto Search for SASs}
\label{sec:rq3}
\subsubsection{Significance}
When optimizing a SAS that involves only a single or aggregated objective, the quality of the SBSE approach can simply be evaluated using that objective value or the given weight vector. However, when Pareto search is involved, selecting appropriate quality indicator(s), which assess the quality of the solution set produced, becomes a critical yet challenging task, since different solutions may be incomparable on the basis of Pareto dominance. It is therefore of great significance to understand what types of methods/indicators are currently used to serve this purpose and the reasons behind their choice.
Recall from Table~\ref{tb:objectives} that it is often the case that the objectives optimized by a search algorithm are directly related to the ultimate quality concerns of the SAS that the overall approach seeks to improve. Therefore, the evaluation of the quality concerns of the SAS is equivalent to the evaluation of the search algorithm, which is an integral part of the SBSE approach. This is particularly true for the studies that adopt Pareto search, wherein the exact quality concerns of the SAS are searched/optimized directly by the algorithm.
\subsubsection{Findings}
For the 18 studies that consider Pareto search for SASs, Figure~\ref{fig:qi-count} depicts the types of quality evaluation methods used to assess solution sets and their popularity over the years. As can be seen, examining each objective directly is the most common approach (e.g., reporting the mean or plotting the results), followed by generic quality indicators that were designed specifically to evaluate solution sets, such as HV~\cite{Zitzler1998}, GD~\cite{Veldhuizen1998} and IGD~\cite{Coello2004}. Two studies~\cite{DBLP:conf/icse/Garcia-GalanPTC14,DBLP:journals/taas/Garcia-GalanPTC16} leverage a given weight vector to evaluate the solutions produced by the Pareto search. Such a trend remains unchanged as the field evolves. We note that 11 out of the 18 studies, e.g.,~\cite{DBLP:journals/tosem/ChenLBY18,DBLP:journals/jss/CalinescuCGKP18,DBLP:journals/infsof/ChenLY19}, have used different types of methods to assess Pareto SBSE for SASs, due primarily to their complementary nature. For example, plotting all the objective values can be a good addition to the results from generic quality indicators~\cite{DBLP:journals/csur/LiY19}.
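To make the most-used indicator concrete, the sketch below computes the exact hypervolume for the bi-objective minimization case (generic libraries provide implementations for higher dimensions; the points and the reference point are illustrative):
\begin{verbatim}
import numpy as np

def hv_2d(front, ref):
    # Hypervolume of a bi-objective minimization front w.r.t. a
    # reference point: area of the union of dominated rectangles.
    f = np.unique(np.asarray(front, dtype=float), axis=0)  # sorts by f1
    hv, prev_y = 0.0, ref[1]
    for x, y in f:
        if x >= ref[0] or y >= prev_y:  # dominated or out of bounds
            continue
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

print(hv_2d([[1, 4], [2, 2], [4, 1]], ref=(5, 5)))  # -> 11.0
\end{verbatim}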
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.49\columnwidth}
\includestandalone[width=\columnwidth]{tikz/qi-count-new1}
\subcaption{All types}
\label{fig:qi-count}
\end{subfigure}
\hspace{-0.1cm}
\begin{subfigure}[t]{0.5\columnwidth}
\includestandalone[width=\columnwidth]{tikz/qi-count-new2}
\subcaption{Generic QI and justification}
\label{fig:qi-all}
\end{subfigure}
\caption{Evolution of evaluation methods to assess Pareto SBSE for SAS (All studies use individual objective values; 11 use more than one type).}
\end{figure}
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/qi-count-evolution}
\caption{The generic quality indicators (including cases when none of them are used) and their levels of justification in SBSE for SASs over years (Most studies use more than one indicator).}
\label{fig:qi-top}
\end{figure}
Unlike the other types, there are hundreds of generic quality indicators, and thus their selection is itself a challenge~\cite{DBLP:journals/csur/LiY19}. Figure~\ref{fig:qi-all} shows the levels of justification when choosing each generic quality indicator in SBSE for SASs, in which HV tends to be the most popular indicator. We note that over the years, $L_3$ (e.g.,~\cite{DBLP:journals/jss/PascualLPFE15}) or $L_2$ (e.g.,~\cite{DBLP:journals/ase/GerasimouCT18}) justifications are much more common than $L_1$ cases~\cite{DBLP:conf/icsa/CalinescuCGKP17,DBLP:journals/infsof/ChenLY19}. Indeed, we have found that most of the studies use more than one indicator, but not all of them justify this. The most common reason given is that certain indicators only cover part of the quality aspects~\cite{DBLP:journals/csur/LiY19}, or that there is a specific requirement according to the preferences of the SAS problem~\cite{DBLP:conf/icsa/CalinescuCGKP17}.
To ensure that such a trend is not biased by a particular indicator, Figure~\ref{fig:qi-top} plots all the generic quality indicators used and their levels of justification, together with the reasons for the cases where no indicator is used at all. We note that while HV, IGD and the $\epsilon$-indicator are much more popular than the others, the overall trend of justification levels remains the same as in Figure~\ref{fig:qi-all}. In particular, seven studies (out of 18), such as~\cite{DBLP:conf/icse/KinneerCWGG18,DBLP:journals/taas/Garcia-GalanPTC16}, have not used any generic quality indicator, and no justification has been provided for this. In summary, our findings on evaluating Pareto search for SASs under \textbf{RQ3} are:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Findings 7:} Directly assessing each objective remains the most common evaluation method in Pareto search for SASs. \\
\textbf{Findings 8:} A considerable number of studies have used none of the generic quality indicators, yet no justification is provided in these cases. \\
\textbf{Findings 9:} The choice of indicators often lies at justification level $L_3$ or $L_2$.
\end{tcolorbox}
\subsubsection{Disappointments}
\label{sec:rq3-di}
Our disappointment lies in the fact that generic quality indicators, despite being rather successful in SBSE, remain far from standard practice in SBSE for SASs~\cite{DBLP:conf/icse/Li0Y18}. Indeed, plotting the objectives may seem a simple way to perform the assessment. Yet this only works well for the bi-objective case; when the number of objectives reaches four or more (which is not uncommon for SASs), it is difficult to clearly illustrate the solution sets by scatter plot. More importantly, visual comparison cannot provide a quantitatively comparable result between solution sets. Reporting the mean values of each objective may also seem straightforward, but it reflects neither the trade-offs nor the overall quality of the solution set, leaving many aspects uncovered. Therefore, in conjunction with the above, generic quality indicators are promising for overcoming these limitations~\cite{DBLP:conf/icse/Li0Y18,DBLP:journals/csur/LiY19}.
Since the number of possible generic quality indicators is enormously high, a perhaps even more disappointing point is that the justification of the choice has been insufficient, i.e., only a small proportion reaches $L_1$. To give a good example of $L_1$, Calinescu et al.~\cite{DBLP:conf/icsa/CalinescuCGKP17} adopted IGD and the $\epsilon$-indicator because they need to assess three quality aspects of a solution set (i.e., convergence, spread, and uniformity), which are all important for the SAS problem studied. In addition, there is a specific preference for robustness in the problem with respect to the quality aspects, and therefore these indicators are further tailored to fit this need --- a typical example where the choice of indicators is driven by the quality aspects covered and their relations to the preferences in the SAS problem.
In fact, we found that most of the time the choices are solely driven by the analogy that other work has used the same indicators, which is a typical case of $L_3$. For example, Fredericks et al.~\cite{DBLP:conf/saso/FredericksGK019} used HV as the sole indicator simply because it is well-known in the field. This is of concern, as weak justification (or none at all) may result in misleading conclusions, which we will discuss in Section~\ref{sec:rq3-justify}. Our findings have also confirmed that such a trend is neither due to bias towards a particular indicator nor changing over the years. The negligence of generic quality indicators, together with the limited justification of the choice, are severe threats to the conclusion validity. Overall, our disappointment can be summarized as:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Disappointment 3:} Questionable choice of evaluation methods in Pareto search for SASs.
\end{tcolorbox}
\subsubsection{Justification of the Likely Issues}
\label{sec:rq3-justify}
The likely issue behind our disappointment here is that each indicator has, by design, its own assumption of preferences and quality aspects covered, as shown in Table~\ref{tb:taxonomy}. This is precisely why the choice cannot be made arbitrarily, as an indicator that fitted well in other situations may not be suitable for the SAS problem studied. For example, HV measures all four quality aspects, which implies that a study should use HV only when all four quality aspects are of interest, such as planning for SASs that seek to tune latency, reliability, and throughput under no preferences~\cite{DBLP:journals/tosem/ChenLBY18}. GS, which measures solely the diversity, could be an ideal indicator for SAS testing~\cite{DBLP:conf/icse/FredericksDC14} with two quality attributes of interest, such as latency and power, as the aim therein is to verify the behaviors of the SAS by using a diverse set of test cases that covers different trade-off points between latency and power. As a result, when some of the chosen quality indicators do not agree with each other, it could simply be because they assess different quality aspects of the solution set. This is the key reason why justification level $L_3$ is problematic.
\input{tables/common-qi}
$L_2$ can still be insufficient because even two indicators designed for assessing the same quality aspect of a solution set could each favor cases with very different preferences~\cite{DBLP:conf/icse/Li0Y18,DBLP:journals/csur/LiY19}, such as a region of interest, the priority of objectives, or even some vague constraints. For example, both HV and IGD are used to provide a comprehensive evaluation of a solution set in terms of convergence, spread, uniformity, and cardinality, but HV clearly prefers knee points of the Pareto front while IGD prefers uniformly-distributed solutions~\cite{DBLP:journals/csur/LiY19}. Therefore, a careful and justifiable selection and use of quality indicators to evaluate/compare solution sets has to be made in relation to the preferences of the SAS problems~\cite{DBLP:journals/csur/LiY19,DBLP:conf/icse/Li0Y18}.
To justify the likely issues raised by an $L_2$-level indicator choice, we show an example SAS in Figure~\ref{fig:qi-example}, using the case from several studies~\cite{DBLP:conf/kbse/GerasimouTC15,DBLP:journals/tse/WangHYY18,DBLP:journals/tsc/ChenB17}, where the aim is to optimize both the reliability and latency of the SAS. We use one subject SAS only, as our goal here is to prove the existence of the likely issues when the justification of indicators does not align with the preferences in the SAS problem. Now, suppose that there are two solution sets \texttt{A} and \texttt{B} in Figure~\ref{fig:qi-example} returned by two search algorithms, and that 100\% reliability is of more interest to the engineer. To compare these two sets, a typical practice is to consider one or several commonly-used quality indicators, e.g., the common ones in SBSE for SASs from Table~\ref{tb:taxonomy}. Since there is a strong preference towards 100\% reliability, the solution ($\beta$) of \texttt{B}, which reaches 100\% and has a lower cost than the corresponding one in \texttt{A}, should be the most ideal solution. However, the results from the quality indicators are quite the opposite --- all nine indicators evaluate \texttt{A} as better than \texttt{B}, because these quality indicators work on the assumption that the two objectives are incomparable and there is no preference in the problem, i.e., ignoring that the reliability needs to reach 100\% in this particular case. This is a typical example that indicates the risk of directly using existing quality indicators without considering the preferences of the problem~\cite{DBLP:conf/icse/Li0Y18} --- a major threat to the $L_2$ level of justification.
\begin{figure}[t!]
\centering
\includestandalone[width=0.5\columnwidth]{tikz/sas-example}
\caption{An example in SBSE for SASs where the generic quality indicators can be misleading. When searching for the minimal latency and the best reliability of SAS adaptation upon a workload change~\cite{DBLP:conf/kbse/GerasimouTC15,DBLP:journals/tse/WangHYY18,DBLP:journals/tsc/ChenB17}, two Pareto search algorithms produce two non-dominated solution sets, $A$ and $B$, respectively. $A$ is evaluated as better than $B$ on all nine commonly used quality indicators in SBSE for SASs (solutions being normalized before the evaluation):
$GD(A)=0.02 < GD(B)=0.26, ED(A)=0.5 < ED(B)=0.89, \epsilon(A)=0.1 < \epsilon(B)=0.3,
GS(A)=0.15 < GS(B)=0.46, CI(A)=0.8 > CI(B) = 0.2,
IGD(A)=0.02 < IGD(B)=0.27,
HV(A)=0.77 > HV(B)=0.43, SP(A)=0.05 < SP(B)=0.10,
\mathcal{C}(A)=0.8 > \mathcal{C}(B)=0.25.$
However, $B$ is, in fact, more preferred (specifically solution $\beta$)
when full reliability is more important than possible low latency.}
\label{fig:qi-example}
\end{figure}
\subsubsection{Suggestion and Opportunity}
The disappointment mentioned in Section~\ref{sec:rq3-di} can be mitigated by a simple suggestion:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Suggestion 3:} Generic quality indicators should be adopted in conjunction with other evaluation methods for Pareto search on SASs. The justification needs to be made based on the quality aspects that the indicator(s) cover and with respect to the preferences of the SAS problem considered (i.e., $L_1$).
\end{tcolorbox}
To this end, a specific research opportunity raised is:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Opportunity 3:} Preference-driven Pareto search for SASs.
\end{tcolorbox}
Unlike weighted search, where preferences need to be defined precisely to aim for a single optimal point in the objective space, the nature of Pareto search permits taking vague and imprecise preferences into account, for example, searching for a particular region of the objective space. This is of high interest for SASs, where the nature of requirement specification is often imprecise~\cite{DBLP:journals/re/WhittleSBCB10}. More importantly, depending on the SAS problem, engineers might only be interested in a handful of solutions that best meet their preferences in the objective space, instead of the entire set of trade-off solutions~\cite{DBLP:conf/icsa/CalinescuCGKP17,DBLP:journals/tosem/ChenLBY18}.
In this regard, preference information can be elicited and integrated as part of a generic quality indicator, or even serve as part of the fitness that drives the search towards the region of interest along a preferred direction.
For example, preference information can be extracted from existing SAS design models or languages, e.g., the Goal Model or RELAX~\cite{DBLP:journals/re/WhittleSBCB10}, which contain formal expressions of preferences such as \textit{the latency shall be low while the cost shall ideally be as low as 5\$}. Next, it is possible to sample a vector of reference points, each dimension of which represents the expectation on the corresponding objective aligned with those preferences~\cite{DebSBC06,LiCMY18}. The resulting reference points could be directly exploited by a search algorithm or integrated into an indicator to assess the solution set thereafter.
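As a minimal sketch of the last step (the reference points and solution sets below are made up for illustration; real preference-based indicators and algorithms are more involved), an IGD-style score against preference-derived reference points could look like:
\begin{verbatim}
import numpy as np

def preference_igd(solutions, ref_points):
    # Mean distance from each preference-derived reference point to
    # its nearest solution (lower is better) -- an IGD-style score.
    s = np.asarray(solutions, dtype=float)
    r = np.asarray(ref_points, dtype=float)
    d = np.linalg.norm(r[:, None, :] - s[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Hypothetical normalized (latency, cost) expectations derived from
# "the latency shall be low while the cost shall ideally be ~5$":
refs = [[0.1, 0.2], [0.2, 0.2], [0.3, 0.2]]
set_a = [[0.9, 0.1], [0.1, 0.9]]   # extreme trade-offs only
set_b = [[0.2, 0.25], [0.4, 0.3]]  # close to the preferred region
print(preference_igd(set_a, refs) > preference_igd(set_b, refs))  # True
\end{verbatim}
The set closer to the preferred region scores better, even though a preference-agnostic indicator might rank the two sets differently.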
To achieve preference-driven Pareto search for SASs, several challenging research questions need to be addressed:
\begin{itemize}
\item How to (automatically) extract the preference information about the SAS problem in an efficient and cost-effective manner.
\item How to structuralize the preferences of the SAS problem in a way that can be well reflected in the fitness of the search algorithm.
\item What parts of the preferences, expressed in some software engineering representations, can be correlated to which quality aspect of the solution set.
\end{itemize}
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/domain-count}
\caption{Popularity evolution of the domain information in SBSE for SASs (All studies use at least problem nature).}
\label{fig:domain-count}
\end{figure}
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/spec-count}
\caption{Popularity evolution of which parts of the search algorithms are specialized in SBSE for SASs (All studies specialize at least the representation and fitness function).}
\label{fig:spec-count}
\end{figure}
\subsection{RQ4: Specializing Search Algorithm for SASs}
\label{sec:rq4}
\subsubsection{Significance}
Most SBSE tasks inevitably require specializing the search algorithms in order to make them better serve the purpose, and SAS problems are no exception. It is, therefore, important to understand what, how, and why domain information of the SAS problems has been considered in such specialization when investigating SBSE for SASs.
\subsubsection{Findings}
As mentioned, according to the categories proposed by Chen et al.~\cite{DBLP:journals/pieee/ChenBY20}, we first summarize the types of domain information used in SBSE for SASs. Figure~\ref{fig:domain-count} shows the results; as anticipated, problem nature is the fundamental domain information required for the specialization in every study. This includes, for example, whether a variation point is categorical or numeric, the threshold constraint (e.g., full or partial satisfaction), and the scale/metric of the objective (e.g., worst-case or mean of the SAS's latency). SE/SAS domain expertise, in contrast, only forms the minority, examples being feature models~\cite{DBLP:journals/jss/PascualLPFE15,DBLP:journals/tosem/ChenLBY18}, adaptation tactics~\cite{DBLP:conf/icac/MorenoCGS16}, historical solutions~\cite{DBLP:conf/icse/KinneerCWGG18,DBLP:journals/infsof/ChenLY19}, and Markov models~\cite{DBLP:conf/icsa/CalinescuCGKP17}. Over the years, there has been a tendency for the gap between the uptake of problem nature and SE/SAS domain expertise to widen.
A more interesting question is perhaps which parts of a search algorithm have been specialized, regardless of the type of domain information used. From Figure~\ref{fig:spec-count}, we see that the representation (e.g., a fixed-length vector or a tree) and the fitness function (e.g., directly based on the SAS/simulator or derived from a well-defined mathematical model) are the essential parts of a search algorithm to be specialized. This is not surprising, as they are always required in order to tailor a search algorithm to a SAS problem~\cite{DBLP:journals/csur/HarmanMZ12}. In contrast, only a small proportion of the studies (11 out of 74) additionally consider other parts in the specialization, namely the operators~\cite{DBLP:journals/tosem/ChenLBY18,DBLP:journals/jss/PascualLPFE15}, candidate solutions~\cite{DBLP:conf/icse/KinneerCWGG18,DBLP:journals/infsof/ChenLY19} and solution selection~\cite{DBLP:conf/icsa/CalinescuCGKP17}, all of which involve SE/SAS domain expertise. For example, Kinneer et al.~\cite{DBLP:conf/icse/KinneerCWGG18} and Chen et al.~\cite{DBLP:journals/infsof/ChenLY19} leverage good solutions from a past problem instance or timestep to ``seed" the candidate solutions for the current search problem; the definition of goodness is entirely dependent on the SAS domain and engineering practices, though. Calinescu et al.~\cite{DBLP:conf/icsa/CalinescuCGKP17} make use of a Markov model of the SAS to define a boundary of robustness, which is then used to determine the survival of solutions during the search.
To provide details on how and why domain expertise is specialized, Table~\ref{tb:domain-reason} specifies all the types of SE/SAS domain expertise used, which parts of a search algorithm they have been specialized into, and the reasons behind them. We can clearly see that the exploitation of domain expertise always comes with justifiable reasons, but the specializations may not go beyond the most fundamental representation and fitness function. Our findings for \textbf{RQ4} are therefore:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Findings 10:} There is an increasing gap between the uptake of problem nature and SE/SAS domain expertise in SBSE for SASs. \\
\textbf{Findings 11:} Regardless of whether problem nature or SE/SAS domain expertise is used, nearly all the studies specialize only the representation and fitness function of a search algorithm.
\end{tcolorbox}
\input{tables/domain-reasons}
\subsubsection{Disappointments}
\label{sec:rq4-di}
Disappointingly, from the findings, we have failed to see how the advances of SBSE for SASs can be distinguished from ``yet another application domain of vanilla search algorithms", as SE/SAS domain expertise is often ignored (Figure~\ref{fig:domain-count}) and the specialization of the search algorithm rarely goes beyond the basic representation and fitness function, whichever type of domain information is used (Figure~\ref{fig:spec-count})\footnote{Note that empirical studies, which may need to purposely compare the application of vanilla search algorithms for SAS, have been excluded.}. In other words, predominantly the problem nature is used for the representation and fitness function of a search algorithm, representing a limited specialization.
Since SBSE is relatively new to SAS research, this result is predictable, but we did not expect such a significant gap and, as we have shown, the trend shows no tendency to change over the years. Unlike the other disappointments discussed in this work, this one may not cause immediate threats but tends to have negative effects in the long term. Indeed, a limited specialization may work without any issue for small and simple SASs, especially in the early days. However, SASs have now evolved to a stage of commonly high complexity, scale, dynamics and uncertainty (as we will show in Table~\ref{tb:sas}) that is hard to reconcile with some of the assumptions made in vanilla search algorithms~\cite{DBLP:journals/jss/PascualLPFE15,DBLP:journals/tosem/ChenLBY18}. Therefore, ignoring the strong domain knowledge of engineers is a non-trivial issue in SBSE for SASs and can be an unwise waste of such valuable knowledge. At the same time, limiting specialization to only the representation and fitness function could be harmful to the success of SBSE for SASs in the long term~\cite{DBLP:journals/pieee/ChenBY20}. For example, searching without knowing the dependency relations in a SAS may fail to find any valid solution at all in the presence of complex dependencies~\cite{DBLP:journals/jss/PascualLPFE15,DBLP:journals/tosem/ChenLBY18}; producing only the non-dominated solutions while ignoring the robust ones is often undesirable in SAS design when there is an irregular trade-off surface~\cite{DBLP:conf/icsa/CalinescuCGKP17}. It is also not uncommon to see similar concerns raised in the software engineering community, e.g., see Menzies's work~\cite{DBLP:journals/software/Menzies20}.
From our findings, we do see a few very good examples (e.g.,~\cite{DBLP:journals/tosem/ChenLBY18,DBLP:conf/icse/CailliauL17,DBLP:journals/infsof/ChenLY19,DBLP:journals/jss/PascualLPFE15}) of better specializing different parts of the search algorithms with SE/SAS domain expertise in SBSE for SASs. This is, in fact, a win-win strategy: on the one hand, the search algorithm can potentially be made more controllable and explainable; on the other hand, the strong domain knowledge can serve as strong guidance to better steer different aspects of the search, achieving results that would otherwise be difficult to obtain. Further, the inherent complexity of SASs can actually provide more opportunities to design a better tailored and specialized search algorithm for the context.
The ideal case would be specializing SE/SAS domain expertise into different parts of the search algorithms; if not, at least the SE/SAS domain expertise should be considered in the specialization, or the problem nature should be exploited in parts other than the representation and fitness function. This is what makes the work in SBSE for SASs unique and tailored to the SAS problems. In this way, we make the search algorithms less general (i.e., typically not applicable to other problems), but they are expected to work better (when done properly) on the given SAS where the knowledge lies. Such an advanced specialization, albeit not essential, is often desirable in the long term. Our disappointment is, therefore:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Disappointment 4:} Limited specialization on search algorithms for SASs without tinkering with their internal designs.
\end{tcolorbox}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{tikz/exp/synergy.pdf}
\caption{Comparing the results between advanced (denoted as \copy\aMark) and limited (denoted as \copy\bMark) specialization on three search algorithms and three SASs under time-varying workload/services over 100 runs (Throughput is to be maximized while others are to be minimized).}
\label{fig:synergy}
\end{figure}
\subsubsection{Justification of the Likely Issues}
To justify the possible issue caused by limited specialization, we specialize Pareto search to optimize SASs that are designed using a feature model. The experiments contain three subject SASs, namely \textsc{RUBiS} with simple functionalities (\textsc{RUBiS(s)}), \textsc{RUBiS} with complex functionalities (\textsc{RUBiS(c)}), and a service-based system (\textsc{SBS}), all of which have been used in prior work~\cite{DBLP:conf/icac/MorenoCGS16,DBLP:conf/icac/GhahremaniG017,DBLP:journals/tosem/ChenLBY18}. The aim is to tune two objectives by adapting different variation points under time-varying workload and service quality/availability at runtime. Again, the reason for using three subjects (two of which are from the same domain) is that, for SASs from the same or different domains, such a number is higher than in 65\% and 73\% of the existing studies, respectively (as will be shown in Section~\ref{sec:rq5}). We specialize three search algorithms, i.e., MOEA/D-STM, NSGA-II, and IBEA, chosen because of their diverse characteristics, each being representative of its own kind. This is important for our justification, as we seek to showcase that the benefit of better-specialized search generalizes across different algorithms. We conduct 100 runs in total, in each of which a knee point is selected for self-adaptation. The settings of the algorithms and the details of the SASs can be found in the supplementary material.
We compare two forms of specialization with identical search budgets: (i) a limited one, where no feature model is used, but the variation points and their types (e.g., numeric or categorical) are directly encoded as the representation. To enable efficient search at runtime, the fitness function is built by regression~\cite{DBLP:journals/tse/ChenB17} (for \textsc{RUBiS(s)} and \textsc{RUBiS(c)}) or a well-defined analytical model~\cite{DBLP:conf/gecco/0001LY18} (for \textsc{SBS}). As a result, this resembles a case where the vanilla search algorithm is used for the SASs. (ii) An advanced one, which additionally parses the feature model, extracts only the critical features as the variation points in the representation, and injects the feature dependencies into the reproduction operators of the search algorithm. In this case, the SE/SAS domain expertise is the feature model, while the specialized parts include the representation, fitness function, and operators. More details can be found in~\cite{DBLP:journals/tosem/ChenLBY18}.
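To make the difference tangible, the sketch below illustrates the kind of dependency-aware reproduction used in the advanced case (the variation points, domains, and the single dependency are illustrative, echoing the \texttt{cache}/\texttt{cache\_mode} example discussed under \textbf{RQ2}; they are not the actual feature models used in the experiments):
\begin{verbatim}
import random

# Illustrative variation points; "cache_mode" depends on "cache"
# being enabled, mirroring the dependency example given earlier.
DOMAINS = {"cache": [0, 1],
           "cache_mode": [0, 1, 2],
           "threads": [1, 2, 4, 8]}

def repair(cfg):
    # Inject the feature dependency: reset dependent variation
    # points whenever their parent feature is disabled.
    if cfg["cache"] == 0:
        cfg["cache_mode"] = 0
    return cfg

def mutate(cfg, rate=0.3):
    # Mutation that only ever yields valid configurations.
    child = dict(cfg)
    for key, domain in DOMAINS.items():
        if random.random() < rate:
            child[key] = random.choice(domain)
    return repair(child)

print(mutate({"cache": 1, "cache_mode": 2, "threads": 4}))
\end{verbatim}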
From Figure~\ref{fig:synergy}, we see that the limited specialization, albeit producing some good results, often leads to solutions closer to the nadir points of both objectives, or to solutions that cause severe degradation of one objective with little gain on the other. The advanced specialization, in contrast, performs overwhelmingly better than its limited counterpart, as its results have more points closer to the ideal region. The key rationale behind this success is that the rich domain knowledge in the feature model provides highly effective guidance on which different parts of the search algorithm can rely.
\subsubsection{Suggestion and Opportunity}
Since the trend in SBSE for SASs has been to apply the vanilla version of the search algorithm(s) with only the compulsory amendments, our suggestion is, therefore:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Suggestion 4:} Consider the possibility of tinkering with the vanilla search algorithm that is justifiably chosen for a SAS problem, especially by relating the available SE/SAS domain expertise to its internal algorithmic design.
\end{tcolorbox}
Indeed, the suggestion remains at a high level, but it can be centered on one thread of research opportunity:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Opportunity 4:} Human-centric SBSE for SASs.
\end{tcolorbox}
\input{tables/sas}
The fundamental cause of the disappointments for \textbf{RQ4} is the lack of possible human involvement and the various forms of knowledge humans bring, which is, by design, often not part of a vanilla search algorithm. Traditionally, the main purpose of engineering SASs is to reduce the level of human intervention in the running software systems. However, it has been shown that there are scenarios where human involvement is essential~\cite{DBLP:conf/icse/CamaraMG15}, or where human knowledge has been proven to greatly improve the behaviors of SASs. Similarly, SBSE is also motivated by the same origin: to automate the software engineering process and thus free the software engineer from tedious and error-prone tasks. Recently, there has been an ongoing demand to engineer human-centric SBSE~\cite{DBLP:journals/tse/RamirezRS19}, such that the search approach does not make the final decision on its own, but serves as an assistant that adds insights for the human to make decisions. These two facts, together, imply a perfect match between the two fields in terms of the human-centric aspect.
In particular, humans here refer to a wide range of engineers with certain software engineering expertise around SASs, including, but not limited to, developers, requirements analysts, architects, and testers. Unlike classic SBSE for SASs, human-centric SBSE for SASs strives to open up the ``black box" of the vanilla search algorithm. A key outcome of placing humans at the center of SBSE for SASs is the tendency to encourage more uptake of SE/SAS domain expertise, as the humans in this context are all experts. In particular, it also promotes the specialization of different parts of the search algorithms, allowing humans to better explain and control the outcomes produced by a search algorithm.
A particularly interesting direction is interactive SBSE for SASs, which enables the human to progressively learn and understand the characteristics of the SAS problem at hand and adjust towards a more appropriate capture of the preference information. As a result, the search can be driven to perform the expected behaviors more precisely, allowing the human to have more control over the search algorithm using their software engineering expertise on the SAS~\cite{LiCSY18,DebSKW10}. This would also create more inspiration for building specialized search algorithms, which should work best on the SAS where the knowledge lies. The timely feedback retrieved from the search on SASs can also stimulate ``Innovization"~\cite{deb2014innovization} --- a particular situation where the SE/SAS domain expertise can be consolidated as the search proceeds. Yet, some important challenges are:
\begin{itemize}
\item What forms of SE/SAS knowledge/expertise can explicitly influence which aspects of SBSE for SASs.
\item How humans can be placed in the loop of engineering SASs (either at design-time or runtime) in order to facilitate timely interaction with SBSE for SASs.
\item How to ensure the information provided by humans is reliable, i.e., how to prevent immature inputs.
\end{itemize}
\subsection{RQ5: Subject SASs in Evaluation}
\label{sec:rq5}
\subsubsection{Significance}
Research in SBSE for SASs inevitably involves stochastic and random behaviors, caused by the search algorithms and/or the underlying SAS to be optimized. As a result, it is important to understand what, why, and how many subject SASs have been used in the evaluation.
\subsubsection{Findings}
Table~\ref{tb:sas} shows the top 20 subject SASs and their characteristics in SBSE for SASs. Clearly, we can see that they come from different types and domains, with diverse scales, numbers of objectives, and search spaces, together with different environmental changes. An interesting finding is that a common reason behind the choice is that certain SASs are ``widely used"~\cite{DBLP:conf/icac/MorenoCGS16,DBLP:journals/tse/WangHYY18,DBLP:journals/taas/Garcia-GalanPTC16,DBLP:journals/jss/PascualLPFE15}. This is sensible, as the purpose is often to generalize the findings to the most common SAS domains. A few studies, e.g., Pascual et al.~\cite{DBLP:conf/icse/PascualPF13}, have explicitly stated that the chosen SASs are particularly fit for evaluating the search algorithm studied (i.e., GA). However, a considerable number of the remaining studies give no clear reasons for their choices.
Another question unique to SBSE for SASs is what types of subject SAS are used. Overall, we found three types: real systems, which involve actual deployment and running of the SAS; simulators, which mimic the behaviors of a real system; and data, which was collected from the real system but can be reconstructed and parsed to replicate the actual scenarios without the need to access the real system. Figure~\ref{fig:sas-count} shows the proportions of these three types over the years. As can be seen, the simulator is the most predominantly used type and exhibits increasing popularity over the years, such as~\cite{DBLP:conf/icse/FredericksDC14,DBLP:conf/icse/Gerostathopoulos18,DBLP:journals/taas/ShevtsovWM19,DBLP:conf/saso/FredericksGK019}. Real systems, in contrast, are much less commonly used, e.g.,~\cite{DBLP:journals/ase/GerasimouCT18,DBLP:journals/tosem/ChenLBY18,DBLP:journals/jss/PascualLPFE15}. Data has rarely been adopted in the last decade~\cite{DBLP:journals/infsof/ChenLY19}. We found three studies~\cite{DBLP:journals/taas/LewisECRTY15,DBLP:conf/sigsoft/FilieriHM15,DBLP:conf/sigsoft/MaggioPFH17,DBLP:journals/tosem/ChenLBY18} where more than one type of SAS is used, the reason being to improve the generalization of the results.
\begin{figure}[!t]
\centering
\includestandalone[width=\columnwidth]{tikz/sas-count}
\caption{Popularity evolution of the type of subject SAS used to evaluate the search algorithms (Three studies use more than one type).}
\label{fig:sas-count}
\end{figure}
Indeed, it is important to evaluate the work on a set of subject SASs with different settings and/or from different domains~\cite{DBLP:conf/sigsoft/NagappanZB13,DBLP:conf/icse/SiegmundSA15}. To understand how many subject SASs are used per study, Figure~\ref{fig:subj-setting} shows the number of SASs with different settings or domains considered per study. Interestingly, our results indicate that 59\% of the primary studies consider only one SAS, while a further 6\% and 11\% consider two and three SASs, respectively. Note that here, the SASs are differentiated based on settings, i.e., they are considered different even if a study uses the same system in a given domain, as long as they have different structures, e.g., the same service-based system with a different number of services to compose. If we differentiate the SASs solely based on their domains (e.g., a web system and an unmanned vehicle system), as shown in Figure~\ref{fig:subj-domain}, then the proportion of studies that consider one SAS increases to 73\%, and the proportion of studies that consider fewer than three subject SASs becomes 93\%. Our findings can be summarized as:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Findings 12:} Various types of subject SASs have been used, with ``widely used" being the most popular reason, while often no justification is given. \\
\textbf{Findings 13:} The simulator is the most predominantly used SAS type over the years, followed by real systems.\\
\textbf{Findings 14:} The majority of the studies consider only one subject SAS in the evaluation.
\end{tcolorbox}
\subsubsection{Disappointments}
Given the variety of subject SASs studied in the literature, it is disappointing to see that the majority of the studies consider only one SAS in the evaluation. In particular, this is not because of a single overwhelmingly used benchmark, as shown in Table~\ref{tb:sas}.
In the literature, the importance of diversity and coverage of subjects in evaluating software engineering research has been widely acknowledged. For example, systematic studies conducted by Nagappan et al.~\cite{DBLP:conf/sigsoft/NagappanZB13} and Siegmund et al.~\cite{DBLP:conf/icse/SiegmundSA15} have concluded that:
\begin{displayquote}
``\emph{subject systems cover a wide range of different dimensions, which positively affects external validity.}"~\cite{DBLP:conf/icse/SiegmundSA15}
\end{displayquote}
As a result, a limited set of systems in the evaluation inevitably weakens the generalization of the conclusions, which is a major threat to external validity that remains unsolved in SBSE for SASs.
Admittedly, some studies, such as~\cite{DBLP:conf/icac/RamirezKCM09,DBLP:journals/jss/XuB19a,DBLP:conf/icse/KinneerCWGG18}, intend to present an emerging idea together with a proof-of-concept evaluation only, in which case using one subject SAS might seem reasonable. We conjecture, however, that this is not a sustainable trend for the research field when such proof-of-concept evaluations become overwhelming, constituting the majority of the work in the literature. Our disappointment can then be summarized as:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Disappointment 5:} Weak generalization of results across the subject SASs.
\end{tcolorbox}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.5\columnwidth}
\includestandalone[width=\columnwidth]{tikz/subject-number-pie1}
\subcaption{Different settings or domains}
\label{fig:subj-setting}
\end{subfigure}
\hspace{-0.2cm}
\begin{subfigure}[t]{0.5\columnwidth}
\includestandalone[width=\columnwidth]{tikz/subject-number-pie2}
\subcaption{Different domains only}
\label{fig:subj-domain}
\end{subfigure}
\caption{Number of different subject SASs evaluated per study in SBSE for SASs.}
\end{figure}
\subsubsection{Justification of the Likely Issues}
To justify the likely issue of a limited number of subject SASs, we conduct experiments on both single-objective search (HC and RS) and Pareto search (MOEA/D and NSGA-II). These algorithms are chosen merely for illustration purposes and are run with identical search budgets; details of the settings can be found in the supplementary material. For Pareto search (tuning latency, throughput, and cost), we run on the most widely used synthetic service-based systems, derived from the \textsc{WS-DREAM} dataset, with the aim of achieving runtime self-adaptation to a service change. We use up to four different workflows, as in existing work~\cite{DBLP:conf/gecco/0001LY18,DBLP:journals/infsof/ChenLY19}, which offers better coverage than 76\% of the studies from the same domain, as shown in Figure~\ref{fig:subj-setting}. The fitness is again evaluated by a well-defined analytical model~\cite{DBLP:conf/gecco/0001LY18,DBLP:journals/infsof/ChenLY19}. We use HV as the sole indicator because we seek to assess the overall quality of the solution set produced without specific preferences, and it covers all quality aspects of a solution set. For the single-objective search, we chose six different SASs for design-time profiling scenarios. They are from diverse domains and have been used previously~\cite{DBLP:conf/icse/SiegmundKKABRS12,DBLP:journals/corr/abs-1801-02175}, which fits precisely with the goal of our justification. Note that, according to Figure~\ref{fig:subj-domain}, six or more subject SASs from different domains have only been considered by 4\% of the studies. Details of these SASs can be found in the supplementary material.
As shown in Table~\ref{tb:mo-rq5}, suppose that there are two sets of subjects in the evaluation, \texttt{Set 1} with two subject SASs and \texttt{Set 2} with four. It is clear that \texttt{Set 1} could lead to the conclusion that NSGA-II is better. However, with the more thorough comparison in \texttt{Set 2}, we can see that the conclusion from \texttt{Set 1} may not hold: in fact, the two search algorithms achieve competitive results. Similarly, for the single-objective case in Table~\ref{tb:so-rq5}, \texttt{Set 1} would imply that RS is better, but \texttt{Set 2}, which involves a more extensive number of subject SASs, suggests that HC is actually better. The above exemplifies how a limited number of subject SASs can mislead the conclusion and weaken the generalization.
\input{tikz/exp/mo-rq5}
\input{tikz/exp/so-rq5}
\subsubsection{Suggestion and Opportunity}
Our suggestion is straightforward:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Suggestion 5:} Aim to evaluate SBSE for SASs work with at least two subject SASs, which can be with different settings (better coverage than 59\% of the studies) or from different domains (better coverage than 73\% of the studies). Ideally, the more subject SASs, the stronger the conclusion, but using only one subject is not recommended.
\end{tcolorbox}
Indeed, it is easy to argue that \textit{``we need a higher number of subject SASs"}, but practically this depends on many factors, such as the resources to deploy, test, and configure a real SAS, as well as the time for the experiments to run. We do not attempt to undermine such an effort, as this is one of the difficulties that distinguishes research on SASs from many other fields. With this in mind, applying simulators can be an option\footnote{For example, the artifacts collection from SEAMS: \url{https://www.hpi.uni-potsdam.de/giese/public/selfadapt/exemplars/}.}, as they are often relatively simpler to deploy and run faster. However, simulators often rely on fixed assumptions about the realistic scenarios, which may not hold. Therefore, conducting research in SBSE for SASs using only simulators could pose threats to construct validity.
Indeed, a possible solution could be to use real SASs as a complement to the simulators. This, however, does not resolve our disappointment: it merely separates the evaluation into two tiers, where for the real SAS part the number of subject SASs may still be small due to the cost of setting up the experiments. As a result, the conclusions drawn would be biased towards the simulation part. The key is how to retain sufficient realism while keeping the effort low. From this perspective, realistic data, which is collected from real SASs yet can be easily parsed, can be a promising source. However, as our results from Figure~\ref{fig:sas-count} indicate, over the years there have not been many readily available datasets for SASs. An opportunity for \textbf{RQ5} is, therefore:
\begin{tcolorbox}[breakable,left=5pt,right=5pt,top=5pt,bottom=5pt]
\textbf{Opportunity 5:} Reusable real-world dataset collection and sharing in SBSE for SASs.
\end{tcolorbox}
A unique property of datasets for evaluating SBSE on SASs is that they need to involve a certain complexity of the search, e.g., in the search space, the number of objectives, or the number of variation points. The collection process of the data itself, as expected, would be expensive. However, once such data has been collected, it can benefit the community as a whole through reuse and benchmarking. We therefore call on the community to join the effort of building an ecosystem for collecting and maintaining real-world data for SASs, based on which perhaps more realistic simulators can be built. Some specific challenges are:
\begin{itemize}
\item How to define environmental conditions for a SAS during the data collection process?
\item How to sample the variation point and the objective?
\item How to codify a data collection protocol that can mitigate measurement bias?
\end{itemize}
\section{Introduction}\label{sec:intro}
\IEEEPARstart{E}{ngineering} software systems with the ability to reason about and adapt themselves under changes (e.g., to their states, requirements, and environment) has emerged as a successful paradigm for handling runtime dynamics and uncertainty. The resulting software systems, namely self-adaptive systems (SASs), have become some of the most complex artifacts ever created by humans. Many complex software systems require optimization in the engineering process, and SASs are no exception. For example, the configuration of a SAS's adaptable parameters is a typical optimization problem, in which the best-configured values (and possibly their sequence) need to be searched for in order to achieve optimality on different functional and non-functional objectives/criteria~\cite{DBLP:journals/taas/SalehieT09}. However, optimizing SASs is important yet challenging, as human intervention is profoundly limited and there may be an explosion of possible adaptation solutions, together with multiple conflicting objectives under resource constraints and feature dependencies~\cite{DBLP:conf/dagstuhl/LemosGMSA}. As a result, intelligent search is required to fulfill the optimization needs in various domains of SASs.
Search-Based Software Engineering (SBSE)~\cite{DBLP:journals/csur/HarmanMZ12} is one form of `Intelligent Software Engineering' that has been widely applied across many software engineering domains that demand optimization, including requirements~\cite{DBLP:journals/infsof/ZhangHL13}, design~\cite{DBLP:journals/tse/PraditwongHY11}, testing~\cite{DBLP:conf/icst/FraserA12} and refactoring~\cite{DBLP:journals/tse/LuWYAN19}. Specifically, SBSE applies computational search, i.e., various search algorithms, to automatically and dynamically seek solutions that minimize/maximize objective(s) or satisfy certain constraint(s) in software engineering. In particular, SBSE can be either single-objective, where a single fitness guides the search towards a single optimal solution, or multi-objective, in which the search is steered either by a weighted aggregation (a.k.a. utility-driven search) or by a pressure to approximate the Pareto front~\cite{Ehrgott2006}, i.e., Pareto search\footnote{Pareto search refers to any algorithm that seeks the Pareto front in the presence of multiple objectives, including Pareto-dominance based, indicator-based and decomposition-based multi-objective algorithms.}~\cite{collette2013multiobjective}.
Over the years, there have been successful attempts at exploring SBSE for SASs~\cite{DBLP:conf/icac/RamirezKCM09,DBLP:journals/jss/PascualLPFE15,DBLP:conf/icse/KinneerCWGG18,DBLP:journals/infsof/ChenLY19,DBLP:journals/tosem/ChenLBY18}. Indeed, as pointed out by Harman et al.~\cite{DBLP:conf/icse/HarmanJLPMYW14}, the very natural requirement of dynamic and automated reasoning in SAS provides a perfect place for SBSE, which targets exactly such a need. Nevertheless, work in this direction is still arguably much less active compared with other areas of software engineering, e.g., software testing~\cite{DBLP:journals/csur/HarmanMZ12}, where SBSE has become a standard. We believe that one of the reasons for this is that, to the best of our knowledge, there has been no explicit survey on the topic of SBSE for SASs. As such, we lack a general overview, and hence both SBSE and SAS practitioners struggle to understand, e.g., what search algorithms to use and how they can be tailored, in what contexts, and how the search results can be assessed. This is what originally motivated this paper, in which we aim to bridge this gap by conducting a systematic survey of papers over 27 venues and 7 repositories, from which 409 papers were identified for detailed review and eventually 74 primary studies were extracted for analysis.
The survey has in fact led to a surprising result: we have identified five disappointing phenomena in the current research on SBSE for SASs, some of which can pose immediate threats to validity in current research while others can negatively affect the field in the long term. We would like to stress that we term them ``disappointments'' because, albeit very likely, they may not always lead to an issue for an individual study. However, from the perspective of the entire field, these disappointments bear a resemblance to the ``bad smells'' in the software engineering analogy --- phenomena where there are hints that suggest there can be an issue. For example, randomly choosing a search algorithm without justification based on the specifics of the SAS problem may work when the choice happens to be a suitable one; it is nevertheless not ideal if this becomes an overwhelming phenomenon for the field. The presence of those disappointments is perhaps another reason that prevents a significant growth of the research on SBSE for SASs. To further advance this direction of research and mitigate the disappointments discovered, we provide suggestions and highlight eight promising research opportunities that are currently under-explored in the existing literature.
To the best of our knowledge, our work is the very first endeavor that explicitly targets SBSE for SASs, offering a comprehensive overview and critical analysis of this field. Specifically, our contributions in this paper are threefold:
\begin{enumerate}
\item We conduct a systematic survey of the work on SBSE for SASs published between 2009 and 2019. The research questions (RQs) that our survey aims to answer are:
\begin{itemize}
\item[---]\textbf{RQ1:} What, where, and why search algorithms are used for SAS?
\item[---]\textbf{RQ2:} What, how, and why SAS objectives are defined and handled?
\item[---]\textbf{RQ3:} What and why evaluation methods are used to assess the results on Pareto search for SASs?
\item[---]\textbf{RQ4:} What, how, and why domain information is used to specialize the search algorithm for SAS?
\item[---]\textbf{RQ5:} What, why, and how many subject SASs are studied in order to generalize the conclusion drawn?
\end{itemize}
\item Drawing on the survey results for the above RQs, we have identified five disappointing phenomena in current work, for which we discuss the disappointments, supported by theoretical and/or experimental justification of the possible issues.
\item We provide suggestions and highlight eight promising research opportunities in SBSE for SASs, some of which promise to mitigate the disappointments identified, and we discuss their challenges.
\end{enumerate}
Notably, our goal is not to question the importance and significance of existing work, but rather to summarize the key statistics that enable us to discuss, justify and raise debates about some of the overwhelmingly possible issues in existing studies related to SBSE for SASs, which have sadly disappointed us. We feel that respectful scientific debates are very important for sustainable research, particularly in such an interdisciplinary topic where research from the well-established communities of SBSE and computational optimization may still be relatively new to SAS practitioners. Indeed, explicit debates on topics can reveal opposing ideas in a timely manner and can often excite significant growth of a research field (e.g., see~\cite{DBLP:conf/icse/Li0Y18}). By addressing those disappointments, together with promising research opportunities that are currently under-explored in SBSE for SASs, we envisage further growth of this particular research field.
The remainder of this paper is organized as follows. Section~\ref{sec:bg} introduces background information on SBSE and SASs. Section~\ref{sec:method} presents the research methodology, followed by a detailed elaboration of our literature review protocol in Section~\ref{sec:review}. Section~\ref{sec:rq} analyzes the results obtained from our systematic survey with respect to our RQs, discusses the disappointments with justification, and highlights the suggestions and research opportunities for mitigation. Other currently under-explored opportunities in SBSE for SASs are discussed in Section~\ref{sec:opp}. Threats to validity and conclusions are included in Sections~\ref{sec:tov} and~\ref{sec:con}, respectively.
\section{Preliminaries}
\label{sec:bg}
\subsection{Search-Based Software Engineering}
In SBSE, search refers specifically to specializing a metaheuristic search algorithm (an evolutionary algorithm in particular) to a search space of candidate solutions, guided by a fitness function that evaluates the quality of solutions, with the aim of finding the optimal or near-optimal one(s)~\cite{DBLP:journals/csur/HarmanMZ12}. According to the domain information, the fundamental tasks of specializing SBSE to an SE problem (in fact, any optimization problem), as discussed by Harman et al.~\cite{DBLP:journals/csur/HarmanMZ12}, include reformulating the solution representation of the problem and designing the objective/fitness function(s) that distinguish between good and bad solutions.
The successful application of search algorithms to software engineering over the years has led to an increasing interest in other forms of optimization for software engineering that are not necessarily directly based on a metaheuristic search. Indeed, in the literature it is not uncommon to find SBSE, or simply search algorithms, applied to any form of optimization in which the problem comes from software engineering and the solutions are subject to search. In this paper, we, therefore, include classical Operational Research and Computational Optimization techniques, as well as the metaheuristic search algorithms in the traditional understanding of SBSE.
One important distinction among SBSE problems is whether the software engineering problem involves multiple conflicting objectives.
In the single-objective case,
the search can be directly guided and evaluated by a fitness function.
In contrast,
when multiple objectives are involved,
the search and evaluation become much more complex due to the presence of conflicts,
i.e., trade-offs are required.
Indeed,
any multi-objective problem may be converted into a single-objective one via a certain form of weighted aggregation.
However,
as we will show in Section~\ref{sec:rq2},
this does not come without any cost:
the precise quantification of weights can be very difficult,
if not impossible,
and the conflicting relations between objectives may be blurred,
making some ideal solutions hard to find~\cite{DBLP:journals/csur/HarmanMZ12}.
This fact has been well accepted by the SBSE community across many different problems~\cite{DBLP:journals/tse/PraditwongHY11,zhang2007multi,DBLP:conf/issta/YooH07}.
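As a minimal illustration of this point (with hypothetical objective models and weights; a sketch only), the following Python snippet shows how a weighted aggregation collapses two conflicting objectives into a single fitness, so that the ``optimal'' solution found depends entirely on the chosen weights:
\begin{verbatim}
# Hypothetical SAS objective model: more replicas reduce latency but
# increase cost; both objectives are normalized and to be minimized.
def objectives(replicas):
    return 1.0 / replicas, replicas / 10.0

def weighted_fitness(replicas, weights):
    latency, cost = objectives(replicas)
    return weights[0] * latency + weights[1] * cost

# Different weights steer the search to different "optimal" solutions:
for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    best = min(range(1, 11), key=lambda s: weighted_fitness(s, w))
    print(w, best, objectives(best))
\end{verbatim}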
In SBSE, a standard way of handling multi-objectivity,
borrowed from the Computational Optimization and Evolutionary Computation community,
is to use the notion of Pareto dominance and optimality~\cite{Ehrgott2006}
by which the searched result is often a set of trade-off solutions instead of a single one.
By definition,
a solution \texttt{A} is said to be (Pareto) dominated by \texttt{B}
if all of \texttt{B}'s objective values are better than or equal to the corresponding objective values of \texttt{A},
and there is at least one objective on which \texttt{B} is better than \texttt{A}.
A solution is called Pareto optimal if it is not dominated by any solution in the search space.
The set of all the Pareto optimal solutions is called the Pareto set
while their images in the objective space constitute the Pareto front of the problem.
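For concreteness, the definition above can be captured in a few lines of Python (a minimal sketch; the objective vectors are hypothetical and assume minimization):
\begin{verbatim}
def dominates(b, a):
    # True if solution b (Pareto) dominates solution a.
    return all(x <= y for x, y in zip(b, a)) and \
           any(x < y for x, y in zip(b, a))

def nondominated(points):
    # Keep only the points not dominated by any other point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective vectors (latency, cost), both minimized:
pts = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (5.0, 5.0)]
print(nondominated(pts))   # (3,8) and (5,5) are dominated
\end{verbatim}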
Without additional information from the decision maker,
the solutions in a nondominated set are in fact incomparable.
This has led to the distinctiveness of Pareto search algorithms,
in which the search does not aim for a single (weighted) optimal solution,
but rather for a set of solutions that can well-represent the whole Pareto front.
Such ``representation'' can be broken down into four aspects with respect to solution sets' quality:
convergence, spread, uniformity, and cardinality~\cite{DBLP:journals/csur/LiY19},
for which various quality indicators have been designed.
Convergence refers to the closeness of the solution set to the Pareto front;
spread considers the region of the objective space that the set covers;
uniformity refers to the evenness of solutions distributed in the set;
and cardinality refers to the number of solutions in the set.
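As one concrete (and deliberately simple) example of such an indicator, the following sketch computes the Inverted Generational Distance (IGD), which reflects both convergence and spread by averaging, over a reference Pareto front, the distance from each reference point to its nearest solution in the evaluated set; the reference front below is hypothetical:
\begin{verbatim}
import math

def igd(solution_set, reference_front):
    # Average distance from each reference point to its nearest
    # solution in the set; smaller is better.
    return sum(min(math.dist(r, s) for s in solution_set)
               for r in reference_front) / len(reference_front)

reference = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd([(0.1, 0.9), (0.9, 0.1)], reference))
\end{verbatim}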
A more thorough overview of the Pareto search algorithms in SBSE can be found in~\cite{DBLP:journals/csur/HarmanMZ12,DBLP:conf/splc/HarmanJKLPZ14,DBLP:journals/tse/RamirezRS19}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{figures/sas.pdf}
\caption{General overview of SAS.}
\label{fig:sas}
\end{figure}
\subsection{Self-Adaptive Systems}
The ever-increasing complexity of engineered software systems has led to a high demand for software systems that are versatile, resilient, and dependable under changes to their operational contexts, requirements, and environments. This cannot be made possible without the notion of self-adaptation\textemdash the ability of a software system that permits it to modify its own behavior according to its perception of its internal states and external factors, leading to a special type of software systems termed SASs.
Figure~\ref{fig:sas} shows a general overview of a SAS, derived from the most widely-adopted MAPE-K architectural model~\cite{DBLP:journals/computer/KephartC03}. As can be seen, at runtime there is a feedback loop consisting of two key components~\cite{DBLP:conf/icse/WeynsIMA12}: a managed system that is adaptable, but does not itself have the ability to adapt; and a managing system that encapsulates all the core logic to realize self-adaptation for the managed system --- this is also the key component in which most work on SBSE for SASs lies, as we will show in Section~\ref{sec:rq1}. In particular, under MAPE-K, runtime self-adaptation in SAS is governed by the managing system via the following phases (a minimal code sketch of the loop is given after the list):
\begin{enumerate}
\item \textbf{Monitor:} Collecting data on the managed system and the environment through sensors.
\item \textbf{Analyze:} Making decisions about whether adaptation is required.
\item \textbf{Plan:} Reasoning about the suitable adaptation solution.
\item \textbf{Execute:} Physically conducting adaptations via actuators.
\item \textbf{Knowledge:} Centralized abstraction of relevant aspects of the managed system, its environment, and the self-adaptation objectives. This is the shared phase by the other four.
\end{enumerate}
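To make the loop concrete, below is a minimal, hypothetical Python sketch of one MAPE-K iteration; the \texttt{sensors}/\texttt{actuators} interfaces and the adaptation policy are illustrative placeholders only, and in SBSE for SASs the \textbf{Plan} phase is typically where the search algorithm would sit:
\begin{verbatim}
# Shared Knowledge: objectives and the current configuration.
knowledge = {"latency_goal_ms": 100, "config": {"replicas": 2}}

def monitor(sensors):               # Monitor: collect runtime data
    return {"latency_ms": sensors.read("latency")}

def analyze(data):                  # Analyze: is adaptation needed?
    return data["latency_ms"] > knowledge["latency_goal_ms"]

def plan(data):                     # Plan: reason about a solution
    # A (search-based) optimizer would explore candidate
    # configurations here; this placeholder scales out by one.
    return {"replicas": knowledge["config"]["replicas"] + 1}

def execute(solution, actuators):   # Execute: adapt via actuators
    actuators.apply(solution)
    knowledge["config"] = solution

def mape_k_iteration(sensors, actuators):
    data = monitor(sensors)
    if analyze(data):
        execute(plan(data), actuators)
\end{verbatim}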
Although at first glance the term `self-adaptation' may suggest that it relates only to running software systems, the engineering processes of SASs are, in fact, spread over both design-time and runtime~\cite{DBLP:journals/taas/SalehieT09,DBLP:conf/dagstuhl/LemosGMSA}. As shown in Figure~\ref{fig:sas}, the design-time tasks for SAS can provide important insights to the process embedded in each MAPE-K phase~\cite{DBLP:conf/dagstuhl/LemosGMSA,DBLP:conf/kbse/GerasimouTC15,DBLP:journals/tosem/ChenLBY18}. Indeed, design-time profiling of the possible adaptation solutions and their effects on quality, under different environmental conditions, has been shown to be helpful for building more effective policies for runtime self-adaptation~\cite{DBLP:journals/tosem/ChenLBY18,DBLP:journals/ase/GerasimouCT18}. As a concrete example, Gerasimou et al.~\cite{DBLP:journals/ase/GerasimouCT18} show that design-time analysis can help to determine the strategy design in the \textit{Analysis} and \textit{Plan} phases. In this paper, we, therefore, include not only the work on SAS runtime but also the design-time studies of SASs, as long as they provide significant understanding of runtime self-adaptation. We would like to stress that, although the design-time problems of SASs are aligned with our purpose in this work, we do not consider design-time studies that make no reference to their implications for runtime self-adaptation. This is because the ultimate goal of a SAS is to allow the software system to run and dynamically adapt to time-varying changes as they emerge.
Given the generic notions from Figure~\ref{fig:sas}, any adaptable/managed software system can form a SAS, provided that some, if not all, of its internal parts (i.e., variation points) can be changed as the software system runs. In real-world scenarios, many widely used software systems are readily prepared for self-adaptation. For example, \texttt{MySQL}, which is one of the most popular Relational Database Management Systems, allows around one-third of its variation points to be changed at runtime\footnote{https://dev.mysql.com/doc/refman/5.5/en/server-system-variable-reference.html}. Because of this, within the software engineering research community, it is not uncommon to find that research on SASs has been conducted under different themes; one of the most noticeable examples is Dynamic Software Product Lines~\cite{DBLP:journals/computer/BencomoHA12,baresi2014self,classen2008modelling}. In addition to software engineering, SAS research has spread over other communities, e.g., System Engineering, Service Computing, and Cloud Computing. Our survey, therefore, is not restricted to software engineering research but also covers other communities, following our review protocol introduced in Section~\ref{sec:method}. More detailed surveys of SASs in general are available in the literature; see~\cite{DBLP:journals/taas/SalehieT09} and~\cite{DBLP:conf/dagstuhl/LemosGMSA}.
\subsection{Marrying SBSE with SASs}
Self-adaptations of a SAS are certainly not conducted for no reason; they are designed to serve a certain purpose: to improve the quality of software systems, including both functional and non-functional quality. While certain quality dimensions are widely applicable across many domains, e.g., latency, throughput, and availability, the actual variation points by which a SAS can modify itself vary considerably from case to case.
Because of the above reasons, engineering SASs can be abstracted as developing automated and dynamic methods that involve tuning (searching) different parts or processes of the software systems, with the aim of improving functional or non-functional quality. This fits precisely with the purpose of SBSE and therefore forms a perfect marriage between the two fields.
\input{research-method}
\input{literature-review}
\input{results}
\input{others}
\section{Threats to Validity}
\label{sec:tov}
Threats to construct validity can be raised by the research methodology, which may not serve the purpose of answering our research questions. We have mitigated such threats by following the systematic review protocol proposed by Kitchenham et al.~\cite{DBLP:journals/infsof/KitchenhamBBTBL09}, which is a widely recognized search methodology for conducting a survey on software engineering research. Another threat is related to the citation count used in the exclusion criteria. Indeed,
it is difficult to set such a threshold,
as the citation count itself cannot fully reflect the impact of a work; such exclusion criteria can thereby harm construct validity.
It is however worth noting that our goal is to analyze the major trends in how SBSE has been used for SASs, which can at least provide some sources for analysis and justification. Further, it is necessary to reach a trade-off between the trend coverage and the effort required for detailed data collection of the studies. Of course, the citation counts from Google Scholar could be biased by its underlying mechanism, but it remains unclear which online repository offers the most reliable citation information.
Threats to internal validity may be introduced by inappropriate classification and interpretation of the papers. We have limited this by conducting multiple rounds of paper reviews amongst all the authors. Error checks and investigations were also conducted to correct any issues found during the search procedure. Another related threat to internal validity is that there was a considerable gap between the completion of collection and the submission/final publication, which raises a timeliness issue, particularly with respect to the citation count used in the exclusion criteria. This is, however, not uncommon among survey studies and hence remains an open problem. Another threat is caused by information that has not been stated in the studies. For example, a possible reason for using a search algorithm could be that it is the only one with a readily available implementation, but none of the studies has stated this clearly.
Finally, threats to external validity may restrict the generalization of the results. We have mitigated these by making the systematic survey wider and deeper: it covers 3,740 searched papers published between 2009 and 2019, on 27 venues from 7 repositories, while at the same time extracting the 74 most notable primary studies following the exclusion and inclusion procedure.
\section{Conclusion}
\label{sec:con}
In this work, we have systematically surveyed the research on SBSE for SASs published between 2009 and 2019, leading to a large set of studies spanning 27 venues, from which 409 papers were identified for detailed review and eventually 74 primary studies were selected for analysis. Several key statistics have been extracted from the state-of-the-art with respect to the RQs:
\begin{itemize}
\item \textbf{To RQ1:} In the past decade, LS, GA, and IP solvers have been the most popular search algorithms for the single/aggregated objective case. NSGA-II is predominant for Pareto search. The justifications for these choices are mainly at $L_3$ or $L_4$, despite the algorithms being used in different SAS contexts.
\item \textbf{To RQ2:} Single objectives are less commonly assumed than their multi-objective counterparts, within which weighted search has been predominant over the years. The actual objectives searched vary, but latency and cost are of the widest concern.
\item \textbf{To RQ3:} For Pareto search, the raw objectives are most commonly used in the evaluation, and a considerable number of studies have used no generic quality indicator at all, without justification. For those that do, the justification of choice is mainly at level $L_2$ or $L_3$. This is a consistent trend across the years.
\item \textbf{To RQ4:} There is an increasing gap between the uptake of problem nature and that of SE/SAS domain expertise, while most studies specialize only the representation and fitness function of a search algorithm.
\item \textbf{To RQ5:} Over the years, simulators have been the most commonly used type of subject SAS, and the majority of the studies consider only one subject SAS, regardless of the settings and domains.
\end{itemize}
The results have also revealed five disappointments from the most notable primary studies, namely:
\begin{itemize}
\item Unjustified bias on the choice of search algorithms.
\item Unjustified and limited formulation on the multi-objective search for SASs.
\item Questionable choice of evaluation methods in Pareto search for SASs.
\item Limited specialization on search algorithms for SASs without tinkering with their internal designs.
\item Weak generalization of results across the subject SASs.
\end{itemize}
We present theoretical and/or experimental evidence to justify the issues, provide suggestions, and also highlight eight emergent opportunities that are currently under-explored for research on SBSE for SASs; these are:
\begin{itemize}
\item Generic guidance on justifiably choosing search algorithm(s) according to the requirements of the particular SAS problem studied.
\item Pareto many-objective search for SASs.
\item Preferences driven Pareto search for SASs.
\item Human-centric SBSE for SASs.
\item Reusable real-world dataset collection and sharing in SBSE for SASs.
\item Effective and efficient fitness evaluation in SBSE for SASs.
\item Just-in-time handling of changes in SBSE for SASs.
\item Incorporating SBSE with other approaches for SASs.
\end{itemize}
Our work provides useful insights that can hopefully excite a much more significant growth of this particular field of research, attracting not only the SAS practitioners but also the researchers from the other fields, such as general SBSE, Computational Optimization, and Evolutionary Computation.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section*{Acknowledgement}
We thank the doctoral researchers from the IDEAS laboratory at Loughborough University for their assistance in collecting and analyzing the data in this work. We also would like to thank all the anonymous reviewers for their constructive comments that help to significantly improve this paper.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
The $U(1)^{\prime}$ symmetry exists in many extensions of the Standard Model (SM). It can arise from a grand unified theory based on $SO(10)$ or $E_{6}$~\cite{ref:SO10, ref:E6, ref:E6Enrico}. In the presence of fermions or Higgs bosons with non-standard SM charges, a non-unifiable $U(1)^{\prime}$ symmetry can also exist~\cite{ref:nonUnifU1}. In addition, the $U(1)^{\prime}$ symmetries abound in many low energy effective theories of the string theories~\cite{ref:strM}. If the $U(1)^{\prime}$ breaking scale is at the TeV scale, the $Z^{\prime}$ gauge boson associated with the breaking of the $U(1)^{\prime}$ symmetry may be discovered at the early stages of the Large Hadron Collider (LHC) operation.
Models with an extra $U(1)^{\prime}$ symmetry at the TeV scale are severely constrained by flavor-changing-neutral-current (FCNC) processes and by the electroweak precision measurements. In a generation dependent $U(1)^{\prime}$ model, the off-diagonal terms in the Yukawa matrices can lead to FCNCs at the tree level through the exchange of the $Z^{\prime}$ gauge boson. The most stringent experimental constraints are in the down-type quark sector from the measurements of $K^{0}-\overline{K}^{0}$ mixing and $B^{0}-\overline{B}^{0}$ mixing, and in the lepton sector from the non-observation of $\mu-e$ conversion, $\mu \rightarrow e^{+}e^{-}e^{+}$, $\tau \rightarrow e^{+}e^{-}e^{+}$ and $\tau \rightarrow \mu^{+}\mu^{-}\mu^{+}$. While the recent measurement of $D^{0}-\overline{D}^{0}$ mixing places a bound on the $1-2$ mixing in the up-type quark sector, such a limit is not as severe as those in the down-type quark sector. So far, there is no constraint on the other mixings in the up-type quark sector.
To satisfy the FCNC constraints, most models~\cite{ref:PLRevZ', ref:Z'LHCRizzo, ref:Z'LHCRizzoInd} assume that the $U(1)^{\prime}$ charges are universal across the three generations of Standard Model (SM) fermions. Due to the fact that the most stringent constraints from FCNCs appear in the processes that involve the first and second generations of fermions, a non-universal $U(1)^{\prime}$ at the TeV scale can be consistent with the experimental constraints on flavor violation, if the first two generations of the SM fermions have the same $U(1)^{\prime}$ charges and the flavor non-universality occurs between the third family charges and the charges of the first and second families of fermions~\cite{ref:FCNC}.
In this paper, we relax the assumption of having universal charges for the first and second families. Instead, all three generations of SM fermions are allowed to have different charges under the $U(1)^{\prime}$ symmetry. The FCNC constraints are satisfied by attributing the flavor mixing to the up-type quark and neutrino sectors, while having flavor diagonal down-type quark and charged lepton sectors, given that the down-type quark and charged lepton sectors are most stringently constrained. In this scenario, the $U(1)^{\prime}$ can play the role of a family symmetry \cite{ref:gauTrmNeuM, ref:SU(5)U(1), ref:highOpeWalter} that gives rise to realistic mass hierarchy and mixing angles among the SM fermions through the Froggatt-Nielsen (FN) mechanism~\cite{ref:frogNiel}. In addition, the $U(1)^{\prime}$ charge assignment naturally suppresses the $\mu$ term, and it forbids at the tree level baryon number and lepton number violating operators that could lead to proton decay.
This paper is organized as follows. In Section~\ref{sec:model} we present the flavor non-universal $U(1)^{\prime}$ model combined with the MSSM. In particular, we show how all gauge anomalies are cancelled and how realistic masses and mixing angles of all quarks and leptons (including the neutrinos) are generated. In addition, the implications for the $\mu$ problem and proton decay are also discussed. Section~\ref{sec:ewpt} gives the parameter space of this model allowed by the most stringent experimental constraints. Phenomenological implications of our model for collider experiments are discussed in Section~\ref{sec:collider}. The mass spectrum of the superpartners and the new phenomenological signatures beyond the MSSM are discussed in Section~\ref{sec:susyMass}. Section~\ref{sec:conclude} concludes the paper.
\section{The Model}
\label{sec:model}
In the MSSM with three right-handed neutrinos, the superpotential for the Yukawa and Higgs sectors that gives masses to all SM fermions and Higgs fields is as follows,
\begin{equation}
\label{eqn:sPoten}
W = Y_uH_uQu^c + Y_dH_dQd^c + Y_eH_dLe^c + Y_{\nu}H_uL\nu^{c} + Y_{LL}LLH_uH_u + Y_{\nu\nu}\nu^c\nu^c + \mu H_uH_d + \mu^{\prime} \Phi \Phi^{\prime}\; .
\end{equation}
In the presence of an additional $U(1)^{\prime}_{F}$ symmetry under which various chiral superfields are charged, the Yukawa matrices shown above are the effective Yukawa couplings generated through higher dimensional operators. As a result, they can be written as powers of the ratio of the flavon Higgs field, $\Phi$, that breaks the $U(1)^{\prime}_{F}$ symmetry, to the cutoff scale of the $U(1)^{\prime}_{F}$ symmetry, $\Lambda$,
\begin{equation}
\label{eqn:genYukawa1}
Y_{ij} \sim \biggl( y_{ij} \frac{\Phi}{\Lambda} \biggr)^{3|q_i+q_j+q_H|} \; .
\end{equation}
Similarly, the effective $\mu$ term is generated by the higher dimensional operator and it is given by
\begin{equation}
\label{eqn:genmu}
\mu \sim \biggl( \mu_{ud} \frac{\Phi}{\Lambda} \biggr)^{3|q_{H_u}+q_{H_d} - 1/3|} \Phi \; .
\end{equation}
Here the chiral superfield $\Phi$ is a SM gauge singlet whose $U(1)^{\prime}_{F}$ charge is normalized to $-1/3$ in our model; $q_i$ and $q_j$ are the $U(1)^{\prime}_{F}$ charges of the chiral superfields of the $i$-th and $j$-th generations of quarks and leptons, while $q_H$ (which can be $q_{H_u}$ or $q_{H_d}$) denotes the $U(1)^{\prime}_{F}$ charges of the up- and down-type Higgses. Note that if $q_{i}+q_{j}+q_{H} < 0$ or $q_{H_{u}} + q_{H_{d}} < 1/3$, then instead of the $\Phi$ field, the field $\Phi^{\prime}$ whose $U(1)^{\prime}_{F}$ charge is $1/3$ is used respectively in Eq.(\ref{eqn:genYukawa1}) and (\ref{eqn:genmu}). The terms with non-integer $3|q_i+q_j+q_H|$ and $3|q_{H_u}+q_{H_d}|$ are not allowed in the superpotential given that the number of the flavon fields must be an integer. This thus naturally gives rise to texture-zeros in the Yukawa matrices. Once the scalar component $\phi$ ($\phi^{\prime}$) of the flavon superfield $\Phi$ ($\Phi^{\prime}$) acquires a vacuum expectation value (VEV), the $U(1)^{\prime}_{F}$ symmetry is broken. Upon the breaking of the $U(1)^{\prime}_{F}$ symmetry and the electroweak symmetry, the effective Yukawa couplings can be rewritten as,
\begin{equation}
\label{eqn:genYukLam}
Y_{ij}^{eff} \sim \left(y_{ij} \epsilon \right)^{|q_i+q_j+q_H|},
\end{equation}
and the effective $\mu$ term is given by,
\begin{equation}
\label{eqn:genMuLam}
\mu \sim \left(\mu_{ud} \epsilon \right)^{|q_{H_u}+q_{H_d}-1/3|} <\phi> \; ,
\end{equation}
where $\epsilon \equiv \left( <\phi> / \Lambda \right)^3$ and $\epsilon^{\prime} \equiv \left(<\phi^{\prime}> / \Lambda \right)^3$. By choosing the expansion parameters $\epsilon$ and $\epsilon^{\prime}$ to be of the size of the Cabibbo angle $\sim 0.22$, we have found solutions to the charges that give rise to realistic fermion masses and mixing angles with all Yukawa couplings of order $y_{ij} \sim \mathcal{O}(1)$. Note that although both $\epsilon$ and $\epsilon^{\prime}$ have to be of size $\sim 0.22$, $<\phi>$ and $<\phi^{\prime}>$ need not be the same, owing to the $\mathcal{O}(1)$ coefficients $y_{ij}$ and $\mu_{ud}$.
These charges also suppress the effective $\mu$ term by a factor of $\epsilon^{|q_{H_u}+q_{H_d}-1/3|}$. With $|q_{H_u}+q_{H_d}|$ having a value in the range of $\sim [1, 2]$, the effective $\mu$ term of the size of $100$ GeV can naturally arise.
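As a quick numeric cross-check of Eqs.(\ref{eqn:genYukLam}) and (\ref{eqn:genMuLam}) (a sketch only; the value $q_{H_u}+q_{H_d} = -5/3$ is the one derived later in Section~\ref{sec:massMix}):
\begin{verbatim}
eps = 0.22                       # expansion parameter ~ Cabibbo angle

def suppression(q_i, q_j, q_H):
    # Effective Yukawa suppression eps**|q_i + q_j + q_H|.
    return eps ** abs(q_i + q_j + q_H)

# mu-term suppression for q_Hu + q_Hd = -5/3; since the sum is below
# 1/3, Phi' (charge +1/3) is used, so the exponent is
# |q_Hu + q_Hd + 1/3| = 4/3:
print(eps ** abs(-5/3 + 1/3))    # ~0.133, i.e. mu ~ 0.13 <phi>
\end{verbatim}
This is consistent with the suppression factor $\sim 0.133$ quoted in Section~\ref{sec:mu}.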
We then search for charges that satisfy all anomaly cancellation conditions and at the same time give rise to realistic masses and mixing angles for all SM fermions and a $\mu$ term of the right size. The details are discussed below.
\subsection{Anomaly Cancellation}
By expanding the gauge symmetry of the MSSM with an additional $U(1)^{\prime}_{F}$ symmetry, there are six additional anomaly cancellation conditions. For all Higgs superfields in our model, we assume that they appear in conjugate pairs and therefore do not contribute to the gauge anomalies. As a result, only the charges of the three generations of matter fields are constrained by the anomaly cancellation conditions: \\
\begin{eqnarray}
\label{eqn:su3u1}
[SU(3)]^{2} U(1)^{\prime}_{F} & : & \sum_{i} \left[ 2q_{Q_i} - (-q_{u_i}) - (-q_{d_i}) \right] = 0 \; ,
\\
\label{eqn:su2u1}
[SU(2)_{L}]^{2} U(1)^{\prime}_{F} & : & \sum_{i} \left[ q_{L_i} + 3q_{Q_i}\right] = 0 \; ,
\\
\left[U(1)_{Y}\right]^{2} U(1)^{\prime}_{F} & : &
\sum_{i} \biggl[ 2 \times 3 \times \biggl( \frac{1}{6} \biggr)^2 q_{Q_i} - 3 \times \biggl( \frac{2}{3} \biggr)^2 (-q_{u_i} ) - 3 \times \biggl( -\frac{1}{3}\biggr)^2 (-q_{d_i}) \label{eqn:u1y2u1} \\
& & \qquad \qquad + 2 \times \biggl(-\frac{1}{2}\biggr)^2 q_{L_i} - (-1)^2 (-q_{e_i}) \biggr] = 0 \; , \nonumber
\\
\left[U(1)_{F}^{\prime}\right]^{2} U(1)_{Y} & : &
\displaystyle \sum_{i} \biggl[ 2 \times 3 \times \biggl( \frac{1}{6} \biggr) q_{Q_i}^2 - 3 \times \biggl( \frac{2}{3} \biggr) \times (-q_{u_i})^2 - 3 \times \biggl(-\frac{1}{3} \biggr) (-q_{d_i})^2 \label{eqn:u1yu12}\\
& & \qquad \qquad + 2 \times \biggl(-\frac{1}{2}\biggr)(q_{L_i})^2 - (-1)(-q_{e_i})^2 \biggr] = 0 \; ,
\nonumber \\
\label{eqn:u1grav}
U(1)^{\prime}_{F}-\mbox{gravity} & : &
\displaystyle \sum_{i} \left[ 6q_{Q_i} + 3q_{u_i} + 3q_{d_i} + 2q_{L_i} + q_{e_i} + q_{N_i}\right] = 0 \; ,
\\
\label{eqn:u13}
[U(1)^{\prime}_{F}]^{3} & : & \hspace{-0.05in}
\sum_{i} \left[ 3 \bigl( 2 (q_{Q_i})^3 - (-q_{u_i})^3 - (-q_{d_i})^3\bigr) + 2(q_{L_i})^3 - (-q_{e_i})^3 - (-q_{N_i})^3\right] = 0\;.
\end{eqnarray}
where $q_{Q_{i}}$, $q_{u_{i}}$, $q_{d_{i}}$, $q_{L_{i}}$, $q_{e_{i}}$, and $q_{N_{i}}$ denote, respectively, the charges of the quark doublet, iso-singlet up-type quark, iso-singlet down-type quark, lepton doublet, iso-singlet charged lepton, and right-handed neutrino of the $i$-th generation.
To further reduce the number of parameters, we also assume that the fields $Q_{i}$, $u_{i}^c$, and $e_{i}^c$ have the same $U(1)^{\prime}$ charges, $q_{Q_i} = q_{u_i} = q_{e_i} \equiv q_{t_i}$, and that the fields $L_{i}$ and $d_{i}^c$ have the same $U(1)^{\prime}$ charges, $q_{L_i} = q_{d_i} \equiv q_{f_i}$, as motivated by the $SU(5)$ unification~\cite{ref:SU(5)U(1)}. With these assignments, the above six anomaly cancellation conditions reduce to the following three independent ones,
\begin{eqnarray}
\label{eqn:anomaly1}
\frac{1}{2} \displaystyle \sum_{i} q_{f_i} + \frac{3}{2} \displaystyle \sum_{i} q_{t_i} & = & 0 \; , \\
\label{eqn:anomaly2}
5 \displaystyle \sum_{i} q_{f_i} + 10 \displaystyle \sum_{i} q_{t_i} + \displaystyle \sum_{i} q_{N_i} & = & 0 \; , \\
\label{eqn:anomaly3}
5 \displaystyle \sum_{i} q_{f_i}^3 + 10 \displaystyle \sum_{i} q_{t_i}^3 + \displaystyle \sum_{i} q_{N_i}^3 & = & 0 \; .
\end{eqnarray}
The first two anomaly conditions, Eqs.(\ref{eqn:anomaly1}) and (\ref{eqn:anomaly2}), are satisfied automatically with the following parametrization of the $U(1)^{\prime}_{F}$ charges,
\begin{eqnarray}
\begin{array}{lll} q_{t_1} & = & -\frac{1}{3} q_{f_1} - 2a \; , \nonumber \\
q_{t_2} & = & -\frac{1}{3} q_{f_2} + a + a^{\prime} \; , \nonumber \\
q_{t_3} & = & -\frac{1}{3} q_{f_3} + a - a^{\prime} \; , \end{array} \nonumber \,\,\qquad
\begin{array}{lll} q_{N_1} & = & -\frac{5}{3} q_{f_1} - 2b \; , \nonumber \\
q_{N_2} & = & -\frac{5}{3} q_{f_2} + b + b^{\prime} \; , \nonumber \\
q_{N_3} & = & -\frac{5}{3} q_{f_3} + b - b^{\prime} \; . \end{array}
\label{eqn:param}
\end{eqnarray}
where parameters $a$, $a^{\prime}$, $b$, and $b^{\prime}$ characterize the charge splittings between different generations of $q_{t_i}$ and $q_{N_i}$. The charges $q_{f_i}$ and charge splitting parameters, $a$, $a^{\prime}$, $b$, and $b^{\prime}$, are determined by the cubic equation Eq.(\ref{eqn:anomaly3}) and the observed fermion masses and mixings as shown in the following section.
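One can verify symbolically that this parametrization satisfies Eqs.(\ref{eqn:anomaly1}) and (\ref{eqn:anomaly2}) identically, for any values of the parameters; a short sketch using \texttt{sympy}:
\begin{verbatim}
import sympy as sp

qf1, qf2, qf3, a, ap, b, bp = sp.symbols("qf1 qf2 qf3 a ap b bp")
qf = [qf1, qf2, qf3]
qt = [-qf1/3 - 2*a, -qf2/3 + a + ap, -qf3/3 + a - ap]
qN = [-5*qf1/3 - 2*b, -5*qf2/3 + b + bp, -5*qf3/3 + b - bp]

print(sp.simplify(sum(qf)/2 + 3*sum(qt)/2))           # -> 0
print(sp.simplify(5*sum(qf) + 10*sum(qt) + sum(qN)))  # -> 0
\end{verbatim}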
\subsection{Fermion Masses and Mixings}
\label{sec:massMix}
The $U(1)^{\prime}_{F}$ charges give the following up-type quark Yukawa matrix,
\begin{eqnarray}
\label{eqn:upYukawa}
Y_U & \sim & \left(\begin{array}{lll} (\epsilon)^{|2q_{t_1}+q_{H_u}|} & (\epsilon)^{|q_{t_1}+q_{t_2}+q_{H_u}|} & (\epsilon)^{|q_{t_1}+q_{t_3}+q_{H_u}|} \\ (\epsilon)^{|q_{t_1}+q_{t_2}+q_{H_u}|} & (\epsilon)^{|2q_{t_2}+q_{H_u}|} & (\epsilon)^{|q_{t_2}+q_{t_3}+q_{H_u}|} \\ (\epsilon)^{|q_{t_1}+q_{t_3}+q_{H_u}|} & (\epsilon)^{|q_{t_2}+q_{t_3}+q_{H_u}|} & (\epsilon)^{|2q_{t_3}+q_{H_u}|} \end{array} \right) \; ,
\end{eqnarray}
and the Yukawa matrix of the down-type quarks is given by,
\begin{eqnarray}
\label{eqn:downYukawa}
Y_D & \sim & \left(\begin{array}{ccc} (\epsilon)^{|q_{t_1}+q_{f_1}+q_{H_d}|} & (\epsilon)^{|q_{t_1}+q_{f_2}+q_{H_d}|} & (\epsilon)^{|q_{t_1}+q_{f_3}+q_{H_d}|} \\ (\epsilon)^{|q_{t_2}+q_{f_1}+q_{H_d}|} & (\epsilon)^{|q_{t_2}+q_{f_2}+q_{H_d}|} & (\epsilon)^{|q_{t_2}+q_{f_3}+q_{H_d}|} \\ (\epsilon)^{|q_{t_3}+q_{f_1}+q_{H_d}|} & (\epsilon)^{|q_{t_3}+q_{f_2}+q_{H_d}|} & (\epsilon)^{|q_{t_3}+q_{f_3}+q_{H_d}|} \end{array} \right) \; .
\end{eqnarray}
(It is again to be understood that if the arguments of the absolute values are negative, then $\epsilon^{\prime}$ should be utilized instead of $\epsilon$.) Because the top quark is heavy, we assume that its mass term is generated at the renormalizable level, and thus
\begin{equation}
\label{eq:q1}
2q_{t_3}+q_{H_u} = 0 \; .
\end{equation}
To avoid tree level FCNCs while allowing all three generations of chiral superfields to have different $U(1)^{\prime}_{F}$ charges, we attribute all flavor mixing to the up-type quark and neutrino sectors, with the down-type quark and charged lepton sectors being flavor diagonal. Ideally, the texture-zeros in the down-type quark and charged lepton sectors are generated by the non-integer exponents as determined by the $U(1)^{\prime}_{F}$ charges. Nevertheless, no solution is found that gives diagonal down-type quark and charged lepton sectors and at the same time satisfies all other constraints in the model. We therefore impose an additional $Z_{8}$ symmetry to forbid the off-diagonal elements in the down-type quark and charged lepton mass matrices. The transformation properties of the various chiral superfields are summarized in Table~\ref{tbl:z8Charg}.
\begin{table}[tbh!]
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline
Field & $d_1^c, e_1^c$ & $d_2^c, e_2^c$ & $d_3^c, e_3^c$ & $Q_1, u_1^c$ & $Q_2, u_2^c$ & $Q_3, u_3^c$ & $\nu_1^c$ & $\nu_2^c$ & $\nu_3^c$ & $H_u, H_d$ & $\Phi$ & $\Phi^{\prime}$ \\ \hline
$Z_{8}$ Parity & $-1$ & $i$ & $e^{-\frac{\pi i}{4}}$ & $1$ & $1$ & $1$ & $-1$ & $1$ & $e^{-\frac{\pi i}{4}}$ & $1$ & $e^{\frac{\pi i}{4}}$ & $1$ \\ \hline \hline
\end{tabular}
\caption{The $Z_{8}$ parity of the chiral superfields. $d_i^c$ is the down-type quark singlet chiral superfield with family index $i$, $e_i^c$ is the charged lepton singlet, $Q_i$ is the quark doublet chiral superfield, $u_i^c$ is the up-type quark singlet, $L_i$ is the lepton doublet, and $\nu_i^c$ is the neutrino singlet.
}
\label{tbl:z8Charg}
\end{table}
A realistic mass and mixing spectrum is found with
\begin{eqnarray}
\label{eq:q2}
q_{f_1} - q_{f_3} = 22/3 \; , \qquad & q_{f_2} - q_{f_3} = 11/3 \; , \\
q_{H_u}+q_{H_d} = -5/3 \; \qquad & a^{\prime} = -7/18 \; , \nonumber
\end{eqnarray}
and also
\begin{equation}
\label{eq:q3}
q_{t_3} + q_{f_3} + q_{H_d} = 1/3 \; ,
\end{equation}
which naturally accounts for the mass hierarchy between the top quark and the bottom quark, leading to a prediction of $\tan \beta = v_u/v_d \sim 25$, with
\begin{equation}
\left< H_{u} \right> = v_{u} \; , \quad \left< H_{d} \right> = v_{d} \; ,
\end{equation}
being the VEVs of the neutral scalar components of the Higgs supermultiplets. All elements in the up-type quark Yukawa matrix have $Z_8$ parity $+1$, and thus they are all allowed. The elements in the down-type quark Yukawa matrix have the following transformation properties under the $Z_8$ parity,
\begin{eqnarray}
\label{eqn:downYukawaParity}
P_D & \sim & \left(\begin{array}{lll} 1 & e^{\frac{3\pi}{4}i} & e^{-\frac{\pi}{4}i} \\ e^{\frac{3\pi}{4}i} & 1 & e^{-\frac{\pi}{4}i} \\ e^{\frac{3\pi}{2}i} & e^{\frac{3\pi}{2}i} & 1 \end{array} \right) \; .
\end{eqnarray}
As a result, only the diagonal elements in the down-type quark Yukawa matrix are allowed. The resulting effective Yukawa matrices of the up- and down-type quarks are thus given, in terms of $\epsilon$ or $\epsilon^{\prime}$, as
\begin{eqnarray}
\label{eqn:yu}
Y_U & \sim & \left(\begin{array}{lll} (\epsilon^{\prime})^{22/3} & (\epsilon^{\prime})^{17/3} & (\epsilon^{\prime})^{11/3} \\ (\epsilon^{\prime})^{17/3} & (\epsilon^{\prime})^{4} & (\epsilon^{\prime})^{2} \\ (\epsilon^{\prime})^{11/3} & (\epsilon^{\prime})^{2} & 1 \end{array} \right) \; , \\
\label{eqn:yd}
Y_D & \sim &
\left(\begin{array}{lll} (\epsilon)^{4} & 0 & 0 \\
0 & (\epsilon)^{2} & 0 \\ 0 & 0 & (\epsilon)^{1/3} \end{array} \right) \; .
\end{eqnarray}
Due to the $SU(5)$-inspired charge assignment, we also have $Y_E = Y_{D}^{T}$. Eqs.(\ref{eqn:yu}) and (\ref{eqn:yd}) give rise to realistic masses of the up- and down-type quarks and charged leptons as well as all CKM matrix elements. Since no mixing appears in the down-type quark and the charged lepton sectors, all FCNC constraints are satisfied.
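Numerically, taking $\epsilon = \epsilon^{\prime} \simeq 0.22$, the diagonal entries of Eqs.(\ref{eqn:yu}) and (\ref{eqn:yd}) already display the observed hierarchies up to the $\mathcal{O}(1)$ coefficients (a sketch):
\begin{verbatim}
eps = 0.22
yu_diag = [eps**(22/3), eps**4, 1.0]   # ~ 1.5e-5 : 2.3e-3 : 1   (u:c:t)
yd_diag = [eps**4, eps**2, eps**(1/3)] # ~ 2.3e-3 : 4.8e-2 : 0.60 (d:s:b)
print(yu_diag, yd_diag)
\end{verbatim}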
With the above charge assignment, the $[U(1)^{\prime}_{F}]^{3}$ anomaly cancellation condition is satisfied if
\begin{equation}
\label{eq:q4}
q_{f_3} = \frac{-10240 - 63525b - 13365b^2 - 486b^3 + 9075b^{\prime} - 2970bb^{\prime} - 1485b^{\prime 2} + 486bb^{\prime 2}}{10(304+1485b+243b^2 - 495b^{\prime} + 81b^{\prime 2})} \; ,
\end{equation}
for any $b$ and $b^{\prime}$. The values of $b$ and $b^{\prime}$ are determined by the neutrino sector.
The Dirac, left-handed Majorana, and right-handed Majorana neutrino mass terms are given, respectively, as follows,
\begin{eqnarray}
\label{eqn:neutDiracYukawa}
Y_N & \sim & \left(\begin{array}{lll} (\epsilon)^{|q_{f_1}+q_{N_1}+q_{H_u}|} & (\epsilon)^{|q_{f_1}+q_{N_2}+q_{H_u}|} & (\epsilon)^{|q_{f_1}+q_{N_3}+q_{H_u}|} \\ (\epsilon)^{|q_{f_2}+q_{N_1}+q_{H_u}|} & (\epsilon)^{|q_{f_2}+q_{N_2}+q_{H_u}|} & (\epsilon)^{|q_{f_2}+q_{N_3}+q_{H_u}|} \\ (\epsilon)^{|q_{f_3}+q_{N_1}+q_{H_u}|} & (\epsilon)^{|q_{f_3}+q_{N_2}+q_{H_u}|} & (\epsilon)^{|q_{f_3}+q_{N_3}+q_{H_u}|} \end{array} \right) \; ,
\\
\label{eqn:neutMajLeYukawa}
Y_{LLHH} & \sim & \left(\begin{array}{lll} (\epsilon)^{|2q_{f_1}+2q_{H_u}|} & (\epsilon)^{|q_{f_1}+q_{f_2}+2q_{H_u}|} & (\epsilon)^{|q_{f_1}+q_{f_3}+2q_{H_u}|} \\ (\epsilon)^{|q_{f_2}+q_{f_1}+2q_{H_u}|} & (\epsilon)^{|2q_{f_2}+2q_{H_u}|} & (\epsilon)^{|q_{f_2}+q_{f_3}+2q_{H_u}|} \\ (\epsilon)^{|q_{f_3}+q_{f_1}+2q_{H_u}|} & (\epsilon)^{|q_{f_3}+q_{f_2}+2q_{H_u}|} & (\epsilon)^{|2q_{f_3}+2q_{H_u}|} \end{array} \right)
\; , \\
\label{eqn:neutMajRiYukawa}
Y_{NN} & \sim & \left(\begin{array}{lll} (\epsilon)^{|2q_{N_1}|} & (\epsilon)^{|q_{N_1}+q_{N_2}|} & (\epsilon)^{|q_{N_1}+q_{N_3}|} \\ (\epsilon)^{|q_{N_2}+q_{N_1}|} & (\epsilon)^{|2q_{N_2}|} & (\epsilon)^{|q_{N_2}+q_{N_3}|} \\ (\epsilon)^{|q_{N_3}+q_{N_1}|} & (\epsilon)^{|q_{N_3}+q_{N_2}|} & (\epsilon)^{|2q_{N_3}|} \end{array} \right) \; .
\end{eqnarray}
By choosing
\begin{equation}
\label{eq:q5}
b = 55/8 \; , \qquad b^{\prime} = -347/18 \; ,
\end{equation}
only the Dirac neutrino mass matrix is allowed, since the exponents of all elements in the left-handed and right-handed Majorana neutrino mass matrices are non-integers. Furthermore, elements of the Dirac neutrino mass matrix have the following transformation properties under the $Z_{8}$ parity,
\begin{eqnarray}
\label{eqn:yvParity}
P_N & \sim & \left(\begin{array}{lll} -1 & 1 & 1 \\ -1 & 1 & e^{\frac{\pi}{4}i} \\ -1 & 1 & i \end{array} \right) \; .
\end{eqnarray}
Therefore, only the second column and the (3,1) element are allowed, leading to the following Dirac neutrino mass matrix,
\begin{equation}
\label{eqn:yv}
Y_N \sim \left(\begin{array}{lll} 0 & (\epsilon^{\prime})^{49/3} & (\epsilon)^{85/3} \\ 0 & (\epsilon^{\prime})^{20} & 0 \\ 0 & (\epsilon^{\prime})^{71/3} & 0 \end{array} \right) \; .
\end{equation}
With the following $Y_{ij}$ coefficients of the order of unity, we obtain
\begin{eqnarray}
\label{eqn:yvPrime}
Y_{N} & = & \left(\begin{array}{lll} \; 0 \; & \;(0.8526)^{49}(\epsilon^{\prime})^{21} \; & \; (1.186633)^{85}(\epsilon)^{55/3} \; \\ \; 0 \; & \; (1.02678)^{60}(\epsilon^{\prime})^{19} \; & \; 0 \; \\ \; 0 \; & \; (1.105762e^{\frac{i\pi}{71}})^{71}(\epsilon^{\prime})^{19} \; & \; 0 \; \end{array} \right) \; .
\end{eqnarray}
The matrix $Y_{N} Y_{N}^{\dagger}$ is given by,
\begin{eqnarray}
\label{eqn:yvyvdagger}
Y_{N} Y_{N}^{\dagger} & = & \left(\begin{array}{lll} (\epsilon)^{110/3} & (\epsilon^{\prime})^{40} & -(\epsilon)^{40} \\ (\epsilon^{\prime})^{40} & (\epsilon^{\prime})^{38} & -(\epsilon^{\prime})^{38} \\ -(\epsilon^{\prime})^{40} & -(\epsilon^{\prime})^{38} & (\epsilon^{\prime})^{38} \end{array} \right) \; .
\end{eqnarray}
The resulting neutrino mixing pattern arising from the matrix $Y_{N}$ given above is close to the tri-bimaximal mixing pattern. The three absolute masses are predicted to be,
\begin{equation}
m_{\nu_1} \simeq 0.048214 \; \mbox{eV} \; ,\quad
m_{\nu_2} \simeq 0.048988 \; \mbox{eV} \; ,\quad
m_{\nu_3} \simeq 0 \; .
\end{equation}
These three masses give the following values for the squared mass differences
\begin{equation}
|\Delta m_{atm}^{2}| = 2.40 \times 10^{-3} \; \mbox{eV}^{2} \; , \quad \Delta m_{\odot}^{2} = 7.52 \times 10^{-5} \; \mbox{eV}^{2}\; ,
\end{equation}
which satisfy the neutrino experimental results~\cite{ref:neutinoPhysics, ref:neutrinoTheta13} and predict the inverted mass ordering for the light neutrinos.
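These numbers can be checked directly (with $m_{\nu_3} \simeq 0$, the atmospheric splitting is $m_{\nu_2}^2 - m_{\nu_3}^2$ for the inverted ordering); a quick sketch:
\begin{verbatim}
m1, m2, m3 = 0.048214, 0.048988, 0.0   # eV
print(m2**2 - m1**2)                   # ~7.52e-5 eV^2 (solar)
print(m2**2 - m3**2)                   # ~2.40e-3 eV^2 (atmospheric)
\end{verbatim}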
The $U(1)^{\prime}_{F}$ charges that correspond to the parameters given in Eqs.(\ref{eq:q1}), (\ref{eq:q2}), (\ref{eq:q3}), (\ref{eq:q4}), and (\ref{eq:q5}) are summarized in Table~\ref{tbl:u1Charge}.
\begin{table}[t!]
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline
Field & $d_1^c, L_1$ & $d_2^c, L_2$ & $d_3^c, L_3$ & $Q_1, u_1^c, e_1^c$ & $Q_2, u_2^c, e_2^c$ & $Q_3, u_3^c, e_3^c$ &
\\ \hline
$U(1)_{F}^{\prime}$ charge
& $q_{f_{1}} = \frac{102857}{15585}$
& $q_{f_{2}} = \frac{45712}{15585}$
& $q_{f_{3}} = -\frac{3811}{5195}$
& $q_{t_{1}} = -\frac{42944}{15585}$
& $q_{t_{2}} = -\frac{16969}{15585}$
& $q_{t_{3}} = \frac{14201}{15585}$
&
\\
\hline \hline
Field & $\nu_1^c$ & $\nu_2^c$ & $\nu_3^c$ & $H_u$ & $H_d$ & $\Phi$ & $\Phi^{\prime}$ \\ \hline
$U(1)_{F}^{\prime}$ charge
& $q_{N_{1}} = -\frac{17778}{1039}$
& $q_{N_{2}} = -\frac{21934}{1039}$
& $q_{N_{3}} = \frac{73424}{3117}$
& $q_{H_{u}} = -\frac{28402}{15585}$
& $q_{H_{d}} = \frac{2427}{15585}$
& $q_{\Phi} = -\frac{1}{3}$
& $q_{\Phi^{\prime}} = \frac{1}{3}$ \\ \hline \hline
\end{tabular}
\caption{The $U(1)^{\prime}_{F}$ charges of the chiral superfields.}
\label{tbl:u1Charge}
\end{table}
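As a sketch, the exact rational charges of Table~\ref{tbl:u1Charge} can be checked against the anomaly conditions, Eqs.(\ref{eqn:anomaly1})--(\ref{eqn:anomaly3}), and the charge relations of Eqs.(\ref{eq:q1})--(\ref{eq:q3}), using exact arithmetic:
\begin{verbatim}
from fractions import Fraction as F

qf = [F(102857, 15585), F(45712, 15585), F(-3811, 5195)]
qt = [F(-42944, 15585), F(-16969, 15585), F(14201, 15585)]
qN = [F(-17778, 1039), F(-21934, 1039), F(73424, 3117)]
qHu, qHd = F(-28402, 15585), F(2427, 15585)

assert sum(qf)/2 + 3*sum(qt)/2 == 0                  # Eq. (anomaly1)
assert 5*sum(qf) + 10*sum(qt) + sum(qN) == 0         # Eq. (anomaly2)
assert (5*sum(q**3 for q in qf) + 10*sum(q**3 for q in qt)
        + sum(q**3 for q in qN)) == 0                # Eq. (anomaly3)
assert 2*qt[2] + qHu == 0                            # top mass, Eq. (q1)
assert qt[2] + qf[2] + qHd == F(1, 3)                # bottom, Eq. (q3)
assert qHu + qHd == F(-5, 3)                         # Eq. (q2)
assert qf[0] - qf[2] == F(22, 3) and qf[1] - qf[2] == F(11, 3)
\end{verbatim}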
\subsection{Implications for the $\mu$ problem and Proton Decay}
\label{sec:mu}
Due to the presence of the $U(1)_{F}^{\prime}$ symmetry, the $\mu$ parameter in our model is given by,
\begin{equation}
\label{eqn:muMass}
|\mu|^2 = \frac{1}{2} \biggl[\frac{M_{H_u}^2 - M_{H_d}^2 + (q_{H_u} - q_{H_d})C}{\cos 2 \beta}
- \biggl(M_{z}^2 + M_{H_u}^2 + M_{H_d}^2 + (q_{H_u} + q_{H_d})C \biggr) \biggr] \,\, .
\end{equation}
The parameter $C$ is defined as
\begin{equation}
\label{eqn:varA}
C = g_{z'}^2 (q_{H_u} v^2 \sin^2 \beta + q_{H_d} v^2 \cos^2 \beta - q_{\Phi} u^{2} \cos 2\psi) \; ,
\end{equation}
where $v^{2} = v_{u}^{2} + v_{d}^{2}$ while $u$ and $\psi$ are defined through $\left< \phi \right> = u \sin\psi $ and $\left< \phi^{\prime} \right> = u \cos \psi$,
with $\phi$ and $\phi^{\prime}$ being the scalar component of the chiral superfield $\Phi$ and $\Phi^{\prime}$, respectively. The more detailed description of the Higgs sector of our model is given in Section~\ref{sec:higgs-sector}.
Generally, a delicate cancellation between the Higgs masses and a $\mu$ term of the weak scale is required in order to obtain the observed $M_Z$~\cite{ref:muParaDawson}. This is known as the $\mu$ problem~\cite{ref:muProblem}.
In our $U(1)^{\prime}_{F}$ model, the charge assignment of $H_{u}$ and $H_{d}$ naturally suppresses the $\mu$ term by a factor of $\sim (\epsilon)^{4/3} \sim 0.133$ with respect to $\left< \phi \right> \sim$ TeV scale. This thus naturally gives a $\mu$ term of the order of $\sim \mathcal{O}(100)$ GeV while having $\mu_{ud} \sim \mathcal{O}(1)$.
The $U(1)^{\prime}_{F}$ charge assignment of the chiral superfields also automatically forbids lepton number and baryon number violating operators at the tree level. In general, lepton number and baryon number violating operators
\begin{eqnarray}
\label{eqn:lepVi}
W_{\Delta L = 1} & = & \frac{1}{2} \lambda^{ijk} L_i L_j e_k^c + \lambda^{\prime ijk} L_i Q_j d_k^c + \mu^{\prime i}L_i H_u \; , \\
\label{eqn:baryVi}
W_{\Delta_B = 1} & = & \frac{1}{2} \lambda^{\prime \prime ijk} u_i^c d_j^c d_k^c \; ,
\end{eqnarray}
are allowed by supersymmetry, and they can lead to proton decay processes, {\it e.g.}
\begin{equation}
p \rightarrow e^{+} \pi^0, \; e^{+} K^0, \; \mu^{+} \pi^0, \; \mu^{+} K^0, \; \nu \pi^0, \; \mbox{or} \; \; \nu K^0 \; .
\end{equation}
To avoid those operators, the usual way is to impose the conservation of the R-parity, which is defined as $P_{R} = (-1)^{3(B-L)+2s}$ with $B$ being the baryon number, $L$ being the lepton number, and $s$ being the spin of the particle ~\cite{ref:susyPrimer}. Without imposing the R-parity, in our model the $U(1)^{\prime}_{F}$ charges automatically forbid these operators since these operators are not $U(1)^{\prime}_{F}$ gauge invariant.
\section{Experimental Constraints}
\label{sec:ewpt}
\subsection{Electroweak Precision Constraints}
Because $H_{u}$ and $H_{d}$ are both charged under $U(1)_{F}^{\prime}$, tree-level mass mixing between the $Z$ and $Z^{\prime}$ gauge bosons exists in our model. The $Z-Z^{\prime}$ mixing contributes to the $\rho$ parameter and therefore is very severely constrained~\cite{ref:ewCons, ref:mixZZ, ref:CDFz'Con1}. The kinetic terms of the Higgses lead to mass terms for the gauge bosons, $Z_{F}^{\prime}$, $B_{Y}$, and $W^{3}$, which are associated with $U(1)^{\prime}_{F}$, $U(1)_{Y}$, and the $T^{3}$ generator of the $SU(2)_{L}$ symmetry, respectively,
\begin{eqnarray}
\frac{1}{4} \biggl[ v_u^2 (-g_2 W^{3} + g_1 B_{Y} + 2 q_{H_u} g_{z'} Z_{F}^{\prime})^2 + v_d^2 (g_2 W^3 - g_1 B_{Y} + 2 q_{H_d} g_{z'} Z_{F}^{\prime})^2 \biggr] \; , \nonumber
\end{eqnarray}
and $g_2$, $g_1$, and $g_{z'}$ are the $SU(2)_{L}$, $U(1)_Y$, and $U(1)^{\prime}_{F}$ gauge coupling constants. After diagonalizing the mass matrix for the gauge bosons, we obtain three physical eigenstates which are identified as the photon, the $Z$ boson, and the $Z^{\prime}$ boson
\begin{eqnarray}
\label{eqn:massBoson}
A & = & \frac{1}{\sqrt{g_2^2 + g_1^{2}}}(g_1 W^3 + g_2 B_{Y}) \; , \\
Z^{SM} & = & \frac{1}{\sqrt{g_2^2 + g_1^{2}}}(g_2 W^3 - g_1 B_{Y}) \; , \\
Z & = & Z^{SM} + \delta_{ZZ^{\prime}} Z_{F}^{\prime} \; , \\
Z^{\prime} & = & Z_{F}^{\prime} - \delta_{ZZ^{\prime}} Z^{SM} \; .
\end{eqnarray}
The masses of the physical gauge bosons are given by
\begin{eqnarray}
M_{Z} & = & \sqrt{\frac{g_2^2 + g_1^{2}}{2}} \sqrt{v_u^2 + v_d^2}(1 + O(\delta_{ZZ^{\prime}}^2)) \; , \\
M_{Z^{\prime}} & = & g_{z'} \sqrt{2(q_{H_u}^2 v_u^2 + q_{H_d}^2 v_d^2 + 2 q_{\Phi}^2 u_{\phi}^2)}(1 + O(\delta_{ZZ^{\prime}}^2)) \; ,
\end{eqnarray}
and the term
\begin{equation}
\delta_{ZZ^{\prime}} = \frac{\Delta M_{ZZ^{\prime}}^2}{M_{Z^{\prime}}^2 - M_{Z}^2} \; , \quad \mbox{with} \quad
\Delta M_{ZZ^{\prime}}^2 = g_{z'} \sqrt{g_2^2 + g_1^{2}}(q_{H_d} v_d^2 - q_{H_u} v_u^2) \; ,
\end{equation}
is the Higgs-induced mass mixing between $Z$ and $Z^{\prime}$, which is severely constrained by the precision electroweak data.
The electroweak precision measurements indicate that the $\rho$ parameter is very close to $1$, and the experimentally allowed deviation, $\Delta \rho$, is bounded by
\begin{equation}
|\Delta \rho| = \biggl| \frac{(\Delta M_{ZZ^{\prime}}^2)^2}{(M_{Z^{\prime}}^2 - M_{Z}^2)M_{Z}^2} \biggr| < 0.00023 \; .
\end{equation}
In the $U(1)^{\prime}_{F}$ model, $\Delta \rho$ is given by,
\begin{equation}
\Delta \rho = \frac{4 g_{z'}^2 (q_{H_d} - q_{H_u} \tan^2 \beta)^2}{\biggl(\frac{M_{Z^{\prime}}^2}{M_{Z}^2} - 1\biggr)(g_2^2 + g_1^{2})(1 + \tan^2 \beta)^2} \; .
\end{equation}
The experimental limit on $\Delta \rho$ then is translated into the following constraints on the $U(1)^{\prime}$ gauge coupling $g_{z'}$ and the $Z^{\prime}$ mass with $\tan \beta = 25$ in our model,
\begin{equation}
g_{z'} < \sqrt{\frac{0.00023}{24.213637}\biggl( \frac{M_{Z^{\prime}}^2}{M_{Z}^2} - 1 \biggr)} \; .
\end{equation}
As shown in Figure~\ref{fig:couplingMass}, for a relatively light $Z^{\prime}$ mass of 600 GeV, the gauge coupling must satisfy $g_{z'} \lesssim 0.02$ in order to meet the precision electroweak constraint on the $\rho$ parameter. With increasing $Z^{\prime}$ mass, the maximally allowed value for $g_{z'}$ increases, to a very good approximation, linearly.
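A short numeric sketch of this bound (using $M_Z \simeq 91.19$ GeV):
\begin{verbatim}
import math

M_Z = 91.1876                                  # GeV
def gzp_max(M_Zp):
    return math.sqrt(0.00023 / 24.213637 * ((M_Zp / M_Z)**2 - 1.0))

print(gzp_max(600.0))    # ~0.020
print(gzp_max(1000.0))   # ~0.034; grows roughly linearly with M_Z'
\end{verbatim}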
\begin{figure}[t!]
\includegraphics[scale=0.8, angle = 0, width = 80mm, height = 50mm]{couplingMass3.pdf}
\caption{The maximally allowed value of the $U(1)^{\prime}_{F}$ gauge coupling, $g_{z'}$, as a function of the mass of the $Z^{\prime}$ gauge boson, $M_{Z^{\prime}}$, derived from the constraints on the $\rho$ parameter.}
\label{fig:couplingMass}
\end{figure}
We note that, in addition to the Higgs-induced contribution discussed above, the $Z-Z^{\prime}$ mixing can also be generated by an explicit kinetic mixing term in the Lagrangian and by the renormalization group evolution ~\cite{ref:rgeLoop}. There thus exists possible cancellation among these contributions to the $\rho$ parameter, allowing the constraints on $g_{z'}$ and $M_{Z^{\prime}}$ to be loosened.
\subsection{Constraints from Flavor-Changing Neutral Currents}
Following the formalism in ~\cite{ref:FCNC}, the neutral current Lagrangian in the gauge eigenstates can be written as
\begin{equation}
\mathcal{L}_{NC}=-eJ^{\mu}_{\mbox{\tiny em}}A_{\mu} - g_2 J^{\mu} Z_{\mu}^{SM}
- g_{z'} J^{\prime \; \mu} Z_{F \; \mu}^{\prime}\;,
\end{equation}
where $e = g_1 g_2/\sqrt{g_1^2 + g_2^2}$ and $\cos \theta_w = g_2/\sqrt{g_1^2 + g_2^2} \,\, , \sin \theta_w = g_1/\sqrt{g_1^2 + g_2^2}$ and $\theta_w$ is the weak mixing angle in SM. The currents are defined as
\begin{eqnarray}
J^{\mu}&=&\sum_i \bar{\psi}_i \gamma_{\mu}
\left[ \epsilon_{i}^{\psi_L} P_L + \epsilon_{i}^{\psi_R} P_R\right]\psi_i\;,\label{J1}\\[1ex]
J^{\prime \; \mu}&=&\sum\limits_{i,j} \bar{\psi}_i \gamma_{\mu}
\left[ \epsilon_{ij}^{\prime \; \psi_L} P_L
+ \epsilon_{ij}^{\prime \; \psi_R} P_R\right]\psi_j\;,\label{J2}
\end{eqnarray}
where the summations are taken over all quarks and leptons, $\psi_{i,j}$, and $P_{R,L}=\frac{1}{2} (1\pm\gamma_5)$ are the projection operators. The gauge coupling constants of the SM Z boson are given by
\begin{equation}
\epsilon_{i}^{\psi_R}=-\sin^2\theta_wQ_{e_{i}} \;, \qquad
\epsilon_{i}^{\psi_L}=t_3^i-\sin^2\theta_w Q_{e_{i}} \;,
\end{equation}
where $t_3^i$ and $Q_{e_{i}}$ are the third component of the weak isospin and the electric charge of fermion $i$, respectively. The gauge coupling constants to $Z^{\prime}$ are denoted by $\epsilon_{ij}^{\prime \; \psi_{L,R}}$. Flavor-changing neutral currents immediately arise if the matrices $\epsilon^{\prime \; \psi_{L,R}}$ are non-diagonal. FCNCs can also be induced by the quark and lepton mixing, if $\epsilon^{\prime \; \psi_{L,R}}$ are diagonal but have non-universal elements. The fermion Yukawa matrices $Y_{\psi}$ in the weak eigenstate basis can be diagonalized by unitary matrices $V^{\psi}_{R,L}$
\begin{equation}
Y_{\psi,diag}=V^{\psi}_R\,Y_{\psi}\,{V_L^{\psi}}^{\dagger} \; .
\end{equation}
Hence, the $Z^{\prime}$ coupling matrices in the fermion mass eigenstate basis are
\begin{equation}
B^{\psi_L}\equiv
\left(V_L^{\psi} \epsilon^{\prime \; \psi_L} {V_L^{\psi}}^{\dagger}\right)\;,
\qquad \qquad
B^{\psi_R}\equiv
\left(V_R^{\psi} \epsilon^{\prime \; \psi_R} {V_R^{\psi}}^{\dagger}\right)\;.
\label{Bij}
\end{equation}
The currents $J^{\mu}$ and $J^{\prime}$ in the mass eigenstates of the fermions can then be written as
\begin{eqnarray}
J_{m}^{\mu}&=&\sum_i \bar{\psi}_{L_i}^{m} \gamma_{\mu} \epsilon_i^{\psi_L} \psi_{L_i}^{m} + \bar{\psi}_{R_i}^{m} \gamma_{\mu} \epsilon_i^{\psi_R} \psi_{R_i}^{m} \;,\label{Jm1}\\[1ex]
J_{m}^{\prime \; \mu}&=&\sum\limits_{i,j} \bar{\psi}_{L_i}^{m} \gamma_{\mu}
B_{ij}^{\psi_L} \psi_{L_j}^{m} + \bar{\psi}_{R_i} \gamma_{\mu} B_{ij}^{\psi_R} \psi_{R_j}^{m}\; , \label{Jm2}
\end{eqnarray}
and the neutral current interaction in the bases of the mass eigenstates of the fermions, $Z$ and $Z^{\prime}$ as
\begin{equation}
\mathcal{L}_{NC}^{m}= - g_2 \left[\cos\theta J_{m}^{\mu}
+ \frac{g_{z'}}{g_2}\sin\theta J_{m}^{\prime \; \mu}\right] Z_{\mu}
- g_2\left[\frac{g_{z'}}{g_2}\cos\theta J_{m}^{\prime \; \mu}
- \sin\theta J_{m}^{\mu}\right] Z_{\mu}^{\prime}\;,
\label{LZ}
\end{equation}
with $\theta$ being the $Z-Z^{\prime}$ mixing angle and $\sin \theta \sim \delta_{ZZ^{\prime}}$.
The unitary matrices $V^{\psi}_{L}$ are constrained by the CKM matrix in the left-handed quark sector through the relation
\begin{equation}
V_{CKM} =
V_L^u {V_L^d}^{\dagger}\;,
\label{BCKM}
\end{equation}
and equivalently for the lepton sector by the PMNS matrix,
\begin{equation}
V_{PMNS} = {V_{L}^{\nu}}^{\dagger} V_{L}^{e} \; .
\end{equation}
The flavor-changing neutral current interactions are severely constrained by various experiments~\cite{ref:flavorPhysics}, such as rare meson decays and neutral meson mixings, in particular $D^0-\bar{D}^0$, $K^0-\bar{K}^0$, and $B^0-\bar{B}^0$ mixing. Generally, the mass splitting $\Delta m_P$ between the neutral mesons $P^0$ and $\bar{P}^0$ induced by the interaction shown in Eq.~(\ref{LZ}) can be approximately written as (neglecting the negligible $Z-Z^{\prime}$ mixing effect)
\begin{equation}
\Delta m_P=\left(\frac{g_{z'}}{M_{Z^{\prime}}}\right)^2 m_P F_P^2\left\{
\frac{1}{3}\mbox{Re}\left[\left(B_{ij}^{q_L}\right)^2
+\left(B_{ij}^{q_R}\right)^2\right]-
\left[\frac{1}{2}+
\frac{1}{3}\left(\frac{m_P}{m_{q_i}+m_{q_j}}\right)^2\right]
\mbox{Re}\left(B_{ij}^{q_L}B_{ij}^{q_R}\right)\right\}\;,
\end{equation}
where $m_P$ and $F_P$ are the mass and decay constant of the meson, respectively.
Since the mass eigenstates and the gauge eigenstates of the down-type quarks and the charged lepton sector are the same, there are no FCNCs in the down-type quark and charged-lepton sectors through the $Z^{\prime}$ exchange at the tree level. For the up-type quark sector, the only available experimental constraint is for the first two families, and the most stringent one is from $D^0-\bar{D}^0$ mixing. In our model, the $B^{\psi_L}$ matrix is
\begin{equation}
B^{u_L}\equiv
\left(V_{CKM} \epsilon^{\prime \; u_L} V_{CKM}^{\dagger}\right)\;,
\label{BijCKM}
\end{equation}
and $B^{u_R} = - B^{u_L}$ since $u_L$ and $u_R$ carry opposite $U(1)^{\prime}_F$ charges. Choosing the standard parametrization of the CKM matrix, we have
\begin{eqnarray}
V_{CKM} = \left(\begin{array}{ccc} C_{12}C_{13} & S_{12}C_{13} & S_{13}e^{-i\delta_{13}} \\ -S_{12}C_{23} - C_{12}S_{23}S_{13}e^{i\delta_{13}} & C_{12}C_{23} - S_{12}S_{23}S_{13}e^{i\delta_{13}} & S_{23}C_{13} \\ S_{12}S_{23} - C_{12}C_{23}S_{13}e^{i\delta_{13}} & -C_{12}S_{23} - S_{12}C_{23}S_{13}e^{i\delta_{13}} & C_{23}C_{13} \end{array}\right) \; ,
\end{eqnarray}
where $C_{ij} \equiv \cos \theta_{ij}$, $S_{ij} \equiv \sin \theta_{ij}$, and $\theta_{ij}$ are mixing angles. As a result of the $U(1)^{\prime}$ symmetry, our model can accommodate the experimentally observed values, $\theta_{12} = 13.04^{\circ}$, $\theta_{13} = 0.201^{\circ}$, and $\theta_{23} = 2.38^{\circ}$. The CP phase in the CKM matrix is not determined in our model. In the following we assume $\delta_{13} =68.75^{\circ}$.
The matrix $\epsilon^{\prime \; u_L} $ is then given by
\begin{eqnarray}
\epsilon^{\prime \; u_L} = \left(\begin{array}{ccc} -42944/15585 & 0 & 0 \\ 0 & -16969/15585 & 0 \\ 0 & 0 & 14201/15585 \end{array}\right) \; ,
\end{eqnarray}
and the B matrix is then
\begin{eqnarray}
\label{matrx:BuL}
B^{u_L} = \left(\begin{array}{ccc} -2.6705 & 0.366226 - 0.000486337i & -0.590674 - 0.0117013i \\ 0.366226 + 0.000486337i & -1.1701 & 0.220285 + 0.00127662i \\ -0.590674 + 0.0117013i & 0.220285 - 0.00127662i & 0.769274 \end{array}\right) \; ,
\end{eqnarray}
which leads to the following contribution to the mass splitting due to the $Z^{\prime}$ exchange,
\begin{eqnarray}
\Delta m_D & = & \left(\frac{g_{z'}}{M_{Z^{\prime}}}\right)^2 m_D F_D^2\left\{
\frac{1}{3}\mbox{Re}\left[\left(B_{12}^{u_L}\right)^2
+\left(B_{12}^{u_R}\right)^2\right]-
\left[\frac{1}{2}+
\frac{1}{3}\left(\frac{m_D}{m_{u}+m_{c}}\right)^2\right]
\mbox{Re}\left(B_{12}^{u_L}B_{12}^{u_R}\right)\right\} \nonumber \\
& = & 42.9483 \; g_{z'}^2 \left(\frac{1 \; \mbox{TeV}}{M_{Z^{\prime}}}\right)^2 \; \mbox{eV} \; .
\end{eqnarray}
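As a rough numerical cross-check (not part of the original analysis), the short Python/NumPy sketch below rebuilds $B^{u_L} = V_{CKM}\,\epsilon^{\prime \; u_L}\,V_{CKM}^{\dagger}$ from the quoted mixing angles and evaluates $\Delta m_D$; the inputs $F_D \simeq 0.3$ GeV, $m_u \simeq 2.4$ MeV and $m_c \simeq 1.27$ GeV are our assumptions, not values quoted in the text:
\begin{verbatim}
import numpy as np

d2r = np.pi / 180.0
t12, t13, t23, d13 = 13.04*d2r, 0.201*d2r, 2.38*d2r, 68.75*d2r
c12, s12 = np.cos(t12), np.sin(t12)
c13, s13 = np.cos(t13), np.sin(t13)
c23, s23 = np.cos(t23), np.sin(t23)
ph = np.exp(1j * d13)

V = np.array([
    [c12*c13,                    s12*c13,                   s13/ph],
    [-s12*c23 - c12*s23*s13*ph,  c12*c23 - s12*s23*s13*ph,  s23*c13],
    [s12*s23 - c12*c23*s13*ph,  -c12*s23 - s12*c23*s13*ph,  c23*c13]])

eps = np.diag([-42944.0, -16969.0, 14201.0]) / 15585.0
B_L = V @ eps @ V.conj().T    # should match Eq. (matrx:BuL) up to rounding
B12L = B_L[0, 1]
B12R = -B12L                  # B^{u_R} = -B^{u_L}

mD, fD = 1.8645, 0.30         # GeV; f_D value assumed
mu_q, mc_q = 0.0024, 1.27     # GeV; assumed quark masses
brace = (np.real(B12L**2 + B12R**2) / 3.0
         - (0.5 + (mD / (mu_q + mc_q))**2 / 3.0) * np.real(B12L * B12R))
dmD_eV = mD * fD**2 * brace * 1e-6 * 1e9   # for g_z' = 1, M_Z' = 1 TeV
print(dmD_eV)                        # ~ 42, vs 42.9483 quoted above
print(np.sqrt(1.56e-5 / 42.9483))    # ~ 6.03e-4 bound on g_z'/M_Z'[TeV]
\end{verbatim}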
The experimental constraint on the mass splitting of the $D^0-\bar{D}^0$ system is $|\Delta m_D| \leq 1.56 \times 10^{-5}$ eV~\cite{ref:pdgDDbar}. Therefore, we conservatively obtain the constraint
\begin{equation}
\frac{g_{z'}}{M_{Z^{\prime}} \, (\mbox{in TeV})} < 6.027 \times 10^{-4} \; ,
\end{equation}
which is much more stringent than the constraints from the precision electroweak data. Hence, taking $g_{z'} = 0.02$, the mass of the $Z^{\prime}$ has to satisfy $M_{Z^{\prime}} > 33.18$ TeV. Similarly to the case of $Z-Z^{\prime}$ mixing, there exist additional contributions from the sparticle sector to the $D$ meson mixing which can potentially cancel the contribution due to the $Z^{\prime}$ exchange~\cite{ref:FCNCSUSY}, allowing the constraints on $g_{z'}$ and $M_{Z^{\prime}}$ to be loosened. In addition, the contributions from long range effects in the standard model, which are intrinsically non-perturbative and thus carry large uncertainties, can also potentially relax the constraints once included.
We also note that in our model the contributions to the D meson leptonic and semi-leptonic decay modes through $Z^{\prime}$ mediation are negligible.
\section{Collider Signatures}
\label{sec:collider}
The $Z^{\prime}$ gauge boson associated with the $U(1)^{\prime}_{F}$ breaking has a mass on the order of a TeV, and therefore it can be produced at the collider experiments. A recent updated limit on our model from $Z^{\prime}$ searches at CDF can be found in Ref.~\cite{ref:CDFlimit}.
With the assumption that the contribution to the $D^0-\bar{D}^0$ mixing from the $Z^{\prime}$ exchange can be compensated by other new physics such as SUSY, we can relax the severe bound from the $D^0-\bar{D}^0$ mixing, allowing an $M_{Z^{\prime}}$ of 1 TeV to be viable. Even though the $U(1)^{\prime}_{F}$ gauge coupling constant is constrained to be small in order to satisfy the precision electroweak constraints, due to the large $U(1)^{\prime}_{F}$ charges of the three generations of chiral superfields, the $Z^{\prime}$ may still be discovered through resonances in the di-muon or di-electron decay channels. Fig.~\ref{fig:xSectDiLeptonLHC} shows the cross sections of $Z^{\prime} \rightarrow e^{+}e^{-}$ and $Z^{\prime} \rightarrow \mu^{+}\mu^{-}$ at the LHC as a function of the $Z^{\prime}$ mass with the center of mass energy $\sqrt{s} = 14$ TeV and $g_{z'} = 0.02$.
\begin{figure}[t!]
\includegraphics[scale=0.8, angle = 90, width = 80mm, height = 50mm]{xSectDiLeptonLHC.pdf}
\caption{Cross Sections of $Z^{\prime} \rightarrow e^{+}e^{-}$ and $Z^{\prime} \rightarrow \mu^{+}\mu^{-}$ for different values of the $Z^{\prime}$ mass at the LHC with the center of mass energy $\sqrt{s} = 14$ TeV and $g_{z'} = 0.02$.}
\label{fig:xSectDiLeptonLHC}
\end{figure}
Due to the fact that the $U(1)^{\prime}$ charges of the superfields are generation dependent, in addition to the flavor conserving $Z^{\prime}$ decay channels, some flavor violating decay channels are also allowed in our model. These are discussed in the following.
\subsection{Flavor Conserving Decays}
Since the three generations of superfields have non-universal $U(1)^{\prime}$ charges, the branching fractions among the different $Z^{\prime}$ leptonic decay channels are very distinguishable due to the large charge splittings. The $Z^{\prime}$ partial decay widths for the different channels at the tree level are given by,
\begin{equation}
\Gamma(Z^{\prime} \rightarrow \psi_i \bar{\psi}_j) = \frac{N_c M_{Z^{\prime}}}{24\pi}\left(|-g_{z'} \cos \theta B_{ij}^{\psi_L} + g_1 \sin \theta \delta_{ij} \epsilon_{i}^{\psi_L}|^2 + |-g_{z'} \cos \theta B_{ij}^{\psi_R} + g_1 \sin \theta \delta_{ij} \epsilon_{i}^{\psi_R}|^2\right),
\label{zDeWid}
\end{equation}
in which the color factor $N_{c}$ is 1 for leptons and 3 for quarks.
The contributions of the flavor conserving processes to the $Z^{\prime}$ decay width through $Z-Z^{\prime}$ mixing are relatively small. Therefore, the $Z^{\prime}$ decay width can be estimated by
\begin{equation}
\Gamma(Z^{\prime} \rightarrow \psi_i \bar{\psi}_j) = \frac{N_c g_{z'}^2 M_{Z^{\prime}}}{24\pi}\left(|B_{ij}^{\psi_L}|^2 + |B_{ij}^{\psi_R}|^2\right) \; ,
\label{zDeWidNew}
\end{equation}
and the ratios of branching fractions for the flavor conserving leptonic decay channels with respect to the $\tau$ decay channel are predicted to be
\begin{eqnarray}
Br(Z^{\prime} \rightarrow e^{+}e^{-}) & : & Br(Z^{\prime} \rightarrow \mu^{+} \mu^{-}):Br(Z^{\prime} \rightarrow \tau^{+} \tau^{-}) \\
& \sim & (q_{L_1}^2 + q_{e_1}^2):(q_{L_2}^2 + q_{e_2}^2):(q_{L_3}^2 + q_{e_3}^2) \nonumber \\
& \sim & 37.378:7.153:1 \; . \nonumber
\end{eqnarray}
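The sketch below illustrates how such ratios follow from Eq.~(\ref{zDeWidNew}). The $B_{11}^{u_L}$ entry is taken from Eq.~(\ref{matrx:BuL}), while the $\tau$-channel couplings are placeholders (the actual lepton charges are fixed elsewhere in the model), so only the structure, not the quoted numbers, should be read off:
\begin{verbatim}
import numpy as np

def gamma_zp(g_zp, M_zp, BL, BR, Nc):
    """Tree-level width for Z' -> psi_i psibar_j, Eq. (zDeWidNew)."""
    return Nc * g_zp**2 * M_zp * (abs(BL)**2 + abs(BR)**2) / (24 * np.pi)

g_zp, M_zp = 0.02, 1000.0   # benchmark point of the text (GeV)
# B_{11}^{u_L} from Eq. (matrx:BuL); tau entries below are placeholders:
G_uu  = gamma_zp(g_zp, M_zp, BL=-2.6705, BR=2.6705, Nc=3)
G_tau = gamma_zp(g_zp, M_zp, BL=1.0, BR=1.0, Nc=1)
print(G_uu, G_tau, G_uu / G_tau)
\end{verbatim}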
The branching fractions for different $Z^{\prime}$ hadronic decay channels relative to the $Z^{\prime} \rightarrow \tau^{+} \tau^{-}$ channel are
\begin{eqnarray}
Br(Z^{\prime} \rightarrow u \bar{u}) : Br(Z^{\prime} \rightarrow d \bar{d}):Br(Z^{\prime} \rightarrow c \bar{c}):Br(Z^{\prime} \rightarrow s \bar{s}):Br(Z^{\prime} \rightarrow t \bar{t}) \qquad \\
\qquad \qquad : Br(Z^{\prime} \rightarrow b \bar{b}):Br(Z^{\prime} \rightarrow \tau^{+} \tau^{-}) \nonumber \\
\sim 3(2|B_{11}^{u_L}|^2):3(q_{Q_1}^2 + q_{d_1}^2):3(2|B_{22}^{u_L}|^2):3(q_{Q_2}^2 + q_{d_2}^2):3(2|B_{33}^{u_L}|^2):3(q_{Q_3}^2 + q_{d_3}^2):(q_{L_3}^2 + q_{e_3}^2) \nonumber \\
\sim 31.271:112.134:6.003:21.459:2.595:3:1 \; . \nonumber
\end{eqnarray}
Due to the large right-handed neutrino charges, the branching fraction for the $Z^{\prime}$ invisible decay is enhanced in our model,
\begin{equation}
\frac{Br(Z^{\prime} \rightarrow ~\mbox{invisible})}{Br(Z^{\prime} \rightarrow \tau^{+} \tau^{-})} = 983.62 \; .
\end{equation}
\subsection{Flavor Violating Decays}
In addition to the readily distinguishable flavor conserving $Z^{\prime}$ decay channels, the $Z^{\prime}$ in our model also has flavor violating decay modes. Specifically, these are the decay modes into different generations of up-type quarks, and the branching fractions of these decays are
\begin{eqnarray}
Br(Z^{\prime} \rightarrow u \bar{c}, \bar{u} c):Br(Z^{\prime} \rightarrow u \bar{t}, \bar{u} t):Br(Z^{\prime} \rightarrow c \bar{t}, \bar{c} t):Br(Z^{\prime} \rightarrow \tau^{+}\tau^{-}) \\
\sim 1.176:3.061:0.426:1 \; . \nonumber
\end{eqnarray}
The branching fractions of some of the flavor violating $Z^{\prime}$ decay modes are comparable to those of the flavor conserving processes. Thus if the quark flavors can be identified, we may be able to detect these flavor violating processes~\cite{ref:Z'FVTop}.
In addition to the flavor violating $Z^{\prime}$ decays, top quark and charm quark rare decays are also allowed, and thus they can potentially distinguish our model from flavor conserving $U(1)^{\prime}$ models. For example, the rare decays $t \rightarrow q l_i \bar{l}_i$ and $t \rightarrow q \nu_i \bar{\nu}_i$ (where $q$ can be any up-type quark except for the top quark) are possible at the tree level through $Z^{\prime}$ exchange in our model. Using the formulae given in the appendix, the branching fraction $Br(t \rightarrow u e^{+}e^{-})$ (which is relatively large among the rare decay modes) is roughly $10^{-12} \sim 10^{-13}$, which is too small to be observable. While the branching fraction of the decay mode $t \rightarrow u \nu \bar{\nu}$ can be a factor of $10 \sim 100$ bigger, it is still unobservable at the current experiments. These branching fractions, which scale as $g_{z'}^{4}$, are highly suppressed due to the small overall $U(1)^{\prime}$ gauge coupling $g_{z'}$, for which we choose $g_{z'} = 0.02$ as a benchmark point in this paper. If $g_{z'}$ can be allowed to increase, while still satisfying all of the electroweak precision measurements and FCNC constraints, by including the renormalization group contribution to $Z-Z^{\prime}$ mixing and superpartner contributions to $D^{0}-\overline{D}^{0}$ mixing, these flavor violating decays may become accessible to the current experiments. (An existing phenomenological study of these flavor violating decays can be found in Ref.~\cite{ref:topRareDec}.)
\section{Sparticle mass spectrum}
\label{sec:susyMass}
Since we extend the MSSM by a generation dependent $U(1)_F^{\prime}$ symmetry, the mass spectrum of the sparticles and their associated phenomenology in our model can be different from those of the usual MSSM. Below we show the electroweak and $U(1)_F^{\prime}$ symmetry breaking conditions, the mass spectrum of the sparticles, as well as the $\beta$ functions of the gauge and Yukawa couplings, which are modified due to the presence of $U(1)_{F}^{\prime}$. The neutralino sector differs most significantly from that of the MSSM; this is illustrated by a numerical example.
\subsection{Analytical Result}
\subsubsection{Higgs Sector}
\label{sec:higgs-sector}
The scalar potential giving rise to the masses of the scalar components, ($h_u$, $h_d$, $\phi$, $\phi^{\prime}$), of the $H_u$, $H_d$, $\Phi$, $\Phi^{\prime}$ fields from the superpotential (Eq.~(\ref{eqn:sPoten})) is given by,
\begin{eqnarray}
V & = & (M_{H_u}^2 + \mu^2)|h_u|^2 + (M_{H_d}^2 + \mu^2)|h_d|^2 + (M_{\Phi}^2+\mu^{\prime 2})|\phi|^2 + (M_{\Phi^{\prime}}^2+\mu^{\prime 2})|\phi^{\prime}|^2
\\
& & + B \mu (h_u h_d + h.c.) + B^{\prime} \mu^{\prime}(\phi \phi^{\prime} + h.c.) + \frac{1}{8} (g_1^2 + g_2^2)\left(|h_u|^2 - |h_d|^2\right)^2
\nonumber \\
& & + \frac{1}{2} g_2^2 |h_u^{\dagger} h_d|^2 + \frac{1}{2} g_{z'}^2 \left(q_{H_u}|h_u|^2 + q_{H_d}|h_d|^2 + q_{\Phi}|\phi|^2 + q_{\Phi^{\prime}}|\phi^{\prime}|^2\right)^2 \; ,
\nonumber
\end{eqnarray}
where the last term is the D term associated with $U(1)_{F}^{\prime}$.
Minimizing the scalar potential, we obtain,
\begin{eqnarray}
M_{H_u}^2 + |\mu|^2 - B \mu \cot \beta - \frac{1}{2}M_{Z^{\prime}}^2 \cos 2\beta + q_{H_u}C = 0 \, \, ,
\label{eq:min1} \\
M_{H_d}^2 + |\mu|^2 - B \mu \tan \beta + \frac{1}{2}M_{Z^{\prime}}^2 \cos 2\beta + q_{H_d}C = 0 \, \, ,
\label{eq:min2} \\
M_{\Phi}^2 + |\mu^{\prime}|^2 + B^{\prime} \mu^{\prime} \cot \psi + q_{\Phi}C = 0 \, \, , \\
M_{\Phi^{\prime}}^2 + |\mu^{\prime}|^2 + B^{\prime} \mu^{\prime} \tan \psi + q_{\Phi^{\prime}}C = 0 \, \, ,
\label{eq:min3}
\end{eqnarray}
leading to
\begin{eqnarray}
|\mu|^2 & = & \frac{1}{2}
\Big{[} \frac{ M_{H_u}^2 - M_{H_d}^2 + (q_{H_u} - q_{H_d}) C }{\cos 2\beta}
- \Big{(} M_{Z}^2 + M_{H_u}^2 + M_{H_d}^2 + (q_{H_u} + q_{H_d}) C \Big{)} \Big{]} \,,
\label{eqn:miniNew1} \\
B & = & \frac{1}{2\mu} \Big{[} \Big{(} M_{H_u}^2 + M_{H_d}^2 + 2|\mu|^2 + ( q_{H_u} + q_{H_d} )
C \Big{)} \sin 2\beta \Big{]} \, ,
\label{eqn:miniNew2} \\
|\mu^{\prime}|^2 & = & \frac{1}{2} \Big{[} \frac{M_{\Phi}^2 - M_{\Phi^{\prime}}^2 + (q_{\Phi} - q_{\Phi^{\prime}} ) C}{\cos 2\psi} - \Big{(} M_{\Phi}^2 + M_{\Phi^{\prime}}^2 + ( q_{\Phi} + q_{\Phi^{\prime}} ) C \Big{)} \Big{]} \, ,
\label{eqn:miniNew3} \\
B^{\prime} & = & -\frac{1}{2\mu^{\prime}} \Big{[} \Big{(} M_{\Phi}^2 + M_{\Phi^{\prime}}^2 + 2 |\mu^{\prime}|^2 + (q_{\Phi} + q_{\Phi^{\prime}}) C \Big{)} \sin 2 \psi \Big{]} \, ,
\label{eqn:miniNew4}
\end{eqnarray}
where the parameter $C$ is defined in Eq.~(\ref{eqn:varA}).
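A minimal numerical sketch of Eqs.~(\ref{eqn:miniNew1}) and (\ref{eqn:miniNew2}); all input values below are placeholders, not outputs of the model's RG running:
\begin{verbatim}
import numpy as np

def solve_mu_B(MHu2, MHd2, MZ2, qHu, qHd, C, tanb):
    """Solve Eqs. (eqn:miniNew1)-(eqn:miniNew2) for mu and B."""
    b = np.arctan(tanb)
    c2b, s2b = np.cos(2 * b), np.sin(2 * b)
    mu2 = 0.5 * ((MHu2 - MHd2 + (qHu - qHd) * C) / c2b
                 - (MZ2 + MHu2 + MHd2 + (qHu + qHd) * C))
    mu = np.sqrt(mu2)                  # take mu real and positive
    B = (MHu2 + MHd2 + 2 * mu2 + (qHu + qHd) * C) * s2b / (2 * mu)
    return mu, B

# Placeholder inputs (all masses squared and C in GeV^2):
print(solve_mu_B(MHu2=-8.0e5, MHd2=1.0e5, MZ2=91.19**2,
                 qHu=-1.0, qHd=1.0, C=1.0e4, tanb=25.0))
\end{verbatim}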
\subsubsection{Neutralino and Chargino sector}
The (Majorana) mass matrix of the neutralinos $(\tilde{B}, \tilde{W}^3, \tilde{H}_d^0, \tilde{H}_u^0, \tilde{B}^{\prime}, \tilde{\Phi}, \tilde{\Phi}^{\prime})$ is given by
\begin{eqnarray}
(\mathcal{M})^{(0)} = \left(\begin{array}{ccccccc}
\tilde{M}_1 & 0 & -v_d g_1/\sqrt{2} & v_u g_1/\sqrt{2} & 0 & 0 & 0 \\
0 & \tilde{M}_2 & v_d g_2 /\sqrt{2} & -v_u g_2/\sqrt{2} & 0 & 0 & 0 \\
-v_d g_1/\sqrt{2} & v_d g_2/\sqrt{2} & 0 & -\mu &\sqrt{2} v_d q_{H_d}g_{z'} & 0 & 0 \\
v_u g_1/\sqrt{2} & -v_u g_2/\sqrt{2} & -\mu & 0 & \sqrt{2} v_u q_{H_u}g_{z'} & 0 & 0 \\
0 & 0 & \sqrt{2} v_d q_{H_d}g_{z'} & \sqrt{2} v_u q_{H_u}g_{z'} & \tilde{M}_1^{\prime} & \sqrt{2} u_{\phi} q_{\phi}g_{z'} & \sqrt{2} u_{\phi^{\prime}} q_{\phi^{\prime}}g_{z'} \\
0 & 0 & 0 & 0 & \sqrt{2} u_{\phi} q_{\phi}g_{z'} & 0 & \mu^{\prime} \\
0 & 0 & 0 & 0 & \sqrt{2} u_{\phi^{\prime}} q_{\phi^{\prime}}g_{z'} & \mu^{\prime} & 0 \end{array} \right) \; ,
\nonumber\\
\end{eqnarray}
in which $\tilde{M}_1$, $\tilde{M}_1^{\prime}$ and $\tilde{M}_2$ are the corresponding gaugino masses for $U(1)_Y$, $U(1)_{F}^{\prime}$ and $SU(2)_L$, respectively.
The physical neutralino masses $m_{\tilde{N}_i^0}$ $(i = 1,\dots,7)$ can be obtained by diagonalizing the mass matrix above, with each physical neutralino given in terms of the composition defined below,
\begin{equation}
\tilde{N}_i = x_i^{\scriptscriptstyle{\tilde{B}}} \tilde{B} + x_i^{\scriptscriptstyle{\tilde{W}^3}} \tilde{W}^3 + x_i^{\scriptscriptstyle{\tilde{H}_d^0}} \tilde{H}_d^0 + x_i^{\scriptscriptstyle{\tilde{H}_u^0}} \tilde{H}_u^0 + x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} \tilde{B}^{\prime} + x_i^{\scriptscriptstyle{\tilde{\Phi}}} \tilde{\Phi} + x_i^{\scriptscriptstyle{\tilde{\Phi}^{\prime}}} \tilde{\Phi}^{\prime} \; .
\end{equation}
The interactions among neutralinos, fermions and sfermions are summarized in Table~\ref{tbl:interNFSF}.
\begin{table}
\begin{tabular}{c|l} \hline \hline
$\tilde{N}_i \tilde{u}_j^L u_k^R, \, \tilde{N}_i u_j^L \tilde{u}_k^R$ & $x_i^{\scriptscriptstyle{\tilde{H}_u^0}} y_{jk}^u$
\\ \hline
$\tilde{N}_i \tilde{u}_j^L u_k^L$ & $(x_i^{\scriptscriptstyle{\tilde{B}}} q_{Y}^{\scriptscriptstyle{Q_j}} g_1 + \frac{1}{2} x_i^{\scriptscriptstyle{\tilde{W}^3}} g_2 + x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} q_{Q_j} g_{z'}) \delta_j^k$
\\ \hline
$\tilde{N}_i \tilde{u}_j^R u_k^R$ & $(x_i^{\scriptscriptstyle{\tilde{B}}} q_{Y}^{\scriptscriptstyle{u_j}} g_1 + x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} q_{u_j}g_{z'}) \delta_j^k$
\\ \hline
$\tilde{N}_i \tilde{d}_j^L d_k^R, \, \tilde{N}_i d_j^L \tilde{d}_k^R$ & $x_i^{\scriptscriptstyle{\tilde{H}_d^0}} y_{jk}^d$
\\ \hline
$\tilde{N}_i \tilde{d}_j^L d_k^L$ & $(x_i^{\tilde{B}} q_{Y}^{Q_j} g_1 - \frac{1}{2} x_i^{\tilde{W}^3} g_2 + x_i^{\tilde{B}^{\prime}} q_{Q_j} g_{z'}) \delta_j^k$ \\ \hline
$\tilde{N}_i \tilde{d}_j^R d_k^R$ & $(x_i^{\tilde{B}} q_{Y}^{d_j} g_1 + x_i^{\tilde{B}^{\prime}} q_{d_j} g_{z'}) \delta_j^k$ \\ \hline
$\tilde{N}_i \tilde{\nu}_j^L \nu_k^R, \, \tilde{N}_i \nu_j^L \tilde{\nu}_k^R$ & $x_i^{\scriptscriptstyle{\tilde{H}_u^0}} y_{jk}^{\scriptscriptstyle{\nu}}$
\\ \hline
$\tilde{N}_i \tilde{\nu}_j^L \nu_k^L$ & $(x_i^{\tilde{B}} q_{Y}^{L_j} g_1 + \frac{1}{2} x_i^{\scriptscriptstyle{\tilde{W}^3}} g_2 + x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} q_{L_j} g_{z'}) \delta_j^k$
\\ \hline
$\tilde{N}_i \tilde{\nu}_j^R \nu_k^R$ & $x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} q_{N_j} g_{z'} \delta_j^k$
\\ \hline
$\tilde{N}_i \tilde{e}_j^L e_k^R, \, \tilde{N}_i e_j^L \tilde{e}_k^R$ & $x_i^{\scriptscriptstyle{\tilde{H}_d^0}} y_{jk}^{e}$
\\ \hline
$\tilde{N}_i \tilde{e}_j^L e_k^L$ & $(x_i^{\tilde{B}} q_{Y}^{L_j} g_1 - \frac{1}{2} x_i^{\scriptscriptstyle{\tilde{W}^3}} g_2 + x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} q_{L_j} g_{z'}) \delta_j^k$ \\ \hline
$\tilde{N}_i \tilde{e}_j^R e_k^R$ & $(x_i^{\scriptscriptstyle{\tilde{B}}} q_{Y}^{e_j} g_1 + x_i^{\scriptscriptstyle{\tilde{B}^{\prime}}} q_{e_j} g_{z'}) \delta_j^k$
\\ \hline \hline
\end{tabular}
\caption{Interactions among neutralinos, fermions and sfermions. The parameters $y_{jk}^{f}$ are the $(j,k)$ entries of the Yukawa matrix in the $f$ sector, and $q_{Y}^{f}$ is the hypercharge of the field $f$.}
\label{tbl:interNFSF}
\end{table}
In the basis $(\tilde{W}^+, \tilde{H}_u^+)$, $(\tilde{W}^-, \tilde{H}_d^{-})$, the chargino (Dirac) mass matrix is given by
\begin{equation}
\mathcal{M}^{(c)} =
\left(\begin{array}{cc}
\tilde{M}_2 & g_2 v_d
\\
g_2 v_u & \mu \end{array}\right).
\end{equation}
\subsubsection{Sfermion sector}
The mass squared matrices of the sfermions in the $(\tilde{f}_{i\,L}, \tilde{f}_{i\,R})^{T}$ basis are given by,
\begin{eqnarray}
\mathcal{M}_{\tilde{u}}^2 & = &
\left(
\begin{array}{cc}
\left(m_{\tilde{Q}}^2 \right)_{ii} + m_{u_i}^2 + (\frac{1}{2} - \frac{2}{3} \sin^2 \theta_w) M_{Z}^2 \cos 2 \beta
& m_{u_i}\left((A_U)_{ii} - \mu \cot \beta \right)
\\
m_{u_i}\left((A_U)_{ii} - \mu \cot \beta \right)
& \left(m_{\tilde{u}}^2\right)_{ii} + m_{u_i}^2 + \frac{2}{3} \sin^2 \theta_w M_{Z}^2 \cos 2\beta
\end{array}
\right)
,
\end{eqnarray}
\begin{eqnarray}
\mathcal{M}_{\tilde{d}}^2 & = & \left(\begin{array}{cc} \left(m_{\tilde{Q}}^2 \right)_{ii} + m_{d_i}^2 - (\frac{1}{2} - \frac{1}{3} \sin^2 \theta_w)M_{Z}^2 \cos 2 \beta & m_{d_i}\left((A_D)_{ii} - \mu \tan \beta \right) \\
m_{d_i}\left((A_D)_{ii} - \mu \tan \beta \right)
& \left(m_{\tilde{d}}^2\right)_{ii} + m_{d_i}^2 - \frac{1}{3} \sin^2 \theta_w M_{Z}^2 \cos 2\beta
\end{array} \right) ,
\end{eqnarray}
\begin{eqnarray}
\mathcal{M}_{\tilde{l}}^2 & = & \left(\begin{array}{cc} \left(m_{\tilde{L}}^2 \right)_{ii} + m_{e_i}^2 - (\frac{1}{2} - \sin^2 \theta_w)M_{Z}^2 \cos 2 \beta & m_{e_i}\left((A_E)_{ii} - \mu \tan \beta \right) \\
m_{e_i}\left((A_E)_{ii} - \mu \tan \beta \right)
& \left(m_{\tilde{e}}^2\right)_{ii} + m_{e_i}^2 - \sin^2 \theta_w M_{Z}^2 \cos 2\beta
\end{array} \right) ,
\end{eqnarray}
\begin{eqnarray}
\mathcal{M}_{\tilde{\nu}}^2 & = & \left(\begin{array}{cc} \left(m_{\tilde{L}}^2 \right)_{ii} + \frac{1}{2}M_{Z}^2 \cos 2 \beta & 0
\\
0 & \left(m_{\tilde{\nu}}^2\right)_{ii}
\end{array} \right) \, .
\end{eqnarray}
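For illustration, a minimal sketch that builds and diagonalizes the stop block of $\mathcal{M}_{\tilde{u}}^2$ above; the soft masses and trilinear term below are placeholder, roughly SPS-1a-like values, not outputs of the model:
\begin{verbatim}
import numpy as np

mQ2, mU2 = 500.0**2, 520.0**2        # soft masses squared (GeV^2), placeholders
mt, At, mu, tanb = 173.0, -500.0, 904.068, 25.0
MZ2, sw2 = 91.19**2, 0.231
c2b = (1 - tanb**2) / (1 + tanb**2)  # cos(2 beta)

LL = mQ2 + mt**2 + (0.5 - 2 * sw2 / 3) * MZ2 * c2b
RR = mU2 + mt**2 + (2 * sw2 / 3) * MZ2 * c2b
LR = mt * (At - mu / tanb)           # m_t (A_U - mu cot(beta))

M2 = np.array([[LL, LR], [LR, RR]])
print(np.sqrt(np.linalg.eigvalsh(M2)))   # (m_stop1, m_stop2) in GeV
\end{verbatim}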
In the sfermion mass squared matrices shown above (the $2\times2$ blocks, for each generation $i$, of the full $6 \times 6$ matrices), $m_{\tilde{Q}}, \, m_{\tilde{u}}, \, m_{\tilde{d}}, \, m_{\tilde{L}}, \, m_{\tilde{e}}, \, m_{\tilde{\nu}}$ are the soft masses for the squarks and sleptons, whose explicit forms depend on the specific SUSY breaking mechanism; $m_{u_i}, \, m_{d_i}, \, m_{e_i}, \, m_{\nu_i}$ are the quark and lepton masses; and $A_{U}, \, A_{D}, \, A_{E}$ are the trilinear terms. For $\mathcal{M}_{\tilde{\nu}}^{2}$, we have neglected the negligible terms proportional to $m_{\nu}$.
In general, the effects of the $U(1)_{F}^{\prime}$ symmetry can manifest in gravity mediated, gauge mediated, and anomaly mediated contributions. Specifically, the soft mass terms can be expressed schematically, in the basis where gluino couplings are diagonal, as
\begin{eqnarray}
\tilde{M}_{LL}^{2} & = & \tilde{m}^{2}_{L} + \tilde{m}^{\prime \; 2}_{L} + x X_{LL} + D_{LL} \; ,
\\
\tilde{M}_{RR}^{2} & = & \tilde{m}^{2}_{R} + \tilde{m}^{\prime \; 2}_{R} + x X_{RR} + D_{RR} \; ,
\end{eqnarray}
where $\tilde{M}_{LL}^{2} = m_{\tilde{Q}}^{2}, \; m_{\tilde{L}}^{2}$, and $\tilde{M}_{RR}^{2} = m_{\tilde{u}}^{2}, \; m_{\tilde{d}}^{2}, \; m_{\tilde{e}}^{2}, \; m_{\tilde{\nu}}^{2}$.
The first terms refer to the gauge mediated SUSY breaking (GMSB) contributions due to SM gauge interactions, which are flavor universal. They can be schematically written as
\begin{equation}
\tilde{m}_{L}^{2} \sim N_{\mbox{\tiny msg}} \sum_{j} \biggl(\frac{\alpha_{j}}{\pi}\biggr)^{2} \biggl(\frac{F}{M_{\mbox{\tiny msg}}}\biggr)^{2} \propto {\bf 1}_{3\times 3} \; ,
\end{equation}
and similarly for $\tilde{m}_{R}^{2}$, with $N_{\mbox{\tiny msg}}$ being the number of messengers, $M_{\mbox{\tiny msg}}$ being the messenger scale, and $F$ being the F-term of some gauge singlet field that triggers SUSY breaking; the summation is taken over the SM gauge groups under which the matter multiplets are charged. The second terms correspond to the GMSB contributions due to the $U(1)_{F}^{\prime}$ gauge interactions, which are flavor non-universal. These contributions have the form,
\begin{equation}
(\tilde{m}_{L}^{2})_{ii}^{\prime} \sim N_{\mbox{\tiny msg}} \biggl(\frac{g_{z'}^{2} q_{\psi_{i}}^{2}}{\pi}\biggr)^{2} \biggl(\frac{F}{M_{\mbox{\tiny msg}}}\biggr)^{2} \; ,
\end{equation}
and similarly for $(\tilde{m}_{R}^{2})_{ii}^{\prime}$, with $q_{\psi_{i}}$ being the $U(1)_{F}^{\prime}$ charges of the corresponding sfermions. The third terms are due to gravity mediation arising from operators in the K\"ahler potential of the form~\cite{Nir:1993mx},
\begin{equation}
X_{ij} \biggl( \frac{F}{\overline{M}_{\mbox{\tiny Pl}}} \biggr)^{2} L_{i} L_{j}^{\dagger} \; ,
\end{equation}
where $\overline{M}_{\mbox{\tiny Pl}}$ is the reduced Planck scale, and the coefficients $X_{ij}$ are determined by the $U(1)_{F}^{\prime}$ charge differences, $|q_{\psi_{i}} - q_{\psi_{j}}|$. The parameter $x$ characterizes the relative size of the gravity mediated contribution to the gauge mediated contribution. The terms $D_{LL}$ and $D_{RR}$ are the D term contributions, which are $\sim \zeta q_{\psi_{i}}$ where $\zeta$ is a constant. While the D term contributions are flavor diagonal, due to the generation dependent charges, they are flavor non-universal.
For the first two generations of the fermions, since their masses (which are leading terms in the off-diagonal entries in the mass squared matrices that couple $\tilde{f}_{L}$ and $\tilde{f}_{R}$) are very small compared to the soft mass terms (which are the leading terms in the entries that involve $\tilde{f}_{L} \tilde{f}_{L}$ and $\tilde{f}_{R} \tilde{f}_{R}$ in the mass squared matrices), we neglect the left-right mixings.
As the GMSB contributions associated with $U(1)_{F}^{\prime}$ are non-universal, there can exist CKM and PMNS induced FCNCs. Nevertheless, due to the relative smallness of $g_{z^{\prime}}q_{\psi_{i}}$ with respect to the SM gauge coupling constants, the flavor non-universal contributions are much smaller than the flavor universal contributions from GMSB. For the gravity mediated contributions, the $U(1)_{F}^{\prime}$ charges in our model predict that $X_{LL} = \bf{1}_{3\times 3}$ for all sfermions except the up-type squarks ($\tilde{Q}_{i}, \, \tilde{u}_{i}$), for which
\begin{equation}
X_{QQ/uu} = \left(\begin{array}{ccc}
1 & \epsilon^{5/3} & \epsilon^{11/3} \\
\epsilon^{5/3} & 1 & \epsilon^{2} \\
\epsilon^{11/3} & \epsilon^{2} & 1
\end{array}\right)
\sim
\left( \begin{array}{ccc}
1 & 0.08 & 0.004 \\
0.08 & 1 & 0.05 \\
0.004 & 0.05 & 1
\end{array}\right)
\; .
\end{equation}
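The quoted numerical entries are consistent with a Cabibbo-like expansion parameter $\epsilon \approx 0.22$; this is our reading of the numbers, not an input stated in this section. A quick check:
\begin{verbatim}
import numpy as np

eps = 0.22
X = np.array([[1.0,          eps**(5/3), eps**(11/3)],
              [eps**(5/3),   1.0,        eps**2],
              [eps**(11/3),  eps**2,     1.0]])
print(np.round(X, 3))   # ~ [[1, 0.08, 0.004], [0.08, 1, 0.048], ...]
\end{verbatim}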
The non-universal D term contributions can also lead to CKM and PMNS induced FCNCs. However, due to the relative smallness of $g_{z^{\prime}} q_{\psi_{i}}$, these D-term contributions are much suppressed compared to the SM D-term contributions. In anomaly mediated SUSY breaking, the $U(1)_{F}^{\prime}$ effects can manifest through the anomalous dimensions of the matter fields. Due to the smallness of $g_{z^{\prime}} q_{\psi_{i}}$ compared to the SM gauge couplings, their contributions to the anomalous dimensions of the matter fields are suppressed.
\subsubsection{Renormalization Group Equations}
The presence of $U(1)_{F}^{\prime}$ also changes the $\beta$ functions. For the beta functions of the gauge coupling constants, the effects of $U(1)_{F}^{\prime}$ appear at the two loop level. For the beta functions of the Yukawa coupling constants, the $U(1)_{F}^{\prime}$ effects appear at one loop. Below are the beta functions including the leading $U(1)_{F}^{\prime}$ effects,
\begin{eqnarray}
\beta_{g_1^{\prime}} & = & \frac{g_1^{\prime \, 3}}{16 \pi^2} \Big{\{}
\frac{33}{5} + \frac{1}{16 \pi^2} \Big{[} \frac{88}{5} g_3^2 + \frac{27}{5}g_2^2 + \frac{199}{25}g_1^{\prime \, 2} + \frac{12}{5}g_{z'}^2 \mbox{Tr} (q_Y^2q^2)
\\
& & \hspace{0.5in}
- \frac{26}{5}Y_{U_3}^2 - \frac{14}{5}Y_{D_3}^2 - \frac{18}{5}Y_{E_3}^2 \Big{]} \Big{\}} \; ,
\nonumber \\
\beta_{g_2} & = & \frac{g_2^3}{16 \pi^2}
\Big{\{} 1 + \frac{1}{16 \pi^2} \Big{[} 24g_3^2 + 25g_2^2 + \frac{9}{5}g_1^{\prime \, 2} + 2g_{z'}^2 (q_{H_u}^2 + q_{H_d}^2 + \sum_i (q_{L_i}^2 + 3q_{Q_i}^2) )
\\
& & \hspace{0.5in} - 6Y_{U_3}^2 - 6Y_{D_3}^2 - 2Y_{E_3}^2 \Big{]} \Big{\}} \,\, ,
\nonumber \\
\beta_{g_3} & = & \frac{g_3^3}{16 \pi^2} \Big{\{} -3 + \frac{1}{16\pi^{2}} \Big{[} 14g_3^2 + 9g_2^2 + \frac{11}{5} g_1^{\prime \, 2} + 2g_{z'}^2 \sum_i (2q_{Q_i}^2 + q_{u_i}^2 + q_{d_i}^2) - 4Y_{U_3}^2 - 4Y_{D_3}^2 \Big{]} \Big{ \}} \,\, ,
\\
\beta_{g_{z'}} & = & \frac{g_{z'}^3}{16 \pi^2} \Big{\{} \mbox{Tr} (q^2) + \frac{1}{16 \pi^2} \Big{[} 16g_3^2 \sum_i (2q_{Q_i}^2 + q_{u_i}^2 + q_{d_i}^2)
\\
& & \hspace{0.5in}
+ 6g_2^2 \Big{(} q_{H_u}^2 + q_{H_d}^2 + \sum_i (q_{L_i}^2 + 3q_{Q_i}^2) \Big{)}
+ \frac{12}{5}g_1^{\prime \, 2} \mbox{Tr} (q_Y^2 q^2)
+ 4g_{z'}^2 \mbox{Tr} (q^4)
\nonumber \\
& & \hspace{0.5in}
- 12(q_{H_u}^2 + q_{Q_3}^2 + q_{u_3}^2)Y_{U_3}^2
- 12(q_{H_d}^2 + q_{Q_3}^2 + q_{d_3}^2)Y_{D_3}^2
- 4(q_{H_d}^2 + q_{L_3}^2 + q_{e_3}^2)Y_{E_3}^2 \Big{]} \Big{\}} \; ,
\nonumber
\end{eqnarray}
where $g_1^{\prime} = \sqrt{\frac{5}{3}} \, g_1$ and $q_Y$ is the $U(1)_Y$ charge of the fermions.
Most of the $\beta$ functions of the Yukawa couplings, except those of the $(3, 3)$ elements, are close to zero and are ignored here. Taking the $U(1)_F^{\prime}$ symmetry into account, the $\beta$ functions of the $(3, 3)$ elements are given by
\begin{eqnarray}
\beta_{Y_{U_3}} & = &
\frac{Y_{U_3}}{16 \pi^2} \biggl[ 6Y_{U_3}^2 + Y_{D_3}^2 - \frac{16}{3}g_3^2 - 3g_2^2 - \frac{13}{15}g_1^{\prime \, 2} - 2g_{z'}^2(q_{H_u}^2 + q_{Q_3}^2 + q_{u_3}^2) \biggr] \,\, ,
\\
\beta_{Y_{D_3}} & = & \frac{Y_{D_3}}{16 \pi^2} \biggl[ 6Y_{D_3}^2 + Y_{U_3}^2 + Y_{E_3}^2- \frac{16}{3}g_3^2 - 3g_2^2 - \frac{7}{15}g_1^{\prime \, 2} - 2g_{z'}^2(q_{H_d}^2 + q_{Q_3}^2 + q_{d_3}^2) \biggr] \,\, ,
\\
\beta_{Y_{E_3}} & = & \frac{Y_{E_3}}{16 \pi^2} \biggl[ 3Y_{D_3}^2 + 4Y_{E_3}^2- 3g_2^2 - \frac{9}{5}g_1^{\prime \, 2} - 2g_{z'}^2(q_{H_d}^2 + q_{L_3}^2 + q_{e_3}^2) \biggr] \,\, .
\end{eqnarray}
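A minimal sketch of the one-loop running of $Y_{U_3}$ implied by the first equation above, with the gauge couplings frozen and the other Yukawas held fixed; all initial values and $U(1)_F^{\prime}$ charges are placeholders:
\begin{verbatim}
import numpy as np

g3, g2, g1p, gzp = 1.05, 0.65, 0.46, 0.02  # frozen couplings (placeholders)
qHu2, qQ32, qu32 = 1.0, 0.25, 0.25         # placeholder charges squared
YD3 = 0.25                                 # frozen bottom Yukawa (placeholder)

def beta_YU3(Y):
    return Y / (16 * np.pi**2) * (6 * Y**2 + YD3**2 - 16 * g3**2 / 3
                                  - 3 * g2**2 - 13 * g1p**2 / 15
                                  - 2 * gzp**2 * (qHu2 + qQ32 + qu32))

Y, dt = 0.95, 0.01                         # Y_{U_3}(M_Z); t = ln(mu/M_Z)
nsteps = int(np.log(2.0e16 / 91.19) / dt)  # crude Euler run up to ~GUT scale
for _ in range(nsteps):
    Y += beta_YU3(Y) * dt
print(Y)
\end{verbatim}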
Due to the non-universal $U(1)_{F}^{\prime}$ charges, there exist non-universal contributions to the sfermion soft masses in the RG equations. Nevertheless, the smallness of the $g_{z^{\prime}} q_{\psi_{i}}$ suppresses these non-universal contributions.
\subsection{Numerical Result}
In the numerical example, we utilize two-loop RGEs for all coupling constants. Given that the effects of $U(1)_{F}^{\prime}$ are subdominant, we restrict ourselves in the numerical example below to the mSUGRA boundary condition for the MSSM multiplets, and we include the up-type quark mixing. We choose the SPS 1a values for the soft mass parameters at the GUT scale: the universal scalar soft mass $m_0 = 100$ GeV, the universal gaugino soft mass $m_{1/2} = 250$ GeV and the trilinear term $A_{U} = A_{D} = A_{E} = -100$ GeV. In addition, we take $\tan \beta = 25, \, \tan \psi = 0.9$. Since the gauge coupling of the $U(1)_F^{\prime}$ ($g_{z'}$) is relatively small, it is reasonable to ignore the running effects from the $U(1)_F^{\prime}$. Further, we obtain the soft masses of the gauginos at the SUSY breaking scale, which are $\tilde{M}_1 = 101.56$ GeV and $\tilde{M}_2 = 191.8$ GeV. For the soft mass of the gaugino $\tilde{B}^{\prime}$, we choose $\tilde{M}_1^{\prime} = 1000$ GeV. The values of $\mu$ and $\mu^{\prime}$ are determined using the minimization conditions given in Eqs.~(\ref{eqn:miniNew1})-(\ref{eqn:miniNew4}), and they are $\mu = 904.068$ GeV and $\mu^{\prime} = 1224.7\,i$ GeV (the phase $i$ can be rotated away by redefining the scalar fields of $\Phi$ and $\Phi^{\prime}$).
With these input parameters, the neutralino mass matrix is given by
\begin{eqnarray}
\label{eqn:massNeu}
(\mathcal{M}^{(0)}/\mbox{GeV}) = \left(\begin{array}{ccccccc} 101.56 & 0 & -1.68244 & 42.061 & 0 & 0 & 0 \\
0 & 191.8 & 3.07837 & -76.9594 & 0 & 0 & 0 \\
-1.68244 & 3.07837 & 0 & -904.068 & 0.0306316 & 0 & 0 \\
42.061 & -76.9594 & -904.068 & 0 & -8.96168 & 0 & 0 \\
0 & 0 & 0.0306316 & -8.96168 & 1000 & -668.938 & 743.264 \\
0 & 0 & 0 & 0 & -668.938 & 0 & 1224.7i \\
0 & 0 & 0 & 0 & 743.264 & 1224.7i & 0 \end{array} \right) \; .
\nonumber \\
\end{eqnarray}
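Since this matrix is complex symmetric, the physical masses are its singular values (Takagi diagonalization). A minimal NumPy sketch that should reproduce the masses in Table~\ref{tbl:massMixNeu} up to rounding of the quoted entries:
\begin{verbatim}
import numpy as np

M = np.array([
    [101.56,    0,         -1.68244,   42.061,    0,         0,        0],
    [0,         191.8,      3.07837,  -76.9594,   0,         0,        0],
    [-1.68244,  3.07837,    0,        -904.068,   0.0306316, 0,        0],
    [42.061,   -76.9594,   -904.068,   0,        -8.96168,   0,        0],
    [0,         0,          0.0306316, -8.96168,  1000,     -668.938,  743.264],
    [0,         0,          0,          0,       -668.938,   0,        1224.7j],
    [0,         0,          0,          0,        743.264,   1224.7j,  0],
], dtype=complex)

# For a complex symmetric Majorana matrix the masses are the singular values.
print(np.round(np.sort(np.linalg.svd(M, compute_uv=False)), 2))
\end{verbatim}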
After diagonalizing the mass matrix above, the masses of the neutralinos and their compositions are summarized in Table~\ref{tbl:massMixNeu}.
\begin{table}[t!]
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline
Neutralino & Mass (GeV) & $\tilde{B}$ & $\tilde{W}^3$ & $\tilde{H}_d^0$ & $\tilde{H}_u^0$ & $\tilde{B}^{\prime}$ & $\tilde{\Phi}$ & $\tilde{\Phi}^{\prime}$ \\ \hline
$\tilde{N}_1$ & 101.20 & $99.75 \%$ & $0.01 \%$ & $0.23 \%$ & $0.01 \%$ & $0 \%$ & $0 \%$ & $0 \%$ \\ \hline
$\tilde{N}_2$ & 189.82 & $0.01 \%$ & $99.15 \%$ & $0.79 \%$ & $0.05 \%$ & $0 \%$ & $0 \%$ & $0 \%$ \\ \hline
$\tilde{N}_3$ & 907.39 & $0.08 \%$ & $0.23 \%$ & $49.65 \%$ & $50.04 \%$ & $0 \%$ & $0 \%$ & $0 \%$ \\ \hline
$\tilde{N}_4$ & 909.69 & $0.15 \%$ & $0.62 \%$ & $49.33 \%$ & $49.90 \%$ & $0.01 \%$ & $0 \%$ & $0 \%$ \\ \hline
$\tilde{N}_5$ & 1042.36 & $0 \%$ & $0 \%$ & $0 \%$ & $0 \%$ & $24.60 \%$ & $38.04 \%$ & $37.36 \%$ \\ \hline
$\tilde{N}_6$ & 1223.47 & $0 \%$ & $0 \%$ & $0 \%$ & $0 \%$ & $0.07 \%$ & $50.92 \%$ & $49.01 \%$ \\ \hline
$\tilde{N}_7$ & 1515.00 & $0 \%$ & $0 \%$ & $0 \%$ & $0.01 \%$ & $75.38 \%$ & $12.08 \%$ & $12.53 \%$ \\ \hline \hline
\end{tabular}
\caption{Compositions and the mass spectrum of the neutralinos.}
\label{tbl:massMixNeu}
\end{table}
From Table~\ref{tbl:massMixNeu}, we note that the additional neutralinos, $\tilde{N}_{5,6,7}$, associated with the $U(1)_F^{\prime}$ symmetry are heavier compared to those ($\tilde{N}_{1,2,3,4}$) that exist in the usual MSSM. These additional heavy neutralinos are decoupled from the MSSM. Due to this near block diagonal form of the neutralino mass matrix, the mass spectrum of the light neutralinos $\tilde{N}_{1,2,3,4}$ is very similar to that in the usual MSSM where $\tilde{N}_{1} \simeq \tilde{B}$, $\tilde{N}_{2} \simeq \tilde{W}^{3}$, $\tilde{N}_{3,4} \simeq \frac{1}{2}( \tilde{H}_{u}^{0} \pm \tilde{H}_{d}^{0})$, with $\tilde{N}_{1}$ being the lightest neutralino.
The mass matrix of the chargino is
\begin{equation}
(\mathcal{M}^{(c)}/\mbox{GeV}) =
\left(\begin{array}{cc}
191.8 & 4.353473 \\
108.8370 & 904.068 \end{array}\right).
\end{equation}
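The Dirac chargino masses are the singular values of this matrix. The short sketch below returns values within about a percent of those in Table~\ref{tbl:massCharSfer}; we attribute the small residual differences to rounding of the quoted inputs:
\begin{verbatim}
import numpy as np

Mc = np.array([[191.8,    4.353473],
               [108.8370, 904.068]])
# Dirac masses = singular values of the chargino mass matrix.
print(np.linalg.svd(Mc, compute_uv=False))
\end{verbatim}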
The mass spectrum of the charginos and the sfermions is summarized in Table~\ref{tbl:massCharSfer}.
The lightest sfermion in this example is the stau.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline
Field & $\tilde{\chi}_1^{\pm}$ & $\tilde{u}_L$ & $\tilde{u}_R$ & $\tilde{c}_L$ & $\tilde{c}_R$ & $\tilde{t}_1$ & $\tilde{t}_2$ & $\tilde{d}_L$ & $\tilde{d}_R$ & $\tilde{s}_L$ & $\tilde{s}_R$ & $\tilde{b}_1$ & $\tilde{b}_2$ \\ \hline
Mass (GeV)& 191.14 & 562.73 & 545.87 & 562.72 & 545.87 & 375.83 & 578.91 & 568.28 & 545.73 & 568.27 & 545.72 & 389.37 & 592.14 \\ \hline \hline
Field & $\tilde{\chi}_2^{\pm}$ & $\tilde{e}_L$ & $\tilde{e}_R$ & $\tilde{\mu}_L$ & $\tilde{\mu}_R$ & $\tilde{\tau}_1$ & $\tilde{\tau}_2$ & $\tilde{\nu}_{e_L}$ & $\tilde{\nu}_{e_R}$ & $\tilde{\nu}_{\mu_L}$ & $\tilde{\nu}_{\mu_R}$ & $\tilde{\nu}_{\tau_L}$ & $\tilde{\nu}_{\tau_R}$ \\ \hline
Mass (GeV) & 904.73 & 202.56 & 144.20 & 202.56 & 144.16 & 120.68 & 263.61 & 186.10 & 195.53 & 186.08 & 195.51 & 180.33 & 190.46 \\ \hline \hline
\end{tabular}
\caption{The mass spectrum of the charginos and sfermions.}
\label{tbl:massCharSfer}
\end{table}
\section{Conclusion}
\label{sec:conclude}
In this paper, we propose a non-universal $U(1)^{\prime}_{F}$ symmetry combined with the MSSM. All gauge anomaly cancellation conditions in our model are satisfied without exotic fields other than three right-handed neutrinos. Because all three generations of chiral superfields have different $U(1)^{\prime}_{F}$ charges, realistic masses and mixing angles in both the quark and lepton sectors are obtained, after the $U(1)^{\prime}_{F}$ symmetry is broken at a low scale. In our model, neutrinos are predicted to be Dirac fermions and their mass ordering is of the inverted hierarchy type. The $U(1)^{\prime}_{F}$ charges of the chiral superfields also naturally suppress the $\mu$ term and automatically forbid baryon number and lepton number violating operators.
Even though all FCNC constraints in the down quark and charged lepton sectors can be satisfied, we find that the constraint from $D^{0}-\overline{D}^{0}$ mixing turns out to be much more stringent than the constraints from the precision electroweak data.
\begin{acknowledgments}
We thank Daniel Whiteson for useful communication and for providing us the updated CDF limit~\cite{ref:CDFlimit}. The work was supported, in part, by the National Science Foundation under Grant No. PHY-0709742.
\end{acknowledgments}
\begin{appendix}
\section{Top Quark Rare Decays}
The effective four fermion operators that can lead to top quark rare decays are~\cite{ref:FCNC},
\begin{eqnarray}
-\mathcal{L}_{eff}&=&\frac{4G_F}{\sqrt{2}}\sum\limits_{m=\psi,\chi}
\left(\rho_{eff} {J_m}^2
+ 2wJ_m\cdot J_m^{\prime}+ y {J_m^{\prime}}^2\right) \label{Leff} \\[1ex]
&=&\frac{4G_F}{\sqrt{2}}\sum_{\psi,\chi}\sum_{i,j,k,l}
\left[
C_{kl}^{ij} Q_{kl}^{ij}
+ \tilde{C}_{kl}^{ij} \tilde{Q}_{kl}^{ij}
+ D_{kl}^{ij} O_{kl}^{ij}
+ \tilde{D}_{kl}^{ij} \tilde{O}_{kl}^{ij}
\right]\;, \nonumber
\end{eqnarray}
where $J_{m}$ and $J_{m}^{\prime}$ are currents that couple to $Z$ and $Z^{\prime}$, respectively, and
\begin{eqnarray}
Q_{kl}^{ij}
=\left(\bar{\psi}_i\gamma^{\mu}P_L\psi_j\right)
\left(\bar{\chi}_k\gamma_{\mu}P_L\chi_l\right)\;,&\qquad&
\tilde{Q}_{kl}^{ij}
=\left(\bar{\psi}_i\gamma^{\mu}P_R\psi_j\right)
\left(\bar{\chi}_k\gamma_{\mu}P_R\chi_l\right)\;,\label{op}\\[1ex]
O_{kl}^{ij}
=\left(\bar{\psi}_i\gamma^{\mu}P_L\psi_j\right)
\left(\bar{\chi}_k\gamma_{\mu}P_R\chi_l\right)\;,&\qquad&
\tilde{O}_{kl}^{ij}
=\left(\bar{\psi}_i\gamma^{\mu}P_R\psi_j\right)
\left(\bar{\chi}_k\gamma_{\mu}P_L\chi_l\right)\;.\nonumber
\end{eqnarray}
The variables $\psi$ and $\chi$ denote the fermionic fields while $i,j,k,l$ are
the family indices. The coefficients for the effective four fermion operators are
\begin{eqnarray}
C_{kl}^{ij}&=& \rho_{eff}\delta_{ij}\delta_{kl}\epsilon_i^{\psi_L}\epsilon_k^{\chi_L}
+ w\delta_{ij}\epsilon_i^{\psi_L}B_{kl}^{\chi_L}
+ w\delta_{kl}\epsilon_k^{\chi_L}B_{ij}^{\psi_L}
+ yB_{ij}^{\psi_L}B_{kl}^{\chi_L}\;,\\[1ex]
\tilde{C}_{kl}^{ij}&=& \rho_{eff}\delta_{ij}\delta_{kl}\epsilon_i^{\psi_R}\epsilon_k^{\chi_R}
+ w\delta_{ij}\epsilon_i^{\psi_R}B_{kl}^{\chi_R}
+ w\delta_{kl}\epsilon_l^{\chi_R}B_{ij}^{\psi_R}
+ yB_{ij}^{\psi_R}B_{kl}^{\chi_R}\;,\\[1ex]
D_{kl}^{ij}&=& \rho_{eff}\delta_{ij}\delta_{kl}\epsilon_i^{\psi_L}\epsilon_k^{\chi_R}
+ w\delta_{ij}\epsilon_i^{\psi_L}B_{kl}^{\chi_R}
+ w\delta_{kl}\epsilon_l^{\chi_R}B_{ij}^{\psi_L}
+ yB_{ij}^{\psi_L}B_{kl}^{\chi_R}\;,\\[1ex]
\tilde{D}_{kl}^{ij}&=& \rho_{eff}\delta_{ij}\delta_{kl}\epsilon_i^{\psi_R}\epsilon_k^{\chi_L}
+ w\delta_{ij}\epsilon_i^{\psi_R}B_{kl}^{\chi_L}
+ w\delta_{kl}\epsilon_l^{\chi_L}B_{ij}^{\psi_R}
+ yB^{\psi_R}_{ij}B_{kl}^{\chi_L}\; ,
\end{eqnarray}
where $\rho_{eff}$, $w$, and $y$ are defined as
\begin{eqnarray}
\rho_{eff}&=&\rho_1\cos^2\theta + \rho_2\sin^2\theta\;,
\qquad \rho_i={M_W^2\over M_i^2\cos^2\theta_w}\;,\label{rho}\\[1ex]
w&=&\frac{g_2}{g_1}\sin\theta\cos\theta(\rho_1-\rho_2)\;,\label{w}\\[1ex]
y&=&\left(\frac{g_2}{g_1}\right)^2(\rho_1 \sin^2 \theta + \rho_2 \cos^2 \theta)\;,\label{y}
\end{eqnarray}
and $M_1 = M_{Z}$ and $M_2 = M_{Z^{\prime}}$.
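A minimal sketch evaluating $\rho_{eff}$, $w$, and $y$; here $g_2/g_1$ is interpreted as the ratio of the $Z^{\prime}$ to $Z$ gauge couplings, i.e., $g_{z'}/g_1$ in the notation of Eq.~(\ref{LZ}), and all numerical inputs are placeholders:
\begin{verbatim}
import numpy as np

def eff_coeffs(MW, MZ, MZp, theta, r21, sw2):
    """rho_eff, w, y of Eqs. (rho)-(y); r21 = g_2/g_1."""
    cw2 = 1.0 - sw2
    rho1 = MW**2 / (MZ**2 * cw2)
    rho2 = MW**2 / (MZp**2 * cw2)
    c, s = np.cos(theta), np.sin(theta)
    rho_eff = rho1 * c**2 + rho2 * s**2
    w = r21 * s * c * (rho1 - rho2)
    y = r21**2 * (rho1 * s**2 + rho2 * c**2)
    return rho_eff, w, y

# Placeholder inputs: theta ~ 1e-4, M_Z' = 1 TeV, r21 ~ 0.02/0.74.
print(eff_coeffs(MW=80.4, MZ=91.19, MZp=1000.0,
                 theta=1e-4, r21=0.027, sw2=0.231))
\end{verbatim}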
At the tree level, the decay width of $q_i \rightarrow q_j \psi_k \bar{\psi}_l$ is
\begin{eqnarray}
\Gamma(q_i \rightarrow q_j \psi_k \bar{\psi}_l) = \frac{3N_{c_k}G_F^2 m_{q_i}^5}{48 \pi^3}
\hspace{4in}
\\
\times \left(|C_{\psi_k \; \psi_l}^{q_j \; q_i} + C_{q_j \; \psi_l}^{\psi_k \; q_i}|^2 + |\tilde{C}_{\psi_k \; \psi_l}^{q_j \; q_i} + \tilde{C}_{q_j \; \psi_l}^{\psi_k \; q_i}|^2 + |D_{\psi_k \; \psi_l}^{q_j \; q_i}|^2 + |D_{q_j \; \psi_l}^{\psi_k \; q_i}|^2 + |\tilde{D}_{\psi_k \; \psi_l}^{q_j \; q_i}|^2 + |\tilde{D}_{q_j \; \psi_l}^{\psi_k \; q_i}|^2 \right) \; . \nonumber
\end{eqnarray}
If two fermions in the final state are the same ($q_j = \psi_k$), we need to take the permutations into account, which leads to
\begin{equation}
\Gamma(q_i \rightarrow q_j \psi_k \bar{\psi}_l) = \frac{3N_{c_k}G_F^2 m_{q_i}^5}{48 \pi^3} \left(2|C_{q_j \; \psi_l}^{q_j \; q_i}|^2 + 2|\tilde{C}_{q_j \; \psi_l}^{q_j \; q_i}|^2 + |D_{q_j \; \psi_l}^{q_j \; q_i}|^2 +|\tilde{D}_{q_j \; \psi_l}^{q_j \; q_i}|^2 \right) \; .
\end{equation}
\end{appendix}
\section{Introduction}
\label{intro}
We consider one of the most frustrated two-dimensional (2D)
antiferromagnets: the checkerboard antiferromagnet, also known as the
planar pyrochlore and the crossed-chains model (CCM).
As the name suggests, this model is motivated by the three-dimensional
(3D) pyrochlore materials. The 2D model is obtained by
a projection of 3D corner-sharing lattice of tetrahedra on a 2D plane.
This projection maps a four-spin tetrahedron onto a four-spin square
with additional links (antiferromagnetic exchanges) along the diagonals.
The structure obtained in this way, depicted in Fig.~\ref{ccm-pic},
preserves the corner-sharing arrangement of crossed squares, typical
of the original 3D pyrochlore lattice, but destroys the symmetry
between bonds of the tetrahedron: in two dimensions, the horizontal
and vertical bonds are not equivalent to diagonal ones. This lowering
of symmetry suggests consideration of extended 2D models with
the checkerboard structure where exchange interactions on
horizontal/vertical and diagonal bonds take on different values.
Among these, the quasi-one-dimensional limit, in which exchange along
horizontal and vertical directions $J$ is much stronger than that
along diagonal directions $J_\times$, is of special interest because
it involves competition between strong quantum fluctuations, typical
for one-dimensional (1D) spin chains, and equally strong geometric
frustration encoded in the structure of the crossed-chains lattice.
\begin{figure}
\center
\includegraphics[width=0.7\columnwidth]{cros.eps}
\caption{(Color online)
Heisenberg antiferromagnet on the checkerboard lattice, viewed as
coupled spin chains. Horizontal (vertical) spin chains run along
the $x$ ($y$) axis. Spins belonging to the horizontal (vertical)
chains are shown as green (red) filled circles. The intra-chain
exchange (thick lines) is $J$, and the inter-chain exchange (diagonal
thin lines) is $J_\times$. }
\label{ccm-pic}
\end{figure}
The resulting checkerboard antiferromagnet has been analyzed by a
variety of techniques along several complementary ``directions'' in the
parameter space: semi-classical analysis in the limit of large spin
$S\gg 1$,\cite{paradigm,canals,olegt} large-$N$ expansion,
\cite{moessner,sachdev,toronto} easy-axis generalization (of the 3D
model) \cite{hermele} and a quasi-1D ($J_\times/J \ll 1$)
approach.\cite{sliding} In parallel with analytic approaches, the
model was investigated numerically via exact diagonalization studies
\cite{chalker,fouet,sindzingre} and cluster-based strong-coupling
expansion techniques.\cite{brenig,altman,brenig2} The present paper
complements these approaches by combining a controlled analysis of the
quasi-1D limit with general arguments to pin down limits
of the phase diagram and postulate a likely global phase structure of
the model.
We begin by expounding the more general context of the problem. One
of the central theoretical motivations behind the study of frustrated
quantum magnets is the hope that, when magnetic ordering is suppressed
by frustration, more novel types of order or even criticality may
emerge. Phenomenological approaches suggest possible interesting
quantum phases exhibiting ``valence bond solid'' (VBS) order, in which
spins pair into singlets that are spontaneously localized on specific
bonds, breaking lattice symmetries. More exotically, such approaches
suggest the possibility of phases with ``topological order'', in which spins
fluctuate quantum mechanically in a liquid-like state with however
subtle topological properties and often excitations with anomalous
(e.g., fractional) quantum numbers. More recent predictions from such
theories also include ``deconfined'' quantum critical points and
phases in which several types of quasi-long-range (power-law) orders
coexist unconnected by microscopic symmetries.
Unfortunately, these types of phenomenological methods do not give
precise guidance as to the specific models in which such quantum
orders appear, and attempts to find them in realistic microscopic
Hamiltonians have met with at best limited success. The one specific
context in which examples of all the above phenomena are, however, known
to occur is in one-dimensional spin chains. Moreover, the theoretical
and microscopic understanding of such spin models is vastly more
complete than in two or three dimensions. A natural hunting ground
for the exotic phenomenology described above would hence seem to lie
in spin models consisting of chains weakly coupled into two or
three dimensional arrays. A recently gained understanding of the
crucial role of nominally irrelevant operators and
fluctuation-generated interactions in describing frustrated quasi-1D
magnetic systems,\cite{oleg-leon} described below, brings the hunt to
(some degree of) fruition.
In this paper, as in a previous work,\cite{oleg-leon} we
follow this approach, taking as the weakly coupled units
in question $S=1/2$ Heisenberg nearest-neighbor antiferromagnetic
chains (other further-neighbor interactions along each chain may
be included, provided they are not overly strong). A cause for hope is
that such a 1D chain is well known to exhibit a critical ground state
with power-law correlations of various types. One prominent type of
correlation in such a chain is antiferromagnetic, specifically:
\begin{equation}
\label{eq:antiferro}
\langle \vec{S}(n)\cdot \vec{S}(n')\rangle \sim
\frac{(-1)^{n-n'}}{|n-n'|} + \cdots,
\end{equation}
where $n$ is the coordinate along the chain, and the brackets indicate
a ground state expectation value. The omitted $\cdots$ terms decay
much faster ($\sim 1/|n-n'|^2$ or faster) than the dominant
slowly-decaying antiferromagnetic one shown here (we have also for
simplicity neglected an unimportant multiplicative logarithmic
correction to this term). The dominance of antiferromagnetic
correlations in the two-spin correlation function often leads to the
misconception that a good picture of the ground state of the 1D
Heisenberg chain is that of fluctuating local antiferromagnetic order,
i.e., a magnet in which spins are locally N\'eel ordered but the
quantization axis fluctuates in space and time. Such a picture is in
fact incomplete. This becomes clear upon considering the fluctuation
of the local bond energy or dimerization,
\begin{equation}
\label{eq:bonden}
B(n) = \vec{S}(n)\cdot\vec{S}(n+1)
- \langle\vec{S}(n)\cdot\vec{S}(n+1)\rangle.
\end{equation}
One finds that its (staggered) correlations,
\begin{equation}
\label{eq:bondcorr}
\langle B(n) B(n')\rangle \sim \frac{(-1)^{n-n'}}{|n-n'|} + \cdots,
\end{equation}
have {\sl precisely} the same slow power-law decay
(again, up to a
multiplicative logarithmic correction) as the
antiferromagnetic ones in Eq.~(\ref{eq:antiferro}) above! Further
examination of other correlators reveals no additional power-law
correlations with competitive slow decay. Thus the 1D
Heisenberg chain should be thought of as consisting of locally
fluctuating antiferromagnetic {\sl and} valence bond solid order of
comparable strength.
With this understanding, it is natural to expect that weakly-coupled
arrays of such chains might be pushed by the inter-chain coupling into
magnetically ordered, dimer ordered, or perhaps critical states, if
this coupling favors the intrinsic antiferromagnetic or VBS ordering
tendency, or fosters their balanced competition, respectively. While
we believe this reasoning to be essentially correct, for many years,
the richness of such possible behaviors went unrealized in the
literature. This is because if the spin chains are linked by magnetic
two-spin Heisenberg interactions, these couple primarily to the
antiferromagnetic fluctuations within the chains, and not to the VBS
ones. Hence, for such a case, the problem of Heisenberg spin chains
coupled by weak inter-chain interactions is rather well understood.
With non-frustrated transverse (with respect to the chain direction)
couplings, both renormalization group \cite{rg-affleck} and
self-consistent mean-field analysis \cite{schulz} predict an instability
towards classical long-range ordered phase characterized by a non-zero
expectation value of the spin $\langle \vec{S}_r\rangle \neq 0$. This
instability follows from the correlations in
Eq.~(\ref{eq:antiferro}), which, loosely speaking, make the spin chain
highly susceptible to magnetic ordering.
More recently, it was recognized that the situation becomes more
interesting and less clear-cut when the inter-chain interaction is
strongly frustrated, as is the case for the crossed-chains
model we investigate here. The effect of frustration is to reduce and,
ultimately, nullify the effective inter-chain magnetic field
experienced by spins of the chain due to transverse inter-chain
exchange interactions (due to cancellations between contributions from
spins whose local fluctuating orientations, according to
Eq.~(\ref{eq:antiferro}), are antiparallel). With no effective external
field present, the classical ordering instability is naturally absent,
resulting in (almost) decoupled behavior of distinct spin chains. In formal
calculations embodying this physical picture, the weak residual
inter-chain interaction which does not cancel with predominantly
antiferromagnetic correlations appears to be described by the scalar product
of conserved spin currents from the chains involved. This observation
led to the proposal that, as a result, the system of such
coupled chains forms a liquid-like ground state with fractionalized
spin excitations (spinons). The systems considered included a frustrated
spin ladder,\cite{allen-essler-ners,kim} its 2D extension, i.e.,
the spatially-anisotropic frustrated square lattice
antiferromagnet,\cite{NT} and the crossed-chains model.\cite{sliding}
As shown in Ref.~\onlinecite{oleg-leon},
in the former two cases these
conclusions are in fact incorrect, due to the neglect of the VBS
correlations, Eq.~(\ref{eq:bondcorr}), equally as inherent as the
antiferromagnetic ones to the Heisenberg chain. Although the
microscopic magnetic exchange between spins on different chains does
not directly couple to VBS fluctuations, such a dimer coupling between
(certain pairs of) chains is inevitably generated by the weak
residual magnetic interactions remaining after the dominant
antiferromagnetic cancellation. A careful analysis of the types of
such dimer couplings allowed by symmetry and the detailed mechanism of
their generation are crucial in determining the fate of the spin
system and the strength of any ordering tendency.
Technically, this analysis can be accomplished in a controlled fashion
using powerful field-theoretical methods borrowed from 1D physics.
The point, made in Ref.~\onlinecite{oleg-leon}, is that no
fine-tuning of the two-spin inter-chain
exchange interaction can make the low-energy field theory
{\em exactly} of current-current type. Some higher-order derivative
terms (typically involving spatial derivatives of the staggered
magnetization field) are bound to be present (as getting rid of all of
them to all orders would require tuning an infinite number of inter-chain
couplings to zero). Such derivative terms are commonly neglected on
the grounds of their irrelevance with respect to the Luttinger
liquid fixed point of the independent spin chain. However, the
quasi-1D problem is not the same as the purely
1D one. Instead of disregarding irrelevant
high-derivatives terms from the outset, one has to consider if they,
in combination with the leading current-current term, can produce
quantum corrections to the {\em relevant} inter-chain couplings. This
indeed occurs both in the models of Ref.~\onlinecite{oleg-leon}, and,
as we will see, in the crossed-chains model studied here.
In the present paper we extend the analysis of Ref.~\onlinecite{oleg-leon}
to the CCM and show that the previous claim of the sliding
Luttinger liquid ground state \cite{sliding} is not correct. Instead,
similarly to the spatially-anisotropic square lattice model discussed
above, the ground state is of spontaneously dimerized type, albeit
with staggered ordering of dimers on parallel chains. The resulting
configuration, shown in Fig.~\ref{patterns}, can be described as a
{\sl crossed-dimer} one.
\begin{figure}
\center
\includegraphics[width=0.7\columnwidth]{cros-cd.eps}
\caption{(Color online)
Crossed-dimer dimerization pattern.
``Strong'' bonds (ones
where $\epsilon > 0$)
on horizontal (vertical) chains are shown in green (red). As before,
spins on horizontal (vertical) chains
are denoted by green (red) circles.}
\label{patterns}
\end{figure}
The paper is organized as follows. Section~\ref{latticeH} describes
the Hamiltonian of the CCM model, its lattice symmetries, and the
passage to the field-theoretical description of the low-energy degrees
of freedom and the operator product algebra they form. Section
\ref{sec:solution} describes perturbative analysis of the model in the
one-dimensional limit of weakly coupled chains, $J_\times/J \ll 1$. It
contains key technical details of our work and explains the mechanism
by which the crossed-dimer phase is stabilized. The limit of the
fully two-dimensional model ($J_\times\approx J$), the planar
pyrochlore antiferromagnet, is analyzed within the plaquette-operator
mean-field approximation in Sec.~\ref{planar-pyro}. This is followed
by Sec.~\ref{global}, which summarizes the preceding material in terms of
two possible scenarios for the global zero-temperature phase
diagram of the checkerboard antiferromagnet. There we present
phenomenological symmetry-based analyses of the quantum phase
transitions between various phases of the model (and also point out an
interesting connection with the recent deconfined quantum critical
point idea). Section~\ref{sec:3d} describes a three-dimensional
extension of our model, the quasi-one-dimensional pyrochlore
antiferromagnet, and its possible relevance to the experiments on
GeCu$_2$O$_4$ and ZnV$_2$O$_4$. Our main points are briefly summarized
in Sec.~\ref{sec:conclusions}. Two Appendices contain important
technical details of the fermionic formulation of the low-energy
sector of the $S=1/2$ isotropic Heisenberg chain.
\section{From Lattice to Continuum Field Theory}
\label{latticeH}
\subsection{Lattice model and symmetries}
\label{sec:latt-model-symm}
The Hamiltonian of the system $H$ describes a collection of horizontal
($H_h$) and vertical ($H_v$) Heisenberg chains interacting with each
other via the inter-chain interaction $V$:
\begin{equation}
H=H_0 + V= H_h + H_v + V.
\label{lattice-H}
\end{equation}
Spins ($S=1/2$) are located at the sites of the checkerboard
(crossed-chains) lattice shown in Fig.~\ref{ccm-pic}. The crossings
of the lattice have integer coordinates $(n,m)$, so the sites
of horizontal chains have half-integer $x$-coordinates $n+\frac12$ and
integer $y$-coordinate $m$, while sites of the vertical chains are
described by $(n,m+\frac12)$ pairs. With this convention the Hamiltonian
of horizontal chains reads
\begin{equation}
H_h= J \sum_{n,m} \vec{S}_h(n-1/2,m)
\cdot \vec{S}_h(n+1/2,m).
\label{lattice-H-horiz}
\end{equation}
Similarly, $H_v$ is given by
\begin{equation}
H_v= J \sum_{n,m} \vec{S}_v(n,m-1/2) \cdot \vec{S}_v(n,m+1/2).
\label{lattice-H-vert}
\end{equation}
With local uniform magnetization defined by
\begin{subequations}
\begin{eqnarray}
\vec{s}_h(n,m)&=&
\vec{S}_h(n-1/2,m)+\vec{S}_h(n+1/2,m),
\qquad\\
\vec{s}_v(n,m)&=&
\vec{S}_v(n,m-1/2)+\vec{S}_v(n,m+1/2),
\end{eqnarray}
\label{J_h J_v}
\end{subequations}
the inter-chain interaction reads
\begin{equation}
V=J_\times\sum_{n,m}\vec{s}_h(n,m)\cdot\vec{s}_v(n,m)
\label{lattice-V}
\end{equation}
and is characterized by the inter-chain exchange $J_\times > 0$
which is much smaller than the in-chain antiferromagnetic exchange
$J > 0$.
We note that $J_\times$ is the nearest-neighbor exchange on the
checkerboard lattice while $J$ is the next-nearest-neighbor
exchange interaction.
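As a concrete illustration of the Hamiltonian just defined, the following minimal sketch exactly diagonalizes a single ``crossed square'' (one tetrahedron unit): two chain bonds $J$ on the diagonals plus the $J_\times\,\vec{s}_h\cdot\vec{s}_v$ coupling on the four edges. This is only a four-spin toy check, not a substitute for the thermodynamic-limit analysis below:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(single, site, n=4):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def SdotS(i, j):
    return sum(op(s, i) @ op(s, j) for s in (sx, sy, sz))

J, Jx = 1.0, 0.2   # intra-chain and diagonal exchange
# Sites 0,1: horizontal-chain bond; sites 2,3: vertical-chain bond.
H = J * (SdotS(0, 1) + SdotS(2, 3)) \
    + Jx * sum(SdotS(i, j) for i in (0, 1) for j in (2, 3))
print(np.round(np.linalg.eigvalsh(H)[:3], 4))   # lowest levels
\end{verbatim}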
The space group symmetries of $H$, Eq.~(\ref{lattice-H}), can now be
summarized. The translational subgroup is generated by unit
translation along the horizontal chains $T_h$ and that along the
vertical chains $T_v$. The remainder
is generated by $\pi/2$ rotations about a crossing, and reflections
about e.g., a vertical line through either a site or midpoint of a bond
of a horizontal chain. We denote these two operations ``site parity''
$P_{sh}$ and ``link parity'' $P_{Lh}$, respectively. As these are
microscopic lattice symmetries, they will be preserved by any
renormalization group transformation. Observe that $P_{Lh}$ is a product
of the two other operations: $P_{Lh}=P_{sh} \circ T_h$.
\subsection{Continuum field theory and scaling operators}
\label{subsec:CFT}
The limit $J_\times \ll J$ allows us to approach the problem
from a one-dimensional perspective: we treat $V$ as a perturbation and
ask whether it can destabilize the critical ground state of the
independent (decoupled) spin chains. The smallness of the
$J_\times/J$ ratio allows us to take the continuum limit along every chain
involved. As mentioned in the Introduction, a single Heisenberg chain
is described in the continuum limit (i.e., at low energies) by a
universal critical theory, with a variety of power-law correlations.
Formally, this is most compactly described as the Wess-Zumino-Witten
(WZW) $SU(2)_1$ theory \cite{wzw}, with the action (in $1+1$
dimensions) \cite{itzykson,gnt-book}
\begin{eqnarray}
\label{eq:WZW}
S_{WZW} & = & \frac{1}{8\pi} \int\! d^2x \, {\rm Tr}\, \partial_\mu
g^\dagger \partial_\mu g \nonumber \\
&& - \frac{i}{12\pi}\int\! d^3 x\, \epsilon_{\mu\nu\lambda} {\rm Tr}\,
g^\dagger \partial_\mu g g^\dagger \partial_\nu g g^\dagger
\partial_\lambda g .
\end{eqnarray}
Here $g$ is an $SU(2)$ matrix. The coordinates are $x_0=v\tau$ ($v$ is the
spin velocity and $\tau$ is imaginary time) and $x_1=x$, the coordinate
along the chain. The measure $d^3x$ is defined by extending this 2D space
into a three-dimensional hemisphere $x_2<0$, whose boundary is the
(compactified) physical 2D plane $(x_0,x_1)$, and by analytically continuing
$g(x_0,x_1)$ into this hemisphere such that
$g(x_0,x_1,x_2 \to -\infty)\rightarrow 1$ and $g(x_0,x_1,x_2=0)=g(x_0,x_1)$. This formal action is
not of very much direct practical use, but serves to illustrate the
underlying degrees of freedom of the critical theory. All operators
in the WZW $SU(2)_1$ theory can be constructed from $g$. Corresponding
to the two dominant power-law correlations in
Eqs.~(\ref{eq:antiferro}) and (\ref{eq:bondcorr}), there are two scaling
operators \cite{itzykson,gnt-book}
\begin{eqnarray}
\label{eq:domops}
\vec{N} & \sim & -i \, {\rm Tr}\, g\vec\sigma, \\
\epsilon & \sim & {\rm Tr}\, g.
\end{eqnarray}
Here $\vec\sigma$ is the vector of Pauli matrices, and the $\sim$
indicates that the proportionality between these fields and the
physical staggered magnetization/dimerization involves a cut-off
dependent factor. The operator $\vec{N}$ represents the local
staggered magnetization, while $\epsilon$ represents the local
staggered dimerization (it is the continuum version of the bond
operator
in (\ref{eq:bonden})). There are also subdominant power-law
correlations arising from fluctuations of the chiral $SU(2)$ currents,
\begin{eqnarray}
\label{eq:currents}
\vec{J}_R & = & \frac{1}{4\pi}{\rm Tr}\, g^\dagger \bar\partial g
\vec\sigma, \\
\vec{J}_L & = & \frac{1}{4\pi}{\rm Tr}\, g \partial g^\dagger
\vec\sigma,
\end{eqnarray}
with $\partial = (\partial_0 - i\partial_1)/2$, and $\bar\partial =
(\partial_0 + i\partial_1)/2$. Physically, the operator $\vec{J}=\vec{J}_R
+ \vec{J}_L$ represents the local uniform magnetization, while
$v(\vec{J}_R-\vec{J}_L)$ represents the local magnetization (spin
transport) current.
All the low-energy power-law correlations of the weakly coupled
Heisenberg chains can be exposed by decomposing lattice operators into
a set of the above continuum operators (and generally their
derivatives, see below) {\sl for each chain}. This, for example,
leads to the following decomposition of the spin at a site $n-1/2$
along the horizontal chain number $m$:
\begin{equation}
\vec{S}_{h}(n-1/2,m)=
a\left[\vec{J}_{h,m}(x) + (-1)^n \vec{N}_{h,m}(x)\right].
\label{spin-operator}
\end{equation}
Here $x=na$ ($a$ is the lattice spacing) and $\vec{J}$
(respectively, $\vec{N}$) represents the uniform (respectively, staggered)
part of the spin density.
Similarly, for the vertical spin chains we have
\begin{equation}
\vec{S}_v(n,m-1/2)=
a\left[\vec{J}_{v,n}(y) + (-1)^m \vec{N}_{v,n}(y)\right],
\end{equation}
where $y=ma$.
Notice that the
continuum limit is taken only for the coordinate along the chain; the
perpendicular one becomes an index $m$ (respectively, $n$ for vertical
chains).
The uniform spin magnetization $\vec{J}$ is the sum of the
right $(\vec{J}_R)$ and left $(\vec{J}_L)$ moving components,
$\vec{J}=\vec{J}_R + \vec{J}_L$, and represents the conserved spin
density (it is often referred to in the literature as the spin
``current'', the term originating from the relativistic concept of
space-time current, whose time component is the conserved density).
Note that the staggered dimerization $\epsilon$ does not appear in
Eq.~(\ref{spin-operator}); in fact, it cannot appear in the decomposition
of any single spin operator since it is not a vector under $SU(2)$.
As discussed in the Introduction, for
this reason dimer order does not appear likely in weakly coupled
Heisenberg chains with unfrustrated inter-chain couplings.
The action of the microscopic space group symmetries (described above)
upon the continuum scaling operators will be crucial in the
following. These are rather clear on physical grounds \cite{eggert}:
{\em Translation:}
\begin{equation}
T:
\vec{J} \to \vec{J}, \quad
\vec{N} \to -\vec{N}, \quad
\epsilon \to -\epsilon.
\label{Tr}
\end{equation}
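These rules can be read off from the decomposition
Eq.~(\ref{spin-operator}). For the unit translation $T_h$, for instance,
\begin{equation}
T_h:\ \vec{S}_h(n-\frac{1}{2},m) \mapsto \vec{S}_h(n+\frac{1}{2},m)
= a\left[\vec{J}_{h,m}(x+a) + (-1)^{n+1} \vec{N}_{h,m}(x+a)\right],
\end{equation}
so that in the continuum limit $\vec{J}\to\vec{J}$ while
$\vec{N}\to-\vec{N}$; the sign change of $\epsilon$ follows in the same way
from the staggered bond operator of Eq.~(\ref{eq:bonden}).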
{\em Site parity:}
\begin{equation}
P_s:
\vec{J} \to \vec{J}, \quad
\vec{N} \to \vec{N}, \quad
\epsilon \to -\epsilon.
\label{P_s}
\end{equation}
{\em Link parity:}
Using $P_L=P_s \circ T$ we find
\begin{equation}
P_L:
\vec{J} \to \vec{J}, \quad
\vec{N} \to -\vec{N}, \quad
\epsilon \to \epsilon.
\label{P_L}
\end{equation}
We will see at the end of this section that this symmetry is
responsible for the absence of $\vec{N}_v\cdot \vec{N}_h$ terms in the
Hamiltonian of the problem.
Because of somewhat non-intuitive point-splitting identities, the WZW
model can be written in Hamiltonian form (known as the Sugawara form)
in terms of the spin currents. For a single chain, one has
\begin{equation}
H_{WZW} = \frac{2\pi v}{3} \int\! dx\,
\left[ \vec{J}_R(x)\cdot\vec{J}_R(x) +
\vec{J}_L(x)\cdot\vec{J}_L(x)\right] .
\label{eq:sugawara}
\end{equation}
Applied to the set of horizontal chains (labelled by $m$), the lattice
Hamiltonian $H_h$, Eq.~(\ref{lattice-H-horiz}), transforms into
\begin{eqnarray}
H_h&=&\frac{2\pi v}{3}\sum_m \int dx
\left[ \vec{J}_{h,m,R}(x)\cdot\vec{J}_{h,m,R}(x)
\right.
\nonumber\\
&&
\qquad\qquad\qquad{}
+\vec{J}_{h,m,L}(x)\cdot\vec{J}_{h,m,L}(x)
\nonumber\\
&&\qquad\qquad\qquad
\left.{}
+ g_{bs} \vec{J}_{h,m,R}(x)\cdot\vec{J}_{h,m,L}(x)
\right].
\qquad
\label{H-horiz}
\end{eqnarray}
Here $v = \frac{\pi}{2} J a$ is the spin velocity. Note again that
$J^a_{h,m,R/L}(x)$ depends on position $x=n a$ along the chain
direction whereas its $y=m a$ coordinate dependence only shows up via
the (horizontal) chain index $m$. We have actually included in
Eq.~(\ref{H-horiz}) a {\sl correction} (proportional to $g_{bs}$)
to the WZW model, which is present
in the Heisenberg chain but is {\sl marginally irrelevant} in the
situation under consideration. For this reason, it may be safely
neglected in what follows.
Similarly
\begin{eqnarray}
H_v&=&\frac{2\pi v}{3}\sum_n \int dy
\left[ \vec{J}_{v,n,R}(y)\cdot\vec{J}_{v,n,R}(y)
\right.
\nonumber\\
&&\qquad\qquad\qquad{}
+\vec{J}_{v,n,L}(y)\cdot\vec{J}_{v,n,L}(y)
\nonumber\\
&&\qquad\qquad\qquad
\left.{}
+ g_{bs} \vec{J}_{v,n,R}(y)\cdot\vec{J}_{v,n,L}(y)
\right].
\qquad
\label{H-vert}
\end{eqnarray}
\subsection{Decomposition of the full lattice model}
Now we are ready to express the inter-chain perturbation Eq.~(\ref{lattice-V})
in terms of low-energy modes $\vec{J}$ and $\vec{N}$. We begin by
analyzing the sum of two neighboring spins on the same (say,
horizontal) chain,
\begin{eqnarray}
\vec{s}_h(n,m)&=&
\vec{S}_h(n-1/2,m)+\vec{S}_h(n+1/2,m)
\nonumber\\
&=&
a\left[2\vec{J}_{h,m}(x) - (-1)^n a\partial_x \vec{N}_{h,m}(x)\right].
\quad
\end{eqnarray}
For reasons to be explained in detail below, we have retained the
next-to-leading irrelevant contribution $(\partial_x \vec{N})$ in this
expression. A similar decomposition applies to the sum of two spins on
the crossing vertical chain. The inter-chain interaction $V$ thus
reads
\begin{eqnarray}
V&=&\sum_{n,m}\left\{g_{jj}\vec{J}_{h,m}(x)\cdot\vec{J}_{v,n}(y)
\right.
\nonumber\\
&&\qquad{}
-g_{nj}\left[(-1)^n \partial_x\vec{N}_{h,m}(x)\cdot\vec{J}_{v,n}(y)
\right.
\nonumber\\
&&\left.\qquad\qquad{}
+ (-1)^m\vec{J}_{h,m}(x)\cdot\partial_y \vec{N}_{v,n}(y)\right]
\nonumber\\
&&\left.\qquad{}
+ g_{nn}(-1)^{n+m}\partial_x\vec{N}_{h,m}(x)\cdot\partial_y\vec{N}_{v,n}(y)
\right\},
\qquad
\label{V-ccm-full}
\end{eqnarray}
where, as before, $x=na, y=ma$ and the following couplings are
introduced to shorten notations:
\begin{equation}
g_{jj}=4J_\times a^2, \quad
g_{nj}=2J_\times a^3, \quad
g_{nn}=J_\times a^4.
\label{couplings}
\end{equation}
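Explicitly, these couplings follow from multiplying out the two-spin sums
at a crossing,
\begin{eqnarray*}
\vec{s}_h\cdot\vec{s}_v &=&
a^2\left[2\vec{J}_h-(-1)^n a\,\partial_x\vec{N}_h\right]\cdot
\left[2\vec{J}_v-(-1)^m a\,\partial_y\vec{N}_v\right]
\\
&=&4a^2\,\vec{J}_h\cdot\vec{J}_v
-2a^3\left[(-1)^n\partial_x\vec{N}_h\cdot\vec{J}_v
+(-1)^m\vec{J}_h\cdot\partial_y\vec{N}_v\right]
\\
&&{}+a^4(-1)^{n+m}\,\partial_x\vec{N}_h\cdot\partial_y\vec{N}_v,
\end{eqnarray*}
and restoring the overall factor of $J_\times$ from Eq.~(\ref{lattice-V}).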
It is important to observe that Eq.~(\ref{V-ccm-full}) does not contain
terms of the $\vec{N}_h \cdot \vec{N}_v$ type, which are forbidden by the
symmetry of the checkerboard lattice. For example, reflection with
respect to a vertical chain changes the sign of $\vec{N}_h$,
$(P_L: \vec{N}_h \rightarrow -\vec{N}_h)$, while leaving $\vec{N}_v$
invariant, see Eq.~(\ref{P_L}).
This reflects the strong frustration of the
model under study, as discussed in
the Introduction. Observe also that any pair of horizontal and
vertical chains cross only once, which makes Eq.~(\ref{V-ccm-full}) {\em
local} in space. This requires us to think carefully about the
short-distance regularization of the low-energy theory defined by
Eq.~(\ref{H-horiz}), Eq.~(\ref{H-vert}), and Eq.~(\ref{V-ccm-full})
---the corresponding analysis is described in the next Section.
\subsection{Operator product expansion}
Various perturbations to the WZW model Eq.~(\ref{eq:sugawara}) [such
as the intra-chain backscattering $g_{bs}$ in Eq.~(\ref{H-horiz}) and
Eq.~(\ref{H-vert}), and the inter-chain $V$, Eq.~(\ref{V-ccm-full})]
are most conveniently analyzed with the help of {\sl operator product
expansions} (OPE). These are operator identities that are derived
by applying Wick's theorem to a correlation function of a pair of
operators at nearby points, say, $(x,\tau)$ and $(0,0)$ ---several of
the examples below are worked out in Appendix~\ref{a2}; see also
Appendix A of Ref.~\onlinecite{lin} for more examples. The OPEs below
are valid for operators from the same chain; to lighten
expressions, we suppress chain indices here.
The spin currents $\vec{J}_{R/L}$ obey the following chiral OPEs,
which are frequently used in the literature\cite{gnt-book} [these,
for example, are used to derive the renormalization-group flow of
$g_{bs}$ term in Eqs.~(\ref{H-horiz}) and (\ref{H-vert})]:
\begin{eqnarray}
J^a_R(x,\tau) J^b_R(0)&=&\frac{\delta^{ab}}{8\pi^2 v^2 (\tau - ix/v +
\alpha\sigma_\tau)^2} \nonumber\\
&&+ \frac{i\epsilon^{abc}
J^c_R(0)}
{2\pi v (\tau - ix/v + \alpha\sigma_\tau)}
\label{ope-r}
\end{eqnarray}
and
\begin{eqnarray}
J^a_L(x,\tau) J^b_L(0)&=&\frac{\delta^{ab}}{8\pi^2 v^2 (\tau + ix/v +
\alpha\sigma_\tau)^2} \nonumber\\
&&+ \frac{i\epsilon^{abc}
J^c_L(0)}
{2\pi v (\tau + ix/v + \alpha\sigma_\tau)},
\label{ope-l}
\end{eqnarray}
where, as explained in Appendix~\ref{app:Green-fermions},
$\alpha=a/v$ is the short-time cutoff of the theory
and $\sigma_\tau=\mathrm{sign}(\tau)$.
Being a conserved current, $J^a$ is also a generator of rotations.
Thus the OPE of $J^a$ and $N^a$ should be
nontrivial. In fact, this one is the most important OPE for the
subsequent analysis (see Appendix~\ref{a2} for the derivation),
\begin{eqnarray}
J^a_R(x,\tau)N^b(0)&=&\frac{i\epsilon^{abc}N^c(0)
-i\delta^{ab}\epsilon(0)}{4\pi v(\tau-ix/v + \alpha\sigma_\tau)}
,\nonumber\\
J^a_L(x,\tau)N^b(0)&=&\frac{i\epsilon^{abc}N^c(0)
+i\delta^{ab}\epsilon(0)}{4\pi v(\tau+ix/v + \alpha\sigma_\tau)}.
\label{ope-JN}
\end{eqnarray}
Finally, fusing spin current with dimerization $\epsilon$ gives back
the staggered magnetization
\begin{eqnarray}
&&J^a_R(x,\tau) \epsilon(0)=
\frac{i N^a(0)}{4\pi v (\tau-ix/v + \alpha\sigma_\tau)} ,
\nonumber\\
&&J^a_L(x,\tau) \epsilon(0)=
\frac{-i N^a(0)}{4\pi v (\tau+ix/v + \alpha\sigma_\tau)}.
\label{ope-Je}
\end{eqnarray}
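Note that Eq.~(\ref{ope-Je}) implies that the {\em full} current and the
dimerization fuse trivially at coincident spatial points,
\begin{equation}
J^a(x,\tau)\,\epsilon(x,\tau')=
\frac{iN^a(x,\tau')-iN^a(x,\tau')}
{4\pi v(\tau-\tau'+\alpha\sigma_{\tau-\tau'})}=0,
\end{equation}
an identity that will be used in Sec.~\ref{sec:symmetry} below.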
Observe that Eqs.~(\ref{ope-r})--(\ref{ope-Je}) form a closed
operator algebra---this is the key technical reason behind the
generation of the inter-chain interaction of staggered
magnetizations
in frustrated spin-chain models, see
Ref.~\onlinecite{oleg-leon} and Sec.~\ref{subsec:irrel} below.
\section{Low-energy Hamiltonian}
\label{sec:solution}
The spatially-anisotropic $J_1$-$J_2$ model \cite{oleg-leon} has
taught us that keeping track of the nominally irrelevant terms is
crucial for a correct solution of the problem. In this section, we
extend this line of thinking to the crossed-chains model and
demonstrate that indeed irrelevant terms produce symmetry-allowed
relevant ones in a simple perturbation theory.
\subsection{Symmetry analysis}
\label{sec:symmetry}
Before proceeding with microscopic calculations, it is instructive to
write down the most general form of the inter-chain Hamiltonian $\delta V$
which is allowed by symmetries of the crossed-chains lattice. The
reason to do so is that, while many such terms will be absent in a
na\"ive continuum limit of the original spin model, those which are
``accidentally'' missing (i.e. not prohibited by any symmetry) may be
expected to be generated as a ``quantum correction'' (i.e., through an
RG transformation) when na\"ively irrelevant terms are taken into
account. The necessary complete set of space group generators for
this analysis, $T_h, P_{sh}, P_{Lh},$ and $R_{\pi/2}$, was introduced
in Sec.~\ref{sec:latt-model-symm}.
Naturally (as in any field theory), there are an infinite number of
possible interactions, and since there are additionally an infinite
number of chains, the multitude of potential terms is compounded.
Physically, however, ``pairwise'' interactions involving fields on
only two chains at a time are expected to be most important
(interactions involving more chains simultaneously can be shown to
occur only in higher order in $J_\times/J$). Such an inter-chain
Hamiltonian naturally splits into the sum of $\delta V_\times$, which
describes interactions between two crossing chains, and $\delta
V_\parallel$, which includes interactions between {\em parallel}
chains, $\delta V = \delta V_\times + \delta V_\parallel$. Within
these chain-pair interactions, we narrow the search by considering the
``most relevant'' possibilities (ones involving the smallest number of
the smallest scaling dimension primary fields $\vec{N}$ and $\epsilon$
and no derivatives). Since we are perturbing the decoupled-chain
system, the appropriate sense of ``relevant'' is that of the decoupled
1+1-dimensional critical theories. We find
\begin{equation}
\delta V_\times = \sum_{n,m} a_1 (-1)^{n+m}
\epsilon_{h,m}(na) \, \epsilon_{v,n}(ma)
\label{delta-V-times}
\end{equation}
and
\begin{eqnarray}
\delta V_\parallel &=&
\sum_{n,m,l}\sum_{\nu=h,v}
\Big[ a_2(l) \vec{N}_{\nu,m}(na) \cdot \vec{N}_{\nu,m+l}(na)
\nonumber\\
&&\qquad\qquad{}
+ a_3(l) \epsilon_{\nu,m}(na)\, \epsilon_{\nu,m+l}(na)
\Big].
\quad
\label{delta-V-parallel}
\end{eqnarray}
We note that in Eq.~(\ref{delta-V-parallel}), an interaction is
possible between parallel chains an arbitrary distance $l$ apart.
From the point of view of the decoupled-chain fixed point, there is no
notion (or effect in RG rescaling) of ``distance'' between chains, so
all such terms are equally ``relevant'' from this point of view. One
would expect, however, these terms (i.e. $a_2(l), a_3(l)$) to decay
{\sl in magnitude} with increasing $l$.
It is straightforward to check that these terms and only these terms
satisfy the symmetry requirements of the checkerboard lattice. First,
the invariance of $\delta V_\parallel$ is easy to establish, as it
involves pairs of operators $\epsilon$ and $\vec{N}$ from like chains
(i.e., horizontal-horizontal or vertical-vertical). These transform
identically under all operations, and invariance is trivially
shown.
The crossed-chains term, $\delta V_\times$, is more involved. We
sketch the arguments for its invariance. Rotation by $\pi/2$ about a
crossing is manifest, as the fields in Eq.~(\ref{delta-V-times}) are
drawn from a single such crossing. Unit translation along the
$x$-direction makes $\epsilon_h \rightarrow -\epsilon_h$ while
$\epsilon_v$ is obviously not affected. However, $(-1)^{n+m}$ also
changes its sign, $(-1)^{n+m}\rightarrow (-1)^{n+1+m}$, so that
$T_h(\delta V_\times)=\delta V_\times$. Reflection with respect to a
site on a horizontal chain $P_{sh}$ preserves $\epsilon_v$ but does
change sign of dimerization on every horizontal chain:
$P_{sh}(\epsilon_h)=-\epsilon_h$. But at the same time $P_{sh}$
interchanges even and odd {\em vertical} chains, i.e.,
$P_{sh}((-1)^{n+m})=-(-1)^{n+m}$. Thus $P_{sh}(\delta
V_\times)=\delta V_\times$. Link parity $P_{Lh}$ is simple since
every $\epsilon$ is even under it. Moreover, since $P_{Lh}$ is
nothing but reflection with respect to, say, the vertical chain number
$n$, the vertical chain with index $n+1$ then transforms into that
with index $n-1$, etc. Hence, even and odd vertical chains are not
interchanged by $P_{Lh}$, and $(-1)^{n+m} \rightarrow (-1)^{n+m}$,
showing the invariance under this final generator. Notice that the
staggering factor $(-1)^{n+m}$ plays a very important role in this
consideration -- its presence makes the local interaction of staggered
dimerizations possible.
One could wonder if $\delta V_\times$ could similarly include
a staggered product of magnetizations, $(-1)^{n+m} \vec{N}_h \cdot
\vec{N}_v$, but this is prohibited by the $P_{Lh}$ symmetry. We note
that microscopically, such a term cannot be generated (see the
following subsection for the mechanics of generation of the allowed
terms) as a consequence of the
identity $J^a(x,\tau)\epsilon(x,\tau')=0$ which follows from the OPE
Eq.~(\ref{ope-Je}). The only symmetry-allowed combination of
$\vec{N}$'s that can show up in $\delta V_\times$ is $(\vec{N}_h
\cdot\vec{N}_v)^2$. Such a term does arise in the large-$S$
``order-from-disorder'' calculations, see Ref.~\onlinecite{olegt}, but
in the $S=1/2$ microscopic model it has scaling dimension $2$ and is
thus deemed irrelevant. Moreover, one can derive, using abelian
bosonization, the OPE of two $\vec{N}$ fields at the same spatial
point $x$: $N^a(x,\tau) N^b(x,0) \sim i\epsilon^{abc}
{\text{sign}}(\tau) J^c(x,0)$. This allows one to identify
\cite{affleck-thanks} this biquadratic term with the dimension $2$ scalar
product of two spin currents on crossing chains [that is, the $g_{jj}$
term in Eq.~(\ref{V-ccm-full})], $(\vec{N}_h \cdot\vec{N}_v)^2
\rightarrow \vec{J}_h \cdot \vec{J}_v$.
Observe now that none of the symmetry-respecting terms in $\delta
V_\times$ and $\delta V_\parallel$ are present in the na\"ive
continuum limit of the theory Eq.~(\ref{V-ccm-full}). Below we show
that second-order perturbation theory in the inter-chain exchange
$J_\times$ generates $\delta V_\times$ with coupling constant $a_1
\sim J_\times^2/J$. Similar arguments show that $a_{2,3} \sim
J_\times^4/J^3$. This is because in $J_\times^2$ order one generates
terms involving a product of derivatives of $\vec{N}$ fields on two distinct
parallel chains, $\sim J_\times^2\, \partial_x \vec{N}_{h,m} \cdot
\partial_x \vec{N}_{h,m'}$.
Once these are present, one can follow calculations in
Ref.~\onlinecite{oleg-leon} to find that both $a_2$ and $a_3$ terms in
$\delta V_\parallel$ are generated, but this happens only in the next,
$(J_\times^2)^2 = J_\times^4$, order of the perturbation expansion.
Since, as we show below in Sec.~\ref{subsec:mf}, the $\delta V_\times$
contribution is relevant, it is sufficient to keep only the leading
$a_1$ type terms---subleading $a_{2,3}$ ones are too small
($a_{2,3}/a_1 \sim J_\times^2/J^2 \ll 1$) to change the outcome.
\subsection{On the importance of irrelevant terms}
\label{subsec:irrel}
Here we describe the microscopic calculation of $\delta V_\times$. We
begin by expanding the action in powers of $V$, Eq.~(\ref{V-ccm-full}).
This generates a number of terms of which the most important ones
involve products of spin currents and staggered magnetizations from
the same chain (and with the same spatial coordinate---that is, all
fields belong to the same crossing). Thus we pick out the $g_{jj} g_{nn}$
term and the cross-terms proportional to $g_{nj}^2$. These contributions can be written
in the form
\begin{equation}
\sum_{a,b}\sum_{n,m}(-1)^{n+m}\int d\tau d\tau'
v_\times(x,n;y,m;\tau,\tau')\Big|_{x=na,y=ma},
\label{2nd-order}
\end{equation}
where
\begin{eqnarray}
v_\times \!\!&=&\!\!
g_{nj}^2 \left[
\partial_x N^a_{h,m}(x,\tau) J^b_{h,m}(x,\tau')
J^a_{v,n}(y,\tau)\partial_y N^b_{v,n}(y,\tau')
\right.\nonumber\\
&&\left.{}\!\!
+ J^a_{h,m}(x,\tau)\partial_x N^b_{h,m}(x,\tau')
\partial_y N^a_{v,n}(y,\tau) J^b_{v,n}(y,\tau')\right]
\nonumber\\
&&{}\!\!
+ g_{jj}g_{nn} J^a_{h,m}(x,\tau)\partial_xN^b_{h,m}(x,\tau')
\nonumber\\&&\qquad\times
J^a_{v,n}(y,\tau) \partial_y N^b_{v,n}(y,\tau').
\label{V^2-long}
\end{eqnarray}
Now apply the OPE Eq.~(\ref{ope-JN}) to the product of fields from the same
chain and at the same {\em spatial point} $x$.
For example,
\begin{eqnarray}
J^a_R(x,\tau)\partial_x N^b(x,\tau')&=&
\lim_{x'\rightarrow x}\partial_{x'}J^a_R(x,\tau)N^b(x',\tau')
\nonumber\\
&=&
\frac{-i[i\epsilon^{abc} N^c(x,\tau') - i \delta^{ab}\epsilon(x,\tau')]}
{4\pi v^2 (\tau-\tau' + \alpha\sigma_{\tau-\tau'})^2}.
\nonumber\\&&
\end{eqnarray}
Similarly
\begin{equation}
J^a_L(x,\tau)\partial_x N^b(x,\tau')=
\frac{i[i\epsilon^{abc} N^c(x,\tau') + i \delta^{ab}\epsilon(x,\tau')]}
{4\pi v^2 (\tau-\tau' + \alpha\sigma_{\tau-\tau'})^2}.
\end{equation}
We observe that the OPE of the {\em full} spin current $\vec{J}$ and
$\vec{N}$ at the same spatial point does
not contain the staggered magnetization:
\begin{eqnarray}
J^a(x,\tau)\partial_x N^b(x,\tau')&=&
[J^a_R(x,\tau) + J^a_L(x,\tau)]\partial_x N^b(x,\tau')
\nonumber\\
&=&{}
-\frac{\delta^{ab} \epsilon(x,\tau')}
{2\pi v^2 (\tau-\tau' + \alpha\sigma_{\tau-\tau'})^2}.
\end{eqnarray}
This is a very important result, with which Eq.~(\ref{2nd-order}) can
be brought into the surprisingly compact form:
\begin{eqnarray}
V_2&=&\sum_{a,b} \delta^{ab}\sum_{n,m}(-1)^{n+m} (2g_{nj}^2 +
g_{jj}g_{nn})\nonumber\\
&&\quad\times \int d\tau d\tau'
\frac{\epsilon_{h,m}(na,\tau') \, \epsilon_{v,n}(ma,\tau')}
{[2\pi v^2 (\tau-\tau' + \alpha\sigma_{\tau-\tau'})^2]^2}.
\end{eqnarray}
The integral involved is convergent: splitting it at $t=0$, where
$\sigma_t$ changes sign, each half-axis contributes $1/(3\alpha^3)$, so that
\begin{equation}
\int_{-\infty}^\infty dt \frac{1}{(t + \alpha\sigma_t)^4}
=\frac{2}{3\alpha^3}=\frac{2v^3}{3a^3}.
\end{equation}
Using Eq.~(\ref{couplings}) for the $g$'s involved,
we finally obtain the
following fluctuation-generated
correction to the low-energy effective action
\begin{eqnarray}
\delta S&=& - \frac{1}{2} V_2 \nonumber\\
&=&
-\frac{3J_\times^2 a^3}{\pi^2 v}
\int d\tau \sum_{n,m} (-1)^{n+m}
\epsilon_{h,m}(na,\tau) \, \epsilon_{v,n}(ma,\tau).
\nonumber\\&&
\end{eqnarray}
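For completeness we spell out the prefactor: the spin trace gives
$\sum_{a,b}\delta^{ab}=3$, while Eq.~(\ref{couplings}) yields
$2g_{nj}^2+g_{jj}g_{nn}=12 J_\times^2 a^6$, so that
\begin{equation}
\frac{1}{2}\times 3\times 12 J_\times^2 a^6
\times\frac{1}{(2\pi v^2)^2}\times\frac{2v^3}{3a^3}
=\frac{3J_\times^2 a^3}{\pi^2 v} .
\end{equation}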
Denoting
\begin{equation}
g_\epsilon=v \frac{3J_\times^2 a^2}{\pi^2 v^2},
\label{g-epsilon}
\end{equation}
we obtain the following addition to the inter-chain Hamiltonian $V$,
Eq.~(\ref{V-ccm-full}), to be analyzed below
[this is because $Z=\int e^{-S}$ and $\delta S=\int d\tau\, \delta V$]
\begin{equation}
\delta V_\times =- g_\epsilon a \sum_{n,m} (-1)^{n+m}
\epsilon_{h,m}(na) \epsilon_{v,n}(ma).
\label{delta-V-epsilon}
\end{equation}
The staggered dimerization $\epsilon$ has scaling dimension $1/2$, which
means that it is as
important for the chain physics as $\vec{N}$ is. In fact, up to
logarithmic corrections, the correlation
functions of the staggered dimerization and magnetization decay with the
same power law, $x^{-1}$; see Eqs.~(\ref{eq:antiferro}) and (\ref{eq:bonden}).
This is also clear from the OPE Eqs.~(\ref{ope-JN}) and
(\ref{ope-Je}), which
show that $\vec{N}$ and $\epsilon$ transform into each other under
chiral rotations generated by $\vec{J}_{R/L}$.
Since any pair of horizontal and vertical chains has only one crossing,
Eq.~(\ref{delta-V-epsilon}) is a sum of
{\em local} terms, each of which is marginal (space-time dimension
$=1$, and dimension
of the product $\epsilon_h \epsilon_v$ is $1$ as well). However, we
shall see below that this marginality
is superficial---an infinite number of marginal crossings add up to a
relevant perturbation.
\subsection{Mean-field analysis of the effective inter-chain
interaction Eq.~(\ref{delta-V-epsilon})}
\label{subsec:mf}
From now on it is safe to omit derivative terms present in
$V$, Eq.~(\ref{V-ccm-full}); their role was to generate,
as described in Sec.~\ref{subsec:irrel}, more relevant
symmetry-allowed inter-chain interactions.
With this in mind, we write down the renormalized version of
Eq.~(\ref{V-ccm-full}),
\begin{eqnarray}
V&=&\sum_{n,m}
\Big[
g_{jj} \vec{J}_{h,m}(na) \cdot \vec{J}_{v,n}(ma)
\nonumber\\
&&\qquad{}
- (-1)^{n+m}g_\epsilon a \,
\epsilon_{h,m}(na) \, \epsilon_{v,n}(ma)
\Big].
\label{g_jj+g_e}
\end{eqnarray}
As discussed above, the first term originating from the naive
continuum limit of Eq.~(\ref{lattice-V})
has scaling dimension $2$ while the second term, which is $\delta
V_\times$ generated by high-energy fluctuations,
has scaling dimension $1$.
Thus we are allowed to discard the irrelevant current-current piece
of $V$, Eq.~(\ref{g_jj+g_e}).
As a result, all that remains of the interchain interaction is given by
$\delta V_\times$ [Eq.~(\ref{delta-V-epsilon})], $V \to \delta V_\times$,
which was not present in the naive continuum
limit Eq.~(\ref{V-ccm-full}) at all!
We tackle it, in analogy with the analysis of Ref.~\onlinecite{oleg-leon},
by the chain mean-field approximation.
The staggering factor $(-1)^{n+m}$ suggests a {\em staggered} dimer
order on parallel chains.
That is, we assume the pattern
\begin{equation}
\epsilon_{h,m}(x)=(-1)^m \langle \epsilon\rangle, \qquad
\epsilon_{v,n}(y)=(-1)^n \langle \epsilon\rangle,
\end{equation}
where $\langle\epsilon\rangle$ is a mean-field expectation value.
The inter-chain coupling is then decoupled into
a sum of independent single-chain Hamiltonians,
\begin{eqnarray}
\delta V_\times&=&
- \sum_m (-1)^m g_\epsilon a
\langle\epsilon\rangle \sum_n \epsilon_{h,m}(x=na) \nonumber\\
&&
- \sum_n (-1)^n g_\epsilon a \langle\epsilon\rangle
\sum_m \epsilon_{v,n}(y=ma).
\end{eqnarray}
Consider one of them, say that of the horizontal chain with index $m$
(which is now fixed).
Taking the continuum limit
($\sum_n f(na) \rightarrow a^{-1}\int dx\, f(x)$) gives
\begin{equation}
\delta V_\times(m) = -(-1)^m g_\epsilon \langle\epsilon\rangle\int
\epsilon(x) \, dx,
\end{equation}
which can be easily analyzed along the lines of
Ref.~\onlinecite{oleg-leon}.
Using the abelian bosonization expression for the staggered dimerization
\begin{equation}
\epsilon(x)=\frac{\lambda}{\pi a}\cos[\sqrt{2\pi} \varphi(x)] ,
\end{equation}
where $\varphi$ is the {\em spin} boson field and $\lambda$ is
a nonuniversal constant of order 1,\cite{shelton-ladder}
we arrive at the
effective single-chain
sine-Gordon action for the $m$th chain
\begin{equation}
S(m)=\int d^2r \Big(\frac{1}{2}(\nabla \varphi)^2
- G \cos\sqrt{2\pi}\varphi\Big).
\end{equation}
The action $S(m)$ is written in terms of dimensionless coordinates
$\vec{r}=(x/a,v\tau/a)$ and the effective coupling constant
$G=\lambda^2 g_\epsilon \langle \cos\sqrt{2\pi}\varphi\rangle/(\pi^2 v)$.
The self-consistent equation for $\langle
\cos\sqrt{2\pi}\varphi\rangle$ follows from the exact
solution \cite{luk} for the free energy $F_m$ of the sine-Gordon
model, $Z_m = \int D\varphi \exp[-S(m)] = \exp(-F_m)$:
\begin{equation}
\langle \cos\sqrt{2\pi}\varphi\rangle=-\frac{dF_m}{dG}=\frac{d\ln
Z_m}{dG}=\frac{c_0^2}{3\sqrt{3}} G^{1/3} ,
\end{equation}
where the constant $c_0$ reads
\begin{equation}
c_0 = \frac{2\Gamma(1/6)}{\sqrt{\pi}\,\Gamma(2/3)}
\left(\frac{\pi \Gamma(3/4)}{2\Gamma(1/4)}\right)^{2/3}.
\end{equation}
Since $G$ is itself proportional to $\langle \cos\sqrt{2\pi}\varphi\rangle$,
simple algebra gives
\begin{equation}
\langle \cos\sqrt{2\pi}\varphi\rangle
=\left(\frac{c_0^2}{3\sqrt{3}}\right)^{3/2}
\sqrt{\frac{\lambda^2 g_\epsilon}{\pi^2 v}}
=0.265316 \frac{J_\times}{J},
\end{equation}
where we have set $\lambda=1$.
Hence the expectation value of the staggered dimerization is proportional
to $J_\times$,
\begin{equation}
\langle\epsilon\rangle=\frac{\lambda}{\pi a}
\langle \cos\sqrt{2\pi}\varphi\rangle
=0.0844\frac{J_\times}{J a}.
\end{equation}
The spin gap $\Delta$ is given by the mass $m$ of the lightest
breather in the sine-Gordon theory \cite{luk}
\begin{equation}
m=2M \sin(\pi/6)=M ,
\end{equation}
where $M$ and $G$ are related by
\begin{equation}
\frac{G}{2}=\frac{\Gamma(1/4)}{\pi \Gamma(3/4)}
\left(M \frac{\sqrt{\pi}\,\Gamma(2/3)}{2 \Gamma(1/6)}\right)^{3/2} .
\end{equation}
Thus $M=c_0 G^{2/3}$, and finally the spin gap $\Delta$ is found as
\begin{equation}
\Delta=m=M=
\frac{4\lambda^2 c_0^3}{\pi^6 \sqrt{3}}
\left(\frac{J_\times}{J}\right)^2=
0.0675688 \left(\frac{J_\times}{J}\right)^2.
\end{equation}
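The numerical constants quoted above can be reproduced with a few lines of
Python; the snippet below is a cross-check of the algebra, not part of the
derivation, and uses $\lambda=1$ together with
$g_\epsilon/(\pi^2 v)=12(J_\times/J)^2/\pi^6$, which follows from
Eq.~(\ref{g-epsilon}) and $v=\pi Ja/2$.
\begin{verbatim}
# Cross-check of the chain mean-field constants (lambda = 1).
from math import pi, sqrt, gamma

c0 = (2 * gamma(1/6) / (sqrt(pi) * gamma(2/3))) \
     * (pi * gamma(3/4) / (2 * gamma(1/4))) ** (2/3)

cos_avg = (c0**2 / (3 * sqrt(3))) ** 1.5 * sqrt(12 / pi**6)
eps_avg = cos_avg / pi               # <eps> = eps_avg * Jx/(J a)
gap = 4 * c0**3 / (sqrt(3) * pi**6)  # prefactor of (Jx/J)^2 in Delta

print(c0, cos_avg, eps_avg, gap)
# -> 3.04123  0.265316  0.0844527  0.0675688
\end{verbatim}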
The resulting dimerization pattern is shown in Fig.~\ref{patterns}.
An equivalent configuration is obtained by a global shift of crosses
by one lattice spacing along either the $x$ or $y$ direction. It is
worth pointing out that exactly such inter-dimer correlations
-- crossed-dimer ones -- have been observed in the exact diagonalization
study of finite CCM clusters, see Table II and Fig.~5 in
Ref.~\onlinecite{sindzingre}.
\section{The planar pyrochlore: plaquette VBS and its instabilities}
\label{planar-pyro}
In the preceding sections we focused on the quasi-1D limit
$J_\times\ll J$, and established the existence of the spontaneous
long-range order of the crossed dimer configuration (Fig.~\ref{patterns}).
In this section we will explore a different region in the parameter space,
where the nearest-neighbor coupling $J_\times$
and next-nearest-neighbor exchange coupling $J$ are nearly equal.
Earlier numerical studies using exact diagonalization\cite{fouet} and
strong-coupling expansion techniques\cite{brenig,altman,brenig2}
showed that the ground state at $J=J_\times$ is a valence bond crystal
with long-range quadrumer order, shown in
Fig.~\ref{fig:quadrumerized}. Here we review a simple theoretical
account of this plaquette VBS (P-VBS) state using the quadrumer-boson
approximation,\cite{starykh96,zhitomirsky,lauchli} and examine its
instabilities to other orders. This analysis, together with the
results in the preceding sections, will serve as a basis for our
discussion on the global phase diagram of the CCM in the following
section. Our simple approach presented here is meant to give a
qualitative picture; more quantitatively reliable numerical results
can be obtained, for example, by series expansion, as developed in
Refs.~\onlinecite{brenig} and \onlinecite{brenig2}.
\begin{figure}
\begin{center}
\begin{picture}(240,220)(0,0)
\unitlength=0.9pt
\thicklines
\multiput(20,10)(40,0){6}{\line(0,1){200}}
\multiput(20,10)(0,40){6}{\line(1,0){200}}
\put(20,30){\line(1,1){180}}
\put(20,70){\line(1,1){140}}
\put(20,110){\line(1,1){100}}
\put(20,150){\line(1,1){60}}
\put(20,190){\line(1,1){20}}
\put(40,10){\line(1,1){180}}
\put(80,10){\line(1,1){140}}
\put(120,10){\line(1,1){100}}
\put(160,10){\line(1,1){60}}
\put(200,10){\line(1,1){20}}
\put(20,30){\line(1,-1){20}}
\put(20,70){\line(1,-1){60}}
\put(20,110){\line(1,-1){100}}
\put(20,150){\line(1,-1){140}}
\put(20,190){\line(1,-1){180}}
\put(40,210){\line(1,-1){180}}
\put(80,210){\line(1,-1){140}}
\put(120,210){\line(1,-1){100}}
\put(160,210){\line(1,-1){60}}
\put(200,210){\line(1,-1){20}}
\multiput(20,30)(40,0){6}{\multiput(0,0)(0,40){5}{\circle*{7}}}
\multiput(40,10)(0,40){6}{\multiput(0,0)(40,0){5}{\circle*{7}}}
\multiput(40,30)(80,0){3}{\multiput(0,0)(0,80){3}
{\textcolor{blue}{\oval(25,25)}}}
\multiput(80,70)(80,0){2}{\multiput(0,0)(0,80){2}
{\textcolor{blue}{\oval(25,25)}}}
\put(85,107){\textcolor{red}{$\vec{S}_1$}}
\put(115,76){\textcolor{red}{$\vec{S}_2$}}
\put(145,107){\textcolor{red}{$\vec{S}_3$}}
\put(115,137){\textcolor{red}{$\vec{S}_4$}}
\textcolor{red}{
\put(118,100){0}
\multiput(40,30)(40,40){5}{\circle*{2}}
\multiput(40,190)(40,-40){5}{\circle*{2}}
\put(15,5){\vector(1,1){212}}
\put(230,216){\large$j$}
\put(158,140){1}
\put(225,5){\vector(-1,1){212}}
\put(4,215){\large$k$}
\put(78,140){1}
}
\end{picture}
\end{center}
\caption{(Color online) Quadrumerized checkerboard lattice with
coordinates $(j,k)$ shown. The plaquettes with blue circles are
quadrumerized. Each unit cell contains 4 spins.}
\label{fig:quadrumerized}
\end{figure}
In the following analysis it is more convenient to use
a new coordinate system labelled by $(j,k)$, rotated by $\pi/4$
from the $x$ and $y$ axes; see Fig.~\ref{fig:quadrumerized}.
The quadrumerized valence bond crystal breaks lattice translation
symmetry, and each quadrumerized plaquette centered at $(j,k)$ has
four spins, $\vec{S}_l$ ($l=1,2,3,4$).
From the outset we assume the breaking of translation symmetry and
begin with the Hamiltonian for a single quadrumerized plaquette,
\begin{eqnarray}
H_p&=&
J_\times\left(
\vec{S}_1\cdot\vec{S}_2+\vec{S}_2\cdot\vec{S}_3
+\vec{S}_3\cdot\vec{S}_4+\vec{S}_4\cdot\vec{S}_1
\right)
\nonumber\\
&=&
\frac{J_\times}{2}\left[
\left(\vec{S}_1+\vec{S}_2+\vec{S}_3+\vec{S}_4\right)^2
-\left(\vec{S}_1+\vec{S}_3\right)^2
\right.\nonumber\\
&&\left.\qquad{}
-\left(\vec{S}_2+\vec{S}_4\right)^2
\right].
\label{H_p}
\end{eqnarray}
The lowest-energy state of $H_p$ is a spin singlet with energy
$-2J_\times$, which can be written as
\begin{eqnarray}
s^\dagger|0\rangle
&=&
\frac{1}{2\sqrt3}\bigl(
|\!\uparrow\uparrow\downarrow\downarrow\,\rangle
+|\!\downarrow\downarrow\uparrow\uparrow\,\rangle
+|\!\uparrow\downarrow\downarrow\uparrow\,\rangle
+|\!\downarrow\uparrow\uparrow\downarrow\,\rangle
\nonumber\\
&&\qquad{}
-2|\!\uparrow\downarrow\uparrow\downarrow\,\rangle
-2|\!\downarrow\uparrow\downarrow\uparrow\,\rangle
\bigr),
\label{eq:singlet}
\end{eqnarray}
where $|\sigma_1 \sigma_2 \sigma_3 \sigma_4\rangle$ denotes the state
with $S^z_l=\sigma_l$.
The first excited states are a triplet with energy $-J_\times$,
\begin{subequations}
\label{eq:triplet}
\begin{eqnarray}
t^\dagger_+|0\rangle\!\!&=&\!\!
\frac{1}{2}\bigl(
|\!\uparrow\uparrow\uparrow\downarrow\,\rangle
+|\!\uparrow\downarrow\uparrow\uparrow\,\rangle
-|\!\uparrow\uparrow\downarrow\uparrow\,\rangle
-|\!\downarrow\uparrow\uparrow\uparrow\,\rangle
\bigr),
\\
t^\dagger_z|0\rangle\!\!&=&\!\!
\frac{1}{\sqrt2}\bigl(
|\!\uparrow\downarrow\uparrow\downarrow\,\rangle
-|\!\downarrow\uparrow\downarrow\uparrow\,\rangle
\bigr),
\\
t^\dagger_-|0\rangle\!\!&=&\!\!
\frac{1}{2}\bigl(
|\!\downarrow\downarrow\downarrow\uparrow\,\rangle
+|\!\downarrow\uparrow\downarrow\downarrow\,\rangle
-|\!\downarrow\downarrow\uparrow\downarrow\,\rangle
-|\!\uparrow\downarrow\downarrow\downarrow\,\rangle
\bigr).
\qquad
\end{eqnarray}
\end{subequations}
The operators $s^\dagger$, $t^\dagger_\pm$, and $t^\dagger_z$
can be thought of as creation operators of hard-core bosons.
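These level assignments are easily verified by brute-force diagonalization
of the single plaquette; the following minimal sketch (illustrative, using
{\tt numpy}, in units $J_\times=1$) reproduces the singlet at $-2J_\times$
and the threefold degenerate triplet at $-J_\times$:
\begin{verbatim}
# Exact diagonalization of the 4-spin plaquette, Eq. (H_p), Jx = 1.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[0.5, 0], [0, -0.5]])

def op(mat, site):
    # embed a single-site operator at `site` in the 4-spin space
    ops = [np.eye(2)] * 4
    ops[site] = mat
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# ring of bonds (1,2), (2,3), (3,4), (4,1)
H = sum(op(s, l) @ op(s, (l + 1) % 4)
        for l in range(4) for s in (sx, sy, sz))

print(np.round(np.linalg.eigvalsh(H)[:5], 6))
# -> [-2. -1. -1. -1.  0.]  (singlet, then triplet)
\end{verbatim}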
As mentioned above, the ground state of the CCM is known to be a
gapped P-VBS state at the planar pyrochlore point $J=J_\times$.
As long as $J\approx J_\times$, we may thus expect that
a good approximation to the ground state should be obtained
by direct product of the singlet states, Eq.~(\ref{eq:singlet}),
weakly hybridized with the triplets, Eqs.~(\ref{eq:triplet}).
Motivated by this observation, we employ the quadrumer-boson
approximation\cite{starykh96,zhitomirsky,lauchli}
in which we keep only the
low-lying four states, singlet and triplet,
in each quadrumerized plaquette,
and discard the other higher-energy states.
Now the boson operators are subject to the constraint
\begin{equation}
s^\dagger s + t^\dagger_+t^{}_+ + t^\dagger_zt^{}_z
+t^\dagger_-t^{}_-
=1.
\end{equation}
The plaquette Hamiltonian is then written as
\begin{equation}
H_p=
-2J_\times
+J_\times\left(
t^\dagger_+t^{}_++t^\dagger_zt^{}_z+t^\dagger_-t^{}_-
\right).
\end{equation}
The spins $\vec{S}_l$ can also be written in terms of the hard-core
boson operators.
The representations are found from matrix elements of the spin
operators with the four states.
After some algebra we find
\begin{subequations}
\begin{eqnarray}
S^z_l&=&
\frac{1}{4}\left(t^\dagger_+t^{}_+-t^\dagger_-t^{}_-\right)
+\frac{(-1)^l}{\sqrt6}\left(t^\dagger_zs+s^\dagger t^{}_z\right),
\\
S^+_l&=&
\frac{1}{\sqrt8}\left(t^\dagger_+t^{}_z-t^\dagger_zt^{}_-\right)
-\frac{(-1)^l}{\sqrt3}\left(t^\dagger_+s+s^\dagger t^{}_-\right),
\qquad
\\
S^-_l&=&
\frac{1}{\sqrt8}\left(t^\dagger_zt^{}_+-t^\dagger_-t^{}_z\right)
-\frac{(-1)^l}{\sqrt3}\left(t^\dagger_-s+s^\dagger t^{}_+\right),
\end{eqnarray}
\end{subequations}
where $l=1,2,3,4$.
Assuming that the density of triplets is low in the P-VBS state,
we keep only terms linear in $t_\mu$ and
set $s=s^\dagger=1$;
\begin{equation}
S^a_l=
\frac{(-1)^l}{\sqrt6}\left(t^\dagger_a+t^{}_a\right),
\label{eq:S_i}
\end{equation}
where $a=x,y,z$ and we have introduced
\begin{equation}
t^{}_x=-\frac{1}{\sqrt2}(t^{}_+ + t^{}_-),
\qquad
t^{}_y=\frac{1}{\sqrt2 i}(t^{}_+ - t^{}_-).
\end{equation}
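As an illustration, the coefficient $1/\sqrt{6}$ in Eq.~(\ref{eq:S_i}) can
be checked directly against the explicit states Eqs.~(\ref{eq:singlet}) and
(\ref{eq:triplet}); in the sketch below the binary encoding of the basis
kets is our own (hypothetical) convention:
\begin{verbatim}
# Check <t_z| S^z_l |s> = (-1)^l / sqrt(6), cf. Eq. (eq:S_i).
import numpy as np

def ket(pattern):  # e.g. 'uudd'; leftmost char = site 1
    v = np.zeros(16)
    v[int(''.join('1' if c == 'u' else '0' for c in pattern), 2)] = 1.0
    return v

s = (ket('uudd') + ket('dduu') + ket('uddu') + ket('duud')
     - 2 * ket('udud') - 2 * ket('dudu')) / (2 * np.sqrt(3))
tz = (ket('udud') - ket('dudu')) / np.sqrt(2)

for l in range(4):  # l = 0..3 stands for sites 1..4
    sz_l = np.array([0.5 if (i >> (3 - l)) & 1 else -0.5
                     for i in range(16)])
    print(l + 1, tz @ (sz_l * s))
# -> alternating -0.408248, +0.408248, ... = (-1)^l / sqrt(6)
\end{verbatim}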
With the coordinate system $(j,k)$ in Fig.~\ref{fig:quadrumerized}
the total Hamiltonian of the checkerboard antiferromagnet reads
\begin{eqnarray}
H\!\!&=&\!\!
\sum_{j,k}H_p(j,k)
\nonumber\\
&&{}
+J_\times\sum_{j,k}\left(
\vec{S}_{j,k,1}\cdot\vec{S}_{j-1,k,4}
+\vec{S}_{j,k,2}\cdot\vec{S}_{j,k-1,1}
\right.\nonumber\\
&&\left.\qquad\qquad{}
+\vec{S}_{j,k,3}\cdot\vec{S}_{j+1,k,2}
+\vec{S}_{j,k,4}\cdot\vec{S}_{j,k+1,3}
\right)
\nonumber\\
&&{}
+J\sum_{j,k}\left(
\vec{S}_{j,k,1}\cdot\vec{S}_{j,k+1,3}
+\vec{S}_{j,k,4}\cdot\vec{S}_{j,k+1,2}
\right.\nonumber\\
&&\left.\qquad\qquad{}
+\vec{S}_{j,k,3}\cdot\vec{S}_{j+1,k,1}
+\vec{S}_{j,k,4}\cdot\vec{S}_{j+1,k,2}
\right),
\nonumber\\&&
\label{eq:H3}
\end{eqnarray}
where $\vec{S}_{j,k,l}$ is the $l$th spin $\vec{S}_l$ in
the quadrumerized plaquette centered at $(j,k)$.
With the approximation Eq.~(\ref{eq:S_i}) the Hamiltonian becomes
\begin{equation}
H=
\sum_{\vec{p}}
\left(
-\frac{7}{2}J_\times
+\frac{1}{2}\sum_{a=x,y,z}
\Psi^\dagger_a(\vec{p})\mathcal{H}(\vec{p})\Psi^{}_a(\vec{p})
\right),
\label{eq:HPsi}
\end{equation}
where we have introduced the triplet boson field,
\begin{equation}
\Psi_a(\vec{p})=
\begin{pmatrix}
\tilde{t}^{}_a(\vec{p}) \\ \\
\tilde{t}^\dagger_a(-\vec{p})
\end{pmatrix},
\end{equation}
with the momentum $\vec{p}=(p_1,p_2)$ and the Fourier transform
\begin{equation}
\tilde{t}_a(\vec{p})=\frac{1}{\sqrt{\cal{N}_\circ}}\sum_{j,k}
e^{-i(jp_1+kp_2)}t_a(j,k) ,
\end{equation}
where $\cal{N}_\circ$ is the number of quadrumerized plaquettes.
The Hamiltonian matrix is given by
\begin{equation}
\mathcal{H}(\vec{p})=
\begin{pmatrix}
J_\times+\varepsilon(\vec{p}) &
\varepsilon(\vec{p}) \\ \\
\varepsilon(\vec{p}) &
J_\times+\varepsilon(\vec{p})
\end{pmatrix},
\end{equation}
where
\begin{equation}
\varepsilon(\vec{p})=\frac{2}{3}(J-J_\times)(\cos p_1 + \cos p_2).
\end{equation}
With the Bogoliubov transformation,
\begin{equation}
\begin{pmatrix}
\tilde{t}^{}_a(\vec{p}) \\ \tilde{t}^\dagger_a(-\vec{p})
\end{pmatrix}=
\begin{pmatrix}
\cosh\theta_{\vec{p}} & \sinh\theta_{\vec{p}} \\
\sinh\theta_{\vec{p}} & \cosh\theta_{\vec{p}}
\end{pmatrix}
\begin{pmatrix}
b^{}_a(\vec{p}) \\ b^\dagger_a(-\vec{p})
\end{pmatrix},
\end{equation}
where
\begin{equation}
\exp(-4\theta_{\vec{p}})=1+\frac{2\varepsilon(\vec{p})}{J_\times},
\end{equation}
the Hamiltonian [Eq.~(\ref{eq:HPsi})] is diagonalized,
\begin{equation}
H
=\sum_{\vec{p}}
\left(
-\frac{7}{2}J_\times
+\frac{3}{2}E(\vec{p})
+E(\vec{p})\sum_a
b^\dagger_a(\vec{p})b^{}_a(\vec{p})
\right).
\end{equation}
The energy dispersion of the triplet eigen mode $b_a(\vec{p})$
is given by
\begin{equation}
E(\vec{p})=
\left[J_\times
\left(J_\times+\frac{4}{3}(J-J_\times)(\cos p_1+\cos p_2)\right)
\right]^{1/2}.
\label{triplet_dispersion}
\end{equation}
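The soft-mode instabilities discussed below can be located numerically from
this dispersion; the following minimal sketch (illustrative, in units
$J_\times=1$) confirms the two critical couplings:
\begin{verbatim}
# Soft modes of the triplet dispersion, Eq. (triplet_dispersion).
import numpy as np

def E2(p1, p2, J, Jx=1.0):
    # E(p)^2 = Jx * (Jx + (4/3)(J - Jx)(cos p1 + cos p2))
    return Jx * (Jx + (4/3) * (J - Jx) * (np.cos(p1) + np.cos(p2)))

# p = 0 softens at J/Jx = 5/8 (Neel instability):
print(E2(0, 0, J=5/8))           # ~ 0
# p = (pi, pi) softens at J/Jx = 11/8 (Neel* instability):
print(E2(np.pi, np.pi, J=11/8))  # ~ 0
\end{verbatim}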
\begin{figure}
\begin{center}
\begin{picture}(240,213)(0,5)
\unitlength=0.9pt
\thicklines
\multiput(20,10)(40,0){6}{\line(0,1){200}}
\multiput(20,10)(0,40){6}{\line(1,0){200}}
\put(20,30){\line(1,1){180}}
\put(20,70){\line(1,1){140}}
\put(20,110){\line(1,1){100}}
\put(20,150){\line(1,1){60}}
\put(20,190){\line(1,1){20}}
\put(40,10){\line(1,1){180}}
\put(80,10){\line(1,1){140}}
\put(120,10){\line(1,1){100}}
\put(160,10){\line(1,1){60}}
\put(200,10){\line(1,1){20}}
\put(20,30){\line(1,-1){20}}
\put(20,70){\line(1,-1){60}}
\put(20,110){\line(1,-1){100}}
\put(20,150){\line(1,-1){140}}
\put(20,190){\line(1,-1){180}}
\put(40,210){\line(1,-1){180}}
\put(80,210){\line(1,-1){140}}
\put(120,210){\line(1,-1){100}}
\put(160,210){\line(1,-1){60}}
\put(200,210){\line(1,-1){20}}
\multiput(20,70)(40,0){6}{\multiput(0,0)(0,80){2}
{\textcolor{white}{\circle*{8}}}}
\multiput(20,70)(40,0){6}{\multiput(0,0)(0,80){2}{\circle{8}}}
\multiput(20,30)(40,0){6}{\multiput(0,0)(0,80){3}{\circle*{8}}}
\multiput(80,10)(0,40){6}{\multiput(0,0)(80,0){2}{\circle*{8}}}
\multiput(40,10)(0,40){6}{\multiput(0,0)(80,0){3}
{\textcolor{white}{\circle*{8}}}}
\multiput(40,10)(0,40){6}{\multiput(0,0)(80,0){3}{\circle{8}}}
\multiput(40,30)(80,0){3}{\multiput(0,0)(0,80){3}
{\textcolor{blue}{\oval(24,24)}}}
\multiput(80,70)(80,0){2}{\multiput(0,0)(0,80){2}
{\textcolor{blue}{\oval(24,24)}}}
\end{picture}
\end{center}
\caption{(Color online) The magnetically ordered state due to condensation
of bosons $b_a(\pi,\pi)$. The solid circles represent, say, up
spins, and the open circles represent down spins. The quadrumerized
plaquettes in the nearby P-VBS phase are indicated by blue circles.
}
\label{fig:order}
\end{figure}
The simple quadrumer-boson approximation described above reproduces
the basic feature of the previous numerical
studies:\cite{fouet,brenig,altman,brenig2}
at $J\approx J_\times$ the P-VBS state is a stable
singlet ground state with a gap to excited states.
The softening of the triplet mode Eq.~(\ref{triplet_dispersion}) tells us
potential instabilities that the P-VBS state may have (see also
Ref.~\onlinecite{brenig2}). First, from
$E(0)=[J_\times(8J-5J_\times)/3]^{1/2}$ we see that it becomes
unstable at $J_\times\to 8J/5$, when the bosons condense at
$\vec{p}=0$. The resulting state has the long-range N\'eel order,
like the one realized in the square-lattice limit $J_\times\gg J$.
This can be easily seen from Eq.~(\ref{eq:S_i}), in which we may
suppose that $t^\dagger_a + t^{}_a$ is a nonvanishing $c$-number upon
condensation of bosons. In the present approximation, the transition
from the plaquette singlet to the N\'eel-ordered state occurs at
$(J/J_\times)_{c1}=5/8$, which should be compared with the estimate
$(J/J_\times)_{c1}=0.79$--$0.81$ from a sophisticated strong-coupling
expansion.\cite{brenig2} Second, the P-VBS state becomes unstable as
$J/J_\times\to(J/J_\times)_{c2}=11/8$, at which bosons with momentum
$\vec{p}=(\pi,\pi)$ condense. Upon bose condensation the spins will
have a magnetic long-range order, shown in Fig.~\ref{fig:order}, with
the spin configuration $\uparrow\uparrow\downarrow\downarrow
\uparrow\uparrow\downarrow\downarrow$ along the diagonal directions
and the N\'eel order along the horizontal and vertical chain
directions [as can be seen by replacing $t_a^\dagger + t_a \to
(-1)^{j+k}$ in Eq.~(\ref{eq:S_i})]. In fact, this is one of the
magnetically ordered states which are shown to be stable at $J\gg
J_\times$ in the $1/S$ expansion; see Fig.~2(b) and Fig.~6 in
Ref.~\onlinecite{olegt}. The softening of the triplet mode at
$\vec{p}=(\pi,\pi)$ is also found in the strong-coupling expansion in
Ref.~\onlinecite{brenig2} with the estimated critical coupling
$(J/J_\times)_{c2}=1.06$--$1.13$.
Finally, we comment on two defects in our approach. One is that the
lattice translation symmetry is explicitly broken from the outset and
cannot be restored within the theory. This means that the N\'eel
ordered state at $J_\times>5J/8$ inevitably has the P-VBS order as
well -- i.e., it is a coexistence phase. This should probably be viewed
as an artifact of the approach -- see Sec.~\ref{global} for discussion
of this portion of the phase diagram. The other defect is that we
have ignored interactions among the triplet bosons (as well as
projected out the other higher-energy states -- singlet, triplet, and
quintuplet -- on each quadrumerized plaquette). In the crudest
approximation we adopted, the first excited states are a triplet
excitation $S=1$ with no energy dispersion at $J_\times=J$. The
numerical studies\cite{fouet,brenig,altman,brenig2} showed, however,
that at $J=J_\times$ the lowest-excited states are in the spin singlet
sector (see, for example, Sec.\ V in Ref.~\onlinecite{fouet}). The
cluster-based calculations\cite{brenig,altman} indicate that these
singlet excitations are bound states of two $S=1$ excitations. To
describe correctly the singlet excitations in terms of the quadrumer
bosons, one needs to go beyond the linear approximation Eq.~(\ref{eq:S_i})
and include interactions among the bosons. We do not try to do this
here, but only point out that the {\em dispersionless} triplet mode is
naturally susceptible to forming a bound state. Away from the planar
pyrochlore point $J_\times=J$, not much is known about the low-energy
excitations in the $S=0$ sector; it is not well understood how the
singlet energy levels in the spin gap change as a function of
$J_\times/J$.\cite{brenig-discussion}
\section{Global phase diagram of the CCM}
\label{global}
In this section we discuss the global zero-temperature phase diagram
of the CCM as the control parameter $J_\times/J$ is increased from $0$
to $\infty$. Our analysis relies on three well-established facts.
First, the ground state at $J_\times\gg J$ obviously has the N\'eel
order and is smoothly connected to the N\'eel ordered state of the
antiferromagnetic Heisenberg model on the square lattice. Second, by
now there is convincing numerical \cite{fouet,brenig,altman,brenig2}
and analytical \cite{sachdev,moessner,toronto} evidence for the P-VBS
state at and around
$J=J_\times$. Finally, as shown in Sec.~\ref{sec:solution}, the
ground state is spontaneously dimerized in the quasi-one-dimensional
$J_\times \ll J$ limit as well, where long-range crossed-dimer order
(Fig.~\ref{patterns}) sets in.
The first question we address here is how the two dimerized phases,
plaquette VBS (P-VBS) and crossed-dimer VBS (CD-VBS), are connected.
We propose two complementary scenarios in subsections
\ref{subsec:1order} and \ref{subsec:ordered} below.
The nature of the transition between the P-VBS and the N\'eel states
is discussed in Sec.~\ref{subsec:p-vbs-neel}.
\begin{figure}
\center
\includegraphics[width=\columnwidth]{cros-glob1.eps}
\caption{(Color online) Global phase diagram of the CCM model in scenario I of
Sec.~\ref{subsec:1order}. Thick vertical lines with question
marks indicate that the corresponding transition is, according to
Landau theory reasoning, either {\sl first-order} or occurs via an
intermediate {\sl co-existence} phase.}
\label{fig:global-1}
\end{figure}
\subsection{Scenario I: direct transition between the crossed-dimer and
plaquette VBS}
\label{subsec:1order}
The `minimal' assumption is that the two quantum-disordered valence
bond phases connect at some critical value $J_\times/J < 1$. The
validity of this assumption can only be verified by the {\sl exact}
calculation of the ground state energies of two dimerized phases in
the whole interval $0 < J_\times/J < 1$ of interest. Such a
calculation is obviously beyond our analytic approximations suited for
$J_\times/J \to 0$ (CD-VBS, Sec.~\ref{sec:solution}) and $J_\times/J
\to 1$ (P-VBS, Sec.~\ref{planar-pyro}) limits. Instead, we take a
phenomenological point of view here and assume that the ground state
energies are such that a {\sl direct} transition between the CD-VBS
and P-VBS phases is possible. At least partial support for this point
of view is provided by the exact diagonalization study of Sindzingre
{\it et al.}\cite{sindzingre} which seems to indicate a single change
in the ground state around $J_\times/J \approx 0.8$.
The question then is whether the transition between these two phases can
be continuous. In general, this question is difficult to answer.
Most formally, the renormalization group theory of continuous critical
phenomena sets only some rather weak constraints on the existence
of a continuous phase transition between any two phases ``A'' and
``B''. In particular, it requires the existence of an abstract
scale-invariant fixed point (critical field theory) with a single
``relevant'' symmetry-allowed operator in its spectrum, such that a
positive/negative coefficient of this operator in the action takes the
system into phase A/B. The critical fixed point theory itself must
clearly have {\sl higher} symmetry than either phase A or B, but no a
priori restriction is placed on the relation of the symmetries of
phase A to those of phase B.
A conventional -- and more stringent -- ``criterion'' for the existence of
a continuous transition is based on the specific realization of the
critical theory provided by a Landau-Ginzburg-Wilson (LGW) action
written in terms of order parameters. More physically, LGW theory
permits a continuous transition by the ``condensation'' of some ``soft
mode'' of phase A, which transforms non-trivially under the symmetry
group of A. The condensation of this soft-mode order parameter then
leads to a lowering of symmetry (since by assumption the condensation
breaks some symmetries that it transforms under) in phase B. A
necessary criterion for a LGW-allowed continuous phase transition is
thus that the symmetry group of one phase (B in the example) is a
sub-group of the other (phase A).\cite{landau-statmech}
Further restrictions are implied by
detailed consideration of the LGW expansion (e.g., presence of cubic
terms, etc.), as is standard.\cite{landau-statmech}
Recent work on related but distinct quantum phase transitions has
provided explicit theoretical examples of non-LGW critical
theories,\cite{dcp} demonstrating that the violation of this
conventional ``criterion'' is more than a formal possibility.
Unfortunately, there is at present no general prescription to supplant
the LGW criterion, so we are left in the uncomfortable position of
being unable to solidly argue for or against the possibility of a
continuous quantum critical point (QCP).
Instead, we will content ourselves here with the LGW analysis.
It is straightforward to conclude that a continuous transition between
the CD-VBS and P-VBS states is prohibited by the LGW criterion. This
can be seen by the two lattice reflections ${\cal R}_{1,2}$, which map
the crossed-chains lattice onto itself, i.e., symmetries of the
Hamiltonian. Here ${\cal R}_1$ is the reflection with respect to a
horizontal chain [it corresponds to a link parity operation $P_L$
Eq.~(\ref{P_L}) on all vertical chains], and ${\cal R}_2$ is the
reflection with respect to a horizontal line passing through the
centers of empty plaquettes [this is a site parity $P_s$ Eq.~(\ref{P_s})
from the point of view of vertical chains]; similar reflections with
respect to vertical lines are accounted for by the $\pi/2$ rotational
symmetry (about chain crossings) of the lattice.
Both phases are two-fold degenerate, as can be seen from
Fig.~\ref{fig:global-1} (and hence can be described by Ising order
parameters). Their symmetries are distinct. In particular, note
first that ${\cal R}_1$ is a symmetry of the CD-VBS phase, but not the
P-VBS (it interchanges the two P-VBS ground states). Thus the
symmetry group of the CD-VBS phase is not a subgroup of that of the
P-VBS phase. Second, ${\cal R}_2$ is a symmetry of the P-VBS phase,
but not the CD-VBS phase. Thus the symmetry group of the P-VBS phase
is not a subgroup of the CD-VBS state. Since neither symmetry group
is a subgroup of the other, a continuous LGW transition between the
two states is not possible, as promised.
The simplest alternative is a first order transition between the two
phases, which is always possible, and may perhaps be likely. Another
possibility is that, between the two states, there is a finite range
of coexistence of P-VBS and CD-VBS order. Such a coexistence phase can
have continuous LGW (Ising) transitions to both the P-VBS and CD-VBS
states. The latter scenario is only one of a multitude of
conceptually possible phase structures, for which we have no physical
motivation. We indicate this uncertainty by the question mark in
Fig.~\ref{fig:global-1}.
\subsection{Scenario II: CD to P-VBS via an intermediate ordered phase}
\label{subsec:ordered}
\begin{figure}
\center
\includegraphics[width=0.95\columnwidth]{cros-glob2.eps}
\caption{(Color online) Global phase diagram of the CCM model in scenario II of
Sec.~\ref{subsec:ordered}. The continuous $O(3)$ transition between
the N\'eel$^*$ and P-VBS phases is indicated by a dashed vertical
line. Other notations are as in Fig.~\ref{fig:global-1}.}
\label{fig:global-2}
\end{figure}
A quite different scenario is suggested by the quadrumer-boson
approximation of Sec.~\ref{planar-pyro}: an intermediate {\sl
magnetically ordered} phase between the CD-VBS and P-VBS states. It
was found there that the P-VBS state becomes unstable at
$(J_\times/J)_{c2}=8/11$ as the $J_\times/J$ ratio is reduced below
the planar pyrochlore value of $1$. The resulting state, depicted in
Fig.~\ref{fig:order} and Fig.~\ref{fig:global-2}, possesses long-range
magnetic order. We denote it as the N\'eel$^*$ state in the following.
This magnetically ordered state was previously found in the large-$S$
approach \cite{olegt}, where it arises as a result of a quantum
``order-from-disorder'' selection among the large family of degenerate
(at $S=\infty$) collinear ordered states. The fact that it also
appears as a result of the triplet softening of the $S=1/2$ P-VBS
phase of Sec.~\ref{planar-pyro} gives a strong independent argument in
favor of its stability. Of course, as in the previous scenario,
the ultimate fate of the N\'eel$^*$ phase is to be decided by
accurate numerical investigations, and here we just assume that this
ordered phase is indeed the ground state in a finite $J_\times/J$
window within the $(0,1)$ interval.
The transition between the P-VBS and the N\'eel$^*$ phases manifests
itself as a triplet condensation transition within the consideration
of Sec.~\ref{planar-pyro}. Since such a ``soft-mode'' transition is
the general physical interpretation of LGW theory, it should not come
as a surprise that the N\'eel$^*$ to P-VBS transition satisfies the
LGW criterion for a continuous critical point. In particular, the
symmetry group of the N\'eel$^*$ state is generated by: (1) spin
rotations about the ordering axis, (2) translations along a diagonal
[$(1,1)$ or $(1,-1)$ in the coordinate system of
Fig.~\ref{fig:global-2}] composed with a time-reversal which inverts
the spins $\vec{S}_r \rightarrow -\vec{S}_r$, (3) reflections ${\cal
R}_2$, and (4) $\pi/2$ rotations about the center of a plaquette
containing four parallel spins. Plainly, from Fig.~\ref{fig:global-2},
every one of these operations is also a symmetry of the P-VBS state;
hence the symmetry group of the N\'eel$^*$ state is a subgroup of that
of the P-VBS state. Moreover, the triplet condensation amplitude can
be identified with the N\'eel$^*$ order parameter: an $O(3)$ vector
specifying the direction of spin orientation at some reference site in
Fig.~\ref{fig:global-2}. Indeed, an LGW expansion could be developed
in this order parameter, but we content ourselves with the expectation
that the P-VBS to N\'eel$^*$ transition is likely in the continuous
$O(3)$ universality class.
Clearly, the N\'eel$^*$ phase cannot survive down to the
$J_\times/J=0$ point which describes decoupled $S=1/2$ spin chains,
as quantum spin fluctuations destroy antiferromagnetic
N\'eel long-range order in individual chains.\cite{olegt} The
calculations of Sec.~\ref{sec:solution} demonstrate that at small
$J_\times/J$, the fluctuating dimerization field $\epsilon$ takes
over the (quasi-classical) spin fluctuations and drives the chains
into the CD-VBS phase.
We can again apply the LGW criterion to ask whether a continuous
CD-VBS to N\'eel$^*$ transition is possible. Clearly, the symmetry
group of the CD-VBS state cannot be a subgroup of the N\'eel$^*$
phase, since the former is spin-rotationally invariant. However, the
N\'eel$^*$ phase is invariant under ${\cal R}_2$, which, as we saw in
the previous subsection, is not a symmetry of the CD-VBS phase. Thus,
the symmetry group of the N\'eel$^*$ state is not a subgroup of that
of the CD-VBS phase, and a continuous LGW transition between these two
phases is prohibited.
The transition then is likely either a {\sl first-order} one or
proceeds via an intermediate ordered and bond-modulated {\sl
coexistence} phase. Such a phase is easy to imagine: start with the
N\'eel$^*$ state and slightly modulate the spin exchange couplings along
horizontal and vertical chains so that ``strong'' bonds repeat the
pattern of dimers in the CD-VBS phase. This state clearly breaks
${\cal R}_2$, but does preserve the long-range magnetic order (for
sufficiently weak modulation) of the N\'eel$^*$ phase. The transition
from the CD-VBS to such a ``modulated'' N\'eel$^*$ one is then
continuous $O(3)$ spin-symmetry breaking, while at some higher
$J_\times/J$ the bond modulation goes away via a continuous $Z_2$
transition and one obtains the pure N\'eel$^*$ phase. Which of the two
possibilities, first-order or coexistence, is realized cannot be
decided within our analytical approach, and for this reason the
transition between CD-VBS and N\'eel$^*$ phases is marked by a
question mark in Fig.~\ref{fig:global-2}.
\subsection{Plaquette VBS to N\'eel transition}
\label{subsec:p-vbs-neel}
Regardless of the phase structure for $J_\times/J <1$, we expect a
transition at larger $J_\times$ from the P-VBS state to the
conventional N\'eel state. As is the case with the CD-VBS to
N\'eel$^*$ transition discussed above,
a continuous P-VBS to N\'eel QCP is forbidden within
LGW theory (by similar arguments, which we refrain from giving
explicitly for brevity). In this particular case, however, an alert
reader may note that the two phases appear to be very similar to those
recently argued to be connected by a continuous but non-LGW
phase transition, deemed a {\sl Deconfined} Quantum Critical Point
(DQCP).\cite{dcp}
The analogy, however, is not complete. Significantly, the
checkerboard lattice differs in detail from the square lattice
discussed in Ref.~\onlinecite{dcp} in its point symmetry group. In
particular, a $\pi/2$ rotation about a {\sl site} of the lattice, a
symmetry of the square lattice, is {\sl not} a symmetry operation of
the checkerboard lattice. We believe this symmetry distinction is
sufficient to destabilize the putative DQCP. It is beyond the scope
of this paper to fully recapitulate the arguments of
Ref.~\onlinecite{dcp}, which would be necessary to explain this
conclusion in a stand-alone fashion. Instead, we will sketch these
arguments, assuming the reader will refer to Ref.~\onlinecite{dcp} for
further details.
The crucial, indeed defining, property of the DQCP is an emergent
topological conservation law, exactly maintained at the critical fixed
point. Specifically, ``skyrmion number'' is conserved by the fixed
point theory. This is not true microscopically at the lattice level,
but is an emergent feature of the critical theory, as argued in
Ref.~\onlinecite{dcp}. A crucial step in that argument is the
remarkable identification (due to Haldane\cite{haldane88})
of the skyrmion creation operator with the columnar/plaquette
VBS order parameter. These can
be defined through a complex scalar field $\Psi$ (see
Ref.~\onlinecite{dcp}). We emphasize that although $\Psi$ has the
symmetries of the rather ``conventional'' VBS order parameter, it is
not to be viewed as a nearly-free field in the LGW sense, but rather
some ``composite'' operator in the critical field theory. Under the
$\pi/2$ site rotation above, one finds $\Psi \rightarrow i\Psi$, and
consequently, for the square lattice, the skyrmion creation operator
can appear only in the fourth order in the continuum field theory
action for the square lattice antiferromagnet, i.e., as a perturbation
of the form $S'=\int \! d\tau d^2x \, \lambda_4 {\rm Re}\, \Psi^4$. A
variety of arguments then indicate that, because of the large (fourth)
power of $\Psi$ which appears, $\lambda_4$ is {\sl irrelevant} at the
DQCP. On the checkerboard lattice, lacking this $\pi/2$ site
rotation, the remaining symmetries of the system are insufficient to
rule out the much more relevant ``quadratic'' term $S'=\int \! d\tau
d^2x \, \lambda_2 {\rm Re}\, \Psi^2$. The presence of the non-zero
$\lambda_2$ term is, incidentally, also tied to the fact that the
P-VBS state is only {\sl two-fold} degenerate, compared to the {\sl
four-fold} degeneracy of the columnar and plaquette VBS states on
the square lattice.
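For orientation, the symmetry counting just described can be restated
compactly: since the $\pi/2$ site rotation acts as
$\Psi \rightarrow i\Psi$,
\[
{\rm Re}\,\Psi^{2} \;\rightarrow\; -{\rm Re}\,\Psi^{2},
\qquad
{\rm Re}\,\Psi^{4} \;\rightarrow\; {\rm Re}\,\Psi^{4},
\]
so the quadratic term is forbidden on the square lattice but allowed on
the checkerboard lattice, where this rotation is absent.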
A necessary and sufficient condition for the stability of this DQCP on
the checkerboard lattice is thus the irrelevance of $\lambda_2$.
Ultimately, this can be decided only by detailed numerical
calculations of the scaling dimension of the $\Psi^2$ operator.
Unpublished numerical results\cite{motrunichprivate}\ for the
easy-plane deformation of the theory (which is more numerically
tractable) suggest that it is in fact {\sl relevant}. If this
conclusion is, as we suspect, true for the SU(2) symmetric
model, {\sl the DQCP is not stable on the checkerboard lattice}. Thus
we are led to conclude that there is no viable candidate theory for a
continuous N\'eel to P-VBS quantum critical point in this model, and
that such a transition is quite unlikely.
The simple harmonic analysis of the previous section predicts another
``soft mode'' transition in the $O(3)$ universality class out of the
quadrumerized VBS state to one with N\'eel order at $J/J_\times<1$.
The resulting magnetically ordered phase is, by its very construction,
a coexistence region with both P-VBS and N\'eel order, that is, with
{\sl less} symmetry than either phase. This is built into the
quadrumer boson expansion because all excitations are constructed
about a background that explicitly has the reduced symmetry of the
P-VBS state, and there is no mechanism to restore the full point group
symmetries of the checkerboard lattice. Thus we believe the
alternative of a direct first-order transition (first-order since the
DQCP theory is unstable) from the P-VBS to the true N\'eel state should
not be ruled out. The possible existence of a
coexistence region between VBS and N\'eel orders in various models is
still a subject of some contention. It has been discussed in great
detail in Ref.~\onlinecite{sachdev-park}. An exact diagonalization study
of the quantum checkerboard antiferromagnet\cite{sindzingre} has
concluded that, if present at all, the co-existence phase is very
narrow. Clearly more detailed studies of this interesting question are
needed. At present, we can only reiterate that a single continuous
transition is highly unlikely in view of the arguments presented
above.
\section{Back to three dimensions}
\label{sec:3d}
Consider now the 3D pyrochlore. Although all bonds of a
tetrahedron are equivalent by symmetry, it makes sense to ask, in
analogy with the 2D lattice that we analyzed in this paper, what would
happen if some bonds were stronger than others. The particular
generalization motivated by the present study of the 2D projected
model (which can be thought of as a ``shadow'' of the 3D pyrochlore on
a 2D plane), involves
a modified model in which two opposite bonds of a tetrahedron are strong
($J$) whereas four remaining bonds, connecting the strong ones, are
weak ($J_\times$). Then, in the limit $J_\times/J \ll 1$, one is back
to the problem of strong spin chains coupled by weak and frustrated
inter-chain $J_\times$. Now, however, chains are arranged in layers: chains
are parallel to each other (oriented along either $x$ or $y$
direction) in each layer, but are orthogonal to those in the layers
right above and below. That is, chains form a stack of the type
$x$-$y$-$x$-$y\ldots$ along the vertical ($z$) direction. Chains in one layer
do not interact with each other; $J_\times$ couples orthogonal chains
from neighboring layers. This is just a 3D generalization of the 2D
situation analyzed in this paper. It does not introduce any new features,
and hence the answer for the ground state is straightforward -- it is
spontaneously dimerized into the pattern shown in
Fig.~\ref{fig:pyro3d}.
\begin{figure}
\center
\includegraphics[width=0.6\columnwidth]{pyro.eps}
\caption{(Color online) Three-dimensional dimerization pattern of a generalized
``quasi-one-dimensional'' pyrochlore antiferromagnet. Also shown, by
a long blue arrow, is its ``two-dimensional'' projection, which
coincides with the crossed-dimer order of Fig.~\ref{patterns}.}
\label{fig:pyro3d}
\end{figure}
Such a generalization is not unrealistic. It appears that the $S=1/2$
pyrochlore material GeCu$_2$O$_4$ has exactly such a
quasi-one-dimensional structure,\cite{gecuo} thanks to a strong
Jahn-Teller elongation of CuO$_6$ octahedra along the crystal $c$
direction.\cite{gecuo} From the high-temperature tail of the uniform
spin susceptibility one can estimate the ratio of (frustrated)
inter-chain exchange $J_\times$ to the intra-chain one $J$ as
$J_\times/J \approx 0.16$.\cite{gecuo} At lower temperatures the
uniform spin susceptibility follows that of the spin chain down to
$T_c = 33$\,K, where a small discontinuity is observed. The specific
heat shows a sharp peak at the same temperature, suggesting a
first-order transition to the magnetically ordered state of yet
unknown structure (as far as we know, Ref.~\onlinecite{gecuo} is the
only experimental study of this interesting material at the moment).
The theory presented in this paper predicts that for a sufficiently
small $J_\times/J$ ratio, the ordered state will be replaced by the
quantum-disordered valence-bond solid shown in Fig.~\ref{fig:pyro3d}.
Another very interesting realization of ``one-dimensionality'' in the
3D setting is provided by the
$S=1$ pyrochlore material ZnV$_2$O$_4$.\cite{znvo} There spin chains
are formed below the structural
{\em orbital-ordering} transition at $T_{c1} =50$\,K, as observed in
the recent neutron scattering experiment.\cite{shlee}
This is followed by the second, {\em magnetic}, transition at
$T_{c2}=40$~K. The resulting collinear magnetic
order can be described
as follows: $S=1$ spins order antiferromagnetically along the strong
(chain) directions in such a way that
spins along the weaker ($J_\times$) bonds form a ``4-spin''
pattern,\cite{tsunetsugu}
``up-up-down-down-up-up-\ldots''
This 3D structure is, in fact, rather similar to the
2D one, Fig.~\ref{fig:order},
found in Sec.~\ref{subsec:ordered}:
observe that spins on the inter-chain bonds in that Figure
follow the same ``4-spin'' pattern of two up and two down spins. This
is not a coincidence: in both cases
the classical order can be understood in terms of the well-known
``order-from-disorder" phenomenon,\cite{shender-henley}
induced by either quantum \cite{olegt} or thermal \cite{tsunetsugu}
fluctuations.
This analogy suggests that the magnetically ordered state of
GeCu$_2$O$_4$, observed in Ref.~\onlinecite{gecuo},
should be similar to that in the
low-temperature phase of ZnV$_2$O$_4$---clearly more experimental
and theoretical studies of this question are
required. The analogy {\em does not} apply in the $J_\times/J \to 0$ limit
where decoupled $S=1$ chains, although in the
quantum-disordered phase with a finite spin gap
$\Delta \sim 0.4J$,\cite{haldane2} do not break translational symmetry.
This is in contrast with $S=1/2$ chains studied in this paper; the
decoupled limit is characterized by the
crossed-dimer order, Fig.~\ref{fig:pyro3d}, which does break translational
symmetry. Properties of the 3D
phase transitions between quantum-disordered and ordered phases
constitute another interesting theoretical problem
which we leave for future studies.
\section{Conclusions}
\label{sec:conclusions}
The main result of this work is the prediction of a novel
VBS phase, the {\sl crossed-dimer} phase, illustrated in
Fig.~\ref{patterns} and Fig.~\ref{fig:pyro3d}.
This new VBS phase arises in the 1D limit of the model as a result of
the frustration-fostered competition
between classical (represented by
the staggered magnetization $\vec{N}$) and quantum (represented by the
staggered dimerization $\epsilon$) ordering tendencies.
Our analysis is based on the careful perturbative implementation of the
well-known $SU(2)$ symmetry of the $g$ matrix field of the WZW
model, which provides
rigorous field-theoretical description of the
low-energy sector of the $S=1/2$ isotropic Heisenberg chain.
This symmetry is made explicit by the OPE
Eqs.~(\ref{ope-r})--(\ref{ope-Je}),
which show transformation properties of
the low-energy ``quantum triad'' $\{\vec{J}_{R/L}, \vec{N}, \epsilon\}$.
As shown in Sec.~\ref{sec:solution}, consistent
implementation of these OPEs requires a careful treatment of the
often-neglected {\sl gradient} terms
(``non-primary fields'' in the conformal field theory nomenclature, such as $\partial_x \vec{N}$)
which link together quantum fluctuations
of the conserved spin current with those
of the staggered magnetization and dimerization fields.
Once this is done, straightforward
inter-chain perturbation
theory leads to the frustration-generated interaction of
dimerizations
on the crossing chains, Eq.~(\ref{delta-V-epsilon}).
Our finding of the CD-VBS phase in the $J_\times \ll J$ limit of the
checkerboard antiferromagnet eliminates a previously proposed
\cite{sliding} sliding Luttinger liquid phase as the candidate for the
ground state. Like many others, that work \cite{sliding}
overlooked the crucial role of the gradient terms in the analysis of
the frustrated inter-chain interaction between critical $S=1/2$
Heisenberg chains.
It is also worth pointing out here that our calculation clarifies
previous somewhat inconclusive ``sightings'' of the decoupled-chains phase
\cite{sachdev,toronto} that arise in a widely-used large-$N$ approach
\cite{read}
to frustrated spin models. By its very construction, that technique
fails to account, at the leading $N=\infty$ level, for the
fluctuation-generated residual dimer-dimer interaction in the
anisotropic 1D limit (although one does expect finite $1/N$
corrections to the inter-dimer interaction to appear once the
fluctuations of the compact gauge field are accounted
for\cite{sachdev}).
We have also presented a global phase diagram of the CCM
(Sec.~\ref{global}). Although phenomenological in nature, our
analysis stresses the importance of lattice symmetries in delineating
the {\sl order} of possible direct transitions between various quantum
(CD- and P-VBS) and classical (ordered N\'eel and N\'eel$^*$) phases
of the CCM found in this and previous studies. We find that most of
such transitions are required to be of the {\sl first-order} type, or
proceed via an intermediate co-existence phases, as illustrated in
Figs.~\ref{fig:global-1} and \ref{fig:global-2}. This claim concerns
even the relatively well studied P-VBS to N\'eel transition
\cite{sindzingre,brenig,brenig2} and clearly calls for more numerical
(as well as analytical) investigations of this interesting question.
Last but not least, we have also presented a simple but intriguing
extension of the approach to the anisotropic three-dimensional
pyrochlore antiferromagnet, which may be relevant to both $S=1/2$ and
$S=1$ pyrochlore-based magnetic materials.\cite{gecuo,shlee} We hope
that this interesting connection will inspire new experiments in this
exciting area.
\acknowledgments
We would like to thank A. Abanov, P. Azaria, I. Affleck, W. Brenig, F.H.L.
Essler, M.P.A. Fisher, T. Fukui, F.D.M. Haldane, P. Lecheminant, R. Moessner, O.
Motrunich, A.A. Nersesyan, S. Sachdev, P. Sindzingre, O.
Tchernyshyov,
H. Tsunetsugu, A.M. Tsvelik, and A. Vishwanath for
discussions on questions related to
this investigation. We are grateful to O. Tchernyshyov for the help
with Fig.~\ref{fig:pyro3d}.
We thank the Aspen Center for Physics, the Kavli Institute for Theoretical
Physics at UC Santa Barbara, and the Yukawa Institute for Theoretical
Physics at Kyoto University for their hospitality during various
stages of this research. O.A.S. thanks the Research Corporation (award
CC5491) for partial support of this research. The work of A.F.
was supported in part by a Grant-in-Aid for Scientific Research
(No.~16GS50219) from the Ministry of Education, Culture, Sports, Science
and Technology, Japan. L.B. was supported by the NSF under grant
DMR-9985255 and by the Packard Foundation.
\section{Introduction}
\label{sec:intro}
Dwarf galaxies of the \ac{LG} can be studied to uniquely faint limits, yielding a similarly unique range of constraints on galaxy formation and cosmology. These constraints typically hinge on connecting simulations in an assumed cosmology (usually the concordance $\Lambda$CDM{} cosmology) to a particular model of galaxy formation, and comparing that in some manner to observations. This general approach has led to a number of unexpected findings \citep[e.g.][]{klypinmsp, mooremsp, krav10satrev, BKBK11, brooks14}. However, these approaches have tended to focus on the stellar component of \ac{LG} dwarfs and their model equivalents. While this is a practical approach given that most dwarf galaxies of the \ac{LG} are passive and lack gas \citep{grcevich09, spekkens14}, the dwarf galaxy Leo T demonstrates that even ultra-faint dwarfs can retain gas to the present day \citep{ryanweber08}. Hence, it is important to consider what additional constraints may be gained by investigating the presence or absence of gas-bearing galaxies.
Recent advancements have improved the prospects for such investigations, especially in tracking the diffuse gas traced by \ion{H}{1}. With the installation of the \ac{ALFA} at the William E. Gordon 305 meter antenna at the Arecibo Observatory, the rate at which Arecibo can survey the 21-cm line of \ion{H}{1}\ has increased nearly 7-fold. Arecibo provides the sensitivity of a single-dish instrument, unavailable to radio interferometers, with an angular resolution ($\sim$ 1 kpc for objects 1 Mpc away) unavailable to smaller single-dish observatories. Therefore, the surveys conducted with ALFA provide the best available data sets for searching for these gas-rich dwarf galaxies in the \ac{LG}. These surveys, tuned to compact \ion{H}{1} clouds, have found interesting populations of Galactic disk clouds mixed in with dwarf galaxy candidates. Some of these were later confirmed to be galaxies beyond the \ac{LG} \citep{saul12, T15, Sand15}. This suggests that such \ion{H}{1}{} surveys may be useful for improving the understanding of nearby dwarf galaxies.
A major uncertainty to any investigation of the gas (or stellar) content of low-mass galaxies is the impact of reionization. Theory and simulations strongly suggest that dwarf galaxies with low enough mass dark matter halos are significantly affected by reionization, dramatically altering how the galaxies form \citep[e.g.][]{barkanaandloeb, okamoto08, ricotti09}. This can have major observational consequences, such as explaining the relatively small number of dwarf galaxies in the \ac{LG} compared to the number of subhalos that are seen in $\Lambda$CDM{} simulations \citep[e.g.][]{bullock00}. While it is strongly suspected that these impacts are important, the details of these effects, particularly when and at what mass scale the effects become important, are not at all clear. In this work we investigate whether simply positing the presence of such an effect combined with the empirically observed \ion{H}{1}{} content of \ac{LG} dwarf galaxies can provide independent constraints on the impacts of reionization on dwarf galaxies and their dark matter halos.
This paper is organized as follows:
In \S \ref{sec:galfa}, we describe the \ac{GALFA-HI} survey and the compact high-velocity \ion{H}{1}{} clouds it has identified as dwarf galaxy candidates;
In \S \ref{sec:dwarfgals} we describe optical searches for dwarf galaxies near the \ion{H}{1}{} clouds described in \S \ref{sec:galfa};
In \S \ref{sec:mockgalfa} we describe a method for creating mock sets of \ac{GALFA-HI} Compact Cloud Catalog galaxies, using the \ac{ELVIS} suite of simulations;
In \S \ref{sec:reionization} we describe how the combination of all of the above provides constraints on the scale at which reionization strips gas from dwarf galaxies;
and in \S \ref{sec:conc} we conclude.
To allow reproducibility, the analysis software used for this paper is available
at \url{https://github.com/eteq/galvis}.
\section{\ac{GALFA-HI} and its Compact Clouds}
\label{sec:galfa}
The \ac{GALFA-HI} survey comprises observations of neutral hydrogen with the 21-cm line taken with the 305 meter William E. Gordon telescope located in Arecibo, Puerto Rico. These observations began in 2004 with the installation of \ac{ALFA}, which provided an almost 7-fold increase in mapping speed over the L-band wide feed. \ac{GALFA-HI} observations have an angular resolution of 4$^\prime$, and a spectral resolution of 0.184 km s$^{-1}$, over the velocity range -650 to +650 km s$^{-1}$ V$_{LSR}$. This work relies on the observations compiled into the \ac{GALFA-HI} Data Release 1 (DR1): 3046 hours of observations covering 7520 deg$^2$ of sky between Declination -1$^\circ$ and 38$^\circ$ \citep{Peek11galfadr1}.
\citet{saul12} used \ac{GALFA-HI} DR1 to generate a catalog of 1964 compact \ion{H}{1}{} clouds using a template-matching technique, the \ac{GALFA-HI} \ac{CCC}. The search was designed to be sensitive to clouds smaller than 20$^\prime$ and with linewidths between 2.5 and 35 ${\rm km/s}$.
The sensitivity of the search was measured empirically through a signal injection approach (section 3.3 of \citealt{saul12}). Simulated clouds were injected with a range of positions, velocities, linewidths, sizes, aspect ratios, and brightnesses, and the detection algorithm was run. It was found that a rather simple function of these parameters was able to reliably predict whether the search algorithm could find a given cloud.
\citet{saul12} divided the 1964 clouds in the \ac{CCC} into a number of categories based on their linewidth, position, and velocity. Among these categories, we are interested in two in particular, which contain all 719 of the clouds with $|v_{LSR}| > 90 \; {\rm km/s}$. All clouds that are close in position--velocity space to large \acp{HVC} known in the \citet{WvW91} catalog are categorized as \acp{HVC}, while those far from these clouds are called ``Galaxy Candidates.''
This distinction was made under the assumption that small \ion{H}{1}{} clouds near larger high velocity clouds are much more likely to be \acp{HVC} than to be galaxies. Practically, this proximity is quantified using the parameter
\begin{equation}
D = \sqrt{\Theta^2 + f^2\left(\delta v\right)^2}\,,
\end{equation}
where $\Theta$ is the angular distance between two clouds, $\delta v$ is the LSR velocity difference, and $f$ is 0.5 degrees / km s$^{-1}$ \citep{Peek09}. For each cloud in \citet{saul12}, this parameter is measured against all \acp{HVC} in the \citet{WvW91} catalog, and those clouds with minimum D $>$ 25$^\circ$ are classified as Galaxy Candidates. This procedure finds a total of 27 such candidates.
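For concreteness, the classification criterion can be written in a few lines of code (a minimal sketch in Python; the function and variable names are ours, not those of the actual survey pipeline):
\begin{verbatim}
import numpy as np

def position_velocity_distance(theta_deg, dv_kms, f=0.5):
    # D = sqrt(Theta^2 + f^2 (dv)^2), with Theta in degrees,
    # dv in km/s, and f = 0.5 deg/(km/s) (Peek et al. 2009).
    return np.sqrt(theta_deg**2 + (f * dv_kms)**2)

def is_galaxy_candidate(min_D_deg, cut_deg=25.0):
    # A cloud is a Galaxy Candidate if its minimum D against all
    # HVCs of the Wakker & van Woerden catalog exceeds the cut.
    return min_D_deg > cut_deg
\end{verbatim}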
This method of distinguishing between likely HVCs and possible dwarf galaxies is supported by work by \citet{meyer15}, who showed that including D $<$ 25$^\circ$ clouds diluted an \ion{H}{1}-based search for more distant UV-bright galaxies.
Our goal in this work is to identify which of these candidates are actually \ac{LG} dwarf galaxies (\S \ref{sec:dwarfgals}), and create a mock version of this survey using simulations in an appropriate cosmological context (\S \ref{sec:mockgalfa}).
\section{Dwarf Galaxies in \ac{GALFA-HI}}
\label{sec:dwarfgals}
The set of Galaxy Candidates we describe in \S \ref{sec:galfa} require follow-up to determine which, if any, have optical galaxies within the \ac{GALFA-HI} beam.
It is precisely this that was the goal of the surveys described in \citet{T15} and \citet{Sand15}.
These surveys, as described in more detail in Bennet et al. (in prep), observed the \ac{GALFA-HI}
\ac{CCC} Galaxy Candidate fields and resolved stars to a depth $\sim 2$ mags deeper than \ac{SDSS}.
They determined that 5 of the candidates have optical counterparts that are likely nearby galaxies, but these lie at $2$ to $10$ Mpc \citep{T16}, placing them in the \ac{LV} rather than the \ac{LG}.
While the description above focuses on new unknown galaxies, the \ac{GALFA-HI} footprint also contains nearby dwarf galaxies that are not in the \ac{CCC} simply because they were previously-known. Specifically, Sextans B, GR 8, and KKH 86 are in the \ac{GALFA-HI} footprint.
However, as with the confirmed galaxy candidates described above, and as discussed in more detail in \citet[][Figure 5]{mcconlgcat}, all three of these galaxies are clearly in the Hubble Flow (both by distance and velocity), and therefore not a part of the \ac{LG}.
Hence, while there is a large set of \ac{GALFA-HI} \ac{CCC} Galaxy Candidates, \emph{no} \ion{H}{1}{}-bearing \ac{LG} galaxies exist in the \ac{GALFA-HI} footprint. This then raises the question of whether or not such galaxies might be \emph{expected} from galaxy formation in an appropriate cosmological context. It is to this question that we turn in the following sections.
\section{Constructing a Mock \ac{GALFA-HI} Survey}
\label{sec:mockgalfa}
We begin by asking how many dwarf galaxies we would expect in \ac{GALFA-HI} given a simple set of galaxy formation and cosmological assumptions (i.e., $\Lambda$CDM{}). We start with the assumption that all dwarf galaxies are contained inside of dark matter halos \citep[e.g.][]{willmanstrader12}. This allows our starting point to be $\Lambda$CDM{} halo catalogs in an environment comparable to the \ac{LG}.
The specific set of halo catalogs we use for this experiment are taken from the \ac{ELVIS} suite \citep{gk14elvis}. These simulations were designed to simulate environments similar to the \ac{LG} in the sense of having two $M_{\rm halo} \sim 10^{12}$ halos at distances and relative velocities similar to the \ac{MW} and M31. We use this suite of simulations, because accounting for \emph{both} $\sim M_*$ galaxies of the \ac{LG} is critical: a significant part of the \ac{GALFA-HI} footprint is in the direction of the M31 halo on the sky. \citet{gk14elvis} demonstrated that the non-satellite dwarf galaxy halos in \ac{LG}-like environments are significantly different than individual isolated $\sim M_*$ galaxy halos. Hence, including both galaxies in their correct orientations relative to the simulated \ac{GALFA-HI} footprint is important for generating a realistic estimate. The \ac{ELVIS} suite provides just this, with 12 \ac{LG}-like analogs (i.e., 24 total $\sim M_*$ galaxies and their attendant dwarf halos).
The \ac{ELVIS} suite comprises dark matter-only simulations. \ac{GALFA-HI} is only sensitive to $M_{\rm HI}$ (and our optical observations detected $M_*$), so we must impose some model on the simulations to determine the observables for a given dark matter halo in \ac{ELVIS}. Such models can span a wide range of complexity and assumptions, from full cosmological hydrodynamic simulations to basic semi-analytic approaches \citep[e.g.][]{stewart09, galacticus, rod12, sawala14, snyder15, wheeler15}. Here we consider a simple, primarily empirical model. While such a model is unlikely to capture the complex physics of galaxy formation in detail, our goal is a rough comparison with the \ac{GALFA-HI} observables; a detailed investigation of galaxy formation models is beyond the scope of this work.
Our model is as follows: we begin with positions, velocities, and halo masses provided in the public data release of \ac{ELVIS}\footnote{\url{http://localgroup.ps.uci.edu/elvis/}}.
To obtain stellar masses for each halo, we apply the abundance-matching based halo-to-stellar mass relation of \citet{gk14elvis} to obtain $M_*$ for each halo. This has the specific advantage of being calibrated to both the \ac{LG} and \ac{ELVIS}, precisely the two data sets we are interested in here. For considerations of completeness of our optical follow-up, we convert to luminosity assuming a mass-to-light ratio of unity and a standard $r$-band bolometric correction \footnote{\url{http://mips.as.arizona.edu/~cnaw/sun.html}}. To determine $M_{\rm HI}$ we then use the $M_*$ in combination with the $M_*$-to-$M_{\rm gas}$ relation of \citet{bradford15}. We convert this relation from $M_{\rm gas}$ to $M_{\rm HI}$ by inverting the procedure described in \citet{bradford15}. This procedure is straightforward as the observations from that work are also \ion{H}{1}{} observations. In Figure \ref{fig:MstarMHI} we show this $M_*$-to-$M_{\rm HI}$ relation (black lines), along with \ac{LG} dwarf galaxies with \ion{H}{1}{} from \citet{mcconlgcat} and limits from \citet{spekkens14}. This figure demonstrates that, even beyond where it is calibrated, the \citet{bradford15} relation is consistent with the \ac{LG} dwarfs (although with scatter comparable to the \citealt{bradford15} dataset). While there may be a small bias in the relation relative to the \ac{LG}, the \citet{mcconlgcat} compilation is inhomogeneous enough that we opt to stick with the \citet{bradford15} extrapolation with no corrections, as it is a much more homogeneous dataset. We also see, from the \citet{spekkens14} dataset, that \emph{satellites} in the \ac{LG} are clearly \ion{H}{1}{} deficient relative to the \citet{bradford15} relation. This fact motivates our choice to remove satellites from the mock \ac{GALFA-HI} \ac{CCC} galaxies, described in more detail below.
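In pseudocode, this chain of empirical relations amounts to the sketch below; the numerical coefficients are illustrative placeholders only (the published fits of \citet{gk14elvis} and \citet{bradford15} should be substituted for any real use):
\begin{verbatim}
import numpy as np

def stellar_mass(m_halo):
    # Placeholder abundance-matching power law (illustrative only).
    return 3.0e6 * (m_halo / 1.0e10)**1.9

def hi_mass(m_star):
    # Placeholder log-linear M*-to-M_HI relation (illustrative only).
    return 10**(1.0 * np.log10(m_star) + 0.3)

def r_band_luminosity(m_star):
    # M*/L_r ~ 1, as assumed in the text.
    return m_star
\end{verbatim}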
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1\columnwidth]{mstarmhi}
\caption{$M_*$-to-$M_{\rm HI}$ relation of \citet[][black lines]{bradford15}, along with \ac{LG} dwarf galaxies. The solid line shows the approximate region where the \citet{bradford15} relation is directly calibrated, while the dotted section is extrapolation. The (blue) circles show non-satellite \ac{LG} dwarfs with \ion{H}{1}{} detections from the \citet{mcconlgcat} compilation. The (red) triangles show \emph{upper limits} on \ion{H}{1}{} from \citet{spekkens14}, which are all satellite galaxies. This demonstrates that, while not directly calibrated in this region, the \citet{bradford15} relation is moderately consistent with the \ac{LG} galaxies with \ion{H}{1}{}, but lies well above the limits for \ac{LG} satellites.}
\label{fig:MstarMHI}
\end{center}
\end{figure}
The above procedure results in a set of 12 halo catalogs of \ac{LG} analogs, with halo, stellar, and \ion{H}{1}{} masses for each. We now describe how we convert these catalogs into mock \ac{CCC} galaxies that would be found in mock \ac{GALFA-HI} surveys.
The first step is identification of the massive halos in the catalog with the \ac{MW} and M31. While the \ac{LG} total mass is relatively well constrained and the \ac{MW} and M31 are clearly dominant, exactly how to apportion the mass between the two is relatively uncertain \citep[e.g.,][]{klypin02, watkins10, toll12, gonzalez14, pen14}. Hence, to marginalize over this uncertainty and at the same time boost our count of mock surveys, we create a mock \ac{GALFA-HI} footprint \emph{twice} for each \ac{ELVIS} pair, swapping which large halo hosts the \ac{MW} and M31. While this does reuse some of the same halos twice, the details of detectability and different distances from the two hosts means that each mock survey is a different sample, and there is therefore a relatively weak covariance between the two mock surveys from the same \ac{ELVIS} pair. While the presence of weak covariance means these are not 24 completely independent samples, the weakness of the covariance means there is substantially more power in using all of the halos individually instead of only one of each pair. Hence, the procedure described below is repeated for each of the 24 \ac{MW}/M31 pairs to produce our ensemble of mock surveys.
For each pair, we fix the orientation of the halos on the mock sky by placing the mock Earth in the unique position and orientation where the distance to the center of the \ac{MW} halo is 8.5 kpc (IAU standard), the center of the \ac{MW} halo is in the direction of the origin, and the center of the M31 halo is in the direction of $l=121.17^{\circ}$, $b=-21.57^{\circ}$ (M31 in Galactic coordinates). In that orientation, we add a $220 {\rm km/s} $ velocity offset to each halo in the $l=90^{\circ}$ direction to model the Sun's motion around the Galactic center (IAU standard). This yields a mock survey footprint in Galactic coordinates with radial velocities and distances as would be observed from the real Earth.
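A minimal sketch of the resulting line-of-sight velocity computation (positions in kpc, velocities in km/s; the sign convention here, subtracting the LSR velocity from the halo velocities, is equivalent up to convention to the offset described above):
\begin{verbatim}
import numpy as np

V_LSR = 220.0 * np.array([0.0, 1.0, 0.0])  # km/s, toward l = 90 deg

def v_lsr(halo_pos, halo_vel, earth_pos):
    # Radial velocity of a halo relative to the local standard of
    # rest, as seen from the mock Earth.
    los = halo_pos - earth_pos
    los = los / np.linalg.norm(los)
    return np.dot(halo_vel - V_LSR, los)
\end{verbatim}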
To determine detectability of a galaxy in the \citet{saul12} \ac{CCC} from the mock survey, we overlay the footprint (and spatially variable depth) of the real \ac{GALFA-HI}, and identify the nearest pixel to each halo. Using the distance for that halo and its $M_{\rm HI}$ from above, we compute its expected \ion{H}{1} flux, and we accept it if it is higher than the detectability threshold of the \ac{CCC} (described in \S \ref{sec:galfa}). We further apply a $|v_{LSR}| > 90 \; {\rm km/s}$ cut to match the galaxy candidate sample described in \S \ref{sec:galfa}.
We also consider a final cut on \emph{optical} detectability in the follow-up observations by assuming $M_*/L_r \sim 1$, and cutting on the $r$-band detection thresholds for follow-up observations.
However, this cut is less stringent than the $M_{\rm HI}$ sensitivity cut based on our thresholds from \S \ref{sec:dwarfgals}, and hence has no effect on the final count as described below.
We leave this cut in as a parameter in the model, however, and investigate its effects further in \S \ref{sec:reionization}.
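Schematically, the resulting cuts can be expressed as follows (a sketch: here flux_limit stands in for the spatially varying \ac{CCC} sensitivity at the halo's sky pixel, the optical cut is simplified to a luminosity threshold, and we use the standard optically thin conversion $M_{\rm HI} = 2.36\times10^{5}\, d_{\rm Mpc}^{2}\, S_{\rm int}$):
\begin{verbatim}
def hi_flux(m_hi_msun, d_mpc):
    # Integrated 21-cm flux in Jy km/s for an optically thin cloud.
    return m_hi_msun / (2.36e5 * d_mpc**2)

def detected(m_hi, m_star, d_mpc, v_lsr, flux_limit, lum_limit):
    hi_ok = hi_flux(m_hi, d_mpc) > flux_limit
    vel_ok = abs(v_lsr) > 90.0       # km/s, candidate-sample cut
    opt_ok = m_star > lum_limit      # M*/L_r ~ 1 follow-up cut
    return hi_ok and vel_ok and opt_ok
\end{verbatim}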
This yields our sample of mock dwarf galaxies that could be found in the \ac{CCC} under the assumption that \emph{all} galaxies have their full allotment of \ion{H}{1}. However, Figure \ref{fig:MstarMHI} demonstrates clearly that this is not the case: in the \ac{LG}, most lower-mass \emph{satellite} galaxies have orders-of-magnitude less \ion{H}{1}\ than star-forming galaxy scaling relations imply. The exact process by which these satellites are quenched is not certain.
Whether non-satellite dwarfs self-quench is an open question, but at least in the \ac{LG} it seems likely to be quite rare given that nearly all dwarfs beyond the \ac{MW} and M31 have gas \citep[e.g.,][]{mcconlgcat, wetzel15}.
But it is clear that some mechanism removes gas and therefore quenches star formation in \emph{satellites} \citep[e.g.][and references therein]{spekkens14, wetzel15, fill15, simpson17}.
Here, we consider two simple scenarios intended to bracket various host-driven quenching mechanisms and timescales. The two scenarios, as well as the approaches described in this section as a whole, are illustrated in Figure \ref{fig:elvistogalfacartoon}. In Scenario ``Not-Now'', we assume any galaxy that is a subhalo\footnote{Subhalos are defined here following the \ac{ELVIS} catalogs, based on the 6D friends-of-friends Rockstar halo finder \citep{rockstar}.} of the \ac{MW} or M31 at $z=0$ has lost its gas, and all others are normal. In Scenario ``Never'', we assert that galaxies immediately lose all their \ion{H}{1}\ gas the moment they become subhalos at \emph{any} time. Note that this makes Scenario ``Never'' a strict subset of ``Not-Now''. While neither of these scenarios is likely to be correct in detail, and neither allows for self-quenching, they bracket many scenarios based on direct physical effects between a host and its subhalos, and hence serve for our current purpose of providing an estimate of what we would expect \ac{GALFA-HI} to see.
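The two bracketing scenarios then amount to a one-line filter each (sketch):
\begin{verbatim}
def keeps_gas(is_subhalo_now, was_ever_subhalo, scenario):
    if scenario == "not-now":   # gas lost only by z = 0 subhalos
        return not is_subhalo_now
    if scenario == "never":     # gas lost at first infall, any time
        return not was_ever_subhalo
    return True                 # "all": no satellite cut
\end{verbatim}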
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=1.4\columnwidth]{Flow_Diagram}
\caption{Schematic diagram of the method to construct the dwarf galaxy candidate samples (described in detail in \S \ref{sec:mockgalfa}).}
\label{fig:elvistogalfacartoon}
\end{center}
\end{figure*}
In Figure \ref{fig:elvismallmultiples} we show the counts of halos in each of the 24 mock \acp{CCC} (i.e., each of the \ac{ELVIS} host halos as the MW). Each point represents an individual halo, and points inside a given circle are those that pass the corresponding cut. One clear point this Figure demonstrates is the importance of the velocity and detectability cuts -- together they remove $\gtrsim 50\%$ of the sample, underscoring the importance of correctly modeling the specific observational details of \ac{GALFA-HI} and the \ac{CCC} detection algorithm. Also important here is the recognition that there is quite a lot of variability in these samples, both in the total number of halos and the effects of the cuts. This represents a mix of true cosmic variance as well as uncertainty in the \ac{MW} and M31's properties, encoded in the spread of properties for the \ac{ELVIS} host halos.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=1\textwidth]{tpvenn_ann2}
\caption{Number of halos in each of the \ac{ELVIS} hosts for the sample cuts described in the text. The number of points in each circle/overlap region demonstrates the effect of each cut in limiting the sample of candidate galaxies. The red and orange (larger) points correspond to halos for scenario ``Not-Now'' and ``Never'', respectively. }
\label{fig:elvismallmultiples}
\end{center}
\end{figure*}
In Figure \ref{fig:countsummary}, we summarize the cumulative distribution functions for predicted dwarfs in the footprint over the 24 mock surveys. Unsurprisingly, for the scenario where satellites are included (green dot-dashed), there are many satellites predicted. However, as discussed above, this case is already ruled out by the observation that most \ac{LG} satellites lack observable \ion{H}{1}{}.
More striking are the lines for the Not-Now (red dashed) and Never (solid orange) scenarios. While the numbers are much reduced relative to the All scenario, the typical number counts are in the $5-20$ range. In contrast, the observations (discussed in Section \ref{sec:dwarfgals}) show \emph{zero} galaxies. \emph{In fact, none of the mock surveys in the scenarios outlined above are consistent with the observations.}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.8\textwidth]{countsummary}
\caption{Summary of the expected dwarf irregular candidates predicted following the mock \ac{GALFA-HI} \ac{CCC} procedure outlined in \S \ref{sec:mockgalfa}. Lines show the cumulative distribution of the number of galaxies for the ``Never'' Scenario (orange solid), ``Not-Now'' Scenario (red dotted), and with no cut on satellites (green dot-dashed). Also shown is the observed number of \ac{LG} galaxies in the \emph{real} \ac{GALFA-HI} \ac{CCC} as the dashed (black) vertical line: zero. This demonstrates the stark difference between the observations and simulation predictions for the scenarios described here.
}
\label{fig:countsummary}
\end{center}
\end{figure*}
The process outlined in this section demonstrates that, by constructing a mock \ac{GALFA-HI} catalog from the \ac{ELVIS} simulations, we determine that the \ac{GALFA-HI} observation of no \ac{LG} dwarfs is quite surprising at face value.
Several caveats apply, however.
First, because there are only 12 realizations of \ac{ELVIS} simulations, we only have 24 mock catalogs.
Hence, it is possible that the \ac{LG} is simply a $<$ 1 in 24 ($\sim 2 \sigma$) outlier.
This cannot be ruled out without more \ac{ELVIS}-like simulations, but is at least suggested against by the magnitude of the discrepancy outlined above.
Second, this could be interpreted as evidence that $\Lambda$CDM{} (the underlying cosmology for \ac{ELVIS}) is not an accurate description of small-scale structure, like some interpretations of the ``missing satellites problem'' \citep{gotzwdm, rochasidm}, but here specific to gas-bearing dwarfs. Third, because \ac{ELVIS} is a collisionless dark matter simulation, it does not contain any baryonic effects.
Hence, baryonic or hydrodynamic effects that might suppress the numbers of dwarf galaxies are not accounted for \citep[e.g.][]{pontzen12, governato12, brooks13}.
Most work on this topic has focused on how this impacts the \ac{LG} satellites, however, which we explicitly excise from the sample.
It is unclear which, if any, of these mechanisms apply for non-satellites like those considered here, and investigating such effects is beyond the scope of this paper.
Finally, it is possible that the $M_{\rm gas}$-to-$M_*$ relation of \citet{bradford15} does \emph{not} extend to below $M_*<10^6 \, M_{\odot}$ (i.e., the extrapolation in Figure \ref{fig:MstarMHI}). If there is a break in this relation, the number of dwarfs with gas would be suppressed, solving the aforementioned tension. It is precisely this possibility that is described in the next section, taking the causative mechanism to be reionization.
\section{Implications for Reionization}
\label{sec:reionization}
To estimate the effects of reionization we now consider a minimal toy model of the effect of reionization on dwarf galaxies, inspired in part by the approach of \citet{bk14}.
Of course, there are a wide range of models for reionization and its effect on dwarf galaxies, far more thorough than that used here \citep[e.g.,][]{barkanaandloeb, gnedin00, okamoto08, bovill11, fitts17}.
We use the model described here primarily because it is both simple and can provide a direct probabilistic mapping between \ac{ELVIS} and the \ac{GALFA-HI} observations.
\subsection{Lower Limits from \ac{GALFA-HI} and \ac{ELVIS}}
\label{ssec:galfalimit}
Our model assumes there exists a characteristic halo virial mass ($M_c$) at a particular redshift ($z_{\rm reionization}$). Halos with a virial mass below $M_c$ at $z_{\rm reionization}$ have their gas entirely removed (by $z=0$), and those above have their gas content unaffected by reionization.
While such a sharp break in the $M_{\rm HI}$-to-$M_{\rm halo}$ relation is unphysical, the subsequent evolution in $M_{\rm halo}$ from $z_{\rm reionization}$ to $z=0$ has the effect of smearing the break over $\sim 1$ dex in $M_*$ at $z=0$, where our comparison to observations is performed (see \S \ref{ssec:leotlimit}). This model is implemented in the context of the formalism of \S \ref{sec:mockgalfa} by identifying the main-branch progenitors of the $z=0$ halos at a particular $z_{\rm reionization}$. Those with a virial mass below $M_c$ are flagged as undetectable in \ac{GALFA-HI} due to the removal of their gas.
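In the notation of the mock-survey pipeline of \S \ref{sec:mockgalfa}, the model adds a single extra cut (sketch):
\begin{verbatim}
def survives_reionization(m_vir_at_zreion, m_c):
    # Halos below the characteristic mass M_c at z_reionization
    # are flagged as gas-free (undetectable in HI) at z = 0.
    return m_vir_at_zreion >= m_c
\end{verbatim}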
This model is flexible enough to immediately solve the problem posed at the end of \S \ref{sec:mockgalfa}: if $M_c$ is high enough, no \ac{LG} dwarf galaxies will have gas at $z=0$, thereby removing the apparent tension between the \ac{ELVIS} model and the observations.
With that in mind, we now turn to asking the probabilistic question of what $M_c$ is, given the observation that there are no \ac{LG} dwarfs in the \ac{GALFA-HI} \ac{CCC}.
To obtain this probability distribution, we start from the process of \S \ref{sec:mockgalfa} applied to each of the \ac{ELVIS} hosts for both the ``Never'' and ``Not-Now'' scenarios, for a range of optical $r$-band follow-up detection limits. We apply the reionization model described above over a grid of $M_c$ values and $z_{\rm reionization}$ values of $z \sim 6.3$, $7.1$, $8.1$, and $9.3$ (set by the available \ac{ELVIS} timesteps).
We then ask what fraction of the 24 hosts yield galaxy number counts consistent with the observations (zero), and consider this to be proportional to the probability density $P(M_c, r_{\rm lim}, z_{\rm reionization} | 0)$.
We then marginalize the probabilities over the available $z_{\rm reionization}$ values to provide estimates of $M_c$ (and the $r$ limit).
To do the marginalization we assume a (discrete) uniform distribution of the $z_{\rm reionization}$'s available, because our goal is to provide estimates relatively agnostic to assumptions about $z_{\rm reionization}$.
However, a more specific reionization model would likely provide a more peaked $z_{\rm reionization}$ distribution, and therefore provide tighter constraints than we obtain here.
Relatedly, we note in passing that a different choice of marginalization (and stronger assumptions) could yield a different inference: marginalizing over $M_c$ to instead estimate $z_{\rm reionization}$.
For the purposes of this paper, we opt not to do this because we are more interested in $M_c$, and our $z_{\rm reionization}$ grid is quite coarse, but this methodology could be straightforwardly applied to a more specific galaxy formation model's $M_c$ prediction, yielding a probability distribution over $z_{\rm reionization}$.
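Concretely, the probability estimate reduces to counting consistent mock surveys on the grid; a sketch of the bookkeeping (the array layout is our choice):
\begin{verbatim}
import numpy as np

# counts[i, j, k, h]: number of surviving mock-CCC galaxies for
# M_c grid point i, r-band limit j, z_reion grid point k, host h.
def probability_density(counts):
    p = (counts == 0).mean(axis=-1)  # fraction of hosts with zero
    return p.mean(axis=-1)           # uniform marginalization in z
\end{verbatim}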
Figure \ref{fig:reionization} shows the result of the above procedure for the ``Not-Now'' scenario.
The $M_c$ values are converted to virial temperatures in this figure for comparison with literature values that are often reported as $T_{\rm vir}$.
It is immediately apparent that the probability density goes to $\sim 1$ at high $T_{\rm vir}$/$M_c$ values.
This is a result of the effect described above, that an arbitrarily large $M_c$ will \emph{always} be consistent with the observational result of no dwarf galaxies, as all of the candidates are removed by reionization.
Figure \ref{fig:reionization} also shows horizontal lines for two optical detection thresholds: the upper one corresponds to our estimated follow-up limits (\S \ref{sec:dwarfgals}), and the lower one is a highly conservative estimate based on the actual detected non-\ac{LG} dwarfs from \citet{T15} and \citet{Sand15}.
The corresponding probabilities for $T_{\rm vir}$ along those limits are essentially identical, showing that even if our follow-up detection limits are overly optimistic, the key results of this work still hold.
Figure \ref{fig:reionization} also shows that the cutoff $T_{\rm vir}$ has a $\sim 50\%$ probability of being at least $10^{3.6}$\,K, or $M_c \gtrsim 10^{9} \; M_\odot$, consistent with simulation predictions of $M_c$ \citep[e.g.,][]{okamoto08}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1\columnwidth]{reionizationp_green}
\caption{Joint probability density for the halo mass at which reionization removes all gas and the optical detection limit of the follow-up observations. The characteristic mass $M_c$ is converted to virial temperature $T_{\rm vir}$ in this plot for comparison to other literature. The colored regions are proportional to the probability density, and the gray and white lines give the $10\%$ and $50\%$ contours. The horizontal dotted (lighter) line is the estimated follow-up detection limit described in \S \ref{sec:dwarfgals}. The dashed (darker) line is a more conservative detection limit based on actually detected galaxies from \citet{T15}. This demonstrates both that the $M_c$ distribution has little covariance with the follow-up detection limits, and that the follow-up limits are deep enough that optical non-detections are unlikely to impact the inferred $M_c$.}
\label{fig:reionization}
\end{center}
\end{figure}
\subsection{Upper Limits from Leo T}
\label{ssec:leotlimit}
The procedure above provides only a \emph{lower limit} on $M_c$, because the \ac{GALFA-HI} observation here is the lack of \emph{any} detected \ion{H}{1}{}-bearing galaxies. From these observations alone, $M_c$ could be arbitrarily high, as this would still yield no observable \ion{H}{1}{}-bearing galaxies. But of course, observations of the wider universe, indeed even other observations in the \ac{LG}, provide evidence of the existence of galaxies with \ion{H}{1}. Applying the Copernican principle, it is reasonable to then use the existence of such galaxies in the \ac{LG} as a \emph{joint} constraint with the \ac{GALFA-HI} observation to achieve an actual estimate of $M_c$. Leo T provides the ideal constraint: it is both a dwarf galaxy in the \ac{LG} (at $\sim 400$ kpc from the \ac{MW}, not a satellite as we have defined it here) and the lowest-mass known \ion{H}{1}{}-bearing galaxy \citep{ryanweber08, weisz12}. It therefore provides the best source for an \emph{upper limit} on $M_c$. Combining this constraint with the \ac{GALFA-HI} observations can then provide an estimate of $M_c$ (rather than just a limit).
Creating an upper limit on $M_c$ based on the existence of Leo T requires an estimate for the virial mass of the halo hosting Leo T at the time of reionization. To make such an estimate we start from the present day luminosity of Leo T from \citet{dejong08}, converted to a stellar mass ($1.4 \times 10^5 \; M_\odot$). We then find all the $z=0$ halos from the model outlined in \S \ref{sec:mockgalfa} that have stellar masses within $\pm 20 \%$\footnote{This specific percentage was chosen as the needed minimum to include enough halos to adequately sample the probability distribution. A slightly wider or narrower range in stellar mass showed no signs of systematic bias in the center or width of the distribution.} of our Leo T estimate. For those halos we then identify the main progenitor at $z_{\rm reionization}$ and adopt that as a possible $M_c$ limit. This procedure provides an estimate of the scatter in the possible virial mass at $z_{\rm reionization}$ of Leo T due \emph{only} to uncertainty in its merger history. Other sources of scatter may contribute to the effects of reionization on Leo T-mass galaxies, possibly quite significantly \citep[e.g.][]{fitts17}. However, we find that relatively large changes in the stellar mass assumed here yield negligible changes to the width of the distribution of $z_{\rm reionization}$ halo masses. This implies that the primary impact on the $z_{\rm reionization}$ virial mass of Leo T \emph{itself} is driven by uncertainty in mapping any individual present day halo back to $z_{\rm reionization}$. We therefore adopt this merger-history driven scatter as the sole source of scatter for the purposes of this $M_c$ estimate.
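A sketch of this selection (the field names are hypothetical stand-ins for the \ac{ELVIS} merger-tree data):
\begin{verbatim}
import numpy as np

def leo_t_progenitor_masses(halos, m_star_leo_t=1.4e5, tol=0.2):
    # Halos whose z = 0 stellar mass is within +/-20% of Leo T's,
    # mapped back to their main-progenitor virial mass at z_reion.
    picked = [h for h in halos
              if abs(h["m_star_z0"] - m_star_leo_t)
                 <= tol * m_star_leo_t]
    return np.array([h["m_vir_at_zreion"] for h in picked])
\end{verbatim}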
\subsection{Combined Limits}
\label{ssec:combinedlimit}
With an upper limit on the $z_{\rm reionization}$ $M_c$ set by the above procedure for Leo T, and a lower limit set by the above \ac{GALFA-HI} observation comparison, the joint probability of the two together provides an estimate of $M_c$. Precisely this exercise is demonstrated in Figure \ref{fig:infercombo}. We show this for both the ``Not-Now'' (top, red) and the ``Never'' (bottom, orange) scenarios. While the latter scenario has a notably wider distribution due to the more conservative assumptions built into it, both provide a constraint on the halo mass of reionization around $3 \times 10^8 \; M_\odot$, although potentially from $10^8$ to $10^9 \; M_\odot$. But unlike other estimates for $M_c$, this estimate depends on no assumptions about the effects of reionization on star formation; rather, this estimate derives (solely) from observations of the \ion{H}{1}{} content of the $z=0$ \ac{LG} galaxies.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1\columnwidth]{uniplot2}
\caption{Probability distributions of the critical mass for retaining \ion{H}{1}{} ($M_c$) at the time of reionization (which is marginalized over in this model). The two \emph{limits} are shown as solid lines (their derivations are discussed in \S \ref{ssec:galfalimit} and \S \ref{ssec:leotlimit}). The upper panel is for the ``Not-Now'' scenario when inferring the \ac{GALFA-HI} limits, and the lower panel is for the ``Never'' scenario. Note that the limits as shown are $P(<\log{M_c})$ or $P(>\log{M_c})$, which is why the probabilities asymptote to 1. The filled distribution shows the \emph{joint} constraint set by the two limits (discussed in more detail in \S \ref{ssec:combinedlimit}). These distributions are normalized to unity to show the fine detail, rather than being normalized as true joint probability distributions (see text for more discussion on this). These therefore show how combining the upper limit from Leo T and the lower limit from the \ac{GALFA-HI} and \ac{ELVIS} analysis provides an actual estimate of $M_c$ rather than only limits.}
\label{fig:infercombo}
\end{center}
\end{figure}
We also note a possibly surprising feature of Figure \ref{fig:infercombo}. In both scenarios, the relative overlap of the probability distributions from the Leo T upper bound and the \ac{GALFA-HI} lower bound is quite limited. That is, the \emph{absolute} probability of both being correct is relatively low. While the joint probability shown in Figure \ref{fig:infercombo} has been normalized to unity for clarity, the absolute values are quite low. This implies that, fundamentally, there is a tension between the \ac{GALFA-HI} observations and the very existence of Leo T's \ion{H}{1}{} (particularly for the ``Never'' scenario). This tension persists even if we compare Leo T to only the \emph{lowest} mass \ion{H}{1}{}-bearing galaxy surviving through the models described in \S \ref{ssec:galfalimit}; this experiment is illustrated in Figure \ref{fig:lowestmassleots}. While these model Leo T analogs are on average lower mass than the ``Never''/``Not-Now'' distributions shown in Figure \ref{fig:infercombo}, they are still in tension with the Leo T distribution. While this could be due to inadequacies in the assumptions baked into the models used to infer $M_c$, it may also imply that Leo T truly is at the edge of the stochastic regime suggested by \citet{fitts17}, and therefore is at a mass where reionization can have a major impact on galaxy formation.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1\columnwidth]{figure7_TP18}
\caption{Probability distributions (over the ELVIS simulations) of virial mass at reionization for halos that end up at $z=0$ as the lowest-mass \ion{H}{1}{}-bearing galaxies in the simulation, for the ``Not-Now'' and ``Never'' scenarios (red and orange, respectively). Both are computed assuming $M_c$ is the median from the joint distributions of Figure \ref{fig:infercombo}. These halos can be interpreted as Leo T analogs under the assumption that Leo T truly is the lowest-mass gas-bearing galaxy in the Local Group. Also shown as the green line is the virial mass distribution at reionization for Leo T (identical to the green lines in Figure \ref{fig:infercombo}). It is clear that these distributions only weakly overlap, highlighting the tension between Leo T's existence and the \ac{GALFA-HI} observations (see text in \S \ref{ssec:combinedlimit} for more details).}
\label{fig:lowestmassleots}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conc}
In this paper, we:
\begin{itemize}
\item Described a process that takes dark matter simulations of \ac{LG} analogs and, using scaling laws and well-characterized sensitivity functions, generates mock \ion{H}{1}{} galaxy catalogs.
\item Applied this transformation to the \ac{ELVIS} suite of simulations, removing dwarf galaxies likely to have been stripped by interaction with their host. We find more observable gas-rich dwarf galaxies in all simulations than are seen in the \ac{GALFA-HI} \ac{CCC}. At face value this suggests a significant tension between the observations and the simulations.
\item Showed that this non-detection of the gas-rich dwarf galaxies expected from simulations can be interpreted as a lower limit on the mass scale of reionization ($\sim 10^{8.5} M_\odot$), \emph{independent} of any assumption about the impact of reionization on star formation.
\item Combined this limit with the limit inferred from the very existence of the gas-rich dwarf galaxy Leo T to infer the mass scale at which reionization significantly affects dwarf galaxy formation. This scale is consistent with theoretical estimates.
\end{itemize}
The final point also serves to explain the relative paucity of new discoveries of gas-rich \ac{LG} dwarf galaxies despite extensive \ion{H}{1}{} surveys. While here it is cast as a \emph{limit} on reionization, reversing the conclusion yields the result that if reionization becomes significant at roughly the Leo T mass scale, reionization has suppressed these galaxies' gas content. This explains their absence in the \ion{H}{1}{} surveys.
While these results are at the limits of what is achievable with current \ion{H}{1}{} and optical surveys, new prospects are on the horizon. \ac{FAST}, with its higher resolution, larger field of regard, and larger multiplexing factor, will allow us to conduct an order-of-magnitude more powerful study and further constrain the history of reionization. It will also provide a way to test the predictions of the simple model posed here, as new gas-bearing dwarf galaxy detections (or lack thereof) will provide cross-checks on the results presented here. \ac{FAST} is beginning its science surveys next year, and should support experiments similar to the \ac{GALFA-HI} analysis presented here. Further afield, combined with focused observations or large, deep optical surveys like the \ac{LSST}, the techniques laid out in this paper will provide an excellent opportunity to improve the constraints on dwarf galaxy formation and reionization.
\acknowledgments
The authors thank Mike Boylan-Kolchin, James Bullock, Andrew Hearin, Justin Read, and Kirill Tchernyshyov for helpful discussions regarding this work. The authors also thank the anonymous referee for suggestions that improved this work. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{astropy}. It also used the open-source software tools Numpy, Scipy, Matplotlib, and IPython \citep{numpyscipy, matplotlib, ipython}.
This research has made use of NASA's Astrophysics Data System.
Some support for EJT was provided by a Giacconi Fellowship.
\bibliographystyle{yahapj}
\section{Introduction}
A Jacobi diagram is a uni-trivalent graph with some extra structure.
Such diagrams play a leading role in the theory of Vassiliev
invariants and Kontsevich invariants of knots. Vassiliev invariants
are defined by a filtration of the vector space spanned by knots,
whose graded spaces are identified with vector spaces spanned by
Jacobi diagrams subject to certain defining relations. The
Kontsevich invariant of a knot is defined as an infinite linear sum
of Jacobi diagrams. The physical background of these invariants is
in the perturbative expansion of the Chern--Simons path integral,
which is formulated in terms of uni-trivalent graphs; this is one
explanation why Jacobi diagrams appear in this theory. The
Kontsevich invariant is expected to classify knots, and from this
point of view it is important to identify the vector space spanned
by Jacobi diagrams subject to the defining relations.
It is conjectured that the space of Jacobi diagrams with an odd
number of legs vanishes; see \cite{BN95b,problem_list}. This would imply
the claim that no Vassiliev invariant can distinguish a knot from
its inverse, where the {\it inverse} of an oriented knot is the knot
with the opposite orientation. In general, a knot and its inverse
are not isotopic, the simplest counter-example being the knot
$8_{17}$ with its two possible orientations. The consequences of the
possibility that Vassiliev invariants cannot make this distinction
are discussed in \cite{Kup96}. For the Lie algebra version of this
claim, see Remark \ref{rem.q_inv}. Dasbach claimed to have proved
the vanishing of $n$--loop Jacobi diagrams with an odd number of
legs for $n\leq 6$, but his proof has a gap for $n\geq 3$; see
Remark \ref{rem.Dasbach_gap}.
In the present paper, we prove the vanishing of $3$--loop Jacobi
diagrams with an odd number of legs (Theorem \ref{thm.main}). In our
proof, we consider the internal graph of a Jacobi diagram, which is
the trivalent graph obtained from the Jacobi diagram by removing its
legs, where a leg of a Jacobi diagram is an edge adjacent to a
univalent vertex. Then, following Nakatsuru \cite{Nts98}, we
identify each Jacobi diagram with a polynomial whose variables
correspond to the edges of the internal graph of the Jacobi diagram,
and present the space of $3$--loop Jacobi diagrams as a quotient
space of a direct sum of polynomial algebras corresponding to
$3$--loop internal graphs. Here, the quotient is derived from the
defining relations of Jacobi diagrams and from the symmetries of the
internal graphs. Thus, the proof is reduced to calculating the image
of the relations by the (skew) symmetrizer corresponding to the
internal graph's symmetry. This approach provides in passing
an alternative proof of \cite[Theorem 7.4]{Das98} in the `even
number of legs' case as well. The $4$--loop, $5$--loop, and
$6$--loop cases which Dasbach's result would have covered remain
open. In these higher loop degrees, the techniques used here lead to
more complicated calculations, which we have not been able to
complete. New ideas seem necessary in order to make further
progress.
The paper is organized as follows. In Section \ref{sec.Jd}, we
review several definitions concerning Jacobi diagrams and related
notions. In Section \ref{sec.3l_J_d}, we show how to identify the
space of $3$--loop Jacobi diagrams with a quotient space of a direct
sum of polynomial algebras and prove the vanishing of $3$--loop
Jacobi diagrams with an odd number of legs, which is the main
theorem of this paper. This proof requires the use of a certain
lemma, which we prove in Section \ref{sec.pf_lem}.
The gap in the proof of \cite[Theorem 5.4.3(\textrm{iii})]{Das97}
was discovered in a seminar when the first author tried to
generalize Dasbach's proof. The authors thank the participants of
the seminar --- Kazuo Habiro, Tadayuki Watanabe, and Atsushi Ishii ---
for their attention. The authors would especially like to thank
Pierre Vogel for useful comments regarding the identification of the
space of $n$--loop Jacobi diagrams. The first author would also like
to thank Alexander Stoimenow for useful discussions regarding
Dasbach's papers, and Oliver Dasbach for useful discussions.
The authors would also like to thank
the referees for their careful comments.
\section{Jacobi diagrams}
\label{sec.Jd}
In this section we review definitions of Jacobi diagrams, the space
of Jacobi diagrams, $n$--loop Jacobi diagrams, and define some
notations. For general references on the theory of Jacobi diagrams
see {\it e.g.} \cite{BN95b,Oht02}.
A {\it Jacobi diagram} is a graph whose vertices have valence $1$ or
$3$ and whose trivalent vertices are oriented, {\it i.e.}, a cyclic
order of $3$ edges around each trivalent vertex is fixed. The {\it
degree} of a Jacobi diagram is defined to be half the total number
of vertices of the diagram. The {\it space of Jacobi diagrams} is
the vector space over ${\mathbb Q}$ spanned by Jacobi diagrams subject to the
AS (Anti--Symmetry) and IHX (written as ``I''$=$``H''$-$``X'')
relations, which are local moves between Jacobi diagrams which
differ inside a dotted circle as indicated below. The space of
Jacobi diagrams is graded by degree. (A Jacobi diagram of the type
we have just defined is sometimes called an {\it open Jacobi
diagram}, and the space of these Jacobi diagrams is sometimes
denoted $\mathcal{B}$ in the literature.)
\begin{enumerate}
\item[The {\it AS relation} \hspace*{-3.8pc}]$$
\begin{minipage}{42pt}
\includegraphics[width=42pt]{as2s}
\end{minipage}
\hspace{10pt}=\hspace{10pt}-\hspace{5pt}
\begin{minipage}{42pt}
\includegraphics[width=42pt]{as3s}
\end{minipage}
$$
\item[The {\it IHX relation} \hspace*{-4.3pc}]$$
\begin{minipage}{42pt}
\includegraphics[width=42pt]{ihxI}
\end{minipage}
\hspace{10pt}=\hspace{10pt}
\begin{minipage}{42pt}
\includegraphics[width=42pt]{ihxH}
\end{minipage}
\hspace{10pt}-\hspace{10pt}
\begin{minipage}{42pt}
\includegraphics[width=42pt]{ihxX}
\end{minipage}
$$
\end{enumerate}
A Jacobi diagram is called {\it $n$--loop} if
it is connected and its Euler number is
equal to $1-n$; {\it i.e.}, its first Betti number is equal to $n$.
(An $n$--loop Jacobi diagram is sometimes said to be of \emph{loop
degree $n-1$} in the literature.) We denote by $\mathcal{A}_{\,
\mbox{\scriptsize $n$--loop}}$ the space of $n$--loop Jacobi
diagrams, {\it i.e.}, the vector space spanned by $n$--loop Jacobi
diagrams subject to the AS and IHX relations. An edge adjacent to a
univalent vertex is called a {\it leg}.
We assume without loss of generality that
a Jacobi diagram does not have a trivalent vertex
which is adjacent to 2 legs,
since a Jacobi diagram with such a trivalent vertex vanishes
by the AS relation.
The {\it internal graph} of a Jacobi diagram is
the trivalent graph obtained from the Jacobi diagram by removing its legs.
We denote by $\mathcal{A}(\Gamma)$ the space of Jacobi diagrams
whose internal graph is $\Gamma$
modulo the action of the symmetry of $\Gamma$.
\section{$3$--loop Jacobi diagrams}
\label{sec.3l_J_d}
In this section we identify the space of $3$--loop Jacobi diagrams
as a graded vector space. In Section \ref{sec.s3lJd} we present the
space of $3$--loop Jacobi diagrams in terms of spaces $\mathcal{A}(\Gamma)$
for $3$--loop trivalent graphs $\Gamma$. In Section
\ref{sec.polyn_Jd} we present the space of such diagrams using
polynomial algebras. Using this presentation, we prove in Section
\ref{sec.odd_deg} that the odd degree part of this space vanishes,
which is the main theorem of this paper. In Section
\ref{sec.even_deg} we identify the even part of this space with some
polynomial algebra (following \cite{Nts98}).
\subsection{The space of $3$--loop Jacobi diagrams}
\label{sec.s3lJd}
In this section, we present the space of $3$--loop Jacobi diagrams
in terms of spaces $\mathcal{A}(\Gamma)$ for $3$--loop trivalent graphs
$\Gamma$.
Ignoring orientations of internal vertices, the internal graph of a
$3$--loop Jacobi diagram may be one of the five graphs below,
\begin{equation}
\label{eq.5ig}
\pc{wtr2b}{0.33}, \
\pc{bbl2b}{0.33}, \
\pc{mdl2b}{0.33}, \
\pc{tsq2b}{0.33}, \
\pc{tet2b}{0.33}.
\end{equation}
The space of $3$--loop Jacobi diagrams is presented by
\begin{equation}
\label{eq.A3l}
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}} \ \cong \
\Big( \bigoplus_{\mbox{\scriptsize $\Gamma$ in (\ref{eq.5ig})}} \!\!
\mathcal{A} (\Gamma) \Big) \Big/ \, {\rm IHX} ,
\end{equation}
where ``IHX'' denotes the IHX relations among these $\Gamma$; all
such relations are obtained by replacing a neighborhood of a
$4$--valent vertex of one of the following graphs with the defining
graphs of the IHX relation,
\begin{equation}
\label{eq.4vg_ihx}
\pc{g1}{0.32}, \
\pc{g2}{0.32}, \
\pc{g3}{0.32}, \
\pc{g4}{0.32}, \
\pc{g5}{0.32}.
\end{equation}
We will see,
in Sections \ref{sec.odd_deg} and \ref{sec.even_deg}
for the odd and even degree parts respectively,
that (\ref{eq.A3l}) is isomorphic to
\begin{equation}
\label{eq.tsq_tet}
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}} \ \cong \ \Big( \mathcal{A} \big( \dbnframe\hspace{-8.25pt}\smoothing
\big) \oplus \mathcal{A} \big( \tetrahedron \big) \Big) \Big/ \,
{\rm IHX},
\end{equation}
where this ``IHX'' denotes the IHX relation obtained from the fourth
graph of (\ref{eq.4vg_ihx}). We describe $\mathcal{A} \big( \dbnframe\hspace{-8.25pt}\smoothing \big)$ and
$\mathcal{A} \big( \tetrahedron \big)$ in terms of polynomial algebras in the
next section.
\subsection{Polynomial presentation of $3$--loop Jacobi diagrams}
\label{sec.polyn_Jd}
In this section we see that the space of $3$--loop Jacobi diagrams
is identified, as a graded vector space, with a quotient space of a
direct sum of polynomial algebras.
We identify $\mathcal{A} \big(\tetrahedron \big)$ with the polynomial algebra
on six letters signifying legs on each of the arcs of the internal
graphs, modulo the IHX relations on the legs, and modulo the action
of $\mathfrak{S}_{4}$, the automorphism group of the tetrahedron.
Thus:
$$
\mathcal{A}(\tetrahedron) \ \cong \ {\mathbb Q}[x_1,x_2,x_3,x_4,x_5,x_6] \big/
(\ref{eq.Atet_rel}), {\mathfrak S}_4 ,
$$
where
\begin{equation}
\label{eq.tet_polyn}
\begin{array}{c}
\begin{picture}(100,90)
\put(0,0){\includegraphics[width=90pt]{tet1}}
\put(-22,70){\footnotesize $n_1$ \!\!\! legs}
\put(88,70){\footnotesize $n_2$ \!\!\! legs}
\put(65,5){\footnotesize $n_3$ \!\!\! legs}
\put(55,60){\footnotesize $n_6$}
\put(55,52){\footnotesize legs}
\put(32,24){\footnotesize $n_4$ \!\!\! legs}
\put(14,53){\footnotesize $n_5$ \!\!\! legs}
\end{picture}\end{array}
\mbox{is identified with }
x_1^{n_1} x_2^{n_2} x_3^{n_3} x_4^{n_4} x_5^{n_5} x_6^{n_6} ,
\end{equation}
and the following relations (as algebra relations) imply the IHX
relations on the legs:
\begin{equation}
\label{eq.Atet_rel}
\begin{cases}
\ x_1 - x_2 - x_6 = 0 , \\
\ x_1 - x_3 + x_5 = 0 , \\
\ x_4 + x_5 + x_6 = 0 .
\end{cases}
\end{equation}
In order to better describe the action of ${\mathfrak S}_4$,
following \cite{Nts98}, we make the substitution
$$
\begin{cases}
\ y_{1}=x_{1}-x_{5}+x_{6} , \\
\ y_{2}=x_{2}+x_{4}-x_{6} , \\
\ y_{3}=x_{3}-x_{4}+x_{5} , \\
\ y_{4}=-x_{1}-x_{2}-x_{3} ,
\end{cases}
$$
replacing variables corresponding with edges of the tetrahedron with
variables corresponding with its faces. In these new variables,
\begin{align*}
\mathcal{A} \big(\tetrahedron \big) & \ \cong \ {\mathbb Q}[y_{1},y_{2},y_{3},y_{4}]
\big/ (y_{1}\!+\!y_{2}\!+\!y_{3}\!+\!y_{4}=0),\mathfrak{S}_{4} \\* &
\ \cong \ {\mathbb Q}[y_{1},y_{2},y_{3},y_{4}]^{{\mathfrak S}_4} \big/
(y_{1}\!+\!y_{2}\!+\!y_{3}\!+\!y_{4}=0),
\end{align*}
where ${\mathfrak S}_4$ acts on ${\mathbb Q}[y_{1},y_{2},y_{3},y_{4}]$ by
permuting $y_1,y_2,y_3,y_4$ symmetrically in even degrees and
skew-symmetrically in odd degrees.
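Note, as a direct check, that these four linear forms satisfy the displayed relation identically in the $x_i$, even before imposing (\ref{eq.Atet_rel}):
$$
y_{1}+y_{2}+y_{3}+y_{4}
=(x_1-x_5+x_6)+(x_2+x_4-x_6)+(x_3-x_4+x_5)-(x_1+x_2+x_3)=0 ,
$$
since every $x_i$ cancels; this is consistent with the single relation $y_{1}+y_{2}+y_{3}+y_{4}=0$ appearing in the quotient above.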
We may identify $\mathcal{A} \big(\dbnframe\hspace{-8.25pt}\smoothing\big)$ with the polynomial algebra on
six letters modulo the IHX relations on the legs and modulo the
action of the automorphism group of the $\dbnframe\hspace{-8.25pt}\smoothing$--shape as above.
Thus:
$$
\mathcal{A} \big(\dbnframe\hspace{-8.25pt}\smoothing\big) \ \cong \ {\mathbb Q}[z_1,z_2,z_3,z_4] \big/
(z_1+z_2+z_3+z_4=0), \mathrm{Aut}(\dbnframe\hspace{-8.25pt}\smoothing) ,
$$
where
\begin{equation}
\label{eq.tsq_polyn}
\begin{array}{c}
\begin{picture}(90,90)
\put(0,0){\includegraphics[width=80pt]{tsq1}}
\put(26,90){\footnotesize $m_1$ \!\!\! legs}
\put(26,66){\footnotesize $m_2$ \!\!\! legs}
\put(26,37){\footnotesize $m_3$ \!\!\! legs}
\put(26,13){\footnotesize $m_4$ \!\!\! legs}
\end{picture}\end{array}
\mbox{is identified with }
z_1^{m_1} z_2^{m_2} z_3^{m_3} z_4^{m_4} .
\end{equation}
Jacobi diagrams whose internal graphs are $\dbnframe\hspace{-8.25pt}\smoothing$ and $\tetrahedron$
are related by the IHX relation
which is obtained from the fourth graph of (\ref{eq.4vg_ihx}),
\begin{equation}
\label{eq.ihx_tsq_tet}
\begin{array}{c}
\begin{picture}(82,95)
\put(0,0){\includegraphics[width=80pt]{tsq1}}
\put(26,90){\footnotesize $m_1$ \!\!\! legs}
\put(26,66){\footnotesize $m_2$ \!\!\! legs}
\put(26,37){\footnotesize $m_3$ \!\!\! legs}
\put(26,13){\footnotesize $m_4$ \!\!\! legs}
\end{picture}\end{array}
\underset{IHX}{=} \
\begin{array}{c}
\begin{picture}(82,95)
\put(0,0){\includegraphics[width=80pt]{tet4}}
\put(26,90){\footnotesize $m_1$ \!\!\! legs}
\put(26,66){\footnotesize $m_2$ \!\!\! legs}
\put(26,37){\footnotesize $m_3$ \!\!\! legs}
\put(26,13){\footnotesize $m_4$ \!\!\! legs}
\end{picture}\end{array}
+ \
\begin{array}{c}
\begin{picture}(85,95)
\put(0,0){\includegraphics[width=80pt]{tet4}}
\put(26,90){\footnotesize $m_2$ \!\!\! legs}
\put(26,66){\footnotesize $m_1$ \!\!\! legs}
\put(26,37){\footnotesize $m_3$ \!\!\! legs}
\put(26,13){\footnotesize $m_4$ \!\!\! legs}
\end{picture}\end{array} .
\end{equation}
\subsection{Odd degree part}
\label{sec.odd_deg}
The aim of this section is to prove the following theorem.
\begin{thm}
\label{thm.main}
The space of $3$--loop Jacobi diagrams of odd degree vanishes.
That is, $\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (odd)} = 0$.
\end{thm}
\begin{proof}
By (\ref{eq.A3l}),
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (odd)} \ \cong \
\Big( \bigoplus_{\mbox{\scriptsize $\Gamma$ in (\ref{eq.5ig})}} \!\!
\mathcal{A} (\Gamma)^{\rm (odd)} \Big) \Big/ \, {\rm IHX} .
$$
We show the vanishing of $\mathcal{A} (\Gamma)^{\rm (odd)}$
for the first four graphs $\Gamma$ in (\ref{eq.5ig}).
The vanishing of $\mathcal{A} \big( \dbnframe\hspace{-8.25pt}\smoothing \big)^{\rm (odd)}$ is shown as follows.
It is shown by the IHX
relation that this space is spanned by diagrams of the form
(\ref{eq.tsq_polyn}). Such a diagram $D$ is equal modulo the AS
relation to $-D$ by reflection of the internal graph with respect to
a vertical line, therefore $D=0$.
Hence, $\mathcal{A} \big( \dbnframe\hspace{-8.25pt}\smoothing \big)^{\rm (odd)} = 0$.
Similarly, reflection of the internal graph shows us that the spaces
$\mathcal{A} \big( \bbl \big)^{\rm (odd)}$ and $\mathcal{A} \big( \mdl \big)^{\rm
(odd)}$ also both vanish.
The vanishing of $\mathcal{A} (\wtr)^{\rm (odd)}\vspc{15}$ is shown as
follows. Let $D$ be a Jacobi diagram whose internal graph is $\wtr$.
We can assume by the IHX relation that there are no legs adjacent to
any separating arc. If there is a loop with an even number of legs,
then the AS relation on the vertex connecting a separating arc with
this loop gives $D=-D$ and therefore $D=0$. Otherwise, by applying
the IHX relation to a separating arc, $D$ is equal to $2$ times a
Jacobi diagram in $\mathcal{A}(\bbl)^{\rm (odd)} = 0$, and therefore $D=0$.
Hence, $\mathcal{A} (\wtr)^{\rm (odd)} =0$.
Therefore, the space of $3$--loop Jacobi diagrams of odd degree is presented by
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (odd)} \ \cong \ \mathcal{A} \big(
\tetrahedron \big)^{\rm (odd)} \big/ \big( \mbox{(the right hand
side of (\ref{eq.ihx_tsq_tet}))} = 0 \big).
$$
The vector space spanned by the right hand side of (\ref{eq.ihx_tsq_tet})
is spanned by
$$
\big( x_1^{m_1} x_5^{m_2} + x_1^{m_2} x_5^{m_1} \big) x_4^{m_3} (-x_2)^{m_4}
$$
in terms of polynomials under the identification (\ref{eq.tet_polyn}).
This space is spanned by
$$
(x_1+x_5)^m (x_1 x_5)^n x_4^{m_3} (-x_2)^{m_4} .
$$
Noting that $x_1+x_5 = x_3 = x_2-x_4$ (the second relation of (\ref{eq.Atet_rel}) gives $x_3=x_1+x_5$, while the first and third together give $x_2-x_4=x_1+x_5$),
this space is further spanned by diagrams of the following form.
\begin{equation}
\label{eq.image}
\begin{array}{c}
\begin{picture}(100,90)
\put(0,0){\includegraphics[width=90pt]{tet3}}
\put(-17,65){\footnotesize $n$ \!\!\! legs}
\put(88,65){\footnotesize $n$ \!\!\! legs}
\put(35,15){\footnotesize $n_4$ \!\!\! legs}
\put(15,47){\footnotesize $n_5$ \!\!\! legs}
\end{picture}\end{array}
\end{equation}
Hence,
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (odd)} \ \cong \ \mathcal{A} \big(
\tetrahedron \big)^{\rm (odd)} \big/ \big( \mbox{(\ref{eq.image})} =
0 \big) .
$$
In order to show that $\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm
(odd)} = 0$, it is sufficient to show that $\mathcal{A} \big( \tetrahedron
\big)^{\rm (odd)}$ is spanned by diagrams of the form
(\ref{eq.image}). As mentioned in Section \ref{sec.polyn_Jd}, the
space of $3$--loop Jacobi diagrams of odd degree is presented by
$$
\mathcal{A} \big(\tetrahedron \big)^{\rm (odd)} \ \cong \ \big(
{\mathbb Q}[y_1,y_2,y_3,y_4]^{\rm (odd)} \big)^{{\mathfrak S}_4} \big/
(y_1\!+\!y_2\!+\!y_3\!+\!y_4=0),
$$
where the action of $\mathfrak{S}_{4}$ on
${\mathbb Q}[y_{1},y_{2},y_{3},y_{4}]^{\text{(odd)}}$
is skew symmetric.
Since a skew symmetric polynomial can be written as
the product of a symmetric polynomial and
the discriminant $\Delta = \prod_{i<j}(y_{i}-y_{j})$,
$$
\mathcal{A} \big(\tetrahedron \big)^{\rm (odd)}
\ \cong \
\Delta \cdot {\mathbb Q}[\sigma_{2},\sigma_{3},\sigma_{4}]^{\rm (odd)}
\ \cong \
\Delta\sigma_{3}\cdot{\mathbb Q}[\sigma_{2},\sigma_{3}^{2},\sigma_{4}] ,
$$
recalling that
$\sigma_i$ denotes the $i$th symmetric polynomial in $y_1,y_2,y_3,y_4$.
Hence, the vector space spanned by the diagrams of the form (\ref{eq.image})
in $\mathcal{A} \big(\tetrahedron \big)^{\rm (odd)}$
is presented by the image of the following map
$$
{\mathbb Q}[x_1 x_2, x_4, x_5]^{\rm (odd)} \longrightarrow
{\mathbb Q}[x_1,x_2,x_3,x_4,x_5,x_6]^{\rm (odd)} \big/ (\ref{eq.Atet_rel}),
{\mathfrak S}_4 \ \cong \
\Delta\sigma_{3}\cdot{\mathbb Q}[\sigma_{2},\sigma_{3}^{2},\sigma_{4}] .
$$
By Lemma \ref{lem.surj}, this map is surjective, noting that
\begin{align*}
x_1 &= (y_1-y_4)/4 , \\
x_2 &= (y_2-y_4)/4 , \\
x_4 &= (y_2-y_3)/4 , \\
x_5 &= (y_3-y_1)/4 .
\end{align*}
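For instance, the first of these identities can be verified directly: the relations (\ref{eq.Atet_rel}) give $x_5=x_3-x_1$ and $x_6=x_1-x_2$, so
$$
y_1-y_4=(x_1-x_5+x_6)+(x_1+x_2+x_3)
=x_1-(x_3-x_1)+(x_1-x_2)+x_1+x_2+x_3=4x_1 ,
$$
and the remaining three identities follow by analogous computations.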
Therefore,
$\mathcal{A} \big( \tetrahedron \big)^{\rm (odd)}$ is spanned by
the diagrams of the form (\ref{eq.image}),
which implies the theorem.
\end{proof}
\begin{rem}
\label{rem.Dasbach_gap} Dasbach \cite{Das97} claimed to have proved
the vanishing of $n$--loop Jacobi diagrams with an odd number of legs
for $n\leq 6$ (cited in two of his subsequent papers --- in
\cite{Das98} as Theorem 2.2 and half of Theorem 7.4, and in
\cite{Das00}, although the focus of both papers is on the `even
number of legs' case). There is however a gap in the proof of his
Theorem 5.4.3(\textrm{iii}) (the second equation on page 58 is
wrong, since he is using `modulo greater CW--vectors' to go one way
but not the other).
\end{rem}
\vskip 0.5pc
\begin{rem}
\label{rem.q_inv} It is known that no quantum invariant can
distinguish a knot and its inverse. Hence, if there existed a
counter-example to the conjecture that Jacobi diagrams with an odd
number of legs vanish, such a Jacobi diagram would not be detectable
by weight systems derived from Lie algebras. It is known
\cite{Vogel,Lieberum} how to construct elements which cannot be
detected by weight systems derived from Lie algebras, but the method
employed in these papers would not give non-trivial diagrams with an
odd number of legs, as it involves constructing non-trivial diagrams
by multiplying particular elements of Vogel's algebra $\Lambda$, and
the action of $\Lambda$ does not change the number of legs.
\end{rem}
\subsection{Even degree part}
\label{sec.even_deg}
In this section, we review the identification of the space of
$3$--loop Jacobi diagrams of even degree with a polynomial algebra,
following Nakatsuru \cite{Nts98}.
This identification recovers \cite[Theorem 7.4]{Das98}.
By (\ref{eq.A3l}),
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (even)} \ \cong \
\Big( \bigoplus_{\mbox{\scriptsize $\Gamma$ in (\ref{eq.5ig})}} \!\!
\mathcal{A} (\Gamma)^{\rm (even)} \Big) \Big/ \, {\rm IHX} .
$$
Unlike the odd degree case, it is necessary to describe IHX
relations among internal graphs $\Gamma$ concretely, since $\mathcal{A}
(\Gamma)^{\rm (even)}$ do not vanish for most $\Gamma$. Let
$\mathcal{D}(\Gamma)$ denote the space of Jacobi diagrams whose internal
graph is $\Gamma$, not divided by the action of the symmetry of
$\Gamma$. Then, by definition, $\mathcal{A}(\Gamma) = \mathcal{D}(\Gamma) / {\rm
Aut}\, (\Gamma)$. The IHX relations obtained from the first 4 graphs
of (\ref{eq.4vg_ihx}) induce the maps
\begin{align*}
& \psi_1 : \mathcal{D}( \wtr ) \longrightarrow \mathcal{D}( \bbl ), \\
& \psi_2 : \mathcal{D}( \bbl ) \longrightarrow \mathcal{D}( \mdl ), \\
& \psi_3 : \mathcal{D}( \mdl ) \longrightarrow \mathcal{D}( \dbnframe\hspace{-8.25pt}\smoothing ), \\
& \psi_4 : \mathcal{D}( \dbnframe\hspace{-8.25pt}\smoothing ) \longrightarrow \mathcal{D}( \tetrahedron ).
\end{align*}
Here, for example, $\psi_4$ is the map taking the left hand side of
(\ref{eq.ihx_tsq_tet}) to the right hand side of
(\ref{eq.ihx_tsq_tet}).
Further, the IHX relation obtained from the last graph of (\ref{eq.4vg_ihx})
gives the relations,
\begin{equation}
\label{eq.ihx_theta-o}
\begin{picture}(80,30)
\put(0,-2){\pc{g10}{0.33}}
\put(12,28){\scriptsize $n_1$ legs}
\put(15,11){\scriptsize $n_2$ legs}
\put(17,-4){\scriptsize $n_3$ legs}
\put(75,-14){\scriptsize $n$ legs}
\end{picture}
+
\begin{picture}(80,30)
\put(0,-2){\pc{g10}{0.33}}
\put(12,28){\scriptsize $n_2$ legs}
\put(15,11){\scriptsize $n_3$ legs}
\put(17,-4){\scriptsize $n_1$ legs}
\put(75,-14){\scriptsize $n$ legs}
\end{picture}
+
\begin{picture}(80,30)
\put(0,-2){\pc{g10}{0.33}}
\put(12,28){\scriptsize $n_3$ legs}
\put(15,11){\scriptsize $n_1$ legs}
\put(17,-4){\scriptsize $n_2$ legs}
\put(75,-14){\scriptsize $n$ legs}
\end{picture}
= \ 0 .
\end{equation}
By using these,\vspc{23}
the space of $3$--loop Jacobi diagrams of even degree is presented by
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (even)} \ \cong \
\Big( \bigoplus_{\mbox{\scriptsize $\Gamma$ in (\ref{eq.5ig})}} \!\!
\mathcal{D} (\Gamma)^{\rm (even)} \Big) \Big/ \,
\big( {\rm Aut}\,(\Gamma) \mbox{ for $\Gamma$ in (\ref{eq.5ig})}, \
\psi_1,\, \psi_2, \, \psi_3, \, \psi_4, \, \mbox{(\ref{eq.ihx_theta-o})} \big).
$$
Since $\mathcal{A}( \wtr )^{\rm (even)} = 0$ and $\psi_1$ induces the zero
map $\mathcal{A}( \wtr )^{\rm (even)} \to \mathcal{A}( \bbl )^{\rm (even)}$, we can
ignore the contribution from $\mathcal{A}( \wtr )^{\rm (even)}$.
Further, since $\psi_3 \psi_2$ descends to
a map $\mathcal{A}( \bbl )^{\rm (even)} \to \mathcal{A}( \dbnframe\hspace{-8.25pt}\smoothing )^{\rm (even)}$,
we can ignore the contribution from $\mathcal{A}( \bbl )^{\rm (even)}$.
Furthermore, since $\psi_3$ induces
a map $\mathcal{A}( \mdl )^{\rm (even)} \to \mathcal{A}( \dbnframe\hspace{-8.25pt}\smoothing )^{\rm (even)}$ and
(\ref{eq.ihx_theta-o}) vanishes in the image of $\psi_4 \psi_3$, we
can ignore the contribution from $\mathcal{A}( \mdl )^{\rm (even)}$. Hence,
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (even)} \ \cong \
\Big( \mathcal{D} (\dbnframe\hspace{-8.25pt}\smoothing)^{\rm (even)} \oplus \mathcal{D} (\tetrahedron)^{\rm (even)} \Big)
\Big/
\Big( {\rm Aut}\,(\dbnframe\hspace{-8.25pt}\smoothing), \, {\rm Aut}\,(\tetrahedron), \, \psi_4 \Big) .
$$
It can be checked by concrete calculation that
if Jacobi diagrams $D, D' \in \mathcal{D} \big(\dbnframe\hspace{-8.25pt}\smoothing \big)^{\rm (even)}$
are related by ${\rm Aut}\, \big( \dbnframe\hspace{-8.25pt}\smoothing \big)$,
then $\psi_4(D)$ and $\psi_4(D')$ are related by
${\rm Aut}\, \big( \tetrahedron \big)$.
Hence, $\psi_4$ induces
a map $\overline{\psi_4} :
\mathcal{A}( \dbnframe\hspace{-8.25pt}\smoothing )^{\rm (even)} \to \mathcal{A}( \tetrahedron )^{\rm (even)}$.
Therefore,
$$
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (even)} \ \ \cong \ \
\Big( \mathcal{A} (\dbnframe\hspace{-8.25pt}\smoothing)^{\rm (even)} \oplus \mathcal{A} (\tetrahedron)^{\rm (even)} \Big)
\Big/ \, \overline{\psi_4}
\ \ \cong \ \
\mathcal{A} (\tetrahedron)^{\rm (even)} .
$$
Hence, by the identification of $\mathcal{A} (\tetrahedron)$ with the
polynomial algebra mentioned in Section \ref{sec.polyn_Jd},
\begin{align*}
\mathcal{A}_{\, \mbox{\scriptsize 3--loop}}^{\rm (even)} & \ \cong \ \big(
{\mathbb Q}[y_{1},y_{2},y_{3},y_{4}]^{\rm (even)}
\big)^{\mathfrak{S}_{4}}\big/
(y_{1}\!+\!y_{2}\!+\!y_{3}\!+\!y_{4}=0) \\
& \ \cong \
{\mathbb Q}[\sigma_{2},\sigma_{3},\sigma_{4}]^{\rm (even)}
\ \cong \ {\mathbb Q}[\sigma_{2},\sigma_{3}^{2},\sigma_{4}] ,
\end{align*}
where $\sigma_{i}$ denotes the $i$th symmetric polynomial
in four variables $y_1,y_2,y_3,y_4$.
This space has generating function
$$
\frac{1}{(1-x^{2})(1-x^{4})(1-x^{6})}=\sum_{n \text{
even}}\left(\Bigl\lfloor\frac{n^{2}+12n}{48}\Bigr\rfloor+1\right)x^{n}
$$
recovering \cite[Theorem 7.4]{Das98} and agreeing with the results
of \cite{Das00}.
\section{A lemma on polynomial algebras}
\label{sec.pf_lem}
The aim of this section is to prove Lemma \ref{lem.surj}, which was
used in the proof of the main theorem in the previous section.
The {\it skew symmetrizer}
$$
{\mathbb Q}[y_1,y_2,y_3,y_4] \ \longrightarrow \
\Delta \cdot {\mathbb Q}[\sigma_1,\sigma_2,\sigma_3,\sigma_4]
$$
is the linear map sending
$$
f(y_1,y_2,y_3,y_4) \ \mbox{ to } \ \frac{1}{4!} \sum_{\tau \in
{\mathfrak S}_4} \mbox{sgn}(\tau) \,
f(y_{\tau(1)},y_{\tau(2)},y_{\tau(3)},y_{\tau(4)}),
$$
where $\sigma_i$ is the $i$th symmetric polynomial in
$y_1,y_2,y_3,y_4$ and $\Delta = \prod_{i<j}(y_{i}-y_{j})$ as before.
We consider the composition
\begin{multline*} {\mathbb Q}[y_1-y_3, y_2-y_3,
(y_1-y_4)(y_2-y_4)] \ \longrightarrow \\ {\mathbb Q}[y_1,y_2,y_3,y_4]/(y_1 +
y_2 + y_3 + y_4) \ \longrightarrow \ \Delta \cdot
{\mathbb Q}[\sigma_2,\sigma_3,\sigma_4]
\end{multline*}
where the first map is the composition of the inclusion with the quotient projection,
and the second map is a quotient of the skew symmetrizer.
\begin{lem}
\label{lem.surj}
The odd degree part of the above map,
$$
{\mathbb Q}[y_1 \!-\! y_3, \, y_2 \!-\! y_3, \,
(y_1 \!-\! y_4)(y_2 \!-\! y_4)]^{\rm (odd)} \ \longrightarrow \
\Delta \sigma_3 \cdot {\mathbb Q}[\sigma_2,\sigma_3^2,\sigma_4],
$$
is surjective,
where ${\mathbb Q}[\cdots]^{\rm (odd)}$ denotes the vector subspace of ${\mathbb Q}[\cdots]$
spanned by polynomials of odd degrees.
\end{lem}
\begin{proof}
We put
\begin{align*}
& P_2(y_1,y_2,y_3) = (y_1-y_2)^2 + (y_2-y_3)^2 + (y_3-y_1)^2 , \\
& P_3(y_1,y_2,y_3) = (y_1-y_2)(y_2-y_3)(y_3-y_1) , \\
& P_4(y_1,y_2,y_3,y_4) = (y_1-y_3)(y_2-y_3)(y_1-y_4)(y_2-y_4) .
\end{align*}
By definition,
$$
12 \, P_2(y_1,y_2,y_3)^n \, P_3(y_1,y_2,y_3)^{2m+3} \, P_4(y_1,y_2,y_3,y_4)^k
$$
belongs to ${\mathbb Q}[y_1\!-\!y_3, \, y_2\!-\!y_3, \,
(y_1\!-\!y_4)(y_2\!-\!y_4)]^{\rm (odd)}$ for any non-negative
integers $n,m,k$. Since $P_2(y_1,y_2,y_3)$ and $P_3(y_1,y_2,y_3)$
are invariant under cyclic permutations of $y_1,y_2,y_3$, the above
polynomial and
\begin{align*}
& 4 \, P_2(y_1,y_2,y_3)^n \, P_3(y_1,y_2,y_3)^{2m+3} \\*
& \quad \times\big(
P_4(y_1,y_2,y_3,y_4)^k + P_4(y_1,y_3,y_2,y_4)^k + P_4(y_1,y_4,y_2,y_3)^k \big)
\end{align*}
are taken to the same image by the skew symmetrizer.
Further, since the last factor of the above formula is a symmetric polynomial,
the skew symmetrizer takes the above formula to
\begin{align*}
Q^{n,m,k} & = \big( P_2(y_1,y_2,y_3)^n P_3(y_1,y_2,y_3)^{2m+3}
+ P_2(y_4,y_3,y_2)^n P_3(y_4,y_3,y_2)^{2m+3} \\*
& \quad + P_2(y_3,y_4,y_1)^n P_3(y_3,y_4,y_1)^{2m+3}
+ P_2(y_2,y_1,y_4)^n P_3(y_2,y_1,y_4)^{2m+3} \big) \\*
& \times \big(
P_4(y_1,y_2,y_3,y_4)^k + P_4(y_1,y_3,y_2,y_4)^k + P_4(y_1,y_4,y_2,y_3)^k \big).
\end{align*}
Hence, it is sufficient to show that
$\Delta \sigma_3 \cdot {\mathbb Q}[\sigma_2,\sigma_3^2,\sigma_4]$
is spanned by $Q^{n,m,k}$.
For a fixed non-negative integer $d$, we consider the vector
subspace of $\Delta \sigma_3 \cdot {\mathbb Q}[\sigma_2,\sigma_3^2,\sigma_4]$
spanned by polynomials of degree $2d+9$. Since it is spanned by
$\Delta \sigma_3 \cdot \sigma_2^n \sigma_3^{2m} \sigma_4^k$ for
non-negative integers $n,m,k$ satisfying $n+2k+3m=d$, its dimension
is equal to the number of such $(n,m,k)$. Since the $Q^{n,m,k}$'s
form a family of exactly this number of such polynomials, it is
sufficient to show the linear independence of the $Q^{n,m,k}$ for
non-negative integers $n,m,k$ satisfying $n+2k+3m=d$.
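Before turning to the proof of this linear independence, we remark that it can be confirmed symbolically for small $d$ (after imposing $y_4=-(y_1+y_2+y_3)$), for instance with the following script; this check is ours and is not part of the proof.
\begin{verbatim}
# Sanity check: the skew symmetrizations Q^{n,m,k} with
# n + 2k + 3m = d are linearly independent for a small d,
# working modulo y4 = -(y1 + y2 + y3).
from itertools import permutations
import sympy as sp

y = sp.symbols('y1:5')  # y[0], ..., y[3] stand for y1, ..., y4

def P2(a, b, c): return (a-b)**2 + (b-c)**2 + (c-a)**2
def P3(a, b, c): return (a-b)*(b-c)*(c-a)
def P4(a, b, c, d): return (a-c)*(b-c)*(a-d)*(b-d)

def sign(p):  # signature of a permutation given as a tuple
    s = 1
    for i in range(len(p)):
        for j in range(i+1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def skew(f):  # the skew symmetrizer of Section 4
    return sp.expand(sum(
        sign(p)*f.subs(dict(zip(y, [y[i] for i in p])), simultaneous=True)
        for p in permutations(range(4))) / 24)

def Q(n, m, k):
    f = P2(*y[:3])**n * P3(*y[:3])**(2*m+3) * P4(*y)**k
    return skew(f).subs(y[3], -(y[0]+y[1]+y[2]))

d = 4  # checks the degree 2d+9 part
triples = [(d-2*k-3*m, m, k) for m in range(d//3+1)
           for k in range((d-3*m)//2+1)]
polys = [sp.Poly(sp.expand(Q(n, m, k)), y[0], y[1], y[2])
         for (n, m, k) in triples]
monoms = sorted({mo for p in polys for mo in p.monoms()})
M = sp.Matrix([[p.nth(*mo) for mo in monoms] for p in polys])
assert M.rank() == len(triples)  # linear independence
\end{verbatim}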
In order to prove the linear independence of the $Q^{n,m,k}$'s we
first make the substitution
\begin{align*}
y_1 &= (3 t^a - t^b - t^c)/4, \\
y_2 &= (- t^a +3 t^b - t^c)/4, \\
y_3 &= (- t^a - t^b +3 t^c)/4, \\
y_4 &= -(t^a + t^b + t^c)/4,
\end{align*}
where $t$ is a variable tending to $\infty$,
and $a,b,c$ are real numbers satisfying
$a>b>c>0$ and $a-b \, < \, b-c \, < \, 2(a-b)$.
Since
\begin{align*}
& y_1-y_4 = t^a, \\
& y_2-y_4 = t^b, \\
& y_3-y_4 = t^c,
\end{align*}
we have that
\begin{align*}
P_2(y_1,y_2,y_3) &=
(t^a-t^b)^2 + (t^b-t^c)^2 + (t^c-t^a)^2 \\*
&= 2 t^{2a} \big( 1 - t^{-(a-b)} + o(t^{-(b-c)}) \big),
\end{align*}
where $f(t)=g(t)+o(t^\varepsilon)$ means that
$\big( f(t)-g(t) \big)/t^\varepsilon \to 0$ as $t \to \infty$.
Hence,
$$
P_2(y_1,y_2,y_3)^n = 2^n t^{2a n} \big( 1 - n t^{-(a-b)} + o(t^{-(b-c)}) \big).
$$
Similarly,
\begin{align*}
&P_2(y_4,y_3,y_2)^n = 2^n t^{2b n} \big( 1 + o(t^{0}) \big), \\
&P_2(y_3,y_4,y_1)^n = 2^n t^{2a n} \big( 1 + o(t^{0}) \big), \\
&P_2(y_2,y_1,y_4)^n=2^n t^{2a n}\big(1-n\, t^{-(a-b)} + o(t^{-(b-c)}) \big), \\
&P_3(y_1,y_2,y_3)^{2m+3} = -t^{(2a+b)(2m+3)}
\big(1- (2m+3) (t^{-(a-b)}+t^{-(b-c)}) + o(t^{-(b-c)}) \big), \\
&P_3(y_4,y_3,y_2)^{2m+3} = t^{(2b+c)(2m+3)} \big( 1 + o(t^{0}) \big), \\
&P_3(y_3,y_4,y_1)^{2m+3} = -t^{(2a+c)(2m+3)} \big( 1 + o(t^{0}) \big), \\
&P_3(y_2,y_1,y_4)^{2m+3} = t^{(2a+b)(2m+3)}
\big(1- (2m+3) \, t^{-(a-b)} + o(t^{-(b-c)}) \big), \\
&P_4(y_1,y_2,y_3,y_4)^k = t^{(2a+2b)k} \big( 1 + o(t^{0}) \big), \\
&P_4(y_1,y_3,y_2,y_4)^k = (-1)^{k}t^{(2a+b+c)k} \big( 1 + o(t^{0}) \big), \\
&P_4(y_1,y_4,y_2,y_3)^k = t^{(2a+b+c)k} \big( 1 + o(t^{0}) \big).
\end{align*}
Hence,
\begin{align*}
& P_2(y_1,y_2,y_3)^n P_3(y_1,y_2,y_3)^{2m+3}
+ P_2(y_4,y_3,y_2)^n P_3(y_4,y_3,y_2)^{2m+3} \\*
& \quad + P_2(y_3,y_4,y_1)^n P_3(y_3,y_4,y_1)^{2m+3}
+ P_2(y_2,y_1,y_4)^n P_3(y_2,y_1,y_4)^{2m+3} \\
& = -2^n t^{2a n+(2a+b)(2m+3)} \big(1- (n+2m+3) \, t^{-(a-b)}
-(2m+3) \, t^{-(b-c)} + o(t^{-(b-c)}) \big) \\* & \quad +2^n t^{2b
n+(2b+c)(2m+3)} \big( 1 + o(t^{0}) \big) \\* & \quad -2^n t^{2a
n+(2a+c)(2m+3)} \big( 1 + o(t^{0}) \big) \\* & \quad +2^n t^{2a
n+(2a+b)(2m+3)} \big(1- (n+2m+3) \, t^{-(a-b)} + o(t^{-(b-c)}) \big)
\\* & = 2^n (2m+3) \, t^{2a n+(2a+b)(2m+3)-(b-c)} \big( 1 + o(t^{0})
\big),
\end{align*}
noting that we need $2m+3>1$ when we verify that
$$
2a n+(2a+b)(2m+3)-(b-c) \ > \ 2a n+(2a+c)(2m+3).
$$
Further,
$$
P_4(y_1,y_2,y_3,y_4)^k + P_4(y_1,y_3,y_2,y_4)^k + P_4(y_1,y_4,y_2,y_3)^k
= \varepsilon \, t^{(2a+2b)k} \big( 1 + o(t^{0}) \big) ,
$$
where $\varepsilon=1$ if $k>0$, and $\varepsilon=3$ if $k=0$.
Therefore,
$$
Q^{n,m,k} = \varepsilon \, 2^n (2m+3) \, t^{ 2(n+2m+k+3)a + 2(m+k+1)b +c }
\big( 1 + o(t^{0}) \big) .
$$
This implies that the only possible linear relations between the
$Q^{n,m,k}$'s are between those having the same value of $(n+2m+k, \
m+k )$, and in particular the same value of $m+k$. In other words,
the vector space which we are considering is presented by the direct
sum:
$$
\mbox{span} \{ Q^{n,m,k} \ | \ n+2k+3m=d \} \ = \
\bigoplus_{\ell}
\mbox{span} \{ Q^{n,m,k} \ | \ n+2k+3m=d, \ \ m+k=\ell \} .
$$
Next, in order to complete the proof of the linear independence of the
$Q^{n,m,k}$'s,
we make another substitution
\begin{align*}
y_1 &= (2 t^a - t^c)/4\thickspace\ + t^b/2 , \\
y_2 &= (2 t^a - t^c)/4\thickspace\ - t^b/2 , \\
y_3 &= (-2 t^a + 3 t^c)/4 , \\
y_4 &= -(2 t^a + t^c)/4 ,
\end{align*}
where $t$ is as above,
and $a,b,c$ are real numbers satisfying
$a>b>c>0$ and $b-c \, < \, a-b \, < \, 2(b-c)$.
Since
\begin{align*}
& y_1-y_4 = t^a + t^b/2, \\
& y_2-y_4 = t^a - t^b/2, \\
& y_3-y_4 = t^c , \\
& y_1-y_2 = t^b ,
\end{align*}
we have that
\begin{align*}
&P_2(y_1,y_2,y_3)^n = 2^n t^{2a n} \big( 1 -2 n \, t^{-(a-c)} + o(t^{-(a-c)}) \big), \\
&P_2(y_4,y_3,y_2)^n = 2^n t^{2a n}
\big( 1 -n\, t^{-(a-b)} -n\, t^{-(a-c)}+ o(t^{-(a-c)}) \big), \\
&P_2(y_3,y_4,y_1)^n = 2^n t^{2a n}
\big( 1 +n\, t^{-(a-b)} -n\, t^{-(a-c)} + o(t^{-(a-c)}) \big), \\
&P_2(y_2,y_1,y_4)^n = 2^n t^{2a n} \big( 1 + o(t^{-(a-c)}) \big), \\
&P_3(y_1,y_2,y_3)^{2m+3} = -t^{(2a+b)(2m+3)}
\big(1- 2 (2m+3) \, t^{-(a-c)} + o(t^{-(a-c)}) \big), \\
&P_3(y_4,y_3,y_2)^{2m+3} = t^{(2a+c)(2m+3)}
\big( 1 - (2m+3)\, (t^{-(a-b)} + t^{-(a-c)}) + o(t^{-(a-c)}) \big), \\
&P_3(y_3,y_4,y_1)^{2m+3} = -t^{(2a+c)(2m+3)}
\big( 1 + (2m+3)\, (t^{-(a-b)} - t^{-(a-c)}) + o(t^{-(a-c)}) \big), \\
&P_3(y_2,y_1,y_4)^{2m+3} = t^{(2a+b)(2m+3)} \big( 1 + o(t^{-(a-c)}) \big), \\
&P_4(y_1,y_2,y_3,y_4)^k = t^{4a k} \big( 1 + o(t^{0}) \big), \\
&P_4(y_1,y_3,y_2,y_4)^k = (-1)^{k}t^{(2a+b+c)k} \big( 1 + o(t^{0}) \big), \\
&P_4(y_1,y_4,y_2,y_3)^k = t^{(2a+b+c)k} \big( 1 + o(t^{0}) \big).
\end{align*}
Hence,
\begin{align*}
& P_2(y_1,y_2,y_3)^n P_3(y_1,y_2,y_3)^{2m+3}
+ P_2(y_4,y_3,y_2)^n P_3(y_4,y_3,y_2)^{2m+3} \\*
& \quad + P_2(y_3,y_4,y_1)^n P_3(y_3,y_4,y_1)^{2m+3}
+ P_2(y_2,y_1,y_4)^n P_3(y_2,y_1,y_4)^{2m+3} \\
& = -2^n t^{2a n+(2a+b)(2m+3)} \big(1 - 2(n+2m+3) \, t^{-(a-c)} +
o(t^{-(a-c)}) \big) \\* & \quad +2^n t^{2a n+(2a+c)(2m+3)} \big(1 -
(n+2m+3) \, (t^{-(a-b)} + t^{-(a-c)}) + o(t^{-(a-c)}) \big)
\\* & \quad -2^n t^{2a n+(2a+c)(2m+3)} \big(1 + (n+2m+3) \,
(t^{-(a-b)} - t^{-(a-c)}) + o(t^{-(a-c)}) \big) \\* & \quad +2^n
t^{2a n+(2a+b)(2m+3)} \big(1 + o(t^{-(a-c)}) \big) \\* & = 2^{n+1}
(n+2m+3) \, t^{2a n+(2a+b)(2m+3)-(a-c)} \big( 1 + o(t^{0}) \big).
\end{align*}
Further,
$$
P_4(y_1,y_2,y_3,y_4)^k + P_4(y_1,y_3,y_2,y_4)^k + P_4(y_1,y_4,y_2,y_3)^k
= \varepsilon \, t^{4a k} \big( 1 + o(t^{0}) \big) ,
$$
where $\varepsilon=1$ if $k>0$, and $\varepsilon=3$ if $k=0$.
Therefore,
$$
Q^{n,m,k} =
\varepsilon \, 2^{n+1} (n+2m+3) \, t^{(2(n+2m+2k)+5)a +(2m+3) b +c}
\big( 1 + o(t^{0}) \big).
$$
This implies that the only possible linear relations between the
$Q^{n,m,k}$'s are between those having the same value of $(n+2m+2k, \ m )$.
Thus, the only linear relations
that could exist between $Q^{n,m,k}$'s with fixed $n+2k+3m=d$ are
between those having the same value $m+k$ (from the first substitution)
and the same values of $m$ (from the second substitution).
In other words,
$$
\mbox{span} \{ Q^{n,m,k} \ | \ n+2k+3m=d \} \ = \
\!\!\! \bigoplus_{n+2k+3m=d} \!\!\!
\mbox{span} \{ Q^{n,m,k} \} .
$$
It follows that the $Q^{n,m,k}$'s are indeed linearly independent,
as required.
\end{proof}
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\section{Introduction}
\iffalse
\begin{figure}
\centering
\includegraphics[scale=0.3]{img/figure-overview-whole.pdf}
\caption{The overview of a vanilla DNN and our two proposed attention models; the red line connects an inner block to the previous block with the highest probability in its attention vector.}
\label{fig:overview}
\end{figure}
\fi
Recent progress in deep neural networks (DNNs)
has significantly benefited
many if not all tasks in artificial intelligence,
ranging from computer vision to
natural language processing.
The encouraging results, however,
come with an enormous cost:
state-of-the-art DNNs have been scaled up to hundreds and even thousands of
layers with millions of parameters,
making it demanding to
train and deploy even on GPU clusters, let alone
on edge devices like mobile phones.
Many research efforts have thus
been made to craft lightweight DNNs
applicable to in-the-wild scenarios.
Representative schemes include
weight pruning~\cite{Han2015LearningBW},
model quantization~\cite{Jacob2018QuantizationAT},
and knowledge distillation~(KD)~\cite{Hinton2015DistillingTK},
among which KD has recently emerged as one
of the most flourishing topics in
the field.
The goal of KD is to extract
knowledge from a well-behaved but
cumbersome teacher model,
often known as \emph{dark knowledge},
and to learn a compact student model
capable to handle the task of the teacher
but with fewer parameters.
Since the pioneering work of \cite{Hinton2015DistillingTK},
a variety of dark knowledge has been explored,
including \emph{hint}~\cite{Romero2015FitNetsHF},
\emph{attention}~\cite{Zagoruyko2017AT}, and
\emph{instance relationship}~\cite{Liu2019KnowledgeDV}.
Despite the above forms of dark knowledge showcase promising results,
the naive \emph{soft target}~\cite{Hinton2015DistillingTK}
is found still among the most competitive ones~\cite{tian2019crd}.
Nevertheless, few attempts have been dedicated to
explaining the rationale of soft targets on the student learning.
A common belief is that soft labels reveal richer
information like category similarities than the widely-used one-hot vector,
so that the student obtains more supervision signals for learning.
Recently, the work~\cite{Yuan_2020_CVPR} argues that the soft target is intrinsically a type of
learned label smoothing~\cite{Szegedy2016RethinkingTI}
regularization. \cite{Furlanello2018BornAN} conjectures
that soft targets resemble importance-weighting,
where weights correspond to confidences
of the teachers in the correct prediction.
\cite{Tang2020UnderstandingAI},
on the other hand,
dissects the effects of soft targets into three main factors:
label smoothing, importance weighting, and category similarities.
Albeit the inspiring insights provided by these works,
the underlying mechanism of how category similarities
regularize the student model learning
has remained largely under-studied to date.
In this paper, we take a closer look at the role of the soft label
through the lens of a novel attention model, which we term KDExplainer. KDExplainer is an interpretation-friendly
student model, which distinguishes itself from conventional student DNNs
from two main aspects, as illustrated in Figure~\ref{fig:overview}.
First, KDExplainer takes the form of
a Hierarchical Mixture of Experts (HME), where each expert
is expected to specialize in specific subtasks and
learn adaptive features.
Second, KDExplainer casts the multi-class classification task as a multi-task problem,
in which each task is formulated as a binary classification problem.
Such a design explicitly
shapes KDExplainer as a
neural tree,
whose inherent working mechanism
is well understood,
with the aim of interpreting KD.
We then carry out KD from a free-form pre-trained DNN to the dedicated
KDExplainer, through which process the rationale
of KD is highlighted thanks to the
interpretable essence of the HME architecture.
Interestingly, we find that the KD objective
promotes lower entropy of the attention distribution,
indicating that soft labels, in reality,
encourage specialization of different experts
and hence play a role in modulating the knowledge
conflicts for solving different subtasks.
To further understand
the connection and difference between KD and
label smoothing,
we train a KDExplainer using label smoothing,
and discover that the derived
attention distribution
exhibits no significant differences from those obtained by vanilla training without KD.
This phenomenon indicates that soft labels indeed offer more
than label smoothing, including feature specialization.
Inspired by these observations,
we further introduce a portable component,
termed as virtual attention module (VAM),
to enhance the performance of conventional DNNs under KD.
The key idea of VAM is to coordinate
the knowledge conflicts for discriminating different categories,
achieved via lowering the entropy of the attention distribution.
VAM can be readily integrated with existing DNNs
while bringing negligible additional computation cost.
Moreover, since VAM is naturally orthogonal to KD,
it can be seamlessly combined with various KD schemes,
rendering it a handy module
to facilitate KD.
Our contributions are therefore summarized as follows.
\begin{itemize}
\item We propose a novel attention model of
interpretable nature, KDExplainer,
to understand the role of soft labels
in training the student.
\item Through KDExplainer, we observe that soft labels
implicitly modulate the knowledge
conflicts between different subtasks by promoting feature specialization,
and offer more regularization
than only label smoothing.
\item Understanding the KD rationale via KDExplainer further motivates us to design a portable and compact module,
VAM, readily applicable to various DNNs and KDs.
\end{itemize}
Extensive experimental results across benchmarks
demonstrate that VAM not only consistently improves
the performance of vanilla KD under various experimental settings,
but also can be readily integrated with
other state-of-the-art KD schemes to further promote their results.
\section{Related Work}
\paragraph{Knowledge Distillation}
Knowledge distillation has attracted increasing attention thanks to its important role in deploying deep networks to low-capacity edge devices. The main idea is leveraging the \textit{dark knowledge} encoded in a bulky teacher to craft a lightweight student model with performance on par with the teacher. Over the last several years, most works devote themselves to the exploration of different forms of the dark knowledge, including soft targets~\cite{Hinton2015DistillingTK}, features~\cite{Romero2015FitNetsHF}, attention~\cite{Zagoruyko2017AT}, factors~\cite{NIPS2018_7541}, activation boundary~\cite{ABdistill}, and instance relationship~\cite{Liu2019KnowledgeDV,Park2019RelationalKD,tung2019similarity}. By imitating the teacher to behave in a similar way, the student achieves comparable performance even with much fewer parameters.
\paragraph{Attention Mechanism}
Inspired by human cognition, attention mechanisms focus on relevant regions of input data to solve the desired task rather than ingesting the entire input. Attention-based neural networks have been broadly adopted in natural language models for machine translation~\cite{Bahdanau2015NeuralMT}, image caption generation~\cite{Xu2015ShowAA}, and unsupervised representation learning~\cite{Devlin2019BERTPO}. Attention mechanisms also achieve great success in vision models~\cite{Mnih2014RecurrentMO,Bello_2019_ICCV}. Except the performance boost, attention mechanism also provides an important way to explain the workings of neural models~\cite{Li2016UnderstandingNN,Wiegreffe2019AttentionIN,Xu2015ShowAA}.
Unlike most prior works,
our focus here is to utilize an
attention mechanism to interpret KD, which has been largely overlooked in previous literature.
\section{KDExplainer}
\begin{figure*}
\centering
\includegraphics[scale=0.65]{img/overview3.pdf}
\vspace{-0.5em}
\caption{A conceptual illustration. \textbf{Top}: a conventional student network for knowledge distillation. \textbf{Bottom}: the proposed KDExplainer. To explain the effects of soft targets as the distillation objective, we use the KDExplainer as the student model for knowledge distillation. }
\label{fig:overview}
\vspace{-1em}
\end{figure*}
\subsection{Knowledge Distillation with Soft Targets}
Vanilla KD~\cite{Hinton2015DistillingTK} distills the ``dark knowledge'' from the teacher via aligning the soft targets:
\begin{equation}\label{eq:vanilla_distill}
\mathcal{O}_{\text{KD}} =\alpha \mathcal{L}_{\text{CE}}(p(\mathbi{z}^s), \mathbi{y}) + (1-\alpha) \mathcal{D}_{\text{KL}}\left(p(\mathbi{z}^t; {\tau}), p(\mathbi{z}^s; {\tau})\right),
\end{equation}
where $\mathbi{z}^s$ and $\mathbi{z}^t$ are respectively the logits from the student and the teacher models. $\mathbi{y}$ is the associated one-hot label vector, $p$ denotes the softmax function that produces the category probabilities given the logits, and
$\tau$ is a non-negative temperature hyperparameter used to smooth the distributions. As for $p_i(\mathbi{z};\tau)$, we have
\begin{equation}\label{eq:soft_target}
p_i(\mathbi{z};\tau)=\frac{\exp({z}_i/\tau)}
{\sum\nolimits_{j}\exp({z}_j/\tau)}.
\end{equation}
Also, $\mathcal{L}_{\text{CE}}$ is the conventional cross-entropy loss, and $\mathcal{D}_{\text{KL}}$ is the Kullback-Leibler divergence between the categorial distributions predicted from the teacher and the student models. $\alpha$ is a hyperparameter to trade off the two objective terms.
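For concreteness, the objective of Eqn.~\ref{eq:vanilla_distill} can be sketched in a few lines of numpy as follows; the code and all names in it are our own illustration, not the authors' implementation.
\begin{verbatim}
# Minimal numpy sketch of the vanilla KD objective above.
import numpy as np

def softmax(z, tau=1.0):
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(z_s, z_t, y_onehot, alpha=0.5, tau=4.0):
    p_s = softmax(z_s)                  # student probabilities
    ce = -np.sum(y_onehot * np.log(p_s + 1e-12), axis=-1).mean()
    q_t = softmax(z_t, tau)             # softened teacher targets
    q_s = softmax(z_s, tau)             # softened student predictions
    kl = np.sum(q_t * (np.log(q_t + 1e-12) - np.log(q_s + 1e-12)),
                axis=-1).mean()
    return alpha * ce + (1 - alpha) * kl
\end{verbatim}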
\subsection{The Proposed KDExplainer}
We propose the KDExplainer, a task-oriented attention model, as the student model to uncover the rationale
underlying KD.
KDExplainer makes two main modifications to existing popular DNNs: (1) to dissect the effects of class similarity in soft targets, KDExplainer reformulates the multi-class classification problem as an ensemble of multiple binary classification problems; (2) KDExplainer remodels the student model as a Hierarchical Mixture of Experts (HME) and introduces a task-oriented attention mechanism as the gating function, making the student model more friendly for human interpretation. The overview of the proposed KDExplainer is shown in Figure~\ref{fig:overview}. KDExplainer is designed based on existing widely-used networks such as ResNet~\cite{he2016deep}, Wide Residual Network~\cite{zagoruyko2016wide}, and VGG~\cite{simonyan2014very}. These classic DNNs share some common characteristics: they are usually composed of several blocks, each of which is a stack of convolutional layers, batch normalization layers~\cite{ioffe2015batch}, and nonlinear activation layers. The number of filters usually stays fixed within the same block and changes across different blocks.
Formally, we use $B^i$ to denote the $i$-th block, and
then a standard DNN can be formulated as $F_{DNN} = B^C\circ B^{L}\circ\cdots \circ B^2\circ B^1$, where symbol $\circ$ denotes the function composition operation. $B^C$ is the classification block that consists of the fully connected layer and the softmax layer. KDExplainer roughly follows the design of the existing DNN, but divides each block $B^i$ (except the first block $B^1$) into $N_i$ equal-sized sub-blocks, \textit{i.e.}, $\hat{B}^i=\{B^i_1, B^i_2, ..., B^i_{N_i}\}$ for any $i>1$. We view each sub-block $B^i_j$ as an expert, and introduce a task-oriented attention module before each expert as the gating function to select a combination of the outputs from previous experts. Therefore, the whole model can be viewed as an HME model.
\subsubsection{Task-oriented Attention}
The proposed attention is ``task-oriented'' as the parameters of the attention module are trained on the whole dataset, which is largely different from the attention mechanism in existing literature~\cite{Bahdanau2015NeuralMT,Bello_2019_ICCV} where the attention weights are determined by instances. For each sub-block $B^i_j$, we use a trainable parameter vector $\mathbi{v}^i_j=[v^i_{j,1}, v^i_{j,2}, ..., v^i_{j,N_{i-1}}]$ to learn the attention distribution over previous experts.
Let output feature maps from previous blocks be $\left\{\mathbi{F}^{i-1}_1,...,\mathbi{F}^{i-1}_{N_{i-1}}\right\}$. At training phase, the input of sub-block $B^i_j$ is computed by
\begin{equation}
\widetilde{\mathbi{F}^i_{j}} = \sum\nolimits^{N_{i-1}}_{k=1}a^i_{j,k}\cdot\mathbi{F}^{i-1}_{k},
\end{equation}
where $a^i_{j,k}$ is the attention weight of the $k$-th expert, $a^i_{j,k} = \frac{\exp({v^i_{j,k}}/T)}{\sum_m \exp({v^i_{j,m}/T})}$. $T$ is a temperature hyper-parameter shared by all attention modules. Note that the classification block $B^C$ is divided into $K$ blocks, where $K$ equals the number of classes in the classification problem. The multi-class classification problem thus turns into an ensemble of binary classification problems in KDExplainer, as shown in Figure~\ref{fig:overview}.
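A minimal sketch of this gating computation is given below; the code is our own illustration (plain numpy, single sub-block), not the actual training implementation.
\begin{verbatim}
# Task-oriented attention gate of one sub-block B^i_j: softmax over the
# trainable vector v^i_j, then an attention-weighted sum of the feature
# maps produced by the N_{i-1} experts of the previous block.
import numpy as np

def attention_weights(v, T=1.0):
    e = np.exp((v - v.max()) / T)   # stable softmax with temperature T
    return e / e.sum()

def gated_input(prev_features, v, T=1.0):
    """prev_features: list of arrays of identical shape (C, H, W)."""
    a = attention_weights(np.asarray(v, dtype=float), T)
    return sum(w * F for w, F in zip(a, prev_features))
\end{verbatim}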
\subsubsection{Explaining KD by KDExplainer}
Recall that KDExplainer is a substitute for the conventional student model; using KDExplainer to understand the effects of KD is therefore straightforward: we analyze differences between the experimental results of KDExplainer trained with and without KD.
As KDExplainer reformulates the multi-class classification as multiple binary classification tasks, each binary classification task encounters the imbalanced data problem. Let $\mathbi{p}^s_k=[p^s_{k,0}, p^s_{k, 1}]$ be the probability predictions of KDExplainer for the $k$-th classification task, and $\mathbi{y}_k=[y_{k,0}, y_{k, 1}]$ be the ground truth in the form of a one-hot vector. When the proposed KDExplainer is trained with KD, the objective function is
\begin{equation}
\label{eq:KDEobj}
\begin{aligned}
\mathcal{O}_{\text{KD}} =\sum\nolimits_{k=1}^K \Big \{
\alpha \mathcal{L}_{\text{WCE}}\left(\mathbi{p}^s_k, \mathbi{y}_k\right) +
(1-\alpha) \mathcal{D}_{\text{KL}}\left(\hat{\mathbi{q}}^t_k, \hat{\mathbi{p}}^s_k\right)
\Big \},
\end{aligned}
\end{equation}
where $\mathcal{L}_{\text{WCE}}$ is the weighted cross-entropy loss
\begin{equation}
\mathcal{L}_{\text{WCE}}=-w_0 y_{k,0}\log p^s_{k,0}-w_1y_{k,1}\log p^s_{k,1}.
\end{equation}
$w_0$ and $w_1$ are the weights balancing the cost incurred by the positive and the negative samples, which alleviates the negative effects caused by imbalanced data. $\hat{\mathbi{q}}^t_k$ and $\hat{\mathbi{p}}^s_k$ are the softened category probabilities from the teacher and the student models, respectively. Note that all teacher models involved in this paper are still conventional DNNs for multi-class classification, so we convert the $K$-class probability prediction $\mathbi{p}^t=[p^t_1, p^t_2, ..., p^t_K]$ from the teacher to $K$ two-dimensional probability vectors as follows
\begin{equation}
\hat{\mathbi{q}}^t_k = [1-p^t_k, p^t_k], \ \text{for any}\ k \in \{1, 2, ..., K\}.
\end{equation}
If KDExplainer is trained without KD, the second term in Eqn.~\ref{eq:KDEobj} is removed.
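A compact sketch of this objective is given below; it is our own illustration in plain numpy, and for brevity it omits the temperature softening of $\hat{\mathbi{q}}^t_k$ and $\hat{\mathbi{p}}^s_k$.
\begin{verbatim}
# KDExplainer objective: K binary tasks, each with a weighted
# cross-entropy term and a KL term against binary teacher targets.
import numpy as np

def binary_targets(p_teacher):
    p = np.asarray(p_teacher, dtype=float)   # (K,) class probabilities
    return np.stack([1.0 - p, p], axis=1)    # (K, 2) rows [1-p_k, p_k]

def wce(p, y, w0, w1):
    return -(w0 * y[0] * np.log(p[0] + 1e-12)
             + w1 * y[1] * np.log(p[1] + 1e-12))

def kdexplainer_loss(p_student, p_teacher, labels, alpha, w0, w1):
    """p_student, labels: (K, 2) arrays; labels has one-hot rows."""
    q_t = binary_targets(p_teacher)
    loss = 0.0
    for k in range(len(p_student)):
        kl = np.sum(q_t[k] * (np.log(q_t[k] + 1e-12)
                              - np.log(p_student[k] + 1e-12)))
        loss += alpha * wce(p_student[k], labels[k], w0, w1) \
                + (1 - alpha) * kl
    return loss
\end{verbatim}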
KDExplainer enjoys an appealing property thanks to the elaborate design: after training, it becomes a tree-like multi-branch network (i.e., neural tree) if only the maximum in the attention weights is kept:
\begin{equation}
\label{eq:keep-maximum}
a^i_{j,k}= \left\{
\begin{aligned}
&1, \ \text{if}\ k=\arg\max_m a^i_{j, m}; \\
&0, \ \text{otherwise}.
\end{aligned}
\right.
\end{equation}
The derived neural trees provide us with stronger support for interpretation and analysis of knowledge distillation.
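The pruning step itself is elementary; a sketch (our own code) follows.
\begin{verbatim}
# Post-hoc hardening of a trained attention vector: keep only the
# argmax, as in the displayed rule above; sub-blocks that receive
# no incoming edge afterwards can be deleted, yielding a neural tree.
import numpy as np

def harden(a):
    """a: 1-D attention weights of one sub-block over its inputs."""
    h = np.zeros_like(a)
    h[np.argmax(a)] = 1.0
    return h
\end{verbatim}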
\section{Virtual Attention Mechanism}
\begin{figure}[t]
\centering
\includegraphics[scale=0.51]{img/vam.pdf}
\caption{The proposed virtual attention mechanism. For simplicity, here we only depict the details of how $\mathbi{F}^{i+1}$ is convolved with the virtual filter block $\mathcal{K}^{i+1}_j$ to produce $\mathbi{F}^{i+2}_j$.}
\label{fig:vam}
\vspace{-1em}
\end{figure}
KDExplainer is tailored to understand the effects of soft targets in KD. However, it may suffer a slight accuracy sacrifice compared to conventional student DNNs due to the \textit{ad hoc} design that turns multi-class classification into multiple binary classification tasks. Here we propose a virtual attention mechanism (VAM) that is easily integrated into existing student models with few architecture modifications, to retain the capacity of these student models for higher KD performance. As shown in Figure~\ref{fig:vam}, VAM views all convolution filters in a layer as several ``virtual'' filter blocks, each filter block akin to an expert in KDExplainer. Note that ``virtual'' means that VAM does not actually split the convolution filters into blocks. The only modification is that VAM slightly changes the conventional convolution operation by incorporating an attention mechanism.
Formally, we use $\mathcal{K}^i\in \mathbb{R}^{C^i_{out}\times C^i_{in}\times S\times S}$ to denote the convolution filters in the $i$-th layer, where
$C^i_{in}$ and $C^i_{out}$ denote the input and the output channels. Here for simplicity, we assume all filters are square, where both the width and the length are $S$. Assume the input tensor of the $i$-th convolution layer is ${\mathbi{F}^i}\in \mathbb{R}^{C^i_{in}\times H\times W}$, where $H$ and $W$ are the height and the width of the feature map. After the $i$-th layer and the $(i+1)$-th layer, the features turn into $\mathbi{F}^{i+1}\in \mathbb{R}^{C^i_{out}\times H\times W}$ and $\mathbi{F}^{i+2}\in \mathbb{R}^{C^{i+1}_{out}\times H\times W}$, respectively.
\begin{figure*}[ht]
\centering
\subfigure[Normal]{\includegraphics[scale=0.38]{img/tree3-1.pdf}}
\subfigure[Label Smoothing]{\includegraphics[scale=0.38]{img/tree3-2.pdf}}
\subfigure[KD]{\includegraphics[scale=0.38]{img/tree3-3.pdf}}
\vspace{-1em}
\caption{Visualization of the derived neural trees from the proposed KDExplainer, which is designed based on ResNet18 and trained on CIFAR-10. (a) KDExplainer trained with normal cross-entropy loss; (b) KDExplainer trained with label smoothing; (c) KDExplainer trained with KD. Each node denotes the retained block by removing the unused blocks according to Eqn~\ref{eq:keep-maximum}. The histogram in each node represents the attention distribution over its input tensors. The red and the blue bars denote the maximum and the minimum, respectively.}
\label{fig:experiment-tree}
\vspace{-1em}
\end{figure*}
To incorporate the attention module into the DNN, we view convolution filters in each layer as a set of filter blocks. Assume the filters in the $i$-th layer are divided into $M$ virtual groups, such that $\mathcal{K}^{i}=\{\mathcal{K}^{i}_1, \mathcal{K}^{i}_2, ..., \mathcal{K}^{i}_M\}, \mathcal{K}^{i}_j\in \mathbb{R}^{(C^i_{out}/M)\times C^i_{in}\times S\times S}$ for $1\le j\le M$. After the $i$-th layer, the feature maps produced by each block can be calculated as follows
\begin{equation}
\mathbi{F}^{i+1}_j = \mathbi{F}^{i}\odot \mathcal{K}^i_j, \ \text{for any}\ j \in \{1, 2, ..., M\},
\end{equation}
where $\odot$ denotes the convolution operation. The whole feature tensor can be denoted by
\begin{equation}
\mathbi{F}^{i+1} = \coprod\nolimits^{M}_{m=1}\mathbi{F}^{i+1}_m,
\end{equation}
where $\coprod$ denotes feature concatenation. Assume filters in the $(i+1)$-th layer are divided into $N$ virtual blocks, such that $\mathcal{K}^{i+1}=\{\mathcal{K}^{i+1}_{1},\mathcal{K}^{i+1}_{2},..., \mathcal{K}^{i+1}_{N}\}, \mathcal{K}^{i+1}_{j}\in\mathbb{R}^{(C^{i+1}_{out}/N)\times C^{i+1}_{in}\times S\times S}$ for $1\le j\le N$. To align each filter block in the $(i+1)$-th layer with the output from each block of the $i$-th layer, we further divide each block $\mathcal{K}^{i+1}_{j}$ into $M$ groups along the input channel, such that $\mathcal{K}^{i+1}_{j}=\{\mathcal{K}^{i+1}_{j,1}, \mathcal{K}^{i+1}_{j,2}, ..., \mathcal{K}^{i+1}_{j, M}\}, \mathcal{K}^{i+1}_{j,l}\in\mathbb{R}^{(C^{i+1}_{out}/N)\times(C^{i+1}_{in}/M)\times S\times S}$ for any $1\le l\le M$. Similar to KDExplainer, we introduce a task-oriented attention module $\mathbi{a}^{i+1}_j=\left[a^{i+1}_{j,1}, a^{i+1}_{j,2}, ..., a^{i+1}_{j,M}\right]$ before each filter block $\mathcal{K}^{i+1}_{j}$, thus the output of the block can be computed by
\begin{equation}
\mathbi{F}^{i+2}_j = \sum\nolimits^{M}_{m=1}a^{i+1}_{j,m}\mathbi{F}^{i+1}_{m}\odot\mathcal{K}^{i+1}_{j, m}.
\end{equation}
The attention weights are computed by a softmax function over trainable parameters $\mathbi{v}=\left[v^{i+1}_{j,1},..., v^{i+1}_{j,M}\right]$, i.e.,
\begin{equation}
a^{i+1}_{j,m} = \frac{\exp(v^{i+1}_{j,m})}{\sum\nolimits^{M}_{k=1}\exp(v^{i+1}_{j,k})}.
\end{equation}
As soft targets are found to encourage lower entropy of the attention distribution (see Section~\ref{subsec:experiment-KDE}), we introduce another regularization term on top of the conventional KD objective in Eqn.~\ref{eq:vanilla_distill}:
\begin{equation}\label{eq:vam_obj}
\begin{aligned}
\mathcal{O}_{\text{KD}} = & (1-\alpha) \mathcal{D}_{\text{KL}}\left(p(\mathbi{z}^t; {\tau}), p(\mathbi{z}^s; {\tau})\right) + \\
& \alpha \mathcal{L}_{\text{CE}}(p(\mathbi{z}^s), \mathbi{y}) + \gamma\mathcal{H}(A),
\end{aligned}
\end{equation}
where $A$ denotes the set of all involved attention distributions, and $\mathcal{H}(A)$ is the sum of their entropies:
\begin{equation}
\label{eq:entropy-minimize}
\mathcal{H}(A)=\sum\nolimits_{i}\sum\nolimits_{j}\sum\nolimits_k-a^{i}_{j,k}\log{a^i_{j,k}}.
\end{equation}
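For concreteness, the grouped computation above, together with one block's contribution to the entropy term $\mathcal{H}(A)$, can be sketched in a few lines of PyTorch. This is only an illustrative sketch: the class name, initialization, and padding convention are assumptions made here, not the exact implementation used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class VirtualAttentionBlock(nn.Module):
    # One filter block K^{i+1}_j, split into M input groups with
    # softmax attention a^{i+1}_{j,m} over the groups.
    def __init__(self, c_in, c_out_block, M, S=3):
        super().__init__()
        assert c_in % M == 0
        self.M = M
        # K^{i+1}_{j,m}: (c_out_block, c_in/M, S, S), one per group
        self.weight = nn.Parameter(
            0.01 * torch.randn(M, c_out_block, c_in // M, S, S))
        self.v = nn.Parameter(torch.zeros(M))  # logits v^{i+1}_{j,m}
        self.pad = S // 2

    def forward(self, x):
        # x: (B, c_in, H, W) -> M channel groups F^{i+1}_1..F^{i+1}_M
        groups = torch.chunk(x, self.M, dim=1)
        a = torch.softmax(self.v, dim=0)
        out = 0.0
        for m, g in enumerate(groups):
            out = out + a[m] * F.conv2d(g, self.weight[m],
                                        padding=self.pad)
        return out  # (B, c_out_block, H, W)

    def entropy(self):
        # this block's contribution to H(A) in the KD objective
        a = torch.softmax(self.v, dim=0)
        return -(a * torch.log(a + 1e-12)).sum()
\end{verbatim}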
\section{Experiments}
\subsection{Explaining KD with KDExplainer}
\label{subsec:experiment-KDE}
\subsubsection{Experimental settings} Experiments are conducted on CIFAR-10 and CIFAR-100~\cite{krizhevsky2009learning}. We implement the proposed
KDExplainers based on four DNN architectures: ResNet18~\cite{he2016deep}, VGG8~\cite{simonyan2014very}, WRN-16-2, and WRN-40-1~\cite{zagoruyko2016wide}. For all models, we use ResNet50 as the teacher model for KD. The initial learning rate is 0.1 and is decayed every 30 epochs; training stops after 300 epochs. For more details, please refer to the supplementary material.
\subsubsection{Experimental results}
\begin{table*}[t]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{c|cccccc|cccccc}
\toprule
\textbf{Data} &\textbf{Model}& \textbf{P-DNN} &\textbf{VAM} &$\mathcal{L}_{\text{CE}}$ & $\mathcal{D}_{\text{KL}}$ &$\mathcal{H}$ & \textbf{VGG8} & \textbf{WRN-16-2} & \textbf{WRN-40-1} & \textbf{resnet20} & \textbf{ResNet18} & \textbf{ShufNetV1} \\ \midrule
\multirow{6}{*}{\rotatebox{90}{CIFAR-10}} &M1&$\checkmark$
&&$\checkmark$&& & 91.41 & 93.71 & 93.34 & 92.83 & 95.26 & 92.56 \\
&M2&&$\checkmark$&$\checkmark$&& & 91.69 & 93.46 & 93.47 & 92.56 & \blue{95.51} & 92.82 \\
&M3&&$\checkmark$&$\checkmark$&& $\checkmark$& 91.82 & 93.90 & 93.95 & 92.86 & 94.99 & 92.83 \\
&M4&$\checkmark$&&$\checkmark$&$\checkmark$& & 93.14 & 94.55 & 93.86 & 93.08 & 95.43 & 93.50 \\
&M5&&$\checkmark$&$\checkmark$&$\checkmark$& & \blue{93.19} & \blue{94.67} & \blue{93.96} & \blue{93.39} & \blue{95.51} & \blue{93.70} \\
&M6&&$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$ & \textbf{93.36} & \textbf{94.85} & \textbf{94.32} & \textbf{93.50} & \textbf{95.59} & \textbf{93.79} \\
\midrule
\multirow{6}{*}{\rotatebox{90}{CIFAR-100}} &M1&$\checkmark$
&&$\checkmark$&& & 70.37 & 73.15 & 71.36 & 69.84 & 77.18 & 71.45 \\
&M2&&$\checkmark$&$\checkmark$&& & 70.54 & 73.56 & 71.15 & 69.58 & 76.62 & 71.70 \\
&M3&&$\checkmark$&$\checkmark$&& $\checkmark$& 70.98 & 73.92 & 71.61 & 69.71 & 78.30 & 71.81 \\
&M4&$\checkmark$&&$\checkmark$&$\checkmark$& & 73.53 & 75.01 & 73.44 & 70.05 & 79.54 & 75.41 \\
&M5&&$\checkmark$&$\checkmark$&$\checkmark$& & \blue{73.78} & \blue{75.43} & \blue{73.87} & \blue{70.32} & \blue{79.63} & \blue{75.62} \\
&M6&&$\checkmark$&$\checkmark$&$\checkmark$&$\checkmark$ & \textbf{74.17} & \textbf{75.63} & \textbf{73.92} & \textbf{70.43} & \textbf{79.77} & \textbf{76.10} \\
\midrule
\multirow{6}{*}{\rotatebox{90}{Tiny-ImageNet}} &M1&$\checkmark$
&&$\checkmark$&& & 56.47 & 57.52 & 56.26 & 50.54 & 65.59 & 60.52 \\
&M2&&$\checkmark$&$\checkmark$&& & 57.18 & \blue{58.16} & 56.02 & 50.53 & 65.79 & 61.24 \\
&M3&&$\checkmark$&$\checkmark$&& $\checkmark$& 57.63 & 58.08 & 56.23 & 50.70 & 66.33 & 62.47 \\
&M4&$\checkmark$&&$\checkmark$&$\checkmark$& & 60.41 & 57.91 & 56.39 & 52.78 & 69.37 & \blue{65.54} \\
&M5&&$\checkmark$&$\checkmark$&$\checkmark$& & \blue{60.45} & \textbf{58.24} & \blue{56.80} & \blue{52.83} & \blue{69.63} & 65.32 \\
&M6&&$\checkmark$&$\checkmark$&$\checkmark$& $\checkmark$& \textbf{60.57} & 58.01 & \textbf{56.99} & \textbf{53.54} & \textbf{69.90} & \textbf{65.65} \\
\bottomrule
\end{tabular}
}
\vspace{-0.5em}
\caption{Top-1 classification accuracy in $\%$ of six model variants. ``P-DNN'' denotes the plain DNN. Bold font indicates the best performance, and blue font denotes the second best. Experiments are repeated three times and the average results are reported.}
\label{tab:vam-over-vanilla-kd}
\vspace{-1em}
\end{table*}
To understand the effects of vanilla KD, KDExplainer is trained with and without soft targets in the optimization objective. Furthermore, as label smoothing~\cite{Szegedy2016RethinkingTI} has
recently also been viewed as a type of KD~\cite{Yuan_2020_CVPR}, we also train the KDExplainer with label smoothing to understand its effects. In Figure~\ref{fig:experiment-tree}, we visualize the derived neural trees obtained by keeping only the maximum attention weight in every attention module, as in Eqn.~\ref{eq:keep-maximum}. It can be seen that KD significantly encourages sparse connectivity between blocks, as far fewer blocks are retained than with normal training or label smoothing. This is also corroborated by the attention distributions shown in the histograms of the internal nodes, where KD produces sharper~(i.e., lower-entropy) attention distributions than normal training and label smoothing. Furthermore, KD produces a more human-interpretable branching architecture than normal training and label smoothing. For example, in the derived trees, man-made vehicle categories are attached to the same branch, while animal categories are attached to other branches. This is a reasonable organization: similar categories or tasks share similar decision patterns and can thus share the same network branch, whereas less related categories or tasks depend on different patterns to make their decisions and are better handled separately. The results of normal training and label smoothing somewhat violate this principle, which is why their derived trees are much larger than that of KD.
\begin{figure}[t]
\centering
\subfigure[CIFAR-10]{
\includegraphics[scale=0.595]{img/Graph1-10.pdf}}
\subfigure[CIFAR-100]{
\includegraphics[scale=0.595]{img/Graph2-100.pdf}}
\vspace{-1em}
\caption{Accuracy~(\%) of KDExplainers trained with different objectives in different architectures. ``DNN with KD'' and ``LS'' denote the conventional DNN trained with KD and with label smoothing, respectively.}
\vspace{-1em}
\label{fig:kde-10-100}
\end{figure}
In Figure~\ref{fig:kde-10-100}, we report the accuracies of all the KDExplainers. Under all experimental settings, KD significantly outperforms label smoothing and vanilla training. Label smoothing marginally improves the performance over vanilla training in most cases, but it sometimes leads to performance degradation. For example, it causes accuracy drops of $0.51\%$ and $0.14\%$ for WRN-40-1 and WRN-16-2 on CIFAR-100. These results suggest that by properly organizing categories into different branches, KD helps modulate the knowledge conflicts between different tasks and thus achieves higher accuracy. Label smoothing, on the other hand, provides some regularization on the model learning, but it seems to play no role in addressing conflicts between decision patterns of different categories. In the supplementary material, we provide more results and analyses to explain the differences between KD and label smoothing.
\subsection{Improving KD with VAM}
\label{subsec:experiment-VAM}
\subsubsection{Experimental settings}
Experiments are conducted on CIFAR-10, CIFAR-100, and Tiny-ImageNet~\cite{le2015tiny}. We incorporate VAM into six widely used DNNs as student models, including ResNet18, resnet20\footnote{Following~\cite{tian2019crd}, resnet and ResNet denote CIFAR- and ImageNet-style networks, respectively.}, VGG8, WRN-16-2, WRN-40-1, and ShuffleNetV1~\cite{zhang2018shufflenet}. For all involved student models, we adopt ResNet50 as the teacher model to provide the soft logits. During the training phase of the student model, the learning rate is initially $0.05$ ($0.01$ for the attention modules), is decayed by a factor of $0.1$ at epochs $150$, $180$, and $210$, and training stops at epoch $240$. The temperature hyper-parameter is set to $1$ for all attention modules and $4$ for the KD loss. The trade-off factor $\alpha$ is set to $0.9$. The number of channels in each virtual block is $8$ for VGG8, WRN-16-2, and WRN-40-1; $4$ for resnet20; $16$ for ResNet18; and $10$ for ShuffleNetV1. For more details, please refer to the supplementary material.
\subsubsection{Experimental Results}
\textbf{Improving vanilla KD.}~~As VAM is motivated by vanilla KD, we first validate its effectiveness upon vanilla KD. To give a more comprehensive view of VAM, we compare six model variants. Note that we simply use VAM to denote the DNN equipped with the proposed virtual attention module. (M1) Plain+$\mathcal{L}_{\text{CE}}$: plain DNN trained with only $\mathcal{L}_{\text{CE}}$; (M2) VAM+$\mathcal{L}_{\text{CE}}$: VAM trained with $\mathcal{L}_{\text{CE}}$; (M3) VAM+$\mathcal{L}_{\text{CE}}$+$\mathcal{H}$: VAM trained with $\mathcal{L}_{\text{CE}}$ and $\mathcal{H}$ (Eqn.~\ref{eq:entropy-minimize}); (M4) Plain+$\mathcal{L}_{\text{CE}}$+$\mathcal{D}_{\text{KL}}$: plain DNN trained with $\mathcal{L}_{\text{CE}}$ and $\mathcal{D}_{\text{KL}}$; (M5) VAM+$\mathcal{L}_{\text{CE}}$+$\mathcal{D}_{\text{KL}}$: VAM trained with $\mathcal{L}_{\text{CE}}$ and $\mathcal{D}_{\text{KL}}$; (M6) VAM+$\mathcal{L}_{\text{CE}}$+$\mathcal{D}_{\text{KL}}$+$\mathcal{H}$: VAM trained with $\mathcal{L}_{\text{CE}}$, $\mathcal{D}_{\text{KL}}$, and $\mathcal{H}$. Experimental results are shown in Table~\ref{tab:vam-over-vanilla-kd}. In general, the DNN equipped with VAM yields consistently superior performance to the plain DNN without VAM. For example, M5 outperforms M4 in almost all our experiments, which validates the effectiveness of VAM under KD. Furthermore, when optimized with the low-entropy constraint $\mathcal{H}$, VAM produces better performance (M3$>$M2, M6$>$M5) under almost all settings. As $\mathcal{H}$ is motivated by the results from KDExplainer, this indicates that the proposed KDExplainer indeed provides general and valuable insights into KD.
\noindent\textbf{Improving state-of-the-art KD.}
\begin{table}[t]
\centering
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{cc|cc|cc}
\toprule
& & \multicolumn{2}{c|}{\textbf{CIFAR-10}} & \multicolumn{2}{c}{\textbf{CIFAR-100}} \\
\textbf{Method} &\textbf{Model} & resnet20 & WRN-16-2 & resnet20 & WRN-16-2 \\ \midrule
\multirow{2}{*}{FitNet} & \small{P-DNN} & 92.49 & 93.99 & 69.09 & 72.98 \\
& \small{VAM} & \textbf{92.95} & \textbf{94.08} & \textbf{70.11} & \textbf{73.79}
\\ \midrule
\multirow{2}{*}{FT} & \small{P-DNN} & 92.39 & 93.74 & 69.47 & 72.78 \\
& \small{VAM} & \textbf{93.01} & \textbf{93.78} & \textbf{69.67} & \textbf{73.43}
\\ \midrule
\multirow{2}{*}{SP} & \small{P-DNN} & 93.01 & 94.55 & 69.17 & 74.40
\\
& \small{VAM} & \textbf{93.51} & \textbf{94.61} & \textbf{70.35} & \textbf{74.91} \\ \midrule
\multirow{2}{*}{CRD} & \small{P-DNN} & 91.83 & 93.36 & 70.64 & 74.97 \\
& \small{VAM} & \textbf{92.16} & \textbf{93.53} & \textbf{70.85} & \textbf{75.15} \\
\bottomrule
\end{tabular}
}
\vspace{-0.5em}
\caption{Results of combining VAM with other KD methods. ``P-DNN'' denotes the plain DNN.}
\vspace{-1em}
\label{tab:vam-sota}
\end{table}
~~Although VAM is motivated by vanilla KD, it can also readily be combined with other KD methods. Here we evaluate VAM combined with FitNets~\cite{romero2014fitnets}, FT~\cite{kim2018paraphrasing}, SP~\cite{tung2019similarity}, and CRD~\cite{tian2019contrastive}. Results are listed in Table~\ref{tab:vam-sota}. It can be seen that, combined with these KD methods, VAM still yields performance consistently superior to the plain DNN, even though it is motivated only by vanilla KD. For resnet20, VAM achieves $0.20\%\sim1.18\%$ performance boosts on CIFAR-100 across the four KD methods. For WRN-16-2, VAM improves accuracy by $0.18\%\sim0.81\%$ on CIFAR-100. Since VAM brings negligible additional overhead, it is an economical way to further improve the performance of existing KD methods. Please refer to the supplementary material for more experimental results, including how the block size and the hyper-parameter $\gamma$ affect the final performance.
\section{Conclusion and Future Work}
In this paper, we propose KDExplainer to shed light on
the working mechanism underlying soft targets
during KD. We find that KD implicitly modulates
the knowledge conflicts between different subtasks,
and brings more benefits than label smoothing does.
Based on these observations, we propose a portable module, VAM,
to further improve the results of
vanilla KD alongside other state-of-the-art variants.
Extensive experimental results demonstrate that
the proposed VAM significantly enhances KD performance
at negligible additional cost. In future work, we will extend the proposed VAM to other tasks and systematically evaluate its effectiveness beyond the scope of KD.
\paragraph{Acknowledgment}
This work is funded by the National Key R\&D Program of China (Grant No.\ 2018AAA0101503) and the Science and Technology Project of SGCC (State Grid Corporation of China): Fundamental Theory of Human-in-the-Loop Hybrid-Augmented Intelligence for Power Grid Dispatch and Control.
{\small
\bibliographystyle{named}
\section{Introduction} \label{introduction}
\subsection{Statement of Problem} \label{statement}
Metamodeling has become widespread for approximating expensive black-box functions that arise in applications ranging from engineering to environmental science and finance \citep{santner2013design}. Rather than aiming to capture the precise shape of the function over the entire region, in this article we are interested in estimating the \emph{level set} where the function exceeds some particular threshold. Such problems are common in cases where we need to quantify the reliability of a system or its performance relative to a benchmark. It also arises intrinsically in control frameworks where one wishes to rank the pay-off from several available actions~\citep{hu2015sequential}.
We consider a setup where the latent $f: D \rightarrow \mathbb{R}$ is a continuous function over a $d$-dimensional input space $D \subseteq \mathbb{R}^d$. The level-set estimation problem consists in classifying every input $x \in D = S \cup N$ according to
\begin{align}\label{eq:objective}
S &= \{x \in D: f(x) \geq 0 \}, \qquad N = \{x \in D: f(x)< 0\}.
\end{align}
Without loss of generality the threshold is taken to be zero, so that the level set estimation is equivalent to learning the sign of the response function $f$. For later use we also define the corresponding zero-contour of $f$, namely the partition boundary $\partial S = \partial N = \{x \in D :f(x)=0\}$.
For any $x\in D$, we have access to a simulator $Y(x)$ that generates noisy samples of $f(x)$:
\begin{align}
Y(x) &= f(x)+\epsilon(x), \label{fundamental}
\end{align}
where $\epsilon(x)$ are realizations of independent, mean zero random variables with variance ${\tau}^2(x)$.
To assess a level-set estimation algorithm, we compare the resulting estimate $\hat{S}$ with the true $S$ in terms of their symmetric difference. Let $\mu$ be a probability measure on the Borel $\sigma$-algebra $\bm{\mathcal{B}}(D)$ (e.g.,~$\mu=\text{Leb}_D$). Then our loss function is
\begin{align}
L(S,\hat{S}) &= \mu(S\Delta \hat{S}), \label{loss}
\end{align}
where $S_1 \Delta S_2 := (S_1 \cap S_2^C) \cup (S_1^C \cap S_2)$.
Frequently, the inference is carried out by first producing an estimate $\hat{f}$ of the response function; in that case we take $\hat{S}=\{x \in D:\hat{f}(x) \geq 0\}$ and rewrite the loss as
\begin{align}\label{eq:loss-f}
L(f, \hat{f}) &= \int_{x \in D} \mathbb{I} (\sgn \hat{f}(x) \neq \sgn f(x)) \mu(dx),
\end{align}
where $\mathbb{I}(\cdot)$ is the indicator function.
\subsection{Motivation} \label{motivation}
As a concrete example of level set estimation, consider the problem of evaluating the probability of failure,
determined via the limit state $S$ of a performance function $f(\cdot)$ \citep{picheny2013nonstationary}. The system is safe when $f(x) \le h$, and fails otherwise. In the context where the performance function can be evaluated via deterministic experiments, the estimation of the safe zone (more precisely its volume $\mu(S)$) was carried out in \cite{bect2012sequential} and \cite{mukhopadhyay2005modeling} employing a Gaussian Process approach with a sequential design. A related example dealing with the probability of failure in a nuclear fissile chain reaction appeared in \cite{chevalier2014fast}.
Another application, which motivated this present investigation, comes from simulation-based algorithms for valuation of Bermudan options \citep{gramacy2015sequential,ludkovski2015kriging}. This problem consists of maximizing the expected reward $h(\tau,X_\tau)$ over all stopping times $\tau \in \{0, \Delta t, 2\Delta t, \ldots, T\}$ bounded by the specified horizon $T$:
\begin{align}
V(t,x) &= \sup_{\tau \geq t, \tau \in \mathcal{S}} \mathbb{E}[h(\tau,X_\tau) | X_t= x], \label{payoff}
\end{align}
where $(X_t)$ is the underlying asset price at time $t$, typically satisfying a stochastic differential equation, and $\Delta t$ is the exercise frequency. The approach in the so-called Regression Monte Carlo methods \citep{longstaff2001valuing,tsitsiklis2001regression} is to convert the decision of whether to exercise the option $\tau(t,x) = t$ or continue $\tau(t,x) > t$ when $X_t = x$ at intermediate step $t$, into comparing the immediate reward $h(t,x)$ vis-\`a-vis the reward-to-go $C(t,x)$. In turn this is equivalent to determining the zero level set (known as the continuation region) $S_{t} = \{ x \in D : f(x; t) \ge 0 \}$ of the timing value $f(x; t) := C(t,x) - h(t,x)$. The stopping problem \eqref{payoff} is now solved recursively by backward induction over $t=T-\Delta t,T - 2\Delta t, \ldots$, which allows noisy samples of $f(x; t)$ to be generated by simulating a trajectory $X^x_{t:T}$ emanating from $x$ and evaluating the respective \emph{pathwise} reward-to-go. Probabilistically, this means that we are interested in \eqref{fundamental} where $f$ corresponds to a \emph{conditional expectation} related to a path-dependent functional of the Markov process $X_{\cdot}$; the loss function \eqref{loss} arises naturally as a metric regarding the quality of the estimated stopping rule in terms of the underlying distribution $\mu(\cdot; t)$ of $X_t$. We refer to \cite{ludkovski2015kriging} for a summary of the existing state of the art and the connection to employing a GP metamodel for learning the timing value $f(\cdot; t)$.
\subsection{Design of Experiments for Contour Finding}
Reconstructing $S$ via a metamodel can be divided into two steps: the construction of the response model and the development of methods for efficiently selecting the simulation inputs $x_{1:N}$, known as design of experiments (DoE). Since the level set is intrinsically defined in terms of the unknown $f$, an \emph{adaptive} DoE approach is needed that selects $x_n$'s sequentially.
For the response modeling aspect, GP regression, or kriging, has emerged as the most popular nonparametric approach for both deterministic and stochastic black-box functions \citep{bect2012sequential,gramacy2009adaptive,picheny2013quantile,jalali2016comparison}. GPs have also been widely used for the level-set estimation problem; see
\cite{bryan2008actively,gotovos2013active,hu2015sequential,picheny2010adaptive} and \cite{ranjan2012sequential}.
In a nutshell, at step $n$ the GP paradigm constructs a metamodel $\hat{f}^{(n)}$ that is then used to guide the selection of $x_{n+1}$ and also to construct the estimate $\hat{S}^{(n)}$. To this end, GPs are well suited for sequential design by offering a rich uncertainty quantification aspect that can be (analytically) exploited to construct information-theoretic DoE heuristics. The standard framework is to develop an acquisition function $\mathcal{I}_n(x)$ that quantifies the value of information from taking a new sample at input $x$ conditional on an existing dataset $(x_{1:n}, y_{1:n})$ and then to myopically maximize $\mathcal{I}_n$:
\begin{align}
x_{n+1} = \arg \max_{x \in D } \mathcal{I}_{n}(x). \label{seq}
\end{align}
Early level-set sampling criteria were proposed by~\cite{bichon2008efficient}, \cite{picheny2010adaptive}, and \cite{ranjan2012sequential} based on modifications to the Expected Improvement criterion~\citep{jones1998efficient} for response function optimization. A criterion more targeted at reducing the uncertainty about $S$ itself was first developed by~\cite{bect2012sequential} using the concept of stepwise uncertainty reduction (SUR). Specifically, the SUR strategy aims to myopically maximize the global learning rate about $S$; see also~\cite{chevalier2014fast} for related computational details.
Recently, further criteria using tools from random set theory were developed in \cite{chevalier2013estimating,azzimonti2015quantifying}. Specifically, those works use the notions of Vorob'ev expectation and Vorob'ev deviation to choose inputs that minimize the posterior expected distance in measure between the level set $S$ and its estimate $\hat{S}$. This approach, however, is computationally expensive and requires conditional simulations of the posterior Gaussian field. Other works dealing with more conservative estimates are \cite{bolin2015excursion,azzimonti2016adaptive}. A clear analysis comparing all these choices in the stochastic setting is currently lacking.
\subsection{Summary of Contributions} \label{summary approach}
Most of the cited papers consider only the deterministic setting without any simulation noise. The main goal of this article is to present a comprehensive assessment of GP-based surrogates for stochastic contour-finding. In that sense, our analysis complements the work of~\cite{picheny2013benchmark} and~\cite{jalali2016comparison}, who benchmarked GP metamodels for Bayesian optimization (BO) where the objective is to evaluate $\max_x f(x)$.
While simple versions (with constant or prespecified Gaussian noise) are easily handled, the literature on GP surrogates for complex stochastic simulators remains incomplete. Recently, several works focused on heteroskedastic simulation variance; see the Stochastic Kriging approach of~\cite{ankenman2010stochastic} and the earlier works by two of the authors~\citep{binois2016practical,binois2017replication}. In the present article we instead target the non-Gaussian aspects, in particular the likely heavy-tailed property. This issue is fundamental to any realistic stochastic simulator where there is no justification for assuming Gaussian-distributed $\epsilon(x)$ (as opposed to the physical experimental setup where $\epsilon$ represents observation noise and is expected to be Gaussian thanks to the central limit theorem). This motivates us to study \emph{alternative GP-based metamodels} for learning $\hat{S}$ that are more robust to non-Gaussian $\epsilon$ in \eqref{fundamental}. In parallel, we investigate which of the contour-finding heuristics outlined above perform best in such setups.
To stay within the sequential design paradigm, we continue to work with a GP-based setup but investigate several modifications that are relevant for learning $\hat{S}$.
\begin{itemize}
\item To relax the Gaussian noise assumption, we investigate $t$-observation GPs \citep{rasmussen2006gaussian,jylanki2011robust}; the use of the Student-$t$ likelihood nests both the heavy-tailed and Gaussian cases.
\item As another non-Gaussian specification we consider Student-$t$ processes (TPs) \citep{shah2014student,wang2017extended}, as one replacement of GPs, that are also resistant to observation outliers.
\item To target the classification-like objective underlying \eqref{loss}, we consider the use of classification GPs that model the sign of the response $Y(x)$ via a probit model driven by a latent GP $Z(\cdot)$: $\mathbb{P}( Y(x) > 0 | x) = \Phi (Z(x))$. Deployment of this link function is expected to ``wash out'' non-Gaussian features in $\epsilon(x)$ beyond its effect on the sign of the observations.
\item In a different vein, to exploit a structure commonly encountered in applications where the level set $S$ is \emph{connected}, we study the performance of \emph{monotone} GP regression/classification metamodels \citep{riihimaki2010gaussian} that force $f$ (or $Z$) to be monotone in the specified coordinates.
\end{itemize}
Our analysis is driven by the primary effect of noise on contour-finding algorithms. This effect was already documented in related studies, such as that of~\cite{jalali2016comparison}, who observed the strong impact of $\epsilon(\cdot)$ on the performance of BO. Consequently, specialized metamodeling frameworks and acquisition functions are needed that can best handle the stochasticity for the given loss specification. Thus, the combination of the above tools with the GP framework aims to strike the best balance in carrying out uncertainty quantification and constructing a robust surrogate that is not too swayed by the simulation noise structure. In the context of GPs, this means accurate inference of the mean response and sampling noise that in turn drive the posterior mean $\hat{f}$ and the posterior GP variance $s(x)^2$. Both of the latter ingredients are needed to blend the exploitation objective of locally learning the contour $\partial S$ with the exploration of less-sampled regions. These issues drive our choices of the metamodels and also factor in developing the respective acquisition functions $\mathcal{I}_n(x)$; cf.~Section~\ref{sec:improvementmetrics}. On the latter front we consider four choices (MCU, cSUR, tMSE, ICU), including heuristics that depend only on the posterior standard deviation $s^{(n)}(\cdot)$, as well as those that anticipate the information gain from sampling at $x_{n+1}$ via the look-ahead standard deviation $s^{(n+1)}(\cdot)$. Because in the non-Gaussian GPs $s^{(n+1)}$ depends on $Y(x_{n+1})$, we develop tractable approximations $\hat{s}^{(n+1)}$ for that purpose; see Propositions~\ref{updatevarcxt}-\ref{updatevarcxcl}-\ref{updatevarcxtp}.
To recap,
our contributions can be traced along five directions. First, we investigate two ways to handle heavy-tailed simulation noise via a GP with $t$-observations and via TP. As far as we are aware, this is the first application of either tool in sequential design and contour-finding contexts. Second, we present an original use of monotonic GP metamodels for level set estimation. This idea is related to a gray-box approach that aims to exploit known structural properties of $f$ (or $S$) so as to improve on the agnostic black-box strategies. Third, we analyze the performance of classification GP metamodels for contour-finding. This context offers an interesting and novel comparison between regression and classification approaches benchmarked against a shared loss function. Fourth, we develop and implement approximate \emph{look-ahead} formulas for all our metamodels that are used for the evaluation of acquisition functions. To our knowledge, this is the first presentation of such formulas for non-Gaussian GPs, as well as TPs. Fifth, beyond the metamodels themselves, we also provide a detailed comparison among the proposed acquisition functions, identifying the best-performing combinations of $\mathcal{I}(\cdot)$ and metamodel $\hat{f}$ and documenting the complex interplay between design geometry and surrogate architecture.
The rest of the article is organized as follows. Section \ref{sec:model} describes the metamodels we employ. Section \ref{sec:improvementmetrics} develops the sequential designs for the level-set estimation problem, and Section~\ref{sec:update} discusses the look-ahead variance formulas for non-Gaussian GPs. Section~\ref{sec:synthetic} compares the models using synthetic data where ground truth is known. Two case studies from derivative pricing are investigated in Section~\ref{sec:Bermudan}. In Section~\ref{sec:conc} we summarize our conclusions.
\section{Statistical Model}\label{sec:model}
\subsection{Gaussian Process Regression with Gaussian Noise} \label{gauss}
We begin by discussing regression frameworks for contour finding that target learning the latent $f(\cdot)$ based on the loss \eqref{eq:loss-f}. The Gaussian process paradigm treats $f$ as a random function whose posterior distribution is determined from its prior and the collected samples $\mathcal{A}_n \equiv \{(x_i,y_i),1 \leq i \leq n\}$. We view $f(\cdot) \sim GP( m(\cdot), K(\cdot,\cdot))$, a priori, as a realization of a Gaussian process completely specified by its mean function $m(x) := \mathbb{E}[f(x)]$ and covariance function $K(x,x') := \mathbb{E}[(f(x)-m(x))(f(x')-m(x'))]$.
In the classical case \citep{rasmussen2006gaussian}, the noise distribution is homoskedastic Gaussian, $\epsilon(x) \sim \mathcal{N}(0, {\tau}^2)$, and the prior mean is zero, $m(x)=0$. Given observations $\mathbf{y}_{1:n}=[y_1, \dots,y_n]^T$ at inputs $\mathbf{x}_{1:n}=[x_1,\ldots,x_n]^T$, the conditional distribution $f | \mathcal{A}_n$ is then another Gaussian process, with posterior marginal mean $\hat{f}_{\mathrm{Gsn}}^{(n)}(x_*)$ and covariance $v_{\mathrm{Gsn}}^{(n)}(x_*,x_*')$ given by (throughout we use subscripts to indicate the metamodel type, e.g.,~$Gsn$ for Gaussian noise)
\begin{align}
\hat{f}_{\mathrm{Gsn}}^{(n)}(x_*) &= k(x_*)[\mathbf{K}+{\tau}^2\mathbf{I}]^{-1}\mathbf{y}_{1:n}, \label{mean}\\
v_{\mathrm{Gsn}}^{(n)}(x_*,x_*') &= K(x_*,x_*')-k(x_*) [\mathbf{K}+{\tau}^2 \mathbf{I}]^{-1}k(x_*')^T, \label{cov}
\end{align}
with the $1 \times n$ vector $k(x_*)$ and $n \times n$ matrix $\mathbf{K}$ defined by $k(x_*) := K(x_*,\mathbf{x}_{1:n}) = [K(x_*,x_1),...,K(x_*,x_n)]$, and $\mathbf{K}_{i,j} := K(x_i,x_j)$.
\sloppy The posterior mean $\hat{f}_{\mathrm{Gsn}}^{(n)}(x_*)$ is treated as a point estimate of $f(x_*)$ and the posterior variance $s_{\mathrm{Gsn}}^{(n)}(x_*)^2 = v_{\mathrm{Gsn}}^{(n)}(x_*,x_*)$ as the uncertainty of this estimate. We use $\mathbf{f}$ to denote the random posterior vector $f(\mathbf{x}_{1:n}) | \mathcal{A}_n$.
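As a concrete illustration of \eqref{mean}--\eqref{cov}, the following minimal Python sketch computes the posterior mean and standard deviation on a prediction grid. It assumes an isotropic SE kernel with fixed hyperparameters; hyperparameter fitting is discussed next.
\begin{verbatim}
import numpy as np

def se_kernel(X1, X2, sigma_se=1.0, theta=0.5):
    # isotropic squared-exponential kernel
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_se**2 * np.exp(-d2 / (2.0 * theta**2))

def gp_posterior(X, y, Xstar, tau=0.1, **kp):
    # posterior mean and sd of f at Xstar given data (X, y)
    K = se_kernel(X, X, **kp) + tau**2 * np.eye(len(X))
    ks = se_kernel(Xstar, X, **kp)
    fhat = ks @ np.linalg.solve(K, y)
    v = se_kernel(Xstar, Xstar, **kp) - ks @ np.linalg.solve(K, ks.T)
    return fhat, np.sqrt(np.clip(np.diag(v), 0.0, None))

# plug-in level-set estimate on the grid: Shat = Xstar[fhat >= 0]
\end{verbatim}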
\sloppy \textbf{Model Fitting:} In this article, we model the covariance between the values of $f$ at two inputs $x$ and $x'$ with the squared exponential (SE) function:
\begin{align}
K_{\text{se}}(x,x') := \sigma_{\text{se}}^2\exp\bigg(-\sum_{i=1}^d\frac{(x^i-x'^i)^2} {2\theta_{i}^2}\bigg), \label{covf}
\end{align}
defined in terms of the hyperparameters $\bm{\vartheta}=\{\sigma_{\text{se}},\theta_1,...,\theta_d, {\tau}\}$, where $\sigma_{\text{se}}^2$ is the process variance and $\theta_1,\ldots,\theta_d$ are the length-scales. The simulation noise ${\tau}$ is also treated as unknown and is part of $\bm{\vartheta}$.
Several common ways exist for estimating $\bm{\vartheta}$. Within a Bayesian approach we integrate against the prior $p(\bm{\vartheta})$ using
\begin{eqnarray}
p(\mathbf{f}|\mathbf{y}_{1:n},\mathbf{x}_{1:n},\bm{\vartheta})&=&\frac{p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\mathbf{f})p(\mathbf{f}|\bm{\vartheta})}{p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\bm{\vartheta})}, \label{2.1.1}
\end{eqnarray}
where $p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\mathbf{f})$ is the likelihood and $p(\mathbf{f}|\bm{\vartheta})$ is the latent function prior. Notice that following the Gaussian noise assumption, the likelihood $p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\mathbf{f})$ is Gaussian. With a Gaussian prior $p(\mathbf{f}|\bm{\vartheta})$, the posterior $p(\mathbf{f}|\mathbf{y}_{1:n},\mathbf{x}_{1:n},\bm{\vartheta})$ is tractable and also follows a Gaussian distribution. The normalizing constant in the denominator $p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\bm{\vartheta})$ is independent of the latent function and is called the marginal likelihood, given by
\begin{eqnarray}
p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\bm{\vartheta})&=&\int p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\mathbf{f})p(\mathbf{f}|\bm{\vartheta})d \mathbf{f}. \label{2.1.1.2}
\end{eqnarray}
One may similarly express the posterior over the hyperparameters $\bm{\vartheta}$, where $p(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\bm{\vartheta})$ plays the role of the likelihood.
To avoid expensive MCMC integration, we use the maximum likelihood (ML) estimate $\hat{\bm{\vartheta}}$, which maximizes the marginal likelihood \eqref{2.1.1.2}.
Given the estimated hyperparameters $\hat{\bm{\vartheta}}$, we take the posterior of $f$ as $p(\mathbf{f}|\mathbf{y}_{1:n},\mathbf{x}_{1:n},\hat{\bm{\vartheta}})$.
\subsection{Gaussian Process Regression with Student $t$-Noise} \label{t observation gp}
Taking the noise term $\epsilon(x)$ as Gaussian is widely used since the marginal likelihood is then analytically tractable. In a stochastic simulation setting however, the exact distribution of the outputs relative to their mean is unknown and often is clearly non-Gaussian. A more robust choice is to assume that $\epsilon(x)$ has a Student-$t$ distribution \citep{jylanki2011robust}. In particular, this may work better when the noise is heavy-tailed by making inference more resistant to outliers \citep{o1979outlier}.
In the resulting $t$-GP formulation $\epsilon(x)$ is assumed to be $t$-distributed with variance ${\tau}^2$ and $\nu > 2$ degrees of freedom (the latter is treated as another hyperparameter). The marginal likelihood of observing $\mathbf{y}_{1:n}$ can be written as
\begin{align}
& p_{t\mathrm{GP}}(\mathbf{y}_{1:n}|\mathbf{x}_{1:n}, \mathbf{f}) = \prod_{i=1}^n \frac{\Gamma((\nu+1)/2)}{\Gamma(\nu/2)\sqrt{\nu\pi}\sigma_{n}}
\left(1+\frac{(y_i-f_i)^2}{\nu\sigma_{n}^2}\right)^{-(\nu+1)/2}, \label{liket}
\end{align}
where $\Gamma(\cdot)$ is the Gamma function and $\sigma_n$ is the scale parameter of the Student-$t$ likelihood.
The likelihood $p_{t\mathrm{GP}}(\mathbf{y}_{1:n}|\mathbf{x}_{1:n},\mathbf{f})$ in~(\ref{2.1.1}) is no longer Gaussian, and integrating (\ref{liket}) against the Gaussian prior $p(f|\bm{\vartheta})$ is intractable; we therefore use the Laplace approximation (LP) method \citep{williams1998bayesian} to calculate the posterior. A second-order Taylor expansion of $\log p_{t\mathrm{GP}}(\mathbf{f}|\mathbf{x}_{1:n},\mathbf{y}_{1:n})$ around its mode, $\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}:=\arg \max_\mathbf{f}p_{t\mathrm{GP}} (\mathbf{f}|\mathbf{x}_{1:n},\mathbf{y}_{1:n})$, gives a Gaussian approximation
\begin{align}\label{lpt}
p_{t\mathrm{GP}}(\mathbf{f}|\mathbf{x}_{1:n},\mathbf{y}_{1:n}) &\approx q_{t\mathrm{GP}}(\mathbf{f}|\mathbf{x}_{1:n},\mathbf{y}_{1:n})
= \mathcal{N}\left(\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)},\mathbf{\Sigma}_{t\mathrm{GP}}^{-1}
\right),
\end{align}
where $\mathbf{\Sigma}_{t\mathrm{GP}}^{-1}$ is the Hessian of the negative conditional log posterior density at $\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}$:
\begin{align}
\mathbf{\Sigma}_{t\mathrm{GP}} &= -\nabla^2 \log p_{t\mathrm{GP}}(\mathbf{f}|\mathbf{x}_{1:n},\mathbf{y}_{1:n})|_{\mathbf{f}=\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}}
= \mathbf{K}^{-1}+\mathbf{W}_{t\mathrm{GP}},
\end{align}
and $\mathbf{W}_{t\mathrm{GP}}=-\nabla^2 \log p_{t\mathrm{GP}}(\mathbf{y}_{1:n}|\mathbf{f},\mathbf{x}_{1:n})|_{\mathbf{f}=\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}}$ is diagonal, since the likelihood factorizes over observations.
Using \eqref{lpt}, the approximate posterior distribution is also Gaussian, $f(x_*)| \mathcal{A}_n \sim \mathcal{N}\big( \hat{f}_{t\mathrm{GP}}^{(n)}(x_*), v_{t\mathrm{GP}}^{(n)}(x_*,x_*)\big)$, with mean $\hat{f}_{t\mathrm{GP}}^{(n)}(x_*)$ and covariance $v_{t\mathrm{GP}}^{(n)}(x_*,x_*')$:
\begin{align}
\hat{f}_{t\mathrm{GP}}^{(n)}(x_*) &= k(x_*)\mathbf{K}^{-1}\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}, \label{meant} \\
v_{t\mathrm{GP}}^{(n)}(x_*,x_*') &= K(x_*,x_*')-k(x_*) [\mathbf{K}+\mathbf{W}_{t\mathrm{GP}}^{-1}]^{-1}k(x_*')^T. \label{covt}
\end{align}
Note the similarity to \eqref{mean}--\eqref{cov}: with Student-$t$ likelihood the mode $\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}$ plays the role of $\mathbf{y}_{1:n}$ and $\bm{W}_{t\mathrm{GP}}^{-1}$ replaces the noise matrix $\tau^2 \mathbf{I}$. Critically, the latter implies that the posterior variance is a function of both designs $\mathbf{x}_{1:n}$ and observations $\mathbf{y}_{1:n}$.
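For completeness, the mode $\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n)}$ can be located by a Newton iteration; the simplified sketch below omits the line search that is advisable in practice, since the Student-$t$ likelihood is not log-concave and the entries of $\mathbf{W}_{t\mathrm{GP}}$ may be negative. Equations \eqref{meant}--\eqref{covt} then follow by plugging in the returned mode and $\mathbf{W}_{t\mathrm{GP}} = \mathrm{diag}(w)$.
\begin{verbatim}
import numpy as np

def tgp_laplace_mode(K, y, tau=0.1, nu=4.0, iters=100):
    # tau plays the role of the scale sigma_n in the t-likelihood
    n = len(y)
    f = np.zeros(n)
    for _ in range(iters):
        r = y - f
        # gradient and negative Hessian of the t log-likelihood
        g = (nu + 1.0) * r / (nu * tau**2 + r**2)
        w = (nu + 1.0) * (nu * tau**2 - r**2) \
            / (nu * tau**2 + r**2) ** 2
        # Newton step: f <- (K^{-1} + W)^{-1} (W f + g)
        f_new = np.linalg.solve(np.eye(n) + K * w, K @ (w * f + g))
        if np.max(np.abs(f_new - f)) < 1e-8:
            return f_new, w
        f = f_new
    return f, w
\end{verbatim}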
\subsection{Gaussian Process Classification}
Our target in \eqref{eq:objective} is to learn where the mean response is positive, which is equivalent to classifying each $x \in D$ as belonging either to $S$ or to $N$. Assuming that $\epsilon(x)$ is symmetric, $\{ x \in S \} = \{ f(x) \ge 0 \} = \{ \mathbb{P}( Y(x) > 0) > 0.5\}$. This motivates us to consider the alternative of directly modeling the response sign (rather than overall magnitude) via a classification GP model (Cl-GP)~\citep{williams1998bayesian,rasmussen2006gaussian}.
The idea is to model the probability of a positive observation $Y(x)$ by using a probit logistic regression: $\mathbb{P}(Y(x) > 0 | x) = \Phi( Z(x))$, with $\Phi(\cdot)$ the standard normal cdf. The latent classifier function is taken as the GP $Z \sim GP( 0, K(\cdot, \cdot) )$. After learning $Z$ we then set $\hat{S} = \{ x \in D: \hat{Z}(x) > 0\}$.
To compute the posterior distribution of $Z$ conditional on $\mathcal{A}_n$, we use the fact that for an observation $(x_i,y_i)$ and conditional on $z_i = Z(x_i)$ the likelihood of $y_i > 0$ is $\Phi( z_i) 1_{\{y_i \ge 0 \}} + (1-\Phi( z_i)) 1_{\{y_i < 0\} }$. To simplify notation we use $\check{Y}(x) = \sgn Y(x) \in \{-1, 1\}$ to represent the signed responses driving Cl-GP, leading to $p_{\mathrm{Cl}}(\check{\bm{y}}_{1:n}|\mathbf{z},\mathbf{x}_{1:n})=\prod_{i=1}^n \Phi(\check{y}_iz_i)$. The posterior of the latent $\mathbf{z} = Z(x_{1:n})$ is
therefore
\begin{align}
p_{\mathrm{Cl}}(\mathbf{z}|\mathbf{x}_{1:n},\check{\bm{y}}_{1:n}) &= \frac{p(\mathbf{z}|\mathbf{x}_{1:n})\prod_{i=1}^n \Phi(\check{y}_iz_i)}{p(\check{\bm{y}}_{1:n}|\mathbf{x}_{1:n})}. \label{posteriorz}
\end{align}
Similar to $t$-GP, we use a Laplace approximation for the non-Gaussian $p_{\mathrm{Cl}}(\mathbf{z}|\mathbf{x}_{1:n},\check{\bm{y}}_{1:n})$ in Eq.~(\ref{posteriorz}) (details can be found in Appendix \ref{app:clgp}). The posterior mean for $Z(\cdot)$ at $x_*$ is then expressed by using the GP predictive mean equation \eqref{mean} and the LP approximation \eqref{cls}:
\begin{align}
\hat{z}^{(n)}(x_*) &= k(x_*)\mathbf{K}^{-1}\tilde{\mathbf{z}}^{(n)}, \label{meanz} \\
v_{\mathrm{Cl}}^{(n)}(x_*,x_*') &= K(x_*,x_*')-k(x_*)[\mathbf{K}+\mathbf{V}^{-1}]^{-1}k(x_*')^T. \label{covz}
\end{align}
We again see the same algebraic structure, with $\tilde{\mathbf{z}}^{(n)}$ a stand-in for $\mathbf{y}_{1:n}$ in \eqref{mean} and $\mathbf{V}^{-1}$ a stand-in for $\tau^2 \mathbf{I}$ in \eqref{cov}.
Also note that we may formally link the $Z$ of the Cl-GP metamodel to the GP $f$ used previously via the posterior probability that $x \in S$:
\begin{align}
\begin{split}
\mathbb{P}(f(x) \ge 0 | \mathcal{A}_n) &= \mathbb{P}(Y(x) > 0|\mathcal{A}_n)
= \int_{\mathbb{R}} \Phi(z) p_{Z(x)}(z|\mathcal{A}_n) dz \\
&= \int_{\mathbb{R}} \Phi(z)\, \frac{1}{s_{\mathrm{Cl}}^{(n)}(x)}\,\phi \bigg(\frac{z - \hat{z}^{(n)}(x)}{{s_{\mathrm{Cl}}^{(n)}(x)}}\bigg) dz
= \Phi\bigg(\frac{\hat{z}^{(n)}(x)}{\sqrt{1+s_{\mathrm{Cl}}^{(n)}(x)^2}}\bigg).
\end{split}
\end{align}
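Numerically, this membership probability is a one-liner; e.g.,
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def prob_in_S(zhat, s_cl):
    # P(x in S | A_n), vectorized over a grid of inputs
    return norm.cdf(zhat / np.sqrt(1.0 + s_cl**2))
\end{verbatim}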
\subsection{Student-$t$ Process Regression with Student-$t$ Noise}
Instead of just adding Student-$t$ likelihood to the observations, \cite{shah2014student} proposed $t$-processes (TPs) as an alternative to GPs, deriving closed-form expressions for the marginal likelihood and posterior distribution of the $t$-process by imposing an inverse Wishart process prior over the covariance matrix of a GP model. They found the $t$-process to be more robust to model misspecification and to be particularly promising for BO. Moreover, \cite{shah2014student} showed that TPs retain most of the appealing properties of GPs, including analytical expressions, with increased flexibility.
As noted, for example, in \cite{rasmussen2006gaussian}, dealing with noisy observations is less straightforward with TPs, since the sum of two independent Student-$t$ distributions has no closed form. Still, this drawback can be circumvented by incorporating the noise directly in the kernel. The corresponding data-generating mechanism is taken to be multivariate-$t$, $\mathbf{y}_{1:n} \sim \mathcal{T} \left(\nu, m(\mathbf{x}_{1:n}), \mathbf{K}+{\tau}^2 \mathbf{I} \right)$, where the degrees of freedom are $\nu \in (2,\infty)$. The posterior predictive distribution is then $f(x_*)|\mathcal{A}_n \sim \mathcal{T}\left(\nu + n, \hat{f}^{(n)}_{\mathrm{TP}}(x_*), v^{(n)}_{\mathrm{TP}}(x_*, x_*) \right)$, where \citep{shah2014student}
\begin{align}
\hat{f}_{\mathrm{TP}}^{(n)}(x_*) =& k(x_*)[\mathbf{K}+{\tau}^2\mathbf{I}]^{-1}\mathbf{y}_{1:n}, \label{TPmean}\\
v_{\mathrm{TP}}^{(n)}(x_*,x'_*) =& \frac{\nu + \beta^{(n)} -2}{\nu + n - 2} \left\{ K(x_*,x_*')
- k(x_*) [\mathbf{K}+{\tau}^2 \mathbf{I}]^{-1}k(x_*')^T \right\}, \label{TPcov}
\end{align}
with $$\beta^{(n)} := \mathbf{y}_{1:n}^\top [\mathbf{K} + {\tau}^2\mathbf{I}]^{-1} \mathbf{y}_{1:n}.$$
Compared with regular GPs, we have the same posterior mean $\hat{f}_{\mathrm{TP}}^{(n)}(x_*) = \hat{f}_{\mathrm{Gsn}}^{(n)}(x_*)$, but the posterior covariance now depends on the observations $\mathbf{y}_{1:n}$ and is inflated: $v_{\mathrm{TP}}^{(n)}(x_*,x'_*) = \frac{\nu + \beta^{(n)} -2}{\nu + n - 2} v_{\mathrm{Gsn}}^{(n)}(x_*,x'_*)$. Moreover, the latent function $f$ and the noise are uncorrelated but not independent. As noted in \cite{shah2014student}, for fixed hyperparameters the above predictive distribution becomes Gaussian as $n$ goes to infinity.
Inference of TPs can be performed similarly as for a GP, for instance based on the marginal likelihood:
\begin{equation}
p_{\mathrm{TP}}(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \bm{\vartheta}) = \frac{\Gamma(\frac{\nu + n}{2})}{((\nu-2)\pi)^{\frac{n}{2}} \Gamma(\frac{\nu}{2})} |\mathbf{K}|^{-1/2}
\left( 1 + \frac{\mathbf{y}_{1:n}^\top \mathbf{K}^{-1} \mathbf{y}_{1:n} }{\nu - 2} \right)^{-\frac{\nu + n}{2}}.
\end{equation}
One issue is estimation of $\nu$, which plays a central role in the TP predictions. We find that restricting $\nu$ to be small is important in order to avoid degenerating to the plain Gaussian GP setup.
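Computationally, TP prediction is a small modification of the Gaussian-noise GP. The sketch below reuses the helper functions \texttt{se\_kernel} and \texttt{gp\_posterior} from Section~\ref{gauss} and keeps $\nu > 2$ fixed for illustration.
\begin{verbatim}
import numpy as np

def tp_posterior(X, y, Xstar, nu=5.0, tau=0.1, **kp):
    # same mean as the Gaussian-noise GP; sd inflated by the TP
    # factor sqrt((nu + beta - 2) / (nu + n - 2))
    n = len(y)
    K = se_kernel(X, X, **kp) + tau**2 * np.eye(n)
    beta = y @ np.linalg.solve(K, y)
    fhat, s_gsn = gp_posterior(X, y, Xstar, tau=tau, **kp)
    s_tp = np.sqrt((nu + beta - 2.0) / (nu + n - 2.0)) * s_gsn
    return fhat, s_tp  # predictive dist. is t with nu + n dof
\end{verbatim}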
\subsection{Metamodel Performance for Level Set Inference} \label{statistics}
To evaluate the performance of different metamodels, we consider several metrics.
The first statistic is the error rate $\mathcal{ER}$ based on the loss function $L$ defined in Eq.~(\ref{loss}), measuring the distance between the level set $S$ and its estimate $\hat{S}$:
\begin{align}
\mathcal{ER} &:= \mu(S\Delta \hat{S})
= \int_{x \in D} \mathbb{I} \left[\sgn f(x)\neq \sgn \hat{f}(x)\right] \mu(dx).
\label{erc}
\end{align}
For Cl-GP, we replace $f(x)$ with $z(x)$ in the above, namely, use~$\mu(S \Delta \hat{S}) = \mu\{ x \in D: \hat{z}(x) < 0 < z(x) \text{ or } \hat{z}(x) > 0 > z(x) \}$. A related statistic is the bias $\mathcal{B}$, which is based on the \emph{signed} ($\mu$-weighted) difference between $S$ and $\hat{S}$:
\begin{align}
\mathcal{B} =& \mu(S\backslash \hat{S}) - \mu(\hat{S}\backslash S)
= \int_{x \in D} \! \left\{\mathbb{I}[\hat{f}(x)<0 < f(x)]-\mathbb{I}[\hat{f}(x)>0 > f(x)] \right\} \mu(dx).
\label{bci}
\end{align}
The error rate $\mathcal{ER}$ and bias $\mathcal{B}$ evaluate the accuracy of the point estimate $\hat{S}$ when the ground truth is known. In a realistic case study where the latter is unavailable, we replace $\mathcal{ER}$ by its empirical counterpart, based on quantifying the uncertainty in $\hat{S}$ through the associated uncertainty of $\hat{f}$.
Following \cite{azzimonti2015quantifying}, we define the empirical error $\mathcal{E}$ as the expected distance in measure between the random set $S | \mathcal{A}$ and its estimate $\hat{S}$:
\begin{align}
\mathcal{E} := \mathbb{E} \left[\mu(S \Delta \hat{S}) | \, \mathcal{A} \right]
= \int_{x \in D} \bar{E}(x) \mu(dx), \label{eec}
\end{align}
with $\bar{E}(x)$ calculated by using
(\ref{mean}) and (\ref{cov}):
\begin{align}
\bar{E}(x) &:= \mathbb{E} \left[\mathbb{I}[\sgn f(x) \neq \sgn \hat{f}(x)] \,\big|\, \mathcal{A} \right] \nonumber \\
&= \int_{\mathbb{R}} \mathbb{I}[\sgn f(x) \neq \sgn \hat{f}(x)]\, p(f(x)|\mathcal{A})\, df(x) = \Phi\bigg(\frac{-|\hat{f}(x)|}{s(x)}\bigg). \label{empiricalerrorc}
\end{align}
The local empirical error $\bar{E}(x)$ is the posterior probability of wrongly classifying $x$ conditional on the training dataset $\mathcal{A}$.
It is intrinsically tied to the point estimate $\hat{f}(x)$ and the associated posterior variance $s(x)^2$ through the Gaussian uncertainty quantification. For the TPs, the predictive distribution is Student-$t$, so that the Gaussian cdf $\Phi$ is replaced with the respective survival function.
\textbf{Uncertainty Quantification:}
To quantify the overall uncertainty about $S$ (rather than local uncertainty about $f(x)$), a natural criterion is the \emph{volume}
of the credible band $CI_{\partial S}$ that captures inputs $x$ whose sign classification remains ambiguous given $\mathcal{A}$. A simple definition at a credibility level $\alpha$ (e.g.,~$\alpha = 0.05$) would be
\begin{equation}
CI_{\partial S}^{(n)} = \left\{ {x} \in D: \left(\hat{f}^{(n)}({x}) + z_{1-\frac{\alpha}{2}}s^{(n)}({x})\right)
\left(\hat{f}^{(n)}({x}) - z_{1-\frac{\alpha}{2}}s^{(n)}({x})\right)<0 \right\}, \label{cis}
\end{equation}
where $z_{1-\frac{\alpha}{2}}$ is the appropriate Gaussian/Student-$t$ $(1-\frac{\alpha}{2})$-quantile. Thus \eqref{cis} evaluates the region where the sign of $f$ is nonconstant over the posterior $\alpha$-CI of $f$. Heuristically, however, $CI_{\partial S} \simeq \{x \in D : \bar{E}(x) > {\alpha}/2 \}$, i.e.,~the credible band is effectively the region where the empirical error exceeds ${\alpha}/2$, so that the volume of $CI_{\partial S}$ is roughly proportional to the integrated empirical error $\mathcal{E}$.
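On a finite grid standing in for $D$, with $\mu$-weights summing to one (both are discretization assumptions), $\mathcal{E}$ and the volume of $CI_{\partial S}$ reduce to weighted sums:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def empirical_error(fhat, s, mu_w):
    # local error E-bar(x) and its mu-weighted integral E
    Ebar = norm.cdf(-np.abs(fhat) / s)
    return Ebar, np.sum(Ebar * mu_w)

def ci_band_volume(fhat, s, mu_w, alpha=0.05):
    # mu-volume of the credible band CI at level alpha
    z = norm.ppf(1.0 - alpha / 2.0)
    return np.sum((np.abs(fhat) < z * s) * mu_w)
\end{verbatim}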
In a more sophisticated approach based on random set theory, \cite{chevalier2013estimating} used the Vorob'ev deviation to define the uncertainty measure $V_\alpha(\hat{S})$:
\begin{align}
\nonumber V_\alpha(\hat{S}) :=& \mathbb{E} \left[\mu(\hat{S}^\alpha \Delta S) | \ \mathcal{A} \right] \\
\nonumber =& \int_{x \in \hat{S}^\alpha} \mathbb{P}(x \notin S | \mathcal{A}) \mu(dx)
+ \int_{x \in (\hat{S}^\alpha)^C} \mathbb{P}(x \in S | \mathcal{A}) \mu(dx) \\
=& \int_{x \in \hat{S}^\alpha} \left(1-p_V(x) \right) \mu(dx) + \int_{x \in (\hat{S}^\alpha)^C} p_V(x) \mu(dx), \label{vorob}
\end{align}
where $$
\hat{S}^\alpha := \left\{x \in D: \hat{f}(x)- z_{1-\frac{\alpha}{2}} s(x) \geq 0\right\}$$ and $$p_V(x) = \mathbb{P}(x \in S | \mathcal{A}) = \Phi \bigg(\frac{\hat{f}(x)}{s(x)} \bigg).
$$
An $\alpha$ satisfying the unbiasedness condition $$\int_{x \in D} p_V(x) \mu(dx) = \mathbb{E}[\mu(S) | \mathcal{A}]=\mu(\hat{S}^\alpha)$$ is referred to as the \emph{Vorob'ev threshold} and can be determined by bisection \citep{chevalier2013estimating}. If the Vorob'ev threshold is picked to be zero, then the Vorob'ev deviation reduces to the empirical error $\mathcal{E}$.
Because of the computational overhead of working with \eqref{vorob}, we restrict attention to the credible bands defined through $\hat{S}^\alpha$, which correspond to local uncertainty about $f$ (or $Z$) as in \eqref{cis}.
\section{Sequential Design}\label{sec:improvementmetrics}
We estimate the level set $S$ in a sequential design setting that assumes that $f$ is expensive to evaluate, for example because of the complexity of the underlying stochastic simulator. Therefore, efficient selection of the inputs $\mathbf{x}_{1:n}$ is important. In sequential design, at each step the next sampling location $x_{n+1}$ is selected given all previous measurements. The Bayesian approach to sequential design is based on greedily optimizing an acquisition function as in \eqref{seq}. These strategies were popularized by the success of the expected improvement (EI) criterion and the associated efficient global optimization (EGO) algorithm~\citep{jones1998efficient}.
The basic loop for sequential design is as follows (a code sketch is given after the list):
\begin{itemize}
\item Initialize $\mathcal{A}_{n_0} = \{(x_i,y_i),1 \leq i \leq n_0\}.$
\item Loop for $n=n_0,\dots, N-1$:
\begin{itemize}
\item Choose the next input $x_{n+1} = \arg\max_{x\in D} \mathcal{I}_{n}(x)$, and sample $y_{n+1}=Y(x_{n+1})$.
\item Augment $\mathcal{A}_{n+1}=\mathcal{A}_n \bigcup \{(x_{n+1},y_{n+1})\}.$
\item Update $\hat{S}^{(n+1)}$ with $\mathcal{A}_{n+1}.$
\end{itemize}
\end{itemize}
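A minimal sketch of this loop, under the simplifying assumptions of a finite candidate grid and grid-search maximization of $\mathcal{I}_n$:
\begin{verbatim}
import numpy as np

def sequential_design(simulator, X, y, cand, acquisition, N):
    # acquisition(X, y, cand) returns I_n over the candidate set
    while len(y) < N:
        xnew = cand[np.argmax(acquisition(X, y, cand))]
        X = np.vstack([X, xnew])
        y = np.append(y, simulator(xnew))
    return X, y  # refit the metamodel on (X, y) to obtain S-hat
\end{verbatim}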
We now propose several metrics for the acquisition function $\mathcal{I}_n(x)$ in Eq.~(\ref{seq}). The key idea is to target regions close to the boundary $\partial \hat{S}$. A second strategy is to use the look-ahead posterior standard deviation $s^{(n+1)}$ conditional on sampling at $x$, in order to assess the corresponding \emph{information gain}. This links the constructed design to the metamodel for $f$, since different surrogate architectures quantify uncertainty differently.
The first metric, dubbed Maximum Contour Uncertainty (MCU), stems from the Upper Confidence Bound (UCB) strategies proposed by~\cite{srinivas2012information} for Bayesian optimization. The idea of UCB is to express the exploitation-exploration trade-off through the posterior mean $\hat{f}(x)$ and standard deviation $s(x)$. Following the spirit of UCB, MCU blends the minimization of $|\hat{f}^{(n)}(x)|$ (exploitation) with maximization of the posterior uncertainty $s^{(n)}(x)$ (exploration):
\begin{align}
\mathcal{I}_n^{\text{MCU}}(x) &:= -|\hat{f}^{(n)}(x)| + \gamma^{(n)} s^{(n)}(x), \label{ucb}
\end{align}
where $\gamma^{(n)}$ is a step-dependent sequence of weights. Thus, MCU targets inputs with high uncertainty (large $s^{(n)}(x)$) that are close to the boundary $\partial \hat{S}$ (small $|\hat{f}^{(n)}|$). Small $\gamma^{(n)}$ leads to aggressive sampling concentrated along the estimated $\partial \hat{S}$; large $\gamma^{(n)}$ leads to space-filling sampling that effectively minimizes the integrated mean-squared error. Thus, the choice of the $\gamma$'s is critical for performance; in particular, $\gamma^{(n)}$ should be increasing to avoid being trapped in local minima of $|\hat{f}^{(n)}(x)|$. In the original application to BO \citep{srinivas2012information} it is proved that with $\gamma^{(n)} = ({2 \log \big (\frac{|D| \pi^2 n^2}{6\delta}\big)})^{1/2}$, the regret (a metric measuring the distance between the estimated optimum and the truth in BO) of the estimate is guaranteed to converge. Further recipes for \eqref{ucb} for level set estimation were proposed in~\cite{gotovos2013active} and~\cite{bogunovic2016truncated}; both papers mention that the above recommendation is too conservative and tends to over-explore. A constant choice of $\gamma^{(n)} = 1.96$ corresponds to the Straddle scheme in~\cite{bryan2006active} and leads to $\mathcal{I}_n(x) \ge 0 \Leftrightarrow x \in$ (95\% CI band of $\partial S$). Similarly, \cite{gotovos2013active} employed $\gamma^{(n)} = 3$ and \cite{bogunovic2016truncated} suggested $\gamma^{(n)} = \sqrt{\log (|D| n^2)}$. Based on our experiments (see Appendix \ref{app:mcu}), we find that a constant value of $\gamma^{(n)}$ may be problematic and recommend adapting $\gamma^{(n)}$ to the relative scales of $f(x)$ (for steeper response surfaces $\gamma$ should be larger) and $s(x)$ ($\gamma$ needs to rise as posterior uncertainty decreases). One recipe is $\gamma^{(n)} = IQR(\hat{f}^{(n)}) / \big(3\, Ave(s^{(n)})\big)$, where $Ave(s^{(n)})$ denotes the average posterior standard deviation and $IQR$ the interquartile range, which keeps both terms in \eqref{ucb} approximately comparable as $n$ changes.
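In code, MCU with this adaptive weight amounts to the following sketch; the inputs are the posterior mean and standard deviation over a candidate set, and the percentile-based estimator of $IQR$ is our choice here for concreteness.
\begin{verbatim}
import numpy as np

def mcu(fhat, s):
    # I^MCU = -|fhat| + gamma * s, gamma = IQR(fhat) / (3 Ave(s))
    q75, q25 = np.percentile(fhat, [75, 25])
    gamma = (q75 - q25) / (3.0 * np.mean(s))
    return -np.abs(fhat) + gamma * s
\end{verbatim}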
\begin{remark}\label{app:mee}
The local empirical error $\bar{E}(x)$ as defined in Eq.~(\ref{empiricalerrorc}) could be directly used as an acquisition function, i.e.,
\begin{align}
\mathcal{I}_n^{\text{MEE}}(x) \equiv \bar{E}(x) =
\Phi\bigg(-\frac{|\hat{f}^{(n)}(x)|}{s^{(n)}(x)}\bigg). \label{criterionmee-disc}
\end{align}
This Maximal Empirical Error (MEE) acquisition function measures the local probability of misclassification and is similar to the sequential criteria in \cite{bect2012sequential,echard2010kriging,ranjan2012sequential,bichon2008efficient}, all based on the idea of sampling at the $x$ where the event $\{f(x) \geq 0\} | \mathcal{A}_n$ is most uncertain. However, \eqref{criterionmee-disc} is not suitable for our purposes since it is maximized across the entire $\partial \hat{S}$ (namely, $\mathcal{I}_n^{\text{MEE}}(x) =0.5$ for any $x$ where $\hat{f}^{(n)}(x) = 0$) and hence does not possess a unique maximizer as soon as $\partial \hat{S}$ is nontrivial. One potential solution is to maximize \eqref{criterionmee-disc} over a finite candidate set, which however requires significant fine-tuning.
\end{remark}
Our second strategy focuses on quickly \emph{reducing} $\bar{E}$ by comparing the current $\bar{E}(x)$ given $\mathcal{A}_n$ and the expected $\bar{E}(x)$ conditional on the one-step-ahead sample, $\mathcal{A}_n \cup \{ x_{n+1}, y_{n+1} \}$. This is achieved by integrating out the effect of $Y(x_{n+1})$ on $\bar{E}(x_{n+1})$:
\begin{align}\label{criterionmeesur-1}
\begin{split}
\mathcal{I}_n^{\text{cSUR}}(x) =& \mathcal{I}_n^{\text{MEE}}(x) - \mathbb{E}_{Y(x)} \left[ \mathcal{I}_{n+1}^{\text{MEE}}(x) \right] \\
=&
\Phi\bigg(-\frac{|\hat{f}^{(n)}(x)|}{s^{(n)}(x)}\bigg)
-\mathbb{E}_{Y(x)}\bigg[\Phi\bigg(-\frac{|\hat{f}^{(n+1)}(x)|}{s^{(n+1)}(x)}\bigg)\bigg].
\end{split}
\end{align}
The name cSUR reflects the fact that \eqref{criterionmeesur-1} is directly related to the SUR strategy \citep{bect2012sequential}, modified to target \emph{c}ontour-finding. Crucially, $\mathcal{I}^{cSUR}$ ties the selection of $x_{n+1}$ to the look-ahead mean $\hat{f}^{(n+1)}(x_{n+1})$ and look-ahead standard deviation $s^{(n+1)}(x_{n+1})$ that appear on the right-hand side of \eqref{criterionmeesur-1}.
To compute the integral over $Y(x)$, we replace $\hat{f}^{(n+1)}(x)$ with its average $\hat{f}^{(n)}(x)=\mathbb{E}_n[f(x)]=\mathbb{E}_n[\mathbb{E}_{n+1}[f(x)]]=\mathbb{E}_n[ \hat{f}^{(n+1)}(x)]$. Similarly, we plug in the approximate one-step-ahead standard deviation $\hat{s}^{(n+1)}$ discussed in Section \ref{sec:update} (especially Equations (\ref{updategv}), (\ref{estvt}), and (\ref{estvc})) for $s^{(n+1)}(x)$:
\begin{align}\label{criterionmeesur}
\hat{\mathcal{I}}_n^{\text{cSUR}}(x) =& \Phi\bigg(-\frac{|\hat{f}^{(n)}(x)|}{s^{(n)}(x)}\bigg) -\Phi\bigg(-\frac{|\hat{f}^{(n)}(x)|}{\hat{s}^{(n+1)}(x)|_{x_{n+1} = x} }\bigg).
\end{align}
Note that if $x$ is such that $\hat{f}^{(n)}(x) = 0$, then both terms above are $1/2$ and $\hat{\mathcal{I}}_n^{\text{cSUR}}(x) = 0$. Thus, the cSUR criterion will not place samples \emph{directly} on $\partial \hat{S}$ but will aim to bracket the zero-contour.
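For illustration, a sketch of evaluating \eqref{criterionmeesur} over a candidate grid; \texttt{s\_next} stands for the approximate look-ahead standard deviation $\hat{s}^{(n+1)}(x)|_{x_{n+1}=x}$ of Section~\ref{sec:update} and is assumed to be supplied by the metamodel:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def csur(f_hat, s, s_next):
    # reduction in local misclassification probability from sampling at x
    return norm.cdf(-np.abs(f_hat) / s) - norm.cdf(-np.abs(f_hat) / s_next)
\end{verbatim}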
In \eqref{criterionmeesur} cSUR only measures the \emph{local} improvement in $\bar{E}(x_{n+1})$ at the sampling location $x_{n+1}$ and consequently might be overly aggressive in targeting $\partial \hat{S}$. This motivates us to target the \emph{global} reduction in the uncertainty of $\hat{S}$, so as to take into account the spatial structure of $D$. The resulting Integrated Contour Uncertainty (ICU) is linked to the already defined empirical error $\mathcal{E}$ from Section \ref{statistics}:
\begin{align}
\begin{split}
& \mathcal{I}_n^{\text{ICU}}(x) := \mathcal{E}^{(n)} - \mathbb{E}_{Y(x)}[\mathcal{E}^{(n+1)}|x_{n+1}=x] \\
& \; = \mathcal{E}^{(n)} -
\mathbb{E}_{Y(x)} \bigg[\int_{u \in D} \!\Phi\bigg(\frac{-|\hat{f}^{(n+1)}(u)|}{s^{(n+1)}(u)|_{x_{n+1}=x}}\bigg)\mu(du)\bigg].
\end{split}
\end{align}
We apply the same approximation as for cSUR to simplify the expectation over $Y(x)$ and replace the integral over $D$ with a sum over a finite subset $\mathcal{D}$ of size $M$:
\begin{align}\label{criterioneee}
\hat{\mathcal{I}}_n^{\text{ICU}}(x) & = -\sum_{x_m \in \mathcal{D}} \Phi\bigg(\frac{-|\hat{f}^{(n)}(x_m)|}{\hat{s}^{(n+1)}(x_m)|_{x_{n+1} = x}}\bigg) \mu(x_m).
\end{align}
Thus, $\mathcal{I}^{\text{ICU}}(x)$ can be viewed as measuring the overall information gain about $S$ from sampling at $x$. The motivation behind ICU is to myopically minimize the expected one-step-ahead empirical error $\mathcal{E}$, which corresponds to a one-step Bayes-optimal design.
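A sketch of \eqref{criterioneee} for a single candidate $x$ (the constant $\mathcal{E}^{(n)}$ term is dropped, as above); \texttt{s\_next\_grid} holds $\hat{s}^{(n+1)}(x_m)|_{x_{n+1}=x}$ over the test grid $\mathcal{D}$ and \texttt{mu} the weights $\mu(x_m)$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def icu(f_hat_grid, s_next_grid, mu):
    # negative expected one-step-ahead empirical error over the grid D
    return -np.sum(norm.cdf(-np.abs(f_hat_grid) / s_next_grid) * mu)
\end{verbatim}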
As a last alternative, we utilize the targeted mean squared error (tMSE) criterion, a localized form of the targeted IMSE criterion in~\cite{picheny2010adaptive}:
\begin{align}
\mathcal{I}_n^{\text{tMSE}}(x) &:= s^{(n)}(x)^2 \cdot W_n^{\text{tMSE}}(x), \label{criteriontmse}
\end{align}
where
\begin{align}
W_n^{\text{tMSE}}(x) &:= \frac{1}{\sqrt{2 \pi} s^{(n)}(x)} \exp \bigg(-\frac{\hat{f}^{(n)}(x)^2}{2s^{(n)}(x)^2}\bigg).
\label{eq:w-tmse}
\end{align}
The tMSE criterion upweighs regions close to the zero contour through the weight function $W_n^{\text{tMSE}}(x)$ which measures the distance of $x$ to $\partial \hat{S}^{(n)}$ using the Gaussian posterior density $\mathcal{N}(\hat{f}^{(n)}, s^{(n)}(x)^2)$. Like MCU, tMSE is based only on the posterior at step $n$ and does not integrate over future $Y(x)$'s.
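A sketch of \eqref{criteriontmse}--\eqref{eq:w-tmse}, with the same hypothetical posterior arrays as in the earlier sketches:
\begin{verbatim}
import numpy as np

def tmse(f_hat, s):
    # posterior variance times Gaussian density of level 0 under N(f_hat, s^2)
    w = np.exp(-0.5 * (f_hat / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)
    return s ** 2 * w
\end{verbatim}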
\begin{remark}
In~\cite{picheny2010adaptive} an additional parameter $\sigma_\epsilon$ was added to the definition of $W_n^{\text{tMSE}}(x)$ by replacing $s^{(n)}(x)$ everywhere with $\sqrt{ s^{(n)}(x)^2 + \sigma_\epsilon^2}$. Larger $\sigma_\epsilon$ yields more space-filling as $W_n^{\text{tMSE}}(x)$ becomes flatter. Since~\cite{picheny2010adaptive} dealt with deterministic experiments, $\sigma_\epsilon$ was necessary to ensure that $W_n^{\text{tMSE}}(x)$ is well defined at existing $x_{1:n}$ and the recommendation was for $\sigma_\epsilon$ to be 5\% of the range of $f$. In our case $s^{(n)}(x)$ is intrinsically bounded away from zero and \eqref{eq:w-tmse} works well as is. Additional experiments (available upon request) indicate that the performance of \eqref{criteriontmse} is not sensitive to $\sigma_\epsilon$, so to minimize the number of tuning parameters we stick to $\sigma_\epsilon=0$ in \eqref{eq:w-tmse}.
\end{remark}
In the TP case, for MCU, cSUR, and ICU, we replace the standard normal cdf $\Phi(\cdot)$ appearing in the formulas by its Student-$t$ counterpart (with the estimated degrees of freedom $\nu_n$). For tMSE, to maintain tractability, we keep the same expression \eqref{eq:w-tmse} for the weights $W^{\text{tMSE}}$.
\begin{figure}[ht]
\centering
\includegraphics[height=2.4in]{Fig1}
\caption{Comparison of acquisition functions. \emph{Upper} panel: true function $f= (x+0.75)(x-0.75)$ (black solid line), the posterior mean $\hat{f}(\cdot)$ (dashed line) and 95\% $CI_f$ (shaded area) based on observed samples $(\mathbf{x}_{1:100},\mathbf{y}_{1:100})$ (blue dots). Along the $x$-axis we also show the credible interval of the partition boundary $CI_{\partial S}$ (grey solid line) relative to the true zero level set $S=[0,0.75]$ (red triangle). \emph{Lower} panel: acquisition functions $\mathcal{I}_n(\cdot)$ for MCU, cSUR, ICU, and tMSE criteria, with vertical lines marking the respective maxima $\arg\max_x\mathcal{I}_n(x)$.}
\label{acquisition}
\end{figure}
\subsection*{Illustration}
For instructive purposes, we consider a one-dimensional case where we use the Gaussian observation GP to learn the sign of the quadratic $f(x) = x^2 - 0.75^2$ on $D=[0,1]$, where $S=[0,0.75]$ and with the unique zero contour at $\partial S=0.75$. The initial design $\mathbf{x}_{1:n_0}$ consists of $n_0=10$ inputs drawn according to Latin hypercube sampling (LHS). The observations are $Y(x) = f(x) +\epsilon$, where $\epsilon \sim t_3(0,0.1^2)$. In the top plot in Figure \ref{acquisition}, we plot the true $f(\cdot)$, the posterior mean $\hat{f}^{(100)}(\cdot)$, and associated 95\%-CI. We also show the credible band for $\partial \hat{S}$; in the respective bottom panel, we plot the acquisition functions $\mathcal{I}_n^{\text{MCU}}(\cdot)$, $\mathcal{I}_n^{\text{cSUR}}(\cdot)$, $\mathcal{I}_n^{\text{ICU}}(\cdot)$ and $\mathcal{I}_n^{\text{tMSE}}(\cdot)$ as defined in Equations (\ref{ucb}), (\ref{criterionmeesur}), (\ref{criterioneee}), and (\ref{criteriontmse}).
Comparing the acquisition functions of the four criteria, we find that all except ICU have maxima within the shaded credible interval of the boundary $CI_{\partial S}$. In practice, we care only about the maximizer of the acquisition function, rather than its full shape, since the former drives the selection of the next sample $x_{n+1}$. The $x_{n+1}$'s selected by the MCU and tMSE criteria are close.
For the cSUR criterion, because $\mathcal{I}_n^{\text{cSUR}}(x) = 0$ at $\partial \hat{S}$, there are two local maxima with a ``valley'' between them.
The interval between the two local maxima is roughly the confidence interval $CI_{\partial S}$ for the boundary \eqref{cis}. Both MCU and tMSE select a location very close to the boundary $\hat{f}^{(n)}(x_{n+1}) \simeq 0$. We note that MCU has a flatter acquisition function, i.e., tMSE is more aggressive. In contrast, the ICU and cSUR criteria are more ``global''; in particular, ICU is the flattest among all the criteria.
\begin{figure}[h]
\centering
\includegraphics[height=2.6in]{Fig2}
\caption{\emph{Top row}: Fitted metamodel $\hat{f}^{(100)}$ (dashed red line) and its 95\%-CI (shaded region) versus the true $f = (x+0.75)(x-0.75)$ (solid black), for each of the four design strategies. The estimated 95\% CI for the zero-contour $\partial S$ is marked on the $x$-axis with a grey interval; the red triangle indicates the true zero-contour $\partial S=0.75$. \emph{Bottom row}: sampled inputs $x_n$ (on the $x$-axis to match the top row) as a function of step $n=1,\ldots,100$ (on the $y$-axis, moving from top to bottom) for the MCU, tMSE, cSUR, and ICU criteria. The rug plots at the bottom visualize the overall distribution of $\mathbf{x}_{1:n}$ at $n=100$. The first ten inputs are selected using a (fixed across schemes) LHS design on $D = [0,1]$. }
\label{designfit}
\end{figure}
After using the various acquisition functions to select $x_{n+1}$ at $n=11, \ldots, 100$, we show in Figure~\ref{designfit} the resulting designs $\mathbf{x}_{1:n}$ and the final estimate $\hat{f}^{(100)}$ with a Gaussian observation GP metamodel. As desired, all methods target the true zero-contour at $\partial S = 0.75$. As a result, the posterior variance $s^{(n)}(x)^2$ is much lower in this neighborhood; in contrast, especially for tMSE and MCU, few samples are taken far from $x=0.75$, and the posterior uncertainty there remains high. The true zero contour is within the estimated posterior CI for all the criteria. However, the CIs for MCU and tMSE are much wider than those for the others.
The bottom row in Figure~\ref{designfit} shows the sampled location $x_{n}$ as a function of step $n$. We observe that MCU and tMSE heavily concentrate their search around the zero contour, leading to few samples (and consequently relatively large empirical errors $\mathcal{E}^{(n)}$) in other areas, although the overall error rate $\mathcal{ER}$ is comparable. The ICU and cSUR criteria exhibit an ``edge'' effect; that is, besides the desired zero contour $x=0.75$, multiple samples are taken close to the edges of the input space at $x=0$ and $x=1$. This occurs due to the relatively large posterior variance $s^2(\cdot)$ in those regions (which arises intrinsically with any spatial-based metamodel) and in turn strongly influences $\mathcal{I}^{\text{cSUR}}$ in \eqref{criterionmeesur} and $\mathcal{I}^{\text{ICU}}$ in \eqref{criterioneee}. Inputs sampled by the cSUR criterion bracket the contour $\partial S$ from both directions, matching the two-hill-and-a-valley shape of $\mathcal{I}^{\text{cSUR}}$ in Figure~\ref{acquisition}. We note that the two sampling ``curves'' get closer as $n$ grows, indicating a gradual convergence of the estimated zero contour $\partial \hat{S}^{(n)}$, akin to a shrinking credible interval of $\hat{S}^{(n)}$. The ICU criterion generates a much more diffuse design: it engages in more exploration and is less dependent on the current levels of the empirical error $\mathcal{E}$. This eventually creates a flatter profile for $\bar{E}(x)$.
The preceding discussion considered a single metamodel choice for $f$. Other metamodels will generate different design features; in particular, sensitivity to $\epsilon(x)$ will lead to a different mix of exploration ($x_n$'s far from the zero-contour) and exploitation even for the same choice of $\mathcal{I}_n$ criterion. Figures~\ref{2dexpresult} and \ref{fig:2doption}, as well as Table~\ref{tbl:2d}, emphasize our message that one must jointly investigate the \emph{combinations} of $\mathcal{I}(\cdot)$ and $\hat{f}$ when benchmarking the ultimate performance of the algorithm.
\section{Look-Ahead Variance} \label{sec:update}
The cSUR and ICU acquisition functions $\mathcal{I}_n$ require estimates of the look-ahead standard deviation $s^{(n+1)}(x_*)$ conditional on sampling at $x_{n+1}=x$. A related computation is also important for efficient updating of the GP/TP metamodels during sequential design, assimilating the observation $(x_{n+1}, y_{n+1})$ into $\mathcal{A}_n$. As is well known, GP inference necessitates inverting the covariance matrix $\mathbf{K}$, which presents a computational bottleneck as $n$ grows. Updating hinges on computing $[\mathbf{K}^{(n+1)}]^{-1}$ by applying the Woodbury identities to the current $[\mathbf{K}^{(n)}]^{-1}$.
A major advantage of the classical GP paradigm is that the posterior variance $s^{(n)}(x)^2$ is a function only of the design $\mathbf{x}_{1:n}$; that is, it is independent of the observations $\mathbf{y}_{1:n}$. This allows an exact analytic expression for $s^{(n+1)}(x)\big|_{x_{n+1}=x}$ in terms of $x_{n+1}$. Recall that for an existing design $\mathbf{x}_{1:n}$, after adding a new $(x_{n+1}, y_{n+1})$, the mean and variance at location $x_*$ are updated via~\citep{chevalier2014corrected}
\begin{align}
\hat{f}^{(n+1)}_{\mathrm{Gsn}}(x_*) =& \hat{f}^{(n)}_{\mathrm{Gsn}}(x_*) +
\lambda^{(n)}(x_*, x_{n+1})(y_{n+1}- \hat{f}^{(n)}_{\mathrm{Gsn}}(x_{n+1})), \\
s^{(n+1)}_{\mathrm{Gsn}}(x_*)^2 =& s^{(n)}_{\mathrm{Gsn}}(x_*)^2 -
\lambda^{(n)}(x_*, x_{n+1})^2({\tau}^2+ s^{(n)}_{\mathrm{Gsn}}(x_{n+1})^2),
\end{align}
where $\lambda^{(n)}(x_*,x_{n+1})$ is a weight function
that measures the influence of the new sample at $x_{n+1}$ on $x_*$ conditioned on the existing inputs $\mathbf{x}_{1:n}$.
\begin{lemma}[Woodbury formula] \label{matrixinv}
Assume $\mathbf{b}$ is an $n \times 1$ vector, $\mathbf{A}$ is an invertible $n \times n$ matrix, and $c$ and $d$ are scalars with $c - \mathbf{b}^T\mathbf{A}^{-1}\mathbf{b} \neq 0$; then we have
\begin{align}
[\mathbf{b}^T \quad d]
\left[\begin{array}{cc}
\mathbf{A} & \mathbf{b} \\
\mathbf{b}^T & c
\end{array}\right]^{-1}
\left[\begin{array}{c}
\mathbf{b} \\
d
\end{array}\right]
= \mathbf{b}^T\mathbf{A}^{-1}\mathbf{b}+\frac{1}{c-\mathbf{b}^T\mathbf{A}^{-1}\mathbf{b}}(d-\mathbf{b}^T\mathbf{A}^{-1}\mathbf{b})^2.
\end{align}
\end{lemma}
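The identity is easy to verify numerically; the following stand-alone snippet (purely illustrative, not part of any package) checks it on a random positive-definite block:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)); A = A @ A.T + n * np.eye(n)  # SPD block
b = rng.normal(size=(n, 1)); c, d = 2.7, 1.3
M = np.block([[A, b], [b.T, np.array([[c]])]])
v = np.vstack([b, [[d]]])

lhs = (v.T @ np.linalg.solve(M, v)).item()
bAb = (b.T @ np.linalg.solve(A, b)).item()
rhs = bAb + (d - bAb) ** 2 / (c - bAb)
assert np.isclose(lhs, rhs)
\end{verbatim}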
Using Lemma \ref{matrixinv}, we obtain the one-step-ahead variance at $x_*$:
\begin{proposition}\label{updatevarcx}
For any $x_*$,
\begin{align}
\begin{split}
\lambda^{(n)}(x_*, x_{n+1}) &= \frac{v^{(n)}_{\mathrm{Gsn}}(x_*,x_{n+1})}{{\tau}^2+s^{(n)}_{\mathrm{Gsn}}(x_{n+1})^2} \\
\Rightarrow \quad
s^{(n+1)}_{\mathrm{Gsn}}(x_*)^2
&= s^{(n)}_{\mathrm{Gsn}}(x_*)^2 - \frac{v^{(n)}_{\mathrm{Gsn}}(x_*,x_{n+1})^2}{{\tau}^2+s^{(n)}_{\mathrm{Gsn}}(x_{n+1})^2}.
\end{split}
\label{updategv}
\end{align}
In particular, after sampling at $x_{n+1}$ the local updated posterior variance is proportional to the current $s^{(n)}_{\mathrm{Gsn}}(x_{n+1})^2$ with a proportionality factor \citep{hu2015sequential}:
\begin{align}
\frac{s^{(n+1)}_{\mathrm{Gsn}}(x_{n+1})^2}{s^{(n)}_{\mathrm{Gsn}}(x_{n+1})^2} &= \frac{{\tau}^2}{{\tau}^2+s^{(n)}_{\mathrm{Gsn}}(x_{n+1})^2}. \label{updatevg}
\end{align}
\end{proposition}
Proposition \ref{updatevarcx} is our basis for calculating the acquisition functions: the cSUR criterion \eqref{criterionmeesur} requires only \eqref{updatevg}, while the ICU criterion \eqref{criterioneee} uses \eqref{updategv}. As we see below, because \eqref{updategv} holds only in the Gaussian prior/Gaussian likelihood setting, further approximations are required to apply \eqref{updatevarcx}--\eqref{updatevg} for the alternative metamodels. Such look-ahead variance expressions are of independent interest, applicable beyond the context of level set estimation.
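In code, \eqref{updategv} is a one-liner; the posterior covariance $v^{(n)}_{\mathrm{Gsn}}(x_*, x_{n+1})$ is assumed to be available from the fitted GP:
\begin{verbatim}
def lookahead_var_gsn(s2_star, v_star_new, s2_new, tau2):
    # Eq. (updategv): variance at x_* after sampling at x_{n+1}
    return s2_star - v_star_new ** 2 / (tau2 + s2_new)
\end{verbatim}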
A limitation of using a non-Gaussian observation or classification likelihood is that, unlike for Gaussian observation GP, there are no exact variance look-ahead formulas for the resulting $t$-GP, Cl-GP and TP metamodels. There are two main reasons for this. First, both the posterior mean $\hat{f}^{(n+1)}(x_*)$ in (\ref{meant}) and \eqref{meanz} and the posterior variance $s^{(n+1)}(x_*)^2$ in (\ref{covt}) and \eqref{covz} for $t$-GP and Cl-GP depend on the posterior mode $\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n+1)}$ or $\tilde{\mathbf{z}}_{Cl}^{(n+1)}$, which changes every step. Therefore, they cannot be accessed in advance. Furthermore, for $t$-GP and Cl-GP $s^{(n+1)}(x_*)$ depends on the next-step Hessian $\bm{W}$ (namely on $w_{n+1}^{(n+1)}$), and for TP $s^{(n+1)}(x_*)$ depends on $\beta^{(n+1)}$. Both of
these again depend on $y_{n+1}$. To overcome this challenge, we develop an approximation $\hat{s}^{(n+1)}(\cdot)$ for each metamodel. Our strategy is to replace each inaccessible term with its expected value from the point of view of step $n$. For example, we calculate the expectation of $\tilde{\mathbf{f}}_{t\mathrm{GP}}^{(n+1)}$, $\tilde{\mathbf{z}}_{Cl}^{(n+1)}$ and $\beta^{(n+1)}$ with respect to $\mathcal{A}_n$. Propositions~\ref{updatevarcxt}, \ref{updatevarcxcl}, and \ref{updatevarcxtp} provide the resulting look-ahead formulas for $t$-GP, Cl-GP, and TP, respectively, with derivation details in Appendix~\ref{app:lookahead}.
\begin{proposition}\label{updatevarcxt}
For any $x_*$, the formula for the look-ahead variance for $t$-GP is
\begin{align}
\hat{s}^{(n+1)}_{t\mathrm{GP}}(x_*)^2
& := s^{(n)}_{t\mathrm{GP}}(x_*)^2 - \frac{v^{(n)}_{t\mathrm{GP}}(x_*,x_{n+1})^2}{(\tau^2 \frac{\nu+1}{\nu-1})+s^{(n)}_{t\mathrm{GP}}(x_{n+1})^2}. \label{updatetv}
\end{align}
\end{proposition}
\begin{proposition}\label{updatevarcxcl}
Let $\check{v}_{n+1} = v_{n+1}^+p_++v_{n+1}^-p_-$, where
\begin{align}
v_{n+1}^+ &= \frac{\phi(\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))^2}{\Phi(\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))^2} + \frac{\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1})\phi(\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))}{\Phi(\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))}, \\
p_+ &= \Phi\bigg(\frac{\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1})}{\sqrt{1+s^{(n)}_{\mathrm{Cl}}(x_{n+1})^2}}\bigg),
\end{align}
and
\begin{align}
v_{n+1}^- &= \frac{\phi(\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))^2}{\Phi(-\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))^2} -\frac{\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1})\phi(\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))}{\Phi(-\hat{z}_{\mathrm{Cl}}^{(n)}(x_{n+1}))}, \\
p_- &= 1 - p_+.
\end{align}
For any $x_*$, the formula for the look-ahead variance for Cl-GP is
\begin{align}
\hat{s}^{(n+1)}_{\mathrm{Cl}}(x_*)^2 := {s^{(n)}_{\mathrm{Cl}}(x_*)^2} - \frac{v^{(n)}_{\mathrm{Cl}}(x_*,x_{n+1})^2}{ (\check{v}_{n+1})^{-1}+s^{(n)}_{\mathrm{Cl}}(x_{n+1})^2}.
\end{align}
\end{proposition}
\begin{proposition}\label{updatevarcxtp}
For any $x_*$, the formula for the look-ahead variance for TP is
\begin{align}
\hat{s}^{(n+1)}_{\mathrm{TP}}(x_*)^2 &:= \frac{\nu + \check{\beta}^{(n+1)} - 2}{\nu + n - 1} s^{(n+1)}_{\mathrm{Gsn}}(x_*)^2, \label{updatetpv}
\end{align}
where $\check{\beta}^{(n+1)} =\beta^{(n)} + \frac{\nu}{\nu - 2}.$
\end{proposition}
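In code, these approximations amount to swapping the effective noise term (or rescaling) in the Gaussian update; a sketch for the $t$-GP and TP cases (the Cl-GP case is analogous, with $(\check{v}_{n+1})^{-1}$ as the effective noise):
\begin{verbatim}
def lookahead_var_tgp(s2_star, v_star_new, s2_new, tau2, nu):
    # Eq. (updatetv): noise variance inflated by (nu + 1)/(nu - 1)
    return s2_star - v_star_new ** 2 / (tau2 * (nu + 1) / (nu - 1) + s2_new)

def lookahead_var_tp(s2_gsn_next, nu, beta_n, n):
    # Eq. (updatetpv), using beta_check = beta_n + nu / (nu - 2)
    beta_check = beta_n + nu / (nu - 2.0)
    return (nu + beta_check - 2.0) / (nu + n - 1.0) * s2_gsn_next
\end{verbatim}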
We note that in our experiments we only use the above to evaluate $\mathcal{I}_n$, and directly re-estimate $\tilde{f}^{(n+1)}$ at each step of the sequential design.
\section{Synthetic Experiments} \label{sec:synthetic}
\subsection{Benchmark Construction} \label{benchmark}
As synthetic experiments, we consider three benchmark problems in dimensions $d=1$, $2$, and $6$. For the latter two we employ the widely used \emph{Branin-Hoo} 2-D and \emph{Hartman} 6-D functions; see, for example, \cite{picheny2013benchmark}. The original functions have been rescaled to map their sample space $D$ onto $[0,1]^d$; see Table~\ref{syntheticfunc}.
The latent functions are chosen to cover a variety of problem properties. The quadratic $f$ in 1-D is strictly monotonically increasing, yielding a single boundary point $\partial S$. The original Branin-Hoo function \citep{picheny2013benchmark} is modified so that $f$ is increasing in $x^1$ and the zero-level set has a non-trivial shape in $x^2$. \emph{Hartman6} is a multimodal function with a complex zero contour. The parameters of the original \emph{Hartman} function described in \cite{picheny2013benchmark} are adjusted to reduce the ``bumps'' in the zero contour and make the problem more appropriate for the sign classification task.
\begin{table*}[ht]
\caption{Response surfaces ${x} \mapsto f({x})$ for synthetic experiments.}
\centering
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Quadratic (1-D) & $f(x) = (x+0.75)(x-0.75)$
with $x\in[0,1]$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Branin-Hoo (2-D) & $f({x}) = \frac{1}{178} \big[\big(\bar{x}^1-\frac{5.1(\bar x^2)^2}{4\pi^2}+\frac{5\bar x^2}{\pi}-20\big)^2 + (10-\frac{10}{8\pi})\cos(\bar x^1)-181.47\big] $\\
& with: $\bar x^1 = 15x^1$, $\bar x^2 = 15x^2 - 5$, $x^1, x^2 \in [0,1]$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Hartman6 (6-D) & $f({x}) = \frac{-1}{0.1}\big[\sum_{i=1}^{4}C_i\exp\big(-\sum_{j=1}^{6}a_{ji}(x^j-p_{ji})^2\big)-0.1\big]$\\
& with: $\mathbf{C} = [0.2,0.22,0.28,0.3]$ \\
& $\mathbf{a} = \begin{bmatrix}
8.00 & 0.50 & 3.00 & 10.00 \\
3.00 & 8.00 & 3.50 & 6.00 \\
10.00 & 10.00 & 1.70 & 0.50\\
3.50 & 1.00 & 8.00 & 8.00 \\
1.70 & 6.00 & 10.00 & 1.00 \\
6.00 & 9.00 & 6.00 & 9.00 \end{bmatrix}\!, \quad
\mathbf{p} = \frac{1}{10^{4}}\begin{bmatrix}
1312 & 2329 & 2348 & 4047 \\
1696 & 4135 & 1451 & 8828 \\
5569 & 8307 & 3522 & 8732 \\
124 & 3736 & 2883 & 5743 \\
8283 & 1004 & 3047 & 1091 \\
5886 & 9991 & 6650 & 381 \end{bmatrix}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\label{syntheticfunc}
\end{table*}
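For reference, the 1-D and 2-D response surfaces of Table \ref{syntheticfunc} transcribe directly into code; a sketch:
\begin{verbatim}
import numpy as np

def quadratic_1d(x):
    return (x + 0.75) * (x - 0.75)

def branin_hoo_2d(x1, x2):
    # rescaled/modified Branin-Hoo on [0,1]^2, as in Table syntheticfunc
    xb1, xb2 = 15.0 * x1, 15.0 * x2 - 5.0
    return ((xb1 - 5.1 * xb2 ** 2 / (4.0 * np.pi ** 2)
             + 5.0 * xb2 / np.pi - 20.0) ** 2
            + (10.0 - 10.0 / (8.0 * np.pi)) * np.cos(xb1) - 181.47) / 178.0
\end{verbatim}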
A large number of factors can influence the performance of metamodels and designs. In line with the stochastic simulation perspective, we concentrate on the impact of the simulation noise and consider four observation setups. These cover a variety of noise distributions and signal-to-noise ratios, measured through the ratio of the noise standard deviation $\sigma_\tau$ to the range $R_f$ of the response. The first two settings use
\emph{Student-$t$} distributed noise, with (i) low $\sigma_\tau$ and (ii) high $\sigma_\tau$. The third setting uses (iii) Gaussian mixture noise to further test misspecification of $\epsilon$. The fourth setting considers the challenging case of (iv) a heteroscedastic Student-$t$ noise with state-dependent degrees of freedom. In total we have $3 \times 4 \times 4 \times 6$ experiments (indexed by their dimensionality, noise setting, design heuristic, and metamodel type).
Besides the noise distribution, we fix all other metamodeling aspects. All schemes are initialized with $n_0=10d$ inputs drawn from an LHS design on $[0,1]^d$ and use the SE kernel \eqref{covf} for the covariance matrix $\bm{K}$. To analyze the variability due to the initial design and the noise realizations, we perform 100 macroruns of each design/acquisition function combination. For each run, the same initial inputs are used across all GP metamodels and designs; the initial $\mathbf{x}_{1:n_0}$ otherwise vary across runs.
\begin{table*}[ht]
\caption{Stochastic simulation setup for synthetic experiments. ($R_f \equiv \max_{{x}} f({x})-\min_{{x}} f({x}) = 1$)
}
\centering
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Initial design & Latin hypercube sampling of size $n_0 = 10d$\\
Total budget $n$ & $d = 1, n = 100; \quad d = 2, n = 150; \quad d = 6, n = 1000$ \\
Test set size $M=|\mathcal{D}|$ & $d = 1, M = 1000; \quad d = 2, M = 500; \quad d = 6, M = 1000$ \\ \noalign{\smallskip}\hline\noalign{\smallskip}
Noise setting for $\epsilon(x)$ & (i) $t/\text{small}$: $t_3(0, (0.1R_f)^2)$ \\
& (ii) $t/\text{large}$: $t_3(0, (0.5R_f)^2)$ \\
& (iii) $Gsn/\text{mix}$: 50/50 mixture of $\mathcal{N}(0,(0.5R_f)^2)$ and $\mathcal{N}(0,R_f^2)$ \\
& (iv) $t/\text{hetero}$: $t_{6-4x^1}(0,(0.4(4x^1+1))^2)$ \\
\noalign{\smallskip}\hline
\end{tabular}
\label{syntheticnoise}
\end{table*}
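A sketch of samplers for the four settings of Table \ref{syntheticnoise}, interpreting the second argument of $t_\nu(0,\sigma^2)$ as the squared scale (an assumption of this illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
R_f = 1.0

def noise(x, setting):
    if setting == "t/small":
        return 0.1 * R_f * rng.standard_t(3)
    if setting == "t/large":
        return 0.5 * R_f * rng.standard_t(3)
    if setting == "gsn/mix":   # 50/50 Gaussian mixture
        sd = 0.5 * R_f if rng.random() < 0.5 else R_f
        return rng.normal(0.0, sd)
    if setting == "t/hetero":  # state-dependent dof and scale, x[0] in [0,1]
        return 0.4 * (4.0 * x[0] + 1.0) * rng.standard_t(6.0 - 4.0 * x[0])
    raise ValueError(setting)
\end{verbatim}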
\textbf{Optimization of the Improvement Metric:} We select the next input $x_{n+1}$ by maximizing the improvement metric $\mathcal{I}$ under the MCU, ICU, tMSE, or cSUR criterion. This maximization task is nontrivial in higher dimensions because $\mathcal{I}$ is frequently multimodal and can be flat around its local maxima. We use a genetic optimization approach as implemented in the \texttt{ga} function in MATLAB, with a tolerance of $10^{-3}$ and $200$ generations. This is a global, gradient-free optimizer that uses an evolutionary algorithm to explore the input space $D$.
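The same maximization can be sketched with any gradient-free global optimizer; for instance, using SciPy's differential evolution (an analog of, not a substitute for, MATLAB's \texttt{ga}):
\begin{verbatim}
from scipy.optimize import differential_evolution

def next_input(acq, d):
    # maximize I_n over D = [0,1]^d by minimizing its negative
    res = differential_evolution(lambda x: -acq(x),
                                 bounds=[(0.0, 1.0)] * d,
                                 tol=1e-3, maxiter=200, seed=0)
    return res.x
\end{verbatim}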
\textbf{Evaluation of Performance Metrics:}
Recall that evaluating the quality of $\partial \hat{S}$ is based on $\mathcal{ER}$ and $\mathcal{E}$ from \eqref{erc} and \eqref{eec}, which require integration over $D$. In practice, these are computed via a weighted sum over a finite $\mathcal{D}$, e.g., $\hat{\mathcal{E}} := \sum_{m=1}^{M} \Phi\big(\frac{-|\hat{f}(x_m)|}{s(x_m)}\big) \mu(x_m)$ for a space-filling sequence $\mathcal{D} \equiv x_{1:M} \in D$ of test points. In the 1-D experiments $\mathcal{D}$ was an equispaced grid of size $M=1000$. In higher dimensions, to avoid the very large number of test points required for an accurate approximation, we adaptively pick $\mathcal{D}$ to target the critical region close to the zero contour. To do so, we replace the integral with a weighted sum:
\begin{align}
\begin{split}
\mathcal{ER} \simeq& \frac{p_c}{M_1} \sum_{x_m \in D_1} \!\mathbb{I} (\sgn f(x_m)\neq \sgn \hat{f}(x_m))
+ \frac{(1-p_c)}{M_2} \sum_{x_m \in D_2} \mathbb{I} (\sgn f(x_m)\neq \sgn \hat{f}(x_m)),
\end{split}
\end{align}
where $M = M_1 + M_2$ and the test locations $x_{1:M_1}$ and $x_{1:M_2}$ are subsampled from a large space-filling (scrambled Sobol) sequence on $D$. The weight $p_c$ determines the relative volume of $D_1$ and $D_2 = D \backslash D_1$, where on $D_1 = \{ x : f(x) \simeq 0 \}$ we are close to the zero contour.
In the experiments below we use $M_1 = 0.8M, M_2 = 0.2M$, and $p_c = 0.4$, so that the density of test points close to $\partial S$ is double relative to those far from the zero contour. We employ the same strategy for speeding the evaluation of the empirical error $\mathcal{E}$.
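A sketch of the resulting two-region estimator, with hypothetical vectorized callables \texttt{f\_true} and \texttt{f\_hat} and pre-subsampled test sets \texttt{X1} (near the contour) and \texttt{X2}:
\begin{verbatim}
import numpy as np

def error_rate(f_true, f_hat, X1, X2, p_c=0.4):
    # weighted misclassification rate over the two test regions D_1, D_2
    e1 = np.mean(np.sign(f_true(X1)) != np.sign(f_hat(X1)))
    e2 = np.mean(np.sign(f_true(X2)) != np.sign(f_hat(X2)))
    return p_c * e1 + (1.0 - p_c) * e2
\end{verbatim}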
\textbf{Surrogate Inference:} Values of hyperparameters $\bm{\vartheta}$ are crucial for good performance of GP metamodels. We estimate $\bm{\vartheta}$ using maximum likelihood. Except for TP, all models are fitted with the open source package \texttt{GPstuff} \citep{vanhatalo2012bayesian} in MATLAB. TPs are fitted with the \texttt{hetGP}~\citep{binois2016practical} package in \texttt{R}. Auxiliary tests did not reveal any significant effects from using other available tools for plain GPs and $t$-GP, such as \texttt{GPML} \citep{rasmussen2010gaussian}.
In principle, the hyperparameters $\bm{\vartheta}$ change at every step of the sequential design, in other words, whenever $\mathcal{A}_n$ is augmented with $(x_{n+1}, y_{n+1})$. To save time, however, we do not update $\bm{\vartheta}$ at each step. Instead, we first estimate the hyperparameters $\bm{\vartheta}$ based on the initial design $\mathcal{A}_{n_0}$ and then freeze them, updating their values only every few steps. Specifically, $\bm{\vartheta}$ is re-estimated at steps $n_0+1, n_0+2, n_0+4, n_0+8, n_0+16, \ldots $ (as the sample size becomes large, the inference of hyperparameters becomes more stable).
The lengthscales $\theta_i$ are the most significant for surrogate goodness of fit. A too-small lengthscale will make the estimated $\hat{f}$ look ``wiggly'' and might lead to overfitting, while $\theta_i$ too large will fail to capture an informative shape of the true $f$ and hence $S$. Since our input domain is always $[0,1]^d$, we restrict $\theta_i \in [0.3, 2]~\forall i$ to be on the order of the length of the sample space $D$.
\textbf{Computational Overhead:} All the considered metamodels are computationally more demanding than the baseline Gaussian GP. For $t$-GP and Cl-GP, additional cost arises due to the Laplace approximation. TP necessitates estimation of the parameter $\nu$ and also the computation of $\beta$ in \eqref{TPcov}. In the experiments considered, the respective computation times were roughly double to triple relative to the Gaussian GP. In terms of sequential design, MCU, tMSE, and cSUR have approximately equal overhead; ICU is significantly more expensive because it requires evaluating the sum in \eqref{criterioneee}. Note that all heuristics include two expensive steps: optimization for $x_{n+1}$ and computation of $\hat{f}^{(n)}$ and $s^{(n)}$ (and/or $\hat{s}^{(n+1)}$).
Overall timing of the schemes is complicated because of the combined effects of $n$ (design budget), $M$ (size of test set), and the use of different software (some schemes run in \texttt{R} and others in Matlab). Most important, the ultimate computation time is driven by the simulation cost of generating $Y(x)$-samples, which is trivial in the synthetic experiments but assumed to be large in the motivating context.
\subsection{Comparison of GP Metamodels} \label{compgp}
Figure \ref{fig:r} shows the boxplots of the error rate $\mathcal{ER}$ of $\hat{S}^{(N)}$ at the final design ($N=100$ in 1-D; $N=150$ in 2-D; $N=1000$ in 6-D). The plots are sorted by noise settings and design strategies, facilitating comparison between the discussed metamodels. In Table \ref{tbl:2d}, we list the best metamodel and design combination in each case. Several high-level observations can be made. First, we observe the limitations of the baseline Gaussian GP metamodel, which cannot tolerate too much model misspecification. As the noise structure gets more complex, the classical GP surrogate begins to show increasing strain; in the last $t/hetero$ setup, it is both unstable (widely varying performance across runs) and inaccurate, with error rates upward of 30\% on ``bad'' runs. In addition, according to the results in Table \ref{tbl:2d}, the Gaussian GP is never the best model in any of the twelve cases except the 1-D example with $t/{small}$ noise. This result is not surprising but confirms that the noise distribution is key for the contour-finding task and illustrates the nonrobustness of the Gaussian observation model, due to which outliers strongly influence the inference.
Second, we document that the simple adjustment of using Student-$t$ observations significantly mitigates the above issue. $t$-GP performs consistently and significantly better than Gaussian GP in essentially all settings. This is true even when both models are misspecified (the $Gsn/mix$ and $t/hetero$ cases). The performance of $t$-GP was still better (though not statistically significantly so) when we tested it in the setting of homoscedastic Gaussian noise (not shown in the plots). The latter fact is not surprising---$t$-GP adaptively learns the degrees-of-freedom parameter $\nu$ and hence can ``detect'' Gaussian noise by setting $\nu$ to be large. Conversely, in heavy-tailed noise cases, the use of $t$ samples will effectively ignore outliers \citep{o1979outlier} and thus produce more accurate predictions than working with a Gaussian observation assumption. We find that $t$-GP can handle complex noise structures and offers good all-around performance, making it a good default selection for applications. It brings a smaller error rate $\mathcal{ER}$, more stable hyperparameter estimation, less contour bias, and tighter contour CIs. Moreover, $t$-GP is significantly better than all the other GPs in eight of the twelve setups, indicating that $t$-GP is essentially the best of all GP metamodels in most cases.
Third, we also inspect the performance of the TP metamodel. As shown in Table \ref{tbl:2d}, TP is the best in two of the twelve cases, both with the $t/{small}$ noise. We note that TP works worst in the $t/{hetero}$ cases, having both a large error rate $\mathcal{ER}$ and a large empirical error $\mathcal{E}$. Therefore, TP does not work well in cases with low signal-to-noise ratio or greatly misspecified noise. This may be related to the parameterization of TPs, with noise handled in the kernel, which appears less robust to misspecification. Also, since TPs revert to GPs as $n$ increases, the additional flexibility of the model fades as the iterations proceed and thus may not last long enough in low signal-to-noise settings, which require more samples. This is apparent, for instance, in Figure \ref{fig:eeerstep}, where the TP error at step 0 is lower than that of its counterparts.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.95\textwidth,trim=0.3in 0.4in 0.3in 0.2in]{Fig3}
\end{center}
\caption{Boxplots of final error rate $\mathcal{ER}^{(n)}$ from \eqref{erc} across designs (rows) and noise setups (columns). Colors correspond to different GP metamodels. Note that $x$-axis limits are different across columns. Top row is for the 1-D experiment and design size $n=100$; middle row: 2-D Branin-Hoo function with $n=150$; bottom row: 6-D Hartman6 function with $n=1000$.}
\label{fig:r}
\end{figure*}
Fourth, Cl-GP is also better than Gaussian GP in some cases with tMSE and MCU designs (except for the 6-D cases, where the error rate $\mathcal{ER}$ of MCU is not significantly different from that of ICU, although the mean for ICU is slightly smaller). There is significant improvement for models with low signal-to-noise ratio; the only exception is the low-noise setup, where Cl-GP underperforms classical GP. This matches the intuition that employing classification ``flattens'' the signal by removing outliers. By considering only the sign of the response, the classification model disregards its magnitude, simplifying the noise structure at the cost of some information loss. The net effect is helpful when the noise is misspecified or strong enough to interfere with learning the mean response. It is detrimental if the above gain is outweighed by the information loss, as apparently happens in the 6-D experiments. Of note, Cl-GP with the MCU design has the smallest error rate among all models in one ($t/hetero$ in 1-D) of the 12 cases shown in Table~\ref{tbl:2d}. We also observe, however, that the stability of Cl-GP is highly dependent on the design: some designs create large across-run variations in performance. We hypothesize that this is due to a more complex procedure for learning the hyperparameters of Cl-GP; therefore, designs that are not aggressive enough in exploring the zero-contour region (such as cSUR) face difficulties in estimating $\bm{\vartheta}$. As a result, relative to $t$-GP, Cl-GP has larger sampling variances.
\begin{table*}[htb]
\caption{Mean (with standard deviation in parentheses) error rate $\mathcal{ER}$ and the corresponding best-performing sequential design heuristic for the 1-D, 2-D, and 6-D synthetic case studies. Results are based on 100 macroreplications of each scheme.}
\centering
{\small \begin{tabular}{lrlrlrlrl}
\hline\noalign{\smallskip}
Model & \multicolumn{2}{c}{ $t/{small}$} & \multicolumn{2}{c}{ $t/large$} & \multicolumn{2}{c}{$Gsn/{mix}$} & \multicolumn{2}{c}{$t/{hetero}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \multicolumn{8}{c}{ 1-D \emph{Quadratic} } \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & \emph{tMSE} & 0.73\% (0.60\%) & tMSE & 3.24\% (2.79\%) & MCU & 3.87\% (3.17\%) & cSUR & 15.68\% (12.15\%) \\
$t$-GP & tMSE & 0.80\% (0.93\%) & \emph{tMSE} & 3.15\% (1.83\%) & \emph{cSUR} & 3.28\% (3.74\%) & cSUR & 12.50\% (9.05\%) \\
TP & MCU & 0.97\% (0.84\%) & MCU & 5.93\% (5.60\%) & tMSE & 5.09\% (4.40\%) & ICU & 16.44\% (10.14\%)\\
Cl-GP & tMSE & 0.87\% (0.64\%) & tMSE & 3.39\% (4.16\%) & MCU & 4.99\% (3.77\%) & \emph{MCU} & 8.83\% (7.35\%) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \multicolumn{8}{c}{ 2-D \emph{Branin-Hoo} } \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & MCU &1.78\% (0.57\%) & cSUR & 4.75\% (1.95\%) & ICU & 4.92\% (1.86\%) & MCU & 10.36 \% (3.94\%) \\
$t$-GP & MCU &1.70\% (0.29\%) & \emph{tMSE} &3.95\% (1.47\%) & \emph{ICU} &4.10\% (2.07\%) & \emph{tMSE} & 9.00\% (8.66\%) \\
TP & \emph{tMSE} & 1.27\% (0.41\%) & MCU & 4.79\% (1.84\%) & ICU & 5.19\% (1.68\%) & MCU & 12.75 \% (9.02\%)\\
Cl-GP & MCU & 1.56\% (0.51\%) & MCU &4.27\% (1.59\%) & MCU & 5.71\% (1.85\%) & tMSE &13.23\% (7.74\%) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
& \multicolumn{8}{c}{ 6-D \emph{Hartman6} } \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & ICU &3.81\% (0.34\%) & ICU &5.33\% (0.54\%) & ICU & 5.19\% (0.70\%) & MCU &11.67\% (2.89\%)\\
$t$-GP & ICU &3.75\% (0.40\%) & \emph{ICU} &3.98\% (0.47\%) & \emph{ICU} & 4.86\% (0.67\%) & \emph{ICU} &8.25\% (1.60\%) \\
TP & \emph{ICU} &1.25\% (0.20\%) & MCU &5.66\% (1.98\%) & MCU &4.88\% (0.88\%) & MCU &10.69\% (2.34\%)\\
Cl-GP & MCU &7.99\% (4.69\%) & ICU &7.20\% (0.66\%) & ICU & 8.31\% (2.44\%) & ICU &11.11\% (2.20\%) \\
\noalign{\smallskip}\hline
\end{tabular}}
\label{tbl:2d}
\end{table*}
\subsection{Empirical Errors and Uncertainty Quantification}
Figure~\ref{fig:e} shows the empirical errors $\mathcal{E}$ that are supposed to proxy the true error rates $\mathcal{ER}$. Overall, we find that MCU tends to produce the largest $\mathcal{E}$, and ICU the smallest. These results are consistent with their design construction and local behavior: MCU heavily concentrates around $\partial \hat{S}$, which leads to little information collected about other regions, especially around the boundaries of sample space $D$ and hence relatively large $\bar{E}(x)$ there, inflating $\mathcal{E}$. Conversely, the objective function of ICU is precisely the myopic minimization of $\mathcal{E}_{n+1}$. The other two designs are intermediate versions in terms of minimizing $\mathcal{E}$. The tMSE heuristic tends to target the zero contour plus the edges of $D$, while cSUR tends to broadly target a ``credible band'' around $\partial \hat{S}$. Both approaches are better at reducing $\mathcal{E}$ compared with MCU but do not directly aim at it. This logic is less consistent for the classification models, where tMSE often yields the lowest $\mathcal{E}$. This result echoes Section \ref{compgp}, namely, that classification GPs tend to perform better with MCU and tMSE designs in lower dimensional cases. TPs tend to have a greater empirical error $\mathcal{E}$ when the noise is misspecified or in higher dimensional experiments, consistent with the conclusions obtained regarding the error rate $\mathcal{ER}$.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.95\textwidth,trim=0.3in 0.4in 0.3in 0.2in]{Fig4}
\end{center}
\caption{Empirical error $\mathcal{E}^{(n)}$ in Eq.~(\ref{eec}) for GP, $t$-GP, TP, Cl-GP, and MCl-GP metamodels (colors), using MCU, tMSE, cSUR, and ICU-based designs (sub-rows) with $n=100$ in 1-D, $n=150$ in 2-D, and $n=1000$ in the 6-D experiments (rows).}
\label{fig:e}
\end{figure*}
As a further visualization, Figure \ref{fig:eeerstep} shows the median error rate $\mathcal{ER}$ \eqref{erc} and empirical error $\mathcal{E}$ in Eq.~(\ref{eec}) as a function of step $n$ in the 2-D $Gsn/mix$ experiments. This illustrates the learning rates of different schemes as data is collected and offers a further comparison between the true $\mathcal{ER}$ and the self-reported $\mathcal{E}$ of the same scheme.
We observe that some metamodels underperform for very low $n$, even if they eventually ``catch up'' after a sufficiently large simulation budget. This is especially pronounced for the classification Cl-GP metamodel, which yields very high $\mathcal{ER}^{(n)}$ (which is also much higher than the self-reported $\mathcal{E}$) for small $n$. We also note that Cl-GP appears to enjoy faster reduction in $\mathcal{ER}^{(n)}$ compared with the baseline Gaussian GP, which we conjecture is due to better resistance against $Y$-outliers that distract plain GP's inference of $S$. Comparing the two rows of the figure, we note that discrepancies between $\mathcal{ER}$ and $\mathcal{E}$ correlate with degraded performance, namely, the metamodel being unable to properly learn the response surface is reflected in poor uncertainty quantification. Moreover, the results suggest that the wedge in performance of different design criteria tends to persist; for example, MCU and ICU frequently have not only the highest/lowest $\mathcal{E}^{(n)}$ but also the slowest/fastest rate of \emph{reduction} in $\mathcal{E}^{(n)}$ as $n$ grows. Consistent with the results in Section \ref{compgp}, Cl-GP with the ICU criterion yields both a greater error rate $\mathcal{ER}$ and a greater empirical error $\mathcal{E}$ in the 2-D experiments.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.95\textwidth,trim=0.2in 0.3in 0.1in 0.2in]{Fig5}
\end{center}
\caption{Error rate $\mathcal{ER}^{(n)}$ \eqref{erc} and surrogate-based uncertainty measure $\mathcal{E}^{(n)}$ \eqref{eec} as a function of step $n$ in the 2-D $Gsn/{mix}$ setting. We compare six metamodels (columns) and four DoE heuristics (colors). The $y$-axis limits differ across rows. We plot median results across 20 macroreplications of each scheme. }
\label{fig:eeerstep}
\end{figure*}
\subsection{Designs for Contour Finding} \label{optdesign}
A key goal of our study is to obtain qualitative insights about the experimental designs most appropriate for noisy level set estimation. By identifying the best-performing heuristics we gain insight into the structure of near-optimal designs for \eqref{eq:objective}. In this section we illustrate the latter within a 2-D setup that can be conveniently visualized. Taking the $t/{large}$ experiment as an example, in Figure \ref{2dexpresult} we plot the fitted zero contour $\partial \hat{S}$ at $N=150$ together with the chosen inputs $\mathbf{x}_{1:150}$ across the six metamodels and the four $\mathcal{I}$ heuristics. As expected, most of the sampled inputs lie around the contour $\partial S$, which is the intuitive approach to minimizing the error $\mathcal{ER}$. Nevertheless, we observe significant differences in designs produced by different $\mathcal{I}$'s. The MCU criterion places most of the samples close to the estimated zero contour $\partial \hat{S}$, reflecting its aggressive exploitation nature. For tMSE, the samples tend to cluster at several subregions of $\partial \hat{S}$ and on the edges of $D$. For cSUR, $\mathbf{x}_{1:n}$ cover a band along $\partial \hat{S}$, resembling the shape of the MCU design but more dispersed. For ICU the design is much more exploratory, covering a large swath of $D$. All these findings echo the 1-D setting in Figure~\ref{designfit}.
One feature we observe is a so-called edge effect, that is, designs that focus on the edges of the input space. This effect arises due to the intrinsically high posterior uncertainty $s(x)$ for $x$ around $\partial D$. It features strongly in tMSE and cSUR (which have about 45\% of the inputs along the edge) and to some extent in ICU (about 30\% of inputs in this example). In contrast, MCU strongly discounts any region that is far from $\partial \hat{S}$. In the given 2-D experiment, we obtain some inputs directly on the boundary $\partial D = \{ x : x^1 \in \{0,1\} \text{ or } x^2 \in \{0,1\} \}$; that is, the constraint ${x} \in D$ is binding and the maximizer of $\mathcal{I}_n(\cdot)$ lies at its upper/lower bound.
A related phenomenon is the concentration of inputs in the top/left and bottom/right corners of $D$, which are associated with the highest uncertainty about the level set due to the confluence of the zero contour passing there and reduced spatial information from being on the edge of $D$.
Another noteworthy feature is \emph{replication} of some inputs, that is, repeated selection of the same $\mathbf{x}$ sites. This does not occur for MCU, but happens for ICU, tMSE, and cSUR, which frequently (across algorithm runs) sample repeatedly at the vertices of $D$ (indicated by the size of the corresponding marker in Figure~\ref{2dexpresult}). The replication is typically mild (we observe 145+ unique designs among a total of 150 $x_n$'s). This finding echoes~\cite{binois2017replication} regarding the importance of the metamodel distinguishing between signal and noise, which is a key distinction from the noise-free setting $\epsilon(x) \equiv 0$.
Given the above discussion and the relative overhead of the different heuristics, we conclude that in lower dimensional problems there is little benefit to using the more sophisticated ICU criterion, while for higher dimensional problems the ICU criterion is significantly better than the others. Beyond that, tMSE appears to be an adequate and cheaper choice. However, as the input space becomes more complicated, more exploration is needed and explorative criteria like ICU start to shine.
The performance of designs differs when combined with different GP metamodels. Table~\ref{tbl:2d} shows that there is no one overall ``best'' design for all metamodels across all cases. However, it does suggest some design/metamodel ``combos'' that work better than others, especially in the 1-D and 2-D experiments. The classification GPs seem to prefer more aggressive designs, such as MCU, while the regression GPs prefer more exploratory designs, such as ICU. In higher dimensions, ICU usually wins across all metamodels in accuracy; see the results of the 6-D experiments in Table~\ref{tbl:2d}.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.95\textwidth,trim=0.3in 0.2in 0.3in 0.2in]{Fig6}
\end{center}
\caption{Estimates of the zero contour $\partial \hat{S}$ for the 2-D Branin-Hoo example with $t/{large}$ noise setting. We show $\partial \hat{S}^{(n)}$ (red solid line) at step $n=150$, with its 95\% credible band (red dashed lines), the true zero contour $\partial S$ (black solid line) and the sampled inputs $\mathbf{x}_{1:n}$ (replicates indicated with larger symbols). We compare across the six metamodels (rows) and four DoE heuristics (columns).}
\label{2dexpresult}
\end{figure*}
\section{Application to Optimal Stopping Problems in Finance} \label{sec:Bermudan}
In our next case study we consider contour finding for determining the optimal exercise policy of a Bermudan financial derivative, cf.~Section~\ref{motivation}. The underlying simulator is based on a $d$-dimensional geometric Brownian motion $(\bm{X}_{t})$ that represents prices of $d$ assets and follows the log-normal dynamics
\begin{align}\label{eq:brown}
\bm{X}_{t+\Delta t} &= \bm{X}_{t} \exp \bigg(\Big(r-\frac{1}{2}\sigma^2\Big)\Delta t +\bm{\Sigma}\, \Delta \bm{W}_{t}\bigg), \qquad
\Delta \bm{W}_{t} \sim \mathcal{N}(0,\Delta t\, \bm{I}),
\end{align}
where $\bm{I}$ is the $d\times d$ identity matrix.
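A sketch of one simulation step of \eqref{eq:brown} in the diagonal case $\bm{\Sigma} = \sigma \bm{I}$ used below (drift adjustments, e.g., a dividend yield, are omitted here):
\begin{verbatim}
import numpy as np

def gbm_step(X, r, sigma, dt, rng):
    # one log-normal step for all d coordinates at once
    dW = rng.normal(0.0, np.sqrt(dt), size=X.shape)
    return X * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * dW)
\end{verbatim}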
Let $h(t,x)$ be the option payoff from exercising when $\bm{X}_{t} = x \in \mathbb{R}^d$.
Exercising is allowed every $\Delta t$ time units, up to the option maturity $T$, so that we wish to determine the collection $\{ S_t : t \in \{ \Delta t, 2\Delta t, \ldots, T-\Delta t\} \}$, which are the zero level sets of the timing function $ x\mapsto T(t,x)$. During the backward dynamic programming, we iterate over $t = T, T-\Delta t, \ldots, 0$,
and the simulator of $T(t,x)$ returns the difference between the pathwise payoff along a trajectory of $(\bm{X}_{t:T})$ that is based on the forward exercise strategy summarized by the forward-looking $\{ \hat{S}_s, s > t\}$, and $h(t,x)$.
As discussed in \cite{ludkovski2015kriging}, this setting implies a skewed, non-Gaussian, heteroskedastic distribution of the simulation noise and is a challenging stochastic contour-finding problem. Note that in order to reflect the underlying distribution of $\bm{X}_{t}$ at time $t$ (conditional on the given initial value $\bm{X}_0 = x_0$) the weighting measure $\mu(x)= p_{X_t}( x | x_0)$ is used. Thus, $\mu(\cdot)$ is log-normal based on \eqref{eq:brown} and is multiplied by the respective $\mathcal{I}_n$ criteria when selecting $x_{n+1}= \arg \max_{x \in D } \mathcal{I}_{n}(x) \mu(x)$.
In line with the problem context, we no longer directly measure the accuracy of learning $\{ S_{t} \}$ but instead focus on the ultimate output of RMC, which is the estimated option value in \eqref{payoff}. The latter must itself be numerically evaluated via an out-of-sample Monte Carlo simulation that averages realized payoffs along a large database of $M$ paths $x^{1:M}_{0:T}$:
\begin{align}\label{eq:hat-V}
\hat{V}(0, x_0) &= \frac{1}{M} \sum_{m=1}^M h(\tau^m, x^{(m)}_{\tau^m} ), \qquad
\tau^m = \inf \{t : x^{(m)}_{t} \in \hat{S}_{t} \}.
\end{align}
Since our goal is to find the \emph{best} exercise value, higher $\hat{V}$'s indicate a better approximation of $\{S_t\}$.
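A sketch of \eqref{eq:hat-V}, assuming exercise is forced at maturity and that the hypothetical callable \texttt{in\_S\_hat(t, x)} encodes the estimated stopping rule:
\begin{verbatim}
def value_estimate(paths, times, in_S_hat, payoff):
    # paths[m][k] is the state of path m at times[k]; stop at first entry
    total = 0.0
    for xs in paths:
        for t, x in zip(times, xs):
            if in_S_hat(t, x) or t == times[-1]:
                total += payoff(t, x)
                break
    return total / len(paths)
\end{verbatim}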
To allow a direct comparison, we set parameters matching the test cases in \cite{ludkovski2015kriging}, considering a 2-D and a 3-D example. In both cases the volatility matrix $\bm{\Sigma} = \sigma \bm{I}$ in \eqref{eq:brown} is diagonal with constant terms; that is, the coordinates $X^1,\ldots,X^d$ of $\bm{X}_t$ are independent and identically distributed. As a first example, we consider a 2-D basket Put option with parameters $r=0.06, \sigma=0.2, \Delta t = 0.04, \mathcal{K}=40, T= 1$ and payoff $h(t,x) = e^{-rt}(\mathcal{K}-\frac{x^1 + x^2}{2})_+$. Here it is known that stopping becomes optimal once both asset prices $x^1$ and $x^2$ become sufficiently low, so the level set $S_{t}$ is toward the bottom-left of $D$; see Figure~\ref{fig:2doption}. In contrast, stopping is definitely suboptimal when $h(t,x) = 0 \Leftrightarrow (x^1+x^2)/2 > \mathcal{K}$. Consequently, the input sample space is taken to be $D = [25,55] \times [25, 55] \cap \{x^1+x^2 \leq 80\}$.
In this first case study, the timing value $T(t, x)$ is known to be \emph{monotonically} increasing in the asset price $x$. To incorporate this constraint, we augment the four main metamodels (GP, $t$-GP, Cl-GP, and TP) with two monotonic versions, M-GP and MCl-GP. By constraining the fitted $\hat{f}$ to be monotone, we incorporate structural knowledge about the ground truth, which in turn reduces posterior uncertainty and thus might produce more accurate estimates of $S$. Monotonicity of the metamodel for $f$ is also one sufficient way to guarantee that the outputted level set $\hat{S}$ is a \emph{connected} subset of $D$.
Our monotone GPs are based on~\cite{riihimaki2010gaussian}. In general, any infinite-dimensional Gaussian process is intrinsically non-monotone, since the multivariate Gaussian distribution is always supported on the entire $\mathbb{R}^d$, rather than an orthant. Nevertheless, local monotonicity in $\hat{f}$ can be enforced by considering the gradient $\nabla f$ of $f$, which is also a Gaussian process. Specifically, \cite{riihimaki2010gaussian} proposed to adaptively add virtual observations for $\nabla f$; we employ the resulting implementation in the public \texttt{GPstuff} library~\citep{vanhatalo2012bayesian} to build our own version dubbed M-GP. We employ the same strategy to restrict the coordinates $z^j$ of the latent logistic GP $Z$ to be increasing (decreasing) across $D$. Implementation details are included in Appendix~\ref{app:mgp}.
As a second example, we consider a 3-D max-Call option on $x \in \mathbb{R}^3$ with payoff $h(t,x) = e^{-rt}(\max(x^1, x^2, x^3) - \mathcal{K})_+$. The parameters are $r = 0.05, \delta = 0.1, \sigma = 0.2, X_0 = (90,90,90), \mathcal{K} = 100, T = 3$ and $\Delta t = 1/3$. Since stopping is ruled out when $h(t,x) = 0 \Leftrightarrow \max(x^1, x^2, x^3) < \mathcal{K}$, the sample space is taken to be $D = [50,150]^3 \cap \{\max(x^1, x^2, x^3) > \mathcal{K}\}$. In this case, stopping is optimal if \emph{one} of the coordinates $x^i$ is significantly higher than the other two, so $S_{t}$ consists of three disconnected components. In this problem there is no monotonicity, so we employ only the GP, $t$-GP, Cl-GP, and TP metamodels.
Because of the iterative construction of the simulator, the signal-to-noise ratio gets low for small $t$'s. The variance ${\tau}^2(x)$ is also highly state-dependent, tending to be smaller for sites further from the zero-contour.
To alleviate this misspecification and reduce metamodel overhead, we employ \emph{batched} designs \citep{ludkovski2015kriging,ankenman2010stochastic}, reusing $x \in D$ for $r$ replications to collect observations $y^{(1)}(x),\ldots,y^{(r)}(x)$ from the corresponding simulator $Y(x)$. Then, we treat the mean of the $r$ observations,
\begin{align}
\bar{y}(x) = \frac{1}{r}\sum_{i=1}^r y^{(i)}(x),
\end{align}
as the response for input $x$ and use $(x,\bar{y}(x))$ as a single design entry. The statistical properties of $\bar{y}$ are improved compared with the raw observations $y$: it is more consistent with the Gaussian assumption thanks to the Central Limit Theorem (CLT), and its noise variance $\bar{{\tau}}^2(x) = {\tau}^2(x)/r$ is much smaller. Since the expense of sequential design of GP metamodels comes mainly from choosing the new input at each step, reducing the number of unique inputs to $n=N/r$, i.e., by a factor of $r$, significantly speeds up their fitting and updating.
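A sketch of forming one batched design entry, with a hypothetical \texttt{simulator} returning a single noisy draw of $Y(x)$:
\begin{verbatim}
import numpy as np

def batched_observation(simulator, x, r, rng):
    # average r replicates at x; noise variance shrinks to tau^2(x)/r
    ys = np.array([simulator(x, rng) for _ in range(r)])
    return ys.mean()
\end{verbatim}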
For the 2-D Put case study, we then test a total of three budget settings: (i) $r=3, n = 80$ (low budget of $N=240$ simulations); (ii) $r=15, n = 80$ (high budget with moderate replication); (iii) $r=48, n = 25$ (high budget with high replication). Comparing (ii) and (iii) shows the competing effects of having non-Gaussian noise (for lower $r$) and small design size (low $n$). The initial design size is $n_0=10$. In this example, taking $n \gg 80$ gives only marginally better performance but significantly raises the computation time and hence is ruled out as impractical. Three setups are investigated for the 3-D example: $r = 3, n= 100$ (low budget of $N=300$), $r = 20, n = 100$ (moderate budget of $N=2000$), and $r = 20, n = 200$ (high budget of $N=4000$), all with $n_0 = 30$. In all examples, the results are based on 25 runs of each scheme and are evaluated through the resulting expected reward $\hat{V}(0,x_0)$ \eqref{eq:hat-V} on a fixed out-of-sample testing set of $M=160{,}000$ paths of $\bm{X}_{0:T}$.
\begin{table*}[htb]
\caption{Performance of different designs and models on the 2-D Bermudan Put option in Section~\ref{sec:Bermudan}. Results are the mean (with standard deviation in parentheses) payoff over 25 runs, each evaluated on the same out-of-sample testing set of $M=160{,}000$ $\bm{X}_{0:T}$-paths.}
\centering
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
& LHS & MCU & tMSE &cSUR & ICU \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{6}{c}{$\mathbf{R = 3, n^* = 80}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & 1.211(0.120) & 1.425(0.008) & 1.427(0.007) & 1.431(0.009) & 1.431(0.007)\\
$t$-GP & 1.125(0.113) & 1.409(0.013) & 1.417(0.008) & 1.409(0.010) & 1.406(0.013) \\
TP & 1.179 (0.133) & 1.408 (0.022) & 1.414 (0.008) & 1.378 (0.044) & 1.316 (0.037) \\
M-GP & 1.403(0.014) & 1.438(0.007) & 1.440(0.006) & 1.442(0.009) & 1.433(0.005) \\
Cl-GP & 1.111(0.121) & 1.395(0.015) & 1.402 (0.013) & 1.393(0.013) & 1.391(0.013) \\
MCl-GP & 1.407(0.008) & 1.429(0.010) & 1.429(0.013) & 1.431(0.007) & 1.396(0.019) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{6}{c}{$\mathbf{R = 15, n^* = 80}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & 1.425 (0.017) & 1.448 (0.003) & 1.450 (0.002) & 1.450 (0.003) & 1.449 (0.003)\\
$t$-GP & 1.406 (0.033) & 1.445 (0.003) & 1.447 (0.002) & 1.444 (0.005) & 1.446 (0.004)\\
TP & 1.414 (0.023) & 1.443 (0.003) & 1.443 (0.004) & 1.441 (0.004) & 1.430 (0.006) \\
M-GP & 1.407 (0.008) & 1.449 (0.003) & 1.451 (0.002) & 1.454 (0.002) & 1.451 (0.003)\\
Cl-GP& 1.353 (0.050) & 1.441 (0.004) & 1.440 (0.003) & 1.435 (0.004) & 1.436 (0.005)\\
MCl-GP& 1.416 (0.010) & 1.448 (0.004) & 1.449 (0.003) & 1.443 (0.003) & 1.418 (0.008)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{6}{c}{$\mathbf{R = 48, n^* = 25}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & 1.341 (0.068) & 1.450 (0.003) & 1.449 (0.003) & 1.443 (0.004) & 1.448 (0.003)\\
$t$-GP & 1.336 (0.126) & 1.449 (0.003) & 1.452 (0.003) & 1.442 (0.004) & 1.449 (0.003)\\
TP & 1.367 (0.063) & 1.433 (0.006) & 1.430 (0.011) & 1.421 (0.039) & 1.423 (0.023) \\
M-GP & 1.415 (0.007) & 1.446 (0.002) & 1.444 (0.002) & 1.445 (0.004) & 1.442 (0.004) \\
Cl-GP& 1.110 (0.144) & 1.430 (0.010) & 1.434 (0.005) & 1.409 (0.008) & 1.388 (0.016) \\
MCl-GP& 1.423 (0.015) & 1.446 (0.004) & 1.448 (0.003) & 1.413 (0.024) & 1.414 (0.024) \\
\noalign{\smallskip}\hline
\end{tabular}
\label{tbl:2doption}
\end{table*}
\subsection{Results}
Tables \ref{tbl:2doption} and \ref{tbl:3doption} compare the different designs and metamodels. To assess the sequential design gains, we also report the results from using a baseline nonadaptive LHS design on $D$. At low budget, we observe the dramatic gains of using adaptive designs for level set estimation, which allow us to obtain the same performance with an order-of-magnitude smaller simulation budget. The tMSE and cSUR criteria work best for the 2-D Put, while ICU is the best for the 3-D max-Call, indicating that the exploratory designs start to win out in more complex settings with higher $d$.
Regarding the metamodels, in the low-budget setups, the monotonic GP metamodel works best for the 2-D Put and $t$-GP for the 3-D max-Call. For the higher budgets, which also coincide with higher replication $r$, the metamodel performance is similar, with $t$-GP slightly better than the other GP variants. In particular, once the SNR is high, classical Gaussian GP is effectively as good as any alternative. In both examples, TP and classification metamodels do not work well, possibly because they are more sensitive to the heteroscedasticity. We note that TP as well as the classification metamodels suffer from instability, so that lower $\hat{V}(0,x_0)$ is matched with a high sampling standard deviation. Another observation is that Cl-GP and MCl-GP perform badly with exploratory heuristics like ICU, especially at high budget.
\begin{figure*}[htb]
\begin{minipage}[t]{0.33\linewidth}
\begin{center}
\includegraphics[width=1\textwidth,trim=0.3in 0.2in 0.3in 0.2in]{Fig7} \\
(a) GP with tMSE
\end{center}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\begin{center}
\includegraphics[width=1\textwidth,trim=0.3in 0.2in 0.3in 0.2in]{Fig8} \\
(b) $t$-GP with tMSE
\end{center}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\begin{center}
\includegraphics[width=1\textwidth,trim=0.3in 0.2in 0.3in 0.2in]{Fig9} \\
(c) Cl-GP with MCU
\end{center}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\begin{center}
\includegraphics[width=1\textwidth,trim=0.3in 0.2in 0.3in 0in]{Fig10} \\
(d) M-GP with cSUR
\end{center}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\begin{center}
\includegraphics[width=1\textwidth,trim=0.3in 0.2in 0.3in 0in]{Fig11} \\
(e) TP with cSUR
\end{center}
\end{minipage}
\begin{minipage}[t]{0.33\linewidth}
\begin{center}
\includegraphics[width=1\textwidth,trim=0.3in 0.2in 0.3in 0in]{Fig12} \\
(f) MCl-GP with tMSE
\end{center}
\end{minipage}
\caption{The estimated exercise boundary $\partial \hat{S}$ (solid line with 95\% CI as dashed lines) at $t = 0.6$ for 2-D Bermudan Put from Section~\ref{sec:Bermudan}. Shading, which varies panel to panel, indicates the point estimate for the latent $\hat{f}(x)$ or $\hat{z}(x)$. We also show the design $(\mathbf{x}_{1:n},\mathbf{y}_{1:n})$ with positive $y_n$'s marked by $\times$ and negative $y_n$'s by $\circ$. All schemes used $R = 15, n^* = 80$. }
\label{fig:2doption}
\end{figure*}
Figure \ref{fig:2doption} shows the estimated exercise boundary $\partial \hat{S}_{t}$ with its 95\% CI at $t = 0.4$ for the 2-D Put, for each of the five metamodels, each with the design yielding the highest payoff. We observe that all the best-performing designs look similar, placing about a dozen $x_n$'s (some of which are from the initial design $x_{1:n_0}$) throughout $D$ and the rest tightly along the zero contour. The results suggest that the criteria are largely interchangeable and that simpler $\mathcal{I}_n$ heuristics are able to reproduce the features of the more sophisticated or expensive ICU. The heuristics \emph{do} differ in their uncertainty quantification; $t$-GP and GP generate the tightest CI bands, while those of the classification GPs and TP are too wide, indicating a lack of confidence in the estimate. Of note, the regression GP metamodels (GP, $t$-GP and M-GP) also generate the lowest sampling variance for $\hat{V}(0,x_0)$.
Based on these results, our take-aways are threefold. First, similar to \cite{ludkovski2015kriging}, we document significant gains from sequential design. Second, we find that while using ICU is helpful for more complicated settings with higher dimension $d$ and larger budget, tMSE is the recommended DoE heuristic for lower-dimensional cases, achieving excellent results with minimal overhead (in particular without requiring look-ahead variance). Third, we find that for applications with thousands of simulations, the Gaussian observation model is sufficient, since the underlying design needs to be replicated $r \gg 1$ times in order to avoid excessively large $\mathbf{K}$-matrices. Therefore, there is little need for more sophisticated metamodels, although useful gains can be realized from enforcing the monotonic structure, if available.
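To make the tMSE recommendation concrete, the following sketch implements a targeted-MSE-type acquisition: posterior variance down-weighted away from the estimated zero contour. The Gaussian-weight form below is one standard choice and is our assumption here (the exact weight is defined earlier in the paper); the candidate grid and the stand-in posterior are purely illustrative.
\begin{verbatim}
import numpy as np

def tmse_acquisition(mean, sd):
    # Posterior variance s_n^2(x), weighted towards the estimated
    # zero contour {x : f_hat(x) = 0} via a Gaussian weight.
    var = sd ** 2
    weight = np.exp(-0.5 * mean ** 2 / var) / np.sqrt(2 * np.pi * var)
    return var * weight

# Toy usage: choose the next design point among grid candidates.
xs = np.linspace(-2.0, 2.0, 201)
mean = xs ** 2 - 1.0                 # stand-in posterior mean
sd = 0.3 * np.ones_like(xs)          # stand-in posterior sd
x_next = xs[np.argmax(tmse_acquisition(mean, sd))]
\end{verbatim}
On this toy posterior the maximizer lands near the zero contour at $x \approx \pm 1$, which is exactly the contour-hugging behavior observed in Figure \ref{fig:2doption}.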
\begin{table*}[htb]
\caption{Performance of different designs and models on the 3-D Bermudan max-Call in Section~\ref{sec:Bermudan}. Results are the mean (with standard deviation) payoff over 25 macroreplications, evaluated on the same out-of-sample testing set of $M=160000$ $\bm{X}_{0:T}$-paths at each run.}
\centering
\begin{tabular}{lrrrrr}
\hline\noalign{\smallskip}
& LHS & MCU & tMSE &cSUR & ICU \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{6}{c}{$\mathbf{R = 3, n^* = 100}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & 10.036 (0.331) & 10.725 (0.095) & 10.773 (0.071) & 10.711 (0.086) & 10.753 (0.072) \\
$t$-GP & 9.894 (0.447) & 10.736 (0.088) & 10.747 (0.087) & 10.720 (0.104) & 10.782 (0.076) \\
TP & 9.169 (0.354) & 10.101 (0.218) & 9.872 (0.102) & 8.867 (0.357) & 10.482 (0.156) \\
Cl-GP & 9.552 (0.567) & 10.566 (0.084) & 10.657 (0.097) & 10.586 (0.099) & 10.604 (0.119) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{6}{c}{$\mathbf{R = 20, n^* = 100}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & 10.924 (0.076) & 11.078 (0.029) & 11.072 (0.028) & 11.055 (0.032) & 11.101 (0.023) \\
$t$-GP & 10.923 (0.071) & 11.061 (0.039) & 11.055 (0.027) & 11.044 (0.029) & 11.100 (0.027) \\
TP & 10.385 (0.178) & 10.815 (0.039) & 10.745 (0.045) & 10.620 (0.087) & 10.507 (0.087) \\
Cl-GP & 10.761 (0.112) & 11.026 (0.032) & 10.991 (0.037) & 10.901 (0.049) & 10.937 (0.041) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{6}{c}{$\mathbf{R = 20, n^* = 200}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
GP & 11.105(0.036) & 11.147(0.021) & 11.119(0.022) & 11.131(0.018) & 11.178(0.020) \\
$t$-GP & 11.090(0.034) & 11.141(0.019) & 11.126(0.020) & 11.115(0.027) & 11.175(0.021) \\
TP & 10.585 (0.118) & 10.896 (0.030) & 10.811 (0.035) & 10.764 (0.041) & 10.638 (0.038) \\
Cl-GP & 10.995(0.059) & 11.109(0.025) & 11.056(0.040) & 10.985(0.027) & 11.010(0.029) \\
\noalign{\smallskip}\hline
\end{tabular}
\label{tbl:3doption}
\end{table*}
\section{Conclusion}\label{sec:conc}
We have carried out a comprehensive comparison of five metamodels and four design heuristics on 18 case studies ($4 \times 3$ synthetic, plus six real-world). In sum, the considered alternatives to the standard Gaussian-observation GP do perform somewhat better. In particular, $t$-GP directly nests plain GP and hence essentially always matches or exceeds the performance of the latter. We also observe gains from using Cl-GP when the SNR is low and from monotonic surrogates when the underlying response is monotone. That being said, the final recommendation depends on computational considerations, as the respective overhead is larger (and exact updating of the metamodel is no longer possible).
In terms of design, we advocate the benefits of tMSE in low-dimensional simulations: it generates high-performing experimental designs without requiring an expensive acquisition function (or even look-ahead variance). The tMSE criterion does sometimes suffer from a tendency to place many design points at the edge of the input space, but otherwise tends to match the performance of more complex and computationally intensive $\mathcal{I}_n$'s. For complex simulations, especially in higher dimensions with misspecified noise, ICU is the best choice among the considered designs (although in that case, random-set-based heuristics should also be considered). We also stress that the user ought to thoughtfully pick the \emph{combination} of sequential design and metamodel, since cross-dependencies are involved (e.g., classification metamodels generally do not work well with the ICU criterion in lower dimensions).
\subsection*{Acknowledgements}
XL and ML are partially supported by NSF DMS-1521743 and DMS-1821240. The work of MB is partially supported by NSF DMS-1521702 and the U.S.~Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Contract No.\ DE-AC02-06CH11357.
\bibliographystyle{spbasic}
\section{Introduction}
With the rapid advancement of smart cities, the internet of vehicles, and edge artificial intelligence,
the Internet of Things (IoT) is expected to support ubiquitous connectivity for billions of devices that generate massive amounts of real-world data \cite{zhu2020overtheair}.
Wireless data aggregation among the distributed IoT devices is an important but challenging task \cite{blind,wang_iot,wang2020federated}.
Due to the scarcity of spectrum resources and the ultra-low latency requirement, the conventional transmit-then-compute scheme cannot support fast wireless data aggregation in dense IoT networks.
Fortunately, over-the-air computation (AirComp) has the potential to achieve fast wireless data aggregation by enabling the paradigm of ``compute-when-communicate''.
In particular, by exploiting the superposition property of multiple access channels (MACs), wireless data aggregation can be achieved in one transmission interval by allowing all IoT devices to transmit concurrently over the same radio channel \cite{yangkai}.
AirComp was first investigated in the seminal work \cite{info}, where the authors showed, from an information-theoretic perspective, that the superposition property of MACs can be exploited to compute nomographic functions.
Given its great potential for wireless data aggregation, AirComp has recently attracted considerable interest \cite{optimal_liu, optimal_huang, chen2018uniform, multi_function, Muti_modal}.
In particular, considering simple single-input single-output (SISO) wireless networks with energy-constrained IoT devices, the authors in \cite{optimal_liu, optimal_huang} studied the optimal transmit power control strategies for AirComp.
As an extension, the authors in \cite{chen2018uniform} investigated AirComp in multiple-input single-output (MISO) wireless networks, where a semi-definite relaxation (SDR) based successive convex approximation (SCA) algorithm was proposed to design the receive beamforming vector at the access point (AP).
The authors in \cite{multi_function} and \cite{Muti_modal} integrated multiple-input multiple-output (MIMO) with AirComp, and studied the transceiver design for multi-function computation and multi-modal sensing, respectively.
In \cite{Muti_modal}, an approximate receive beamformer was designed by utilizing Grassmann manifold theory.
However, the existing studies based on SDR and SDR-based SCA can only obtain sub-optimal solutions.
The optimal receive beamforming design for AirComp in MISO systems is still not available in the literature.
In this paper, we consider wireless data aggregation via AirComp in IoT networks with a multi-antenna AP.
Our goal is to minimize the computation distortion at the AP by jointly optimizing the transmit scalars at the devices as well as the denoising factor and the receive beamforming vector at the AP.
With the transmit scalars and the denoising factor derived in closed form, the distortion minimization problem reduces to a non-convex quadratically constrained quadratic programming (QCQP) problem with respect to the receive beamforming vector at the AP.
We propose a globally optimal branch and bound (BnB) algorithm to design the receive beamforming vector, thereby further reducing the distortion of AirComp when compared to the baseline algorithms, as verified via extensive simulations.
Moreover, the proposed algorithm can be treated as a benchmark to evaluate the quality of the solutions returned by the existing algorithms, e.g., SDR and SDR-based SCA.
\emph{Notations}:
We use boldface upper-case, boldface lower-case, and lower-case letters to denote matrices, vectors, and scalars, respectively.
We denote the imaginary unit of a complex number as $\mathbf{j}$.
$(\bm \cdot)^{\sf{H}}$ stands for the conjugate transpose of a matrix or a vector.
$\|\cdot\|$ denotes the $l_2$ norm operator.
$\operatorname{Re}\{\bm\cdot\}$, $\operatorname{Im}\{\bm \cdot\}$, $|\cdot|$, and $\operatorname{arg}(\cdot)$ represent the real part, imaginary part, absolute value, and argument of a scalar, respectively.
$\mathbb{E}\left[\bm \cdot\right]$ denotes the expectation of a random variable.
\section{System Model and Problem Formulation}
\subsection{System Model}
We consider fast wireless data aggregation via AirComp in an IoT system consisting of $K$ single-antenna IoT devices and one AP with $N$ antennas. We denote $\mathcal{K}=\{1,2,\ldots, K\}$ as the index set of IoT devices.
The AP aims to recover the arithmetic mean of the sensory data from all IoT devices.
We denote $s_k = \varphi_k(z_k)$ as the transmit signal of device $k$, where $\varphi_k(\cdot)$ is the specific pre-processing function and
$z_k\in\mathbb{C}$ is the representative information-bearing data at device $k$.
Without loss of generality,
we assume that $\{s_k\}_{k=1}^{K}$ are independent with zero mean and unit power, i.e., $\mathbb{E}[s_{k}s_{k}^{\sf H}] = 1$ and $\mathbb{E}[s_{k}s_{j}^{\sf H}] = 0, \forall k \neq j$ \cite{optimal_huang}.
To recover the arithmetic mean of the sensory data from all IoT devices, i.e., $\frac{1}{K}\sum_{k\in\mathcal{K}}z_k$, it is sufficient for the AP to estimate the following target function
\begin{align}
g=\sum_{k\in\mathcal{K}}s_k.
\end{align}
By calibrating the transmission timing of each IoT device, we assume that the signals transmitted by all IoT devices arrive at the AP synchronously.
The signal received at the AP can be expressed as
\begin{align}
\bm{y}=\sum_{k\in\mathcal{K}}\bm{h}_{k}{w}_ks_k+\bm{n},
\end{align}
where $w_k\in\mathbb{C}$ denotes the transmit scalar of device $k$, $ \bm h_{k}\in\mathbb{C}^{N\times1}$ is the channel coefficient vector of the link from device $k$ to the AP,
and $ \bm n\sim\mathcal{CN}(0, \sigma^2\bm I_N) $ is the additive white Gaussian noise (AWGN) with zero mean and variance $\sigma^2$.
In practice, the maximum transmit power is limited, i.e., $|w_k|^2 \leq P, \forall k$.
After applying the receive combining, the estimated function at the AP is given by
\begin{align}
\hat{g}&={1\over{\sqrt \eta}}{\bm{m}^{\sf{H}}\bm{y}} =\!{1\over{\sqrt \eta}}{\bm{m}}^{\sf{H}}\sum_{k\in\mathcal{K}}\bm{h}_{k} {w}_ks_k +{1\over{\sqrt \eta}}\bm{m}^{\sf{H}}\bm{n},
\end{align}
where $\bm{m}\in\mathbb{C}^N$ and $\eta$ denote the receive beamforming vector and the denoising factor at the AP, respectively.
\subsection{Problem Formulation}
To evaluate the performance of AirComp,
we adopt mean-squared-error (MSE) to quantify the distortion of $\hat{g}$ with respect to $g$, given by
\begin{align}
{\sf{MSE}}(\hat{g}, g)=\mathbb{E}\left(|\hat{g}-g|^2\right) = \!\! \sum_{k\in\mathcal{K}}\left| \frac{{{\bm{m}}^{\sf{H}}\bm{h}_{k}{w}_k}}{\sqrt{\eta}} -1\right|^2 \!+\! \frac{\sigma^2\|\bm{m}\|^2}{\eta}\nonumber .
\end{align}
When the receive beamforming vector $\bm{m}$ is given, the optimal transmit scalars that minimize the MSE can be expressed as \cite{yangkai,chen2018uniform}
\begin{align}\label{a}
w_k^{\star}=\sqrt{\eta}{{(\bm{m}^{\sf{H}}\bm{h}_{k})^{\sf{H}}}\over{\|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2}},\forall k.
\end{align}
Due to the transmit power constraint $|w_k^{\star}|^2 \leq P$, and since the resulting MSE is decreasing in $\eta$, the denoising factor is given by \begin{align}\label{b}
\eta=P\min_{k\in\mathcal{K}} \|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2.
\end{align}
With \eqref{a} and \eqref{b}, the MSE can be further rewritten as
\begin{align}
{\sf{MSE}}={{\|\bm{m}\|^2\sigma^2}\over{\eta}}
={{\|\bm{m}\|^2\sigma^2}\over{P\min_{k\in\mathcal{K}} \|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2}}.\nonumber
\end{align}
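To make the closed-form pipeline concrete, the following sketch (ours; it only assumes NumPy and randomly drawn channels) evaluates \eqref{a}, \eqref{b}, and the resulting MSE for a given receive beamformer $\bm m$.
\begin{verbatim}
import numpy as np

def aircomp_mse(m, H, P, sigma2):
    # g_k = m^H h_k for all devices; H has columns h_k.
    g = m.conj() @ H
    eta = P * np.min(np.abs(g) ** 2)              # denoising factor
    w = np.sqrt(eta) * g.conj() / np.abs(g) ** 2  # optimal transmit scalars
    mse = sigma2 * np.linalg.norm(m) ** 2 / eta
    return w, eta, mse

rng = np.random.default_rng(0)
N, K, P, sigma2 = 4, 8, 1.0, 0.01
H = (rng.standard_normal((N, K))
     + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
m = rng.standard_normal(N) + 1j * rng.standard_normal(N)
print(aircomp_mse(m, H, P, sigma2)[2])
\end{verbatim}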
We thus propose to optimize the receive beamforming vector $ \bm m $ to minimize the MSE as follows:
\begin{equation}\label{eq:ori}
\begin{split}
\underset{\bm m}{\min} \left({{\|\bm{m}\|^2\sigma^2}\over{P\min_{k\in\mathcal{K}} \|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2}}\right).
\end{split}
\end{equation}
According to \cite{chen2018uniform}, problem \eqref{eq:ori} can be equivalently transformed into the following problem
\begin{equation}\label{new}
\begin{split}
\underset{\bm m}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad\|\bm m^{\sf H}\bm{h}_{k}\|^2 \geq 1, ~ \forall k.
\end{split}
\end{equation}
The MSE minimization problem is thus formulated as a non-convex QCQP problem.
The authors in \cite{chen2018uniform,jiang2019} tackled this non-convex QCQP problem via SDR and SDR-based SCA algorithms, which, however, are sub-optimal.
The quality of the solutions obtained by the aforementioned sub-optimal algorithms is still unknown due to the lack of the optimal algorithm.
In the next section, we shall propose a globally optimal algorithm for the optimization of receive beamforming vector $\bm m$ to fully exploit the potential of multiple antennas and to evaluate the performance of the existing sub-optimal algorithms.
\section{Proposed Globally Optimal BnB Algorithm}
The BnB algorithm
is capable of approaching an optimal solution within any desired error bound for certain non-convex problems \cite{ChengLu}.
The main idea of the BnB algorithm is to first construct a lower bound and an upper bound for the non-convex problem, and then iteratively lift the lower bound and reduce the upper bound
through a judiciously designed branching strategy.
Specifically, the lower bound can be obtained by solving a corresponding relaxation problem.
Subsequently, we project the solution of this relaxation problem onto the original feasible region to obtain an upper bound.
\subsection{Lower Bound and Upper Bound}
To facilitate the BnB algorithm design, we first introduce an auxiliary variable $\bm{x} = [x_1,x_2,\ldots,x_K]^{\sf T} \in \mathbb{C}^{K}$, and then rewrite problem \eqref{new} as
\begin{equation}\label{ori:BnB}
\begin{split}
\underset{\bm m, \bm x}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad |x_k|\geq1, ~ \forall k,
\end{split}
\end{equation}
where the constraint $|x_k|\geq1$ means that the feasible region of $x_k$ is the region outside the unit circle in the complex plane.
We denote set $\mathcal{X} = \{\bm x \big | |x_k|\geq1, ~ \forall k\}$,
which can be treated as the Cartesian product of $K$ sets, i.e., $\mathcal{X} = \prod_{k=1}^{K} \mathcal{X}_k$ where $\mathcal{X}_k = \left \{x_k\in \mathbb{C} \big | |x_k| \geq 1\right\},~ \forall k$.
For non-convex set $\mathcal{X}_k$, the corresponding convex hull is the whole complex plane.
However, such a relaxation is too loose to yield an effective lower bound.
We therefore partition $\mathcal{X}_k$ into several subregions, leading to a tighter relaxation.
Specifically, for the $n$-th non-convex subregion
$\mathcal{X}_k^n = \left\{x_k\in \mathbb{C} \Big | |x_k| \geq 1, ~ \arg (x_k) \in \left[l_k^n, u_k^n\right) \right\}$ with the argument interval being not greater than $\pi$, i.e., $u_k^n-l_k^n\leq \pi$, the corresponding convex hull can be represented as
\begin{equation}
\label{convex_hull}
\begin{aligned}
&~~~\operatorname{Conv}\{\mathcal{X}_k^n\} \\
&= \left\{x_k\in \mathbb{C} \Big |
\begin{split}
\operatorname{Re}\left\{\bar{x}_k \cdot e^{\mathbf{j}\frac{u_k^n+l_k^n}{2}}\right\} & \geq \cos \left(\frac{u_k^n-l_k^n}{2}\right), \\
\arg \left(x_k\right) & \in \left[l_k^n, u_k^n\right)
\end{split}
\right\},
\end{aligned}
\end{equation}
where $\bar{x}_k$ denotes the conjugate of $x_k$.
For example, as shown in Fig. \ref{demo}, the convex hull is enclosed by the chord BC connecting the points $e^{\mathbf{j}l_k^n}$ and $e^{\mathbf{j}u_k^n}$ on the unit circle
\[
\left\{x_k\in \mathbb{C} \Big |
\begin{aligned}
\operatorname{Re}\left\{\bar{x}_k \cdot e^{\mathbf{j}\frac{u_k^n+l_k^n}{2}}\right\} = \cos \left(\frac{u_k^n-l_k^n}{2}\right)
\end{aligned}
\right\},
\]
ray AB $\{x_k \in \mathbb{C} \mid \text{arg}(x_k) = l_k^n\}$, and ray AC $\{x_k \in \mathbb{C} \mid \text{arg}(x_k) = u_k^n\}$.
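As a quick sanity check of this geometry, the sketch below (ours) tests membership in $\operatorname{Conv}\{\mathcal{X}_k^n\}$ using the chord half-plane of \eqref{convex_hull} together with the two ray half-planes; it assumes $u - l \leq \pi$.
\begin{verbatim}
import numpy as np

def in_conv_hull(x, l, u, tol=1e-12):
    # Chord half-plane: Re{conj(x) e^{j(l+u)/2}} >= cos((u-l)/2),
    # plus the two rays bounding arg(x) in [l, u]; needs u-l <= pi.
    chord = np.real(np.conj(x) * np.exp(1j * (l + u) / 2))
    return (chord >= np.cos((u - l) / 2) - tol
            and np.imag(np.exp(-1j * l) * x) >= -tol
            and np.imag(np.exp(-1j * u) * x) <= tol)

l, u = 0.0, np.pi / 2
assert in_conv_hull(np.exp(1j * l), l, u)   # endpoint B
assert in_conv_hull(np.exp(1j * u), l, u)   # endpoint C
mid = np.cos((u - l) / 2) * np.exp(1j * (l + u) / 2)
assert in_conv_hull(mid, l, u)              # midpoint of chord BC
assert not in_conv_hull(0.5 * mid, l, u)    # strictly inside circle
\end{verbatim}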
\begin{remark}
\emph{The minimum modulus over the convex hull of $\mathcal{X}_k^n$, i.e., $\min_{x_k \in \operatorname{Conv}\{\mathcal{X}_k^n\}} |x_k|$, equals $\cos \left(\frac{u_k^n-l_k^n}{2}\right)$; it is attained at the midpoint of line segment BC, which is also the point of the convex hull furthest from the set $\mathcal{X}_k^n$.
The convex relaxation $\operatorname{Conv}\{\mathcal{X}_k^n\}$ approaches $\mathcal{X}_k^n$ as $u_k^n-l_k^n$ tends to zero.
}
\end{remark}
In the $t$-th iteration of the BnB algorithm, the original feasible region $\mathcal{X}$ is divided into several subregions $\{\mathcal{S}^i\}_{i\in \mathcal{I}_t}$, where $\mathcal{I}_t$ denotes the index set of subregions at the $t$-th iteration.
Specifically, we rewrite $\mathcal{S}^{i}$ as the Cartesian product of $K$ independent sets, i.e., $\mathcal{S}^{i} = \mathcal{X}^i_1\times \mathcal{X}^i_2\times \cdots \times \mathcal{X}^i_K$,
where
\[
\mathcal{X}^i_k = \left\{ x_k \Big |
\begin{aligned}
|x_k| \geq 1,
\arg \left(x_k\right) \in \left[ l^i_k, u^i_k \right)
\end{aligned}
\right\}, \forall k.
\]
Besides, we have $\cup_{i\in \mathcal{I}_t} \mathcal{S}^i = \mathcal{X}$ and $\mathcal{S}^i \cap\mathcal{S}^{i^{\prime}} = \emptyset, i \neq i', \forall i, i^{\prime} \in \mathcal{I}_t$.
As a result, problem \eqref{ori:BnB} can be separated into a series of subproblems $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ defined on subregions $\{\mathcal{S}^i\}_{i\in \mathcal{I}_t}$ as follows,
\begin{equation}\label{subproblem_i}
\mathcal{P}^i : \quad \begin{aligned}
\underset{\bm m, \bm x}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm{x} \in \mathcal{S}^i.
\end{aligned}
\end{equation}
To obtain a lower bound for problem \eqref{subproblem_i},
we resort to solving its convex relaxation as follows,
\begin{equation}\label{subproblem:convex}
\begin{split}
\underset{\bm m, \bm x}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm{x} \in \hat{\mathcal{S}}_i,
\end{split}
\end{equation}
where $\hat{\mathcal{S}}_i$ denotes the convex hull of $\mathcal{S}^i$.
Specifically,
$\hat{\mathcal{S}}_{i} = \hat{\mathcal{X}}^i_1\times \hat{\mathcal{X}}^i_2\times \cdots \times \hat{\mathcal{X}}^i_K$, where $\hat{\mathcal{X}}^i_k$ is the convex hull of $\mathcal{X}^i_k,~\forall k$.
It is worth noting that the convex hull of $\mathcal{X}^i_k$ can be obtained by using \eqref{convex_hull} if its argument interval is less than or equal to $\pi$, i.e., $u^i_k-l^i_k \leq \pi$.
The optimal objective value of convex problem \eqref{subproblem:convex} serves as a lower bound for problem \eqref{subproblem_i} since $\mathcal{S}^i \subseteq \hat{\mathcal{S}}_i$.
We then take the minimum lower bound among $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ as the current lower bound of problem \eqref{ori:BnB}, denoted as $L^t$.
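The relaxed subproblem is a complex second-order-cone-representable program and can be prototyped directly; a minimal sketch is given below (assuming CVXPY with complex-variable support; the function name and interface are ours, and the chord cut is written via the unit direction $e^{\mathbf{j}(l+u)/2}$ as in \eqref{convex_hull}). When a coordinate's argument interval exceeds $\pi$, the cuts are simply dropped, which still yields a valid (looser) lower bound.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_relaxation(H, l, u):
    # Lower-bound problem on one subregion: minimize ||m||^2 s.t.
    # m^H h_k = x_k and x_k in the convex hull of its wedge.
    N, K = H.shape
    m = cp.Variable(N, complex=True)
    x = cp.Variable(K, complex=True)
    cons = []
    for k in range(K):
        cons.append(H[:, k].conj() @ m == cp.conj(x[k]))  # m^H h_k = x_k
        if u[k] - l[k] <= np.pi:  # hull cuts valid on narrow wedges only
            c = np.exp(1j * (l[k] + u[k]) / 2)
            cons.append(cp.real(cp.conj(x[k]) * c)
                        >= np.cos((u[k] - l[k]) / 2))
            cons.append(cp.imag(np.exp(-1j * l[k]) * x[k]) >= 0)
            cons.append(cp.imag(np.exp(-1j * u[k]) * x[k]) <= 0)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(m)), cons)
    prob.solve()
    return prob.value, m.value, x.value
\end{verbatim}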
\begin{figure}[t]
\centering
\includegraphics[width=2.5in]{demo2new.pdf}
\caption{Illustration of convex relaxation of the outer regions of arcs for three different argument intervals, i.e., $\pi/2$, $\pi/4$, and $\pi/8$.}
\label{demo}
\vspace{-5mm}
\end{figure}
\begin{remark}
\emph{
The optimal solution of problem \eqref{ori:BnB} lies in one of $\{\mathcal{S}^i\}_{i\in\mathcal{I}_t}$ since $\cup_{i\in \mathcal{I}_t} \mathcal{S}^i = \mathcal{X}$.
We denote the index of the subregion that incorporates the optimal solution as $i^{\prime}$.
The optimal objective value of $\mathcal{P}^{i^{\prime}}$ is identical to the optimal objective value of problem \eqref{ori:BnB}.
Therefore, the lower bound of $\mathcal{P}^{i^{\prime}}$ is no larger than the optimal objective value of problem \eqref{ori:BnB}.
However, it is challenging to identify which subregion the optimal solution lies in.
Fortunately, the minimum lower bound among $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ will not be larger than the lower bound of $\mathcal{P}^{i^{\prime}}$.
As a result, the minimum lower bound among $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ can serve as a lower bound of problem \eqref{ori:BnB}.
}
\end{remark}
On the other hand, the objective value of problem \eqref{subproblem_i} at any point located in feasible region $\mathcal{S}^i$ can serve as its upper bound.
We scale the optimal solution of problem \eqref{subproblem:convex}, denoted as $(\bm{x}^i_*, \bm{m}^i_*)$, to generate a point that belongs to $\mathcal{S}^i$ as follows
\begin{equation}
\label{upper_bound}
\begin{aligned}
\tilde{\bm{x}}^i_{*} &= \frac{\bm{x}^i_*}{\min\{|(x^i_*)_1|,|(x^i_*)_2|,\ldots,|(x^i_*)_K|,1\}} \in \mathcal{S}^i, \\
\tilde{\bm{m}}^i_{*} &= \frac{\bm{m}^i_*}{\min\{|(x^i_*)_1|,|(x^i_*)_2|,\ldots,|(x^i_*)_K|,1\}},
\end{aligned}
\end{equation}
where $(x^i_*)_k,~\forall k$ denotes the $k$-th element of $\bm x^i_*$.
As a result, $\|\tilde{\bm{m}}^i_*\|^2$ can be treated as an upper bound of problem \eqref{subproblem_i}.
The upper bound of problem \eqref{ori:BnB}, denoted as $U^t$, can be updated by the minimum upper bound among the current problem set $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$.
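The projection step \eqref{upper_bound} is then a one-line rescaling; a sketch (ours), with a guard for the degenerate case $\min_k |(x^i_*)_k| \approx 0$ that can occur on very wide wedges:
\begin{verbatim}
import numpy as np

def project_and_bound(m_rel, x_rel):
    # Rescale a relaxation solution so that |x_k| >= 1 for all k;
    # scaling by a positive real preserves m^H h_k = x_k.
    s = min(np.min(np.abs(x_rel)), 1.0)
    if s < 1e-12:
        return np.inf, None, None    # no finite upper bound here
    m_f, x_f = m_rel / s, x_rel / s
    return np.linalg.norm(m_f) ** 2, m_f, x_f
\end{verbatim}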
\begin{remark}
\emph{All the points in $\{\tilde{\bm{x}}^i_*\}_{i\in \mathcal{I}_t}$ belong to the feasible region of problem \eqref{ori:BnB}.
As a result, all of the corresponding objective values can serve as an upper bound of problem \eqref{ori:BnB}.
To construct a tighter upper bound for problem \eqref{ori:BnB}, we take the minimum upper bound of $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ to be the upper bound of problem \eqref{ori:BnB}.
}
\end{remark}
\subsection{Branching Strategy}
By partitioning the feasible regions of the current subproblems, we obtain more subproblems with smaller feasible regions.
The corresponding relaxations become tighter as the partition proceeds, and the gap between the upper and lower bounds diminishes.
On the other hand, $\min\{|(x^i_*)_1|,|(x^i_*)_2|,\ldots,|(x^i_*)_K|\}$ increases as the relaxations become tighter.
As a result,
according to \eqref{upper_bound},
the upper bound of problem \eqref{ori:BnB} will decrease as the partition continues.
Specifically, in the $t$-th BnB iteration, we shall select a problem with the minimum lower bound in the problem set $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ and perform subdivision on its feasible region.
Without loss of generality, we denote the problem as $\mathcal{P}^{i_t}$, and the solution of the corresponding convex relaxation problem as $\bm{x}^{i_t}_{*}$.
For ease of exposition of the partition rule, we rewrite $\mathcal{S}^{i_t}$ as a Cartesian product of $K$ independent parts, i.e., $\mathcal{S}^{i_t} = \mathcal{X}^{i_t}_1\times \mathcal{X}^{i_t}_2\times \cdots \times \mathcal{X}^{i_t}_K$.
Subsequently, we partition current region $ \mathcal{S}^{i_t}$ into two subregions, i.e.,
$\mathcal{S}^{i_t}_l = \mathcal{X}^{i_t}_1\times \mathcal{X}^{i_t}_2\times \cdots \times \left(\mathcal{X}^{i_t}_{k^{t}}\right)^l \times \cdots \times \mathcal{X}^{i_t}_K$
and
$\mathcal{S}^{i_t}_r = \mathcal{X}^{i_t}_1\times \mathcal{X}^{i_t}_2\times \cdots \times \left(\mathcal{X}^{i_t}_{k^{t}}\right)^r \times \cdots \times \mathcal{X}^{i_t}_K$,
where $k^{t} = \argmin_{k}\{|(x^{i_t}_{*})_k|\}$.
The only difference between $\mathcal{S}^{i_t}_{l}$ and $\mathcal{S}^{i_t}_{r}$ is the $k^{t}$-th part, where the original region is divided into two equal parts, i.e., $\left(\mathcal{X}^{i_t}_{k^{t}}\right)^l$ and $\left(\mathcal{X}^{i_t}_{k^{t}}\right)^r$.
For instance, if $\mathcal{X}^{i_t}_{k^{t}} =
\left\{x_{k^{t}} \Big |
\begin{aligned}
|x_{k^{t}}| \geq 1,
\arg \left(x_{k^{t}}\right) \in \left[ l^{i_t}_{k^{t}},u^{i_t}_{k^{t}}\right)
\end{aligned}
\right\}$, then
\begin{subequations} \label{partition}
\begin{align}
\left(\mathcal{X}^{i_t}_{k^{t}}\right)^l &=
\left\{x_{k^{t}} \Big |
\begin{aligned}
|x_{k^{t}}| \geq 1,
\arg \left(x_{k^{t}}\right) \in \left[ l^{i_t}_{k^{t}}, \frac{l^{i_t}_{k^{t}}+u^{i_t}_{k^{t}}}{2} \right)
\end{aligned}
\right\}, \\
\left(\mathcal{X}^{i_t}_{k^{t}}\right)^r &=
\left\{x_{k^{t}} \Big |
\begin{aligned}
|x_{k^{t}}| \geq 1,
\arg \left(x_{k^{t}}\right) \in \left[\frac{l^{i_t}_{k^{t}}+u^{i_t}_{k^{t}}}{2}, u^{i_t}_{k^{t}} \right)
\end{aligned}
\right\}.
\end{align}
\end{subequations}
As a result, problem $\mathcal{P}^{i_t}$ is branched into the following two subproblems
\begin{equation}
\begin{aligned}
\mathcal{P}^{i_t}_l: ~ \underset{\mathbf{m}, \bm{x}}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm x \in \mathcal{S}^{i_t}_{l}.
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\mathcal{P}^{i_t}_r: ~ \underset{\mathbf{m}, \bm{x}}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm x \in \mathcal{S}^{i_t}_{r}.
\end{aligned}
\end{equation}
The lower and upper bounds of problems $\mathcal{P}^{i_t}_l$ and $\mathcal{P}^{i_t}_r$ can be obtained according to the rules discussed in the last subsection.
Finally, we add the two problems into the problem set $\{\mathcal{P}^i\}_{i\in\mathcal{I}_{t+1}}$ and remove $\mathcal{P}^{i_t}$ from it, where $\mathcal{I}_{t+1}$ is the updated index set of subregions at the $(t+1)$-th iteration.
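A sketch of this branching rule (ours; it represents a subregion by its per-coordinate argument endpoints) reads:
\begin{verbatim}
import numpy as np

def branch(l, u, x_rel):
    # Split along the coordinate whose relaxed |x_k| is smallest,
    # halving that coordinate's argument interval.
    k = int(np.argmin(np.abs(x_rel)))
    mid = (l[k] + u[k]) / 2
    l_left, u_left = l.copy(), u.copy()
    l_right, u_right = l.copy(), u.copy()
    u_left[k] = mid              # left child:  arg in [l_k, mid)
    l_right[k] = mid             # right child: arg in [mid, u_k)
    return (l_left, u_left), (l_right, u_right)
\end{verbatim}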
\subsection{Complexity}
With the aforementioned rules for constructing bounds and the branching strategy, the BnB algorithm is guaranteed to converge to an $\epsilon$-optimal solution within at most $\left(2 \pi/ \arccos \left(\frac{1}{\sqrt{1+\epsilon}}\right)\right)^{K}+1$ iterations \cite{ChengLu}.
Besides, in each iteration, the computation of the lower bound dominates the complexity of the proposed algorithm, which involves solving a convex QCQP problem, i.e., \eqref{subproblem:convex}.
According to \cite{Nesterov_interior}, the optimal solution for problem \eqref{subproblem:convex} can be obtained by using the standard interior-point method with complexity $\mathcal{O}(N^3K^{3.5})$.
As a result, the computation time complexity of the proposed BnB algorithm is
$\mathcal{O}(TN^3K^{3.5})$, where $T = \left(2 \pi/ \arccos \left(\frac{1}{\sqrt{1+\epsilon}}\right)\right)^{K}+1$.
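For instance, with $\epsilon = 10^{-2}$ one has $\arccos\big(1/\sqrt{1.01}\big) \approx 0.0997$, so the iteration bound evaluates to roughly $63^{K}+1$: exponential in the number of devices $K$, yet independent of the number of antennas $N$. This is a worst-case bound; as the simulations below indicate, the algorithm typically terminates after far fewer iterations.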
\begin{algorithm}[t]
\caption{BnB Algorithm for Solving Problem \eqref{ori:BnB}}
\begin{algorithmic}[1]
\STATE Initialize $\mathcal{S}^0 = \prod_{k=1}^{K}\left\{x_k\in\mathbb{C} \,\big|\, |x_k|\geq 1,\ \arg(x_k)\in[0,2\pi)\right\}$.
Randomly generate $\bm{m}$.
Set $\bm{m}_{*} = \frac{\bm{m}}{\operatorname{max}_k\{|\bm m^{\sf H}\bm{h}_{k}|\}}$, $\bar{\bm{m}}_{*} = \frac{\bm{m}}{\operatorname{min}_k\{|\bm m^{\sf H}\bm{h}_{k}|\}}$.
Set $(x^0_*)_k = \bm{m}_{*}^{\sf H}\bm{h}_{k},~\forall k$.
Set the lower bound $L_0$ and upper bound $U_0$ to $\|\bm{m}_{*}\|^2$ and $\|\bar{\bm{m}}_{*}\|^2$, respectively.
Initialize the problem set with problem \eqref{ori:BnB} endowed with $\{L_0,U_0, \bm x^0_*,\mathcal{S}^0\}$.
Set the convergence tolerance $\epsilon$ and the iteration index $t=0$.
\REPEAT
\STATE Select problem $\mathcal{P}^{i_t}$ with the smallest lower bound among current problem set $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$;
\STATE Partition the feasible set of the selected problem into two subregions, $\mathcal{S}^{i_t}_\mathrm{l}$ and $\mathcal{S}^{i_t}_\mathrm{r}$, according to \eqref{partition};
\STATE Compute the lower bound and upper bound for $\mathcal{P}^{i_t}_{l}$ and record the solutions;
\STATE Compute the lower bound and upper bound for $\mathcal{P}^{i_t}_{r}$ and record the solutions;
\STATE Add problems $\mathcal{P}^{i_t}_{l}$ and $\mathcal{P}^{i_t}_{r}$ to problem set $\{\mathcal{P}^i\}_{i\in\mathcal{I}_{t+1}}$;
\STATE $t\leftarrow t+1$;
\STATE Update upper bound $U^t$ and lower bound $L^t$ for problem \eqref{ori:BnB} as the smallest upper bound and lower bound among $\{\mathcal{P}^i\}_{i\in\mathcal{I}_{t}}$, respectively;
\UNTIL $\frac{U^t-L^t}{L^t}\le\epsilon$
\end{algorithmic}
\end{algorithm}
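Putting the pieces together, a minimal driver loop could read as follows (ours; it assumes the solve_relaxation, project_and_bound, and branch sketches given earlier, and omits refinements such as tolerance-aware pruning):
\begin{verbatim}
import numpy as np

def bnb(H, eps=1e-5, max_iter=1000):
    # Maintain (lower bound, relaxed x, region) records; always
    # branch the record with the smallest lower bound, as above.
    K = H.shape[1]
    l0, u0 = np.zeros(K), np.full(K, 2 * np.pi)
    lb, m_rel, x_rel = solve_relaxation(H, l0, u0)
    ub, m_best, _ = project_and_bound(m_rel, x_rel)
    active = [(lb, x_rel, l0, u0)]
    for _ in range(max_iter):
        active.sort(key=lambda rec: rec[0])
        lb, x_rel, l, u = active.pop(0)
        if ub - lb <= eps * max(lb, 1e-12):
            break
        for (ln, un) in branch(l, u, x_rel):
            lbn, m_n, x_n = solve_relaxation(H, ln, un)
            ubn, m_f, _ = project_and_bound(m_n, x_n)
            if ubn < ub:
                ub, m_best = ubn, m_f
            if lbn <= ub:            # prune dominated regions
                active.append((lbn, x_n, ln, un))
    return m_best, ub
\end{verbatim}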
\section{Simulation Results}
In this section, we present the simulation results of the proposed algorithm for AirComp in IoT networks.
We consider a three-dimensional setting, where the AP is located at $(0,0,20)$ meters, while the IoT devices are uniformly located within a circular region centered at $(120, 20, 0)$ meters with radius $20$ meters.
The antennas at the AP are arranged as a uniform linear array.
In the simulations, we consider both large-scale fading and small-scale fading for the channel.
The distance-dependent large-scale fading is modeled as $T_0(d/d_0)^{-\alpha}$,
where $T_0$ is the path loss at the reference distance $d_0 = 1$ meter, $d$ denotes the distance between transmitter and receiver, and $\alpha$ is the path loss exponent.
Besides, we model the small-scale fading as Rician fading with Rician factor $\beta$.
All results in the simulations are obtained by averaging over $500$ channel realizations.
Unless specified otherwise, we set $\alpha = 3$, $T_0 = -30 $ dB, $\beta = 3$, $P = 30$ dBm, $\sigma^2 = -100$ dBm, and $\epsilon = 10^{-5}$.
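For reproducibility, the channel just described can be generated along the following lines (a sketch; the half-wavelength ULA steering vector for the LoS component and the device-sampling details are our assumptions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def rician_channel(N, dev, ap, T0_dB=-30.0, alpha=3.0, beta=3.0):
    d = np.linalg.norm(dev - ap)
    gain = 10 ** (T0_dB / 10) * d ** (-alpha)   # large-scale fading
    theta = np.arctan2(dev[1] - ap[1], dev[0] - ap[0])
    los = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))
    nlos = (rng.standard_normal(N)
            + 1j * rng.standard_normal(N)) / np.sqrt(2)
    h = (np.sqrt(beta / (1 + beta)) * los
         + np.sqrt(1 / (1 + beta)) * nlos)
    return np.sqrt(gain) * h

ap = np.array([0.0, 0.0, 20.0])
r, phi = 20 * np.sqrt(rng.uniform()), 2 * np.pi * rng.uniform()
dev = np.array([120 + r * np.cos(phi), 20 + r * np.sin(phi), 0.0])
h_k = rician_channel(8, dev, ap)
\end{verbatim}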
\subsection{Convergence Performance}
\begin{figure*}[h]
\centering
\subfigure[MSE versus the number of iterations when $K=8$ and $N=4$.]{
\label{convergence}
\includegraphics[width=0.66\columnwidth]{converge.eps}}
\subfigure[MSE versus the number of antennas at AP when $K=10$.]{
\label{antenna}
\includegraphics[width=0.66\columnwidth]{AP_fixed.eps}}
\subfigure[MSE versus the number of IoT devices when $N=10$.]{
\label{devices}
\includegraphics[width=0.66\columnwidth]{user_fixed.eps}}
\caption{Performance of the proposed BnB algorithm for AirComp in IoT networks.}
\end{figure*}
We present the convergence performance of the proposed BnB algorithm in Fig. \ref{convergence}.
It can be observed that the upper bound decreases and the lower bound increases as the iteration proceeds.
In addition, the gap between the upper bound and the lower bound diminishes as the number of iterations increases.
In particular, the algorithm terminates within 250 iterations, at which point the gap between the upper and lower bounds falls below the predefined convergence tolerance.
\subsection{Performance Evaluation of the Existing Algorithms}
In this subsection, we compare the proposed BnB algorithm with SDR \cite{Tom_luo_sdr} and SDR-based SCA \cite{chen2018uniform} algorithms.
Fig. \ref{antenna} shows the impact of the number of antennas at the AP on the MSE when the number of IoT devices $K = 10$.
As can be observed, the MSE of AirComp monotonically decreases as the number of antennas increases.
This is because deploying a larger antenna array leads to a greater diversity gain.
Besides, it is clear that the proposed BnB algorithm achieves the best performance in minimizing the MSE.
This is because the proposed BnB algorithm is a global optimization algorithm that can approach the optimal solution within any desired error tolerance.
The performance gap between our proposed BnB algorithm and the SDR method is considerably large, indicating that SDR alone performs poorly on this beamforming problem.
Comparing the SDR-based SCA algorithm with the proposed algorithm, one can conclude that the former obtains high-quality solutions in terms of the MSE.
The MSE versus the number of the IoT devices is plotted in Fig. \ref{devices}, where the number of antennas at the AP is set to be $10$.
It is obvious that the quality of the solutions of the SDR and SDR-based SCA algorithms degrades as the number of IoT devices increases.
This is because the performance of the SDR-based SCA algorithm heavily depends on the quality of the solution returned by the SDR algorithm, which usually does not work well as the number of IoT devices increases.
\section{Conclusions}
In this paper, we investigated the joint design of the transmit scalars, the denoising factor, and the receive beamforming vector for AirComp in IoT networks.
We derived the closed-form expressions for the transmit scalars and the denoising factor, resulting in a non-convex QCQP problem with respect to the receive beamforming vector at the AP.
We then proposed a globally optimal BnB algorithm to optimize the receive beamforming vector.
The MSE achieved by the proposed BnB algorithm in the simulations revealed the substantial benefit of optimizing the receive beamformer.
Our proposed algorithm can be adopted as a benchmark to evaluate the performance of the existing sub-optimal algorithms, e.g., SDR and SDR-based SCA.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
With the rapid advancement of smart city, internet of vehicles, and edge artificial intelligence,
it is expected that Internet of Things (IoT) will support ubiquitous connectivity for billions of devices that generate massive amount of real-world data \cite{zhu2020overtheair}.
Wireless data aggregation among the distributed IoT devices is an important but challenging task \cite{blind,wang_iot,wang2020federated}.
Due to the scarcity of spectrum resources and the ultra-low latency requirement, the conventional transmit-then-compute scheme cannot support fast wireless data aggregation in dense IoT networks.
Fortunately, over-the-air-computation (AirComp) has the potential to achieve fast wireless data aggregation by enabling the paradigm of ``compute when communicate".
In particular, by exploiting the superposition property of multiple access channels (MACs), wireless data aggregation can be achieved in one transmission interval by allowing all IoT devices to transmit concurrently over the same radio channel \cite{yangkai}.
AirComp was firstly investigated in the seminal work \cite{info}, where the authors showed that the superposition property of MACs can be exploited to compute the nomographic functions from an information theoretical perspective.
With the great potential for wireless data aggregation, AirComp has recently attracted considerable interests \cite{optimal_liu, optimal_huang, chen2018uniform, multi_function, Muti_modal}.
In particular, considering simple single-input single-output (SISO) wireless networks with energy-constrained IoT devices, the authors in \cite{optimal_liu, optimal_huang} studied the optimal transmit power control strategies for AirComp.
As an extension, the authors in \cite{chen2018uniform} investigated AirComp in multiple-input single-output (MISO) wireless networks, where a semi-definite relaxation (SDR) based successive convex approximation (SCA) algorithm was proposed to design the receive beamforming vector at the access point (AP).
The authors in \cite{multi_function} and \cite{Muti_modal} integrated multiple-input multiple-output (MIMO) with AirComp, and studied the transceiver design for multi-function computation and multi-modal sensing, respectively.
The approximated receive beamformer was designed by utilizing the Grassman manifold theory in \cite{Muti_modal}.
However, the existing studies based on SDR and SDR-based SCA can only obtain sub-optimal solutions.
The optimal receive beamforming design for AirComp in MISO systems is still not available in the literatures.
In this paper, we consider wireless data aggregation via AirComp in IoT networks with a multi-antenna AP.
Our goal is to minimize the computation distortion at the AP by jointly optimizing the transmit scalars at the transmitter and denoising factor and receive beamforming vector at the AP.
With the transmit scalars and the denoising factor derived in closed-form, the distortion minimization problem turns to a non-convex quadratically constrained quadratic programming (QCQP) problem with respect to the receive beamforming vector at the AP.
We propose a globally optimal branch and bound (BnB) algorithm to design the receive beamforming vector, thereby further reducing the distortion of AirComp when compared to the baseline algorithms, as verified via extensive simulations.
Moreover, the proposed algorithm can be treated as a benchmark to evaluate the quality of the solutions returned by the existing algorithms, e.g., SDR and SDR-based SCA.
\emph{Notations}:
We use boldface upper-case, boldface lower-case, and lower-case letters to denote matrices, vectors, and scalars, respectively.
We denote the imaginary unit of a complex number as $\mathbf{j}$.
$(\bm \cdot)^{\sf{H}}$ stands for conjugate transpose of a matrix or a vector.
$\|\cdot\|$ denotes the $l_2$ norm operator.
$\operatorname{Re}\{\bm\cdot\}$, $\operatorname{Im}\{\bm \cdot\}$, $|\cdot|$, and $\operatorname{arg}(\cdot)$ represent the real part, imaginary part, absolute value, and argument of a scalar, respectively.
$\mathbb{E}\left[\bm \cdot\right]$ denotes the expectation of a random variable.
\section{System Model and Problem Formulation}
\subsection{System Model}
We consider fast wireless data aggregation via AirComp in an IoT system consisting of $K$ single-antenna IoT devices and one AP with $N$ antennas. We denote $\mathcal{K}=\{1,2,\ldots, K\}$ as the index set of IoT devices.
The AP aims to recover the arithmetic mean of the sensory data from all IoT devices.
We denote $s_k = \varphi_k(z_k)$ as the transmit signal of device $k$, where $\varphi_k(\cdot)$ is the specific pre-processing function and
$z_k\in\mathbb{C}$ is the representative information-bearing data at device $k$.
Without loss of generality,
we assume that $\{s_k\}_{k=0}^{K}$ are independent and have zero mean and unit power, i.e., $\mathbb{E}[s_{k}s_{k}^{\sf H}] = 1$, and $\mathbb{E}[s_{k}s_{j}^{\sf H}] = 0, \forall k \neq j$ \cite{optimal_huang}.
To recover the arithmetic mean of the sensory data from all IoT devices, i.e., $\frac{1}{K}\sum_{k\in\mathcal{K}}z_k$, it is sufficient for the AP to estimate the following target function
\begin{align}
g=\sum_{k\in\mathcal{K}}s_k.
\end{align}
By calibrating the transmission timing of each IoT device, we assume that the signals transmitted by all IoT devices are synchronized when receiving at the AP.
The signal received at the AP can be expressed as
\begin{align}
\bm{y}=\sum_{k\in\mathcal{K}}\bm{h}_{k}{w}_ks_k+\bm{n},
\end{align}
where $w_k\in\mathbb{C}$ denotes the transmit scalar of device $k$, $ \bm h_{k}\in\mathbb{C}^{N\times1}$ is the channel coefficient vector of the link from device $k$ to the AP,
and $ \bm n\sim\mathcal{CN}(0, \sigma^2\bm I_N) $ is the additive white Gaussian noise (AWGN) with zero mean and variance $\sigma^2$.
In practice, the maximum transmit power is limited, i.e., $|w_k|^2 \leq P, \forall k$.
After applying the receive combining, the estimated function at the AP is given by
\begin{align}
\hat{g}&={1\over{\sqrt \eta}}{\bm{m}^{\sf{H}}\bm{y}} =\!{1\over{\sqrt \eta}}{\bm{m}}^{\sf{H}}\sum_{k\in\mathcal{K}}\bm{h}_{k} {w}_ks_k +{1\over{\sqrt \eta}}\bm{m}^{\sf{H}}\bm{n},
\end{align}
where $\bm{m}\in\mathbb{C}^N$ and $\eta$ denote the receive beamforming vector and the denoising factor at the AP, respectively.
\subsection{Problem Formulation}
To evaluate the performance of AirComp,
we adopt mean-squared-error (MSE) to quantify the distortion of $\hat{g}$ with respect to $g$, given by
\begin{align}
{\sf{MSE}}(\hat{g}, g)=\mathbb{E}\left(|\hat{g}-g|^2\right) = \!\! \sum_{k\in\mathcal{K}}\left| \frac{{{\bm{m}}^{\sf{H}}\bm{h}_{k}{w}_k}}{\sqrt{\eta}} -1\right|^2 \!+\! \frac{\sigma^2\|\bm{m}\|^2}{\eta}\nonumber .
\end{align}
When the receive beamforming vector $\bm{m}$ is given, the optimal transmit scalars that minimize the MSE can be expressed as \cite{yangkai,chen2018uniform}
\begin{align}\label{a}
w_k^{\star}=\sqrt{\eta}{{(\bm{m}^{\sf{H}}\bm{h}_{k})^{\sf{H}}}\over{\|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2}},\forall k.
\end{align}
Due to the transmit power constraint, $\eta$ can be expressed as \begin{align}\label{b}
\eta=P\min_{k\in\mathcal{K}} \|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2.
\end{align}
With \eqref{a} and \eqref{b}, the MSE can be further rewritten as
\begin{align}
{\sf{MSE}}={{\|\bm{m}\|^2\sigma^2}\over{\eta}}
={{\|\bm{m}\|^2\sigma^2}\over{P\min_{k\in\mathcal{K}} \|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2}}.\nonumber
\end{align}
We thus propose to optimize the receive beamforming vector $ \bm m $ to minimize the MSE as follows:
\begin{equation}\label{eq:ori}
\begin{split}
\underset{\bm m}{\min} \left({{\|\bm{m}\|^2\sigma^2}\over{P\min_{k\in\mathcal{K}} \|\bm{m}^{\sf{H}}\bm{h}_{k}\|^2}}\right).
\end{split}
\end{equation}
According to \cite{chen2018uniform}, problem \eqref{eq:ori} can be further equivalently transformed to the following problem
\begin{equation}\label{new}
\begin{split}
\underset{\bm m}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad\|\bm m^{\sf H}\bm{h}_{k}\|^2 \geq 1, ~ \forall k.
\end{split}
\end{equation}
To this end, we formulate the MSE minimization problem as a non-convex QCQP problem.
The authors in \cite{chen2018uniform,jiang2019} solved the non-convex QCQP problem by proposing the SDR and SDR-based SCA algorithms, which, however, are sub-optimal.
The quality of the solutions obtained by the aforementioned sub-optimal algorithms is still unknown due to the lack of the optimal algorithm.
In the next section, we shall propose a globally optimal algorithm for the optimization of receive beamforming vector $\bm m$ to fully exploit the potential of multiple antennas and to evaluate the performance of the existing sub-optimal algorithms.
\section{Proposed Global Optimal BnB Algorithm}
The BnB algorithm
is capable of approaching an optimal solution within any desired error bound for some non-convex problems \cite{ChengLu}.
The main idea of the BnB algorithm is to first construct the lower bound and upper bound for the non-convex problem, and then lift the lower bound and reduce the upper bound iteratively
through judiciously designing a branching strategy.
Specifically, the lower bound can be obtained by solving a corresponding relaxation problem.
Subsequently, we project the solution of the aforementioned relaxation problem to the original feasible region to form an upper bound.
\subsection{Lower Bound and Upper Bound}
To facilitate the BnB algorithm design, we first introduce an auxiliary variable $\bm{x} = [x_1,x_2,\ldots,x_K]^{\sf T} \in \mathbb{C}^{K}$, and then rewrite problem \eqref{new} as
\begin{equation}\label{ori:BnB}
\begin{split}
\underset{\bm m, \bm x}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad |x_k|\geq1, ~ \forall k,
\end{split}
\end{equation}
where constraint $|x_k|\geq1$ means that the feasible region of $x_k$ is the outer region of the unit circle in a complex plane.
We denote set $\mathcal{X} = \{\bm x \big | |x_k|\geq1, ~ \forall k\}$,
which can be treated as the Cartesian product of $K$ sets, i.e., $\mathcal{X} = \prod_{k=1}^{K} \mathcal{X}_k$ where $\mathcal{X}_k = \left \{x_k\in \mathbb{C} \big | |x_k| \geq 1\right\},~ \forall k$.
For non-convex set $\mathcal{X}_k$, the corresponding convex hull is the whole complex plane.
However, such a relaxation is too loose to generate an effective lower bound.
To this end, we partition $\mathcal{X}_k$ into several subregions, leading to a tighter relaxation.
Specifically, for the $n$-th non-convex subregion
$\mathcal{X}_k^n = \left\{x_k\in \mathbb{C} \Big | |x_k| \geq 1, ~ \arg (x_k) \in \left[l_k^n, u_k^n\right) \right\}$ with the argument interval being not greater than $\pi$, i.e., $u_k^n-l_k^n\leq \pi$, the corresponding convex hull can be represented as
\begin{equation}
\label{convex_hull}
\begin{aligned}
&~~~\operatorname{Conv}\{\mathcal{X}_k^n\} \\
&= \left\{x_k\in \mathbb{C} \Big |
\begin{split}
\operatorname{Re}\left\{\bar{x}_k \cdot \frac{e^{\mathbf{j} u_k^n}+e^{\mathbf{j} l_k^n}}{2}\right\} & \geq \cos \left(\frac{u_k^n-l_k^n}{2}\right), \\
\arg \left(x_k\right) & \in \left[l_k^n, u_k^n\right)
\end{split}
\right\},
\end{aligned}
\end{equation}
where $\bar{x}_k$ denotes the conjugate of $x_k$.
For example, as shown in Fig. \ref{demo}, the convex hull is enclosed by the line BC between points $e^{\mathbf{j}l_k^n}$ and $e^{\mathbf{j}u_k^n}$ at the unit circle
\[
\left\{x_k\in \mathbb{C} \Big |
\begin{aligned}
\operatorname{Re}\left\{\bar{x}_k \cdot \frac{e^{\mathbf{j} u_k^n}+e^{\mathbf{j} l_k^n}}{2}\right\} = \cos \left(\frac{u_k^n-l_k^n}{2}\right)
\end{aligned}
\right\},
\]
ray AB $\{x_k \in \mathbb{C} \mid \text{arg}(x_k) = l_k^n\}$, and ray AC $\{x_k \in \mathbb{C} \mid \text{arg}(x_k) = u_k^n\}$.
\begin{remark}
\emph{The minimum modulus among the convex hull of $\mathcal{X}_k^n$, i.e., $\min_{x_k \in \operatorname{Conv}\{\mathcal{X}_k^n\}} |x_k|$, is $\cos \left(\frac{u_k^n-l_k^n}{2}\right)$ that corresponds to the middle point of line segment BC is also the furthest point to set $\mathcal{X}_k^n$.
The convex relaxation $\operatorname{Conv}\{\mathcal{X}_k^n\}$ approaches to $\mathcal{X}_k^n$ when $u_k^n-l_k^n$ approaches to zero.
}
\end{remark}
In the $t$-th iteration of the BnB algorithm, the original feasible region $\mathcal{X}$ is divided into several subregions $\{\mathcal{S}^i\}_{i\in \mathcal{I}_t}$, where $\mathcal{I}_t$ denotes the index set of subregions at the $t$-th iteration.
Specifically, we rewrite $\mathcal{S}^{i}$ as the Cartesian product of $K$ independent sets, i.e., $\mathcal{S}^{i} = \mathcal{X}^i_1\times \mathcal{X}^i_2\times \cdots \times \mathcal{X}^i_K$,
where
\[
\mathcal{X}^i_k = \left\{ x_k \Big |
\begin{aligned}
|x_k| = 1,
\arg \left(x_k\right) \in \left[ l^i_k, u^i_k \right)
\end{aligned}
\right\}, \forall k.
\]
Besides, we have $\cup_{i\in \mathcal{I}_t} \mathcal{S}^i = \mathcal{X}$ and $\mathcal{S}^i \cap\mathcal{S}^{i^{\prime}} = \emptyset, i \neq i', \forall i, i^{\prime} \in \mathcal{I}_t$.
As a result, problem \eqref{ori:BnB} can be separated into a series of subproblems $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ defined on subregions $\{\mathcal{S}^i\}_{i\in \mathcal{I}_t}$ as follows,
\begin{equation}\label{subproblem_i}
\mathcal{P}^i : \quad \begin{aligned}
\underset{\bm m, \bm x}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm{x} \in \mathcal{S}^i.
\end{aligned}
\end{equation}
To obtain a lower bound for problem \eqref{subproblem_i},
we resort to solve its convex relaxation problem as follows,
\begin{equation}\label{subproblem:convex}
\begin{split}
\underset{\bm m, \bm x}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm{x} \in \hat{\mathcal{S}}_i,
\end{split}
\end{equation}
where $\hat{\mathcal{S}}_i$ denotes the convex hull of $\mathcal{S}^i$.
Specifically,
$\hat{\mathcal{S}}_{i} = \hat{\mathcal{X}}^i_1\times \hat{\mathcal{X}}^i_2\times \cdots \times \hat{\mathcal{X}}^i_K$, where $\hat{\mathcal{X}}^i_k$ is convex hull of $\mathcal{X}^i_k,~\forall k$.
It is worth noting that the convex hull of $\mathcal{X}^i_k$ can be obtained by using \eqref{convex_hull} if its argument interval is less than or equal to $\pi$, i.e., $u^i_k-l^i_k \leq \pi$.
The optimal objective value of convex problem \eqref{subproblem:convex} serves as a lower bound for problem \eqref{subproblem_i} since $\mathcal{S}^i \subseteq \hat{\mathcal{S}}_i$.
We then take the minimum lower bound among $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ as the current lower bound of problem \eqref{ori:BnB}, denoted as $L^t$.
\begin{figure}[t]
\centering
\includegraphics[width=2.5in]{demo2new.pdf}
\caption{Illustration of convex relaxation of the outer regions of arcs for three different argument intervals, i.e., $\pi/2$, $\pi/4$, and $\pi/8$.}
\label{demo}
\vspace{-5mm}
\end{figure}
\begin{remark}
\emph{
The optimal solution of problem \eqref{ori:BnB} lies in one of $\{\mathcal{S}^i\}_{i\in\mathcal{I}_t}$ since $\cup_{i\in \mathcal{I}_t} \mathcal{S}^i = \mathcal{X}$.
We denote the index of the subregion that incorporates the optimal solution as $i^{\prime}$.
The optimal objective value of $\mathcal{P}_{i^{\prime}}$ is identical to the optimal objective value of problem \eqref{ori:BnB}.
Therefore, the lower bound of $\mathcal{P}_{i^{\prime}}$ is less than the optimal objective value of problem \eqref{ori:BnB}.
However, it is challenging to identify which subregion the optimal solution lies in.
Fortunately, the minimum lower bound among $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ will not be larger than the lower bound of $\mathcal{P}_{i^{\prime}}$.
As a result, the minimum lower bound among $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ can serve as a lower bound of problem \eqref{ori:BnB}.
}
\end{remark}
On the other hand, the objective value of problem \eqref{subproblem_i} at any point located in feasible region $\mathcal{S}^i$ can serve as its upper bound.
We scale the optimal solutions of problem \eqref{subproblem:convex}, denoted as $\bm{x}^i_*$ and $\bm{m}^i_*$ to generate a point that belongs to $\mathcal{S}^i$ as follows
\begin{equation}
\label{upper_bound}
\begin{aligned}
\tilde{\bm{x}}^i_{*} &= \frac{\bm{x}^i_*}{\min\{|(x^i_*)_1|,|(x^i_*)_2|,\ldots,|(x^i_*)_K|,1\}} \in \mathcal{S}^i, \\
\tilde{\bm{m}}^i_{*} &= \frac{\bm{m}^i_*}{\min\{|(x^i_*)_1|,|(x^i_*)_2|,\ldots,|(x^i_*)_K|,1\}},
\end{aligned}
\end{equation}
where $(x^i_*)_k,~\forall k$ denotes the $k$-th element of $\bm x^i_*$.
As a result, $\|\tilde{\bm{m}}^i_*\|^2$ can be treated as an upper bound of problem \eqref{subproblem_i}.
The upper bound of problem \eqref{ori:BnB}, denoted as $U^t$, can be updated by the minimum upper bound among the current problem set $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$.
\begin{remark}
\emph{All the points in $\{\tilde{\bm{x}}^i_*\}_{i\in \mathcal{I}_t}$ belong to the feasible region of problem \eqref{ori:BnB}.
As a result, all of the corresponding objective values can serve as an upper bound of problem \eqref{ori:BnB}.
To construct a tighter upper bound for problem \eqref{ori:BnB}, we take the minimum upper bound of $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ to be the upper bound of problem \eqref{ori:BnB}.
}
\end{remark}
\subsection{Branching Strategy}
By performing partition on the feasible regions of current subproblems, we can get more subproblems with smaller feasible regions.
The corresponding relaxation become tighter as the partition continues, and the gap between the upper bound and lower bound diminishes.
On the other hand, $\min\{|(x^i_*)_1|,|(x^i_*)_2|,\ldots,|(x^i_*)_K|\}$ will increase as the relaxations become tighter.
As a result,
according to \eqref{upper_bound},
the upper bound of problem \eqref{ori:BnB} will decrease as the partition continues.
Specifically, in the $t$-th BnB iteration, we shall select a problem with the minimum lower bound in the problem set $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$ and perform subdivision on its feasible region.
Without loss of generality, we denote the problem as $\mathcal{P}^{i_t}$, and the solution of the corresponding convex relaxation problem as $\bm{x}_{i_t}^{*}$.
For convenience of elaborating the partition rule, we rewrite $\mathcal{S}^{i_t}$ in the form of Cartesian product of $K$ independent parts, i.e., $\mathcal{S}^{i_t} = \mathcal{X}^{i_t}_1\times \mathcal{X}^{i_t}_2\times \cdots \times \mathcal{X}^{i_t}_K$.
Subsequently, we partition current region $ \mathcal{S}^{i_t}$ into two subregions, i.e.,
$\mathcal{S}^{i_t}_l = \mathcal{X}^{i_t}_1\times \mathcal{X}^{i_t}_2\times \cdots \times \left(\mathcal{X}^{i_t}_{k^{t}}\right)^l \times \cdots \times \mathcal{X}^{i_t}_K$
and
$\mathcal{S}^{i_t}_r = \mathcal{X}^{i_t}_1\times \mathcal{X}^{i_t}_2\times \cdots \times \left(\mathcal{X}^{i_t}_{k^{t}}\right)^r \times \cdots \times \mathcal{X}^{i_t}_K$,
where $k^{t} = \argmin_{i}\{|(x^{i_t}_{*})_i|\}$.
The only difference between $\mathcal{S}^{i_t}_{l}$ and $\mathcal{S}^{i_t}_{r}$ is the $k^{t}$-th part, where the original region is divided into two equal parts, i.e., $\left(\mathcal{X}^{i_t}_{k^{t}}\right)^l$ and $\left(\mathcal{X}^{i_t}_{k^{t}}\right)^r$.
For instance, if $\mathcal{X}^{i_t}_{k^{t}} =
\left\{x_{k^{t}} \Big |
\begin{aligned}
|x_{k^{t}}| = 1,
\arg \left(x_{k^{t}}\right) \in \left[ l^{i_t}_{k^{t}},u^{i_t}_{k^{t}}\right)
\end{aligned}
\right\}$, then
\begin{subequations} \label{partition}
\begin{align}
\left(\mathcal{X}^{i_t}_{k^{t}}\right)^l &=
\left\{x \Big |
\begin{aligned}
|x_{k^{t}}| = 1,
\arg \left(x\right) \in \left[ l^{i_t}_{k^{t}}, \frac{l^{i_t}_{k^{t}}+u^{i_t}_{k^{t}}}{2} \right)
\end{aligned}
\right\}, \\
\left(\mathcal{X}^{i_t}_{k^{t}}\right)^r &=
\left\{x_{k^{t}} \Big |
\begin{aligned}
|x_{k^{t}}| = 1,
\arg \left(x_{k^{t}}\right) \in \left[\frac{l^{i_t}_{k^{t}}+u^{i_t}_{k^{t}}}{2}, u^{i_t}_{k^{t}} \right)
\end{aligned}
\right\}.
\end{align}
\end{subequations}
As a result, problem $\mathcal{P}^{i_t}$ is branched into the following two subproblems
\begin{equation}
\begin{aligned}
\mathcal{P}^{i_t}_l: ~ \underset{\mathbf{m}, \bm{x}}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm x \in \mathcal{S}^{i_t}_{l}.
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\mathcal{P}^{i_t}_r: ~ \underset{\mathbf{m}, \bm{x}}{\min} &\quad \|\bm m\|^2\\
\text{s.t.} &\quad \bm m^{\sf H}\bm{h}_{k} = x_k, ~ \forall k, \\
&\quad \bm x \in \mathcal{S}^{i_t}_{r}.
\end{aligned}
\end{equation}
The lower bound and upper bound of problem $\mathcal{P}^{i_t}_l$ and $\mathcal{P}^{i_t}_r$ can be obtained according to the rules discussed in last subsection.
Finally, we add the two problems into the problem set $\{\mathcal{P}^i\}_{i\in\mathcal{I}_{t+1}}$ and remove $\mathcal{P}^{i_t}$ from it, where $\mathcal{I}_{t+1}$ is the updated index set of subregions at the $(t+1)$-th iteration.
\subsection{Complexity}
With the aforementioned rules for constructing bounds and the branching strategy, the BnB algorithm is guaranteed to converge to an $\epsilon$-optimal solution within at most $\left(2 \pi/ \arccos \left(\frac{1}{\sqrt{1+\epsilon}}\right)\right)^{K}+1$ iterations \cite{ChengLu}.
Besides, in each iteration, the computation of the lower bound dominates the complexity of the proposed algorithm, which involves solving a convex QCQP problem, i.e., \eqref{subproblem:convex}.
According to \cite{Nesterov_interior}, the optimal solution for problem \eqref{subproblem:convex} can be obtained by using the standard interior-point method with complexity $\mathcal{O}(N^3K^{3.5})$.
As a result, the computation time complexity of the proposed BnB algorithm is
$\mathcal{O}(TN^3K^{3.5})$, where $T = \left(2 \pi/ \arccos \left(\frac{1}{\sqrt{1+\epsilon}}\right)\right)^{K}+1$.
\begin{algorithm}[t]
\caption{BnB Algorithm for Solving Problem \eqref{ori:BnB}}
\begin{algorithmic}[1]
\STATE Initialize $\mathcal{S}^0 = \prod_{i=1}^{K}[0,2\pi)$.
Randomly generate $\bm{m}$.
Set $\bm{m}_{*} = \frac{\bm{m}}{\operatorname{max}_k\{|\bm m^{\sf H}\bm{h}_{k}|\}}$, $\bar{\bm{m}}_{*} = \frac{\bm{m}}{\operatorname{min}_k\{|\bm m^{\sf H}\bm{h}_{k}|\}}$.
Set $(x^0_*)_k = \bm{m}_{*}^{\sf H}\bm{h}_{k},~\forall k$.
Lower bound $L_0$ and upper bound $U_0$ are set to be $\|\bm{m}_{*}\|^2$ and $\|\bar{\bm{m}}_{*}\|^2$, respectively.
Use problem \eqref{ori:BnB} with $\{L_0,U_0, \bm x^0_*,\mathcal{S}^0\}$ to initialize the problem set.
Set the convergence tolerance $\epsilon$ and the iteration index $t=0$;
\REPEAT
\STATE Select problem $\mathcal{P}^{i_t}$ with the smallest lower bound among current problem set $\{\mathcal{P}^i\}_{i\in \mathcal{I}_t}$;
\STATE Partition the feasible set of the selected problem into two subregions, $\mathcal{S}^{i_t}_{l}$ and $\mathcal{S}^{i_t}_{r}$, according to \eqref{partition};
\STATE Compute the lower bound and upper bound for $\mathcal{P}^{i_t}_{l}$ and record the solutions;
\STATE Compute the lower bound and upper bound for $\mathcal{P}^{i_t}_{r}$ and record the solutions;
\STATE Add problems $\mathcal{P}^{i_t}_{l}$ and $\mathcal{P}^{i_t}_{r}$ to problem set $\{\mathcal{P}^i\}_{i\in\mathcal{I}_{t+1}}$;
\STATE $t\leftarrow t+1$;
\STATE Update upper bound $U^t$ and lower bound $L^t$ for problem \eqref{ori:BnB} as the smallest upper bound and lower bound among $\{\mathcal{P}^i\}_{i\in\mathcal{I}_{t}}$, respectively;
\UNTIL $\frac{U^t-L^t}{L^t}\le\epsilon$
\end{algorithmic}
\end{algorithm}
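A minimal Python sketch of the main loop in the algorithm above is given next; \texttt{compute\_bounds} (returning a lower bound, an upper bound, and the lower-bound solution) and \texttt{branch} are problem-specific callbacks assumed to be implemented elsewhere, e.g., with a convex QCQP solver for \texttt{compute\_bounds}.
\begin{verbatim}
import heapq

def bnb(S0, compute_bounds, branch, eps=1e-5, max_iter=100000):
    L, U, x = compute_bounds(S0)
    heap, tie, best_U = [(L, 0, S0, x)], 1, U
    for _ in range(max_iter):
        L, _, S, x = heapq.heappop(heap)   # smallest lower bound
        if best_U - L <= eps * L:          # (U^t - L^t)/L^t <= eps
            break
        for S_sub in branch(S, x):
            Ls, Us, xs = compute_bounds(S_sub)
            best_U = min(best_U, Us)
            heapq.heappush(heap, (Ls, tie, S_sub, xs))
            tie += 1
    return best_U, L
\end{verbatim}
The integer tie-breaker in each heap entry only prevents Python from comparing subregions when two lower bounds coincide.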
\section{Simulation Results}
In this section, we present the simulation results of the proposed algorithm for AirComp in IoT networks.
We consider a three-dimensional setting, where the AP is located at $(0,0,20)$ meters, while the IoT devices are uniformly located within a circular region centered at $(120,~ 20,~ 0)$ meters with radius $20$ meters.
The antennas at the AP are arranged as a uniform linear array.
In the simulations, we consider both large-scale fading and small-scale fading for the channel.
The distance-dependent large-scale fading is modeled as $T_0(d/d_0)^{-\alpha}$,
where $T_0$ is the path loss at the reference distance $d_0 = 1$ meter, $d$ denotes the distance between transmitter and receiver, and $\alpha$ is the path loss exponent.
Besides, we model the small-scale fading as Rician fading with Rician factor $\beta$.
All results in the simulations are obtained by averaging over $500$ channel realizations.
Unless specified otherwise, we set $\alpha = 3$, $T_0 = -30 $ dB, $\beta = 3$, $P = 30$ dBm, $\sigma^2 = -100$ dBm, and $\epsilon = 10^{-5}$.
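The channel model described above can be generated as follows; this Python sketch is our own illustration, and in particular the all-ones line-of-sight component is an assumption made for the example.
\begin{verbatim}
import numpy as np

def rician_channel(N, d, alpha=3.0, T0_dB=-30.0, beta=3.0, rng=None):
    # Large-scale fading T0 * (d/d0)^(-alpha) with d0 = 1 m, combined
    # with Rician small-scale fading with Rician factor beta.
    rng = rng or np.random.default_rng()
    T0 = 10.0 ** (T0_dB / 10.0)
    path_loss = T0 * d ** (-alpha)
    los = np.ones(N, dtype=complex)                 # assumed LoS part
    nlos = (rng.standard_normal(N)
            + 1j * rng.standard_normal(N)) / np.sqrt(2.0)
    small = (np.sqrt(beta / (1.0 + beta)) * los
             + np.sqrt(1.0 / (1.0 + beta)) * nlos)
    return np.sqrt(path_loss) * small
\end{verbatim}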
\subsection{Convergence Performance}
\begin{figure*}[h]
\centering
\subfigure[MSE versus the number of iterations when $K=8$ and $N=4$.]{
\label{convergence}
\includegraphics[width=0.66\columnwidth]{converge.eps}}
\subfigure[MSE versus the number of antennas at AP when $K=10$.]{
\label{antenna}
\includegraphics[width=0.66\columnwidth]{AP_fixed.eps}}
\subfigure[MSE versus the number of IoT devices when $N=10$.]{
\label{devices}
\includegraphics[width=0.66\columnwidth]{user_fixed.eps}}
\caption{Performance of the proposed BnB algorithm for AirComp in IoT networks.}
\end{figure*}
We present the convergence performance of the proposed BnB algorithm in Fig. \ref{convergence}.
It can be observed that the upper bound decreases and the lower bound increases as the iteration proceeds.
In addition, the gap between the upper bound and the lower bound diminishes as the number of iterations increases.
In particular, the algorithm terminates within 250 iterations, at which point the gap between the upper bound and the lower bound falls below the predefined convergence tolerance.
\subsection{Performance Evaluation of the Existing Algorithms}
In this subsection, we compare the proposed BnB algorithm with SDR \cite{Tom_luo_sdr} and SDR-based SCA \cite{chen2018uniform} algorithms.
Fig. \ref{antenna} shows the impact of the number of antennas at the AP on the MSE when the number of IoT devices is $K = 10$.
As can be observed, the MSE of AirComp monotonically decreases as the number of antennas increases.
This is because deploying a larger antenna array leads to a greater diversity gain.
Besides, it is clear that the proposed BnB algorithm has the best performance in minimizing the MSE.
This is because our proposed BnB algorithm is a global optimization algorithm that can approach the optimal solution within any prescribed error tolerance.
The performance gap between our proposed BnB algorithm and the SDR method is considerably large, as the SDR method performs poorly when optimizing the AirComp system.
By comparing the SDR-based SCA algorithm with the proposed algorithm, one can see that the former obtains a high-quality solution in terms of the MSE.
The MSE versus the number of the IoT devices is plotted in Fig. \ref{devices}, where the number of antennas at the AP is set to be $10$.
It is obvious that the quality of the solutions of the SDR and SDR-based SCA algorithms degrades as the number of IoT devices increases.
This is because the performance of the SDR-based SCA algorithm heavily depends on the quality of the solution returned by the SDR algorithm, which usually does not work well as the number of IoT devices increases.
\section{Conclusions}
In this paper, we investigated the joint design of the transmit scalars, the denoising factor, and the receive beamforming vector for AirComp in IoT networks.
We derived the closed-form expressions for the transmit scalars and the denoising factor, resulting in a non-convex QCQP problem with respect to the receive beamforming vector at the AP.
We then proposed a globally optimal BnB algorithm to optimize the receive beamforming vector.
The MSE achieved by the proposed BnB algorithm in the simulations revealed the substantial potential of optimizing the receive beamformer.
Our proposed algorithm can be adopted as a benchmark to evaluate the performance of the existing sub-optimal algorithms, e.g., SDR and SDR-based SCA.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper, a new iterative reproducing kernel approach will be
constructed for obtaining the numerical solution of the nonlinear fractional three-point
boundary value problem
\begin{eqnarray} \label{eq1}
a_{2}(\xi){\ }^{c}D^{\alpha}z(\xi)+a_{1}(\xi){\ }^{c}D^{\beta}z(\xi)+a_{0}(\xi)z(\xi)=g(\xi,z(\xi),z^{\prime }(\xi)),\quad \xi\in\lbrack 0,1]
\end{eqnarray}
with the following boundary conditions:
\begin{eqnarray} \label{eq2}
z(0)=\gamma_{0},\quad z(\theta)=\gamma_{1},\quad z(1)=\gamma_{2}, \quad 0<\theta<1,\quad
1<\alpha\leq2,\quad 0<\beta\leq1.
\end{eqnarray}
Here, $a_{0}(\xi),$ $a_{1}(\xi),$ $a_{2}(\xi)$ $\in$ $C^{2}(0,1)$ and
$g(\xi,z)\in L_{\rho}^{2}[0,1]$ are sufficiently smooth functions, and the fractional derivatives are taken in the Caputo sense. Without
loss of generality, we restrict attention to $z(0)=0$, $z(\theta)=0$ and $z(1)=0$,
since the boundary conditions $z(0)=\gamma_{0}$, $z(\theta)=\gamma_{1}$ and $z(1)=\gamma_{2}$
can easily be reduced to $z(0)=0$, $z(\theta)=0$ and $z(1)=0$.
Nonlinear fractional multi-point boundary value problems appear in
different areas of applied mathematics and physics \cite{1,2,3,4,5,6,7} and
the references therein. Many important processes in
engineering and applied science, such as dynamical systems, fluid mechanics,
control theory, oil industries, and heat conduction, can be well modeled by
fractional differential equations \cite{8,9,10}. Applications,
qualitative behavior of solutions, and numerical methods for approximate
solutions have been investigated for differential equations of fractional
order \cite{11,12,13,14}.
More particularly, it is not easy to obtain exact solutions to most
differential equations of fractional order directly. Hence, numerical techniques
are largely utilized. Indeed, in recent times many efficient and
convenient methods have been developed, such as the finite difference method
\cite{15}, finite element method \cite{16}, homotopy perturbation method
\cite{17}, Haar wavelet methods \cite{18}, Adomian decomposition method \cite{19}, collocation methods \cite{20}, homotopy analysis method \cite{21},
differential transform method \cite{22}, variational iteration method \cite{23}, reproducing kernel space method \cite{24,25} and so on \cite{26,27,28}.
In 1908, Zaremba first introduced the reproducing kernel concept \cite{29}.
His research concerned boundary value problems with
Dirichlet conditions. The reproducing kernel method (RKM) produces a
solution in convergent series form for many differential, partial and
integro-differential equations. For more information, we refer to \cite{30,31}. Recently, the RKM has been applied to different types of problems, for
example, fractional order nonlocal boundary value problems \cite{32},
Riccati differential equations \cite{33}, forced Duffing equations with
nonlocal boundary conditions \cite{34}, Bratu equations with fractional
order Caputo derivative \cite{35}, the time-fractional Kawahara equation \cite{36}, two-point boundary value problems \cite{37}, and nonlinear fractional
Volterra integro-differential equations \cite{38}.
Recently, a Legendre reproducing kernel method was proposed for fractional
two-point boundary value problems of Bratu-type equations \cite{39}. The main
motivation of this paper is to extend the Legendre reproducing kernel
approach to solving nonlinear three-point boundary value problems with the
Caputo derivative.
The remainder of the paper is organized as follows: some fundamental
definitions of fractional calculus and the theory of reproducing kernels with
Legendre basis functions are given in Section 2. The structure of the solution
with the Legendre reproducing kernel is demonstrated in Section 3. In order to
show the effectiveness of the proposed method, some numerical findings are
reported in Section 4. Finally, the last section contains some conclusions.
\section{Preliminaries}
In this section, several significant concepts, definitions, theorems, and
properties are provided which will be used in this research. \newline
\newline
\textbf{Definition 2.1} Let $z(\xi)\in C[0,1]$ and $\xi\in[0,1]$. Then, the
$\alpha$ order left Riemann-Liouville fractional integral operator is given
as \cite{8,12,13}:
\begin{eqnarray*}
J_{0+}^{\alpha}z(\xi)=\frac{1}{\Gamma(\alpha)}\int\limits_{0}^{\xi}
{(\xi-s)^{\alpha -1}z(s)ds},
\end{eqnarray*}
where $\Gamma(\cdot)$ is the Gamma function, $\alpha\geq0$ and $\xi>0$. \newline
\newline
\textbf{Definition 2.2} Let $z(\xi)\in AC[0,1]$ and $\xi\in[0,1]$. Then, the
$\alpha$ order left Caputo differential operator is given as \cite{8,12,13}:
\begin{eqnarray*}
{\ }^{c}D_{0+}^{\alpha}z(\xi)=\frac{1}{\Gamma(m-\alpha)}\int_{0}^{\xi}
(\xi-s)^{m-\alpha-1}\frac{d^{m}z(s)}{ds^{m}}\,ds, \,\
m-1<\alpha< m, \ m\in\mathbb{N} \,\ \hbox{and} \,\ \xi>0.
\end{eqnarray*}
\newline
\noindent\textbf{Definition 2.3} In order to construct a polynomial-type
reproducing kernel, the first kind shifted Legendre polynomials are defined
over the interval $[0,1]$. These polynomials can be obtained from the
following recurrence formula:
\begin{eqnarray*}
P_{0}(\xi) &=& 1, \\
P_{1}(\xi) &=& 2\xi-1, \\
&\vdots& \\
(n+1)P_{n+1}(\xi) &=&(2n+1)(2\xi-1)P_{n}(\xi)-nP_{n-1}(\xi), \,\ n=1,2,...
\end{eqnarray*}
The orthogonality requirement is
\begin{eqnarray}
\langle P_{n},P_{m} \rangle=\int_{0}^{1}\rho_{[0,1]}(\xi)P_{n}(\xi)P_{m}
(\xi)d\xi=\left\{
\begin{array}{ll}
0, & n\neq m, \\
1, & n=m=0, \\
\frac{1}{2n+1}, & n=m\neq0
\end{array}
\right.
\end{eqnarray}
where the weight function is taken as
\begin{eqnarray} \label{eq4}
\rho_{[0,1]}(\xi)=1.
\end{eqnarray}
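As a quick numerical illustration (a sketch of our own, not part of the method itself), the recurrence and the orthogonality relation above can be checked as follows.
\begin{verbatim}
import numpy as np

def shifted_legendre(n, xi):
    # Three-term recurrence:
    # (k+1) P_{k+1} = (2k+1)(2 xi - 1) P_k - k P_{k-1}
    p_prev, p = np.ones_like(xi), 2 * xi - 1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * (2 * xi - 1) * p
                        - k * p_prev) / (k + 1)
    return p

xi = np.linspace(0.0, 1.0, 200001)
for n, m in [(2, 3), (4, 4)]:
    val = np.trapz(shifted_legendre(n, xi) * shifted_legendre(m, xi), xi)
    print(n, m, val)   # ~0 for n != m, ~1/(2n+1) for n = m != 0
\end{verbatim}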
Legendre basis functions can be established so that this basis function
system satisfies the homogeneous boundary conditions:
\begin{eqnarray} \label{eq5}
z(0)=0 \,\,\hbox{and}\,\ z(1)=0.
\end{eqnarray}
Eq. (\ref{eq5}) is an advantageous feature for solving boundary value
problems. Therefore, these basis functions for $j\geq2$ can be defined as
\begin{eqnarray} \label{eq6}
\phi_{j}(\xi)= \left\{
\begin{array}{ll}
P_{j}(\xi)-P_{0}(\xi), & \hbox{$j$ is even,} \\
P_{j}(\xi)-P_{1}(\xi), & \hbox{$j$ is odd.}
\end{array}
\right.
\end{eqnarray}
such that this system satisfies the conditions
\begin{eqnarray} \label{eq7}
\phi_{j}(0)=\phi_{j}(1)=0.
\end{eqnarray}
It is worth noting that the basis functions given in Eq. (\ref{eq6}) form a
complete system. For more information about orthogonal polynomials, please
see \cite{41,42,43}.\newline
\noindent\textbf{Definition 2.4} Let $\Omega \neq \emptyset$, and $\mathbb{H}
$ with its inner product $\langle\cdot,\cdot\rangle_\mathbb{H}$ be a Hilbert
space of real-valued functions on $\Omega$. Then, the reproducing kernel of
$\mathbb{H}$ is $R:\Omega\times \Omega\rightarrow \mathbb{R}$ iff
\begin{enumerate}
\item $R(\cdot,\xi) \in \mathbb{H}, \forall \xi \in \Omega$
\item $\langle\phi,R(\cdot,\xi) \rangle_\mathbb{H} = \phi(\xi),
\forall\phi\in \mathbb{H}, \forall \xi \in \Omega$.
\end{enumerate}
The last condition is known as the reproducing property. In particular, for any $x$,
$\xi$ $\in$ $\Omega$,
\begin{eqnarray}
R(x,\xi)=\langle R(\cdot,x),R(\cdot,\xi) \rangle_\mathbb{H}. \notag
\end{eqnarray}
If a Hilbert space satisfies the above two conditions, then it is called a
reproducing kernel Hilbert space. Uniqueness of the reproducing kernel can
be shown by use of the Riesz representation theorem \cite{40}. \newline
\newline
\textbf{Theorem 2.1} Let $\{e_{j}\}_{j=1}^{n}$ be an orthonormal basis of an
$n$-dimensional Hilbert space $\mathbb{H}$; then
\begin{eqnarray} \label{eq8}
R(x,\xi )=R_{x}(\xi )=\sum_{j=1}^{n}\bar{e}_{j}(x)e_{j}(\xi )
\end{eqnarray}
is the reproducing kernel of $\mathbb{H}$ \cite{30,31}.\newline
\newline
\textbf{Definition 2.5} Let $W_{\rho }^{m}[0,1]$ be the pre-Hilbert space of
polynomials over $[0,1]$ with real coefficients and degree $\leq m$, with
inner product
\begin{equation} \label{eq9}
\langle z,v\rangle _{W_{\rho }^{m}}=\int_{0}^{1}\rho _{\lbrack 0,1]}(\xi
)z(\xi )v(\xi )d\xi ,\,\,\ \forall z,v\in W_{\rho }^{m}[0,1]
\end{equation}
with $\rho _{\lbrack 0,1]}(\xi )$ described by Eq. (\ref{eq4}), and the norm
\begin{equation} \label{eq10}
\Vert z\Vert _{W_{\rho }^{m}}=\sqrt{\langle z,z\rangle }_{W_{\rho
}^{m}},\,\,\ \forall z\in W_{\rho }^{m}[0,1].
\end{equation}
With the aid of the definition of the $L^{2}$ Hilbert space, $L_{\rho
}^{2}[0,1]=\{g|\int_{0}^{1}\rho _{\lbrack 0,1]}(\xi )|g(\xi )|^{2}d\xi
<\infty \}$, for any fixed $m$, $W_{\rho }^{m}[0,1]$ is a subspace of
$L_{\rho }^{2}[0,1]$ and $\forall z,v\in W_{\rho }^{m}[0,1]$, $\langle
z,v\rangle _{W_{\rho }^{m}}=\langle z,v\rangle _{L_{\rho }^{2}}$. \newline
\newline
\textbf{Theorem 2.2} $W_{\rho }^{m}[0,1]$ Hilbert space is a reproducing
kernel space. \newline
\newline
\textbf{Proof.} From Definition 2.5, it is quite apparent that the space $W_{\rho
}^{m}[0,1]$ is finite-dimensional. It is well known that
every finite-dimensional pre-Hilbert space is a Hilbert space. Hence, using
this fact and Theorem 2.1, $W_{\rho }^{m}[0,1]$ is a reproducing
kernel space.\newline
\newline
For solving problem (\ref{eq1})-(\ref{eq2}), it is required to describe a
closed subspace of $W_{\rho }^{m}[0,1]$ that satisfies the homogeneous
boundary conditions. \newline
\newline
\textbf{Definition 2.6} Let
\begin{equation*}
{\ }^{0}W_{\rho }^{m}[0,1]=\{z\text{ }|\text{ }z\in W_{\rho }^{m}[0,1],\text{
}z(0)=z(1)=0\}.
\end{equation*}
One can easily demonstrate that ${\ }^{0}W_{\rho }^{m}[0,1]$ is a
reproducing kernel space using Eq. (\ref{eq6}). From Theorem 2.1, the kernel
function $R_{x}^{m}(\xi )$ of ${\ \ }^{0}W_{\rho }^{m}[0,1]$ can be written
as
\begin{equation} \label{eq11}
R_{x}^{m}(\xi )=\sum_{j=2}^{m}h_{j}(\xi ){h_{j}}(x).
\end{equation}
Here, $\{h_{j}(\xi )\}$ is a complete system, which is easily obtained from the basis
functions in Eq. (\ref{eq6}) with the help of the Gram-Schmidt
orthonormalization process. Eq. (\ref{eq11}) is very useful for
implementation. In other words, $R_{x}^{m}(\xi )$ and $W_{\rho }^{m}[0,1]$
can readily be re-calculated by increasing $m$.
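A discretized version of this construction can be sketched as follows (our own illustration; the quadrature grid and the trapezoidal rule are implementation choices, and the shifted Legendre polynomials are evaluated through the standard Legendre series routine of NumPy).
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

xi = np.linspace(0.0, 1.0, 4001)

def shifted_legendre(j, x):
    return legval(2 * x - 1, [0] * j + [1])   # P_j(2x - 1)

def inner(f, g):
    return np.trapz(f * g, xi)                # weight rho = 1 on [0, 1]

def phi(j):
    # Basis functions of Eq. (eq6): P_j - P_0 (j even), P_j - P_1 (j odd)
    base = shifted_legendre(0, xi) if j % 2 == 0 else shifted_legendre(1, xi)
    return shifted_legendre(j, xi) - base

def kernel_matrix(m):
    # R^m(x, xi) = sum_j h_j(x) h_j(xi), with h_j obtained from
    # {phi_j}_{j=2}^{m} by Gram-Schmidt orthonormalization.
    hs = []
    for j in range(2, m + 1):
        h = phi(j)
        for hk in hs:
            h = h - inner(h, hk) * hk
        hs.append(h / np.sqrt(inner(h, h)))
    H = np.array(hs)
    return H.T @ H                            # R[x_index, xi_index]
\end{verbatim}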
\section{Main Results}
In this section, some important results related to the reproducing kernel method
with shifted Legendre polynomials are presented. In the first subsection, the
generation of a reproducing kernel that satisfies three-point boundary value
problems is presented. In the second subsection, the representation of the solution
in ${\ }^{\theta}W_{\rho}^{m}[0,1]$ is given. Then, we will construct an
iterative process for the nonlinear problem in the third subsection.
\subsection{Generation of reproducing kernel for three-point boundary value
problems}
In this subsection, we shall generate a reproducing kernel Hilbert space ${\
}^{\theta }W_{\rho }^{m}[0,1]$ in which every function satisfies $z(0)=0$,
$z(\theta )=0$ and $z(1)=0$. \newline
\newline
${\ }^{\theta }W_{\rho }^{m}[0,1]$ is defined as ${\ }^{\theta }W_{\rho
}^{m}[0,1]=\{z|z\in W_{\rho }^{m}[0,1],z(0)=z(\theta )=z(1)=0\}$. \newline
\newline
Obviously, ${\ }^{\theta }W_{\rho }^{m}[0,1]$ reproducing kernel space is a
closed subspace of ${\ }^{0}W_{\rho }^{m}[0,1]$. The reproducing kernel of
${\ }^{\theta }W_{\rho }^{m}[0,1]$ can be given with the following theorem.
\newline
\newline
\textbf{Theorem 3.1} The reproducing kernel ${\ }^{\theta }R_{x}^{m}(\xi )$
of ${\ }^{\theta }W_{\rho }^{m}[0,1]$ is given by
\begin{equation} \label{eq12}
{\ }^{\theta }R_{x}^{m}(\xi )=R_{x}^{m}(\xi )-\frac{R_{x}^{m}(\theta
)R_{\theta }^{m}(\xi )}{R_{\theta }^{m}(\theta )}.
\end{equation}
\newline
\textbf{Proof.} Clearly, not all elements of ${\ }^{0}W_{\rho }^{m}[0,1]$
vanish at $\theta $. This shows that $R_{\theta }^{m}(\theta )\neq 0$.
Hence, it can easily be seen that ${\ }^{\theta }R_{x}^{m}(\theta )={\ }^{\theta }R_{\theta }^{m}(\xi )=0$ and therefore ${\ }^{\theta }R_{x}^{m}(\xi )\in {\ }^{\theta }W_{\rho }^{m}[0,1]$. For every $z\left(
x\right) \in {\ }^{\theta }W_{\rho }^{m}[0,1]$, clearly $z\left( \theta
\right) =0$, and it follows that
\begin{equation*}
\langle z(x),{}^{\theta }R_{x}^{m}(\xi )\rangle_{^{\theta }W_{\rho }^{m}[0,1]}=\langle z(x),
R_{x}^{m}(\xi )\rangle-\frac{R_{x}^{m}(\theta )z(\theta )}{R_{\theta
}^{m}(\theta )}=z(\xi ).
\end{equation*}
Namely, ${\ }^{\theta }R_{x}^{m}(\xi )$ is the reproducing kernel of
${\ }^{\theta }W_{\rho }^{m}[0,1]$. This completes the proof.
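In a discretized setting this rank-one correction is immediate to apply; the sketch below (an illustration of ours, reusing the kernel matrix from the previous sketch) enforces the vanishing of every kernel section at $\theta$.
\begin{verbatim}
import numpy as np

def three_point_kernel(R, xi, theta):
    # Theorem 3.1: R_theta(x, s) = R(x, s) - R(x, t) R(t, s) / R(t, t),
    # with t the grid point closest to theta.
    it = int(np.argmin(np.abs(xi - theta)))
    return R - np.outer(R[:, it], R[it, :]) / R[it, it]
\end{verbatim}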
\subsection{Representation of the solution in the ${\ }^{\protect\theta}W_{\protect\rho }^{m}[0,1]$ Hilbert space}
In this subsection, the reproducing kernel method with Legendre polynomials is
established for obtaining the numerical solution of the three-point boundary value
problem. For Eqs. (\ref{eq1})-(\ref{eq2}), the approximate solution shall be
constructed in ${\ }^{\theta }W_{\rho }^{m}[0,1]$. Firstly, we define the
linear operator $L$ as follows:
\begin{equation*}
L:{\ }^{\theta }W_{\rho }^{m}[0,1]\rightarrow L_{\rho }^{2}[0,1]
\end{equation*}
such that
\begin{equation*}
Lz(\xi ):=a_{2}(\xi ){\ }^{c}D^{\alpha }z(\xi )+a_{1}(\xi ){\ }^{c}D^{\beta
}z(\xi )+a_{0}(\xi )z(\xi ).
\end{equation*}
Then Eqs. (\ref{eq1})-(\ref{eq2}) can be stated as follows:
\begin{equation}
\left\{
\begin{array}{ll}
Lz=g(\xi ,z(\xi ),z^{\prime }(\xi )) & \\
z(0)=z(\theta )=z(1)=0. &
\end{array}
\right. \label{eq13}
\end{equation}
It can easily be shown that the linear operator $L$ is bounded. We will obtain the
representation of the solution of Eq. (\ref{eq13}) in the ${\ }^{\theta }W_{\rho
}^{m}[0,1]$ space. Let $^{\theta }R_{x}^{m}(\xi )$ be the polynomial form of the
reproducing kernel in the ${\ }^{\theta }W_{\rho }^{m}[0,1]$ space.\newline
\newline
\textbf{Theorem 3.2} Let $\{\xi _{j}\}_{j=0}^{m-2}$ be any $(m-1)$ distinct
points in the open interval $(0,1)$ for Eqs. (\ref{eq1})-(\ref{eq2}); then $\psi
_{j}^{m}(\xi )=L^{\ast }$ $^{\theta }R_{\xi _{j}}^{m}(\xi )=L_{x}$ $^{\theta
}R_{x}^{m}(\xi )|_{x=\xi _{j}}.$\newline
\textbf{Proof.} For any fixed $\xi _{j}\in (0,1)$, put
\begin{eqnarray}
\psi _{j}^{m}(\xi ) &=&L^{\ast \text{ }\theta }R_{\xi _{j}}^{m}(\xi
)=\langle L^{\ast \text{ }\theta }R_{\xi _{j}}^{m}(\xi ),^{\theta }R_{\xi
}^{m}(x)\rangle _{{\ }^{\theta }W_{\rho }^{m}} \notag \label{eq14} \\
&=&\langle ^{\theta }R_{\xi _{j}}^{m}(\xi ),L_{x}\text{ }^{\theta }R_{\xi
}^{m}(x)\rangle _{L_{\rho }^{2}}=L_{x}\text{ }^{\theta }R_{\xi
}^{m}(x)|_{x=\xi _{j}}.
\end{eqnarray}
It is quite obvious that $^{\theta }R_{\xi }^{m}(x)=$ $^{\theta
}R_{x}^{m}(\xi )$. Therefore $\psi _{j}^{m}(\xi )=L^{\ast }$ $^{\theta
}R_{\xi _{j}}^{m}(\xi )=L_{x}$ $^{\theta }R_{x}^{m}(\xi )|_{x=\xi _{j}}$.
Here, $L^{\ast }$ denotes the adjoint operator of $L$. For any fixed $m$ and
$\xi _{j}\in (0,1)$, $\psi _{j}^{m}\in {\ }^{\theta }W_{\rho }^{m}[0,1]$.
\newline
\newline
\textbf{Theorem 3.3} Let $\{\xi _{j}\}_{j=0}^{m-2}$ be any $(m-1)$ distinct
points in the open interval $(0,1)$ for $m\geq 2$; then $\{\psi
_{j}^{m}\}_{j=0}^{m-2}$ is complete in ${\ }^{\theta }W_{\rho }^{m}[0,1]$.
\newline
\newline
\textbf{Proof.} For every fixed $z\in {\ }^{\theta }W_{\rho }^{m}[0,1]$, let
\begin{equation*}
\langle z(\xi ),\psi _{j}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}=0;
\end{equation*}
this implies, for $j=0,1,...,m-2$,
\begin{eqnarray}
\langle z(\xi ),\psi _{j}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}
&=&\langle z(\xi ),L^{\ast \text{ }\theta }R_{\xi _{j}}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}} \notag \\
&=&\langle Lz(\xi ),^{\theta }R_{\xi _{j}}^{m}(\xi )\rangle _{L_{\rho }^{2}}
\notag \\
&=&Lz(\xi _{j})=0.
\end{eqnarray}
In Eq. (15), by use of the inverse operator, it follows that $z\equiv 0$.
Thus, $\{\psi _{j}^{m}\}_{j=0}^{m-2}$ is complete in ${\ }^{\theta }W_{\rho
}^{m}[0,1]$. This completes the proof. \newline
\newline
Theorem 3.3 indicates that, in the Legendre reproducing kernel approach, finitely
many distinct points are enough. In contrast, the traditional reproducing kernel
method needs a dense sequence on the interval. Hence, this new approach
differs from the traditional methods in \cite{28,32,33,34,35,38}.\newline
\newline
The orthonormal system $\{\bar{\psi}_{j}^{m}\}_{j=0}^{m-2}$ of ${\ }^{\theta
}W_{\rho }^{m}[0,1]$ can be derived with the help of the Gram-Schmidt
orthogonalization process using $\{\psi _{j}^{m}\}_{j=0}^{m-2}$,
\begin{equation}
\bar{\psi}_{j}^{m}(\xi )=\sum_{k=0}^{j}\beta _{jk}^{m}\psi _{k}^{m}(\xi ),
\label{eq16}
\end{equation}
where $\beta _{jk}^{m}$ denote the orthogonalization coefficients. \newline
\newline
\textbf{Theorem 3.4} Suppose that $z_{m}$ is the exact solution of Eqs. (\ref{eq1})-(\ref{eq2}) and $\{\xi _{j}\}_{j=0}^{m-2}$ denotes any $(m-1)$ distinct
points in the open interval $(0,1)$; in that case
\begin{equation}
z_{m}(\xi )=\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}g(\xi
_{k},z_{m}(\xi _{k}), z_{m}^\prime(\xi _{k}))\bar{\psi}_{j}^{m}(\xi ). \label{eq17}
\end{equation}
\textbf{Proof.} Since $z_{m}\in {\ }^{\theta }W_{\rho }^{m}[0,1]$, from
Theorem 3.3 it can be written that
\begin{equation*}
z_{m}(\xi )=\sum_{j=0}^{m-2}\langle z_{m}(\xi ),\bar{\psi}_{j}^{m}(\xi
)\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi ).
\end{equation*}
On the other hand, using Eq. (\ref{eq14}) and Eq. (\ref{eq16}), we obtain
$z_{m}(\xi )$, which is the precise solution of Eq. (\ref{eq13}), in ${\ }^{\theta }W_{\rho }^{m}[0,1]$ as
\begin{eqnarray*}
z_{m}(\xi ) &=&\sum_{j=0}^{m-2}\langle z_{m}(\xi ),\bar{\psi}_{j}^{m}(\xi
)\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\langle z_{m}(\xi ),\sum_{k=0}^{j}\beta _{jk}^{m}\psi
_{k}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi )
\\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle z_{m}(\xi ),\psi
_{k}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi )
\\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle z_{m}(\xi ),L^{\ast
}{}^{\theta }R_{\xi _{k}}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle Lz_{m}(\xi
),^{\theta }R_{\xi _{k}}^{m}(\xi )\rangle _{L_{\rho }^{2}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle g(\xi ,z_{m}(\xi
),z^\prime_{m}(\xi)),^{\theta }R_{\xi _{k}}^{m}(\xi )\rangle _{L_{\rho }^{2}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}g(\xi _{k},z_{m}(\xi _{k}),z^\prime_{m}(\xi _{k}))\bar{\psi}_{j}^{m}(\xi ).
\end{eqnarray*}
The proof is completed. \newline
\newline
\textbf{Theorem 3.5} If $z_{m}(\xi )\in {\ }^{\theta }W_{\rho }^{m}[0,1]$,
then $|z_{m}^{(s)}(\xi )|\leq F\Vert z_{m}\Vert _{{\ }^{\theta }W_{\rho
}^{m}}$ for $s=0,\ldots ,m-1$, where $F$ is a constant. \newline
\newline
\textbf{Proof.} We have $z_{m}^{(s)}\left( \xi \right) =\langle z_{m}\left(
x\right) ,\partial _{\xi }^{s}\,{}^{\theta }R_{\xi }^{m}\left( x\right)
\rangle _{{\ }^{\theta }W_{\rho }^{m}}$ for any $\xi ,\,x\in \left[ 0,1\right] $, $s=0,\ldots ,m-1.$ From the expression of $^{\theta }R_{\xi
}^{m}\left( x\right) $, it follows that $\left\Vert \partial _{\xi }^{s}\,{}^{\theta }R_{\xi }^{m}\left( x\right) \right\Vert _{{\ }^{\theta }W_{\rho
}^{m}}\leq F_{s},\,s=0,\ldots ,m-1.$\newline
So,
\begin{eqnarray*}
|z_{m}^{(s)}(\xi )| &=&|\langle z_{m}(\xi ),\partial _{\xi }^{s}\,{}^{\theta }R_{\xi }^{m}\left( x\right) \rangle _{{\ }^{\theta }W_{\rho }^{m}}| \\
&\leq &\Vert z_{m}(\xi )\Vert _{{\ }^{\theta }W_{\rho }^{m}}\Vert \partial _{\xi }^{s}\,{}^{\theta }R_{\xi }^{m}\left( \xi \right) \Vert _{{\ }^{\theta }W_{\rho }^{m}} \\
&\leq &F_{s}\Vert z_{m}(\xi )\Vert _{{\ }^{\theta }W_{\rho
}^{m}},\quad s=0,\ldots ,m-1.
\end{eqnarray*}
Therefore, $|z_{m}^{(s)}(\xi )|\leq \max \{F_{0},\ldots ,F_{m-1}\}\left\Vert
{z_{m}\left( \xi \right) }\right\Vert _{{\ }^{\theta }W_{\rho
}^{m}},\,s=0,\ldots ,m-1$. \newline
\newline
\textbf{Theorem 3.6} $z_{m}(\xi )$ and its derivatives $z_{m}^{(s)}(\xi )$
converge uniformly to $z(\xi )$ and $z^{(s)}(\xi )$, respectively
($s=0,\ldots ,m-1$). \newline
\newline
\textbf{Proof.} By using Theorem 3.5, for any $\xi \in \lbrack 0,1]$ we get
\begin{eqnarray*}
|z_{m}^{(s)}(\xi )-z^{(s)}(\xi )| &=&|\langle z_{m}(\xi )-z(\xi ),\partial
_{\xi }^{s}\text{ }^{\theta }R_{\xi }^{m}\left( \xi \right) \rangle _{{\ }^{\theta }W_{\rho }^{m}}| \\
&\leq &\Vert \partial _{\xi }^{s}\text{ }^{\theta }R_{\xi }^{m}\left( \xi
\right) \Vert _{{\ }^{\theta }W_{\rho }^{m}}\Vert z_{m}(\xi )-z(\xi )\Vert _{{\ }^{\theta }W_{\rho }^{m}} \\
&\leq &F_{s}\Vert z_{m}(\xi )-z(\xi )\Vert _{{\ }^{\theta }W_{\rho
}^{m}},\,\ s=0,\ldots ,m-1,
\end{eqnarray*}
where $F_{0},\ldots ,F_{m-1}$ are positive constants. Therefore, if
$z_{m}(\xi )\rightarrow z(\xi )$ in the sense of the norm of ${\ }^{\theta
}W_{\rho }^{m}[0,1]$ as $m\rightarrow \infty $, then $z_{m}(\xi )$ and its
derivatives $z_{m}^{^{\prime }}(\xi ),\ldots ,z_{m}^{(m-1)}(\xi )$
converge uniformly to $z(\xi )$ and its derivatives
$z^{^{\prime }}(\xi ),\ldots ,z^{(m-1)}(\xi )$, respectively. This completes the proof.\\
If the considered problem is linear, the numerical solution can be obtained directly from (\ref{eq17}). But for a nonlinear problem, the following iterative procedure can be constructed.
\subsection{Construction of iterative procedure}
In this subsection, we will use the following iterative sequence
$y_{m,n}(\xi )$ to overcome the nonlinearity of the problem, setting
\begin{equation} \label{eq18}
\left\{ {{\begin{array}{*{20}c} {Ly_{m,n}\left( \xi \right) = g\left(
{\xi,z_{m,n-1}(\xi),z^\prime_{m,n-1}(\xi)} \right)} \hfill \\ {z_{m,n}\left( \xi \right) = P_{m-1}
y_{m,n} (\xi)} \hfill \\ \end{array}}}\right.
\end{equation}
where the orthogonal projection operator is defined as $P_{m-1}:{\ }^{\theta
}W_{\rho }^{m}[0,1]\rightarrow {\rm span}\{\bar{\psi}_{0}^{m},\bar{\psi}_{1}^{m},\ldots ,\bar{\psi}_{m-2}^{m}\}$ and $y_{m,n}(\xi )\in {\ }^{\theta
}W_{\rho }^{m}[0,1]$ denotes the $n$-th iterative numerical solution of (\ref{eq18}). Then, the following important theorem will be given for the iterative
procedure. \newline
\newline
\textbf{Theorem 3.7} If $\{\xi _{j}\}_{j=0}^{m-2}$ are distinct points in the
open interval $(0,1)$, then
\begin{equation} \label{eq19}
y_{m,n}(\xi )=\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}g(\xi
_{k},z_{m,n-1}(\xi _{k}),z^\prime_{m,n-1}(\xi _{k}))\bar{\psi}_{j}^{m}(\xi )
\end{equation}
\textbf{Proof.} Since $y_{m,n}(\xi )\in {\ }^{\theta }W_{\rho }^{m}[0,1]$ and
$\{\bar{\psi}_{j}^{m}(\xi )\}_{j=0}^{m-2}$ is the complete orthonormal system
in ${\ }^{\theta }W_{\rho }^{m}[0,1]$,
\begin{eqnarray*}
y_{m,n}(\xi ) &=&\sum_{j=0}^{m-2}\langle y_{m,n}(\xi ),\bar{\psi}_{j}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi )
\\
&=&\sum_{j=0}^{m-2}\langle y_{m,n}(\xi ),\sum_{k=0}^{j}\beta _{jk}^{m}\psi
_{k}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi )
\\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle y_{m,n}(\xi ),\psi
_{k}^{m}(\xi )\rangle _{{\ }^{\theta }W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi )
\\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle y_{m,n}(\xi
),L^{\ast \text{ }\theta }R_{\xi _{k}}^{m}(\xi )\rangle _{{\ }^{\theta
}W_{\rho }^{m}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle Ly_{m,n}(\xi
),^{\theta }R_{\xi _{k}}^{m}(\xi )\rangle _{L_{\rho }^{2}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}\langle g(\xi ,z_{m,n-1}(\xi
),z^\prime_{m,n-1}(\xi )),^{\theta }R_{\xi _{k}}^{m}(\xi )\rangle _{L_{\rho }^{2}}\bar{\psi}_{j}^{m}(\xi ) \\
&=&\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta _{jk}^{m}g(\xi _{k},z_{m,n-1}(\xi
_{k}),z^\prime_{m,n-1}(\xi _{k}))\bar{\psi}_{j}^{m}(\xi ).
\end{eqnarray*}
This completes the proof.\newline
Taking $z_{m,0}(\xi )=0$, we define the iterative sequence
\begin{equation} \label{eq20}
z_{m,n}(\xi )=P_{m-1}y_{m,n}(\xi )=\sum_{j=0}^{m-2}\sum_{k=0}^{j}\beta
_{jk}^{m}g(\xi _{k},z_{m,n-1}(\xi _{k}),z^\prime_{m,n-1}(\xi _{k}))\bar{\psi}_{j}^{m}(\xi ),\,\
n=1,2,\ldots
\end{equation}
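Schematically, the iteration can be organized as below; all inputs (the sampled orthonormal basis, the Gram-Schmidt coefficients $\beta_{jk}^{m}$, and the nodal indices) are assumed precomputed, and the derivative of the iterate is approximated on the grid for simplicity in this sketch of ours.
\begin{verbatim}
import numpy as np

def iterate_solution(g, psi_bar, beta, xi, node_idx, n_iter=25):
    # psi_bar:  (m-1, len(xi)) samples of the orthonormal basis
    # beta:     lower-triangular Gram-Schmidt coefficients beta[j, k]
    # node_idx: grid indices of the collocation points xi_k
    z = np.zeros_like(xi)
    for _ in range(n_iter):
        zp = np.gradient(z, xi)
        gv = g(xi[node_idx], z[node_idx], zp[node_idx])
        coeffs = beta @ gv        # sum_k beta_{jk} g(xi_k, ...)
        z = coeffs @ psi_bar      # z_{m,n} = sum_j coeffs_j psi_bar_j
    return z
\end{verbatim}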
\section{Numerical applications}
In this section, some nonlinear three-point boundary value problems are
considered to exemplify the accuracy and efficiency of the proposed approach.
Numerical results obtained by L-RKM are shown in tables.\newline
\newline
\textbf{Example 4.1} We consider the following fractional order nonlinear
three-point boundary value problem with Caputo derivative:
\begin{equation} \label{eq21}
{\ }^{c}D^{\alpha }z(\xi ) + (\xi +1) {\ }^{c}D^{\beta }z(\xi ) + \xi
z(\xi)-z^{2}(\xi )=f(\xi),\quad 1<\alpha \leq 2,\quad 0<\beta \leq 1.
\end{equation}
\begin{equation} \label{eq22}
z(0)=z(\frac{1}{2}) =z(1)=0.
\end{equation}
Here, $f(\xi)$ is a known function such that the exact solution of this problem is $z(\xi )=\xi(\xi-\frac{1}{2})(\xi-1)$.
By using the proposed approach for Eqs. (\ref{eq21})-(\ref{eq22}), and choosing the
nodal points as $\xi _{j}=\frac{j+0.3}{m},\,j=0,1,\,2,...,m-2$, the
approximate solution $z_{m,n}\left( \xi \right)$ is computed by Eq. (\ref{eq20}). For (\ref{eq21})-(\ref{eq22}), a comparison of absolute errors for
different $\alpha$, $\beta$ values is demonstrated in Table 1 and Table 2,
and a comparison of the exact solution and the numerical solution for $\alpha=1.75$
and $\beta=0.75$ is given in Table 3. \newline
\noindent \textbf{Example 4.2} We consider the following nonlinear
three-point boundary value problem with Caputo derivative:
\begin{equation} \label{eq23}
\xi^2{\ }^{c}D^{\alpha }z(\xi ) + (\xi^2-1) {\ }^{c}D^{\beta }z(\xi ) +
\xi^3 z(\xi)-z(\xi)z^\prime(\xi)-z^3(\xi)=f(\xi),\quad 1<\alpha \leq 2,\quad 0<\beta \leq 1.
\end{equation}
\begin{equation} \label{eq24}
z(0)=z(\frac{3}{5}) =z(1)=0.
\end{equation}
Here, $f(\xi)$ is a known function such that the exact solution of this problem is $z(\xi )=\xi(\xi-\frac{3}{5})(\xi-1)$.
By using the proposed approach for Eqs. (\ref{eq23})-(\ref{eq24}), and choosing the
nodal points as $\xi _{j}=\frac{j+0.3}{m},\,j=0,1,\,2,...,m-2$, the
approximate solution $z_{m,n}\left( \xi \right)$ is computed by Eq. (\ref{eq20}). For (\ref{eq23})-(\ref{eq24}), a comparison of absolute errors for
different $\alpha$, $\beta$ values is demonstrated in Table 4 and Table 5,
and a comparison of the exact solution and the numerical solution for $\alpha=1.75$
and $\beta=0.75$ is given in Table 6. \newline
\section{Conclusion}
In this research, a novel numerical approach, called L-RKM, has been proposed and
successfully implemented to find the approximate solution of nonlinear
three-point boundary value problems with the Caputo derivative. For the nonlinear
problem, a new iterative process has been proposed. Numerical findings show that
the present approach is efficient and convenient for solving three-point
boundary value problems of fractional order.
\section{Introduction:}
The marvelous improvements in the technology of numerical relativity in recent
years present opportunities for revolutionizing our understanding of the
classical gravitational field. In the past, much of this understanding has
come from studying solutions with extreme symmetry, and perturbations of such
solutions. However,
with the help of numerical methods, truly generic simulations, particularly of
multiple black hole systems, can now be carried out in full general relativity.
While this work is undertaken, one must keep in mind the fundamental
nature of general relativity and its solutions. In particular, the general
covariance of the theory is not naturally reflected in the numerical context,
where gauge fixing is fundamentally required in the form of coordinate
and tetrad choices. In practice, such gauge choices are tailored to numerical
convenience (or necessity), rather than to physical relevance. Such a simple
task as checking that a black hole merger settles down to a Kerr geometry
can be clouded by the arbitrariness of the simulation coordinates.
One way of dealing with these ambiguities would be to apply coordinate
transformations to numerical simulations {\em a posteriori} to represent
these spacetimes in physically preferable coordinates, if they exist. If
one needs to map
all quantities to an entirely new coordinate grid, then some accuracy would
presumably be lost to the interpolation process, especially if changes of the
time function require interpolation in time. More important, however, is the
difficulty of fixing physically preferred coordinate systems in strongly
dynamical and nonsymmetric spacetimes at all.
Another, perhaps complementary, approach is to focus physical analysis on
partially (or if possible, totally) gauge invariant quantities. For example,
a major tool in the analysis (and construction) of exact solutions in general
relativity is the algebraic classification system of
Petrov and Pirani~\cite{Petrov2000Reprint, Pirani1957, ExactSolutionsBook}, in
which the Weyl tensor at any given point in spacetime is classified according
to the algebraic properties of its associated {\em eigenbivector problem}:
\begin{equation}
{C_{ab}}^{cd} X_{cd} = \Lambda X_{ab}.\label{e:petrovproblem}
\end{equation}
Another view of this classification system, with a more geometrical
flavor, was expounded particularly by Bel~\cite{Bel1962}
and Penrose~\cite{Penrose1960}. In this approach one classifies the Weyl
tensor in terms of the degeneracy of the so-called {\em principal null
directions}, null vectors defined up to scale by the equation:
\begin{equation}
k^e k^f k_{[a} C_{b] e f [ c} k_{d]} = 0.
\end{equation}
One can show (most easily in spinor language) that this equation is always
satisfied by exactly four null rays, counting multiplicities. If
all four of these null directions are distinct, the spacetime is said to be
{\em algebraically general} or {\em Type I} at that point in spacetime. If
two of them coincide, the spacetime is said to be {\em Type II} there. If
three, {\em Type III}. If all four principal null directions coincide, the
spacetime is said to be {\em Type N}, or {\em null}, in analogy with the pure
radiation fields
of vacuum electrodynamics. If the principal null directions coincide in two
distinct pairs, then the spacetime is said to be {\em Type D}. The Kerr and
Schwarzschild geometries are famous examples of globally Type D spacetimes,
so in some sense, one may hope to infer that a spacetime is ``settling down
to an approximate Kerr geometry'' if its Petrov type ``settles down'' to
Type D (assuming that one has ruled out other, non-Kerr, Type D spacetimes).
This line of reasoning was taken up in a paper by Campanelli, Lousto,
and Zlochower~\cite{Campanelli2008}. The central tool in their approach is a
certain complex polynomial equation:
\begin{equation}
\Psi_4 \lambda^4 + 4 \Psi_3 \lambda^3 + 6 \Psi_2 \lambda^2 + 4 \Psi_1 \lambda + \Psi_0 = 0, \label{e:polynomial}
\end{equation}
the degeneracy of whose roots is known to correspond to the degeneracy of the
principal null directions (assuming that $\Psi_4$ is nonzero). The
coefficients $\Psi_i$ of this polynomial are
the so-called {\em Weyl scalars}, components (defined in
Eq.~\eqref{e:WeylScalars} below) of the Weyl tensor in a given Newman-Penrose
null tetrad. If $\Psi_4$ is nonzero, then the fundamental theorem of algebra
ensures that the polynomial has exactly four complex roots, counting
multiplicities.
Once the four roots $\lambda_i$ have been computed for
Eq.~\eqref{e:polynomial} at any
given point, then one can also compute six distinct positive-definite root
differences:
\begin{equation} \label{e:Deltaij}
\Delta_{ij} := | \lambda_i - \lambda_j |.
\end{equation}
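In practice these quantities are simple to compute; the following Python sketch (ours, not taken from Ref.~\cite{Campanelli2008} or the {\tt SpEC} code) finds the four roots with a standard polynomial solver and forms the six differences.
\begin{verbatim}
import numpy as np
from itertools import combinations

def root_differences(psi):
    # psi = [Psi0, Psi1, Psi2, Psi3, Psi4], with Psi4 != 0.
    # numpy.roots expects coefficients from highest to lowest degree.
    coeffs = [psi[4], 4 * psi[3], 6 * psi[2], 4 * psi[1], psi[0]]
    lam = np.roots(coeffs)
    deltas = {(i, j): abs(lam[i] - lam[j])
              for i, j in combinations(range(4), 2)}
    return lam, deltas
\end{verbatim}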
If two of these root differences vanish and the other four are nonzero,
meaning that the four roots coincide in two distinct pairs, then the spacetime
is Type D at that
point. In Ref.~\cite{Campanelli2008} Campanelli {\em et al.}~took the next
logical step: interpreting the two smallest $\Delta_{ij}$ values as
measures of the ``nearness'' of an algebraically general (in that case,
numerical) spacetime to Petrov Type D. While this is a reasonable
interpretation of $\Delta_{ij}$ and we will not suggest any fundamental
modification to this approach of defining approximate algebraic speciality,
there are important subtleties in this interpretation, not fully explored in
Ref.~\cite{Campanelli2008}. These subtleties relate to the geometrical
meaning of $\Delta_{ij}$ and its behavior under
tetrad transformations. The main purpose of this paper is to explore these
subtleties, present an alternative degeneracy measure that avoids certain
blowups that are intricately related to the choice of tetrad (and should
therefore not be considered physically relevant),
and apply both degeneracy measures to a numerical simulation
from the {\tt SpEC} code~\cite{SpEC}. In the process we will also
investigate an interesting conclusion from Ref.~\cite{Campanelli2008}: that
in the ringdown of a binary black hole merger to Kerr, the spacetime
approaches Petrov Type II very quickly, and Type D much later. We will argue
that this conclusion is due essentially to a coordinate singularity on the
space of null rays, and the fact that the tetrad used in
Ref.~\cite{Campanelli2008} was much better suited to representing the
degeneracy of one pair of principal null directions than the other pair, when
the degeneracy measure $\Delta_{ij}$ is used. The alternate degeneracy
measure that
we will introduce, $\Theta_{ij}$ defined in Eq.~\eqref{e:Thetadefined} below,
shows both pairs of principal null directions approaching degeneracy at the
same rate.
Though much of the discussion in this paper centers upon the behavior of these
measures of degeneracy under tetrad transformations, we will unfortunately not
be
able to provide a measure of nearness to Petrov Type D that is fundamentally
any more invariant than $\Delta_{ij}$. This is because no such measure
appears to exist. Geometrically this fact can be understood in terms of the
nonexistence of a boost-invariant geometry on the space of null rays in
Minkowski space, an issue referred to physically as the ``relativistic
aberration of starlight.'' This viewpoint is explored in more detail in
Section~\ref{s:spinors} below.
The issue can also be understood at the algebraic level, as in
Petrov's original construction. The problem shown in
Eq.~\eqref{e:petrovproblem} can be written more compactly if one works in the
three-complex-dimensional space of {\em anti-self-dual} bivectors rather than
in the six-real-dimensional space of real bivectors. In this space, the
eigenbivector problem can be written as:
\begin{equation}
{W_{ab}}^{cd} Z_{cd} = \Lambda Z_{ab}, \label{e:asdeigenbivect}
\end{equation}
where $W_{abcd} := C_{abcd} + i \hspace{1mm}{}^\star C_{abcd}$,
${}^\star Z_{ab} = - i Z_{ab}$, and $\Lambda$ is a complex number. Because
this is a three-dimensional problem one can expect three possible values
for $\Lambda$, though the fact that the Weyl tensor is tracefree implies
that these three eigenvalues must sum to zero. The degeneracy of the
eigenvalues and the completeness of
the corresponding eigenspaces determine the classification of the Weyl
tensor at the point under consideration. If all three eigenvalues
are distinct then the spacetime is algebraically
general. If two roots coincide, then the spacetime is either Type II or
Type D. If all three coincide (and therefore vanish, as they must sum to
zero) then the spacetime is either Type III, Type N, or conformally flat. The
eigenvalues are geometrically defined at each point in spacetime,
independent of the vector basis used to represent the eigenproblem. The
differences between these eigenvalues can therefore be used to construct
invariant measures of the approach to algebraic speciality. For example, the
absolute value of the difference between the two nearest eigenvalues
can be thought of as such an invariant measure. Unfortunately this
measure isn't very specific: it vanishes for
Petrov Types II, D, III, and N.\footnote{In this sense it is like other scalar
measures of algebraic speciality, such as the ``cross ratio'' of principal
null directions, defined in Ref.~\cite{PenroseRindler1}, whose explicit
relationship to the eigenvalues is described in Sec.~8.3 of
Ref.~\cite{PenroseRindler2}, or the Baker-Campanelli ``speciality
index''~\cite{BakerCampanelli2000}, which takes a special value for
{\em any} type of algebraic speciality, but cannot distinguish between
the various types.} The latter pair can be distinguished from
the former pair by the fact that all three eigenvalues vanish in Type III and
Type N, but distinguishing Type II from Type D, or Type III from Type N,
requires more information than just the eigenvalues.
If two of the eigenvalues in Eq.~\eqref{e:asdeigenbivect} coincide, so that
the eigenvalues can be written as $\{\Lambda, \Lambda, -2\Lambda\}$, then
the distinction between Petrov Type II and Type D can be made by the following
quantity\footnote{Incidentally, if one wishes to avoid the assumption that the
spacetime is at least Type II, such that the eigenvalues can be written as
$\{\Lambda, \Lambda, -2\Lambda\}$, this can be done with the help of certain
curvature invariants. See Ref.~\cite{FerrandoSaez2009}.}:
\begin{equation}
{T^{ab}}_{cd} := ( {W^{ab}}_{ef} - \Lambda I^{ab}_{ef} ) ( {W^{ef}}_{cd} + 2 \Lambda I^{ef}_{cd} ), \label{e:tensormeasure}
\end{equation}
where $I^{ab}_{cd}$ is the identity operator on the space of anti-self-dual
bivectors. The object ${T^{ab}}_{cd}$ vanishes in Type D, but not in
Type II~\cite{ExactSolutionsBook}. The difficulty with
using this as a measure of nearness to Petrov Type D is that it is a tensorial
object, and its components are, by definition, basis-dependent. In order to
collapse this object to a single number for each point in spacetime, one might
hope to construct a positive-definite tensor norm:
\begin{equation}
Q := m_{ae} \hspace{1mm} m_{bf} \hspace{1mm} m^{cg} \hspace{1mm} m^{dh} \hspace{1mm} {T^{ab}}_{cd} \hspace{1mm} {\overline{T}^{ef}}{}_{gh},
\end{equation}
where $m_{ab}$ is a positive-definite inner product on spacetime.
Unfortunately, the only inner product that one naturally has available on
spacetime is the indefinite spacetime metric. If a timelike
``observer'' is
introduced, with unit tangent vector $u^a$, then one can construct a positive
definite inner product as:
\begin{equation}
m_{ab} := g_{ab} + 2 u_a u_b,
\end{equation}
but then the quantity $Q$ is not strictly a scalar, as its definition is
dependent on the extra structure of this observer.
Though the language is very different in the geometric approach involving
principal null directions, we will find in Section~\ref{s:spinors} that the
ambiguity in defining a measure of ``nearness'' to Petrov Type D is in that
context essentially the same as here, requiring the choice of a
timelike observer at every point in spacetime. While this state of affairs
seems to endanger any attempt at defining the nearness to any specific
Petrov class, there are some cases where a well-defined fleet of
observers can be chosen. In particular, in any stationary spacetime, one can
choose the stationary observers. In cases such as the ringdown to Kerr
geometry, one can expect an ``approximate'' stationarity to be approached
at late times, again providing a preferred class of observers at least during
the late ringdown. A major practical goal of this paper will be to study
this ringdown process, as in Ref.~\cite{Campanelli2008}. In particular, we
will argue that the degeneracy measure $\Delta_{ij}$ that was used in
Ref.~\cite{Campanelli2008} is in some sense adapted to a {\em null} observer
that happened in that case to be nearly aligned with one of the nearly
degenerate pairs of principal null directions, making this pair of null
directions seem much more degenerate, and the other, much less. This
causes the appearance of a holdup in Petrov Type II before the
spacetime geometry falls to Type D.
The structure of this paper is as follows: in Sec.~\ref{s:tetraddependence}
we will investigate the ambiguity of the measure $\Delta_{ij}$ under
tetrad rotations, particularly those that leave the timelike tetrad leg
fixed. In Sec.~\ref{s:quasikinnersley} we will emphasize the fact that a
tetrad well-suited to gravitational wave extraction, in particular the
quasi-Kinnersley tetrad~\cite{Nerozzi2005}, may be particularly ill-suited to
measuring the nearness to Petrov Type D using $\Delta_{ij}$. In
Sec.~\ref{s:spinors} we will describe the geometry underlying
$\Delta_{ij}$ in spinorial language, and in the process motivate
a modification that is much better suited to
situations such as the ringdown to Kerr geometry. In Sec.~\ref{s:results}
we will present numerical results applying these degeneracy measures to
a binary black hole merger simulation, demonstrating in detail the
approach to Petrov Type D. Finally in Sec.~\ref{s:discussion} we conclude
with further discussion of the subtleties that have been addressed, and those
that remain.
\section{Tetrad dependence}
\label{s:tetraddependence}
The method put forth in Ref.~\cite{Campanelli2008} to define nearness to a
Petrov
class begins with the polynomial in Eq.~\eqref{e:polynomial}, whose
coefficients
are components of the Weyl tensor in a Newman-Penrose tetrad~\cite{Newman1962}:
\begin{subequations}\label{e:WeylScalars}
\begin{eqnarray}
\Psi_0 &:=& C_{a b c d} \ell^a m^b \ell^c m^d,\\
\Psi_1 &:=& C_{a b c d} \ell^a n^b \ell^c m^d,\\
\Psi_2 &:=& \frac{1}{2} C_{a b c d} \left( \ell^a n^b \ell^c n^d - \ell^a n^b m^c \overline m^d \right),\\
\Psi_3 &:=& C_{a b c d} n^a \ell^b n^c \overline m^d,\\
\Psi_4 &:=& C_{a b c d} n^a \overline m^b n^c \overline m^d.
\end{eqnarray}
\end{subequations}
The tetrad $\{ \ell^a, n^a, m^a, \overline m^a \}$ is made up of two
future-directed real null vectors $\ell^a$ and $n^a$ and two complex
conjugate null vectors $m^a$ and $\overline m^a$ with spacelike real and
imaginary parts. These vectors are normalized by the conditions:
\begin{eqnarray}
\ell_a n^a &=& -1,\\
m_a \overline m^a &=& 1,\\
\ell_a m^a = n_a m^a &=& 0.
\end{eqnarray}
These normalization conditions are preserved by three types of tetrad
transformations which, taken together, are equivalent to the proper Lorentz
group. First, there are the ``null rotations about $\ell^a$,'' sometimes
referred to as the ``Type I'' transformations\footnote{To avoid confusion
with the Petrov types, we will hereafter refer to tetrad transformations as
``null rotations about $\ell^a$,'' ``null rotations about $n^a$,'' or
``spin boosts,'' rather than ``Type I,'' ``Type II,'' or ``Type III.''}:
\begin{subequations}\label{e:nullrot_l}
\begin{eqnarray}
\ell^a &\mapsto& \ell^a,\\
m^a &\mapsto& m^a + a \ell^a,\\
\overline m^a &\mapsto& \overline m^a + \overline a \ell^a,\\
n^a &\mapsto& n^a + \overline a m^a + a \overline m^a + a \overline a \ell^a,
\end{eqnarray}
\end{subequations}
where $a$ is a complex number, and can vary over spacetime.
Second, there are the null rotations about $n^a$, sometimes referred to as
``Type II'' transformations:
\begin{subequations}\label{e:nullrot_n}
\begin{eqnarray}
\ell^a &\mapsto& \ell^a + \overline b m^a + b \overline m^a + b \overline b n^a,\\
m^a &\mapsto& m^a + b n^a,\\
\overline m^a &\mapsto& \overline m^a + \overline b n^a,\\
n^a &\mapsto& n^a,
\end{eqnarray}
\end{subequations}
for complex $b$. Third, there are the ``spin-boost'' transformations,
sometimes referred to as the ``Type III'' transformations:
\begin{subequations}\label{e:spinboost}
\begin{eqnarray}
\ell^a &\mapsto& |c|^2 \ell^a,\\
m^a &\mapsto& e^{2 i \arg(c)} m^a,\\
\overline m^a &\mapsto& e^{- 2 i \arg(c)} \overline m^a,\\
n^a &\mapsto& |c|^{-2} n^a,
\end{eqnarray}
\end{subequations}
for complex $c$.
These transformation laws for the tetrad imply transformation laws for the
Weyl scalars. Under the null rotations about $\ell^a$,
Eqs.~\eqref{e:nullrot_l}, the Weyl scalars transform as:
\begin{subequations}\label{e:psi_nr_l}
\begin{eqnarray}
\Psi_0 &\mapsto& \Psi_0,\\
\Psi_1 &\mapsto& \Psi_1 + \overline a \Psi_0,\\
\Psi_2 &\mapsto& \Psi_2 + 2 \overline a \Psi_1 + \overline a^2 \Psi_0,\\
\Psi_3 &\mapsto& \Psi_3 + 3 \overline a \Psi_2 + 3 \overline a^2 \Psi_1 + \overline a^3 \Psi_0,\\
\Psi_4 &\mapsto& \Psi_4 + 4 \overline a \Psi_3 + 6 \overline a^2 \Psi_2 + 4 \overline a^3 \Psi_1 + \overline a^4 \Psi_0.
\end{eqnarray}
\end{subequations}
Under null rotations about $n^a$, Eqs.~\eqref{e:nullrot_n}, the
Weyl scalars transform as:
\begin{subequations}\label{e:psi_nr_n}
\begin{eqnarray}
\Psi_0 &\mapsto& b^4 \Psi_4 + 4 b^3 \Psi_3 + 6 b^2 \Psi_2 + 4 b \Psi_1 + \Psi_0,\\
\Psi_1 &\mapsto& b^3 \Psi_4 + 3 b^2 \Psi_3 + 3 b \Psi_2 + \Psi_1,\\
\Psi_2 &\mapsto& b^2 \Psi_4 + 2 b \Psi_3 + \Psi_2,\\
\Psi_3 &\mapsto& b \Psi_4 + \Psi_3,\\
\Psi_4 &\mapsto& \Psi_4.
\end{eqnarray}
\end{subequations}
Finally, under the spin boosts, Eqs.~\eqref{e:spinboost}, the Weyl scalars
simply rescale, as:
\begin{equation} \label{e:psi_sb}
\Psi_n \mapsto c^{2(2-n)} \Psi_n.
\end{equation}
The transformation laws for the coefficients of the polynomial in
Eq.~\eqref{e:polynomial} imply transformation laws for the roots. It is
straightforward to show that under the transformation in
Eq.~\eqref{e:psi_nr_l}, the roots of the polynomial transform as:
\begin{equation}
\lambda \mapsto \frac{\lambda}{\overline a \lambda + 1}. \label{e:lambda_nr_l}
\end{equation}
Under transformations of the form~\eqref{e:psi_nr_n},
the roots transform as:
\begin{equation}
\lambda \mapsto \lambda + b. \label{e:lambda_nr_n}
\end{equation}
Finally, under spin-boost transformations, Eq.~\eqref{e:psi_sb}, the roots
transform as:
\begin{equation}
\lambda \mapsto c^2 \lambda. \label{e:lambda_sb}
\end{equation}
In Ref.~\cite{Campanelli2008}, nearness to Petrov Type D was mainly argued
through the approach of the absolute values of root differences ($\Delta_{ij}$
as defined in Eq.~\eqref{e:Deltaij})
to zero. While this quantity would indeed be
expected to vanish when $\lambda_i$ and $\lambda_j$ constitute a degenerate
root pair, if they are not exactly degenerate, then the foregoing discussion
implies that this difference is not invariant under tetrad
transformations. The transformation in Eq.~\eqref{e:lambda_nr_n} would leave
$\Delta_{ij}$ unchanged, but that in Eq.~\eqref{e:lambda_sb} would
directly rescale any given root difference (though the complex phase of $c$
would not appear in the absolute value), and transformations of the
form~\eqref{e:lambda_nr_l} would change $\Delta_{ij}$ in a more
complicated way. Arbitrary Lorentz transformations, given by arbitrary
combinations of the above transformations, could alter
$|\lambda_i - \lambda_j|$ in a very complicated manner.
To investigate the practical relevance of this tetrad ambiguity in the
degeneracy measure $\Delta_{ij}$, let us consider a particular case of
possible physical relevance that requires a combination of all three of the
above tetrad transformations. Take the case where one has
a particular timelike vector defined at a point in spacetime, for example a
timelike Killing vector, or a kind of approximate Killing vector
generating time translations in a spacetime that is approaching stationarity
in some sense. Given a Newman-Penrose tetrad $\{ \ell^a, n^a, m^a, \overline m^a \}$, one can construct a standard orthonormal tetrad in the following way:
\begin{subequations}\label{e:orthonormal}
\begin{eqnarray}
e_0^a &:=& (\ell^a + n^a)/\sqrt{2},\\
e_1^a &:=& \sqrt{2} \hspace{2mm}{\rm Re}\left[ m^a \right],\\
e_2^a &:=& \sqrt{2} \hspace{2mm} {\rm Im}\left[ m^a \right],\\
e_3^a &:=& (\ell^a - n^a)/\sqrt{2}.
\end{eqnarray}
\end{subequations}
Rotations of the tetrad legs in the $\vec e_1$--$\vec e_2$ plane are easily
accomplished, through a simple spin-boost transformation with the parameter
$c = e^{i \Phi/2}$. Such rotations also however leave the degeneracy
measure $\Delta_{ij}$ unchanged. For a nontrivial test case, consider a
rotation in the $\vec e_1$--$\vec e_3$ plane:
\begin{subequations}\label{e:xzrotation}
\begin{eqnarray}
\vec e_0{}^\prime &=& \vec e_0\\
\vec e_1{}^\prime &=& \cos(\Phi) \vec e_1 - \sin(\Phi) \vec e_3\\
\vec e_2{}^\prime &=& \vec e_2\\
\vec e_3{}^\prime &=& \cos(\Phi) \vec e_3 + \sin(\Phi) \vec e_1
\end{eqnarray}
\end{subequations}
A straightforward calculation shows that such a transformation
can be carried out by a sequence of the above transformations. First, one
makes a null rotation about $\ell^a$, Eqs.~\eqref{e:nullrot_l}, with
parameter $a = - \tan(\Phi/2)$. Second, there is a null rotation about
$n^a$, Eqs.~\eqref{e:nullrot_n}, with parameter $b = (1/2) \sin(\Phi)$.
The final step is a spin boost, Eqs.~\eqref{e:spinboost}, with parameter
$c = \sec(\Phi/2)$. In this particular case, all three parameters are real.
Composing the transformation laws for the roots,
Eqs.~\eqref{e:lambda_nr_l}~--~\eqref{e:lambda_sb}, with these parameters,
the resulting transformation law is:
\begin{equation}
\lambda^\prime = \frac{\lambda \cos(\Phi/2) - \sin(\Phi/2)}{\lambda \sin(\Phi/2) + \cos(\Phi/2)}.\label{e:lambda_spatrot}
\end{equation}
If we express $\lambda$ as a ratio of two complex numbers,
$\lambda = \xi/\eta$, then Eq.~\eqref{e:lambda_spatrot} takes a very
simple matrix form:
\begin{equation}
\begin{pmatrix} \xi^\prime \\ \eta^\prime \end{pmatrix} = \begin{bmatrix} \cos(\Phi/2) & - \sin(\Phi/2) \\ \sin(\Phi/2) & \cos(\Phi/2) \end{bmatrix} \begin{pmatrix} \xi \\ \eta \end{pmatrix}.\label{e:sl2c_spatrot}
\end{equation}
The general form of this matrix, for arbitrary reorientations using three
Euler angles, is given in Eq.~(1.2.34) of Ref.~\cite{PenroseRindler1}.
The $SL(2,\mathbb{C})$ form of this transformation suggests a
spinorial interpretation of $\lambda$, a point to which we will return in
Sec.~\ref{s:spinors}.
For now let us consider the behavior of the degeneracy measure $\Delta_{ij}$
under these spatial rotations. For concreteness, consider the case
where the four roots of Eq.~\eqref{e:polynomial} are
$\lambda_1 = 0.005 + 0.047 i$, $\lambda_2 = 0.005 + 0.05 i$,
$\lambda_3 = -5 + 15 i$, and $\lambda_4 = -5 + 15.5 i$. These values are
chosen to very roughly mirror the late-term values seen in Fig.~8 of
Ref.~\cite{Campanelli2008}, with degeneracies roughly similar to those seen in
Figs.~3 and 4 of that paper. The values estimated here are extremely
rough, and should not be taken as having any quantitative importance, but
merely as tools for illustrating the qualitative features of the
transformation law in Eq.~\eqref{e:lambda_spatrot}. So long as one pair of
nearly-degenerate roots is larger, by a few orders of magnitude, than the
other pair, the qualitative behavior that we will describe seems roughly
the same regardless of the particular choice of roots.
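
This qualitative behavior is easy to reproduce independently. The following
Python sketch (a minimal illustration using the rough root values quoted
above; it is not the code used to generate the figures) applies the
transformation law of Eq.~\eqref{e:lambda_spatrot} to the four roots and
tracks both nearly-degenerate pairs as the rotation angle varies:
\begin{verbatim}
import numpy as np

# Rough, purely illustrative root values quoted in the text.
roots = np.array([0.005 + 0.047j, 0.005 + 0.05j,
                  -5 + 15j, -5 + 15.5j])

def rotate_root(lam, phi):
    # Mobius action of an e1--e3 rotation, Eq. (lambda_spatrot).
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return (lam * c - s) / (lam * s + c)

for phi in np.linspace(0.0, np.pi, 7):
    lp = rotate_root(roots, phi)
    print(f"phi = {phi:4.2f}  "
          f"Delta_12 = {abs(lp[0] - lp[1]):9.3e}  "
          f"Delta_34 = {abs(lp[2] - lp[3]):9.3e}")
\end{verbatim}
Scanning $\Phi$ densely in this way reproduces the crossing of the two root
pairs seen in Fig.~\ref{f:analyticspatrot}: at $\Phi = \pi$ the map reduces
to $\lambda \mapsto -1/\lambda$, which exchanges the roles of the small and
large pairs.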
\begin{figure}
\begin{center}
\includegraphics[scale=.75]{analyticspatrot}
\end{center}
\caption{ \label{f:analyticspatrot} Behavior of the degeneracy measure
$\Delta_{ij}$ under the tetrad rotation in Eq.~\eqref{e:xzrotation} for
a particular (though essentially arbitrary) choice of roots, stated in the
text. Under a rotation through 180 degrees, the root pair that originally
seemed more degenerate becomes less degenerate, and the pair that originally
seemed less degenerate becomes more degenerate.
}
\end{figure}
The degeneracy measure $\Delta_{ij}$ for the two most nearly degenerate root
pairs, under rotation of the $\vec e_1$--$\vec e_3$ plane, is shown in
Fig.~\ref{f:analyticspatrot}. If the
tetrad's spatial legs were rotated through about ninety degrees, then both
root pairs would appear equally close to degeneracy. If
the tetrad were rotated through 180 degrees, then the root pair that
originally appeared closer to degeneracy would begin to seem farther
away from it, and the one that originally seemed less degenerate would seem
more so.
This variation in the degeneracy measure can be interpreted as a coordinate
effect. The quantity $\lambda$ has no inherent geometrical meaning without
a particular reference tetrad. It is essentially a coordinate on the space
of null rays at a point in spacetime. This space of null rays is
topologically a two-dimensional sphere, as can be demonstrated by cutting
a future null cone with a spacelike hyperplane, such as the $t=1$ plane in
Minkowski space. A two-sphere cannot be covered smoothly with a single
coordinate patch. If the quantity $\lambda$ is taken as a (complex)
coordinate labeling all the null rays at a point, then there must be a
coordinate singularity somewhere, near which coordinate distances are
particularly ill-suited to representing the true geometry that may be defined
on the manifold. We will study this issue in more detail in
Sec.~\ref{s:spinors}. For now we simply note that the locations of the
sharp peaks in
Fig.~\ref{f:analyticspatrot} seem to imply that such a coordinate singularity
may have a particularly strong effect in the original, unrotated tetrad.
The following section gives an extreme example of this effect.
\section{The quasi-Kinnersley tetrad}
\label{s:quasikinnersley}
It appears from the results of the previous section that a tetrad that seems
reasonable for purposes of wave extraction can be particularly ill-suited
to the problem of defining nearness to a Petrov class. To investigate
this point in more detail, here we consider a special family of tetrads
designed especially for wave extraction.
Consider an algebraically general spacetime (eventually we will allow this
spacetime to ``asymptote'' toward Petrov Type D, but we will consider it
always to be, strictly speaking, Type I). As described in
Ref.~\cite{Nerozzi2005},
at any point where the Weyl tensor is Type I, there are
precisely three distinct families of tetrads in which two particular Weyl
scalars vanish, $\Psi_1 = \Psi_3 = 0$ (they each amount to {\em families} of
tetrads, rather than three particular tetrads, because this condition is
preserved by the spin-boost freedom).
A particular tetrad field, chosen from these three families to coincide with
the conventional Kinnersley tetrad near infinity, is often referred to as a
{\em quasi-Kinnersley tetrad}. The usual purpose of such a tetrad is to aid
in gravitational wave extraction, where the relative uniqueness of the
tetrad provides a preferred reference frame in which to define gravitational
radiation. Such a tetrad also simplifies the polynomial in
Eq.~\eqref{e:polynomial}:
\begin{equation}
\Psi_4 \lambda^4 + 6 \Psi_2 \lambda^2 + \Psi_0 = 0.\label{e:qKpoly}
\end{equation}
If, as we are assuming, the spacetime is strictly Type I, and not a more
special algebraic type, then $\Psi_4$ and $\Psi_0$ will be nonzero. In the
limit that the spacetime asymptotes to Type D, they will both settle to zero,
indicating a failure of the polynomial roots to represent the principal null
directions in the conventional sense. What we wish to
investigate is the behavior of these roots as this limit is approached.
Carrying on under the assumption that $\Psi_4$ is nonzero, the roots of
Eq.~\eqref{e:qKpoly} are readily found:
\begin{equation}
\lambda^2 = \frac{3 \Psi_2}{\Psi_4} \left( -1 \pm \sqrt{1 - \frac{\Psi_0 \Psi_4}{9 \Psi_2^2}}\right).
\end{equation}
If we now consider the approach to a Kerr geometry, in which the
quantity $\Psi_0 \Psi_4/(9 \Psi_2^2)$ approaches zero, we can expand the
square root in the above expression to first order in this small
quantity\footnote{Note that the numerator in this quantity, $\Psi_0 \Psi_4$,
which we are evaluating in a ``transverse frame'' --- one where
$\Psi_1 = \Psi_3 = 0$ --- is the Beetle-Burko ``radiation scalar''
described in Ref.~\cite{BeetleBurko2002}.}:
\begin{eqnarray}
\lambda^2 &\approx& \frac{3 \Psi_2}{\Psi_4} \left[-1 \pm \left( 1 - \frac{\Psi_0 \Psi_4}{18 \Psi_2^2}\right)\right],\\
\lambda &\approx& \left\{ \pm \sqrt{-\frac{6 \Psi_2}{\Psi_4}}, \pm \sqrt{-\frac{\Psi_0}{6 \Psi_2}} \right\}. \label{e:halfrate}
\end{eqnarray}
So in the Kerr limit, as $\Psi_4 \rightarrow 0$ and $\Psi_0 \rightarrow 0$,
two of these roots approach zero, and so does their difference,
but the other two approach infinity (this is a standard behavior of
polynomial roots as the leading polynomial coefficient approaches zero).
Moreover, they approach the point at infinity from different directions, so
their difference also approaches infinity. Geometrically, one would think
that the problem is solved if the roots are considered not as numbers on the
complex plane, but as points on the Riemann sphere. The roots that blow up
would then be taken as approaching a degenerate root at the point at
infinity. In the
following section, we will motivate such a viewpoint in detail, and in the
process, outline the geometrical meaning of the degeneracy measure
$\Delta_{ij}$ and present an alternative that avoids the danger of
representing any particular null ray as a ``point at infinity.''
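
As a concrete check of the limiting behavior in Eq.~\eqref{e:halfrate}, the
following sketch compares the exact roots of Eq.~\eqref{e:qKpoly}, computed
numerically, against the first-order approximations. The Weyl-scalar values
are arbitrary, chosen only to satisfy $|\Psi_0 \Psi_4| \ll |\Psi_2|^2$:
\begin{verbatim}
import numpy as np

# Hypothetical Weyl scalars in a transverse frame.
Psi2 = 0.1 + 0.02j
Psi0 = 1e-4 + 2e-5j
Psi4 = 3e-5 - 1e-5j

# Exact roots of Psi4 l^4 + 6 Psi2 l^2 + Psi0 = 0, Eq. (qKpoly).
exact = np.roots([Psi4, 0.0, 6 * Psi2, 0.0, Psi0])

# First-order approximations, Eq. (halfrate).
big = np.sqrt(-6 * Psi2 / Psi4)      # pair that blows up as Psi4 -> 0
small = np.sqrt(-Psi0 / (6 * Psi2))  # pair that collapses to zero

for a in map(complex, (big, -big, small, -small)):
    err = np.min(np.abs(exact - a)) / abs(a)
    print(f"root ~ {a:.4g}, relative error {err:.1e}")
\end{verbatim}
Shrinking $\Psi_0$ and $\Psi_4$ further drives one pair of roots toward zero
and the other toward the point at infinity, exactly as described above.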
Before moving on, though, we should investigate the robustness of this
behavior under
tetrad rotations. In practice, the tetrads used in numerical relativity
simulations are
usually simple coordinate-adapted tetrads, rather than carefully constructed
quasi-Kinnersley tetrads. But because they are usually adapted to a spherical
coordinate basis, they very roughly tend to approximate the quasi-Kinnersley
tetrad during black hole ringdown, by force of topology alone. For this
reason, it is interesting to investigate the behavior of the polynomial roots
not only in the quasi-Kinnersley tetrad, but also in tetrads slightly offset
from it.
In particular, consider the ringdown to a Kerr black hole, where in the true
Kinnersley tetrad of a Kerr background one would expect the absolute value
of $\Psi_4$ to approach zero
exponentially in time at a rate determined by the quasinormal frequencies of
the hole. The roots $\pm \sqrt{-6 \Psi_2/\Psi_4}$ would then be expected to
grow exponentially at half that rate. Consider, for example, a case where
$\sqrt{-6 \Psi_2/\Psi_4} = i \exp(\tau)$ for some time function
$\tau$.\footnote{The imaginary factor $i$ is inserted to avoid the rotated
tetrad vector exactly coinciding with a principal null direction at some time,
a possibility that, while not excluded, would not be expected to occur
generically.} The roots $\pm i \exp(\tau)$, if the tetrad were rotated
spatially as in Eq.~\eqref{e:lambda_spatrot}, would instead take the values:
\begin{equation}
\lambda^\prime_\pm = \frac{\pm i \exp(\tau) \cos(\Phi/2) - \sin(\Phi/2)}{\pm i \exp(\tau) \sin(\Phi/2) + \cos(\Phi/2)}.\label{e:ringdown_spatrot}
\end{equation}
So in the limit that $\tau \rightarrow \infty$, these roots would become
degenerate at the value $\cot(\Phi/2)$, and their difference, as measured
by $\Delta_{ij}$, would eventually fall to zero. The details of how this
occurs are plotted in Figs.~\ref{f:qKoffset_manytimes}
and~\ref{f:qKoffset_manyangles}.
In Fig.~\ref{f:qKoffset_manytimes},
\begin{figure}
\begin{center}
\includegraphics[scale=.75]{qKoffset_manytimes.eps}
\end{center}
\caption{ \label{f:qKoffset_manytimes} Profiles of the behavior of the
degeneracy measure $\Delta_{ij}$ under tetrad rotations of the form in
Eqs.~\eqref{e:xzrotation} from a quasi-Kinnersley tetrad, for various values
of a fiducial time coordinate, assuming that this degeneracy measure grows
exponentially in this fiducial time coordinate for the true quasi-Kinnersley
tetrad ($\Phi = 0$). The peak value grows exponentially in time, by
construction, but values well outside the peak decay exponentially in time.
The peak sharpens as it grows, so that values slightly offset from the peak
grow initially, and decay later.}
\end{figure}
the profile of
$\Delta^\prime_{+-} := |\lambda^\prime_+ - \lambda^\prime_-|$, as a function
of tetrad rotation angle $\Phi$ in Eq.~\eqref{e:ringdown_spatrot}, is shown
for a few values of the fiducial time label $\tau$. Each curve is peaked, as
in Fig.~\ref{f:analyticspatrot}, at the quasi-Kinnersley tetrad. The value of
this peak grows exponentially in $\tau$, while the values well outside the
peak (representing more arbitrary tetrads) decay exponentially in $\tau$.
What is of particular interest to us is the behavior {\em near} $\Phi = 0$.
Because the peak sharpens as it grows, values of $\Delta^\prime_{+-}$ slightly
offset
from $\Phi = 0$ grow initially, and eventually decay. This behavior is more
clearly visible in Fig.~\ref{f:qKoffset_manyangles},
\begin{figure}
\begin{center}
\includegraphics[scale=.75]{qKoffset_manyangles.eps}
\end{center}
\caption{ \label{f:qKoffset_manyangles} Behavior over time of the degeneracy
measure $\Delta^\prime_{+-}(\Phi) := |\lambda^\prime_+ - \lambda^\prime_-|$,
for roots $\lambda^\prime_\pm$ given by Eq.~\eqref{e:ringdown_spatrot}, for a
few values of the tetrad rotation parameter $\Phi$. Each curve initially
grows exponentially in the time parameter $\tau$ before eventually falling.
The more offset the tetrad is from the quasi-Kinnersley tetrad ($\Phi = 0$),
the sooner this turnaround occurs.}
\end{figure}
where $\Delta^\prime_{+-}$ is
shown as a function of $\tau$ for various choices of the offset angle. For
any fixed nonzero value of $\Phi$, the curve initially grows exponentially
before eventually falling at the same rate.
The smaller this rotation angle is, the later the
curve turns around. So the nearer a tetrad is to
quasi-Kinnersley, the longer it takes for $\Delta^\prime_{+-}$ to eventually
decay as one might naively expect.
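
The turnaround can be located explicitly in this toy model. A short sketch,
assuming as above that $\sqrt{-6 \Psi_2/\Psi_4} = i \exp(\tau)$:
\begin{verbatim}
import numpy as np

def delta_pm(tau, phi):
    # |lambda'_+ - lambda'_-| for roots +/- i exp(tau),
    # rotated as in Eq. (ringdown_spatrot).
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    lam = 1j * np.exp(tau)
    lp = (lam * c - s) / (lam * s + c)
    lm = (-lam * c - s) / (-lam * s + c)
    return abs(lp - lm)

tau = np.linspace(0.0, 12.0, 2000)
for phi in (1e-2, 1e-3, 1e-4):
    vals = delta_pm(tau, phi)
    print(f"phi = {phi:7.0e}: turnaround near "
          f"tau = {tau[np.argmax(vals)]:5.2f}")
\end{verbatim}
A little algebra reduces Eq.~\eqref{e:ringdown_spatrot} in this case to
$\Delta^\prime_{+-} = 2 e^\tau / [\cos^2(\Phi/2) + e^{2\tau}
\sin^2(\Phi/2)]$, so the turnaround occurs at $\tau = \ln \cot(\Phi/2)
\approx \ln(2/\Phi)$, growing logarithmically as the offset angle shrinks,
consistent with the curves in Fig.~\ref{f:qKoffset_manyangles}.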
Incidentally, we should note that the ``peak'' in
Fig.~\ref{f:qKoffset_manytimes}, and indeed in Fig.~\ref{f:analyticspatrot},
is actually a saddle point if considered in a larger space of tetrad
rotations. To keep the discussion simple, in
Sec.~\ref{s:tetraddependence} we considered only rotations in the local
$\vec e_1$--$\vec e_3$ tangent plane. Had we considered the case of general
rotations of the spatial tetrad, as in Eq.~(1.2.34) of
Ref.~\cite{PenroseRindler1}, we would have found that the degeneracy measure
$\Delta_{ij}$ becomes infinite whenever the rotated tetrad $\vec n$ vector
coincides with a principal null direction.
\section{Interpreting degeneracy measures}
\label{s:spinors}
The geometrical underpinnings of the polynomial in Eq.~\eqref{e:polynomial},
and the sense in which $\lambda$ constitutes a coordinate on the space of
null rays,
are most cleanly explained in the language of two-component spinors. Because
many numerical relativists are unfamiliar with this formalism, I will attempt
to keep the discussion self-contained by briefly reviewing crucial elements as
we go along. For a detailed account of spinor methods in spacetime geometry,
see Refs.~\cite{PenroseRindler1, PenroseRindler2}, or for a more compact
treatment specifically geared to numerical relativists, see
Ref.~\cite{StewartBook}.
Throughout this paper, objects with capital Latin indices will be referred to
as spinors, elements of a two-complex-dimensional vector space (or its higher
tensorial orders). The complex
conjugate of a spinor is also a spinor, but is defined in a {\em different}
spinor space, because complex conjugation does not commute with multiplication
by a complex scalar. To distinguish objects in spinor space from objects in
the complex conjugate space, we will apply the standard convention of
appending indices referring to the latter space with a prime:
\begin{equation}
\overline{\alpha^A} = \overline \alpha^{A'}.
\end{equation}
Spinors are useful in relativity theory because a simple
correspondence exists between spinor space and Minkowski space (and therefore
also to the tangent space to spacetime at any given point, given an
orthonormal tetrad). From a spinor $\alpha^A$ a unique vector can be
constructed in Minkowski space:
\begin{equation}
V^a = \alpha^A \overline \alpha^{A'} {\sigma^a}_{AA'}, \label{e:correspondence}
\end{equation}
where the ${\sigma^a}_{AA'}$ are soldering forms, specifically referred to as
{\em Infeld--van den Waerden symbols}, conventionally represented as Pauli
matrices. In practice, the transformation provided by ${\sigma^a}_{AA'}$ is
often (and hereafter) taken as implied, with pairs of capital Latin indices
(one primed and one unprimed, with the same letter) taken to correspond
abstractly to a single spacetime index.
Vectors defined directly from univalent spinors as in
Eq.~\eqref{e:correspondence} turn out always to be null. For that reason
univalent spinors can be understood as defining null vectors in spacetime.
The standard geometrical interpretation of a univalent spinor (again, see
Refs.~\cite{PenroseRindler1, StewartBook}) is as a ``null flag,'' a null vector
with a particular spacelike half-plane attached to it. This flag plane,
encoded in the spinor's complex phase, is unimportant for our current
purposes.
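
Readers who wish to experiment with this correspondence can realize it
concretely as follows. The sketch below assumes the common convention in
which the Infeld--van den Waerden symbols are the identity and Pauli
matrices divided by $\sqrt{2}$, with signature $(+,-,-,-)$; conventions
differ between references:
\begin{verbatim}
import numpy as np

# Infeld-van den Waerden symbols: (identity, Pauli)/sqrt(2).
sigma = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]]) / np.sqrt(2)

rng = np.random.default_rng(0)
alpha = rng.normal(size=2) + 1j * rng.normal(size=2)

# V^a = alpha^A conj(alpha)^A' sigma^a_{AA'}, Eq. (correspondence).
V = np.einsum("aAB,A,B->a", sigma, alpha, np.conj(alpha)).real

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric
print("V.V =", V @ eta @ V)  # vanishes to roundoff: V is null
\end{verbatim}
The vanishing Minkowski norm for an arbitrary spinor illustrates the claim
that every vector of the form \eqref{e:correspondence} is null.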
The spacetime Weyl tensor can be written in terms of a four-index
totally symmetric object called the {\em Weyl spinor} $\Psi_{ABCD}$ and
the antisymmetric metric $\epsilon_{AB}$ on spinor space:
\begin{equation}
W_{abcd} = \Psi_{ABCD} \overline\epsilon_{A'B'} \overline\epsilon_{C'D'}.
\end{equation}
Here, as in the introduction, $W_{abcd}$ refers to the Weyl tensor in its
complex anti-self-dual
form, $W_{abcd} := C_{abcd} + i \hspace{1mm}{}^\star C_{abcd}$. A basic
result in spinor algebra (due essentially to the fundamental theorem of
algebra) is that any totally symmetric spin tensor can be decomposed into
a symmetrized product of univalent spinors. In particular, for the Weyl
spinor,
\begin{equation}
\Psi_{ABCD} = \alpha_{(A} \beta_B \gamma_C \delta_{D)},\label{e:principaldecomp}
\end{equation}
for univalent spinors $\alpha_A$, $\beta_B$, $\gamma_C$, $\delta_D$ defined
up to arbitrary complex scaling (any one of them can be scaled at the cost of
inversely scaling another). These are referred to as {\em principal
spinors} of $\Psi_{ABCD}$. Because
the principal spinors are defined only up to a complex scaling, their
corresponding null vectors are defined only up to an arbitrary real scaling,
and their flag planes are completely undefined. The corresponding null
vectors, defined up to scale, are the principal null directions
of the Weyl tensor at the spacetime point under consideration.
Because the metric on spinor space, $\epsilon_{AB}$, is antisymmetric, all
spinors have vanishing norm:
\begin{equation}
\alpha_A \alpha^A = 0.
\end{equation}
For this reason, the condition for $\alpha^A$ to be a principal
spinor of the Weyl spinor is:
\begin{equation}
\Psi_{ABCD} \alpha^A \alpha^B \alpha^C \alpha^D = 0.\label{e:prinspinor}
\end{equation}
To consider this equation more concretely, we introduce a basis,
$\{o^A, \iota^A\}$, in spinor space, normalized by the standard condition
$\epsilon_{AB} o^A \iota^B = 1$. Such a spin dyad is
equivalent\footnote{Strictly speaking, the correspondence is two-to-one, as
the spin dyad $\{-o^A, -\iota^A\}$ defines the same tetrad as
$\{o^A, \iota^A\}$. The distinction, however, is not important here.} to a
Newman-Penrose tetrad through the definitions
$\ell^a = o^A \overline o^{A'}$, $n^a = \iota^A \overline \iota^{A'}$,
$m^a = o^A \overline \iota^{A'}$.
Given a spin dyad, an arbitrary spin vector can be written as:
\begin{equation}
\alpha^A = \eta o^A + \xi \iota^A,
\end{equation}
for complex components $\eta$, $\xi$. Because we are only interested in
spinors up to arbitrary complex scaling, we can divide by $\eta$ and write:
\begin{equation}
\alpha^A = o^A + \zeta \iota^A,\label{e:zetaintroduced}
\end{equation}
where $\zeta = \xi/\eta$ is a possibly-infinite complex number, an element of
the one-point-compactified complex
plane, ${\mathbb C} \cup \{\infty\}$, the Riemann sphere.
Scaling an arbitrary spinor to this form and inserting it into
Eq.~\eqref{e:prinspinor}, we obtain:
\begin{equation}
\Psi_4 \zeta^4 + 4 \Psi_3 \zeta^3 + 6 \Psi_2 \zeta^2 + 4 \Psi_1 \zeta + \Psi_0 = 0,\label{e:polynomial_zeta}
\end{equation}
where we have used the standard spinorial definition of the Weyl scalars:
\begin{subequations}
\begin{eqnarray}
\Psi_0 &:=& \Psi_{ABCD} o^A o^B o^C o^D,\\
\Psi_1 &:=& \Psi_{ABCD} o^A o^B o^C \iota^D,\\
\Psi_2 &:=& \Psi_{ABCD} o^A o^B \iota^C \iota^D,\\
\Psi_3 &:=& \Psi_{ABCD} o^A \iota^B \iota^C \iota^D,\\
\Psi_4 &:=& \Psi_{ABCD} \iota^A \iota^B \iota^C \iota^D.
\end{eqnarray}
\end{subequations}
We thus find, comparing Eq.~\eqref{e:polynomial} with
Eq.~\eqref{e:polynomial_zeta}, that the quantity $\lambda$ can be interpreted
as the complex stereographic coordinate $\zeta$ on the Riemann
sphere, and in particular, as defining a spinor $\alpha^A$ of the form in
Eq.~\eqref{e:zetaintroduced} in a given spin dyad. Hereafter we will
consider $\zeta$ and $\lambda$ to be the same quantity, and use the symbols
interchangeably.
This stereographic interpretation of $\zeta$ (or $\lambda$)
is not merely a formality. As
described in Chapter 1 of Ref.~\cite{PenroseRindler1}, the space of
future-directed null rays at a point in spacetime is topologically a
two-sphere. This can be demonstrated by cutting a future null cone with a
spacelike 3-plane, given by $t=1$ in some local Minkowski coordinate system.
Furthermore, if we choose a particular such cut, whose intersection with the
null cone we will label $S^+$ and call the {\em anti-celestial sphere},
after Ref.~\cite{PenroseRindler1}, the metric
induced on this two-sphere from that in the local Minkowski spacetime is:
\begin{equation}
ds^2 = \frac{4 d\zeta d\overline \zeta}{(1 + \zeta \overline \zeta)^2},\label{e:unitspherestereo}
\end{equation}
where for the coordinate we have chosen the $\zeta$ value of the spinor, of
the form in Eq.~\eqref{e:zetaintroduced}, whose associated null direction
intersects $S^+$. Applying the transformation to conventional
spherical coordinates,
\begin{equation}
\zeta = e^{i \phi} \cot(\theta/2),\label{e:stereographic}
\end{equation}
we arrive at the standard form of the unit sphere metric:
\begin{equation}
ds^2 = d\theta^2 + \sin^2(\theta) d\phi^2.\label{e:unitspheresphere}
\end{equation}
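
This change of coordinates can be verified mechanically. A small symbolic
sketch using sympy, treating $d\theta$ and $d\phi$ as formal commuting
symbols:
\begin{verbatim}
import sympy as sp

theta, phi, dtheta, dphi = sp.symbols(
    "theta phi dtheta dphi", real=True)

zeta = sp.exp(sp.I * phi) * sp.cot(theta / 2)   # Eq. (stereographic)
dzeta = (sp.diff(zeta, theta) * dtheta
         + sp.diff(zeta, phi) * dphi)

ds2 = (4 * dzeta * sp.conjugate(dzeta)
       / (1 + zeta * sp.conjugate(zeta)) ** 2)
print(sp.simplify(sp.expand(ds2)))
# -> dtheta**2 + dphi**2*sin(theta)**2 (or an equivalent form)
\end{verbatim}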
As a geometrical method for defining the nearness of two null directions
to degeneracy, one can consider the metric~\eqref{e:unitspheresphere} on
the anti-celestial sphere. If $\zeta_i$ and $\zeta_j$ are two roots
of Eq.~\eqref{e:polynomial}, then one can translate them to spherical
coordinates $(\theta_i, \phi_i)$, $(\theta_j, \phi_j)$ by inverting
Eq.~\eqref{e:stereographic}, and then use the metric distance function
on the unit sphere, given by the haversine formula as:
\begin{equation}
\begin{split}
\Theta_{ij} := &2 \arcsin \left\{\left(\sin^2\left[\left(\theta_i-\theta_j\right)/2\right]\right.\right.\\
& \left. \left.+ \sin \theta_i \sin \theta_j \sin^2\left[(\phi_i-\phi_j)/2\right]\right)^{1/2}\right\}.
\end{split}
\end{equation}
This can also be written directly in terms of the stereographic coordinates as:
\begin{equation}
\Theta_{ij} = 2 \arcsin \left[ \frac{|\zeta_i - \zeta_j|}{\sqrt{(1+\zeta_i \overline \zeta_i)(1+\zeta_j \overline \zeta_j)}}\right]. \label{e:Thetadefined}
\end{equation}
As one can verify by a direct substitution of Eq.~\eqref{e:lambda_spatrot},
this degeneracy measure is invariant under spatial rotations of the form
in Eq.~\eqref{e:xzrotation}, or indeed any tetrad rotation that leaves the
timelike tetrad leg invariant.
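
Both degeneracy measures are straightforward to compute from the roots, and
the rotation invariance just mentioned can be checked numerically. A minimal
sketch (the function names and sample roots are ours, purely illustrative):
\begin{verbatim}
import numpy as np

def Delta(z1, z2):
    # coordinate distance |zeta_i - zeta_j|
    return abs(z1 - z2)

def Theta(z1, z2):
    # great-circle distance on the anti-celestial sphere,
    # Eq. (Thetadefined)
    den = np.sqrt((1 + abs(z1)**2) * (1 + abs(z2)**2))
    return 2 * np.arcsin(abs(z1 - z2) / den)

def rotate(z, phi):
    # e1--e3 tetrad rotation, Eq. (lambda_spatrot)
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return (z * c - s) / (z * s + c)

z1, z2 = 0.3 + 0.4j, -1.2 + 0.1j
for phi in (0.0, 0.7, 2.1):
    print(Delta(rotate(z1, phi), rotate(z2, phi)),
          Theta(rotate(z1, phi), rotate(z2, phi)))
# Delta varies with phi; Theta is unchanged to roundoff.
\end{verbatim}
The invariance of $\Theta_{ij}$ reflects the fact that
Eq.~\eqref{e:lambda_spatrot} is a M\"obius transformation of $SU(2)$ type,
which acts on the Riemann sphere as a rigid rotation.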
We must stress, however, that even this is not a totally invariant measure
of degeneracy. In fact, there are fundamentally as many degrees of ambiguity
in this measure as there are in $|\zeta_i - \zeta_j|$. The ambiguity in
$\Theta_{ij}$ is encoded in the choice of cut one makes to the null cone in
order to construct $S^+$. This can be interpreted physically as a
result of the special relativistic effect known as ``relativistic aberration
of starlight,'' by which the inferred geometry of the celestial (or as in
this case, anti-celestial) sphere is conformally mapped under Lorentz
boosts.
A geometrical interpretation of the degeneracy measure
$\Delta_{ij} := |\zeta_i - \zeta_j|$ can be found in spinor space.
Two spinors $\alpha_1^A$ and $\alpha_2^A$ are proportional --- and therefore
their associated real null vectors are proportional --- if and only if
their antisymmetrized product vanishes:
\begin{equation}
\brak{\alpha_1}{\alpha_2} := \epsilon_{AB} \alpha_1^A \alpha_2^B = 0.
\end{equation}
It is tempting to use this quantity as a
measure of the degeneracy of the null rays associated with $\alpha_1^A$ and
$\alpha_2^A$, but we must remember to account for the scaling ambiguity of
the spinors. If $\brak{\alpha_1}{\alpha_2}$ is nonzero, then an arbitrary
rescaling of either spinor, which should not alter any reasonable
measure of the degeneracy of the null rays, would directly rescale
$\brak{\alpha_1}{\alpha_2}$. This ambiguity must be fixed by imposing a
condition on the scaling of $\alpha_1^A$ and $\alpha_2^A$. One possibility,
given a particular spin dyad $\left\{ o^A, \iota^A \right\}$, is to assume
that the spinors are of the form~\eqref{e:zetaintroduced}, with
$\alpha_1^A = o^A + \zeta_1 \iota^A$, and
$\alpha_2^A = o^A + \zeta_2 \iota^A$. This condition
can be stated for the associated null vectors $V_1^a$ and $V_2^a$ in
terms of the Newman-Penrose tetrad as:
\begin{equation}
V_i^a n_a = - 1, \label{e:nullcut}
\end{equation}
for $i \in \{1, 2\}$, along with the conditions that the $V_i^a$ are real null
vectors, and where the Newman-Penrose tetrad vector $n^a$ is defined from the
dyad spinor $\iota^A$ by $n^a := \iota^A \overline \iota^{A'}$. This subset of the
future null cone can be visualized as its intersection with a null hyperplane
defined by $t + z = \sqrt{2}$ in the local Minkowski coordinates. To the mind
accustomed to Euclidean geometry, this intersection might be assumed
to be a paraboloid. However, interestingly, the Lorentzian structure
of the spacetime metric causes the intersection to be, in terms of the
induced metric, a flat
two-dimensional plane, with $\zeta$ a standard complex coordinate on this
plane. In fact, the absolute value of the degeneracy measure
$\brak{\alpha_1}{\alpha_2}$, under this particular normalization condition, is
precisely the quantity $\Delta_{12} = | \zeta_1 - \zeta_2 |$. For this
reason, the degeneracy measure used in~\cite{Campanelli2008} can be
understood as a geometric distance between the two associated principal
null directions along the cut made by a null hyperplane through the future
null cone.
The degeneracy measure $\Theta_{ij}$ introduced in Eq.~\eqref{e:Thetadefined}
can similarly be understood in terms of the symplectic product
$\brak{\alpha_1}{\alpha_2}$. If the condition on the null vectors associated
with the $\alpha_i^A$ is taken to be that $\vec V_i \cdot \vec e_0 = -1$, for
a timelike tetrad vector defined from a Newman-Penrose tetrad through
Eqs.~\eqref{e:orthonormal}, rather than $\vec V_i \cdot \vec n = -1$, then
the spinors must be scaled as:
\begin{equation}
\alpha_i^A = \frac{1}{\sqrt{1 + \zeta_i \overline \zeta_i}} \left( o^A + \zeta_i \iota^A \right).
\end{equation}
In this case the absolute value of $\brak{\alpha_1}{\alpha_2}$ becomes:
\begin{equation}
\left| \brak{\alpha_1}{\alpha_2} \right| = \frac{|\zeta_1 - \zeta_2|}{\sqrt{\left(1+\zeta_1 \overline \zeta_1\right)\left(1+\zeta_2 \overline \zeta_2\right)}},
\end{equation}
essentially equivalent to $\Theta_{12}$, as defined in
Eq.~\eqref{e:Thetadefined}.
The distinction between the degeneracy measures $\Delta_{ij}$ and $\Theta_{ij}$
can therefore be understood as a distinction between two different realizations
of the geometry of the space of null rays at a point. If the geometry in this
space is inferred by cutting the null cone with a null hyperplane, the distance
function on the set of null rays is given by $\Delta_{ij}$. If the cut is
taken by a spacelike hyperplane, the distance is given by $\Theta_{ij}$.
The ambiguity of these distance functions stems from the ambiguity of these
cuts. Fundamentally there are equally many degrees of ambiguity in both types
of cut. A spacelike hyperplane can be boosted in any of three directions,
translating into three continuous degrees of ambiguity for $\Theta_{ij}$ at
each point in spacetime. A null hyperplane cut can also be given
in terms of three degrees of freedom: the null normal to the hyperplane (for
which there are two degrees of freedom, the anti-celestial sphere), and a
parameter describing the translation of the hyperplane away from the
vertex of the cone. This last degree of freedom also exists for spacelike
hyperplanes, but because the intersection of the spacelike hyperplane with the
null cone is compact (specifically a two-sphere), one can fix this translation
degree of freedom by fixing the area of the sphere. In the case of a cut by
a null hyperplane, the intersection is noncompact, so this degree of freedom
cannot be fixed.
Though the degeneracy measure $\Theta_{ij}$ may be no better defined in
general than $\Delta_{ij}$, there are still reasons to prefer it for purposes
of defining a notion of approximate Petrov class. The main reason is that
when a null hyperplane cut is made through a null cone, one special null ray
is singled out: the one parallel to the hyperplane. Again, the
intersection of the future null cone with a null hyperplane is itself a
spacelike two-dimensional plane, and because the null ray parallel to the
hyperplane never intersects the hyperplane, it is only represented on
the intersection plane as a point at infinity. Equation~\eqref{e:nullcut} shows
that the null ray that gets mapped to the point at infinity is the one that
points along the tetrad $\vec n$ vector. This is the behavior
that we saw in Sec.~\ref{s:quasikinnersley}. The quasi-Kinnersley tetrad
naturally adapts itself to the principal null directions in the ringdown
to Kerr geometry, such that two of them fall toward the origin of the
$\zeta$ plane and two approach infinity. This is because the
quasi-Kinnersley tetrad
is {\em designed} to adapt itself to nearly-degenerate principal null
directions. To the extent that the numerical tetrad approximates a
quasi-Kinnersley tetrad (a common implicit hope for the extraction of
gravitational waveforms) this behavior will be seen also in numerical
simulations. An example of this will be seen in the next section.
\section{Numerical Results}
\label{s:results}
Our numerical implementation of these mathematical tools
begins, as in Ref.~\cite{Campanelli2008}, with the fourth-order polynomial
in Eq.~\eqref{e:polynomial}. We begin by computing the Weyl scalars in a
reference tetrad. The timelike orthonormal tetrad leg $\vec e_0$
is taken to be the normal to the spatial slice, and the spacelike orthonormal
tetrad legs are constructed from a Gram-Schmidt orthogonalization of the basis
vectors of a spherical-like coordinate system within the spatial slice,
essentially similar to the method in~\cite{Campanelli2008}. This tetrad is
singular at the $z$ axis, as the complex phase of the $\vec m$ leg becomes
undefined due to the coordinate singularity of the spherical chart, but all
quantities we present will be independent of this complex phase, and thus will
have well-defined values on the axis.
Our
code computes the electric and magnetic parts of the Weyl curvature tensor
directly from data on the spatial slice, using Gauss-Codazzi relations and
assuming the Einstein equations are satisfied and that no matter fields are
present:
\begin{eqnarray}
E_{ij} &=& \left({}^3R_{ij} + K K_{ij} - K_{ik} K_j^k \right)_{\rm STF},\\
B_{ij} &=& \left(\epsilon_i{}^{mn}D_mK_{nj}\right)_{\rm STF}.
\end{eqnarray}
Here, $K_{ij}$ is the extrinsic curvature of the spatial slice, $D_i$ is the
torsion-free covariant derivative compatible with the spatial metric,
${}^3R_{ij}$ its Ricci curvature, $\epsilon_{ijk}$ the spatial Levi-Civita
tensor, and the subscript ${\rm STF}$ means that the quantity in brackets is
made symmetric and tracefree in the indices $i$ and $j$. Once these tensors
are computed, we construct the Weyl scalars from them as in Eqs.~(4)
of Ref.~\cite{Nerozzi2007}, using the radial tetrad leg for $\vec u$, and the
polar leg plus $i$ times the azimuthal leg for $\vec m$.
Once the Weyl scalars are known, one can go about solving for the roots
$\lambda_i$ of Eq.~\eqref{e:polynomial}. We do so point by point on the
computational grid with simple Newton-Raphson iteration and polynomial
deflation~\cite{numrec_cpp}. In many cases, these methods are conventionally
followed by root polishing --- using the computed roots of the deflated
polynomials as initial guesses in new Newton-Raphson iterations of the initial
polynomial, with the hope of correcting roundoff error accumulated in the
deflation process --- but in this case root polishing has no noticeable
effect. This is presumably because the roots under consideration are very
nearly
degenerate, so error in the Newton-Raphson iterations themselves dominates the
error accumulated in the polynomial deflation.
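
For concreteness, a schematic Python version of this procedure is given
below. It is a simplified sketch, not our production code; a
companion-matrix solver such as numpy's would serve equally well:
\begin{verbatim}
import numpy as np

def newton_root(coeffs, z0, tol=1e-14, itmax=100):
    # One root of coeffs[0]*z^n + ... + coeffs[n] by
    # Newton-Raphson from a (generic) complex seed.
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = z0
    for _ in range(itmax):
        dz = p(z) / dp(z)
        z -= dz
        if abs(dz) < tol * (1 + abs(z)):
            break
    return z

def all_roots(coeffs, z0=0.4 + 0.9j):
    # Newton iteration plus polynomial deflation.
    roots, work = [], np.asarray(coeffs, dtype=complex)
    while len(work) > 2:
        r = newton_root(work, z0)
        roots.append(r)
        work, _ = np.polydiv(work, [1.0, -r])  # divide out (z - r)
    roots.append(-work[1] / work[0])           # final linear factor
    return np.array(roots)

# Hypothetical Weyl-scalar coefficients, highest degree first.
coeffs = [1.0, 0.2 + 0.1j, -0.5j, 0.05, 1e-3 + 1e-3j]
print(np.sort_complex(all_roots(coeffs)))
print(np.sort_complex(np.roots(coeffs)))  # cross-check
\end{verbatim}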
As in Ref.~\cite{Campanelli2008}, we focus our attention on a simulation
of the
ringdown of a binary black hole merger to Kerr geometry. The simplest example
of such a merger is one following the inspiral of two equal mass nonspinning
black holes in a noneccentric configuration. This data set was presented
in detail in Ref.~\cite{Scheel2008}, and the multipolar structure of the
post-merger horizon was studied in Ref.~\cite{Owen2009}. In the
former paper, it was noted that two independent measures of black hole spin,
designed to agree if the final black hole is Kerr, agreed to well within the
estimated accuracy of the numerical truncation. In the latter paper, this
correspondence
was studied in much greater detail, demonstrating that all of the multipole
moments on the apparent horizon that we were able to compute agreed very
well with those of the Kerr horizon (see
Refs.~\cite{Schnetter2006, Jasiulek2009} for similar analyses). While these
provide a very compelling
case that the final black hole is Kerr, they do not share a major benefit of
the methods described here and in Ref.~\cite{Campanelli2008}: locality.
The degeneracy measures described here can be
computed independently at each point in spacetime, rather than simply for the
apparent horizon as a whole. In
this way one can imagine demonstrating not just the fact that a spacetime is
settling down to Kerr geometry, but where it is doing so more quickly and more
slowly, and possibly even the relationship between the approach to Kerr
geometry and the presence of gravitational radiation.
This locality of the approximate Petrov classification system, while
beneficial for the reasons described above, unfortunately comes at the cost
of another type of gauge ambiguity. If one wishes to investigate the
time dependence of the degeneracy measures, then one
must choose a worldline in spacetime along which to compute these quantities.
In principle one could reduce this ambiguity, for example by computing along
timelike geodesics, or worldlines preferred by some sort of symmetry, if any
exist. For example, the symmetries inherent in a merger of equal mass,
initially nonspinning holes
provide at least one preferred axis for consideration. The initial data
satisfy a discrete symmetry under 180-degree rotations about a certain axis,
taken in our simulations to be the coordinate $z$ axis, along which the
initial ``orbital angular
momentum vector'' can be intuitively said to point. To the extent that the
numerical simulation preserves this discrete symmetry, the $z$ axis sweeps out
a geometrically well-defined worldsheet in spacetime. In principle this
timelike worldsheet could be restricted to a single well-defined timelike
worldline, on which data can be extracted, by intersecting the worldsheet with
a level surface of some curvature invariant. Here, however, we do not go to
such lengths, electing instead to follow coordinate worldlines, as
in Ref.~\cite{Campanelli2008}, but paying special attention to the symmetry
axis.
In Fig.~\ref{f:zaxis},
\begin{figure}
\begin{center}
\includegraphics[scale=.3]{zaxis}
\end{center}
\caption{ \label{f:zaxis} The ringdown after the merger of two equal mass,
initially nonspinning holes. The curves show the behavior of the two smallest
values of each of the degeneracy measures $\Delta_{ij}$ and $\Theta_{ij}$,
with respect to coordinate time, evaluated at $z = 4.5$, $x = y = 0$ in an
asymptotically inertial coordinate system. The heavier curves show the two
values of the $\Theta_{ij}$ measure, the lighter curves the $\Delta_{ij}$
measure. The solid curves show the smaller values of these measures, and
the dashed curves the larger. The symmetries along this axis
force the tetrad to satisfy the basic conditions of a ``quasi-Kinnersley''
tetrad, as described in section~\ref{s:quasikinnersley}, and the results of
that section explain the initial exponential growth of the higher
$\Delta_{ij}$ curve.
}
\end{figure}
data are shown for the two smallest --- smallest
among the various possible root pairings --- values of the degeneracy measures
$\Delta_{ij}$ and $\Theta_{ij}$ evaluated at the coordinate location
$x = y = 0$, $z = 4.5$\footnote{For a sense of scale we note that the
apparent horizon, in these coordinates, settles down at late times roughly to
a coordinate sphere with a radius of $2.61$.}
as a function of coordinate time after the formation of the common
apparent horizon in the dataset described in Ref.~\cite{Scheel2008}. Because
the tetrad is, by construction, adapted
to the symmetry axis at this location,\footnote{Actually the tetrad is,
strictly speaking, not well defined on the axis, because it is constructed
from the spherical coordinate basis, which is singular there. However, for
the objects we compute, this singularity has no effect. The $\vec \ell$ and
$\vec n$ tetrad legs are well-defined on the axis; it is just the complex
phase of the $\vec m$ vector that becomes undefined there. The actual
quantities we compute, however, $\Delta_{ij}$ and $\Theta_{ij}$, are invariant
under spin transformations (spin-boost transformations with $|c| = 1$ in
Eq.~\eqref{e:spinboost}). Because there are no grid points on the axis, these
quantities can always be computed, and because they are spin invariant, they
can be smoothly interpolated to the axis.} it is forced to be ``transverse''
in the sense of Sec.~\ref{s:quasikinnersley} (a fact which we have verified
by a direct inspection of the computed values of $|\Psi_1|$ and $|\Psi_3|$).
In Fig.~\ref{f:zaxis}, we initially see exponential growth in the
second-smallest root difference $\Delta_{ij}$, as one would expect from the
considerations of Sec.~\ref{s:quasikinnersley}.
Eventually, this exponential growth
gives way to exponential decay, similar to the behavior seen in
Fig.~\ref{f:qKoffset_manyangles}. This occurs because the data we compute
here are actually interpolated to the polar axis from data on grid points
slightly offset from it. On these grid points, the tetrad differs slightly
from the quasi-Kinnersley tetrad, as in the discussion near the end of
Sec.~\ref{s:quasikinnersley}. One
might hope that this eventual decay would only occur on these offset grid
points, and that the data interpolated to the axis would grow indefinitely as
$\sqrt{-6 \Psi_2/\Psi_4}$, but as the black hole settles down the
growing peak in Fig.~\ref{f:qKoffset_manytimes} shrinks in width, so
eventually one would expect it not to be resolved by the spectral
discretization.
Incidentally, we have confirmed that
the rates of exponential decay in the decaying curves, and the rate of
exponential growth in the growing curve, each roughly equal half of the
damping rate of the least-damped quasinormal mode of a Kerr black hole of the
same final mass and spin as our final remnant. One would expect this
from Eq.~\eqref{e:halfrate}. The most important issue to note about
Fig.~\ref{f:zaxis}, though, is the discrepancy between the picture implied by
the $\Delta_{ij}$ values, and that implied by the $\Theta_{ij}$ values. The
highest and lowest curves are the two relevant values of $\Delta_{ij}$. At
early times, even the qualitative behavior of these curves is different,
one growing and one decaying. Even at late times, when both curves decay
exponentially (and eventually settle to fixed limits due to numerical
truncation error), they still differ by four orders of magnitude. If
$\Delta_{ij}$ were naively interpreted as defining the ``nearness'' to any
Petrov class, then the conclusion drawn from Fig.~\ref{f:zaxis} would be that the
final result of the numerical simulation is of Petrov Type II, not Type D, on
this axis. The other two curves in Fig.~\ref{f:zaxis} tell a very different
story. The two heavier curves in Fig.~\ref{f:zaxis} show the
two smallest values of the $\Theta_{ij}$ measure. Both curves fall
exponentially at the same rate, and they lie within roughly a factor of ten
of one another throughout the entire ringdown. According to $\Theta_{ij}$ the
spacetime falls quite unambiguously to Petrov Type D on the polar axis.
The behavior at different coordinate locations is less striking, but shows
roughly similar features. Fig.~\ref{f:xaxis}
\begin{figure}
\begin{center}
\includegraphics[scale=.3]{xaxis}
\end{center}
\caption{ \label{f:xaxis} The two smallest values of the degeneracy measures
$\Delta_{ij}$ and $\Theta_{ij}$ evaluated at the coordinate location
$x = 4.5$, $y = z = 0$ in the ringdown after a merger of equal-mass,
initially nonspinning black holes, as in Fig.~\ref{f:zaxis}. The lighter
curves, which are generally the highest and lowest curves, are the two values
of the $\Delta_{ij}$ measure. The heavier curves represent the $\Theta_{ij}$
measure. Initial growth
of the larger $\Delta_{ij}$ value is still present, but less striking than in
Fig.~\ref{f:zaxis}. Nonetheless, the two $\Delta_{ij}$ values again
differ by multiple orders of magnitude, while the two $\Theta_{ij}$ values
generally differ by only one.
}
\end{figure}
presents the same quantities as
Fig.~\ref{f:zaxis} computed instead at $x = 4.5$, $y = z = 0$. Again,
the highest and lowest curves are the two relevant
values of $\Delta_{ij}$. The higher one grows slightly (on average) in the
early ringdown, but settles into exponential decay quite a bit sooner than in
Fig.~\ref{f:zaxis}, and throughout the ringdown remains separated from the
values in the lowest curve by roughly two to three orders of magnitude. This
still, however, provides a marked contrast from the two $\Theta_{ij}$ curves,
which again lie within roughly a single order of magnitude of one another
throughout the ringdown. A particularly interesting feature is visible in the
uppermost curve of Fig.~\ref{f:xaxis} from the beginning of the post-merger
dataset to roughly the coordinate time $8275$. This time frame is magnified
in Fig.~\ref{f:xaxis_Diff2zoom}.
\begin{figure}
\begin{center}
\includegraphics[scale=.3]{xaxis_Diff2zoom_psi4_split}
\end{center}
\caption{ \label{f:xaxis_Diff2zoom} Magnification of the highest curve in
Fig.~\ref{f:xaxis}. The sharp peaks in the bottom curve before time $8275$
imply that principal
null directions are occasionally crossing the tetrad legs. The upper curve
corroborates this by explicitly showing that the absolute value of the
Weyl scalar $\Psi_4$ approaches zero at times coincident with these sharp
peaks.
}
\end{figure}
Because the spacetime is symmetric under reflections across the $z = 0$ plane,
the spatial projections of the principal null directions on this plane must
either be tangent to the plane, or otherwise reflection-symmetric across it.
The jagged peaks in Fig.~\ref{f:xaxis_Diff2zoom} imply that the former
possibility seems to apply here. When the tetrad $\vec n$ vector happens
to point along a principal null direction, the corresponding $\zeta$ value
of the polynomial root becomes infinite. If the spatial projections of two
of the principal null directions lie in the same plane as the spatial
projection of the tetrad $\vec n$ vector, and they oscillate in direction,
repeatedly crossing the spatial projection of $\vec n$ (due either to
physical or gauge effects), then one would expect the $\Delta_{ij}$ values
involving these principal null directions to show sharp, repeating peaks, such
as those in Fig.~\ref{f:xaxis_Diff2zoom}. Eventually, such crossings could be
expected to stop as the angle between the spatial projections of the principal
null directions becomes small and as gauge and physical oscillations become
less wild.
After roughly $t = 8275$ in these code units\footnote{To aid in translating
the code units to physically relevant units, we note that the final mass of
the remnant black hole, in these code units, is roughly $1.98$.} the
oscillations in this curve become more smooth, implying that the principal
null directions are no longer crossing $\vec n$.
Figure~\ref{f:L2Norms}
\begin{figure}
\begin{center}
\includegraphics[scale=.3]{L2Norms}
\end{center}
\caption{ \label{f:L2Norms} $L_2$ norms of the two smallest values of
$\Delta_{ij}$ and $\Theta_{ij}$, integrated over a thick spherical shell
extending from just within the apparent horizon, $r = 2.2$ in code
coordinates, to an outer boundary at $r = 5$ in code coordinates. Again,
both values of $\Theta_{ij}$
fall quickly to zero at the same rate, while the larger $\Delta_{ij}$ value
hangs up initially, and eventually falls only to a level over two orders of
magnitude above its smaller counterpart.
}
\end{figure}
presents $L_2$ norms of the same degeneracy measures for the same ringdown
dataset.
While this avoids the choice of a specific coordinate location for
observation, we should note that there is still a certain amount of coordinate
ambiguity in this quantity. The $L_2$ norm is only over a certain subset of
the
spatial slice. The inner boundary is the excision boundary of the simulation,
slightly inside the apparent horizon. The outer boundary is a boundary
of subdomains in the simulation, with coordinate radius 5. The
purpose of this outer boundary is to
avoid numerical difficulties when the Weyl scalars become too small to
calculate accurately the roots of the polynomial in Eq.~\eqref{e:polynomial}.
Again in Fig.~\ref{f:L2Norms}, we see the larger value of $\Delta_{ij}$
hanging up while both values of $\Theta_{ij}$ decay exponentially.
\section{Discussion}
\label{s:discussion}
The primary goal of this paper has been to explain the peculiar behavior, noted
in
Ref.~\cite{Campanelli2008}, that the spacetimes of binary black hole mergers
seem to ``hang up'' in Petrov Type II, and if they fall to Type D at all
(according to one's choice of tolerance), they do so much later.
Properly clarifying this issue has required us to investigate the
geometrical meaning of the polynomial in Eq.~\eqref{e:polynomial} and the
measure of degeneracy principally used in Ref.~\cite{Campanelli2008}, the
absolute value of the difference of any two roots,
$\Delta_{ij} := |\lambda_i - \lambda_j|$. The true space of interest is the
space of future-directed null rays at a point, the so-called anti-celestial
sphere. As argued in Sec.~\ref{s:spinors}, the complex quantity $\lambda$
acts as a coordinate on this two-dimensional space. It should therefore not
be surprising that $\Delta_{ij}$, a coordinate distance, fails to represent
geometries in the space of null rays, since it is impossible to cover a
topological sphere with a single coordinate chart without the presence of
coordinate singularities.
It would therefore seem that the right thing to do would be to consider
truly geometrical distances in the space of null rays as defining the nearness
of principal null directions to one another. This approach, unfortunately, is
clouded by the nonexistence of a preferred metric structure on the
anti-celestial sphere. The anti-celestial sphere has a six-dimensional
conformal group, corresponding to the proper Lorentz group of
Minkowski spacetime. While this group carries a three-dimensional subgroup of
isometries --- corresponding to rotations --- which have no effect on
the ``distances'' between any two null rays, the three remaining
dimensions --- corresponding to boosts --- conformally
rescale the metric on the space of null rays.
For this reason, fixing a unique geometry on the space of null rays requires
fixing a unique ``observer'' with respect to which this boost freedom is
fixed. In the introduction, we described similar difficulties in attempting
to define a concept of approximate Petrov class by algebraic means.
In Sec.~\ref{s:spinors} we also presented a geometrical interpretation of the
quantity $\Delta_{ij}$. Rather than simply as a coordinate distance on the
space of null rays, $\Delta_{ij}$ can be interpreted as a metric distance
along the cut of a null cone
made by a null hyperplane. In a sense, one is here adapting the metric on
the space of null rays to a null observer. Similarly, $\Theta_{ij}$ is a
distance function on the intersection of the future null cone with a spacelike
plane orthogonal to our timelike observer.
There are equally many degrees of freedom in cutting the null cone with a
spacelike hyperplane as there are in cutting it with a null hyperplane (once
one restricts the spacelike cuts to normalize the area of the anti-celestial
sphere, a restriction that cannot be made on null hyperplanes because the
intersection is noncompact). For this reason the degeneracy measure that we
have introduced, $\Theta_{ij}$ in Eq.~\eqref{e:Thetadefined}, cannot be
considered fundamentally any more invariant than $\Delta_{ij}$, though in
practice it is easier to imagine a fleet of preferred timelike observers than
of null observers, such as stationary observers in stationary spacetimes,
observers adapted to timelike approximate Killing vectors in approximately
stationary spacetimes (if such vectors can be reasonably defined),
or freely falling observers following timelike geodesics from infinity.
We have not, however, attempted to find any such preferred classes of
observers in the numerical results presented here, either for fixing the
geometry on the space of null rays or for fixing the worldlines along which
data are extracted. The main value of the new degeneracy measure
$\Theta_{ij}$ is not that it is more gauge-invariant, but rather that it
naturally captures the compactness of the space of null rays,
and thereby avoids relegating any particular null ray to a point at infinity.
The need to avoid relegating any null ray to a point at infinity is
particularly acute in practice, as the rays at infinity in
the physically preferable quasi-Kinnersley tetrads become the principal null
directions themselves as a spacetime settles down to Kerr geometry. This
behavior was investigated in Sec.~\ref{s:quasikinnersley}. As principal null
directions relax to degeneracy at the point at infinity in $\lambda$ space,
the degeneracy measure $\Delta_{ij}$ grows exponentially rather than decaying
exponentially as one would naively expect. While the tetrads used in
numerical simulations are not commonly quasi-Kinnersley tetrads, there is
generally an implicit hope, for purposes of wave extraction, that they are
close to it in some rough sense. Indeed, as is visible in Fig.~\ref{f:zaxis},
this nonintuitive behavior
in the quasi-Kinnersley tetrad can corrupt measurements of $\Delta_{ij}$ in
even a simple coordinate-adapted tetrad ({\em cf.}
Fig.~\ref{f:qKoffset_manyangles}).
The other figures in Sec.~\ref{s:results} tell much the same story, though in
somewhat less dramatic terms. Fig.~\ref{f:xaxis_Diff2zoom} shows indications
of principal null directions directly crossing the tetrad null vectors,
repeatedly causing the null directions to be represented by the point at
infinity in $\lambda$ space, due purely to the choice of spatial tetrad.
Fig.~\ref{f:L2Norms} shows that the hangups in the degeneracy measure
$\Delta_{ij}$ are not limited to the partially symmetry-preferred
$x$ and $z$ axes. In fact, $\Delta_{ij}$, computed everywhere in a tetrad
adapted to spherical coordinates, clearly shows this hangup in Petrov Type II
even in an integral $L_2$ norm, while $\Theta_{ij}$ does not.
Another motivation of this paper has been to provide further evidence that the
final remnants of our black hole merger simulations are Kerr black holes.
This was indeed the central focus of Ref.~\cite{Campanelli2008}, whose authors
even went so far as to demonstrate that their final black hole has no NUT
charge or acceleration (as in the C-metrics; see
Ref.~\cite{GriffithsPodolsky}), an issue that we have not explored here.
The question of whether a black hole is ``settling down to Kerr'' can be
attacked at a variety of levels. In a recent paper~\cite{Owen2009}, we
studied the question
at a quasilocal level, verifying that the quasilocal source multipole moments
of the black hole settle to the values required by the Kerr geometry (see also
Refs.~\cite{Schnetter2006, Jasiulek2009}).
More recently, Ref.~\cite{BaeckdahlKroon2010} presented theoretical tools
for approaching the question at a global level (global on any given spacelike
slice). The approach taken in Ref.~\cite{Campanelli2008} was in part
local (measurement of $\Delta_{ij}$), and in part global. The method by
which Campanelli {\em et al.} verified the vanishing of the NUT charge and
acceleration involved limits of curvature invariants to large radii. If one
wishes to rule out NUT charge or acceleration at a local level, to provide a
more cohesive picture when combined with local algebraic degeneracy measures,
this can be done with quantities presented in Ref.~\cite{FerrandoSaez2009},
though as described in the introduction of this paper, collapsing
tensorial quantities to scalar quantities would require
a positive-definite background metric, which could require fixing a slicing
or a threading of spacetime.
\acknowledgments{
I thank Mark Scheel for providing the numerical simulation data studied in
Sec.~\ref{s:results}. I also thank the Caltech and Cornell numerical
relativity groups, particularly Jeandrew Brink, Larry Kidder, Geoffrey
Lovelace, and Saul Teukolsky for frequent discussions. This work is supported
in part by grants from the Sherman Fairchild Foundation to Cornell, and by
NSF grants PHY-0652952, DMS-0553677, PHY-0652929, and NASA grant NNX09AF96G.
}
\section{Introduction}
This is a joint work with Vyacheslav Futorny and Vladimir V. Sergeichuk.
V.I. Arnold \cite{arn} pointed out that the reduction of a matrix to its Jordan form is an unstable operation: both the Jordan form and the reduction transformations depend discontinuously on the elements of the original matrix. Therefore, if the elements of a matrix are known only approximately, then it is unwise to reduce it to its Jordan form. For this reason, Arnold obtained a miniversal deformation of a Jordan matrix, i.e., a simplest possible normal form to which not only a given matrix $A$, but an arbitrary family of matrices close to it, can be reduced by means of a similarity transformation that depends smoothly on the elements of $A$ in a neighborhood of zero.
We give the analogous form for a pair of symmetric matrices (earlier we gave it for a pair of skew-symmetric matrices \cite{skew}). The problem is important for applications in which the matrices arise as a result of measurements, i.e., their entries are known only up to errors.
(Mini)versal deformations have been studied by various authors in a great number of papers (see \cite{bilin}).
\section*{Outline}
In Section $2$ we present the main result in terms of holomorphic functions, and in terms of miniversal deformations. We use the canonical
matrices of a pair of symmetric forms given by Thompson \cite{thom}.
Section $3$ is a proof of the main result. First the method of constructing deformations is presented, and then, using it, we
calculate the deformations step by step: for the diagonal blocks, for the off-diagonal blocks that correspond to canonical summands of
the same type, and for the off-diagonal blocks that correspond to canonical summands of different types.
Note that in the analogous paper for skew-symmetric matrices \cite{skew} there is a section devoted to a constructive proof of the versality of
the deformations. That section is omitted here, but it can be carried out in exactly the same way as in \cite{skew}.
\section{The main theorem}
In this section we
formulate a theorem
about miniversal
deformations of pairs
of symmetric
matrices under
congruence (it will
be proved in the next
section), but first we
recall a canonical
form of pairs of symmetric
matrices under
congruence.
Define the $n\times n$
matrices
\begin{equation*}\label{1aa}
\Lambda_n(\lambda):=
\begin{bmatrix}
0&&&\lambda\\
&&\lambda&1\\
&\ddd&\ddd&\\
\lambda&1&&0\\
\end{bmatrix},\qquad
\Delta_n:=\begin{bmatrix}0&&&1\\
&&1&\\
&\ddd&&\\
1&&&0\\
\end{bmatrix},
\end{equation*}
and the $n\times
(n+1)$ matrices
\begin{equation*}
F_n :=
\begin{bmatrix}
1&0&&0\\
&\ddots&\ddots&\\
0&&1&0\\
\end{bmatrix}, \qquad
G_n :=
\begin{bmatrix}
0&1&&0\\
&\ddots&\ddots&\\
0&&0&1\\
\end{bmatrix}.
\end{equation*}
The following lemma
was proved in
\cite{thom}.
\begin{lemma}\label{lkh}
Every pair of
symmetric complex
matrices is congruent
to a direct sum,
determined uniquely up
to permutation of
summands, of pairs of
the form
\begin{equation}
\label{can}
H_n(\lambda):= (\Delta_n ,
\Lambda_n(\lambda) ),\quad \lambda \in\mathbb C,
\end{equation}
\begin{equation}\label{can2}
K_n:= (\Lambda_n(0),\Delta_n ),
\end{equation}
\begin{equation}\label{can3}
L_n:= \left(
\begin{bmatrix}0&F_n^T\\
F_n &0
\end{bmatrix},
\begin{bmatrix}0&G^T_n\\
G_n&0
\end{bmatrix}
\right).
\end{equation}
\end{lemma}
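
For readers who wish to experiment with the canonical pairs
\eqref{can}--\eqref{can3} numerically, they are straightforward to
construct. A small, purely illustrative Python sketch:
\begin{verbatim}
import numpy as np

def Lam(n, lam):
    # Lambda_n(lambda): lambda on the antidiagonal,
    # 1 on the antidiagonal just below it.
    M = np.zeros((n, n), dtype=complex)
    for i in range(n):
        M[i, n - 1 - i] = lam
        if i >= 1:
            M[i, n - i] = 1.0
    return M

def Del(n):
    # Delta_n: the antidiagonal identity (flip) matrix.
    return np.fliplr(np.eye(n))

def F(n):  # n x (n+1): identity followed by a zero column
    return np.eye(n, n + 1)

def G(n):  # n x (n+1): zero column followed by identity
    return np.eye(n, n + 1, k=1)

def L_pair(n):
    # The pair L_n of (can3).
    Z1, Z2 = np.zeros((n + 1, n + 1)), np.zeros((n, n))
    return (np.block([[Z1, F(n).T], [F(n), Z2]]),
            np.block([[Z1, G(n).T], [G(n), Z2]]))

H = (Del(3), Lam(3, 2.0))   # H_3(2), see (can)
K = (Lam(3, 0.0), Del(3))   # K_3, see (can2)
for A, B in (H, K, L_pair(2)):
    assert np.allclose(A, A.T) and np.allclose(B, B.T)
\end{verbatim}
The assertions simply confirm that each canonical block is a pair of
symmetric matrices.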
\subsection{The main
theorem in terms of
holomorphic functions}
Let $(A,B)$ be a given
pair of $n \times n$ symmetric
matrices. For all pairs of symmetric matrices
$(A+E,B+E')$ that are close to $(A,B)$,
we give their normal form ${\cal A}(E,E')$ with respect to congruence
transformations
\begin{equation} \label{trans}
(A+E,B+E') \mapsto {\cal S}(E,E')^T(A+E,B+E') {\cal S}(E,E'),
\end{equation}
in which ${\cal S}(E,E')$ is holomorphic at 0 (i.e., its
entries are power series in the entries of $E$ and $E'$
that are convergent in a neighborhood of 0) and
$ {\cal S}(0,0)$ is a nonsingular matrix.
Since ${\cal A}(0,0)=
{\cal S}(0,0)^T(A,B)
{\cal S}(0,0)$, we can
take ${\cal A}(0,0)$
equalling the
congruence canonical
form $(A,B)_{\text{\rm
can}}$ of $(A,B)$.
Then
\begin{equation}
\label{ksy} {\cal
A}(E,E')=
(A,B)_{\text{\rm
can}}+{\cal D}(E,E'),
\end{equation}
where ${\cal
D}(E,E')$ is a pair of
matrices that are
holomorphic at $0$ and
${\cal D}(0,0)=(0,0)$.
In the next theorem we
obtain ${\cal
D}(E,E')$ with the
minimal number of
nonzero entries that
can be attained by
using transformations
\eqref{trans}.
We use the following
notation:
$\bullet$ $0_{mn}$ is
the $m \times n$ zero
matrix;
$\bullet$
$0_{mn \ast}$
is the $m \times n$
matrix
$
\begin{bmatrix}
& && 0\\
&0_{m-1,n-1}&& \vdots\\
& && 0\\
0&\ldots&0&*
\end{bmatrix};
$
$\bullet$
$0_{mn}^{\leftarrow}$
is the
$m \times n$ matrix
$
\begin{bmatrix}
* & \\
\vdots & 0_{m,n-1} \\
* &
\end{bmatrix};
$
$\bullet$
$0_{mn}^{\rightarrow}$ is the
$m \times n$ matrix $
\begin{bmatrix}
& * \\
0_{m,n-1} & \vdots \\
& *
\end{bmatrix};
$
$\bullet$
$0_{mn}^{\righthalfcap}$
is the $m \times n$
matrix
$$
\begin{bmatrix}
* & \ldots & *\\
&0_{m-1,n-1}& \vdots\\
&&*
\end{bmatrix}\quad
\text{or}
\begin{bmatrix}
* & & \\
\vdots&0_{m-1,n-1}& \\
*&\ldots & *
\end{bmatrix};
$$
$\bullet$
$0_{nn}^{\nwarrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwarrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwarrow$}}}$
is the $n \times n$
matrix (here and after unspecified entries are zeros)
$$
\left[\begin{array}{c|c}
\begin{matrix} *&*& \\
*& \ddots& \ddots \\
&\ddots&*
\end{matrix}& \begin{matrix}&&\\
&&\\
*&&
\end{matrix}\\
\hline
\begin{matrix}&&&*\\
&&&\\
&&&
\end{matrix}&\\
\end{array}\right] \quad
\text{when $n$ is even and}
$$ $$
\left[\begin{array}{c|c|ccc}
\begin{matrix} *&*& \\
*& \ddots& \ddots \\
&\ddots&*
\end{matrix}& \begin{matrix}\\
\\
*
\end{matrix}&&&\\
\hline
\begin{matrix}&&&*
\end{matrix}&*&&&\\
\hline
&&&&\\
&&&&\\
&&&&\\
\end{array}\right] \quad
\text{when $n$ is odd};
$$
$\bullet$
$0_{nn}^{\nwsearrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwsearrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwsearrow$}}}$
is the $n \times n$
matrix
$$
\begin{bmatrix}
*& *& & \\
*& *& \ddots & \\
& \ddots& \ddots & * \\
&&*&*
\end{bmatrix};
$$
$\bullet$ ${\cal
Q}_{nm}$ with $n < m$
is the $n \times m$
matrix
\begin{equation*}\label{hui}
\begin{bmatrix}
\begin{matrix}
0&\dots& 0\\
\vdots&& \vdots
\end{matrix} &0\\
\begin{matrix}
0& \dots&0\end{matrix}&
\begin{matrix}
*\ \dots\ * \ 0\
\end{matrix}
\end{bmatrix}\qquad(\text{$m-n$
stars});
\end{equation*}
when $n\geq m$, then ${\cal Q}_{nm}=0$.
Further, we will usually omit the indices $m$ and $n$.
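For instance (our reading of the above display), for $n=2$ and $m=4$,
\begin{equation*}
{\cal Q}_{24}=
\begin{bmatrix}
0&0&0&0\\
0&*&*&0
\end{bmatrix}
\qquad(m-n=2\ \text{stars}).
\end{equation*}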
Our main result is the
following theorem,
which we reformulate
in a more abstract
form in Theorem
\ref{teojy}.
\begin{theorem}\label{teo2}
Let
\begin{equation}\label{gto}
(A,B)_{\text{\rm
can}}=X_1\oplus\dots
\oplus X_t
\end{equation}
be a canonical pair of
symmetric complex
matrices for
congruence, in which
$X_1,\dots,X_t$ are
pairs of the form
\eqref{can}--\eqref{can3}.
Its simplest
miniversal deformation
can be taken in the
form $(A,B)_{\text{\rm
can}} +{\cal D}$ in
which ${\cal D}$ is a
$(0,*)$ matrix pair
(the stars denote
independent
parameters, up to
symmetry; see Remark \ref{indep}) whose
matrices are
partitioned into
blocks conformally to
the decomposition
\eqref{gto}:
\begin{equation}\label{grsd}
{\cal D} = \left(
\begin{bmatrix}
{\cal
D}_{11}&\dots&{\cal
D}_{1t}
\\
\vdots&\ddots&\vdots\\
{\cal
D}_{t1}&\dots&{\cal
D}_{tt}
\end{bmatrix},
\begin{bmatrix}
{\cal
D}'_{11}&\dots&{\cal
D}'_{1t}
\\
\vdots&\ddots&\vdots\\
{\cal
D}'_{t1}&\dots&{\cal
D}'_{tt}
\end{bmatrix}
\right)
\end{equation}
These blocks are
defined as follows.
Write
\begin{equation}\label{lhsd}
{\cal D}(X_i):=({\cal D}_{ii},{\cal D}'_{ii})
\end{equation}
and
\begin{equation} \label{lhs}
{\cal D}(X_i,X_j) :=(({\cal D}_{ij},{\cal D}'_{ij}),({\cal D}_{ji},{\cal D}'_{ji}))\quad \text{if }i<j.
\end{equation}
(Recall that $(({\cal D}_{ij},{\cal D}'_{ij}),({\cal D}_{ji},{\cal D}'_{ji}))=(({\cal D}_{ij},{\cal D}'_{ij}),({\cal D}_{ij}^T,{\cal D}_{ij}^{'T}))$; hence we drop the second pair from the notation.)
Then:

{\rm(i)} The diagonal blocks of ${\cal D}$ are defined by
\begin{equation}\label{Hdef}
{\cal
D}(H_n(\lambda))=
\left( 0, 0^{\nwarrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwarrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwarrow$}}} \right)
\end{equation}
\begin{equation} \label{Kdef}
{\cal D} (K_n)=\left(
0^{\nwarrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwarrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwarrow$}}}, 0 \right)
\end{equation}
\begin{equation}\label{Ldef}
{\cal D}(L_n)=\left(
\begin{bmatrix}
0_{*}&0
\\ 0&0
\end{bmatrix} ,
\begin{bmatrix}
0^{\nwsearrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwsearrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwsearrow$}}}&0
\\ 0&0
\end{bmatrix}
\right).
\end{equation}
{\rm(ii)} The
off-diagonal blocks of
${\cal D}$ whose
horizontal and
vertical strips
contain summands of
$(A,B)_{\text{\rm
can}}$ of the same
type are defined by
\begin{equation}\label{lsiu1}
{\cal D}
(H_n(\lambda),\,
H_m(\mu))
=
\begin{cases}
(0,\:0) &\text{if
$\lambda\ne\mu$} \\
\left(0,0^{\nwarrow}\right)
&\text{if $\lambda=\mu$}
\end{cases}
\end{equation}
\begin{equation}\label{lsiu2}
{\cal D} (K_n,K_m)=
\left( 0^{\nwarrow} , 0 \right)
\end{equation}
\begin{equation}\label{lsiu3}
{\cal D}
(L_n,L_m)=\left(
\begin{bmatrix}
0_{\ast}&0
\\ 0&0
\end{bmatrix} ,
\begin{bmatrix}
0^{\righthalfcap}&{\cal Q}
\\ {\cal Q}^T&0
\end{bmatrix}
\right).
\end{equation}
{\rm(iii)} The
off-diagonal blocks of
${\cal D}$ whose
horizontal and
vertical strips
contain summands of
$(A,B)_{\text{\rm
can}}$ of different
types are defined by:
\begin{equation}\label{kut}
{\cal D}
(H_n(\lambda),K_m)=(0,0)
\end{equation}
\begin{equation}\label{hnlm}
{\cal D}
(H_n(\lambda),L_m)=\left(
0,0^{\leftarrow} \right)
\end{equation}
\begin{equation}\label{ktlm}
{\cal D}
(K_n,L_m)=\left(
\begin{bmatrix}
0^{\rightarrow}&0
\end{bmatrix} ,
0
\right).
\end{equation}
\end{theorem}
\begin{remark}[On the independence of the parameters] \label{indep}
The matrix pair ${\cal D}$ is symmetric. This means that each ${\cal D}_{ij}$ with $i<j$ and each ${\cal D}'_{ij}$ with $i<j$ consists of independent parameters, whereas only $\frac{(n+1)n}{2}$ of the parameters of ${\cal D}_{ii}$ and of ${\cal D}'_{ii}$ (both are $n \times n$ matrices) are independent; that is, all parameters in the upper triangular parts of the matrices of ${\cal D}$, including the main diagonals, are independent.
\end{remark}
The matrix pair $\cal
D$ from Theorem
\ref{teo2} will be
constructed in Section
\ref{sur} as follows.
The vector space
\begin{equation*}\label{msi}
V:=\{C^T(A,B)_{\text{\rm
can}}
+(A,B)_{\text{\rm
can}}C\,|\,
C\in{\mathbb
C}^{n\times n}\}
\end{equation*}
is the tangent space
to the congruence
class of
$(A,B)_{\text{\rm
can}}$ at the point
$(A,B)_{\text{\rm
can}}$ since
\[
(I+\varepsilon
C)^T(A,B)_{\text{\rm
can}} (I+\varepsilon
C) =(A,B)_{\text{\rm
can}}+ \varepsilon(C^T
(A,B)_{\text{\rm
can}}+
(A,B)_{\text{\rm
can}}C)
\]
\[+\varepsilon^2C^T
(A,B)_{\text{\rm
can}}C
\]
for all $n \times n$
matrices $C$ and each
$\varepsilon\in\mathbb
C$. Then $\cal D$
satisfies the
following condition:
\begin{equation}\label{jyr}
{\mathbb C}^{\,n\times
n}_{s}\times{\mathbb
C}^{\,n\times n}_{s}=V
\oplus {\cal
D}({\mathbb C})
\end{equation}
in which ${\mathbb C}^{\,n\times n}_{s}$ is the space of all $n \times n$ symmetric matrices, and ${\cal D}({\mathbb C})$ is the vector space of all matrix pairs obtained from $\cal D$ by replacing its stars by complex numbers.
Thus, the number of
stars in $\cal D$ is
equal to the
codimension of the
congruence class of
$(A,B)_{\text{\rm
can}}$. Lemma
\ref{t2.1} from the
next section ensures
that any matrix pair
with entries $0$ and
$*$ that satisfies
\eqref{jyr} can be
taken as $\cal D$ in
Theorem \ref{teo2}.
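As a minimal sanity check (ours), take $(A,B)_{\text{\rm can}}=H_1(\lambda)=([\,1\,],[\,\lambda\,])$. Then $V=\{(2c,2\lambda c)\,|\,c\in\mathbb C\}$ is one-dimensional, and \eqref{jyr} holds with ${\cal D}=(0,[\,*\,])$ because
\begin{equation*}
\det\begin{bmatrix}
2&0\\
2\lambda&1
\end{bmatrix}=2\neq 0,
\end{equation*}
so the congruence class of $H_1(\lambda)$ has codimension $1$, in agreement with the single independent parameter in ${\cal D}(H_1(\lambda))=(0,[\,*\,])$ given by \eqref{Hdef}.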
\subsection{The main
theorem in terms of
miniversal
deformations}
\label{sect2}
The notion of a miniversal deformation of a matrix with respect to similarity was given by V.~I.~Arnold \cite{arn} (see also \cite[\S\,30B]{arn3}). This notion is easily extended to matrix pairs with respect to congruence.
A \emph{deformation} of a pair of $n\times n$ matrices $(A,B)$ is a holomorphic mapping ${\cal A}$ from a neighborhood $\Lambda\subset \mathbb C^k$ of $\vec 0=(0,\dots,0)$ to the space of pairs of $n\times n$ matrices such that ${\cal A}(\vec 0)=(A,B)$.
Let ${\cal A}$ and
${\cal B}$ be two
deformations of
$(A,B)$ with the same
parameter space
$\mathbb C^k$. Then
${\cal A}$ and ${\cal
B}$ are considered as
\emph{equal} if they
coincide on some
neighborhood of $\vec
0$ (this means that
each deformation is a
germ); ${\cal A}$ and
${\cal B}$ are called
\emph{equivalent} if
the identity matrix
$I_n$ possesses a
deformation ${\cal I}$
such that
\begin{equation}\label{kft}
{\cal B}(\vec\lambda)=
{\cal
I}(\vec\lambda)^{T}
{\cal A}(\vec\lambda)
{\cal I}(\vec\lambda)
\end{equation}
for all
$\vec\lambda=(\lambda_1,\dots,
\lambda_k)$ in some
neighborhood of $\vec
0$.
\begin{definition}\label{d}
A deformation ${\cal
A}(\lambda_1,\dots,\lambda_k)$
of a matrix pair
$(A,B)$ is called
\emph{versal} if every
deformation ${\cal
B}(\mu_1,\dots,\mu_l)$
of $(A,B)$ is
equivalent to a
deformation of the
form ${\cal
A}(\varphi_1(\vec\mu),\dots,
\varphi_k(\vec\mu)),$
where all
$\varphi_i(\vec\mu)$
are convergent in a
neighborhood of $\vec
0$ power series such
that $\varphi_i(\vec
0)=0$. A versal
deformation ${\cal
A}(\lambda_1,\dots,\lambda_k)$
of $(A,B)$ is called
\emph{miniversal} if
there is no versal
deformation having
less than $k$
parameters.
\end{definition}
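For example (an illustration of ours), ${\cal A}(\varepsilon)=([\,1\,],[\,\lambda+\varepsilon\,])$ is a miniversal deformation of $H_1(\lambda)$ with $k=1$: every deformation $([\,1+e(\vec\mu)\,],[\,\lambda+e'(\vec\mu)\,])$ is equivalent, via ${\cal I}(\vec\mu)=[\,(1+e(\vec\mu))^{-1/2}\,]$, to
\begin{equation*}
\Big(\,[\,1\,],\Big[\,\frac{\lambda+e'(\vec\mu)}{1+e(\vec\mu)}\,\Big]\Big)
={\cal A}\Big(\frac{\lambda+e'(\vec\mu)}{1+e(\vec\mu)}-\lambda\Big),
\end{equation*}
and no versal deformation with $k=0$ exists, since the ratio of the entries of a pair with nonzero first entry is a congruence invariant that varies under deformation.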
By a \emph{$(0,*)$
matrix pair} we mean a
pair $\cal D$ of
matrices whose entries
are $0$ and $*$. We
say that a matrix pair
\emph{is of the form}
$\cal D$ if it can be
obtained from $\cal D$
by replacing the stars
with complex numbers.
Denote by ${\cal D}({\mathbb C})$ the space of all matrix pairs of the form $\cal D$, and by ${\cal D}(\vec {\varepsilon})$ the pair of parametric matrices obtained from ${\cal D}$ by replacing each $(i,j)$ star of the first matrix with a parameter ${\varepsilon}_{ij}$ and each $(i,j)$ star of the second matrix with a parameter $f_{ij}$.
This means that
\begin{equation}\label{a2z}
{\cal D}(\mathbb C):=
\Big(\bigoplus_{(i,j)\in{\cal
I}_1({\cal D})}
{\mathbb C}
E_{ij}\Big)\times\Big(
\bigoplus_{(i,j)\in{\cal
I}_2({\cal D})}
{\mathbb C}
E_{ij}\Big),
\end{equation}
\begin{equation}
{\cal D}(\vec
{\varepsilon}):=
\Big(\sum_{(i,j)\in{\cal
I}_1({\cal D})}
\varepsilon_{ij}E_{ij},
\sum_{(i,j)\in{\cal
I}_2({\cal D})}
f_{ij}E_{ij}\Big),
\end{equation}
where
\begin{equation}\label{a2za}
{\cal I}_1({\cal
D}),{\cal I}_2({\cal
D})\subseteq
\{1,\dots,n\}\times
\{1,\dots,n\}
\end{equation}
are the sets of
indices of the stars
in the first and
second matrices,
respectively, of the
pair ${\cal D}$, and
$E_{ij}$ is the
elementary matrix
whose $(i,j)$th entry
is $1$ and the others
are $0$.
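For instance (an illustration of ours), if
\begin{equation*}
{\cal D}=\left(
\begin{bmatrix}
*&0\\
0&0
\end{bmatrix},
\begin{bmatrix}
0&0\\
0&*
\end{bmatrix}
\right),
\qquad\text{then}\qquad
{\cal I}_1({\cal D})=\{(1,1)\},\quad
{\cal I}_2({\cal D})=\{(2,2)\},\quad
{\cal D}(\vec{\varepsilon})=(\varepsilon_{11}E_{11},\,f_{22}E_{22}).
\end{equation*}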
We say that a
miniversal deformation
of $(A,B)$ is
\emph{simplest} if it
has the form
$(A,B)+{\cal D}(\vec
{\varepsilon})$, where
$\cal D$ is a $(0,*)$
matrix pair. If $\cal
D$ has no zero
entries, then it
defines the
deformation
\begin{equation}\label{edr}
{\cal U}(\vec{\varepsilon}):=
\Big(A+\sum_{i,j=1}^n \varepsilon_{ij}E_{ij},\
B+\sum_{i,j=1}^n f_{ij}E_{ij}\Big).
\end{equation}
Since each matrix pair
is congruent to its
canonical matrix pair,
it suffices to
construct miniversal
deformations of
canonical matrix pairs
(a direct sum of the
summands \eqref{can}-\eqref{can3}). These
deformations are given
in the following
theorem, which is a
stronger form of
Theorem \ref{teo2}.
\begin{theorem}\label{teojy}
A simplest miniversal
deformation of the
canonical matrix pair
$(A,B)_{\text{\rm
can}}$ of
symmetric
matrices for
congruence can be
taken in the form
$(A,B)_{\text{\rm
can}} + {\cal D}(\vec
{\varepsilon})$, where
$\cal D$ is the $(0,*)$
matrix partitioned into
blocks ${\cal D}_{ij}$ as in
\eqref{grsd} that are defined
by \eqref{Hdef} - \eqref{ktlm}
in the notation \eqref{lhsd} -
\eqref{lhs}.
\end{theorem}
\section{Proof of the main theorem}
\label{sur}
\subsection{A method
of construction of
miniversal
deformations}
Now we give a method
of construction of
simplest miniversal
deformations, which
will be used in the
proof of Theorem
\ref{teojy}.
The deformation
\eqref{edr} is
universal in the sense
that every deformation
${\cal
B}(\mu_1,\dots,\mu_l)$
of $(A,B)$ has the form
${\cal
U}(\vec{\varphi}
(\mu_1,\dots,\mu_l)),$
where
$\varphi_{ij}(\mu_1,\dots,\mu_l)$
are convergent in a
neighborhood of $\vec
0$ power series such
that
$\varphi_{ij}(\vec 0)=
0$. Hence every
deformation ${\cal
B}(\mu_1,\dots,\mu_l)$
in Definition \ref{d}
can be replaced by
${\cal U}(\vec
{\varepsilon})$, which
proves the following
lemma.
\begin{lemma}\label{lem}
The following two
conditions are
equivalent for any
deformation ${\cal
A}(\lambda_1,\dots,\lambda_k)$
of a pair of matrices $(A,B)$:
\begin{itemize}
\item[\rm(i)]
The deformation ${\cal
A}(\lambda_1,\dots,\lambda_k)$
is versal.
\item[\rm(ii)]
The deformation
\eqref{edr} is
equivalent to ${\cal
A}(\varphi_1(\vec{\varepsilon}),\dots,
\varphi_k(\vec{\varepsilon}))$
in which all
$\varphi_i(\vec{\varepsilon})$
are convergent in a
neighborhood of\/
$\vec 0$ power series
such that
$\varphi_i(\vec 0)=0$.
\end{itemize}
\end{lemma}
For a pair of $n$-by-$n$ symmetric matrices $(A,B)$, we define
\begin{equation}
\label{eelie}
T(A,B):=\{C^T(A,B)+(A,B)C\,|\,C\in{\mathbb
C}^{n\times n}\}.
\end{equation}
If $U$ is a subspace
of a vector space $V$,
then each set $v+U$
with $v\in V$ is
called a \emph{coset
of \/$U$ in $V$}.
\begin{lemma}
\label{t2.1}
Let $(A,B)\in ({\mathbb
C}^{\,n\times n}_s, {\mathbb
C}^{\,n\times n}_s)$ and
let $\cal D$ be a pair of
$(0,*)$-matrices of size
$n\times n$. The
following are
equivalent:
\begin{itemize}
\item[\rm(i)]
The deformation $(A,B)+{\cal D}(\vec{\varepsilon})$, where ${\cal D}(\vec{\varepsilon})$ is defined after \eqref{a2z}, is miniversal.
\item[\rm(ii)]
The vector space
$({\mathbb
C}^{\,n\times n}_s, {\mathbb
C}^{\,n\times n}_s)$
decomposes into the
direct sum
\begin{equation}\label{a4}
({\mathbb C}^{\,n\times
n}_s, {\mathbb
C}^{\,n\times n}_s)=T(A,B)
\oplus {\cal
D}({\mathbb C}).
\end{equation}
\item[\rm(iii)]
Each coset of
$T(A,B)$
in $({\mathbb
C}^{\,n\times n}_s, {\mathbb
C}^{\,n\times n}_s)$
contains exactly one
matrix of the form
${\cal D}$.
\end{itemize}
\end{lemma}
\begin{proof}
Define the action of
the group
$GL_n(\mathbb C)$ of
nonsingular $n$-by-$n$
matrices on the space
$({\mathbb C}^{\,n\times n}_s,{\mathbb C}^{\,n\times n}_s)$ by
\[
(A,B)^S=S^T (A,B)S,\qquad
(A,B)\in ({\mathbb C}^{\,n\times n}_s,{\mathbb C}^{\,n\times n}_s),\quad
S\in GL_n(\mathbb C).
\]
The orbit $(A,B)^{GL_n}$
of $(A,B)$ under this
action consists of all pairs
of symmetric
matrices that are
congruent to the pair $(A,B)$.
The space $T(A,B)$
is the tangent space
to the orbit
$(A,B)^{GL_n}$ at the
point $(A,B)$ since
\begin{multline*}
(A,B)^{I+\varepsilon C}=
(I+\varepsilon
C)^T(A,B)(I+\varepsilon C) \\
=(A,B)+ \varepsilon(C^T (A,B)+
(A,B)C) +\varepsilon^2C^T
(A,B)C
\end{multline*}
for all $n$-by-$n$
matrices $C$ and
$\varepsilon\in\mathbb
C$. Hence ${\cal
D}(\vec
{\varepsilon})$ is
transversal to the
orbit $(A,B)^{GL_n}$ at
the point $(A,B)$ if
\[
({\mathbb C}^{\,n\times
n}_s,{\mathbb
C}^{\,n\times n}_s)=T(A,B) +
{\cal D}({\mathbb C})
\]
(see definitions in
\cite[\S\,29E]{arn3};
two subspaces of a
vector space are
called
\emph{transversal} if
their sum is equal to
the whole space).
This proves the
equivalence of (i) and
(ii) since a
transversal (of the
minimal dimension) to
the orbit is a
(mini)versal
deformation
\cite[Section
1.6]{arn2}. The
equivalence of (ii)
and (iii) is obvious.
\end{proof}
Recall that versality
of each deformation
$(A,B)+{\cal D}(\vec
{\varepsilon})$ in
which ${\cal D}$
satisfies \eqref{a4} means
that there exist a
deformation ${\cal
I}(\vec
{\varepsilon})$ of the
identity matrix such
that ${\cal
D}(\vec{\varepsilon})=
{\cal
I}(\vec{\varepsilon})^{T}
{\cal
U}(\vec{\varepsilon})
{\cal
I}(\vec{\varepsilon})$,
where ${\cal U}(\vec
{\varepsilon})$ is
defined in
\eqref{edr}.
Thus, a simplest
miniversal deformation
of $(A,B)\in ({\mathbb
C}^{\,n\times n}_s, {\mathbb
C}^{\,n\times n}_s)$ can
be constructed as
follows. Let
$(T_1,\dots,T_r)$ be a
basis of the space
$T (A,B)$,
and let
$(E_1,\dots,E_{\frac{n(n+1)}{2}};F_1,\dots,F_{\frac{n(n+1)}{2}})$ be
the basis of $({\mathbb
C}^{\,n\times n}_s, {\mathbb
C}^{\,n\times n}_s)$
consisting of the pairs $(E_{ij},0)$ and $(0,F_{ij})$, where $E_{ij}$ and $F_{ij}$ run over a basis of ${\mathbb C}^{\,n\times n}_s$. Removing
from the sequence
$(T_1,\dots,T_r,E_1,\dots,E_{\frac{n(n+1)}{2}},F_1,\dots,F_{\frac{n(n+1)}{2}})$
every pair that is a
linear combination of
the preceding ones, we obtain a
new basis \newline $(T_1,\dots,
T_r,
E_{i_1},\dots,E_{i_k},
F_{j_1},\dots,F_{j_m})$
of the space $({\mathbb
C}^{\,n\times n}_s, {\mathbb
C}^{\,n\times n}_s)$. By
Lemma \ref{t2.1}, the
deformation
\[
{\cal
A}(\varepsilon_1,\dots,
\varepsilon_k,f_1, \dots, f_m)=
(A+\varepsilon_1
E_{i_1}+\dots+\varepsilon_kE_{i_k},\ B+f_1F_{j_1}+ \dots + f_mF_{j_m})
\]
is miniversal.
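We illustrate this procedure on the smallest example (a check of ours). Let $(A,B)=H_1(0)=([\,1\,],[\,0\,])$. Then $T(A,B)=\{([\,2c\,],[\,0\,])\,|\,c\in\mathbb C\}$ has the basis $T_1=([\,2\,],[\,0\,])$. In the sequence $(T_1,E_1,F_1)$ with $E_1=([\,1\,],[\,0\,])$ and $F_1=([\,0\,],[\,1\,])$, the pair $E_1=\frac12 T_1$ is removed, which leaves the basis $(T_1,F_1)$. Hence
\begin{equation*}
{\cal A}(f_1)=([\,1\,],[\,f_1\,])
\end{equation*}
is a simplest miniversal deformation of $H_1(0)$, in agreement with ${\cal D}(H_1(0))=(0,[\,*\,])$ given by \eqref{Hdef}.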
For each pair of $m\times m$
symmetric matrices $(M,N)$ and each pair
$n\times n$ symmetric matrices
$(L,P)$, define the vector
spaces
\begin{equation}\label{neh}
V(M,N):=\{ S^T(M,N)+(M,N)S\,|\,S\in {\mathbb C}^{m\times m} \}
\end{equation}
and
\begin{equation} \label{neh1}
V((M,N),(L,P)):=\{( R^T(L,P)+(M,N)S,\ S^T(M,N)+(L,P)R)\,|\, S\in {\mathbb C}^{m\times n},\ R\in {\mathbb C}^{n\times m} \}.
\end{equation}
\begin{lemma}\label{thekd}
Let
$(A,B)=(A_1,B_1)\oplus\dots\oplus
(A_t, B_t)$ be a
block-diagonal matrix
in which every $(A_i,B_i)$
is $n_i\times n_i$.
Let $\cal D$ be a pair of
$(0,*)$-matrices having
the size of $(A,B)$.
Partition it into
blocks $({\cal D}_{ij},{\cal D}'_{ij})$
conformably to the
partition of $(A,B)$
$($see
\eqref{grsd}$)$. Then
$(A,B)+{\cal D}(\vec{\varepsilon})$ is a
simplest miniversal
deformation of $(A,B)$ for
congruence if and only
if
\begin{itemize}
\item[\rm(i)]
every coset of\/
$V(A_i,B_i)$ in $({\mathbb
C}^{n_i\times n_i}_s, {\mathbb
C}^{n_i\times n_i}_s)$
contains exactly one
matrix of the form
$({\cal D}_{ii},{\cal D}'_{ii})$, and
\item[\rm(ii)]
every coset of $V((A_i,B_i),(A_j,B_j))$ with $i<j$ in $({\mathbb C}^{n_i\times n_j}, {\mathbb C}^{n_i\times n_j}) \oplus ({\mathbb C}^{n_j\times n_i}, {\mathbb C}^{n_j\times n_i})$ contains exactly one pair of matrix pairs $((Q_1,Q_2),(W_1,W_2))$ in which $(Q_1,Q_2)$ is of the form $({\cal D}_{ij},{\cal D}'_{ij})$ and $(W_1,W_2)$ is of the form $({\cal D}_{ji},{\cal D}'_{ji})=({\cal D}_{ij}^T,{\cal D}_{ij}^{'T})$.
\end{itemize}
\end{lemma}
\begin{proof}
By Lemma
\ref{t2.1}(iii),
$(A,B)+{\cal D}(\vec
{\varepsilon})$ is a
simplest miniversal
deformation of $(A,B)$ if
and only if for each
$(C,C')\in({\mathbb
C}^{n\times n}_s,{\mathbb
C}^{\,n\times n}_s)$ the
coset $(C,C')+T(A,B)$
contains exactly one
$(D,D')$ of the form ${\cal
D}$; that is, exactly
one
\begin{equation}\label{kid}
(D,D')=(C,C')+S^T(A,B)+(A,B)S\in{\cal
D}(\mathbb C)\qquad
\text{with
$S\in{\mathbb
C}^{n\times n}$.}
\end{equation}
Partition $(D,D'),\ (C,C')$, and
$S$ into blocks
conformably to the
partition of $(A,B)$. By
\eqref{kid}, for each
$i$ we have
$(D_{ii},D'_{ii})=(C_{ii},C'_{ii})+
S_{ii}^T(A_{i},B_{i})
+(A_{i},B_{i})S_{ii}$, and
for all $i$ and $j$
such that $i<j$ we
have
\begin{multline}\label{mht}
\left(
\begin{bmatrix}
D_{ii}&D_{ij}
\\ D_{ji}&D_{jj}
\end{bmatrix},
\begin{bmatrix}
D'_{ii}&D'_{ij}
\\ D'_{ji}&D'_{jj}
\end{bmatrix}
\right)
=
\left(
\begin{bmatrix}
C_{ii}&C_{ij}
\\ C_{ji}&C_{jj}
\end{bmatrix},
\begin{bmatrix}
C'_{ii}&C'_{ij}
\\ C'_{ji}&C'_{jj}
\end{bmatrix}
\right) \\
+ \begin{bmatrix}
S_{ii}^T&S_{ji}^T
\\ S_{ij}^T&S_{jj}^T
\end{bmatrix}
\left(
\begin{bmatrix}
A_i&0
\\ 0& A_j
\end{bmatrix},
\begin{bmatrix}
B_i&0
\\ 0& B_j
\end{bmatrix}
\right)
+
\left(
\begin{bmatrix}
A_i&0
\\ 0& A_j
\end{bmatrix},
\begin{bmatrix}
B_i&0
\\ 0& B_j
\end{bmatrix}
\right)
\begin{bmatrix}
S_{ii}&S_{ij}
\\ S_{ji}&S_{jj}
\end{bmatrix}.
\end{multline}
Thus, \eqref{kid} is
equivalent to the
conditions
\begin{multline}\label{djh}
(D_{ii},D'_{ii})=(C_{ii},C'_{ii})
+ S_{ii}^T(A_i,B_i)+(A_i,B_i)S_{ii}\in{\cal
D}_{ii}(\mathbb
C), (1\le i\le t)
\end{multline}
\begin{multline}\label{djhh}
((D_{ij},D'_{ij}),(D_{ji},D'_{ji}))=
((C_{ij},C'_{ij}), (C_{ji},C'_{ji})) + \\
((S_{ji}^TA_j+A_iS_{ij},S_{ji}^TB_j+B_iS_{ij}),
(S_{ij}^TA_i+A_jS_{ji},S_{ij}^TB_i+B_jS_{ji}))
\in {\cal
D}_{ij}(\mathbb
C)\oplus {\cal
D}_{ji}(\mathbb C)
\\ (1\le i<j\le t)
\end{multline}
Hence for each
$(C,C')\in ({\mathbb
C}^{n\times n}_s,{\mathbb
C}^{n\times n}_s)$ there
exists exactly one
$(D,D')\in{\cal D}$ of the
form \eqref{kid} if
and only if
\begin{itemize}
\item[(i$'$)]
for each
$(C_{ii},C'_{ii})\in({\mathbb
C}^{n_i\times n_i}_s,{\mathbb
C}^{n_i\times n_i}_s)$
there exists exactly
one $(D_{ii},D'_{ii})\in{\cal
D}_{ii}$ of the form
\eqref{djh}, and
\item[(ii$'$)]
for each $((C_{ij},C'_{ij}),
(C_{ji},C'_{ji}))\in ({\mathbb
C}^{n_i\times
n_j},{\mathbb
C}^{n_i\times
n_j})\oplus ({\mathbb
C}^{n_j\times n_i},{\mathbb
C}^{n_j\times n_i})$
there exists exactly
one
$((D_{ij},D'_{ij}),(D_{ji},D'_{ji}))\in
{\cal D}_{ij}(\mathbb
C)\oplus {\cal
D}_{ji}(\mathbb C)$ of
the form \eqref{djhh}.
\end{itemize}
This proves the lemma.
\end{proof}
\begin{corollary}\label{the}
In the notation of
Lemma \ref{thekd},
$(A,B)+{\cal D}(\vec
{\varepsilon})$ is a
miniversal deformation
of $(A,B)$ if and only if
each submatrix of the
form
\begin{equation*}\label{a8}
\left(
\begin{bmatrix}
A_i+{\cal D}_{ii}(\vec
{\varepsilon}) &
{\cal D}_{ij}(\vec
{\varepsilon})\\
{\cal D}_{ji}(\vec
{\varepsilon}) &A_j+
{\cal D}_{jj}(\vec
{\varepsilon})
\end{bmatrix}
\begin{bmatrix}
B_i+{\cal D}'_{ii}(\vec
{\varepsilon}) &
{\cal D}'_{ij}(\vec
{\varepsilon})\\
{\cal D}'_{ji}(\vec
{\varepsilon}) &B_j+
{\cal D}'_{jj}(\vec
{\varepsilon})
\end{bmatrix}
\right),\qquad i<j
\end{equation*}
is a miniversal
deformation of the pair
$(A_i\oplus A_j,B_i\oplus B_j)$.
\end{corollary}
We now start to prove Theorem \ref{teo2}. Each $X_i$ in \eqref{gto} is of the form $H_n(\lambda)$, $K_n$, or $L_n$, and so there are 9 types of pairs ${\cal D}(X_i)$ and ${\cal D}(X_i,X_j)$ with $i<j$; they are given by \eqref{Hdef}--\eqref{ktlm}. It suffices to prove that the pairs \eqref{Hdef}--\eqref{ktlm} satisfy the conditions (i) and (ii) of Lemma \ref{thekd}.
\subsection{Diagonal blocks
of matrices of $\cal
D$}
First we verify that
the diagonal blocks of
$\cal D$ defined in
part (i) of Theorem
\ref{teo2} satisfy the
condition (i) of Lemma
\ref{thekd}.
\subsubsection{Diagonal blocks
${\cal D}(H_{n}(\lambda))$ and ${\cal D}(K_{n})$}
\label{dhndkn}
First consider the block $H_{n}(\lambda)$.
The deformation of $K_{n}$ equals the deformation of $H_{n}(\lambda)$ with $\lambda =0$,
up to interchanging the two matrices of the pair.
Due to Lemma
\ref{thekd}(i), it
suffices to prove that
each pair of symmetric
$n$-by-$n$ matrices
$(A,B)$ can be reduced to
exactly one pair of matrices of
the form \eqref{Hdef} (or, respectively \eqref{Kdef})
by adding
$$\Delta (A,B)=C^T (\Delta_n , \Lambda_n(\lambda))
+(\Delta_n , \Lambda_n(\lambda))C =
(C^T\Delta_n + \Delta_n C ,C^T \Lambda_n(\lambda) +
\Lambda_n(\lambda) C)$$
in which
$C$ is an arbitrary
$n$-by-$n$ matrix.
Clearly, by adding $C^T\Delta_n + \Delta_n C$ one can reduce $A$ to zero.
To preserve $A$ we must hereafter take $C$ such that $C^T\Delta_n + \Delta_n C=0$.
This means that $C$ is a skew-symmetric matrix with respect to its skew diagonal.
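For example (a check of ours for $n=2$), the condition $C^T\Delta_2+\Delta_2C=0$ forces $C=\begin{bmatrix} c&0\\ 0&-c \end{bmatrix}$, and then
\begin{equation*}
C^T\Lambda_2(\lambda)+\Lambda_2(\lambda)C=
\begin{bmatrix}
0&0\\
0&-2c
\end{bmatrix},
\end{equation*}
so only the lower right entry of $B$ can be removed; this agrees with ${\cal D}(H_2(\lambda))=\left(0,\begin{bmatrix} *&*\\ *&0 \end{bmatrix}\right)$ from \eqref{Hdef}.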
We reduce $B$ by adding
\begin{multline}
\Delta B=C^T \Lambda_n(\lambda) + \Lambda_n(\lambda) C\\
=\begin{bmatrix}
s_{11}& s_{12} & \ldots &s_{1n-2}&s_{1n-1}& 0\\
s_{21}& s_{22} & \ldots &s_{2n-2}&0& -s_{1n-1}\\
s_{31}& s_{32} & \ldots &0&-s_{2n-2}& -s_{1n-2}\\
\vdots&\vdots&\ddd&\vdots& \vdots & \vdots\\
s_{n-11}& 0 & \ldots & -s_{32}& -s_{22}& -s_{12}\\
0& -s_{n-11} & \ldots & -s_{31}& -s_{21}& -s_{11}\\
\end{bmatrix}
\begin{bmatrix}
0&&&&&\lambda\\
&&&&\lambda&1\\
&&&\lambda&1&\\
&&\ddd&\ddd&&\\
&\lambda&1&&&\\
\lambda&1&&&&0\\
\end{bmatrix} \\
+\begin{bmatrix}
0&&&&&\lambda\\
&&&&\lambda&1\\
&&&\lambda&1&\\
&&\ddd&\ddd&&\\
&\lambda&1&&&\\
\lambda&1&&&&0\\
\end{bmatrix}
\begin{bmatrix}
s_{11}& s_{21} & \ldots &s_{n-21}&s_{n-11}& 0\\
s_{12}& s_{22} & \ldots &s_{n-22}&0& -s_{n-11}\\
s_{13}& s_{23} & \ldots &0&-s_{n-22}& -s_{n-21}\\
\vdots&\vdots&\ddd&\vdots& \vdots & \vdots\\
s_{1n-1}& 0 & \ldots & -s_{23}& -s_{22}& -s_{21}\\
0& -s_{1n-1} & \ldots & -s_{13}& -s_{12}& -s_{11}\\
\end{bmatrix}\\ \label{34hd}
=\begin{bmatrix}
0& 0& s_{n-11}& s_{n-21}& \ldots& s_{31}& s_{21}\\
0& -2s_{n-11}& -s_{n-21}& s_{n-22}-s_{41}& \ldots& s_{32}-s_{21}& s_{22}-s_{11}\\
s_{n-11}& -s_{n-21}& -2s_{n-22}& -s_{42}& \ldots& s_{33}-s_{22}& s_{23}-s_{12}\\
s_{n-21}& s_{n-22}-s_{41}& -s_{42}& -2s_{43}& \ldots & s_{34}-s_{23}& s_{24}-s_{13}\\
\vdots&\vdots&\vdots&\vdots& \ddots&\vdots & \vdots\\
s_{31}& s_{32}-s_{21}& s_{33}-s_{22}& s_{34}-s_{23}& \ldots& -2s_{2n-2}& -s_{1n-2}\\
s_{21}& s_{22}-s_{11}& s_{23}-s_{12}& s_{24}-s_{13}& \ldots& -s_{1n-2}& -2s_{1n-1}
\end{bmatrix}
\end{multline}
Each skew diagonal of $\Delta B$ has its own variables, so we can reduce each skew diagonal of $B$ independently. Starting from the lower
right-hand corner, for each of the $n-1$ skew
diagonals we obtain a system of equations that has a solution by the
Kronecker--Capelli theorem, but for each half of the first
$n$ skew diagonals we obtain a system of equations with the matrix
\begin{equation}
\left(
\begin{matrix}
1&&&& x_1 \\
-1&1&&& x_2 \\
&\ddots&\ddots&& \vdots \\
&&-1&1& x_{k-1} \\
&&&-1& x_{k} \\
\end{matrix}
\right)
\text{for even skew diagonals,}
\end{equation}
\begin{equation}
\left(
\begin{matrix}
1&&&& x_1 \\
-1&1&&& x_2 \\
&\ddots&\ddots&& \vdots \\
&&-1&1& x_{k-1} \\
&&&-2& x_{k} \\
\end{matrix}
\right)
\text{for odd skew diagonals,}
\end{equation}
where $x_1,\dots,x_k$ are the corresponding entries of $B$. The matrix of this system has $k-1$ columns and $k$ rows, where
$1 \leq k \leq [\frac{n}{2}]$ when $n$ is even, and $1 \leq k \leq [\frac{n}{2}]+1$
when $n$ is odd. Hence
its rank is at most $k-1$. But the rank of the extended matrix of
the system is $k$, so by the Kronecker--Capelli theorem this system does not have
a solution. If we drop the first or the last equation of the system,
then it has a solution. We drop the last equation, because
in that case we can set more entries to zero (on the even skew diagonals). The result does not
depend on $\lambda$; therefore $ {\cal D}(H_n(\lambda))=(0,0^{\nwarrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwarrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwarrow$}}})$ and ${\cal D}(K_n)=(0^{\nwarrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwarrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwarrow$}}},0)$.
\subsubsection{Diagonal blocks ${\cal D}(L_{n})$}
\label{dln}
In the same way
(using Lemma \ref{thekd}(i))
we prove that
each pair $(A,B)=\bigg( \begin{bmatrix}
A_{11}&A_{12} \\
A_{21}&A_{22}
\end{bmatrix},
\begin{bmatrix}
B_{11}&B_{12} \\
B_{21}&B_{22}
\end{bmatrix}
\bigg)$ of symmetric $(2n+1)$-by-$(2n+1)$ matrices can be reduced to the form \eqref{Ldef} by adding
\begin{multline}\label{moh}
\Delta (A,B)=\bigg( \begin{bmatrix}
\Delta A_{11}&\Delta A_{12} \\
\Delta A_{21}&\Delta A_{22}
\end{bmatrix},
\begin{bmatrix}
\Delta B_{11}&\Delta B_{12} \\
\Delta B_{21}&\Delta B_{22}
\end{bmatrix}
\bigg)\\ =\begin{bmatrix}
S_{11}^T&S_{21}^T
\\ S_{12}^T&S_{22}^T
\end{bmatrix}
\bigg( \begin{bmatrix}
0&F_n^T \\
F_n&0
\end{bmatrix},
\begin{bmatrix}
0&G_n^T \\
G_n&0
\end{bmatrix}
\bigg)
\\ + \bigg( \begin{bmatrix}
0&F_n^T \\
F_n&0
\end{bmatrix},
\begin{bmatrix}
0&G_n^T \\
G_n&0
\end{bmatrix} \bigg)
\begin{bmatrix}
S_{11}&S_{12}
\\ S_{21}&S_{22}
\end{bmatrix}
\\=
\bigg( \begin{bmatrix}
S_{21}^TF_n+F_n^TS_{21}&
S_{11}^TF_n^T+F_n^TS_{22}\\
S_{22}^TF_n+F_nS_{11}&
S_{12}^TF_n^T+F_nS_{12}
\end{bmatrix}, \\
\begin{bmatrix}
S_{21}^TG_n+G_n^TS_{21}&
S_{11}^TG_n^T+G_n^TS_{22}\\
S_{22}^TG_n+G_nS_{11}&
S_{12}^TG_n^T+G_nS_{12}
\end{bmatrix} \bigg)
\end{multline}
in which
$S=[S_{ij}]_{i,j=1}^2$
is an arbitrary $(2n+1) \times (2n+1)$ matrix.
Each pair of blocks of our pair of matrices is changed independently. The first one is
$(S_{21}^TF_n+F_n^TS_{21},S_{21}^TG_n+G_n^TS_{21})$,
in which $S_{21}$ is an arbitrary $n$-by-$(n+1)$
matrix. Clearly, by adding $\Delta A_{11}=S_{21}^TF_n+F_n^TS_{21}$
one can reduce each $(n+1)$-by-$(n+1)$ symmetric matrix $A_{11}$ to
$0_{*}$. To preserve $A_{11}$ we must hereafter take $S_{21}$ such that
$F_n^TS_{21}=-S_{21}^TF_n$.
This means that
$$S_{21}=\begin{bmatrix}
0&s_{12}&s_{13}&\ldots & s_{1n}&0 \\
-s_{12}&0&s_{23}&\ldots & s_{2n}&0 \\
-s_{13}&-s_{23}&0&\ldots & s_{3n}&0 \\
\vdots&\vdots&\vdots&\ddots & \vdots &0 \\
-s_{1n}&-s_{2n}&-s_{3n}&\ldots & 0&0 \\
\end{bmatrix}.$$
The matrix $S_{21}$ without the last column is skew-symmetric.
Now we reduce $B_{11}$ by adding
\begin{multline*}
\Delta B_{11}=\begin{bmatrix}
0&-s_{12}&-s_{13}&\ldots & -s_{1n}\\
s_{12}&0&-s_{23}&\ldots & -s_{2n}\\
s_{13}&s_{23}&0&\ldots & -s_{3n}\\
\vdots&\vdots&\vdots&\ddots & \vdots \\
s_{1n}&s_{2n}&s_{3n}&\ldots & 0\\
0&0&0&\ldots &0
\end{bmatrix}
\begin{bmatrix}
0&1&0&\ldots&0&0\\
0&0&1&\ddots & \vdots&\vdots\\
0&0&0&\ddots & 0&0\\
\vdots&\ddots&\ddots&\ddots& 1&0\\
0&\ldots&0&0 & 0&1\\
\end{bmatrix}
\\ +
\begin{bmatrix}
0&0&0&\ldots&0\\
1&0&0&\ddots & \vdots\\
0&1&0&\ddots & 0\\
\vdots&\ddots&\ddots&\ddots& 0\\
0&\ldots&0&1 & 0\\
0&\ldots&0&0 & 1\\
\end{bmatrix}
\begin{bmatrix}
0&s_{12}&s_{13}&\ldots & s_{1n}&0 \\
-s_{12}&0&s_{23}&\ldots & s_{2n}&0 \\
-s_{13}&-s_{23}&0&\ldots & s_{3n}&0 \\
\vdots&\vdots&\vdots&\ddots & \vdots &0 \\
-s_{1n}&-s_{2n}&-s_{3n}&\ldots & 0&0 \\
\end{bmatrix}
\\=
\begin{bmatrix}
0& 0& -s_{12}& \ldots & & -s_{1n-1}& -s_{1n} \\
0& 2s_{12}& s_{13}&\ldots & & s_{1n}-s_{2n-1}& -s_{2n}\\
-s_{12}& s_{13}& 2s_{23}& \ddots && s_{2n}-s_{3n-1}& -s_{3n}\\
\vdots&\vdots&\ddots&\ddots & \ddots & \vdots& \vdots\\
-s_{1n-2}& s_{1n-1}-s_{2n-2}& &\ddots & 2s_{n-2n-1}& s_{n-2n}& -s_{n-1n}\\
-s_{1n-1}& s_{1n}-s_{2n-1}& &\ldots & s_{n-2n}& 2s_{n-1n}& 0\\
-s_{1n}& -s_{2n}& &\ldots & -s_{n-1n}& 0& 0
\end{bmatrix}
\end{multline*}
Both the upper and the lower parts (with respect to its skew diagonal) of the matrix $\Delta B_{11}$
are analogous to the upper part of \eqref{34hd}.
So each skew diagonal of $\Delta B_{11}$ has unique variables, hence we
reduce $B_{11}$ skew diagonal by skew diagonal to the form $0^{\nwsearrow\!\!\!\! \text{\raisebox{1.5pt}{$\nwsearrow$}} \!\!\!\! \text{\raisebox{3pt}{$\nwsearrow$}}}$.
The pair of blocks $(A_{21},B_{21})$ is reduced by adding
$\Delta (A_{21},B_{21}) = (S_{22}^TF_n+F_nS_{11},S_{22}^TG_n+G_nS_{11})$,
in which $S_{11}$ and $S_{22}$ are arbitrary matrices of the corresponding sizes. Clearly, by
adding $S_{22}^TF_n+F_nS_{11}$ we can reduce $A_{21}$ to zero.
To preserve $A_{21}$ we must hereafter take $S_{11}$ and $S_{22}$ such that
$F_nS_{11}=-S_{22}^TF_n$. This means that
$$S_{11}=\begin{bmatrix}
&&& 0\\
&-S^T_{22}&& 0\\
&&& \vdots\\
&&& 0\\
-y_{1}&-y_{2}&\ldots & -y_{n+1}\\
\end{bmatrix}.$$
We reduce $B_{21}$ by adding
\begin{multline}
\Delta B_{21}=S_{22}^TG_n+G_nS_{11}=
\begin{bmatrix}
s_{11}&s_{12}&s_{13}&\ldots & s_{1n}\\
s_{21}&s_{22}&s_{23}&\ldots & s_{2n}\\
s_{31}&s_{32}&s_{33}&\ldots & s_{3n}\\
\vdots&\vdots&\vdots&\ddots & \vdots\\
s_{n1}&s_{n2}&s_{n3}&\ldots & s_{nn}\\
\end{bmatrix}
\begin{bmatrix}
0&1&0&\ldots&0&0\\
0&0&1&\ddots & \vdots& \vdots\\
0&0&0&\ddots & 0& 0\\
\vdots&\ddots&\ddots&\ddots& 1&0\\
0&\ldots&0&0 & 0&1\\
\end{bmatrix}
\\ +
\begin{bmatrix}
0&1&0&\ldots&0&0\\
0&0&1&\ddots & \vdots& \vdots\\
0&0&0&\ddots & 0& 0\\
\vdots&\ddots&\ddots&\ddots& 1&0\\
0&\ldots&0&0 & 0&1\\
\end{bmatrix}
\begin{bmatrix}
-s_{11}&-s_{12}&-s_{13}&\ldots & -s_{1n}& 0\\
-s_{21}&-s_{22}&-s_{23}&\ldots & -s_{2n}& 0\\
-s_{31}&-s_{32}&-s_{33}&\ldots & -s_{3n}& 0\\
\vdots&\vdots&\vdots&\ddots & \vdots & \vdots\\
-s_{n1}&-s_{n2}&-s_{n3}&\ldots & -s_{nn}&0\\
y_{1}&y_{2}&y_{3}&\ldots &y_{n} & y_{n+1}\\
\end{bmatrix}
\\=
\begin{bmatrix}
-s_{21}& s_{11}-s_{22}& s_{12}-s_{23}& \ldots & s_{1n-1}-s_{2n}& s_{1n} \\
-s_{31}& s_{21}-s_{32}& s_{22}-s_{33}& \ldots & s_{2n-1}-s_{3n}& s_{2n} \\
-s_{41}& s_{31}-s_{42}& s_{32}-s_{43}& \ldots & s_{3n-1}-s_{4n}& s_{3n} \\
\vdots&\vdots&\vdots&\ddots & \vdots & \vdots\\
-s_{n1}& s_{n-11}-s_{n2}& s_{n-12}-s_{n3}& \ldots & s_{nn-1}-s_{nn}& s_{n-1n} \\
y_1& s_{n1}+y_2 & s_{n2}+y_3 & \ldots & s_{nn+1}+y_n & s_{nn}+y_{n+1} \\
\end{bmatrix}.
\end{multline}
It is easily seen that we can set $B_{21}$ to zero by adding $\Delta B_{21}$
(diagonal by diagonal).
The pair of blocks $(A_{12},B_{12})$ is equal to the transposition
of $(A_{21},B_{21})$.
To the pair of blocks $(A_{22},B_{22})$ we can add
$\Delta (A_{22},B_{22})=(S_{12}^TF_n^T+F_nS_{12},S_{12}^TG_n^T+G_nS_{12})$,
in which $S_{12}$ is an arbitrary $(n+1)$-by-$n$
matrix. Clearly, by adding $S_{12}^TF_n^T+F_nS_{12}$
one can reduce each $n$-by-$n$ symmetric matrix $A_{22}$ to
zero. To preserve $A_{22}$ we must hereafter take $S_{12}$ such
that $F_nS_{12}=-S_{12}^TF_n^T$.
This means that
$$S_{12}=\begin{bmatrix}
0&s_{12}&s_{13}&\ldots & s_{1n}& \\
-s_{12}&0&s_{23}&\ldots & s_{2n}& \\
-s_{13}&-s_{23}&0&\ldots & s_{3n}& \\
\vdots&\vdots&\vdots&\ddots & \vdots & \\
-s_{1n}&-s_{2n}&-s_{3n}&\ldots & 0& \\
s_{1n+1}&s_{2n+1}&s_{3n+1}&\ldots & s_{nn+1}& \\
\end{bmatrix}.$$
The matrix $S_{12}$ without the last row is skew-symmetric.
Now we reduce $B_{22}$ by adding
\begin{multline*}
\Delta B_{22}=
\begin{bmatrix}
0&-s_{12}&-s_{13}&\ldots & -s_{1n}& s_{1n+1}\\
s_{12}&0&-s_{23}&\ldots & -s_{2n}& s_{2n+1}\\
s_{13}&s_{23}&0&\ldots & -s_{3n}& s_{3n+1}\\
\vdots&\vdots&\vdots&\ddots & \vdots & \vdots\\
s_{1n}&s_{2n}&s_{3n}&\ldots & 0& s_{nn+1}\\
\end{bmatrix}
\begin{bmatrix}
0&0&0&\ldots&0\\
1&0&0&\ddots & \vdots\\
0&1&0&\ddots & 0\\
\vdots&\ddots&\ddots&\ddots& 0\\
0&\ldots&0&1 & 0\\
0&\ldots&0&0 & 1\\
\end{bmatrix}
\\ +
\begin{bmatrix}
0&1&0&\ldots&0&0\\
0&0&1&\ddots & \vdots& \vdots\\
0&0&0&\ddots & 0& 0\\
\vdots&\ddots&\ddots&\ddots& 1&0\\
0&\ldots&0&0 & 0&1\\
\end{bmatrix}
\begin{bmatrix}
0&s_{12}&s_{13}&\ldots & s_{1n} \\
-s_{12}&0&s_{23}&\ldots & s_{2n} \\
-s_{13}&-s_{23}&0&\ldots & s_{3n} \\
\vdots&\vdots&\vdots&\ddots & \vdots \\
-s_{1n}&-s_{2n}&-s_{3n}&\ldots & 0 \\
s_{1n+1}&s_{2n+1}&s_{3n+1}&\ldots & s_{nn+1} \\
\end{bmatrix}
\\=
\begin{bmatrix}
-2s_{12}& -s_{13}& \ldots& -s_{1n}+s_{2n-1}& s_{1n+1}+s_{2n}\\
-s_{13}& -2s_{23}& \ldots& -s_{2n}+s_{3n-1}& s_{2n+1}+s_{3n}\\
\vdots&\vdots&\ddots&\vdots & \vdots \\
-s_{1n}+s_{2n-1}& -s_{2n}+s_{3n-1}& \ldots& -2s_{n-1n}& s_{n-1n+1}\\
s_{1n+1}+s_{2n}& s_{2n+1}+s_{3n}& \ldots & s_{n-1n+1}& 2s_{nn+1}
\end{bmatrix}
\end{multline*}
Each skew diagonal of $\Delta B_{22}$ has its own variables, so we
reduce $B_{22}$ skew diagonal by skew diagonal. For each half of any
skew diagonal we obtain a system of equations that has a solution by the Kronecker--Capelli theorem.
Therefore we reduce each skew diagonal to zero,
and so we reduce $(A_{22},B_{22})$ to zero.
Hence ${\cal D}(L_n)$ is equal to \eqref{Ldef}.
\subsection{Off-diagonal
blocks of matrices of
$\cal D$ that
correspond to summands
of
$(A,B)_{\text{can}}$
of the same type}
Now we verify the
condition (ii) of
Lemma \ref{thekd} for
off-diagonal blocks of
$\cal D$ defined in
Theorem
\ref{teo2}(ii); the
diagonal blocks of
their horizontal and
vertical strips
contain summands of
$(A,B)_{\text{can}}$ of the same type.
\subsubsection{Pairs of
blocks ${\cal
D}(H_n(\mu),\,
H_m(\lambda))$ and
${\cal D}(K_n,K_m)$}
\label{sub4}
Due to Lemma
\ref{thekd}(ii), it
suffices to prove that
each group of four matrices $((A,B),(A^T,B^T))$
can be reduced to
exactly one group of
the form \eqref{lsiu1} (or, respectively \eqref{lsiu2})
by adding
\[
(R^TH_m(\lambda)
+H_n(\mu)S, S^T
H_n(\mu)+ H_m(\lambda)
R),\quad S\in
{\mathbb C}^{n\times m}, R \in
{\mathbb C}^{m\times n}.
\]
Clearly, it suffices to reduce $(A,B)$; the second pair $(A^T,B^T)$ is then reduced automatically.
$$
\Delta (A,B) = R^TH_m(\lambda)
+H_n(\mu)S =(R^T \Delta_m + \Delta_n S,
R^T \Lambda_m(\lambda)+ \Lambda_n(\mu) S) .
$$
It is clear that we can set $A$ to zero. To preserve $A$ we must
hereafter take $R$ and $S$ such that
\[
R^T \Delta_m + \Delta_n S=0 \Leftrightarrow R^T =-\Delta_n S\Delta_m.
\]
It follows that $B$ is reduced by adding
\begin{multline}
\Delta B =R^T \Lambda_m(\lambda)+ \Lambda_n(\mu) S=-\Delta_n S\Delta_m\Lambda_m(\lambda)+ \Lambda_n(\mu) S, \quad\text{with entries} \\
(\Delta B)_{ij}=
\begin{cases}
(\lambda-\mu)s_{n-i+1,j}-s_{n-i+1,j-1}-s_{n-i+2,j}& \text{if } 2 \leq i \leq n,\ 2 \leq j \leq m, \\
(\lambda-\mu)s_{n-i+1,j}-s_{n-i+1,j-1}& \text{if } i=1,\ 2 \leq j \leq m, \\
(\lambda-\mu)s_{n-i+1,j}+s_{n-i+2,j}& \text{if } 2 \leq i \leq n,\ j=1, \\
(\lambda-\mu)s_{n1}& \text{if } i=j=1.
\end{cases}
\end{multline}
We obtain a system of $nm$ equations that has a solution if $\lambda \neq \mu$.
Hence in the case $\lambda \neq \mu$ we can reduce any
pair of $n$-by-$m$ matrices to zero.
Now consider the case $\lambda = \mu$; then
\begin{multline}
\Delta B = R^T \Lambda_m(\lambda)+ \Lambda_n(\lambda) S=-\Delta_n S\Delta_m\Lambda_m(\lambda)+ \Lambda_n(\lambda) S \\ =
\begin{bmatrix}
0& -s_{51}& -s_{52}& -s_{53}& -s_{54}& -s_{55}& -s_{56} \\
s_{51}& s_{52}-s_{41}& s_{53}-s_{42}& s_{54}-s_{43}& s_{55}-s_{44}& s_{56}-s_{45}& s_{57}-s_{46} \\
s_{41}& s_{42}-s_{31}& s_{43}-s_{32}& s_{44}-s_{33}& s_{45}-s_{34}& s_{46}-s_{35}& s_{47}-s_{36} \\
s_{31}& s_{32}-s_{21}& s_{33}-s_{22}& s_{34}-s_{23}& s_{35}-s_{24}& s_{36}-s_{25}& s_{37}-s_{26} \\
s_{21}& s_{22}-s_{11}& s_{23}-s_{12}& s_{24}-s_{13}& s_{25}-s_{14}& s_{26}-s_{15}& s_{27}-s_{16}
\end{bmatrix}
\end{multline}
(The matrix is written out for $n=5$ and $m=7$; the general pattern is analogous.)
We reduce $B$ by adding $\Delta B$ diagonal by diagonal to the form $0^{\nwarrow}$.
This proves that ${\cal D}(H_n(\lambda),\, H_m(\mu))$ is equal to \eqref{lsiu1}
and, respectively, ${\cal D}(K_n,\, K_m)$ is equal to \eqref{lsiu2}.
\subsubsection{Pairs of
blocks ${\cal D}(L_n, L_m)$ }
\label{sub42}
Due to Lemma
\ref{thekd}(ii), it
suffices to prove that
each group of four matrices $((A,B),(A^T,B^T))$
can be reduced to
exactly one group of
the form \eqref{lsiu3}
by adding
\[
(R^T L_m+L_nS, S^T L_n+L_m R),\quad S\in
{\mathbb C}^{2n+1\times 2m+1},\ R\in
{\mathbb C}^{2m+1\times 2n+1}.
\]
Clearly, it suffices to reduce $(A,B)$; the pair $(A^T,B^T)$ is then reduced automatically.
\begin{multline*}
\Delta (A,B) = \left( \begin{bmatrix}
\Delta A_{11}&\Delta A_{12}\\
\Delta A_{21}&\Delta A_{22}\\
\end{bmatrix},
\begin{bmatrix}
\Delta B_{11}&\Delta B_{12}\\
\Delta B_{21}&\Delta B_{22}\\
\end{bmatrix}\right)= R^TL_m
+L_n S \\ = \left(R^T
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}
+
\begin{bmatrix}
0&F_n^T\\
F_n&0\\
\end{bmatrix}
S,R^T
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}
+
\begin{bmatrix}
0&G_n^T\\
G_n&0\\
\end{bmatrix}
S \right)
\\ = \bigg( \begin{bmatrix}
R_{12}^TF_m+F^T_nS_{21}&R_{11}^TF^T_m+F^T_nS_{22}\\
R_{22}^TF_m+F_nS_{11}&R_{21}^TF^T_m+F_nS_{12}\\
\end{bmatrix}, \\
\begin{bmatrix}
R_{12}^TG_m+G^T_nS_{21}&R_{11}^TG^T_m+G^T_nS_{22}\\
R_{22}^TG_m+G_nS_{11}&R_{21}^TG^T_m+G_nS_{12}\\
\end{bmatrix} \bigg).
\end{multline*}
First we reduce the pair $(A_{11},B_{11})$. It is easy to see that by adding
$\Delta A_{11}$ we can reduce $A_{11}$ to $0_{*}$.
To preserve $A_{11}$ we must hereafter take $R_{12}$ and $S_{21}$ such that
$R^T_{12}F_m=-F^T_nS_{21}$. This means
\begin{equation*}
R^T_{12}=
\begin{bmatrix}
&-Q&\\
0& \ldots &0\\
\end{bmatrix},
S_{21}=
\begin{bmatrix}
&&0\\
Q&& \vdots\\
&&0\\
\end{bmatrix}, \text{where $Q$ is any $n$-by-$m$ matrix.}
\end{equation*}
Hence
\begin{multline}
\Delta B_{11} = R^T_{12}G_m+G^T_nS_{21}=
\begin{bmatrix}
&-Q&\\
0& \ldots &0\\
\end{bmatrix}
G_m+G_n^T
\begin{bmatrix}
&&0\\
Q&& \vdots\\
&&0\\
\end{bmatrix} =\\
\begin{bmatrix}
0& -q_{11}& -q_{12}& -q_{13}& \ldots & -q_{1m-1}& -q_{1m} \\
q_{11}& q_{12}-q_{21}& q_{13}-q_{22}& q_{14}-q_{23}& \ldots& q_{1m}-q_{2m-1}& -q_{2m} \\
q_{21}& q_{22}-q_{31}& q_{23}-q_{32}& q_{24}-q_{33}& \ldots& q_{2m}-q_{3m-1}& -q_{3m} \\
q_{31}& q_{32}-q_{41}& q_{33}-q_{42}& q_{34}-q_{43}& \ldots& q_{3m}-q_{4m-1}& -q_{4m} \\
\vdots&\vdots&\vdots&\vdots&\ddots & \vdots& \vdots\\
q_{n-11}& q_{n-12}-q_{n1}& q_{n-13}-q_{n2}& q_{n-14}-q_{n3}& \ldots& q_{n-1m}-q_{nm-1}& -q_{nm} \\
q_{n1}& q_{n2}& q_{n3}& q_{n4} & \ldots& q_{nm}& 0
\end{bmatrix}
\end{multline}
By adding $\Delta B_{11}$ we can set each element of
$B_{11}$ to zero except either the first column and the
last row or the first row and the last column.
Now we turn to the second pair of
blocks. We can set $A_{12}$ to zero by
adding $\Delta A_{12}$.
To preserve $A_{12}$ we must hereafter take $R_{11}$ and $S_{22}$
such that $R^T_{11}F_m^T=-F_n^TS_{22}$; thus
\begin{equation*}
R^T_{11}=
\begin{bmatrix}
&&&b_1\\
&-S_{22}&& \vdots\\
0&\ldots&0&b_{n+1}\\
\end{bmatrix},
\end{equation*}
where $S_{22}$ is any $n$-by-$m$ matrix.
Therefore
\begin{multline}
\Delta B_{12}= R^T_{11}G^T_m+G^T_nS_{22}=
\begin{bmatrix}
&&&b_1\\
&-S_{22}&& \vdots\\
0&\ldots&0&b_{n+1}\\
\end{bmatrix}
G_m^T+G_n^TS_{22} =\\
\begin{bmatrix}
-s_{12}& -s_{13}& -s_{14}& \ldots& -s_{1m}& b_{1} \\
s_{11}-s_{22}& s_{12}-s_{23}& s_{13}-s_{24}& \ldots& s_{1m-1}-s_{2m}& b_{2}+s_{1m}\\
\vdots&\vdots&\vdots&\ddots & \vdots& \vdots\\
s_{n-11}-s_{n2}& s_{n-12}-s_{n3}& s_{n-13}-s_{n4}& \ldots& s_{n-1m-1}-s_{nm}& b_{n}+s_{n-1m}\\
s_{n1}& s_{n2}& s_{n3}& \ldots& s_{nm-1}& b_{n+1}+s_{nm}
\end{bmatrix}
\end{multline}
If $n \geq m-1$, then we can set $B_{12}$ to zero
by adding $\Delta B_{12}$. If $n < m-1$, then we cannot
set the whole block $B_{12}$ to zero.
We reduce each diagonal of $B_{12}$ independently.
By adding the first $n$ diagonals of $\Delta B_{12}$, starting
from the lower left-hand corner, we set the corresponding
diagonals of $B_{12}$ to zero. We set the next $m-n-1$
diagonals of $B_{12}$ to zero, except the last element of each of them.
The last $n+1$ diagonals
we set to zero completely. Hence we reduce
this pair of blocks to the form $(0,{\cal Q}_{n+1,m})$.
Now we reduce $(A_{21},B_{21})$. We can set $A_{21}$ to zero
by adding $\Delta A_{21}$.
To preserve $A_{21}$ we must hereafter take $R_{22}$ and $S_{11}$
such that $R^T_{22}F_m=-F_nS_{11}$; thus
\begin{equation*}
S_{11}=
\begin{bmatrix}
&&&0\\
&-R^T_{22}&& \vdots\\
&&&0\\
b_1&\ldots&&b_{m+1}\\
\end{bmatrix},
\end{equation*}
where $R^T_{22}$ is any $n$-by-$m$ matrix.
Therefore
\begin{multline}
\Delta B_{21}= R^T_{22}G_m+G_nS_{11}=
R^T_{22}G_m+G_n
\begin{bmatrix}
&&&0\\
&-R^T_{22}&& \vdots\\
&&&0\\
b_1&\ldots&&b_{m+1}\\
\end{bmatrix} \\ =
\begin{bmatrix}
-r_{21}& r_{11}-r_{22}& r_{12}-r_{23}&\ldots & r_{1m-1}-r_{2m}& r_{1m}\\
-r_{31}& r_{21}-r_{32}& r_{22}-r_{33}&\ldots & r_{2m-1}-r_{3m}& r_{2m}\\
-r_{41}& r_{31}-r_{42}& r_{32}-r_{43}&\ldots & r_{3m-1}-r_{4m}& r_{3m}\\
-r_{51}& r_{41}-r_{52}& r_{42}-r_{53}&\ldots & r_{4m-1}-r_{5m}& r_{4m}\\
\vdots&\vdots&\vdots&\ddots & \vdots& \vdots\\
-r_{n1}& r_{n-11}-r_{n2}& r_{n-12}-r_{n3}&\ldots & r_{n-1m-1}-r_{nm}& r_{n-1m}\\
b_{1}& r_{n1}+b_{2}& r_{n2}+b_{3}&\ldots & r_{nm-1}+b_{m}& r_{nm}+b_{m+1}\\
\end{bmatrix}
\end{multline}
If $m+1 \geq n$, then we can set $B_{21}$ to zero
by adding $\Delta B_{21}$. If $m+1 < n$, then we cannot
set the whole of $B_{21}$ to zero, and we reduce it as
in the previous case. Hence $(A_{21},B_{21})$ is reduced
by adding $\Delta (A_{21},B_{21})$ to $(0,{\cal Q}_{m+1,n}^T)$.
Now let us consider the pair $(A_{22},B_{22})$. We can set $A_{22}$ to zero
by adding $\Delta A_{22}$. To preserve $A_{22}$ we must hereafter
take $R_{21}$ and $S_{12}$ such that $R^T_{21}F_m^T=-F_nS_{12}$
thus
\begin{equation*}
S_{12}=
\begin{bmatrix}
&-Q&\\
a_{1}& \ldots &a_{m}\\
\end{bmatrix},
R^T_{21}=
\begin{bmatrix}
&&b_1\\
Q&& \vdots\\
&&b_n\\
\end{bmatrix}, \text{where $Q$ is any $n$-by-$m$ matrix.}
\end{equation*}
It follows that
\begin{multline}
\Delta B_{22}=R^T_{21}G^T_m+G_nS_{12}=
\begin{bmatrix}
&&b_1\\
Q&& \vdots\\
&&b_n\\
\end{bmatrix}
G_m^T+G_n
\begin{bmatrix}
&-Q&\\
a_{1}& \ldots &a_{m}\\
\end{bmatrix} =\\
\begin{bmatrix}
q_{12}-q_{21}&q_{13}-q_{22}&q_{14}-q_{23}& \ldots & q_{1m}-q_{2m-1}& b_1-q_{2m}\\
q_{22}-q_{31}&q_{23}-q_{32}&q_{24}-q_{33}& \ldots & q_{2m}-q_{3m-1}& b_2-q_{3m}\\
q_{32}-q_{41}&q_{33}-q_{42}&q_{34}-q_{43}& \ldots & q_{3m}-q_{4m-1}& b_3-q_{4m}\\
\vdots&\vdots&\vdots&\ddots & \vdots& \vdots\\
q_{n-12}-q_{n1}&q_{n-13}-q_{n2}&q_{n-14}-q_{n3}& \ldots & q_{n-1m}-q_{nm-1}& b_{m-1}-q_{nm}\\
q_{n2}+a_1&q_{n3}+a_2&q_{n4}+a_3& \ldots & q_{nm}-a_{n-1}& a_n+b_m\\
\end{bmatrix}
\end{multline}
We can set each skew-diagonal of $B_{22}$ to zero independently.
Thus adding $\Delta B_{22}$ we can reduce $B_{22}$ to zero.
Hence ${\cal D}(L_n, L_m)$ has the form \eqref{lsiu3}.
\subsection{Off-diagonal
blocks of matrices of
$\cal D$ that
correspond to summands
of
$(A,B)_{\text{can}}$
of distinct types}
Finally, we verify the
condition (ii) of
Lemma \ref{thekd} for
off-diagonal blocks of
$\cal D$ defined in
Theorem
\ref{teo2}(iii); the
diagonal blocks of
their horizontal and
vertical strips
contain summands of
$(A,B)_{\text{can}}$ of different
types.
\subsubsection{Pairs of
blocks ${\cal D}(H_n(\lambda), K_m)$ }
\label{sub7}
Due to Lemma
\ref{thekd}(ii), it
suffices to prove that
each group of four matrices $((A,B),(A^T,B^T))$
can be reduced to
exactly one group of
the form \eqref{kut}
by adding
\[
(R^T K_m+ H_n(\lambda) S, S^T H_n(\lambda)
+K_m R),\quad S\in
{\mathbb C}^{n\times m},\ R\in
{\mathbb C}^{m\times n}.
\]
Clearly, it suffices to reduce $(A,B)$; the pair $(A^T,B^T)$ is then reduced automatically.
$$
\Delta (A,B) =R^T K_m+ H_n(\lambda) S = (R^T \Lambda_m(0)+\Delta_nS,
R^T \Delta_m+ \Lambda_n(\lambda)S) .
$$
It is clear that we can set $A$ to zero by adding $\Delta A$. To preserve $A$ we
must hereafter take $R$ and $S$ such that
\[
R^T \Lambda_m(0)+\Delta_nS=0 \Rightarrow S=-\Delta_nR^T \Lambda_m(0).
\]
Thus $B$ is reduced by adding:
$$\Delta B = R^T \Delta_m+ \Lambda_n(\lambda)S=
R^T \Delta_m - \Lambda_n(\lambda)\Delta_nR^T \Lambda_m(0)$$
We can set $B$ to zero by adding $\Delta B$.
Hence ${\cal D}(H_n(\lambda), K_m)$ is equal to $(0,0)$, in accordance with \eqref{kut}.
\subsubsection{Pairs of
blocks ${\cal D}(H_n(\lambda), L_m)$ }
\label{sub8}
Due to Lemma
\ref{thekd}(ii), it
suffices to prove that
each group of four matrices $((A,B),(A^T,B^T))$
can be reduced to
\eqref{hnlm}
by adding
\[
(R^T L_m+ H_n(\lambda) S,
S^TH_n(\lambda) +L_mR),\quad S\in
{\mathbb C}^{n\times 2m+1},\ R\in
{\mathbb C}^{2m+1\times n}.
\]
Clearly, it suffices to reduce $(A,B)$; the pair $(A^T,B^T)$ is then reduced automatically.
$$
\Delta (A,B)=R^T L_m+ H_n(\lambda) S
=(R^T
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}
+
\Delta_n S,R^T
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}
+
\Lambda_n(\lambda) S) .
$$
It is easy to check that we can set $A$ to zero. To preserve $A$ we
must hereafter take $R$ and $S$ such that
\[
R^T
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}
+
\Delta_n S=0
\Rightarrow
S=
-\Delta_n
\begin{bmatrix}
R^T_{11}&R^T_{21}\\
R^T_{12}&R^T_{22}\\
\end{bmatrix}
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}
\]
Hence $B$ is reduced by adding:
\begin{multline*}
\Delta B =R^T
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}
-
\Lambda_n(\lambda) \Delta_n R^T
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}\\
= \big[(\Delta B)_{ij}\big] \quad\text{with}\\
(\Delta B)_{ij}=
\begin{cases}
-\lambda r_{i,n-1}-r_{i-1,n-1}& \text{if } 1 \leq i \leq n,\ j=1, \\
-\lambda r_{i,m+1+j}-r_{i-1,m+1+j}+r_{i,m+j}& \text{if } 1 \leq i \leq n,\ 1 < j < m+1,\\
r_{i,n}& \text{if } 1 \leq i \leq n,\ j=m+1, \\
-\lambda r_{i,j-m-1}-r_{i-1,j-m-1}+r_{i,j-m}& \text{if } 1 \leq i \leq n,\ m+1 < j \leq 2m+1,
\end{cases}
\end{multline*}
where we put $r_{0t}:=0$. By adding $\Delta B$ we reduce $B$ to the form $0^{\leftarrow}$.
Therefore ${\cal D}(H_n(\lambda), L_m)$ is equal to \eqref{hnlm}.
\subsubsection{Pairs of
blocks ${\cal D}(K_n, L_m)$ }
\label{sub9}
Due to Lemma
\ref{thekd}(ii), it
suffices to prove that
each group of four matrices $((A,B),(A^T,B^T))$
can be reduced to
\eqref{ktlm}
by adding
\[
(R^T L_m+ K_nS,S^TK_n
+L_mR),\quad S\in
{\mathbb C}^{n\times 2m+1},\ R\in
{\mathbb C}^{2m+1\times n}.
\]
Clearly, it suffices to reduce $(A,B)$; the pair $(A^T,B^T)$ is then reduced automatically.
$$
\Delta (A,B)=R^T L_m+ K_n S
=(R^T
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}
+ \Lambda_n(0) S
,R^T
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}
+
\Delta_n S) .
$$
It is easy to check that we can set $B$ to zero. To preserve $B$ we
must hereafter take $R$ and $S$ such that
\[
R^T
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}
+
\Delta_n S=0
\Rightarrow
S=
-\Delta_n
\begin{bmatrix}
R^T_{11}&R^T_{21}\\
R^T_{12}&R^T_{22}\\
\end{bmatrix}
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}
\]
Thus $A$ is reduced by adding:
\begin{multline*}
\Delta A = R^T
\begin{bmatrix}
0&F_m^T\\
F_m&0\\
\end{bmatrix}
-
\Lambda_n(0) \Delta_n R^T
\begin{bmatrix}
0&G_m^T\\
G_m&0\\
\end{bmatrix}\\
= \big[(\Delta A)_{ij}\big] \quad\text{with}\\
(\Delta A)_{ij}=
\begin{cases}
r_{i,n-1}& \text{if } 1 \leq i \leq n,\ j=1, \\
r_{i,m+1+j}-r_{i-1,m+j}& \text{if } 1 \leq i \leq n,\ 1 < j < m+1,\\
r_{i-1,n}& \text{if } 1 \leq i \leq n,\ j=m+1, \\
r_{i,j-m-1}-r_{i-1,j-m}& \text{if } 1 \leq i \leq n,\ m+1 < j \leq 2m+1,
\end{cases}
\end{multline*}
where we put $r_{0t}:=0$.
Therefore ${\cal D}(K_n, L_m)$ is equal to \eqref{ktlm}.
\section{Introduction}
\label{sIntro}
Together with stellar mass and chemical composition, stellar age is one of the most important properties characterising a star,
and methods of its determination are of great interest.
The knowledge of the stellar age is important, for example, for testing theories of stellar evolution, for reconstructing the star
formation history of the Galaxy, and the relationship between
enrichment and kinematics in the Galaxy, and for estimating the life-times of some circumstellar structures (e.g. debris discs).
Two most accurate methods for determining the ages of open star clusters are isochrone fitting and the lithium depletion boundary.
However, each method suffers from limitations.
Isochrone fitting is sensitive to not fully understood processes in stellar evolution such as convective overshooting, rotational mixing,
internal gravity waves and diffusion \citep{Meynet2009,Soderblom2010}.
Moreover, stellar rotation prolongs stellar life-times and increases the luminosity for a given stellar mass \citep{Meynet2000}.
Blue stragglers, if present, add another bias \citep{Brandt2015} as they appear as stars of a younger age.
These uncertainties in stellar evolution introduce an error in the age determination of the order of tens of percent \citep{Meynet2009}.
Systematic errors (e.g. omission of the most luminous stars) can easily lead to much larger errors.
For example, \citet{Meingast2019} overestimate the age of the Pisces-Eridanus stream by a factor of $8$ as later corrected by \citet{Curtis2019}.
The other dating method, based on the lithium depletion boundary, is considered to be more accurate (but see \citealt{Jeffries2017,Bouvier2018} for complications
due to rotation and uncertainties in stellar radii).
This method can be applied only for the most nearby clusters because of the very low luminosity of the relevant objects.
To date, only a dozen clusters have ages determined by this method.
In general, the lithium depletion boundary method provides older ages (by $\approx 50$\%) than isochrone fitting \citep[e.g.][]{Soderblom2010,Binks2021}.
The difference between these two dating methods is not restricted to the youngest clusters.
For example, \citet{Stauffer1998} estimate the age of the Pleiades based on lithium depletion boundary to be $\approx 125 \, \mathrm{Myr}$,
while isochrone fitting provides the age from $70$ to $100 \, \mathrm{Myr}$.
Another reason for dating clusters is that they can be used as anchors for other (and generally less accurate) dating methods,
for example gyrochronology, chromospheric emission, lithium abundance
and coronal X-ray emission \citep[e.g.][]{Skumanich1972,Soderblom1991,Lachaume1999,Barnes2007,Mamajek2008},
which are instrumental in estimating the ages of field stars.
Unlike isochrone fitting and the lithium depletion boundary, these methods lack a theoretical background, and they need to be calibrated empirically on
coeval stellar systems such as star clusters or associations.
In addition to the dating methods described above, which are based on stellar evolution, there are also attempts to provide age estimates
from stellar kinematics or cluster dynamics.
Kinematical methods usually backtrace a population of stars until they assume the smallest volume, which is taken as the time of formation,
or they estimate the onset of expansion from the current size and expansion velocity.
Kinematical estimates are usually less accurate than those using stellar evolution, and they can be used only for the youngest objects (age $\lesssim 20 \, \mathrm{Myr}$)
because of gravitational interactions within the cluster and the external field of the Galaxy, which bends the stellar trajectories \citep{Blaauw1991,Brown1997}.
More recently, \citet{Crundall2019} suggest a more sophisticated tool taking into account the external Galactic field as well as the
complicated morphology of star forming regions, extending the usability of the kinematic method up to $\approx 100 \, \mathrm{Myr}$ for objects with a
sufficiently low velocity dispersion.
Dynamical methods compare numerical models of star clusters at different degrees of dynamical evolution with observations.
They compare the radial mass distribution \citep{Buchholz1980}, or the ratio of the number of stars in two different mass bins \citep{Kroupa1995c}.
Dynamical methods are rarely used.
In this work, we present another method, which utilises stellar kinematics but in a very different way than the previous methods.
Instead of tracing individual stars backward or forward in time, we investigate the shape of the extended tidal structure which is formed from
an expanding group of stars.
The expanding stellar structure is supposed to be formed of stars which escape from a star cluster
as the result of gas expulsion \citep[e.g.][]{Lada1984,Kroupa2001b,Geyer2001}, which terminates its embedded phase.
The remnant of the star cluster which survives this event is accompanied by these stars.
The existence of the extended stellar structures was predicted theoretically from the results of N-body simulations \citep{Kroupa2001b},
and further explored by \citet{Dinnbier2020a},
before similar structures surrounding star clusters were observed \citep{Meingast2021} thanks to the
unprecedented sensitivity of the Gaia mission \citep{Gaiac2016b,Gaiac2018}.
At least some of these structures are coeval with the star cluster \citep{Bouma2021}, which supports the view of their common origin.
In an alternative scenario where the extended structures were never bound to the clusters, but probably formed close to them and are coeval with them,
this dating method can be applied as well.
We introduce the new method for age determination in \refs{sMethodDescr}, and then apply it to
the open star clusters which are surrounded by known extended structures in \refs{sApplication}.
In addition, we analyse two gravitationally unbound stellar streams.
The method is discussed in \refs{sDiscussion}.
We conclude in \refs{sSummary}.
\section{The time evolution of the tilt and the size of the tidal tail}
\label{sMethodDescr}
\iffigs
\begin{figure*}
\includegraphics[width=\textwidth]{tail_appearance}
\caption{
The orientation and shape of tidal tail I as calculated according to eqs. \ref{ePosition} and \ref{eXasAlpha} for
stars escaping at $\widetilde{v}_{\rm e,I} = 2 \, \mathrm{km} \, \, \mathrm{s}^{-1}$, and for the conditions at the solar circle.
The age of the tail is indicated by the colour.
The star cluster is located at the centre of the coordinate system (the black star), and it orbits the Galaxy in the direction indicated
at upper right.
As the cluster and tail age, the direction of the long axis of the tidal tail changes from pointing almost towards the Galactic centre (at $t = 40 \, \mathrm{Myr}$),
to the direction of the cluster motion (at $t = 160 \, \mathrm{Myr}$), and with increasing tilt again afterwards.
Also note that the shape and aspect ratio of the tidal tail undergoes complicated changes with time.
The dotted cyan line shows the definition of the tail tilt angle $\beta$.
}
\label{ftailShape}
\end{figure*} \else \fi
We assume that star clusters form with a relatively low star formation efficiency ($\mathrm{SFE} \approx$ stellar mass/stellar and gaseous mass within the star forming volume)
\citep{Lada2003,Megeath2016,Banerjee2018},
and that they expel the non-star forming gas on a time-scale which is short in comparison to the cluster crossing time.
These conditions unbind a substantial fraction of stars from the cluster (typically more than 60\%; \citealt{Lada1984,Goodwin1997,Kroupa2001b,Baumgardt2007}),
which expand and form an extended tidal structure surrounding the cluster \citep{Dinnbier2020a, Dinnbier2020b} (hereafter Papers I and II, respectively).
We refer to this tidal structure as \textit{tidal tail I}\footnote{Although extended structures surrounding star clusters have been referred to by various terms in the literature (e.g. ``halos'', ``coronae'', ``strings'';
\citealt{Bouma2021}), we use the term tidal tail in the present work because we assume that they originate as tidal structures related to the cluster.}.
The post-gas expulsion cluster revirialises \citep{Banerjee2013}, and loses stars gradually due to encounters between stars,
producing the classical S-shaped tidal tail \citep{Chumak2006a,Kupper2008,Kupper2010}, which is referred to as \textit{tidal tail II}.
On the time-scale of several hundreds of Myr, tail I contains substantially more stars and is more extended than tail II \citep{Dinnbier2020b}.
The tidal tails of type I and II have been recently explored using observational data \citep[e.g.][]{Pang2021}.
The method presented here takes advantage of the following kinematic property of stars.
Stars which escape from the cluster through gas expulsion have the Galactic orbital velocity either
slightly larger or smaller than the orbital velocity of the cluster.
The stars with larger orbital velocity have the guiding centres of their orbits outside the orbit of the cluster (which is at Galactocentric radius $R_{\rm 0}$),
while the stars with smaller orbital velocity have the guiding centres inside the orbit of the cluster.
The former stars trail behind the cluster, while the latter overtake it.
Consequently, the volume occupied by the escaping stars gradually stretches from a sphere to an elongated spheroid,
and the direction $\beta$ (see its meaning in \reff{ftailShape}) of the long axis of the spheroid gets more aligned with the direction of the cluster orbit with time.
Since all the stars escaped almost at the same time and they follow epicyclic motions with the same epicyclic frequency $\kappa$ to a good approximation,
they reach their birth Galactocentric radius $R_{\rm 0}$ at the same time, at $t = 2\pi n/\kappa$, where $n$ is a positive integer,
but the stars are stretched along the azimuthal direction.
This means that the tidal tail is aligned with the direction of motion at each time $2\pi n/\kappa$, and it tilts relative to this
direction in between these times.
Thus, the age of the tail can be estimated by inverting the theoretical time dependence of the tail tilt $\beta = \beta(t)$.
The width of the stellar structure depends on the characteristic speed of escaping stars, which is a function of the embedded cluster mass,
and the phase of the tail oscillation (for example, the thickness of the idealised tail drops to zero at $2\pi n/\kappa$).
The width of the tail at the known age can constrain the initial mass of the cluster.
In this section, we derive the time dependence of two quantities: the tilt (\refs{ssDynAge}) and width (\refs{ssMass}) of the tidal tail.
The present semi-analytic study is an extension of the analysis of Paper I; it uses the same assumptions, and it applies only to tail I.
The first assumption is that star clusters release many stars during a time window whose duration is short in comparison to the epicyclic
time scale $2\pi/\kappa$ (which is $\approx 168 \, \mathrm{Myr}$ at the Solar circle, \citealt{Allen1991}).
The physical process which is responsible for unbinding the stars is assumed to be gas expulsion of the residual gas from the newly formed open star clusters;
however, the solution is general, and it applies to any isotropically expanding stellar system in an external tidal field as
long as the initial size of the stellar system is significantly smaller than the Galactocentric distance.
The more general examples include a shock caused by an encounter between a star cluster (not necessarily young) and a molecular cloud,
or a dissolution of a population of sparse clusters as their natal clouds are disrupted, forming a gravitationally unbound stellar stream.
The second assumption is that the speeds of escaping stars $\widetilde{v}_{\rm e,I}$ are approximately
equal to or larger than the speed $\widetilde{v}_{\rm e,II}$ corresponding to
the difference between the maximum and minimum of the gravitational potential around the tidal radius $r_{\rm t}$.
This condition means that stars of typical velocity $\widetilde{v}_{\rm e,I}$ can overcome the Jacobi potential in any direction without
significant change to $\widetilde{v}_{\rm e,I}$, and thus escape the cluster with comparable probability in any direction.
As shown in Paper I (sects. 2.1 and 4.7 there), this condition is fulfilled for practically all embedded star clusters currently forming in the Galaxy.
We adopt the usual coordinate system, where the $x$-axis points in the direction of the Galactic anti-centre, and the $y$-axis points in the direction of the
Galactic rotation\footnote{The adopted coordinate system is left-handed because the Galaxy rotates clockwise when seen from the North Galactic Pole \citep{Binney2008}.
This choice, which is more convenient for comparison with observations, is different from Papers I and II, where a right-handed system was adopted.}.
The star cluster is on a circular orbit, so it is at rest at the origin of the corotating coordinate axes.
In this idealised model, we assume that all stars escape the cluster with the same velocity $\widetilde{v}_{\rm e,I}$.
The components of the velocity vector ($v_{\rm R}$, $v_{\rm \phi}$), where $\widetilde{v}_{\rm e,I}^2 = v_{\rm R}^2 + v_{\rm \phi}^2$,
are parallel to the coordinates ($x$, $y$), which lie in the plane of the Galaxy.
The vertical motion is neglected.
The star escapes the cluster at an angle $\alpha$, whose meaning is shown in fig. 1 of Paper I, and then moves along an epicycle
of a guiding centre at Galactocentric radius $R_{\rm g}$, and of semi-major axis $Y$, and semi-minor axis $X$.
The guiding centre itself moves in the azimuthal direction at velocity $v_{\rm g}$ relative to the cluster.
The relative position of the star to the cluster at time $t$ (since its escape) is given by (cf. eq. 12 in Paper I)
\begin{eqnarray}
x(\alpha, t) = && X(\alpha) \cos(\alpha) \sin(\kappa t) - X(\alpha) \sin(\alpha) (1 - \cos(\kappa t)), \nonumber \\
y(\alpha, t) = && \gamma X(\alpha) \cos(\alpha) (\cos(\kappa t) - 1) - \nonumber \\
&& \gamma X(\alpha) \sin(\alpha) \bigg\{\sin(\kappa t) + \frac{t}{\gamma} \left( \frac{\kappa^2}{2 \omega} - 2 \omega \right) \bigg\},
\label{ePosition}
\end{eqnarray}
\noindent
where $\omega$ and $\kappa$ are the orbital and epicyclic frequencies, respectively, and $\gamma = 2\omega/\kappa$ \citep{Binney2008}.
The value of the epicyclic semi-minor axis $X$ depends not only on the velocity $\widetilde{v}_{\rm e,I}$, but also
on the direction of escape $\alpha$.
From eqs. (9) and (10) of Paper I, it follows that
\begin{equation}
X(\alpha) = \frac{\gamma \widetilde{v}_{\rm e,I}}{\kappa} \frac{1}{\sqrt{\gamma^2 \cos^2 (\alpha) + \sin^2 (\alpha)}}.
\label{eXasAlpha}
\end{equation}
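As a minimal illustration (not part of the original analysis), the tail contour defined by eqs. \ref{ePosition} and \ref{eXasAlpha} can be evaluated with a few lines of Python; the frequencies are the solar-circle values quoted in the next subsection, and the function name \texttt{tail\_xy} is ours:
\begin{verbatim}
# A minimal sketch: evaluate the tail contour of eqs. (ePosition)
# and (eXasAlpha). Frequencies are the solar-circle values quoted
# in Sect. 2.1; v_e and the 40 Myr age follow the figure above.
import numpy as np

OMEGA = 8.381e-16            # orbital frequency [1/s]
KAPPA = 1.185e-15            # epicyclic frequency [1/s]
GAMMA = 2.0 * OMEGA / KAPPA
MYR   = 3.156e13             # seconds per Myr
PC    = 3.086e13             # kilometres per parsec

def tail_xy(alpha, t, v_e=2.0):
    """(x, y) in pc of a star that escaped at angle alpha [rad]
    with speed v_e [km/s], a time t [s] ago."""
    X = GAMMA * v_e / KAPPA / np.sqrt(GAMMA**2 * np.cos(alpha)**2
                                      + np.sin(alpha)**2)
    kt = KAPPA * t
    x = X * (np.cos(alpha) * np.sin(kt)
             - np.sin(alpha) * (1.0 - np.cos(kt)))
    y = GAMMA * X * (np.cos(alpha) * (np.cos(kt) - 1.0)
                     - np.sin(alpha) * (np.sin(kt)
                       + t / GAMMA * (KAPPA**2 / (2*OMEGA) - 2*OMEGA)))
    return x / PC, y / PC

alpha = np.linspace(0.0, 2.0 * np.pi, 1000)
x, y = tail_xy(alpha, 40.0 * MYR)    # one closed contour of the figure
print(x.min(), x.max(), y.min(), y.max())
\end{verbatim}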
\subsection{The tilt of the tidal tail}
At time $t$, stars which escaped at velocity $\widetilde{v}_{\rm e,I}$ are located
at the curve given by \eq{ePosition} with $X$ substituted from \eq{eXasAlpha}.
\reff{ftailShape} shows the shape of tail I formed by stars escaping at $\widetilde{v}_{\rm e,I} = 2 \, \mathrm{km} \, \, \mathrm{s}^{-1}$ at seven
epochs, for the frequencies $\omega = 8.381 \times 10^{-16} \, \mathrm{s}^{-1}$ and
$\kappa = 1.185 \times 10^{-15} \, \mathrm{s}^{-1}$, which are expected at the Solar circle (at Galactocentric radius $R_{\rm 0} = 8.5 \, \mathrm{kpc}$)
for the Galaxy model by \citet{Allen1991}.
At the age of $40 \, \mathrm{Myr}$, the tail points almost towards the Galactic centre (red contour in \reff{ftailShape}), and with
progressing time the tail aligns with the direction of the orbit around the Galaxy (at $160 \, \mathrm{Myr}$; blue contour).
Later, the tail is again tilted with respect to its orbit ($240 \, \mathrm{Myr}$; yellow contour),
and these oscillations continue with time.
At a given time $t$, the tail is parametrised by the angle $\alpha$.
Thus, a quantitative description of the tail tilt can be obtained by finding the value of $\alpha$ for which the distance
from the cluster $r = \sqrt{x^2 + y^2}$ is the largest.
The value of $\alpha_{\rm ext}$, which extremalises the distance along the tail, can be found by setting $\pder{r}{\alpha} = 0$, which results in
\begin{eqnarray}
&& \frac{(\widetilde{x}^2 + \widetilde{y}^2)(\gamma^2 - 1)}{\gamma^2 \cos^2 (\alpha_{\rm ext}) + \sin^2 (\alpha_{\rm ext})} (\sin(\alpha_{\rm ext}) \cos(\alpha_{\rm ext})) - \nonumber \\
&& \widetilde{x} \left(\sin(\alpha_{\rm ext}) \sin(\kappa t) + \cos(\alpha_{\rm ext})(1 - \cos(\kappa t)) \right) + \nonumber \\
&& \gamma \widetilde{y} \Bigg(\sin(\alpha_{\rm ext})(1 - \cos(\kappa t)) - \nonumber \\
&& \cos(\alpha_{\rm ext}) \left\{\sin(\kappa t) + \frac{t}{\gamma}\frac{(\kappa^2 - 4 \omega^2)}{2 \omega} \right\} \Bigg) = 0,
\label{eAlphaExt}
\end{eqnarray}
where the dimensionless quantities $\widetilde{x}$ and $\widetilde{y}$ are defined as $\widetilde{x} = x/X$ and $\widetilde{y} = y/X$, respectively.
\iffigs
\begin{figure}
\includegraphics[width=\columnwidth]{extentMinMax_1p_inset}
\caption{The time dependence of the theoretical tail tilt angle $\beta$ (according to eqs. \ref{eAlphaExt} and \ref{eBeta}).
The value of $\beta$ at a given age is independent of the velocity of escaping stars $\widetilde{v}_{\rm e,I}$ or any other cluster property.
Inset: Comparison of the theoretical solution (black line) with N-body models of initial stellar mass
$M_{\rm ecl} = 1400 \, \mathrm{M}_{\odot}$ and $M_{\rm ecl} = 4400 \, \mathrm{M}_{\odot}$ (dots).
}
\label{fTimeEvolv}
\end{figure} \else \fi
In the interval $\alpha \in (-\pi/2, \pi/2)$, \eq{eAlphaExt} has typically two solutions: one corresponding to the maximum distance $r$ (at angle $\alpha_{\rm ext}^{+}$),
and the other corresponding to the minimum distance $r$ (at angle $\alpha_{\rm ext}^{-}$).
The tilt of the tail is the angle $\beta$ at which the tip of the tail is seen from the cluster relative to the positive $y$-axis (\reff{ftailShape}), i.e.
\begin{equation}
\tan(\beta) = -\frac{x(\alpha_{\rm ext}^{+}(t), t)}{y(\alpha_{\rm ext}^{+}(t), t)}.
\label{eBeta}
\end{equation}
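The curve $\beta = \beta(t)$ is straightforward to evaluate numerically; the following Python sketch (our own cross-check, with the solar-circle constants as above) obtains $\alpha_{\rm ext}^{+}$ by brute-force maximisation of $r(\alpha)$ instead of solving \eq{eAlphaExt}:
\begin{verbatim}
# Brute-force evaluation of beta(t), eq. (eBeta): maximise r(alpha)
# numerically instead of solving eq. (eAlphaExt).
import numpy as np

OMEGA, KAPPA = 8.381e-16, 1.185e-15     # [1/s], solar circle
GAMMA = 2.0 * OMEGA / KAPPA
MYR = 3.156e13                          # seconds per Myr

def tail_xy(alpha, t, v_e=2.0):
    X = GAMMA*v_e/KAPPA/np.sqrt(GAMMA**2*np.cos(alpha)**2
                                + np.sin(alpha)**2)
    kt = KAPPA*t
    x = X*(np.cos(alpha)*np.sin(kt) - np.sin(alpha)*(1 - np.cos(kt)))
    y = GAMMA*X*(np.cos(alpha)*(np.cos(kt) - 1)
                 - np.sin(alpha)*(np.sin(kt)
                   + t/GAMMA*(KAPPA**2/(2*OMEGA) - 2*OMEGA)))
    return x, y

def beta_deg(t):
    alpha = np.linspace(0.0, np.pi, 100001)   # antipodal symmetry
    x, y = tail_xy(alpha, t)
    i = np.argmax(np.hypot(x, y))             # alpha_ext^+
    b = np.degrees(np.arctan2(-x[i], y[i]))   # tan(beta) = -x/y
    return (b + 90.0) % 180.0 - 90.0          # a tilt is defined mod 180 deg

for t_myr in (40, 100, 160, 240):
    print(t_myr, round(beta_deg(t_myr * MYR), 1))
\end{verbatim}
The printed tilts are independent of the assumed $\widetilde{v}_{\rm e,I}$, in line with the discussion below.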
At a given $t$, the angles $\alpha_{\rm ext}^{+}$ and $\alpha_{\rm ext}^{-}$ depend only on the Galactic frequencies $\omega$ and $\kappa$, and they
are independent of any property of the cluster such as, for example, $\widetilde{v}_{\rm e,I}$.
Thus, the tidal structures occupied by stars escaping the cluster at different velocities $\widetilde{v}_{\rm e,I}$
form concentric curves with respect to the origin; a group of stars escaping at a $k$ times larger velocity reaches a $k$ times larger distance from the cluster
in any direction in the $x$-$y$ plane (eq. \ref{eXasAlpha}), but the tidal structure has the same tilt for
a group of stars of any $\widetilde{v}_{\rm e,I}$.
The time evolution of the angle $\beta$ is shown in \reff{fTimeEvolv}.
To check our calculations, we compared the value of $\beta$ obtained from eqs. \ref{eAlphaExt}
and \ref{eBeta} with \textsc{nbody6} models of $M_{\rm ecl} = 1400 \, \mathrm{M}_{\odot}$ and $4400 \, \mathrm{M}_{\odot}$ clusters
with rapid gas expulsion and a star formation efficiency (SFE) of 33\%,
which were studied in Paper II (see there the description of models C03G13 and C10G13 for details).
In the numerical models, we calculate the tilt as the angle of the eigenvector of the covariance matrix of the stellar
$x$ and $y$ positions (projected onto the Galactic mid-plane) corresponding to the long axis of the tail.
The comparison, which is shown in the inset of \reff{fTimeEvolv}, demonstrates very
good agreement with the semi-analytical formulation (eqs. \ref{eAlphaExt} and \ref{eBeta}) at all but the youngest ages.
The scatter at the youngest ages ($t \lesssim 20 \, \mathrm{Myr}$) is caused by the difficulty of finding the semi-major axis of the tail, which
is almost spherical at this time.
Although the dating method is based on stellar kinematics through \eq{ePosition},
the only important observational quantity is the tail tilt angle $\beta$.
This might make the method simpler to use than standard kinematic methods, because it does not require stellar velocities for the age determination,
velocities being used only to identify the tidal tail.
For this reason, we will refer to this method in the following as the \textit{morphological method} (and to the corresponding ages as \textit{morphological ages}).
The standard methods of cluster age determination, which are based on stellar evolution (e.g. isochrone fitting, lithium depletion boundary and gyrochronology),
are referred to as stellar evolutionary methods (and the corresponding ages as stellar evolutionary ages).
Another advantage of the present method is its insensitivity to completeness, because $\beta$
can be estimated from only a fraction of the stars in the tail.
\subsection{The width and length of the tidal tail}
\iffigs
\begin{figure}
\includegraphics[width=\columnwidth]{extentMinMax_3p}
\caption{
The time dependence of the width of the tidal tail (top panel),
the tail length along its longest axis (middle panel),
and the tail aspect ratio (lower panel).
The different line styles in the top and middle panels represent expansion speeds $\widetilde{v}_{\rm e,I}$ of $1$, $2$ and $5 \, \mathrm{km} \, \, \mathrm{s}^{-1}$.
}
\label{fExtent}
\end{figure} \else \fi
The width $b$ of the tail is attained at the solution to \eq{eAlphaExt} which corresponds to the minimum distance,
i.e. at angle $\alpha_{\rm ext}^{-}$,
\begin{equation}
b(t) = 2 \sqrt{x^2(\alpha_{\rm ext}^{-}(t), t) + y^2(\alpha_{\rm ext}^{-}(t), t)}.
\label{etThickness}
\end{equation}
At a given age, $b(t)$ is a linear function of $\widetilde{v}_{\rm e,I}$ (eq. \ref{eXasAlpha}).
The time dependence of $b$ for $\widetilde{v}_{\rm e,I} = 1 \, \mathrm{km} \, \, \mathrm{s}^{-1}$, $2 \, \mathrm{km} \, \, \mathrm{s}^{-1}$ and $5 \, \mathrm{km} \, \, \mathrm{s}^{-1}$ is shown in the upper panel of \reff{fExtent}.
We note that although the shape of tail I resembles an ellipse, the curve is not an exact ellipse, because \eq{ePosition} does not
represent a parametric equation of an ellipse.
Likewise, the tail reaches its longest distance $a$ from the origin for stars escaping at the angle $\alpha_{\rm ext}^{+}$,
\begin{equation}
a(t) = 2 \sqrt{x^2(\alpha_{\rm ext}^{+}(t), t) + y^2(\alpha_{\rm ext}^{+}(t), t)}.
\label{etLength}
\end{equation}
The tail length obtained in this way for the three values of $\widetilde{v}_{\rm e,I}$ is shown in the middle panel of \reff{fExtent}.
The lower panel of the figure plots the time evolution of the aspect ratio of the tail, $b(t)/a(t)$, which shows
that the tail gets more elongated with time.
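For completeness, a brute-force numerical evaluation of eqs. \ref{etThickness} and \ref{etLength} (a sketch; \texttt{tail\_xy} re-implements eqs. \ref{ePosition} and \ref{eXasAlpha}, and the printed widths and lengths assume $\widetilde{v}_{\rm e,I} = 2 \, \mathrm{km} \, \, \mathrm{s}^{-1}$):
\begin{verbatim}
# Width b(t), length a(t) and aspect ratio b/a, eqs. (etThickness)
# and (etLength), by brute force over alpha (constants as before).
import numpy as np

OMEGA, KAPPA = 8.381e-16, 1.185e-15     # [1/s], solar circle
GAMMA = 2.0 * OMEGA / KAPPA
MYR, PC = 3.156e13, 3.086e13

def tail_xy(alpha, t, v_e):
    X = GAMMA*v_e/KAPPA/np.sqrt(GAMMA**2*np.cos(alpha)**2
                                + np.sin(alpha)**2)
    kt = KAPPA*t
    x = X*(np.cos(alpha)*np.sin(kt) - np.sin(alpha)*(1 - np.cos(kt)))
    y = GAMMA*X*(np.cos(alpha)*(np.cos(kt) - 1)
                 - np.sin(alpha)*(np.sin(kt)
                   + t/GAMMA*(KAPPA**2/(2*OMEGA) - 2*OMEGA)))
    return x, y

def width_length(t, v_e=2.0):
    alpha = np.linspace(0.0, np.pi, 100001)
    r = np.hypot(*tail_xy(alpha, t, v_e)) / PC   # distances in pc
    return 2.0 * r.min(), 2.0 * r.max()          # b(t), a(t)

for t_myr in (40, 100, 160, 240):
    b, a = width_length(t_myr * MYR)
    print(t_myr, round(b, 1), round(a, 1), round(b / a, 2))
\end{verbatim}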
\section{Application to the star clusters with known extended tails and unbound stellar streams}
\label{sApplication}
\subsection{Age determination}
\label{ssDynAge}
\iffigs
\begin{figure}
\includegraphics[width=\columnwidth]{AgeClustersTidalStreams}
\caption{
The morphological age determination for the star clusters with known extended tail structures from \citet{Meingast2021}, and for the
Psc-Eri and $\mu$ Tauri streams \citep{Meingast2019,Gagne2020}.
The upper panel shows the ages which correspond to the observed values of $\beta$.
The degeneracy of age for a given $\beta$ is illustrated by the dot colour: the age corresponding to the first and second tail
oscillation is shown by the red and blue dots, respectively.
The typical uncertainty in measuring $\beta$ is assumed to be $5 ^{\circ}$ (error bars).
\figpan{Lower panel:} Comparison of the morphological cluster age determination from the tail tilt (red and blue bars)
with the age estimates based on stellar evolution (grey bars; see \reft{tageEst} for details).
The most probable cluster age $t_{\rm sev}$ is indicated by the black vertical bars.
}
\label{fCompObs}
\end{figure} \else \fi
\begin{table*}
\begin{tabular}{lcccc|cccc}
Object name & $t_{\rm sev, min}$ & $t_{\rm sev, max}$ & $t_{\rm sev}$ & $M_{\rm ecl, obs}$ & $\beta$ & $t_{\rm mph, min}$ & $t_{\rm mph, max}$ & $M_{\rm ecl, mph}$ \\
& [Myr] & [Myr] & [Myr] & [$\rm{M}_{\odot}$] & [$^\circ$] & [Myr] & [Myr] & [$\rm{M}_{\odot}$] \\
\hline
Platais 9 & 78 & 347 & 100 & 285 & 15 & 103 & 126 & 20 \\
Messier 39 & 279 & 1023 & 310 & 325 & 22 & 215 & 245 & 20 \\
$\alpha$ Per & 35 & 110 & 87 & 1030 & 31 & 75 & 92 & 60 \\
NGC 2451A & 32 & 148 & 44 & 425 & 58 & 36 & 50 & 20 \\
IC 2602 & 30 & 100 & 35 & 400 & 51 & 46 & 60 & 20 \\
NGC 2547 & 27 & 78 & 27 & 590 & 80 & 7 & 20 & 870 \\
Blanco 1 & 63 & 209 & 94 & 365 & 22 & 91 & 110 & 5 \\
IC 2391 & 26 & 81 & 36 & 445 & 21 & 93 & 113 & 50 \\
NGC 2516 & 63 & 299 & 251 & 2550 & 17 & 205 & 265 & 30 \\
Pleiades & 86 & 176 & 86 & 850 & -32 & - & - & - \\
\hline
Psc-Eri Stream & 120 & 120 & 120 & $\gtrsim 2000$ & 39 & 62 & 78 & 200 \\
$\mu$ Tau & 55 & 69 & 62 & $\approx 250$ & 51 & 45 & 60 & 1200
\end{tabular}
\caption{Morphological age and mass estimates for ten open star clusters and two tidal streams (Psc-Eri stream and $\mu$ Tau).
The minimum, maximum and most probable stellar evolutionary age estimates are denoted by $t_{\rm sev, min}$, $t_{\rm sev, max}$ and $t_{\rm sev}$, respectively.
The measured angle of the tail tilt and the minimum and maximum morphological ages are denoted by $\beta$, $t_{\rm mph, min}$ and $t_{\rm mph, max}$, respectively;
and the observed and expected masses are denoted by $M_{\rm ecl, obs}$ and $M_{\rm ecl, mph}$, respectively.
For Messier~39 and NGC~2516, we give the morphological age estimates during the second tail oscillation, as these are in better agreement with the
stellar evolutionary ages.
The stellar evolutionary age and mass estimates are adopted from \citet{Meingast2021} for the open clusters (above the horizontal line),
and from \citet{Curtis2019}, \citet{Ratzenbock2020} and \citet{Gagne2020} for the Psc-Eri and $\mu$ Tau streams.
}
\label{tageEst}
\end{table*}
We illustrate the proposed age determination method on data which we compiled from the existing literature.
The examples below serve only as a consistency check of the method; they are not meant to improve the current age estimates of
these clusters and streams, because the angle $\beta$ was estimated only by eye from $x$-$y$ maps of the tidal structures, and
because we assume that the clusters or streams orbit the Galaxy on exactly circular trajectories (see \refs{ssSevComp} for more details).
We took as the basis for our cluster sample the $x$-$y$ maps from \citet{Meingast2021} (their figure A.2).
In addition to these structures, which still surround gravitationally bound clusters, we include two tidal streams in our sample:
the Psc-Eri stream from \citet{Meingast2019} with an age determination of \citet{Curtis2019}, and the $\mu$ Tauri association from \citet{Gagne2020}.
\reft{tageEst} lists the measured tilt angle $\beta$, and the estimated age range ($t_{\rm mph,min}$, $t_{\rm mph,max}$)
obtained by inverting the function $\beta = \beta(t)$ from \eq{eBeta} as illustrated in the top panel of \reff{fCompObs}.
We estimate that the angle $\beta$ is measured with an uncertainty of $\Delta \beta = \pm 5^\circ$ for each object\footnote{The estimate is based on the assumption that the tidal tail is divided into $k_{\rm seg}$ equal azimuthal segments
with the origin at the cluster,
and that the number of stars $n_{\rm seg}$ in each segment is distributed according to the Poisson distribution.
Requiring the ratio between the mean and the standard deviation to satisfy $\langle n_{\rm seg} \rangle / \sigma_{\rm n_{\rm seg}} \gtrsim 2$,
we obtain $\langle n_{\rm seg} \rangle \gtrsim 4$.
Further assuming that each tidal tail contains typically at least $300$ stars \citep[][their fig. 12]{Meingast2021},
one obtains $k_{\rm seg} = 300/\langle n_{\rm seg} \rangle = 75$ azimuthal segments, i.e. each segment spans $4.8 ^\circ$, which we identify with $\Delta \beta$.
This is an order-of-magnitude estimate, as the signal-to-noise ratio of $2$ and the number of stars in the tail of $\gtrsim 300$ might be too optimistic.
On the other hand, the estimate is provided for a non-elongated tidal tail; an elongated tidal tail has more stars in the segments
along its longer axis, which increases the signal-to-noise ratio.},
and the uncertainty is propagated to the age uncertainty.
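The arithmetic of this footnote estimate can be summarised in a few lines (a sketch; all numbers are the assumptions stated above):
\begin{verbatim}
# Order-of-magnitude estimate of Delta beta (see the footnote):
# Poisson statistics of tail stars in equal azimuthal segments.
snr_required = 2.0                  # <n_seg> / sigma_n_seg >= 2
n_seg_mean = snr_required**2        # Poisson: sigma = sqrt(<n>), so <n> >= 4
n_tail_stars = 300                  # assumed minimum number of tail stars
k_seg = n_tail_stars / n_seg_mean   # 75 azimuthal segments
print(360.0 / k_seg)                # 4.8 deg per segment ~ Delta beta
\end{verbatim}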
The red and blue points represent, respectively, the age estimate for the first (up to $168 \, \mathrm{Myr}$)
and second (between $168 \, \mathrm{Myr}$ and $336 \, \mathrm{Myr}$) oscillation of the tail, and they
illustrate that the age determination is degenerate for objects with $\beta - \Delta \beta < 18 ^\circ$ (Platais 9, NGC~2516, Messier~39, IC~2391 and Blanco~1).
Because of the degeneracy, it is useful to have a priori age knowledge for objects with $\beta - \Delta \beta < 18 ^\circ$ so that the
age can be searched for in either the first or the second oscillation.
The red and blue bars in the lower panel of \reff{fCompObs} represent the estimated morphological age interval ($t_{\rm mph,min}$, $t_{\rm mph,max}$)
during the first and second tail oscillation, respectively.
The range of age determination from various stellar evolutionary signposts is indicated by the grey bars,
and the most probable age (according to \citealt{Meingast2021}) is indicated by the black bars.
We exclude the Pleiades from further analysis because no prominent tidal tail has been found around the Pleiades so far, and the putative tail
found in the data of \citet{Meingast2021} points towards the Galactic anti-centre (i.e. $\beta < 0$), for which no age solution exists.
Of the $11$ remaining objects, six (Platais~9, NGC~2516, NGC~2451A, Blanco~1, $\mu$ Tau and $\alpha$ Per) have a
morphological age range either in complete agreement with, or differing by at most $10\%$ from,
the most probable stellar evolutionary age.
A notable example is NGC~2516, which agrees with its stellar evolutionary age during its second tail tilt (not the first tilt); this
might constrain its age to the interval from $\approx 200$ to $260 \, \mathrm{Myr}$\footnote{Using gyrochronology, \citet{Bouma2021} find a slightly younger age of $\approx 150 \, \mathrm{Myr}$.}.
Three other clusters (NGC~2547, Messier~39, and IC~2602) differ more from their stellar evolutionary ages, but the difference is not large (\reff{fCompObs}):
in the case of NGC~2547, $t_{\rm sev}$ differs from the lower morphological estimate $t_{\rm mph,min}$ by only $7 \, \mathrm{Myr}$ (see \reft{tageEst} for details);
for Messier~39, $t_{\rm sev} = 310 \, \mathrm{Myr}$ differs by a factor of $1.3$ from $t_{\rm mph, max} = 245 \, \mathrm{Myr}$;
and the morphological age estimate of IC~2602 lies within the interval allowed by the stellar evolutionary age.
Only two objects have $t_{\rm sev}$ substantially different from the morphological age estimate (Psc-Eri stream and IC~2391).
\subsection{Estimate of the initial embedded mass}
\label{ssMass}
\iffigs
\begin{figure*}
\includegraphics[width=\textwidth]{angleMinorAxis_semiAnl_both}
\caption{The relationship between the angle $\beta$ and the tail width $b$ as a function of the cluster age (black dots at selected times),
and the initial cluster mass $M_{\rm ecl}$ (lines).
The area enclosing clusters of masses from $312 \, \mathrm{M}_{\odot}$ to $10^4 \, \mathrm{M}_{\odot}$ is filled in red.
The panels from left to right show the intervals between minima of tail thickness, where $\beta$ evolves monotonically:
$\beta$ decreases from $0$ to $168 \, \mathrm{Myr}$ (left panel), increases from $168$ to $228.7 \, \mathrm{Myr}$ (middle panel), and decreases from $228.7$ to $336.1 \, \mathrm{Myr}$ (right panel).
For a given $\beta$, the tail width increases monotonically with the cluster mass.
The blue and cyan crosses represent \textsc{nbody6} simulations of the $M_{\rm ecl} = 1400 \, \mathrm{M}_{\odot}$ and $4400 \, \mathrm{M}_{\odot}$ clusters, respectively.
The simulated points with ages nearest to the selected ages (black dots) are shown by the large crosses.
The widths of the tails of the observed clusters and streams of \reft{tageEst} are indicated by the black
squares.
The expected tail width for the same clusters, but calculated from their observed mass $M_{\rm ecl,obs}$, is indicated by the yellow squares.
The majority of the observed tails are too narrow for their observed mass.
}
\label{fAngleThickness}
\end{figure*} \else \fi
The stars forming tidal tail I escape the cluster at a typical velocity $\widetilde{v}_{\rm e,I}$, which is proportional
to the initial velocity dispersion $\sigma$ in the embedded cluster.
The velocity dispersion is related to the initial cluster mass $M_{\rm ecl}$ and virial radius $R_{\rm V}$ by $\sigma^2 = G M_{\rm ecl}/(2 R_{\rm V})$ \citep{Aarseth2003}.
For a cluster represented by the Plummer model of scale radius $a_{\rm Pl}$, for which $R_{\rm V} = 16 a_{\rm Pl} /(3 \pi)$, the velocity dispersion reads
$\sigma^2 = 3 \pi G M_{\rm ecl}/(32 a_{\rm Pl})$.
Assuming that the cluster length-scale $a_{\rm Pl}$ depends only on the cluster mass \citep{Marks2012}, the velocity dispersion can be expressed only as a function of
the initial (embedded) mass in stars, $M_{\rm ecl}$.
Thus, from the extent of the tidal tail at a given age, we can determine $\widetilde{v}_{\rm e,I}$ (from eqs. \ref{eXasAlpha} and \ref{etThickness}),
and from this quantity $\sigma$ because $\sigma \propto \widetilde{v}_{\rm e,I}$, and finally from $\sigma$ we estimate the initial cluster mass.
Because we expect the volume density of tail I to be the highest (and the least contaminated) along its short axis,
we estimate the tail extent (i.e. its width) in this direction.
We take the typical velocity of tail I stars as $\widetilde{v}_{\rm e,I} = 4 \, \mathrm{km} \, \, \mathrm{s}^{-1}$ for $M_{\rm ecl} = 4400 \, \mathrm{M}_{\odot}$ (table 1 in Paper II), and
from the arguments above it follows that
\begin{equation}
\frac{\widetilde{v}_{\rm e,I}}{\, \mathrm{km} \, \, \mathrm{s}^{-1}} = 5.7 \left(\frac{M_{\rm ecl}}{10^4 \, \mathrm{M}_{\odot}}\right)^{7/16}.
\label{evelMass}
\end{equation}
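A minimal sketch of this mass estimate, assuming the calibration of \eq{evelMass} and the linearity of $b(t)$ in $\widetilde{v}_{\rm e,I}$; the $7/16$ exponent is consistent with $\sigma^2 \propto M_{\rm ecl}/a_{\rm Pl}$ and $a_{\rm Pl} \propto M_{\rm ecl}^{1/8}$, our reading of the \citealt{Marks2012} scaling, and the widths \texttt{b\_obs} and \texttt{b\_unit} below are illustrative placeholders, not measured values:
\begin{verbatim}
# Initial-mass estimate from the tail width via eq. (evelMass):
# v_e [km/s] = 5.7 (M_ecl / 1e4 Msun)^(7/16), and its inversion.
def v_e_of_mass(m_ecl):
    return 5.7 * (m_ecl / 1.0e4)**(7.0 / 16.0)

def mass_of_v_e(v_e):
    return 1.0e4 * (v_e / 5.7)**(16.0 / 7.0)

# calibration point from Paper II: ~4 km/s for M_ecl = 4400 Msun
print(v_e_of_mass(4400.0))            # ~4.0 km/s

# b(t) is linear in v_e (eq. eXasAlpha), so a measured width b_obs
# implies v_e = b_obs / b_unit, where b_unit is the width computed
# for v_e = 1 km/s at the same age. Illustrative numbers only:
b_obs, b_unit = 30.0, 12.0            # [pc]; placeholders
print(mass_of_v_e(b_obs / b_unit))    # inferred M_ecl [Msun]
\end{verbatim}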
The tail width $b$ expressed from \eq{etThickness} is shown by lines as a function of $\beta$ in \reff{fAngleThickness}.
Each of the panels corresponds to a time interval when the tail tilt evolves monotonically: the left panel is for the first
tail oscillation ($\beta$ decreases), the middle panel for the first part of the second oscillation ($\beta$ increases),
and the right panel for the second part of the second oscillation ($\beta$ decreases).
For comparison, we also plot the tail width for the N-body models C03G13 and C10G13 (blue and cyan crosses, respectively) of Paper II, which is
calculated as the variance of the stellar distances in the tidal tail (i.e. excluding the stars in the cluster) in the direction of the tail semi-minor axis.
The N-body models, which have cluster masses of $1400 \, \mathrm{M}_{\odot}$ and $4400 \, \mathrm{M}_{\odot}$, fall at the expected positions in the $\beta - b$ plane between the
theoretical curves for $1250 \, \mathrm{M}_{\odot}$, $2500 \, \mathrm{M}_{\odot}$ and $5000 \, \mathrm{M}_{\odot}$ for most of the time.
The tail widths for the observed clusters and streams are plotted by black squares in \reff{fAngleThickness}.
To provide an order-of-magnitude estimate, we neglect the uncertainty in the measurements of tail widths, which is probably dominated by incompleteness as
discussed below.
For simplicity, we show them during the first oscillation only.
For the tail width at a given $t$, we obtain the velocity $\widetilde{v}_{\rm e,I}$ from \eq{etThickness}, and the estimated initial mass $M_{\rm ecl,mph}$
from \eq{evelMass}.
However, the mass obtained in this way is substantially smaller (often by a factor of $10$; \reft{tageEst}) for most of the objects
than the observed mass $M_{\rm ecl,obs}$ of the cluster and tail combined.
In other words, when the expected tail width is calculated from the observed cluster mass $M_{\rm ecl,obs}$, it is substantially (by a factor of $3$) larger
than its observed value (yellow symbols in \reff{fAngleThickness}) for most of the objects.
Possible reasons for the discrepancy are discussed in \refs{ssMeclEst}.
\section{Discussion}
\label{sDiscussion}
\subsection{Comparison to stellar evolutionary age estimates}
\label{ssSevComp}
The analysis of \refs{ssDynAge} provides age estimates that are in very good agreement with stellar evolutionary age
estimates in $\approx 55$ \% of the cases, in reasonable agreement in $\approx 25$ \% of cases, and in tension in $\approx 20$ \% of cases.
The agreement for the majority of cases gives some support to the morphological method.
The discrepancy in the age for $\approx 20$ \% of cases (the Psc-Eri stream and IC~2391) indicates that one of the methods is incorrect.
Either one of the stellar evolutionary methods or the morphological method could be at fault.
For example, the main sequence fitting method for the
Psc-Eri stream provided an age of $\approx 1 \, \mathrm{Gyr}$ \citep{Meingast2019}, which was later revised downwards by a factor of $8$ \citep{Curtis2019}
because of incompleteness in the former data.
An excellent match with the morphological age for both objects could be obtained by changing the stellar evolutionary age by a factor of two, which is
a substantially smaller change than the factor of $8$ revision for the Psc-Eri stream in the example above.
The morphological method can be further improved by taking into account the non-circularity of the orbit, and the possible influence
of the Galactic bar.
The degree of non-circularity (eccentricity) can be obtained from the proper
motion of the cluster or stream, and then utilised for a correction of the relationship between the morphological
age and angle $\beta$ for the given eccentricity.
It is possible that including the effect of the orbital eccentricity would also improve the agreement between the stellar-evolutionary and morphological
age estimates.
Giant molecular clouds passing through a tidal tail can induce asymmetry in the tail \citep{Jerabkova2021}, as was indicated in
the tail (in this case tail II) of the Hyades \citep{Roser2019}.
According to the results of \citet{Jerabkova2021}, giant molecular clouds might also distort tidal tail I
so that its tilt $\beta$ would no longer be a clear function of the age of the cluster or stream.
Another mechanism likely to distort the shape of tail I is gas expulsion occurring off the cluster centre
(e.g. caused by a massive star which was sent to the outskirts of the still embedded cluster by an encounter near the centre).
This would accelerate stars in one direction at a larger velocity than in the opposite direction, breaking the assumption of isotropy.
We intend to explore some of these possibilities in a follow-up paper.
\subsection{Applications and limitations}
\label{ssLimitations}
Morphological age determinations can be applied both to tidal tails enveloping the gravitationally bound clusters
from which the tails presumably originated, and to stellar streams, which contain no gravitationally bound remnant cluster.
Moreover, it is not necessary that the stellar stream originated from a gravitationally bound object.
For example, consider a star forming region with sparse star formation occurring only in small groups or clusters,
which completely dissolve and release all their stars to the field during or soon after the disruption of their star forming cloud.
These stars will expand in random directions from the location of their birth clusters, and will be subjected to the Galactic tidal field.
At a given time $t$, stars released from each cluster will occupy an area approximately bounded by the contour of \reff{ftailShape} corresponding to $t$
and scaled by the typical escape velocity $\widetilde{v}_{\rm e,I}$ from each individual cluster.
Whether morphological dating provides a robust estimate depends mainly on the volume occupied by the star forming region.
Since the majority of star formation in the Galaxy occurs in filaments \citep{Andre2014}, the stars which formed in the filaments
occupy a filamentary configuration long after the star forming region has become inactive with all gas removed \citep{Jerabkova2019}.
Stellar relic filaments reach sizes of up to several hundred parsecs \citep{Jerabkova2019,Kounkel2019,Beccari2020,Wang2021}, and often
span large distances between gravitationally bound clusters \citep{Beccari2020}.
In this case, parts of relic filaments might be confused with tidal tails, which probably poses the largest limitation to the morphological dating method.
On the other hand, many star-forming regions occupy volumes smaller
(e.g. the maximum extent of the Taurus-Auriga star-forming region is $\approx 60 \, \mathrm{pc}$; \citealt{Galli2019})
than the volume of the tail already at a young age ($\approx 100 \, \mathrm{pc}$; \reff{ftailShape}); in this case, the tidal tails from the clusters will be
superimposed one over another, overlapping and probably indistinguishable from each other.
Nevertheless, the tilt $\beta$ of the tail will be comparable for all the clusters (the age spread of clusters forming within the same
star-forming region is $\lesssim 25 \, \mathrm{Myr}$; e.g. \citealt{Kawamura2009,Jeffries2011,Dobbs2013}, which is much less than $2 \pi/\kappa \approx 170 \, \mathrm{Myr}$),
so the morphological dating method can be applied to initially unbound configurations as well.
In order to estimate the limitations due to relic filaments, we consider that a typical relic filament has a length of $100 \, \mathrm{pc}$,
and that the filament forms a star cluster at one of its extremities.
The cluster releases stars to tail I at typical speeds of $\widetilde{v}_{\rm e,I} \approx 2 - 3 \, \mathrm{km} \, \, \mathrm{s}^{-1}$.
These stars will overtake the relic filament at an age of $\approx 30 - 50 \, \mathrm{Myr}$,
when both the filament and the tidal tail will be clearly discernible from one another.
This time is also near the upper limit on the observed age of relic filaments \citep{Jerabkova2019,Beccari2020}, and the age when
tail I becomes sufficiently elongated for its tilt to be measured (see the red contour in \reff{ftailShape}).
At the same time, the density of the relic filament decreases with time as the filament expands radially, lowering its contamination of tail I.
Thus, if the cluster forms as a part of a large filamentary structure, its morphological age can be obtained after it is older than $\approx 40 \, \mathrm{Myr}$.
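The overtaking age quoted above follows directly from the assumed filament length and escape speeds (a sketch of the arithmetic):
\begin{verbatim}
# Age at which tail-I stars overtake a 100 pc relic filament
# (values as assumed in the text).
PC_KM = 3.086e13                 # kilometres per parsec
MYR_S = 3.156e13                 # seconds per Myr
length_pc = 100.0
for v_e in (2.0, 3.0):           # km/s
    t_myr = length_pc * PC_KM / v_e / MYR_S
    print(v_e, round(t_myr))     # ~49 Myr and ~33 Myr
\end{verbatim}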
Another complication in determining the morphological age is posed by the stars which evaporate from the cluster and form tail II.
These stars occupy a very similar region of position-velocity space to stars from tail I, so both tails would probably be indistinguishable in the Gaia data.
This particularly impacts older clusters because stars forming tail II evaporate from clusters
at an approximately time-independent rate \citep{Baumgardt2003,Aarseth2003,Chumak2006a}.
Also, stars in tail II move at lower speeds than stars in tail I (Paper II), so tail II is to be found closer to the cluster and at
an elevated stellar density.
Unlike tail I, tail II is S-shaped in the vicinity of the cluster \citep{Chumak2006a,Kupper2008}, which would spuriously increase $\beta$
and thereby lead to an underestimate of the cluster age.
In order to suppress contamination by tail II, we suggest that, when measuring $\beta$, only stars sufficiently distant from the cluster be
taken into account.
Since the typical speeds in tail II are relatively low ($\widetilde{v}_{\rm e,II} \approx 1.4 \, \mathrm{km} \, \, \mathrm{s}^{-1}$ for a rather massive cluster of $4400 \, \mathrm{M}_{\odot}$ and
lower for lower mass clusters; see table 1 of Paper II), the analysis should ignore stars located closer than $\approx \widetilde{v}_{\rm e,II} t$
to the cluster, where $t$ is the prior estimate of the cluster age.
How many objects do we expect to be accessible to the method given the astrometry of the Gaia DR2 release?
The velocity cut of $2 \, \mathrm{km} \, \, \mathrm{s}^{-1}$ can be achieved for A0 stars closer than $1 \, \mathrm{kpc}$ \citep{Gaiac2016b}.
At this distance, the position error is $\lesssim 10 \, \mathrm{pc}$, which should be sufficient for the detection of tail I.
\citet{Porras2003} find $16$ very young star clusters having more than $100$ stars within the circle of radius $1 \, \mathrm{kpc}$ centred at the Sun.
For this estimate, we assume that only more populous clusters (having more than $800$ stars) can produce tails with a discernible tilt.
Assuming that the Galaxy forms embedded clusters according to an embedded cluster mass function in the form of
$\nderrow{N_{\rm ecl}(M_{\rm ecl})}{M_{\rm ecl}} \propto M_{\rm ecl}^{-2}$ \citep[e.g.][]{Lada2003,FuenteMarcos2004},
spanning up to $M_{\rm ecl} \approx 10^4 \, \mathrm{M}_{\odot}$ \citep{Johnson2017},
$5/8$ of the star clusters with more than $100$ stars contain more than $800$ stars.
Further assuming that the catalogue of \citet{Porras2003} is complete for clusters younger than $5 \, \mathrm{Myr}$ and that the morphological
method can be used for clusters in the age interval from $40$ to $340 \, \mathrm{Myr}$ (see \refs{ssAgeRange}),
we obtain $5/8 \times 16 \times (340 - 40)/5 = 600$ objects (i.e. clusters or streams)
accessible to the method within $1 \, \mathrm{kpc}$ from the Sun.
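The arithmetic of this estimate reads (a sketch):
\begin{verbatim}
# Expected number of objects accessible to the method within 1 kpc.
frac_rich = 5.0 / 8.0              # clusters with > 800 stars among those with > 100
n_young = 16                       # clusters younger than 5 Myr (Porras et al. 2003)
n_windows = (340.0 - 40.0) / 5.0   # number of 5 Myr intervals in 40-340 Myr
print(frac_rich * n_young * n_windows)   # 600.0
\end{verbatim}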
Even though this is likely an upper estimate (because many of the clusters form within the same star-forming region, so that their tails cannot be
distinguished from each other, and because the upper mass of clusters which form within the Solar circle is probably lower than $10^4 \, \mathrm{M}_{\odot}$
according to \citealt{PflammAltenburg2008}),
we estimate that there might be more than one hundred objects in the Gaia DR2 data accessible for this dating method.
\subsection{The range of cluster ages available for the morphological dating method and its accuracy}
\label{ssAgeRange}
The morphological age determination method is suited for younger clusters, where $\beta$ depends sensitively on the age.
This is mainly because the function $\beta = \beta(t)$ is not an injective function; there is one $t$ for $\beta \gtrsim 18.3 ^\circ$, three possible values
for $t$ for $\beta \in (10.5 ^\circ, 18.3 ^\circ)$, five possible values of $t$ for $\beta \in (7.4 ^\circ, 10.5 ^\circ)$, and the degeneracy quickly
increases for $\beta \lesssim 7.4 ^\circ$ (\reff{fTimeEvolv}).
Any systematic error in the determination of $\beta$ (e.g. from the cluster having an eccentric orbit or confusion between tail I and tail II stars), results in
a large uncertainty in age $t$ for $\beta$ sufficiently small.
Taking $\Delta \beta = 5 ^\circ$ as an estimate of the uncertainty in $\beta$ determination, this method is useless for $t \gtrsim 336 \, \mathrm{Myr}$
because $\beta$ changes by less than $\approx 2 \Delta \beta$ since this age.
On the other hand, $\beta$ changes rapidly for younger clusters (it decreases from $\approx 90 ^\circ$ to $18.3 ^\circ$, the smallest angle with a uniquely
determined time, within $105 \, \mathrm{Myr}$), offering a very sensitive tool for clusters in this age range.
Likewise, $\beta$ is sensitive to $t$ also during the second tail oscillation (between $168 \, \mathrm{Myr}$ and $336 \, \mathrm{Myr}$).
Knowledge of a prior estimate of the tail age (e.g. from a stellar evolutionary method) would be useful because it would
constrain whether the tail is in its first or second oscillation,
and then the inversion $t = t(\beta)$ could be done in the appropriate time interval, which would provide the morphological estimate for the age.
We show an example of using the prior estimate for the tail age in \refs{ssDynAge} (for NGC~2516; see also \reff{fCompObs}).
The form of $\beta = \beta(t)$ and the assumed error of $\Delta \beta = 5 ^\circ$ can be used for estimating the age of clusters younger than $\approx 340 \, \mathrm{Myr}$.
On the other hand, the method appears to be less accurate or problematic for clusters younger than $\approx 40 \, \mathrm{Myr}$ because of the possible presence of relic filaments and
the initially spherical expansion of tail I (see \refs{ssLimitations} for details).
The assumed uncertainty in $\beta$ of $\pm 5^\circ$ propagates to an age uncertainty of $\approx 12 \, \mathrm{Myr}$ during the first tail oscillation (i.e. at ages younger
than $168 \, \mathrm{Myr}$), and to an age uncertainty of $\approx 30 \, \mathrm{Myr}$ during the second tail oscillation (i.e. between $168$ and $336 \, \mathrm{Myr}$).
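These age uncertainties follow from the local slope of $\beta(t)$, $\Delta t \approx \Delta \beta / |{\rm d}\beta/{\rm d}t|$; a sketch of the propagation, reusing the brute-force $\beta(t)$ from \refs{sMethodDescr} (the function is repeated here to keep the listing self-contained):
\begin{verbatim}
# Propagate Delta beta = 5 deg to an age uncertainty via the local
# slope of beta(t): Delta t ~ Delta beta / |d beta / d t|.
import numpy as np

OMEGA, KAPPA = 8.381e-16, 1.185e-15     # [1/s], solar circle
GAMMA = 2.0 * OMEGA / KAPPA
MYR = 3.156e13                          # seconds per Myr

def tail_xy(alpha, t, v_e=2.0):
    X = GAMMA*v_e/KAPPA/np.sqrt(GAMMA**2*np.cos(alpha)**2
                                + np.sin(alpha)**2)
    kt = KAPPA*t
    x = X*(np.cos(alpha)*np.sin(kt) - np.sin(alpha)*(1 - np.cos(kt)))
    y = GAMMA*X*(np.cos(alpha)*(np.cos(kt) - 1)
                 - np.sin(alpha)*(np.sin(kt)
                   + t/GAMMA*(KAPPA**2/(2*OMEGA) - 2*OMEGA)))
    return x, y

def beta_deg(t):
    alpha = np.linspace(0.0, np.pi, 100001)
    x, y = tail_xy(alpha, t)
    i = np.argmax(np.hypot(x, y))
    b = np.degrees(np.arctan2(-x[i], y[i]))
    return (b + 90.0) % 180.0 - 90.0

def age_uncertainty(t_myr, dbeta=5.0, dt=1.0):
    slope = abs(beta_deg((t_myr + dt)*MYR)
                - beta_deg((t_myr - dt)*MYR)) / (2.0 * dt)
    return dbeta / slope                # [Myr]

print(age_uncertainty(100.0))   # first oscillation; cf. ~12 Myr in the text
print(age_uncertainty(250.0))   # second oscillation; cf. ~30 Myr in the text
\end{verbatim}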
This uncertainty, which is $\approx 10$ to $20$\%, is comparable to that of the most accurate stellar evolutionary dating methods, i.e.
isochrone fitting and lithium depletion boundary \citep{Meynet2009,Soderblom2010,Jeffries2013,Martin2018,Binks2021},
but the morphological method is much easier to apply to a particular cluster or stream as it does not need stellar evolutionary models or
dedicated spectroscopic observations.
However, the role of possible systematic errors (e.g. non-circular orbits or contamination) remains to be clarified.
\subsection{Estimate of the initial cluster mass}
\label{ssMeclEst}
Although the morphological age estimate is in agreement with other dating methods for the majority of the clusters in our sample, the
estimate of the initial mass (\refs{ssMass}) provides substantially lower cluster masses than observed in most of the cases.
The discrepancy in the initial cluster masses (or equivalently the tail widths) might point to incompleteness in the
observational data, or possibly contamination with stars of tail II, which is most conspicuous close to the cluster.
The estimate of cluster mass is based on more assumptions (e.g. on the initial cluster radius and the cluster's virial state)
than the estimate for the age, which might amplify the uncertainty.
For the order-of-magnitude estimate in this work, we inspected the available data by eye only.
It is possible that more sophisticated tools (e.g. a Bayesian analysis of the probability that a star belongs to tail I)
might lead to a substantial improvement of the present estimate.
Alternatively, the same outcome could arise if the velocity $\widetilde{v}_{\rm e,I}$ is lower than assumed, for example if the clusters form with
an SFE larger than $1/3$; if the gas expulsion is adiabatic rather than rapid; or if the majority of the extended structure originates from
stars which were never gravitationally bound to the cluster, but which formed near the cluster in the same star forming region.
\section{Summary}
\label{sSummary}
We propose a new method (morphological method) for dating open star clusters with extended tidal tails
and stellar streams based on the tilt angle $\beta$ of their extended tidal structure measured from the
direction of their orbit around the Galaxy.
The tidal tail, which is coeval with the cluster, forms at an early age either from stars released due to expulsion of non-star forming gas from the cluster or
from stars formed in the same star forming region in the vicinity of the cluster.
Classical tidal tails, which form by the gradual evaporation of stars from the cluster, cannot be used with this method.
We show analytically that the tilt angle $\beta$ for objects (i.e. clusters or streams) at a given Galactocentric distance is only a function of the object age $t$
and not a function of any other property of the object such as its initial mass or radius.
The age can be found by inverting the theoretical dependence $\beta = \beta(t)$ for observed $\beta$ (upper panel of \reff{fCompObs}).
The method is suitable for younger objects ($40 \, \mathrm{Myr} \lesssim$ age $\lesssim 350 \, \mathrm{Myr}$),
where we estimate the accuracy to be $10$ to $20$\% of the age of the object (for an error in $\beta$ of $5 ^\circ$),
which is comparable to the errors of the currently most accurate stellar evolutionary dating methods,
which utilise isochrone fitting or the lithium depletion boundary.
The morphological method does not necessarily aim at exceeding the accuracy of the stellar evolutionary methods,
but at providing an estimate which is completely independent of models of stellar evolution.
The main advantage of this method is its ease of use and its independence from
stellar evolutionary models and stellar velocity measurements (apart from the velocity cut to detect the tidal structure).
The present derivation applies to clusters or streams on circular orbits only, but its extension to mildly eccentric orbits should be straightforward.
The orbital eccentricity can be estimated from stellar velocities and then used to correct the results.
We plan to derive the solution for non-circular orbits in a follow-up study.
Possible sources of systematic errors originate from the time-scale and isotropy of gas expulsion,
contamination due to relic stellar filaments \citep{Jerabkova2019,Beccari2020}, and
contamination due to the stars which gradually evaporate from the cluster forming the classical tidal tail.
We attempted to use the same analytical approach for estimating the initial mass of the clusters from the width of the tidal structure,
but this yielded substantially lower masses than the masses of the observed structures for most of the objects.
We attribute the difference to the sensitivity of the method to the initial conditions before and during gas expulsion, or to incompleteness
of current observational data.
We illustrate morphological dating on a sample of ten open clusters and two unbound stellar streams,
which we compiled from the literature (\refs{ssDynAge}), and we compare
the results with standard dating methods based on stellar evolution.
Although we assumed that the clusters and streams orbit the Galaxy on circular trajectories and the stellar evolutionary methods often admit large uncertainties,
the morphological method agrees very well with stellar evolutionary methods in $55$\% of cases, agrees approximately in $25$\%
of cases, and is in tension in only $20$\% of cases.
Although the results are encouraging, we caution that the present model is idealised, and future theoretical work is needed for the
method to become more accurate and for its limitations to be better understood.
Because of the aforementioned limitations, we use these examples only as an illustration of the method
without attempting to improve the current estimates of the age of the objects.
\begin{acknowledgements}
We would like to thank an anonymous referee for the useful comments, which improved the quality of the paper.
FD, PK and L\v{S} acknowledge support from the Grant Agency of the Czech Republic under grant number 20-21855S.
TJ acknowledges support from the ESA (European Space Agency) Research Fellowship.
\end{acknowledgements}
\section{Introduction}
\mlabel{sec:1}
One of the core ideas of quantum theory is that the states
of a quantum system correspond to one-dimensional subspaces
of a complex Hilbert space~$\mathcal{H}$, i.e., the elements
$[v] = {\mathbb C} v$, $v \not=0$, of its projective space ${\mathbb P}(\mathcal{H})$.
This set carries a geometric structure defined by
the transition probability
\[ \tau([v],[w]) := \frac{ |\langle v,w \rangle|^2}{\langle v,v\rangle \langle w,w\rangle}
\in [0,1]\]
between two states $[v]$ and $[w]$,
where $d([v],[w]) = \arccos \sqrt{\tau([v],[w])} \in [0,\pi/2]$
is the corresponding Riemannian metric (the Fubini--Study metric),
turning it into a Riemann--Hilbert manifold. {\it Wigner's Theorem}
characterizes the automorphisms of $({\mathbb P}(\mathcal{H}),\tau)$,
resp., the isometries for the metric, as those bijections
induced on ${\mathbb P}(\mathcal{H})$ by elements of the {\it antiunitary group}~$\mathop{{\rm AU}}\nolimits(\mathcal{H})$
of all linear and antilinear surjective isometries of $\mathcal{H}$
(\cite{Ba64}).
Accordingly, we have an isomorphism
\[ \mathop{{\rm Aut}}\nolimits({\mathbb P}(\mathcal{H}),\tau) \cong \mathop{{\rm AU}}\nolimits(\mathcal{H})/{\mathbb T} \mathbf{1} =: \mathop{{\rm PAU}}\nolimits(\mathcal{H})\]
of $\mathop{{\rm Aut}}\nolimits({\mathbb P}(\mathcal{H}),\tau)$ with the {\it projective antiunitary group $\mathop{{\rm PAU}}\nolimits(\mathcal{H})$.}
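In finite dimensions the invariance of $\tau$ is easy to verify numerically; the following Python sketch (a toy illustration on ${\mathbb C}^3$, not part of the theory) checks that $\tau$ depends only on the rays $[v], [w]$ and is preserved by the antiunitary conjugation $v \mapsto \overline{v}$:
\begin{verbatim}
# Toy numerical check on C^3: tau descends to the projective space
# and is preserved by the antiunitary conjugation v -> conj(v).
import numpy as np

def tau(v, w):
    return abs(np.vdot(v, w))**2 / (np.vdot(v, v).real
                                    * np.vdot(w, w).real)

rng = np.random.default_rng(0)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

print(np.isclose(tau(v, w), tau(2j * v, -3.0 * w)))    # depends only on rays
print(np.isclose(tau(v, w), tau(v.conj(), w.conj())))  # conjugation-invariant
print(np.arccos(np.sqrt(tau(v, w))))                   # distance in [0, pi/2]
\end{verbatim}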
So any action of a group $G$ by symmetries of a quantum
system leads to a homomorphism $\overline\pi \: G \to \mathop{{\rm PAU}}\nolimits(\mathcal{H})$ and further to a
homomorphism of the pullback
extension $G^\sharp = \overline\pi^*\mathop{{\rm AU}}\nolimits(\mathcal{H})$ of $G$ by the circle group ${\mathbb T}$ to $\mathop{{\rm AU}}\nolimits(\mathcal{H})$, i.e., an antiunitary representation.
More precisely, for a pair $(G, G_1)$, where $G_1 \subseteq G$ is a subgroup
of index $2$, a homomorphism $U \: G \to \mathop{{\rm AU}}\nolimits(\mathcal{H})$ is called an
{\it antiunitary representation of $(G,G_1)$} if
$U_g$ is antiunitary for $g \not\in G_1$.
If $G$ is a topological group with two connected components, then
we obtain a canonical group pair by $G_1 := G_0$ (the identity component).
In this case an {\it antiunitary representation of $G$} is a continuous
homomorphism $U \: G \to \mathop{{\rm AU}}\nolimits(\mathcal{H})$ mapping $G\setminus G_0$ into antiunitary operators.
In the mathematical literature on
representations, antiunitary operators have never been in focus,
whereas in quantum physics one is forced to consider antiunitary operators
to implement a time-reversal symmetry (\cite{Wig59}).
If the dynamics of a quantum system is
described by a unitary one-parameter group $U_t = e^{itH}$, where
the Hamiltonian $H$ is unbounded and bounded from below,
then a unitary time reversal operator $\mathcal{T}$ would
lead to the relation $\mathcal{T} H \mathcal{T} = - H$, which is incompatible with
$H$ being bounded from below. This problem is overcome
by implementing time reversal by an antiunitary operator because it imposes no
restrictions on the spectrum of the Hamiltonian.
In particular, the PCT Theorem in Quantum Field Theory (QFT)
which concerns the implementation of a symmetry
reversing parity (P), charge (C) and time (T), leads to an
extension of a unitary representation of the
Poincar\'e group $P(d)^\uparrow_+ = {\mathbb R}^d \rtimes \mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})^\uparrow$
to an antiunitary representation of
the larger group $P(d)_+ \cong {\mathbb R}^d \rtimes\mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})$
(\cite[Thm.~II.5.1.4]{Ha96}).
In the modular theory of operator algebras
one studies pairs $(\mathcal{M},\Omega)$ consisting
of a von Neumann algebra $\mathcal{M} \subseteq B(\mathcal{H})$ and
a cyclic separating unit vector $\Omega\in\mathcal{H}$.
Then $S(M\Omega) := M^*\Omega$ for $M \in \mathcal{M}$ defines an unbounded
antilinear involution, and the polar decomposition
of its closure $\overline S = J \Delta^{1/2}$ leads to a positive
selfadjoint operator $\Delta = S^*\overline S$, an antiunitary involution $J$
satisfying the {\it modular relation} $J\Delta J = \Delta^{-1}$,
and $\alpha_t(M) := \Delta^{it}M\Delta^{-it}$ defines automorphisms
of $\mathcal{M}$ (see \cite{BR87} and \S\ref{subsec:4.1}).
In particular, we are naturally led to antiunitary symmetries.
We say that $(\Delta, J)$
is a pair of {\it modular objects} if $J$ is a conjugation
and $\Delta$ a positive selfadjoint operator satisfying the modular relation.
To connect this with QFT, we recall the notion of a
{\it Haag--Kastler net} of $C^*$-sub\-algebras $\mathcal{A}(\mathcal{O})$
of a $C^*$-algebra $\mathcal{A}$,
associated to (bounded) regions $\mathcal{O}$ in $d$-dimensional Minkowski space.
The algebra $\mathcal{A}(\mathcal{O})$ is interpreted as observables
that can be measured in the ``laboratory'' $\mathcal{O}$.
Accordingly, one requires {\it isotony}, i.e., that $\mathcal{O}_1 \subseteq \mathcal{O}_2$ implies
$\mathcal{A}(\mathcal{O}_1) \subseteq \mathcal{A}(\mathcal{O}_2)$ and that the $\mathcal{A}(\mathcal{O})$ generate $\mathcal{A}$.
Causality enters by the {\it locality} assumption that
$\mathcal{A}(\mathcal{O}_1)$ and $\mathcal{A}(\mathcal{O}_2)$ commute if
$\mathcal{O}_1$ and $\mathcal{O}_2$ are space-like separated, i.e., cannot communicate
with each other (cf.~Example~\ref{ex:caus-comp}).
Finally one assumes an action
$\sigma \: P(d)_+^{\mathop{\uparrow}} \to \mathop{{\rm Aut}}\nolimits(\mathcal{A})$
of the connected Poincar\'e group such that
$\sigma_g(\mathcal{A}(\mathcal{O})) = \mathcal{A}(g\mathcal{O})$. Every Poincar\'e invariant state $\omega$
of the algebra $\mathcal{A}$ now leads by the GNS construction to a covariant
representation $(\pi_\omega, \mathcal{H}_\omega, \Omega)$ of $\mathcal{A}$, and hence
to a net $\mathcal{M}(\mathcal{O}) := \pi_\omega(\mathcal{A}(\mathcal{O}))''$ of von Neumann algebras
on $\mathcal{H}_\omega$. Whenever $\Omega$ is cyclic and separating for
$\mathcal{M}(\mathcal{O})$, we obtain modular objects
$(\Delta_\mathcal{O}, J_\mathcal{O})$. This connection between the
Araki--Haag--Kastler theory of local observables
and modular theory leads naturally to antiunitary group representations
(cf.\ Section~\ref{sec:5}).
The starting point for the recent development that led to
fruitful applications of modular theory in QFT
was the Bisognano--Wichmann Theorem, asserting that
the modular automorphisms $\alpha_t(M) = \Delta^{it}M \Delta^{-it}$
corresponding to the algebra $\mathcal{M}(W)$ of observables associated with
a wedge domain $W$ in Minkowski space (cf.~Definition~\ref{def:wedges})
are implemented by the unitary action of a
one-parameter group of Lorentz boosts preserving $W$ (\cite{BW76}).
This geometric implementation of modular automorphisms in terms of
Poincar\'e transformations was an important first step in a
rich development based on the work of Borchers and Wiesbrock
in the 1990s \cite{Bo92, Bo95, Bo97, Wi92, Wi93, Wi93c}.
They managed to distill the abstract essence from the Bisognano--Wichmann
Theorem, which led to a better understanding of the
basic configurations of von Neumann algebras
in terms of half-sided modular inclusions
and modular intersections.
This immediately led to very tight connections between
the geometry of homogeneous spaces and modular theory \cite{BGL93}.
In his survey \cite{Bo00}, Borchers described how
these concepts have revolutionized quantum field theory.
Subsequent developments can be found in \cite{Tr97, Sch97, Ar99, BGL02, Lo08, JM17};
for the approach to Quantum Gravity based on
Non-commutative Geometry and Tomita--Takesaki Theory, see in particular~\cite{BCL10}.
A key insight that simplifies matters considerably is that
modular objects
$(\Delta, J)$ associated to a pair $(\mathcal{M}, \Omega)$ of a von Neumann algebra
$\mathcal{M}$ and a cyclic separating vector $\Omega$ are completely determined by
the real subspace
\[ V_\mathcal{M} := \overline{\mathcal{M}_h\Omega}, \quad \mbox{ where } \quad
\mathcal{M}_h = \{ M \in \mathcal{M} \: M^* = M\}.\]
It satisfies
$V_\mathcal{M} \cap i V_\mathcal{M} = \{0\}$ and $V_\mathcal{M} + i V_\mathcal{M}$ is dense in $\mathcal{H}$.
Closed real subspaces $V \subseteq \mathcal{H}$ with these two properties
are called {\it standard}.
Every standard subspace $V$ determines, via the polar decomposition
of the closed operator $S_V$ defined on $V + i V$ by
$S_V(x + i y) = x - iy$, a pair $(\Delta_V, J_V)$ of modular objects
and, conversely, any such pair $(\Delta, J)$ determines a standard subspace
as the fixed point space of $J\Delta^{1/2}$ (see Section~\ref{sec:3}).
We refer to \cite{Lo08} for an excellent survey on this correspondence.
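In finite dimensions this correspondence can be computed explicitly; the following Python sketch (a toy example with an ad hoc standard subspace $V \subseteq {\mathbb C}^2$, viewed as ${\mathbb R}^4$; the basis choice is ours) builds the real-linear matrix of $S_V$, extracts $(\Delta_V, J_V)$ from its polar decomposition, and checks $J^2 = \mathbf{1}$ and the modular relation $J \Delta J = \Delta^{-1}$:
\begin{verbatim}
# Toy computation of (Delta_V, J_V) for a standard subspace V of C^2,
# in the real picture C^2 ~ R^4, z = x + iy -> (x, y).
import numpy as np

def to_real(z):
    return np.concatenate([z.real, z.imag])

n = 2
# columns of B span V over R; one checks V + iV = C^2, V cap iV = {0}
B = np.array([[1.0, 0.0], [1/np.sqrt(2), 1j/np.sqrt(2)]]).T

# S_V(v + i w) = v - i w for v, w in V: in the real basis
# {b_1,...,b_n, i b_1,...,i b_n} this is diag(1,...,1,-1,...,-1)
M = np.column_stack([to_real(B[:, k]) for k in range(n)]
                    + [to_real(1j * B[:, k]) for k in range(n)])
S = M @ np.diag([1.0]*n + [-1.0]*n) @ np.linalg.inv(M)

# polar decomposition S = J Delta^{1/2}; the antilinear adjoint of S
# is represented by the real transpose, so Delta = S^T S
Delta = S.T @ S
w, U = np.linalg.eigh(Delta)
J = S @ (U @ np.diag(w**-0.5) @ U.T)        # J = S Delta^{-1/2}

print(np.allclose(J @ J, np.eye(2*n)))                   # J^2 = 1
print(np.allclose(J @ Delta @ J, np.linalg.inv(Delta)))  # J Delta J = Delta^{-1}
\end{verbatim}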
In QFT, standard subspaces provide
the basis for the technique of modular localization, developed
by Brunetti, Guido and Longo in \cite{BGL02}. For some applications
we refer to \cite{Sch97, MSY06, Sch06, LW11, Ta12, LL14, Mo17}.
From the perspective of antiunitary representations,
standard subspaces $V$ with modular objects $(\Delta, J)$
are in one-to-one correspondence with antiunitary representations
\begin{equation}
\label{eq:antiuni-corres}
U \: {\mathbb R}^\times \to \mathop{{\rm AU}}\nolimits(\mathcal{H})\quad \mbox{ by } \quad
U_{-1} = J \quad \mbox{ and } \quad U_{e^t} = \Delta^{-it/2\pi}
\end{equation}
(Proposition~\ref{prop:3.2}).
Accordingly, antiunitary representations $(U,\mathcal{H})$
of the affine group $\mathop{{\rm Aff}}\nolimits({\mathbb R}) \cong {\mathbb R} \rtimes {\mathbb R}^\times$
correspond to one-parameter families of standard subspaces
$(V_x)_{x \in {\mathbb R}}$, where $V_x$ corresponds to the affine stabilizer group of~$x$.
Borchers' key insight was that the positive energy condition
on the representation of the translation group is intimately related to
inclusions of these subspaces. More precisely,
$U_{(t,1)}= e^{itP}$ satisfies $P \geq 0$ if and only
if $U_{(t,1)} V_0 \subseteq V_0$ holds for all $t \geq 0$
(\S \ref{subsec:3.3}). This leads to Borchers
pairs $(V,U)$, consisting of a standard subspace~$V$ and a unitary one-parameter group
$(U_t)_{t \in {\mathbb R}}$, a concept that is equivalent to so-called half-sided modular
inclusions $V_1 \subseteq V_2$ of standard subspaces,
which in turn were distilled from the corresponding concept of a
half-sided modular inclusion of von Neumann algebras
(\S\S\ref{subsec:3.3}, \ref{subsec:4.2}).
The main objective of this article is to describe
certain structures arising in QFT, such as nets of von Neumann algebras
and standard subspaces, from the perspective of antiunitary group
representations. Since any standard subspace $V$
corresponds to a representation of ${\mathbb R}^\times$
and inclusions of standard subspaces correspond to antiunitary
positive energy representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, it is very likely that a better
understanding of antiunitary representations and corresponding
families of standard subspaces provides new insight into the geometric
structures underlying QFT. This article is written
from a mathematical perspective and we are rather brief on the
concrete physical aspects mentioned in \S \ref{subsec:5.2}.
We tried to describe the mathematical side of the theory
as clearly as possible to make it easier for mathematicians
to understand the relevant aspects without going too much into the physics.
For more details of the physical side, we recommend
\cite{BDFS00, BGL02, Lo08, LL14}.
In particular, the programs outlined by Borchers and Wiesbrock
(see, for instance, \cite{Bo97, Bo00, Wi93c}) leave much potential
for an analysis from the representation theoretic perspective.
The structure of this paper is as follows.
In Section~\ref{sec:2} we discuss antiunitary representations of group
pairs $(G, G_1)$ and criteria for a unitary representation
of $G_1$ to extend to an antiunitary representation of $G$.
An interesting simplifying feature is that, whenever antiunitary
extensions exist, they are unique up to equivalence
(Theorem~\ref{thm:equiv}). We show that
irreducible unitary representations of $G_1$ fall into three types
(real, complex and quaternionic) with respect to their extendability
to antiunitary representations of $G$.
We also take a closer look at antiunitary representations
of one-dimensional Lie groups (\S \ref{subsec:one-par}).
Here ${\mathbb R}^\times$ plays a central role because
its antiunitary representations encode modular objects $(\Delta, J)$
as in \eqref{eq:antiuni-corres}.
We conclude Section~\ref{sec:2} with a discussion of antiunitary
representations of the affine group $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, the projective
group $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ and the $3$-dimensional Heisenberg group $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)$.
Section~\ref{sec:3} is devoted to various aspects of standard subspaces
as a geometric counterpart of antiunitary representations of~${\mathbb R}^\times$.
In particular, we discuss how the embedding $V \subseteq \mathcal{H}$ can be
obtained from the orthogonal one-parameter group
$\Delta^{it}\vert_V$ on $V$ (\S \ref{subsec:orthog}),
and in \S \ref{subsec:3.3} we discuss half-sided modular inclusions
of standard subspaces and how they are related to
antiunitary representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, $P(2)_+$ and $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$.
In Section~\ref{sec:4} we first recall some of the key features of
Tomita--Takesaki Theory. \S \ref{subsec:4.2} is of key importance
because it is devoted to the translation between pairs $(\mathcal{M},\Omega)$
of von Neumann algebras with cyclic separating vectors
and standard subspaces~$V$. We have already seen how to obtain a standard
subspace $V_\mathcal{M} = \overline{\mathcal{M}_h\Omega}$ from $(\mathcal{M},\Omega)$.
Conversely, one can use Second Quantization (see Section~\ref{sec:6} for details)
to associate to each standard subspace $V \subseteq \mathcal{H}$
pairs $(\mathcal{R}_\pm(V),\Omega)$, where $\mathcal{R}_\pm(V)$ is a von Neumann algebra
on the (bosonic/fermionic) Fock space $\mathcal{F}_\pm(\mathcal{H})$.
This method has been invented and
studied thoroughly by Araki and Woods in the 1960s and 1970s in the context of
free bosonic quantum fields (\cite{Ar64, Ar99, AW63, AW68});
some of the corresponding fermionic results are more recent
(cf.\ \cite{EO73}, \cite{BJL02}) and other statistics (anyons)
are discussed in \cite{Sch97}.
A central point is that these correspondences make it possible to translate between results
on configurations of standard subspaces and configurations of von Neumann algebras
with common cyclic vectors. We explain this in detail for half-sided modular
inclusions and Borchers pairs (\S\S\ref{subsec:4.2} and \ref{subsec:4.3})
but we expect it to go much deeper. Keeping in mind that
standard subspaces are in one-to-one correspondence with antiunitary
representations of ${\mathbb R}^\times$ and half-sided modular inclusions with
antiunitary positive energy representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, we expect that many interesting
results on von Neumann algebras can be obtained from a better understanding
of antiunitary representations of Lie group pairs $(G, G_1)$ and
configurations of homomorphisms $\gamma \: ({\mathbb R}^\times, {\mathbb R}^\times_+) \to (G,G_1)$.
The construction of free fields by second quantization associates to
an antiunitary representation $(U,\mathcal{H})$ of $G$
on the one-particle space $\mathcal{H}$, resp., to the corresponding
standard subspaces $V_\gamma$, a net of von Neumann algebras on Fock space.
However, there is also a converse aspect which is probably more important,
namely that the passage from pairs
$(\mathcal{M},\Omega)$ to the standard subspaces $V_\mathcal{M}$ is not restricted to free fields
and can be used to attach geometric structure to nets of von Neumann algebras,
all encoded in the subgroup of $\mathop{{\rm AU}}\nolimits(\mathcal{H})$ generated by all operators
$\Delta_\mathcal{M}^{it}$ and $J_\mathcal{M}$.
To substantiate this remark, we discuss in Section~\ref{sec:5}
several aspects of
nets of standard subspaces and von Neumann algebras as they arise in QFT.
In particular, we consider nets of standard subspaces
$(V_\ell)_{\ell \in L}$ arising from antiunitary representations $(U,\mathcal{H})$, which
leads to the covariance relation $U_g V_\ell = V_{g.\ell}$ for $g \in G_1$, and one expects
geometric information
to be encoded in the $G$-action on the index set $L$.
A~common feature of the natural examples is that $L$ has a fibration
over a symmetric space that corresponds to the projection
$(\Delta_\ell, J_\ell) \mapsto J_\ell$, forgetting the modular operator.
For details we refer to the discussion of several examples in
Section~\ref{sec:5}. Typical index sets $L$ arise as
conjugation orbits
$\{ \gamma^g, (\gamma^\vee)^g \: g \in G \} \subseteq \mathop{{\rm Hom}}\nolimits({\mathbb R}^\times, G)$,
where $\gamma^g(t) = g\gamma(t)g^{-1}$ and $\gamma^\vee(t) = \gamma(t^{-1})$.
In this picture, the above
projection simply corresponds to the evaluation map
$\mathop{{\rm ev}}\nolimits_{-1} \: \mathop{{\rm Hom}}\nolimits({\mathbb R}^\times, G) \to \mathop{{\rm Inv}}\nolimits(G)$ and the set $\mathop{{\rm Inv}}\nolimits(G)$ of involutions
of $G$ is a symmetric space (\cite{Lo69}; Appendix~\ref{app:a.2}).
In many concrete situations, the centralizer of $\gamma(-1)$ in $G$
coincides with the centralizer of the whole subgroup $\gamma({\mathbb R}^\times)$, so that
the conjugacy class $C_\gamma = \{\gamma^g \: g\in G\}$ can be identified
with the conjugacy class $C_{\gamma(-1)}$ of the involution $\gamma(-1)$, and
this manifold is a symmetric space. We are therefore led to
index sets which are ordered symmetric spaces, and these objects have been
studied in detail in the 1990s. We refer to the monograph \cite{HO96}
for a detailed exposition of their structure theory.
Section~\ref{sec:6} presents the second quantization process from
standard subspaces $V \subseteq \mathcal{H}$ to pairs $(\mathcal{R}^\pm(V), \Omega)$
in a uniform way, stressing in particular the similarity between the
bosonic and the fermionic case.
In the final Section~\ref{sec:7} we briefly describe some
perspectives and open problems.
Antiunitary representations occur naturally for interesting classes of
groups such as the Virasoro group,
conformal and affine groups related to euclidean Jordan algebras,
and automorphism groups of bounded symmetric domains. For detailed
results we refer to the forthcoming paper \cite{NO17}.
In \S\ref{subsec:7.4} we also explain how second quantization
leads to interesting dual pairs in the Heisenberg
group $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$: Any standard subspace
$V\subseteq \mathcal{H}$ satisfying the factoriality condition $V \cap V' = \{0\}$,
where $V'$ is the symplectic orthogonal space,
leads by restriction of the irreducible Fock representation
of $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ to a factor representation
of the subgroup $\mathop{{\rm Heis}}\nolimits(V)$, which forms a dual pair with
$\mathop{{\rm Heis}}\nolimits(V')$ in $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ (both subgroups are their mutual centralizers).
So far, such dual pairs have not been exploited systematically
from the perspective of unitary representations of infinite dimensional
Lie groups.
Some basic auxiliary lemmas and definitions have been
collected in the appendix. \\
{\bf Notation and conventions:}
As customary in physics, the
scalar product $\langle \cdot,\cdot\rangle$ on a complex Hilbert space $\mathcal{H}$
is linear in the second argument. \\
$\lbr S \rbr$ denotes the closed subspace
of a Hilbert space $\mathcal{H}$ generated by the subset $S$. \\
$\{a,b\} := ab + ba$ is the anti-commutator of two elements
of an associative algebra. \\
For the cyclic group of order $n$ we write ${\mathbb Z}_n = {\mathbb Z}/n{\mathbb Z}$.\\
For $\bx, \by \in {\mathbb R}^{d-1}$, we
write $\bx \by = \sum_{j = 1}^{d-1} x_j y_j$ for the scalar product and,
for $x = (x_0, \bx), y = (y_0, \by) \in {\mathbb R}^{d}$, we write
$[x,y] = x_0 y_0 - \bx \by$ for the Lorentzian scalar product on the
$d$-dimensional Minkowski space ${\mathbb R}^{1,d-1}\cong {\mathbb R}^d$.
The light cone in Minkowski space
is denoted
\[ V_+ = \{ x \in {\mathbb R}^{1,d-1} \: x_0 > 0, [x,x] > 0\}.\]
Here is our notation for some of the groups arising in physics:
\begin{itemize}
\item the {\it Poincar\'e group}
$P(d) \cong {\mathbb R}^{1,d-1} \rtimes \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})$ of affine isometries
of ${\mathbb R}^{1,d-1}$,
\item $P(d)_+ = {\mathbb R}^{1,d-1} \rtimes \mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})$
is the subgroup of orientation preserving maps, and
\item $P(d)^\uparrow = {\mathbb R}^{1,d-1} \rtimes \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})^\uparrow$
with $\mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})^\uparrow = \{ g \in \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R}) \: gV_+ = V_+\}$
the subgroup preserving the causal structure.
\item The corresponding {\it conformal group}
is $\mathop{\rm O{}}\nolimits_{2,d}({\mathbb R})$, acting on the conformal compactification
${\mathbb S}^1 \times {\mathbb S}^{d-1}$ of $M^d$ with the kernel $\{\pm \mathbf{1}\}$
(see \cite[\S17.4]{HN12}).
\end{itemize}
If not stated otherwise, all Lie groups in this paper are finite dimensional.
\section{Antiunitary representations}
\mlabel{sec:2}
In this section we discuss antiunitary representations of group
pairs $(G, G_1)$ and criteria for a unitary representation
of $G_1$ to extend to an antiunitary representation of $G$.
We start in \S \ref{subsec:2.1} with some general remarks on
group pairs $(G,G_1)$ and how to classify twists in this context.
We also take a closer look at antiunitary representations
of one-dimensional Lie groups in \S \ref{subsec:one-par}
and discuss antiunitary
representations of the affine group $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, the projective
group $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ and the $3$-dimensional Heisenberg group
in \S \ref{subsec:2.4}.
\begin{definition} An {\it antiunitary representation} $(U,\mathcal{H})$ of a
group pair $(G, G_1)$, where $G_1 \subseteq G$ is a subgroup of index $2$,
is a homomorphism $U$ of $G$ into the group $\mathop{{\rm AU}}\nolimits(\mathcal{H})$
of unitary or antiunitary operators on a
complex Hilbert space $\mathcal{H}$ for which $G_1 = U_G^{-1}(\mathop{\rm U{}}\nolimits(\mathcal{H}))$, i.e.,
$G_1$ is represented by unitary operators and the coset
$G\setminus G_1$ by antiunitary operators.
If $G$ is a Lie group, then $(G,G_1)$ is called a {\it Lie group pair}.
If $G$ is a topological group with two connected components, then
we obtain a canonical group pair by $G_1 := G_0$ (the identity component).
In this case an {\it antiunitary representation of $G$} is a continuous
homomorphism
$U \: G \to \mathop{{\rm AU}}\nolimits(\mathcal{H})$ mapping $G\setminus G_0$ into antiunitary operators.
\end{definition}
We start this section with a discussion of the natural class of
group pairs that will show up in the context of antiunitary representations.
\subsection{Involutive group pairs}
\mlabel{subsec:2.1}
\begin{definition} An {\it involutive group pair} is a
pair $(G, G_1)$ of groups, where $G_1 \subseteq G$ is a subgroup of index $2$
and there exists an element $g \in G \setminus G_1$ with
$g^2 \in Z(G_1)$. Then $\tau(g_1) := gg_1 g^{-1}$ defines an involutive
automorphism of $G_1$.
\end{definition}
In most examples that we encounter below $G$ is a Lie
group with two connected components and $G_1$ is its identity component.
\begin{remark} (a) If $g^2 \in Z(G_1)$, then other elements
$gh\in g G_1$ need not have central squares.
From $(gh)^2 = ghgh = g^2 \tau(h)h$ it follows that
$(gh)^2$ is central if and only if $\tau(h)h \in Z(G_1)$, which is
in particular the case if $\tau(h) = h^{-1}$.
(b) If $G$ is a Lie group, then any conjugacy class $C_g$ of $g \in G \setminus G_1$
with $g^2 \in Z(G)$ carries a natural symmetric space
structure (Appendix~\ref{app:a.2}). In fact, the stabilizer
of $g$ in $G_1$ is $G_1^\tau$, so that we obtain a diffeomorphism
\[ G_1/G_1^\tau \to C_g, \qquad h G_1^\tau \mapsto h g h^{-1} = h \tau(h)^{-1} g. \]
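Here the last equality follows from $g h^{-1} g^{-1} = \tau(h^{-1}) = \tau(h)^{-1}$, so that
\[ h g h^{-1} = h\, (g h^{-1} g^{-1})\, g = h \tau(h)^{-1} g. \]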
\end{remark}
\begin{exs} \mlabel{exs:conj}
(a) Let $\mathcal{H}$ be a complex Hilbert space
and $(G, G_1) := (\mathop{{\rm AU}}\nolimits(\mathcal{H}), \mathop{\rm U{}}\nolimits(\mathcal{H}))$.
An anti\-unitary operator $J \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$ is called a {\it conjugation}
if $J^2 = \mathbf{1}$ and an {\it anticonjugation} if $J^2 = - \mathbf{1}$.
Conjugations always exist and define a {\it real structure} on $\mathcal{H}$ in
the sense that $\mathcal{H}^J= \mathop{{\rm Fix}}\nolimits(J) := \ker(J-\mathbf{1})$ is a real Hilbert space whose complexification
is~$\mathcal{H}$.\begin{footnote}{For the existence, fix an orthonormal
basis $(e_j)_{j \in I}$ of $\mathcal{H}$ and define $J$ to be antilinear with
$Je_j = e_j$ for every $j \in I$.} \end{footnote}
Anticonjugations define on $\mathcal{H}$ a {\it quaternionic structure},
hence do not exist if $\mathcal{H}$ is of finite odd dimension.
Any (anti-)conjugation $J$ on $\mathcal{H}$ is contained in $G\setminus G_1$ and
satisfies $J^2 \in \{\pm \mathbf{1}\} \subseteq Z(\mathop{\rm U{}}\nolimits(\mathcal{H}))$.
(b) If $G_1$ is a group and $\tau \in \mathop{{\rm Aut}}\nolimits(G_1)$ is an involutive automorphism,
then \break $G := G_1 \rtimes \{\mathbf{1},\tau\}$ defines an involutive group pair.
\end{exs}
\begin{ex} (A non-involutive group pair)
Let $\sigma \: C_4 \to \mathop{{\rm Aut}}\nolimits({\mathbb C})$ denote the natural action of
the subgroup $C_4 = \{ \pm 1, \pm i\}\subseteq {\mathbb T}$ by
multiplication and form the semidirect product group
$G := {\mathbb C} \rtimes_\sigma C_4$. Then $G_1 := {\mathbb C} \rtimes_\sigma \{\pm 1\}$
is a subgroup of index $2$ but no element $g \in G\setminus G_1$ satisfies
$g^2 \in Z(G_1)$ because $g^2$ acts on ${\mathbb C}$ as $-\mathop{{\rm id}}\nolimits_{\mathbb C}$.
\end{ex}
\begin{remark} (Classification of involutive group pairs)
\mlabel{rem:2.1}
(a) Suppose we are given a group $G$ and an involutive automorphism
$\tau$ of $G$. We want to classify all group extensions
\[ \mathbf{1} \to G \to G^\sharp \to {\mathbb Z}_2 \to \mathbf{1}, \]
where the corresponding involution in the group $\mathop{{\rm Out}}\nolimits(G)$ of outer automorphisms of $G$
is represented by~$\tau$.
In view of \cite[Thm.~18.1.13]{HN12}, the equivalence classes of these extensions
are parametrized by the cohomology group $H^2_\tau({\mathbb Z}_2,Z(G))$,
where $\overline 1$ acts on $Z(G)$ by $\tau\vert_{Z(G)}$.
As any cocycle
$f \: {\mathbb Z}_2 \times {\mathbb Z}_2 \to Z(G)$ normalized by
$f(\overline 0,g) = f(g,\overline 0) = e$ is determined by the element
$z := f(\overline 1,\overline 1) \in Z(G)$ because all other values are trivial,
the group structure on the corresponding
extension is given by an element $\hat\tau \in G^\sharp \setminus G$ satisfying
\[ \hat\tau^2 = z \quad \mbox{ and } \quad \hat\tau g \hat\tau^{-1} = \tau(g)
\quad \mbox{ for }\quad g\in G.\]
This description shows in particular that $\tau(z) = z$, and a closer
inspection of the cohomology groups yields
\begin{equation}
\label{eq:h2-form}
H^2_\tau({\mathbb Z}_2, Z(G)) \cong Z(G)^\tau/Z(G)_\tau,
\quad \mbox{ where } \quad Z(G)_\tau := \{ \tau(z)z \: z \in Z(G)\}
\end{equation}
(\cite[Ex.~18.3.5]{HN12}).
(b) For $\tau\vert_{Z(G)} = \mathop{{\rm id}}\nolimits_{Z(G)}$ we have
\[ Z(G)_\tau = \{ z^2 \: z \in Z(G)\} \quad \mbox{ and }\quad
H^2_\tau({\mathbb Z}_2, Z(G)) \cong Z(G)/Z(G)_\tau.\]
(c) For $\tau\vert_{Z(G)} = -\mathop{{\rm id}}\nolimits_{Z(G)}$, we have
\[ H^2_\tau({\mathbb Z}_2, Z(G)) \cong Z(G)^\tau
= \{ z \in Z(G) \: z^2 = e\},\]
the subgroup of central involutions.
\end{remark}
\begin{remark}
Although by \eqref{eq:h2-form}
the cohomology groups $H^2_\tau({\mathbb Z}_2,Z(G))$ are elementary abelian
$2$-groups, one cannot expect any bound on the order of an element
$g \in G^\sharp \setminus G$. In the cyclic group
$G^\sharp = {\mathbb Z}_{2^n}$ with $G \cong {\mathbb Z}_{2^{n-1}}$, any
element of $G^\sharp \setminus G$ is of order $2^n$.
\end{remark}
\begin{ex} \mlabel{app:b.1} (a) For $G = {\mathbb R}$, Remark~\ref{rem:2.1} implies that
$H^2_\tau({\mathbb Z}_2,{\mathbb R}) = \{0\}$ for any involutive automorphism $\tau$.
This implies that $G^\sharp \cong {\mathbb R} \rtimes_\tau {\mathbb Z}_2$.
(b) For $G = {\mathbb T}$, the cohomology is trivial for $\tau = \mathop{{\rm id}}\nolimits_{\mathbb T}$,
but for $\tau(z) = z^{-1}$ the group
\[ H^2_{\tau}({\mathbb Z}_2, {\mathbb T}) \cong \{ z \in {\mathbb T} \: z^2 = 1\} = \{ \pm 1 \}\]
is non-trivial. A concrete model for the non-trivial extension with
$\hat\tau^2 = -1$ is given by the subgroup
\[ \mathop{{\rm Pin}}\nolimits_2({\mathbb R}) = \exp({\mathbb R} I) \cup J \exp({\mathbb R} I) \subseteq {\mathbb H}^\times, \]
where $I$ and $J$ are the two generators of the skew-field ${\mathbb H}$ of
quaternions satisfying $I^2 = J^2 = - \mathbf{1}$ and $IJ = - JI$
(\cite[Ex.~B.3.24]{HN12}).
This is a $1$-dimensional Lie group without a simply connected covering
group (\cite[Ex.~18.2.4]{HN12}).
\end{ex}
\begin{exs} \mlabel{ex:2.11}
Here are some concrete involutive group pairs $(G, G_1)$ that we shall
be dealing with.
(a) $G = \mathop{{\rm Aff}}\nolimits({\mathbb R})\cong {\mathbb R} \rtimes {\mathbb R}^\times$ with $G_1 \cong {\mathbb R} \rtimes {\mathbb R}^\times_+$,
the identity component. Here $r_x^2 = \mathbf{1}$ holds for the reflections
$r_x = (2x,-1)$ in $x \in {\mathbb R}$.
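Indeed, with the semidirect product law $(b,a)(b',a') = (b + ab', aa')$ and the action
$(b,a).y := ay + b$ on ${\mathbb R}$, the reflection $r_x(y) = 2x - y$ fixes $x$, and
$r_x^2 = (2x - 2x, 1) = \mathbf{1}$.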
(b) The automorphism group $G = \mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ of the real projective line
${\mathbb P}_1({\mathbb R}) \cong {\mathbb R} \cup \{\infty\}$, where $G_1 = \mathop{{\rm PSL}}\nolimits_2({\mathbb R})$
is the identity component and reflections in $\mathop{{\rm GL}}\nolimits_2({\mathbb R})$ lead to orientation
reversing involutions of ${\mathbb S}^1$.
(c) The {\it Poincar\'e group} $P(d) = {\mathbb R}^{1,d-1} \rtimes \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})$
of $d$-dimensional Minkowski space ${\mathbb R}^{1,d-1}$ contains the subgroup
$P(d)_+ = {\mathbb R}^{1,d-1} \rtimes \mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})$ of orientation preserving
affine isometries. Then we obtain the involutive group pair $(G, G_1)$ with
$G := P(d)_+$ and $G_1 := P(d)_+^\uparrow := P(d)_+ \cap P(d)^\uparrow$.
In the following the involution
$R_{01} := \mathop{{\rm diag}}\nolimits(-1,-1,1,\ldots, 1) \in G \setminus G_1$ plays an important role
(cf.\ Lemma~\ref{lem:4.17}).
(d) For a (bounded) symmetric domain $\mathcal{D} \subseteq {\mathbb C}^n$, the group $\mathop{{\rm Aut}}\nolimits(\mathcal{D})$ of
biholomorphic automorphisms is a subgroup of index $2$ in the
group $\mathop{{\rm AAut}}\nolimits(\mathcal{D})$ of all bijections of $\mathcal{D}$ that are either
holomorphic or antiholomorphic. There always exist
antiholomorphic involutions $\sigma$ in $\mathop{{\rm AAut}}\nolimits(\mathcal{D})$
(see \cite{Ka97} for a classification covering even the infinite dimensional case).
For any such involution $\sigma$, we obtain by
$G_1 := \mathop{{\rm Aut}}\nolimits(\mathcal{D})_0$ and $G := G_1 \rtimes \{\mathbf{1},\sigma\}$
an involutive group pair (cf.~\cite{NO17} and \S\ref{subsec:7.3}).
\end{exs}
\subsection{Extending unitary representations}
\mlabel{subsec:2.2}
Suppose that $G_1$ is an index two subgroup of the group $G$
and $(U,\mathcal{H})$ is a unitary representation of~$G_1$.
In this subsection we discuss extensions of $U$ to antiunitary
representations of $G$. In particular, we show that,
in analogy to the classical case $G = G_1 \times {\mathbb Z}_2$,
irreducible antiunitary representations fall into three types that
we call real, complex and quaternionic, according to their commutant.
We start with the following lemma on a situation
where extensions always exist because the representation
has been doubled in a suitable way.
\begin{lemma} \mlabel{lem:extlem} {\rm(Extension Lemma)}
Let $G_1 \subseteq G$ be a subgroup of index two and
$(U,\mathcal{H})$ be a unitary representation of $G_1$.
Fix $r \in G \setminus G_1$ and consider the automorphism
$\tau(g) := rgr^{-1}$ of $G_1$. Then the unitary
representation $V := U \oplus U^*\circ \tau$ on $\mathcal{H} \oplus \mathcal{H}^*$
extends to an antiunitary
representation of~$G$.
\end{lemma}
\begin{proof} Let $\Phi \: \mathcal{H} \to \mathcal{H}^*, \Phi(v)(w) := \langle v, w \rangle$ denote the canonical
antiunitary operator and note that $U_g^* \circ \Phi = \Phi \circ U_g$ for
$g \in G_1$. We consider the antiunitary operator
\[ J \: \mathcal{H} \oplus \mathcal{H}^* \to \mathcal{H} \oplus \mathcal{H}^*, \quad
J(v,\lambda) := (\Phi^{-1}\lambda, \Phi U_{r^2} v). \]
It satisfies
\[ J^2(v,\lambda)
= J (\Phi^{-1}\lambda, \Phi U_{r^2} v)
= (U_{r^2} v, \Phi U_{r^2} \Phi^{-1}\lambda)
= (U_{r^2} v, U^*_{r^2} \lambda) = V_{r^2}(v,\lambda),\]
where we have used $\tau(r^2) = r^2$ for the last equality.
This proves that $J^2 = V_{r^2}$.
We now show that $J V_g J^{-1} = V_{\tau(g)}$ for $g \in G$:
\begin{align*}
J V_g(v,\lambda)
&= J (U_g v, U^*_{\tau(g)} \lambda)
= (\Phi^{-1} U^*_{\tau(g)} \lambda, \Phi U_{r^2} U_g v)
= (U_{\tau(g)} \Phi^{-1}\lambda, \Phi U_{\tau^2(g)} U_{r^2} v)\\
&= (U_{\tau(g)} \Phi^{-1}\lambda, U^*_{\tau^2(g)} \Phi U_{r^2} v)
= V_{\tau(g)} (\Phi^{-1}\lambda,\Phi U_{r^2} v)
= V_{\tau(g)} J(v,\lambda).
\end{align*}
The relations $J^2 = V_{r^2}$ and $J V_g J^{-1} = V_{\tau(g)}$
now imply by direct calculation that the assignment
$V_{gr} := V_g J$ for $g \in G_1$ defines an extension of
$V$ to an antiunitary representation of $G$
(Lemma~\ref{lem:ext-homo}).
\end{proof}
The following theorem implies
that extensions of unitary representations of $G_1$ to antiunitary
representations $(U,\mathcal{H})$ of $G$ are always unique up to isomorphism.
It also describes the situation for irreducible representations.
Note that the commutant
\[ U_G' = \{ A \in B(\mathcal{H}) \: (\forall g \in G)\, A U_g = U_g A \}\]
is not a complex subalgebra of $B(\mathcal{H})$ because some $U_g$ are antilinear.
\begin{theorem} \mlabel{thm:equiv}
Let $G_1 \subseteq G$ be a subgroup of index two,
$r \in G\setminus G_1$ and $\tau(g) := rgr^{-1}$ for $g \in G_1$.
\begin{itemize}
\item[\rm(a)] For two antiunitary representations $(U^j, \mathcal{H}_j)_{j =1,2}$, we have
\[ U^1 \cong U^2 \quad \Longleftrightarrow \quad U^1\vert_{G_1} \cong U^2\vert_{G_1}.\]
\item[\rm(b)] For any antiunitary representation $(U, \mathcal{H})$ of $(G,G_1)$,
the von Neumann algebra $U_{G_1}'$ is the
complexification of the real algebra $U_G'$.
\item[\rm(c)] An antiunitary representation $(U, \mathcal{H})$ of $(G,G_1)$
is irreducible if and only if its commutant $U_G'$
is isomorphic to ${\mathbb R}$, ${\mathbb C}$ or $\H$. More specifically:
\begin{itemize}
\item[\rm(i)] If $U_G' \cong {\mathbb R}$, then $U_{G_1}' \cong {\mathbb C}$ and $U\vert_{G_1}$ is irreducible.
\item[\rm(ii)] If $U_G' \cong {\mathbb C}$, then $U_{G_1}' \cong {\mathbb C}^2$
and $U\vert_{G_1}$ is a direct sum of two inequivalent irreducible representations
which do not extend to an antiunitary representation of~$G$.
\item[\rm(iii)] If $U_G' \cong \H$, then $U_{G_1}' \cong M_2({\mathbb C})$
and $U\vert_{G_1}$ is a direct sum of two equivalent irreducible representations
which do not extend to an antiunitary representation of~$G$.
\end{itemize}
\item[\rm(d)]
For an irreducible unitary representation $(U,\mathcal{H})$ of $G_1$,
either
\begin{itemize}
\item[\rm(i)] $U$ extends to an antiunitary representation $\overline U$ of $G$,
and then $\overline U$ is irreducible with $\overline U_{G}' \cong {\mathbb R}$; or
\item[\rm(ii)] $U$ does not extend to an antiunitary representation of $G$.
Then $V := U \oplus U^*\circ \tau$ extends to an irreducible
antiunitary representation of $G$ and $V_G' \cong {\mathbb C}$ if \break
$U^* \circ \tau \not\cong U$ and
$V_G' \cong \H$ if $U^* \circ \tau \cong U$.
\end{itemize}
\end{itemize}
\end{theorem}
\begin{proof} (a)
\begin{footnote}
{In the finite dimensional context, this was already known to E.~Wigner; see \cite[p.~344]{Wig59}.}
\end{footnote}
Let $\Phi \: \mathcal{H}_1 \to \mathcal{H}_2$ be a unitary intertwining operator
for the representations $U^j\vert_{G_1}$. Pick $r \in G \setminus G_1$ and consider
the antiunitary operators $J_j := U^j_r \in \mathop{{\rm AU}}\nolimits(\mathcal{H}_j)$.
Then the unitary operator
$U := J_1^{-1} \Phi^{-1} J_2 \Phi \in \mathop{\rm U{}}\nolimits(\mathcal{H}_1)$ commutes with $U^1_{G_1}$.
The map $j_1(M) := J_1MJ_1^{-1}$ defines an antilinear automorphism of the von Neumann
algebra $(U^1_{G_1})'$ satisfying
\[ j_1(U)
= \Phi^{-1} J_2 \Phi J_1^{-1}
= \Phi^{-1} J_2^{-1} U^2_{r^2} \Phi J_1^{-1}
= \Phi^{-1} J_2^{-1} \Phi U^1_{r^2} J_1^{-1}
= \Phi^{-1} J_2^{-1} \Phi J_1 = U^{-1}.\]
Therefore Lemma~\ref{lem:app.1}(c) implies the existence
of a unitary operator $V \in (U^1_{G_1})'$ with $V^2 = U^{-1}$ and $j_1(V) = V^{-1}$.
With $\Psi := \Phi \circ V$, this leads to
\[ \Psi^{-1} J_2 \Psi
= V^{-1} \Phi^{-1} J_2 \Phi V
= V^{-1} J_1 U V
= V^{-1} U^{-1} J_1 V
= V J_1 V = VV^{-1} J_1 = J_1.\]
We conclude that the antiunitary representations $U^1$ and $U^2$ are equivalent.
(b) Let $J := U_r$. Then $U_{G_1}'$ is invariant under the antilinear
automorphism $j(M) := JMJ^{-1}$. Since $J^2 = U_{r^2}$ commutes with $U_{G_1}'$, the automorphism $j$ is involutive.
As $U_G'$ is the set of fixed points of $j$, it is
a real form of the complex vector space $U_{G_1}'$. This implies the assertion.
(c) The closed complex subspaces invariant under $U_G$ are precisely the
closed real subspaces of the underlying real space $\mathcal{H}^{\mathbb R}$ invariant under
the group ${\mathbb T} \cdot U_G$. Therefore $U_G$ is irreducible if and only if
the real representation of ${\mathbb T} \cdot U_G$ on $\mathcal{H}^{\mathbb R}$
is irreducible, which is equivalent to its commutant
being isomorphic to ${\mathbb R}, {\mathbb C}$ or $\H$ (\cite[Thm.~1]{StVa02}).
Next we observe that the real linear commutant of ${\mathbb T}\mathbf{1}$ consists of the complex linear
operators. Therefore the real linear commutant of ${\mathbb T}\cdot U_G$ equals the complex
linear commutant $U_G'$. Now (b) implies that $U_G' \cong {\mathbb R},{\mathbb C},\H$ leads to
$U_{G_1}' \cong {\mathbb C}, {\mathbb C}^2,M_2({\mathbb C})$, respectively.
In the first case $U\vert_{G_1}$ is irreducible.
In the second case $\mathcal{H} \cong \mathcal{H}_+ \oplus \mathcal{H}_-$, where $\mathcal{H}_\pm$
are $G_1$-invariant subspaces on which the $G_1$-representations are
irreducible and non-equivalent. As $U_r$ permutes the $G_1$-isotypical
subspaces, $U_r \mathcal{H}_\pm = \mathcal{H}_\mp$.
For the representations $U^\pm$ of $G_1$ on $\mathcal{H}_\pm$, this implies that
$U^- \cong (U^+)^*\circ \tau$. If $U^+$ or $U^-$ extends to an
antiunitary representation of $G$, then
$U\vert_{G_1}$ has an extension to a reducible representation
$\tilde U$ of $G$. As $U$ is irreducible, this contradicts (a).
In the third case we have a similar decomposition with
$U^+ \cong U^-$. Again, the irreducibility of $U$, combined with (a),
implies that $U^\pm$ do not extend to~$G$.
(d)
If $U$ extends to an antiunitary representation $\overline U$
of $G$ on the same space, then this representation is obviously
irreducible and (c) implies that $\overline U_G' \cong {\mathbb R}$.
If such an extension does not exist, then
the Extension Lemma~\ref{lem:extlem}
provides an extension of
$V := U \oplus U^* \circ \tau$ to an antiunitary representation of~$G$ by
\[ V_r := J, \quad \mbox{ where } \quad
J(v,\lambda) := (\Phi^{-1}\lambda, U^*_{r^2} \Phi v).\]
If $U^* \circ \tau \not\cong U$, then
$V_{G_1}' \cong {\mathbb C}^2$, and if
$U^* \circ \tau \cong U$, then $V_{G_1}' \cong M_2({\mathbb C})$.
In the first case the algebra
$V_{G_1}' \cong {\mathbb C}^2$ acts by diagonal operators
$T_{(a,b)}(v,\lambda) := (av, b \lambda)$. Such an operator commutes with
$J$ if and only if $b = \overline a$. Therefore $V_G' \cong {\mathbb C}$, and thus
the representation $V$ of $G$ is irreducible.
In the second case, $V_G'$ is a real form of $V_{G_1}' \cong M_2({\mathbb C})$.
We show that the representation $(V,\mathcal{H} \oplus \mathcal{H}^*)$ of $G$ is irreducible.
If this is not the case, there exists a proper $G$-invariant subspace
$\mathcal{K} \subseteq \mathcal{H} \oplus \mathcal{H}^*$. As $V\vert_{G_1} \cong U \oplus U$,
the $G_1$-representation on $\mathcal{K}$ must be irreducible
and equivalent to~$U$. This contradicts the non-extendability of $U$
to an antiunitary representation of $G$. Therefore $V$ is irreducible
and $V_G'$ is isomorphic to $\H$.
\end{proof}
\begin{definition} \mlabel{def:3types}
(Three types of irreducible representations
\begin{footnote}{In a special context, this
classification by three types can already be found in
Wigner's book \cite[\S 26, p.~343]{Wig59}.}\end{footnote}
)
We keep the notation of the preceding theorem.
If $(U,\mathcal{H})$ is an irreducible unitary representation
of $G_1$ with $U \cong U^* \circ \tau$, then there exists a
$\Phi \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$ with
$\Phi U_g \Phi^{-1} = U_{rgr^{-1}}$ for $g \in G_1$.
By Schur's Lemma, such an operator $\Phi$ is unique up to a scalar
factor in ${\mathbb T}$, so that $\Phi^2$ does not depend on the concrete choice of
$\Phi$. Therefore an antiunitary extension to $G$ exists if and only if
$\Phi^2 = U_{r^2}$. Then we call $(U,\mathcal{H})$ of {\it real type
(with respect to $\tau$)}.
If this is not the case, but $U \cong U^* \circ \tau$,
then $(U,\mathcal{H})$ is said to be of {\it quaternionic type
(with respect to $\tau$)},
and otherwise we say that it is of {\it complex type (with respect to $\tau$)}.
This terminology matches the type of the commutant of the corresponding
irreducible antiunitary representation of~$G$.
\end{definition}
\begin{ex} (a) If $\mathcal{H} = {\mathbb C}$ is one-dimensional, then $\mathop{{\rm AU}}\nolimits(\mathcal{H}) ={\mathbb T} \{\mathbf{1},J\}
\cong \mathop{\rm O{}}\nolimits_2({\mathbb R})$ for any conjugation $J$.
We conclude in particular that all antiunitary operators are involutions.
(b) If $\mathcal{H} = {\mathbb C}^2$ is two-dimensional, we can already see all types of situations
for groups generated by a single antiunitary operator, i.e., for antiunitary
representations of the pair $(G,G_1) = ({\mathbb Z}, 2{\mathbb Z})$.
Let $J \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$ be antiunitary and $J^2 \in \mathop{\rm U{}}\nolimits(\mathcal{H})$ be its square.
If $J^2 = \mathbf{1}$, then $J$ is a conjugation, so that there are proper $J$-invariant
subspaces.
If $J^2 = - \mathbf{1}$, then $J$ is an anticonjugation defining a quaternionic structure
on ${\mathbb C}^2 \cong \H$. In particular, the representation is irreducible with
$U_G' \cong \H$ and $U_{G_1}' = B(\mathcal{H}) \cong M_2({\mathbb C})$.
Assume that $J^4 \not=\mathbf{1}$. Then $J^2$ is not an involution, so that
it has an eigenvalue $\lambda \not=\pm 1$. If
$\mathcal{H}^\lambda$ is the corresponding eigenspace, then
$J \mathcal{H}^\lambda = \mathcal{H}^{\overline\lambda}$, so that ${\rm Spec}(J^2) = \{\lambda, \overline\lambda\}$.
Choosing an orthonormal basis $e_1, e_2$ such that
$e_1 \in \mathcal{H}^\lambda$ and $e_2 := J e_1 \in \mathcal{H}^{\overline\lambda}$,
we obtain $Je_2 = J^2 e_1 = \lambda e_1$, so that $J$ is determined up to
equivalence. The corresponding representation on ${\mathbb C}^2$ is irreducible
with $U_{G_1}' \cong {\mathbb C}^2$ and $U_G' \cong {\mathbb C}$
(Theorem~\ref{thm:equiv}(c)).
\end{ex}
\begin{ex} (a) For $G = G_1 \times {\mathbb Z}_2$, the concepts of
real/complex/quaternionic type coincide with the classical definition
for $G_1$, as the characterization in Theorem~\ref{thm:equiv} shows.
(b) For $G = G_1 \rtimes \{\mathbf{1},\tau\}$ and $\tau^2 = \mathop{{\rm id}}\nolimits_{G_1}$,
the extendability of an irreducible unitary representation
$(U,\mathcal{H})$ of $G_1$ is equivalent to the existence of a conjugation $J \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$
satisfying $JU_g J = U_{\tau(g)}$ for $g \in G_1$.
If $U \cong U^* \circ \tau$, then there exists a $J \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$
satisfying $JU_g J^{-1} = U_{\tau(g)}$ for $g \in G_1$,
and $J^2 \in U_{G_1}' = {\mathbb C} \mathbf{1}$, together with the relation $J J^2 J^{-1} = J^2$
and the antilinearity of $J$, implies $J^2 \in \{\pm \mathbf{1}\}$.
Accordingly, $U$ is of real, resp., quaternionic type if
$J^2 = \mathbf{1}$, resp., $J^2 = - \mathbf{1}$.
\end{ex}
\begin{ex} (a) For $G = {\mathbb Z}_2$ and $G_1 = \{e\}$,
Theorem~\ref{thm:equiv}(a) reproduces the fact
that all conjugations on $\mathcal{H}$ are conjugate under $\mathop{\rm U{}}\nolimits(\mathcal{H})$.
(b) For $G = {\mathbb Z}_4 = {\mathbb Z}/4{\mathbb Z}$ and $G_1 = \{\overline 0, \overline 2\}$, the case of
antiunitary representations
with $U_{\overline 2} = -\mathbf{1}$ likewise implies that all anticonjugations
are conjugate under $\mathop{\rm U{}}\nolimits(\mathcal{H})$.
(c) The irreducible unitary representation of
$G = \mathop{{\rm SU}}\nolimits_2({\mathbb C})$ on ${\mathbb C}^2 \cong {\mathbb H}$ (by left multiplication)
is of quaternionic type: the complex structure on ${\mathbb H}$
is defined by right multiplication with~$I$,
and $\Phi(a) := a J$ defines a $G$-equivariant anticonjugation on ${\mathbb C}^2$.
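Indeed, using $IJ = -JI$ and $J^2 = -\mathbf{1}$ in ${\mathbb H}$, a direct computation gives
\[ \Phi^2(a) = a J^2 = -a, \qquad
\Phi(aI) = aIJ = -(aJ)I = -\Phi(a) I, \qquad
\Phi(ga) = g\Phi(a) \ \mbox{ for } \ g \in \mathop{{\rm SU}}\nolimits_2({\mathbb C}), \]
so that $\Phi$ is antilinear for the complex structure defined by right multiplication
with $I$ and squares to $-\mathbf{1}$.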
(d) For any compact connected Lie group $G_1$, the irreducible unitary
representations $(U_\lambda, \mathcal{H}_\lambda)$
are classified in terms of their highest weights
$\lambda$ with respect to a maximal torus $T \subseteq G_1$, resp.,
by the orbits $\mathcal{W}\lambda$ under the Weyl group~$\mathcal{W}$.
As $-\mathcal{W}\lambda$ is the Weyl group orbit of the dual representation,
$U_\lambda$ is self-dual if and only if $-\lambda \in \mathcal{W}\lambda$
(\cite[Prop.~VI.4.1]{BtD85}).
It is of real, resp., quaternionic type if and only if an invariant
symmetric, resp., skew-symmetric bilinear form exists
(\cite[Prop.~II.6.4]{BtD85}),
and this can also be read from the highest weight
(\cite[Prop.~VI.4.6]{BtD85}).
Further, for any automorphism $\sigma \in \mathop{{\rm Aut}}\nolimits(G_1)$, there exists an
inner automorphism $\sigma'$ such that $\tau := \sigma\sigma'$ preserves $T$.
Then $\lambda^\tau := \lambda \circ \tau\vert_T$ is an extremal weight
of $U_\lambda \circ \tau \cong U_\lambda \circ \sigma$, so that
$U_\lambda^* \circ \tau \cong U_\lambda$ if and only if
$-\lambda^\tau \in \mathcal{W}\lambda$.
\end{ex}
The following lemma shows that, if only $G_1$ and an involutive
automorphism $\tau$ of $G_1$ are given,
then there always exists an extension to a group of the type
$G_1 \rtimes_\alpha {\mathbb Z}_4$, where $\alpha_{\overline 1} = \tau$.
This issue is already discussed in Wigner's book
\cite[\S 26, p.~329]{Wig59}, where $J^2 = \pm \mathbf{1}$ is related to
spin being integral or half-integral.
\begin{lemma} \mlabel{lem:2.9}
Let $(U,\mathcal{H})$ be a unitary representation of the group $G_1$ and
$\tau \in \mathop{{\rm Aut}}\nolimits(G_1)$ be an involution.
If $U \circ \tau \cong U^*$, then there exists a
$J \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$ with $J^4 = \mathbf{1}$ and $J U_g J^{-1} = U_{\tau(g)}$ for $g \in G_1$.
\end{lemma}
\begin{proof} From $U \circ \tau \cong U^*$ we obtain a $J \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$
with $J U_g J^{-1} = U_{\tau(g)}$ for $g \in G_1$.
As $\tau^2 = \mathop{{\rm id}}\nolimits_{G_1}$, the unitary operator $J^2$ commutes with $U_{G_1}$.
We therefore have a $G_1$-invariant orthogonal decomposition
$\mathcal{H} =\mathcal{H}_+ \oplus \mathcal{H}_-$, where $\mathcal{H}_- = \ker(J^2 + 1)$ and
$\mathcal{H}_+ = \mathcal{H}_-^\bot$.
Since both subspaces are invariant under $G_1$ and $J$,
we may w.l.o.g.\ assume that $\mathcal{H}_-= \{0\}$
and show that there exists a conjugation commuting with $G_1$.
Conjugating with $J$ defines an antilinear
automorphism of the von Neumann algebra $\mathcal{M} := U_{G_1}'$
fixing the unitary element $J^2$. Therefore Lemma~\ref{lem:app.1} implies
the existence of a unitary $A \in U_{G_1}'$ with $JAJ = A$ and $A^2 = J^2$.
Replacing $J$ by $\tilde J := A^{-1}J$, we obtain $\tilde J^2 = \mathbf{1}$.
\end{proof}
\begin{lemma} \mlabel{lem:ext-abelian}
Let $G_1$ be an abelian group, $\tau(g) = g^{-1}$
and $G := G_1 \rtimes \{\mathbf{1},\tau\}$. Then every unitary representation
of $G_1$ extends to an antiunitary representation of $G$.
\end{lemma}
\begin{proof} We consider $G_1$ as a discrete
group, so that any unitary representation $(U,\mathcal{H})$ of $G_1$
is a direct sum of cyclic representations of the form
$(V,L^2(\hat{G_1},\mu))$, where
$(V_g f)(\chi) = \chi(g)f(\chi)$.
Then $Jf := \overline f$ defines a conjugation on $L^2(\hat{G_1},\mu)$
with $J V_g J = V_{g^{-1}}$, so that we obtain an extension of $V$ to an
antiunitary representation of $G$.
\end{proof}
\begin{lemma} \mlabel{lem:2.19}
Suppose that $G \cong G_1 \rtimes \{\mathop{{\rm id}}\nolimits,\tau\}$,
where $\tau \in \mathop{{\rm Aut}}\nolimits(G)$ is an involution.
\begin{itemize}
\item[\rm(i)] If $(U,\mathcal{H})$ is an irreducible antiunitary
representation of $G$ and $x \in {\mathfrak g}^\tau$ satisfies
$-i\dd U(x)\geq 0$, then $\dd U(x) = 0$.
\item[\rm(ii)] If $(U,\mathcal{H})$ is an irreducible unitary
representation of $G_1$ and $x \in {\mathfrak g}^\tau$ satisfies
$-i\dd U(x)\geq 0$ and $\dd U(x) \not=0$,
then $U^* \circ \tau \not\cong U$, i.e., $U$ is of complex type
with respect to $\tau$.
\end{itemize}
\end{lemma}
\begin{proof} (i) The conjugation $U_\tau$ on $\mathcal{H}$ satisfies
$U_\tau i\dd U(x) U_\tau = - i\dd U(\tau x) = - i \dd U(x),$
so that the positivity assumption implies $\dd U(x) = 0$.
(ii) From (i) it follows that $U$ does not extend to an antiunitary
representation of $G$.
By Theorem~\ref{thm:equiv}(d)(ii),
$V := U \oplus U^* \circ \tau$ extends to an irreducible
antiunitary representation of $G$.
If $V_G' \cong \H$, then $U^* \circ \tau \cong U$
implies $-i \dd V(x) \geq 0$, so that $\dd V(x) = 0$ by (i),
and this contradicts $\dd U(x) \not=0$. We conclude that
$V_G' \cong {\mathbb C}$ and $U^* \circ \tau \not\cong U$.
\end{proof}
\begin{remark} \mlabel{rem:doublecone}
In \cite{OM16} the authors study a concept of
a ``Wigner elementary relativistic system'' which
is defined as a faithful irreducible orthogonal representation
$(U,\mathcal{K})$ of the proper orthochronous Poincar\'e group
$G := P(4)_+^\uparrow$ on a real Hilbert space $\mathcal{K}$.
Writing $(\tilde P_j)_{0 \leq j \leq 3}$ for the skew-adjoint generators
of the one-parameter groups of translations $U_{t e_j}= e^{t \tilde P_j}$,
the {\it mass squared operator} is defined as
\[ M^2 := -\tilde P_0^2 + \sum_{j = 1}^3 \tilde P_j^2.\]
One of the main results in \cite{OM16} is that if
$M^2 \geq 0$, then $\mathcal{K}$ carries a complex structure $I$ commuting
with the image of $U$ (\cite[Thm.~4.3, Thm.~5.11]{OM16}).
This result can be obtained quite directly in our context.
We consider the complexification $(U_{\mathbb C},\mathcal{K}_{\mathbb C})$ of the representation
on $\mathcal{K}$ by extending all operators $U_g$ to unitary operators on $\mathcal{K}_{\mathbb C}$.
Then the operators $P_j := -i \tilde P_j$ are selfadjoint with
\[ M^2 = P_0^2 - \sum_{j = 1}^3 P_j^2 \geq 0.\]
Since $(U,\mathcal{K})$ is irreducible, its commutant
is isomorphic to ${\mathbb R}, {\mathbb C}$ or $\H$ (\cite[Thm.~1]{StVa02}).
We claim that it is isomorphic to ${\mathbb C}$.
If this is not the case, then
$U_{\mathbb C}$ is either irreducible (if the commutant is ${\mathbb R}$)
or a direct sum of two copies of
the same irreducible unitary representation $(\hat U,\hat\mathcal{H})$ of $G$
(if the commutant is ${\mathbb H}$). As $M^2 \geq 0$, the spectrum
of the translation group is contained in the set
\[ D := \{ (x_0, \bx) \in {\mathbb R}^{1,3} \: x_0^2 \geq \bx^2 \}.\]
The decomposition
$D = D_+ \dot\cup \{0\} \dot\cup D_-$ with $D_\pm := \{ x \in D \: \pm x_0 > 0\}$
is invariant under $\mathop{{\rm SO}}\nolimits_{1,3}({\mathbb R})^\uparrow$, so that we obtain a corresponding
decomposition $\hat U = \hat U^+ \oplus \hat U^0 \oplus \hat U^-$,
where, for $j \in \{+,0,-\}$ with $D_0 := \{0\}$, the spectrum of $\hat U^j\vert_{{\mathbb R}^{1,3}}$ is supported by $D_j$.
Since $\hat U$ is irreducible, only one summand is non-zero.
Further, $\hat U = \hat U^0$ implies that the translation group acts
trivially, which is ruled out by the assumption that $U$
is faithful. Hence we may w.l.o.g.\ assume that $\hat U = \hat U^+$,
so that $P_0 > 0$ (i.e., $P_0 \geq 0$ and $\ker P_0 = \{0\}$)
on $\hat\mathcal{H}$ and therefore on $\mathcal{K}_{\mathbb C}$.
Next we observe that the conjugation $J$ of $\mathcal{K}_{\mathbb C}$ with
respect to $\mathcal{K}$ commutes with $\tilde P_0$, hence
satisfies $J P_0 J = - P_0$, which leads to the contradiction
$P_0 = 0$ because it implies that the spectrum of $P_0$ is symmetric
(cf.~Remark~\ref{rem:2.21} below). This shows that
the commutant $U_G'$ is ${\mathbb C}$, so that there exists a complex structure
on $\mathcal{K}$ commuting with $U_G$ which is unique up to sign.
\end{remark}
\subsection{One-parameter groups}
\mlabel{subsec:one-par}
We have seen in Example~\ref{app:b.1} that there are three types
of one-dimensional Lie groups defining involutive group pairs:
\begin{itemize}
\item[\rm(A)] ${\mathbb R}^\times$, resp., $({\mathbb R}^\times, {\mathbb R}^\times_+)$,
\item[\rm(B)] ${\mathbb R} \rtimes \{\pm \mathop{{\rm id}}\nolimits\}$, and
\item[\rm(C)] $\mathop{{\rm Pin}}\nolimits_2({\mathbb R})$.
\end{itemize}
Before we turn to the most important case (A), we take a brief look
at the other two cases.
\begin{remark} \mlabel{rem:2.20} Case (B): Here any antiunitary representation $(U,\mathcal{H})$
yields a conjugation $J := U_{(0,-1)}$ which defines a real structure on $\mathcal{H}$ and
satisfies $J U_t J = U_{-t}$ for $t \in {\mathbb R}$.
Conversely, every unitary one-parameter group extends to an antiunitary representation of $G$
(Lemma~\ref{lem:ext-abelian}).
Case (C): For the group $G = {\mathbb T} \{\mathbf{1},J\} = \mathop{{\rm Pin}}\nolimits_2({\mathbb R})$, we have
$J^2 = -\mathbf{1}$ and $JzJ^{-1} = \overline z$ for $z \in {\mathbb T}$, so that
antiunitary representations correspond to pairs $(H,I)$, where
$I \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$ satisfies $I^4 = \mathbf{1}$ and $H$ is a selfadjoint operator
satisfying $IHI^{-1} = H$ and $e^{\pi i H} = I^2$. This implies in particular that
${\rm Spec}(H) \subseteq {\mathbb Z}$. For any such pair we put
$U_J := I$ and $U_{e^{it}} := e^{itH}$ (see \cite[\S 4.5]{NO16} for a natural occurrence
of such representations).
\end{remark}
The following simple observation is the fundamental link between modular
theory and antiunitary representations.
\begin{lemma} \mlabel{lem:funda}
For every continuous antiunitary representation $(U,\mathcal{H})$
of ${\mathbb R}^\times$ and the infinitesimal generator $H$ defined by
$U_{e^t} = e^{itH}$, we obtain by
\[ \Delta := e^H \quad \mbox{ and } \quad J := U_{-1} \]
a pair $(\Delta, J)$, consisting of a positive operator $\Delta$ and a
conjugation $J$ satisfying the modular relation
\begin{equation}
\label{eq:modrel1}
J\Delta J = \Delta^{-1}.
\end{equation}
Conversely, any such pair $(\Delta, J)$ defines an antiunitary
representation of ${\mathbb R}^\times$ by
\[ U_{e^t} := \Delta^{-it/2\pi} \quad \mbox{ and } \quad
U_{-1} := J.\]
\end{lemma}
\begin{proof} The only point one has to observe here is that the
antiunitarity of $J$ implies that
$J U_{e^t} J = U_{e^t}$ corresponds to the relation $JHJ = -H$, which is equivalent to
$J\Delta J = \Delta^{-1}$.
\end{proof}
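For illustration, a concrete pair satisfying \eqref{eq:modrel1}: on $\mathcal{H} = L^2({\mathbb R})$,
let $(\Delta f)(x) := e^x f(x)$ and $(Jf)(x) := \overline{f(-x)}$. Then $J$ is a conjugation and
\[ (J\Delta J f)(x)
= \overline{(\Delta (Jf))(-x)}
= \overline{e^{-x}\, \overline{f(x)}}
= e^{-x} f(x) = (\Delta^{-1}f)(x), \]
so that $U_{e^t} := \Delta^{-it/2\pi}$ and $U_{-1} := J$ define an antiunitary
representation of ${\mathbb R}^\times$.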
Lemma~\ref{lem:funda} motivates the following definition from the perspective of
antiunitary representations:
\begin{definition} A {\it pair of modular objects} on a complex
Hilbert space $\mathcal{H}$ is a pair $(\Delta, J)$,
where $J$ is a {\it conjugation}, i.e., an antilinear
isometric involution and $\Delta > 0$ is a positive selfadjoint
operator satisfying the {\it modular relation} \eqref{eq:modrel1}.
Then $J$ is called the {\it modular conjugation} and $\Delta$ the
{\it modular operator}.
\end{definition}
With this terminology, the preceding lemma immediately yields:
\begin{corollary} \mlabel{cor:2.21}
For any continuous homomorphism $\gamma \: ({\mathbb R}^\times, {\mathbb R}^\times_+) \to (G,G_1)$ and
any continuous antiunitary representation $(U,\mathcal{H})$ of $(G,G_1)$,
we obtain a pair of modular objects $(\Delta_\gamma, J_\gamma)$ from the representation
$U \circ \gamma$ of~${\mathbb R}^\times$.
\end{corollary}
\begin{remark} \mlabel{rem:2.21} For a selfadjoint operator~$H$, the existence
of a conjugation $J$ satisfying $JHJ = - H$
is equivalent to the restriction of $H$ to the strictly positive
spectral subspace being
unitarily equivalent to the restriction of $-H$ to the strictly negative
spectral subspace (\cite{Lo08}). Only such
operators $H$ arise as infinitesimal
generators for antiunitary representations of~${\mathbb R}^\times$.
\end{remark}
\begin{ex} Let $(G,G_1)$ be an involutive pair of Lie groups
and $r \in G\setminus G_1$ be such that $\tau :=c_r\vert_{G_1}$ is an involution.
Then $\mathop{{\rm Ad}}\nolimits(\tau)$ is an involutive
automorphism of ${\mathfrak g}$ and if ${\mathfrak g}$ is non-abelian, then ${\mathfrak g}^\tau \not= \{0\}$.
(A) If $r^2 = \mathbf{1}$, then any element $x \in {\mathfrak g}^\tau$ leads to a homomorphism
\[ \gamma_{r,x} \: {\mathbb R}^\times \to G, \quad
\gamma_{r,x}(e^t) := \exp(tx), \quad
\gamma_{r,x}(-1) := r.\]
(B) If $r^2 = \mathbf{1}$, then any element $x \in {\mathfrak g}^{-\tau}$ leads to a homomorphism
\[ \gamma_{r,x} \: {\mathbb R} \rtimes \{\pm \mathop{{\rm id}}\nolimits_{\mathbb R}\} \to G, \quad
\gamma_{r,x}(t) := \exp(tx), \quad
\gamma_{r,x}(-1) := r.\]
(C) If $r^4 = \mathbf{1}$, then any element $x \in {\mathfrak g}^{-\tau}$ with
$\exp(\pi x) = r^2$ leads to a homomorphism
\[ \gamma_{r,x} \: \mathop{{\rm Pin}}\nolimits_2({\mathbb R}) = {\mathbb T} \{\mathbf{1},J\} \to G, \quad
\gamma_{r,x}(e^{it}) := \exp(tx), \quad
\gamma_{r,x}(J) := r.\]
\end{ex}
\begin{definition} \mlabel{def:cplx-type}
(One-parameter groups of complex type)
Let $(G, G_1)$ be an involutive Lie group pair.
We assume that $G$ is a subgroup of a complex Lie group $G_{\mathbb C}$
on which there exists an antiholomorphic involution $\sigma$ such that
$G \subseteq (G_{\mathbb C})^\sigma$. We consider the set
\[ \cY_{(G,G_1)} := \{ x \in {\mathfrak g} \:
2\pi = \min \{ t > 0 \: \exp(ti x) = e\}, \exp(\pi i x) \in G \setminus G_1\}.\]
We associate to each $x \in \cY_{(G,G_1)}$ the holomorphic homomorphism
\[ \gamma_x \: {\mathbb C}^\times \to G_{\mathbb C}, \quad
\gamma_x(e^z) := \exp(z x).\]
Then
$\sigma(\gamma_x(w)) = \gamma_x(\overline w)$ for $w \in {\mathbb C}^\times$ and thus
$\gamma_x({\mathbb R}^\times) \subseteq (G_{\mathbb C})^\sigma$
holds automatically and $r_x := \gamma_x(-1)$
is an involution. For $x \in \cY_{(G,G_1)}$, we thus obtain
\[ \gamma_x \in \mathop{{\rm Hom}}\nolimits(({\mathbb R}^\times,{\mathbb R}^\times_+), (G, G_1)).\]
\end{definition}
In Section~\ref{sec:5} below we shall see that many geometric
realizations of modular automorphism groups come from elements of $\cY_{(G,G_1)}$,
where $G = P(d)_+$ is the Poincar\'e group or the conformal
group $\mathop{\rm Conf{}}\nolimits({\mathbb R}^{1,d-1}) \cong \mathop{\rm O{}}\nolimits_{2,d}({\mathbb R})/\{\pm \mathbf{1}\}$ of Minkowski space
(cf.\ \cite[\S17.4]{HN12}).
This motivates the following discussion of examples.
\begin{ex} \mlabel{ex:one-par}
(a) For $(G, G_1) = ({\mathbb R}^\times, {\mathbb R}^\times_+)$ and $G_{\mathbb C} = {\mathbb C}^\times$ and
$\exp(z) = e^z$, we have
$\cY_{(G,G_1)} = \{\pm 1\} \subseteq {\mathbb R} = {\mathfrak g}.$
(b) (Lorentz groups)
For
\[ G = \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R}) \subseteq G_{\mathbb C} = \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb C})
= \Big\{ \pmat{ a & b \\ b & a} \: a,b \in {\mathbb C},
a^2 - b^2 = 1 \Big\}, \]
we have $G \cong {\mathbb R}^\times$ and $G_{\mathbb C} \cong {\mathbb C}^\times$, so that we basically
have the same situation as under~(a). Here a canonical
generator of the Lie algebra is the boost generator
\begin{equation}
\label{eq:boostgen2}
b_0 := \pmat{0 & 1 \\ 1 & 0}
\quad \mbox{ with } \quad
e^{z b_0} = \pmat{ \cosh z & \sinh z \\ \sinh z & \cosh z} \quad \mbox{ and } \quad
r_{b_0} = e^{\pi i b_0} = - \mathbf{1}.
\end{equation}
We have $\cY_{(G,G_0)} = \{ \pm b_0\}$.
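The last relation in \eqref{eq:boostgen2} is immediate from
$\cosh(\pi i) = \cos \pi = -1$ and $\sinh(\pi i) = i \sin \pi = 0$. More generally,
\[ e^{t i b_0} = \pmat{ \cos t & i \sin t \\ i \sin t & \cos t} \]
equals $\mathbf{1}$ precisely for $t \in 2\pi{\mathbb Z}$, so that $\pm b_0$ satisfy
both conditions defining $\cY_{(G,G_0)}$.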
This example embeds naturally into the higher dimensional
Lorentz groups
$G = \mathop{{\rm SO}}\nolimits_{1,d}({\mathbb R}) \subseteq G_{\mathbb C} = \mathop{{\rm SO}}\nolimits_{1,d}({\mathbb C})$, where
\begin{equation}
\label{eq:bostgen-d}
b_0 := E_{10} + E_{01} \in \cY_{(G,G_0)}\quad \mbox{ and } \quad
r_{b_0} = R_{01} = \mathop{{\rm diag}}\nolimits(-1,-1,1,\ldots, 1).
\end{equation}
Since the simple real Lie algebra ${\mathfrak g} = \so_{1,d}({\mathbb R})$ (for $d \geq 2$)
is of real rank $1$, all $\mathop{{\rm ad}}\nolimits$-diagonalizable elements $x \in {\mathfrak g}$ are conjugate
to a multiple of $b_0$. All these elements $x$ are diagonalizable matrices
and $\mathop{{\rm im}}\nolimits(x)$ is a two-dimensional Minkowski plane in which the two eigenvectors
are light-like. Conversely, every triple $(\beta, \ell_+, \ell_-)$
consisting of $\beta \in {\mathbb R}^\times$ and two linearly independent light-like vectors
$\ell_\pm$ specifies such an element $x = x(\ell_+, \ell_-, \beta) \in {\mathfrak g}$ by
$x \ell_\pm = \pm\beta \ell_\pm$ and $\ker x = \{\ell_+, \ell_-\}^\bot$.
We then have
\[ \cY_{(G,G_0)} = \mathop{{\rm Ad}}\nolimits(G)b_0 =
\{ x(\ell_+, \ell_-, \beta) \: \beta = 1 \}
\cong \mathop{{\rm SO}}\nolimits_{1,d}({\mathbb R})/(\mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R}) \times \mathop{{\rm SO}}\nolimits_{d-1}({\mathbb R})),\]
and this is a symmetric space because the centralizers of $b_0$ and
the involution $r_{b_0}$ share the same identity component.
(c) For the affine group $G := \mathop{{\rm Aff}}\nolimits({\mathbb R})\cong {\mathbb R} \rtimes {\mathbb R}^\times$ of the real line,
the coset $G \setminus G_0$ consists of the orientation reversing affine maps.
Note that $G_{\mathbb C} \cong {\mathbb C} \rtimes {\mathbb C}^\times$ and $G_{\mathbb C}^\sigma = G$.
Here $\cY_{(G,G_0)} \cong {\mathbb R} \times \{ \pm 1\}$
is the set of real affine vector fields $X$
for which the vector field $iX$ on ${\mathbb C}$ generates a $2\pi$-periodic flow
(whose center lies on the real axis).
\end{ex}
\begin{ex} \mlabel{ex:proj-grp}
We consider the real projective group
$G = \mathop{{\rm PGL}}\nolimits_2({\mathbb R})\subseteq G_{\mathbb C} = \mathop{{\rm PGL}}\nolimits_2({\mathbb C})$ acting on the real projective line
${\mathbb S}^1 \cong {\mathbb R} \cup \{ \infty\}$,
resp., on the Riemann sphere ${\mathbb C} \cup \{\infty\} \cong {\mathbb P}_1({\mathbb C})$.
A subset $I \subset {\mathbb S}^1$ is called an {\it interval}
if it is connected, open, non-empty and not dense. Then the interior $I'$
of its complement is also an interval. For every interval there is a canonical
involution $r_I \in \mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ fixing both endpoints and exchanging $I$ and~$I'$.
The centralizer of $r_I$ in $\mathop{{\rm PSL}}\nolimits_2({\mathbb R})$ is isomorphic to $\mathop{{\rm PSO}}\nolimits_{1,1}({\mathbb R})
\cong {\mathbb R}$, hence connected, and there exists
an element $x_I \in \cY_{(G,G_0)}$ which is unique up to sign.
The corresponding homomorphism $\gamma^I := \gamma_{x_I} \: {\mathbb R}^\times \to G$
satisfies $\gamma^I_{-1} = \exp(\pi i x_I) = r_I$.
For the interval $I = (0,\infty)$, we have
\[ \gamma^I_t(z) = t z \quad \mbox{ and } \quad r_I(z) = -z. \]
For $I = (-1,1)$, we have $\gamma^I({\mathbb R}^\times) = \mathop{{\rm PO}}\nolimits_{1,1}({\mathbb R})$ and
$\gamma^I_{2t}(z) := \frac{\cosh t \cdot z + \sinh t}{\sinh t \cdot z + \cosh t}.$
This leads to
\[ \gamma^I_{2ti}(z) = \frac{\cos t \cdot z + i \sin t}{i \sin t \cdot z + \cos t},
\quad \mbox{ so that} \quad
\gamma^I_{2\pi i}(z) = z \quad \mbox{ and } \quad
r_I(z) = \gamma^I_{\pi i}(z) = \frac{1}{z}.\]
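As a consistency check, the endpoints $\pm 1$ of $I = (-1,1)$ are fixed by this flow:
\[ \gamma^I_{2t}(\pm 1)
= \frac{\pm \cosh t + \sinh t}{\pm \sinh t + \cosh t}
= \pm\, \frac{\cosh t \pm \sinh t}{\cosh t \pm \sinh t} = \pm 1, \]
and $r_I(z) = 1/z$ fixes $\pm 1$ and exchanges $I$ with the interior $I'$ of its
complement in ${\mathbb S}^1$.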
\end{ex}
\subsection{Some low-dimensional groups}
\mlabel{subsec:2.4}
\subsubsection{The affine group of the real line}
\mlabel{subsubsec:2.5.1}
We consider the affine group $G := \mathop{{\rm Aff}}\nolimits({\mathbb R}) = {\mathbb R} \rtimes {\mathbb R}^\times$
and its identity component $G_1 = {\mathbb R} \rtimes {\mathbb R}^\times_+$.
We say that a unitary representation $(U,\mathcal{H})$ of $G_1$ is of {\it positive
energy} if $U_{(t,1)} = e^{itP}$ with $P \geq 0$, i.e., the restriction to the
translation subgroup has non-negative spectrum. We speak of {\it strictly positive
energy} if, in addition, $\ker P = \{0\}$.
Up to unitary equivalence, $G_1$ has exactly one
irreducible unitary representation with strictly positive energy
and every unitary representation with strictly positive energy is a
multiple of the irreducible one. The analogous statement holds for negative
energy (\cite[Thm.~2.8]{Lo08}). Further, any
unitary representation $U$ of $G_1$ decomposes uniquely as a direct
sum $U = U^+ \oplus U^0 \oplus U^-$, where $U^\pm$ have strictly positive/negative
energy and the translation group is contained in $\ker U^0$.
The unique irreducible representation of strictly positive energy can be realized
on $\mathcal{H} := L^2({\mathbb R}^+)$ by
\begin{equation}
\label{eq:pics-s1}
(U_{(t,e^s)}f)(x) = e^{itx} e^{s/2} f(e^sx).
\end{equation}
It obviously extends by $U_{(0,-1)} f := \overline f$ to an irreducible
antiunitary representation of~$G$.
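That \eqref{eq:pics-s1} indeed defines a representation is a direct computation:
\[ (U_{(t,e^s)} U_{(t',e^{s'})} f)(x)
= e^{itx} e^{s/2}\, e^{i t' e^s x} e^{s'/2} f(e^{s+s'}x)
= (U_{(t + e^s t',\, e^{s+s'})}f)(x), \]
which matches the group law $(t,a)(t',a') = (t + at', aa')$ of ${\mathbb R} \rtimes {\mathbb R}^\times_+$.
Here $U_{(t,1)} = e^{itP}$ for the multiplication operator $(Pf)(x) = xf(x)$, which
satisfies $P \geq 0$ and $\ker P = \{0\}$ on $L^2({\mathbb R}^+)$, so the energy is strictly positive.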
By Theorem~\ref{thm:equiv} we thus obtain up to equivalence
precisely one irreducible antiunitary representation of $G$
with strictly positive energy. More generally, we have
by \cite[Prop.~2.11]{Lo08} and
Theorem~\ref{thm:equiv}:
\begin{proposition}
Every unitary representation $(U,\mathcal{H})$
of $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$ of strictly positive energy extends to an
antiunitary representation $\overline U$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ on the same Hilbert space
which is unique up to equivalence.
\end{proposition}
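The homomorphism property of the formula \eqref{eq:pics-s1} reduces to a short
computation with exponentials. The following sympy sketch (our illustration only)
automates this computation on the concrete test vector $f(x) = e^{-x^2}$, using the
group law $(b,a)(b',a') = (b + ab', aa')$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$.
\begin{verbatim}
# Check U_g U_h = U_{gh} for (U_{(t,e^s)} f)(x) = e^{itx} e^{s/2} f(e^s x).
import sympy as sp

t1, s1, t2, s2, x = sp.symbols('t1 s1 t2 s2 x', real=True)

def U(t, s, expr):
    return sp.exp(sp.I*t*x) * sp.exp(s/2) * expr.subs(x, sp.exp(s)*x)

f = sp.exp(-x**2)                          # a concrete test vector
lhs = U(t1, s1, U(t2, s2, f))
rhs = U(t1 + sp.exp(s1)*t2, s1 + s2, f)    # parameters of the product element
print(sp.simplify(lhs / rhs))              # 1
\end{verbatim}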
The representation theory of the affine group can be used to draw some
general conclusions on spectra of one-parameter groups.
\begin{proposition} \mlabel{prop:2.x} Let $G$ be a connected Lie group and
$(U,\mathcal{H})$ be a unitary representation for which $\dd U$ is faithful
and $x \in {\mathfrak g}$. Then the following assertions hold:
\begin{itemize}
\item[\rm(a)] If $\mathop{{\rm ad}}\nolimits x$ has a non-zero real eigenvalue, then ${\rm Spec}(i\dd U(x)) = {\mathbb R}$.
\item[\rm(b)] If ${\mathfrak g}$ is semisimple and
$0\not=y$ is nilpotent, then ${\rm Spec}(i\dd U(y)) \in \{ {\mathbb R},{\mathbb R}_+, {\mathbb R}_-\}$.
\item[\rm(c)] If $0\not=x \in {\mathfrak g}$ is such that
$\mathop{{\rm ad}}\nolimits x$ is diagonalizable and ${\mathfrak b} \trianglelefteq {\mathfrak g}$ is the ideal generated by
$\mathop{{\rm im}}\nolimits(\mathop{{\rm ad}}\nolimits x)$ and $x$, then $\ker\big(\dd U(x)\big) = \mathcal{H}^B
= \{ \xi \in \mathcal{H} \: (\forall g \in B)\ U_g\xi = \xi\}$ holds
for the corresponding integral subgroup $B \trianglelefteq G$.
\end{itemize}
\end{proposition}
\begin{proof} (a) Let $0\not=y \in {\mathfrak g}$ with $[x,y] = \lambda y$ for some
$\lambda \not=0$. Then ${\mathfrak h} := {\mathbb R} x + {\mathbb R} y$ is a $2$-dimensional non-abelian
subalgebra and $\dd U(y) \not=0$. Therefore the assertion follows from the fact that,
for every irreducible unitary representation of the corresponding
$2$-dimensional subgroup isomorphic to $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$ that is non-trivial
on the translation subgroup $\exp({\mathbb R} y)$,
the spectrum of $i \dd U(x)$ coincides with ${\mathbb R}$.
(b) Using the Jacobson--Morozov Theorem, we find an $h \in {\mathfrak g}$ with
$[h,y] = y$, so that the Lie algebra ${\mathfrak b} := {\mathbb R} h + {\mathbb R} y$ is isomorphic to
$\mathop{{\mathfrak{aff}}}\nolimits({\mathbb R})$. Then the result follows from the classification of the
irreducible representations of the group $\exp({\mathfrak b}) \cong \mathop{{\rm Aff}}\nolimits({\mathbb R})_0$.
(c) Let ${\mathfrak g} = \oplus_{\mu\in {\mathbb R}} {\mathfrak g}_\mu(\mathop{{\rm ad}}\nolimits x)$ denote the eigenspace
decomposition of ${\mathfrak g}$ with respect to the diagonalizable operator $\mathop{{\rm ad}}\nolimits x$.
Then the representation theory of $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$ implies that,
for $\mu\not=0$, the operators in $\dd U({\mathfrak g}_\mu(\mathop{{\rm ad}}\nolimits x))$ vanish
on $\ker \dd U(x)$. This shows that the Lie subalgebra ${\mathfrak h}$ generated
by $x$ and $[x,{\mathfrak g}]$ acts trivially on $\ker \dd U(x)$.
Since this subalgebra is invariant under $\mathop{{\rm ad}}\nolimits({\mathfrak g}_0(\mathop{{\rm ad}}\nolimits x))$ and
contains the other eigenspaces of $\mathop{{\rm ad}}\nolimits x$, it is an ideal of ${\mathfrak g}$,
hence coincides with~${\mathfrak b}$. Therefore $\ker\big(\dd U(x)\big) = \mathcal{H}^B$.
\end{proof}
\subsubsection{The projective group of the real line}
We consider the projective group $G = \mathop{{\rm PGL}}\nolimits_2({\mathbb R})$
and its identity component $G_1 = \mathop{{\rm PSL}}\nolimits_2({\mathbb R})$.
We write $r(x) = -x$ for the reflection in $0$ which
commutes with the dilation group ${\mathbb R}^\times \subseteq \mathop{{\rm Aff}}\nolimits({\mathbb R}) \subseteq \mathop{{\rm PGL}}\nolimits_2({\mathbb R})$
(cf.~Example~\ref{ex:proj-grp}). Note that $r$ extends to an antiholomorphic
automorphism $r(z) := -\overline z$ of the upper half plane ${\mathbb C}_+$, so that
we obtain an identification of $G$ with the group $\mathop{{\rm AAut}}\nolimits({\mathbb C}_+)$
(Example~\ref{ex:2.11}(d)).
For the generators of $\fsl_2({\mathbb R})$, we write
\[ T = \pmat{0 & 1 \\ 0 & 0}, \quad
S = \pmat{0 & 0 \\ -1 & 0} \quad \mbox{ and } \quad
E = \frac{1}{2} \pmat{ 1 & 0 \\ 0 & -1}.\]
They satisfy the commutation relations
\[ [E,T] = T, \quad [E,S] = -S \quad \mbox{ and }\quad
[T,S] = - 2 E.\]
In the complexification $\fsl_2({\mathbb C})$, we have the basis
\begin{align*}
L_{\pm 1} := \frac{1}{2}\pmat{1 & \mp i \\ \mp i & -1} = E \mp \frac{i}{2}(T-S),
\qquad L_0 := -\frac{i}{2} \pmat{0 & 1 \\ -1 & 0} =- \frac{i}{2}(T+S).
\end{align*}
These elements satisfy the relations
\[ [L_0, L_{-1}]= L_{-1}, \quad [L_0, L_1] = - L_1
\quad \mbox{ and } \quad [L_1, L_{-1}] = -2 L_0.\]
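All six bracket relations are easily verified by hand; the following numpy sketch
(our illustration only) confirms them numerically for the matrices defined above.
\begin{verbatim}
# Check the bracket relations for E, T, S and L_0, L_{+1}, L_{-1}.
import numpy as np

T = np.array([[0, 1], [0, 0]], dtype=complex)
S = np.array([[0, 0], [-1, 0]], dtype=complex)
E = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
L1  = E - 0.5j * (T - S)          # L_{+1}
Lm1 = E + 0.5j * (T - S)          # L_{-1}
L0  = -0.5j * (T + S)

br = lambda X, Y: X @ Y - Y @ X   # matrix bracket [X, Y]
checks = [(br(E, T), T), (br(E, S), -S), (br(T, S), -2*E),
          (br(L0, Lm1), Lm1), (br(L0, L1), -L1), (br(L1, Lm1), -2*L0)]
print(all(np.allclose(a, b) for a, b in checks))   # True
\end{verbatim}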
\begin{definition}
The element $L_0 \in i \fsl_2({\mathbb R})$ is called the {\it conformal Hamiltonian}.
A~unitary representation $(U,\mathcal{H})$ of $\tilde\mathop{{\rm SL}}\nolimits_2({\mathbb R})$ is called a
{\it positive energy representation} if \break ${\dd U(L_0) \geq 0}$.
\end{definition}
The following result is well known; for a proof in the spirit of the
present exposition, we refer to \cite[Cor.~2.9]{Lo08}.
\begin{corollary} \mlabel{cor:psl2-restrict}
For every non-trivial irreducible positive energy representation $U$ of
the simply connected covering group
$\tilde\mathop{{\rm SL}}\nolimits_2({\mathbb R})$, the restriction to $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$ is also irreducible.
In this case, $U\vert_{\mathop{{\rm Aff}}\nolimits({\mathbb R})_0}$ is the unique irreducible
representation with strictly positive energy.
\end{corollary}
\begin{remark} \mlabel{rem:lowei}
In $\mathop{{\rm PSL}}\nolimits_2({\mathbb R})$, we have $\exp(2\pi i L_0) = \mathbf{1}$,
so that, for every
irreducible positive energy representation of $\mathop{{\rm PSL}}\nolimits_2({\mathbb R})$,
the spectrum of $\dd U(L_0)$ is contained in $m + {\mathbb N}_0$ for some
$m \in {\mathbb N}_0$. We call $m$ its
{\it lowest weight} and write $\mathcal{H}_m= {\mathbb C} \xi_m$ for the $m$-eigenspace of
$L_0$ in $\mathcal{H}$. Then $\dd U(L_1) \xi_m = 0$, and the vectors
$\xi_{m + k} := \dd U(L_{-1})^k \xi_m$, $k \in {\mathbb N}_0$, form an orthogonal
basis of $\mathcal{H}$.
\end{remark}
\begin{theorem}{\rm(\cite[Thm.~2.10]{Lo08})} \mlabel{thm:1.4}
Every unitary positive energy representation $U$ of $\mathop{{\rm PSL}}\nolimits_2({\mathbb R})$ extends to an
antiunitary representation $\overline U$ of $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ on the same Hilbert space.
This extension is unique up to isomorphism
and, if $U$ is irreducible, then $J := \overline U_r$ {\rm(for $r(x) = -x$)}
is unique up to a multiplicative factor in ${\mathbb T}$.
\end{theorem}
\begin{proof} In view of Theorem~\ref{thm:equiv}, it suffices
to verify the first assertion.
Here the main point is to define the antiunitary
involution on the irreducible lowest weight representation $U^m$ of
lowest weight~$m \in {\mathbb N}_0$. We
specify an antiunitary involution $C$ on $\mathcal{H}$ by $C \xi_n = \xi_n$ for
$n \geq m$ (cf.\ Remark~\ref{rem:lowei}).
Then $C$ commutes with $L_0$ and $L_{\pm 1}$ and
\[ C E C = E, \quad C T C = - T \quad \mbox{ and }\quad
C S C = - S.\]
This implies that $C U^m_g C = U^m_{rgr}$ for $g \in \mathop{{\rm PSL}}\nolimits_2({\mathbb R})$.
\end{proof}
\begin{definition} (Positive energy representations)
\mlabel{def:posen-rep}
(a) A unitary representation $(U,\mathcal{H})$ of the translation group
${\mathbb R}^d= {\mathbb R}^{1,d-1}$ of Minkowski space is said to be a {\it positive
energy representation} if $-i \dd U(x) \geq 0$ for
$x \in \overline{V_+}$.
(b) A unitary representation $(U,\mathcal{H})$ of the Poincar\'e group
$P(d)^\uparrow_+$ is said to be a {\it positive
energy representation} if its restriction to the translation subgroup
is of positive energy. We likewise define antiunitary
positive energy representations of $P(d)_+$.
\end{definition}
\begin{remark}
(a) For the group $G := P(2)_+ \cong {\mathbb R}^{1,1} \rtimes \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})$
and the reflection $r := (0,-\mathbf{1})$ inducing on $G$ the involution
$\tau(b,a)= (-b,a)$, there are similar results to Theorem~\ref{thm:1.4}
(cf.~Theorem~\ref{thm:wies2-standard}).
Here the main point is to see that the irreducible strictly positive energy
representations $(U,\mathcal{H})$ of $G_0$ carry a natural conjugation
that we can use for the extension. In the $L^2$-realization
on the hyperbolas
\[ \mathcal{O}_m = \{ (\lambda, \mu) \in {\mathbb R}^2 \: \lambda^2 - \mu^2 = m^2 \}, \quad
m > 0, \]
suggested by Mackey theory, we can extend the representation
simply by $U_{(0,0,-\mathbf{1})} f = \overline f$.
(b) For the Poincar\'e group $P(d)_+$, the situation is more complicated.
The irreducible strictly positive energy representations of $P(d)^\uparrow_+
\cong {\mathbb R}^d \rtimes \mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})^\uparrow$ are induced from representations
of the stabilizer group $\mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})_{e_0} \cong \mathop{{\rm SO}}\nolimits_{d-1}({\mathbb R})$
and realized in vector-valued $L^2$-spaces on the hyperboloids
\[ \mathcal{O}_m = \{ (p_0,\bp) \in {\mathbb R}^d \:
p_0^2 - \bp^2 = m^2\}, \quad m > 0.\]
Since the stabilizer group is non-trivial for $d > 2$, the existence
of an antiunitary extension to $P(d)_+$ depends on the
existence of an antiunitary extension of the representation
$(\rho, V)$ of $\mathop{{\rm SO}}\nolimits_{d-1}({\mathbb R})$ to $\mathop{\rm O{}}\nolimits_{d-1}({\mathbb R})$. We refer to
\cite{NO17} for a detailed analysis of these issues;
see also \cite[Thm.~9.10]{Va85} for a discussion concerning the
Poincar\'e group.
\end{remark}
\subsubsection{The Heisenberg group}
In this subsection we recall the close connection between
unitary representations of the $3$-dimensional
Heisenberg group and positive energy representations of~$\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$
(cf.~\cite[Thm.~2.8]{Lo08}) which also extends to antiunitary extensions.
We define the {\it Heisenberg group} $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)$ as the
manifold ${\mathbb T} \times {\mathbb R}^2$, endowed with the group multiplication
\[ (z,s,t)(z',s',t') = (zz' e^{is't}, s + s', t + t'). \]
Note that
\[ \mathop{{\rm Heis}}\nolimits({\mathbb R}^2) \cong ({\mathbb T} \times {\mathbb R}) \rtimes_\alpha {\mathbb R}
\quad \mbox{ for } \quad
\alpha_t(z,s) = (z e^{ist}, s).\]
Extending the action of ${\mathbb R}$ on ${\mathbb T} \times {\mathbb R}$
to an action of ${\mathbb R} \times {\mathbb Z}_2 \cong {\mathbb R}^\times$ via
\[ \beta_r(z,s) = (z r^{is} , s) \quad \mbox{ and }\quad
\beta_{-1}(z,s) = (\overline z, -s), \]
we obtain the larger group
\[ \mathop{{\rm Heis}}\nolimits({\mathbb R}^2)_\tau
\cong ({\mathbb T}\times {\mathbb R}) \rtimes_\beta {\mathbb R}^\times
\cong \mathop{{\rm Heis}}\nolimits({\mathbb R}^2) \rtimes \{\mathbf{1},\tau\}, \quad \mbox{ with } \quad
\tau(z,s,t) = (\overline z, -s,t).\]
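As a quick consistency check (our illustration, not needed for the sequel), the
following Python sketch verifies numerically that the multiplication on
$\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)$ defined above is associative and that
$\tau(z,s,t) = (\overline z, -s, t)$ is an automorphism.
\begin{verbatim}
import cmath, random

def mul(g, h):
    (z, s, t), (w, u, v) = g, h
    return (z * w * cmath.exp(1j*u*t), s + u, t + v)

def tau(g):
    z, s, t = g
    return (z.conjugate(), -s, t)

random.seed(1)
rnd = lambda: (cmath.exp(1j*random.uniform(0, 6.28)),
               random.uniform(-2, 2), random.uniform(-2, 2))
eq = lambda g, h: all(abs(a - b) < 1e-12 for a, b in zip(g, h))

g, h, k = rnd(), rnd(), rnd()
print(eq(mul(mul(g, h), k), mul(g, mul(h, k))))  # True: associativity
print(eq(tau(mul(g, h)), mul(tau(g), tau(h))))   # True: tau is an automorphism
\end{verbatim}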
\begin{proposition} There is a natural one-to-one correspondence between
unitary representations $(\tilde U, \mathcal{H})$ of $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)$
satisfying $\tilde U_{(z,0,0)} = z\mathbf{1}$ and unitary strictly positive
energy representations $(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$. It is established
as follows:
\begin{itemize}
\item[\rm(i)] If $U$ is given and $U_{(b,1)} = e^{ibP}$ with $P > 0$, then we put
$W_s := e^{is \log P}$ and
$\tilde U_{(z,s,t)} := z W_s U_{(0,e^t)}$.
\item[\rm(ii)] If $\tilde U$ is given and
$W_s := \tilde U_{(1,s,0)} = e^{isA}$, then we put
$U_{(s,e^t)} := e^{is \exp A} \tilde U_{(1,0,t)}$.
\item[\rm(iii)] This correspondence extends naturally
to antiunitary representations of $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)_\tau$ and
antiunitary positive energy representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$.
\end{itemize}
\end{proposition}
\begin{proof} (i) Let $V_t := U_{(0,e^t)}$ and $A := \log P$.
Then $V_t P V_{-t} = e^t P$ implies that $W_s = e^{isA}$ satisfies
\[ V_t A V_{-t} = t \mathbf{1} + A \quad \mbox{ and }\quad
V_t W_s V_{-t} = e^{ist} W_s.\]
Therefore $V$ and $W$ define a unitary representation of
$\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)$ via $\tilde U_{(z,s,t)} := z W_s V_t.$
(ii) With $V_t := \tilde U_{(1,0,t)}$ and $W_s = \tilde U_{(1,s,0)} = e^{isA}$,
the positive operator $P := e^A$ satisfies
$V_t P V_{-t}= e^t P$, so that we obtain a positive energy representation of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$ by $U_{(s,e^t)} := e^{is P} V_t$.
(iii) If, in addition, $U$ is an antiunitary representation of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})\cong {\mathbb R} \rtimes {\mathbb R}^\times$ and $J = U_{(0,-1)}$, then
$J U_{(b,a)} J = U_{(-b,a)}$ leads to
$J P J = P$ and thus to $JAJ = A$. We therefore obtain an antiunitary
representation $\hat U$ of
$\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)_\tau \cong ({\mathbb T}\times {\mathbb R}) \rtimes_\beta {\mathbb R}^\times$ by
$\hat U_{(z,s,a)} := z W_s U_{(0,a)}.$
\end{proof}
\begin{remark}
Write $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)_\tau$ as the semidirect
product $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2) \rtimes \{\mathbf{1},\tau\}$,
where $\tau(z,s,t) = (\overline z, -s, t)$.
Then the conjugacy class $C_\tau \subseteq \mathop{{\rm Heis}}\nolimits({\mathbb R}^2)_\tau$ is a $2$-dimensional
symmetric space diffeomorphic to ${\mathbb T} \times {\mathbb R}$
and the centralizer of $\tau$ in $\mathop{{\rm Heis}}\nolimits({\mathbb R}^2)$ is the subgroup
$\{\pm 1\} \times \{0\} \times {\mathbb R}$, which also commutes
with the whole subgroup $\{1\} \times \{0\} \times {\mathbb R}$.
Therefore $C_\tau$ can be identified with the conjugacy class
of the homomorphism $\gamma \: {\mathbb R}^\times \to \mathop{{\rm Heis}}\nolimits({\mathbb R}^2)_\tau
\cong ({\mathbb T} \times {\mathbb R}) \rtimes_\beta {\mathbb R}^\times$ with
$\gamma(t) = (1,0,t)$.
\end{remark}
\section{Modular objects and standard subspaces}
\mlabel{sec:3}
Besides antiunitary representations of ${\mathbb R}^\times$
(Lemma~\ref{lem:funda}),
there are other interesting ways to encode modular objects $(\Delta, J)$.
Below we discuss some of them. In particular, we introduce the
concept of a standard subspace $V \subseteq \mathcal{H}$
which is a geometric counterpart of antiunitary representations of~${\mathbb R}^\times$
(Proposition~\ref{prop:3.2}).
We also discuss how the embedding $V \subseteq \mathcal{H}$ can be
obtained from the orthogonal one-parameter group
$\Delta^{it}\vert_V$ on $V$ (\S \ref{subsec:orthog}),
and in \S \ref{subsec:3.3} we introduce half-sided modular inclusions
of standard subspaces and how they are related to
antiunitary representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, $P(2)_+$ and $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$.
Modular intersections are studied in \S\ref{subsec:modint}.
\subsection{Standard subspaces}
We now turn to the fundamental concept of a standard subspace $V$ of a
complex Hilbert space $\mathcal{H}$. The key structures on the set
$\mathop{{\rm Stand}}\nolimits(\mathcal{H})$ of standard subspaces are a natural action of the
group $\mathop{{\rm AU}}\nolimits(\mathcal{H})$, an order structure induced by inclusion, and
an involution $V \mapsto V'= i V^{\bot_{\mathbb R}}$ defined by the symplectic orthogonal space.
\begin{definition} A closed real subspace $V \subseteq \mathcal{H}$ is called a {\it standard
real subspace} (or simply a {\it standard subspace})
if $V \cap i V = \{0\}$ and $V + i V$ is dense in $\mathcal{H}$.
We write $\mathop{{\rm Stand}}\nolimits(\mathcal{H})$ for the set of standard subspaces of~$\mathcal{H}$.
\end{definition}
For every standard subspace $V \subseteq \mathcal{H}$, we obtain an
antilinear unbounded operator
\[ S \: \mathcal{D}(S) := V + i V \to \mathcal{H}, \qquad
S(v + i w) := v - i w \]
and this operator is closed, so that $\Delta_V := S^*S$ is a positive
selfadjoint operator. We thus obtain
the polar decomposition
\[ S = J_V \Delta_V^{1/2},\]
where $J_V$ is an antilinear isometry, and
$S = S^{-1} = \Delta_V^{-1/2} J_V^{-1} = J_V^{-1} (J_V \Delta_V^{-1/2} J_V^{-1})$
leads to $J_V^{-1} = J_V$ and the modular relation $J_V\Delta_V J_V = \Delta_V^{-1}$.
If, conversely, $(\Delta, J)$ is a pair of modular objects, then
$S := J \Delta^{1/2}$ is a densely defined antilinear involution and
\[ \mathop{{\rm Fix}}\nolimits(S) := \{ \xi \in \mathcal{D}(S) \: S\xi = \xi \} \]
is a standard subspace with $J_V = J$ and $\Delta_V = \Delta$.
The correspondence between modular objects and standard subspaces
is the core of Tomita--Takesaki Theory (see Theorem~\ref{thm:tom-tak} below).
Combining the preceding discussion with
Lemma~\ref{lem:funda}, we obtain:
\begin{proposition}\mlabel{prop:3.2} If $(U,\mathcal{H})$ is an antiunitary
representation of ${\mathbb R}^\times$ with $U_{e^t} = \Delta^{-it/2\pi}$ for $t \in {\mathbb R}$ and
$J := U_{-1}$, then $V := \mathop{{\rm Fix}}\nolimits(J\Delta^{1/2})$ is a standard subspace.
This defines a bijection $V \leftrightarrow U^V$ between antiunitary representations of
${\mathbb R}^\times$ and standard subspaces.
\end{proposition}
\begin{remark} \mlabel{rem:anaext}
The parametrization of the one-parameter group
in Proposition~\ref{prop:3.2} may appear artificial, but it turns out that
it is quite natural.
As $V \subseteq \mathcal{D}(\Delta^{1/2})$, for each $v \in V$, the orbit map
$U^v(g) := U_g v$ has an analytic extension
\[ \{ z \in {\mathbb C} \: 0 \leq \Im z \leq \pi \} \to \mathcal{H},
\quad z \mapsto U^v_{e^z} := \Delta^{-iz/2\pi}v \]
with $U^v_{e^{i\pi}} = \Delta^{1/2} v = J v$. This fits with $U_{-1} = J$
and it is compatible with the context of Definition~\ref{def:cplx-type}, where
$\gamma(-1) = \exp(\pi ix)$ is obtained by analytic continuation from
$\gamma(e^t) = \exp(tx)$.
\end{remark}
\begin{remark} \mlabel{rem:4.3}
(a) If $V = \mathop{{\rm Fix}}\nolimits(S)$ is a standard subspace with modular objects
$(\Delta, J)$, then
\begin{equation}
\label{eq:delta-conj}
\Delta^{1/4} S \Delta^{-1/4} = \Delta^{1/4} J \Delta^{1/4}
= \Delta^{1/4}\Delta^{-1/4}J = J
\end{equation}
implies that $V = \mathop{{\rm Fix}}\nolimits(S) = \Delta^{-1/4} \mathop{{\rm Fix}}\nolimits(J) = \Delta^{-1/4}\mathcal{H}^J$.
(b) Write $\mathop{{\rm Stand}}\nolimits_0(\mathcal{H})$ for the set of those standard subspaces $V$
for which $V + i V = \mathcal{H}$, i.e., the antilinear involution~$S$
is bounded.
Combining \eqref{eq:delta-conj} with the fact that the unitary group $\mathop{\rm U{}}\nolimits(\mathcal{H})$
acts transitively on the set of all conjugations (=antiunitary involutions),
it follows that the group $\mathop{{\rm GL}}\nolimits(\mathcal{H})$ acts transitively on $\mathop{{\rm Stand}}\nolimits_0(\mathcal{H})$. This leads
to the structure of a Banach symmetric space on this set
\[ \mathop{{\rm Stand}}\nolimits_0(\mathcal{H}) \cong \mathop{{\rm GL}}\nolimits(\mathcal{H})/\mathop{{\rm GL}}\nolimits(\mathcal{H}^J) \cong \mathop{{\rm GL}}\nolimits(\mathcal{H})/\mathop{{\rm GL}}\nolimits(\mathcal{H})^J,\]
where $J$ is any conjugation on $\mathcal{H}$ (cf.\ Appendix~\ref{app:a.2} and \cite{Kl11}).
For $\mathcal{H} = {\mathbb C}^n$, we obtain in particular
\[ \mathop{{\rm Stand}}\nolimits({\mathbb C}^n) = \mathop{{\rm Stand}}\nolimits_0({\mathbb C}^n) \cong \mathop{{\rm GL}}\nolimits_n({\mathbb C})/\mathop{{\rm GL}}\nolimits_n({\mathbb R}). \]
For elements of $\mathop{{\rm Stand}}\nolimits_0(\mathcal{H})$, there are no proper inclusions. As we shall see in
\S \ref{subsec:3.3}, the order structure on $\mathop{{\rm Stand}}\nolimits(\mathcal{H})$ is non-trivial if $\mathcal{H}$ is infinite dimensional.
(c) To extend (b) to arbitrary standard subspaces $V$, we note that
a dense complex subspace $\mathcal{D}\subseteq \mathcal{H}$ carries at most one
Hilbert space structure (up to topological linear isomorphism)
for which the inclusion
$\mathcal{D} \hookrightarrow \mathcal{H}$ is continuous (Closed Graph Theorem).
We consider the category $\mathcal{G}$ whose objects are
all dense subspaces $\mathcal{D} \subseteq \mathcal{H}$
carrying such Hilbert space structures and whose morphisms are
the topological linear isomorphisms $\mathcal{D}_1 \to \mathcal{D}_2$ with respect to the
intrinsic Hilbert space structures. This defines a category
in which all morphisms are invertible, so that we actually
obtain a groupoid. As all these subspaces $\mathcal{D}$ are isomorphic to
$\mathcal{H}$ as Hilbert spaces, this groupoid acts transitively.
For each standard subspace $V \subseteq \mathcal{H}$, the dense subspace
$V + iV$ carries the natural Hilbert structure obtained from the identification
with the complex Hilbert space $V_{\mathbb C}$. Therefore the groupoid $\mathcal{G}$
acts transitively on $\mathop{{\rm Stand}}\nolimits(\mathcal{H})$ with stabilizer groups $\mathcal{G}_V \cong \mathop{{\rm GL}}\nolimits(V)$.
(d) Write $\mathop{{\rm Conj}}\nolimits(\mathcal{H})$ for the set of conjugations on $\mathcal{H}$
(Examples~\ref{exs:conj}).
Then the map $\mathop{{\rm Stand}}\nolimits(\mathcal{H}) \to \mathop{{\rm Conj}}\nolimits(\mathcal{H}), V \mapsto J_V$ is surjective and
$\mathop{{\rm AU}}\nolimits(\mathcal{H})$-equivariant. The fiber over a fixed conjugation $J$
corresponds to the set of all positive
operators $\Delta$ satisfying $J\Delta J = \Delta^{-1}.$
Passing to $D := i\log \Delta\vert_{\mathcal{H}^J}$, it follows that this fiber can be parametrized
by the set of all skew-adjoint operators on the real Hilbert space $\mathcal{H}^J$
(see also Remark~\ref{rem:2.3}(b) for a different parametrization).
\end{remark}
The problem of describing the set of pairs
$(V,\mathcal{H})$, where $V \subseteq \mathcal{H}$ is a standard subspace,
can be addressed from two directions. One could
either start with a real Hilbert space $V$ and ask
for all those complex Hilbert spaces into which $V$ embeds
as a standard subspace, or start with the pair $(\mathcal{H},J)$,
respectively the real Hilbert space $\mathcal{H}^J$, and ask for all standard
real subspaces $V \subseteq \mathcal{H}$ with $J_V = J$. Both problems
have rather explicit answers that are easily explained
(see \cite{NO16} for details).
\begin{remark} \mlabel{rem:2.3}
(a) Let $(V, (\cdot,\cdot))$ be a real Hilbert space.
For any realization of $V$ as a standard subspace of $\mathcal{H}$, the
restriction of the scalar product of $\mathcal{H}$ to $V$ is a complex-valued
hermitian form
\[ h(v,w) := \langle v,w\rangle = (v,w) + i \omega(v,w),\]
where $\omega \: V \times V \to {\mathbb R}$ is continuous and skew-symmetric,
hence of the form $\omega(v,w) = (v,Cw)$ for a skew-symmetric operator
$C = - C^\top$ on $V$ satisfying $\|Cv\| < \|v\|$ for any non-zero
$v \in V$ (\cite[Lemma~A.10]{NO16}). Conversely, we obtain
for every such operator $C$ on $V$ by completion of $V_{\mathbb C}$
with respect to $h$ a complex Hilbert space in which $V$ is a standard
real subspace. Then $C$ extends to a bounded skew-hermitian operator
$\hat C$ on $\mathcal{H}$ satisfying
\[ \Delta = \frac{\mathbf{1} - i \hat C}{\mathbf{1} + i \hat C}
\quad \mbox{ and } \quad
\hat C = i \frac{\Delta - \mathbf{1}}{\Delta + \mathbf{1}}.\]
(b) If we start with the conjugation $J$ on $\mathcal{H}$, then
the standard subspaces $V$ with $J_V = J$ are the subspaces of the form
$V = (\mathbf{1} + i C)\mathcal{H}^J$, where $C \in B(\mathcal{H}^J)$ is a skew-symmetric operator
satisfying $\|Cv\| < \|v\|$ for $0\not= v \in \mathcal{H}^J$
(\cite[Lemma~B.2]{NO16}). Writing also $C$ for its complex linear extension
to $\mathcal{H}$, we then have
\[ \Delta^{1/2} = \frac{\mathbf{1} - i C}{\mathbf{1} + i C} \quad \mbox{ and }\quad
C = i \frac{\Delta^{1/2}-\mathbf{1}}{\Delta^{1/2}+ \mathbf{1}}.\]
\end{remark}
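Remark~\ref{rem:2.3}(b) is easy to explore in finite dimensions. The following
numpy sketch (our illustration; the factor $0.9$ is an arbitrary choice enforcing
$\|C\| < 1$) builds a standard subspace $V = (\mathbf{1} + i C)\mathcal{H}^J$ of
$\mathcal{H} = {\mathbb C}^4$ with $J$ the entrywise conjugation and verifies that
$S = J\Delta^{1/2}$ with $\Delta^{1/2} = (\mathbf{1} - iC)(\mathbf{1} + iC)^{-1}$ fixes
$V$ pointwise and that $J \Delta J = \Delta^{-1}$.
\begin{verbatim}
import numpy as np

n = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
C = A - A.T                              # real skew-symmetric
C *= 0.9 / np.linalg.norm(C, 2)          # enforce ||C|| < 1
I = np.eye(n)

sqrtDelta = (I - 1j*C) @ np.linalg.inv(I + 1j*C)
S = lambda xi: np.conj(sqrtDelta @ xi)   # S = J Delta^{1/2} (antilinear)

v = rng.standard_normal(n)               # a vector in H^J = R^n
xi = (I + 1j*C) @ v                      # an element of V
print(np.allclose(S(xi), xi))            # True: xi in Fix(S) = V

Delta = sqrtDelta @ sqrtDelta
print(np.allclose(np.conj(Delta), np.linalg.inv(Delta)))  # J Delta J = Delta^{-1}
\end{verbatim}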
\begin{remark} If $V$ is a standard subspace of $\mathcal{H}$
and $W \subseteq V+ i V$ is a real subspace closed in $\mathcal{H}$
such that $W$ corresponds to a standard subspace of the complex Hilbert space $V_{\mathbb C}$, then
$W$ is also standard in $\mathcal{H}$ because the closure of
$W + iW$ contains $V + i V$, hence all of $\mathcal{H}$.
\end{remark}
\subsection{Symplectic aspects of standard subspaces}
\mlabel{subsec:3.2}
Let $V \subseteq \mathcal{H}$ be a standard subspace and
consider the corresponding
antiunitary representation $U^V \:{\mathbb R}^\times \to \mathop{{\rm AU}}\nolimits(\mathcal{H})$
with $U^V_{-1} = J^V$ and $U^V_{e^t} = \Delta^{-it/2\pi}$
(Proposition~\ref{prop:3.2}).
Since the operators $\Delta^{it}$ commute with
$S = J \Delta^{1/2}$, they leave the closed subspace
$V = \mathop{{\rm Fix}}\nolimits(S)$ invariant. Further, the relation
$JSJ = \Delta^{1/2} J = S^* = J \Delta^{-1/2}$
implies that
\[ JV = V',\quad \mbox{ where } \quad
V' := \{ w \in \mathcal{H} \: (\forall v \in V) \
\Im \langle v, w \rangle = 0\} = i V^{\bot_{\mathbb R}} \]
is the {\it symplectic orthogonal space of $V$}, and
$V^{\bot_{\mathbb R}}$ denotes the orthogonal complement of $V$ in the underlying
real Hilbert space $\mathcal{H}^{\mathbb R}$ (\cite[Prop.~3.2]{Lo08}).
In particular, the orbit $U^V_{{\mathbb R}^\times}V = \{V,V'\}$ consists of at most
two standard subspaces.
\begin{lemma}\mlabel{lem:stand-factorial}
The following assertions hold:
\begin{itemize}
\item[\rm(i)] The antiunitary representation corresponding to $V'$ is given by
$U^{V'}(t) = U^V(t^{-1})$ for $t \in {\mathbb R}^\times$.
\item[\rm(ii)] $J_{V'} = J_V$ and $\Delta_{V'} = \Delta_V^{-1}$.
\item[\rm(iii)] $V \cap V' = \mathcal{H}^{U^V}$
is the fixed point space for the antiunitary representation $(U^V,\mathcal{H})$ of~${\mathbb R}^\times$.
\item[\rm(iv)] $V = V'$ is equivalent to $\Delta = \mathbf{1}$.
\end{itemize}
\end{lemma}
\begin{proof} (i) and (ii) follow immediately
from $V' = \mathop{{\rm Fix}}\nolimits(J \Delta^{-1/2})$.
(iii) If $v \in V\cap V'$, then
$v = Sv = S^*v$ implies $\Delta v = v$ and hence $Jv = v$. Conversely,
these two relations imply $v \in V \cap V'$.
(iv) follows from (ii).
\end{proof}
\begin{remark} \mlabel{rem:dirsum} (Direct sums of standard subspaces)
(a) Suppose that $V_j \subseteq \mathcal{H}_j$ are standard subspaces for $j =1,2$.
Then $V := V_1 \oplus V_2 \subseteq \mathcal{H}_1 \oplus \mathcal{H}_2$ is a standard subspace.
We have $J_V = J_{V_1} \oplus J_{V_2}$ and
$\Delta_V = \Delta_{V_1} \oplus \Delta_{V_2}$.
In particular, the corresponding antiunitary representation
$U^V$ of ${\mathbb R}^\times$ is the direct sum $U^{V_1} \oplus U^{V_2}$.
(b) In particular, every standard subspace $V$ can be written
as such a direct sum
\[ V = (V \cap V') \oplus V_1, \quad \mbox{ where } \quad V_1' \cap V_1 = \{0\} \]
and $(V \cap V')_{\mathbb C}$ is the set of fixed points of the unitary
representation $U^V\vert_{{\mathbb R}^\times_+}$ (Lemma~\ref{lem:stand-factorial}).
\end{remark}
\begin{lemma} \mlabel{lem:split} Let $V$ be a standard subspace,
$V_1 \subseteq V$ be a closed subspace and $V_2 := V \cap V_1^{\bot_{\mathbb R}}$ be its orthogonal
complement in~$V$. Then the following are equivalent:
\begin{itemize}
\item[\rm(i)] $V = V_1 \oplus V_2$ is a direct sum of standard subspaces.
\item[\rm(ii)] $V_1 \subseteq V_2'$, i.e., $i V_1 \bot V_2$ in $\mathcal{H}$.
\item[\rm(iii)] $V_1$ is invariant under the modular automorphisms
$(\Delta_V^{it})_{t \in {\mathbb R}}$.
\end{itemize}
If these conditions are satisfied and $V_1$ is also standard, then $V =V_1$.
\end{lemma}
\begin{proof} (i) $\Leftrightarrow$ (ii) is easy to verify.
(i) $\Leftrightarrow$ (iii): Clearly, (i) implies (iii). To see the converse,
consider the closed subspace $\mathcal{H}_1 := \overline{V_1 + i V_1}$ of $\mathcal{H}$.
Then, for each $v \in V$, the curve $t \mapsto \Delta_V^{-it/2\pi}v$ is contained in $\mathcal{H}_1$,
hence the same is true for its analytic continuation to the strip
\break $\{ z \in {\mathbb C} \: 0 \leq \Im z \leq \pi\}$ (Remark~\ref{rem:anaext}). Therefore
$\Delta^{1/2}v = Jv \in \mathcal{H}_1$ and thus $\mathcal{H}_1$ is invariant under the
antiunitary representation $U^V$ of ${\mathbb R}^\times$ corresponding to $V$
(Proposition~\ref{prop:3.2}). Since the orthogonal decomposition
$\mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_1^\bot$ reduces $U^V$, the standard subspace $V$ decomposes
accordingly.
If (i)-(iii) are satisfied and $V_1$ is also standard, then (i) implies
that $V_1 = V$ (cf.~\cite[Prop.~3.10]{Lo08}).
\end{proof}
\subsection{Orthogonal real one-parameter groups}
\mlabel{subsec:orthog}
For any standard subspace $V$, the unitary operators
$\Delta^{it}$ define on the real Hilbert space~$V$
a continuous orthogonal one-parameter group $(U,V)$
(\S \ref{subsec:3.2}).
If, conversely, $(U_t)_{t \in {\mathbb R}}$ is a strongly continuous one-parameter
group on the real Hilbert space $V$, then we can recover the corresponding
embedding of $V$ as a standard subspace as follows.
Let $V_0 := V^U$ be the subspace of $U$-fixed vectors and $V_1 := V_0^\bot$.
Then $U_t = e^{tD}$ with a skew-symmetric infinitesimal generator
$D = -D^\top$ satisfying $V_0 = \ker D$. On $V_1$ we have the
polar decomposition $D = I|D|$, where $I$ is a complex structure
and $|D| = \sqrt{-D^2}$. We now consider the bounded skew-symmetric
operator $C$ on $V$ defined by $C\vert_{V_0} = 0$ and
\[ C\vert_{V_1} = I \frac{\mathbf{1} - e^{-|D|}}{\mathbf{1} + e^{-|D|}}.\]
Then $h(v,w) := (v,w) + i (v,Cw)$
leads to an embedding of $V$ as a standard subspace $V \subseteq \mathcal{H}$
as in Remark~\ref{rem:2.3}(a). The operator $D$ can be recovered directly from
$C$ by $D\vert_{V_0} = 0$ and
\[ D\vert_{V_1} = I \log\Big(\frac{\mathbf{1} + |C_1|}{\mathbf{1} - |C_1|}\Big),
\quad \mbox{ where } \quad C_1 := C\vert_{V_1}, \]
(cf.~\cite[Rem.~4.3]{NO16}, where different sign conventions are used).
The orthogonal one-parameter group $(U_t)_{t \in {\mathbb R}}$ on $V$ is trivial
if and only if $D = 0$, which corresponds to $\Delta = \mathbf{1}$,
resp., to $C = 0$, resp., to $V = \mathcal{H}^J$ (Lemma~\ref{lem:stand-factorial}(iv)).
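On a single rotation block, where $|D|$ acts as a scalar, the two displayed formulas
invert each other by a direct computation; the following numpy sketch (our
illustration, with the arbitrary choice $\theta = 0.7$) confirms this numerically.
\begin{verbatim}
import numpy as np

theta = 0.7
Irot = np.array([[0.0, -1.0], [1.0, 0.0]])  # complex structure I on V_1
D = theta * Irot                             # skew-symmetric generator, |D| = theta

# C|_{V_1} = I (1 - e^{-|D|})/(1 + e^{-|D|}); here |D| is the scalar theta:
c = (1 - np.exp(-theta)) / (1 + np.exp(-theta))   # = tanh(theta/2)
# recover D via D|_{V_1} = I log((1 + |C|)/(1 - |C|)):
D_rec = np.log((1 + c) / (1 - c)) * Irot
print(np.allclose(D_rec, D))                 # True
\end{verbatim}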
\subsection{Half-sided modular inclusions of standard subspaces}
\mlabel{subsec:3.3}
We have seen above that standard subspaces $V \subseteq \mathcal{H}$
are in one-to-one correspondence with antiunitary
representations $U^V \: {\mathbb R}^\times \to \mathop{{\rm AU}}\nolimits(\mathcal{H})$ (Proposition~\ref{prop:3.2}).
In this subsection we shall see how certain inclusions of
standard subspaces can be related to antiunitary positive energy representations
of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ (cf.~Section~\ref{subsubsec:2.5.1}).
Here the positive energy condition for the translation group
turns out to be the crucial link
between the inclusion order on $\mathop{{\rm Stand}}\nolimits(\mathcal{H})$ and the affine geometry
of the real line.
There are two ways to approach inclusions of standard subspaces.
One is to consider the interaction of a unitary one-parameter group
with a standard subspace, which leads to the concept of a Borchers pair;
the other considers the modular groups of
two standard subspaces and leads to the concept of a half-sided modular inclusion.
These perspectives have been introduced by Borchers (\cite{Bo92})
and Wiesbrock (\cite{Wi93}), respectively, in the context of
von Neumann algebras (see \S\ref{subsec:4.2} for the translation
to standard subspaces and \cite{Lo08} for the results in the context of standard subspaces).
\begin{definition} \mlabel{def:3.8} (a) Let $(U_t)_{t \in {\mathbb R}}$ be a continuous unitary
one-parameter group on $\mathcal{H}$ and $V \subseteq \mathcal{H}$ be a standard subspace.
We call $(U,V)$ a {\it (positive/negative) Borchers pair} if
$U_t V \subseteq V$ holds for $t \geq 0$ and
$U_t = e^{itP}$ with $\pm P \geq 0$.
(b)
We call an inclusion $K \subseteq H$ of standard subspaces of
$\mathcal{H}$ a {\it $\pm$half-sided modular inclusion} if
\[ \Delta_H^{-it} K \subseteq K \quad \mbox{ for } \quad \pm t \geq 0.\]
\end{definition}
\begin{remark}\mlabel{rem:3.13e}
The inclusion $K \subseteq H$ is positive half-sided modular
if and only if the inclusion $H' \subseteq K'$ is negative half-sided modular
(\cite[Cor.~3.23]{Lo08}).
\end{remark}
The following theorem provides a passage from Borchers pairs
to antiunitary representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$
(\cite[Thm.~3.2]{BGL02}, \cite[Thm.~3.15]{Lo08}).
\begin{theorem}[Borchers' Theorem---one particle case]
\mlabel{thm:bor-stand}
If $(U,V)$ is a positive/negative Borchers pair,
then
\[ U^V(a) U(b) U^V(a)^{-1} = U(a^{\pm 1} b)
\quad \mbox{ for } \quad a \in {\mathbb R}^\times, b \in{\mathbb R},\]
i.e., we obtain an antiunitary
positive energy representation $(\tilde U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ by
$\tilde U_{(b,a)} = U(b) U^V(a)$.
\end{theorem}
We are now ready to explain how inclusions of
standard subspaces are related to antiunitary representations of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})$. The following result contains in particular a converse of Borchers' Theorem.
For its formulation, we
recall the one-to-one correspondence between standard subspaces and antiunitary
representations of ${\mathbb R}^\times$ from Proposition~\ref{prop:3.2}.
\begin{theorem}{\rm(Antiunitary positive energy
representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ and standard subspaces)}
\mlabel{thm:3.8}
Let $(U,\mathcal{H})$ be an antiunitary representation
of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$. For each $x \in {\mathbb R}$, we consider the homomorphism
\[ \gamma_x \: {\mathbb R}^\times \to \mathop{{\rm Aff}}\nolimits({\mathbb R}), \qquad
\gamma_x(s) := (x,1)(0, s)(-x,1) = ((1-s)x, s)\]
whose range is the stabilizer group $\mathop{{\rm Aff}}\nolimits({\mathbb R})_x$
and the corresponding family $(V_x)_{x \in {\mathbb R}}$
of standard subspaces determined by $U^{V_x} = U \circ \gamma_x$.
Then the following assertions hold:
\begin{itemize}
\item[\rm(i)] $U_{(t,s)} V_x = V_{t + sx}$ and $U_{(t,-s)} V_x = V_{t-sx}'$ for
$t,x \in {\mathbb R}, s > 0.$
\item[\rm(ii)] The following are equivalent:
\begin{itemize}
\item[\rm(a)] $U$ is a positive energy representation.
\item[\rm(b)] $V_s \subseteq V_0$ for $s \geq 0$.
\item[\rm(c)] $V_s \subseteq V_t$ for $s \geq t$.
\item[\rm(d)] $(W, V_0)$ with $W_t := U_{(t,1)}$ is a positive Borchers pair.
\item[\rm(e)] $V_1 \subseteq V_0$ is a +half-sided modular inclusion.
\end{itemize}
\item[\rm(iii)] $V_x = V_0$ for every $x \in {\mathbb R}$ is equivalent to
$U_{(b,1)} = \mathbf{1}$ for every $b \in {\mathbb R}$.
\item[\rm(iv)] $V_\infty := \bigcap_{t \in {\mathbb R}} V_t = \{ v \in V_0 \: (\forall b \in {\mathbb R})\
U_{(b,1)}v = v\}$ is the fixed point space for the translations.
\item[\rm(v)] $V_0 \cap V_0'
= \mathcal{H}^{\mathop{{\rm Aff}}\nolimits({\mathbb R})} = \{ v \in \mathcal{H} \: (\forall g\in \mathop{{\rm Aff}}\nolimits({\mathbb R}))\ U_g v = v\}$.
\end{itemize}
\end{theorem}
\begin{proof} (i) follows from $(t,s)\gamma_x (t,s)^{-1} = \gamma_{t+sx}$,
$U_{(0,-1)} V_0 = V_0'$ and $V_x' = U_{(x,1)}V_0'$.
(ii) (a) $\Leftrightarrow$ (b): For $W(s) := U_{(s,1)}$ we have
\[ \Delta_{V_0}^{-it/2\pi} W(s) \Delta_{V_0}^{it/2\pi}
= U_{(0,e^{t})} W(s) U_{(0,e^{-t})} = W(e^{t}s),\]
so that the assertion follows from
the converse of Borchers' Theorem \cite[Thm.~3.17]{Lo08}.
(b) $\Leftrightarrow$ (c) follows from $V_t = W(t) V_0$ for $t \in {\mathbb R}$.
By definition, (d) is equivalent to (a) and (b).
(b) $\Leftrightarrow$ (e): From (b) we derive (e) by
\[ U^{V_0}_{e^t} V_1
= U_{(0,e^t)} V_1 = V_{e^t}
= U_{(1,1)} V_{e^t-1}\subseteq U_{(1,1)} V_0 = V_1. \]
From (e) we obtain, conversely, for $t \geq 0$
\[ U_{(1,1)} V_0 = V_1 \supseteq U^{V_0}_{e^t} V_1 = V_{e^t} = U_{(1,1)} V_{e^t-1},\]
and thus $V_{e^t-1} \subseteq V_0$, which implies (b).
(iii) If $W(x) := U_{(x,1)} = \mathbf{1}$ for every $x \in {\mathbb R}$, then $V_x = W(x)V_0 = V_0$.
If, conversely, $W(x) V_0 = V_x = V_0$ for every $x \in {\mathbb R}$,
then every $W(x)$ commutes with $\Delta_{V_0}$ and $J_{V_0}$,
so that Theorem~\ref{thm:bor-stand} yields $W(x) = \mathbf{1}$ for every $x \in {\mathbb R}$.
(iv) By (i), the closed real subspace $V_\infty$ of $V_0$
is invariant under $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$. Hence Lemma~\ref{lem:split} implies
that it is a direct summand of the standard subspace~$V_0$ and therefore
also invariant under $J := U_{(0,-1)}$. Now (iii) implies that
the translation group fixes $V_\infty$ pointwise. Conversely,
every fixed vector $v\in V_0$ of the translations is contained in
each subspace $V_x = W(x)V_0$, hence also in $V_\infty$.
(v) From Lemma~\ref{lem:stand-factorial}(iii) we know that
$V_0 \cap V_0'$ is the space of fixed vectors for the dilation group
$(U_{(0,a)})_{a \in {\mathbb R}^\times}$. Proposition~\ref{prop:2.x}(c)
implies that the translations also act trivially on this space. This proves (v).
\end{proof}
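The two affine identities on which the preceding proof rests, namely
$\gamma_x(r) = ((1-r)x, r)$ and $(t,s)\gamma_x(r)(t,s)^{-1} = \gamma_{t+sx}(r)$,
can also be verified symbolically; the following sympy sketch (our illustration
only) does so.
\begin{verbatim}
import sympy as sp

x, r, t, s = sp.symbols('x r t s', real=True)

mul = lambda g, h: (g[0] + g[1]*h[0], g[1]*h[1])  # (b,a)(b',a') = (b+ab', aa')
inv = lambda g: (-g[0]/g[1], 1/g[1])
gamma = lambda p, a: ((1 - a)*p, a)

lhs = mul(mul((x, 1), (0, r)), (-x, 1))
print([sp.simplify(lhs[i] - gamma(x, r)[i]) for i in (0, 1)])        # [0, 0]

conj = mul(mul((t, s), gamma(x, r)), inv((t, s)))
print([sp.simplify(conj[i] - gamma(t + s*x, r)[i]) for i in (0, 1)]) # [0, 0]
\end{verbatim}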
\begin{remark} \mlabel{rem:3.11} (a) If the momentum operator $P$ from
Theorem~\ref{thm:bor-stand} is strictly positive,
then the translation group has no non-zero fixed vectors
and Theorem~\ref{thm:3.8}(iv) implies $V_\infty = \{0\}$.
(b) If $(U,V)$ is a Borchers pair for which
$U_t V \subseteq V$ for all $t \in {\mathbb R}$,
then $U_t V = V$ for every $t \in {\mathbb R}$ because $V = U_0 V = U_t U_{-t}V\subseteq U_t V$.
Now Theorem~\ref{thm:3.8}(iii) entails $U_t = \mathbf{1}$ for every $t \in {\mathbb R}$.
Therefore non-trivial representations of the translation group
lead to proper inclusions.
(c) For a Borchers pair $(U,V)$, the operators $(U_t)_{t \geq 0}$ and
the modular operators $(\Delta^{it})_{t \in {\mathbb R}}$ act by isometries on the real
Hilbert space $V$, so that we obtain a representation of the semigroup
$[0,\infty) \rtimes {\mathbb R}^\times_+$ by isometries on $V$.
In this sense we may consider Borchers' Theorem~\ref{thm:bor-stand}
as a higher dimensional analog of the Lax--Phillips Theorem which
provides a normal form for one-parameter semigroups of
isometries on real Hilbert spaces as translations acting on spaces like
$L^2({\mathbb R}^+,\mathcal{K})$, where $\mathcal{K}$ is a Hilbert space
(cf.~Remark~\ref{rem:3.17}(b) and \cite{NO15}).
The connection with the Lax--Phillips Theorem can also be made more direct as
follows. The subspace $H := U_1 V$ is invariant under the
modular automorphisms $(\Delta^{-it})_{t \geq 0}$.
More precisely,
$\Delta^{-it} H = \Delta^{-it} U_1 V = U_{e^{2\pi t}} V
= V_{e^{2\pi t}}\subseteq V_1 = H$ for $t \geq 0$, in the notation of Theorem~\ref{thm:3.8}.
This shows that $\bigcup_{t \in {\mathbb R}} \Delta^{-it} H$ is dense in $V$ and
that $\bigcap_{t > 0} \Delta^{-it} H = V_\infty$ is the fixed point
set for $(U_t)_{t\in {\mathbb R}}$ in $V$ (Theorem~\ref{thm:3.8}(iv)).
Assuming that $U$ has no non-zero fixed vectors (as in (a) above), we obtain
$V_\infty = \{0\}$. This means that the subspace
$H \subseteq V$ is outgoing in the sense of Lax--Phillips
for the orthogonal one-parameter group $(\Delta^{-it})_{t \in {\mathbb R}}$.
\end{remark}
The group $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ is generated by translations and dilations,
which is the structure underlying Borchers pairs.
But we can also generate
it by the subgroups $\gamma_0({\mathbb R}^\times)$ and $\gamma_1({\mathbb R}^\times)$.
For every antiunitary representation $(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, the corresponding
modular objects lead to two standard subspaces $V_0$ and $V_1$
and we have already seen above that $V_1 \subseteq V_0$ is a positive
half-sided modular inclusion if $U$ is of positive energy.
The following theorem provides a converse
(see \cite[Thm.~3.21]{Lo08}).
\begin{theorem}[Wiesbrock Theorem---one particle case] \mlabel{thm:wiesbrock}
An inclusion $K \subseteq H$ of standard subspaces is positive
half-sided modular if and only if there exists an antiunitary
positive energy representation $(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ with
$K = V_1$ and $H = V_0$.
\end{theorem}
\begin{proof} In view of Theorem~\ref{thm:3.8}(ii)(e), it remains to
show the existence of $U$ if the inclusion is +half-sided modular.
In view of \cite[Thm.~3.21]{Lo08}, there exists a unitary positive energy
representation $(U,\mathcal{H})$ of the connected affine group $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$
such that $U_{\gamma_0(e^t)} = U^H(e^t)$ and $U_{\gamma_1(e^t)} = U^K(e^t)$ for $t \in {\mathbb R}$.
Further, the translation unitaries
$W_t := U_{(t,1)}$ satisfy $W_1 H = K$ and $W_t H \subseteq H$ for $t \geq 0$.
Therefore $(W,H)$ is a Borchers pair, and thus
$\tilde U_{(b,a)} := W_b U^H_a$ defines an extension of $U$ to an antiunitary
representation of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ (Theorem~\ref{thm:bor-stand}).
The corresponding subspaces are
$V_0 = H$ by construction, and $V_1 = W_1 V_0 = W_1 H = K$.
\end{proof}
\begin{exs} \mlabel{ex:hardy}
Below we provide an explicit description of a
positive Borchers pair in a concrete model of the irreducible antiunitary
positive energy representation of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$
(cf.~Theorem~\ref{thm:bor-stand} and \cite[\S4]{LL14}).
A slight variation of \eqref{eq:pics-s1} leads to the
antiunitary representation of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ on
$L^2\big({\mathbb R}_+, \frac{dp}{p}\big)$ by
\[ (U_{(b,e^t)}\psi)(p) = e^{ibp} \psi(e^tp), \qquad
(U_{(0,-1)}\psi)(p) = \overline{\psi(p)}.\]
Conjugating with the unitary operator
$\Gamma \: L^2({\mathbb R}_+, \frac{dp}{p}) \to L^2({\mathbb R},d\theta),
\Gamma(\psi)(\theta) = \psi(e^\theta),$ transforms it into
the representation
\begin{equation}
\label{eq:strip}
(U_{(b,e^t)}\psi)(\theta) := e^{ib e^\theta} \psi(\theta+t), \qquad
(U_{(0,-1)}\psi)(\theta) := \overline{\psi(\theta)}.
\end{equation}
On the strip $\mathcal{S}_\pi := \{ z \in {\mathbb C} \: 0 < \Im z < \pi\}$ we have the Hardy space
\begin{equation}
\label{eq:hardy}
\mathcal{H}^2(\mathcal{S}_\pi) := \Big\{ \psi \in \mathcal{O}(\mathcal{S}_\pi) \: \sup_{0 < \lambda < \pi}
\int_{\mathbb R} |\psi(\theta + i \lambda)|^2\, d\theta < \infty\Big\},
\end{equation}
and in these terms, the standard subspace $V_0$
corresponding to $\gamma_0(t) = (0,t)$ is given by
\[ V_0 = \{ \psi \in \mathcal{H}^2(\mathcal{S}_\pi) \:
(\forall z \in \mathcal{S}_\pi)\ \overline{\psi(i \pi + \overline z)} = \psi(z)\}. \]
On the strip $\mathcal{S}_\pi$, the functions $B(z) := e^{ib e^z}$ satisfy
\[ |B(x+ iy)| = e^{-b \Im(e^{x+iy})} = e^{-b e^x \sin y}\leq 1\quad \mbox{ because } \quad
\sin y \geq 0\]
and $\overline{B(i\pi + \overline z)} = B(z).$
This shows that, for $b\geq 0$, multiplication with $B$
defines an isometry of the Hardy space $\mathcal{H}^2(\mathcal{S}_\pi)$ and also of the real subspace
$V_0$ into itself (cf.\ Remark~\ref{rem:3.11}(c)).
One can show that all unitary operators commuting with the representation
of the one-parameter group $(U_{(b,1)})_{b \in {\mathbb R}}$ and mapping $V_0$ into itself
are multiplications with bounded holomorphic functions $\phi$ on $\mathcal{S}_\pi$
satisfying $\phi(i\pi + \overline z) = \overline{\phi(z)}$ and whose boundary
values in $L^\infty({\mathbb R},{\mathbb C})$ satisfy $|\phi(x)| = 1$ for almost every $x \in {\mathbb R}$
(cf.~Remark~\ref{rem:inner}(c)).
For explicit descriptions of standard
subspaces related to free fields, we refer to \cite[p.~422ff]{FG89}.
\end{exs}
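The estimate $|B(x+iy)| = e^{-b e^x \sin y} \leq 1$ for $b \geq 0$ can also be probed
numerically; the following Python sketch (our illustration; the value $b = 1.5$ and
the sample size are arbitrary choices) samples the strip $\mathcal{S}_\pi$ at random points.
\begin{verbatim}
import cmath, random

random.seed(0)
b = 1.5
B = lambda z: cmath.exp(1j * b * cmath.exp(z))
pts = (complex(random.uniform(-5, 5), random.uniform(0, cmath.pi))
       for _ in range(10000))
print(max(abs(B(z)) for z in pts) <= 1.0)   # True
\end{verbatim}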
\begin{remark} \mlabel{rem:3.17}
(a) If $K \subseteq H$ is a proper positive half-sided modular inclusion and
$V$ is a closed real subspace with $K \subseteq V \subseteq H$, then $V$ is clearly standard.
However, neither the inclusion $K \subseteq V$ nor the inclusion $V \subseteq H$ has to be
half-sided modular. In fact, the existence of the unitary one-parameter group
$(U_t)_{t \in {\mathbb R}}$ with $U_1 H = K$ implies that all the inclusions
$U_t H \subseteq U_s H$ for $0 \leq s < t \leq 1$ are proper
(Theorem~\ref{thm:3.8}(ii)). Therefore $K$ has infinite
codimension in $H$. So subspaces $V$ for which
$V/K$ or $H/V$ is finite dimensional yield counterexamples.
(b) Let $V$ be a standard subspace. We write $\mathop{\rm hsm}\nolimits_+(V)$ for the set of all
standard subspaces $H \subseteq V$ for which the inclusion $H\subseteq V$ is
positive half-sided modular. To obtain a description of this set, one can proceed as
follows. First we can split off the maximal direct summand
$H_1 := \bigcap_{t \in {\mathbb R}} \Delta_V^{it} H$ of $V$ contained in $H$
(Lemma~\ref{lem:split}). This leaves us with the situation where $H_1 = \{0\}$.
Decomposing the corresponding antiunitary representation $(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$
(Theorem~\ref{thm:wiesbrock}) into a subspace $\mathcal{H}^0$ on which the translations act trivially
and an orthogonal subspace $\mathcal{H}^+$ on which the representation has strictly
positive energy, we accordingly obtain
the direct sum $V = V^0 \oplus V^+$ of standard subspaces and
$H = V^0 \oplus H^+$. Hence our assumption implies $V^0 = \{0\}$ and $\mathcal{H} = \mathcal{H}^+$.
Now \cite[Thm.~2.8]{Lo08} implies that $U$ is a multiple of the unique irreducible
positive energy representation of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, so that we may assume that
\[ \mathcal{H} = L^2({\mathbb R}_+, \mathcal{K}) \quad \mbox{ and }\quad
(U_{(t,e^s)}f)(x) = e^{itx} e^{s/2} f(e^sx), \qquad
U_{(0,-1)} f = J_K f,\]
where $J_K$ is a conjugation on $\mathcal{K}$ (see \S\ref{subsubsec:2.5.1}).
As all antiunitary representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ with strictly positive
energy and the same multiplicity are equivalent, we obtain all such standard subspaces
$H$ by applying elements of the group
\begin{align*}
K &:= \{ U \in \mathop{\rm U{}}\nolimits(\mathcal{H}) \: UV = V \}
= \{ U \in \mathop{\rm U{}}\nolimits(\mathcal{H}) \: (\forall a \in {\mathbb R}^\times)\ U U_{(0,a)} U^{-1} = U_{(0,a)}\} \\
&\cong \{ U \in \mathop{\rm O{}}\nolimits(V) \: (\forall t \in {\mathbb R})\
U\, \Delta_V^{it}\vert_V = \Delta_V^{it}\vert_V\, U\}.
\end{align*}
If $\mathcal{K} = {\mathbb C}$, then the representation of ${\mathbb R}^\times$ on $\mathcal{H}$ is (by Fourier transform)
equivalent
to the representation of ${\mathbb R}^\times$ on $L^2({\mathbb R},\mathcal{K})$ by
\[ (V_{e^x} \xi)(p) = e^{ixp} \xi(p) \quad \mbox{ and } \quad
(V_{-1} \xi)(p) = \overline{\xi(-p)}.\]
Therefore any unitary operator $M$ on $\mathcal{H}$ commuting with $V_{{\mathbb R}^\times}$ is of the form
$(M\xi)(p) = m(p)\xi(p)$, where $m \: {\mathbb R} \to {\mathbb T}$ is a measurable function satisfying
$m(-p) = \overline{m(p)}$. It would be interesting to see how this relates to the
inner functions corresponding to endomorphisms of one-dimensional standard pairs
(see Remark~\ref{rem:inner}(c)).
\end{remark}
Combining the preceding results with the fact that
the infinite dimensional irreducible positive energy representation of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$ extends to an antiunitary positive energy representation of
$\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ with lowest weight $1$
(Theorem~\ref{thm:1.4} and Corollary~\ref{cor:psl2-restrict}), we obtain
(\cite[Cor.~4.15]{Lo08}):
\begin{theorem}
There exists a one-to-one correspondence between
\begin{itemize}
\item[\rm(i)] Positive half-sided modular inclusions $K \subseteq H$.
\item[\rm(ii)] Antiunitary positive energy representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$.
\item[\rm(iii)] Positive Borchers pairs $(V,U)$.
\item[\rm(iv)] Unitary representations of $\mathop{{\rm PSL}}\nolimits_2({\mathbb R})$ which are direct sums of
representations with lowest weights $0$ or $1$.
\end{itemize}
\end{theorem}
An important aspect of the last item in
the preceding theorem is that it leads to a considerable enrichment
of the geometry. Starting with a positive half-sided modular inclusion $K \subseteq H$,
we obtain an antiunitary representation of $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$.
Accordingly, for every interval $I \subseteq {\mathbb S}^1$, the corresponding
homomorphism $\gamma^I \: {\mathbb R}^\times \to \mathop{{\rm PGL}}\nolimits_2({\mathbb R})$
(Example~\ref{ex:proj-grp}) determines
a standard subspace $V_I$, whereas the representation of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})$ only leads to standard subspaces $V_I$ indexed by the
open half-lines $I \subseteq {\mathbb R}$.
The following theorem is another result in this direction.
It relates pairs of half-sided modular inclusions
via the corresponding antiunitary representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$
to representations
of the two-dimensional Poincar\'e group $P(2)_+$, resp., $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$.
\begin{theorem} \mlabel{thm:wies2-standard} {\rm(a)}
Let $H_1 \subseteq V$ be a $-$half-sided modular inclusion and
$H_2 \subseteq V$ be a $+$half-sided modular inclusion such that
\begin{equation}
\label{eq:J-rel}
J_{H_1}J_{H_2} = J_VJ_{H_2} J_{H_1} J_V.
\end{equation}
Then the corresponding three modular one-parameter groups
combine to a faithful continuous antiunitary representation
of the proper Poincar\'e group $P(2)_+ \cong {\mathbb R}^{1,1} \rtimes \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})$.
{\rm(b)} Let $H,V$ be standard subspaces of $\mathcal{H}$
such that $H \cap V \subseteq V$ and $H\cap V \subseteq H$ are
$-$, resp., $+$half-sided modular inclusions satisfying
$J_H V = V.$
Then the corresponding three representations
$U^H, U^V$ and $U^{H \cap V}$
generate a faithful antiunitary positive energy representation of $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$.
\end{theorem}
\begin{proof} (a) The version for von Neumann algebras
is contained in \cite[Lemma~10]{Wi98} and we shall see in
\S \ref{subsec:4.2} below how the present version follows from
this one. Here are some comments on the proof.
Clearly, the two half-sided modular inclusions define
two antiunitary representations $U^1, U^2$
of $\mathop{{\rm Aff}}\nolimits({\mathbb R}) \cong {\mathbb R} \rtimes {\mathbb R}^\times$
that coincide on the subgroup of dilations. Now the main point is to verify
that the corresponding images of the translation groups commute.
That \eqref{eq:J-rel} is necessary can be seen as follows.
If we have a unitary representation
of $P(2)_0$ as required, then
\[ U_{(b_1, b_2,e^t)} := U^1_{b_1} U^2_{b_2} \Delta_V^{-it/2\pi}
\quad \mbox{ and } \quad U_{(0,0,-1)} := J_V\]
defines an extension to an antiunitary representation of~$P(2)_+$.
Here the modular conjugations
\[ J_{H_1} = U_{(2,0,-1)} \quad \mbox{ and } \quad J_{H_2} = U_{(0,2,-1)} \]
satisfy \eqref{eq:J-rel} because
$(0,0,-1) (0,2,-1) (2,0,-1) (0,0,-1) = (2,-2,1)$
holds in $P(2)$.
(b) The von Neumann version is \cite[Thm.~3]{Wi93b}
(see also \cite[Lemma~2]{Wi93c}).
\end{proof}
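The group-theoretic relation used at the end of part (a) is a two-line computation;
the following Python sketch (our illustration; it models only the subgroup of
$P(2)_+$ generated by the translations and $-\mathbf{1}$, with $-\mathbf{1}$ acting
on translations by $b \mapsto -b$) confirms it.
\begin{verbatim}
# Elements (b1, b2, eps) with eps in {1, -1}.
def mul(g, h):
    b1, b2, e = g
    c1, c2, f = h
    return (b1 + e*c1, b2 + e*c2, e*f)

lhs = mul(mul(mul((0, 0, -1), (0, 2, -1)), (2, 0, -1)), (0, 0, -1))
print(lhs == (2, -2, 1))                           # True
print(mul((2, 0, -1), (0, 2, -1)) == (2, -2, 1))   # True: J_{H_1} J_{H_2}
\end{verbatim}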
\subsection{Half-sided modular intersections}
\mlabel{subsec:modint}
\begin{definition}
We consider two standard subspaces $H_1, H_2 \subseteq \mathcal{H}$ and their
modular objects $(\Delta_{H_j}, J_{H_j})_{j=1,2}$. We say that
the pair $(H_1, H_2)$ has a {\it $\pm$modular intersection} if
the following two conditions are satisfied:
\begin{itemize}
\item[\rm(MI1)] The intersection $H_1 \cap H_2$ is a standard subspace and the
inclusions $H_1 \cap H_2 \subseteq H_j$, $j =1,2$, are
$\pm$half-sided modular.
\item[\rm(MI2)] The strong limit $S := \lim_{t \to \pm\infty}
\Delta_{H_1}^{it} \Delta_{H_2}^{-it}$ (which always exists by Remark~\ref{rem:mi-1} below)
satisfies $J_{H_1} S J_{H_1} = S^{-1}.$
\end{itemize}
\end{definition}
\begin{remark} \mlabel{rem:mi-1}
(a) In $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ the two multiplicative one-parameter groups
$\gamma_0(r) := (0,r)$ and $\gamma_1(r) := (1-r,r)$ (stabilizing the points
$0$ and $1$, resp.) satisfy
\[ \gamma_0(r)\gamma_1(r^{-1}) = (0,r)(1-r^{-1}, r^{-1}) = (r-1,1),\]
so that $\lim_{r \to 0} \gamma_0(r)\gamma_1(r^{-1}) = (-1,1)$
exists.
As a consequence, for every continuous unitary representation $(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})_0$, the limit
\begin{equation}
\label{eq:limit2}
\lim_{r \to 0} U_{\gamma_0(r)}U_{\gamma_1(r)}^{-1} = U_{(-1,1)}
\end{equation}
exists.
(b) Suppose that the $+$-variant of
(MI1) is satisfied and let $(U^1, \mathcal{H})$, $(U^2, \mathcal{H})$
be the corresponding positive energy representations of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})$ satisfying
\[ H_1 \cap H_2 = V^1_1 = V^2_1, \qquad
H_1 = V^1_0 \quad \mbox{ and } \quad
H_2 = V^2_0,\]
where $(V^j_x)_{x \in {\mathbb R}}$ are the corresponding families of standard subspaces
(Theorem~\ref{thm:3.8}). We write $W^j(t) := U^j_{(t,1)}$ for the representations of the
translation group.
Then (a) implies that
\[ W^1(-1) = U^1_{(-1,1)}
= \lim_{t \to \infty} U^1_{\gamma_0(e^{-t})}U^1_{\gamma_1(e^t)}
= \lim_{t \to \infty} \Delta_{H_1}^{it/2\pi} \Delta_{H_1 \cap H_2}^{-it/2\pi}
= \lim_{t \to \infty} \Delta_{H_1}^{it} \Delta_{H_1 \cap H_2}^{-it}\]
and likewise
$W^2(-1) = \lim_{t \to \infty} \Delta_{H_2}^{it} \Delta_{H_1 \cap H_2}^{-it}.$
This leads to
\begin{equation}
\label{eq:limit3}
S := \lim_{t \to \infty} \Delta_{H_1}^{it} \Delta_{H_2}^{-it}
= W^1(-1) W^2(1),
\end{equation}
so that the limit in (MI2) exists whenever (MI1) is satisfied.
From the relation
\begin{equation}
\label{eq:s-rel}
S H_2
= W^1(-1) W^2(1) H_2
= W^1(-1) (H_1 \cap H_2) = H_1
\end{equation}
it follows that $S J_{H_2} S^{-1} = J_{H_1}$, i.e.,
\begin{equation}
\label{eq:s12-rel}
S J_{H_2} = J_{H_1} S.
\end{equation}
Condition (MI2) means that $J_{H_1} S$ is an involution. Since
\eqref{eq:s12-rel} says that $J_{H_1} S = S J_{H_2}$, this is equivalent to
$S J_{H_2}$ being an involution, i.e., to the relation ${J_{H_2} S J_{H_2} = S^{-1}}$.
(c) If the negative variant of (MI1) is satisfied,
then $H_j' \subseteq (H_1 \cap H_2)'$ are $+$half-sided modular inclusions
by Remark~\ref{rem:3.13e}. Let $(U^1, \mathcal{H})$, $(U^2, \mathcal{H})$
be the corresponding positive energy representations of
$\mathop{{\rm Aff}}\nolimits({\mathbb R})$ satisfying
\[ (H_1 \cap H_2)' = V^1_0 = V^2_0, \qquad
H_1' = V^1_1 \quad \mbox{ and } \quad
H_2' = V^2_1\]
(Theorem~\ref{thm:3.8}). With the same notation as in (b),
we obtain
\[ W^1(1)
= \lim_{t \to \infty} U^1_{\gamma_1(e^{-t})} U^1_{\gamma_0(e^{t})}
= \lim_{t \to -\infty} \Delta_{H_1'}^{-it/2\pi} \Delta_{(H_1 \cap H_2)'}^{it/2\pi}
= \lim_{t \to -\infty} \Delta_{H_1}^{it} \Delta_{H_1 \cap H_2}^{-it} \]
and likewise
$W^2(1) = \lim_{t \to -\infty} \Delta_{H_2}^{it} \Delta_{H_1 \cap H_2}^{-it}.$
This leads to
\begin{equation}
\label{eq:limit3b}
S := \lim_{t \to -\infty} \Delta_{H_1}^{it} \Delta_{H_2}^{-it} = W^1(1) W^2(-1),
\end{equation}
so that the limit in (MI2) exists. Here $S H_2' = H_1'$ shows that
(MI2) is equivalent to ${J_{H_2} S J_{H_2} = S^{-1}}$.
\end{remark}
The following theorem extends Wiesbrock's Theorem~\ref{thm:wiesbrock}
from half-sided modular inclusions to general modular intersections.
\begin{theorem}[Wiesbrock's Theorem for modular intersections---one particle version]
\mlabel{thm:wies-modinter}
For a pair $(H_1, H_2)$ of standard subspaces, the following
assertions hold:
\begin{itemize}
\item[\rm(a)] $(H_1, H_2)$ has a $+$modular intersection
if and only if there exists an antiunitary representation
$(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ such that the corresponding family of standard subspaces
$(V_x)_{x \in{\mathbb R}}$ from {\rm Theorem~\ref{thm:wiesbrock}} satisfies
$V_0 = H_1$ and $V_1 = H_2$.
\item[\rm(b)] $(H_1, H_2)$ has a $-$modular intersection
if and only if there exists an antiunitary representation
$(U,\mathcal{H})$ of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ such that $V_0 = H_1'$ and $V_1 = H_2'$.
\end{itemize}
\end{theorem}
\begin{proof} (a) Suppose first that $(H_1, H_2)$ has the $+$modular
intersection property. In the context of Remark~\ref{rem:mi-1}(b) we then have
\[ J_{H_1} J_{H_2}
= J_{H_1} J_{H_1 \cap H_2} J_{H_1 \cap H_2} J_{H_2}
= U^1_{(0,-1)} U^1_{(2,-1)} U^2_{(2,-1)} U^2_{(0,-1)}
= W^1(-2) W^2(2), \]
and this operator commutes with $S$ by (MI2). From these relations Wiesbrock derives in
\cite{Wi97} that the two one-parameter groups $W^1$ and $W^2$ commute, so that
$W(t) := W^1(t) W^2(-t)$ defines a unitary one-parameter group and
$U_{(b,a)} := W(b)U^1_{\gamma_0(a)}$
defines an antiunitary representation of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$
for which the corresponding standard subspaces $(V_x)_{x \in {\mathbb R}}$
satisfy $V_0= H_1$ and $V_{1} = W(1)V_0= S^{-1} H_1 = H_2$.
Suppose, conversely, that $(U,\mathcal{H})$ is an antiunitary representation
of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ and let $(V_x)_{x \in {\mathbb R}}$ be the corresponding family of
standard subspaces. We decompose $(U,\mathcal{H})$ as a direct sum
\[ (U,\mathcal{H}) = (U^+,\mathcal{H}^+) \oplus (U^0,\mathcal{H}^0) \oplus(U^-,\mathcal{H}^-), \]
where the representation $U^+$ has strictly positive energy,
$U^-$ has strictly negative energy and the translation group acts
trivially on $\mathcal{H}^0$. Then the subspaces $V_x$ decompose accordingly
as orthogonal direct sums $V_x = V_x^+ \oplus V_x^0 \oplus V_x^-$,
where $V_x^0 = V^0_0$ does not depend on~$x$.
Theorem~\ref{thm:3.8} now implies that $V_x^\pm \subseteq V_y^\pm$ for $\pm (x-y) \geq 0$.
To see that $V_0$ and $V_1$ have a $+$modular intersection, we first observe that
\[ V_0 \cap V_1
= (V_0^+ \cap V_1^+) \oplus V_0^0 \oplus (V_0^- \cap V_1^-)
= V_1^+ \oplus V_0^0 \oplus V_0^-\]
is an orthogonal direct sum of three standard subspaces, hence a standard subspace.
Its invariance under the modular operators
$\Delta_{V_0}^{-it/2\pi} = U_{\gamma_0(e^t)}$ for $t \geq 0$ follows from the
invariance of $V_1^+$ under $U^+_{\gamma_0(e^t)}$ and
the invariance of $V_0^-$ and $V_0^0$ under the corresponding modular group.
For the invariance of $V_0 \cap V_1$ under $\Delta_{V_1}^{-it/2\pi} = U_{\gamma_1(e^t)}
= U_{(1-e^t, e^t)}$, we likewise use that
\[ U^-_{(1-e^t, e^t)} V^-_0
= U^-_{(1-e^t, 1)} V^-_0 = V^-_{1 - e^t} \subseteq V^-_0 \quad \mbox{ for } \quad t \geq 0.\]
This shows that $(V_0, V_1)$ has a $+$modular intersection.
(b) If $(H_1, H_2)$ has a $-$modular
intersection, we likewise obtain with Remark~\ref{rem:mi-1}(c)
that $U_{(b,a)} := W(b) U^1_{\gamma_1(a)}$ and $W(t) := W^1(t) W^2(-t)$
define an antiunitary representation of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ with
$S = W(-1)$, $V_0 = V^1_1 = H_1'$ and $V_1 = W(1)V_0 = S^{-1}H_1' = H_2'$.
Suppose, conversely, that $(U,\mathcal{H})$ is an antiunitary representation
of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$. We use the notation from (a).
To see that $V_0'$ and $V_1'$ have a $-$modular intersection, we first observe that
\[ V_0' \cap V_1'
= ( (V_0^+)' \cap (V_1^+)') \oplus (V_0^0)' \oplus ((V_0^-)' \cap (V_1^-)')
= (V_0^+)' \oplus V_0^0 \oplus (V_1^-)'\]
is an orthogonal direct sum of three standard subspaces, hence $V_0' \cap V_1'$ is standard.
Its invariance under the modular operators
$\Delta_{V_0'}^{it/2\pi} = \Delta_{V_0}^{-it/2\pi} = U_{\gamma_0(e^{t})}$ for $t \geq 0$ follows from the
invariance of $(V_1^-)'$ under $U^-_{\gamma_0(e^t)}$ (Remark~\ref{rem:3.13e}) and
the invariance of $V_0^+$ and $V_0^0$ under the corresponding modular group.
For the invariance of $V_0' \cap V_1'$ under $\Delta_{V_1'}^{it/2\pi} = U_{\gamma_1(e^t)}
= U_{(1-e^t, e^t)}$, we likewise use that
\[ U^+_{(1-e^t, e^t)} (V^+_0)'
= U^+_{(1-e^t, 1)} (V^+_0)' = (V^+_{1 - e^t})' \subseteq (V^+_0)'\quad \mbox{ for }\quad t \geq 0.\]
Therefore $(V_0', V_1')$ has a $-$modular intersection.
\end{proof}
The key point of modular intersections is that they no longer require
any spectral condition on the corresponding representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$.
The preceding theorem shows that $\pm$modular intersections
are characterized as pairs of standard subspaces that can be obtained from arbitrary
antiunitary representations of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$. This is of particular relevance for
representations of Lorentz groups $\mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})$ which, for $d > 3$, never
satisfy any positive energy condition.
With the same method that we used to obtain Theorem~\ref{thm:wies2-standard},
we now obtain the following theorem by transcribing \cite[Thm.~6]{Wi97}
from the context of von Neumann algebras.
\begin{theorem} \mlabel{thm:wies3-standard}
Let $H_1, H_2, H_3$ be three standard subspaces such that
$(H_1, H_2)$ and $(H_3, H_1')$ are $-$modular intersections
and $(H_2, H_3)$ is a $+$modular intersection.
Then the corresponding antiunitary representations $U^{H_j}$, $j =1,2,3$, of
${\mathbb R}^\times$ generate an antiunitary representation of $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$.
\end{theorem}
\begin{proof} For the proof, we only have to observe that the
group $G$ generated by the three modular one-parameter groups $(\Delta_{H_j}^{it})_{t \in {\mathbb R}}$,
$j = 1,2,3$, is invariant under conjugation with $J_{H_1}$.
For the subgroup $G_{12}$ generated by the operators
$\Delta_{H_1}^{it}$ and $\Delta_{H_2}^{is}$, this follows from Theorem~\ref{thm:wiesbrock},
and we likewise obtain the invariance of the subgroup
$G_{13}$ generated by the operators
$\Delta_{H_1}^{it}$ and $\Delta_{H_3}^{is}$. As $G$ is generated by $G_{12} \cup G_{13}$,
the assertion follows.
\end{proof}
\section{A glimpse of modular theory}
\mlabel{sec:4}
We now recall some of the key features of
Tomita--Takesaki Theory. In \S \ref{subsec:4.2} we discuss
the translation between pairs $(\mathcal{M},\Omega)$
of von Neumann algebras with cyclic separating vectors
and standard subspaces~$V$. More specifically, we discuss
this translation for half-sided modular inclusions in
\S \ref{subsec:4.3}, and in \S \ref{subsec:4.4}
we take a closer look at the space of modular conjugations
of a von Neumann algebra.
\subsection{The Tomita--Takesaki Theorem}
\mlabel{subsec:4.1}
Let $\mathcal{H}$ be a Hilbert space and $\mathcal{M} \subseteq B(\mathcal{H})$ be a von Neumann algebra.
We call a unit vector $\Omega \in \mathcal{H}$
\begin{itemize}
\item {\it cyclic} if $\mathcal{M}\Omega$ is dense in $\mathcal{H}$.
\item {\it separating} if the map $\mathcal{M} \to \mathcal{H}, M \mapsto M\Omega$ is injective.
\end{itemize}
It is easy to see that $\Omega$ is separating if and only if it is cyclic for the
commutant $\mathcal{M}'$: if $\mathcal{M}'\Omega$ is dense and $M\Omega = 0$ for some
$M \in \mathcal{M}$, then $M M'\Omega = M'M\Omega = 0$ for all $M' \in \mathcal{M}'$
yields $M = 0$; conversely, the orthogonal projection $P$ onto
$\overline{\mathcal{M}'\Omega}$ lies in $\mathcal{M}'' = \mathcal{M}$ and satisfies
$(\mathbf{1}-P)\Omega = 0$, so that $P = \mathbf{1}$ if $\Omega$ is separating.
\begin{definition}
We write $\mathop{{\rm cs}}\nolimits(\mathcal{M})$ for the set of cyclic and separating unit vectors for $\mathcal{M}$.
\end{definition}
\begin{theorem}[Tomita--Takesaki Theorem] \mlabel{thm:tom-tak}
Let $\mathcal{M} \subseteq B(\mathcal{H})$ be a von Neumann algebra and
$\Omega \in\mathcal{H}$ be a cyclic separating vector for $\mathcal{M}$.
Write $\mathcal{M}_h := \{ M \in \mathcal{M} \: M^* = M\}$ for the real subspace of hermitian
elements in $\mathcal{M}$.
Then $V := \overline{\mathcal{M}_h \Omega}$ is a standard subspace.
The corresponding modular objects $(\Delta, J)$ satisfy
\begin{itemize}
\item[\rm(a)] $J \mathcal{M} J = \mathcal{M}'$ and $\Delta^{it} \mathcal{M} \Delta^{-it} = \mathcal{M}$ for $t \in {\mathbb R}$.
\item[\rm(b)] $J \Omega = \Omega$, $\Delta \Omega = \Omega$ and
$\Delta^{it}\Omega = \Omega$ for all $t \in {\mathbb R}$.
\item[\rm(c)] For $M \in \mathcal{M} \cap \mathcal{M}'$, we have
$JMJ = M^*$ and $\Delta^{it} M \Delta^{-it} = M$ for $t \in {\mathbb R}$.
\end{itemize}
\end{theorem}
\begin{proof}
We only show that $V$ is a standard subspace and refer to
\cite[Thm.~2.5.14]{BR87} for the other assertions.
Clearly, $V$ is a closed real subspace for which $V + i V$ is dense because
it contains $\mathcal{M}_h \Omega + i \mathcal{M}_h\Omega = \mathcal{M}\Omega$.
The same holds for $W := \overline{\mathcal{M}_h' \Omega}$ because $\Omega$ is also
cyclic for $\mathcal{M}'$. For $M \in \mathcal{M}_h$ and $M' \in \mathcal{M}'_h$, we have
\[ \langle M\Omega, M'\Omega \rangle = \langle M'M\Omega, \Omega \rangle = \langle MM'\Omega, \Omega \rangle
= \langle M'\Omega, M\Omega \rangle \in {\mathbb R},\]
which implies that $\omega(V,W) = \{0\}$.
Therefore the complex subspace $V \cap iV$ is contained in $W^\bot$: for
$\xi \in V \cap iV$ and $w \in W$, both $\omega(\xi,w)$ and $\omega(i\xi,w)$ vanish,
so that $\langle \xi, w\rangle = 0$. Since $W + iW$ is dense, $W^\bot = \{0\}$,
and thus $V \cap iV$ is trivial.
Now the main point is to show that the modular objects $(\Delta, J)$ associated
to $V$ satisfy (a)--(c).
\end{proof}
The key point of the Tomita--Takesaki Theorem is that it provides
for each cyclic separating vector $\Omega \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$
a pair $(\Delta, J)$ of modular objects.
The modular operators $\Delta$ and their spectra are the key tool in
the classification of factors and in the characterization
of von Neumann algebras by their natural cones by A.~Connes \cite{Co73, Co74}.
Here we emphasize that
$(\Delta, J)$ is encoded in an antiunitary representation $U^V$ of ${\mathbb R}^\times$.
We first take a closer look at the antiunitary operators that come directly
from $\mathcal{M}$ and its commutant. The picture will be refined in \S\ref{subsec:4.4} below.
\begin{ex} Let $\mathcal{M} \subseteq B(\mathcal{H})$ be a von Neumann algebra,
$G_1 := \mathop{\rm U{}}\nolimits(\mathcal{M}) \times \mathop{\rm U{}}\nolimits(\mathcal{M})$ and
$\tau \in \mathop{{\rm Aut}}\nolimits(G_1)$ be the flip automorphism. We consider the
group $G := G_1 \rtimes \{\mathbf{1},\tau\}$.
If $\Omega$ is a cyclic separating vector for $\mathcal{M}$ and
$J$ the corresponding modular involution, then
\[ U_{(g,h,\tau^\varepsilon)} := g J h J J^\varepsilon \]
defines an antiunitary representation of the pair $(G, G_1)$.
Any other conjugation $\tilde J$ on $\mathcal{H}$ that we can use
to extend the unitary representation $U\vert_{G_1}$ is of the form
$\tilde J = J g$ for some central unitary element
$g \in \mathop{\rm U{}}\nolimits(\mathcal{M} \cap \mathcal{M}')$.
\end{ex}
\begin{definition} \mlabel{def:symform}
A von Neumann algebra $\mathcal{M} \subseteq B(\mathcal{H})$ is said to be
in {\it symmetric form} if there exists a conjugation $J$ on $\mathcal{H}$ with
\[ J\mathcal{M} J = \mathcal{M}' \quad \mbox{ and }\quad JZJ = Z^* \quad \mbox{ for } \quad Z \in \mathcal{M}\cap \mathcal{M}'.\]
\end{definition}
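A concrete illustration, anticipating Example~\ref{ex:3.2}(b) below:
the algebra $\mathcal{M} = B(\mathcal{K})$, acting by left multiplications on the
Hilbert space $B_2(\mathcal{K})$ of Hilbert--Schmidt operators, is in symmetric form
with respect to the conjugation $JA := A^*$. In fact, writing $M_L$ for the
operator of left multiplication by $M$, a short verification gives
\[ (J M_L J)(A) = (M A^*)^* = A M^* \quad \mbox{ for } \quad M \in B(\mathcal{K}),\ A \in B_2(\mathcal{K}), \]
so that $J \mathcal{M} J$ consists of the right multiplications, i.e.,
$J\mathcal{M} J = \mathcal{M}'$; and for $Z = \lambda \mathbf{1} \in \mathcal{M} \cap \mathcal{M}' = {\mathbb C}\mathbf{1}$,
the antilinearity of $J$ yields $JZJ = \overline\lambda \mathbf{1} = Z^*$.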
According to the Tomita--Takesaki Theorem, the existence of a
cyclic separating vector
implies that $\mathcal{M}$ is in symmetric form. According to \cite[Thm.~III.4.5.6]{Bla06},
any two realizations of $\mathcal{M}$ in symmetric form are unitarily equivalent.
Let ${\mathfrak S}_n(\mathcal{M})$ denote the set of {\it normal states} of the von Neumann algebra
$\mathcal{M}$. By the Gelfand--Naimark--Segal construction, any state $\omega$ corresponds
to a cyclic normal representation
$(\pi_\omega, \mathcal{H}_\omega, \Omega_\omega)$ with
$\omega(M) = \langle \Omega_\omega, \pi_\omega(M) \Omega_\omega\rangle$,
which is uniquely determined up to unitary equivalence of cyclic representations.
By construction, $\Omega_\omega$ is cyclic, and the representation $\pi_\omega$ is
faithful with separating vector $\Omega_\omega$ if and only if the state $\omega$ is
{\it faithful}, i.e., $\omega(M^*M) > 0$ for any non-zero $M \in \mathcal{M}$.
\begin{remark} (Existence of cyclic separating vectors)
A von Neumann algebra $\mathcal{M}$ possesses a faithful normal state if and only
if it is {\it $\sigma$-finite} (also called {\it countably decomposable})
in the sense that every family of mutually orthogonal
projections in $\mathcal{M}$ is at most countable (\cite[Prop.~III.4.5.3]{Bla06}).
This is always the case if $\mathcal{M}$ can be realized
on a separable Hilbert space, but not in general. Therefore one has to generalize
the concept of a state to that of a {\it normal weight}. This is an
additive, positively homogeneous, weakly lower semicontinuous
functional $\omega \: \mathcal{M}^+ \to [0,\infty]$ on the positive cone $\mathcal{M}^+$ of $\mathcal{M}$
that may also take the value $\infty$. A weight $\omega$ is called {\it semifinite}
if the subset $\{M\in\mathcal{M}^+\: \omega(M)<\infty\}$
generates $\mathcal{M}$ as a von Neumann algebra.
Every von Neumann algebra has a faithful normal semifinite weight
(cf. \cite[III.2.2.26]{Bla06}) and the GNS construction as well as
Tomita--Takesaki theory extend naturally to normal semifinite weights.
In particular, any such weight leads to a symmetric form realization of~$\mathcal{M}$.
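A basic illustration of these concepts is the operator trace: for
$\mathcal{M} = B(\mathcal{K})$ and an orthonormal basis $(e_n)_n$ of $\mathcal{K}$,
\[ \omega(M) := \mathop{{\rm tr}}\nolimits M = \sum_n \langle e_n, M e_n \rangle \in [0,\infty]
\quad \mbox{ for } \quad M \in B(\mathcal{K})^+ \]
is a faithful normal semifinite weight: it is finite on the positive finite rank
operators, and these generate $B(\mathcal{K})$ as a von Neumann algebra. As
$\omega(\mathbf{1}) = \dim \mathcal{K}$, it is a multiple of a state only if $\dim \mathcal{K} < \infty$.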
\end{remark}
\begin{ex} \mlabel{ex:3.2}
(a) Let $\mathcal{H} = L^2(X,{\mathfrak S},\mu)$ for a $\sigma$-finite measure space
$(X,{\mathfrak S},\mu)$ and $\mathcal{M} = L^\infty(X,{\mathfrak S},\mu)$,
acting on $\mathcal{H}$ by multiplication operators.
Then the normal states of $\mathcal{M}$ are of the form
$\omega_h(f) = \int_X fh\, d\mu$, where $0 \leq h$ satisfies
$\int_X h\, d\mu = 1$. Such a state is faithful if and only if
$h\not=0$ holds $\mu$-almost everywhere. Then $\Omega := \sqrt{h} \in \mathcal{H}$
is a corresponding cyclic separating unit vector.
From $S(f\Omega) = \overline f \Omega$, we obtain
$S(f) = \overline f$, which is isometric and therefore
$S = J$ and $\Delta = \mathbf{1}$.
(b) Let $\mathcal{H} = B_2(\mathcal{K})$ be the space of Hilbert--Schmidt operators on the complex
separable Hilbert space $\mathcal{K}$ and consider the von Neumann algebra
$\mathcal{M} = B(\mathcal{K})$ acting on $\mathcal{H}$ by left multiplications.
Then $\mathcal{M}' \cong B(\mathcal{K})^{\rm op}$ acts by right multiplications.
Normal states of $\mathcal{M}$ are of the form
$\omega_D(A) = \mathop{{\rm tr}}\nolimits(AD)$, where $0 \leq D$ satisfies $\mathop{{\rm tr}}\nolimits D = 1$.
Such a state is faithful if and only if
$\ker D = \{0\}$ (which requires $\mathcal{K}$ to be separable),
and then $\Omega := \sqrt{D} \in \mathcal{H}$
is a cyclic separating unit vector.
Then $S(M\Omega) = M^*\Omega = (\Omega M)^*$ implies that
\[ JA = A^* \quad \mbox{ and } \quad
\Delta(A) = \Omega^{2} A \Omega^{-2}= D A D^{-1}
\quad \mbox{ for } \quad A \in B_2(\mathcal{K}).\]
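To make the modular operator in (b) more explicit (a short computation from the
formula above): fix an orthonormal basis $(e_n)_n$ of $\mathcal{K}$ diagonalizing $D$,
say $D e_n = \lambda_n e_n$ with $\lambda_n > 0$, and let $E_{mn} \in B_2(\mathcal{K})$
be the rank-one operators determined by $E_{mn} e_k = \delta_{nk} e_m$. Then
\[ \Delta(E_{mn}) = D E_{mn} D^{-1} = \frac{\lambda_m}{\lambda_n} E_{mn}, \]
so that the $E_{mn}$ form an orthonormal basis of $B_2(\mathcal{K})$ consisting of
eigenvectors of $\Delta$, and the spectrum of $\Delta$ is the closure of the set
of ratios $\lambda_m/\lambda_n$.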
(c) The prototypical pair $(\Delta, J)$ of a modular operator
and a modular conjugation arises from the regular representation
of a locally compact group $G$ on the Hilbert space $\mathcal{H} = L^2(G, \mu_G)$
with respect to
a left Haar measure~$\mu_G$.
Here the modular operator is given by the multiplication
\[ \Delta f = \Delta_G \cdot f,\]
where $\Delta_G \: G \to {\mathbb R}^\times_+$ is the modular function of $G$
and the modular conjugation is given by
\[ (Jf)(g) = \Delta_G(g)^{-\frac{1}{2}} \overline{f(g^{-1})}.\]
Accordingly, we have for $S = J \Delta^{1/2}$:
\[ (Sf)(g) = \Delta_G(g)^{-1} \overline{f(g^{-1})} = f^*(g).\]
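As a brief consistency check, this formula exhibits $S$ directly as an involution;
only the fact that $\Delta_G \: G \to {\mathbb R}^\times_+$ is a homomorphism enters:
\[ (S^2 f)(g)
= \Delta_G(g)^{-1} \overline{(Sf)(g^{-1})}
= \Delta_G(g)^{-1} \Delta_G(g^{-1})^{-1} f(g) = f(g).\]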
The corresponding von Neumann algebra is the algebra $\mathcal{M} \subseteq B(L^2(G,\mu_G))$
generated by the left regular representation.
If $M_f h =f * h$ is the left convolution with $f \in C_c(G)$, then the value of
the corresponding normal weight $\omega$ on $\mathcal{M}$ is given by
$\omega(M_f) = f(e),$ so that $\omega$ corresponds to evaluation at~$e$,
which is defined on a weakly dense subalgebra of~$\mathcal{M}$.
\end{ex}
\subsection{Translating between standard subspaces and von Neumann
algebras}
\mlabel{subsec:4.2}
We have already seen that cyclic separating vectors of a von Neumann
algebra $\mathcal{M}$ lead to standard subspaces. In this subsection we explore
some properties of this correspondence and describe
how half-sided modular inclusions of standard subspaces translate into
corresponding inclusions of von Neumann algebras. This correspondence
shows that antiunitary representations of groups generated by
modular one-parameter groups and conjugations from cyclic vectors
of von Neumann algebras can already be studied in terms of
standard subspaces and their inclusions, and all this can be encoded
in antiunitary representations of pairs $(G,G_1)$,
and homomorphisms ${\mathbb R}^\times \to G$, resp., $\mathop{{\rm Aff}}\nolimits({\mathbb R}) \to G$
(Corollary~\ref{cor:2.21} and Theorems~\ref{thm:3.8}, \ref{thm:wies-modinter}).
\begin{lemma} \mlabel{lem:4.14}
If $\mathcal{M} \subseteq B(\mathcal{H})$ is a von Neumann algebra and
$\Omega \in \mathcal{H}$ a separating vector for $\mathcal{M}$, then we associate to every
von Neumann subalgebra $\mathcal{N} \subseteq \mathcal{M}$ the closed real subspace
$V_\mathcal{N} := \overline{\mathcal{N}_h \Omega}$. This assignment is injective.
\end{lemma}
Note that the subspace $V_\mathcal{N}$ is standard if $\Omega$ is also cyclic for
$\mathcal{N}$.
\begin{proof} (cf.~\cite[Prop.~3.24]{Lo08})
We have to show that
$M \in \mathcal{M}_h$ and $M \Omega \in V_\mathcal{N}$ implies $M \in \mathcal{N}$.
First we find a sequence $A_n \in \mathcal{N}$ such that
$A_n\Omega \to M\Omega$. For any $B \in \mathcal{M}'$, this leads to
$A_n B\Omega = BA_n \Omega \to B M\Omega = MB\Omega$,
so that $A_n \to M$ holds pointwise on the dense subspace
$\mathcal{D} := \mathcal{M}'\Omega$. Since the hermitian operators $A_n$ and $M$ are bounded,
$\mathcal{D}$ is a common core for all of them.
With \cite[Thm.~VIII.25]{RS73} it now follows that $A_n \to M$
holds in the strong resolvent sense, i.e., that
$(i\mathbf{1} + A_n)^{-1} \to (i\mathbf{1} + M)^{-1}$ in the strong operator topology.
This implies that $(i\mathbf{1} + M)^{-1} \in \mathcal{N}$, which entails $M \in \mathcal{N}$.
\end{proof}
The concept of a half-sided modular inclusion was originally conceived
on the level of von Neumann algebras with cyclic separating
vectors, where it takes the following form
(\cite{Wi93,Wi97}).
\begin{definition}
Let $\Omega$ be a cyclic separating vector for the von Neumann algebra $\mathcal{M}$
and $\mathcal{N} \subseteq \mathcal{M}$ be a von Neumann subalgebra for which
$\Omega$ is also cyclic.
The triple $(\mathcal{M}, \mathcal{N},\Omega)$ is called a {\it $\pm$half-sided modular inclusion}
\begin{footnote}{Here we switched signs, compared to
\cite{Bo97, Wi93}, to make the concept compatible with
the sign convention in the context of standard
subspaces \cite{Lo08}.}
\end{footnote}
if
\begin{equation}
\label{eq:modular-inc}
\Delta_\mathcal{M}^{-it} \mathcal{N} \Delta_\mathcal{M}^{it} \subseteq \mathcal{N} \quad \mbox{ for }\quad
\pm t \geq 0.
\end{equation}
Note that $\Omega$ is also separating for $\mathcal{N}$ because $\mathcal{N} \subseteq \mathcal{M}$,
so that we obtain two pairs of modular objects
$(\Delta_\mathcal{M}, J_\mathcal{M})$ and $(\Delta_\mathcal{N}, J_\mathcal{N})$.
\end{definition}
\begin{lemma} Let $\mathcal{N} \subseteq \mathcal{M} \subseteq B(\mathcal{H})$ be von Neumann algebras
with the common cyclic separating vector $\Omega\in\mathcal{H}$.
Then $(\mathcal{M}, \mathcal{N}, \Omega)$ is a $\pm$half-sided modular inclusion if and only
if the corresponding standard subspaces
$V_\mathcal{N} := \overline{\mathcal{N}_h \Omega} \subseteq V_\mathcal{M} := \overline{\mathcal{M}_h \Omega}$
define a $\pm$half-sided modular inclusion.
\end{lemma}
\begin{proof} Since
$\overline{\Delta_\mathcal{M}^{-it} \mathcal{N}_h \Delta_\mathcal{M}^{it} \Omega}
= \Delta_\mathcal{M}^{-it} V_\mathcal{N},$
relation \eqref{eq:modular-inc} implies
\begin{equation}
\label{eq:modinc-std}
\Delta_\mathcal{M}^{-it} V_\mathcal{N} \subseteq V_\mathcal{N} \quad \mbox{ for } \quad \pm t \geq 0.
\end{equation}
If, conversely, the latter condition is satisfied, then
$\Delta_\mathcal{M}^{-it} \mathcal{N}_h \Delta_\mathcal{M}^{it} \Omega
\subseteq \Delta_\mathcal{M}^{-it} V_\mathcal{N} \subseteq V_\mathcal{N},$
so that Lemma~\ref{lem:4.14} implies \eqref{eq:modular-inc}.
\end{proof}
The preceding lemma has a very interesting consequence
because it translates directly between half-sided modular inclusions
of von Neumann algebras and half-sided modular inclusions of the corresponding
standard subspaces. It immediately implies that a triple
$(\mathcal{M},\mathcal{N},\Omega)$ consisting of two von Neumann algebras
$\mathcal{M}$ and $\mathcal{N}$ with a common cyclic separating vector $\Omega$
defines a modular intersection in the sense of \cite{Wi97} if and only if
$V_\mathcal{M}$ and $V_\mathcal{N}$ have a modular intersection.
Clearly, every result on half-sided modular inclusions on
standard subspaces, such as Borchers' Theorem~\ref{thm:bor-stand}
(\cite[Thms.~II.5.2, VI.2.2]{Bo00}),
Wiesbrock's Theorem~\ref{thm:wiesbrock}
(\cite{Wi93, AZ05}), and Theorem~\ref{thm:wies2-standard}
(\cite[Lemma~10]{Wi98}) yield corresponding results on half-sided
modular inclusions of von Neumann algebras which
preceded the corresponding results on standard subspaces.
It is remarkable that this transfer also works in the other direction:
every result on half-sided modular inclusions of von Neumann algebras
can be used to obtain a corresponding result on standard subspaces.
For this transfer one can use the second quantization procedure
described in some detail in Section~\ref{sec:6} below. It associates
to every standard subspace $V \subseteq \mathcal{H}$
a von Neumann algebra $\mathcal{R}(V) \subseteq B(\mathcal{F}_+(\mathcal{H}))$ on the bosonic
Fock space $\mathcal{F}_+(\mathcal{H})$ for which the vacuum $\Omega$ is a cyclic separating vector
and for which the modular objects are obtained by second quantization.
Here we consider the antiunitary representation
\[ \Gamma \: \mathop{{\rm AU}}\nolimits(\mathcal{H}) \to \mathop{{\rm AU}}\nolimits(\mathcal{F}_+(\mathcal{H})),\quad
\Gamma(U)(v_1 \vee \cdots \vee v_n)
:= Uv_1 \vee \cdots \vee Uv_n\]
obtained by second quantization.
If $\gamma_V \: {\mathbb R}^\times \to \mathop{{\rm AU}}\nolimits(\mathcal{H})$ is the antiunitary representation
associated to $V$, then $\tilde\gamma_V := \Gamma \circ \gamma_V$
is the corresponding antiunitary representation on the Fock space $\mathcal{F}_+(\mathcal{H})$
(cf.~Proposition~\ref{prop:3.2}).
If $\Delta_V^{-it/2\pi}H = \gamma_V(e^t)H \subseteq H$ holds for $t \geq 0$, then
\[ \mathcal{R}(\gamma_V(e^t)H)
= \tilde\gamma_V(e^t) \mathcal{R}(H) \tilde\gamma_V(e^{-t})\subseteq \mathcal{R}(H)\]
implies that $(\mathcal{R}(V), \mathcal{R}(H),\Omega)$ is a $\pm$half-sided modular
inclusion whenever $H \subseteq V$ is.
If, conversely, $(\mathcal{R}(V), \mathcal{R}(H),\Omega)$ is a $\pm$half-sided modular
inclusion, then
\[ \mathcal{R}(\gamma_V(e^t)H)
= \tilde\gamma_V(e^t) \mathcal{R}(H) \tilde\gamma_V(e^{-t})\subseteq \mathcal{R}(H) \]
implies that $\gamma_V(e^t)H \subseteq H$ by Theorem~\ref{thm:araki-1}(i).
Therefore $H \subseteq V$ is a half-sided modular inclusion of the same type.
As the subgroup of $\mathop{{\rm AU}}\nolimits(\mathcal{F}_+(\mathcal{H}))$ generated by the corresponding
one-parameter groups $\tilde\gamma_V({\mathbb R}^\times)$ is contained in
the subgroup $\Gamma(\mathop{{\rm AU}}\nolimits(\mathcal{H}))$ which is the range of the
second quantization homomorphism $\Gamma \: \mathop{{\rm AU}}\nolimits(\mathcal{H}) \to \mathop{{\rm AU}}\nolimits(\mathcal{F}_+(\mathcal{H}))$,
anything that we can say about subgroups generated by these groups
and conditions relating to modular objects can be translated into
a corresponding result on standard subspaces and the antiunitary
one-parameter groups $\gamma_V$ on~$\mathcal{H}$.
According to this principle, any result on half-sided modular inclusions
of von Neumann algebras has a ``one-particle version'' concerning
standard subspaces and vice versa (cf.~\S\S\ref{subsec:3.3}, \ref{subsec:modint}).
The advantage of the one-particle
version is that it has a simpler formulation and that
standard subspaces are completely encoded in
the antiunitary representations $\gamma_V$ of ${\mathbb R}^\times$, hence in
an antiunitary representation of a pair $(G, G_1)$ generated by
the image of a homomorphism $({\mathbb R}^\times, {\mathbb R}^\times_+) \to (G,G_1)$.
Therefore one can hope that any results on standard subspaces,
half-sided modular inclusions and the corresponding groups can be
expressed in terms of antiunitary representations of suitable
involutive pairs of Lie groups $(G, G_1)$. This was one of the key
motivations for us to write this note.
\begin{remark} (a) A typical result of this type is Wiesbrock's Theorem
on half-sided modular inclusions (cf.~Theorem~\ref{thm:wiesbrock} and
\cite{Wi93, Wi97, AZ05}). On the level of modular inclusions of
von Neumann algebras $(\mathcal{M},\mathcal{N},\Omega)$,
Wiesbrock provides the additional information that,
if $\mathcal{M}$ is a factor, then it is of type III$_1$
(see \cite[Thm.~12]{Wi93} which uses \cite{Lo82}).
It would be interesting
to see if and how this can be formulated and derived on the level of standard
subspaces and antiunitary representations.
The discussion of modular nuclearity in \cite[\S 6.3]{Lo08} may
indicate one way in which this can be done.
(b) In \cite[Thm.~4.11]{GLW98}
(see also \cite[Lemmas 3,4ff]{Wi93c} and \cite[Thm.~2]{Wi93b}),
similar structures related to
multiple modular inclusions are studied, namely
quadruples $(\mathcal{M}_0, \mathcal{M}_1, \mathcal{M}_2, \Omega)$, where
the $\mathcal{M}_j$ are von Neumann algebras with the common separating cyclic vector $\Omega$
such that the $\mathcal{M}_j$ commute pairwise and, in cyclic order,
$\mathcal{M}_j \subseteq \mathcal{M}_{j+1}'$ is a half-sided modular inclusion.
From this structure, which arises from partitions of ${\mathbb S}^1$ into
three intervals, one derives antiunitary positive energy representations
of $\mathop{{\rm PGL}}\nolimits_2({\mathbb R})$ as in Theorem~\ref{thm:wies2-standard} (\cite[Thm.~1.2]{GLW98}).
(c) In \cite{Wi93b} it is shown that
the von Neumann version of Theorem~\ref{thm:wies2-standard}(b) characterizes
conformal quantum fields on the circle in terms of modular data associated to three intervals.
(d) In \cite{KW01} configurations of 6 von Neumann algebras
$(\mathcal{M}_{ij})_{1 \leq i < j \leq 4}$ are used to generate unitary representations of the
group $\mathop{{\rm SO}}\nolimits_{1,3}({\mathbb R})^\uparrow$ and further of
the connected Poincar\'e group $P(4)^\uparrow_+$.
\end{remark}
\subsection{Borchers triples}
\mlabel{subsec:4.3}
In this subsection we briefly discuss the generalization of Borchers pairs
to higher dimensional situations, where the semigroup
${\mathbb R}_+$ acting on a standard subspace is replaced by a
wedge $W$ in Minkowski space or by the subsemigroup of $P(d)_+$
mapping such a wedge into itself.
\begin{definition} \mlabel{def:wedges}
In $d$-dimensional Minkowski space ${\mathbb R}^{1,d-1}$, we consider the {\it right wedge}
\[ W_R:=\big\{ x = (x_0, \ldots, x_{d-1}) \in {\mathbb R}^{d} \: x_1 > |x_0|\big\}.\]
To fix notation for the following, we write
$W_R = W_R^2 \oplus E_R,$ where
\[ E_R = \{ (x_0, \bx) \: x_0 = x_1 = 0\}\cong {\mathbb R}^{d-2} \]
is the {\it edge of the wedge}
and $W_R^2$ is the standard right wedge in ${\mathbb R}^2$.
A subset of the form $W = gW_R$, $g \in P(d)$, is called a {\it wedge}.
We write $\mathcal{W}$ for the set of wedges in~${\mathbb R}^{1,d-1}$.
\end{definition}
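The geometry of $W_R$ becomes particularly transparent in the light-like
coordinates $x_\pm := x_1 \pm x_0$, in which
\[ W_R = \big\{ x \in {\mathbb R}^{1,d-1} \: x_+ > 0, x_- > 0 \big\}. \]
If the boost generator $b_0$ from Example~\ref{ex:one-par} is normalized in such
a way that $e^{tb_0}$ acts on the first two coordinates by
\[ (x_0, x_1) \mapsto (\cosh t\, x_0 + \sinh t\, x_1,\ \sinh t\, x_0 + \cosh t\, x_1), \]
then $e^{tb_0}$ multiplies $x_\pm$ by $e^{\pm t}$ and fixes the edge $E_R$
pointwise, so that $e^{tb_0} W_R = W_R$ for every $t \in {\mathbb R}$.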
The following lemma contains some details on $\mathcal{W}$ as an
ordered homogeneous space. For item (iii), we
recall the generator $b_0$ of the Lorentz boost from Example~\ref{ex:one-par}
(see also \cite[\S 2]{BGL02}).
\begin{lemma}
\mlabel{lem:4.17}
The wedge space $\mathcal{W}$ has the following properties:
\begin{itemize}
\item[\rm(i)] The stabilizer $P(d)_{W_R} = \{ g \in P(d)\:gW_R = W_R\}$
of the standard right wedge has
the form
\[ P(d)_{W_R} \cong E(d-2) \times \mathop{\rm O{}}\nolimits_{1,1}({\mathbb R})_{W_R^2},\]
where $E(d-2)$ denotes the euclidean group on $E_R \cong {\mathbb R}^{d-2}$.
\item[\rm(ii)] $r_W := g R_{01} g^{-1}$ for $W = gW_R$ and
$R_{01} = \mathop{{\rm diag}}\nolimits(-1,-1,1,\ldots, 1)$
yields a consistent definition of wedge reflections $(r_W)_{W \in \mathcal{W}}$.
\item[\rm(iii)] The subgroup $P(d)^\uparrow$ acts transitively on $\mathcal{W}$,
and the following are equivalent for $g \in P(d)^\uparrow$:
\begin{itemize}
\item[\rm(a)] $gW_R = W_R$.
\item[\rm(b)] $\mathop{{\rm Ad}}\nolimits_g b_0 = b_0$ and
$g$ commutes with the wedge reflection $r_{W_R} = R_{01}$.
\item[\rm(c)] $\mathop{{\rm Ad}}\nolimits_g b_0 = b_0$.
\end{itemize}
The set of all elements satisfying these conditions is
\begin{equation}
\label{eq:stabgrp}
P(d)^\uparrow_{W_R} \cong E(d-2) \times \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow.
\end{equation}
In particular, the subgroup $\mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow$ is central in
$P(d)^\uparrow_{W_R}$.
For $d > 2$, even the identity component acts transitively on $\mathcal{W}$ with
stabilizer
\begin{equation}
\label{eq:stabgrp2}
P(d)^\uparrow_{+,W_R} \cong E(d-2)_+ \times \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow.
\end{equation}
\item[\rm(iv)] For $\gamma_{W_R} \: {\mathbb R}^\times \to P(d)_+$,
defined by $\gamma_{W_R}(e^t) := e^{tb_0}$ and $\gamma_{W_R}(-1) := r_{W_R}$,
we have a bijection
\[ \mathcal{W} \to C_{\gamma_{W_R}}, \quad g W_R \mapsto \gamma_W := \gamma_{W_R}^g \quad \mbox{ for } \quad
g \in P(d)^\uparrow_+, \quad
\gamma_{W_R}^g(t) = g\gamma_{W_R}(t)g^{-1}\]
and the map
\[ \mathcal{W} \to C_{r_{W_R}} = \{ r_W \: W \in \mathcal{W}\}, \quad
W \mapsto r_W \]
corresponding to evaluation in $-1$ is a two-fold covering map.
\item[\rm(v)] We have a bijection
$\mathcal{W} \to \mathop{{\rm Ad}}\nolimits(P(d)^\uparrow)b_0, gW_R \mapsto \mathop{{\rm Ad}}\nolimits(g)b_0$
of $\mathcal{W}$ onto an adjoint orbit of $P(d)^\uparrow$.
\item[\rm(vi)] The stabilizer $P(d)_{W_R}$ is open in the centralizer
of $r_{W_R}$ in $P(d)$. In particular
$(P(d), P(d)_{W_R})$ is a symmetric pair and $\mathcal{W}$ is a symmetric space.
\item[\rm(vii)] The semigroup
$S_{W_R} := \{ g \in P(d) \: gW_R \subseteq W_R\}$ is given by
$\overline{W_R} \rtimes \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})_{W_R}$.
\end{itemize}
\end{lemma}
\begin{proof} (i) The stabilizer group
contains the translation group corresponding to the edge $E_R$
and $gW_R = W_R$ implies $g(0) \in E_R$, so that
\[ P(d)_{W_R} \cong E_R \rtimes \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})_{W_R}.\]
Further,
\[ \mathop{\rm O{}}\nolimits_{1,d-1}({\mathbb R})_{W_R}
= \mathop{\rm O{}}\nolimits_{d-2}({\mathbb R}) \times \mathop{\rm O{}}\nolimits_{1,1}({\mathbb R})_{W_R^2}
= \mathop{\rm O{}}\nolimits_{d-2}({\mathbb R}) \times (\mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow \{\mathbf{1}, R_1\}), \]
where $R_1 = \mathop{{\rm diag}}\nolimits(1,-1,1,\ldots, 1)$. We thus obtain (i).
(ii) follows from the fact that the wedge
reflection $R_{01}$ commutes with $P(d)_{W_R}$.
(iii) That $P(d)^\uparrow$ acts transitively follows from the fact that
the stabilizer $P(d)_{W_R}$ contains the reflection $R_1$
satisfying $R_1 V_+ = -V_+$.
If $d > 2$, then the stabilizer $P(d)_{W_R}$ intersects all four connected
components of $P(d)$, so that even $P(d)^\uparrow_+$ acts transitively.
For $d = 2$ we obtain two orbits because $\pm W_R$ lie in different orbits
of $P(2)^\uparrow_+$.
It remains to verify the equivalence of (a), (b) and (c).
From \eqref{eq:stabgrp} we derive that (a) implies (b) and hence (c).
That $g = (b,a)$ commutes with $R_{01}$ is equivalent to
$b \in E_R$ and $a = a_1 \oplus a_2$ with
$a_1$ acting on the first two coordinates and $a_2$ on $E_R$.
That, in addition, $g$ commutes with $b_0$ restricts $a_1 \in \mathop{\rm O{}}\nolimits_{1,1}({\mathbb R})^\uparrow$
to an element of $\mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow$. Finally, we observe that,
if $g$ commutes with $b_0$, then the eigenspace decomposition
of $\mathop{{\rm ad}}\nolimits b_0$ on ${\mathfrak p}(d)$ implies that $g = (b,a)$ with
$b \in E_R$ and $a = a_1 \oplus a_2$ with $a_1 \in \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow$.
(iv) The first part follows from the equivalence of (a) and (b) in (iii).
For the second part, we observe with (iii) above that
the stabilizer of $W_R$ is a subgroup of index $2$ in the centralizer
of $r_{W_R}$.
(v) follows from the equivalence of (a) and (c) in (iii).
(vi) The centralizer of $r_{W_R}$ in $P(d)$
is the subgroup $E(d-2) \times \mathop{\rm O{}}\nolimits_{1,1}({\mathbb R})$ in which
the stabilizer group $P(d)_{W_R}$ is open. This means that
$(P(d), P(d)_{W_R})$ is a symmetric pair.
(vii) For a closed convex subset $C \subseteq {\mathbb R}^d$, its recession cone
\[ \lim(C)
:= \{ x \in {\mathbb R}^d \: x + C \subseteq C \}
= \{ x \in {\mathbb R}^d \: (\exists c \in C)\, c + {\mathbb R}_+ x \subseteq C \} \]
is a closed convex cone (\cite[Prop.~V.1.6]{Ne00}), and each
affine map $g = (b,a) \in \break{{\mathbb R}^d \rtimes \mathop{{\rm GL}}\nolimits_d({\mathbb R})} \cong \mathop{{\rm Aff}}\nolimits({\mathbb R}^d)$ satisfies
\begin{equation}
\label{eq:lim-cone}
\lim(gC) = \lim(aC) = a\lim(C).
\end{equation}
If $g = (b,a) \in S_{W_R}$, then $g$ maps $\overline{W_R}$ into itself,
so that $b = g(0) \in \overline{W_R}$. Further \eqref{eq:lim-cone} implies that
$\overline{W_R} \supseteq \lim(gW_R) = a \overline{W_R}$, and hence $aW_R \subseteq W_R$.
It follows that $aE_R \subseteq E_R$, so that $aE_R = E_R$ as $a$ is injective
and $\dim E_R < \infty$. This in turn implies that $a$ commutes with
$r_{W_R}$, so that $a = a_1 \oplus a_2$ as above,
where $a_2 \in \mathop{\rm O{}}\nolimits(E_R)$ and $a_1 W_R^2 \subseteq W_R^2$.
As $a_1 W_R^2$ is a quarter plane bounded by light rays, we get
$a_1 W_R^2 = W_R^2$, and finally $a W_R = W_R$.
\end{proof}
\begin{definition} \mlabel{def:standard-pair} (\cite[Def.~2.7]{Le15})
A $d$-dimensional {\it standard pair $(V,U)$
with translation symmetry
relative to $W\in \mathcal{W}$} consists of a standard subspace
$V \subseteq \mathcal{H}$ and a strongly continuous unitary positive energy
representation $U$ of the translation group ${\mathbb R}^d$ (cf.~Definition~\ref{def:posen-rep})
such that $U_xV \subseteq V$ whenever $x + W \subseteq W$.
\end{definition}
Here is the corresponding concept for von Neumann algebras:
\begin{definition} (\cite[\S4]{BLS11}) A {\it (causal) Borchers triple}
$(\mathcal{M}, U,\Omega)$ relative to the wedge $W\subseteq {\mathbb R}^d$ consists of
\begin{itemize}
\item[\rm(B1)] a von Neumann algebra $\mathcal{M} \subseteq B(\mathcal{H})$,
\item[\rm(B2)] a positive energy representation $(U,\mathcal{H})$ of
the translation group ${\mathbb R}^d$ such that $U_x \mathcal{M} U_x^* \subseteq \mathcal{M}$ if $x + W \subseteq W$, and
\item[\rm(B3)] a $U$-invariant unit vector $\Omega \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$.
\end{itemize}
\end{definition}
\begin{remark} Let $\mathcal{M} \subseteq B(\mathcal{H})$ be a von Neumann algebra,
$U \: {\mathbb R}^d \to \mathop{\rm U{}}\nolimits(\mathcal{H})$ be a continuous unitary representation
and $\Omega \in \mathcal{H}^U\cap \mathop{{\rm cs}}\nolimits(\mathcal{M})$. We consider the corresponding standard subspace
$V := \overline{\mathcal{M}_h\Omega}$.
Then $U_x V \subseteq V$ is equivalent to
$U_x \mathcal{M} U_x^* \subseteq \mathcal{M}$ by Lemma~\ref{lem:4.14}.
Therefore $(\mathcal{M}, U,\Omega)$ is a Borchers triple with respect to $W$
if and only if $(V,U)$ is a standard pair with respect to $W$.
\end{remark}
The following theorem can be obtained by translating \cite{Bo92}
from the context of Borchers triples to standard pairs by
arguing as in \S \ref{subsec:4.2}.
We give a direct proof based on our Theorem~\ref{thm:bor-stand}.
\begin{theorem}[Borchers' standard pair Theorem] \mlabel{thm:6.2a}
Let $(V,U)$ be a $d$-dimensional
standard pair with translation symmetry relative to $W_R$
and $\gamma_{W_R} \: {\mathbb R}^\times \to \mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})$ be the corresponding
homomorphism with $\gamma_{W_R}(e^t) = e^{tb_0}$ and $\gamma_{W_R}(-1)= r_{W_R} = R_{01}$
{\rm(Lemma~\ref{lem:4.17}(iv))}. Then the antiunitary
representation $(U^V,\mathcal{H})$ of ${\mathbb R}^\times$ corresponding to $V$ satisfies
\[ U^V_t U_x U^V_{t^{-1}} = U_{\gamma_{W_R}(t)x} \quad \mbox{ for } \quad x \in {\mathbb R}^d,
t \in {\mathbb R}^\times, \]
so that we obtain an antiunitary representation
of ${\mathbb R}^d \rtimes \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})$
by $(b, \gamma_{W_R}(t)) \mapsto U_b U^V_t$.
Conversely, every antiunitary positive energy representation
$(U,\mathcal{H})$ of ${\mathbb R}^d \rtimes \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})$ defines a standard pair
$(V_{\gamma_{W_R}}, U\vert_{{\mathbb R}^d})$.
\end{theorem}
\begin{proof}
First we write ${\mathbb R}^d = {\mathbb R}^{1,1} \oplus {\mathbb R}^{d-2}$, so that
$W_R = W_R^2 \oplus {\mathbb R}^{d-2}$, where $W_R^2 \subseteq {\mathbb R}^{1,1}$ is the standard right wedge.
For the light-like vectors $\ell_\pm := (1,\pm 1,0,\ldots, 0)$ we then have
$W_R^2 = {\mathbb R}_+^\times \ell_+ - {\mathbb R}_+^\times \ell_-$. By assumption,
$U_{t\ell_\pm} = e^{it P_\pm}$ with $P_\pm \geq 0$.
The strong continuity of $U$ implies
\[ U_x V \subseteq V \quad \mbox{ for all } \quad
x \in \overline{W_R}
= ([0,\infty)\ell_+ - [0,\infty)\ell_-) \oplus {\mathbb R}^{d-2}.\]
Now Theorem~\ref{thm:bor-stand} yields
\[ U^V_{e^t} U_{s\ell_\pm} U^V_{e^{-t}}
= U_{e^{\pm t} s \ell_\pm} \quad \mbox{ for } \quad t, s \in {\mathbb R}.\]
Further, $U_x V = V$ for $x = (0,0,x_2, \ldots, x_{d-1})$ implies that
$U_x$ commutes with $\Delta_V$. Combining all this, the first
assertion follows.
For the converse, let $(U,\mathcal{H})$ be an antiunitary
positive energy representation of the group ${{\mathbb R}^d \rtimes \mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})}$
and $V = V_{\gamma_{W_R}}$ be the standard subspace corresponding to
$\gamma^V := U \circ \gamma_{W_R}$ (Proposition~\ref{prop:3.2}).
Since $\gamma_{W_R}$ commutes with $E_R$, the subgroup
$U_{E_R}$ commutes with $\gamma^V$ and leaves $V$ invariant.
That $U_x V \subseteq V$ for $x \in W_R^2$ follows from the
positive energy condition and Theorem~\ref{thm:3.8}(ii).
\end{proof}
Here is a variant of this concept where the translation group is replaced by the Poincar\'e group:
\begin{definition}
A $d$-dimensional {\it standard pair $(V,U)$
with Poincar\'e symmetry relative to $W_R \in \mathcal{W}$} consists of a standard subspace
$V \subseteq \mathcal{H}$ and a strongly continuous unitary positive energy
representation $U$ of the connected Poincar\'e group $P(d)^\uparrow_+$, such that
\begin{itemize}
\item[\rm(i)] $U_gV \subseteq V$ for all $g \in P(d)^\uparrow_+$ with $gW_R \subseteq W_R$ and
\item[\rm(ii)] $U_gV \subseteq V'$ for all $g \in P(d)^\uparrow_+$ with $gW_R \subseteq -W_R$.
\end{itemize}
\end{definition}
\begin{lemma} \mlabel{lem:stand-pair-poincare}
Let $(U,\mathcal{H})$ be an antiunitary positive energy representation of
$P(d)_+$ for $d \geq 3$ and $V \subseteq \mathcal{H}$ be the standard subspace
corresponding to the canonical homomorphism
$\gamma_{W_R} \: {\mathbb R}^\times \to P(d)_+$.
Then $(V,U)$ is a standard pair with Poincar\'e symmetry.
\end{lemma}
\begin{proof} If $g W_R = W_R$, then $g \in P(d)_+$ commutes with $\gamma_{W_R}$
by Lemma~\ref{lem:4.17}(iii), so that $U_g V = V$.
Further $U_x V \subseteq V$ for $x \in W_R$ (and hence also for
$x \in \overline{W_R}$ by continuity) follows from the second part of
Theorem~\ref{thm:6.2a}. In view of Lemma~\ref{lem:4.17}(vii),
this implies that $U_g V \subseteq V$ if $gW_R \subseteq W_R$.
If $g W_R \subseteq - W_R$, then the element $r := R_{12} = \mathop{{\rm diag}}\nolimits(1,-1,-1,1,\ldots, 1)
\in P(d)_+^\uparrow$ satisfies $r g W_R \subseteq r(-W_R) = W_R$,
so that the above argument leads to
$U_g V = U_rU_{rg} V \subseteq U_rV$. Now $\gamma_{W_R}^r = \gamma_{W_R}^\vee$
yields $U_r V = V' = V_{\gamma_{W_R}^\vee}$, so that $U_g V \subseteq V'$.
\end{proof}
\begin{remark} \mlabel{rem:inner} (a) In \cite{BLS11}, standard pairs with Poincar\'e symmetry
are used to obtain Borchers triples by second quantization
(cf.~Section~\ref{sec:6}). Combined with a
deformation process due to Rieffel, this
construction yields non-free quantum fields in arbitrarily large dimensions.
(b) The main point of the notion of a
Borchers triple is that they can be used to construct a representation
of the Poincar\'e group $P(d)_+^\uparrow$ by generating it with
modular one-parameter groups of a finite set of von Neumann algebras
with a common cyclic separating vector (\cite{Bo96}, \cite{Wi93c, Wi98, SW00, KW01}).
(c) For $d = 1$, we think of the underlying space ${\mathbb R} = {\mathbb R}^d$
as a light ray in Minkowski space,
so that the Poincar\'e group is replaced by the affine group $\mathop{{\rm Aff}}\nolimits({\mathbb R})$.
In this context unitary ``endomorphisms'' of irreducible
one-dimensional standard pairs (Definition~\ref{def:standard-pair}) are
unitary operators $W \in \mathop{\rm U{}}\nolimits(\mathcal{H})$ commuting with the one-parameter
group $U$ satisfying $WV \subseteq V$.
If $P$ is the momentum operator determined by $U_t = e^{itP}$,
then these are precisely the operators of the form
$W = \phi(P)$, where $\phi$ is a
{\it symmetric inner function} on the upper half plane
${\mathbb C}_+ \subseteq {\mathbb C}$. A symmetric inner function is a bounded
holomorphic function on ${\mathbb C}_+$ satisfying
\[ \phi(p)^{-1} = \overline{\phi(p)} = \phi(-p) \quad \mbox{ for almost all } \quad
p \in {\mathbb R}\]
(\cite[Cor.~2.4]{LW11}; see also Example~\ref{ex:hardy}).
That these functions can be used to construct Borchers triples was
shown by Tanimoto in \cite{Ta12}.
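A simple explicit family of symmetric inner functions, easily checked against
this definition: for $\kappa > 0$,
\[ \phi(p) := \frac{p - i\kappa}{p + i\kappa} \]
is holomorphic on ${\mathbb C}_+$ with $|\phi(z)| \leq 1$ because
$|z - i\kappa| \leq |z + i\kappa|$ for $\Im z > 0$, and for real $p$ we have
\[ \overline{\phi(p)} = \frac{p + i\kappa}{p - i\kappa} = \phi(p)^{-1} = \phi(-p). \]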
\end{remark}
Much more could be said about the structures
related to standard subspaces, half-sided modular inclusions,
modular intersections, etc. For more details and an in-depth study
of these concepts, we refer to \cite{Bo97} and Wiesbrock's work
\cite{Wi93c, Wi97b, Wi98}.
\subsection{Modular geometry}
\mlabel{subsec:4.4}
In this subsection we discuss some of the geometric structures
arising from a single von Neumann algebra $\mathcal{M} \subseteq B(\mathcal{H})$ which has
cyclic separating vectors. Any such vector $\xi$ leads
to a standard subspace $V_\xi = \overline{\mathcal{M}_h \xi}$ and corresponding
modular objects $(\Delta_\xi, J_\xi)$ (Theorem~\ref{thm:tom-tak}). Fixing a cyclic separating
vector $\Omega$, the associated natural cone provides a means to
analyze the orbits of the group generated by
$\mathop{\rm U{}}\nolimits(\mathcal{M})$, $\mathop{\rm U{}}\nolimits(\mathcal{M}')$ and the modular conjugations on these data.
\begin{definition} \mlabel{def:4.20} We consider a von Neumann algebra $\mathcal{M} \subseteq B(\mathcal{H})$
for which the set $\mathop{{\rm cs}}\nolimits(\mathcal{M})$ of cyclic and separating unit vectors
is non-empty. We fix an element $\Omega \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$ and the
corresponding modular objects $(\Delta, J)$
(Theorem~\ref{thm:tom-tak}). We recall
the {\it natural positive cone}
\[ \mathcal{P} := \overline{\{Aj(A)\Omega\: A \in \mathcal{M}\}}, \quad \mbox{ where } \quad
j(A) := JAJ\]
(\cite[Def.~2.5.25]{BR87}) and write
\[ \mathop{{\rm cs}}\nolimits(\mathcal{M})_+ := \mathcal{P} \cap \mathop{{\rm cs}}\nolimits(\mathcal{M})\]
for the set of cyclic separating unit vectors in $\mathcal{P}$.
We further write
\[ \mathop{{\rm mc}}\nolimits(\mathcal{M}) := \{ J_\xi \: \xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})\} \]
for the corresponding set of {\it modular conjugations}. We further consider the
set
\[ \mathop{{\rm ms}}\nolimits(\mathcal{M}) = \{ V_\xi = \overline{\mathcal{M}_h \xi} \: \xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})\}
\subseteq \mathop{{\rm Stand}}\nolimits(\mathcal{H}) \]
of {\it modular standard subspaces for $\mathcal{M}$} and
note that $\Delta_{V_\xi} = \Delta_\xi$ and $J_{V_\xi} = J_\xi$.
\end{definition}
We write $\mathcal{Z} := \mathcal{M} \cap \mathcal{M}'$ for the center of $\mathcal{M}$.
\begin{proposition} \mlabel{prop:4.7}
The following assertions hold:
\begin{itemize}
\item[\rm(i)] {\rm(Polar decomposition of $\mathop{{\rm cs}}\nolimits(\mathcal{M})$)} The map
$\mathop{\rm U{}}\nolimits(\mathcal{M}') \times \mathop{{\rm cs}}\nolimits(\mathcal{M})_+ \to \mathop{{\rm cs}}\nolimits(\mathcal{M}), (U,\xi) \mapsto U\xi$
is a bijection.
\item[\rm(ii)] The unitary groups $\mathop{\rm U{}}\nolimits(\mathcal{M})$ and $\mathop{\rm U{}}\nolimits(\mathcal{M}')$
both act transitively on $\mathop{{\rm mc}}\nolimits(\mathcal{M})$ by conjugation.
For $J \in \mathop{{\rm mc}}\nolimits(\mathcal{M})$, the stabilizer in both groups is the discrete central subgroup
\[ \mathop{\rm U{}}\nolimits(\mathcal{M}')_J = \mathop{\rm U{}}\nolimits(\mathcal{M})_J = \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z})) = \{ z \in \mathop{\rm U{}}\nolimits(\mathcal{Z}) \: z^2 = \mathbf{1}\}\]
of central unitary involutions. The orbit map
$\sigma \: \mathop{\rm U{}}\nolimits(\mathcal{M}) \to \mathop{{\rm mc}}\nolimits(\mathcal{M}), \sigma(U) := UJU^{-1}$
is a covering morphism of Banach--Lie groups if we identify
$\mathop{{\rm mc}}\nolimits(\mathcal{M})$ with the quotient $\mathop{\rm U{}}\nolimits(\mathcal{M})/\ker\sigma$.
\item[\rm(iii)] $\mathop{{\rm grp}}\nolimits(\mathop{{\rm mc}}\nolimits(\mathcal{M})) = \mathop{{\rm Comm}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{M})) \mathop{{\rm mc}}\nolimits(\mathcal{M}) \{ \mathbf{1},J\},$
where $\mathop{{\rm Comm}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{M}))$ is the commutator subgroup of $\mathop{\rm U{}}\nolimits(\mathcal{M})$.
\item[\rm(iv)] $\mathop{{\rm cs}}\nolimits(\mathcal{M})_J := \{ \xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M}) \: J_\xi = J \}
= \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))\mathop{{\rm cs}}\nolimits(\mathcal{M})_+$.
\item[\rm(v)] The stabilizer of $J$ in the group
$\mathop{\rm U{}}\nolimits(\mathcal{M})\mathop{\rm U{}}\nolimits(\mathcal{M}')$ is $\{ Uj(U) \: U \in \mathop{\rm U{}}\nolimits(\mathcal{M})\}\mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))$.
\item[\rm(vi)] For $\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$, we have
$V_\xi = V_\Omega$ if and only if there exists a
selfadjoint operator $Z$ affiliated with $\mathcal{Z}$, i.e.,
commuting with $\mathop{\rm U{}}\nolimits(\mathcal{M})\mathop{\rm U{}}\nolimits(\mathcal{M}')$, such that $\xi = Z\Omega$.
\end{itemize}
\end{proposition}
\begin{proof} (i) For any $\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$, there exists a
unit vector $\tilde\xi \in \mathcal{P}$ defining the same state of $\mathcal{M}$
(\cite[Thm.~2.5.31]{BR87}).
By the GNS Theorem, there exists a $U \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$ with
$\xi = U\tilde\xi$.
Since the elements of $\mathop{{\rm cs}}\nolimits(\mathcal{M})_+$ are also separating for $\mathcal{M}'$,
their stabilizer in $\mathop{\rm U{}}\nolimits(\mathcal{M}')$ is trivial.
To verify injectivity, it remains to see that every $\mathop{\rm U{}}\nolimits(\mathcal{M}')$-orbit
in $\mathop{{\rm cs}}\nolimits(\mathcal{M})$ meets $\mathop{{\rm cs}}\nolimits(\mathcal{M})_+$ exactly once.
Let $U \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$ and $\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})_+$ be such that
$U\xi \in \mathcal{P}$. As $J_\xi = J$ for every $\xi \in \mathcal{P}$ by
\cite[Prop.~2.5.30]{BR87}, we obtain
$U J U^{-1} = J_{U\xi} = J$.
Then $j(U)= U$ leads to $U \in \mathcal{M} \cap \mathcal{M}'$ and hence
to $U = j(U) = U^{-1}$ (Theorem~\ref{thm:tom-tak}(c)), so that $U^2 =\mathbf{1}$.
Then $\mathcal{M}_{\pm 1}:= \{ M \in \mathcal{M} \: UM = \pm M\}$ are ideals of $\mathcal{M}$
and $\mathcal{M} \cong \mathcal{M}_+ \oplus \mathcal{M}_-$ is a direct sum of von Neumann algebras.
Now $\xi = \xi_+ \oplus \xi_-$ decomposes accordingly with
$\xi_\pm \in \mathop{{\rm cs}}\nolimits(\mathcal{M}_\pm)$. As
$U \xi = \xi_+ - \xi_-$ and
$\mathcal{P} = \mathcal{P}_+ \oplus \mathcal{P}_-$, it follows that
$\xi_- \in \mathcal{P}_- \cap - \mathcal{P}_- = \{0\}$ (\cite[Prop.~2.5.28]{BR87})
and thus $\mathcal{M}_- = \{0\}$ and $U = \mathbf{1}$.
(ii) If $\Omega_1, \Omega_2 \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$,
then \cite[Lemma~2.5.35]{BR87} implies the existence of a
unitary element $U \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$ with $U J_{\Omega_1} U^{-1} = J_{\Omega_2}.$
Exchanging the roles of $\mathcal{M}$ and $\mathcal{M}'$, it also follows that
$\mathop{\rm U{}}\nolimits(\mathcal{M})$ acts transitively on $\mathop{{\rm mc}}\nolimits(\mathcal{M})$.
For $J \in \mathop{{\rm mc}}\nolimits(\mathcal{M})$ we have $J\mathcal{M} J = \mathcal{M}'$, so that, for $U \in \mathop{\rm U{}}\nolimits(\mathcal{M})$, the relation
$UJU^{-1} = J$ implies $U = JUJ \in \mathcal{Z}$. As in (i), this leads to
$JUJ = U^* = U^{-1}$, so that $U^2 = \mathbf{1}$. Conversely, any involution in
$\mathop{\rm U{}}\nolimits(\mathcal{Z})$ stabilizes~$J$.
Clearly, $\sigma$ is a surjective equivariant map whose kernel is discrete in the
norm topology. As the stabilizer subgroup of $J$ in $\mathop{\rm U{}}\nolimits(\mathcal{M})$
is discrete and central, the quotient $\mathop{\rm U{}}\nolimits(\mathcal{M})/\ker\sigma$ carries a natural
Banach--Lie group structure for which $\sigma$
becomes a covering homomorphism.
(iii) We consider the group $G:= \mathop{\rm U{}}\nolimits(\mathcal{M}')$ and the representation
of $(G \times G) \rtimes \{\mathbf{1},\tau\}$ on $\mathcal{H}$ given by
$U(g,h,\tau^\varepsilon) = g JhJ J^\varepsilon.$
Then part (ii) shows that
\[ U(C_{(e,e,\tau)}) = \{ g Jg^{-1}J J \: g \in \mathop{\rm U{}}\nolimits(\mathcal{M}') \}
= \{ g Jg^{-1} \: g \in \mathop{\rm U{}}\nolimits(\mathcal{M}') \} = \mathop{{\rm mc}}\nolimits(\mathcal{M}).\]
Now the assertion follows from Lemma~\ref{lem:abstract-grp}.
(iv) We have already seen in (i) that $J_\xi = J$ for every
$\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})_+$.
If $\xi = U \tilde\xi$ for some $U \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$ and
$\tilde\xi \in\mathop{{\rm cs}}\nolimits(\mathcal{M})_+$ as in (i), then
$J_\xi = U J U^{-1}$ equals $J$ if and only if
$U \in \mathcal{M} \cap \mathcal{M}'$ is an involution. This proves (iv).
(v) Since $J$ commutes with each operator of the form
$J U JU = j(U)U = U j(U)$, the stabilizer contains all these elements
and also $\mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))$, as we have already seen above.
If, conversely, $U \in \mathop{\rm U{}}\nolimits(\mathcal{M})$ and $W \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$ are such that
$UW$ commutes with $J$, then
$UW = UJUJ (JUJ)^{-1} W$ with
$(JUJ)^{-1}W \in \mathop{\rm U{}}\nolimits(\mathcal{M}')_J = \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))$ by (ii).
(vi) We shall use the theory of KMS states (cf.~\cite{BR96}).
We recall that, for a continuous action $\alpha \: {\mathbb R} \to \mathop{{\rm Aut}}\nolimits(\mathcal{A})$
of ${\mathbb R}$ on a $C^*$-algebra $\mathcal{A}$, a state $\omega$ of $\mathcal{A}$
is called an {\it $\alpha$-KMS state} if, for every pair of hermitian elements
$A,B \in \mathcal{A}$, the function
\[ \psi \: {\mathbb R} \to {\mathbb C}, \quad \psi(t) :=\omega(A\alpha_t(B)) \]
extends analytically to a holomorphic function on the strip
$\mathcal{S} := \{ z \in {\mathbb C} \: 0 < \Im z < 1\}$, extends continuously
to its closure and satisfies
$\psi(i+t) = \overline{\psi(t)}$ for $t \in {\mathbb R}$.
First we observe that $\omega(A) := \langle \Omega, A \Omega\rangle$
is a KMS state with respect to the modular automorphism
group $\alpha_t(A) = \Delta^{it} A\Delta^{-it}$
(Takesaki's Theorem, \cite[Thm.~5.3.10]{BR96}).
Now let $\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$ with $V_\xi = V_\Omega$, i.e.,
$J_\xi = J$ and $\Delta_\xi = \Delta$. Then, for the same reason, the state
$\omega_\xi(A) := \langle \xi, A \xi\rangle$ is also an $\alpha$-KMS state.
By \cite[Prop.~5.3.29]{BR96}, there exists a unique
positive selfadjoint operator
$T \geq 0$ affiliated with $\mathcal{Z}$ such that
\[\langle \xi, A \xi \rangle
= \omega_\xi(A)
= \omega(\sqrt{T} A \sqrt{T})
= \langle \Omega, \sqrt{T} A \sqrt{T} \Omega \rangle
= \langle \sqrt{T} \Omega, A \sqrt{T} \Omega \rangle
\quad \mbox{ for } \quad A \in \mathcal{M}.\]
Therefore $\xi$ and $\sqrt{T}\Omega$ define the same state.
Further $\sqrt{T}\Omega$ is also contained in the natural cone $\mathcal{P}$
(\cite[Prop.~2.5.26]{BR87}).
As we have seen in (i), there exists a unique
$U \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$ with $U\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})_+$.
As $J = J_{U\xi} = U J_\xi U^{-1} = U^{-1} J U$, it
follows from (ii) that $U \in \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))$.
As $U\xi$ and $\sqrt{T}\Omega$ define the same state and
both are contained in $\mathcal{P}$,
\cite[Thm.~2.5.31]{BR87} yields $U\xi = \sqrt{T}\Omega$,
i.e., $\xi = U \sqrt{T}\Omega$. Now the assertion
follows with $Z := U \sqrt{T}$.
Suppose, conversely, that $\xi = Z \Omega$
with a selfadjoint operator affiliated to~$\mathcal{Z}$.
Decomposing $\mathcal{H}$, $\mathcal{M}$ and $\Omega$ as a direct sum
corresponding to bounded spectral projections of $Z$
(which are central in $\mathcal{M}$ as well), we may w.l.o.g.\ assume that
$Z$ is bounded. Since $\xi$ is separating, $\ker Z = \{0\}$,
so that we may further assume that $Z$ is invertible.
As $Z$ commutes with $J$ and $\Delta$,
it commutes with $S = J \Delta^{1/2}$, and thus $Z$
leaves $V = \mathop{{\rm Fix}}\nolimits(S)$ invariant. This shows that
$V_\xi = Z V = V$.
\end{proof}
\begin{remark} (a) Proposition~\ref{prop:4.7}(iv) describes
the fibers of the map
\[ \mathop{{\rm cs}}\nolimits(\mathcal{M}) \to \mathop{{\rm mc}}\nolimits(\mathcal{M}), \quad \xi \mapsto J_\xi.\]
This map is $\mathop{\rm U{}}\nolimits(\mathcal{M}')$-equivariant, so that the space
$\mathop{{\rm cs}}\nolimits(\mathcal{M})$ is a homogeneous $\mathop{\rm U{}}\nolimits(\mathcal{M}')$-bundle over the symmetric
space $\mathop{{\rm mc}}\nolimits(\mathcal{M})$.
(b) Proposition~\ref{prop:4.7}(vi) describes the fibers
of the map $\mathop{{\rm cs}}\nolimits(\mathcal{M})\to \mathop{{\rm ms}}\nolimits(\mathcal{M})$ in terms of the center~$\mathcal{Z}$.
If $\mathcal{M}$ is a factor, i.e., $\mathcal{Z} = {\mathbb C}\mathbf{1}$, then we see in particular that
$V_\xi = V$ implies $\xi = \pm \Omega$ (because $\|\xi\|=1$).
\end{remark}
\begin{lemma}[Stabilizer subgroup of $V = V_\Omega$]
The stabilizer of $V$ in the group $G := \mathop{\rm U{}}\nolimits(\mathcal{M})\mathop{\rm U{}}\nolimits(\mathcal{M}')$
consists of all elements of the form
$g = u j(u) z$ with $z \in \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))$ and
$u \in \mathop{\rm U{}}\nolimits(\mathcal{M})$ fixed by the modular automorphisms
$\alpha_t(M) = \Delta^{it} M \Delta^{-it}$.
\end{lemma}
\begin{proof} Since standard subspaces are completely determined by their
modular objects, the stabilizer of $V$ in $G$ is
\[ G_V = G_J \cap G_\Delta = \{ g \in G \: gJg^{-1} = J, g\Delta g^{-1} = \Delta \}.\]
By Proposition~\ref{prop:4.7}(v), any $g \in G_J$ is of the form
$g = uj(u)z$ with $u \in \mathop{\rm U{}}\nolimits(\mathcal{M})$ and $z \in \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z}))$.
As $z$ is central, it commutes with the modular unitaries
$\Delta^{it}$ (Theorem~\ref{thm:tom-tak}(c)), i.e., $z \in G_V$.
An element of the form $g = uj(u)$ is fixed by each $\alpha_t$
if and only if
\[ \alpha_t(u) J \alpha_t(u) J = u J u J,
\quad \mbox{ resp. } \quad
u^{-1} \alpha_t(u) = J u \alpha_t(u)^{-1} J.\]
Then $u^{-1} \alpha_t(u) \in \mathcal{M} \cap J\mathcal{M} J = \mathcal{Z}$.
To see that this implies that $\alpha_t(u) = u$, we consider the
commutative von Neumann subalgebra $\mathcal{A} \subseteq \mathcal{M}$
generated by the center $\mathcal{Z}$ and $u$. As each $\alpha_t$ fixes the
center pointwise, we have $\alpha_t(\mathcal{A}) = \mathcal{A}$ for every $t \in {\mathbb R}$.
Then the state of $\mathcal{A}$ given by
$\omega(A) := \langle\Omega, A \Omega \rangle$ is a KMS state
with respect to $\alpha_t\vert_\mathcal{A}$, so that
the restrictions $\alpha_t\vert_\mathcal{A}$ are the unique automorphisms corresponding
to this KMS state (\cite[Thm.~5.3.10]{BR96}).
Since $\mathcal{A}$ is abelian, the uniqueness of the automorphism group
implies its triviality. We conclude that each $\alpha_t$ fixes~$u$ if
$g \in G_V$.
If, conversely, $\alpha_t(u) =u$, then $\alpha_t$ fixes $g$.
This implies that $g$ commutes with $S = J \Delta^{1/2}$, hence
preserves $V = \mathop{{\rm Fix}}\nolimits(S)$.
\end{proof}
\begin{ex} \mlabel{ex:hilb-schm}
(a) If $\mathcal{M}$ is a factor, then $\mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{Z})) = \{ \pm 1\}$,
so that $\mathop{\rm U{}}\nolimits(\mathcal{M})_J = \{\pm \mathbf{1}\}$ and $\mathop{{\rm mc}}\nolimits(\mathcal{M}) \cong \mathop{\rm U{}}\nolimits(\mathcal{M})/\{\pm \mathbf{1}\}$.
(b) For $\mathcal{M} = B(\mathcal{K})$ acting on $\mathcal{H} = B_2(\mathcal{K})$
by left multiplications, we have
$JA = A^*$ (Example~\ref{ex:3.2}(b)) and by (a),
we have $\mathop{{\rm mc}}\nolimits(B(\mathcal{K})) \cong \mathop{\rm U{}}\nolimits(\mathcal{K})/\{\pm \mathbf{1}\}$.
For any $0 < \Omega = \Omega^* \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$ we have
$\mathcal{P} = \{ A \in B_2(\mathcal{K}) \: A \geq 0\}$.
If $\Omega = \sqrt{D}$ holds for the trace class operator $D > 0$,
then the centralizer of $\Delta$ in $\mathop{\rm U{}}\nolimits(\mathcal{M})$ is
\[ \{ g \in \mathop{\rm U{}}\nolimits(\mathcal{M}) \: g\Delta g^{-1} = \Delta\}
\cong \{ g \in \mathop{\rm U{}}\nolimits(\mathcal{K}) \: gDg^{-1} = D\}, \]
and since $D$ is diagonalizable, this subgroup consists
of those unitaries leaving all eigen\-spaces of $D$ invariant.
In particular
\[ \{ A \in B_2(\mathcal{K}) \: A = A^*, [A,D] = 0\} \subseteq V \cap V' \]
shows that $V \cap V'$ is much larger than ${\mathbb R} \Omega$.
For every element $A = A^* \not\in \mathcal{P} \cup -\mathcal{P}$
with $\ker A = \{0\}$, we have $JA = A$ but
$J_A \not=J$ (Proposition~\ref{prop:4.7}(iv)).
(c) For $\mathcal{M} = L^\infty(X,{\mathfrak S},\mu)$, $\mu$ finite,
acting on $\mathcal{H} = L^2(X,\mu)$ by multiplication operators,
we find for $Jf= \overline f$ that
$\mathop{\rm U{}}\nolimits(\mathcal{M})_J$ is the set of involutions in $\mathop{\rm U{}}\nolimits(\mathcal{M})$.
As the squaring map $U \mapsto U^2$ is a morphism of Banach--Lie groups,
the Banach symmetric space
$\mathop{{\rm mc}}\nolimits(\mathcal{M})$ is diffeomorphic to the unitary group and we have a
short exact sequence
\[ \mathbf{1} \to \mathop{{\rm Inv}}\nolimits(\mathop{\rm U{}}\nolimits(\mathcal{M})) \to \mathop{\rm U{}}\nolimits(\mathcal{M}) \to \mathop{{\rm mc}}\nolimits(\mathcal{M}) \to \mathbf{1}.\]
\end{ex}
\section{Nets of standard subspaces and von
Neumann algebras}
\mlabel{sec:5}
In this section we briefly discuss some elementary properties of
nets of standard subspaces
$(V_\ell)_{\ell \in L}$ and their connection with
antiunitary representations $(U,\mathcal{H})$. The connection
with nets of von Neumann algebras and QFT is made in \S \ref{subsec:5.2}.
Nets of standard subspaces are considerably simpler than
nets of von Neumann algebras and are naturally
determined by an antiunitary representation
of the group generated by all subgroups $U^{V_\ell}({\mathbb R}^\times) \subseteq \mathop{{\rm AU}}\nolimits(\mathcal{H})$
(Proposition~\ref{prop:3.2}), but this group need not be finite dimensional.
\subsection{Nets of standard subspaces}
Let $\mathcal{V} := (V_\ell)_{\ell \in L}$ be a family of standard subspaces of the Hilbert
space~$\mathcal{H}$. We assume that the map $\ell\mapsto V_\ell$ is injective, so that
the index set $L$ is only a notational convenience and we could equally
well work directly with the subset $\{ V_\ell \: \ell \in L\} \subseteq \mathop{{\rm Stand}}\nolimits(\mathcal{H})$
(cf.\ \cite{BGL02, SW03}).
\begin{definition}
A {\it net automorphism} is an $\alpha \in \mathop{{\rm AU}}\nolimits(\mathcal{H})$ permuting
the standard subspaces $V_\ell$. We write $\mathop{{\rm Aut}}\nolimits(\mathcal{V}) \subseteq \mathop{{\rm AU}}\nolimits(\mathcal{H})$
for the subgroup of net automorphisms.
A net automorphism is called an {\it internal symmetry} if it preserves
each $V_\ell$ separately. The corresponding subgroup of $\mathop{{\rm Aut}}\nolimits(\mathcal{V})$
is denoted $\mathop{{\rm Inn}}\nolimits(\mathcal{V})$.
\end{definition}
We write $(\Delta_\ell, J_\ell)$ for the modular objects corresponding
to $V_\ell$ and consider the {\it modular symmetry group}
\[ \mathcal J := \mathop{{\rm grp}}\nolimits(\{J_\ell \: \ell \in L\}) \subseteq \mathop{{\rm AU}}\nolimits(\mathcal{H}) \]
generated by the modular conjugations.
A natural assumption enriching the underlying geometry is
the condition of {\it geometric modular action}
\begin{description}
\item[\rm(CGMA)] $\mathcal J \subseteq \mathop{{\rm Aut}}\nolimits(\mathcal{V})$
\end{description}
(see \cite{BS93, BDFS00} for the von Neumann context).
The condition
\begin{description}
\item[\rm(MS)] $\Delta_{V_\ell}^{it} \in \mathcal J$ for all $t \in {\mathbb R}, \ell \in L$
\end{description}
is called the {\it modular stability condition}
(\cite{BDFS00}, \cite[\S\S IV.5.6/7]{Bo00}).
From now on we assume that (CGMA) is satisfied.
We obtain an action \break $\sigma \: \mathop{{\rm Aut}}\nolimits(\mathcal{V}) \times L \to L,
(g,\ell) \mapsto \sigma_g(\ell)$
on the index set $L$ by
\begin{equation}
\label{eq:modact1}
g V_\ell = V_{\sigma_g(\ell)}.
\end{equation}
This further implies
\begin{equation}
\label{eq:modact2}
g J_\ell g^{-1} = J_{\sigma_g(\ell)} \quad \mbox{ and }\quad
g \Delta_\ell g^{-1} = \Delta_{\sigma_g(\ell)}.
\end{equation}
In particular, (CGMA) implies that
\[ \mathcal{S} := \{ J_\ell \: \ell \in L\} \]
is a conjugation invariant set of generators of $\mathcal J$.
This fact opens the door to the construction
of geometric structures from the group $\mathcal J$ and its generating set
$\mathcal{S}$ in specific situations.
\begin{lemma} \mlabel{lem:5.2.1}
The subgroup of $\mathcal J$ of those elements acting trivially on $L$
is its center
\[ \mathcal{Z} := \{ g \in \mathcal J \: (\forall \ell \in L)\, gV_\ell = V_\ell \}
= \mathop{{\rm Inn}}\nolimits(\mathcal{V}) \cap \mathcal J = \ker \sigma.\]
\end{lemma}
\begin{proof}
For $g \in \mathcal J$, the relation $\sigma_g = \mathop{{\rm id}}\nolimits_L$ implies that $g$ commutes with
every element $J_\ell \in \mathcal{S}$ by~\eqref{eq:modact2}.
As $\mathcal J$ is generated by $\mathcal{S}$, the assertion follows.
\end{proof}
By Lemma~\ref{lem:5.2.1},
the action of $\mathcal J$ on the index set exhibits this group as a
central extension of the group $\mathcal J/\mathcal{Z}$, which acts faithfully on the set $L$
that is supposed to carry geometric information (cf.~\cite{BDFS00}
and Remark~\ref{rem:5.5} below).
An immediate consequence of (CGMA) is that the net is invariant
under the passage to the symplectic orthogonal space
$V_\ell \mapsto V_\ell' = J_\ell V_\ell$ (cf.~\S \ref{subsec:3.2}).
In particular, we have a duality map $\ell \mapsto \ell' := \sigma_{J_\ell}(\ell)$
on~$L$.
We also have a natural order structure $\leq$ on $L$ by
\[ \ell_1 \leq \ell_2 \quad \mbox{ if } \quad V_{\ell_1} \subseteq V_{\ell_2}.\]
\begin{remark} \mlabel{rem:order-axioms} The key properties of the triple
$(L, \leq, ')$, given by the partial order $\leq$ and the {\it duality map}
$\ell \mapsto \ell'$ are that
\begin{itemize}
\item[\rm(A1)] $\ell_1 \leq \ell_2$ implies $\ell_2' \leq \ell_1'$, and
\item[\rm(A2)] $\ell_1 \leq \ell_2'$ if and only if $\ell_2 \leq \ell_1'$.
\end{itemize}
From (A2), applied with $\ell_1 = \ell$ and $\ell_2 = \ell'$, we immediately derive
$\ell \leq \ell''$, and by combining this with (A1), we obtain $\ell' = \ell'''$
for every $\ell \in L$: (A1) turns $\ell \leq \ell''$ into $\ell''' \leq \ell'$,
while applying $\ell \leq \ell''$ to $\ell'$ yields $\ell' \leq \ell'''$.
Hence the duality map restricts to an involution on its range.
\end{remark}
\begin{exs} \mlabel{ex:caus-comp}
(a) For a subset $S$ of the Minkowski space ${\mathbb R}^{1,d-1}$, we
define the {\it causal complement} by
\[ S' := \{ x \in {\mathbb R}^{1,d-1} \: (\forall y \in S) [x-y,x-y] < 0\}.\]
Then $S' = \bigcap_{s \in S} \{s\}'$, which immediately leads to
(A1/2). Here $S \subseteq T'$ means that $S$ and $T$ are {\it space-like separated},
and $S''$ is the {\it causal completion of $S$}.
For $g \in P(d)$ and $S \subseteq {\mathbb R}^{1,d-1}$,
we have $(gS)' = gS'$.
For the standard right wedge $W_R \subseteq {\mathbb R}^{1,d-1}$, we have $W_R' = - \overline{W_R}$
and $W_R'' = W_R$, and for the positive light cone $V_+$ we have $V_+' = \emptyset$ and
$V_+'' = {\mathbb R}^{1,d-1}$. For $x-y \in V_+$, the causal completion
\[ \{x,y\}'' = (x - \overline{V_+}) \cap (y + \overline{V_+}) = \overline{\mathcal{O}_{x,y}} \]
is the closure of the double cone $\mathcal{O}_{x,y}$
(cf.\ Remark~\ref{rem:doubco}(b)).
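For instance, (A2) can be checked directly from the definition: for subsets
$S, T \subseteq {\mathbb R}^{1,d-1}$ we have
\[ S \subseteq T'
\iff (\forall s \in S)(\forall t \in T)\ [s-t, s-t] < 0
\iff T \subseteq S', \]
and (A1) follows because $S \subseteq T$ implies
$T' = \bigcap_{t \in T} \{t\}' \subseteq \bigcap_{s \in S} \{s\}' = S'$.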
(b) If $(X,\leq)$ is a partially ordered space, then we define
\[ \{x\}' := \{ y \in X \: x \not\leq y, y \not\leq x\}
\quad \mbox{ and } \quad
S' := \bigcap_{s \in S} \{s\}'.\]
Then the set $L$ of subsets of $X$, endowed with the inclusion order,
satisfies (A1/2).
(c) For a complex Hilbert space $\mathcal{H}$, the set of real subspaces
$V \subseteq \mathcal{H}$, endowed with the inclusion order
and the symplectic orthogonal space $V' = i V^{\bot_{\mathbb R}}$,
satisfies (A1/2).
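Here (A2) is simply the observation that $V \subseteq W'$ means
$\Im \langle v, w \rangle = 0$ for all $v \in V$ and $w \in W$, a condition
symmetric in $V$ and~$W$.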
(d) For a complex Hilbert space $\mathcal{H}$, the set of von Neumann
subalgebras $\mathcal{M} \subseteq B(\mathcal{H})$, endowed with the inclusion order
and the commutant map $\mathcal{M} \mapsto \mathcal{M}'$,
satisfies (A1/2).
\end{exs}
\begin{remark} \mlabel{rem:5.5} (a) In QFT, one expects that the structure
$(L, \leq,')$ encodes physical
information and one would like to recover information
on the geometry of spacetime from this structure.
In this context, the notion of causal complements, resp.\ of space-like
separation, appears more fundamental than the causal
order if we want to recover a spacetime $M$ from the triple
$(L, \leq, ')$, where $L$ consists of certain subsets of $M$
but does not contain one-point sets (cf.\ \cite{Ke96}).
If the modular stability condition is satisfied, i.e., if $\mathcal J$ contains also the
modular unitaries, this group is supposed to encode the dynamics of
the quantum theory, the isometry group of the corresponding
spacetime and a (projective) unitary representation
of this group (\cite[\S 6.4]{Su05}). This connects naturally
with the approach of Connes and Rovelli who ``construct'' the dynamics
of a quantum statistical system by a modular one-parameter group $\Delta^{it}$
(\cite{CR94}).
Here an interesting result concerning the detection of known groups from this
viewpoint is the characterization of the Poincar\'e group in terms of
structure preserving maps on the set $\mathcal{W}$ of wedges in ${\mathbb R}^{1,d-1}$
(\cite{BDFS00}, \cite[\S IV.5]{Bo00}, Lemma~\ref{lem:4.17}).
(b) In \cite[\S 4]{Ke98}, the causal structure
on spacetime, and how it can be determined by data encoded in
nets of $C^*$-algebras, is discussed very much in the spirit of this
section. We refer to \cite{Ra17} for an approach to quantum field theory
based on modular theory of operator algebras that does not
assume an a priori given underlying spacetime. Instead, one would like
to generate the spacetime geometry from operator theoretic data.
(c) In \cite{SW03} this program
is carried out to a large extent by specifying a set of axioms
formulated in terms of the modular conjugations
$J_\ell$, such that the index set $L$
corresponds to the set $\mathcal{W}$ of wedges in three-dimensional Minkowski space
${\mathbb R}^{1,2}$, $J_W$ corresponds to the orthogonal
reflection $r_W \in P(3)_+$ in the edge of $W$, and $\mathcal J \cong P(3)_+$
(cf.~Lemma~\ref{lem:4.17}).
In this case, the subset $\mathop{{\rm SO}}\nolimits_{1,2}({\mathbb R})^\uparrow W_R\subseteq \mathcal{W}$
identifies naturally with the anti-de Sitter space
$\mathop{{\rm AdS}}\nolimits^2 \cong \mathop{{\rm SO}}\nolimits_{1,2}({\mathbb R})^\uparrow/\mathop{{\rm SO}}\nolimits_{1,1}({\mathbb R})^\uparrow$, which can
be realized as an adjoint orbit in the Lie algebra
$\so_{1,2}({\mathbb R})\cong \fsl_2({\mathbb R})$
(cf.\ Lemma~\ref{lem:4.17}(v)).
\end{remark}
The following construction is of fundamental importance in our
approach. It is inspired by the modular localization approach to QFT developed in
\cite[Thm.~2.5]{BGL02}:
\begin{proposition} {\rm (Nets of standard subspaces from antiunitary representations;
the BGL construction)} \mlabel{prop:antiunirep-stand}
Let $(U,\mathcal{H})$ be an antiunitary representation of the Lie group pair $(G,G_1)$ and
associate to $\gamma \: ({\mathbb R}^\times, {\mathbb R}^\times_+) \to (G,G_1)$
the standard subspace $V_\gamma$ with $U^{V_\gamma} = U \circ \gamma$
{\rm(Proposition~\ref{prop:3.2})}.
Then, for every non-empty subset $\Gamma \subseteq \mathop{{\rm Hom}}\nolimits(({\mathbb R}^\times, {\mathbb R}^\times_+), (G,G_1))$
invariant under conjugation with elements of $G$ and
under inversion, we thus obtain a net
$(V_\gamma)_{\gamma \in \Gamma}$ of standard subspaces
which satisfies {\rm(CGMA)} for the group
$\mathcal J$, the image under $U$ of the subgroup
of $G$ generated by the conjugation invariant set of involutions
$\{\gamma(-1) \: \gamma \in \Gamma\}$.
\[ \mat{
\Gamma & \mapright{\mathop{{\rm ev}}\nolimits_{-1}} & \mathop{{\rm Inv}}\nolimits(G\setminus G_1) \\
\mapdown{\gamma \mapsto V_\gamma} & & \mapdown{U} \\
\mathop{{\rm Stand}}\nolimits(\mathcal{H}) &\mapright{V \mapsto J_V} & \mathop{{\rm Conj}}\nolimits(\mathcal{H}).}\]
\end{proposition}
\begin{proof} This follows from the observation that,
for $\gamma^g(t) := g\gamma(t)g^{-1}$, we have
$U_g V_\gamma = V_{\gamma^g}$ for $g \in G_1$ and
$U_g V_\gamma = V_{\gamma^g}'$ for $g \not\in G_1$,
so that $G$ acts naturally by automorphisms on the net
$(V_\gamma)_{\gamma \in \Gamma}$.
\end{proof}
\begin{remark}
(a) Evaluation in $-1$ leads to a fibration
\[ \mathop{{\rm ev}}\nolimits_{-1} \: \mathop{{\rm Hom}}\nolimits(({\mathbb R}^\times, {\mathbb R}^\times_+), (G,G_1))
\to \mathop{{\rm Inv}}\nolimits(G\setminus G_1). \]
An involution $r \in G\setminus G_1$ is contained in the image
if and only if there exists an $x \in {\mathfrak g}$ fixed by $\mathop{{\rm Ad}}\nolimits(r)$,
i.e., if ${\mathfrak g}^{\mathop{{\rm Ad}}\nolimits(r)} = \ker(\mathop{{\rm Ad}}\nolimits(r)-\mathbf{1})\not=\{0\}$. This is always the case if
${\mathfrak g}$ is non-abelian: the only involution without non-zero fixed vectors is
$-\mathop{{\rm id}}\nolimits_{\mathfrak g}$, and this is an automorphism only if ${\mathfrak g}$ is abelian. Then the fiber
over $r$ can be identified with the Lie subalgebra ${\mathfrak g}^{\mathop{{\rm Ad}}\nolimits(r)}$.
(b) In many situations one considers minimal sets
\[ \Gamma = C_{\gamma} \cup C_{\gamma^\vee}, \quad \mbox{ where } \quad
C_\gamma := \{ \gamma^g \: g \in G\}. \]
Then $\mathop{{\rm ev}}\nolimits_{-1}(C_\gamma) = C_r$ is the conjugacy class of the
involution $r := \gamma(-1) \in G \setminus G_1$,
hence in particular a symmetric space (cf.~Appendix~\ref{app:a.2}).
An important example in QFT is $\gamma = \gamma_{W_R}$ for $G = P(d)_+$ (Lemma~\ref{lem:4.17}).
(c) In the context of Proposition~\ref{prop:antiunirep-stand}, the relation
$V_\gamma' = V_{\gamma^\vee}$ shows that duality is naturally built into the construction.
However, in general it may not be so easy to determine when
$V_{\gamma_1} \subseteq V_{\gamma_2}$. In \cite[Thm.~3.4]{BGL02} it is shown that,
for $G = P(d)_+$ and homomorphisms $(\gamma_W)_{W \in \mathcal{W}}$ corresponding to wedges,
the relation $W_1 \subseteq W_2$ is equivalent to
$V_{\gamma_{W_1}} \subseteq V_{\gamma_{W_2}}$ if and only if $U$ is a positive energy representation.
\end{remark}
The preceding discussion suggests a closer
look at conjugacy classes of involutions
$\tau \in G \setminus G_1$.
We write $C_\tau \subseteq G$ for the conjugacy class of $\tau$ in $G$.
\begin{lemma} \mlabel{lem:conjug-gen}
Let $G_1$ be a connected Lie group with
Lie algebra ${\mathfrak g}$ and $\tau \in \mathop{{\rm Aut}}\nolimits(G_1)$ be an involutive automorphism.
Then the conjugacy class $C_\tau \subseteq G := G_1 \rtimes \{\mathbf{1},\tau\}$
generates the subgroup $\mathop{{\rm grp}}\nolimits(C_\tau) = B \{\mathbf{1},\tau\}$, where
$B$ is the integral subgroup whose Lie algebra is the ideal
${\mathfrak b} := {\mathfrak g}^{-\tau} + [{\mathfrak g}^{-\tau}, {\mathfrak g}^{-\tau}]$. In particular,
$C_\tau$ generates $G$ if and only if ${\mathfrak g}^\tau = [{\mathfrak g}^{-\tau}, {\mathfrak g}^{-\tau}]$.
\end{lemma}
\begin{proof} Let $H = \mathop{{\rm grp}}\nolimits(C_\tau) \subseteq G$ be the subgroup generated
by $C_\tau$. As $\tau \in H$, we have
$H = B_1 \{e,\tau\}$ for $B_1 := H \cap G_1$.
Then $B_1$ is generated by
the elements of the form $g \tau(g)^{-1}$, $g \in G_1$, hence in particular
arcwise connected.
For $x \in {\mathfrak g}^{-\tau}$, we therefore obtain
$\exp(2x) = \exp(x) \tau(\exp(-x)) \in B_1$, so that
the Lie algebra of $B_1$ contains ${\mathfrak g}^{-\tau}$, hence also the ideal
${\mathfrak b} := {\mathfrak g}^{-\tau} + [{\mathfrak g}^{-\tau},{\mathfrak g}^{-\tau}]$ of ${\mathfrak g}$; in particular,
$B_1$ contains the integral subgroup $B := \langle \exp_G {\mathfrak b} \rangle$.
Let $\tilde G_1$ denote the universal covering group of
$G_1$. Then $\tilde B := \langle \exp_{\tilde G_1} {\mathfrak b} \rangle$ is a normal integral
subgroup of $\tilde G_1$, hence closed. As $\tau$ acts trivially
on the quotient group $\tilde G_1/\tilde B$,
all elements of the form $g\tau(g)^{-1}$ are contained in $\tilde B$, hence,
after projection to $G_1$, in $B$.
Therefore $B$ contains the generators of $B_1$, so that
$H \cap G_1 = B_1 = B$. This implies the lemma.
\end{proof}
\begin{ex} \mlabel{exs:5.8}
For $G = \mathop{{\rm Aff}}\nolimits({\mathbb R})$ and the point reflection $\tau = (0,-1)$, conjugation
with $(b,a) \in G$ yields $(b,a)(0,-1)(b,a)^{-1} = (2b,-1)$, so that
$C_\tau = {\mathbb R} \times \{-1\}$ consists of all point reflections.
Here ${\mathfrak g}^{-\tau}$ is the abelian Lie algebra of the translation group, so that,
in Lemma~\ref{lem:conjug-gen}, ${\mathfrak b} = {\mathfrak g}^{-\tau}$ and
$C_\tau$ generates the subgroup ${\mathbb R} \rtimes \{\pm \mathbf{1}\}$ of~$G$.
\end{ex}
\begin{lemma}
\mlabel{lem:5.8}
Let $\tau = r_W \in P(d)_+$ be a wedge reflection for some
$W \in \mathcal{W}$. Then
\begin{itemize}
\item[\rm(i)] The conjugacy class $C_\tau$ of $\tau$ generates
$P(d)_+$ if and only if $d > 2$.
\item[\rm(ii)] The conjugacy class $C_\tau$ of $\tau$ in the conformal
group $\mathop{{\rm SO}}\nolimits_{2,d}({\mathbb R})$ generates the whole group for any $d > 0$.
\end{itemize}
\end{lemma}
\begin{proof} (i) Since all wedges
$W \in \mathcal{W}$ are conjugate to the standard right wedge $W_R$, it suffices to consider
$\tau = r_{W_R} = R_{01} = \mathop{{\rm diag}}\nolimits(-1,-1,1,\ldots,1)$.
If $d = 2$, then $R_{01} = - \mathbf{1}$, so that $\mathop{{\rm grp}}\nolimits(C_\tau)
= {\mathbb R}^2 \rtimes \{ \pm \mathbf{1}\}$ is a proper subgroup of $P(2)_+$.
For $d > 2$, the case $d = 2$ already implies
that $\mathop{{\rm grp}}\nolimits(C_\tau)$ contains all translations in the directions of all Lorentzian
$2$-planes, hence all translations. Therefore it suffices to show that the
conjugacy class of $R_{01}$ in $\mathop{{\rm SO}}\nolimits_{1,d-1}({\mathbb R})$ generates the whole group.
In view of Lemma~\ref{lem:conjug-gen}, this follows from
the simplicity of the real Lie algebra
${\mathfrak g} = \so_{1,d-1}({\mathbb R})$.
(ii) We consider $\mathop{{\rm SO}}\nolimits_{2,d}({\mathbb R})$ as a group acting on
${\mathbb R}^{1,d-1}$ by rational maps (cf.~\cite[\S17.4]{HN12}).
We have already seen above that the group $\mathop{{\rm grp}}\nolimits(C_\tau)$ generated by
the conjugacy class $C_\tau$ in $\mathop{{\rm SO}}\nolimits_{2,d}({\mathbb R})$
contains the Poincar\'e group $P(d)_+$, which is a parabolic
subgroup of $\mathop{{\rm SO}}\nolimits_{2,d}({\mathbb R})$ and intersects both connected components.
By the same argument, it contains the opposite
parabolic subgroup, and both subgroups generate $\mathop{{\rm SO}}\nolimits_{2,d}({\mathbb R})$ because
it has only two connected components (cf.~\cite{Be96}).
\end{proof}
If $d$ is odd, then $\mathop{{\rm SO}}\nolimits_{2,d}({\mathbb R}) \cong \mathop{\rm O{}}\nolimits_{2,d}({\mathbb R})/\{\pm \mathbf{1}\}$
is the full conformal
group of ${\mathbb R}^{1,d-1}$, but if $d$ is even, then
the kernel $\{\pm \mathbf{1}\}$ of the action of $\mathop{\rm O{}}\nolimits_{2,d}({\mathbb R})$
is contained in the identity component, so that
$\mathop{\rm Conf{}}\nolimits({\mathbb R}^{1,d-1}) \cong \mathop{\rm O{}}\nolimits_{2,d}({\mathbb R})/\{ \pm \mathbf{1}\}$
has four connected components (\cite[\S17.4]{HN12}).
Therefore the conjugacy class of a wedge reflection does not
generate the whole conformal group.
\begin{remark} In \cite[Thm.~4.7]{BGL02},
Brunetti, Guido and Longo describe a
one-to-one correspondence between antiunitary
positive energy representations of $P(d)_+$ and certain
nets of closed real subspaces $V_\mathcal{O}$ indexed by certain
open subsets $\mathcal{O} \subseteq {\mathbb R}^d$, for which the subspaces $(V_W)_{W \in \mathcal{W}}$
corresponding to wedges are standard and the modular covariance condition
\[ \Delta_W^{-it/2\pi} V_\mathcal{O} = V_{\gamma_W(t)\mathcal{O}} \]
holds for the homomorphisms $\gamma_W \: {\mathbb R}^\times \to P(d)_+$
and the modular unitaries of $V_W$.
The uniqueness of the local net,
once the unitary representation is given,
is discussed in \cite[Rem.~4.8]{BGL02} (see also \cite{BGL93}). For the converse, i.e.,
the uniqueness of the unitary representation, once the local net
is given, we refer to \cite{BGL93}.
In \cite{Mu01}, Mund shows that, for any representation
$(U,\mathcal{H})$ of $P(d)^\uparrow_+$ that is a finite direct sum of
irreducible representations of strictly positive mass, there
is only one covariant net of standard subspaces, which therefore
coincides with the one obtained in Proposition~\ref{prop:antiunirep-stand}
from any antiunitary extension of $U$ to $P(d)_+$.
\end{remark}
\begin{ex} (Nets arising from a single von Neumann algebra) \\
(a) Let $\mathcal{M} \subseteq B(\mathcal{H})$ be a von Neumann algebra
for which $\mathop{{\rm cs}}\nolimits(\mathcal{M}) \not=\emptyset$ and consider the
corresponding set
\[ \mathcal{V} := \mathop{{\rm ms}}\nolimits(\mathcal{M}) = \{ V_\xi \: \xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})\} \]
of standard subspaces (Definition~\ref{def:4.20}).
Fix a cyclic separating vector $\Omega$ and the corresponding
modular objects $(\Delta, J)$ and consider the group
\[ G := \mathop{\rm U{}}\nolimits(\mathcal{M})\mathop{\rm U{}}\nolimits(\mathcal{M}')\{\mathbf{1},J\} \subseteq \mathop{{\rm AU}}\nolimits(\mathcal{H}).\]
It is easy to see that this group
permutes the standard subspaces in~$\mathcal{V}$.
From Proposition~\ref{prop:4.7}(ii) we derive that
\[ \mathop{{\rm mc}}\nolimits(\mathcal{M}) = \{ gJg^{-1} \: g \in G \} \]
is the conjugacy class of $J$ in $G$.
We also note that the $G$-orbit
$\{ g \mathcal{M} g^{-1} \: g \in G \} = \{\mathcal{M},\mathcal{M}'\}$
of $\mathcal{M}$ in the set of von Neumann subalgebras of $B(\mathcal{H})$ consists only of two
elements.
(b) Consider the group
\[ G^\sharp := \mathop{\rm U{}}\nolimits(\mathcal{M})\mathop{\rm U{}}\nolimits(\mathcal{M}') \gamma({\mathbb R}^\times)
\quad \mbox{ for } \quad
\gamma(-1) := J\quad \mbox{ and } \quad \gamma(e^t) := \Delta^{-it/2\pi}.\]
That $G^\sharp$ is a group follows from the fact that $\gamma({\mathbb R}^\times_+)$ normalizes
$\mathop{\rm U{}}\nolimits(\mathcal{M})$ and $\mathop{\rm U{}}\nolimits(\mathcal{M}')$, whereas conjugation by $J = \gamma(-1)$ exchanges both.
This group is strictly larger than $\mathop{\rm U{}}\nolimits(\mathcal{M})\mathop{\rm U{}}\nolimits(\mathcal{M}')\{\mathbf{1},J\}$
if the modular automorphisms
$\alpha_t(M) := \Delta^{it} M \Delta^{-it}$ of $\mathcal{M}$ are not inner.
If $\xi \in \mathop{{\rm cs}}\nolimits(\mathcal{M})$ is different from $\Omega$,
then Connes' Radon--Nikodym Theorem\begin{footnote}{
See \cite[Thm.~III.4.7.5]{Bla06}, \cite[Thm.~5.3.34]{BR96},
and in particular \cite{Fl98} for a quite direct proof.}\end{footnote}
implies the existence of a strongly continuous path of unitaries
$(u_t)_{t\in{\mathbb R}}$ in $\mathop{\rm U{}}\nolimits(\mathcal{M})$ such that the corresponding
modular automorphism group
$\alpha_t^\xi(M) = \Delta_\xi^{it} M \Delta_\xi^{-it}$
satisfies
\[ \alpha_t^\xi(M) = u_t\alpha_t(M)u_t^*
\quad \mbox{ for } \quad M \in \mathcal{M}, t \in {\mathbb R}.\]
This implies that
$\Delta_\xi^{-it} u_t \Delta^{it} \in \mathop{\rm U{}}\nolimits(\mathcal{M}')$, so that
$G^\sharp$ also contains the operators $\Delta_\xi^{it}$.
Hence the net of standard subspaces of $\mathcal{H}$
specified by the conjugacy class of the antiunitary
representation $\gamma \in \mathop{{\rm Hom}}\nolimits({\mathbb R}^\times, G^\sharp)$ coincides with the orbit
$G^\sharp V = GV\subseteq \mathop{{\rm Stand}}\nolimits(\mathcal{H})$.
\end{ex}
\subsection{Nets of von Neumann algebras}
\mlabel{subsec:5.2}
The context that actually motivates
the consideration of families of standard subspaces
is that of families $(\mathcal{M}_\ell)_{\ell \in L}$ of von Neumann algebras on some
Hilbert space~$\mathcal{H}$.
In the theory of algebras of local observables, one considers
$\ell$ as indicating the ``laboratory'' in which observables corresponding
to $\mathcal{M}_\ell$ can be measured, and then $L$ is the set of laboratories
(cf.~\cite{Ha96, Ar99, Bo97}).
We write $\mathcal{M}\subseteq B(\mathcal{H})$
for the von Neumann algebra generated by all the algebras~$\mathcal{M}_\ell$.
We shall discuss several properties of these families and
relate them to antiunitary representations and some results in
Algebraic Quantum Field Theory (AQFT).
Our first assumption is the {\it Reeh--Schlieder property}:
\begin{itemize}
\item[\rm(RS)] There exists a unit vector $\Omega$ that is
cyclic and separating for each $\mathcal{M}_\ell$.
\end{itemize}
By the Tomita--Takesaki Theorem, (RS)
leads to a family of standard subspaces
given by
\[ V_\ell := \overline{\mathcal{M}_{\ell,h}\Omega}\]
and the map $\ell \mapsto V_\ell$ is injective if and only if the map
$\ell \mapsto \mathcal{M}_\ell$ is injective
(Lemma~\ref{lem:4.14}). This leads us to the setting of the
preceding subsection, so that everything said there applies in particular
here. As each $J_\ell$ fixes $\Omega$, the vector $\Omega$ is fixed by the whole group~$\mathcal J$.
For $g \in \mathcal J$ and
$\mathcal{M}_\ell^g := g\mathcal{M}_\ell g^{-1}$, we therefore have
$g V_\ell = \overline{\mathcal{M}^g_{\ell,h}\Omega},$ so that Lemma~\ref{lem:4.14} implies that
$gV_\ell = V_{\tilde\ell}$ for some $\tilde\ell \in L$
is equivalent to $g\mathcal{M}_\ell g^{-1} = \mathcal{M}_{\tilde\ell}$.
Hence the {\it condition of geometric modular action} (CGMA) from the
preceding section is equivalent to the following (\cite{BDFS00}):
\begin{description}
\item[\rm(CGMA)] Conjugation with elements of the group
$\mathcal J$ permutes the von Neumann algebras $(\mathcal{M}_\ell)_{\ell \in L}$.
\end{description}
The relation $J_\ell \mathcal{M}_\ell J_\ell = \mathcal{M}_\ell'$
then implies that the net $(\mathcal{M}_\ell)_{\ell \in L}$ is invariant under the
passage to the commutant.
\begin{remark} \mlabel{rem:doubco} (a) In quantum field theory, where
$L$ often is the set $\mathcal{W}$ of wedges in Minkowski space ${\mathbb R}^{1,3}$,
\cite[Thm.~5.2.6]{BDFS00} asserts that (CGMA)
is essentially equivalent to the duality condition
$\mathcal{M}(W') = \mathcal{M}(W)'$ for every $W \in \mathcal{W}$.
Then one obtains an antiunitary
representation of the Poincar\'e group $P(4)_+$ fixing
$\Omega$ and acting covariantly on the net.
Further, $U_{r_W} = J_W$ (cf.~Lemma~\ref{lem:4.17})
and the spectrum of the translation subgroup is either
contained in $\overline{V_+}$ or in $-\overline{V_+}$,
i.e., we either have positive or negative energy representations.
(b) For $x,y \in {\mathbb R}^{1,3}$ and $x-y \in V_+$, the open causal interval
\[ \mathcal{O}_{x,y} := (x - V_+) \cap (y + V_+) \]
is called a {\it double cone}.
There are various Reeh--Schlieder Theorems
that provide sufficient conditions for the vacuum vector to be
cyclic and separating for an algebra $\mathcal{M}(\mathcal{O})$ of
local observables attached to an open subset of Minkowski space
(\cite{Bo92, RS61, Bo68}). The most classical results
concern the cyclicity of the vacuum for
double cone algebras $\mathcal{M}(\mathcal{O}_{x,y})$.
Since every wedge contains double cones, the vacuum is also
cyclic and separating for wedge algebras $\mathcal{M}(W)$, $W \in \mathcal{W}$.
This leads to modular objects $(\Delta_W, J_W)$, so that
the condition (RS) in \S \ref{subsec:5.2} holds for the
index set~$L = \mathcal{W}$.
(c) For nets of von Neumann algebras $\mathcal{M}(\mathcal{O})$ of local observables associated
to regions $\mathcal{O}$ in some spacetime $M$, it is important to specify those
regions behaving well with respect to our assumptions.
In \cite{Sa97} they are called {\it test regions}. This requires in particular
that the vacuum vector $\Omega$ should be cyclic for $\mathcal{M}(\mathcal{O})$
(the Reeh--Schlieder property) and that a suitable duality holds
$\mathcal{M}(\mathcal{O})' = \mathcal{M}(\mathcal{O}')$, where $\mathcal{O}'$ is the (interior of the) causal complement of
$\mathcal{O}$. Prototypical examples of test domains are wedges $W$ in Minkowski space (or its conformal
completion) \cite[Thm.~2.5]{BGL02}, but in many situations larger classes also have these properties,
such as double cones or {\it spacelike cones}, i.e.,
translates of convex cones ${\mathbb R}_+ \mathcal{D}$, where $\mathcal{D}$ is a double cone not containing~$0$.
In this context the CGMA is a natural additional requirement for test regions
that ties the corresponding modular structure to spacetime geometry.
(d) For a Haag--Kastler net $\mathcal{A}(\mathcal{O})$ (as in the introduction),
the (CGMA) for the net $\pi_\omega(\mathcal{A}(\mathcal{O}))''$ of von Neumann algebras
specified by a state $\omega$ of $\mathcal{A}$ can be seen as a requirement
that selects states which are particularly natural
(cf.~\cite[p.~485]{BDFS00}).
\end{remark}
Under certain assumptions on the corresponding
net of local observables, the Bisog\-nano--Wichmann Theorem (\cite{BW76, So10})
asserts that the antiunitary representation
$(U,\mathcal{H})$ of $P(4)_+$ obtained from the PCT Theorem, where
$\Theta = U_{-\mathbf{1}}$ is the antiunitary PCT operator,
has the property that the boost generator $b_0$
from \eqref{eq:bostgen-d} in Example~\ref{ex:one-par}
satisfies
\[ \Delta_{W_R}^{-it/2\pi} = U(e^{tb_0}) \quad \mbox{ and } \quad
J_{W_R} = U(r_{W_R}) = \Theta U(\mathop{{\rm diag}}\nolimits(1,1,-1,-1)).\]
The first relation is called the {\it modular covariance relation}
(cf.\ \cite[p.~911]{Mu01}).
In \cite[Props.~2.8,2.9]{GL95} Guido and Longo show that
modular covariance implies covariance of the corresponding modular
conjugations, which in turn implies the PCT Theorem.
In the context of standard subspaces,
the Bisognano--Wichmann Theorem and the PCT Theorem
were derived by Mund (\cite[Thm.~5]{Mu01}).
\begin{exs} \mlabel{exs:5.16} (Conformal invariance)
Beyond the Bisognano--Wichmann Theorem,
the following geometric implementations of modular automorphism
groups are known:
(a) In \cite{Bu78}, Buchholz shows that, for a free massless scalar field on
${\mathbb R}^{1,d-1}$ with $d > 2$ (which automatically enjoys conformal symmetry),
the dilation group $\gamma_{V_+}(a)(x) = a x$, $a \in {\mathbb R}^\times$,
corresponds to the modular objects of the light cone algebra $\mathcal{M}(V_+)$.
As we shall see in (b) below, the light cone is conformally equivalent to
the right wedge $W_R$. Therefore
$\gamma_{V_+}$ is conjugate in the conformal group to the homomorphism
\break $\gamma_{W_R} \:{\mathbb R}^\times \to \mathop{\rm Conf{}}\nolimits({\mathbb R}^{1,d-1})$,
corresponding to the right wedge $W_R$, which occurs
in the Bisognano--Wichmann Theorem.
(b) In \cite{HL82}, Hislop and Longo obtain similar results for
double cones in the context of
massless scalar fields by conjugating them conformally to light cones
and then applying \cite{Bu78}. More concretely, the
{\it relativistic ray inversion}
\[ \rho \:
x = (t,\bx) \mapsto \frac{1}{[x,x]}(t,\bx), \quad [x,x] = t^2 - \bx^2 \]
(which is an involution), exchanges the translated right wedge
\[ W_R + \frac{r}{2} e_1
= \Big\{ (x_0, \bx) \: x_1 > \frac{r}{2} + |x_0| \Big\} \]
with the double cone
\[ \mathcal{O}_{\frac{e_0-e_1}{r}, \frac{-e_0-e_1}{r}} =
\Big(\frac{e_0 - e_1}{r} - V_+\Big) \cap \Big(-\frac{e_0 + e_1}{r} + V_+\Big).\]
It also exchanges the double cone
$\mathcal{O}_{r e_0, 0} = (r e_0 - V_+) \cap V_+$ and the light cone
$\frac{e_0}{r} + V_+$
(see \cite[p.~111]{Gu11}).
With these explicit transformations, one also obtains the corresponding
one-parameter groups of automorphisms and the corresponding
conformal involutions. For the light cone $V_+$, we know from (a) that
the corresponding automorphism group is given by the dilations
$\gamma_{V_+}(t)x = t x$. So it follows in particular that it is conformally
conjugate to the Lorentz boosts $\gamma_W \: {\mathbb R}^\times \to P(4)_+$
corresponding to a wedge $W$ (Lemma~\ref{lem:4.17}).
As a consequence of this discussion, the modular automorphism groups
corresponding to the local observable algebras associated to
double cones, light cones and wedges are conjugate under
the conformal group $\mathop{\rm Conf{}}\nolimits({\mathbb R}^{1,d-1}) \cong \mathop{\rm O{}}\nolimits_{2,d}({\mathbb R})/\{\pm \mathbf{1}\}$.
In particular, they
correspond to a single conjugacy class of homomorphisms
$\gamma \: {\mathbb R}^\times \to \mathop{\rm Conf{}}\nolimits({\mathbb R}^{1,d-1})$ which is most simply
represented by $\gamma_{V_+}$.
\end{exs}
\begin{ex} (cf.\ Example~\ref{ex:proj-grp})
In the one-dimensional Minkowski space
${\mathbb R}$, the order intervals are represented by the open interval
$(-1,1)$, which the Cayley map $c(x) := \frac{1 + x}{1-x}$ carries to $(0,\infty) = V_+$,
and the involution $\sigma(x) := x^{-1}$ maps $(-1,1)$ to its (conformal)
complement. These are the geometric transformations corresponding to the modular
operators on the double cone algebra $\mathcal{M}(\mathcal{O}_{1,-1})$ for $d = 1$.
\end{ex}
\begin{ex} Interesting examples of nets of von Neumann algebras
with (CGMA) arise from \cite[Thm.~4.3.9]{BDFS00},
where the index set is the set $\mathcal{W}$ of wedges in ${\mathbb R}^{1,3}$.
Under suitable continuity assumptions, one obtains a
continuous antiunitary representation of $P(4)_+$ with
\[ U_{r_W} = J_W \quad \mbox{ and } \quad \mathcal J = U_{P(4)_+}.\]
Here a key point is that $P(4)_+$
is generated by the conjugacy class of the wedge reflection $r_{W_R}$
(Lemma~\ref{lem:5.8}).
\end{ex}
\begin{remark}
A key observation in the work of Borchers and Wiesbrock
is that von Neumann algebras of local observables corresponding to two
wedges having a light ray in common define modular intersections
(\cite[Prop.~7]{Wi98}). That one can deal with them as pairs without
any direct reference to the intersection (cf.\ Theorem~\ref{thm:wies3-standard})
is crucial because the modular group of the intersection need not be implemented
geometrically \cite{Bo96}. This is of particular interest for QFT on de Sitter
space $\mathop{{\rm dS}}\nolimits^d$ whose isometry group $\mathop{\rm O{}}\nolimits_{1,d}({\mathbb R})$ has no positive energy
representations for $d > 2$.
\end{remark}
\section{Second quantization and modular localization}
\mlabel{sec:6}
In this section we explain how
Second Quantization, i.e., the passage from a (one-particle) Hilbert space
$\mathcal{H}$ to the corresponding Fock spaces $\mathcal{F}_\pm(\mathcal{H})$
(bosonic and fermionic) provides for
each standard subspace $V \subseteq \mathcal{H}$
pairs $(\mathcal{R}^\pm(V),\Omega)$, where $\mathcal{R}^\pm(V)$ is a von Neumann algebra
on $\mathcal{F}_\pm(\mathcal{H})$ and the vacuum vector
$\Omega \in \mathcal{F}_\pm(\mathcal{H})$ is cyclic.
Let $\mathcal{H}$ be a complex Hilbert space and let
\[ \mathcal{F}(\mathcal{H}) := \hat\bigoplus_{n = 0}^\infty \mathcal{H}^{\hat\otimes n} \]
be the full {\it Fock space over $\mathcal{H}$}.
We write $\mathcal{F}_+(\mathcal{H})$ for the subspace of symmetric tensors,
the {\it bosonic Fock space}, and
$\mathcal{F}_-(\mathcal{H})$ for the subspace of skew-symmetric tensors,
the {\it fermionic Fock space}.
Both spaces carry a natural representation
\[ \Gamma_\pm \: \mathop{{\rm AU}}\nolimits(\mathcal{H}) \to \mathop{{\rm AU}}\nolimits(\mathcal{F}_\pm(\mathcal{H})) \]
of the antiunitary group $\mathop{{\rm AU}}\nolimits(\mathcal{H})$ given by
\[ \Gamma_+(U)(v_1 \vee \cdots \vee v_n)
:= Uv_1 \vee \cdots \vee Uv_n, \qquad
\Gamma_-(U)(v_1 \wedge \cdots \wedge v_n)
:= Uv_1 \wedge \cdots \wedge Uv_n.\]
Moreover, the bosonic Fock space
carries a unitary representation of the Heisenberg
group $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ (\S \ref{subsec:7.1}) and its subgroups
can be used to derive a net of von Neumann algebras
on $\mathcal{F}_+(\mathcal{H})$. A similar construction can be carried out
for the fermionic Fock space in terms of the
natural representation of the $C^*$-algebra $\mathop{{\rm CAR}}\nolimits(\mathcal{H})$, a
$C^*$-algebra defined by the {\it canonical anticommutation relations}.
Both constructions are functorial and associate to every
antiunitary representation $(U,\mathcal{H})$ of $(G,G_1)$ on $\mathcal{H}$ a
covariant family $(\mathcal{M}_\gamma)_{\gamma \in \Gamma}$
of von Neumann algebras on $\mathcal{F}_\pm(\mathcal{H})$,
where $\Gamma$ is as in Proposition~\ref{prop:antiunirep-stand}.
\subsection{Bosonic Fock space}
\mlabel{subsec:7.1}
We start with the construction of the von Neumann algebras on the bosonic Fock space.
For $v_1, \ldots, v_n \in \mathcal{H}$, we define
\[ v_1 \cdots v_n := v_1 \vee \cdots \vee v_n :=
\frac{1}{\sqrt{n!}} \sum_{\sigma \in S_n} v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(n)} \]
and $v^n := v^{\vee n}$, so that
\begin{eqnarray}
\label{eq:symprod}
\langle v_1 \vee \cdots \vee v_n, w_1 \vee \cdots \vee w_n \rangle
&=& \sum_{\sigma \in S_n} \langle v_{\sigma(1)}, w_1 \rangle
\cdots \langle v_{\sigma(n)}, w_n \rangle.
\end{eqnarray}
For every $v \in \mathcal{H}$, the series
$\mathop{{\rm Exp}}\nolimits(v) := \sum_{n = 0}^\infty \frac{1}{n!} v^n$
defines an element in $\mathcal{F}_+(\mathcal{H})$ and the scalar product of two
such elements is given by
\[ \langle \mathop{{\rm Exp}}\nolimits(v), \mathop{{\rm Exp}}\nolimits(w) \rangle
= \sum_{n = 0}^\infty \frac{n!}{(n!)^2} \langle v, w\rangle^n = e^{\langle v, w \rangle}.\]
These elements span a dense subspace of $\mathcal{F}_+(\mathcal{H})$, and therefore
we have for each $x \in \mathcal{H}$ a unitary operator on $\mathcal{F}_+(\mathcal{H})$ determined by the
relation
\begin{equation}
\label{eq:Ux-ops}
U_x \mathop{{\rm Exp}}\nolimits(v) = e^{ -\langle x, v\rangle - \frac{\|x\|^2}{2}} \mathop{{\rm Exp}}\nolimits(v+x) \quad
\mbox{ for } \quad x,v \in \mathcal{H}.
\end{equation}
A direct calculation then shows that
\begin{equation}
\label{eq:comm-rel-U}
U_x U_y = e^{-i\Im \langle x, y \rangle} U_{x+y} \quad \mbox{ for } \quad
x, y \in \mathcal{H}.
\end{equation}
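Indeed, applying \eqref{eq:Ux-ops} twice yields
\[ U_x U_y \mathop{{\rm Exp}}\nolimits(v)
= e^{-\langle y, v\rangle - \frac{\|y\|^2}{2}}
e^{-\langle x, v+y\rangle - \frac{\|x\|^2}{2}} \mathop{{\rm Exp}}\nolimits(v+x+y), \]
and comparing the exponent with that of $U_{x+y} \mathop{{\rm Exp}}\nolimits(v)$, using
$\|x+y\|^2 = \|x\|^2 + \|y\|^2 + 2 \Re \langle x, y \rangle$, leaves precisely
the factor $e^{-\langle x, y\rangle + \Re \langle x, y \rangle} = e^{-i \Im \langle x, y\rangle}$.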
To obtain a unitary representation, we have to replace the
additive group of $\mathcal{H}$ by the {\it Heisenberg group}
\[ \mathop{{\rm Heis}}\nolimits(\mathcal{H}) := {\mathbb T}\times \mathcal{H} \quad \mbox{ with } \quad
(z,v)(z',v') := (zz' e^{-i\Im \langle v,v' \rangle}, v + v'). \]
For this group, we obtain with \eqref{eq:comm-rel-U} a unitary representation
\[ U \: \mathop{{\rm Heis}}\nolimits(\mathcal{H}) \to \mathop{\rm U{}}\nolimits(\mathcal{F}_+(\mathcal{H})) \quad \mbox{ by } \quad U_{(z,v)} := z U_v.\]
In the physics literature, all of this is expressed in terms of the
so-called {\it Weyl operators}
\[ W(v) := U_{iv/\sqrt{2}}, \qquad v \in \mathcal{H} \]
satisfying the {\it Weyl relations}
\begin{equation}
\label{eq:weyl}
W(v) W(w) = e^{-i \Im \langle v,w \rangle/2} W(v+w), \qquad v,w \in \mathcal{H}.
\end{equation}
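Indeed, \eqref{eq:weyl} follows from \eqref{eq:comm-rel-U} because
$\Im \big\langle \tfrac{iv}{\sqrt 2}, \tfrac{iw}{\sqrt 2}\big\rangle
= \tfrac{1}{2} \Im \langle v, w \rangle$.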
\begin{definition} To each real subspace $V \subseteq \mathcal{H}$, we assign the
von Neumann algebra $\mathcal{R}(V) := \mathcal{R}^+(V) := W(V)'' \subseteq B(\mathcal{F}_+(\mathcal{H}))$ on the bosonic Fock
space of $\mathcal{H}$.
\end{definition}
\begin{lemma} \mlabel{lem:6.3} We have
\begin{itemize}
\item[\rm(i)] $\mathcal{R}(\mathcal{H}) = B(\mathcal{F}_+(\mathcal{H}))$, resp., the representation
of $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ on $\mathcal{F}_+(\mathcal{H})$ is irreducible.
\item[\rm(ii)] $\mathcal{R}(V) \subseteq \mathcal{R}(W)'$ if and only if
$V \subseteq W'$ (locality).
\item[\rm(iii)] $\mathcal{R}(V) = \mathcal{R}(\overline V)$.
\item[\rm(iv)] $\Omega= \mathop{{\rm Exp}}\nolimits(0) \in \mathcal{F}_+(\mathcal{H})$ is cyclic for $\mathcal{R}(V)$ if and only
if $V + i V$ is dense in $\mathcal{H}$.
\item[\rm(v)] $\Omega \in \mathcal{F}_+(\mathcal{H})$ is separating
for $\mathcal{R}(V)$ if and only if $\overline V \cap i \overline V = \{0\}$.
\item[\rm(vi)] $\Omega \in \mathop{{\rm cs}}\nolimits(\mathcal{R}(V))$ if and only if $\overline V$ is standard.
\end{itemize}
\end{lemma}
\begin{proof} (i) is well-known (\cite[Prop.~5.2.4(3)]{BR96}).
(ii) follows directly from the Weyl relations \eqref{eq:weyl}.
(iii) follows from the fact that $\mathcal{H} \to B(\mathcal{F}_+(\mathcal{H})), v \mapsto W(v)$
is strongly continuous and $\mathcal{R}(V)$ is closed in the weak operator topology.
(iv) Assume that $\mathcal{K}:=\overline{V+iV}\not= \mathcal{H}$.
Then $\mathcal{R}(V)\Omega \subseteq \mathcal{F}_+(\mathcal{K})$, so that $\Omega$ cannot be cyclic.
Suppose, conversely, that $\mathcal{K} = \mathcal{H}$ and that $f \in (\mathcal{R}(V)\Omega)^\bot$.
Then the holomorphic function $\hat f(v) := \langle f, \mathop{{\rm Exp}}\nolimits(v) \rangle$ on
$\mathcal{H}$ vanishes on $V$, hence also on $V + i V$, and since this subspace is
dense in $\mathcal{H}$, we obtain $f = 0$ because $\mathop{{\rm Exp}}\nolimits(\mathcal{H})$ is total in $\mathcal{F}_+(\mathcal{H})$.
We conclude that $\Omega$ is cyclic.
(v) In view of (iii), we may assume that $V$ is closed.
Let $0 \not= w \in V \cap i V$. To see that $\Omega$ is not separating
for $\mathcal{R}(V)$, it suffices to show that, for the one-dimensional Hilbert space
$\mathcal{H}_0 := {\mathbb C} w$, the vector $\Omega$ is not separating for
$\mathcal{R}({\mathbb C} w) = B(\mathcal{F}_+({\mathbb C} w))$ (which follows from the irreducibility
of the representation of $\mathop{{\rm Heis}}\nolimits({\mathbb C} w)$ on $\mathcal{F}_+({\mathbb C} w)$).
This is obviously the case because $\dim \mathcal{F}_+({\mathbb C} w) > 1$.
Suppose, conversely, that $V \cap i V = \{0\}$. As $V \cap (iV) = V'' \cap (iV'') = (V' + i V')'$,
it follows that $V' + i V'$ is dense in $\mathcal{H}$. By (iv),
$\Omega$ is cyclic for $\mathcal{R}(V')$, which commutes with $\mathcal{R}(V)$ by (ii). Therefore
$\Omega$ is separating for $\mathcal{R}(V)$.
(vi) follows from (iv) and (v).
\end{proof}
\begin{remark} (a) $\mathcal{R}(V)$ is commutative if and only if
$V \subseteq V'$. For a standard subspace $V$ the relation
$V' = JV$ shows that this is equivalent to $V = V'$, respectively
to $\Delta = \mathbf{1}$ (Lemma~\ref{lem:stand-factorial}).
(b) The imaginary
part $\omega(\xi,\eta) := \Im \langle \xi,\eta\rangle$ turns $\mathcal{H}$ into a symplectic manifold
$(\mathcal{H},\omega)$. From this perspective, we may consider the algebras
$\mathcal{R}(V)$ as ``quantizations'' of the algebra of measurable functions on the
Lagrangian subspace $E := \mathcal{H}^J$. If $V = V'$, then
$\mathcal{F}_+(\mathcal{H}) \cong L^2(E^*, \gamma)$, where
$\gamma$ is a Gaussian probability measure on the algebraic dual space $E^*$ of $E$,
endowed with the smallest $\sigma$-algebra for which all evaluation maps are
measurable. Then the commutative von Neumann algebra $\mathcal{R}(V)$ is isomorphic to
$L^\infty(E^*,\gamma)$. In general, if $V \not\subseteq V'$, then $\mathcal{R}(V)$ is non-commutative
and the degree of non-commutativity depends on the non-degeneracy of $\omega$ on $V$.
It is ``maximal'' if $V \cap V' = \{0\}$, which implies that $\mathcal{R}(V)$ is a factor
by the following theorem.
\end{remark}
\begin{theorem} \mlabel{thm:araki-1} {\rm(\cite[Thm.~1]{Ar63})}
For closed real subspaces $V,W, V_j$ of $\mathcal{H}$, the following assertions hold:
\begin{itemize}
\item[\rm(i)] $\mathcal{R}(V) \subseteq \mathcal{R}(W)$ if and only if $V \subseteq W$ (isotony).
\item[\rm(ii)] $\mathcal{R}\big(\bigvee_{j \in J} V_j\big) = \bigvee_{j \in J} \mathcal{R}(V_j)$,
where $\bigvee_{j \in J} V_j$ denotes the closed subspace generated by the
$V_j$ and $\bigvee_{j \in J} \mathcal{R}(V_j)$ denotes the von Neumann algebra generated by
the $\mathcal{R}(V_j)$.
\item[\rm(iii)] $\mathcal{R}\big(\bigcap_{j \in J} V_j\big) = \bigcap_{j \in J} \mathcal{R}(V_j)$.
\item[\rm(iv)] $\mathcal{R}(V)' = \mathcal{R}(V')$ (duality).
\item[\rm(v)] $\mathcal{R}(V) \cap \mathcal{R}(V') = \mathcal{R}(V \cap V')$. In particular,
the algebra $\mathcal{R}(V)$ is a factor if and only if $V \cap V' =\{0\}$.
\end{itemize}
\end{theorem}
\subsection{Fermionic Fock space}
\mlabel{subsec:7.2}
On the fermionic Fock space, the construction of the von Neumann algebras is slightly
different but similar in spirit.
For $v_1, \ldots, v_n \in \mathcal{H}$, we define
\[ v_1 \wedge \cdots \wedge v_n :=
\frac{1}{\sqrt{n!}} \sum_{\sigma \in S_n} \mathop{{\rm sgn}}\nolimits(\sigma)
v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(n)}, \]
so that
\begin{eqnarray}
\label{eq:altprod}
\langle v_1 \wedge \cdots \wedge v_n, w_1 \wedge \cdots \wedge w_n \rangle
&=& \sum_{\sigma \in S_n} \mathop{{\rm sgn}}\nolimits(\sigma) \langle v_{\sigma(1)}, w_1 \rangle
\cdots \langle v_{\sigma(n)}, w_n \rangle.
\end{eqnarray}
In $\mathcal{F}_-^0(\mathcal{H}) \cong {\mathbb C}$ we pick a unit vector $\Omega$, called the {\it vacuum}.
\begin{definition}
The {\it CAR-algebra} $\mathop{{\rm CAR}}\nolimits(\mathcal{H})$ of $\mathcal{H}$ is a $C^*$-algebra,
together with a continuous antilinear map
$a \: \mathcal{H} \to \Car(\mathcal{H})$ satisfying the
{\it canonical anticommutation relations}
\begin{equation}
\label{eq:car}
\{a(f), a(g)^*\} = \langle f,g \rangle \mathbf{1}
\quad \hbox{ and } \quad \{a(f),a(g)\} = 0
\quad \mbox{ for } \quad f, g \in \mathcal{H}
\end{equation}
and which has the universal property that,
for any $C^*$-algebra $\mathcal{A}$ and any antilinear map
$a' \: \mathcal{H} \to \mathcal{A}$ satisfying the above anticommutation relations,
there exists a unique homomorphism
$\phi \: \mathop{{\rm CAR}}\nolimits(\mathcal{H}) \to \mathcal{A}$ with $\phi \circ a =a'$.
This determines the pair $(\Car(\mathcal{H}),a)$
up to isomorphism (\cite[Thm.~5.2.8]{BR96}).
We write $a^*(f) := a(f)^*$ and observe that this defines a complex
linear map $a^* \: \mathcal{H} \to \Car(\mathcal{H})$.
\end{definition}
\begin{remark} \mlabel{rem:10.1}
The $C^*$-algebra $\Car(\mathcal{H})$ has an irreducible
representation
$(\pi_0, \mathcal{F}_-(\mathcal{H}))$
on the fermionic Fock space $\mathcal{F}_-(\mathcal{H})$ (\cite[Prop.~5.2.2(3)]{BR96}).
The image $c(f) := \pi_0(a(f))$
acts by $c(f)\Omega=0$ and
\[ c(f)(f_1 \wedge \cdots\wedge f_n)
= \sum_{j = 1}^n (-1)^{j-1} \langle f, f_j \rangle
f_1 \wedge \cdots \wedge f_{j-1} \wedge f_{j+1} \wedge \cdots \wedge f_n. \]
Accordingly, we have
$$ c^*(f)\Omega = f \quad \mbox{ and } \quad
c^*(f)(f_1 \wedge \cdots \wedge f_n)
= f \wedge f_1 \wedge \cdots \wedge f_n. $$
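As a consistency check with \eqref{eq:car}, we obtain on the vacuum, using $c(f)\Omega = 0$,
$$ \{ c(f), c^*(g)\} \Omega = c(f) c^*(g) \Omega = c(f) g = \langle f, g \rangle\, \Omega. $$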
\end{remark}
Consider the hermitian operators
\begin{equation}
\label{eq:6.1}
b(f) := c(f) + c^*(f) \in \mathop{{\rm CAR}}\nolimits(\mathcal{H})
\end{equation}
and note that
\begin{equation}
\label{eq:6.2}
\{b(f), b(g)\} = \{c(f), c^*(g)\} + \{c^*(f), c(g)\}
= \langle f,g \rangle \mathbf{1} + \langle g, f \rangle \mathbf{1} = 2 \beta(f,g) \mathbf{1},
\end{equation}
where
\[ \beta(f,g) = \Re \langle f, g \rangle \quad \mbox{ for } \quad f,g \in \mathcal{H}\]
is the real scalar product on $\mathcal{H}$.
\begin{definition}
Let $\mathcal{H}= \mathcal{H}_{\overline 0} \oplus \mathcal{H}_{\overline 1}$ be a $2$-graded Hilbert space.
Accordingly, $B(\mathcal{H})$ inherits a grading and therefore a
{\it Lie superbracket} which on homogeneous elements is given by
\[ [A,B]_\tau := AB - (-1)^{|A| |B|} BA, \]
where $|A|$ denotes the degree of a homogeneous element~$A$.
For a subset $E \subseteq B(\mathcal{H})$, we accordingly define the {\it super-commutant} by
\[ E^\sharp := \{ A \in B(\mathcal{H}) \: (\forall M \in E)\, [A,M]_\tau = 0\}.\]
\end{definition}
For each homogeneous $M \in B(\mathcal{H})$, the operator $D_M(A) := [M,A]_\tau$ is a
superderivation of the ${\mathbb Z}_2$-graded associative algebra $B(\mathcal{H})$ in the sense that
\begin{equation}
\label{eq:6.3}
D_M(AB) = D_M(A)B + (-1)^{|M| |A|} A D_M(B).
\end{equation}
It follows in particular that, if $E$ is spanned by homogeneous elements,
then $E^\sharp$ is a von Neumann algebra adapted to the
$2$-grading of $B(\mathcal{H})$.
Let $Zv = (-1)^{|v|} v$ ($|v| \in \{0,1\}$)
denote the parity operator on $\mathcal{H}$ and
$\tilde Z v = (-i)^{|v|} v$ (also known as
the {\it Klein twist} $\tilde Z = \frac{1 + iZ}{1+i\mathbf{1}}$)
which satisfies $\tilde Z^2 =Z$. For $A$ and $M$ odd, one checks on homogeneous
vectors that $\tilde Z A \tilde Z^{-1} = iZA$ and $\tilde Z^{-1} A \tilde Z = -iZA$, so that
\[ [\tilde Z^{\pm 1} A \tilde Z^{\mp 1}, M] = \pm i Z\{A,M\} = \pm i Z[A,M]_\tau.\]
This leads to
\begin{equation}
\label{eq:6.4}
E^\sharp = \tilde Z E' \tilde Z^{-1} = \tilde Z^{-1} E' \tilde Z\end{equation}
for any graded subspace $E \subseteq B(\mathcal{H})$.
As in \cite{Fo83}, we associate to every real
linear subspace $V \subseteq \mathcal{H}$ a von Neumann subalgebra
\[ \mathcal{R}(V) := \mathcal{R}^-(V) := b(V)'' \subseteq B(\mathcal{F}_-(\mathcal{H})).\]
We list some properties of this assignment (cf.\ \cite[Prop.~2.5]{Fo83} for (iv) and (v)):
\begin{lemma} \mlabel{lem:ferm-dual} We have
\begin{itemize}
\item[\rm(i)] $\mathcal{R}(\mathcal{H}) = B(\mathcal{F}_-(\mathcal{H}))$, resp., the representation
of $\Car(\mathcal{H})$ on $\mathcal{F}_-(\mathcal{H})$ is irreducible.
\item[\rm(ii)] $\mathcal{R}(V) = \mathcal{R}(\overline V)$.
\item[\rm(iii)] $\mathcal{R}(V)$ and $\mathcal{R}(W)$ super-commute if and only if $V \bot_\beta W$
(twisted duality).
\item[\rm(iv)] The vacuum $\Omega$ is cyclic for $\mathcal{R}(V)$ if and only if
$V + i V$ is dense in $\mathcal{H}$.
\item[\rm(v)] The vacuum $\Omega$ is separating for $\mathcal{R}(V)$ if and only if
$\overline V \cap i \overline V = \{0\}$.
\item[\rm(vi)] $\Omega \in \mathop{{\rm cs}}\nolimits(\mathcal{R}(V))$ if and only if $\overline V$ is standard.
\end{itemize}
\end{lemma}
\begin{proof} (i) is well-known (\cite[Prop.~5.2.2(3)]{BR96}).
(ii) follows from the fact that
$b \: \mathcal{H} \to B(\mathcal{F}_-(\mathcal{H}))$ is continuous.
(iii) follows immediately from \eqref{eq:6.2}.
(iv) We explain how this can be derived from
\cite[Prop.~3.4]{BJL02}, where a different setting is used:
Consider a conjugation $\Gamma$ on a complex
Hilbert space $\mathcal{K}$ and a corresponding basis projection $P$, i.e.,
$\Gamma P \Gamma = \mathbf{1} - P$. For $v \in \mathcal{K}^\Gamma$ we then have
the orthogonal decomposition $v = Pv + (\mathbf{1} - P)v$, where both summands
are exchanged by $\Gamma$, hence have the same length.
Therefore the map
\[ \Phi \: \mathcal{K}^\Gamma \to P\mathcal{K}, \quad
\Phi(v) = \sqrt 2 Pv \]
is an isometry between the real Hilbert space $\mathcal{K}^\Gamma$ and the
complex Hilbert space $\mathcal{H} := P\mathcal{K}$.
The antilinear map
\[ a \: \mathcal{K} \to \mathop{{\rm CAR}}\nolimits(\mathcal{H}), \qquad
a(f) := c^*(P\Gamma f) + c(Pf) \]
then satisfies
\[ a(\Gamma f) = a(f)^*\quad \mbox{ for }\quad f \in \mathcal{K}\]
and $a$ is the unique antilinear extension of the map
$a\vert_{\mathcal{K}^\Gamma} = b \circ P \: \mathcal{K}^\Gamma \to \mathop{{\rm CAR}}\nolimits(\mathcal{H}).$
For any $\Gamma$-invariant subspace $\mathcal{V} \subseteq \mathcal{K}$, we therefore
have
\begin{equation}
\label{eq:ac-rel}
a(\mathcal{V}) = b(P\mathcal{V}^\Gamma)_{\mathbb C} = b(\Phi(\mathcal{V}^\Gamma))_{\mathbb C}
\end{equation}
and thus, for the real subspace $V := \Phi(\mathcal{V}^\Gamma) = P(\mathcal{V}^\Gamma) \subseteq \mathcal{H}$,
\begin{equation}
\label{eq:ac-rel2}
a(\mathcal{V}^\Gamma)'' = a(\mathcal{V})'' = b(\Phi(\mathcal{V}^\Gamma))'' = \mathcal{R}(V).
\end{equation}
As $V + i V = P(\mathcal{V}^\Gamma_{\mathbb C}) = P(\mathcal{V}),$
\cite[Prop.~3.4]{BJL02} implies that
$P(\mathcal{V})$ is dense in $\mathcal{H} = P\mathcal{K}$ if and only if $\Omega$ is $\mathcal{R}(V)$-cyclic, and
(iv) follows.
(v) In view of (ii), we may assume that $V$ is closed.
Let $0 \not= w \in W := V \cap i V$. To see that $\Omega$ is not separating
for $\mathcal{R}(V)$, it suffices to show that, for the one-dimensional Hilbert space
$\mathcal{H}_0 := {\mathbb C} w$, the vector $\Omega$ is not separating for
$\mathcal{R}({\mathbb C} w) = B(\mathcal{F}_-({\mathbb C} w))$. This follows from the irreducibility
of the representation of $\Car({\mathbb C} w) \cong M_2({\mathbb C})$ on $\mathcal{F}_-({\mathbb C} w)\cong {\mathbb C}^2$
which has no separating vector (see (i)).
Suppose, conversely, that $W = \{0\}$. As $W = (V^\bot+ i V^\bot)^\bot$,
the subspace $V^\bot + i V^\bot$ is dense in $\mathcal{H}$. By (iv),
$\Omega$ is cyclic for $\mathcal{R}(V^\bot)$, which anticommutes with $\mathcal{R}(V)$ by (iii). Therefore
$\Omega$ is separating for $\mathcal{R}(V)$.
\end{proof}
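For illustration, consider the one-dimensional case $\mathcal{H} = {\mathbb C} w$ with $\|w\| = 1$,
which appeared in the proof of (v). In the basis $(\Omega, w)$ of
$\mathcal{F}_-({\mathbb C} w) \cong {\mathbb C}^2$, Remark~\ref{rem:10.1} yields
\[ c(w) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
c^*(w) = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad
b(w) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \]
so that $\{c(w), c^*(w)\} = \mathbf{1}$, in accordance with \eqref{eq:car}, and these
operators generate $M_2({\mathbb C})$, which has no separating vector in ${\mathbb C}^2$.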
The following theorem is the fermionic version of
the duality result in Theorem~\ref{thm:araki-1}(iv)
(\cite[Thm.~7.1]{BJL02}, \cite[Thm.~2.4(v)]{Fo83}).
\begin{theorem}[Fermionic Duality Theorem] \mlabel{thm:6.9}
\[ \mathcal{R}(V^{\bot_\beta}) = \mathcal{R}(V)^\sharp = \{ A \in B(\mathcal{F}_-(\mathcal{H})) \: (\forall v \in V) [A, b(v)]_\tau = 0\}
= \tilde Z^{-1} \mathcal{R}(V)' \tilde Z \]
for every real linear subspace $V \subseteq \mathcal{H}$.
\end{theorem}
To match our notation with Foit's in \cite{Fo83}, we note that Foit's operator \break
${V := \frac{1}{\sqrt 2}(\mathbf{1} - i Z)}$ satisfies $V = e^{-\pi i/4}\tilde Z^{-1}$,
so that $\tilde Z^{-1} A \tilde Z = V A V^*$ for every operator~$A$ on $\mathcal{F}_-(\mathcal{H})$.
\subsection{From antiunitary representations to local nets}
For a closed real subspace $V$ of the Hilbert space $\mathcal{H}$,
we write $\mathcal{R}^\pm(V) \subseteq B(\mathcal{F}_\pm(\mathcal{H}))$ for the associated
von Neumann algebras on the bosonic and fermionic Fock space.
\begin{proposition} \mlabel{prop:6.9}
For a closed real subspace $V \subseteq \mathcal{H}$,
the vacuum $\Omega$ is cyclic and separating for the
von Neumann algebras $\mathcal{R}^\pm(V)\subseteq B(\mathcal{F}_\pm(\mathcal{H}))$ if and only if
$V$ is a standard subspace of $\mathcal{H}$.
The corresponding modular objects $(\Delta_V^\pm, J_V^\pm)$ on $\mathcal{F}_\pm(\mathcal{H})$
are obtained by second quantization from the modular objects
$(\Delta_V, J_V)$ associated to~$V$, in the sense that
\begin{equation}
\label{eq:6.1b}
\Delta_V^\pm= \Gamma_\pm(\Delta_V), \quad
J_V^+ = \Gamma_+(J_V) \quad \mbox{ and }\quad
J_V^- = \tilde Z \Gamma_-(i J_V).
\end{equation}
\end{proposition}
\begin{proof} The first assertion follows from
Lemmas~\ref{lem:6.3} and \ref{lem:ferm-dual}.
For the identification of the modular objects,
we refer to \cite[Thm.~1.4]{FG89} (see also \cite{EO73})
in the bosonic case and to
\cite[Prop.~2.8]{Fo83} for the fermionic case
(see also \cite[Cor.~5.4]{BJL02}, \cite[Thm.~4.13]{Lle09}).
\end{proof}
\begin{remark} (a) The twists in Theorem~\ref{thm:6.9} and Proposition~\ref{prop:6.9}
arise from the fact that the fermionic situation has to take the
$2$-grading on $\mathcal{F}_-(\mathcal{H})$ into account. In particular Theorem~\ref{thm:6.9} takes its most natural
form $\mathcal{R}(V^{\bot_\beta}) = \mathcal{R}(V)^\sharp$ if the commutant is defined in terms of the
super bracket.
(b) If $\mathcal{M}$ is a ${\mathbb Z}_2$-graded von Neumann algebra on the ${\mathbb Z}_2$-graded Hilbert
space $\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1$ and $\Omega \in \mathcal{H}_0$ is a cyclic separating vector, then
the theory of Lie superalgebras suggests considering the antilinear involution
$(x_0 + x_1)^\sharp := x_0^* - i x_1^*$ instead of the operator adjoint. Then the
corresponding {\it unitary Lie superalgebra} is
\[ {\mathfrak u}(\mathcal{M}) = \{ x \in \mathcal{M} \: x^\sharp = - x\}
= \{ x = x_0 + x_1 \in \mathcal{M} \: x_0^* = -x_0, x_1^* = -ix_1\}.\]
Accordingly, modular theory can be based on the unbounded antilinear operator defined by
$\tilde S(M\Omega) := M^\sharp \Omega = \tilde Z S(M\Omega)$ for $M \in \mathcal{M}$.
The polar decomposition $\overline{\tilde S} = \tilde J \Delta^{1/2}$
results in the pair $(\tilde J, \Delta)$ of modular objects, where
$\Delta$ is unchanged, but $\tilde J = \tilde Z J$. This leads to the relation
\[ \tilde J \mathcal{M} \tilde J = \tilde Z \mathcal{M}' \tilde Z^{-1} = \mathcal{M}^\sharp,\]
which is a super version of $J\mathcal{M} J = \mathcal{M}'$.
We also obtain with \eqref{eq:6.1b}
\[ \tilde {J_V^-} = \tilde Z^2 \Gamma_-(iJ_V)
= Z \Gamma_-(i J_V)= \Gamma_-(-i J_V).\]
To obtain a situation where the modular objects on $\mathcal{F}^-(\mathcal{H})$ are simply given
by second quantization, one may consider the von Neumann algebras
$\tilde \mathcal{R}^-(V) := \mathcal{R}^-(\zeta V)$ for $\zeta := e^{\pi i/4}$ instead. The standard subspace
$\tilde V := \zeta V$ satisfies $\Delta_{\tilde V} =\Delta_V$ and
$J_{\tilde V} = i J_V$, so that the modular conjugation corresponding to
$\tilde \mathcal{R}^-(V)$ is
\[ \tilde {J_{\zeta V}^-} = \Gamma_-(-i J_{\zeta V}) = \Gamma_-(J_V).\]
\end{remark}
\begin{remark} Let $(U,\mathcal{H})$ be an antiunitary representation of
$(G,G_1)$ on $\mathcal{H}$ and \break
$\gamma \: {\mathbb R}^\times \to G$ be a homomorphism with
$\gamma(-1) \not\in G_1$, so that it specifies
a standard subspace $V_\gamma \subseteq \mathcal{H}$
(Proposition~\ref{prop:antiunirep-stand}).
Consider the antiunitary representation
\[ \Gamma_\pm \: \mathop{{\rm AU}}\nolimits(\mathcal{H}) \to \mathop{{\rm AU}}\nolimits(\mathcal{F}_\pm(\mathcal{H})) \]
of the antiunitary group of $\mathcal{H}$ on the corresponding Fock spaces.
Then $\Gamma_\pm \circ U$ is an antiunitary representation
of $(G, G_1)$ on $\mathcal{F}_\pm(\mathcal{H})$, so that we also obtain
standard subspaces $V_\gamma^\pm \subseteq \mathcal{F}_\pm(\mathcal{H})$.
The pair $(\mathcal{R}^+(V_\gamma), \Omega)$ then satisfies
\[ V_\gamma^+ = \overline{\mathcal{R}^+(V_\gamma)_h \Omega},\]
and in the fermionic case the pair $(\mathcal{R}^-(V_\gamma), \Omega)$ leads to the
correct modular operator $\Delta_{V_\gamma}^-$, but to the modular conjugation
$\tilde Z \Gamma_-(i J_{V_\gamma}).$
\end{remark}
\section{Perspectives}
\mlabel{sec:7}
For a detailed exposition of the results mentioned below,
we refer to the forthcoming paper \cite{NO17}.
\subsection{The Virasoro group}
On ${\mathbb S}^1 \cong {\mathbb T} \subseteq {\mathbb C}$ we consider the involution
$r(z) = \overline z$ and the group $G := \mathop{{\rm Diff}}\nolimits({\mathbb S}^1)
\cong \mathop{{\rm Diff}}\nolimits({\mathbb S}^1)_0 \rtimes \{\mathbf{1},r\}$.
One can show that all projective unitary positive energy
representations of $\mathop{{\rm Diff}}\nolimits({\mathbb S}^1)_0$ extend naturally to projective
antiunitary representations of $G$. To obtain
antiunitary representations, one has to replace $G$ by
a central extension $\mathop{{\rm Vir}}\nolimits \rtimes \{\mathbf{1},r\}$, where
$\mathop{{\rm Vir}}\nolimits$ is the simply connected Virasoro group.
Another closely related ``infinite dimensional'' group that occurs in the context of
modular localization is the free product
$\mathop{{\rm PSL}}\nolimits_2({\mathbb R}) *_{\mathop{{\rm Aff}}\nolimits({\mathbb R})_0} \mathop{{\rm PSL}}\nolimits_2({\mathbb R})$ of two copies of $\mathop{{\rm PSL}}\nolimits_2({\mathbb R})$
over the connected affine group (\cite{GLW98}).
\subsection{Euclidean Jordan algebras}
\mlabel{subsec:7.2b}
Minkowski spaces are particular examples of simple euclidean
Jordan algebras, namely those of rank~$2$ (cf.~\cite{FK94}).
Many of the geometric structures of Minkowski spaces and their conformal
completions are also available for general simple euclidean
Jordan algebras, where the role of the light cone $V_+$ is played
by the open cone of invertible squares. There also exists a
natural causal compactification $\hat V$ which carries a causal
structure. The corresponding conformal group $G := \mathop{\rm Conf{}}\nolimits(V)$
has an index $2$ subgroup $G_1$ preserving the causal structure on $\hat V$;
other group elements reverse it.
In $\hat V$, the set $\mathcal{W}^c := \{ g V_+ \: g \in \mathop{\rm Conf{}}\nolimits(V)\}$
specializes for Minkowski spaces to the set of conformal wedge
domains, which include in particular the light cone and double cones
(cf.~Example~\ref{exs:5.16}). Moreover, the homomorphism
\[ \gamma_{V_+} \: {\mathbb R}^\times \to \mathop{{\rm GL}}\nolimits(V), \quad
\gamma_{V_+}(t)v = tv\]
is naturally specified because $\gamma_{V_+}({\mathbb R}^\times_+)$ is central
in the identity component of the stabilizer $G_{V_+}$.
Therefore any antiunitary positive energy representation of $G$ yields a net
of standard subspaces indexed by $\mathcal{W}^c$.
In \cite{NO17} we obtain a classification of these representations
along the lines of \S \ref{subsec:2.2}.
The subsemigroup $S_{V_+} := \{ g \in G \: gV_+ \subseteq V_+\} \subseteq G$
also leads to a natural generalization of Borchers pairs in this context.
\subsection{Hermitian groups}
\mlabel{subsec:7.3}
The conformal group $\mathop{\rm Conf{}}\nolimits(V)$ of a euclidean Jordan algebra
can be identified with the group $\mathop{{\rm AAut}}\nolimits(T_{V_+})$ of
holomorphic and antiholomorphic automorphisms
of the corresponding tube domain $T_{V_+} = V_+ + i V$.
This suggests that some of the crucial structure relevant for
antiunitary representations can still be obtained for the
groups $G := \mathop{{\rm AAut}}\nolimits(\mathcal{D})$ of all holomorphic and
antiholomorphic automorphisms of a bounded symmetric domain $\mathcal{D}$.
The irreducible antiunitary positive energy
representations can also be parametrized in a natural way
by writing $G = G_1 \rtimes \{\mathop{{\rm id}}\nolimits,\sigma\}$, where
$\sigma$ is an antiholomorphic involution of $\mathcal{D}$ (\cite{NO17}).
Here there are many homomorphisms $\gamma \: ({\mathbb R}^\times, {\mathbb R}^\times_+) \to (G,G_1)$
with $\gamma(-1) = \sigma$, but one cannot expect
$\gamma({\mathbb R}^\times_+)$ to be central in
$G^\sigma$, which can be achieved
for tube type domains
(coming from euclidean Jordan algebras).
\subsection{Analytic extension}
\mlabel{subsec:7.3b}
We have seen that, for antiunitary representations
of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$, the positive energy condition appears quite naturally
from the order structure on the set of standard subspaces.
If $(U,\mathcal{H})$ is an antiunitary representation of $G$
containing copies of $\mathop{{\rm Aff}}\nolimits({\mathbb R})$ coming from
half-sided modular inclusions, it follows that
the closed convex cone
\[ C_U := \{ x \in {\mathfrak g} \: -i\dd U(x) \geq 0\} \]
is non-trivial.
This further leads to an analytic extension of the representation to the domain
$G \exp(i C_U)$ (see \cite{Ne00} for details on this process).
On the other hand, antiunitary representations of
${\mathbb R}^\times$ correspond to modular objects $(\Delta, J)$ and the
orbit maps of elements $v \in V$ extend to the strip
$\{ z \in {\mathbb C} \: 0 \leq \Im z \leq \pi\}$
(Remark~\ref{rem:anaext}).
Composing families of homomorphisms $\gamma \: {\mathbb R}^\times \to G$
with an antiunitary representation,
we therefore expect analytic continuation of $U$ to natural
complex domains containing $G$ in their boundary.
It would be very interesting to combine these two types of
analytic continuations in a uniform manner, in the same spirit
as the KMS condition is a generalization of the ground state condition
(corresponding to positive energy) (\cite{BR96}). One may further expect
that this leads to ``euclidean realizations''
of antiunitary representations of $G$ by unitary representations
of a Cartan dual group in the sense of the theory of reflection positivity
developed in \cite{NO14, NO16}; see also \cite{Sch06} for relations
with modular theory. Maybe it can even be combined
with the analytic extensions to the crown of a Riemannian
symmetric space (\cite{KS05}).
\subsection{Geometric standard subspaces}
\mlabel{subsec:7.3c}
In QFT, the algebras $\mathcal{R}(V)$ are supposed to correspond to regions in some
spacetime $M$. Therefore one looks for standard subspaces $V(\mathcal{O})$ that
are naturally associated to a domain $\mathcal{O}$ in some spacetime, such as Hardy spaces
(Example~\ref{ex:hardy}) or the standard subspaces $K(\mathcal{O})$ constructed in \cite{FG89}
for free fields. From the perspective of antiunitary group representations,
a natural class of representations of a pair $(G,G_1)$ consists of those realized
in spaces $\mathcal{H}_D \subseteq C^{-\infty}(M)$ of distributions on a manifold $M$ on which $G_1$ acts.
Here $\mathcal{H}_D$ is the Hilbert space completion of the space $C^\infty_c(M)$ of test functions
with respect to the scalar product given by a positive definite distribution $D$ on $M \times M$
via
\[ \langle \xi, \eta \rangle_D = \int_{M \times M} \overline{\xi(x)}\eta(y)\, dD(x,y)\]
(cf.\ \cite{NO14}).
We associate to each open subset $\mathcal{O} \subseteq M$ the closed subspace $\mathcal{H}_D(\mathcal{O})$
generated by the space $C^\infty_c(\mathcal{O})$ of test functions supported in $\mathcal{O}$.
In this context it is an interesting problem to find natural
antiunitary extensions of the representation of $G_1$ to $G$ such
that some of the corresponding standard subspaces
(Proposition~\ref{prop:3.2}) have natural geometric descriptions.
In this context the detailed analysis of KMS conditions for unitary representations
of ${\mathbb R}$ in \cite{NO16} should be a crucial tool because one typically expects
standard subspaces to be described in terms of analytic continuations of
distributions on some domain $\mathcal{O} \subseteq M$
to a complex manifold containing $\mathcal{O}$,
which links this problem to~\S\ref{subsec:7.3b} (cf.\ \cite{NO17b}).
As one also wants the modular
unitaries to act geometrically on the manifold $M$, the case $G_1 = {\mathbb R}$ acting
by translations on ${\mathbb R}$ considered in \cite{NO16} is of key importance.

Conversely, one may also consider Hilbert spaces $\mathcal{H}$ of holomorphic functions
on a complex manifold $M$ on which $G$ acts in such a way that $G_1$ acts
by holomorphic maps and $G \setminus G_1$ by antiholomorphic ones. Then
any $\gamma \in \mathop{{\rm Hom}}\nolimits(({\mathbb R}^\times,{\mathbb R}^\times_+),(G,G_1))$ leads to a standard subspace
of $\mathcal{H}$. Many natural examples of this type arise from \S\S\ref{subsec:7.2b} and~\ref{subsec:7.3}.
In particular, the representation of $\mathop{{\rm AU}}\nolimits(\mathcal{H})$ on $\mathcal{F}_+(\mathcal{H})$ is of this type
if we identify $\mathcal{F}_+(\mathcal{H})$ with the Hilbert space of holomorphic functions on $\mathcal{H}$
with the reproducing kernel $K(\xi,\eta) = e^{\langle \xi,\eta\rangle}$ (cf.~\cite{Ne00}).
\subsection{Dual pairs in the Heisenberg group}
\mlabel{subsec:7.4}
Let $\mathcal{H}$ be a complex Hilbert space and $V \subseteq \mathcal{H}$ be a real linear
subspace. We consider the corresponding
subgroup $\mathop{{\rm Heis}}\nolimits(V) := {\mathbb T} \times V \subseteq \mathop{{\rm Heis}}\nolimits(\mathcal{H})$ (\S \ref{subsec:7.1}).
The centralizer of this subgroup in $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ coincides with
$\mathop{{\rm Heis}}\nolimits(V')$. If $V$ is closed, we thus obtain a
{\it dual pair} $(\mathop{{\rm Heis}}\nolimits(V), \mathop{{\rm Heis}}\nolimits(V'))$ of subgroups in
$\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ in the sense that both subgroups are their mutual centralizers.
\begin{remark} For a closed real linear subspace
$V \subseteq \mathcal{H}$, we have
$\mathop{{\rm Heis}}\nolimits(\mathcal{H}) = \overline{\mathop{{\rm Heis}}\nolimits(V)\mathop{{\rm Heis}}\nolimits(V')}$ if and only if
$V + V'$ is dense in $\mathcal{H}$, which is equivalent to $(V + V')' = V \cap V' = \{0\}$
(cf.~Lemma~\ref{lem:stand-factorial}).
If this is the case, then $\mathcal{R}(V) \subseteq B(\mathcal{F}_+(\mathcal{H}))$ is a factor
by Theorem~\ref{thm:araki-1}. Accordingly, the restriction
of the irreducible Fock representation $(U, \mathcal{F}_+(\mathcal{H}))$ of
$\mathop{{\rm Heis}}\nolimits(\mathcal{H})$ to $\mathop{{\rm Heis}}\nolimits(V)$ is a factor representation
and the same holds for $\mathop{{\rm Heis}}\nolimits(V')$. We thus obtain many
interesting types of factor representations of Heisenberg
groups of the type $\mathop{{\rm Heis}}\nolimits(V)$ simply by restricting an
irreducible representation of $\mathop{{\rm Heis}}\nolimits(\mathcal{H})$.
In \cite{vD71} this approach is used to realize quasi-free representations
of $\mathop{{\rm Heis}}\nolimits(V)$ in a natural way.
\end{remark}
\begin{remark} (a) Suppose that $G$ is a group which is the product $G = G_1G_2$
of two subgroups $G_1$ and $G_2$ such that $G_1 = Z_G(G_2)$ and $G_2 = Z_G(G_1)$.
Then $G_1 \cap G_2 = Z(G)$ and every irreducible unitary representation
$(U,\mathcal{H})$ of $G$ restricts to factor representations of the subgroups
$G_j$, because $U(G_2) \subseteq U(G_1)'$ implies
$Z(U(G_j)'') \subseteq U(G_1)' \cap U(G_2)' = U(G)' = {\mathbb C} \mathbf{1}$.
(b) A typical example arises from a von Neumann algebra
$\mathcal{M} \subseteq B(\mathcal{H})$ in symmetric form, i.e., there exists a conjugation
$J$ with $J\mathcal{M} J = \mathcal{M}'$ (Definition~\ref{def:symform}).
Then $G := \mathop{\rm U{}}\nolimits(\mathcal{M}) \mathop{\rm U{}}\nolimits(\mathcal{M}')$ is a product of two subgroups
$G_1 := \mathop{\rm U{}}\nolimits(\mathcal{M})$ and $G_2 := \mathop{\rm U{}}\nolimits(\mathcal{M}')$ satisfying this condition.
The representation of $G$ on $\mathcal{H}$ is multiplicity free
because $G' = \mathcal{M} \cap \mathcal{M}'$ is the center of $\mathcal{M}$, hence abelian.
It is irreducible if and only if $\mathcal{M}$ is a factor, and then
the representations of the subgroups $G_1$ and $G_2$ are factor representations.
Note that the representation of $G$ extends to an antiunitary
representation of $G \rtimes \{\mathbf{1},j\}$, where
$j(g) = JgJ$.
\end{remark}
\begin{remark}
Similar structures also arise for infinite dimensional
Lie groups such as $\mathop{{\rm Diff}}\nolimits({\mathbb S}^1)$, (doubly extended) loop groups
and oscillator groups, because modular objects provide information on restrictions
of irreducible representations to factorial representations of subgroups
(cf.~\cite{Wa98} for loop groups).
So one should also try to develop the theory of modular localization
for antiunitary representations of infinite dimensional Lie groups.
\end{remark}
\subsection{A representation theoretic perspective on
modular localization}
The analysis of ordered families of von Neumann algebras
with a common cyclic separating vector carried out by Borchers in
\cite{Bo97} should also have a natural counterpart in the context
of standard subspaces, in the spirit of the translation mechanism
described in Subsection~\ref{subsec:4.2}. It would be interesting
to see if the corresponding results can be formulated entirely
in group theoretic terms, concerning multiplicative
one-parameter groups of some pair $(G,G_1)$
(cf.~Proposition~\ref{prop:antiunirep-stand}).
As we have seen in \S\S\ref{subsec:3.3} and \ref{subsec:modint}, this works
perfectly well for half-sided modular inclusions and modular intersections.
The same could be said about Wiesbrock's program, concerning
the generation of Haag--Kastler nets from finite configurations
of von Neumann algebras with common cyclic separating vectors
(\cite{Wi93c, Wi97b, Wi98, KW01}).
\section{Introduction}
The novel coronavirus disease that was first reported in Wuhan, China in December 2019 (COVID-19) is quickly spreading around the world. As of {\today}, the total number of cases exceeds 460,000 and the disease has claimed more than 20,000 lives globally. Since March 2020, while new cases in China appear to have settled down, the number of cases is growing exponentially in the rest of the world. To prevent the spread of the new virus, many governments have introduced draconian measures such as restricting travel, ordering social distancing, and closing schools, bars, restaurants, and other businesses.
In a time of such extreme uncertainty, making economic decisions becomes challenging because pandemics are rare. The most recent comparable episode is the Spanish flu of 1918 \citep{Trilla2008}, so pandemics are likely to occur at most once during one's lifetime. Nevertheless, individuals need to make everyday decisions such as how to manage inventories of staples, how much to consume and save, when to buy or sell stocks, etc., and these decisions depend on the expectation of how long and severe the epidemic is. Governments must also decide to what extent to impose travel restrictions, social distancing, and closures of schools and businesses, and for how long \citep{Anderson_2020}.
When past experience or data are not so relevant in new situations such as the COVID-19 pandemic, simple mathematical models are useful in analyzing the current situation and predicting the near future. This paper aims to help decision making by building a mathematical epidemic model, estimating it using the up-to-date data of COVID-19 cases around the world, making out-of-sample predictions, and discussing optimal policy and economic impact. The model is the \cite{KermackMcKendrick1927} Susceptible-Infected-Recovered (SIR) model and is relatively simple. An infected individual interacts with other agents and transmits the disease at a certain rate if the other agent is susceptible. An infected individual also recovers (or dies) at a certain rate. The model can be described as a system of ordinary differential equations, which is nonlinear due to the interaction between the infected and susceptible. The behavior of the model is completely determined by the transmission rate ($\beta$), the recovery rate ($\gamma$), and the initial condition. Despite the nonlinearity, the model admits an exact analytical solution in parametric form \citep{HarkoLoboMak2014}, which is convenient for estimation and prediction. Using this model, I theoretically derive the condition under which an epidemic occurs and characterize the peak of the epidemic.
I next take this model to the data. Because the situation and policies surrounding COVID-19 are rapidly evolving, I use the most recent two weeks ({14} days) of cases and estimate the model parameters by nonlinear least squares. Except for China, Japan, and Korea, which are early epicenters of the outbreak, the transmission rate $\beta$ is around 0.2--0.4 and heterogeneous across countries. The estimated transmission rates far exceed the recovery rate $\gamma$, which is about 0.1 based on the clinical course of COVID-19. Due to the high transmission rate and lack of herd immunity, in the absence of mitigation measures such as social distancing, the virus spreads quickly and may infect around 30 percent of the population at the peak of the epidemic. Using the model, I conduct an experiment where the government introduces temporary mitigation measures and succeeds in reducing the transmission rate. If the mitigation measures are introduced too early, the epidemic is merely delayed, with no effect on the peak, because the population does not acquire herd immunity. Assuming the government can take drastic measures for up to 12 weeks, the optimal policy is to start mitigation measures once the number of cases reaches 6.3\% of the population. Under the optimal policy, the peak infection rate reduces to 6.2\%. Therefore unless vaccines are expected to be developed in the near future, the draconian measures currently taken in many countries may be suboptimal, and it may be desirable to postpone them.
To evaluate the potential economic impact of COVID-19, I build a stylized production-based asset pricing model. Capitalists hire labor in competitive markets and infected workers are unable to work. Because the epidemic (temporarily) drastically reduces the labor supply, output goes down and the model calibration suggests that the stock market crashes by 50\% during the epidemic, though the crash is short-lived. Under the optimal policy, the stock price exhibits a W-shaped pattern and remains about 10\% below the steady state for about half a year.
\section{SIR epidemic model}\label{sec:SIR}
I first present the compartment model of epidemics following \cite{KermackMcKendrick1927}.
The society consists of $N$ individuals, among which $S$ are susceptible to an infectious disease (they are neither infected nor immune) and $I$ are infected. (We ignore population growth because an epidemic occurs in a relatively short interval.) Let $R=N-S-I$ be the number of individuals who are immune (possibly because they are vaccinated, infected and recovered, or dead). Suppose that individuals meet each other randomly, and conditional on an infected individual meeting a susceptible individual, the disease is transmitted with some probability. Let $\beta>0$ be the rate at which an infected individual meets a person and transmits the disease if susceptible. Let $\gamma>0$ be the rate at which an infected individual recovers or dies. Then the following differential equations hold.
\begin{subequations}\label{eq:SIR}
\begin{align}
\diff S/\diff t&=-\beta SI/N, \label{eq:SIR.s}\\
\diff I/\diff t&=\beta SI/N-\gamma I, \label{eq:SIR.i}\\
\diff R/\diff t&=\gamma I. \label{eq:SIR.r}
\end{align}
\end{subequations}
To see why \eqref{eq:SIR.s} holds, note that an infected individual can transmit to $\beta$ people per unit of time if all of them are susceptible, but the probability of meeting a susceptible individual is only $S/N$. Thus, $I$ infected individuals can transmit to $I\times \beta\times (S/N)=\beta SI/N$ individuals per unit of time. \eqref{eq:SIR.i} holds because the change in the number of infected individuals equals the newly infected minus closed cases (either due to recovery or death).
Letting $x=S/N$, $y=I/N$, $z=R/N$ be the fraction of susceptible, infected, and recovered individuals in the society, dividing all equations in \eqref{eq:SIR} by $N$, we obtain
\begin{subequations}\label{eq:xyz}
\begin{align}
\dot{x}&=-\beta xy, \label{eq:xyz.x}\\
\dot{y}&=\beta xy-\gamma y, \label{eq:xyz.y}\\
\dot{z}&=\gamma y,\label{eq:xyz.z}
\end{align}
\end{subequations}
where $\dot{x}=\diff x/\diff t$. Although the system of differential equations \eqref{eq:xyz} is nonlinear, \cite{HarkoLoboMak2014} obtain an exact analytical solution in parametric form.
\begin{prop}\label{prop:HLM}
Let $x(0)=x_0>0$, $y(0)=y_0>0$, $z(0)=z_0\ge 0$ be given, where $x_0+y_0+z_0=1$. Then the solution to \eqref{eq:xyz} is parametrized as
\begin{subequations}\label{eq:HLM}
\begin{align}
x(t)&=x_0v, \label{eq:HLM.x}\\
y(t)&=\frac{\gamma}{\beta}\log v-x_0v+x_0+y_0, \label{eq:HLM.y}\\
z(t)&=-\frac{\gamma}{\beta}\log v+z_0, \label{eq:HLM.z}
\end{align}
\end{subequations}
where
\begin{equation}
t=\int_v^1\frac{\diff \xi}{\xi(\beta x_0(1-\xi)+\beta y_0+\gamma\log \xi)}. \label{eq:v}
\end{equation}
\end{prop}
\begin{proof}
See Equations (26)--(29) in \cite{HarkoLoboMak2014}. The parametrization has been changed slightly for convenience.
\end{proof}
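For concreteness, the parametric solution \eqref{eq:HLM}--\eqref{eq:v} can be evaluated by numerical quadrature. The following is a minimal Python sketch (assuming NumPy and SciPy are available; variable names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def sir_parametric(beta, gamma, x0, y0, z0, v):
    """Evaluate (t, x, y, z) at a parameter value v in (v*, 1]."""
    # phi(xi) = xi*(beta*x0*(1-xi) + beta*y0 + gamma*log(xi)) > 0 on (v*, 1]
    phi = lambda xi: xi * (beta * x0 * (1 - xi) + beta * y0 + gamma * np.log(xi))
    t, _ = quad(lambda xi: 1.0 / phi(xi), v, 1.0)       # time integral (eq:v)
    x = x0 * v                                          # (eq:HLM.x)
    y = (gamma / beta) * np.log(v) - x0 * v + x0 + y0   # (eq:HLM.y)
    z = -(gamma / beta) * np.log(v) + z0                # (eq:HLM.z)
    return t, x, y, z
\end{verbatim}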
Using Proposition \ref{prop:HLM}, we can study the qualitative properties of the epidemic.
\begin{prop}\label{prop:epidemic}
Let everything be as in Proposition \ref{prop:HLM}. Then the following statements are true.
\begin{enumerate}
\item In the long run, fraction $v^*\in (0,1)$ of susceptible individuals will not be infected (fraction $1-v^*$ infected), where $v^*$ is the unique solution to
\begin{equation}
x_0(1-v)+y_0+\frac{\gamma}{\beta}\log v=0.\label{eq:vstar}
\end{equation}
\item If $\beta x_0\le \gamma$, then $\diff y/\diff t\le 0$: there is no epidemic. Furthermore, $v^*\to 1$ as $y_0\to 0$.
\item If $\beta x_0>\gamma$, then there is an epidemic. The number of infected individuals reaches the maximum when $\beta x(t_{\max})=\gamma$, at which point the fraction
\begin{equation}
y_{\max}=y(t_{\max})=\frac{\gamma}{\beta}\log \frac{\gamma}{\beta x_0}-\frac{\gamma}{\beta}+x_0+y_0\label{eq:ymax}
\end{equation}
of population is infected. The maximum infection rate $y_{\max}$ is increasing in $x_0,y_0$ and decreasing in $\gamma/\beta$.
\end{enumerate}
\end{prop}
\begin{proof}
Let $f(v)=x_0(1-v)+y_0+\frac{\gamma}{\beta}\log v$ for $v\in (0,1]$. Then \eqref{eq:v} implies
\begin{equation}
t=\int_{v(t)}^1 \frac{\diff \xi}{\beta\xi f(\xi)}.\label{eq:vt}
\end{equation}
Since $f(1)=y_0>0$, it must be $v(0)=1$. The definite integral \eqref{eq:vt} is well-defined in the range $f(v)>0$. Since
\begin{align*}
f'(v)&=-x_0+\frac{\gamma}{\beta v},\\
f''(v)&=-\frac{\gamma}{\beta v^2}<0,
\end{align*}
$f$ is concave so the set $V=\set{v\in (0,1]|f(v)>0}$ is an interval. Since $f(v)\to -\infty$ as $v\downarrow 0$, we have $V=(v^*,1]$ for $v^*\in (0,1)$, where $v^*$ solves \eqref{eq:vstar}. Because $f$ can be approximated by a linear function around $v^*$, we get
$$\infty=\int_{v^*}^1\frac{\diff \xi}{\beta \xi f(\xi)},$$
so $v(\infty)=v^*$. Using \eqref{eq:HLM.x}, in the long run fraction $x(\infty)/x_0=v^*$ of susceptible individuals are not infected.
Since $f(v)>0$ on $V=(v^*,1]$, we have $v(t)\in (v^*,1]$ for all $t\ge 0$. By \eqref{eq:vt}, $v(t)$ is clearly decreasing in $t$. If $\beta x_0\le \gamma$, it follows from \eqref{eq:HLM.y} that
$$\dot{y}=\left(\frac{\gamma}{\beta v}-x_0\right)\dot{v}=\frac{\gamma-\beta x_0 v}{\beta v}\dot{v}\le 0$$
because $\dot{v}\le 0$ and $v\le 1$ implies $\gamma-\beta x_0v\ge \gamma-\beta x_0\ge 0$. Since $f(1)=0$ when $y_0=0$, $f'(1)=-x_0+\gamma/\beta\ge 0$ if $\beta x_0\le \gamma$, and $f''(v)<0$, it must be $v^*\to 1$ as $y_0\to 0$.
Finally, assume $\beta x_0>\gamma$. Then $\dot{y}(0)=(\beta x_0-\gamma)y_0>0$, so $y(t)$ initially increases. By \eqref{eq:xyz.y}, $y(t)$ reaches the maximum when $0=\dot{y}=\beta xy-\gamma y\iff x=\gamma/\beta$. Using \eqref{eq:HLM.x}, this is achieved when $\gamma/\beta=x_0v\iff v=\frac{\gamma}{\beta x_0}$. Substituting into \eqref{eq:HLM.y}, we obtain \eqref{eq:ymax}. Letting
$$y(\theta,x_0,y_0)=\theta \log\frac{\theta}{x_0}-\theta+x_0+y_0$$
for $\theta=\gamma/\beta$, it follows from simple algebra that
\begin{align*}
\partial y/\partial y_0&=1,\\
\partial y/\partial x_0&=-\frac{\theta}{x_0}+1=\frac{\beta x_0-\gamma}{\beta x_0}>0,\\
\partial y/\partial \theta&=\log \frac{\gamma}{\beta x_0}<0,
\end{align*}
so $y_{\max}$ is increasing in $x_0,y_0$ and decreasing in $\theta=\gamma/\beta$.
\end{proof}
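The quantities in Proposition \ref{prop:epidemic} are easily computed numerically. A minimal Python sketch (assuming SciPy; names illustrative) solves \eqref{eq:vstar} by bracketing and evaluates \eqref{eq:ymax}:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def epidemic_summary(beta, gamma, x0, y0):
    """Return (v*, y_max): the long-run uninfected share of susceptibles
    and the peak infection rate (the latter valid when beta*x0 > gamma)."""
    f = lambda v: x0 * (1 - v) + y0 + (gamma / beta) * np.log(v)
    vstar = brentq(f, 1e-12, 1.0 - 1e-12)       # unique root of (eq:vstar)
    theta = gamma / beta
    ymax = theta * np.log(theta / x0) - theta + x0 + y0   # (eq:ymax)
    return vstar, ymax

# e.g. beta = 0.29, gamma = 0.1 gives v* ~ 0.066 and y_max ~ 0.29
print(epidemic_summary(0.29, 0.1, 1 - 1e-8, 1e-8))
\end{verbatim}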
Proposition \ref{prop:epidemic} has several policy implications for dealing with epidemics. First, the policy maker may want to prevent an epidemic. This is achieved when the condition $\beta x_0\le \gamma$ holds. Since before the epidemic the fraction of infected individuals $y_0$ is negligible, we can rewrite the no-epidemic condition as $\beta(1-z_0)\le \gamma$. Unlike bacterial infections, for which a large variety of antibiotics are available, there is generally no curative care for viral infections.\footnote{Currently, the only viruses against which antiviral drugs are available are the human immunodeficiency virus (HIV), herpes, hepatitis, and influenza viruses. See \cite{Razonable2011} for a review of treatments of the latter three viruses.} Therefore the recovery/death rate $\gamma$ is generally beyond the policy maker's control. Hence the only way to satisfy the no-epidemic condition $\beta(1-z_0)\le \gamma$ is either
\begin{inparaenum}[(i)]
\item control transmission (reduce $\beta$), for example by washing hands, wearing protective gear, restricting travel, or social distancing, or
\item immunization (increase $z_0$).
\end{inparaenum}
The required minimum immunization rate to prevent an epidemic is $z_0=1-\gamma/\beta$.
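For instance, at the median transmission rate estimated in Section \ref{sec:estim} ($\beta\approx 0.29$) and $\gamma=0.1$, this threshold would be $1-0.1/0.29\approx 66\%$ of the population.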
Second, the policy maker may want to limit the economic impact once an epidemic occurs. Because the supply of healthcare services is inelastic in the short run, it is important to keep the maximum infection rate $y_{\max}$ in \eqref{eq:ymax} within the capacity of the existing healthcare system. This is achieved by lowering the transmission rate $\beta$.
\section{Estimation and prediction}\label{sec:estim}
In this section I estimate the SIR model in Section \ref{sec:SIR} and use it to predict the evolution of the COVID-19 pandemic.
\subsection{Data}
The number of cases of COVID-19 is provided by Center for Systems Science and Engineering at Johns Hopkins University (henceforth CSSE). The cumulative number of confirmed cases and deaths can be downloaded from the GitHub repository.\footnote{\url{https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series}} The time series starts on January 22, 2020 and is updated daily. Because countries are added as new cases are reported, the cross-sectional size increases every day. For the majority of countries, the CSSE data are at country level. However, for some countries such as Australia, Canada, and China, regional data at the level of province or state are available. In such countries, I aggregate across regions and use the country level data. Figure \ref{fig:data_Early} shows the number of COVID-19 cases in early epicenters, namely China, Iran, Italy, Japan, and Korea.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig_data_Early.pdf}
\caption{Number of COVID-19 cases in early epicenters.}\label{fig:data_Early}
\end{figure}
\subsection{Estimation}
Estimation of the model poses significant challenges because the situation of COVID-19 is rapidly evolving. The model parameters are likely time-varying because new policies are introduced on a day-to-day basis, temperature and weather may affect the virus activity, and the virus itself may genetically mutate. For this reason, I only use the data from the two most recent weeks ({14} days).
I estimate the model parameters by nonlinear least squares, minimizing the distance between model outputs $(x,y,z)$ and data. Because the CSSE data only contains confirmed cases and deaths, but the SIR model abstracts from death, I define $c=y+z=1-x$ to be the fraction of infected or recovered cases in the model. The counterpart in the data is $\widehat{c}=C/N$, where $C$ is the number of confirmed cases and $N$ is population.\footnote{I use the 2015 population data from World Bank at \url{https://data.world/worldbank/total-population-per-country}.} Because the number of cases grows by many orders of magnitude within a short period of time, I define the loss function using log cases:
\begin{equation}
L(\beta,\gamma,y_0,z_0)=\sum_t\left(\log \widehat{c}(t)-\log c(t)\right)^2.\label{eq:lossfunc}
\end{equation}
Since I only include $c$ in the loss function \eqref{eq:lossfunc}, the parameters $\gamma$ and $z_0$, which govern the dynamics of the fraction of recovered individuals $z$, are not identified. Therefore I exogenously fix these two parameters. For the recovery rate $\gamma$, because the majority of patients with COVID-19 experience mild symptoms that resemble a common cold or influenza \citep{Zhou_2020}, from which it takes about 10 days to recover, I set $\gamma=1/10=0.1$. For $z_0$, I set it to one divided by population.\footnote{This number is likely a significant underestimate, but the results are not sensitive to $z_0$ as long as it is small.} Although the fraction of cases $c(t)$ is likely significantly underestimated because infected individuals do not appear in the data unless they are tested, it does not cause problems for estimating the parameter of interest (the transmission rate $\beta$) because under-reporting is absorbed by the constant $y_0$ in \eqref{eq:HLM.y}, which only affects the onset of the epidemic by a few weeks without changing the overall dynamics (see Figure \ref{fig:SIR_example}). To sum up, I estimate the remaining parameters $\beta$ and $y_0$ by numerically minimizing the loss function \eqref{eq:lossfunc}. Standard errors are calculated using the asymptotic theory of $M$-estimators. See Appendix \ref{sec:solve} for the solution algorithm of the SIR model.
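As a sketch of this estimation step, the following Python fragment (illustrative; it assumes a routine \texttt{solve\_sir} that returns the model path $c(t)=y(t)+z(t)$, e.g.\ built on the parametric solution above) minimizes \eqref{eq:lossfunc} over $(\beta, y_0)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def loss(params, c_hat, gamma, z0, solve_sir):
    """Sum of squared log deviations between model and data (eq:lossfunc)."""
    beta, y0 = params
    c_model = solve_sir(beta, gamma, y0, z0, n_days=len(c_hat))
    return np.sum((np.log(c_hat) - np.log(c_model)) ** 2)

# c_hat: last 14 days of confirmed cases divided by population N
# res = minimize(loss, x0=[0.3, 1e-6], args=(c_hat, 0.1, 1.0/N, solve_sir),
#                method="Nelder-Mead")
# beta_hat, y0_hat = res.x
\end{verbatim}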
\subsection{Results}
I estimate the SIR model for all countries that meet the following inclusion criteria:
\begin{inparaenum}[(i)]
\item the number of confirmed cases as of {\today} exceeds 1,000, and
\item the number of confirmed cases at the beginning of the estimation sample exceeds 10.
\end{inparaenum}
These countries are mostly early epicenters (China, Japan, Korea), European countries, and North America. Table \ref{t:SIR_estim} shows the estimated transmission rate ($\beta$), its standard error, the fraction of infected individuals at the peak ($y_{\max}$), number of days to reach the peak ($t_{\max}$), and the fraction of the population that is eventually infected. Figure \ref{fig:Italy} shows the time evolution of COVID-19 cases in Italy, which is the earliest epicenter outside East Asia.
\begin{table}[!htb]
\centering
\caption{Estimation of SIR model.}\label{t:SIR_estim}
\begin{tabular}{lrrrrr}
\hline
Country & $\beta$ & s.e. & $y_{\max}$ (\%) & $t_{\max}$ (days) & Total (\%) \\
\hline
Australia & 0.29 & 0.052 & 29 & 67 & 93 \\
Austria & 0.29 & 0.005 & 29 & 57 & 93 \\
Belgium & 0.27 & 0.112 & 26 & 64 & 91 \\
Brazil & 0.37 & 0.002 & 37 & 60 & 97 \\
Canada & 0.33 & 0 & 33 & 60 & 96 \\
Chile & 0.37 & 0.223 & 37 & 54 & 97 \\
China & 0.0012 & 0 & 0.0059 & 0 & 0.006 \\
Czechia & 0.29 & 0.003 & 29 & 64 & 93 \\
Denmark & 0.12 & 0.001 & 1.5 & 315 & 31 \\
Ecuador & 0.48 & 0 & 46 & 42 & 99 \\
France & 0.24 & 0.005 & 22 & 74 & 88 \\
Germany & 0.28 & 0.005 & 28 & 60 & 93 \\
Iran & 0.11 & 0.002 & 0.49 & 470 & 19 \\
Ireland & 0.35 & 0.009 & 35 & 50 & 96 \\
Israel & 0.3 & 0.101 & 30 & 62 & 94 \\
Italy & 0.19 & 0.002 & 13 & 91 & 76 \\
Japan & 0.077 & 0.003 & 0.00051 & 0 & 0.0022 \\
Korea, South & 0.02 & 0 & 0.015 & 0 & 0.019 \\
Luxembourg & 0.42 & 0.011 & 42 & 36 & 98 \\
Malaysia & 0.26 & 0.01 & 24 & 80 & 90 \\
Netherlands & 0.25 & 0.002 & 24 & 69 & 90 \\
Norway & 0.15 & 0.001 & 7 & 144 & 60 \\
Pakistan & 0.31 & 0.006 & 31 & 76 & 94 \\
Poland & 0.31 & 0.002 & 31 & 69 & 94 \\
Portugal & 0.37 & 0.004 & 37 & 48 & 97 \\
Spain & 0.28 & 0.118 & 27 & 57 & 92 \\
Sweden & 0.15 & 0.002 & 6 & 173 & 57 \\
Switzerland & 0.28 & 0.169 & 27 & 55 & 92 \\
US & 0.38 & 0.001 & 39 & 48 & 98 \\
United Kingdom & 0.29 & 0.088 & 29 & 64 & 94 \\
\hline
\end{tabular}
\caption*{\footnotesize Note: The table presents the estimation results of the SIR model in Section \ref{sec:SIR}. $\beta$ (s.e.): the transmission rate and standard error; $y_{\max}$: the fraction of infected individuals at the peak in \eqref{eq:ymax}; $t_{\max}$: the number of days to reach the peak; ``Total'': the fraction of the population that is eventually infected.}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig_cases_Italy.pdf}
\caption{Time evolution of COVID-19 cases in Italy.}\label{fig:Italy}
\end{figure}
We can make a few observations from Table \ref{t:SIR_estim}. First, the estimated transmission rates are heterogeneous across countries. While $\beta$ is low in China, the origin of COVID-19, and the neighboring countries (Japan and Korea), where the virus spread first, $\beta$ is very high at around 0.2--0.4 in other countries and the no-epidemic condition $\beta x_0\le \gamma$ fails. Despite the short time series ({14} days), the transmission rate is precisely estimated in most countries. Although current data is insufficient to draw any conclusion, there are a few possible explanations for the heterogeneity of $\beta$. First, the transmission rate $\beta$ may artificially appear high in later epicenters such as Europe and North America just because these countries were slow in adopting tests of COVID-19 and the testing (hence reporting) rate is increasing. Second, the heterogeneity in $\beta$ may be due to the fact that early epicenters have already taken mitigation measures of COVID-19. For example, while Japan closed all schools starting on March 2, many states in US have implemented similar measures such as closing schools, bars, and restaurants only around March 16, so we may not have yet seen the effect of such policies. Finally, it is possible that there are cultural differences. For example, school children in Japan are taught to wash their hands before eating and to gargle after returning home, which they practice, and (from personal experience) Japanese cities tend to be much cleaner than most cities in the world.
Second, according to the model, countries other than China, Japan, and Korea are significantly affected by the epidemic. If the current trend in the transmission rate $\beta$ continues, the epidemic will peak in May 2020, at which point around 30 percent of the population will be infected by the virus simultaneously. By the time the epidemic ends, more than 90 percent of the population is eventually infected. These numbers can be used to do a back-of-the-envelope calculation of health outcomes. In February 2020, the cruise ship Diamond Princess was put under quarantine for two weeks after COVID-19 was detected. All passengers were tested and tracked, among whom 712 tested positive and 8 died. Although this is not a representative sample because the cruise ship passengers tend to be older and wealthier, the mortality of COVID-19 should be around 1\% for this group and possibly lower for the general population. \cite{Zhou_2020} document that 54 patients died among 191 that required hospitalization in two hospitals in Wuhan. Therefore the ratio of patients requiring hospitalization to death is $191/54=3.56$. Thus, based on the model, the fraction of people requiring hospitalization at the peak is $y_{\max}\times 0.01 \times 3.56=1.0\%$ assuming $y_{\max}=28\%$, the median value in Table \ref{t:SIR_estim}.
\subsection{Optimal mitigation policy}
Using the estimated model parameters, we can predict the course of the epidemic. For this exercise, I consider the following scenario. The epidemic starts with the initial condition $(y_0,z_0)=(10^{-8},0)$. The benchmark transmission rate is set to the median value in Table \ref{t:SIR_estim}, which is $\beta=0.29$. When the number of total cases $c=y+z$ exceeds $10^{-5}$, the government introduces mitigation measures such as social distancing, and the transmission rate changes to either $\beta=0.2$ or $\beta=0.1$.\footnote{Using high-frequency data on influenza prevalence and quasi-experimental variation in mitigation measures, \cite{Adda2016} documents that school closures and travel restrictions are generally not cost-effective.} Mitigation measures are lifted after 12 weeks and the transmission rate returns to the benchmark value. I also consider the optimal mitigation policy, where the government chooses the threshold of cases $\bar{c}$ to introduce mitigation measures as well as the transmission rate $\beta$ to minimize the maximum infection rate $y_{\max}$.
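A minimal Python sketch of this experiment follows (forward-Euler time stepping; the parameter grids and step sizes are illustrative, not the exact optimization behind Figure \ref{fig:mitigation}):
\begin{verbatim}
import numpy as np

def peak_infection(c_bar, beta_m, beta0=0.29, gamma=0.1, weeks=12,
                   days=1000, dt=0.1):
    """Peak y when beta switches to beta_m once c = y + z reaches c_bar,
    reverting to beta0 after `weeks` weeks."""
    x, y, z = 1 - 1e-8, 1e-8, 0.0
    t_on, y_peak = None, y
    for step in range(int(days / dt)):
        t = step * dt
        if t_on is None and y + z >= c_bar:
            t_on = t                      # mitigation starts here
        mitigated = t_on is not None and t - t_on <= 7 * weeks
        beta = beta_m if mitigated else beta0
        dx = -beta * x * y * dt
        dz = gamma * y * dt
        x, z = x + dx, z + dz
        y = 1 - x - z
        y_peak = max(y_peak, y)
    return y_peak

# Coarse grid search for the threshold and mitigated beta minimizing the peak
grid = [(c, b) for c in np.linspace(0.01, 0.15, 15)
               for b in np.linspace(0.10, 0.29, 10)]
c_opt, b_opt = min(grid, key=lambda p: peak_infection(*p))
\end{verbatim}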
Figure \ref{fig:mitigation} shows the fraction of infected and recovered over time. When the government introduces early but temporary mitigation measures (left panel), the epidemic is delayed but the peak is unaffected. This is because the maximum infection rate $y_{\max}$ in \eqref{eq:ymax} is mostly determined by $\beta$ and $\gamma$ since $(x_0,y_0)\approx (1,0)$, and the epidemic persists until the population acquires herd immunity so that the no-epidemic condition $\beta x\le \gamma$ holds. While early drastic mitigation measures might be useful to buy time to develop a vaccine, they may not be effective in mitigating the peak unless they are permanent.
The right panel in Figure \ref{fig:mitigation} shows the course of the epidemic under the optimal policy, which is to introduce mitigation measures such that $\beta=0.13$ when the number of cases reaches $\bar{c}=6.3\%$ of the population. Under this scenario, only $y_{\max}=6.2\%$ of the population is simultaneously infected at the peak as opposed to 28\% under the benchmark scenario. The intuition is that by waiting to introduce mitigation measures, a sufficient fraction of the population is infected (and acquires herd immunity) and thus reduces the peak.
\begin{figure}[!htb]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{fig_mitigation.pdf}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{fig_mitigation_opt.pdf}
\end{subfigure}
\caption{Dynamics of epidemic with mitigation measures.}\label{fig:mitigation}
\end{figure}
\section{Asset pricing with epidemic}
To evaluate the economic impact of the COVID-19 epidemic, in this section I solve a stylized production-based asset pricing model.\footnote{\cite{EichenbaumRebeloTrabandtEpidemics} build a quantitative macroeconomic model where economic activity (consumption and work) affects the transmission rate during an epidemic and discuss the optimal containment policy. On the empirical side, \cite{KarlssonNilssonPichler2014} find that the 1918 Spanish flu had negative effects on poverty and capital income but no effect on earnings.}
\subsection{Model}
The economy consists of two agent types, capitalists and workers, who respectively own the capital stock and labor. The capital stock at time $t$ is denoted by $K_t$. The capital growth rate is exogenous, lognormal, and i.i.d.\ over time:
$$\log (K_{t+1}/K_t) \sim N(\mu,\sigma^2).$$
Capitalists hire labor in competitive markets and produce a perishable good using a Cobb-Douglas production technology $Y=K^\alpha L^{1-\alpha}$, where $\alpha\in (0,1)$ is the capital share. The labor supply is exogenous, deterministic, and normalized to 1 during normal times. During an epidemic, workers are either susceptible, infected, or recovered, and only non-infected agents can supply labor. For simplicity, I assume that workers are hand-to-mouth and consume the entire wage. The financial market is complete, and capitalists maximize the constant relative risk aversion (CRRA) utility
$$\E_t\sum_{s=0}^\infty \beta^s\frac{C_{t+s}^{1-\gamma}}{1-\gamma},$$
where $\beta>0$ is the discount factor and $\gamma>0$ is the relative risk aversion coefficient. A stock is a claim to the representative firm's profit $K^\alpha L^{1-\alpha}-wL$, where $w$ is the wage.
Given the sequence of labor supply $\set{L_t}_{t=0}^\infty$, we can solve for the equilibrium stock price semi-analytically as follows. The first-order condition for profit maximization implies $w=(1-\alpha)(K/L)^\alpha$. Hence the firm's profit, which by market clearing must equal consumption of capitalists, is
\begin{equation}
C=K^\alpha L^{1-\alpha}-wL=\alpha K^\alpha L^{1-\alpha}.\label{eq:cons}
\end{equation}
Because the marginal buyer of the stock is a capitalist, the stochastic discount factor of the economy is given by $M_{t+1}=\beta(C_{t+1}/C_t)^{-\gamma}$. Letting $P_t$ be the stock price, the no-arbitrage condition implies
\begin{equation}
P_t=\E_t\left[\beta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma}(P_{t+1}+C_{t+1})\right].\label{eq:noarbitrage}
\end{equation}
Dividing both sides of \eqref{eq:noarbitrage} by $C_t$, letting $V_t=P_t/C_t$ be the price-dividend ratio, and using \eqref{eq:cons}, we obtain
\begin{align*}
V_t&=\E_t\left[\beta \left(\frac{C_{t+1}}{C_t}\right)^{1-\gamma}(V_{t+1}+1)\right]\\
&=\E_t\left[\beta\left((K_{t+1}/K_t)^\alpha (L_{t+1}/L_t)^{1-\alpha}\right)^{1-\gamma}(V_{t+1}+1)\right].
\end{align*}
Because capital growth is i.i.d.\ normal and labor supply is deterministic, we can rewrite the price-dividend ratio as
\begin{equation}
V_t=\kappa(L_{t+1}/L_t)^{(1-\alpha)(1-\gamma)}(V_{t+1}+1),\label{eq:PDratio}
\end{equation}
where $\kappa=\beta \e^{\alpha(1-\gamma)\mu+[\alpha(1-\gamma)]^2\sigma^2/2}$. In normal times, we have $L_t\equiv 1$ and $V_t\equiv \frac{\kappa}{1-\kappa}$, where we need $\kappa<1$ for convergence. During an epidemic, it is straightforward to compute the price-dividend ratio by iterating \eqref{eq:PDratio} using the boundary condition $V_\infty=\frac{\kappa}{1-\kappa}$.
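The backward iteration is straightforward to implement. A minimal Python sketch (names illustrative) follows, using the daily-frequency calibration from the next subsection for $\kappa$:
\begin{verbatim}
import numpy as np

def price_dividend(L, kappa, alpha, gamma):
    """Backward-iterate (eq:PDratio) over a labor path L[0..T]; V[T] is the
    steady state kappa/(1-kappa), assuming L is back to normal by T."""
    T = len(L) - 1
    V = np.empty(T + 1)
    V[T] = kappa / (1 - kappa)
    for t in range(T - 1, -1, -1):
        growth = (L[t + 1] / L[t]) ** ((1 - alpha) * (1 - gamma))
        V[t] = kappa * growth * (V[t + 1] + 1)
    return V

# kappa at daily frequency from the calibration in the text
Nd = 365.25
alpha, gamma_rra = 0.38, 3.0                   # capital share, risk aversion
beta_disc = np.exp(-0.04 / Nd)                 # daily discount factor
mu, sigma = 0.0511 / Nd, 0.0487 / np.sqrt(Nd)  # daily capital growth moments
kappa = beta_disc * np.exp(alpha * (1 - gamma_rra) * mu
                           + (alpha * (1 - gamma_rra)) ** 2 * sigma ** 2 / 2)
\end{verbatim}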
\subsection{Calibration}
I calibrate the model at daily frequency. I set the capital share to $\alpha=0.38$ and the relative risk aversion to $\gamma=3$, which are standard values. I assume a 4\% annual discount rate, so $\beta=\exp(-0.04/N_d)$, where $N_d=365.25$ is the number of days in a year. To calibrate capital growth and volatility, note that in normal times we have $L=1$ and hence $Y=K^\alpha$. Taking the log difference, we obtain $\log (Y_{t+1}/Y_t)=\alpha \log (K_{t+1}/K_t)$. Therefore according to the model, capital growth rate and volatility are $1/\alpha$ times those of output. I calibrate these parameters from the US quarterly real GDP per capita in 1947Q1--2019Q4 and obtain $\mu=0.0511$ and $\sigma=0.0487$ at the annual frequency.\footnote{At daily frequency, we need to divide these numbers by $N_d$ and $\sqrt{N_d}$, respectively.} For the transmission rate, using the point estimates in Section \ref{sec:estim}, I consider $\beta_0=0.29$. The recovery rate is $\gamma_0=0.1$. The initial condition is $(y_0,z_0)=(10^{-8},0)$.
Figure \ref{fig:asset_price} shows the stock price relative to potential output $P_t/Y_t^*$, where $Y_t^*=K_t^\alpha$ is the full employment output. The left and right panels are under the benchmark case and optimal policy, respectively. In the benchmark model, the stock price decreases sharply during the epidemic by about 50\%. However, the stock market crash is short-lived and prices recover quickly after the epidemic. This observation is in sharp contrast to the prediction from rare disasters models \citep{rietz1988,Barro2006QJE}, where shocks are permanent. Under the optimal policy, because the infection rate $y$ has two peaks, the stock price shows a W-shaped pattern. However, the decline is much more moderate at around 10\%.
\begin{figure}
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{fig_P_benchmark.pdf}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{fig_P_opt.pdf}
\end{subfigure}
\caption{Asset prices during epidemic.}\label{fig:asset_price}
\end{figure}
\section{Conclusion}
Because the situation with COVID-19 is rapidly evolving, any analysis based on current data will quickly become out of date. However, any analysis based on available data is better than no analysis. With these caveats in mind, I draw the following conclusions from the present analysis.
The COVID-19 epidemic is spreading except in China, Japan, and Korea. In many countries the transmission rate at present (\today) is very high at around $\beta=0.3$. This number implies that it takes only $1/\beta\approx 3$ days for a patient to infect another individual. Since it takes around 10 days to recover from the illness, the number of patients will grow exponentially and may overwhelm the healthcare system if no actions are taken. If the current trend continues, the epidemic will peak in early May 2020 in Europe and North America, at which point around 30 percent of the population will be infected. Because the recovery rate $\gamma$ is an uncontrollable biological parameter, the only way to control the epidemic is to reduce the transmission rate $\beta$, perhaps by restricting travel or social distancing. However, temporary measures only delay the onset of the epidemic and have no effect on the peak because the epidemic persists until the population acquires herd immunity. The optimal policy that minimizes the peak is to wait to introduce mitigation measures until a sufficient fraction of the population is infected, which can reduce the peak to 6.2\%. Policy makers in affected countries may also want to look at measures taken in China, Japan, and Korea, which have been relatively successful at controlling the spread so far.
Using the estimated transmission rates, I have solved a stylized production-based asset pricing model. The model predicts that the stock price decreases by 50\% during the epidemic, but recovers quickly afterwards because the epidemic is a short-lived labor supply shock. Under the optimal policy, the stock price exhibits a W-shaped pattern and remains about 10\% below the steady state level for half a year.
\newpage
\section{Introduction}
Miniaturized imaging devices such as endoscopes are widely used in clinical applications for disease diagnosis and surgical guidance \cite{wallace_minimally_2008, goetz_microscopic_2014}. Commercial endoscopes utilize camera image sensors or fibre optic bundles for image detection; however, image sensors usually lack information beyond the surface of a biological sample and fibre optic bundles suffer from artifacts and poor image resolution \cite{renteria_depixelation_2020}. Advanced endoscopes utilize a scanning laser beam for image formation, acquiring one pixel at a time \cite{myaing_fiber-optic_2006}. Such endoscopes may be significantly more compact and can be integrated with a variety of imaging modalities that provide valuable functional, molecular or sub-surface structural information. Miniature optical imaging is also of great interest in neuroscience as head-mounted systems for real-time monitoring of brain activity \cite{klioutchnikov_three-photon_2022, guan_deep-learning_2022}.
Miniature laser scanning mechanisms are typically achieved by microelectromechanical systems (MEMS) mirrors/actuators \cite{chen_high-speed_2007, park_forward_2012} or resonant fibre scanners \cite{liu_rapid-scanning_2004, lee_scanning_2010, park_high-speed_2020, rivera_compact_2011}. While these mechanisms enable compact high speed laser scanners for endoscopy, high driving voltages and complex fabrication are required, thus limiting the affordability and accessibility of such devices. High driving voltages exceeding 40-50V are not ideal due to medical device safety requirements, which force the use of lower voltages and consequently a very limited field-of-view (FOV) of a few hundred microns in width. To increase the FOV, methods such as mosaicing sequential images \cite{hendargo_automated_2013}, wide-field scanning \cite{song_long-range_2016} or parallel imaging with space-division multiplexing \cite{zhou_space-division_2013} were previously reported in non-endoscopic systems. It is challenging to design endoscopic scanning devices that are compact yet have a large FOV.
Miniaturized resonant fibre scanners, in particular, are difficult to manufacture at micro-scales and can suffer from poor fibre alignment and imprecise assembly. These result in nonlinear coupling effects, in which a phenomenon known as 'whirling' occurs and distorts the intended scan trajectory. While most groups aim to eliminate the whirling effect in their fibre scanners \cite{kundrat_high_2011, kaur_scanning_2021}, it has been reported to be possible to produce a stable whirling motion for two-dimensional (2D) scanning using a unidirectional actuator near resonance \cite{hyer_whirling_1979, haight_stability_2005}. Wu et al. exploited the whirling effect by introducing asymmetry in their fibre scanner via the addition of rigid structures on the fibre cantilever \cite{huang_resonance-based_2007, wu_realization_2007, wu_two-dimensional_2009}, and reported the generation of Lissajous patterns covering a 2D area using a piezoelectric bender, however practical imaging demonstrations were limited.
We present a fibre scanner that generated a tunable, multi-millimetre 2-D scan with a unidirectional piezoelectric bender, by maximizing the mechanical coupling in the orthogonal axis (Fig. 1). This coupling was tuned by adjusting the magnitude and direction of an angled force that was applied onto the bender via orthogonal threaded screws. Multiple FOVs were generated and stitched from a set of resonant scanning fibres mounted to the bender for a further extended FOV. Imaging was demonstrated on a wide range of biological samples.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig1.png}
\caption{(A) Illustration of the fibre-bender assembly that is capable of generating 2D scans with a single unidirectional actuator. A bench-top clamp set up is shown, where the scan circularity can be tuned by applying an angled force on the bender in the major y-axis ($F_y$) and minor x-axis ($F_x$). The force applied can be adjusted by translating the screws in both axes. (B) Endoscopic probe that consists of a miniaturized clamp setup, where a single screw mounted at a 5 degree angle can be translated to adjust the force applied ($F_{yx}$) in both axes. Scale bar 1mm, unless specified.}
\label{fig:1}
\end{figure}
\section{Results}
We designed a mounting clamp for a piezoelectric bender actuator that enables precise control and optimization of the mechanical coupling from the actuator's major axis to its orthogonal minor axis, producing reliable and tunable 2D motion. The clamp used a screw in each axis to impart an angled force at the base of the actuator (Fig. 1A). The mechanism was realized as a miniaturized bench-top setup (Fig. 1B) and was also further shrunk into an endoscopic footprint, where a single angled screw was used to deliver the required clamp force (Fig. 1C). The imaging examples in this study were taken on the bench-top setup for convenience, although we found the endoscopic implementation to also be capable of reliable and precise scan performance. Three optical fibres were mounted in parallel at the end of the bender using an adhesive glue guide template (Fig. 1D), with free lengths carefully set to be nearly identical to ensure near-equal resonant frequency when vibrated in tandem. The fibres were laterally spaced along the bender edge such that their circular motions overlapped slightly to facilitate image mosaicing (stitching). This further necessitated the use of ball lensing at each fibre end (Fig. 1E), since the use of conventional microlens arrays or gradient index lenses would preclude smooth FOV stitching, and a single large lens covering the entire scanned width (even larger than the bender width) would have very low usable numerical aperture for each fibre and hence poor lateral resolution.
Detailed characterization of the clamping force and fibre resonances was performed (Fig. 2). The clamping force was quantified using a paper-thin force sensing resistor (FSR) temporarily inserted between the clamp and the bender for purpose of characterization. Fig. 2A shows the relationship of scan circularity (in other words, substantial minor-axis coupling) with increasing drive voltage and clamping force in the major axis ($F_y$). Higher voltages generally produced larger scans but the vibration amplitude in the major axis dominated the trajectory, leading to more elliptical or near-linear scans. At high voltages, a larger clamping force $F_y$ was required to maintain the scan circularity. The clamping force $F_y$ had to be sufficiently high to impart a force onto the bender, and sufficiently low for the bender to vibrate freely and couple into the orthogonal axis.
The heatmap of Fig. 2A shows a clear operating regime where circularity can be achieved over a broad range of voltages, providing evidence for the feasibility of amplitude-modulated spiral scanning and the need for elliptical correction in the image reconstruction. The eventual peak voltage that should be selected for spiral imaging would be determined by the manufactured fibre spacing, intended FOV overlap and clamping force.
The frequency responses of the fibres (Fig. 2C) were nearly identical, an important manufacturing outcome facilitated by the adhesive guide template (Fig. 1D) and the use of micrometer stages to control fibre lengths. In addition to the frequency response plot, a plot of 'circularity', i.e.\ the ratio of minor- to major-axis displacement, was generated. The circularity plot showed how the peak resonance in the major axis was at a circularity minimum, since the major axis response was very large and the scan was likely elliptical. The circularity minimum also tended to be bounded by local maxima. The scans were further tuned by adjusting $F_y$ to optimize circularity at the preferred operating point of 500 Hz, selected based on the imaging system parameters. In other words, the frequency of the circularity maxima could be tuned using $F_y$. Operating off-resonance was also more likely to achieve similarly-sized scans across all fibres, where the major axis displacements are reduced to a similar magnitude. This relaxed the requirements for the precision of the individual fibres' resonant frequency, enabling repeatable and uniformly shaped scans despite manufacturing tolerances between fibres.
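For reference, the circularity of a measured scan can be estimated from a sampled tip trajectory, for example via the singular values of the mean-centred position data. A minimal Python sketch (illustrative, not the exact analysis code used here):
\begin{verbatim}
import numpy as np

def scan_circularity(xy):
    """Ratio of minor- to major-axis RMS displacement of a sampled 2D tip
    trajectory xy (N x 2), from the SVD of the mean-centred data
    (equivalent to fitting the principal axes)."""
    centred = xy - xy.mean(axis=0)
    s = np.linalg.svd(centred, compute_uv=False)  # singular values, descending
    return s[1] / s[0]

# Example: a slightly elliptical scan has circularity b/a
t = np.linspace(0, 2 * np.pi, 1000)
xy = np.column_stack([1.0 * np.cos(t), 0.8 * np.sin(t)])
print(scan_circularity(xy))  # ~0.8
\end{verbatim}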
Lissajous and higher-order patterns could also be generated on the fibre scanner as an extension of the circular scan (Fig. 2B), demonstrating different potential scanning patterns to be used in image formation, although these were not necessary for spiral scanning but are presented here for interest and completeness. Further work is warranted on the mechanism and applications for phase control between the axes and more complex trajectory generation.
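As a purely kinematic illustration (a generic Lissajous parametrization, not a mechanical model of the fibre-bender assembly), trajectories of this family can be generated as follows:
\begin{verbatim}
import numpy as np

def lissajous(freq_ratio, phase, n=2000):
    """Generic Lissajous trajectory: the two axes oscillate at frequencies
    with ratio `freq_ratio` and relative phase `phase` (illustrative)."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = np.sin(freq_ratio * t + phase)   # minor-axis deflection
    y = np.sin(t)                        # major-axis deflection
    return x, y

# e.g. frequency ratios 1, 3/2 and 2 with a 45-degree relative phase
curves = [lissajous(r, np.pi / 4) for r in (1.0, 1.5, 2.0)]
\end{verbatim}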
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig2.png}
\caption{(A) Heatmap of scan circularity with increasing drive voltage and clamping force in the major axis ($F_y$). $F_y$ is measured with a force sensing resistor (FSR) placed in between the clamp and bender. The force applied in the minor axis ($F_x$) is arbitrarily chosen and fixed for all measurements. The bender is actuated at 15V, 500 Hz. (B) Lissajous scan patterns were observed at different ratios of driving frequency ($f_D$) to the designed resonant frequency of the scanning fibre ($f_y$), and at apparent phase differences between the two axes. The phase difference was controlled by adjusting the force applied in the minor axis ($F_x$) in either direction using either the left or right side screw. The bender is actuated at 37.5V, 500 Hz. (C) Frequency response plot in the major (y) and minor (x) axis of three scanning fibres, and a corresponding circularity plot of the ratio of the displacement in the minor to major axis (x/y) against the driving frequency. Maximum circularity is tuned to 500 Hz, a frequency near but off-resonance in the major axis. The bender is actuated at 15V.}
\label{fig:2}
\end{figure}
Imaging was validated on a wide range of samples including printed targets for calibration purposes and several classes of biological samples. A high speed swept source optical coherence tomography (OCT) system was used as the imaging engine, where the long coherence length of the vertical cavity surface emitting laser (VCSEL) source enabled multiplexing of the fibre array along the depth of an OCT axial scan (Methods). A printed grid of 100~$\mu$m pitch on a clear plastic substrate overlaid on a sheet of white paper (Fig. 3E) was imaged to visualize scan distortions due to various asymmetries in amplitude and phase between the axes of the scan. Distortions could be largely corrected using a number of simple modifications to the mapping equations (Methods), although a slowly varying sinusoidal distortion remained (Fig. 3B), which did not significantly affect the imaging of real-world samples. A larger field of coverage, stitching the three FOVs, was demonstrated by imaging the printed word 'Ruler' (Fig. 3C) on the same plastic calibration target and some floral patterns on local paper currency (Fig. 3D). The amount of overlap between FOVs is determined by a combination of design factors including the manufactured fibre spacing and the desired size of each FOV and the final mosaicked image, which can be deliberately defined based on the application. Non-overlapping fields may be preferred in some biological applications such as maximizing the imaging coverage of adjacent wells on a cell culture plate, while overlapping fields may be preferred for the imaging of larger continuous samples where the overlap can enable accurate stitching at the cost of some coverage area.
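For image reconstruction, each OCT axial-scan index is mapped to spiral coordinates, and a simple elliptical correction can be folded into this mapping. A minimal Python sketch (with an assumed linear amplitude ramp and illustrative correction parameters; not the exact Methods mapping):
\begin{verbatim}
import numpy as np

def spiral_coords(n_samples, f_scan, f_sample, ax=1.0, ay=1.0, phase=0.0):
    """Map sample indices of an amplitude-ramped resonant scan to (x, y).
    ax, ay rescale the two axes and `phase` offsets the minor axis, a
    simple correction for elliptical distortion (illustrative)."""
    t = np.arange(n_samples) / f_sample
    r = t / t[-1]                       # linear amplitude ramp, 0 -> 1
    x = ax * r * np.cos(2 * np.pi * f_scan * t + phase)
    y = ay * r * np.sin(2 * np.pi * f_scan * t)
    return x, y

# e.g. a 500 Hz scan sampled at 100 kHz for one spiral of 200 cycles
x, y = spiral_coords(40000, 500.0, 100e3, ax=1.05, phase=0.1)
\end{verbatim}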
For proof of concept studies in tissue, imaging of a human fingerprint and \emph{ex vivo} pig stomach tissue was performed (Fig. 4). The three \emph{en face} views of the fingerprint could be approximately stitched to reconstruct a familiar whorl pattern, although the example did not stitch smoothly due to non-flatness of the surface. \emph{En face} imaging of the stomach showed gastric pit architecture, while cross-sections showed some columnar structure although tissue depth penetration was low due to degrading viability of \emph{ex vivo} tissue over time. The capability of visualizing complex tissue structures is important for endoscopic scanning. While a wider FOV from manual stitching of a single circular field of an endoscopic scanner swept across an area of interest is possible in principle \cite{lurie_rapid_2015}, the limited frame rates or volume rates of advanced modalities coupled with motion of living tissue make this difficult in practice. Our multi-beam approach enables intrinsic field stitching for an extended field.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig3.png}
\caption{\emph{En face} OCT images of (A) a printed grid of pitch 100 $\mu$m, (B) the corrected calibration grid, (C) the printed word 'Ruler' from three imaging fibres, and (D) a printed floral pattern on local paper currency from three imaging fibres. (E) Ground truth photos of imaging targets. \emph{En face} images are 30 $\mu$m projections. Scale bar 1 mm.}
\label{fig:3}
\end{figure}
To demonstrate applications in bench-top biology, imaging of cell spheroids and a small deceased ant was performed (Fig. 5). The 3-D spheroids could be clearly appreciated in both \emph{en face} and cross-sectional planes; such imaging has been studied for the longitudinal, non-destructive monitoring of viability (through visualization of the necrotic core) and of treatment response in 3-D cultures, including more sophisticated organoid models \cite{el-sadek_optical_2020}. A wide bender could scan an arbitrary number of optical beams for parallel screening or monitoring applications. The subsurface \emph{en face} images of the ant showed deeper internal structures that likely corresponded to vital organs or the digestive tract, a unique capability which may be useful for non-destructive functional studies in certain model organisms.
\section{Discussion}
Piezoelectrically actuated resonant fibre scanners have been of great and enduring interest for over two decades, promising microscopic scanning in a tiny package. However, several fundamental limitations of the platform have stymied both technical and translational advances. First, these devices traditionally require a millimeter-scale thin-walled piezoceramic tube to generate 2-D motion, whose fabrication is known to be challenging (the smallest tubes are achievable only by a couple of research labs and one major manufacturer worldwide), expensive (selling for hundreds of dollars each) and hence difficult to scale. These tubes have virtually no existing market demand outside of atomic force microscopy instruments, a specialized field with low usage volumes. Second, piezoelectric actuation produces small FOVs even at high electrical voltages, which are hard-limited by medical device safety recommendations to nominally around 40 V. Sub-millimetre FOVs are very difficult to use in a real-world clinical context and are often insufficient for biological studies of tissue or living models. Third, innovation in the design of such scanners has slowed greatly: the classic centration of an optical fibre in a piezoelectric tube with quartered electrodes, first proposed by the University of Washington in 2001 \cite{seibel_miniature_2001}, has been challenged by only a handful of interesting ideas.
In this work we propose a completely new approach to piezoelectric fibre scanning that eliminates the tube and instead uses a powerful and extremely low cost ($<$ US\$10) planar bending actuator to produce a large 2-D motion at relatively low voltage. Piezoelectric benders are traditionally understood to be relevant only to linear scanning applications. They are extremely cheap owing to ease of manufacturing, and are easily capable of large unidirectional displacements at relatively low voltages. By harnessing and amplifying the classically undesired coupled vibration in the orthogonal axis, our design is the first to preserve the advantages of a 1-D actuator while demonstrating new functionality as a high-performance 2-D scanner in its own right. The FOV is not limited by the motion of a single fibre but is multiplexed along the entire width of the bender via an array of optical fibres resonating in tandem, uniquely enabled by its planar geometry. The scanner platform is scalable to benders of different sizes, and has the potential to be further miniaturised for endoscopic applications. This completely new approach to miniaturized optical scanning promises new avenues of research and development of such devices, and the substantially lower price point at little cost to performance could be an important factor in accelerating commercialization. For the smallest-diameter endoscopes, at $\sim$1 mm diameter, piezoelectric tubes remain hard to beat for now, although their very small scan range (a few hundred microns) and limited FOV restrict this class of devices to niche use cases such as the interrogation of small tubular organs.
2-D optical scanning in a small, economical package has broad relevance across fields. OCT was the chosen imaging modality due to the capability of spatial multiplexing by optical path length (OCT depth) using a state-of-the-art long-coherence-length swept-wavelength laser. This enabled each fibre to perform imaging in parallel. High-speed OCT also intrinsically enabled the third and fourth dimensions of image acquisition (rapid 3-D). However, this capability of efficient multi-beam 2-D lateral scanning by 1-D actuation is highly generalizable to other optical scanning modalities, from lidar to fluorescence microscopy, where the need to fit on a self-driving car roof for sensing, on an augmented/virtual reality headset for image projection, or in a scientific setup for image acquisition also motivates compact designs. Each imaging fibre could be multiplexed in time using pulsed illumination and optical path delays if using a single detector, or the fibres could serve separate complementary roles as illumination or detection paths. For purposes of image reconstruction, one fibre could serve a real-time calibration function by imaging a fixed calibration target embedded within the device enclosure, since the fibres have virtually identical trajectories. The known spatial relationship between the fibres could also be leveraged for motion tracking applications over an elongated FOV. These are just a few concepts, novel to miniature microscopy, that are potentially enabled by our design.
This proof of concept study had a number of limitations. We did not rigorously explore the relationship between the FOV size and actuator length. In certain scenarios where an endoscope with a shorter rigid tip is required for easier navigation of tight corners, it could be critical to reduce the actuator length. The deflection of a piezoelectric bender is proportional to the square of the length (while insensitive to its width), hence the FOV should be expected to scale accordingly. Our optical design had a few disadvantages that could also limit practical use: the short working distance of the ball lenses meant there was no room for a front window that would be needed to enclose the scanner in a real-world scenario, and the significant curvature of the imaging plane due to the large fibre deflection would likely require further lensing in front of the fibres to achieve a telecentric field.
In summary, we present an innovative approach to miniaturized imaging by the use of piezoelectric benders for 2-D optical scanning. We anticipate a broad range of use cases from consumer imaging, medical endoscopy, to neuro-optical research. Important future developments include further improvements to numerical aperture and imaging resolution that would enable other microscopy modalities, and studies of long-term scan repeatability that would justify more demanding deployment applications.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig4.png}
\caption{\emph{En face} OCT images of (A) a fingertip and (B) \emph{ex vivo} pig stomach tissue from three imaging fibres, with corresponding depth-multiplexed cross-sectional OCT images (C, D). Arrows in white indicate a sweat duct in the human finger and a gastric pit in the stomach antrum, respectively. \emph{En face} images are 30 $\mu$m projections. Scale bar 1 mm.}
\label{fig:4}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{fig5.png}
\caption{\emph{En face} OCT images of (A) an ant and (B) spheroids in a 5 x 5 mm well from three imaging fibres, with corresponding depth-multiplexed cross-sectional OCT images (C, D). The ant head appeared significantly shorter than the thorax due to the head tilting downward. Arrows in white indicate the ant's circulatory system and digestive tract. \emph{En face} images are 30 $\mu$m projections. Scale bar 1 mm.}
\label{fig:5}
\end{figure}
\section{Methods}
\subsection{Probe assembly}
Three optical fibres were attached onto a piezoelectric bender actuator with an epoxy-filled 3D-printed glue guide template to position the fibres in an array at equal intervals of 0.6 mm (Fig.~\ref{fig:1}). Each fibre, with a cantilever length of 14.1 mm, was precisely measured with the aid of a micrometer jig to produce a designed resonant frequency of around 480-490 Hz, after accounting for manufacturing tolerances.
A ball lens was fabricated at the tip of each fibre: a coreless fibre (FG125LA) with a splice distance of 0.5 mm was first spliced to a single-mode fibre (SMF-28), and a ball lens of curvature radius 85 $\mu$m was created using a specialty fibre fusion splicer (Fujikura FSM-100P+) \cite{wu_ultrathin_2022}. The focused spot size and working distance of the ball lens were measured to be 12.6 $\mu$m (full-width at half-maximum) and 0.35 mm, respectively.
The fibre-bender assembly was mounted in a clamping structure that consisted of adjustable M1.4 screw clamps. To tune the circularity of the scan, screws were translated to apply an angled force on the bender. A miniaturised handheld probe set-up that consisted of an angled M1 top screw, in place of top and side screws, was also constructed to demonstrate potential imaging applications in a compact package. The 3D-printed endoscopic probe housing, that encased the fibre-bender assembly and the clamping structure, had a diameter of 12 mm and length of 70 mm.
\subsection{Scan mechanism and characterisation}
A bench-top set-up that consisted of top and side screws was constructed to characterise the scan behaviour.
By imparting a fixed force in the minor axis ($F_x$) and tuning the force in the major axis ($F_y$), an angled force ($F_{yx}$) was introduced and a tunable scan, from linear to elliptical to circular, was achieved. The circularity of a scan is described by the x/y ratio, i.e. the ratio of the width (displacement in the minor axis, x) to the height (displacement in the major axis, y) of the scan, such that a perfectly circular scan has circularity 1 and a linear scan 0. To assess the circularity of the scans and the fibres' frequency responses, the scan trajectory of the imaging fibre tip was captured by a camera and its maximum scan height and width were measured using ImageJ.
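As a minimal illustration of this metric, the following Python sketch (a hypothetical helper, not part of our acquisition software; the example trajectory is synthetic) computes the circularity of a captured tip trajectory from its bounding extents, mirroring the manual ImageJ measurement:
\begin{verbatim}
import numpy as np

def circularity(x, y):
    # Ratio of minor-axis (x) to major-axis (y) scan extent:
    # 1.0 for a perfectly circular scan, 0.0 for a line.
    width = x.max() - x.min()    # minor-axis displacement
    height = y.max() - y.min()   # major-axis displacement
    return width / height

# Synthetic example: a slightly elliptical 500 Hz scan
t = np.linspace(0.0, 0.01, 5000)              # 5 full periods
x = 0.8 * np.sin(2 * np.pi * 500 * t)         # minor axis
y = 1.0 * np.sin(2 * np.pi * 500 * t + np.pi / 2)
print(circularity(x, y))                      # ~0.8
\end{verbatim}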
To determine the optimal operating point for maximized circularity and scan size, the voltage drive of the bender and the force applied in the major axis ($F_y$) were varied at the 500 Hz operating frequency, near the scanning fibres' resonant frequency. Force $F_y$ was measured with a force sensing resistor (FSR) placed in between the bender and the top clamp. The FSR was connected in a voltage divider circuit, which output a relative resistance value correlated with force $F_y$. The bender was actuated at 15 V, 500 Hz.
The bender was further characterized at different driving frequencies (167 Hz, 250 Hz, 500 Hz) corresponding to simple fractions of the fibres' resonant frequency ($f_y$). Lissajous scan patterns and their phase-difference variants were controlled by arbitrarily adjusting the force applied in the minor axis ($F_x$) in either direction using the left or the right side screw. The scan trajectory of the imaging fibre tip was captured by a camera. The bender was actuated at 37.5 V.
\subsection{OCT system and piezoelectric drive}
The OCT system was a Michelson interferometer with two circulators, a standard design commonly reported for 1300 nm systems \cite{liang_endoscopic_2017}. The light source was a commercial microelectromechanical system vertical cavity surface emitting laser (MEMS-VCSEL) source with a 200 kHz axial scan rate, 8 mm imaging range in air and $\sim$100 nm bandwidth, the latter known in the literature to enable at best 14-15 $\mu$m axial resolution in air. By adjusting electrical cable lengths and a software-controlled sub-nanosecond time delay in the optical clock signal, we made a best-effort optimization of the axial resolution to $\sim 15\mu$m in air (11 $\mu$m in tissue), which could be further optimized by dispersion matching and compensation but was not a primary objective of the present study. The imaging range was realized using an AlazarTech high-speed digitizer with 1 GHz bandwidth and 1 GS/s sampling rate, using the optical clock signal provided by the laser. Data acquisition and live trajectory-mapped image previews were produced by custom software written in Python. A single-channel amplitude-modulated sinusoidal drive at 500 Hz was generated from a National Instruments card and amplified (Piezodrive) before passing to the actuator.
\subsection{Image acquisition and reconstruction}
The record length for each axial scan trigger was 1792 samples. Each volume consisted of 400 x 800 axial scans (in other words, each spiral volume consisted of 800 circles or 'rings'). The spiral was undersampled (below Nyquist) in the circular direction due to the limited axial scan rate at the design resolution, and oversampled (well above Nyquist) in the radial direction to partly compensate for the circular undersampling and to enable more nearest-neighbor averaging when mapping the trajectory to a Cartesian grid (below).
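The circumferential sampling claim can be checked with a back-of-envelope calculation; in the Python sketch below, the outermost scan radius is an illustrative assumption (the 12.6 $\mu$m spot size is the measured value from the Methods):
\begin{verbatim}
import numpy as np

spot_um = 12.6                  # measured focal spot (FWHM)
samples_per_ring = 400
radius_mm = 1.0                 # assumed outermost scan radius

# Nyquist: at least two samples per resolvable spot
circumference_um = 2 * np.pi * radius_mm * 1e3
required = 2 * circumference_um / spot_um
print(samples_per_ring, "acquired vs", int(required), "required")
# -> the outer rings are circumferentially undersampled
\end{verbatim}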
Image data was acquired while the fibre tip traced an approximately spiral trajectory, hence the data was mapped to Cartesian coordinates using a lookup table based on the parametric equations defining a spiral geometry:
\begin{align*}
x &= A_x(t)\sin(2\pi f t + \phi_0) \\
y &= A_y(t)\sin(2\pi f t + \phi_0 + \phi(t))
\end{align*}
where $A_x(t)=A_y(t)=At/t_0$ (ramp) and $\phi(t)=\pi/2$ for a standard circular spiral, and $\phi_0$ is a phase offset used in distortion correction (see later 'Swirl artifact'). For each mapped location, a neighborhood of 5-20 elements (roughly 4x4 neighborhood) was averaged to improve image quality. Previous studies of 2-D resonant fibre scanners used position-sensing detectors (PSD) to measure the actual motion paths of the fibre \cite{lee_scanning_2010}, in order to capture nonlinear motion effects that would produce image artifacts if not precisely modeled by the lookup table. We found that our scanner's motion could be adequately corrected with a few simple modifications to the standard spiral equations. These corrections are likely applicable to other types of 2-D resonant spiral scanning systems.
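A minimal Python sketch of this baseline mapping is given below; it bins the trajectory points into pixels and averages coincident samples (the forward, scatter form of the lookup table). The function name, grid size, and argument layout are illustrative assumptions; the corrections enumerated next enter as modifications to $A_{x,y}(t)$ and $\phi_0$.
\begin{verbatim}
import numpy as np

def map_spiral_to_cartesian(samples, t, A, f, phi0, grid_n=512):
    # Reconstruct the spiral trajectory from the parametric
    # equations (linear amplitude ramp, pi/2 phase offset).
    t0 = t[-1]
    amp = A * t / t0
    x = amp * np.sin(2 * np.pi * f * t + phi0)
    y = amp * np.sin(2 * np.pi * f * t + phi0 + np.pi / 2)

    # Bin each sample into its pixel; average multiple hits.
    ix = ((x / A + 1) / 2 * (grid_n - 1)).astype(int)
    iy = ((y / A + 1) / 2 * (grid_n - 1)).astype(int)
    image = np.zeros((grid_n, grid_n))
    counts = np.zeros((grid_n, grid_n))
    np.add.at(image, (iy, ix), samples)
    np.add.at(counts, (iy, ix), 1)
    return image / np.maximum(counts, 1)
\end{verbatim}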
\begin{enumerate}
\item Swirl artifact (rotationally symmetric). Swirl is often produced by an incorrect $\phi_0$, which can be understood as the angular position (or phase) at which the fibre begins its motion. $\phi_0$ is stable and does not drift significantly over the course of an imaging session, but may change when mechanical forces on the actuator are adjusted. $\phi_0$ has a global optimum in (0, 2$\pi$) and can be found quite easily by trial and error or optimization.
\item Bloat artifact (rotationally symmetric). Bloat occurs in the central area of the field of view and is due to the scan being initially elliptical before eventually ending up circular. It was observed both visually and from position-sensitive detector measurements that the coupled axis tended to ramp more slowly than the driven axis. This was modeled as follows:
\begin{align*}
A_x(t) &= At/t_0 \\
A_y(t) &=
\begin{cases}
\alpha A t/t_0 & \text{if $t<\beta t_0$} \\
\beta \alpha A + \frac{(1-\beta \alpha)A}{t_0-\beta t_0}(t-\beta t_0) & \text{if $\beta t_0<t<t_0$}
\end{cases}
\end{align*}
In other words, the coupled-axis amplitude is simply formed by two ramp segments joined at $t=\beta t_0$, where the first segment is a ramp with amplitude scaled down by a factor $\alpha$ and the second segment catches up to the same peak amplitude $A$ (see the sketch after this list). $\beta$ can be visually estimated from the diameter of the bloat artifact relative to the diameter of the entire FOV, while $\alpha$ can be interpreted as an 'ellipticity factor' for the inner scans and is approximately the scale of the feature distortion created by the bloat.
\item Wavy artifact. This artifact is likely caused by a nonlinear and evolving phase relationship between the two axes (rather than simply a constant $\pi/2$), leading to rotating elliptical scans within the FOV; at the time of writing we had not yet derived a relatively simple model capable of correcting it.
\end{enumerate}
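As a concrete illustration of the bloat model above, the following Python sketch evaluates the two-segment ramp $A_y(t)$; the vectorized function is ours, but the piecewise expression is exactly the one given in item 2:
\begin{verbatim}
import numpy as np

def ramp_coupled_axis(t, t0, A, alpha, beta):
    # Segment 1 (t < beta*t0): ramp scaled down by alpha,
    # producing the elliptical inner scans (the 'bloat').
    # Segment 2: catches up to the full amplitude A at t = t0.
    t = np.asarray(t, dtype=float)
    seg1 = alpha * A * t / t0
    seg2 = (beta * alpha * A
            + (1 - beta * alpha) * A * (t - beta * t0)
            / (t0 - beta * t0))
    return np.where(t < beta * t0, seg1, seg2)
\end{verbatim}
The two segments meet continuously at $t = \beta t_0$ (both evaluate to $\alpha \beta A$) and the second segment reaches $A$ at $t = t_0$.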
\subsection{Imaging sample preparation}
A broad range of biological samples was prepared for imaging, including stomach tissue, an ant and cell spheroids.
Pig stomach was purchased from a local market and its inner lining was cleaned with Phosphate Buffered Saline. The stomach antrum was lightly stretched and pinned down to flatten the tissue for imaging.
The human breast adenocarcinoma epithelial MCF-7 cell line was purchased from the American Type Culture Collection (ATCC). Cells were cultured in RPMI Medium 1640 (22400089, Gibco) supplemented with 10\% fetal bovine serum (FBS-HI-12A, Capricorn Scientific), 100 units/mL penicillin and 100 $\mu$g/mL streptomycin (15140-122, Gibco) in 5\% CO$_2$ at 37$^\circ$C. Formation of spheroids was induced by seeding MCF-7 cells at 2750 cells/well for 700 $\mu$m spheroids, in 96-well F-bottomed ultra-low attachment plates (174929, Life Technologies) with shaking for 2-3 days. The spheroids were transferred from the well plates to a 5 x 5 mm custom 3D-printed well for imaging.
\section{Funding}
Singapore National Research Foundation Fellowship NRFF13-2021-0002 and the Institute of Bioengineering and Bioimaging, A*STAR.
\section{Acknowledgments}
The authors are grateful to Ko Hui Tan and Jiyong Lim for laboratory and experimental support, and to Dr. Hongwan Liu for valuable scientific discussion.
\section{Disclosures}
The authors declare no conflict of interest.
\printbibliography
\end{document}
\section{Introduction}
The collision of heavy ions is the only way to produce and study hot and dense strongly interacting matter in the laboratory. Therefore, relativistic nuclear collisions represent a unique opportunity to explore QCD (Quantum ChromoDynamics) -- the theory of the strong interaction -- in extreme conditions of temperature and density. A vibrant experimental program is currently under way at RHIC (the Relativistic Heavy Ion Collider, at Brookhaven National Laboratory, NY) and at the LHC (the Large Hadron Collider, at CERN, in Geneva). The data accumulated at these facilities represent unequivocal evidence that a new state of matter has been created in the collision of large nuclei: the Quark-Gluon Plasma (QGP) \cite{[{See, for example, }][{, and references therein.}]Braun-Munzinger:2014pya}. One of the most striking features of the QGP is that its time-evolution can be modelled with relativistic fluid dynamics. Early studies concentrated on ideal fluids \cite{Kolb:2003dz,*Huovinen:2003fa}, but the realization that hadronic data from relativistic heavy ion collisions could be used to extract the transport coefficients of QCD -- in particular the shear viscosity to entropy density ratio, $\eta/s$ -- opened new horizons. More specifically, the azimuthal anisotropy of the particle momentum distribution in a single event can be characterized by $v_n$, the coefficients of Fourier expansion in azimuthal angle $\phi$ \cite{Luzum:2013yya}:
\begin{eqnarray}
E \frac{d^3 N}{d^3 p} &=& \frac{1}{2 \pi}
\frac{d N}{p_T d p_T d y}\left[1 + 2 \sum_{n=1}^\infty v_n \cos n \left(\phi - \Psi_n\right)\right], \nonumber \\
\end{eqnarray}
where $p_T$ is the transverse momentum,
the $\Psi_n (p_T, y)$ are orientation angles, and $y$ is the rapidity.
Viscous hydrodynamics calculations have established a quantitative connection between the empirically-extracted $v_n$'s and the value of $\eta/s$, the shear viscosity to entropy density dimensionless ratio \cite{Romatschke:2007mq,Teaney:2003kp}.
Studies of the different experimental anisotropic flow coefficients have concluded that the phenomenologically-extracted $\eta/s$ values were close to the conjectured lower bound of $\eta/s = 1/4 \pi$ \cite{Kovtun:2004de}.
The ability for fluid-dynamical models to quantitatively reproduce the measured behavior of the flow anisotropy coefficients of hadrons has been one of the major highlights of the entire relativistic heavy-ion program \cite{Jacak:2012dx}.
A precise determination of quantities such as $\eta/s$ is made difficult by significant uncertainties in the description of the early time dynamics and the effect of additional sources of dissipation~\cite{Ryu:2015vwa}, among others. Electromagnetic observables produced in ultrarelativistic heavy ion collisions can be used as additional probes of the properties of the QGP, and can help constrain the transport coefficients of QCD.
The task of measuring photons and subtracting the large background of hadronic decay photons has been undertaken at RHIC~\cite{Adare:2008ab,Adare:2011zr,Adare:2014fwh,Adare:2015lcd} and the LHC~\cite{Wilde:2012wc,Lohner:2012ct,Adam:2015lda}, and the direct photon transverse momentum spectra and azimuthal anisotropy are available at both colliders. The observation that the magnitude of the direct photon $v_2$ was similar in size to that of hadrons, along with the exponential behaviour of the measured direct photon spectra at low transverse momentum, suggested that both photons and hadrons were produced by a similar mechanism: (quasi-)thermal production.
While event-by-event hydrodynamical models of heavy ion collisions were shown repeatedly to provide a good description of hadronic observables~\cite{Gale:2013da, Heinz:2013th}, similar attempts at describing direct photon measurements did not meet with the same success \cite{Dion:2011pp,*Chatterjee:2013naa,*Shen:2013cca}. A simultaneous description of the direct photon spectra and momentum anisotropy proved to be a particular challenge.
In response to this apparent tension with measurements, investigations into additional photon production mechanisms multiplied~(e.g. \cite{McLerran:2015mda,Linnyk:2015tha,Tuchin:2014pka,Basar:2014swa,McLerran:2014hza,Gale:2014dfa,Monnai:2014kqa}).
In this paper, a hydrodynamical calculation of direct photon production is presented. It uses an up-to-date hydrodynamical model of heavy ion collisions~\cite{Ryu:2015vwa} along with the latest photon emission rates~\cite{Turbide:2003si,Heffernan:2014mla,Holt:2015cda}. Emphasis is put on photon emission from the expanding QCD medium, referred to as ``thermal photons''.
The aim of this work is to present an up-to-date calculation of thermal photons using the latest developments in hydrodynamical simulation of heavy ion collisions. Understanding the current status of thermal photon production in heavy ion collisions will help guide future effort at identifying and constraining alternative photon production mechanisms.
\section{Hydrodynamical model}
\label{sec:hydro}
The relativistic fluid dynamics background that provides the time-dependent environment in which the photon-generation mechanisms evolve is the same here as that used for hadrons in Ref. \cite{Ryu:2015vwa}. We summarize its main features again here, for convenience. The initial state of the nuclear collision is modelled using the IP-Glasma approach \cite{Schenke:2012wb}, which builds on the ``impact parameter dependent saturation model'' (IP-Sat) \cite{Bartels:2002cj,Kowalski:2003hm} that constrains the distribution of initial colour sources by drawing on electron-proton and electron-nucleus collision data. The gluon fields are then evolved in space and time using the classical Yang-Mills equations, $\left[D_\mu,F^{\mu \nu}\right] = 0$, up to a proper time $\tau_0$ of the order of the inverse of the saturation scale.
The energy density, $\epsilon$, and the flow velocities, $u^\mu$, from the Yang-Mills evolution are then used to initialize the hydrodynamical evolution. This is achieved by solving $u_\mu(\tau_0)T_{\textrm{CYM}}^{\mu \nu} (\tau_0) = \epsilon (\tau_0) u^\nu (\tau_0)$ where $T_{\textrm{CYM}}^{\mu \nu}$ is the classical Yang-Mills energy-momentum tensor. As in Ref.~\cite{Ryu:2015vwa}, $\tau_0 = 0.4$~fm is used in this work. The IP-Glasma initial conditions are boost-invariant, and the subsequent hydrodynamical evolution is $2+1$D as well.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.46\textwidth]{figs/Zeta_s_blank.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{The temperature dependence of the bulk viscosity to entropy density ratio, as used in this work. The points on the low temperature side are the results of a calculation from Ref.~\cite{NoronhaHostler:2008ju}, while the high-temperature results are from Ref.~\cite{Karsch:2007jc}. }
\label{fig:bulk}
\end{figure}
The hydrodynamic evolution involves a dissipative term in the stress-energy tensor:
\begin{eqnarray}
\mathrm T^{\mu \nu}_{\rm diss} &=& \pi^{\mu \nu} - \Delta^{\mu \nu} \Pi \;,
\end{eqnarray}
where $\Delta^{\mu \nu} = g^{\mu \nu} - u^\mu u^\nu$.
In the above, $g^{\mu \nu}=\mathrm{diag}(1,-1,-1,-1)$ is the Minkowski tensor, $\pi^{\mu \nu}$ is the shear-stress tensor, and $\Pi$ is the bulk pressure term. The time-evolution of these last two quantities is obtained by solving relaxation-type equations \cite{Denicol:2012cn,Denicol:2014vaa}:
\begin{align}
\tau_{\Pi }\dot{\Pi}+\Pi = -\zeta \theta -\delta _{\Pi \Pi }\Pi \theta
+\lambda _{\Pi \pi }\pi ^{\mu \nu }\sigma _{\mu \nu }\;, \label{intro_1}
\\
\tau_{\pi }\dot{\pi}^{\left\langle \mu \nu \right\rangle }+\pi ^{\mu \nu }
= 2\eta \sigma ^{\mu \nu }-\delta _{\pi \pi }\pi ^{\mu \nu }\theta
+\varphi
_{7}\pi _{\alpha }^{\left\langle \mu \right. }\pi ^{\left. \nu
\right\rangle
\alpha } \notag \\
-\tau _{\pi \pi }\pi _{\alpha }^{\left\langle \mu \right. }\sigma
^{\left. \nu \right\rangle \alpha }+\lambda _{\pi \Pi }\Pi \sigma ^{\mu
\nu
},
\label{eq:relax}
\end{align}%
with the definition
\begin{equation*}
A^{\langle \mu \nu \rangle }=\Delta _{\alpha \beta}^{\mu \nu }A^{\alpha \beta },
\end{equation*}
where
\begin{equation*}
\Delta _{\mu \nu }^{\alpha \beta }=\frac{1}{2}\left[
\Delta _{\alpha }^{\mu }\Delta _{\beta }^{\nu }+\Delta _{\alpha }^{\nu
}\Delta _{\beta }^{\mu }-\frac{2}{3}\Delta ^{\mu \nu }\Delta _{\alpha \beta }%
\right]
\end{equation*}
is the double, symmetric, and traceless projection operator. The expansion rate of the fluid is $\theta =\partial _{\mu }u^{\mu }$ and the shear tensor $\sigma ^{\mu \nu }=\partial ^{\langle \mu}u^{\nu \rangle }$. In the present work, the shear-stress tensor and the bulk pressure are initialized to zero at time $\tau_0$.
The second-order transport coefficients $\tau _{\Pi }$, $\delta _{\Pi \Pi }$, $\lambda _{\Pi \pi }$, $\tau _{\pi }$, $\eta $, $%
\delta _{\pi \pi }$, $\varphi _{7}$, $\tau _{\pi \pi }$, and $\lambda_{\pi\Pi }$ are related to the shear viscosity $\eta$ and bulk viscosity $\zeta$ using formulae derived from the Boltzmann equation near the conformal limit~\cite{Denicol:2014vaa}.
Importantly, the hydrodynamic evolution stage is followed by a phase where discrete particles are produced through the Cooper-Frye procedure \cite{Cooper:1974mv}. Late-stage hadrons further interact and freeze out dynamically through the UrQMD approach and algorithms \cite{Bass:1998ca}.
Since viscous hydrodynamics is used, the medium is not exactly in thermal equilibrium. Consequently, whenever specific particle distributions are invoked -- in the Cooper-Frye scheme or for thermal photon production -- these will receive viscous corrections.
In the present work, the momentum distribution $f_{B/F}(P,X)$ is derived in Appendices (\ref{appendixA}) and (\ref{appendixB}), and is given by
\begin{eqnarray}
f_{B/F}(P,X)=f_{B/F}^{(0)}(P)+\delta f_{B/F}^{\rm shear}(P,X)+\delta f_{B/F}^{\rm bulk}(P,X)\;,\nonumber\\
\label{eq:fHadrons}
\end{eqnarray}
where
\begin{eqnarray}
\delta f_{B/F}^{\rm shear}(P,X)=f_{B/F}^{(0)}(P) \left(1 + \sigma_{B/F} f_{B/F}^{(0)}(P)\right) \frac{\pi_{\mu\nu} P^\mu P^\nu}{2 T^2 (\epsilon+\mathcal{P})}\nonumber\\
\label{eq:fShearHadrons}
\end{eqnarray}
and
\begin{eqnarray}
\delta f_{B/F}^{\rm bulk}(P,X)&&=- f_{B/F}^{(0)}(P) \left(1 + \sigma_{B/F} f_{B/F}^{(0)}(P) \right) \nonumber\\
&& \times \left[ \frac{1}{3} \frac{m^2}{T^2} \frac{1}{P\cdot u/T}-\frac{P\cdot u}{T} \left( \frac{1}{3}-c_s^2 \right) \right] \Pi \frac{\tau_\Pi}{\zeta} \nonumber \\
\label{eq:fBulkHadrons}
\end{eqnarray}
with $\sigma_{B}=1$ for bosons and $\sigma_{F}=-1$ for fermions, with $f_{B/F}^{(0)}(P)$ being correspondingly either the Fermi-Dirac or Bose-Einstein distribution. The pressure $\mathcal{P}$, energy density $\epsilon$, flow velocity $u$, speed-of-sound $c_s$ and temperature $T$ entering into the distribution functions are evaluated at spacetime point $X$. The momentum of the quasi-particle of mass $m$ is denoted $P$. The bulk relaxation time is $\tau_\Pi$.
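These corrections translate directly into code; the following Python sketch is a transcription of Eqs.~(\ref{eq:fShearHadrons}) and (\ref{eq:fBulkHadrons}), with the scalar contraction $\pi_{\mu\nu}P^\mu P^\nu$ and the other thermodynamic inputs assumed to be supplied by the hydrodynamic evolution (the function signature is an illustrative assumption):
\begin{verbatim}
import numpy as np

def delta_f(f0, sigma, m, T, p_dot_u, pi_PP,
            e_plus_P, Pi, cs2, tauPi_over_zeta):
    # f0      : equilibrium Bose-Einstein (sigma=+1) or
    #           Fermi-Dirac (sigma=-1) occupancy
    # pi_PP   : contraction pi_{mu nu} P^mu P^nu
    # p_dot_u : P.u, the energy in the fluid rest frame
    common = f0 * (1.0 + sigma * f0)
    df_shear = common * pi_PP / (2.0 * T**2 * e_plus_P)
    df_bulk = -common * (
        (m**2 / T**2) / (3.0 * p_dot_u / T)
        - (p_dot_u / T) * (1.0 / 3.0 - cs2)
    ) * Pi * tauPi_over_zeta
    return df_shear, df_bulk
\end{verbatim}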
Within the hybrid approach used here (IP-Glasma -- dissipative hydrodynamics -- UrQMD) a recent analysis of ALICE and CMS measurements indicates that identified particle spectra, multiplicity distributions, and multiple flow coefficients ($v_n \{2\}$, $n=2, 3, 4$) can be globally reproduced, for pions, kaons, and protons \cite{Ryu:2015vwa}. The calculation and data analyses were done for 0 - 5\% through 30 - 40\% centrality classes. These LHC-energy analyses, which include both bulk and shear viscosity, lead to $\eta/s=$ 0.095. A relatively narrow $\zeta/s$ temperature-profile, illustrated in Fig. \ref{fig:bulk}, was used. This profile peaks such that $\zeta/s (T_{\rm peak}) \sim 0.3$, with $T_{\rm peak} = 180$ MeV. As a reminder, this is a somewhat novel feature that most fluid-dynamical approaches to the modelling of relativistic heavy-ion collisions do not yet include. The approach used here also notably features non-linear terms that couple the shear and bulk sectors of the viscous hydrodynamics~\cite{Denicol:2014vaa}.
Describing the late-stage dynamics of the medium with an afterburner leads to significant complications in the evaluation of photon emission. As a consequence, late photon emission is not evaluated with the afterburner, but rather with hydrodynamics. The exact approach used is explained later in this work.
\section{Photon sources}
\label{sec:photonSources}
The photons measured in relativistic nucleus-nucleus collisions come from a variety of different sources, and these will be discussed in turn in this section. They fall into two broad categories: those with a thermal origin and those coming from ``cold'' processes. In searches for signals from the quark-gluon plasma, the latter are usually thought of as a background. Since this background is irreducible in the experimental measurements, it nevertheless deserves our full attention. We start by describing how prompt photon production is evaluated in this work.
\subsection{Prompt photons}
In the very first instants of the nuclear collision, the interacting nucleons will produce photons through partonic Compton interactions and quark-antiquark annihilations. In addition, QCD jets will be generated and these jets will fragment into many final states, some of which will include photons.
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_pp_200_scaleTest.pdf}
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_pp_200_scaleTest_normed.pdf}
\caption{Top panel: Direct photon spectrum measured in $\sqrt{s_{NN}}=200$~GeV proton-proton collisions at RHIC compared with perturbative QCD calculations at different scales $Q$. Bottom panel: normalised perturbative QCD calculations; see details in the main text. }
\label{fig:ppRHICphotons}
\end{figure}
The calculation of photon production in hadronic interactions using the techniques of perturbative QCD has a long history \cite{[{See, for example, }][{, for an early review.}]Owens:1986mp} which has led to a fairly mature understanding of the subject. The photon production cross section in proton-proton collisions can be written concisely as
\begin{eqnarray}
E \frac{d^3 \sigma_{\rm p p }}{d^3 p} = \sum_{a, b, c, d} f_{a/p} \left(x_a,Q_{\textrm{fact}}\right)
\otimes f_{b/p}\left(x_b,Q_{\textrm{fact}} \right) \nonumber \\ \otimes \, d\hat{\sigma}\left(Q_{\textrm{ren}}\right) \otimes D_{\gamma/c} \left(z_c, Q_{\textrm{frag}}\right),
\end{eqnarray}
where $Q_{\textrm{fact}}, Q_{\textrm{ren}}$, and $Q_{\textrm{frag}}$ are energy scales entering respectively into the parton distribution function $f_a$, the partonic cross-section $d \hat{\sigma}$, and the fragmentation function $D_{\gamma/c}$. The cross-section $d\hat{\sigma}(Q_{\textrm{ren}})$ is evaluated as a perturbative expansion in the strong coupling constant $\alpha_s(Q)$, and $Q_{\textrm{ren}}$ is the scale at which $\alpha_s(Q)$ is evaluated. This scale, along with the factorisation and fragmentation scales, should typically be of the order of the transverse momentum of final state partons.
\begin{table*}[htb]
\begin{tabular}{|c|p{2cm}|p{2cm}|p{2cm}|p{2cm}|}
\hline
& \multicolumn{2}{|c|}{RHIC Au-Au $\sqrt{s_{N N}}=200$~GeV} & \multicolumn{2}{|c|}{LHC Pb-Pb $\sqrt{s_{N N}}=2760$~GeV} \\
\hline
Centrality & 0-20\% & 20-40\% & 0-20\% & 20-40\% \\
\hline
$\langle N_{\textrm{coll}} \rangle$ & 793 & 323 & 1231 & 501 \\
\hline
\end{tabular}
\caption{Average number of binary collisions in different centrality classes, at RHIC and the LHC. Nucleon positions sampled from a Woods-Saxon distribution, which are inputs of the IP-Glasma model, are used to evaluate the number of binary collisions in each event with the MC-Glauber model. Centralities are defined using the gluon multiplicity of each event, as described in Ref.~\cite{Gale:2012rq}.}
\label{table:Ncoll}
\end{table*}
Computing photon production in proton-proton collisions from the formalism described above requires a proton parton distribution function $f_{a/p} \left(x_a,Q_{\textrm{fact}}\right)$, a parton-to-photon fragmentation function $D_{\gamma/c}\left(z_c, Q_{\textrm{frag}}\right)$, and the partonic cross-section $d\hat{\sigma}\left(Q_{\textrm{ren}}\right)$. The latter is currently known at next-to-leading order in the strong coupling constant for both isolated photons \cite{Aurenche:1987fs} and fragmentation photons \cite{Aversa:1988vb}. Combined with next-to-leading order parton distribution and fragmentation functions, photon production using perturbative QCD has been shown to agree very well with direct photon measurements in proton-proton collisions at RHIC, at the LHC, and at previous colliders~\cite{Aurenche:2006vj}.
At high $p_T^\gamma$, prompt photons are by far the dominant source of direct photons. These are calculated in heavy ion collisions by multiplying the number of photons produced in proton-proton collisions by the number of binary collisions~\cite{Chatrchyan:2012vq,Afanasiev:2012dg}.
The scaling procedure may be applied either to a fit of direct photon measurements in proton-proton collisions, or to a perturbative QCD calculation of photon production. The latter is used in this work,
in conjunction with nuclear parton distribution functions EPS09~\cite{Eskola:2009uj}, which take into account cold nuclear matter effects.
The study of fragmentation photon energy loss and jet-medium photon productions, two effects that are understood to modify low $p_T^\gamma$ prompt photon production in heavy ion collisions, is not undertaken here and will be the subject of a separate work.
Prompt photons are added to other sources of direct photons on an event-by-event basis. The perturbative QCD calculation is thus scaled by the number of binary collisions in each event individually. For reference, we quote in Table~\ref{table:Ncoll} the centrality-averaged number of binary collisions in each centrality studied in this work.
The pQCD framework used in this work is essentially the next-to-leading order calculation contained in the numerical code INCNLO \cite{incnlo}. The proton parton distribution function and photon fragmentation function used are respectively CTEQ61m \cite{Stump:2003yu} and BFG-2 \cite{Bourhis:1997yu}. The factorisation, renormalisation and fragmentation scales are all set equal to each other: $Q_{\textrm{fact}} = Q_{\textrm{ren}} = Q_{\textrm{frag}} = Q$.
The transverse momentum of the produced photon is used to set the scale, with a proportionality constant $\lambda$: $Q = \lambda p^\gamma_T$.
The effect of changing the proportionality constant between $Q$ and $p_T^\gamma$ is essentially a change in the normalization of the prompt photon spectra. This can be seen in Fig.~\ref{fig:ppRHICphotons}. The top panel shows the perturbative calculation of prompt photons in proton-proton collisions for different choices of scale $Q$, from $Q=p_T^\gamma/2$ to $Q=8 p_T^\gamma$. The lower panel shows the same calculations scaled by a constant so that they have the same normalization. It is clear from this last figure that the calculations overlap very well, showing that they have the same $p_T^\gamma$ dependence. It was verified that changing the scale $Q$ at LHC energies also has the same effect on the photon spectrum, i.e. it changes the normalisation but not the transverse momentum dependence of the calculation.
It is apparent that a small proportionality constant between $Q$ and $p_T^\gamma$, such as $Q=p_T^\gamma/2$, provides a better description of the available measurements at RHIC. On the other hand, it can be seen in Fig.~\ref{fig:ppRHICphotons} that calculations are limited to $p_T^\gamma>(1.5~$GeV$)/\lambda$, where $\lambda$ is the proportionality constant between $Q$ and $p_T^\gamma$. This limitation results from the presence of a scale $Q_0\sim 1.5$~GeV, which is typically taken as the limit of applicability of perturbative QCD. Parton distribution functions and fragmentation functions are usually limited to $Q>Q_0\sim 1.5$~GeV. Calculations made with INCNLO are also subject to this limit in $Q$. Although this might appear to limit the value of $p_T^\gamma$ at which prompt photons can be evaluated with perturbative QCD, the scaling behaviour observed on the bottom panel of Fig.~\ref{fig:ppRHICphotons} shows that it is not the case: the effect of the scale $Q$ is simply a change in normalization, and prompt photons can be evaluated at low $p_T^\gamma$ by using e.g. $Q=4 p_T^\gamma$ and changing the normalization of the calculation to that of $Q=p_T^\gamma/2$. This is the procedure adopted in this work.
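In practice, the extrapolation described above amounts to rescaling a large-$Q$ calculation (which extends to lower $p_T^\gamma$) onto the normalization of a small-$Q$ one in the region where both are available. A minimal Python sketch of this normalization matching follows; the array names and the NaN convention for out-of-range bins are illustrative assumptions:
\begin{verbatim}
import numpy as np

def match_normalization(spec_large_Q, spec_small_Q):
    # spec_large_Q : e.g. Q = 4 pT spectrum, valid to lower pT
    # spec_small_Q : e.g. Q = pT/2 spectrum, NaN below its
    #                applicability limit pT > Q0/lambda
    # Both spectra share the same pT dependence up to a
    # constant, so one averaged ratio in the overlap suffices.
    overlap = ~np.isnan(spec_small_Q)
    scale = np.mean(spec_small_Q[overlap]
                    / spec_large_Q[overlap])
    return scale * spec_large_Q
\end{verbatim}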
It remains that pQCD is based on the idea that a large momentum exchange occurs in a hadronic collision, allowing part of the cross-section to be computed perturbatively. It is understood that perturbative QCD eventually breaks down at low transverse momentum, although the exact value of $p_T^\gamma$ at which this happens is not clear. As discussed above, the perturbative QCD calculation of prompt photons is in good agreement with the low $p_T^\gamma$ direct photon measurements in proton-proton collisions, although it slightly overestimates the very lowest point, around $1$~GeV. It remains to be seen how much lower in transverse momentum the agreement with data persists.
While there are currently no low $p_T^\gamma$ photon data from proton-proton collisions at the LHC, the same extrapolation procedure was used to evaluate low $p_T$ $\pi^0$ production as an additional verification of the approach. Good agreement with measurements was again found down to $p_T\sim 1-2$~GeV~\cite{Paquet:2015Thesis}.
It is worth noting that, besides clarifying the domain of validity of perturbative QCD calculation of prompt photons, additional measurements will also help constrain uncertainties due to the photon fragmentation function, which are significant in the soft domains of perturbative QCD calculation of prompt photons \cite{Klasen:2013mga,*Klasen:2014xfa}. Without direct measurements, those uncertainties will likely persist.
\subsection{Thermal photons}
\label{sec:thermal}
The ``thermal photons'' are those photons resulting from the interaction of thermalized medium constituents\footnote{The thermalization approximation will be relaxed later.}. The computation of photon production rates may be done using thermal field theory techniques, or using relativistic kinetic theory \cite{[{See, for example, }][{, and references therein.}]Kapusta:2006pm}. Both approaches have contributed to the compendium of rates used in this work.
In the partonic sector, photon-production processes calculated at leading order in the strong coupling constant, $g_s$, have been available for almost 15 years \cite{Arnold:2001ms}. Those are used here\footnote{Some recent work has extended this seminal result by going up to next-to-leading order \cite{Ghiglieri:2013gia}. For values of the strong coupling relevant to the phenomenology considered in the current work, the net photon rate at NLO is a modest 20\% larger than that at LO.}. At high energies, the charged particle multiplicity is dominated by mesons. In the hadronic sector at temperatures comparable to, and lower than, the crossover temperature, photons originating from thermal reactions of mesonic origins were calculated in Ref. \cite{Turbide:2003si}. That same work also includes the photons obtained from taking the $\rho$-meson self-energy to zero invariant mass. This procedure accounts for the baryonic contributions, be they radiative decays or reactions of the type $\pi N \to \pi N \gamma$ and $N N \to N N \gamma$, where $N$ represents a nucleon. The net rate parametrized in Ref.~\cite{Turbide:2003si} also avoids possible double-counting issues between mesonic and baryonic contributions. Finally, this work also includes recent estimates of $\pi \pi$ bremsstrahlung contributions \cite{Heffernan:2014mla}, and of the reactions $\pi \rho \to \omega \gamma$, $\pi \omega \to \rho \gamma$, and $\rho \omega \to \pi \gamma$ \cite{Holt:2015cda}, absent from Ref. \cite{Turbide:2003si}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\linewidth]{figs/rate_thermal_photons_wPiRhoOmega.pdf}
\caption{Ideal QGP and hadronic photon rate near the cross-over region.}
\label{fig:rates}
\end{figure}
It is instructive to compare rates, prior to integrating them with a dynamical four-volume evolution. This is done in Fig.~\ref{fig:rates}. The figure shows the LO partonic rates of Ref. \cite{Arnold:2001ms} (solid lines) compared with the hadronic rates of Refs. \cite{Turbide:2003si,Heffernan:2014mla,Holt:2015cda} (dashed lines) for a range of temperatures in the cross-over region.
\subsection{Non-cocktail hadronic decay photons}
\label{sec:nonCocktail}
As the strongly-interacting fluid hadronizes, it transforms into hadrons which will interact. When those interactions cease, the momentum distributions are frozen and the particles free-stream out to the experimental detectors. The longer-lived hadrons will contribute significantly to the photon signal and therefore have to be included. Collectively, they are dubbed ``the cocktail'' and are (for ALICE) $\pi^0, \eta, \rho, \omega, \eta',\phi$; the relevant photon-producing decays are subtracted from the measured inclusive signal \cite{Lohner}, to expose a combination of thermal photons and prompt photons. There are, however, other shorter-lived states which decay with a photonic component in the final state \cite{Agashe:2014kda}. This work includes all of those with a mass $M < 1.7$ GeV. The differential cross section of the decay photons can then be calculated, knowing the relevant branching ratios. After including all of these, together with the decays considered in Ref. \cite{Rapp:1999qu}, the most important channels were found to be $\Sigma \to \Lambda \gamma$, $f_1 (1285) \to \rho^0 \gamma$, and $K^*(892) \to K \gamma$. All contributions are nevertheless included, for completeness.
\section{Correcting the photon emission rates for viscosity}
\label{sec:visc_rates}
As mentioned earlier, it is an established fact that the bulk dynamics of strongly interacting matter is sensitive to the values of the shear and bulk viscosities, two of the transport coefficients of QCD. Switching to a corpuscular description and considering separately the reactions that together define the fluid enables a channel-by-channel viscous correction of the photon emission rates. The photon production rate, $R_\gamma$, admits a kinetic theory formulation. For $2 \to 2$ scattering $(1 + 2 \to 3 + \gamma)$ it is \cite{Kapusta:2006pm}
\begin{eqnarray}
&&\omega \frac{d^3 R_{\gamma}}{d^3 k}=\frac{1}{2(2\pi)^3} \int \frac{d^3 p_1}{2 P^0_1 (2\pi)^3} \frac{d^3 p_2}{2 P^0_2 (2\pi)^3} \frac{d^3 p_3}{2 P^0_3 (2\pi)^3} \nonumber \\
&& \times (2\pi)^4\delta^4(P_1+P_2-P_3-K) |\mathcal{M}|^2
f_{B/F}(P_1) f_{B/F}(P_2)\nonumber\\&&\times \left(1+\sigma_{B/F} f_{B/F}(P_3)\right),
\label{eq:photonProductionKin}
\end{eqnarray}
where $|\mathcal{M}|^2$ is the squared matrix element corresponding to the $2\to 2$ scattering, $f_{B/F}$ is the particle momentum distribution for bosons ($\sigma_{B}=1$) or fermions ($\sigma_{F}=-1$), and the photon four-momentum is $K = (\omega, \vec{k})$. The distribution function must then be modified in the presence of dissipative effects, for the kinetic formulation of $T^{\mu \nu}$ to match that with the explicit dissipative transport coefficients. This modification is written as $f_{B/F} = f_{B/F}^{(0)} + \delta f_{B/F}$.
Linearizing in $\delta f_{B/F}$ yields
\begin{eqnarray}
\omega \frac{d^3 R_{\gamma}}{d^3 k}\approx \omega \frac{d^3 R_{\gamma}^{(0)}}{d^3 k} + \omega \frac{d^3 R_{\gamma}}{d^3 k}^{\rm (visc)},
\end{eqnarray}
where
\begin{eqnarray}
&&\omega \frac{d^3 R_{\gamma}}{d^3 k}^{\rm (visc)} = \frac{1}{2(2\pi)^3} \int \frac{d^3 p_1}{2 P^0_1 (2\pi)^3} \frac{d^3 p_2}{2 P^0_2 (2\pi)^3} \frac{d^3 p_3}{2 P^0_3 (2\pi)^3}\nonumber\\
&& \times (2\pi)^4\delta^4(P_1+P_2-P_3-K) |\mathcal{M}|^2 \nonumber\\
&& \times \left[ \delta f_{B/F}(P_1) f^{(0)}_{B/F}(P_2) \left(1+\sigma_{B/F} \,f^{(0)}_{B/F}(P_3)\right) \right. \nonumber\\
& & \qquad + f^{(0)}_{B/F}(P_1) \delta f_{B/F}(P_2) \left(1+\sigma_{B/F} \,f^{(0)}_{B/F}(P_3)\right) \nonumber \\
& & \left. \qquad + f^{(0)}_{B/F}(P_1) f^{(0)}_{B/F}(P_2) \left(\sigma_{B/F} \,\delta f_{B/F}(P_3)\right) \right] .
\label{eq:photonProductionKinVisc}
\end{eqnarray}
Next, the corrections appropriate for shear and bulk viscosity are discussed.
\begingroup
\begin{table*}[ht]
\begin{tabular}{|c|c|c|c|}
\hline
Rate & Ideal & Shear correction & Bulk correction \\
\hline
QGP --- $2\to 2$ & \cite{Arnold:2001ms} & Yes \cite{Shen:2014nfa} & Forward scattering approximation\\
\hline
QGP --- Bremsstrahlung & \cite{Arnold:2001ms} & No & No \\
\hline
Hadronic --- Meson gas ($\pi$, $K$, $\rho$, $K^*$, $a_1$) & \cite{Turbide:2003si} & Yes \cite{Dion:2011pp,Shen:2014thesis} & Yes [this work] \\
\hline
Hadronic --- $\rho$ spectral function (incl. baryons) & \cite{Turbide:2003si,Heffernan:2014mla} & No & No \\
\hline
Hadronic --- $\pi+\pi$ bremsstrahlung & \cite{Liu:2007zzw,Heffernan:2014mla} & No & No \\
\hline
Hadronic --- $\pi$-$\rho$-$\omega$ system & \cite{Holt:2015cda} & No & No \\
\hline
\end{tabular}
\caption{A summary of the thermal photon rates sources, together with the current state of advancement of their viscous correction. The bulk corrections are original to this work. The ``forward scattering'' approximation refers to considering cases where, in $2 \to 2$ scattering, the exchanged momentum is soft (i.e. $\sim gT$ ). In this case the amplitude will be dominated by forward scattering.}
\label{table:rates}
\end{table*}
\endgroup
While there is currently a considerable body of research devoted to the extraction of the shear viscosity from relativistic heavy-ion collisions \cite{[{See, for example, }][{, and references therein}]Gale:2013da}, studies of the bulk viscosity are rarer. To proceed further, assumptions need to be made about the form of $\delta f_{B/F}(P,X)$. The space-time coordinates of the emission site, $X$, are now explicit. The first-order correction $\delta f_{B/F}(P,X)$ is linear in the shear stress tensor~$\pi^{\mu\nu}$ and the bulk pressure~$\Pi$. In this case:
\begin{eqnarray}
\delta f_{B/F}(P,X)= \pi_{\mu\nu}(X) P^\mu P^\nu S(P,X) + \Pi(X) B(P,X), \nonumber\\
\end{eqnarray}
where two properties of $\pi_{\mu\nu}(X)$, $\pi_{\mu\nu}(X) g^{\mu\nu}=0$ and $\pi_{\mu\nu}(X) u^{\mu}=0$, were used to constrain the expansion of $\delta f_{B/F}(P,X)$ in $\pi_{\mu\nu}(X)$.
The functions $S(P,X)$ and $B(P,X)$ can depend on the spacetime position $X$ through e.g. the local value of the temperature $T(X)$, the energy density $\epsilon(X)$, the entropy density $s(X)$, etc. All these implicit functions of $X$ are thermodynamical quantities that are related through the equation of state of the medium. For practical reasons, it is better if corrections to photon emission do not have an explicit dependence on the equation of state. This can be achieved if the momentum dependence of $S(P,X)$ and $B(P,X)$ can be factorised from the rest, such that:
\begin{eqnarray}
\delta f_{B/F}(P,X)&=& \pi_{\mu\nu}(X) P^\mu P^\nu \sum_j S_X^{(j)}(X) S_M^{(j)}(P,T) \nonumber\\
&& +\, \Pi(X) \sum_j B_X^{(j)}(X) B_M^{(j)}(P,T),
\label{eq:factorisationDeltaf}
\end{eqnarray}
where it was assumed that the momentum-dependent factor $S/B_M^{(j)}(P,T)$ could also depend on the temperature, but no other thermodynamical quantities. The subscript $X$ was used to identify the spatial part of $S$ and $B$, while the subscript $M$ is used for the momentum-dependent term. The sum over $j$ is necessary if e.g. $B(P,X)$ cannot be factorised as $B_X(X) B_M(P)$ but is factorisable as a sum of such terms ($B^{(1)}_X(X) B^{(1)}_M(P)+B^{(2)}_X(X) B^{(2)}_M(P)$). It will be shown shortly that such a general form is appropriate for the $\delta f_{B/F}(P,X)$ used in this work. More generally, this factorization and the expansion in Eq. (\ref{eq:factorisationDeltaf}) can be seen as an expansion of irreducible tensors in momentum space \cite{Denicol:2012cn}.
Using Eq.~(\ref{eq:factorisationDeltaf}), the effect of viscosity on photon production (Eq.~(\ref{eq:photonProductionKinVisc})) can be written
\begin{eqnarray}
\omega \frac{d^3 R_{\gamma}}{d ^3 k}^{\rm (visc)} &= & \pi_{\mu\nu}(X) K^\mu K^\nu \sum_j S_X^{(j)}(X) \tilde{S}_M^{(j)}(K,T)\nonumber \\
& &+ \Pi(X) \sum_j B_X^{(j)}(X) \tilde{B}_M^{(j)}(K,T)\;,
\label{eq:photonViscExpansion}
\end{eqnarray}
where $\pi_{\mu\nu}(X) g^{\mu\nu}=0$ and $\pi_{\mu\nu}(X) u^{\mu}=0$ were used again to constrain the coefficient multiplying $\pi_{\mu\nu}(X) $.
The coefficient $\tilde{S}_M^{(j)}(K,T)$ is given by \cite{Shen:2014nfa}
\begin{eqnarray}
&&\tilde{S}_M^{(j)}(K,T) = \frac{1}{2 (K\cdot u)^2}\nonumber\\
&&\times \left[ g_{\mu\nu}+2u_\mu u_\nu + 3 \left(\frac{K_\mu K_\nu}{(K\cdot u)^2}-\frac{(K_\mu u_\nu+u_\mu K_\nu)}{(K\cdot u)}\right) \right] \nonumber \\
&& \quad \times \frac{1}{2(2\pi)^3} \int \frac{d^3 p_1}{2 P^0_1 (2\pi)^3} \frac{d^3 p_2}{2 P^0_2 (2\pi)^3} \frac{d^3 p_3}{2 P^0_3 (2\pi)^3} \nonumber\\
&&\quad \times (2\pi)^4\delta^4(P_1+P_2-P_3-K) |\mathcal{M}|^2 \nonumber \\
& & \quad \times \left[ \left( P^\mu_1 P^\nu_1 S_M^{(j)}(P_1) \right) f^{(0)}_{B/F}(P_2) \left(1+\sigma_{B/F} f^{(0)}_{B/F}(P_3)\right) \right. \nonumber \\
& & \left. \qquad + f^{(0)}_{B/F}(P_1) \left(P^\mu_2 P^\nu_2 S_M^{(j)}(P_2) \right) \left(1+\sigma_{B/F} f^{(0)}_{B/F}(P_3)\right) \right. \nonumber \\
& & \left. \qquad\qquad + f^{(0)}_{B/F}(P_1) f^{(0)}_{B/F}(P_2) \sigma_{B/F} P^\mu_3 P^\nu_3 S_M^{(j)}(P_3) \right], \nonumber \\
\label{eq:photonProductionKinShearVisc}
\end{eqnarray}
while $\tilde{B}_M^{(j)}(K,T)$ is given by the simpler expression
\begin{eqnarray}
&& \tilde{B}_M^{(j)}(K,T) = \frac{1}{2(2\pi)^3} \int \frac{d^3 p_1}{2 P^0_1 (2\pi)^3} \frac{d^3 p_2}{2 P^0_2 (2\pi)^3} \frac{d^3 p_3}{2 P^0_3 (2\pi)^3}\nonumber\\
&&\quad \times (2\pi)^4\delta^4(P_1+P_2-P_3-K) |\mathcal{M}|^2 \nonumber \\
& & \quad \times \left[ \left( B_M^{(j)}(P_1) \right) f^{(0)}_{B/F}(P_2) (1+\sigma_{B/F} f^{(0)}_{B/F}(P_3)) \right. \nonumber \\
& & \left. \qquad + f^{(0)}_{B/F}(P_1) \left( B_M^{(j)}(P_2) \right) (1+\sigma_{B/F} f^{(0)}_{B/F}(P_3)) \right. \nonumber \\
& & \left. \qquad\qquad + f^{(0)}_{B/F}(P_1) f^{(0)}_{B/F}(P_2) \left(\sigma_{B/F} B_M^{(j)}(P_3) \right) \right]. \nonumber \\
\label{eq:photonProductionKinBulkVisc}
\end{eqnarray}
Since $\tilde{S}_M^{(j)}(K,T)$ and $\tilde{B}_M^{(j)}(K,T)$ are scalars, they can only depend on $K$ through the combination $K\cdot u$.
It is thus possible to evaluate $\tilde{S}_M$ and $\tilde{B}_M$ in the restframe of the fluid where the photon energy is $\omega = K\cdot u$ and the temperature is $T$.
Although $\tilde{S}_M^{(j)}(K,T)$ and $\tilde{B}_M^{(j)}(K,T)$ cannot generally be reduced to an analytical expression, it is nevertheless possible to tabulate them as functions of $\omega$ and $T$.
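At the implementation level, this tabulation-and-interpolation step can be organized as in the Python sketch below; the grid ranges and the placeholder array standing in for the actual numerical integration of Eqs.~(\ref{eq:photonProductionKinShearVisc}) and (\ref{eq:photonProductionKinBulkVisc}) are illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative grids (GeV) for the rest-frame photon energy
# omega = K.u and the temperature T.
omega = np.linspace(0.1, 10.0, 200)
temp = np.linspace(0.10, 0.60, 100)

# Placeholder standing in for the precomputed table of
# tilde{S}_M(omega, T); in practice it is filled by numerical
# integration of the viscous rate integrals above.
table_S = np.zeros((omega.size, temp.size))

interp_S = RegularGridInterpolator((omega, temp), table_S)

# During the spacetime integration, evaluate at each fluid
# cell, e.g. omega = 2 GeV and T = 250 MeV:
value = interp_S([[2.0, 0.25]])
\end{verbatim}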
Putting all of this together, the final expression for the photon emission rate is thus\footnote{Note that if photon production rate were to be computed using field-theoretical techniques, then \cite{Kapusta:2006pm}
\begin{eqnarray}
\omega \frac{d^3 R_\gamma}{d^3 k} = - \frac{1}{(2 \pi)^3} {\rm Im\, } \Pi^{{\rm R} \mu}_\mu (\omega, \vec k) \frac{1}{\left(e^{\beta \omega} - 1\right)}\;,
\end{eqnarray}
where $ \Pi^{{\rm R} \mu}_\mu (\omega, \vec k)$ is the retarded, finite-temperature, photon self-energy. This equation is exact in the strong interaction, but correct to leading order in the electromagnetic coupling, $\alpha$.
In that case, linearization in $\delta f_{B/F}(P,X)$ can still be used to write the photon rate as Eq.~(\ref{eq:viscousPhotonRateGeneral}), although with different expressions for $\tilde{S}_M^{(j)}(K)$ and $\tilde{B}_M^{(j)}(K)$.
}
\begin{eqnarray}
\omega \frac{d^3 R_{\gamma}}{d^3 k}&= &\omega \frac{d^3 R_{\gamma}^{(0)}}{d^3 k} \nonumber\\
&&+\, \pi_{\mu\nu}(X) K^\mu K^\nu \sum_j S_X^{(j)}(X) \tilde{S}_M^{(j)}(K,T) \nonumber\\
&&+ \,\Pi(X) \sum_j B_X^{(j)}(X) \tilde{B}_M^{(j)}(K,T)\;.
\label{eq:viscousPhotonRateGeneral}
\end{eqnarray}
Using the approximations for the hadron/parton distribution functions outlined in Appendices (\ref{appendixA}) and (\ref{appendixB}), one may now summarize the corrections to the distribution functions that arise from the inclusion of shear and bulk viscosity. For temperatures where the degrees of freedom are partonic,
\begin{eqnarray}
\delta f^{QGP}_{B/F}(P,X) &=& \pi_{\mu\nu}(X) P^\mu P^\nu S_X (X) S_M(P,T) \nonumber\\
&&+ \Pi(X) B_X^{QGP}(X) B_M^{QGP}(P,T)\;,
\end{eqnarray}
with (suppressing arguments)
\begin{eqnarray}
S_X&=&\frac{1}{2(\epsilon+\mathcal{P})}; \ \ S_M=\frac{f_{B/F}^{(0)} \left(1 + \sigma_{B/F} f_{B/F}^{(0)} \right)}{T^2} \nonumber \\
B_X^{QGP}&=&- \frac{1}{15 \left( \frac{1}{3}-c_s^2 \right)\left( \epsilon+\mathcal{P} \right)} \nonumber\\
B_M^{QGP}&=&f_{B/F}^{(0)}(P) \left(1 + \sigma_{B/F} f_{B/F}^{(0)}(P) \right) \nonumber \\
& & \times \left[ \frac{m^2}{T^2} \frac{T}{P\cdot u}-\frac{P\cdot u}{T} \right]. \nonumber \\
\end{eqnarray}
For hadronic degrees of freedom (a ``hadronic gas'' [HG]), it is given by
\begin{eqnarray}
\delta f^{HG}_{B/F}(P,X) &=& \pi_{\mu\nu}(X) P^\mu P^\nu S_X(X) S_M(P,T) \nonumber \\
& & + \Pi(X) \left[ B_X^{HG,1}(X) B_M^{HG,1}(P,T)\right. \nonumber\\
&&\left.\qquad + B_X^{HG,2}(X) B_M^{HG,2}(P,T) \right],
\end{eqnarray}
with
\begin{eqnarray}
&&B_X^{HG,1}=-\frac{\tau_\Pi}{\zeta} ;\ \ \ B_X^{HG,2}=-\frac{\tau_\Pi}{\zeta} \left( \frac{1}{3}-c_s^2 \right) \nonumber\\
&&B_M^{HG,1}=f_{B/F}^{(0)}(P) \left(1 + \sigma_{B/F} f_{B/F}^{(0)}(P) \right) \frac{1}{3} \frac{m^2}{T^2} \frac{T}{P\cdot u} \nonumber \\
& & B_M^{HG,2}=f_{B/F}^{(0)}(P) \left(1 + \sigma_{B/F} f_{B/F}^{(0)}(P) \right) \left(-\frac{P\cdot u}{T} \right). \nonumber \\
\end{eqnarray}
The above decompositions are not uniquely defined, since temperature factors and constants can be absorbed into either coefficient. This is not a problem as long as the above definitions are used consistently. These equations thus define the $S$ and $B$ functions that enter the calculation of the viscous photon rates. Care must then be taken in evaluating $\tilde{S}_M^{(j)}(K,T)$ and $\tilde{B}_M^{(j)}(K,T)$: a discussion relevant for photon production in the QGP through $2\to 2$ scattering at leading order in the strong coupling constant appears in Ref.~\cite{Shen:2014nfa}.
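For orientation, the momentum-dependent factors defined above can be coded directly. The Python sketch below is an illustration only, not the production code of this work; it evaluates $S_M$ and the QGP and HG bulk factors for a particle of mass $m$ with energy $P\cdot u$ in the fluid rest frame.
\begin{verbatim}
import math

def f0(pu, T, sigma):
    # Equilibrium Bose (sigma = +1) / Fermi (sigma = -1) distribution
    return 1.0 / (math.exp(pu / T) - sigma)

def S_M(pu, T, sigma):
    f = f0(pu, T, sigma)
    return f * (1.0 + sigma * f) / T**2

def B_M_QGP(pu, m, T, sigma):
    f = f0(pu, T, sigma)
    return f * (1.0 + sigma * f) * ((m / T)**2 * T / pu - pu / T)

def B_M_HG(pu, m, T, sigma):
    # Returns the pair (B_M^{HG,1}, B_M^{HG,2})
    f = f0(pu, T, sigma)
    w = f * (1.0 + sigma * f)
    return w * (m / T)**2 * T / (3.0 * pu), -w * pu / T

# Example: a boson of pion mass at P.u = 1 GeV and T = 150 MeV
print(S_M(1.0, 0.15, +1), B_M_QGP(1.0, 0.14, 0.15, +1))
\end{verbatim}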
At present, not all known photon sources are amenable to a calculation of viscous (shear and bulk) corrections. The situation is summarized in Table \ref{table:rates}, together with the appropriate references.
\section{Evaluating the photon momentum anisotropy}
\label{sec:vnTheory}
Before comparisons with measurements are made, it is necessary to explain how photon momentum anisotropies were evaluated in this work, in view of the subtleties involved in their calculation.
There is a wide literature in heavy ion physics on the different methods of studying the azimuthal anisotropic flow of final state particles (see e.g. \cite{Luzum:2013yya} for a recent review). The azimuthal dependence of a given particle's underlying momentum distribution, in a particular event, is usually characterized by a Fourier series written in the form of a Fourier coefficient $v^s_n$ and event-plane angle $\Psi^s_n$:
\begin{eqnarray}
v^s_n e^{i n \Psi^s_n} = \frac{\int d p_T dy d\phi p_T \left[ p^0 \frac{d^3 N^s}{d^3 p} \right] e^{i n\phi}}{\int d p_T dy d\phi p_T \left[ p^0 \frac{d^3 N^s}{d^3 p} \right]} \;.
\label{eq:vnOneEv}
\end{eqnarray}
The superscript ``$s$'' denotes the particle species corresponding to the underlying momentum distribution. It is possible to assign additional labels identifying the kinematic cuts used on $p_T$ and $y$, but this is not necessary in what follows, since there should not be any ambiguity about the cuts used for each particle species.
Experiments measure samples of the distribution $p^0 d^3 N^s/d^3 p$, averaged over numerous heavy ion collisions. Given an appropriate measurement of azimuthal correlation between different particles, these event-averaged measurements can be mapped to event-averages of $v_n$ and $\Psi_n$. These can in turn be computed from theoretical models which can often access the full $p^0 d^3 N^s/d^3 p$ distribution.
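In a model calculation where the distribution is available through sampled particles, the integrals in Eq.~(\ref{eq:vnOneEv}) reduce to an unweighted average of $e^{in\phi}$ over the particles of one event, since the phase-space measure is absorbed by the sampling. A minimal Python sketch:
\begin{verbatim}
import numpy as np

def vn_psin(phi, n):
    # Single-event v_n and Psi_n from sampled azimuthal angles phi,
    # the Monte Carlo estimator of Eq. (eq:vnOneEv)
    q = np.mean(np.exp(1j * n * phi))
    return np.abs(q), np.angle(q) / n

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, 10000)  # toy event
print(vn_psin(phi, 2))
\end{verbatim}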
Due to the limited statistics available, photon anisotropy measurements are not photon-photon correlations, but rather photon-hadron correlations. Both experiments that have measured photon anisotropies, PHENIX at RHIC and ALICE at the LHC, used the event-plane method~\cite{Poskanzer:1998yz} to make the measurement. This method can be understood as using hadrons to define an effective reference plane in the transverse direction, based on the hadron's azimuthal distribution; the photon momentum anisotropy is then measured with respect to this hadronic plane. Depending on the number of hadrons being measured and on the size of their azimuthal momentum anisotropy, the hadronic event-plane cannot necessarily be reconstructed accurately. This introduces a small uncertainty, of the order of $10\%$, in the mapping of event-plane method measurement to the $v_n$ and $\Psi_n$ of photons and hadrons~\cite{Alver:2008zza,Ollitrault:2009ie,Luzum:2012da}.
The two limits of event-plane method measurements are known as the low and high resolution limits. In the low resolution limit, the event-plane anisotropy reduces to the scalar product $v_n\{SP\}$ anisotropy~\cite{Luzum:2013yya}:
\begin{eqnarray}
v_n\{EP\} \overset{\textrm{low res.}}{=} v_n\{SP\}=\frac{\langle v_n^{\gamma} v_n^{h} \cos(n(\Psi_n^{\gamma}-\Psi_n^{h})) \rangle}{\sqrt{\langle ( v_n^{h} )^2 \rangle}} \;. \nonumber \\
\label{eq:v2SP}
\end{eqnarray}
The other limit is the high resolution limit:
\begin{eqnarray}
v_n\{EP\} \overset{\textrm{high res.}}{=} \langle v^\gamma_n \cos(n(\Psi_n^\gamma-\Psi_n^h)) \rangle .
\label{eq:vnAverage}
\end{eqnarray}
The angle brackets $\langle \ldots \rangle$ represent an average over events.
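Given per-event photon and hadron anisotropies from a model, both limits can be evaluated directly. The Python sketch below, with made-up input arrays standing in for model events, implements Eqs.~(\ref{eq:v2SP}) and (\ref{eq:vnAverage}).
\begin{verbatim}
import numpy as np

def vn_EP_limits(vg, psig, vh, psih, n):
    # Low- and high-resolution limits of v_n{EP} from per-event
    # photon (vg, psig) and hadron (vh, psih) anisotropies
    corr = vg * vh * np.cos(n * (psig - psih))
    low = np.mean(corr) / np.sqrt(np.mean(vh**2))
    high = np.mean(vg * np.cos(n * (psig - psih)))
    return low, high

rng = np.random.default_rng(1)
nev = 2000
vh = rng.uniform(0.02, 0.12, nev)
vg = 0.5 * vh + rng.normal(0.0, 0.01, nev)
psih = rng.normal(0.0, 0.2, nev)
psig = psih + rng.normal(0.0, 0.3, nev)
print(vn_EP_limits(vg, psig, vh, psih, 2))
\end{verbatim}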
The resolution correction, which quantifies the accuracy of the event-plane reconstruction, is used to determine which limit of $v_n\{EP\}$ should be compared with measurements. Its values for the RHIC and LHC measurements are shown in Refs.~\cite{Adare:2015lcd} and \cite{LohnerThesis}, respectively. The value of the resolution correction changes with the centrality and the methods used to determine the event-plane. For $n=2$ ($v_2\{EP\}$), it is clearly in neither the high nor the low resolution limit. On the other hand, higher harmonics ($n>2$) are closer to the low resolution limit. Equation~(\ref{eq:v2SP}) is thus used in this work to evaluate $v_n\{EP\}$. It was verified that the other limit of $v_n\{EP\}$, Eq.~(\ref{eq:vnAverage}), does not differ from Eq.~(\ref{eq:v2SP}) by more than 10\%~\cite{Paquet:2015Thesis}. The uncertainty associated with this ambiguity in $v_n\{EP\}$ is thus not a significant issue.
The experimental measurements correlate hadrons from a wide bin in $p_T$ to photons measured in a small $p_T$ bin, effectively resulting in a $v^\gamma_n\{EP\}$ differential in the photon transverse momentum:
\begin{eqnarray}
v^\gamma_n\{EP\}(p^\gamma_T)\overset{\textrm{low res.}}{=}\frac{\langle v_n^{\gamma}(p^\gamma_T) v_n^{h} \cos(n(\Psi_n^{\gamma}(p^\gamma_T)-\Psi_n^{h})) \rangle}{\sqrt{\langle ( v_n^{h} )^2 \rangle}}, \nonumber \\
\label{eq:v2SPdiff}
\end{eqnarray}
where the $h$ superscript refers to the charged hadrons with which photons are correlated. In this work we evaluate $v_n^h$ and $\Psi_n^h$ at midrapidity, integrated over $p_T>0.3$~GeV. It was verified that the result of Eq.~(\ref{eq:v2SPdiff}) did not change with other choices of lower $p_T$-cuts between $0$ and $0.5$~GeV.
Equation~(\ref{eq:v2SPdiff}) assumes that the events that are averaged over have small multiplicity fluctuations. That is, all events are assumed to produce a similar number of photons and hadrons. If large multiplicity fluctuations are present, Eq.~(\ref{eq:v2SPdiff}) will take a different form depending on the details of the measurement, for example whether all events are treated equally, or if events with more hadrons and photons are given a larger weight in the event-average.
To reduce the importance of multiplicity fluctuations, experimental collaborations first measure $v^\gamma_n\{EP\}$ in small centrality bins~\cite{LohnerThesis}. The anisotropy measurements from these smaller centrality bins are then recombined into a larger centrality to reduce the statistical uncertainty of the measurement. When the small centrality bins are recombined, each centrality is weighted by the number of photons measured in the centrality~\cite{LohnerThesis}:
\begin{eqnarray}
v^\gamma_n\{EP\}[c_{\textrm{min}},c_{\textrm{max}}]=\frac{\sum_{c \in [c_{\textrm{min}},c_{\textrm{max}}]} v^\gamma_n\{EP\}[c] N[c]}{\sum_{c \in [c_{\textrm{min}},c_{\textrm{max}}]} N[c]}, \nonumber \\
\label{eq:vnCent}
\end{eqnarray}
where $N[c]$ is the number of photons measured in centrality $c$, $v^\gamma_n\{EP\}[c]$ is the momentum anisotropy measured in $c$ and $[c_{\textrm{min}},c_{\textrm{max}}]$ is the final (large) centrality class in which the measurement is reported. At the LHC the sub-bins are~\cite{LohnerThesis} $0-5\%$, $5-10\%$, $10-20\%$, $20-30\%$ and $30-40\%$, while $10\%$ bins are used at RHIC~\cite{BannierThesis}.
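The recombination itself is a simple photon-number-weighted average; a minimal sketch (the photon counts below are made-up illustrative numbers):
\begin{verbatim}
def vn_recombined(vn_sub, N_sub):
    # Photon-number-weighted recombination of sub-bin anisotropies,
    # Eq. (eq:vnCent)
    return sum(v * n for v, n in zip(vn_sub, N_sub)) / sum(N_sub)

# LHC-style sub-bins: 0-5, 5-10, 10-20, 20-30, 30-40 %
print(vn_recombined([0.010, 0.020, 0.030, 0.045, 0.055],
                    [5.0e4, 4.5e4, 8.0e4, 6.0e4, 4.0e4]))
\end{verbatim}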
The quantity $v^\gamma_n\{EP\}[c_{\textrm{min}},c_{\textrm{max}}]$ --- Eq.~(\ref{eq:vnCent}) --- is the one that should be compared to PHENIX and ALICE measurements. All photon anisotropy calculations presented in this paper are computed with Eq.~(\ref{eq:vnCent}) using the bins just listed.
\section{Results and discussion}
We now show and discuss the result of integrating the photon rates discussed in Sections~\ref{sec:thermal} and \ref{sec:visc_rates}, with the hydrodynamic approach discussed in Section \ref{sec:hydro}. Prior to doing this, an important clarification is needed. The model used here is a hybrid approach, in the sense that it is not purely hydrodynamics: it has a viscous fluid-dynamics stage that is followed by a transport phase -- modelled with UrQMD -- with dynamic decoupling. The UrQMD afterburner is important to a successful theoretical interpretation of the measured proton spectra and $v_2$ \cite{Ryu:2015vwa,Ryu_prep}.
However, extracting the photons via the vector meson spectral density~\cite{Turbide:2003si} from a transport model is still very much a topical subject of current research.
More generally, electromagnetic emissivities are typically calculated in conditions near thermal equilibrium, as discussed earlier in this paper, and a knowledge of the local temperature and of other thermodynamic variables is usually absent from most transport formulations. One resolution of this situation has been to coarse-grain the transport final states, and to assign local temperatures to cells on a space-time grid using the equation of state \cite{Huovinen:2002im,Endres:2015fna}. Such procedures are numerically intensive, but will be studied within our framework in detail in the future. The point of view adopted in this work is that, apart from proton observables, hydrodynamics does provide a realistic environment for the bulk of hadronic observables, especially if the bulk viscosity is included \cite{Ryu:2015vwa}. Therefore, for the calculation of photons, the contribution of the UrQMD phase of the spatiotemporal evolution is modelled by letting the fluid-dynamical evolution proceed past the switching temperature from hydro to UrQMD (the ``particlization temperature'' \cite{Huovinen:2012is}), $T_{\rm switch} = 145$ MeV, down to a more typical hydro freeze-out temperature of $T = 105$ MeV. In hydrodynamical approaches in general, the freeze-out temperature is a free parameter of the model: the dependence of the photon signal on this parameter is discussed later in this section.
\subsection{RHIC}
The direct photon spectrum and $v_2$ were measured at RHIC by the PHENIX collaboration~\cite{Adare:2014fwh,Adare:2011zr,Bannier:2014bja,Adare:2015lcd}. These measurements were made in Au-Au collisions at $\sqrt{s_{NN}}=200$~GeV for centralities 0-20\% and 20-40\%. Comparisons of the hydrodynamical model's results for direct photon spectra with these measurements are shown in Fig. \ref{fig:directPhotonRhic_spect}.
The preliminary, minimum-bias direct photon spectrum measurement from STAR~\cite{ChiYangThesis} is shown in Fig.~\ref{fig:directPhotonRhic_spect_MB}, and is compared with both the hydrodynamical calculations and the PHENIX measurements from Ref.~\cite{Adare:2014fwh}.
The dashed lines represent the thermal contributions, that is the sum of all contributions of thermal origin. The prompt photons are calculated in NLO QCD, as explained earlier. The contribution of non-cocktail photons (Section~\ref{sec:nonCocktail}) is also shown.
\begin{figure}[tb]
\includegraphics[width=0.32\textwidth]{figs/spectra_direct_photons_AuAu_200GeV_cent0020_PHENIX2014_with_ratio.pdf}
\includegraphics[width=0.32\textwidth]{figs/spectra_direct_photons_AuAu_200GeV_cent2040_PHENIX2014_with_ratio.pdf}
\caption{The result of a hydrodynamic calculation of direct photon spectra, for Au - Au collisions at RHIC, in the 0 - 20 \% (top panel) and 20 - 40\% (bottom panel) centrality range. The different curves are explained in the text; the data are from Ref.~\cite{Adare:2014fwh}.
}
\label{fig:directPhotonRhic_spect}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=0.32\textwidth]{figs/spectra_direct_photons_AuAu_200GeV_centMB_PHENIX2014_with_ratio.pdf}
\caption{The result of a hydrodynamic calculation of direct photon spectra, for Au - Au collisions at RHIC, in the minimum bias centrality range. The data are from Refs. \cite{Adare:2014fwh,ChiYangThesis}.
}
\label{fig:directPhotonRhic_spect_MB}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_AuAu_200_cent0020_IPGlasma_PHENIX2015.pdf}
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_AuAu_200_cent2040_IPGlasma_PHENIX2015.pdf}
\caption{Hydrodynamic calculation of the direct photon $v_2$, for Au - Au collisions at RHIC, in the 0 - 20 \% (top panel) and 20 - 40\% (bottom panel) centrality range. The data are from Ref.~\cite{Adare:2015lcd}.}
\label{fig:directPhotonRhic_v2}
\end{figure}
The curves labeled ``direct'' represent the sum of all sources considered in this work (Section~\ref{sec:photonSources}). One observes that the calculation, with the contributions enumerated in the text, and the experimental data tend to converge for values of $p_T \gtrsim 2.5$ GeV. There, the calculation consists almost entirely of the pQCD component. For intermediate transverse momenta (as defined by this figure, $p_T \approx 1.5$ GeV), the calculation underestimates the central points of the PHENIX data by roughly a factor of~3.
Agreement of the calculations with the preliminary STAR data (Fig.~\ref{fig:directPhotonRhic_spect_MB}) is considerably better, well within systematic uncertainties.
In the low $p_T$ region, calculation and data converge again, but bear in mind the strong caveats regarding the trustworthiness of the pQCD calculations at such low transverse momenta. As supported by a direct comparison with pp photon data, the prompt photon curve shown in Figs. \ref{fig:directPhotonRhic_spect} and \ref{fig:directPhotonRhic_spect_MB} should hold down to $p_T \approx$ 1 GeV. While one does not expect a sudden breakdown of the formalism used here, it does become less predictive as the photon momentum goes down. The theoretical interpretation of photon production in nucleus-nucleus collisions would rest on much firmer ground if a fundamental measurement of soft photons from pp collisions existed, extending to values of transverse momenta comparable to those in Figs. \ref{fig:directPhotonRhic_spect} and \ref{fig:directPhotonRhic_spect_MB}.
Such a measurement, while challenging, would provide a valuable baseline for phenomenological modelling, and would further our understanding of QCD in its strongly coupled regime.
Figure \ref{fig:directPhotonRhic_v2} shows the calculated photon elliptic flow, compared with data measured by the PHENIX collaboration. The photon anisotropy was evaluated with Eq.~(\ref{eq:vnCent}). The elliptic flow shows the now characteristic shape, with the turnover at $p_T \gtrsim$ 2 GeV driven by the pQCD photons. As was the case for the photon spectra, the calculation of the photon elliptic flow systematically undershoots the central data points. However, and this also holds for the spectra, taking into account the statistical and systematic uncertainties greatly reduces the tension between theory and experiment. Thermal photons, represented by the dashed curves, are shown separately to highlight that the thermal contribution does exhibit a large $v_2$, but that this momentum anisotropy is then suppressed by prompt photons.
As can be expected from their small contribution to the direct photon spectra (Fig.~\ref{fig:directPhotonRhic_spect}), non-cocktail photons do not contribute significantly to the direct $v_2$. They are not shown in Figure~\ref{fig:directPhotonRhic_v2}.
\subsection{LHC}
\begin{figure}[tbh]
\includegraphics[width=0.33\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent0020_IPGlasma_with_ratio.pdf}
\includegraphics[width=0.33\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent2040_IPGlasma_with_ratio.pdf}
\caption{The direct photon spectrum for Pb-Pb collisions at the LHC 0 - 20\% (top panel) and 20 - 40\% (bottom panel) centrality range. The different curves are explained in the text, and the data are from the ALICE Collaboration~\cite{Adam:2015lda}.}
\label{fig:directPhotonLHC}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=0.33\textwidth]{figs/v2_direct_photons_PbPb_2760_cent0040_IPGlasma_additive.pdf}
\caption{The direct photon $v_2$ at 0 - 40\% centrality. Data are from the ALICE Collaboration~\cite{Lohner:2012ct,LohnerThesis}.}
\label{fig:directPhotonV2LHC}
\end{figure}
The direct photon spectrum and $v_2$ in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV are presented in Figs.~\ref{fig:directPhotonLHC} and \ref{fig:directPhotonV2LHC} respectively. The calculations are compared with measurements from the ALICE collaboration~\cite{Adam:2015lda,Lohner:2012ct,LohnerThesis}. As for RHIC, the contributions to the spectrum of prompt, thermal and non-cocktail photons, along with their sum (direct photons), are shown separately in Fig.~\ref{fig:directPhotonLHC}. The elliptic flow of thermal photons and of the total number of direct photons is plotted in Fig.~\ref{fig:directPhotonV2LHC}. The general features of the photon data set at the LHC are reminiscent of those at RHIC, but important differences emerge when comparing with theoretical calculations. For both LHC observables --- spectrum and $v_2$ --- there is less tension between data and theory than at RHIC. In fact, the theory results are in agreement with the experimental results when considering the statistical and systematic uncertainties.
As previously, the prompt contribution begins to take over at around $p_T \approx 3$ GeV, but otherwise lies systematically below the thermal sources.
\subsection{Effect of bulk viscosity}
The calculation of direct photons presented in this work is the first one to include the effect of bulk viscosity on both the medium evolution and the photon emission rates. Considering that the introduction of bulk viscosity was shown to have a large effect on the description of the hadronic observables~\cite{Ryu:2015vwa}, it is important to highlight its effect on photon production.
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent0020_IPGlasma_bulk_vs_shear.pdf}
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_PbPb_2760_cent0040_IPGlasma_bulk_vs_shear.pdf}
\caption{Effect of bulk viscosity on the direct photon spectrum (top panel) and $v_2$ (bottom panel) in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV. ALICE measurements~\cite{Wilde:2012wc,Lohner:2012ct,LohnerThesis} are shown for reference.}
\label{fig:directPhotonBulkVsShear}
\end{figure}
In Fig.~\ref{fig:directPhotonBulkVsShear}, the direct photon spectrum and $v_2$ are shown with and without bulk viscosity for $\sqrt{s_{NN}}=2760$~GeV Pb-Pb collisions at the LHC. Since the inclusion of bulk viscosity modifies the \emph{shear} viscosity necessary to describe the hadronic momentum anisotropies~\cite{Ryu:2015vwa}, two direct photon calculations without bulk viscosity are shown: one with $\eta/s=0.095$, which is the shear viscosity necessary to describe the hadronic $v_n$ in the presence of bulk viscosity~\cite{Ryu:2015vwa}, and one with $\eta/s=0.16$, for which a good description of hadronic $v_n$ can be achieved with $\zeta/s=0$. It can be seen that the two calculations that do not include bulk viscosity are similar, both for the spectrum and the $v_2$.
The effect of bulk viscosity on the spectrum of direct photons is small, and consists of a slight softening of the spectrum. Like the spectrum, the $v_2$ increases at low $p_T$ and decreases at high $p_T$. This changes the shape of $v_2$, whose maximum value is shifted toward lower $p_T$. This is a distinctive photonic signal of the finite bulk viscosity of QCD around the transition region.
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent0020_IPGlasma_noViscousRates.pdf}
\hspace{1cm}
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_PbPb_2760_cent0040_IPGlasma_noViscousRates.pdf}
\caption{Effect of viscosity corrections to the photon emission rates for the direct photon spectrum (top panel) and $v_2$ (bottom panel) in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV.}
\label{fig:directPhotonRateCorr}
\end{figure}
The effect of bulk viscosity on direct photons can be divided into two separate contributions: its effect on the photon emission rates (Section~\ref{sec:visc_rates}), and its effect on the spacetime evolution of the medium. The effect of bulk viscosity on the emission rates is illustrated in Fig.~\ref{fig:directPhotonRateCorr} by showing the photon spectrum and $v_2$ with and without corrections to the rates due to bulk viscosity. The effect of the shear viscous correction to the photon rates is shown as well, for reference.
Viscous corrections to the rates have a small effect on the direct photon spectrum. This can be understood from the fact that viscous corrections are larger at higher $p_T$, where prompt photons dominate over thermal ones.
The direct photon $v_2$, on the other hand, is suppressed at higher $p_T$ by both shear and bulk viscosity corrections to the photon rates. The suppression is of the order of $20-30\%$. Recall however that not all photon emission rates are corrected for the effect of shear and bulk viscosities, as listed in Table~\ref{table:rates}. In consequence, the results shown in Fig.~\ref{fig:directPhotonRateCorr} most likely underestimate the effect of viscosity on the photon rates.
\begin{figure}[tbp]
\includegraphics[width=0.23\textwidth]{figs/spacetime_volume_distribution_WBulk_vs_NBulk_shear0095_largeBins.pdf}
\includegraphics[width=0.23\textwidth]{figs/gammaFactor_temp_distribution_WBulk_vs_NBulk_shear0095.pdf}
\caption{Event-average spacetime volume $\left\langle d V_4/d y \right\rangle_T$ (left) and event-average flow velocity $\langle u^\tau \rangle_T$ (right) for hydrodynamical model with and without bulk viscosity in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV.}
\label{fig:directPhotonSpacetime}
\end{figure}
The effect of bulk viscosity on the spacetime description of the medium is illustrated in Fig.~\ref{fig:directPhotonSpacetime}. The change in spacetime volume induced by the inclusion of bulk viscosity is shown for different ranges of temperature on the left, while the effect of bulk viscosity on the flow velocity distribution, as quantified by $u^\tau=\sqrt{1+(u^x)^2+(u^y)^2}$, is shown on the right. The effect of bulk viscosity is clear: it reduces the transverse expansion of the medium at low temperature, but considerably increases its spacetime volume. Since thermal photon emission is proportional to the spacetime volume, the increase in volume translates into a larger number of emitted photons. On the other hand, the slower transverse expansion implies a softer photon spectrum, with more soft photons emitted but fewer hard ones. It is the combination of these two effects that produces an overall softening of the photon spectrum in the presence of bulk viscosity.
\subsection{Effect of photon emission rates}
Calculations of direct photons in heavy ion collisions do not always use the same photon emission rates in the evaluation of thermal photons, which has a large impact on the level of agreement with data. The photon rates used in this work were summarized in Table~\ref{table:rates}. The contribution to photon emission of a $\pi$-$\rho$-$\omega$ system~\cite{Holt:2015cda} has just been published and was not included in previous calculations of direct photons. More importantly, parametrizations for the photon emission rate evaluated with the $\rho$ spectral function, along with additional emission from $\pi+\pi$ bremsstrahlung, were made available in Ref.~\cite{Heffernan:2014mla}. In consequence, calculations of direct photons made before this point often included only photon emission from a meson gas. The importance of the different hadronic photon emission channels on photonic observables is shown in Fig.~\ref{fig:directPhotonHadronicRates}. Since the effects of shear and bulk viscosity have not yet been evaluated for all these photon emission rates, corrections to the photon rates due to viscosities are \emph{not} included for any emission rates in this comparison.
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent0020_IPGlasma_hadronicRates.pdf}
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_PbPb_2760_cent0040_IPGlasma_hadronicRates.pdf}
\caption{Importance of different hadronic photon production channels on the direct photon spectrum (top) and $v_2$ (bottom) in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV.}
\label{fig:directPhotonHadronicRates}
\end{figure}
It is clear from Fig.~\ref{fig:directPhotonHadronicRates} that including only photon emission from a gas of mesons leads to a considerable underestimation of the direct photon $v_2$. The photon channels evaluated with the $\rho$ spectral function are especially important.
On a last note, it is relevant to highlight that QGP and hadronic photon emission rates that are altogether different from those used in the present work have been investigated over the past years. For completeness, the results of folding the hydrodynamical description of heavy ion collisions presented in this work with two of these rates are presented. The first rate is the ``semi-QGP'' photon emission rate~\cite{Gale:2014dfa}, which includes confinement effects on photon emission. The second rate is the hadronic rate from Zahed and Dusling~\cite{Dusling:2009ej}, which is evaluated using a different approach than the hadronic rates used in this work. Once again, viscous corrections to the photon rates are not included in this comparison.
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent0020_IPGlasma_otherRates.pdf}
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_PbPb_2760_cent0040_IPGlasma_otherRates.pdf}
\caption{Direct photon spectrum (top) and $v_2$ (bottom) evaluated with different QGP and hadronic photon emission rates, in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV. See text for details.
}
\label{fig:directPhotonOtherRates}
\end{figure}
The semi-QGP photon rate is considerably smaller than the QGP rate, which results in a 30\% suppression of the direct photon spectrum, shown on the top of Fig.~\ref{fig:directPhotonOtherRates}. The $v_2$, shown on the lower part of the figure, does not change significantly: an intuitive way of understanding this result is to note that while a suppression of the photon rate at high temperature will increase the \emph{thermal} photon $v_2$, it will reduce the contribution of thermal photons with respect to prompt photons. The two effects largely cancel out.
An important consequence of the suppression of the QGP rate studied in Ref.~\cite{Gale:2014dfa} is that it no longer matches the hadronic rate well in the deconfinement region, as was the case for the QGP rate used previously in this work (Fig.~\ref{fig:rates}). This fact has yet to be addressed in a satisfactory fashion; it highlights the importance of understanding the photon rates in the transition region.
The hadronic rate from Ref.~\cite{Dusling:2009ej} is around 40\%-100\% larger than the one used in the present work. This results in a larger direct photon spectrum and a larger $v_2$, as illustrated in Fig.~\ref{fig:directPhotonOtherRates}. Since the hadronic rates being compared do not include the same photon production channels, this difference is not unexpected. Further studies of these hadronic emission rates themselves will be required to establish if the two approaches can be found to agree for comparable production channels.
\subsection{The importance of late stage photon emission}
\label{sec:latePhotons}
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/vx_vy_temp_distribution_WBulk_largeBins.pdf}
\includegraphics[width=0.35\textwidth]{figs/v2_thermal_photons_PbPb_2760_cent0040_IPGlasma_Tcuts_WBulk.pdf}
\caption{Flow anisotropy of the medium with respect to the reaction plane (top) and momentum anisotropy of thermal photons (bottom), for different temperature ranges, in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV.}
\label{fig:anisotropyVsT}
\end{figure}
The current understanding of heavy ion collisions is that a flow velocity anisotropy is created during the medium's expansion as a result of the initial anisotropy in the energy deposition. This anisotropy increases with time, although it eventually plateaus and reverses under the effect of the viscosities and the decreasing pressure gradients. Using temperature as a proxy for time, this development of flow anisotropies is shown at the top of Fig.~\ref{fig:anisotropyVsT} as the $x$-$y$ flow asymmetry for different temperature ranges. The anisotropy is evaluated with respect to the reaction plane, with the $x$-axis aligned with the impact parameter of the colliding nuclei. Like hadrons, the momentum anisotropy of thermal photons is directly related to this flow velocity anisotropy. This is illustrated by plotting on the lower part of Fig.~\ref{fig:anisotropyVsT} the $v_2$ of thermal photons emitted in different regions of temperature. Lower temperatures are associated with larger time, which in turn is associated with larger thermal photon $v_2$.
\begin{figure}[tb]
\includegraphics[width=0.35\textwidth]{figs/spectra_direct_photons_PbPb_2760_cent0020_IPGlasma_Tcuts.pdf}
\includegraphics[width=0.35\textwidth]{figs/v2_direct_photons_PbPb_2760_cent0040_IPGlasma_Tcuts.pdf}
\caption{Importance of post-particlization photon production on the photon spectrum (top) and $v_2$ (bottom) in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV.}
\label{fig:directPhotonPostFO}
\end{figure}
The magnitude of the $v_2$ of photons emitted at late times means that they can play a large role in increasing the direct photon $v_2$. Their importance is illustrated in Fig.~\ref{fig:directPhotonPostFO} by showing explicitly the contribution to the direct photon spectrum and $v_2$ of thermal photons emitted between $T=105$~MeV and $T=145$~MeV. This temperature range is chosen because $T=145$~MeV is the switching temperature between hydrodynamics and UrQMD that best describes the hadronic observables~\cite{Ryu:2015vwa} in Pb-Pb collisions at $\sqrt{s_{NN}}=2760$~GeV, as explained at the beginning of this section. Photons emitted below this switching temperature would thus be best evaluated using UrQMD~\cite{Bauchle:2010ym}, but this challenging task will be addressed in future work. The large contribution to the direct photon $v_2$ of these late stage photons highlights the importance of further studying photon emission during this phase of the evolution.
\section{Conclusions}
In this paper, the production of photons in heavy ion collisions was studied at RHIC and the LHC using a hydrodynamical model of heavy ion collisions. This comprehensive model included realistic initial conditions (IP-Glasma), along with second-order hydrodynamics equations with both shear and bulk viscosities.
With the inclusion of the elements comprising this paper, most direct photon theoretical results were found to lie either within the limits set by the systematic and statistical uncertainties of measurements at both colliders, or slightly below. A larger discrepancy between theory and PHENIX data remains, but in all cases the agreement between fluid dynamical calculations and experimental data was found to be improved compared to what it was in the past. This level of agreement with data provides strong support to the idea that thermal photons are the principal source of the low $p_T$ direct photon enhancement and of the large photon momentum anisotropy.
The presence of bulk viscosity in the hydrodynamical evolution produced a small effect on the photon spectrum at the LHC, and a modest change in the overall magnitude of the photon $v_2$. On the other hand, it induced a clear change in the shape of $v_2$, enhancing it at low $p_T^\gamma$ and reducing it at higher $p_T^\gamma$. While theoretical and experimental uncertainties do not currently permit one to determine whether this change in the shape of $v_2$ is favored by data, the reduction of both uncertainties in the future could allow direct photons to be used to constrain the bulk viscosity of QCD.
The significant contribution of late stage photon emission to the photon $v_2$, quantified in Section~\ref{sec:latePhotons}, highlights the need for a more sophisticated study of photon emission in this phase of the collisions. Work also remains to be done on constraining the photon emission rates, especially in the deconfinement region. These questions will be addressed in future work, and may help shed light on the different level of agreement observed with RHIC and LHC data.
Another topic necessitating further attention is the need for a more sophisticated treatment of prompt photon production in heavy ion collisions that includes both parton energy loss and jet-medium photon production \cite{Fries:2002kt,*Turbide:2005fk}. This could be especially important at RHIC, where thermal photons are not as dominant over prompt photons as at the LHC. The production of photons during the pre-thermalised phase of the collisions is also a part of the framework needing greater scrutiny. The encouraging results presented in this paper signal that the current understanding of thermal photons is mature enough for such investigations to be undertaken.
\begin{acknowledgements}
The authors would like to thank the organizers and participants of the ``EMMI Rapid Reaction Task Force on the direct-photon flow puzzle'', together with Takao Sakaguchi, for fruitful discussions. The authors thank Kevin Dusling and Ismail Zahed for providing a tabulation of their photon emission rates. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. GSD and BPS are supported under DOE Contract No. DE-SC0012704. M.L.\ acknowledges support from the Marie Curie Intra-European Fellowship for Career Development grant FP7-PEOPLE-2013-IEF-626212. Computations were made in part on the supercomputer Guillimin from McGill University, managed by Calcul Qu\'ebec and Compute Canada. The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), NanoQu\'ebec, RMGA and the Fonds de recherche du Qu\'ebec - Nature et technologies (FRQ-NT). This research also used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
\end{acknowledgements}
\section{Introduction}
The standard {\small ADM } formulation of canonical general relativity \cite{ADM,Dirac}
may be considered as an initial value problem, defined by
considering initial canonical data on an arbitrary spatial slice in a
spacetime foliated by a stack of such slices. The spatial
slice, let us call it $\Sigma_t$ (assuming that it is a collection of
equal-time points), has a 3-metric, $g_{ij}(x)$, inherited from the
4-metric, $\gamma_{\alpha\beta}(X)$ of the surrounding spacetime ${\cal M}$ and
a momentum $p^{ij}(x)$ conjugate to the metric.
\footnote{
Notation: Greek indices run from 0 to 3, and Latin indices from 1 to 3.
$\Sigma $
denotes 3-dimensional space with coordinates $x^i$, $\Sigma_t$ is an
equal-time spatial slice, ${\cal M}$ denotes 4-dimensional spacetime with
coordinates $X^\mu$. ${\rm LDiff}{\cal M}$ is
the Lie algebra of 4-dimensional diffeomorphisms ${\rm Diff}{\cal M}$. }
One uses this spatial slice to orient an orthogonal basis
$(Nn^{\alpha}, N^i X^{\mu}_i )$, defined by the direction normal to the
slice, $n^{\mu}$, and the three tangential directions $X^{\mu}_i\equiv
\frac{\partial X^{\mu}}{\partial x^i}$.
$N$ and $N^i$ are the lapse and
shift. Any quantity of interest from covariant general relativity is then
decomposed with respect to this basis. Thus, the canonical theory is
obtained by decomposing the Hilbert-Einstein action with respect to
$(Nn^{\mu}, N^i X^{\mu}_i)$. The result describes how the canonical data
$g_{ij}(x)$ and $p^{ij}(x)$ are propagated in these four directions by
four constraints, the hamiltonian constraint ${\cal H}_{\perp}$ in the normal direction
and the momentum constraints ${\cal H}_i $ tangentially.
We shall be specifically concerned with the ${\cal H}_{\perp},\ {\cal H}_i $ constraints as
generators of normal and tangential deformations in the sense described
above (as proven in \cite{HKT}). For the canonical representation
of the Einstein theory, one
also requires the algebra of the constraints to describe the result of
one deformation followed by another. This is usually referred to as the
Dirac algebra:
\begin{eqnarray}
\left\{{\cal H}_{\perp}(x),{\cal H}_{\perp}(x')\right\}&=&g^{ij}(x)
{\cal H}_i (x)\delta_{,j}(x,x')-(x\leftrightarrow x')\\
\left\{{\cal H}_{\perp}(x),{\cal H}_j (x')\right\}&=&{\cal H}_{\perp,i}(x)\delta(x,x')+{\cal H}_{\perp}(x)
\delta_{,i}(x,x')\\
\left\{{\cal H}_i (x),{\cal H}_j (x')\right\}&=&{\cal H}_j (x)\delta_{,i}(x,x')
-(ix\leftrightarrow jx').
\end{eqnarray}
The last line in the Dirac algebra, the Poisson bracket between the
two momentum constraints, is the statement of ${\rm Diff}\Sigma$ invariance on
$\Sigma_t$: one spatial deformation followed by another is equivalent to
an overall spatial deformation. The second
line is simply the transformation of ${\cal H}_{\perp}(x)$ as a scalar density of weight 1
under ${\rm Diff}\Sigma$. One needs to be more careful with the first line, the
Poisson bracket of two hamiltonian constraints. As Hojman, Kucha\v r and
Teitelboim explain in \cite{HKT}, this
bracket describes how, if one uses ${\cal H}_{\perp}$ to move from an initial to
a final slice via an intermediate one, the arrival point on the final slice
depends on the choice of the intermediate slice. This path-dependence
of the ${\cal H}_{\perp}$ deformation makes the hamiltonian constraint somewhat
difficult to use, and is responsible for the explicit appearance of the
metric field $g^{ij}(x)$ in
the right hand side of the $\left\{{\cal H}_{\perp}(x),{\cal H}_{\perp}(x')\right\}$ Poisson bracket.
This is a very problematic feature of the Dirac algebra. The metric
$g^{ij}(x)$ is not a structure constant but one of the fields,
which means that the Dirac algebra is not a true Lie algebra.
The existence of powerful group theoretic
techniques which may be employed in the
quantisation of theories classically described by Lie algebras
means that the right hand side of this Poisson bracket is unfortunate.
It stands as an obstacle to any attempt to apply group theoretic
quantisation methods to the dynamical part of the canonical gravity theory.
\footnote{
The canonical formulation of gravity ought to be particularly convenient
for a group theoretic approach to quantisation.
Control over the invariance
group of the theory would enable one to construct specific,
self-adjoint representations of its Lie algebra, i.e.\ quantum versions
of the constraints and/or canonical variables, acting on an
appropriate Hilbert space \cite{CJI}.
The kinematical part of such an approach, the canonical commutation
relations, has been addressed by Isham and Kakas with promising
results \cite{IsKa}.
Unfortunately, but perhaps not surprisingly, the dynamical part,
including the hamiltonian generators in the scheme,
has proved a more difficult problem, with central obstacles being
the Dirac algebra and the hamiltonian constraint. }
In this paper we shall reconsider the Dirac algebra, listing which
assumptions of the {\small ADM } analysis make it unavoidable, and keeping an open
mind for alternative algebras of generators of deformations in pure gravity.
The motivation for this work was the discovery by Brown and Kucha\v r of a
candidate algebra for gravity of the form Abelian$\times{\rm Diff}\Sigma$
\cite{BrKu}. It was discovered in the context of a non--derivative
coupling of incoherent dust to gravity. Performing a canonical
decomposition of the system, they found the surprising result that
the dust field helped one to select
a particular scalar combination of the gravitational constraints, a quantity
consisting purely of gravitational variables which, furthermore, had
the property of being abelian. More specifically, for incoherent dust,
this scalar is $G(x):={\cal H}_{\perp}^2(x)-g^{ij}(x){\cal H}_i (x){\cal H}_j (x)$ (or rather its
square root) and satisfies $\{G(x),G(x')\}=0$.
Brown and Kucha\v r concluded with a promising proposal. The scalar density
$G$ is a function of the gravitational variables, like the hamiltonian
constraint ${\cal H}_{\perp}$, and thus if one used $G$ instead of ${\cal H}_{\perp}$, together with
the standard ${\rm Diff}\Sigma$ constraints, the algebra for gravity would not
be the problematic Dirac algebra but would instead have the form
Abelian$\times{\rm Diff}\Sigma$. At first sight, it is unclear how this proposal
can be implemented. For example, $G(x)$ is quadratic in the
old constraints and hence, if the dust field is removed, it does not
generate motion on the constraint surface of pure gravity. This
problem does not arise if the gravitational field remains coupled
to some matter field. Consequently,
the dilemma arises as to whether the abelian constraints should be
investigated in the context of reference fluids and clocks, or
in pure gravity. The first option avoids the above problem and has
been investigated in \cite{KuRo, BrMa, IK} who generalised \cite{BrKu}
to scalar fields and perfect fluids and discovered even more abelian scalar
densities,
which we shall term Kucha\v r constraints. However, this sidesteps
the most intriguing feature of these scalars, the fact that they only
involve gravity variables.
It has been shown recently \cite{FM} that for pure gravity there is a
whole family of such abelian scalar densities, including those found
via particular
matter couplings, which are solutions of a nonlinear partial
differential equation. Such a Kucha\v r scalar density
${\cal K}[({\rm det}g),{\cal H}_{\perp},{\cal H}_i ]$ of weight $\omega$ can be
incorporated in the ``Kucha\v r algebra''
\begin{eqnarray}
\left\{{\cal K}(x),{\cal K}(x')\right\}&=&0,\\
\left\{{\cal K}(x),{\cal H}_i (x')\right\}&=&{\cal K}_{,i}(x)\delta(x,x')+
\omega{\cal K}(x)\delta_{,i}(x,x'),\\
\left\{{\cal H}_i (x),{\cal H}_j (x')\right\}&=&{\cal H}_j (x)\delta_{,i}(x,x')
-(ix\leftrightarrow jx').
\end{eqnarray}
In the present paper, the discovery of the Kucha\v r scalars and algebra is
only the motivation for a search for abelian generators of deformations in
pure gravity. We will not attempt here to derive the precise form of the
Kucha\v r scalars from our results,
although we discuss a possible relationship. Our focus is evolution as an
abelian
timelike deformation produced by scalar generators. We shall identify how the
hamiltonian constraint and its algebra are tied to the {\small ADM } concept of spatial
slices and the normal to the slice, which is unrelated to genuine time
evolution. We find that, if the 3+1 split does not follow the convenient
route of the orthogonal basis of lapse and shift, one can find scalar
generators of abelian deformations which have a close relationship
to time evolution.
The interesting feature is that the most suitable method for obtaining the
above results is to consider the long-standing issue of the r\^ole of
spacetime diffeomorphisms in the canonical theory. We shall discuss
how spacetime diffeomorphisms can be handled canonically if one takes
into account the ways in which the space $\Sigma$ is embedded in
spacetime ${\cal M}$ (for globally hyperbolic spacetime, ${\cal M}\sim\Sigma\times R$)
and how ${\rm Diff}{\cal M}$ is hidden in the {\small ADM } analysis because this embedding is
treated as fixed. A more suitable picture of spacetime ${\cal M}$ as a bundle
with fibres $R$ over space $\Sigma$, which naturally accommodates embeddings,
is proposed in section 2. In section 3, we use this picture to write down
induced spacetime diffeomorphisms on spatial objects. We then move closer to
the usual representation of deformations in canonical theory by writing the
induced spacetime diffeomorphisms as Lie derivatives on tensor quantities,
for example the 3-metric $g_{ij}(x)$ (section 4). From these general
transformations, for
particular choices of diffeomorphisms and embeddings, one can derive the
usual {\small ADM}-Dirac generators and understand more precisely the
assumptions that go into the construction of the normal deformation by the
hamiltonian constraint, as we show in section 5. Interestingly,
we also find a generator which is in many ways more natural than the
hamiltonian constraint corresponding to diffeomorphisms along
the $R$ fibre. This constraint is abelian and, in contrast
to the hamiltonian constraint, the evolution it generates
can be more naturally associated with timelike evolution. This particular
choice is discussed in Section 6, and the consequences for
quantisation, along with other concluding remarks are given in
Section 7.
\section{${\rm Diff}{\cal M}$ and the embedding of $\Sigma$ in ${\cal M}$}
The 4-dimensional formulation of general relativity is covariant
under diffeomorphisms (${\rm Diff}{\cal M}$) of the spacetime manifold ${\cal M}$.
In order to develop a Hamiltonian formulation for the purposes of
canonical quantisation one must introduce a 3+1 split of spacetime
into space and time. While not manifestly covariant, it is clear that
this representation must still exhibit the symmetries of the
4-dimensional
theory if only in terms of an arbitrariness of the embedding of the
spatial slice. The actual question of how the
${\rm Diff}{\cal M}$ covariance is realised in the canonical theory is clearly
of importance. However, the {\small ADM } formulation is not necessarily the most
appropriate formalism in which to address this question. While
in 4 dimensions we have the ${\rm LDiff}{\cal M}$ algebra, in the {\small ADM } formalism
the only algebraic-like structure is the Dirac algebra.
It is accepted that the Dirac algebra is, somehow, the ``projection'' of
${\rm LDiff}{\cal M}$ onto the foliated spacetime. However, this is not a clear statement.
The Dirac algebra is very far from being either isomorphic or a subalgebra
of ${\rm LDiff}{\cal M}$ since it is not even a true algebra.
Recovering ${\rm Diff}{\cal M}$ in the canonical theory is difficult, essentially
because a fundamental tenet of a canonical theory is {\it not} to have
explicit reference to what appears as ambient spacetime. Fortunately,
as has been pointed out in detail by Isham and Kucha\v r
\cite{IsKu}, there is indeed a link provided between space and spacetime.
It is encoded in the way space is thought of as embedded in spacetime in
a 3+1 theory. That is, in the common assumption of a globally
hyperbolic spacetime, ${\cal M}\sim\Sigma\times R$, there are many ways in
which $\Sigma$ is embedded in ${\cal M}$ (provided
the metric induced on $\Sigma$ can be spacelike).
However, in the {\small ADM } approach once the 3+1 decomposition is accomplished
one appears to lose contact with details of the embedding.
In order to carefully analyse the realisation of 4-dimensional symmetries in the
3+1 theory it is clearly necessary to have explicit reference to the
embedding information at the canonical level. For this reason it is
important to know at which stage of the {\small ADM } approach one loses the
explicit embedding information, at least in the sense that
this information is arbitrary and one can modify the particular
embedding if required.
The procedure of the decomposition is to assume that ${\cal M}$ is foliated
by (equal-time) spacelike slices $\Sigma_t$. If we label coordinates
in ${\cal M}$ by $X^\alpha$ and in $\Sigma_t$ by $x^i$, then the Jacobian
$X^\mu_i:={\partial X^\mu\over \partial x^i}$ describes
the way that $\Sigma_t$ is embedded in ${\cal M}$.
Each slice $\Sigma_t$ acquires a 3-metric which is the
projection of the spacetime metric $\gamma_{\alpha\beta}$ on an orthogonal
basis defined on the slice via the decomposition of the deformation vector
$\dot X^\mu$ (where the dot denotes differentiation by time):
\begin{equation}
\dot X^\mu=N n^\mu+N^i X^\mu_i.
\label{eq:xdot}
\end{equation}
However, to use this formula in the canonical analysis one needs to treat
the embedding $X^\mu_i$ as fixed. For fixed $X^\mu_i$ the
spacetime ${\cal M}$ becomes a particular stack of slices $\Sigma_t$ for increasing
$t$. This construction is of course general since (\ref{eq:xdot}) holds
for all $X^\mu_i$ (producing spacelike slices). However if, at the
level of the canonical theory, one
wishes to see what happens when the embedding changes one needs to return
to (\ref{eq:xdot}) and perform the analysis more generally.
Note that otherwise the choice of decomposition has an effect similar to
the partial ``gauge-fixing'' of a theory where certain invariances of the
theory, while still present in the sense that the choice of ``gauge''
is arbitrary, become hidden. In this context we no longer have
${\cal M}\sim\Sigma\times R$ for all possible embeddings $\Sigma\rightarrow{\cal M}$,
but only for a chosen, albeit arbitrary, example.
As a result, this fixing of the embedding hides the ${\rm Diff}{\cal M}$ covariance
of the theory.
The {\small ADM } construction is based on this assumption of fixing the
embedding and some of its features are natural only in this context.
Among the basic objects associated with eq.~(\ref{eq:xdot})
are the geometric {\it spatial slice} and its {\it normal} direction, which
naturally lead to the hamiltonian constraint being the generator of
normal deformations. In a formulation where the embeddings can be modified,
the spatial slice and its normal will be less fundamental features.
As the preceding discussion has indicated, in order to describe
spacetime diffeomorphisms
we require a canonical split that can accommodate arbitrary embeddings.
We shall now outline a straightforward formalism of this type
which relies on the use of the global hyperbolicity requirement
${\cal M}\sim\Sigma\times R$.
\begin{figure}
\centerline{\mbox{\epsfig{file=newmaps.eps}}}
\caption{Spacetime ${\cal M}$ as a bundle over space $\Sigma$.}
\end{figure}
We consider a 3-dimensional manifold $\Sigma$,
whose metric is {\it not} yet specified. Over each point $x$ of $\Sigma$,
there is an $R$-line. This results in a line bundle ${\cal M}$ over $\Sigma$
with fibre $R$:
\begin{equation}
\begin{array}{ccc}
R&\longrightarrow &{\cal M}\\
&&\pi\downarrow\uparrow\sigma\\
&&\Sigma
\end{array}
\end{equation}
as pictured in figure 1. There is a projection map $\pi:{\cal M}\rightarrow\Sigma$
and a cross-section map $\sigma$ satisfying $\pi\circ\sigma=1$. The actual embedding
then corresponds to this cross-section map $\sigma$, as it takes each
point $x\in\Sigma$ to a point $\sigma(x)$ in ${\cal M}$. Thus, for every
$\sigma$ we have an embedding of the 3-dimensional manifold $\Sigma$ in ${\cal M}$,
which we will denote by $\sigma(\Sigma)$. This cross-section
$\sigma(\Sigma)$ is the spatial slice in the {\small ADM } language.
The bundle ${\cal M}$ is ${\rm Diff}{\cal M}$ covariant. Under a diffeomorphism $\phi\in{\rm Diff}{\cal M}$,
$\sigma(x)\in{\cal M}$ is mapped to $\phi^{-1}\sigma(x)\in{\cal M}$. The maps
$\sigma$ and $\pi$ connect $\Sigma$ and ${\cal M}$ in a natural way. For example,
starting from $\sigma(x)\in{\cal M}$ we can act with $\phi\in{\rm Diff}{\cal M}$ and finally return to
$\pi\phi^{-1}\sigma(x)\in\Sigma$ using the projection map $\pi$. As a
consequence this bundle
construction allows us to induce spacetime transformations on spatial objects.
The induced spatial transformation is from $x$ to
$\pi\phi^{-1}\sigma(x)\in\Sigma$.
In the next two sections, we shall work out explicitly the induced
${\rm Diff}{\cal M}$ transformations of spatial objects.
\section{Induced spacetime diffeomorphisms on space}
Let us first consider the simplest case, the transformation induced by a
diffeomorphism $\phi\in{\rm Diff}{\cal M}$ on a vector $v_x\in T_x\Sigma$. According
to figure 1, we can push this vector forward through
\begin{equation}
T_x\Sigma\stackrel{\sigma_*}{\longrightarrow}T_{\sigma(x)}{\cal M}\stackrel
{\phi_*}{\longrightarrow}T_{\phi^{-1}\sigma(x)}{\cal M}\stackrel
{\pi_*}{\longrightarrow}T_{\pi\phi^{-1}\sigma(x)}\Sigma
\end{equation}
sending
\begin{equation}
v_x\in T_x\Sigma\mapsto \sigma_* v_x\mapsto\phi_*\sigma_* v_x
\mapsto\pi_*\phi_*\sigma_* v_x\in T_{\pi\phi^{-1}\sigma(x)}\Sigma.
\end{equation}
In order to evaluate the result we begin with a basis
$\left({\partial/\partial x^i}\right)_x$ in $T_x\Sigma$
with respect to which $v_x$ has components
\begin{equation}
v_x=v^i \left({\partial\over\partial x^i}\right)_x.
\end{equation}
Then if $\left(\partial/\partial X^\mu\right)_{\sigma(x)}$ is a basis in
$T_{\sigma(x)}{\cal M}$ we can use the Jacobian for the two bases,
\begin{equation}
\sigma^\mu_{,i}(x):=\left(\frac{\partial X^\mu(x)}{\partial x^i}\right)_{\sigma(x)},
\end{equation}
to obtain the push-forward $\sigma_*v_x$ of $v_x$ as
\begin{equation}
\sigma_* v_x\in T_{\sigma(x)}{\cal M} = v^i\sigma^\mu_{,i}(x)\left(
{\partial\over\partial X^\mu}\right)_{\sigma(x)}.
\end{equation}
We now have a vector $\sigma_* v_x$ in $T_{\sigma(x)}{\cal M}$ on which we can
apply a 4-dimensional diffeomorphism $\phi\in{\rm Diff}{\cal M}$ and obtain
\begin{equation}
\phi_*\sigma_* v_x\in T_{\phi^{-1}\sigma(x)}{\cal M}=
v^i\sigma^\mu_{,i}(x)\phi^\nu_{,\mu}\left(\sigma(x)\right)
\left({\partial\over\partial X^\nu}\right)_{\phi^{-1}\sigma(x)}.
\end{equation}
Finally, we can push this forward to $T\Sigma$ again, using the Jacobian
$\pi^j_{,\nu}\left(\phi^{-1}\sigma(x)\right)$ for the two bases
$\left({\partial\over\partial X^\nu}\right)_{\phi^{-1}\sigma(x)}$ and
$\left({\partial\over\partial x^j}\right)_{\pi\phi^{-1}\sigma(x)}$,
\begin{equation}
\pi^j_{,\nu}\left(\phi^{-1}\sigma(x)\right)=
\left(\frac{\partial x^j(X^\nu)}
{\partial X^\nu}\right)_{\phi^{-1}\sigma(x)},
\end{equation}
to obtain
\begin{equation}
\pi_*\phi_*\sigma_* v_x=v^i\sigma^\mu_{,i}(x)\phi^\nu_{,\mu}
\left(\sigma(x)\right)
\pi^j_{,\nu}\left(\phi^{-1}\sigma(x)\right)
\left({\partial\over\partial x^j}\right)_{\pi\phi^{-1}\sigma(x)}.
\label{eq:pi}
\end{equation}
Combining the results above, the induced spacetime diffeomorphism
on the spatial vector $v_x$ has the component form
\begin{equation}
v^i_x\mapsto v'^j_{\pi\phi^{-1}\sigma(x)}=v^i
\sigma^\mu_{,i}(x)\phi^\nu_{,\mu}\left(\sigma(x)\right)
\pi^j_{,\nu}\left(\phi^{-1}\sigma(x)\right).
\label{eq:vector}
\end{equation}
This equation may be readily extended to a spatial vector field.
This is because, when $x$ is varied smoothly and continuously over all
$\Sigma$ in (\ref{eq:vector}), this transformation remains well-defined.
It can therefore be used to push-forward vector fields.
\footnote{Note that
this mapping of the vector field cannot be factorised, as the
push-forward with $\pi_*$ in (\ref{eq:pi}) is a many-to-one map and
not defined for a vector field.}
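To make the chain of Jacobians in (\ref{eq:vector}) concrete, the following Python (sympy) sketch checks, in a toy example with one spatial dimension and specific choices of $\sigma$, $\phi$ and $\pi$ that are ours rather than canonical, that the composed map transforms vector components by exactly the product of the three Jacobians; the $\phi^{-1}$ convention is absorbed into the map called \texttt{phi}. The same total Jacobian governs the covector pullback discussed next, so the pairing of components is preserved automatically.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')           # coordinate on Sigma (dim 1)
X0, X1 = sp.symbols('X0 X1')  # coordinates on M (dim 2)

sigma = sp.Matrix([x**2 / 2, x])        # embedding Sigma -> M
phi = sp.Matrix([X0 + X1, X1 + X0**2])  # a diffeomorphism of M
pi = sp.Matrix([X1])                    # projection M -> Sigma

J_sigma = sigma.jacobian([x])
at_sigma = {X0: sigma[0], X1: sigma[1]}
J_phi = phi.jacobian([X0, X1]).subs(at_sigma)
at_phi = {X0: phi[0].subs(at_sigma), X1: phi[1].subs(at_sigma)}
J_pi = pi.jacobian([X0, X1]).subs(at_phi)
J_total = sp.simplify(J_pi * J_phi * J_sigma)  # 1x1 matrix

# Direct Jacobian of the composed map x -> pi(phi(sigma(x)))
composed = pi.subs({X0: phi[0], X1: phi[1]}).subs(at_sigma)
assert sp.simplify(composed.jacobian([x]) - J_total) == sp.zeros(1, 1)

# Duality: covector components pull back with the same J_total,
# so the component pairing <k, v> is preserved.
v, kp = sp.symbols('v kp')
vp = J_total[0, 0] * v    # pushed-forward vector components
k = kp * J_total[0, 0]    # pulled-back covector components
assert sp.simplify(k * v - kp * vp) == 0
print(J_total)
\end{verbatim}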
Let us now turn to 1-forms and covectors. In this case it is
easier to write down the induced ${\rm Diff}{\cal M}$ pullback if we reverse the route
used in the analysis of vectors. In other respects the derivation is
very similar to that above. The component result of the pullback of the
one-form $\omega(\pi\phi^{-1}\sigma(x))$ to $\omega'(x)$ via
$\sigma^*\phi^*\pi^*$ is
\begin{equation}
\omega'(x)\in T_x^*\Sigma=\omega_j\left(\pi\phi^{-1}\sigma(x)\right)
\pi^j_{,\nu}\left(\phi^{-1}\sigma(x)\right)\phi^\nu_{,\mu}\left(\sigma(x)\right)
\sigma^\mu_{,i}(x)\left(dx^i\right)_x.
\label{eq:oneform}
\end{equation}
Similarly, the covector transformation
$k'\in T^*_{\pi\phi^{-1}\sigma(x)}\Sigma
\rightarrow k\in T^*_x\Sigma$ is
\begin{equation}
k(x)=k'_j\,\pi^j_{,\nu}\left(\phi^{-1}\sigma(x)\right)\phi^\nu_{,\mu}
\left(\sigma(x)\right)\sigma^\mu_{,i}(x)\left(dx^i\right)_x.
\end{equation}
One can check that $k$ and $v$, as given by the formulae above,
are indeed dual i.e.
$\langle k,\pi_*\phi_*\sigma_* v\rangle_{\pi\phi^{-1}\sigma(x)}=
\langle \sigma^*\phi^*\pi^* k, v\rangle_x$.
\section{Infinitesimal spacetime diffeomorphisms on spatial tensors}
At this stage, we have coordinate expressions for the transformations
of the simplest tensorial objects and it is straightforward to
extend these results to other spatial objects as required. Let us now
return to our initial problem, the relation between ${\rm Diff}{\cal M}$ and the
deformations generated by constraints. We would like to compare the
present formalism to the standard approach of constraint generators
decomposed with respect to a fixed orthogonal basis. For
example, we can consider the tangential deformation of the
3-metric by the momentum constraint
${\cal H}_i $ (smeared by a vector field $N$):
\begin{equation}
\left\{{\cal H}(N), g_{ij}\right\}=\delta g_{ij}={\cal L}_N g_{ij}.
\label{eq:mom}
\end{equation}
We need to work with infinitesimal $\phi\in{\rm Diff}{\cal M}$, namely, Lie
derivatives with respect to a vector field ${\cal V}\in T{\cal M}$. Such an infinitesimal
diffeomorphism transforms, say, the covector $\xi\in T^*{\cal M}$ in the manner
\begin{equation}
\xi\mapsto\xi'=\xi+\epsilon{\cal L}_{\cal V}\xi+{\cal O}\left(\epsilon^2\right).
\end{equation}
Recall that the base space $\Sigma$ does not have a fixed 3-metric
$g_{ij}(x)$, unlike a spatial slice $\Sigma_t$.
Instead, for $\Sigma$, $g_{ij}(x)$
is a special symmetric 2-index tensor, an element of
$T^*_x\Sigma\otimes T^*_x\Sigma$.
Its deformation (\ref{eq:mom}) will then be a particular induced
${\rm Diff}{\cal M}$ map $\sigma_*\phi_*\pi_*:T^*_x\Sigma\otimes T^*_x\Sigma\rightarrow
T^*_{\pi\phi^{-1}\sigma(x)}\Sigma\otimes T^*_{\pi\phi^{-1}\sigma(x)}\Sigma$,
as we shall verify in section 5.
In preparation let us write down the induced spacetime diffeomorphism of a
general tensor in $T^*_x\Sigma\otimes T^*_x\Sigma$, say $t_{ij}(x)$.
The result follows in a similar manner to the calculations already
presented, except that transformations are required for each index
and we consider only infinitesimal diffeomorphisms with parameter
$\epsilon$, i.e.
\begin{equation}
t'_{ij}=t_{ij}+\epsilon\sigma^\mu_{,i}\sigma^\nu_{,j}
\left[{\cal V}^\lambda \partial_\lambda t_{\mu\nu}
+\left(\partial_\mu{\cal V}^\lambda\right)t_{\lambda\nu}+\left(\partial_\nu
{\cal V}^\lambda\right) t_{\lambda\mu}\right],
\label{eq:tij}
\end{equation}
where $t_{\mu\nu}$ (at $\sigma(x)$) is shorthand for $t_{ij}(x)$ embedded in
${\cal M}$:\footnote{
For clarity we will omit some indices. In detail, the transformation (\ref{eq:tij}) is:
\begin{eqnarray}
& & t'_{ij}\left(\pi\phi^{-1}\sigma(x)\right)=
t_{ij}(x)+\epsilon\sigma^\mu_{,i}(x)\sigma^\nu_{,j}(x)\nonumber\\
& & \left[{\cal V}^\lambda\left(\sigma(x)\right)\partial_\lambda t_{\mu\nu}\left(\sigma(x)\right)+
\left(\partial_\mu{\cal V}^\lambda\left(\sigma(x)\right)
\right)t_{\lambda\nu}\left(\sigma(x)\right) +\left(\partial_\nu
{\cal V}^\lambda\left(\sigma(x)\right) \right) t_{\lambda\mu}\left(\sigma(x)\right)\right]\nonumber.
\end{eqnarray}
In what follows, we will use a prime to denote the value of the tensor at
point $\pi\phi^{-1}\sigma(x)$. }
\begin{equation}
t_{ij}(x)=\sigma^\mu_{,i}(x)\sigma^\nu_{,j}(x)
t_{\mu\nu}\left(\sigma(x)\right).
\end{equation}
Similarly, for a contravariant 2-tensor, $l^{ij}\in T\Sigma\otimes T\Sigma$,
we have
\begin{equation}
l'^{ij}=l^{ij}+\epsilon\pi^i_{,\mu}\pi^j_{,\nu}\left[{\cal V}^\lambda \partial_\lambda l^{\mu\nu}
-\left(\partial_\lambda{\cal V}^\mu\right)l^{\lambda\nu}-\left(\partial_\lambda
{\cal V}^\nu\right) l^{\lambda\mu}\right].
\label{eq:lij}
\end{equation}
The transformations (\ref{eq:tij}) and (\ref{eq:lij}) are general
formulae that encode the induced action of arbitrary
4-dimensional infinitesimal
diffeomorphisms on spatial 2-tensors. The compactness of these
expressions hides the fact that most of the physical
information is contained in the sets of $\sigma^\mu_{,i}$ and the
choice of the vector field ${\cal V}^\mu$. Recall that
$\sigma^\mu_{,i}$ are the coordinate expressions for
the embedding $T\Sigma\rightarrow T{\cal M}$ induced by the cross-sections
$\sigma:\Sigma\rightarrow{\cal M}$. The choice of the vector field ${\cal V}^\mu$
is determined by the spacetime diffeomorphism $\phi\in{\rm Diff}{\cal M}$ we are
performing.
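Since (\ref{eq:tij}) is the first-order expansion in $\epsilon$ of the pullback
of the ambient tensor along the flow of ${\cal V}$, the bracketed Lie-derivative
structure can be checked symbolically. A minimal sympy sketch in two dimensions
follows; the particular ${\cal V}$ and $t_{\mu\nu}$ are arbitrary illustrative
choices:
\begin{verbatim}
import sympy as sp

x0, x1, eps = sp.symbols('x0 x1 epsilon')
X = sp.Matrix([x0, x1])
V = sp.Matrix([x0*x1, x0 + x1**2])      # illustrative vector field V^mu
t = sp.Matrix([[x0**2, x0*x1],
               [x0*x1, x1**2]])         # illustrative symmetric t_{mu nu}

phi = X + eps*V                         # flow of V to first order
J = phi.jacobian(X)                     # J[a, mu] = d phi^a / d x^mu
t_pull = J.T * t.subs({x0: phi[0], x1: phi[1]}, simultaneous=True) * J

# Lie derivative: V^l d_l t_{mn} + (d_m V^l) t_{ln} + (d_n V^l) t_{ml}
lie = sp.Matrix(2, 2, lambda m, n: sum(
    V[l]*sp.diff(t[m, n], X[l])
    + sp.diff(V[l], X[m])*t[l, n]
    + sp.diff(V[l], X[n])*t[m, l] for l in range(2)))

first_order = (t_pull - t).applyfunc(lambda e: sp.diff(e, eps).subs(eps, 0))
print(sp.simplify(first_order - lie))   # -> zero matrix
\end{verbatim}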
In the next two sections we show
that, as special cases of (\ref{eq:tij}) and (\ref{eq:lij}), we can, firstly,
retrieve the Dirac algebra explicitly as an orthogonal projection of
spacetime diffeomorphisms on $\Sigma\times R$ and, secondly,
obtain abelian transformations generated by an extra class of
diffeomorphisms along the $R$-fibre. These arise very naturally,
are by construction abelian, and suggest intriguing connections to
existing 3+1 work.
\section{The ADM-Dirac generators as projections of
${\rm LDiff}{\cal M}$ on an orthogonal basis}
Having developed a formalism for considering the transformation of
spatial tensors under 4-dimensional diffeomorphisms of the
bundle ${\cal M}$ that explicitly involves
``embeddings'', we may use it for the Dirac algebra of the canonical
constraints. Appropriate conditions on the vector field ${\cal V}$, via which
the Lie derivatives in the transformations (\ref{eq:tij}) and
(\ref{eq:lij}) are defined, and on the embedding $\sigma$
reproduce the hamiltonian and
momentum constraints as generators of spatial and
normal diffeomorphisms.
We begin by using (\ref{eq:tij}) to derive the known deformations of
the 3-metric $g_{ij}(x)$ under the momentum and hamiltonian
constraints \cite{KVK}. Recall that for the purpose of considering the
effect of spacetime diffeomorphisms $g_{ij}(x)$ may be regarded as a
tensor of the form $t_{ij}(x)\in T^*\Sigma\otimes T^*\Sigma$. That is,
its transformation under a general infinitesimal spacetime diffeomorphism
is given by equation (\ref{eq:tij}),
\begin{equation}
g'_{ij}=g_{ij}+\epsilon\sigma^\mu_{,i}\sigma^\nu_{,j}
\left[{\cal V}^\lambda \partial_\lambda g_{\mu\nu}
+\left(\partial_\mu{\cal V}^\lambda\right)g_{\lambda\nu}+\left(\partial_\nu
{\cal V}^\lambda\right) g_{\lambda\mu}\right],
\label{eq:gij}
\end{equation}
with $g_{\mu\nu}(\sigma(x))$ given by
$g_{ij}(x)=\sigma^\mu_{,i}(x)\sigma^\nu_{,j}(x) g_{\mu\nu}(\sigma(x))$.
The constraints are then generators of canonical transformations
between elements of $T^*\Sigma\otimes T^*\Sigma$.
A spatial diffeomorphism is generated by a vector field $N$ which is
purely spatial, $N\in T\Sigma$. When $\Sigma$ is embedded in ${\cal M}$, the
corresponding spacetime diffeomorphism will be with respect to a
vector field ${\cal V}$ which lies in the cross-section
$\sigma(\Sigma)$, i.e. ${\cal V}^\mu(\sigma(x))=\sigma^\mu_{,i}(x)N^i(x)$. Using
the identity $\pi\circ\sigma=1$, namely,
\begin{equation}
\pi^i_{,\nu}\left(\sigma(x)\right)\sigma^\nu_{,j}(x)=\delta^i_j,
\label{eq:pisigma}
\end{equation}
we obtain, differentiating (\ref{eq:pisigma}) with respect to $x^k$,
\begin{equation}
\pi^i_{,\mu}\left(\partial_k\sigma^\mu_{,j}\right)=-
\sigma^\mu_{,j}\left(\partial_k\pi^i_{,\mu}\right),
\end{equation}
which, together with the integrability condition
\begin{equation}
\partial_j\sigma^\mu_{,i}=\partial_i\sigma^\mu_{,j}
\end{equation}
leads to equation (\ref{eq:gij}) reducing to the expected form:
\begin{equation}
g'_{ij}(\pi\phi^{-1}\sigma(x)) = g_{ij}(x)+\epsilon{\cal L}_Ng_{ij}(x)
\label{eq:spatial}
\end{equation}
as in equation (\ref{eq:mom}).
Therefore, this induced diffeomorphism $g_{ij}\rightarrow g'_{ij}$ is indeed
an element of ${\rm Diff}\Sigma$.
Let us now check whether, for $\phi$ a diffeomorphism with respect to a
vector field normal to the cross-section $\sigma(\Sigma)$, equation
(\ref{eq:gij}) reduces to the known normal deformation of the 3-metric
generated by the hamiltonian constraint \cite{JY},
\begin{equation}
g'_{ij}=g_{ij}+\epsilon\left[\partial_0 g_{ij}+D_{(i}N_{j)}\right],
\label{eq:ham}
\end{equation}
where $D_i$ denotes the spatial covariant derivative.
The following derivation is interesting mainly because it shows which
{\small ADM } assumptions are needed to make the hamiltonian constraint and
normal deformations a convenient tool to use.
\footnote{
Of course, in the {\small ADM } philosophy the hamiltonian constraint is perfectly
reasonable, as the normal can be defined intrinsically to the slice and as
a result one can use quantities such as the extrinsic curvature to
conveniently describe this constraint, and obtain a compact
statement of the initial-value formulation of
general relativity. However, in the present context where embeddings play
an essential r\^ole, the spatial slice is no longer such a central object.}
Note that in our picture of 3-space arbitrarily embedded in spacetime, the
normal is no longer the most natural direction to use in order
to describe deformations which are not tangential to the embedded slice,
as we shall come to in Section 6.
It turns out that there are four assumptions used in the {\small ADM }
formulation in order to turn an arbitrary normal deformation,
i.e. equation (\ref{eq:gij}) with ${\cal V}^\mu$ some normal vector field $n^\mu$:
\begin{equation}
g'_{ij}=g_{ij}+\epsilon\sigma^\mu_{,i}\sigma^\nu_{,j}\left[ n^\lambda \partial_\lambda g_{\mu\nu}
+\left(\partial_\mu n^\lambda\right)g_{\lambda\nu}+\left(\partial_\nu
n^\lambda\right) g_{\lambda\mu}\right],
\label{eq:ngij}
\end{equation}
into the simplified form of (\ref{eq:ham}). Firstly, one needs to choose
(and fix) the embedding $\sigma$. Once the embedding is fixed, as a
second step, the
lapse and shift can be introduced, in a manner formally equivalent
to the usual decomposition of the deformation vector
\begin{equation}
\dot X^\mu=n^\mu N+X^\mu_i N^i.
\label{eq:def}
\end{equation}
In our notation $X^\mu_i=\frac{\partial X^\mu}{\partial x^i}
\equiv\sigma^\mu_{,i}(x)$ and $X^\mu\equiv\sigma^\mu(x)$, so the lapse and
shift will appear through $\partial_0\sigma^\mu$. Explicitly, and similarly
to the case of spatial diffeomorphisms, we can impose integrability to
find that
\begin{equation}
\partial_i\left(\partial_0\sigma^\mu\right)=\partial_0 \sigma^\mu_{,i},
\label{eq:int}
\end{equation}
which may be decomposed in the same basis as (\ref{eq:def}) to give
\begin{equation}
\partial_0\sigma^\mu_{,i}=\partial_i\left(n^\mu N+\sigma^\mu_{,k} N^k\right).
\label{eq:split}
\end{equation}
The general normal diffeomorphism (\ref{eq:ngij}) can be simplified to
\begin{equation}
g'_{ij}=g_{ij}+\epsilon n^\lambda\partial_\lambda g_{ij}+\epsilon n^\lambda
\sigma^\mu_{,i}\sigma^\nu_{,j}\left[\partial_\lambda\left(
\pi^a_{,\mu}\pi^b_{,\nu}\right)-\partial_\mu\left(\pi^a_{,\lambda}
\pi^b_{,\nu}\right)-\partial_\nu\left(\pi^a_{,\lambda}\pi^b_{,\mu}\right)
\right]g_{ab}.
\end{equation}
Using the integrability condition (\ref{eq:int})
and the decomposition (\ref{eq:split}), we find, after some tedious
calculations,
\begin{eqnarray}
g'_{ij}&=&g_{ij}+\epsilon n^k\left\{\partial_k g_{ij}-\left(
\sigma^\nu_{,j}\delta^a_{[k}\partial_{i]}\pi^b_{,\nu}+
\sigma^\mu_{,i}\delta^b_{[k}\partial_{j]}\pi^a_{,\mu}\right)
g_{ab}\right\}-\nonumber\\
& & \epsilon n^0\left\{ D_{(i}N_{j)}+\partial_0 g_{ij}-N^k
\left[ \sigma^\mu_{,j}\partial_k\left(\pi^a_{,\mu}g_{ia}\right)+
\sigma^\mu_{,i}\partial_k\left(\pi^a_{,\mu}g_{aj}\right)\right]\right\} -
\nonumber\\
& & \epsilon\left[\delta^a_i\pi^b_{,\mu}\partial_j(Nn^\mu)+
\delta^b_j\pi^a_{,\mu}\partial_i(Nn^\mu)\right] g_{ab}+\nonumber\\
& & \epsilon\left(\partial_{(i}\sigma^\mu_{,j)}\right)g_{0\mu}-\epsilon\,g_{b(i}\partial_{j)}
\pi^b_{,0}.
\label{eq:normal}
\end{eqnarray}
Requiring that the above transformation $g_{ij}\rightarrow g'_{ij}$ be
produced by a generator ${\cal F}(g_{ij},p^{ij})$ via $\delta g_{ij}=
\{g_{ij}, {\cal F}(N)\}$ (by analogy to the usual normal transformation
(\ref{eq:ham}) also being the result of the Poisson bracket of the metric
with the smeared hamiltonian constraint $\{g_{ij},{\cal H}_{\perp}(N)\}$) we can find
${\cal F}$:
\begin{equation}
{\cal F}\left(g_{ij},p^{ij},\sigma^\mu_{,i}\right)=
n^0{\cal H}_{\perp}+p^{ij}n^k\partial_k g_{ij}+
p^{ij}A^{ab}_{ij}g_{ab}+f(g,\sigma),
\label{eq:f}
\end{equation}
where ${\cal H}_{\perp}$ denotes the standard normal deformation as in
eq.\ (\ref{eq:ham}),
$p^{ij}$ has been defined to be the time derivative of $g_{ij}$, and
the other terms are simply the rest of (\ref{eq:normal})
expressed in a convenient notation. $A^{ab}_{ij}$ is a function of the lapse,
shift and embedding
\footnote{The notation $X_{(a|b|c)}$ means that $b$ is not to be included
in the symmetrisation which then takes place only in $a,\ c$.}
\begin{eqnarray}
\label{Aabij}
A^{ab}_{ij}&=&-n^k\left[\sigma^\nu_{,j}\delta^a_{[k}\partial_{i]}\pi^b_{,\nu}
+\sigma^\mu_{,i}\delta^b_{[k}\partial_{j]}\pi^a_{,\mu}\right]+\nonumber\\
& & n^0\left[\delta^a_{(i}\partial_{j)}\pi^b_{,0}-\pi^a_{,0}\pi^b_{,\mu}
\partial_i\sigma^\mu_{,j}+\pi^a_{,\mu}\pi^b_{,0}\partial_j\sigma^\mu_{,i}+
\right.
\nonumber\\
& & \left. N^k\sigma^\mu_{,(i}\partial_{|k}\pi^a_{,\mu|}\delta^b_{j)}-
\delta^a_i\pi^b_{,\mu}\partial_j(Nn^\mu)-\delta^b_j\pi^a_{,\mu}
\partial_i(Nn^\mu)\right],
\end{eqnarray}
and $f(g,\sigma)$ is an unspecified function of the 3-metric and the
embedding only.
The generator ${\cal F}$ in (\ref{eq:f}) is still cumbersome
because we are only halfway through imposing the {\small ADM } assumptions.
As the third step we now ``lock'' the coordinate frame to
our embedding choice, so that $n^\mu$ becomes $n^0=-1,\ n^k=0$.
The second term in (\ref{eq:f}) and the first term in (\ref{Aabij})
then vanish. Finally, let us assume that the cross-sections are
slices of constant coordinate time, which implies
that $\sigma^\mu_{,i}={\rm const}$. The last two terms of (\ref{eq:f})
then vanish as they contain derivatives of $\sigma^\mu_{,i}$ and we
have recovered the {\small ADM } hamiltonian constraint ${\cal H}_{\perp}$.
\section{Abelian diffeomorphisms along the $R$-fibre}
The derivation in the previous section clarifies the statement that the
Dirac algebra is the ``projection'' of ${\rm LDiff}{\cal M}$. However,
this projection is with respect to a basis determined by a spatial slice
and its normal direction,
rather than on $\Sigma\times R$.
In fact, the projection on
$\Sigma\times R$, which we are now going to consider,
remarkably leads to a
generator algebra of the form Abelian$\times{\rm Diff}\Sigma$.
\begin{figure}
\centerline{\mbox{\epsfig{file=abelian.eps}}}
\caption{Abelian deformations.}
\end{figure}
Whereas the normal diffeomorphisms were unnatural and rather
tedious to recover, this third special class of diffeomorphisms is
simple and straightforward to find. It is the case where the spacetime
diffeomorphism is a base-point preserving map in the bundle. That is, the
vector field ${\cal V}^\mu$ is along the 1-dimensional $R$-fibre, as shown in
figure 2. By construction, this ${\cal V}^\mu$ may be represented as
${\cal V}=\frac{\partial}{\partial\tau}$, $\tau$ being the affine parameter
along the fibre. In this case, the transformation (\ref{eq:gij})
of the 3-metric reduces to
\begin{equation}
g'_{ij}=g_{ij}+\epsilon\frac{\partial}{\partial\tau}g_{ij}-\epsilon\left(g_{kl}
\pi^k_{,\mu}\pi^l_{,\nu}\right)\frac{\partial}{\partial\tau}
\left(\sigma^\mu_{,i}\sigma^\nu_{,j}\right).
\label{eq:abelian}
\end{equation}
This describes the change in the value of $g_{ij}(x)$ at each point
$x\in\Sigma$ after some ``time evolution'' $\tau$. Note that the first
two terms of (\ref{eq:abelian}) reflect this time-evolution property of
the base-point preserving diffeomorphisms in a straightforward way.
The third term depends only on the embedding, which changes
in $\tau$-time since it is not restricted to being static in this formalism.
Furthermore, because of the simplicity of the spacetime we are dealing with,
it is not unexpected that this transformation along the 1-dimensional fibre
is abelian. More accurately,
our natural assumption that the fibre is an $R$-group acting freely
on ${\cal M}$ lets us treat ${\cal M}$ as a principal $R$-bundle. Then the
above transformations from ${\cal M}$ to itself form a group,
the automorphism group of ${\cal M}$, Aut$({\cal M})$. Moreover, since
${\cal M}$ is trivial, Aut$({\cal M})$ is isomorphic to the group
$C^\infty(\Sigma,R)$ of functions on $\Sigma$, which is abelian. Thus
we have obtained a framework in which the evolution of the embedded
slices is naturally described by abelian constraints.
The result is that this projection of ${\rm Diff}{\cal M}$ on $\Sigma\times R$ leads to
the Lie algebra $L{{\rm Diff}\Sigma}\odot{C^{\infty}(\Sigma)}$ (with the symbol $\odot$ denoting
the semidirect product).
One may choose to use the transformation (\ref{eq:abelian}) in place of
the normal deformation (\ref{eq:ham}), combined with the spatial diffeomorphisms
(\ref{eq:spatial}) for a 3+1 decomposition with a true Lie algebra of its
deformation generators.
The {\small ADM}-Dirac algebra and this ${{\rm Diff}\Sigma}\odot{C^{\infty}(\Sigma)}$ algebra are very
special cases of the general spacetime deformations (\ref{eq:tij}) and
(\ref{eq:lij}) in that they only refer to the 3-space.
The {\small ADM}-Dirac algebra is constructed from the beginning in this way,
starting from a spatial slice and using quantities that can be defined
intrinsically to the slice. The ${{\rm Diff}\Sigma}\odot{C^{\infty}(\Sigma)}$ algebra also turns out to have
this property as both ${\rm Diff}\Sigma$ and more importantly $C^{\infty}(\Sigma)$ require
only $\Sigma$ and not ${\cal M}$ for
their definition. In fact, it is possible to derive the results of
this paper without reference to spacetime ${\cal M}$ as a physical manifold
with 4-metric $\gamma_{\alpha\beta}$, but by starting from a 3-dimensional
space $\Sigma$ on which ${\rm Diff}\Sigma$ and $C^{\infty}(\Sigma)$ can be defined. In
that context, the 4-dimensional bundle is only a helpful way to unfold
transformations under these two groups by raising an $R$-fibre over
each spatial point and constructing a
$\Sigma\times R$ bundle over $\Sigma$.
This approach was followed in \cite{AnMa}.
One should note that information about spacetime and the spacetime
metric is not used until the very last stages of the derivation of the
hamiltonian generator, when ``locking'' the coordinate frame to
the chosen foliation.
\section{Conclusions}
Motivated by the recent discovery of abelian constraints, and the
proposal that these abelian generators could be of use
in group theoretic approaches to canonical quantisation \cite{BrKu}, we
re-analysed the {\small ADM}-Dirac algebra and the hamiltonian
constraint. We traced the problem of its non-closing Poisson bracket to the
selection of a spatial slice and its normal as primary elements of
the {\small ADM } canonical analysis and the fixed choice of embeddings
needed for their use. Allowing variation of the embeddings,
which is in principle allowed in canonical gravity, makes it
possible to describe the effect of 4-dimensional spacetime
diffeomorphisms, at least when spacetime is globally hyperbolic.
In order to include the embeddings, we found it
necessary to change our viewpoint of spacetime from a fixed stack-of-slices
to spacetime as an ${\cal M}\sim\Sigma\times R$ bundle over a generic
3-manifold $\Sigma$, where
the embeddings correspond to the cross-section maps from $\Sigma$ to ${\cal M}$.
By including embeddings explicitly in this manner we were
able to break ${\rm Diff}{\cal M}$
covariance in a controlled manner in order to obtain the
induced ${\rm Diff}{\cal M}$ action on spatial quantities.
Using these general transformations, we were able to perform
3+1 splits of ${\rm Diff}{\cal M}$ corresponding to two different embeddings: a
standard normal and tangential split with respect to the spatial
slice, which leads to the {\small ADM}-Dirac algebra, and one on
$\Sigma\times R$, leading to an Abelian$\times {\rm Diff}\Sigma$ algebra.
The first case was useful in clarifying the {\small ADM}
assumptions used in the construction of the hamiltonian constraint
and showing how they are incompatible with truly variable embeddings.
The second split makes use of the $R$ in $\Sigma\times R$ and, perhaps not
surprisingly, produces abelian deformations whose form resembles
time evolution (although we have left open the issue of the r\^ole
of the $R$-fibres). It is important to note that this $L{{\rm Diff}\Sigma}\odot{C^{\infty}(\Sigma)}$ algebra
only refers to the space $\Sigma$. Corresponding to the way in which
the {\small ADM } analysis can be thought of as a foliation of spacetime
${\cal M}$ by spatial slices, the decomposition in terms of the bundle is a
fibering of ${\cal M}$ by $R$.
While the existence of an abelian algebra for canonical
general relativity is very promising, particularly in the context of group
quantisation, there are a number of tasks which need to be performed before
deciding whether the Abelian$\times{\rm Diff}\Sigma$ split is of practical use.
So far, we have used Lie derivatives to describe
infinitesimal transformations. In the sense described in \cite{HKT} this
amounts to constructing the kinematics of the Abelian$\times{\rm Diff}\Sigma$
formulation. The dynamics would describe the change of functionals of
the canonical variables $g_{ij}(x)$ and $p^{ij}(x)$ under these
transformations using Poisson bracket relationships. This requires
finding expressions for our abelian generators in terms of the
canonical variables. To find them, one may use the same postulate as
\cite{HKT} and ask that they should ``represent the kinematical
generators'', that is, they should be constructed from the canonical
variables---and the embedding variables in our case---in such a way that
their Poisson brackets close like the commutators of the corresponding
kinematical generators. We expect the inclusion of the embedding
variables to produce interesting results \cite{BoMa} and a possible
relationship to \cite{IsKu} and \cite{BeKo}.
Finally, let us recall the Kucha\v r scalars. In their derivation
in \cite{BrKu,KuRo,BrMa} a reference fluid is
required to select the particular form of the
scalar, and according to \cite{IK}, each member of the family found in
\cite{FM} also corresponds to a particular choice of reference fluid.
The unsatisfactory element there is that, thus far, each case may only
be obtained in a somewhat ad hoc manner.
It appears possible to set up an equivalence between the
$\sigma$ variables and reference fluids \cite{RB}, thus providing a
better organised derivation of Kucha\v r scalars and a connection
of the present work to the reference fluid results.
\section*{Acknowledgements}
We are grateful to Chris Isham for his guidance throughout this work.
Numerous discussions with Julian Barbour, Roumen Borissov,
Jonathan Halliwell, Adam Ritz and Lee Smolin have been very important
to our understanding of issues of canonical gravity.
FM would like to thank Abhay Ashtekar for hospitality at Penn State
where this work was completed
and she is partly supported by the A S Onassis
Foundation.
|
\section{Introduction}
Kinetic plasma simulations are important for a wide range of applications, including but not limited to the design and analysis of high-power microwave sources, particle accelerators, laser ignited devices, and ionosphere and magnetosphere problems~\citep{gold1997review, booske2008plasma, benford2015high, lapenta2006kinetic,nayak2019progress,karimabadi2011petascale,chen2020magnetohydrodynamic}. Electromagnetic particle-in-cell (EMPIC) algorithms are typically used for simulating kinetic collisionless plasmas governed by Maxwell-Vlasov equations. EMPIC algorithms compute the electromagnetic field on the spatial mesh based on a discretized form of Maxwell's equations while simultaneously updating, via a kinetic model based on the Lorentz force equation, the velocity and position of computational superparticles that effect a coarse-graining of the phase space of charged particles in the plasma~\citep{birdsall2004plasma, bettencourt2008performance, wang2010three, moon2015exact, meierbachtol2015conformal}.
The inherent nonlinearity and multi-scale nature of the problem
make the interpretation of the underlying physics often difficult and serve as one of the motivations for a reduced-order model that can characterize, with sufficient accuracy, the plasma system using a small number of degrees of freedom. Reduced-order models may also facilitate the possible use of model-based control methods such as model predictive control (MPC) \cite{allgower1999nonlinear, kaiser2018sparse}.
Several recent studies \cite{pandya2016low, van2014use, byrne2017study, kaptanoglu2020physics} in the plasma physics community have indicated the practicality of adopting a lower dimensional feature space that can model the system through a small set of spatio-temporal coherent structures. A variety of model-order reduction techniques, such as proper orthogonal decomposition (POD) \citep{beyer2000proper, nicolini2019model,kaptanoglu2020physics}, bi-orthogonal decomposition (BOD) \citep{de1995enhancement,dudok1994biorthogonal}, principal component analysis (PCA) \cite{bellemans2017reduced} have been proposed in the past. These methods are invariably limited in their ability to resolve the time dynamical properties using low rank modelling. Dynamic mode decomposition (DMD) \cite{schmid2010dynamic,schmid2011applications,tu2013dynamic} helps to overcome this difficulty. In particular, it was recently shown in \citep{taylor2018dynamic,kaptanoglu2020characterizing,sasaki2019using} that DMD can efficiently extract the underlying characteristic features of
(fluid-model) magnetohydrodynamics based plasma simulations with reasonable accuracy. Our preliminary study \citep{nayak2020prediction} shows the promise of DMD in reconstructing self electric fields from (kinetic-model) EMPIC plasma simulations. However, a detailed analysis of the ability of DMD to capture the relevant plasma dynamics in particle-in-cell simulations has yet to be carried out.
Another challenge of particle-in-cell (PIC) based algorithms is the large computational load \citep{hockney1988computer}. Several improvements have been proposed in the literature to speed up PIC simulations, ranging from computational architecture to the underlying algorithmic structure, e.g. see \citep{werner2018speeding,decyk2014particle,WOLF2016342}. Here, we address the issue also from a reduced-order model perspective. In order to minimize the computational cost, ideally one would like to perform reduced-order modeling such as DMD using data from high-fidelity simulations based on relatively short time windows and extrapolate the results in future time. However, as is shown in this work, accurate prediction of the equilibrium dynamics using data-driven methods such as DMD requires sufficient data harvesting near equilibrium. As a result, a related important question to be addressed is how to leverage DMD to optimally predict the equilibrium state. The question becomes particularly crucial for timely termination of the high-fidelity simulations such as those based on EMPIC algorithms.
In order to exploit the time extrapolation ability of DMD for reducing computation cost of high-fidelity simulations, it is important to identify the transition from transient to equilibrium state of a dynamical system in an in-line fashion. Several past works \cite{van2013interaction, feng2014deep, chekroun2014rough, tantet2015early} deal with identification of state transition in high-dimensional physical systems. Some recent publications \citep{gottwald2019detecting,alessandri2019dynamic} highlight the importance of DMD in identifying such regime transitions. The authors in \citep{gottwald2019detecting} rely on the DMD reconstruction error difference between transient and equilibrium states of a dynamical system to identify such transitions. However, one of the key assumptions in \citep{gottwald2019detecting} is the fast relaxation of the dynamical system in transience, i.e. a faster time scale of the transient dynamics compared to equilibrium dynamics. The present work does not rely on the fast relaxation assumption since the transience is characterized by temporal variations in the amplitude and changing frequency content.
Rather, we compute the \textit{residue} based on the relative position of dominant DMD eigenvalues with respect to the unit circle. While Ref. \citep{alessandri2019dynamic} also performs identification of regime transition, it does so by observing the variation of a DMD-based least-squares residual term as the DMD window is gradually increased to span the spatial domain.
In contrast, the
residual term in this work is based on the loci of DMD eigenvalues in the complex plane. We keep track of the residual term as a \textit{fixed-width} DMD window is moved forward in time.
Finally, in \citep{alessandri2019dynamic}, the change in the slope of the residual term is detected by fitting two straight lines. This work employs instead a rolling average to detect non-negative slopes that is suitable for
in-line application. This work addresses all these issues in the context of kinetic plasma simulations from a modal analysis perspective. The main contributions of the present work can be summarized as follows:
\begin{enumerate}
\item As mentioned above, while DMD has been recently applied to fluid-based plasma simulations, it has not yet been studied for kinetic plasma simulations. In this work we study the performance of DMD in reconstructing the self electric fields and its effect on the superparticle dynamics, for several test cases.
\item We propose an algorithm for in-line detection of the onset of the equilibrium state of a dynamical system using a sliding-window DMD approach. This advancement has the potential to speed up EMPIC simulations for long term predictions when combined with the time-extrapolation ability of DMD. We propose a sliding window approach that tracks the position of DMD eigenvalues relative to the unit circle on the complex plane for detecting the equilibrium state. We analyze the prediction error in self-field pattern, as well as the superparticle dynamics, produced by the reduced-order model extrapolated solution.
\item We perform a first-of-its-kind analysis to investigate the convergence in DMD mode shapes and the shifting of DMD eigenvalues as the DMD window slides from the transient to the equilibrium state. We do so by in-line tracking of the DMD modes and eigenvalues as part of the equilibrium detection algorithm. Such analysis can provide insight into how hidden features in the transient state manifest themselves as the system approaches equilibrium.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{jcp_pic_0001_DMD_cycle_flat_crop.jpg}
\caption{\small{Main cyclic steps in the EMPIC algorithm.}}
\label{fig:empic_cycle}
\end{figure}
\section{DMD Applied to EMPIC Kinetic Plasma Simulations}
\subsection{EMPIC Algorithm} \label{empic}
The EMPIC algorithm \citep{moon2015exact,kim2011parallel,EVSTATIEV2013376,doi:10.1063/1.4742985,doi:10.1063/1.4976849,kraus-kormann,jianyuan2018structure} generates the high-fidelity data for the DMD reduced-order model. It executes a marching-on-time procedure in four stages (Fig.~\ref{fig:empic_cycle}) during each timestep: field-update, gather, particle-pusher and scatter.
For the field update, time-dependent Maxwell's equations are discretized on simplicial (triangular or tetrahedral) meshes using finite elements based on discrete exterior calculus~\cite{flanders1989, teixeira1999lattice, kotiuga2004, he2007differential, deschamps1981electromagnetics, he2006sparse, donderici2008mixed, teixeira2013differential}.
The electric $\mathcal{E}\left(t, \vec{r}\right)$ and magnetic (flux) $\mathcal{B}\left(t, \vec{r}\right)$ fields are expanded as a sum of Whitney forms (natural interpolants for discrete differential forms) as explained in~\cite{moon2015exact,nicolini2019model, kim2011parallel,na2016local},
\begin{align}
\mathcal{E}\left(t, \vec{r}\right) &= \sum_{i=1}^{N_{1}} e_{i}\left(t\right) w^{(1)}_{i} (\vec{r}), \label{eq:EDoF}\\
\mathcal{B}\left(t, \vec{r}\right) &= \sum_{j=1}^{N_{2}} b_{j}\left(t\right) w^{(2)}_{j} (\vec{r})\label{eq:BDoF}.
\end{align}
The functions $w^{(1)}_{i} (\vec{r})$ and $w^{(2)}_{j} (\vec{r})$ represent Whitney 1-forms (edge-based functions) and Whitney 2-forms (facet-based functions), respectively
\cite{moon2015exact, teixeira2014lattice, he2006geometric}.
These functions
have a biunivocal association to the edges and facets of the finite element mesh, respectively, with $N_1$ denoting the number of edges and $N_2$ the number of facets.
A detailed description of the discrete field update equations can be found in~\citep{na2016local,moon2015exact,kim2011parallel,nicolini2019model}.
The discrete degrees of freedom (DoF) for the electric field and magnetic {flux} can be represented by column vectors comprising the set of
time-dependent coefficients in \eqref{eq:EDoF},\eqref{eq:BDoF}, i.e.
$\mathbf{e}(t) = [e_{1}(t) \ e_{2}(t) \ \ldots \ e_{N_1}(t)]^{\text{T}}$ and $\mathbf{b}(t) = [b_{1}(t) \ b_{2}(t) \ \ldots \ b_{N_2}(t)]^{\text{T}}$
where \textquote{$^\text{T}$} denotes transpose. Their time-discrete counterparts at the $n^{th}$ timestep produced by the EMPIC algorithm, denoted as
$\mathbf{e}^{(n)} = [e_{1}^{(n)} \ e_{2}^{(n)} \ \ldots \ e_{N_1}^{(n)}]^{\text{T}}$ and { $\mathbf{b}^{(n+1/2)} = [b_{1}^{(n+1/2)} \ b_{2}^{(n+1/2)} \ \ldots \ b_{N_2}^{(n+1/2)}]^{\text{T}}$,}
are used as input to the DMD, as described {in the following section}.
In the gather step, the fields are interpolated at each superparticle position based on the same Whitney forms expansion as above. Then, in the particle-pusher step, the position and velocity of the superparticles are updated using Newton's law of motion (with relativistic corrections if necessary) and the Lorentz force equation.
Finally, the scatter step maps the electric current density and the
electric charge distribution produced by the updated velocities and positions of the superparticles back onto the mesh edges and nodes, respectively, while ensuring charge conservation~\cite{moon2015exact}.
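To make the four-stage cycle concrete, the sketch below implements the same
scatter--field-update--gather--push loop for a deliberately simplified
1-dimensional electrostatic analogue (periodic grid, linear weighting, spectral
Poisson solve). It only illustrates the control flow of
Fig.~\ref{fig:empic_cycle}; the actual field update is the Whitney-form scheme
described above, and all parameters below are arbitrary:
\begin{verbatim}
import numpy as np

ng, L, npart, dt, steps = 64, 2*np.pi, 10000, 0.1, 100
dx = L/ng
qm = -1.0                                  # charge-to-mass ratio (illustrative)
rng = np.random.default_rng(0)
xp = rng.uniform(0, L, npart)              # superparticle positions
vp = 0.1*np.sin(xp)                        # superparticle velocities
q = -L/npart                               # superparticle charge (neutralized)

def scatter(xp):
    """Scatter: deposit charge onto the grid with linear (CIC) weighting."""
    rho = np.ones(ng)                      # neutralizing background
    g = xp/dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    np.add.at(rho, i, q*(1 - w)/dx)
    np.add.at(rho, (i + 1) % ng, q*w/dx)
    return rho

def field_update(rho):
    """Field update: solve dE/dx = rho spectrally on the periodic grid."""
    k = 2*np.pi*np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho - rho.mean())
    E_k = np.zeros_like(rho_k)
    E_k[1:] = -1j*rho_k[1:]/k[1:]
    return np.real(np.fft.ifft(E_k))

def gather(E, xp):
    """Gather: interpolate the grid field to the particle positions."""
    g = xp/dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    return E[i]*(1 - w) + E[(i + 1) % ng]*w

for n in range(steps):
    rho = scatter(xp)
    E = field_update(rho)
    vp += qm*gather(E, xp)*dt              # particle pusher (no B field in 1D)
    xp = (xp + vp*dt) % L
\end{verbatim}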
\subsection{Koopman Operator}
\label{sec:koopman}
DMD derives its ability to model nonlinear dynamics from its close relation to the Koopman operator. Indeed, DMD can be viewed as a finite dimensional approximation of the infinite dimensional Koopman operator \cite{mezic2013analysis,rowley2009spectral}. The infinite dimensional linear Koopman operator is associated with evolution of a nonlinear dynamical system on an $N$-dimensional manifold \cite{kutz2016dynamic}, where $N$ is the dimensionality of the state-space. \par
Let us consider a discrete-time dynamical system
\begin{align}
\mathbf{x}^{(n+1)}=F(\mathbf{x}^{(n)}),
\end{align}
where $\mathbf{x}$ is the state of the system belonging to an $N$-dimensional manifold $\mathcal{M}$ ($\mathbf{x}\in\mathcal{M}$) and $F$ is the flow map, $F:\mathcal{M}\mapsto \mathcal{M}$.
In the present application, $\mathbf{x} = \mathbf{e} = [e_{1} \ e_{2} \ \ldots \ e_{N_1}]^{\text{T}}$ from the expansion in \eqref{eq:EDoF}.
The discrete time Koopman operator denoted by $\mathcal{K}$ operates on $g(\mathbf{x})$ ($g: \mathcal{M}\mapsto \mathbb{C}$), the so-called ``observables of the state'' as follows
\begin{align}
\mathcal{K}g(\mathbf{x}^{(n)})=g(F(\mathbf{x}^{(n)}))=g(\mathbf{x}^{(n+1)}).
\end{align}
Suppose the eigenfunctions and eigenvalues of the operator $\mathcal{K}$ are represented as $\phi_j: \mathcal{M}\mapsto \mathbb{C}$ and $\lambda_j \in \mathbb{C}$ respectively, i.e. $ \mathcal{K}\phi_j(\mathbf{x})=\lambda_j\phi_j(\mathbf{x})~,~ j=1,2, \ldots$.
We can represent a vector valued observable $\mathbf{g(x)}=[g_1(\mathbf{x}) \ g_2(\mathbf{x}) \ \ldots \ g_P(\mathbf{x})]^{\text{T}} $ using Koopman modes $\mathbf{v}_j$ and Koopman eigenfunctions $\phi_j$, so long as the eigenfunctions span each observable, $g_i$, $i=1,2, \ldots,P$, as $\mathbf{g(x)} = \sum_{j=1}^\infty \phi_j(\mathbf{x})\mathbf{v}_j$ \cite{rowley2009spectral, mezic2005spectral}. For the $n^{th}$ time instant,
\begin{align}
\mathbf{g}(\mathbf{x}^{(n)})=\sum_{j=1}^\infty \mathcal{K}^n\phi_j(\mathbf{x}^{(0)})\mathbf{v}_j=\sum_{j=1}^\infty\lambda_j^n\phi_j(\mathbf{x}^{(0)})\mathbf{v}_j\label{koop_recon},
\end{align}
$\mathbf{x}^{(0)}$ being the initial state. Eq. \eqref{koop_recon} is the basis for DMD which is a finite dimensional approximation of infinite dimensional Koopman operator \cite{rowley2009spectral}. \par
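For a finite-dimensional linear system the expansion \eqref{koop_recon} is exact
and can be checked directly: with $\mathbf{g(x)}=\mathbf{x}$, the Koopman modes
reduce to the eigenvectors of the system matrix and $\phi_j(\mathbf{x}^{(0)})$
to the coefficients of $\mathbf{x}^{(0)}$ in the eigenbasis. A minimal numerical
illustration (all quantities randomly generated):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = 0.4*rng.standard_normal((4, 4))   # linear flow map x^(n+1) = A x^(n)
lam, Vmodes = np.linalg.eig(A)        # Koopman eigenvalues and modes
x0 = rng.standard_normal(4)
c = np.linalg.solve(Vmodes, x0)       # phi_j(x^(0)): eigenbasis coefficients

n = 7
x_koop = (Vmodes*(lam**n)) @ c        # finite version of (koop_recon)
x_true = np.linalg.matrix_power(A, n) @ x0
print(np.allclose(x_koop, x_true))    # -> True
\end{verbatim}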
\subsection{DMD Algorithm}\label{DMD_algo}
In classical DMD, the state, $\mathbf{x}$, itself serves as the set of observables, i.e. $\mathbf{g(x)} = \mathbf{x}$. DMD is commonly employed to retrieve dominant spatio-temporal patterns of a dynamical system by harvesting time snapshots of the state.
It produces a set of DMD modes $\Phi$, corresponding DMD frequencies $\omega$ and a set of scaling factors $\vartheta$. The DMD modes $\Phi$ capture spatial variation while temporal variation is captured by the term $e^{\omega t}$. A linear combination of properly scaled modes multiplied by $e^{\omega t}$ reconstructs the original data \cite{schmid2010dynamic,tu2013dynamic,kutz2016dynamic} as shown in \eqref{DMD_recon}. Consider a harvesting window of $(l+1)$ snapshots, starting at $t_0=n_0\Delta_t$ and ending at $(n_0+l\Delta n)\Delta_t$, where $\Delta n$ is the number of timesteps between two consecutive snapshots and $\Delta_t$ is the timestep interval. The snapshot matrix $X$ and the shifted snapshot matrix $X'$ are given as
\begin{gather}
X
=
\begin{bmatrix}\label{snapshot_eq1}
\mathbf{x}^{(n_0)}& \mathbf{x}^{(n_0+\Delta n)} & \cdots & \mathbf{x}^{(n_0+(l-1)\Delta n)} \\
\end{bmatrix},
\\
X'
=
\begin{bmatrix}\label{snapshot_eq2}
\mathbf{x}^{(n_0+\Delta n)}& \mathbf{x}^{(n_0+2\Delta n)} & \cdots & \mathbf{x}^{(n_0+l\Delta n)} \\
\end{bmatrix}.
\end{gather}
DMD assumes $X'\approx A\cdot X$ and proceeds to extract the eigenvalues and eigenvectors of $A$ in an efficient manner, where
$A=X'X^\dagger$ (`$\dagger$' is the Moore-Penrose pseudo inverse).
The first step towards low-dimensional representation of $A$ involves performing singular value decomposition (SVD) of the snapshot matrix $X$, resulting in the $U$, $\Sigma$, and $V$ matrices as follows
\begin{align} \label{SVD_eq}
X=U\Sigma V^*,
\end{align}
where `$^*$' denotes complex-conjugate transpose. Next, rank reduction is performed by retaining only the first $r$ columns ($r<l$) of $U, V$ as $U_r$ and $V_r$ respectively, as well as the first $r$ columns and rows of $\Sigma$, as $\Sigma_r$. Typically, the value of $r$ is chosen based on a hard energy threshold or through optimal hard thresholding, {as discussed in \cite{gavish2014optimal,opt_thr_code}}. {In this work we choose an optimal hard thresholding based on the nearest odd value of $r$. An odd value of $r$ ensures at least one DMD eigenvalue on the real axis and thus facilitates tracking of DMD eigenvalues and modes.} The Moore-Penrose pseudo inverse of $X$ is then approximated by $X^\dagger\approx V_r\Sigma_r^{-1}U_r^*$ and $A$ by
\begin{align}\label{A_formula}
A\approx X'V_r\Sigma_r^{-1}U_r^*.
\end{align}
Spectral decomposition of $A$ is invariably computationally expensive due to its high dimensionality. An acceptable compromise is to project $A$ onto the columns of $U_r$ (its POD basis), resulting in $\tilde{A} = U_r^*AU_r = U_r^*X'V_r\Sigma_r^{-1}$. The spectral decomposition of $\tilde{A}$ is given by $\tilde{A}W=W\Lambda$, where the diagonal matrix $\Lambda$ contains eigenvalues $\lambda_i$, $i = 1, 2, \ldots, r$ that are an adequate approximation of eigenvalues of $A$.
Exact DMD modes can be constructed as the columns of $\mathbf{\Phi} = X'V_r\Sigma_r^{-1}W$ \cite{tu2013dynamic}, resulting in the DMD reconstruction ($\hat{\mathbf{x}}$) of the state for $t\geq t_0$, i.e.
\begin{align}\label{DMD_recon}
\mathbf{x}(t)\approx\hat{\mathbf{x}}(t)=\sum_{i=1}^r \vartheta_i\Phi_i e^{\omega_i(t-t_0)},
\end{align}
where $\omega_i=\ln(\lambda_i)/\Delta t$, $\Delta t$ being the time interval between two consecutive snapshots ($\Delta t=\Delta n\Delta_t$). The scaling factor $\vartheta_i$ can be calculated by solving an optimization problem as described in \cite{jovanovic2014sparsity}. This paper employs stacked snapshot matrices for better accuracy. Further details can be found in \cite{schmid2010dynamic, tu2013dynamic, kutz2016dynamic}. In practical applications DMD is performed on real signals, generating complex conjugate pairs of DMD modes with corresponding complex conjugate pairs of frequencies and scaling factors. So, we can re-write \eqref{DMD_recon} in terms of $M$ complex-conjugate ( $\overline{(.)}$ ) pairs of DMD modes as
\begin{align}\label{DMD_recon_conj}
\hat{\mathbf{x}}(t)=\sum_{m=1}^M (\vartheta_m\Phi_m e^{\omega_m(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{\overline{\omega}_m(t-t_0)}).
\end{align}
For purely real modes, the two terms in \eqref{DMD_recon_conj} collapse to a single term $(2M\geq r)$. To accurately capture the periodic behavior of limit-cycle oscillations, the DMD harvesting window should cover multiple cycles. Note that the inter-snapshot sampling interval is dictated by the Nyquist criterion and noise frequency. \par
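The procedure above amounts to a few lines of Python. The sketch below
implements the SVD-based exact DMD of \eqref{SVD_eq}--\eqref{DMD_recon}; for
simplicity the scaling factors are obtained by a least-squares fit to the first
snapshot rather than by the optimization of \cite{jovanovic2014sparsity}, and
the rank rule shown (an energy threshold, preferring an odd value) is a simple
stand-in for optimal hard thresholding:
\begin{verbatim}
import numpy as np

def choose_rank(s, energy=0.99):
    """Rank from an energy threshold, preferring an odd value (cf. text)."""
    s = s[s > s[0]*1e-10]              # drop numerically zero singular values
    r = int(np.searchsorted(np.cumsum(s**2)/np.sum(s**2), energy)) + 1
    return r + 1 if (r % 2 == 0 and r < len(s)) else r

def dmd(X, Xp, dt):
    """Exact DMD of the snapshot pair (X, X')."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = choose_rank(s)
    Ur, Vr, Sinv = U[:, :r], Vh.conj().T[:, :r], np.diag(1.0/s[:r])
    Atilde = Ur.conj().T @ Xp @ Vr @ Sinv     # projected operator
    lam, W = np.linalg.eig(Atilde)            # DMD eigenvalues
    Phi = Xp @ Vr @ Sinv @ W                  # exact DMD modes
    return lam, np.log(lam)/dt, Phi           # eigenvalues, omega, modes

# usage on a synthetic two-frequency snapshot sequence (columns = snapshots)
tt = np.linspace(0, 10, 201)
data = (np.outer(np.sin(np.arange(50)), np.cos(2*tt))
        + np.outer(np.cos(0.3*np.arange(50)), np.sin(5*tt)))
lam, omega, Phi = dmd(data[:, :-1], data[:, 1:], dt=tt[1]-tt[0])
theta = np.linalg.lstsq(Phi, data[:, 0], rcond=None)[0]  # scaling factors
\end{verbatim}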
\section{\label{idet_eq} Equilibrium State Identification}
Detection of the onset of the ``equilibrium state'' is motivated by the need to identify the ideal data-harvesting window for data-driven reduced order methods, as well as for control applications. Accurate long term prediction of equilibrium behavior requires the DMD harvesting region to include the equilibrium region. One may terminate high-fidelity simulations once the system has reached equilibrium, ensuring enough quality data for the DMD to work with. Therefore, in-line detection (i.e. concomitantly with the ongoing simulation) of the equilibrium state is highly desirable. In this work, we introduce a sliding-window DMD approach for identification of the equilibrium state. This is particularly useful while characterizing highly nonlinear physical systems \cite{PhysRevE.99.063311}, as the sliding-window DMD approximates the evolution of a nonlinear system through piecewise linear dynamic systems supported by the windowed data \cite{costa2019adaptive,10.3389/fncom.2019.00075,taylor2018dynamic,hemati2014dynamic, zhang2019online, alfatlawi2019incremental}. Next, we present the algorithm to track DMD modes, followed by detection of the equilibrium phase.
\subsection{Tracking DMD Modes}
DMD captures key features of a dynamical system within the data-harvesting time window. For a sufficiently ``well-behaved'' dynamic system, an infinitesimal shift in the DMD window is not expected to produce a drastic change in its constituent spatio-temporal features. We aim to track each DMD eigenvalue-mode pair $(\lambda, \Phi)$ from one DMD window to the next, because doing so provides insights into how constitutive features of the dynamic system evolve. More importantly, it also helps identify if certain $(\lambda, \Phi)$ pairs become ``sufficiently'' stationary over several windows, indicating the onset of equilibrium. Mode tracking is an evolving field of study \cite{alden1985eigenvalue, beaverstock2015automatic, safin2016advanced, raines2012wideband}. Generally, the tracking of eigenvectors is preferred over eigenvalues due to convergence issues caused by repeated (or nearly equal) eigenvalues \cite{alden1985eigenvalue}. However, in the DMD theoretical framework we work under the assumption that the DMD eigenvalues are distinct \cite{schmid2010dynamic, tu2013dynamic,hirsh2019centering}.\par
In DMD, the effect of a sliding window can be viewed as a perturbation in {the} snapshot matrices. {Let} $X_k, X_k'$ (from \eqref{snapshot_eq1}, \eqref{snapshot_eq2}) be the snapshot matrices for the $k^{th}$ window and $X_{k+1}, X_{k+1}'$ for the $(k+1)^{th}$ window, with $k^{th}$ and $(k+1)^{th}$ window usually multiple snapshots apart. One can write $X_{k+1} = X_k + \delta_1$ and $X_{k+1}'=X_k' + \delta_2$. The amount of perturbation ($\delta_1, \delta_2$) depends on how fast the system changes between two consecutive DMD windows. Through the arguments presented below, we first point out that infinitesimal perturbations in the snapshot matrix will result in only infinitesimal changes in DMD modes and eigenvalues. The following arguments concerning \eqref{snapshot_eq1}-\eqref{DMD_recon} support this claim:
\begin{enumerate}
\item DMD elements $A$, $\tilde{A}$, $\Phi$, as well as the reconstruction in \eqref{DMD_recon} are linear transformations whose continuity ensures small change in output with small change in input.
\item Continuity is less obvious for \eqref{SVD_eq} and the spectral decomposition of $\tilde{A}$.
However, the perturbation bounds for singular values and singular vectors are well documented \cite{stewart1973error, stewart1977perturbation, li1993performance, chen2020asymmetry}, ensuring infinitesimal change in output given infinitesimal change in input for \eqref{SVD_eq}. Regarding the eigendecomposition step, continuity of the roots of a polynomial ensures that eigenvalues of $\tilde{A}$ (roots of its characteristic polynomial) do not experience discontinuities under small perturbations.
Similarly, perturbation bounds for eigenvectors of simple eigenvalues \cite{greenbaum2019first}
assures an infinitesimal change in $W$, thus an infinitesimal change in DMD modes with infinitesimal change in $\tilde{A}$.
\end{enumerate}
Following the above arguments, a gradual shift in the DMD window is expected to lead to a gradual change in DMD eigenvalues and mode shapes. An exception arises at bifurcation points, which we address in the tracking algorithm described below.
\par
The tracking algorithm refers to each DMD mode and corresponding eigenvalue as the pair $(\lambda, \Phi)$. In other words, both the position of $\lambda$ in the complex plane as well as information on the spatial distribution of $\Phi$ are employed for mode tracking. Define ($\lambda_i^{(k)},\Phi_i^{(k)}$) as the DMD eigenvalue-mode pair in the $k^{th}$ window, where $1 \leq i \leq p$. The aim of the tracking algorithm is to assign $(\lambda_j^{(k+1)},\Phi_j^{(k+1)}),~(1\leq j\leq q)$ from $(k+1)^{th}$ window to ($\lambda_i^{(k)},\Phi_i^{(k)}$) as its successor. Assuming $p$ and $q$ to be the number of DMD modes in the $k^{th}$ and $(k+1)^{th}$ window respectively, there can be broadly three scenarios,
\begin{enumerate} [i.)]
\item $p = q$: In this case, each DMD eigenvalue-mode pair in the $k^{th}$ window is associated with exactly one pair in the $(k+1)^{th}$ window.
\item $p > q$: The algorithm must terminate the tracking of some pairs ($\lambda_i^{(k)},\Phi_i^{(k)}$) to which no successors can be assigned.
\item $p < q$: The algorithm must initiate the tracking of newly identified pairs starting at the $(k+1)^{th}$ window after all pairs from the $k^{th}$ window have been assigned unique successors.
\end{enumerate}
The primary condition for successor assignment is given in terms of the placement of DMD eigenvalues. In other words, the first candidate for mode-matching of ($\lambda_i^{(k)},\Phi_i^{(k)}$) is
\begin{align}
\label{matchconditionA} \tilde{j} = \argmin_{j = 1, \ldots q} \| \lambda_i^{(k)} - \lambda_j^{(k+1)} \|
\end{align}
If a conflict arises, resulting in assignment of the same $\tilde{j}$ for multiple $i$, mode-shape matching is invoked as the secondary criterion for tracking. The modal assurance criterion (MAC) is a popular metric used for comparing mode shapes \cite{beaverstock2015automatic}, given by,
\begin{align}
\text{MAC}(\Phi_i,\Phi_j)=\frac{\big| \Phi_i^\text{T} \ \overline{\Phi}_j\big|^2 }{(\Phi_i^\text{T} \ \overline{\Phi}_i)\cdot(\Phi_j^\text{T} \ \overline{\Phi}_j) }.
\end{align}
This work uses the absolute value of MAC, defined as $\rho(\Phi_i, \Phi_j) = |\text{MAC} (\Phi_i, \Phi_j)|$. The maximum value $\rho$ can attain is $1$, denoting an exact configuration match, while $\rho = 0$ indicates no match at all. The tracking algorithm is described in Algorithm 1.
\begin{algorithm}[tbh!] \label{tracking_algo}
\caption{Algorithm for tracking DMD eigenvalue-mode pair $(\lambda, \Phi)$.}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input: }}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE DMD eigenvalue-mode pair ($\lambda_i^{(k)},\Phi_i^{(k)}$) from $k^{th}$ window, $i=1,2,\ldots,p$ and ($\lambda_j^{(k+1)},\Phi_j^{(k+1)}$) from $(k+1)^{th}$ window, $j=1,2,\ldots,q$.
\ENSURE Successor of ($\lambda_{i}^{(k)},\Phi_i^{(k)}$).
\FOR {i = 1 to p}
\STATE {Find $\tilde{j} =\argmin_{j = 1, \ldots q}d(i,j)= \argmin_{j = 1, \ldots q} \| \lambda_i^{(k)} - \lambda_j^{(k+1)} \|$.}
\ENDFOR
\IF{All $i$ are associated with distinct $\tilde{j}$} \label{repeat_step}
\RETURN{($\lambda_{\tilde{j}}^{(k+1)},\Phi_{\tilde{j}}^{(k+1)}$) as successor of respective ($\lambda_i^{(k)},\Phi_i^{(k)}$).}
\ELSE
\STATE{Identify the set of indices $i$ which share a common $\tilde{j}$. Let $I$ be the set of $i$ ($i\in I$) which have common $\tilde{j}=\tilde{j}_I$.}
\STATE{ Identify $ \hat{i}= \argmax_{i\in I} \rho(\Phi_i^{(k)},\Phi_{\tilde{j_I}}^{(k+1)})$ }\label{repeat1}
\STATE{Identify the second closest eigenvalue to $\lambda_{\hat{i}}^{(k)}$ from $(k+1)^{th}$ window after $\lambda_{\tilde{j}_{I}}^{(k+1)}$. Let the index of second closest eigenvalue be $\tilde{j}_{2I}$ .}
\IF{$\big(\rho(\Phi_{\hat{i}}^{(k)},\Phi_{\tilde{j}_I}^{(k+1)})\geq\rho(\Phi_{\hat{i}}^{(k)},\Phi_{\tilde{j}_{2I}}^{(k+1)})\big)$}
\RETURN{($\lambda_{\tilde{j_I}}^{(k+1)},\Phi_{\tilde{j_I}}^{(k+1)}$) as successor of $(\lambda_{\hat{i}}^{(k)},\Phi_{\hat{i}}^{(k)})$.}
\STATE{ For, $\forall{i}\in I-\{\hat{i}\}$, replace the closest eigenvalue index $\tilde{j}_I$ by next closest eigenvalue index $\tilde{j}_{2I}$ from $(k+1)^{th}$ window. If there is no next closest eigenvalue, the eigenvalue-mode pair corresponding to $i\in I-\{\hat{i}\}$ is not tracked further. Repeat from step \ref{repeat_step} for rest of the eigenvalues. }
\ELSE
\STATE{Delete $\hat{i}$ from $I$, so that $\hat{i}\not\in I$. Then repeat from step \ref{repeat1}. }
\ENDIF
\IF{$\big(\rho(\Phi_{i}^{(k)},\Phi_{\tilde{j}_I}^{(k+1)})<\rho(\Phi_{i}^{(k)},\Phi_{\tilde{j}_{2I}}^{(k+1)})\big)$, $\forall{i}\in I$}
\STATE{Identify the $ \tilde{i}= \argmax_{i\in I} |d(i,\tilde{j}_I)-d(i,\tilde{j}_{2I})|$.}
\RETURN{($\lambda_{\tilde{j}_I}^{(k+1)},\Phi_{\tilde{j}_I}^{(k+1)}$) as successor of ($\lambda_{\tilde{i}}^{(k)},\Phi_{\tilde{i}}^{(k)}$).}
\STATE{ For, $\forall{i}\in I-\{\tilde{i}\}$, replace the closest eigenvalue index $\tilde{j}_I$ by next closest eigenvalue index $\tilde{j}_{2I}$ from $(k+1)^{th}$ window. If there is no next closest eigenvalue, the eigenvalue-mode pair corresponding to $i\in I-\{\tilde{i}\}$ is not tracked further. Repeat from step \ref{repeat_step} for rest of the eigenvalues. }
\ENDIF
\ENDIF
\end{algorithmic}
\end{algorithm}
{At the bifurcation point, broadly two scenarios are possible. First, one complex conjugate pair of DMD eigenvalues generates two real eigenvalues after encountering the real axis. Second, two real DMD eigenvalues merge and become a complex conjugate pair of eigenvalues. Since for real data the DMD eigenvalues are mirrored with respect to the real axis, the tracking algorithm concentrates only on the upper half complex plane including the real axis. The first scenario leads to $p<q$, where the algorithm starts tracking the newly generated DMD eigenvalues from that particular window. For the second case $p>q$, the algorithm stops tracking some eigenvalues from the previous window. }
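A stripped-down version of the matching step is sketched below: successors are
proposed by eigenvalue proximity as in \eqref{matchconditionA}, and $\rho$
breaks ties between the two nearest candidates. The full conflict-resolution
logic of Algorithm 1 is omitted, and all function and variable names are
illustrative:
\begin{verbatim}
import numpy as np

def rho(phi_i, phi_j):
    """Absolute modal assurance criterion between two complex mode shapes."""
    num = np.abs(phi_i @ np.conj(phi_j))**2
    den = np.real(phi_i @ np.conj(phi_i))*np.real(phi_j @ np.conj(phi_j))
    return num/den

def match_modes(lam_k, Phi_k, lam_k1, Phi_k1):
    """Greedy successor assignment between windows k and k+1 (simplified)."""
    succ, taken = {}, set()
    # visit current-window eigenvalues, best overall proximity first
    for i in np.argsort([np.min(np.abs(lam_k1 - l)) for l in lam_k]):
        cand = [j for j in np.argsort(np.abs(lam_k1 - lam_k[i]))
                if j not in taken]
        if not cand:
            continue                   # p > q: this track terminates
        # among the two nearest eigenvalues, prefer the better shape match
        best = max(cand[:2], key=lambda j: rho(Phi_k[:, i], Phi_k1[:, j]))
        succ[i] = best
        taken.add(best)
    return succ                        # unmatched j's start new tracks
\end{verbatim}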
\subsection{State Transition to Equilibrium}\label{state_trans}
DMD analysis of a dynamical system focuses on low-dimensional modeling of the equilibrium state, usually ignoring transient phenomena \cite{bagheri2013koopman, page2019koopman, pascarella2019analysis}. Regardless of the ROM employed, knowing when the transient phase comes to an end is useful for terminating the high-fidelity simulation in a timely fashion so that the future solution can be predicted with the ROM (sec. \ref{DMD_algo}). In the literature, several methods are available for detecting state transition of high-dimensional dynamical systems \cite{van2013interaction, feng2014deep, chekroun2014rough, tantet2015early}. The authors in \cite{gottwald2019detecting} have presented a method exploiting DMD reconstruction error for identifying such transitions. The current paper takes advantage of the temporal variation in the position of the DMD eigenvalues in the complex plane with respect to the unit circle. The algorithm presented here can be exploited for in-line applications, given some a priori knowledge about the timescale of the problem. This will be discussed in detail later. A preliminary version of this state transition algorithm was described in our work \cite{nayak2021detecting}. \par
\begin{figure} [t]
\centering
\subfloat[\label{fig:abs_move} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic38_eigen_move_schematic_crop.jpg}}
\subfloat[\label{fig:ang_move} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic39_eigen_move_schematic2_crop.jpg}}
\hfill
\caption{\small{ Schematic representation of DMD eigenvalue ($\lambda$) migration. In practical cases, the trajectory can be complex combination of both the trajectories shown. (a) Radial movement of eigenvalue towards unit circle as the DMD window moves from transient to equilibrium state. (b) Eigenvalue movement along unit circle to the equilibrium position (green) as the DMD window moves towards equilibrium from transient.} }\label{fig:general_eigen_move}
\end{figure}
The fundamental idea behind the proposed approach is that given a sufficiently wide data harvesting region within the equilibrium state, it follows that (a) the dominant DMD eigenvalues lie on the unit circle \cite{schmid2010dynamic, tu2013dynamic, sasaki2019using} and (b) the mode shapes and corresponding frequencies associated with the dominant DMD modes remain invariant. Intuitively, the latter makes sense because in equilibrium the dynamics of the system remain unchanged irrespective of the observation window as long as that window is sufficiently wide. The Maxwell-Vlasov equations (governing equations in kinetic plasma simulations) are autonomous in nature. In the equilibrium state, the number of particles entering the solution domain remains the same as the number of particles leaving, ensuring that the governing dynamics are autonomous.
Note that the solution of the well-posed DMD is unique \cite{hirsh2019centering}, whereby the extracted dominant DMD modes and corresponding eigenvalues remain unchanged as we slide the window within equilibrium region. Of course, conditions (a) and (b) are not necessarily true in the transient state as indicated by the presence of dominant DMD eigenvalues away from the unit circle and continuously changing dynamics. \par
In practical scenarios, the data obtained is not free from noise {(in a general sense, either from finite machine precision and discretization errors present in a simulation or from ambient and instrument noise present in a measurement)}. As a result, conditions (a) and (b) are not exactly satisfied.
Therefore, we emphasize that invariance of characteristics applies to only dominant DMD modes (i.e. with physically meaningful character) in the equilibrium stage. DMD modes corresponding to the noise space of the data do not follow such observations. Here, we adopt a $5\%$ error criterion, so the first few high energy DMD modes corresponding to $\geq 95\%$ of the reconstructed amplitude are defined as the dominant modes. However, the modal amplitude, defined as $A_m(t) = \| \vartheta_m\Phi_m e^{\omega_m(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{\overline{\omega}_m(t-t_0)} \|_2$ ($\|.\|_2$ denotes the Frobenius norm) varies with time, so the measurements are performed at the end of the DMD harvesting window. As the harvesting window approaches the equilibrium state, two key parameters are tracked. The $\alpha$ parameter measures the relative error in the reconstructed data, assuming exponential growth or decay to be the only source of error due to non-zero distance of the dominant DMD eigenvalues from the unit circle. The $\beta$ parameter represents the error in reconstructed data considering the error only due to fluctuation in phase of dominant DMD eigenvalues. The expressions for $\alpha$ and $\beta$ are derived next. Recall the DMD reconstruction formula{,}
\begin{subequations}
\begin{align}
\hat{\mathbf{x}}(t)&\approx\sum_{m=1}^{M_d} (\vartheta_m\Phi_m e^{\omega_m(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{\overline{\omega}_m(t-t_0)})\\
& =\sum_{m=1}^{M_d} e^{\omega_{mR}(t-t_0)}(\vartheta_m\Phi_m e^{j\omega_{mI}(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{-j\omega_{mI}(t-t_0)})\nonumber\\
& =\sum_{m=1}^{M_d} e^{\omega_{mR}(t-t_0)}\psi_m,
\label{DMD_recon_alpha}
\end{align}
\end{subequations}
where the complex frequency is $\omega_m=\omega_{mR}+j\omega_{mI}$, $\psi_m=(\vartheta_m\Phi_m e^{j\omega_{mI}(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{-j\omega_{mI}(t-t_0)})$
is the oscillating part of the solution, and $M_d$ is the number of dominant DMD modes. In equilibrium, the DMD solution must not include exponentially growing or decaying factors, i.e., $\omega_{mR}= 0$ for $m = 1, 2, \ldots, M_d$. Assuming non-zero $\omega_{mR}$ to be the only source of error, the ideal solution ${\mathbf{x}}(t)$ must be
\begin{align}\label{DMD_recon_conj2_org}
{\mathbf{x}}(t)\approx\sum_{m=1}^{M_d} \psi_m.
\end{align}
Using $\omega_m = \ln(\lambda_m)/\Delta t$, $\lambda_m = |\lambda_m| e^{j\theta_m}$, one can write $\omega_{m} = \frac{\ln|\lambda_m|}{\Delta t} + \frac{j\theta_m}{\Delta t} = \omega_{mR} + j\omega_{mI}$, giving $\omega_{mR} = \frac{\ln|\lambda_m|}{\Delta t}$. From \eqref{DMD_recon_alpha},
\begin{align}\label{DMD_recon_conj2}
\hat{\mathbf{x}}(t)\approx\sum_{m=1}^{M_d} |\lambda_m|^{\frac{(t-t_0)}{\Delta t}} \psi_m.
\end{align}
The relative 2-norm error, $\delta(t)$, is given below, under the assumption that error is only due to exponential growth/decay. This results in the definition of the parameter $\alpha$:
\begin{align}\label{DMD_err_alpha}
\delta(t)=\frac{||\hat{\mathbf{x}}(t)-{\mathbf{x}}(t)||_2}{||{\mathbf{x}}(t)||_2}
&=\frac{||\sum_{m=1}^{M_d} |\lambda_m|^{\frac{(t-t_0)}{\Delta t}} \psi_m-\sum_{m=1}^{M_d} \psi_m||_2}{||\sum_{m=1}^{M_d} \psi_m||_2}\\
&=\frac{||\sum_{m=1}^{M_d} (|\lambda_m|^{\frac{(t-t_0)}{\Delta t}}-1) \psi_m||_2}{||\sum_{m=1}^{M_d} \psi_m||_2}\nonumber
\end{align}
and hence, from the triangle inequality, it follows that
\begin{align}\label{DMD_err_alpha_2}
\delta(t)&\leq\frac{\sum_{m=1}^{M_d} ||(|\lambda_m|^{\frac{(t-t_0)}{\Delta t}}-1) \psi_m||_2}{||\sum_{m=1}^{M_d} \psi_m||_2}=\alpha(t-t_0)=\alpha(\tilde{t}).
\end{align}
Note that $\psi_m$ is a function of the time difference between the target time $t$ and the reference initial time of that particular DMD window, $t_0$, which we denote as $\tilde{t}$. As a result, we can write $\alpha(t-t_0)$=$\alpha(\tilde{t})$ in the above. As the DMD window moves towards equilibrium, the dominant DMD eigenvalues move closer to the unit circle (Fig. \ref{fig:general_eigen_move}), thus decreasing $\alpha(\tilde{t})$ for a fixed $\tilde{t}$. We define the convergence in $\alpha$ as the termination of its secular decay. However, a dynamical system can continue to be in transience even after all DMD eigenvalues have moved to the unit circle (Fig. \ref{fig:ang_move}). This happens when the transient state involves variation of frequency content instead of amplitude. Thus the presented approach also verifies that the dominant DMD eigenvalues do not move along the unit circle. Doing so ensures that the error due to shift in phase ($\Delta \theta_m$) over successive windows is less than some predetermined threshold $\beta_{thr}$.
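Before deriving $\beta$, we note that the $\alpha(\tilde{t})$ bound of \eqref{DMD_err_alpha_2} can be evaluated directly from the dominant eigenvalues, modes, and amplitudes of a given window. The following minimal NumPy sketch illustrates this; the array shapes and the function name are illustrative assumptions, not part of any DMD library.
\begin{verbatim}
import numpy as np

def alpha_parameter(Phi, vartheta, lams, t_tilde, dt):
    """Sketch of the alpha(t_tilde) bound.

    Phi      : (N, M_d) dominant DMD modes (columns)
    vartheta : (M_d,)   mode amplitudes
    lams     : (M_d,)   dominant DMD eigenvalues
    """
    # psi_m = 2 Re{ vartheta_m Phi_m e^{j omega_I t_tilde} },
    # with omega_I = Arg(lambda_m) / dt
    omega_I = np.angle(lams) / dt
    Psi = 2.0 * np.real(Phi * (vartheta * np.exp(1j * omega_I * t_tilde)))
    # |lambda_m|^{(t - t0)/dt} - 1 measures the radial distance of each
    # eigenvalue from the unit circle
    growth = np.abs(lams) ** (t_tilde / dt)
    num = np.sum(np.abs(growth - 1.0) * np.linalg.norm(Psi, axis=0))
    den = np.linalg.norm(Psi.sum(axis=1))
    return num / den
\end{verbatim}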
\begin{align}
\hat{\mathbf{x}}(t)& \approx \sum_{m=1}^{M_d} (\vartheta_m\Phi_m e^{\tilde{\omega}_m(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{\overline{\tilde{\omega}}_m(t-t_0)})\\
&=\sum_{m=1}^{M_d} (\vartheta_m\Phi_m e^{(\omega_{mR}+j(\omega_{mI}+\Delta\omega_{mI}))(t-t_0)}+\overline{\vartheta}_m\overline{\Phi}_m e^{(\omega_{mR}-j(\omega_{mI}+\Delta\omega_{mI}))(t-t_0)})\nonumber\\
&=\sum_{m=1}^{M_d} (\chi_m e^{j\Delta\omega_{mI}(t-t_0)}+\overline{\chi}_m e^{-j\Delta\omega_{mI}(t-t_0)})\nonumber
\end{align}
\begin{align}
&=\sum_{m=1}^{M_d} (\chi_m e^{j\frac{\Delta\theta_{m}}{\Delta t}(t-t_0)}+\overline{\chi}_m e^{-j\frac{\Delta\theta_{m}}{\Delta t}(t-t_0)})\nonumber\\
&=\sum_{m=1}^{M_d} 2\text{Re}\{\chi_m e^{j\frac{\Delta\theta_{m}}{\Delta t}(t-t_0)}\},
\end{align}
where $\tilde{\omega}_m=\omega_m+j\Delta\omega_{mI}$ and $\chi_m=\vartheta_m\Phi_m e^{\omega_m(t-t_0)}$, a function of $(t-t_0)=\tilde{t}$. As above, we compute the relative 2-norm error $\delta(t)$ under the assumption that the error is only due to $\Delta\omega_{mI}$, with ${\mathbf{x}}(t)\approx\sum_{m=1}^{M_d} 2\text{Re}\{\chi_m\}$ as the ideal solution. This results in the definition of the parameter $\beta$:
\begin{align}\label{DMD_err_beta}
\delta(t)=\frac{||\hat{\mathbf{x}}(t)-{\mathbf{x}}(t)||_2}{||{\mathbf{x}}(t)||_2}&=\frac{||\sum_{m=1}^{M_d} 2\text{Re}\{\chi_me^{j\frac{\Delta\theta_m}{\Delta t}(t-t_0)}\}-\sum_{m=1}^{M_d} 2\text{Re}\{\chi_m\}||_2}{||\sum_{m=1}^{M_d} 2\text{Re}\{\chi_m\}||_2}\nonumber\\
&=\frac{||\sum_{m=1}^{M_d} 2\text{Re}\{\chi_m(e^{j\frac{\Delta\theta_m}{\Delta t}(t-t_0)}-1)\}||_2}{||\sum_{m=1}^{M_d} 2\text{Re}\{\chi_m\}||_2}\nonumber\\
\end{align}
and hence
\begin{align}\label{DMD_err_beta_2}
\delta(t) \leq \frac{\sum_{m=1}^{M_d}|| \text{Re}\{\chi_m(e^{j\frac{\Delta\theta_m}{\Delta t}(t-t_0)}-1)\}||_2}{||\sum_{m=1}^{M_d} \text{Re}\{\chi_m\}||_2}
=\beta(t-t_0)=\beta(\tilde{t}).
\end{align}
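Analogously to $\alpha$, the $\beta(\tilde{t})$ bound of \eqref{DMD_err_beta_2} can be computed from the phase shifts $\Delta\theta_m$ of the tracked eigenvalues. A minimal sketch in the same hypothetical NumPy style follows.
\begin{verbatim}
import numpy as np

def beta_parameter(Phi, vartheta, lams, dtheta, t_tilde, dt):
    """Sketch of the beta(t_tilde) bound; dtheta is an (M_d,) array of
    phase shifts of the dominant eigenvalues between windows."""
    omega = np.log(lams) / dt                         # complex frequencies
    chi = Phi * (vartheta * np.exp(omega * t_tilde))  # chi_m, (N, M_d)
    drift = np.exp(1j * dtheta * t_tilde / dt) - 1.0
    num = np.sum(np.linalg.norm(np.real(chi * drift), axis=0))
    den = np.linalg.norm(np.real(chi).sum(axis=1))
    return num / den
\end{verbatim}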
Parameters $\alpha$ and $\beta$ are computed for the dominant DMD modes only. Our goal is to detect the ``knee'' or ``elbow'' region in the graph of $\alpha$ against $k$ (window index), denoting the transition to the equilibrium state. However, in-line detection of the knee region is challenging, especially when the data is noisy. We thus examine the rolling average of $\alpha$ over $W$ successive windows and search for a non-negative slope in the averaged graph, hinting at convergence in $\alpha$. Once convergence in $\alpha$ is detected, the focus shifts to the parameter $\beta$ to ensure that the error due to phase shift of the dominant eigenvalues over $W$ windows is within an acceptable bound $(\leq \beta_{thr})$. We will illustrate the in-line algorithm (Algorithm 2) assuming we have some a priori knowledge about the timescale of the problem to inform the selection of an appropriate window width $T$. \par
\begin{algorithm}[hbt!]\label{algo_equi}
\caption{Algorithm for detecting onset of equilibrium state}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input: }}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Data from high-fidelity simulation.
\ENSURE Window index indicating onset of equilibrium.
\\ \textit{Initialization} : For the first window ($k=1$), calculate $r$ as in \eqref{A_formula} using optimal hard thresholding and use it for the rest of the algorithm.
\STATE At the current ($k^{th}$) window, let $D$ denote the set of dominant DMD eigenvalues. Identify the $i$ for which $\lambda_i^{(k)}\in D$, where $1\leq i \leq p$, and $p$ is the number of DMD modes ($M$ from \eqref{DMD_recon_conj}) in the $k^{th}$ window. \label{first_step}
\STATE Calculate $\alpha{(\tilde{t})}$ for the $k^{th}$ window at $\tilde{t}=T$, denoted by $\alpha(T)^{(k)}$, where $T$ is the DMD window width. \label{second_step}
\STATE For $k\geq Wh$, perform averaging of $\alpha$ over $W$ windows to get $\langle\alpha\rangle_W^{(h)}=[\alpha(T)^{(s+1)}+\alpha(T)^{(s+2)}+\ldots+\alpha(T)^{(s+W)}]/W$, where $s=W(h-1)$, $h=1,2,\ldots$.
\IF {($\log(\langle\alpha\rangle_W^{(h)})\geq \log(\langle\alpha\rangle_W^{(h-1)})$)}\label{check_step}
\STATE From the tracking Algorithm 1, identify the predecessors of $\lambda_i^{(k)}\in D$ for the previous $W$ windows, $\lambda_{(i)}^{(k-1)},\lambda_{(i)}^{(k-2)},\ldots,\lambda_{(i)}^{(k-W)}$, and calculate
$\Delta\theta_{a,i}^{(k)}=|\text{Arg}(\lambda_{(i)}^{(k-a)})-\text{Arg}(\lambda_{i}^{(k)})|$, where $a=1,2,\ldots,W$.
\STATE Calculate $\beta(T)$ with respect to $\Delta\theta_{a,i}^{(k)}$ at the $k^{th}$ window for the $W$ predecessors, $\beta(T)_1^{(k)},\beta(T)_2^{(k)},\ldots,\beta(T)_W^{(k)}$.
\IF {($\beta(T)_1^{(k)},\beta(T)_2^{(k)},\ldots,\beta(T)_W^{(k)}\leq \beta_{thr}$)}
\STATE Stop harvesting.
\RETURN $k$
\ELSE
\STATE Continue harvesting, move to the $(k+1)^{th}$ window and return to step \ref{first_step}.
\ENDIF
\ELSE
\STATE Continue harvesting, move to the $(k+1)^{th}$ window and return to step \ref{first_step}.
\ENDIF
\end{algorithmic}
\end{algorithm}
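A compact in-line rendition of Algorithm 2 is sketched below in the same hypothetical NumPy-style Python, reusing the illustrative \texttt{alpha\_parameter} and \texttt{beta\_parameter} helpers introduced earlier. The interface of the \texttt{windows} iterator, which would supply per-window DMD outputs plus the phase shifts from the tracking Algorithm 1, is an assumption made only for this sketch.
\begin{verbatim}
import numpy as np

def detect_equilibrium(windows, dt, T, W, beta_thr):
    """Return the window index k at the onset of equilibrium,
    or None if the data stream ends while still transient."""
    alphas, averages = [], []
    for k, (Phi, vartheta, lams, dtheta) in enumerate(windows, start=1):
        alphas.append(alpha_parameter(Phi, vartheta, lams, T, dt))
        if k % W == 0:                      # finished the h-th block of W
            averages.append(np.mean(alphas[-W:]))
            if len(averages) >= 2 and \
               np.log(averages[-1]) >= np.log(averages[-2]):
                # alpha stopped its secular decay; now check phase drift
                betas = [beta_parameter(Phi, vartheta, lams,
                                        dtheta[a], T, dt)
                         for a in range(W)]
                if max(betas) <= beta_thr:
                    return k                # stop harvesting here
    return None
\end{verbatim}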
It is important to note that the performance of Algorithm 2 depends on the choice of parameters $T$, $W$, $\beta_{thr}$ and $\Delta_k$, where $\Delta_k$ is the shift between two successive sliding DMD windows. These parameters must be selected beforehand and do not adapt during the run. We make the following observations:
\begin{itemize}
\item The parameter $T$ denotes the width of each sliding window. Prior knowledge about the time-scale of the problem helps to make sure that $T$ covers multiple oscillation cycles (if any) in the equilibrium state. If the window width is not sufficient to capture the dynamics of the equilibrium, the temporal variation in the parameter $\alpha$ might not exhibit convergence even as the window slides towards equilibrium. For offline applications, one of many simple algorithms such as zero crossing detection or peak detection can be used to approximate the period of limit cycle oscillations.
\item The shift $\Delta_k$ generally spans an integer multiple of snapshots. For in-line processes, the natural choice is to shift by one snapshot, in which case DMD is performed when a new snapshot becomes available. In our test cases, we shift by two snapshots, as it provides enough headroom to accommodate varying snapshot intervals while keeping the shift $\Delta_k$ constant. Ideally, overlap between successive windows would be avoided to minimize the computation cost. In practice, however, intersection between consecutive sliding windows is employed for the following reasons: (i.) window overlap implies a smaller perturbation in the snapshot matrix, which helps with tracking DMD eigenvalue-mode pairs, and, (ii.) overlap helps determine $W$, the number of windows over which $\alpha$ is averaged.
\item $W$ is chosen such that it is the minimum number of shifts for there to be no overlap between the $k^{th}$ and $(k+W)^{th}$ windows. In other words, $W=\floor{T/\Delta_k}$. Too small a value for $W$ can result in premature, erroneous detection of equilibrium, especially for highly noisy $\alpha$ variation. While a large $W$ overcomes this difficulty, it does so at the cost of delayed detection of equilibrium. Delayed detection of equilibrium does not pose a risk of increased error in DMD extrapolation, only inefficiency.
\item The threshold $\beta_{thr}$ is based on the acceptable error limit for each application. In this work we follow a $5\%$ error criterion, but from experience, the error due to fluctuations in phase is much smaller than the error due to exponential growth/decay. Therefore, we set $\beta_{thr}=0.01$ $(1\% ~\text{error})$. As shown in \eqref{DMD_err_beta_2}, $\beta$ is calculated based on the difference in phase of a dominant eigenvalue at the $k^{th}$ window and its predecessors at previous windows. It aims to detect slow unidirectional movement (Fig. \ref{fig:ang_move}) of eigenvalues along the unit circle. As a rule of thumb, we check for the error due to shift in phase over the $W$ previous windows. If there is extremely slow movement of DMD eigenvalues along the unit circle in the transient state, it might go undetected for a small $W$ value. Of course, very slow phase variation might not be of interest, as long as the reconstruction accuracy stays within acceptable limits.
\end{itemize}
\section{Results}
In this section we apply the tracking and equilibrium detection algorithms to several plasma examples. The effectiveness of DMD in the modeling and prediction of self electric fields, as well as its effect on the particle dynamics, is demonstrated. For an application of the proposed equilibrium detection algorithm to a classic textbook use-case, the reader is referred to \ref{lorenz_96}, wherein
the well-known Lorenz'96 oscillator is studied. This section presents three test cases. The first two examples consider a two dimensional (2-D) plasma ball expansion and an oscillating electron beam, respectively. We establish the effectiveness of DMD in extracting low dimensional key features from the self electric field data $\mathbf{e}(t)$ of EMPIC kinetic plasma simulations and reconstruct the data with good accuracy. The sliding-window DMD technique is then employed for in-line identification of the equilibrium state of each system. Finally, we investigate the extrapolation accuracy beyond the detected equilibrium point for both the predicted fields as well as the particle dynamics.
The convergence of dominant DMD mode shapes and the movement of the corresponding eigenvalues in the complex plane are presented.
The final example deals with virtual cathode formation, where the main focus is on the accuracy of predicted particle dynamics.
We treat the data generated from high-fidelity EMPIC simulation as the ``ground truth'' to evaluate DMD performance. Nevertheless,
for long-term predictions, we should keep in mind that long simulation runtimes might introduce numerical noise in high-fidelity data queried at later times due to ``numerical heating'' effects \citep{PhysRevE.95.043302}. Due to this and other sources of numerical error mentioned earlier, some of the dominant DMD eigenvalues might not lie exactly on the unit circle. After detecting equilibrium and before extrapolation, we adjust the dominant DMD eigenvalues in the radial direction so that they are exactly on the unit circle.
\subsection{Plasma ball expansion}
The solution domain is a $L \times L$ square two-dimensional cavity ($L=10$ m: see Fig.~\ref{fig:ball_ss_snap}). It is discretized using an irregular triangular mesh with $N_0=8037$ nodes, $N_1 = 23797$ edges and $N_2=15761$ triangles. Superparticles are initially placed at the center of the cavity within a circle of radius $0.5$ m. The plasma ball is initially assumed to be neutral as each electron-ion pair is initially located at the exact same position. All four sides of the cavity are assumed to be perfect magnetic conductors (PMC). Superparticles are given an initial radial velocity with Maxwellian distribution. The timestep interval is $0.1$ ns and each superparticle represents $2\times 10^5$ electrons. Superparticles are absorbed as they hit the boundary. We sample the data every $\Delta_n=500$ timesteps until $n = 500000$.\par
\begin{figure} [t]
\centering
\subfloat[ \label{fig:ball_ss_snap} ]{%
\includegraphics[width=0.475\linewidth]{jcp_pic00_ball_snap_n800_crop.jpg}}
\hspace{0.0cm}
\subfloat[ \label{fig:ball_ss_sing} ]{%
\includegraphics[width=0.51\linewidth]{jcp_pic14_ball_E_ss_sing_vals_401to550_r19_m12_crop.jpg}}
\hfill
\caption{\small{ (a) Snapshot of plasma ball expansion at $n=400000$ in a square cavity. The yellow dots represent superparticles and magenta arrows show the self electric field quiver plot. (b) Normalized singular values from SVD of the snapshot matrix in the equilibrium state. } }
\end{figure}
\subsubsection{Self Electric Field Reconstruction}\label{ball_in_eq}
In equilibrium, the self electric field attains a steady state with a constant spatial configuration (Fig.~\ref{fig:ball_ss_snap}). For extracting low-dimensional features in equilibrium through DMD, we harvest data from $n=200500$ to $n=275000$ with interval $\Delta t = 100$ ns between consecutive snapshots. A selection of $r = 19$ leads to $12$ DMD modes, effectively reducing the degrees of freedom from $23797$ to only $12$. Fig. \ref{fig:ball_ss_sing} shows the exponential decay of singular values, revealing the dominance of a single mode. The DMD eigenvalue distribution in the complex plane and dominant stationary mode ($\Phi_1^{(ss)}$) field configuration are shown in Fig. \ref{fig:ball_ss_eig_mode1}. The modes are numbered according to their energy content ($|A_m|^2$), with $\Phi_1^{(ss)}$ being the most energetic mode. With increasing mode indices (decreasing energy), the field configuration becomes more random, as can be observed in the recessive modes (Fig.~\ref{fig:ball_ss_rec}).
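To make the workflow above concrete, the following self-contained sketch shows an exact-DMD computation with SVD truncation and energy-based mode ranking, written in plain NumPy. It is a generic illustration under assumed array shapes (snapshots as columns), not the exact code used to produce these results; the reduction from $r=19$ retained singular values to the $12$ modes quoted above is consistent with counting each complex-conjugate eigenvalue pair once.
\begin{verbatim}
import numpy as np

def exact_dmd(X, r, dt):
    """X : (N, l) snapshot matrix; r : SVD truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vh[:r, :].conj().T
    Atil = (U.conj().T @ X2 @ V) / S        # r x r reduced operator
    lams, Wmat = np.linalg.eig(Atil)        # DMD eigenvalues
    Phi = (X2 @ V / S) @ Wmat               # exact DMD modes (columns)
    omega = np.log(lams) / dt               # continuous-time frequencies
    # amplitudes from the first snapshot; rank modes by energy |amp|^2
    amps = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]
    order = np.argsort(np.abs(amps) ** 2)[::-1]
    return lams[order], Phi[:, order], amps[order], omega[order]
\end{verbatim}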
\begin{figure} [t]
\centering
\subfloat[ \label{fig:ball_ss_eig} ]{%
\includegraphics[width=0.49\linewidth]{jcp_pic15_ball_E_ss_eig_vals_401to550_r19_m12_crop.jpg}}
\hspace{0.0cm}
\subfloat[ $\Phi_1^{(ss)}$\label{fig:ball_ss_mode1} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic17_ball_E_ss_mode1_dom_401to550_r19_m12_crop.jpg}}
\hfill
\caption{\small{(a) DMD eigenvalues in the complex plane. The green circle denotes the dominant mode and the black curve indicates the unit circle. (b) Dominant mode $\Phi_1^{(ss)}$. The blue arrows show the self electric field quiver plot. The colormap indicates the logarithm (base 10) of the amplitude. } }\label{fig:ball_ss_eig_mode1}
\end{figure}
\begin{figure} [t]
\centering
\subfloat[$\Phi_2^{(ss)}$ \label{fig:ball_ss_mode2} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic18_ball_E_ss_mode2_rec_401to550_r19_m12_crop.jpg}}
\subfloat[$\Phi_3^{(ss)}$ \label{fig:ball_ss_mode3} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic19_ball_E_ss_mode3_rec_401to550_r19_m12_crop.jpg}}
\subfloat[$\Phi_4^{(ss)}$ \label{fig:ball_ss_mode4} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic20_ball_E_ss_mode4_rec_401to550_r19_m12_crop.jpg}}\\
\subfloat[$\Phi_5^{(ss)}$ \label{fig:ball_ss_mode5} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic21_ball_E_ss_mode5_rec_401to550_r19_m12_crop.jpg}}
\subfloat[$\Phi_6^{(ss)}$ \label{fig:ball_ss_mode6} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic22_ball_E_ss_mode6_rec_401to550_r19_m12_crop.jpg}}
\subfloat[$\Phi_7^{(ss)}$ \label{fig:ball_ss_mode7} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic23_ball_E_ss_mode7_rec_401to550_r19_m12_crop.jpg}}
\caption{\small{ First six recessive DMD modes for the plasma ball in equilibrium.}
\label{fig:ball_ss_rec}}
\end{figure}
\begin{figure} [hbt!]
\centering
\subfloat[ Transient state DMD modes\label{fig:ball_trans_self_corr} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic03_ball_E_trans_self_corr_rec_1to150_r27_m15_crop.jpg}}
\subfloat[ Equilibrium state DMD modes\label{fig:ball_ss_self_corr} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic16_ball_E_ss_self_corr_401to550_r19_m12_crop.jpg}}
\hfill
\caption{\small{ (a) Absolute value of MAC coefficient $\rho$ between transient state DMD modes. (b) Coefficient $\rho$ between equilibrium state DMD modes. } }\label{fig:ball_ss_trans_self_corr}
\end{figure}
For comparison, we also perform DMD in the transient state, with the harvesting region spanning from $n=500$ to $n=75000$ and snapshots $\Delta t=100$ ns apart. We choose $r=27$, giving us $15$ DMD modes. For comparing the spatial configuration of DMD modes, we plot the absolute value of the MAC ($\rho$) in matrix form in Fig. \ref{fig:ball_ss_trans_self_corr}. Unlike other projection-based reduced-order modeling techniques such as the proper orthogonal decomposition (POD), DMD does not enforce orthogonality of modes in the spatial domain, which explains the presence of nontrivial off-diagonal elements. At the same time, Fig.~\ref{fig:ball_ss_trans_self_corr} reveals a clear distinction among the various equilibrium DMD modes.
On the other hand, transient-state DMD modes are less distinguishable from each other due to more complex dynamics. The 2-norm relative error in the reconstructed self electric field is shown in Fig.~\ref{fig:ball_ss_trans_err} for different sampling rates.
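The MAC matrix of Fig.~\ref{fig:ball_ss_trans_self_corr} is inexpensive to form from the mode matrix. A minimal sketch, assuming the modes are stored as the columns of a hypothetical array \texttt{Phi}, is given below.
\begin{verbatim}
import numpy as np

def mac_matrix(Phi):
    """|rho_ij| = |<Phi_i, Phi_j>| / (||Phi_i|| ||Phi_j||)."""
    G = Phi.conj().T @ Phi                  # Gram matrix of inner products
    norms = np.sqrt(np.real(np.diag(G)))
    return np.abs(G) / np.outer(norms, norms)
\end{verbatim}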
The 2-norm relative error in the DMD reconstruction ($\hat{\mathbf{e}}$) compared to the full-order solution (${\mathbf{e}}$) at the $n^{th}$ timestep ($\delta^{(n)}$) is given by
\begin{align}\label{error}
\delta^{(n)}=\frac{||\hat{\mathbf{e}}^{(n)}-\mathbf{e}^{(n)}||_2}{||\mathbf{e}^{(n)}||_2}.
\end{align}
As expected, decreasing the sampling interval ensures better accuracy, but the solution diverges more rapidly in the extrapolation region. Note that the surprisingly good performance for $\Delta t=200$ ns in Fig. \ref{fig:ball_trans_err} can be attributed to an ``aliasing''-like effect for this particular sampling interval. The $\Delta t=200$ ns case is an anomaly for which the DMD frequencies are such that they produce a stable solution with relative error oscillating around a fixed value. This is further confirmed by the fact that the $\Delta t=250$ ns case continues to follow the trend shown by the $\Delta t=50$ and $100$ ns cases. The higher error at the beginning of the simulation can be attributed simply to the very low field magnitudes at early times, causing a spike in the relative error. The extrapolation error is higher for transient state DMD compared to DMD in the equilibrium state, which further evokes the need to correctly determine the equilibrium state for good prediction accuracy.
\subsubsection{Sliding-Window DMD}
In the plasma ball expansion case, the self-fields attain a non-oscillatory steady state in equilibrium. This makes the prediction task trivial once equilibrium is detected and presents an opportunity to verify the accuracy of the equilibrium detection algorithm. First, we discuss the robustness of our algorithm with respect to the sampling interval and sliding window width. Then, the convergence of dominant mode shapes and accuracy of predicted particle dynamics is presented.
\begin{figure} [t]
\centering
\subfloat[Transient state DMD\label{fig:ball_trans_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic12_ball_E_trans_err_1to150_r27_m15_new_crop.jpg}}
\subfloat[Equilibrium state DMD\label{fig:ball_ss_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic26_ball_E_ss_err_401to550_f1,f2,f4_crop.jpg}}
\hfill
\caption{\small{(a) 2-norm relative error when the DMD window (green shaded area) is in the transient region. (b) 2-norm relative error when the DMD window (green shaded area) is in the equilibrium region. }} \label{fig:ball_ss_trans_err}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{jcp_pic27_ball_E_alpha_var_win200_wid60_f2_r13_crop.jpg}
\caption{\small{Variation in $\alpha(T)$ as the window slides towards the equilibrium state for $\Delta t=100$ ns.}}
\label{fig:ball_sld_alpha_var}
\end{figure}
\begin{figure} [hbt!]
\centering
\subfloat[\label{fig:ball_sld_alpha_comp1} ]{%
\includegraphics[width=0.49\linewidth]{jcp_pic29_ball_E_sld_alpha_var_win200_wid48,60,72_f2_av_111_crop.jpg}}
\subfloat[\label{fig:ball_sld_alpha_comp2} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic31_ball_E_sld_alpha_var_win200_wid60_f1,f2,f4_av_111_crop.jpg}}
\hfill
\caption{\small{ (a) Sensitivity of Algorithm 2 towards window width $T~(\pm20\%)$, keeping fixed $\Delta t=100$ ns. (b) Sensitivity of Algorithm 2 towards sampling interval $\Delta t$, keeping fixed $T=3~\mu$s. }}\label{fig:ball_sld_alpha_comp}
\end{figure}
\paragraph{Equilibrium Detection}\label{ball_eq_det}
Algorithm 2 is used for identifying the onset of the equilibrium state with $\beta_{thr}=0.01$ and $\Delta_k=200$ ns. In this case we know that the fields will eventually attain a steady state without limit-cycle oscillations, whereby the selection of $T$ is not a critical factor. We choose $T=3$ $\mu$s with 30 snapshots inside the harvesting window. The starting and ending points of the $k^{th}$ DMD window are given by $n_{st}(k)=\Delta_n+(k-1)\times n_{\Delta_{k}}$ and $n_{en}(k)=(\Delta_n+n_T)+(k-1)\times n_{\Delta_k}$ respectively, where $n_{\Delta_{k}}$ is the window shift in terms of timesteps ($\Delta_k=n_{\Delta_{k}}\Delta_t$) and $n_T$ denotes the number of timesteps forming the DMD window ($T=n_T\Delta_t$). \par
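This bookkeeping is straightforward to encode; a short sketch follows, where the quoted values for this example ($\Delta_n=500$, $n_{\Delta_k}=2000$ and $n_T=30000$ timesteps of $0.1$ ns each) are used only as an illustration.
\begin{verbatim}
def window_bounds(k, Dn, n_dk, n_T):
    """Start/end timesteps n_st(k), n_en(k) of the k-th DMD window."""
    n_st = Dn + (k - 1) * n_dk
    n_en = (Dn + n_T) + (k - 1) * n_dk
    return n_st, n_en

# Plasma-ball example: window k = 75 spans timesteps 148500 to 178500
print(window_bounds(75, Dn=500, n_dk=2000, n_T=30000))
\end{verbatim}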
As seen in Fig. \ref{fig:ball_sld_alpha_var}, $\alpha(T)$ decreases initially with increasing $k$ and eventually converges, the knee/elbow region marking the transition from transient to steady state. The algorithm detects the steady state at $k=75$ ($n_{st}(75)=148500$). The sensitivity of $\alpha(T)$ towards variation in $T$ and $\Delta t$ is shown in Figs.~\ref{fig:ball_sld_alpha_comp1} and \ref{fig:ball_sld_alpha_comp2} respectively. For better comparison, we set $\Delta_k=200$ ns for all three cases in Fig.~\ref{fig:ball_sld_alpha_comp2}. Using the non-negative slope criterion, the algorithm stops at $k=96$, $75$, and $144$ for $T=2.4~\mu$s, $3~\mu$s, and $3.6~\mu$s, respectively. As explained in \ref{lorenz_96}, the non-negative slope is employed to indicate a ``knee'', which can potentially delay the detection of equilibrium. For $T=3.6~\mu$s in Fig. \ref{fig:ball_sld_alpha_comp1}, $\alpha(T)$ encounters a non-negative slope at a much later time compared to the other two cases, even though the actual knee region appears earlier. For $\Delta t=50$ ns, $100$ ns, and $200$ ns, the algorithm detects equilibrium at $k=75$, $75$, and $105$, respectively.
\begin{figure}[hbt!]
\centering
\includegraphics[width=1\linewidth]{jcp_pic33_ball_E_sld_mode1_conv_corr_coeff_r13_crop_new.jpg}
\caption{\small{Correlation coefficient ($\rho$) of $\Phi_1^{(75)}$ with its predecessors (black dotted curve). $\rho$ between $\Phi_1^{(ss)}$ and predecessors of $\Phi_1^{(75)}$ (red curve). Inset: $\Phi_1^{(75)}$ and its predecessor $\Phi_1^{(1)}$ at $k=1$.}}
\label{fig:ball_mode1_conv}
\end{figure}
\begin{figure}[hbt!]
\centering
\subfloat[\label{fig:ball_eig_move_mode1} ]{%
\includegraphics[width=0.48\linewidth]{jcp_pic32_ball_E_sld_mode1_eigen_movement_r13_crop.jpg}}
\subfloat[\label{fig:ball_sld_E_err} ]{%
\includegraphics[width=0.52\linewidth]{jcp_pic40_sld_E_err_win75_wid60_r13_crop.jpg}}
\hfill
\caption{\small{ (a) Movement of predecessors of the eigenvalue corresponding to $(\lambda_1^{(75)},\Phi_1^{(75)})$. (b) 2-norm relative error in self electric field reconstruction. The green shaded area denotes the DMD window corresponding to $k=75$.} }\label{fig:ball_eig_move_err}
\end{figure}
\paragraph{Convergence in DMD Mode Shape}
Algorithm 1 helps track the evolution of DMD mode shapes through the parameter $\rho$. We correlate the steady-state mode $\Phi_1^{(ss)}$ (from Sec. \ref{ball_in_eq}) and the dominant DMD mode in the last window, $\Phi_1^{(75)}$, with its predecessors $\Phi_{(1)}^{(k)}$ ($k=1,2, \ldots, 75$, where $\Phi_{(1)}^{(75)}=\Phi_{1}^{(75)}$). As can be seen in Fig. \ref{fig:ball_mode1_conv}, the high value of $\rho$ indicates that the dominant mode shape remains almost time invariant. The close proximity of the red and black curves (Fig. \ref{fig:ball_mode1_conv}) further confirms that equilibrium is attained at $k=75$, as $\Phi_1^{(ss)}$ and $\Phi_1^{(75)}$ are almost identical. Fig. \ref{fig:ball_eig_move_mode1} shows the convergent movement of the dominant eigenvalue towards the unit circle.
\paragraph{Prediction of Self-Fields and Particle Dynamics}
The high-fidelity simulation is stopped after detecting the equilibrium state. The final data harvesting window ($k=75$) is then used for extrapolation (Fig.~\ref{fig:ball_sld_E_err}). Again, as mentioned earlier, extrapolating the self-fields in this particular example is trivial because of the non-oscillatory steady-state nature of the solution. \par
As we are interested in how this predicted self electric field affects the (predicted) particle dynamics, we substitute it in place of the self electric field generated by the original EMPIC algorithm. However, we can entirely bypass the field solver (update) stage of the EMPIC algorithm, as illustrated in Fig. \ref{fig:ball_empic_DMD}, by performing DMD on the self magnetic flux $\mathbf{b}(t)$ as well. In this work we identify the equilibrium by performing sliding-window DMD on the electric field dataset and extrapolate both the self electric and magnetic field from the last DMD window. However, the DMD extrapolated self-fields do not ensure energy conservation in the extrapolated region. To the extent that the extrapolated fields remain close to the original solution, the energy is approximately conserved in the extrapolation region, given that the high-fidelity algorithm itself is energy-conserving. \par
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{jcp_pic_0002_DMD_cycle_flat_new_crop.jpg}
\caption{\small{Schematic representation of the EMPIC algorithm with DMD predicted self-fields. Prior to detection of the equilibrium state, the EMPIC algorithm consists of the usual four stages. After the equilibrium is detected, we perform DMD to extrapolate self-field values and utilize those values at future times, bypassing the field-update stage. To observe the effect of the predicted self-fields on particle behavior, we also perform the gather stage and particle pusher stage. }}
\label{fig:ball_empic_DMD}
\end{figure}
\begin{figure}[hbt!]
\centering
\includegraphics[width=1\linewidth]{jcp_pic37_ball_E_sld_abs_phase_sp_win75_wid60_f2_r13_n450_crop_new.jpg}
\caption{\small{Phase-space plot comparison between finite-element full-order EMPIC simulation (blue) and reduced-order DMD (red) in the extrapolation region ($n=225000$). Phase-space plot for absolute velocity and radial distance $(R)$ from the center of the mesh at $(5,5)$. Inset: Phase-space plot corresponding to radial velocity and radial distance.}}
\label{fig:ball_phase_space_450}
\end{figure}
\begin{figure} [hbt!]
\centering
\subfloat[\label{fig:ball_av_rad_vel} ]{%
\includegraphics[width=0.49\linewidth]{jcp_pic38_1_ball_E_sld_part_dens_win75_wid60_f2_r13_n450_new_error_crop.jpg}}
\subfloat[\label{fig:ball_av_part_den} ]{%
\includegraphics[width=0.51\linewidth]{jcp_pic38_0_ball_E_sld_part_dens_win75_wid60_f2_r13_n450_new_error_crop.jpg}}
\hfill
\caption{\small{ Particle dynamics comparison at $n=225000$. (a) Radial variation of average radial velocity of particles. (b) Radial variation of particle density. For both cases, relative error is defined as $\delta=|\hat{\mathcal{X}}(R)-\mathcal{X}(R)|/\max|\mathcal{X}(R)|$, where $\mathcal{X}$ represents either $v_R^{(n+1/2)}$ or $N_p$ and ``hat'' denotes DMD approximation.} }\label{fig:ball_av_vel_part_den}
\end{figure}
We next compare the particle dynamics generated from the full-order and reduced-order DMD models in the extrapolation region at $n=225000$, beyond the final snapshot ($n=178500$) of the last window. Fig.~\ref{fig:ball_phase_space_450} shows a good match between the phase space plots of the full-order and reduced-order models in the radial direction ($R$). Fig.~\ref{fig:ball_av_vel_part_den} compares the average radial velocity and particle density as functions of radial distance from the center of the plasma ball. For calculating the average radial particle velocity and particle density at $R$, we consider a thin annular region with outer radius $R+L/40$ and inner radius $R-L/40$ and perform the averaging over all the particles present inside that annular region. It is clear that the predicted fields produce a good prediction of the particle dynamics and thus have the potential to speed up EMPIC simulations for long term predictions.
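The annular averaging just described can be sketched as follows; the particle positions and velocities are assumed to be stored row-wise, and the helper name is illustrative rather than part of the EMPIC code.
\begin{verbatim}
import numpy as np

def radial_profiles(pos, vel, R_bins, center, half_width):
    """Average radial velocity and particle count in annuli of
    half-width L/40 centered at each radius in R_bins."""
    rel = pos - center                          # (Np, 2) offsets
    R = np.linalg.norm(rel, axis=1)
    v_rad = np.sum(vel * rel, axis=1) / np.where(R > 0, R, 1.0)
    v_avg, counts = [], []
    for Rb in R_bins:
        sel = np.abs(R - Rb) <= half_width      # thin annular shell
        counts.append(sel.sum())
        v_avg.append(v_rad[sel].mean() if sel.any() else 0.0)
    return np.array(v_avg), np.array(counts)
\end{verbatim}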
\subsection{Oscillating Electron Beam}\label{wavy_beam}
Consider the case of a 2-D electron beam propagating along the positive $y$ direction in the $xy$ plane, under the influence of an external oscillating transverse magnetic flux (Fig. \ref{fig:wavy_beam_ss_snap}). The solution domain is a square cavity of size $1~\text{m}\times 1~\text{m}$ that is discretized via an irregular triangular mesh with $N_0=1647$ nodes, $N_1=4788$ edges and $N_2=3142$ triangles. Superparticles are injected randomly with uniform distribution at the bottom of the cavity in the region [$0.5-b_h,0.5+b_h$]. Here, $b_h=0.1$ m is the half-beam width. All four sides of the cavity are assumed to be perfect electric conductors (PEC). Superparticles are injected with initial velocity $v_0=5\times 10^6$ m/s along the positive $y$ direction at a rate of $10$ superparticles ($1~ \text{superparticle}\equiv 2\times 10^5$ electrons) per timestep ($=0.01$ ns). The external voltage bias is set to $V_b=2\times 10^3$ V and the external magnetic flux to $B_{ext}=B_0~\sin{(2\pi t/T_b)}~\mathbf{\hat{z}}$, where $B_0= 10^{-3}$ T and $T_b=20$ ns. Superparticles are absorbed as they hit the upper boundary. Time series data of the degrees of freedom (DoF) of the self-fields are stored at every $80^{th}$ timestep $(\Delta_n=80)$. The data set spans $n= 80$ to $n=80000$ (1000 datapoints).
\subsubsection{Self Electric Field Reconstruction}\label{result_stbeam_1}
\begin{figure} [hbt!]
\centering
\subfloat[ \label{fig:wavy_beam_ss_snap} ]{%
\includegraphics[width=0.475\linewidth]{jcp_pic00_wavy_ss_snap_t800_crop.jpg}}
\subfloat[ \label{fig:wavy_beam_ss_sing} ]{%
\includegraphics[width=0.51\linewidth]{jcp_pic19_wavy_ss_E_sing_vals_501to620_r17_m9_crop.jpg}}
\caption{\small{ (a) Snapshot of an oscillating 2-D electron beam at $n=64000$ in a square cavity, propagating along the positive $y$ direction. The cyan arrows show the self electric field lines. (b) Normalized singular values from SVD of the snapshot matrix in the equilibrium state.} \label{fig:wavy_beam_snap_sing}}
\end{figure}
Transience ends shortly after the beam reaches the upper boundary of the domain. The DMD window in equilibrium spans from $n=40080$ to $n=49600$, with consecutive samples $\Delta t=1.6$ ns apart. As seen in Fig.~\ref{fig:wavy_beam_ss_sing}, energy is primarily concentrated in the first few ($\sim 10$) modes, revealing the existence of underlying low-dimensional coherent features. We truncate the SVD matrices at $r=17$, generating $9$ DMD modes, resulting in a reduced-order model with only $9$ degrees of freedom compared to $4788$ in the full-order finite element model. Fig.~\ref{fig:wavy_ss_eigs}, with the dominant eigenvalues marked by green circles, indicates that DMD is able to successfully extract the stationary component $\Phi_1^{(eq)}$ and the oscillating component $\Phi_2^{(eq)}$ from the equilibrium state, with the oscillation frequency matching the frequency of oscillation of the external magnetic flux. In equilibrium, these two modes contain more than $99\%$ of the energy.
As for the plasma ball example, we perform DMD during transience as well. This DMD window spans from $n=80$ to $n=9600$ with $\Delta t=1.6$ ns, $r=25$ and $13$ DMD modes. Fig. \ref{fig:wavy_self_corr} reveals a clear distinction in the nature of the correlation among equilibrium modes versus that among transient modes: the former have greater separation while the latter have more overlap with each other. This phenomenon is similar to the plasma ball case.
\begin{figure} [H]
\centering
\subfloat[ \label{fig:wavy_ss_eigs} ]{%
\includegraphics[width=0.31\linewidth]{jcp_pic20_wavy_ss_E_eig_vals_501to620_r17_m9_crop.jpg}}
\subfloat[$\Phi_1^{(eq)}$ \label{fig:wavy_ss_mode1} ]{%
\includegraphics[width=0.34\linewidth]{jcp_pic22_wavy_ss_E_mode1_dom_501to620_r17_m9_crop.jpg}}
\subfloat[$\Phi_2^{(eq)}$ \label{fig:wavy_ss_mode2} ]{%
\includegraphics[width=0.34\linewidth]{jcp_pic23_wavy_ss_E_mode2_dom_501to620_r17_m9_crop.jpg}}
\caption{\small{ (a) DMD eigenvalues in the complex plane when DMD is performed on data from the equilibrium region. (b) First dominant mode. (c) Second dominant mode.}
\label{fig:wavy_eigs}}
\end{figure}
Similar to the plasma ball case, the self-field reconstruction error stays within reasonable limits inside the interpolation region, but rapidly increases in the extrapolation region for transient DMD (Fig. \ref{fig:wavy_trans_err}). However, for DMD in the equilibrium region (Fig. \ref{fig:wavy_ss_err}), the extrapolation error remains within acceptable bounds.
\begin{figure} [H]
\centering
\subfloat[$\Phi_3^{(eq)}$ \label{fig:beam_trans_mode3} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic24_wavy_ss_E_mode3_rec_501to620_r17_m9_crop.jpg}}
\subfloat[$\Phi_4^{(eq)}$ \label{fig:beam_trans_mode4} ]{%
\includegraphics[width=0.335\linewidth]{jcp_pic25_wavy_ss_E_mode4_rec_501to620_r17_m9_crop.jpg}}
\subfloat[$\Phi_5^{(eq)}$ \label{fig:beam_trans_mode5} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic26_wavy_ss_E_mode5_rec_501to620_r17_m9_crop.jpg}}\\
\subfloat[$\Phi_6^{(eq)}$ \label{fig:beam_trans_mode6} ]{%
\includegraphics[width=0.335\linewidth]{jcp_pic27_wavy_ss_E_mode6_rec_501to620_r17_m9_crop.jpg}}
\subfloat[$\Phi_{7}^{(eq)}$ \label{fig:beam_trans_mode7} ]{%
\includegraphics[width=0.335\linewidth]{jcp_pic28_wavy_ss_E_mode7_rec_501to620_r17_m9_crop.jpg}}
\subfloat[$\Phi_{8}^{(eq)}$ \label{fig:beam_trans_mode8} ]{%
\includegraphics[width=0.33\linewidth]{jcp_pic29_wavy_ss_E_mode8_rec_501to620_r17_m9_crop.jpg}}
\caption{\small{ First six recessive DMD modes extracted from the equilibrium region of the oscillating electron beam.}
\label{fig:wavy_ss_rec_modes}}
\end{figure}
\begin{figure} [H]
\centering
\subfloat[\label{fig:wavy_trans_self_corr} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic3_wavy_trans_self_corr_1to120_r25_m13_crop.jpg}}
\subfloat[\label{fig:wavy_ss_self_corr} ]{%
\includegraphics[width=0.49\linewidth]{jcp_pic21_wavy_ss_E_self_corr_501to620_r17_m9_crop.jpg}}
\caption{\small{ (a) Coefficient $\rho$ between DMD modes from the transient region. (b) Coefficient $\rho$ between DMD modes from the equilibrium region.}
\label{fig:wavy_self_corr}}
\end{figure}
\begin{figure} [H]
\centering
\subfloat[\label{fig:wavy_trans_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic40_wavy_trans_E_err_comp_1to120_f1,f2,f4_crop.jpg}}
\subfloat[\label{fig:wavy_ss_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic40_wavy_ss_E_err_comp_501to620_f1,f2,f4_crop.jpg}}
\caption{\small{ Relative 2-norm error for reconstruction of the self electric field, with the green shaded area denoting the DMD window. (a) DMD window in the transient region. (b) DMD window in the equilibrium region.}
\label{fig:wavy_err}}
\end{figure}
\subsubsection{Sliding-Window DMD}\label{result_stbeam_2}
We set $\beta_{thr}=0.01$ and $\Delta_k=3.2$ ns. Using prior knowledge about the oscillation period of the external magnetic flux ($T_b=20$ ns), we choose $T=56$ ns so that it covers multiple cycles of the forced oscillation. The resulting interval between successive snapshots is $\Delta t=1.6$ ns.
\paragraph{Equilibrium Detection}
The algorithm detects the steady state at $k=136$ ($n_{st}(136)=43280$). The sensitivity of $\alpha(T)$ towards variation in $T$ and $\Delta t$ is shown in Fig.~\ref{fig:wavy_alpha_comp}.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\linewidth]{jcp_pic33_wavy_ss_E_alpha_win200_wid70_f2_r15_W17_crop.jpg}
\caption{\small{Variation in $\alpha(T)$ for $\Delta t=1.6$ ns, as the DMD window slides towards equilibrium for the oscillating electron beam. The red curve shows $\alpha$ averaged over 17 windows.}}
\label{fig:wavy_alpha}
\end{figure}
\begin{figure} [hbt!]
\centering
\subfloat[\label{fig:wavy_alpha_comp1} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic35_wavy_ss_E_alpha_var_wid_win200_f2_av_crop.jpg}}
\subfloat[\label{fig:wavy_alpha_comp2} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic37_wavy_ss_E_alpha_var_f_win200_wid70_av_crop.jpg}}
\caption{\small{ (a) Sensitivity of Algorithm 2 towards window width $T~(\pm20\%)$, keeping fixed $\Delta t=1.6$ ns. (b) Sensitivity of Algorithm 2 towards sampling interval, keeping fixed $T=56$ ns.}
\label{fig:wavy_alpha_comp}}
\end{figure}
\paragraph{Convergence of DMD Modes} There are two dominant DMD modes that describe the equilibrium dynamics: the stationary mode and an oscillating mode corresponding to the external magnetic flux oscillation frequency. It is of interest to track their
evolution to their final spatial configuration ($\Phi_1^{(136)}$ and $\Phi_2^{(136)}$) in equilibrium. The tracking algorithm reveals that the dominant stationary mode $\Phi_1^{(1)}$ in the transient state ($k=1$) eventually evolves into the dominant stationary mode $\Phi_1^{(136)}$ in equilibrium ($k=136$). The inset in Fig. \ref{fig:wavy_mode1_conv} reveals that the mode shape $\Phi_1^{(1)}$ at $k=1$ is nothing but the self-field configuration of the straight beam (stationary component) emanating from the lower boundary of the mesh, whereas that of $\Phi_1^{(136)}$ corresponds to a full-fledged straight electron beam. Fig. \ref{fig:wavy_eig_move_mode1} shows the convergent migration of the DMD eigenvalue towards the unit circle.
Similar behavior is observed for the eigenvalue corresponding to the oscillating mode $\Phi_2^{(136)}$ (Fig. \ref{fig:wavy_eig_move_mode2}), which is traced back to $\Phi_4^{(1)}$ in the first window. Interestingly, the fourth most energetic mode at $k=1$ evolves to become the second most energetic mode at $k=136$. Note that during transience, it is harder to separate modes in terms of energy due to the complex dynamics and rapidly time varying amplitudes. Tracking the evolution of the oscillating mode underscores how a relatively hidden feature in transience can become prominent in equilibrium. This gradual evolution in mode shape is captured by the continuous variation of the parameter $\rho$, as seen in Figs. \ref{fig:wavy_mode1_conv} and \ref{fig:wavy_mode2_conv}.
\begin{figure} [H]
\centering
\subfloat[\label{fig:wavy_eig_move_mode1} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic47_wavy_mode1_eig_move_win136_wid70_f2_crop.jpg}}
\subfloat[\label{fig:wavy_eig_move_mode2} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic48_wavy_mode2_eig_move_win136_wid70_f2_crop.jpg}}
\caption{\small{ (a) DMD eigenvalue movement corresponding to ($\lambda_1^{(136)},\Phi_1^{(136)}$). (b) DMD eigenvalue movement corresponding to ($\lambda_2^{(136)},\Phi_2^{(136)}$).}
\label{fig:wavy_eig_move}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{jcp_pic49_wavy_mode1_eig_corr_conv_win136_wid70_f2_crop.jpg}
\caption{\small{Coefficient $\rho$ of $\Phi_1^{(136)}$ with its predecessors (black dotted curve). $\rho$ between $\Phi_1^{(eq)}$ and predecessors of $\Phi_1^{(136)}$ (red curve). Inset: $\Phi_1^{(136)}$ and its predecessor at $k=1$.}}
\label{fig:wavy_mode1_conv}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{jcp_pic52_wavy_mode2_eig_corr_conv_win136_wid70_f2_crop.jpg}
\caption{\small{Coefficient $\rho$ of $\Phi_2^{(136)}$ with its predecessors (black dotted curve). $\rho$ between $\Phi_2^{(eq)}$ and predecessors of $\Phi_2^{(136)}$ (red curve). Inset: $\Phi_2^{(136)}$ and its predecessor at $k=1$.}}
\label{fig:wavy_mode2_conv}
\end{figure}
\paragraph{Predicted Field and Particle Dynamics}
Recall that a key motivation for using a ROM such as DMD is to expedite the EMPIC simulation by predicting future self-fields and particle dynamics. The 2-norm relative error in the predicted fields is close to $1\%$ after extrapolation from the window at $k = 136$. We compare the $x$- and $y$-directional phase-space plots (Figs.~\ref{fig:wavy_ph_sp_y}-\ref{fig:wavy_ph_sp_x}) and the $x$- and $y$-directional average velocity and particle density (Figs.~\ref{fig:wavy_Np_vel_y_950}-\ref{fig:wavy_Np_vel_x_950}) at $n = 76000$, which extends well into the extrapolation region.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{jcp_pic41_wavy_ph_space_Y_at950_crop.jpg}
\caption{\small{The $y$-directional phase-space plot comparison between finite-element full-order EMPIC simulation (blue) and DMD (red) in extrapolation region ($n=76000$).}}
\label{fig:wavy_ph_sp_y}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{jcp_pic44_wavy_ph_space_X_at950_crop.jpg}
\caption{\small{The $x$-directional phase-space plot comparison between finite-element full-order EMPIC simulation (blue) and DMD (red) in extrapolation region ($n=76000$).}}
\label{fig:wavy_ph_sp_x}
\end{figure}
\begin{figure} [H]
\centering
\subfloat[ \label{fig:wavy_vel_y_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic42_00_wavy_av_vel_Y_at950_error_crop.jpg}}
\subfloat[ \label{fig:wavy_Np_y_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic42_01_wavy_part_den_Y_at950_error_crop.jpg}}
\caption{\small{ Comparison between full-order and DMD predicted average velocity and particle density at $n=76000$ in the $y$-direction. Relative error for $\mathcal{X}(y)$ is defined as $\delta=|\hat{\mathcal{X}}(y)-\mathcal{X}(y)|/\max|\mathcal{X}(y)|$, where ``hat'' denotes the DMD approximation. (a) $y$-directional average velocity (left axis) and relative error (right axis) plot. (b) Particle density variation along the $y$-direction (left axis) and relative error plot (right axis). A few missing points in the error graph correspond to points where the error is below the log scale range shown.}}
\label{fig:wavy_Np_vel_y_950}
\end{figure}
\begin{figure} [H]
\centering
\subfloat[ \label{fig:wavy_Np_vel_x} ]{%
\includegraphics[width=0.49\linewidth]{jcp_pic43_00_wavy_av_vel_X_at950_error_crop.jpg}}
\subfloat[ \label{fig:wavy_Np_vel_x_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic43_01_wavy_part_den_X_at950_error_crop.jpg}}
\caption{\small{ Comparison between full-order and DMD predicted average velocity and particle density at $n=76000$ along the $x$-direction. Relative error is defined as in Fig. \ref{fig:wavy_Np_vel_y_950}. (a) $x$-directional average velocity (left axis) and relative error (right axis) plot. (b) Particle density variation along the $x$-direction (left axis) and relative error plot (right axis). The missing points in the error graph are below the log scale range shown.}}
\label{fig:wavy_Np_vel_x_950}
\end{figure}
\subsection{Electron Beam with Virtual Cathode Formation}\label{result_virt}
A relatively complex example of interest is the reduced-order modeling of virtual cathode oscillations.
The setup of \ref{wavy_beam} is adopted with two major differences: (i.) the amount of injected current is increased $15$ times, and (ii.) a $y$-directional non-oscillating confining magnetic flux is employed instead of a transverse oscillating magnetic flux. The superparticle ratio is increased to $3\times 10^6$, while holding the same injection rate. The external voltage bias is turned off and a strong magnetic flux, $B = B_y\hat{y}$, is applied in the $y$ direction, with $B_y=100$ A/m. The increased current injection initiates virtual cathode formation, eventually leading to small oscillations near the root of the beam in the equilibrium state (Fig. \ref{fig:virt_snap}). The data set spans from timestep $n=80$ to $n=160000$, containing a total of $2000$ data points $(\Delta_n = 80)$, with $\Delta_t=0.02$ ns. Unlike the previous examples, we only discuss the key takeaways from the DMD analysis for this problem.
\begin{figure} [hbt!]
\centering
\subfloat[ \label{fig:virt_snap} ]{%
\includegraphics[width=0.48\linewidth]{jcp_pic0_virt_cath_E_snap_n1000_crop.jpg}}
\subfloat[ \label{fig:virt_ss_sing} ]{%
\includegraphics[width=0.52\linewidth]{jcp_pic10_virt_cath_E_ss_sing_vals_1100to1500_r71_m37_crop.jpg}}
\caption{\small{ (a) Snapshot of virtual cathode formation for a 2-D electron beam at $n=80000$. The cyan arrows show the self electric field lines. (b) Normalized singular values for DMD in the equilibrium region. } \label{fig:virt_snap_sing}}
\end{figure}
\subsubsection{DMD in Equilibrium State} \label{result_virt_eq}
\begin{figure} [hbt!]
\centering
\subfloat[$\Phi_1^{(eq)}$ \label{fig:virt_ss_mode1} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic13_virt_cath_E_ss_dom_mode1_1100to1500_r71_m37_crop.jpg}}
\subfloat[$\Phi_2^{(eq)}$ \label{fig:virt_ss_mode2} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic14_virt_cath_E_ss_dom_mode2_1100to1500_r71_m37_crop.jpg}}
\caption{\small{ Dominant modes extracted from the equilibrium region of the virtual cathode formation. } \label{fig:virt_ss_modes}}
\end{figure}
The harvesting window spans from $n=88000$ to $n=120000$ with $r=71$ and $37$ DMD modes, although only two dominant modes capture more than $99\%$ of the total energy in equilibrium. The exponential decay in singular values (Fig. \ref{fig:virt_ss_sing}) reveals the underlying low-dimensional structure in the equilibrium dynamics. The stationary structure of the virtual cathode is represented by the mode $\Phi_1^{(eq)}$ and the small oscillations at the location of virtual cathode formation are captured by $\Phi_2^{(eq)}$. The relative 2-norm error remains close to $1\%$ inside the harvesting window and oscillates around the $5\%$ margin in the extrapolation region.
\subsubsection{Predicted Particle Dynamics}
We apply the sliding-window DMD method to the self electric field data from the virtual cathode, with $\beta_{thr}=0.01$, $\Delta_k = 6.4$ ns, $T=160$ ns and $\Delta t=3.2$ ns. Equilibrium is detected at $k=180$ $(n_{en}(180)=65360)$, at which point the field-update is replaced with extrapolated self-field values from DMD. The predicted particle dynamics at $n=128000$ are shown in Figs. \ref{fig:virt_ph_sp_y}-\ref{fig:virt_Np_vel_x_1600}.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{jcp_pic26_virt_cath_E_sld_phase_sp_Y_win180_wid100_f2_r21_crop.jpg}
\caption{\small{The $y$-directional phase-space plot comparison between finite-element full-order EMPIC simulation (blue) and DMD (red) in extrapolation region ($n=128000$).}}
\label{fig:virt_ph_sp_y}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{jcp_pic29_virt_cath_E_sld_phase_sp_X_win180_wid100_f2_r21_crop.jpg}
\caption{\small{The $x$-directional phase-space plot comparison between finite-element full-order EMPIC simulation (blue) and DMD (red) in extrapolation region ($n=128000$).}}
\label{fig:virt_ph_sp_x}
\end{figure}
\begin{figure} [H]
\centering
\subfloat[ \label{fig:virt_vel_y_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic28_00_virt_cath_E_sld_average_Vy_180_wid100_f2_r21_error_crop.jpg}}
\subfloat[ \label{fig:virt_Np_y_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic28_01_virt_cath_E_sld_part_den_y_180_wid100_f2_r21_error_crop.jpg}}
\caption{\small{ Comparison between full-order and DMD predicted average velocity and particle density at $n=128000$ along the $y$-direction. Relative errors in $v_y^{(n+1/2)}$ and $N_p$ are defined as in Fig. \ref{fig:wavy_Np_vel_y_950}. (a) $y$-directional average velocity plot and relative error. (b) Particle density variation along the $y$-direction and relative error.}}
\label{fig:virt_Np_vel_y_1600}
\end{figure}
\begin{figure} [H]
\centering
\subfloat[ \label{fig:virt_vel_x_err} ]{%
\includegraphics[width=0.5\linewidth]{jcp_pic31_00_virt_cath_E_sld_average_Vx_win180_wid100_f2_r21_error_crop.jpg}}
\subfloat[ \label{fig:virt_Np_x_err} ]{%
\includegraphics[width=0.49\linewidth]{jcp_pic31_01_virt_cath_E_sld_part_den_x_win180_wid100_f2_r21_error_crop.jpg}}
\caption{\small{ Comparison between full-order and DMD predicted average velocity and particle density at $n=128000$ along the $x$-direction. (a) $x$-directional average velocity plot and relative error. (b) Particle density variation along the $x$-direction and relative error.}}
\label{fig:virt_Np_vel_x_1600}
\end{figure}
\section{Computational Complexity}\label{empic_speedup}
The timestep complexity (runtime computational complexity to evolve through one timestep) in our {\it explicit} particle-in-cell algorithm
is $\mathcal{O}(N_p + N)$ \citep{WOLF2016342}, where $N_p$ is the number of particles and $N$ represents the aggregate mesh dimension. More typically, for {\it implicit} field solvers the timestep complexity\footnote{The timestep complexity of our solver is reduced by employing a sparse approximate inverse of the finite-element mass matrix in the time stepping procedure~\cite{kim2011parallel,na2016local}. This strategy basically trades the reduction in timestep complexity for the one-time cost (incurred prior to time stepping) of computing the sparse approximate inverse.} is $\mathcal{O}(N_p+N^s)$, with $s\geq 1.5$.\par
Usually, $N_p\gg N$ and therefore the field gather, particle push, and current scatter stages represent the main bottleneck, especially on serial computers. On the other hand, on parallel computers, one can exploit the fact that the particle steps are embarrassingly parallelizable. Nevertheless, in large problems with millions of grid nodes and edges, the field update can also consume a significant amount of time. DMD-based reduced-order models for the self-fields can reduce this cost for long term predictions. \par
In addition, EMPIC simulations are often run beyond the equilibrium onset (which is not known a priori). Let this post-equilibrium timestep index be denoted as $n_0$. The runtime of a typical EMPIC simulation up to timestep $n_0$ in a serial computer is then $\mathcal{O}(n_0N_p+n_0N^s)$. The runtime complexity of exact DMD is dominated by the SVD step, given by $\mathcal{O}(lN^2)$, where $l$ is the number of DMD snapshots. For the sliding-window DMD method described in this paper, the equilibrium onset detection has a runtime complexity of $\mathcal{O}(lN^2~n_{eq}/\Delta n)$, assuming the sliding-window DMD terminates at timestep $n_{eq}$ with a typical window shift of one snapshot. Here $\Delta n$ represents the number of timesteps between two consecutive DMD snapshots. The resulting overall computational complexity of the sliding-window DMD is thus $\mathcal{O}(lN^2~n_{eq}/\Delta n+n_{eq}N_p+n_{eq}N^s)$. Consequently, the presented method is advantageous to determine self-fields for $N_p\gg N$ and/or $n_0\gg n_{eq}$.\par
If the particle dynamics at $n_0$ is also sought, then the reduced-order model for the self-fields also provides some advantages given $n_0\gg n_{eq}$, since the field solver is obviated beyond $n_{eq}$. The overall computational complexity becomes $\mathcal{O}(lN^2~n_{eq}/\Delta n+n_0N_p+n_{eq}N^s)$ compared to the original cost of $\mathcal{O}(n_0N_p+n_0N^s)$. If $N_p\gg N$, it turns out that the computational advantage is insignificant. However, if $N_p$ and $N$ are comparable, then the sliding-window DMD model is advantageous for $n_0\gg n_{eq}$. \par
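These operation-count estimates can be compared with a simple cost model. The sketch below encodes the two expressions from the preceding paragraphs verbatim; the numerical inputs are placeholders chosen only to illustrate a regime with $N_p$ comparable to $N$ and $n_0\gg n_{eq}$, not measured values.
\begin{verbatim}
def empic_cost(n0, Np, N, s=1.5):
    """Full EMPIC run up to timestep n0 (serial estimate)."""
    return n0 * Np + n0 * N ** s

def dmd_cost(n0, n_eq, Np, N, l, dn, s=1.5):
    """Sliding-window DMD: detection cost + particle push +
    field solves only up to n_eq, per the text's expressions."""
    return l * N ** 2 * n_eq / dn + n0 * Np + n_eq * N ** s

# Placeholder regime (illustrative values only)
print(empic_cost(n0=5 * 10**6, Np=10**5, N=5000))
print(dmd_cost(n0=5 * 10**6, n_eq=5 * 10**4, Np=10**5,
               N=5000, l=100, dn=500))
\end{verbatim}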
For simplicity, the above estimates assume a serial implementation. As noted, in parallel computers, one can readily exploit the fact that all particle steps (gather, pusher, and scatter) are embarrassingly parallelizable. In that case, the runtime estimates would of course depend on other factors such as the number of available processors.
\section{Concluding Remarks}
This work introduced a DMD approach for the reduced-order modeling of kinetic plasmas. Data is harvested from high-fidelity EMPIC simulations and used to extract key (low-dimensional) features as well as to predict/extrapolate the problem dynamics to later times. Extraction of key features/modes is shown to be instrumental in providing physical insight into the problem and can facilitate the application of model predictive control methods. Accurate prediction of nonlinear limit-cycle behavior can be non-trivial, especially in simulations based on large meshes with many elements. The sliding-window DMD approach correctly identifies the onset of limit-cycle behavior, which enables accurate prediction of the self-field and particle dynamics beyond the equilibrium detection point, and thus has the potential to speed up EMPIC simulations for long-term prediction. These methods were demonstrated on plasma ball and electron beam examples. Future work will involve improving the algorithm for in-line detection of the knee/elbow region in the $\alpha$ variation and implementing model order reduction directly on the particle dynamics.
\section{Acknowledgment}\label{ack}
This work was partially supported by the Defense Threat Reduction Agency under Grant {HDTRA1-18-1-0050}, the Air Force Office of Scientific Research under Grant No. FA9550-20-1-0083 and the Ohio Supercomputer Center under Grant {PAS-0061}.
\clearpage
\section{Introduction}
As we know our universe is undergoing an accelerated expansion that can be
approximated by an exponentially expanding de-Sitter space-time. It is widely
believed, also, that the de-Sitter space-time allows a better description of
the early stage of inflation. Therefore de-Sitter metric is of interest both
for the cosmology of the early and late universe. This importance comes also
from the fact that the de-Sitter space is the unique maximally symmetric
curved space and enjoys the same degrees of symmetry as Minkowski space-time.
This explains the increased interest to study physical quantum effects in such
a space-time. Among these effects we cite the particle-antiparticle pair
creation caused by the expansion of the universe \cite{FT1,FT2,FT3,FT4}. The
phenomenon of particle creation in de-Sitter space has been widely discussed
and analyzed from various point of view
\cite{ds1,ds2,ds3,ds4,ds5,ds6,ds7,ds8,ds9,ds10,ds11,ds12,ds13}. This is
because the fact that the particle creation has many important applications in
contemporary cosmology \cite{C1,C2,C3,C4,C5,C6,C7}. It is known that the
creation of particles induces an effective negative pressure which leads to an
inflationary phase with an exponential expansion. After a sufficient time of
inflation the exponential expansion will be adiabatically switched off giving
rise to a power law expansion. In addition, the creation of cold dark matter
by gravitational fields could explain the actual accelerated expansion without
appealing to the existence of dark energy. The particle creation mechanism
could also make a significant contribution to the transition from an anisotropic
universe to an isotropic one \cite{VG}.
In de-Sitter space, the pair production rate, with a pure gravitational field,
is proportional to $\exp\left( -\frac{2\pi}{H}m\right) $, where $m$ is the
particle mass and $H$ is the Hubble constant. This makes the production of
massive (heavy) particles negligible. The creation of massive particles is
then excluded from playing any role in the evolution of the early universe. The
purpose of this paper is to study the effect of the electric field on the
creation of fermions from vacuum in the de-Sitter space-time. We know that,
at present, there is no electric field in the universe. However, as is mentioned
in \cite{Odintsov}, we may assume that electric fields were present during the
initial stages of the formation of the universe and that they vanished because of the
inverse effect (back-reaction) of the particle creation. This idea resembles the assumption
of the anisotropy of the early universe although the present universe is
isotropic. We note that the influence of the electric field on the particle
creation in an expanding universe has been studied in several models describing
different stages of the cosmic evolution
\cite{ds7,ds11,VG,Odintsov,Villalba1,haouat2,haouat3,PC1,PC2}.
The Schwinger effect in the (1+1) dimensional de-Sitter space-time has been
extensively studied during the last few years \cite{2a,2b,2c,2d,2e,2f,2g}. The
geometric origin of the Schwinger effect in de-Sitter space is studied in
\cite{2c}. In \cite{2d}, the authors studied the Schwinger mechanism driven by a
uniform electric field in $dS_{2}$ and $AdS_{2}$. They expressed the one-loop
effective action in the proper-time integral. Then they computed the imaginary
part of the effective action and the pair production rate. The authors of
\cite{2e} considered the problem of particle production by a constant
electric field in (1+1) dimensional de Sitter space as a model for describing
false vacuum decay beyond the semiclassical approximation. They found that the
adiabatic \textquotedblleft in\textquotedblright\ vacuum associated with the
flat chart develops a space-like expectation value for the current, which
manifestly breaks the de Sitter invariance of the background fields. However
the case studied in particular detail is that of scalar particles. Less
studied is the case of fermions, where the spin of the fermion could play an
important role in the behavior of created particles. An interesting attempt to
study the creation of fermions by an electric field in de-Sitter space was
communicated several years ago in \cite{2f} (see also \cite{2g}).
In order to study the effect of particle-antiparticle pair creation from
vacuum by gravitational and electric fields we have at our disposal several
methods such as the adiabatic method \cite{Parker1,Parker2}, the Hamiltonian
diagonalization technique \cite{Grib1,Grib2}, the Feynman path integral
derivation \cite{Chitre,Duru}, the Green function approach
\cite{Bukhbinder,Gavrilov}, the semiclassical WKB approximation
\cite{Biswas1,Biswas2,Biswas3} as well as the method based on Bogoliubov
transformation \cite{Villalba2,haouat1} that we shall use in this paper. The
latter method is more convenient for the spin $\frac{1}{2}$ case.
The paper is organized as follows: At the beginning we consider a spin
$\frac{1}{2}$\ fermion subjected to an electric field in the (1 + 1)
dimensional de-Sitter space-time. Then we solve the corresponding Dirac
equation by introducing a unitary transformation. To investigate the process
of particle creation we apply the canonical method based on the Bogoliubov
transformation connecting the "\textit{in}" with the "\textit{out}" states.
This method permits us to determine the probability to create a pair of
particles in a given state, the density number of created particles and the
vacuum persistence. To clarify the effect of the electric field on the
creation of particles we compute the number of created particles per unit of
time per unit of length by summing over all states and we show how the
electric field influences the particle creation. We complete the analysis
by deriving the imaginary part of the Schwinger effective Lagrangian starting
from the vacuum to vacuum transition amplitude. These results allow us also
to discuss the effect of the actual expansion of the universe on the Schwinger
effect. If the universe expansion leads to some enhancement of the Schwinger
effect, this would be of great interest to experimental physics. We shall
discuss this issue.
\section{Dirac equation in an expanding universe}
Let us consider a spin $\frac{1}{2}$\ fermion of mass $m$ and charge $\left(
-e\right) $ moving on the background geometry of a (1 + 1) dimensional
expanding universe described by the line element
\begin{equation}
ds^{2}=a^{2}\left( \eta\right) \left( d\eta^{2}-dx^{2}\right) , \label{2}
\end{equation}
where $a\left( \eta\right) =a_{0}f\left( \eta\right) $, in the presence of
an electric field described by the vector potential $A_{\mu}=(0,A_{1}\left(
\eta\right) )$, with
\begin{equation}
A_{1}\left( \eta\right) =E_{0}f\left( \eta\right) . \label{3}
\end{equation}
Here $f\left( \eta\right) $ is an arbitrary function. The covariant Dirac
equation with the metric (\ref{2}) takes the form
\begin{equation}
\left[ i\tilde{\gamma}^{\mu}\left( \eta\right) (\partial_{\mu}-ieA_{\mu}(\eta)-\Gamma_{\mu}(\eta))-m\right] \psi=0 \label{4}
\end{equation}
where the curvature-dependent Dirac matrices $\tilde{\gamma}^{\mu}(\eta)$ are
given in diagonal tetrad gauge by
\begin{align}
\tilde{\gamma}^{0}(\eta) & =\frac{1}{a\left( \eta\right) }\gamma^{0}\label{5}\\
\tilde{\gamma}^{1}(\eta) & =-\frac{1}{a\left( \eta\right) }\gamma^{1}
\label{6}
\end{align}
where $\gamma^{0}$ and $\gamma^{1}$ are the usual Dirac matrices that can be
written, in (1+1) dimensional Minkowski space, in terms of Pauli matrices as
follows
\begin{equation}
\gamma^{0}=\sigma_{z}~,~\gamma^{1}=i\sigma_{y} \label{7}
\end{equation}
and $\Gamma_{\mu}(\eta)$ are the spin connection components
\begin{align}
\Gamma_{0} & =0\nonumber\\
\Gamma_{1} & =\frac{1}{2}\frac{\dot{a}(\eta)}{a\left( \eta\right) }\gamma_{0}\gamma_{1}. \label{8}
\end{align}
By making the substitution $\chi\left( \eta,x\right) =a^{\frac{1}{2}}(\eta)\psi\left( \eta,x\right) ,$ we obtain the simpler equation
\begin{equation}
\left[ \gamma^{\mu}\left( i\partial_{\mu}+eA_{\mu}(\eta)\right) -ma\left(
\eta\right) \right] \chi\left( \eta,x\right) =0. \label{9}
\end{equation}
In order to solve this equation we write, at the beginning, $\chi\left(
\eta,x\right) =\exp\left( ikx\right) \xi\left( \eta\right) $ with
\begin{equation}
\xi\left( \eta\right) =\left(
\begin{array}
[c]{c}
\xi_{1}\left( \eta\right) \\
\xi_{2}\left( \eta\right)
\end{array}
\right) \label{10}
\end{equation}
where the two components $\xi_{1}\left( \eta\right) $ and $\xi_{2}\left(
\eta\right) $ satisfy the two coupled equations
\begin{align}
\left( i\frac{\partial}{\partial\eta}-ma_{0}f\left( \eta\right) \right)
\xi_{1}\left( \eta\right) & =\left[ k-eE_{0}f\left( \eta\right)
\right] \xi_{2}\left( \eta\right) \\
\left( i\frac{\partial}{\partial\eta}+ma_{0}f\left( \eta\right) \right)
\xi_{2}\left( \eta\right) & =\left[ k-eE_{0}f\left( \eta\right)
\right] \xi_{1}\left( \eta\right) .
\end{align}
Here we remark that the coupling coefficient $\left[ k-eE_{0}f\left(
\eta\right) \right] $ depends on the conformal time, and the usual iteration
procedure leads to a complicated second order equation that does not admit
well-known solutions for many models. To simplify the problem let us introduce
the unitary transformation
\begin{equation}
\left(
\begin{array}
[c]{c}
\xi_{1}\left( \eta\right) \\
\xi_{2}\left( \eta\right)
\end{array}
\right) =\frac{1}{\sqrt{1+\tau^{2}}}\left(
\begin{array}
[c]{cc}
1 & \tau\\
-\tau & 1
\end{array}
\right) \left(
\begin{array}
[c]{c}
\varphi_{1}\left( \eta\right) \\
\varphi_{2}\left( \eta\right)
\end{array}
\right) \label{13}
\end{equation}
with
\begin{equation}
\tau=\frac{a_{0}}{eE_{0}}\left( \mathcal{M}-m\right) \label{14}
\end{equation}
and
\begin{equation}
\mathcal{M}=\sqrt{m^{2}+\frac{e^{2}E_{0}^{2}}{a_{0}^{2}}}. \label{15}
\end{equation}
Then the new components $\varphi_{1}\left( \eta\right) $ and $\varphi
_{2}\left( \eta\right) $ satisfy the following system of equations
\begin{align}
\left[ i\frac{\partial}{\partial\eta}-\mathcal{M}a_{0}f\left( \eta\right)
+\frac{eE_{0}}{\mathcal{M}a_{0}}k\right] \varphi_{1}\left( \eta\right) &
=k\frac{m}{\mathcal{M}}\varphi_{2}\left( \eta\right) \label{16a}\\
\left[ i\frac{\partial}{\partial\eta}+\mathcal{M}a_{0}f\left( \eta\right)
-\frac{eE_{0}}{\mathcal{M}a_{0}}k\right] \varphi_{2}\left( \eta\right) &
=k\frac{m}{\mathcal{M}}\varphi_{1}\left( \eta\right) \label{17a}
\end{align}
which leads to the second order equation
\begin{equation}
\left[ \frac{\partial^{2}}{\partial\eta^{2}}+\mathcal{M}^{2}a_{0}^{2}f^{2}\left( \eta\right) \pm i\mathcal{M}a_{0}f^{\prime}\left( \eta\right)
-2keE_{0}f\left( \eta\right) +k^{2}\right] \varphi_{1,2}\left(
\eta\right) =0. \label{se}
\end{equation}
Equation (\ref{se}) admits simple analytic solutions for various models,
such as the de-Sitter space with $f\left( \eta\right) =\frac{-1}{H^{2}\eta}$,
the radiation dominated universe with $f\left( \eta\right) =\eta$ and the
Milne universe with $f\left( \eta\right) =e^{\rho\eta}$, etc.
Let us notice that the gravitational field couples to the mass of the
particle, while the electric field couples to the charge. In the novel system
we can see that an effective field is coupled to the quantity $\mathcal{M}$.
As we will show, for the process of particle creation, this quantity is more
important than the mass of the particle.
Writing the equation (\ref{se}) in the form $\varphi_{s}^{\prime\prime}\left(
\eta\right) +\omega_{s}^{2}(\eta)\varphi_{s}\left( \eta\right) =0$, with
\begin{equation}
\omega_{s}^{2}(\eta)=\mathcal{M}^{2}a_{0}^{2}f^{2}\left( \eta\right) \pm
i\mathcal{M}a_{0}f^{\prime}\left( \eta\right) -2keE_{0}f\left( \eta\right)
+k^{2},
\end{equation}
we find that the particle creation is well-defined only under the adiabatic
condition
\begin{equation}
\lim\limits_{\eta\rightarrow\eta_{0}}\left\vert \frac{\dot{\omega}_{s}(\eta
)}{\omega_{s}^{2}(\eta)}\right\vert <<1. \label{19}
\end{equation}
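As a simple numerical illustration of this condition (an added sketch with purely illustrative parameter values, anticipating the de-Sitter choice $f\left( \eta\right) =-\frac{1}{H^{2}\eta}$ with $a_{0}=H$ used in the next section), the adiabaticity ratio can be evaluated as follows:
\begin{verbatim}
import numpy as np

# Illustrative parameters (natural units), not taken from the paper
m, H, e, E0, k = 1.0, 0.05, 1.0, 0.3, 2.0
M = np.sqrt(m**2 + (e*E0/H)**2)        # effective mass \mathcal{M}, a0 = H

def omega2(eta, sign=+1):
    # omega_s^2(eta) for f(eta) = -1/(H^2 eta)
    f  = -1.0/(H**2*eta)
    fp =  1.0/(H**2*eta**2)            # f'(eta)
    return (M*H*f)**2 + sign*1j*M*H*fp - 2*k*e*E0*f + k**2

eta, d = -10.0, 1e-6                   # conformal time (eta < 0)
w  = np.sqrt(omega2(eta))
wp = (np.sqrt(omega2(eta + d)) - np.sqrt(omega2(eta - d)))/(2*d)
print(abs(wp/w**2))                    # << 1 signals the adiabatic regime
\end{verbatim}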
In addition, at the limit $m\rightarrow0$, the mixing term in (\ref{16a}) and
(\ref{17a}) vanishes and the positive and negative energy solutions never
intercept each other. This means that there is no production of massless
particles even if an electric field is present. In such a case we should treat
matter as waves rather than particles.
Furthermore, equation (\ref{se}) can be written in the form
\begin{equation}
\left[ \frac{\partial^{2}}{\partial\eta^{2}}+\left[ \tilde{k}-e\mathcal{E}f\left( \eta\right) \right] ^{2}+\tilde{M}^{2}\pm ie\mathcal{E}f^{\prime}\left( \eta\right) \right] \varphi_{1,2}\left( \eta\right) =0,
\label{efe}
\end{equation}
where
\begin{align}
e\mathcal{E} & =\mathcal{M}a_{0},\\
\tilde{M}^{2} & =k^{2}-\tilde{k}^{2}=k^{2}\frac{m^{2}}{\mathcal{M}^{2}}
\end{align}
and
\begin{equation}
\tilde{k}=k\frac{eE_{0}}{\mathcal{M}a_{0}}.
\end{equation}
Equation (\ref{efe}) is similar to the quadratic Dirac equation for a particle
of mass $\tilde{M}$, charge $\left( -e\right) $ and wave vector $\tilde{k}$
interacting with the gauge field $A_{1}=\mathcal{E}f\left( \eta\right) $ in
Minkowski space. Then the density of particles with mass $m$, charge $\left(
-e\right) $ and wave vector $k$, created by an electric field $A_{1}=E_{0}f\left( \eta\right) $ in an expanding universe with the scale factor
$a_{0}f\left( \eta\right) $ is the same as the density of particles with
mass $\tilde{M}$, charge $\left( -e\right) $ and wave vector $\tilde{k}$
created by the electric field $A_{1}=\mathcal{E}f\left( \eta\right) $ in
Minkowski space.
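The parameter map just described can be summarized in a short sketch (the numerical inputs are illustrative only); the assertion checks the identity $\tilde{M}^{2}+\tilde{k}^{2}=k^{2}$ implied by the definitions above:
\begin{verbatim}
import math
from dataclasses import dataclass

@dataclass
class EquivalentProblem:
    M_tilde: float    # effective Minkowski mass \tilde{M}
    k_tilde: float    # effective wave vector \tilde{k}
    curly_E: float    # effective field amplitude \mathcal{E}

def minkowski_equivalent(m, k, e, E0, a0):
    M = math.sqrt(m**2 + (e*E0/a0)**2)            # \mathcal{M}
    return EquivalentProblem(M_tilde=abs(k)*m/M,
                             k_tilde=k*e*E0/(M*a0),
                             curly_E=M*a0/e)

# Illustrative check of M_tilde^2 + k_tilde^2 = k^2
p = minkowski_equivalent(m=1.0, k=2.0, e=1.0, E0=0.5, a0=0.8)
assert abs(p.M_tilde**2 + p.k_tilde**2 - 2.0**2) < 1e-9
\end{verbatim}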
Technically, the study of particle creation in an expanding universe requires
a definition of a vacuum state for the field theory \cite{Winitzki}. However,
unlike the free Dirac field, it is not obvious how to determine the vacuum
states when the spinor field is subjected to a general gravitational
background. It is widely believed that in arbitrary curved background, there
is no absolute definition of the vacuum state and the concept of particles is
not completely clear. From a physical point of view, it is well known that in
the standard quantum theory a particle cannot be localized to a region smaller
than its de Broglie wavelength. When this wavelength is sufficiently large,
the concept of particle becomes unclear \cite{Anton}. Furthermore, when the
vacuum state is defined in the remote past it is generally unstable so that
it may differ from the vacuum state in the remote future. This gives rise to
spontaneous particle creation. The derivation of the effect requires also
exact solutions to the field equation. If these solutions are classified as
"in" and "out" states then the Bogoliubov transformation between these states
leads to exact expressions for the number density of created particles and the
probability to create a pair of particles in a given state. In the next
section we consider the creation of particle-antiparticle pairs in a de-Sitter
space with a constant electric field.
\section{Pair creation in de-Sitter space}
Having presented the general technique for solving the Dirac equation in a (1+1)
dimensional expanding universe in the presence of an electric field, let us now
consider the important case of the de-Sitter space with a constant electric field.
The line element of the metric describing the $dS_{2}$ space-time can be written
as
\begin{equation}
ds^{2}=dt^{2}-e^{2Ht}dx^{2}=a^{2}\left( \eta\right) \left( d\eta^{2}
-dx^{2}\right) ,
\end{equation}
where $a\left( \eta\right) =\frac{-1}{H\eta}$ and $H$ is the Hubble
constant. A constant electric field in the comoving system of coordinates is
described by the vector potential
\begin{equation}
A_{1}\left( \eta\right) =-\frac{E_{0}}{H^{2}\eta}.
\end{equation}
This is just a particular case of the problem studied in the previous section,
with $a_{0}=H$, $\mathcal{M}=\sqrt{m^{2}+\frac{e^{2}E_{0}^{2}}{H^{2}}}$ and
$f\left( \eta\right) =-\frac{1}{H^{2}\eta}$. In such a case, the equation
(\ref{se}) is similar to the well-known Whittaker equation \cite{Grad}
\begin{equation}
\left[ \frac{d^{2}}{d\rho^{2}}-\frac{1}{4}+\frac{\lambda}{\rho}+\frac{\frac{1}{4}-\mu_{s}^{2}}{\rho^{2}}\right] \tilde{\varphi}_{s}\left(
\rho\right) =0, \label{21}
\end{equation}
with
\begin{equation}
\rho=-2ik\eta\label{20}
\end{equation}
and $\tilde{\varphi}_{s}\left( \rho\right) \equiv\varphi_{s}\left(
\eta\right) $, $s=\overline{1,2}$. The constants $\mu_{s}$ and $\lambda$ are
given by
\begin{align}
\mu_{1} & =\frac{1}{2}-i\frac{\mathcal{M}}{H}=\mu\nonumber\\
\mu_{2} & =\frac{1}{2}+i\frac{\mathcal{M}}{H}=\mu^{\ast}=1-\mu\label{22}
\end{align}
and
\begin{equation}
\lambda=i\frac{eE_{0}}{H^{2}}. \label{23}
\end{equation}
It is known that one can find for the equation (\ref{21}) several sets of
linearly independent solutions which can be written in terms of the Whittaker
functions $M_{\lambda,\mu}\left( \rho\right) $ and $W_{\lambda,\mu}\left(
\rho\right) $, with
\begin{align}
M_{\lambda,\mu}\left( \rho\right) & =\rho^{\mu+\frac{1}{2}}e^{-\frac{\rho}{2}}M\left( \mu-\lambda+\frac{1}{2},2\mu+1;\rho\right) \label{24}\\
W_{\lambda,\mu}\left( \rho\right) & =\rho^{\mu+\frac{1}{2}}e^{-\frac{\rho}{2}}U\left( \mu-\lambda+\frac{1}{2},2\mu+1;\rho\right) , \label{25}
\end{align}
where $M\left( a,b;\rho\right) $ and $U\left( a,b;\rho\right) $ are the
Kummer functions \cite{Abramo}.
To obtain a well-defined vacuum state with a reasonable choice of positive and
negative frequency modes we use the so-called adiabatic method based on the
solutions of the relativistic Hamilton-Jacobi equation. Taking into account
the asymptotic behavior of the $W_{\lambda,\mu}\left( \rho\right) $ function
\begin{equation}
W_{\lambda,\mu}\left( \rho\right) \sim e^{-\frac{\rho}{2}}\left(
-\rho\right) ^{\lambda} \label{26}
\end{equation}
and using the solutions of the Hamilton-Jacobi equation we can find for the
"\textit{in}" states the following expression
\begin{equation}
\xi_{in}^{+}\left( \eta\right) =\mathcal{N}\left(
\begin{array}
[c]{c}
\sqrt{\mathcal{M}-\frac{eE_{0}}{H}}W_{-\lambda,\mu}\left( -\rho\right)
+\tau\sqrt{\mathcal{M}+\frac{eE_{0}}{H}}W_{-\lambda,1-\mu}\left( -\rho\right)
\\
-\tau\sqrt{\mathcal{M}-\frac{eE_{0}}{H}}W_{-\lambda,\mu}\left( -\rho\right)
+\sqrt{\mathcal{M}+\frac{eE_{0}}{H}}W_{-\lambda,1-\mu}\left( -\rho\right)
\end{array}
\right) \label{32}
\end{equation}
and
\begin{equation}
\xi_{in}^{-}\left( \eta\right) =\mathcal{N}^{\ast}\left(
\begin{array}
[c]{c}
\sqrt{\mathcal{M}+\frac{eE_{0}}{H}}W_{\lambda,\mu}\left( \rho\right)
-\tau\sqrt{\mathcal{M}-\frac{eE_{0}}{H}}W_{\lambda,1-\mu}\left( \rho\right)
\\
\tau\sqrt{\mathcal{M}+\frac{eE_{0}}{H}}W_{\lambda,\mu}\left( \rho\right)
+\sqrt{\mathcal{M}-\frac{eE_{0}}{H}}W_{\lambda,1-\mu}\left( \rho\right)
\end{array}
\right) \label{33}
\end{equation}
where $\mathcal{N}$ is a normalization constant that is
unimportant for the mechanism of particle creation.
Let us now define the "\textit{out}" states. These states can be defined by
studying the limit $\eta\rightarrow0$ $(t\rightarrow+\infty)$. However, it
should be noted that, since the de-Sitter space describes the universe in the
inflationary era in a limited period, the study of "out" states by considering
the limit $\eta\rightarrow0$ is physically inappropriate. This point is overlooked
in several derivations of this effect in the literature, where the probability
to create a pair of particles and the number density of created particles were
obtained by studying the limit $\eta\rightarrow0$.
Taking into account that the Kummer function $M\left( a,b;\rho\right) $ is
by definition
\begin{equation}
M\left( a,b;\rho\right) =1+\frac{a}{1}\frac{\rho}{b}+\frac{a\left(
a+1\right) }{1\times2}\frac{\rho^{2}}{b\left( b+1\right) }+\frac{a\left(
a+1\right) \left( a+2\right) }{1\times2\times3}\frac{\rho^{3}}{b\left(
b+1\right) \left( b+2\right) }+...
\end{equation}
we find that, for finite $\rho$, $M\left( a,b;\rho\right) \approx1$ if
$\frac{\left\vert \rho\right\vert }{\left\vert b\right\vert }<<1$. Then,
instead of taking the limit $\eta\rightarrow0$ $(t\rightarrow+\infty)$, we
consider the case when
\begin{equation}
\left\vert \rho\right\vert <<\left\vert 2\mu+1\right\vert .
\end{equation}
In such a case, as at $\rho\rightarrow0$, the functions $M_{\lambda,\mu}\left( \rho\right) $ have the asymptotic behavior
\begin{equation}
M_{\lambda,\mu}\left( \rho\right) \sim e^{-\frac{\rho}{2}}\rho^{\mu+\frac{1}{2}}. \label{36}
\end{equation}
Consequently, we arrive at the following expressions for the two components of
the Dirac spinor
\begin{equation}
\xi_{out}^{+}\left( \eta\right) =\mathcal{N}^{\prime}\left(
\begin{array}
[c]{c}
M_{\lambda,-\mu}\left( \rho\right) +\tau\frac{\frac{m}{\mathcal{M}}}{4\left( \frac{1}{2}+i\frac{\mathcal{M}}{H}\right) }M_{\lambda,-\mu+1}\left( \rho\right) \\
-\tau M_{\lambda,-\mu}\left( \rho\right) +\frac{\frac{m}{\mathcal{M}}}{4\left( \frac{1}{2}+i\frac{\mathcal{M}}{H}\right) }M_{\lambda,-\mu+1}\left( \rho\right)
\end{array}
\right) \label{41}
\end{equation}
and
\begin{equation}
\xi_{out}^{-}\left( \eta\right) =\mathcal{N}^{\prime\ast}\left(
\begin{array}
[c]{c}
-\tau M_{-\lambda,\mu-1}\left( -\rho\right) +\frac{\frac{m}{\mathcal{M}}}{4\left( \frac{1}{2}-i\frac{\mathcal{M}}{H}\right) }M_{-\lambda,\mu}\left( -\rho\right) \\
M_{-\lambda,\mu-1}\left( -\rho\right) +\tau\frac{\frac{m}{\mathcal{M}}}{4\left( \frac{1}{2}-i\frac{\mathcal{M}}{H}\right) }M_{-\lambda,\mu}\left( -\rho\right)
\end{array}
\right) . \label{43}
\end{equation}
Note that those states are connected to one another by the charge conjugation
transformation defined by
\begin{equation}
\xi\rightarrow\xi^{c}=\sigma_{1}\xi^{\ast}. \label{34}
\end{equation}
In addition, positive and negative energy solutions satisfy the orthogonality
condition
\begin{equation}
\bar{\xi}^{+}\xi^{-}=\bar{\xi}^{-}\xi^{+}=0. \label{35}
\end{equation}
Now, as we have mentioned above, in order to determine the probability of pair
creation and the density of created particles we use the Bogoliubov
transformation connecting the "\textit{in}" with the "\textit{out}" states.
The relation between those states can be obtained by the use of the relation
between Whittaker functions \cite{Grad}
\begin{equation}
M_{\lambda,\mu}\left( z\right) =\frac{\Gamma\left( 2\mu+1\right)
e^{i\pi\lambda}}{\Gamma\left( \mu-\lambda+\frac{1}{2}\right) }W_{-\lambda,\mu}\left( -z\right) +\frac{\Gamma\left( 2\mu+1\right)
e^{-i\pi\left( \mu-\lambda+\frac{1}{2}\right) }}{\Gamma\left( \mu+\lambda+\frac{1}{2}\right) }W_{\lambda,\mu}\left( z\right) , \label{45}
\end{equation}
with $-\frac{3\pi}{2}<\arg z<\frac{\pi}{2}$ and $2\mu\neq-1,-2,\cdots$. The
functions $\xi_{out}^{+}\left( \eta\right) $ and $\xi_{out}^{-}\left( \eta\right) $ can then be expressed in terms of $\xi_{in}^{+}\left( \eta\right) $ and $\xi_{in}^{-}\left( \eta\right) $ as follows
\begin{equation}
\begin{array}
[c]{c}
\xi_{out}^{+}\left( \eta\right) =\alpha~\xi_{in}^{+}\left( \eta\right)
+\beta~\xi_{in}^{-}\left( \eta\right) \\
\xi_{out}^{-}\left( \eta\right) =\alpha^{\ast}~\xi_{in}^{-}\left(
\eta\right) +\beta^{\ast}~\xi_{in}^{+}\left( \eta\right)
\end{array}
\label{46}
\end{equation}
where the Bogoliubov coefficients $\alpha$ and $\beta$ are given by
\begin{equation}
\frac{\beta}{\alpha}=\frac{\mathcal{N}}{\mathcal{N}^{\ast}}\frac{\Gamma\left(
\frac{1}{2}-\mu-\lambda\right) \sqrt{\mathcal{M}-\frac{eE_{0}}{H}}}{\Gamma\left( \frac{1}{2}-\mu+\lambda\right) \sqrt{\mathcal{M}+\frac{eE_{0}}{H}}}e^{-i\pi\left( \mu-\frac{1}{2}\right) } \label{47}
\end{equation}
and
\begin{equation}
\left\vert \alpha\right\vert ^{2}+\left\vert \beta\right\vert ^{2}=1.
\label{48}
\end{equation}
The Bogoliubov relation between "\textit{in}" and "\textit{out}" states leads
to a relation between the creation and annihilation operators
\begin{equation}
\begin{array}
[c]{c}
\hat{b}_{-k,in}^{+}=\alpha^{\ast}~\hat{b}_{-k,out}^{+}+\beta~\hat{a}_{k,out}\\
\hat{a}_{k,in}=\alpha~\hat{a}_{k,out}+\beta^{\ast}~\hat{b}_{-k,out}^{+}.
\end{array}
\label{49}
\end{equation}
Therefore, the probability of pair creation and the density of created
particles will be given in terms of the Bogoliubov coefficients. In effect,
starting from the amplitude $\mathcal{A}=\left\langle 0_{out}\left\vert
a_{k,out}b_{-k,out}\right\vert 0_{in}\right\rangle $, it is easy to show that
\begin{equation}
\mathcal{A}=-\frac{\beta^{\ast}}{\alpha}\left\langle 0_{out}\right\vert
\left. 0_{in}\right\rangle . \label{50}
\end{equation}
The probability to create a pair of fermions in the state $k$ is then
\begin{equation}
\mathcal{P}_{k}=\left\vert \frac{\beta}{\alpha}\right\vert ^{2}. \label{51}
\end{equation}
By the use of the following property of the gamma function \cite{Grad}
\begin{equation}
\left\vert \Gamma(ix)\right\vert ^{2}=\frac{\pi}{x\sinh\pi x}
\end{equation}
we find
\begin{equation}
\left\vert \frac{\beta}{\alpha}\right\vert ^{2}=\frac{\sinh\pi\left(
\frac{\mathcal{M}}{H}+\frac{eE_{0}}{H^{2}}\right) }{\sinh\pi\left(
\frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\right) }e^{-2\pi\frac{\mathcal{M}}{H}}. \label{52}
\end{equation}
For the number density of created particles we have
\begin{equation}
n\left( k\right) =\left\langle 0_{in}\left\vert a_{k,out}^{+}a_{k,out}\right\vert 0_{in}\right\rangle =\left\vert \beta\right\vert ^{2}. \label{53}
\end{equation}
Taking into account that the Bogoliubov coefficients satisfy the condition
(\ref{48}), it is easy to show that
\begin{equation}
n\left( k\right) =\frac{\sinh\pi\left( \frac{\mathcal{M}}{H}+\frac{eE_{0}}{H^{2}}\right) e^{-2\pi\frac{\mathcal{M}}{H}}}{\sinh\pi\left(
\frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\right) +\sinh\pi\left(
\frac{\mathcal{M}}{H}+\frac{eE_{0}}{H^{2}}\right) e^{-2\pi\frac{\mathcal{M}}{H}}}. \label{54}
\end{equation}
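As a quick numerical consistency check (with purely illustrative parameter values), one can verify that expressions (\ref{52}) and (\ref{54}) satisfy $n\left( k\right) =\mathcal{P}_{k}/\left( 1+\mathcal{P}_{k}\right) $, as required by the unitarity condition (\ref{48}):
\begin{verbatim}
import math

m, H, eE0 = 1.0, 0.2, 0.15             # illustrative values (natural units)
M = math.sqrt(m**2 + (eE0/H)**2)
A, B = math.pi*M/H, math.pi*eE0/H**2

P = math.sinh(A + B)/math.sinh(A - B)*math.exp(-2*A)         # eq. (52)
n = (math.sinh(A + B)*math.exp(-2*A)
     /(math.sinh(A - B) + math.sinh(A + B)*math.exp(-2*A)))  # eq. (54)
assert abs(n - P/(1 + P)) < 1e-15                            # n = P/(1+P)
\end{verbatim}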
Let us note that these results are obtained for a positive wave vector ($k>0$)
and $-\frac{3\pi}{2}<\arg\left( 2ik\eta\right) <\frac{\pi}{2}$. For the
case when $k<0$, the quantities $n\left( k\right) $ and $\mathcal{P}_{k}$
can be obtained from (\ref{52}) and (\ref{54}) by changing the sign of $e$. We
then have
\begin{equation}
n\left( k\right) =\frac{\sinh\pi\left( \frac{\mathcal{M}}{H}+\operatorname{sign}\left( k\right) \frac{eE_{0}}{H^{2}}\right)
e^{-2\pi\frac{\mathcal{M}}{H}}}{\sinh\pi\left( \frac{\mathcal{M}}{H}-\operatorname{sign}\left( k\right) \frac{eE_{0}}{H^{2}}\right)
+\sinh\pi\left( \frac{\mathcal{M}}{H}+\operatorname{sign}\left( k\right)
\frac{eE_{0}}{H^{2}}\right) e^{-2\pi\frac{\mathcal{M}}{H}}}
\end{equation}
Then $n\left( k\right) $ is more significant when $k>0$.
Therefore the constant electric field produces predominantly particles with
$k>0$. In other words, in the presence of a constant electric field, particles
prefer to be created in a specific direction. This depends on the orientation
of the electric field and the sign of the particle charge. The antiparticles
are mainly created in the opposite direction with the same density as the
particles.
Since the adiabatic condition (\ref{19}) reduces to $\frac{H}{\mathcal{M}}<<1$, the particle creation in de-Sitter space is well-defined only if
$\mathcal{M}>>H$. In such a limit, the density of created particles can be
approximated by
\begin{equation}
n\left( k\right) =\exp\left[ -\frac{2\pi}{H}\left( \mathcal{M}-\operatorname{sign}\left( k\right) \frac{eE_{0}}{H}\right) \right] ,
\label{55}
\end{equation}
which resembles the Boltzmann distribution. This explains the thermal
nature of the effect and shows that the spin effects are negligible at this level.
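A short numerical sketch (illustrative parameters chosen so that $\mathcal{M}>>H$) confirms that the Boltzmann-like form (\ref{55}) reproduces the exact density (\ref{54}) for $k>0$:
\begin{verbatim}
import math

m, H, eE0 = 1.0, 0.05, 0.1             # illustrative values, M/H ~ 45
M = math.sqrt(m**2 + (eE0/H)**2)
A, B = math.pi*M/H, math.pi*eE0/H**2

exact  = (math.sinh(A + B)*math.exp(-2*A)
          /(math.sinh(A - B) + math.sinh(A + B)*math.exp(-2*A)))  # eq. (54)
approx = math.exp(-2*math.pi/H*(M - eE0/H))                       # eq. (55)
print(exact, approx, exact/approx)      # the ratio is very close to 1
\end{verbatim}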
\section{The number of created particles}
Since the electric field amplifies the particle creation only when $k>0$, it
is of interest to discuss the effect of the electric field on the total number
of created particles. The total number of created particles is given by
\begin{equation}
N=\int\frac{dxdk}{2\pi a\left( t\right) }n\left( k\right) \label{nt}
\end{equation}
where $\frac{dxdk}{2\pi a\left( t\right) }$ is the number of states in the
phase space element $dxdk$ and the factor $a\left( t\right) $ in the
denominator expresses the dilution of the particles by the expansion of the
universe. Incorporating equation (\ref{55}) into (\ref{nt}) we get
\begin{equation}
N=\frac{1}{\pi a\left( t\right) }\int dx\int_{k\geq0}dk\exp\left(
-\frac{2\pi}{H}\mathcal{M}\right) \cosh\left( 2\pi\frac{eE_{0}}{H^{2}}\right) \label{n}
\end{equation}
Since $n\left( k\right) $ depends only on $\operatorname{sign}\left(
k\right) $, the integral over $k$ is divergent. The origin of this divergence
is the fact that the total number of created particles in an infinite time is
infinite. However, the number of created particles per unit of time per unit
of length $\frac{dN}{dxdt}$ must be finite. This quantity, which is directly
related to the experimental measurements, is defined by
\begin{equation}
N=\int dN=\int\frac{dN}{dxdt}dxdt.
\end{equation}
Then, the divergence arising in the integration $\int dxdk$ must be equivalent to
the divergence arising in the integration over $\int dxdt$. In the flat
Minkowski space-time the integration over $k$ can be carried out by making the
change $\int dk\rightarrow eE_{0}\int dt=eE_{0}T$. In de-Sitter space, the
situation is slightly different. By taking into account that particles and
antiparticles are well-defined only when $\left\vert \rho\right\vert
<<\left\vert 2\mu+1\right\vert $, we find that $n\left( k\right) $ makes
sense only if
\begin{equation}
\left\vert k\right\vert <\mathcal{M}e^{Ht}.
\end{equation}
Thus, at any given time $t$ we only need to integrate up to a cut-off
$\Lambda=\mathcal{M}e^{Ht}$. If we consider the variation of $N$ with respect
to a small variation of the cut-off, we obtain
\begin{equation}
\frac{dN}{d\Lambda}=\frac{1}{\pi a\left( t\right) }\int dx\exp\left(
-\frac{2\pi}{H}\mathcal{M}\right) \cosh\left( 2\pi\frac{eE_{0}}{H^{2}}\right) .
\end{equation}
Since
\begin{equation}
d\Lambda=\mathcal{M}He^{Ht}dt,
\end{equation}
we obtain
\begin{equation}
\frac{dN}{dt}=\frac{\mathcal{M}H}{\pi}\int dx\cosh\left( 2\pi\frac{eE_{0}}{H^{2}}\right) \exp\left( -\frac{2\pi}{H}\mathcal{M}\right)
\end{equation}
and, consequently, the number of created particles per unit of time per unit
of length is given by
\begin{equation}
\frac{dN}{dxdt}=\frac{\mathcal{M}H}{\pi}\cosh\left( 2\pi\frac{eE_{0}}{H^{2}}\right) \exp\left( -\frac{2\pi}{H}\mathcal{M}\right) . \label{67}
\end{equation}
Here, we notice that, in de-Sitter space-time, the integration over $k$ can be
simply replaced by
\begin{equation}
\int\frac{dk}{a\left( t\right) }\rightarrow\mathcal{M}H\int dt
\end{equation}
instead of $\int dk\rightarrow eE_{0}\int dt$ like in the Minkowski space-time.
From equation (\ref{67}), we find that, for a pure gravitational field (i.e.,
$eE_{0}=0$),
\begin{equation}
\frac{dN}{dxdt}=\frac{mH}{\pi}\exp\left( -\frac{2\pi}{H}m\right) .
\label{69}
\end{equation}
Furthermore, the number of created particles per unit of time per unit of
length can be written in the form
\begin{equation}
\frac{dN}{dxdt}=\gamma_{E}\frac{mH}{\pi}\exp\left( -\frac{2\pi}{H}m\right) ,
\end{equation}
where the factor $\gamma_{E}$ is given by
\begin{equation}
\gamma_{E}=\sqrt{1+y^{2}}\cosh\left( 2\pi\frac{m}{H}y\right) \exp\left[
\frac{2\pi m}{H}\left( 1-\sqrt{1+y^{2}}\right) \right] \label{gammae}
\end{equation}
with
\begin{equation}
y=\frac{eE_{0}}{mH}.
\end{equation}
It is easy to show that $\gamma_{E}$ is always greater than $1$ and therefore
the electric field amplifies the particle creation in de-Sitter space-time.
In figure (\ref{Fig1}), we plot the factor $\gamma_{E}$ as a function of the
variable $y=\frac{eE_{0}}{mH}$ for various values of $\frac{m}{H}$. As a
result, we remark that the electric field leads to a significant enhancement
of the particle creation. This effect becomes more important as the
quantity $\frac{m}{H}$ gets larger.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=4.7245in,width=4.7245in]{01.eps}
\caption{{\protect\small Plotting }$\gamma_{E}$ {\protect\small as a function
of the variable }$y=\frac{eE_{0}}{mH}${\protect\small \ for various values of
}$\frac{m}{H}${\protect\small .}}
\label{Fig1}
\end{center}
\end{figure}
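The content of figure \ref{Fig1} can be reproduced numerically; the following sketch (with illustrative values of $\frac{m}{H}$) also checks that $\gamma_{E}\geq1$ over the sampled range:
\begin{verbatim}
import numpy as np

def gamma_E(y, m_over_H):
    a = 2*np.pi*m_over_H
    return (np.sqrt(1 + y**2)*np.cosh(a*y)
            *np.exp(a*(1 - np.sqrt(1 + y**2))))   # eq. (gammae)

y = np.linspace(0.0, 3.0, 301)
for m_over_H in (0.5, 1.0, 2.0):                  # illustrative values
    g = gamma_E(y, m_over_H)
    assert np.all(g >= 1.0 - 1e-12)               # gamma_E >= 1 everywhere
    print(m_over_H, g[-1])
\end{verbatim}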
As is mentioned above, in a pure gravitational field, the production of
massive (heavy) particles is negligible because of the exponential
$\exp\left( -\frac{2\pi}{H}m\right) $ and the creation of massive particles
is excluded from playing any role in the evolution of the early universe. However,
if electric fields existed in the early stages of the universe they could
enhance the creation of massive particles and, thus, the particle creation
would have significant implications for the cosmic evolution.
\section{Schwinger effective Lagrangian}
More recently, the authors of \cite{2d} have computed the one-loop effective
Lagrangian for scalar particles in $dS_{2}$ space-time, starting from the
definition
\begin{equation}
\mathcal{L}_{dS}^{(1)}=iH\int\frac{dk}{2\pi}\ln\left( \alpha_{k}^{\ast}\right) ,
\end{equation}
where $\alpha_{k}^{\ast}$ is the Bogoliubov coefficient. They have found that
the imaginary part of this effective Lagrangian is given by
\begin{equation}
2\operatorname{Im}\mathcal{L}_{dS}^{(1)}=\frac{qE}{2\pi}\ln\left(
1+N_{ds}\right) ,
\end{equation}
where $N_{ds}$ is the number density of created scalar particles. In that
derivation the integration over $k$ is replaced by the factor $eE$, like in
the Minkowski space, and the fact that the coefficient $\alpha_{k}^{\ast}$
depends on the sign of $k$ is not taken into account.
In this section we show how to derive the imaginary part of the Schwinger
effective Lagrangian from our previous results. We start by writing
$\mathcal{P}_{k}$ as
\begin{equation}
\mathcal{P}_{k}=\frac{\sigma}{1-\sigma}, \label{58}
\end{equation}
where
\[
\sigma=\frac{\sinh\pi\left( \frac{\mathcal{M}}{H}+\epsilon\frac{eE_{0}}{H^{2}}\right) e^{-2\pi\frac{\mathcal{M}}{H}}}{\sinh\pi\left( \frac{\mathcal{M}}{H}+\epsilon\frac{eE_{0}}{H^{2}}\right) e^{-2\pi\frac{\mathcal{M}}{H}}+\sinh\pi\left( \frac{\mathcal{M}}{H}-\epsilon\frac{eE_{0}}{H^{2}}\right) },
\]
where $\epsilon=\operatorname{sign}\left( k\right) $.
Let $\mathcal{C}_{k}$ be the probability to have no pair creation in the
state $k.$ The quantity $\mathcal{C}_{k}\mathcal{P}_{k}$ is then the
probability to have only one pair in the state $k.$ Because of the Pauli
principle we have $\mathcal{C}_{k}+\mathcal{C}_{k}\mathcal{P}_{k}=1$ and
\begin{equation}
\mathcal{C}_{k}=1-\sigma. \label{60}
\end{equation}
Next, we define the vacuum to vacuum transition amplitude in terms of an
effective action $\mathcal{A}\left( vac-vac\right) =\exp\left(
iS_{eff}\right) $. The vacuum to vacuum probability can then be written as
\begin{equation}
\left\vert \mathcal{A}\left( vac-vac\right) \right\vert ^{2}=\exp\left(
-2\operatorname{Im}S_{eff}\right) =\prod_{k}\mathcal{C}_{k}.
\end{equation}
It follows from (\ref{60}) that
\begin{equation}
\exp\left( -2\operatorname{Im}S_{eff}\right) =\exp\left[ \sum_{k}\ln\left(
1-\sigma\right) \right] , \label{61}
\end{equation}
where the sum $\sum_{k}$ has to be understood as $\int\frac{dxdk}{2\pi
a\left( t\right) }$. Taking into account that
\begin{equation}
\ln(1-\sigma)=\ln\left[ 1-e^{-2\pi\left( \frac{\mathcal{M}}{H}-\operatorname{sign}\left( k\right) \frac{eE_{0}}{H^{2}}\right) }\right]
-\ln\left[ 1-e^{-4\pi\frac{\mathcal{M}}{H}}\right]
\end{equation}
we obtain
\begin{equation}
2\operatorname{Im}S_{eff}=-\int\frac{dxdk}{2\pi a\left( t\right) }\ln\left(
1-e^{-2\pi\left( \frac{\mathcal{M}}{H}-\operatorname{sign}\left( k\right)
\frac{eE_{0}}{H^{2}}\right) }\right) +\int\frac{dxdk}{2\pi a\left(
t\right) }\ln\left( 1-e^{-4\pi\frac{\mathcal{M}}{H}}\right) . \label{62}
\end{equation}
By expanding the logarithm functions, we obtain
\begin{align}
2\operatorname{Im}S_{eff}\ & =2\int_{k\geq0}\frac{dxdk}{2\pi a\left(
t\right) }\sum_{n=1}\frac{1}{n}\cosh\left( 2\pi n\frac{eE_{0}}{H^{2}}\right) \exp\left( -\frac{2\pi n}{H}\mathcal{M}\right) \nonumber\\
& -2\int_{k\geq0}\frac{dxdk}{2\pi a\left( t\right) }\sum_{n=1}\frac{1}{n}\exp\left( -\frac{4\pi n}{H}\mathcal{M}\right)
\end{align}
To eliminate the divergence arising in the integration over $k$, we define an
effective Lagrangian
\[
2\operatorname{Im}S_{eff}=\int dxdt\ 2\operatorname{Im}L_{eff}
\]
and we proceed as in the previous section. The quantity $2\operatorname{Im}L_{eff}$, which has to be interpreted as the probability of particle creation
per unit of time per unit of length, will then be given by
\begin{align}
2\operatorname{Im}L_{eff} & =\frac{\mathcal{M}H}{\pi}\sum_{n=1}\frac{1}{n}\exp\left( -\frac{2\pi n}{H}\mathcal{M}\right) \cosh\left( 2\pi
n\frac{eE_{0}}{H^{2}}\right) \nonumber\\
& -\frac{\mathcal{M}H}{\pi}\sum_{n=1}\frac{1}{n}\exp\left( -\frac{4\pi n}{H}\mathcal{M}\right) .
\end{align}
Taking into account that
\begin{equation}
\int_{-\infty}^{+\infty}\frac{ds}{s}\exp\left( -i\frac{\delta}{2}s\right)
\left( \coth\frac{s}{2}-\frac{2}{s}\right) =\sum_{n=1}\frac{1}{n}\exp\left(
-\pi n\delta\right)
\end{equation}
we can write the imaginary part of the effective Lagrangian as
\begin{align}
2\operatorname{Im}L_{eff} & =\frac{\mathcal{M}H}{\pi}\int_{-\infty}^{+\infty}\frac{ds}{s}\exp\left( -i\frac{\mathcal{M}}{H}s\right) \cos\left(
\frac{eE_{0}}{H^{2}}s\right) \left( \coth\frac{s}{2}-\frac{2}{s}\right)
\nonumber\\
& -\frac{\mathcal{M}H}{\pi}\int_{-\infty}^{+\infty}\frac{ds}{s}\exp\left(
-2i\frac{\mathcal{M}}{H}s\right) \left( \coth\frac{s}{2}-\frac{2}{s}\right)
\end{align}
Then the Schwinger-like effective Lagrangian is of the form
\begin{align}
L_{eff} & =i\frac{\mathcal{M}H}{\pi}\int_{0}^{+\infty}\frac{ds}{s}\exp\left( -i\frac{\mathcal{M}}{H}s\right) \cos\left( \frac{eE_{0}}{H^{2}}s\right) \left( \coth\frac{s}{2}-\frac{2}{s}\right) \nonumber\\
& -i\frac{\mathcal{M}H}{\pi}\int_{0}^{+\infty}\frac{ds}{s}\exp\left(
-2i\frac{\mathcal{M}}{H}s\right) \left( \coth\frac{s}{2}-\frac{2}{s}\right)
+L_{0},
\end{align}
where $L_{0}$ is an unimportant real phase.
Notice that in the limit $E_{0}\rightarrow0$, we find
\begin{equation}
L_{eff}=i\frac{mH}{\pi}\int_{0}^{+\infty}\frac{ds}{s}\exp\left( -i\frac{m}{H}s\right) \left( 1-\exp\left( -i\frac{m}{H}s\right) \right) \left(
\coth\frac{s}{2}-\frac{2}{s}\right) +L_{0},
\end{equation}
and
\begin{equation}
2\operatorname{Im}L_{eff}=\frac{mH}{\pi}\sum_{n=1}\frac{1}{n}\exp\left(
-\frac{2\pi n}{H}m\right) \left[ 1-\exp\left( -\frac{2\pi n}{H}m\right)
\right] ,\nonumber
\end{equation}
which can be written in the form
\begin{equation}
2\operatorname{Im}L_{eff}=\frac{mH}{\pi}\ln\left[ 1+\exp\left( -\frac{2\pi
}{H}m\right) \right] . \label{68}
\end{equation}
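The resummation behind (\ref{68}) can also be verified numerically: with $x=\exp\left( -\frac{2\pi}{H}m\right) $ one has $\sum_{n\geq1}\frac{1}{n}x^{n}\left( 1-x^{n}\right) =\ln\left( 1+x\right) $. A short sketch with an illustrative value of $m/H$:
\begin{verbatim}
import math

m_over_H = 0.4                      # illustrative value
x = math.exp(-2*math.pi*m_over_H)
series = sum(x**n*(1 - x**n)/n for n in range(1, 200))
assert abs(series - math.log(1 + x)) < 1e-12
\end{verbatim}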
\section{The weak expansion limit}
Let us now consider the weak expansion case, which is appropriate to the
actual accelerating phase of our universe. We concentrate our attention on how
the universe expansion influences the Schwinger effect and how the usual
results corresponding to the flat space are recovered. We discuss also the
possibility to enhance the Schwinger effect by the expansion of the universe.
\subsection{The number density of created particles}
Let us, first, study the number density of created particles when $H<<1$. In
order to obtain a constant electric field in the limit $H\rightarrow0$, where
the de-Sitter space reduces to the flat Minkowski space-time, we make the
shift
\[
A_{1}\rightarrow A_{1}^{\prime}=\frac{E_{0}}{H}e^{Ht}-\frac{E_{0}}{H}.
\]
In such a case, we have to replace $k$ by $k^{\prime}=k+\frac{eE_{0}}{H}$ in
order to obtain the pair production probability and the density of particles.
We get
\begin{equation}
n\left( k\right) =\left\{
\begin{array}
[c]{ccc}
\exp\left[ -\frac{2\pi}{H}\left( \mathcal{M}-\frac{eE_{0}}{H}\right)
\right] & \text{ \ \ \ \ \ \ \ if \ \ \ \ \ \ \ } & k>-\frac{eE_{0}}{H}\\
\exp\left[ -\frac{2\pi}{H}\left( \mathcal{M}+\frac{eE_{0}}{H}\right)
\right] & \text{ \ \ \ \ \ \ \ \ if \ \ \ \ \ \ \ \ } & k<-\frac{eE_{0}}{H}
\end{array}
\right. \label{s}
\end{equation}
It is then clear that the exponential in the second line of equation (\ref{s})
goes to $0$ as $H\rightarrow0$. For the first line, we use the limit
\[
\frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\simeq\frac{m^{2}}{2eE_{0}}-\frac{1}{8}\frac{m^{4}}{e^{3}E_{0}^{3}}H^{2}
\]
to obtain
\begin{equation}
\mathcal{P}_{k}\approx\frac{\exp\left[ -\pi\left( \frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\right) \right] }{2\sinh\pi\left( \frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\right) }\approx\frac{\exp\left[
-\pi\left( \frac{m^{2}}{eE_{0}}-\frac{1}{4}\frac{m^{4}}{e^{3}E_{0}^{3}}H^{2}\right) \right] }{1-\exp\left[ -\pi\left( \frac{m^{2}}{eE_{0}}-\frac{1}{4}\frac{m^{4}}{e^{3}E_{0}^{3}}H^{2}\right) \right] }
\end{equation}
and
\begin{equation}
n\left( k\right) \approx\exp\left[ -\pi\left( \frac{m^{2}}{eE_{0}}-\frac{1}{4}\frac{m^{4}}{e^{3}E_{0}^{3}}H^{2}\right) \right] =\exp\left(
\frac{\pi}{4}\frac{m^{4}}{e^{3}E_{0}^{3}}H^{2}\right) \exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) .
\end{equation}
Besides, when $H\rightarrow0$, we see that
\begin{equation}
n\left( k\right) \approx\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right)
\label{56}
\end{equation}
and
\begin{equation}
\mathcal{P}_{k}\approx\frac{\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right)
}{1-\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) }. \label{57}
\end{equation}
Let us notice that the condition $k>-\frac{eE_{0}}{H}$ means that, in the
limit $H\rightarrow0$, we have the possibility to create particles with an
arbitrary wave vector, $k\in\left] -\infty,+\infty\right[ $. Physically,
this fact is not strange because $k$ is the canonical momentum and not the
physical one. We remark also that equations (\ref{56}) and (\ref{57}) are in
complete agreement with the well-known results corresponding to the pair
creation by an electric field in Minkowski space-time (see \cite{CF1} and
references therein).
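Numerically, the convergence of the de-Sitter exponent to the flat-space one is fast; the following sketch (illustrative parameters in natural units) shows that $\frac{2\pi}{H}\left( \mathcal{M}-\frac{eE_{0}}{H}\right) \rightarrow\pi\frac{m^{2}}{eE_{0}}$ as $H\rightarrow0$:
\begin{verbatim}
import math

m, eE0 = 1.0, 0.5                   # illustrative values
for H in (0.5, 0.1, 0.01):
    M = math.sqrt(m**2 + (eE0/H)**2)
    print(H, 2*math.pi/H*(M - eE0/H), math.pi*m**2/eE0)
# the exact exponent converges to the Schwinger exponent pi*m^2/(e*E0)
\end{verbatim}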
\subsection{The total number of created particles}
In Minkowski space-time, the number of created particles per unit of time per
unit of length is given by
\begin{equation}
\frac{dN}{dxdt}=\frac{eE_{0}}{2\pi}\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) .
\end{equation}
Because of the exponential $\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) $,
the field strength required to see the Schwinger effect is of the order of the
critical value $E_{c}=10^{16}\mathtt{Vcm}^{-1}$. However, the maximal electric
field produced by the current technologies is about two orders of magnitude
smaller than the critical value. This makes $\frac{dN}{dxdt}$ very small for a
pure electric field. Then, in order to see the effect, $\frac{dN}{dxdt}$
must be enhanced exponentially by compensating $\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) $.
We consider an electric field $E_{0}$ of the order $\sim10^{-2}E_{c}$.
Therefore, the present day expansion satisfies the inequality
\begin{equation}
\frac{eE_{0}}{H^{2}}>>\frac{m^{2}}{eE_{0}}. \label{c}
\end{equation}
In this limit we have
\begin{equation}
\frac{dN}{dxdt}=\frac{\mathcal{M}H}{\pi}\exp\left[ -\frac{2\pi}{H}\left(
\mathcal{M}-\frac{eE_{0}}{H}\right) \right] ,
\end{equation}
which can be approximated by
\begin{equation}
\frac{dN}{dxdt}=\gamma_{H}\frac{eE_{0}}{2\pi}\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) ,
\end{equation}
where
\begin{equation}
\gamma_{H}=\sqrt{1+\frac{m^{2}H^{2}}{e^{2}E_{0}^{2}}}\exp\left( \pi
\frac{m^{4}H^{2}}{4e^{3}E_{0}^{3}}\right) .
\end{equation}
From the condition (\ref{c}), we find that $\frac{m^{4}H^{2}}{4e^{3}E_{0}^{3}}<<\frac{m^{2}}{eE_{0}}$ and then the Schwinger effect cannot be
exponentially enhanced by the expansion of the universe.
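An order-of-magnitude sketch makes this explicit; with rough numerical inputs (electron mass, present-day Hubble rate, $E_{0}\sim10^{-2}E_{c}$, all in natural units with energies in eV, and using $eE_{c}=m^{2}$), $\gamma_{H}-1$ is utterly negligible:
\begin{verbatim}
import math

m   = 5.11e5          # electron mass [eV]
H   = 1.4e-33         # present-day Hubble rate [eV], rough value
eE0 = 1e-2*m**2       # E0 ~ 1e-2 E_c, with e*E_c = m^2 in these units

gamma_H_minus_1 = (0.5*(m*H/eE0)**2                 # from the square root
                   + math.pi*m**4*H**2/(4*eE0**3))  # from the exponential
print(gamma_H_minus_1)                              # ~ 6e-72
\end{verbatim}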
In recent years, some explicit experimental realizations have been proposed to
observe the Schwinger effect for the first time \cite{exp1,exp2,exp3}. The
basic principle of these experiments is the dynamical assistance of the
Schwinger mechanism by the combination of a slower strong laser pulse with a
faster weak one. In this case, the faster pulse gives a multi-photon
contribution which leads to an exponential enhancement. Since $\gamma_{H}\simeq1$, even if the Schwinger effect is observed in the laboratory, it would
be difficult to deduce from this observation whether we live in an expanding
universe or in a Minkowski space-time.
\subsection{The Schwinger effective Lagrangian}
In the weak expansion limit the probability of particle creation per unit of time
per unit of length will then be given by
\begin{equation}
2\operatorname{Im}L_{eff}=\frac{\mathcal{M}H}{2\pi}\sum_{n=1}\frac{1}{n}\exp\left[ -\frac{2\pi n}{H}\left( \mathcal{M}-\frac{eE_{0}}{H}\right)
\right] \label{we}
\end{equation}
which can be written as an integral over the Schwinger proper time
\begin{equation}
2\operatorname{Im}L_{eff}=\frac{\mathcal{M}H}{2\pi}\int_{-\infty}^{+\infty}\frac{ds}{s}\exp\left[ -i\left( \frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\right) s\right] \left( \coth\frac{s}{2}-\frac{2}{s}\right)
\end{equation}
Then the Schwinger-like effective Lagrangian is of the form
\begin{equation}
L_{eff}=\frac{i}{2\pi}\frac{\mathcal{M}H}{eE_{0}}\int_{0}^{+\infty}\frac{ds}{s}\exp\left[ -i\left( \frac{\mathcal{M}}{H}-\frac{eE_{0}}{H^{2}}\right) s\right] \left( \coth\frac{s}{2}-\frac{2}{s}\right) +...
\end{equation}
By taking the limit $H\rightarrow0$ and making the change $s\rightarrow
2eE_{0}s$, we obtain the well-known result
\begin{equation}
L_{eff}=\frac{i}{2\pi}\int_{0}^{+\infty}\frac{ds}{s}\exp\left( -im^{2}s\right) \left( eE_{0}\coth\left( eE_{0}s\right) -\frac{1}{s}\right) +...
\end{equation}
Performing the integration over $s$ using the residue theorem or taking the limit
$H\rightarrow0$ in equation (\ref{we}), we obtain
\begin{equation}
2\operatorname{Im}L_{eff}=\frac{eE_{0}}{2\pi}\sum_{n=1}\frac{1}{n}\exp\left(
-n\pi\frac{m^{2}}{eE_{0}}\right) ,
\end{equation}
which is the same as equation (64) in \cite{Lin}.
\section{Concluding remarks}
In conclusion, we have studied in this paper the influence of an electric
field on the creation of spin-$\frac{1}{2}$ particle pairs in the (1+1)
dimensional inflationary de-Sitter space-time. We have shown at the first stage that the
Schwinger effect in an expanding universe is effectively equivalent to the
Schwinger effect in Minkowski space-time by introducing a unitary
transformation to the Dirac equation. This transformation allows us to obtain
exact solutions for the case of de-Sitter space with a constant electric
field. The charge symmetry between positive and negative frequency modes
permitted us to find exact and analytic expressions for the Bogoliubov
coefficients without the need to normalize the wave functions. Then the
probability to create a pair of particles in a given state and the density of
created fermions are calculated. The obtained expressions show that a constant
electric field increases the creation of fermions with positive wave number
$k$ and minimizes it in the opposite direction. This effect, which depends on
the particle charge and the orientation of the electric field, is of interest
because in the absence of the electric field there is no preferred direction
for the created particles. In addition, we have shown that the creation of
massless particles with conformal coupling is impossible even if an electric
field is present.
By doing summation over all allowed states we have expressed the number of
created particles per unit of time per unit of length and the imaginary part
of the Schwinger effective Lagrangian in closed forms. We have shown that the
electric field leads to a significant enhancement of the particle creation.
Then, if the particle creation effects on the cosmic evolution are negligible
in a pure gravitational field, the presence of a strong electric field makes
these effects appreciable. Therefore, the electromagnetic fields influence
the cosmic evolution directly via the Friedmann equations and through their effect on
the creation of particles.
We have also considered the case of weak expansion and we have discussed the
effect of the actual expansion of the universe on the Schwinger effect. It is
shown that the universe expansion could not assist the Schwinger effect to be
observed in the laboratory and, even if the effect is enhanced by the combination
of a stronger slower pulse with a faster weaker one, it would be difficult to
tell through this effect whether the universe is expanding.
Even if the problem is modeled in a two-dimensional space-time, it clearly shows
the effect of the electric field on the creation of fermions. Taking
into account that the number of created particles per unit of time and length
in a pure gravitational field is given, in the case of (3 + 1) dimensional
space-time, by \cite{ds4}
\[
\frac{dN}{dxdt}=\frac{m^{3}H}{\pi^{2}}~\exp\left( -\frac{2\pi m}{H}\right)
,
\]
we expect that the four-dimensional analogue of equation (\ref{67}) is of the form
\begin{equation}
\frac{dN}{dxdt}=\frac{\mathcal{M}^{3}H}{\pi^{2}}~F\left( \frac{eE_{0}}{H^{2}}\right) \exp\left( -\frac{2\pi}{H}\mathcal{M}\right) ,
\end{equation}
where the function $F\left( \frac{eE_{0}}{H^{2}}\right) $ satisfies the
condition $F\left( 0\right) =1$. If we impose that this expression reduces
to the Schwinger result \cite{CF1}
\begin{equation}
\frac{dN}{dxdt}=\frac{e^{2}E_{0}^{2}}{2\pi^{2}}~\exp\left( -\pi\frac{m^{2}}{eE_{0}}\right) ,
\end{equation}
when $H\rightarrow0$, the function $F\left( \frac{eE_{0}}{H^{2}}\right) $
must behave, for $\frac{eE_{0}}{H^{2}}>>1$, like
\begin{equation}
F\left( \frac{eE_{0}}{H^{2}}\right) \simeq\frac{H^{2}}{4\pi eE_{0}}\exp\left( 2\pi\frac{eE_{0}}{H^{2}}\right) .
\end{equation}
This implies that the amplification factor in (3+1) dimensions is
exponentially equivalent to $\gamma_{E}$ given in (\ref{gammae}).
\section{Summary of CMB Calibration}
The Planck satellite~\cite{Ade:2013sjv,Adam:2015rua} has measured CMB intensity maps at several different frequencies with both the Low Frequency Instrument (LFI) and the High Frequency Instrument (HFI). The detectors are calibrated using a given known source, which allows one to fix the proportionality constant (the {\it gain} factor) between the detector's response and a known intensity value. In the 2013 release~\cite{Ade:2013sjv,Aghanim:2013bta,Ade:2013eta} such a source was the dipolar temperature pattern induced by the Doppler boosting of the primordial temperature monopole due to the velocity $\boldsymbol{\beta_S}$ of the Sun with respect to the CMB rest frame, except for the two highest frequency channels which were calibrated on planets.
Such a dipole, called the ``solar dipole'' by the Planck collaboration, is practically time-independent during the observation time. Its amplitude $\beta_S$ is of order $10^{-3}$ in units of the speed of light, \emph{assuming} that the measured CMB dipole is mostly due to the Sun's velocity with respect to the CMB. This is generally considered a safe assumption and can be directly tested through measurements of the aberration of the CMB~\cite{Kosowsky:2010jm,Amendola:2010ty,Notari:2011sb}; such a test was performed on the Planck 2013 data~\cite{Aghanim:2013suk}, with still quite large error bars $\sim40\%$. In any case its value is not known independently {\it a priori} in order to calibrate the instrument (a significant disadvantage of the method) and for this reason the Planck team had to use the value measured by WMAP5~\cite{Hinshaw:2008kr,2011ApJS..192...14J}, which is itself subject to WMAP calibration uncertainties.
In the 2015 release~\cite{Adam:2015rua,Adam:2015vua,Ade:2015ads} such a source was replaced by the Doppler effect due to the motion with velocity $\boldsymbol{\beta_O}$ of the Planck satellite around the Sun, called the ``orbital dipole''. Although it is one order of magnitude smaller than the solar dipole (${\beta_O} = 1.0 \times 10^{-4}$) it has the great advantages of both being time-dependent (on a scale of 1 year) and having a very well known amplitude and direction. The very small uncertainties come only from the motion of the satellite inside the solar system, which is known to very high accuracy. The uncertainties are of the order $10^{-10}c$, which corresponds to just one part in a million. For this reason the 2015 release is thought to be more accurate and the new absolute calibration of the Planck 2015 HFI instrument is higher by $2\%$ (in power) compared to 2013, resolving the calibration differences noted between WMAP and Planck, which goes down from $2.6\%$ in 2013 to $0.3\%$ in 2015.
These improvements led to shifts in the cosmological parameter likelihood. Although there were other improvements in Planck 2015 compared to 2013, according to the Planck collaboration, calibration was the most important factor regarding the combination of the amplitude of the power spectrum and reionization optical depth: $A_s e^{-2\tau}$~\cite{Adam:2015rua}. In fact, its value was corrected upwards by around $2\%$~\cite{Adam:2015rua} (see their Section 10), which corresponds to a large 3.5$\sigma$ shift to the best-fit value~\cite{Planck:2015xua} (see their Table I).
The extremely high accuracy of the orbital motion cited above means that the biggest uncertainty in the orbital temperature dipole comes from our knowledge of the CMB background temperature $T_0$ which is currently only known at the level of $0.02\%$~\cite{Fixsen:2009ug}. Since the orbital dipole is directly proportional to $T_0$, this imposes a limit of $0.02\%$ to the accuracy of this calibration technique. However the most accurately calibrated channel has an uncertainty of $0.07\%$~\cite{Adam:2015rua} so this is not currently a serious limitation.
Since the orbital motion is known to such a good accuracy, and in order to avoid systematic errors, the Planck team has also taken into account the subleading relativistic Doppler corrections, in addition to the leading order dipole term. In particular such effects couple the solar dipole to the orbital dipole and are of order $\beta_S \beta_O$, which represents therefore a \emph{time-dependent} relative correction of about $10^{-3}$ compared to the leading dipole term, of order $\beta_O$. An effect of such size cannot a priori be neglected because the typical systematics of the calibration process are precisely of the same order, of about $0.1\%-1\%$, depending on the channel~\cite{Adam:2015rua,Adam:2015vua}. Let us also note that WMAP was calibrated using the orbital dipole as well~\cite{Hinshaw:2003fc,Hinshaw:2008kr}, and so any possible bias in this procedure could also have affected WMAP and propagated to the Planck 2013 release.
In this short note we point out that although the relativistic effects have been included in the 2015 Planck release~\cite{Adam:2015rua,Adam:2015vua,Ade:2015ads},\footnote{In 2013 they were not included as corrections to the solar dipole in HFI~\cite{Ade:2013eta}. In LFI~\cite{Aghanim:2013bta} they were included [see eq.~(A.1) therein], though it was implemented with a wrong factor of 2 due to a bug in the code as reported in its appendix A.} they should nevertheless be multiplied by a frequency-dependent factor, in the same way as discussed in~\cite{Kamionkowski:2002nd,Chluba:2004cn,Sunyaev:2013coa,Notari:2015kla} (following the original results from~\cite{Sazonov:1999zp}) for the purpose of the subtraction of the Doppler quadrupole from the measured maps. This would imply an ${\cal O}(1)$ correction to the relativistic terms, which should be relevant as long as the relativistic terms were actually necessary in the calibration. The reason why such a factor arises is that the Planck detectors are measuring a signal proportional to intensity, and the intensity is not linearly related to the primordial temperature in the CMB rest frame. There is also a frequency-independent dipole due to the non-relativistic Doppler effect plus \emph{frequency-dependent} relativistic corrections, as we are going to show.
\section{Corrections due to Linearizing Temperature}
Following closely~\cite{Notari:2015kla} we recall the effects of a boost with a velocity $\,\mathbf{v}/c=\boldsymbol{\beta}\,$. An observer in the CMB rest frame looking at the CMB black body signal along a direction $\boldsymbol{\hat{n}}$ would see a temperature $ T(\boldsymbol{\hat{n}})=T_0 + \varepsilon \, \delta T(\boldsymbol{\hat{n}}) $, where we assume $\varepsilon=10^{-5}$ and so $ \delta T(\boldsymbol{\hat{n}})$ is of order 1. An observer in a boosted frame would instead see along a direction $\boldsymbol{\hat{n}}'$ a temperature~\cite{Peebles:1968}:
\begin{equation}
T'(\boldsymbol{\hat{n}}')=
\frac{T(\boldsymbol{\hat{n}})}{\gamma(1-\boldsymbol{\beta} \cdot \boldsymbol{\hat{n}}')} \,. \label{Dopplerab}
\end{equation}
The multiplicative factor is the Doppler shift and the change in the apparent arrival direction of photons is the aberration effect~\cite{Burles:2006xf, Challinor:2002zh}. We split the velocity into a constant term due to the motion of the Sun with respect to the CMB and a time dependent one due to the orbit of the satellite around the Sun:
\begin{equation}
\boldsymbol{\beta}(t) \equiv \boldsymbol{\beta_S} + \boldsymbol{\beta_O}(t).
\end{equation}
In what follows we will not write the time dependence explicitly, for simplicity of notation.
A completely isotropic map is unaffected by aberration, which explains why its leading effect is at most ${\cal O}(\varepsilon \, \beta)$ ($\sim 10^{-8}$). The Doppler effect instead affects even an exactly isotropic map and in fact it correlates the monopole with higher multipoles, inducing a dipole of order $\beta$ and a $n^{\rm th}$-pole of order $\beta^n$, as can be seen by expanding the multiplicative factor in eq.~\eqref{Dopplerab}.
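Explicitly, expanding the multiplicative factor in eq.~\eqref{Dopplerab} to second order in $\beta$ gives
\begin{equation}
\frac{1}{\gamma(1-\boldsymbol{\beta} \cdot \boldsymbol{\hat{n}}')} \,=\, 1 + \boldsymbol{\beta} \cdot \boldsymbol{\hat{n}}' + (\boldsymbol{\beta} \cdot \boldsymbol{\hat{n}}')^2 - \frac{1}{2}\beta^2 + {\cal O}(\beta^3)\,,
\end{equation}
which already exhibits, at the level of the temperature, the induced dipole, a quadrupolar term and a shift of the monopole; the corresponding expansion for the intensity is worked out below.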
A detector in a boosted frame measures a signal at a given frequency $\nu'$ proportional to intensity:
\begin{equation}
I'(\nu') = I (\nu) \left( \frac{\nu'}{\nu}\right)^3 = \frac{2 \nu'^3}{e^{\frac{h \nu }{k_B T(\boldsymbol{\hat{n}})}}-1} \,,
\end{equation}
where $ \nu= \nu' \gamma(1-\boldsymbol{\beta} \cdot \boldsymbol{\hat{n}}') $. In the $\beta=0$ limit the two frames coincide and fluctuations in intensity are given at first order in $\varepsilon$ by:
\begin{equation}
\delta I(\nu)\,\approx\, \frac{2 \nu ^4 e^{\frac{\nu }{\nu_0}}}{\nu_0^2 \left(e^{\frac{\nu }{\nu_0}}-1\right)^2} \varepsilon \, \delta T(\boldsymbol{\hat{n}})
\,\equiv\, K \varepsilon \, \frac{\delta T(\boldsymbol{\hat{n}})}{T_0} \,,
\label{K}
\end{equation}
with $\nu_0 \equiv k_B T_0 / h = (56.79 \pm 0.01) \, {\rm GHz}$~\cite{Fixsen:2009ug}.
In the presence of a boost we can instead expand at second order in $\beta$ and first order in $\varepsilon$ getting:
\begin{equation}\label{eq:lin-temp}
\frac{\delta I_\nu}{K} \,=\, \varepsilon \, \frac{\delta T(\boldsymbol{\hat{n}})}{T_0} + \boldsymbol{{\beta}}\cdot \boldsymbol{\hat{n}} + (\boldsymbol{{\beta}}\cdot \boldsymbol{\hat{n}})^2 Q(\nu) - \frac{1}{2} \beta ^2\,.
\end{equation}
This quantity was dubbed \emph{linearized temperature} in~\cite{Notari:2015kla}. Here we have discarded terms of order $\beta\, \varepsilon$ or higher, and we have defined the quantity
\begin{equation}
Q(\nu) \,\equiv\, \frac{\nu}{2 \nu_0} \coth \left(\frac{\nu }{2 \nu_0}\right) \,,
\end{equation}
as in~\cite{Kamionkowski:2002nd,Sunyaev:2013coa,Notari:2015kla}. So, in addition to a dipole correction, we also have a frequency-dependent quadrupole correction and a shift of the monopole. Splitting into solar and orbital motion we obtain
\begin{align}
\label{eqmain}
\frac{\delta I_\nu}{K} \,=\;\, & \varepsilon \, \frac{\delta T(\boldsymbol{\hat{n}})}{T_0} + \boldsymbol{{\beta}_S}\cdot \boldsymbol{\hat{n}} + Q(\nu) (\boldsymbol{{\beta}_S}\cdot \boldsymbol{\hat{n}}) ^2 + \boldsymbol{{\beta}_O}\cdot \boldsymbol{\hat{n}} + Q(\nu) (\boldsymbol{{\beta}_O}\cdot \boldsymbol{\hat{n}})^2 \nonumber \\
& + 2 \, Q(\nu) (\boldsymbol{{\beta}_S}\cdot \boldsymbol{\hat{n}} ) ( \boldsymbol{{\beta}_O}\cdot \boldsymbol{\hat{n}} )- \beta_S \beta_O - \frac{1}{2} \beta_S^2 - \frac{1}{2} \beta_O^2\,.
\end{align}
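For clarity, the origin of the $Q(\nu)$ factors can be exhibited in one intermediate step. Expanding eq.~\eqref{Dopplerab} to second order in $\beta$, the boosted temperature reads $T'=T_0(1+\Delta)$ with
\begin{equation}
\Delta \,=\, \varepsilon\,\frac{\delta T(\boldsymbol{\hat{n}})}{T_0} + \boldsymbol{\beta}\cdot \boldsymbol{\hat{n}} + (\boldsymbol{\beta}\cdot \boldsymbol{\hat{n}})^2 - \frac{1}{2}\beta^2 + {\cal O}(\beta^3,\beta\,\varepsilon)\,,
\end{equation}
and a second-order Taylor expansion of the black body intensity in $\Delta$ gives
\begin{equation}
\frac{\delta I_\nu}{K} \,=\, \Delta + \left[Q(\nu)-1\right]\Delta^2 + \ldots\,,
\end{equation}
whose frequency-dependent quadratic piece promotes $(\boldsymbol{\beta}\cdot \boldsymbol{\hat{n}})^2$ to $Q(\nu)\,(\boldsymbol{\beta}\cdot \boldsymbol{\hat{n}})^2$ at ${\cal O}(\beta^2)$, reproducing eq.~\eqref{eq:lin-temp}; substituting $\boldsymbol{\beta}=\boldsymbol{\beta_S}+\boldsymbol{\beta_O}$ then yields eq.~\eqref{eqmain}.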
The leading time-dependent term, used for the calibration of WMAP~\cite{Hinshaw:2003fc,Hinshaw:2008kr} and Planck 2015~\cite{Adam:2015rua,Adam:2015vua}, is the dipole $\boldsymbol{{\beta}_O}\cdot \boldsymbol{\hat{n}}$. The Planck collaboration also considered all the above correction terms when performing the LFI 2013 and both LFI and HFI 2015 calibrations (see eqs.~(A.1) in~\cite{Aghanim:2013bta} and (5) in~\cite{Adam:2015vua}), but using $Q(\nu)=1$. However, the appropriate correction factor $Q(\nu)$ is quite different from 1, mainly for HFI: it is about $\{1.25,\,1.5,\,2.0,\,3.1\}$ for the $\{100,\,143,\,217,\,353\}$ GHz channels, respectively. The same does not apply to the two highest frequency channels (545 and 857 GHz), since they were calibrated on Uranus and Neptune using models for their atmospheric emission~\cite{Adam:2015vua}.
Note that since WMAP used lower frequency channels its $Q$ deviation from unity was smaller. The values are $\{1.01,\,1.03,\,1.04,\,1.09,\,1.22\}$ respectively for its $\{23,\,33,\,41,\,61,\,94\}$ GHz channels. In any case, WMAP had instrumental sensitivities much lower than Planck, so they did not even include the sub-dominant time-dependent terms in~\eqref{eqmain} in the first place.
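As a quick cross-check, the values of $Q(\nu)$ quoted above follow directly from its definition with $\nu_0 = 56.79\,$GHz; a minimal Python snippet (ours, purely illustrative) reproduces them:
\begin{verbatim}
# Cross-check of Q(nu) = x coth(x), with x = nu/(2 nu0) and
# nu0 = k_B T_0 / h = 56.79 GHz, for the channels quoted in the text.
import numpy as np

nu0 = 56.79  # GHz

def Q(nu):
    x = nu / (2.0 * nu0)
    return x / np.tanh(x)   # x coth(x)

for nu in (100.0, 143.0, 217.0, 353.0):    # Planck HFI channels
    print("HFI  %3.0f GHz: Q = %.2f" % (nu, Q(nu)))
for nu in (23.0, 33.0, 41.0, 61.0, 94.0):  # WMAP channels
    print("WMAP %3.0f GHz: Q = %.2f" % (nu, Q(nu)))
\end{verbatim}
This returns $Q\simeq\{1.25,\,1.48,\,2.00,\,3.12\}$ for the HFI channels and $\{1.01,\,1.03,\,1.04,\,1.09,\,1.22\}$ for WMAP, in agreement with the numbers above.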
Let us now comment on the possible effects of the relativistic frequency-dependent quadrupolar corrections with $Q(\nu)>1$.
\subsection{Doppler cross-terms}
The correction $2 \, Q(\nu) (\boldsymbol{{\beta}_S}\cdot \boldsymbol{\hat{n}} ) ( \boldsymbol{{\beta}_O}\cdot \boldsymbol{\hat{n}} )$ is only suppressed by a factor $2 Q(\nu) \beta_S\simeq (2 - 7)\times 10^{-3}$ compared to the leading order and, importantly, has the same time dependence (one year period). It should therefore be consistently included in the Planck calibration with the correct $Q(\nu)$ factor, which is especially important for the HFI maps.
It is difficult to assess precisely the impact of such a correction on the calibration factor, and subsequently on the CMB maps and on the individual multipoles released by Planck. At face value, compared to Planck 2015, which used $Q(\nu)=1$, it could represent a correction to the gain factor that, although quite small for the LFI frequencies, is up to $0.5\%$ on the 353 GHz channel. This would be of very similar size to the systematic errors estimated by Planck: $\{0.35\%,\,0.26\%,\,0.2\%\}$ respectively for the three LFI channels at $\{30,\,44,\,70\}$ GHz, and $\{0.09\%,\,0.07\%,\,0.16\%,\,0.78\%\}$ for the HFI channels at $\{100,\,143,\,217,\,353\}$ GHz. Moreover, such a possible error could propagate into the maps with a similar or higher impact: for instance, the calibration change quoted in~\cite{Adam:2015rua} between the 2013 and 2015 releases is about $0.8\%$ for LFI, $1\%$ for HFI at low frequencies and $2\%$ for the 353 GHz channel, and this has led to a total shift of $2.3\%$ in the cosmological parameter $A_S \, e^{-2\tau}$ (see Table 1 of~\cite{Planck:2015xua}).
However, this would depend on the detailed calibration procedure. In fact, if the instrument is calibrated within a short time period (of order a few hours), the correction is basically time-independent and constitutes an unknown signal which adds to the primordial and foreground signals. Such a signal can be reconstructed iteratively by the calibration and map-making process itself, following a procedure which has been shown to converge, leading to an error of $0.1\%$ over the entire year in the gain factor in WMAP~\cite{Hinshaw:2003fc} after ${\cal O} (10)$ iterations, and an error of less than $0.5\%$ on a single ring\footnote{A ``ring'' in the Planck scanning strategy is the set of observations made during a period of fixed spin axis pointing of the instrument~\cite{Ade:2013sjv}.} after about 5 iterations in Planck~\cite{Ade:2013eta,Tristram:2011gq}. It is unclear, therefore, whether such a correction could induce a ring-dependent bias in the gain, and how this would translate into an overall global gain factor. In fact, note that if a map is built integrating over a time of order a multiple of 1 year, the effect could possibly average to something much smaller in a globally averaged gain factor.
Note that~\cite{Tristram:2011gq} claims that the bias on the Planck global gain factor due to ignoring the relativistic corrections to the orbital dipole is actually much smaller, of relative order $0.0006\%$ rather than ${\cal O}(0.1\%)$ (this was used in the 2013 release to justify the use of a non-relativistic approximation, see appendix A.2 of~\cite{Ade:2013eta}). Such a suppression, shown on simulations, might indeed be due to averaging the effects over long times and many rings. However, given the inclusion of the relativistic terms in the Planck 2015 calibration, this estimate may no longer be applicable to the current calibration procedure. It is thus not clear what the effect could be in the Planck 2015 real data, so we stress that the relativistic terms should be properly included.\footnote{The new version of the Planck HFI calibration paper~\cite{Adam:2015vua} also argues that the inclusion of the frequency-dependent coupling between solar and orbital dipoles is expected to have a much smaller impact than the above estimates. Nevertheless its exact impact remains to be quantified on the calibration of the real data.}
The fact that such extra signals are time-dependent could produce seasonal variations of the gain at the level of $0.1\%-0.5\%$, and also a spurious time-dependent quadrupole in the maps of order $\delta T/T\approx (2-6)\times 10^{-7}$ for HFI, which would likewise exhibit seasonal variations. It should therefore be possible to find seasonal variations in the gain and in the quadrupole of the real data in the 2015 release, with amplitudes which grow with frequency, and it should be possible to subtract them.
\subsection{Pure quadrupole terms}
The correction $Q(\nu) (\boldsymbol{{\beta}_O}\cdot \boldsymbol{\hat{n}})^2$ represents a different time-dependent effect, and is smaller than the previous one by an order of magnitude. In fact, the time period is only half a year, since one can rewrite the $\cos^2(t/{\rm year})$ term as $\tfrac{1}{2}\left[1+\cos(2t/{\rm year})\right]$. If the calibration is averaged over exactly one year, the time-dependent signal would average down and we would be left with an inconsequential constant (it should not affect the estimate of the gain, because of the iterative procedure described above). If such an average is not carried out, one should carefully take this into account with the correct $Q(\nu)$ factor.
The correction term $Q(\nu) (\boldsymbol{{\beta}_S}\cdot \boldsymbol{\hat{n}})^2$ is actually the dominant relativistic signal, of order $(2-5)\times10^{-6}$, which adds to the quadrupole in the map-making procedure. However, since it represents a time-independent additional signal, it should not affect the estimate of the gain. This term has been separately studied in~\cite{Kamionkowski:2002nd,Notari:2015kla} for different reasons. In particular, it modifies in a non-negligible way the statistical significance of claims of two CMB anomalies: the quadrupole-octupole alignment and the low-quadrupole value~\cite{Notari:2015kla}.
\subsection{Quadrupole leakage into the dipole}
A quadrupole can have a leakage into other multipoles in the presence of a mask, suppressed by the masked sky fraction $1-f_{\rm sky}$. The leakage could induce for instance a time-dependent dipole. For the Doppler cross-terms, it is of order $\delta T/T\approx (2- 7)\times 10^{-7} \times \sqrt{1-f_{\rm sky}}$, which would constitute a relative correction of order ${\rm few} \times 10^{-5}$ on the measurements of the solar dipole.\footnote{In any case, it is not easy to forecast whether a small effect on several multipoles could build up and cause a non-negligible systematic effect to overall CMB parameters.}
For the quadrupole terms the leakage would be even bigger. It would result in a shift in the dipole of order $(0.1\%-0.3\%)\times \sqrt{1-f_{\rm sky}}$, which is a relative correction of order a few $0.01\%$, depending on the size of the mask and on the channel. Such a correction might imply a shift in the determination of the dipole of the {\it same} order as the present uncertainty in the Planck 2015 data, whose nominal amplitude~\cite{Adam:2015rua} is $(3364.5 \pm 2.0)\,\mu K$ with direction $(l,b)=(264.00 \pm 0.03,48.24\pm 0.02)$. In more detail, it is separately estimated as $(3365.5 \pm 3.0)\,\mu K$ with $(l,b)=(264.01 \pm 0.05, 48.26\pm 0.02)$ for LFI and $(3364.5 \pm 1.0)\,\mu K$ with $(l,b)=(263.94 \pm 0.02, 48.21\pm 0.008)$ for HFI.
Another possible source of leakage is the aberration effect itself. Aberration couples neighboring multipoles, and therefore parts of the quadrupole can leak into the dipole. Not only will the time-dependent quadrupole aberrate into a time-dependent dipole, but the time-independent quadrupole part will as well, through $\boldsymbol{\beta_O}(t)$. These effects are however completely negligible as far as calibration goes, as they are both of order a few parts in $10^{-10}$ (considering that the quadrupole itself is low compared to the theoretical expectation).
\section{Discussion}
The Planck collaboration itself now states in the 2015 release~\cite{Adam:2015vua} that the relativistic terms must be included in the calibration, and so we stress that this should be done consistently using the $Q(\nu)$ factor. Such a systematic error, even if by chance it averages to something small, can be easily corrected for, and so it should be properly accounted for and explicitly shown to be negligible or not in the final results. In fact, the need for relativistic terms seems to contradict previous claims that they are negligible~\cite{Tristram:2011gq}.
Regarding previous data, WMAP and the Planck 2013 release, the relativistic effects may also be a concern. For WMAP the calibration~\cite{Hinshaw:2003fc} is also based on the orbital dipole and carries a systematic error estimated to be about $0.2\%$, which is of the same size as the relativistic corrections. Note that in this case the frequency dependence is not worrisome, since all the channels have relatively low frequencies and so $Q(\nu)$ is very close to 1.
For the 2013 Planck release the calibration was based on the solar dipole, using the measurement of WMAP. The first concern is that a bias in the WMAP calibration may have propagated into Planck through the dipole. The second concern is that the quadrupolar correction $Q(\nu) (\boldsymbol{{\beta}_S}\cdot \boldsymbol{\hat{n}})^2$ mentioned above is a relative correction of order ${\cal O}(0.1\% - 0.3\%)$ to the non-relativistic calibration factor used in HFI 2013.
As pointed out in~\cite{Notari:2015kla}, the factor $Q(\nu)$ must also be taken into account to correctly estimate the primordial quadrupole. It is important to note that if this is not done, the quadrupole leakage into the dipole (and other multipoles) through the presence of a mask would also not be properly corrected for. It would then give rise to a spurious dipole of order $\sqrt{1-f_{\rm sky}}$ times a few parts in a thousand, and perhaps a bias in the gain factor of similar size. This is comparable to the current systematics in the measurement of the dipole itself~\cite{Adam:2015rua}.
The potentially most important impact of the corrections discussed here is on Planck \emph{polarization} data. HFI 2015 polarization is still dominated by systematic residuals at large scales coming from temperature-to-polarization leakage. Such leakage includes mismatches arising from gain uncertainty, which are relevant even at the $10^{-3}$ level~\cite{Adam:2015vua}. As a consequence, the Planck polarization maps at large scales cannot yet be used for cosmology studies and were not included in the 2015 release. It is therefore crucial (and straightforward) to remove the frequency-dependent relativistic corrections in the calibration and map-making procedure, in order to be sure of their precise quantitative impact and to improve the control of systematics in the polarization data.
\acknowledgments
We thank Alessandro Gruppuso, Massimiliano Lattanzi, Paolo Natoli and Matthieu Tristram for useful discussions.
\bibliographystyle{JHEP2015}
\section{Introduction}
The gauge/gravity correspondence \cite{Maldacena:1997re} provides a means of studying strongly coupled field theories via the analysis of more amenable, weakly coupled gravity theories. In recent years the correspondence has been applied to condensed matter systems, leading to interesting descriptions of apparent superconductors \cite{Herzog:2009xv,Hartnoll:2009sz,Horowitz:2010gk}. Its application to superconductivity is of particular interest due to the discovery of so-called high temperature superconductors in the 1980s, which fall outside the scope of current theories of superconductivity and are thought to be described by strongly coupled field theories.
In their simplest manifestations, the gravitational duals to these superconducting systems consist of a black hole in an anti-de Sitter (adS) spacetime with a complex scalar field coupled to a U(1) gauge field. An interesting characteristic of such theories is that they can remain stable despite the scalar field having a negative mass squared, provided it is not below the Breitenlohner-Freedman (BF) bound \cite{Breitenlohner:1982jf}. However, it is argued in \cite{Gubser:2008px} that the scalar field acquires negative contributions to its effective mass squared which, at low temperatures, can be such that the mass drops below the BF bound over a large enough range to render the system unstable to the formation of scalar hair\footnote{We use the term ``hair'' in the spirit of the no hair theorems, \cite{Bekenstein:1996pn,Hertog:2006rr}, as in something that leaves an imprint at radial infinity.}.
This non-trivial scalar hair has a power law fall off at the boundary and the coefficient of this fall off can be interpreted, via the correspondence, as a condensate in the boundary theory. Analysis of the boundary theory has shown it to exhibit many of the qualitative characteristics of a superconducting system. There has been a great deal of recent work in this area, studying how the system behaves in a variety of different scenarios including altering the scalar mass and spacetime dimension, changing the gauge group and adding an external magnetic field \cite{Hartnoll:2008vx, Hartnoll:2008kx,Horowitz:2008bn,Gubser:2009cg,Horowitz:2009ij,Gubser:2008wv,Peeters:2009sr,Nakano:2008xc,Kanno:2010pq}. Such systems have become known as holographic superconductors.
The majority of these models are so-called ``bottom up'' models\footnote{For an example of some ``top down'' approaches see \cite{Gubser:2009qm,Gauntlett:2009dn, Gauntlett:2009bh}.} where the gravity theory, often Einstein gravity, is, according to the gauge/gravity correspondence, thought of as a low energy effective theory for some overarching string theory. In \cite{Gregory:2009fj,Barclay:2010up,Gregory:2010yr,Pan:2009xa,Pan:2010at,Cai:2010cv,Brihaye:2010mr,Siani:2010uw,Jing:2010cx} the authors studied the stability of these systems to the inclusion of the Gauss-Bonnet (GB) invariant in the gravitational action, which is believed to be the $\mathcal{O}(\alpha^\prime)$ correction to some low energy string theories \cite{Gross:1986mw,Metsaev:1987zx}. Thus, by studying this system the authors were studying the stability of the model to the inclusion of higher order corrections. The findings of these papers showed that the qualitative features of the holographic superconductor were stable to higher order corrections but the details changed. These papers, however, used either only one particular mass to focus their analysis or confined their studies to the probe, or non-backreacting, limit. This has meant that the full panorama of the GB invariant on the backreacting superconductor has remained unknown.
This paper expands upon \cite{Barclay:2010up}, studying the fully backreacting holographic superconductor in GB gravity for a variety of scalar masses and for positive and negative GB coupling constant $\alpha$. In a recent paper, \cite{Buchel:2009sk}, it was suggested that causality constraints from hydrodynamics limit the GB coupling to $\alpha \in [-0.711,0.113]$; in this work, however, we permit the full range $\alpha\in(-\infty,L^2/4]$ in our study for greater understanding of its effect. We study the critical temperature of the superconducting system, both analytically and numerically, for a variety of scalar masses. We find that in a region of parameter space close to both the BF bound and the upper limit of $\alpha$, numerical analysis is possible even at large, super-planckian scales, and that the effect of backreaction there is to increase the critical temperature, not reduce it, as is its tendency in other regions of parameter space. The study of negative $\alpha$ shows $T_c$ to increase as $\alpha$ drops, and the effect of backreaction is diminished as $\alpha$ gets large and negative.
We then look at the zero temperature limit of this superconductor, first showing analytically that there can be no regular superconducting solutions at zero temperature for systems with a tachyonic scalar field, and placing strict constraints on systems where the mass is positive. Following \cite{Horowitz:2009ij} we relax the constraint of regularity for a system without a black hole, allowing logarithmically divergent terms. We find that such solutions are incompatible with the concept of GB gravity being a perturbation of Einstein gravity. In the absence of reliable numerical zero temperature solutions we then use two analytic arguments to place bounds on the critical values of the constants of the theory about which the zero temperature phase transitions may occur.
We also study the effect that the GB coupling, backreaction and the mass have on the conductivity of the system, finding largely quantitative not qualitative alterations, the exception being the effect of backreaction on the appearance of quasi-normal modes in the conductivity for a system in the vicinity of the BF bound. We find that increasing backreaction quickly removes the appearance of these quasi-normal modes within the temperature range that we are numerically able to study.
The paper is organized as follows: Section 2 provides a brief overview of the holographic superconductor in GB gravity and a presentation of the equations of motion that are to be solved. In section 3 we study the nature of the condensate and in particular the critical temperature of the system for which we use both analytic and numerical results. In section 4 we study the zero temperature limit. In section 5 we study the conductivity of the system. Finally we conclude in section 6.
\section{The bulk}
In this section we review the set up introduced in \cite{Barclay:2010up}. This consisted of an Einstein Gauss-Bonnet (EGB) gravitational action coupled to a massive charged complex scalar field and a U(1) gauge field
\begin{align}\label{action}
S&=\frac{1}{2\kappa^2}\int d^5x \sqrt{-g} \left[
-R + \frac{12}{L^2} + \frac{\alpha}{2} \left(
R^{abcd}R_{abcd} -4R^{ab}R_{ab} + R^2
\right) \right]\nonumber\\
&\hspace{3.5mm}+\int d^5x\sqrt{-g}\left[
-\frac{1}{4}F^{ab}F_{ab}+|\nabla_a\psi -iqA_a\psi|^2
-V(|\psi|)
\right]\,,
\end{align}
where $\kappa^2 = 8\pi G_5$ provides an explicit Planck scale in the system, $g$ is the determinant of the metric, $R$, $R_{abcd}$ and $R_{ab}$ are the Ricci scalar, Riemann curvature and Ricci tensors respectively. The negative cosmological constant, $-6/L^2$, has been written in terms of a length scale, $L$ and $\alpha\in(-\infty,L^2/4]$ is the GB coupling constant. $\bf{A}$ is the gauge field and $\psi$ is a scalar field with charge $q$.
In this paper we will study the minimal potential consisting of a single term, quadratic in $\psi$
\begin{align}
V(|\psi|)=m^2|\psi|^2\label{Pot1},
\end{align}
where $m$ is the mass of scalar field.
In order to examine holographic superconductivity we look
for charged, planar, black hole solutions\footnote{For studies of GB black holes in adS space see \cite{Boulware:1985wk,Cai:2001dz,Charmousis:2002rc}.} with or without nontrivial scalar hair. We do this by using the following metric and static field Ans\"atze
\begin{gather}
ds^2 = f(r)e^{2\nu(r)}dt^2 - \frac{dr^2}{f(r)}
- \frac{r^2}{L_e^2}(dx^2+dy^2+dz^2),\label{metric}\\
A_a=\phi(r)\delta^0_a,\quad\quad\quad\quad\quad\quad \psi=\psi(r),
\end{gather}
where without loss of generality $\psi$ can be taken to be real. $L_e$ is the effective asymptotic lengthscale of this space time given by
\begin{eqnarray}\label{Leff}
L^2_{\rm e}=\frac{L^2}{2}\left(1+\sqrt{1-\frac{4\alpha}{L^2}}\right)
\to \left\{
\begin{array}{rl}
\frac{L^2}{2} \ , & \quad {\rm for} \ \alpha \rightarrow \frac{L^2}{4} \\
L^2 \ , & \quad {\rm for} \ \alpha \rightarrow 0 \\
\infty \ , & \quad {\rm for} \ \alpha \rightarrow -\infty.
\end{array}\right.
\end{eqnarray}
The fully coupled system of gravity, gauge and scalar equations takes the simple form\footnote{Note that there is an $ij$ component of the EGB equations, but it is not independent of (\ref{nueq}) and (\ref{feq}).}
\begin{align}
&\phi^{\prime\prime}+\left( \frac{3}{r}-\nu^\prime
\right)\phi^\prime -2q^2\frac{\psi^2}{f}\phi=0\,, \label{phieq}\\
&\psi^{\prime\prime}+\left( \frac{3}{r}+\nu^\prime+\frac{f^\prime}{f}
\right)\psi^\prime +\left(\frac{q^2\phi^2}{f^2e^{2\nu}}
-\frac{m^2}{f} \right)\psi=0\,, \label{psieq}\\
&\left(1-\frac{2\alpha f}{r^2} \right)\nu^\prime
=\frac{2\kappa^2}{3}r\left(
\psi^{\prime 2}+\frac{q^2\phi^2\psi^2}{f^2e^{2\nu}}\right) \label{nueq} ,\\
&\left(1-\frac{2\alpha f}{r^2} \right)f^\prime+\frac{2}{r}f
-\frac{4r}{L^2} =-\frac{2\kappa^2}{3}r\left[
\frac{\phi^{\prime2}}{2e^{2\nu}}+m^2\psi^2+
f\psi^{\prime2}+\frac{q^2\phi^2\psi^2}{fe^{2\nu}} \right]. \label{feq}
\end{align}
where a prime denotes a derivative with respect to $r$.
In order to solve these equations we need to impose boundary conditions
at the horizon and the adS boundary. The position of the horizon, $r_+$, is defined by $f(r_+)=0$. Demanding regularity of the matter fields and metric at the horizon
gives
\begin{eqnarray}
\phi(r_+)=0,\hspace{1cm}
\psi^\prime(r_+)=\frac{m^2}{f^\prime(r_+)}\psi(r_+) \ .
\end{eqnarray}
At the boundary we want the spacetime to asymptote adS in standard coordinates so we shall look for a solution with
\begin{eqnarray}
&&\nu \to 0 \;\;\;,\;\;\;\;
f(r) \sim \frac{r^2}{L_e^2}\;\;\; {\rm as} \;\; r \to \infty\,.
\end{eqnarray}
Asymptotically the solutions of $\phi$ and $\psi$ are then found to be:
\begin{eqnarray}
\phi(r) \sim P - \frac{Q}{r^2}\,,\hspace{1cm}
\psi(r) \sim \frac{C_{-}}{r^{\Delta_-}}+\frac{C_{+}}{r^{\Delta_+}}\,,
\label{r:boundary}
\end{eqnarray}
where $Q$ is the charge of the black hole (up to a factor of $4\pi$) and $\Delta_\pm=2\pm\sqrt{4+m^2L_e^2}$. We choose to set $C_{-}=0$ and interpret, $ \langle {\cal O}_{\Delta_+} \rangle \equiv C_{+}$, where ${\cal O}_{\Delta_+}$ is a boundary operator with the conformal
dimension $\Delta_+$. If $\Delta_\pm > 3$ the opposite choice of $C_{+}=0$ and $ \langle {\cal O}_{\Delta_-} \rangle \equiv C_{-}$ does give normalizable solutions, but will not be considered in this work. An example of where such a choice is made for a system with Einstein gravity can be found in \cite{Horowitz:2008bn}.
This paper is concerned with the effect that varying the mass has on the superconductor. We shall choose a sample of masses, greater or equal to that determined by the BF bound, $m^2=-4/L_e^2$, in order to observe the effect on the boundary theory. Each choice of mass will be fixed with respect to the adS lengthscale, $L_e$, in order for the dimension of the boundary operator to remain constant with respect to variations in $\alpha$.
In the next section we solve (\ref{phieq}-\ref{feq}) numerically, reading this $1/r^{\Delta_+}$ fall off of the scalar field to obtain $ \langle {\cal O}_{\Delta_+} \rangle$ for a range of temperatures given by
\begin{eqnarray}
T = \frac{1}{4\pi} f' (r)e^{\nu(r)}\bigg|_{r=r_+}\ .
\label{hawking}
\end{eqnarray}
For numerical convenience we will use the scaling symmetries of the metric, \cite{Barclay:2010up}, to set $L=Q=q=1$. With this rescaling $\kappa^2$ is the parameter used to vary the backreaction of the fields on the metric; if $\kappa^2=0$, referred to as the probe limit, the fields decouple from the metric entirely.
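To make the shooting procedure concrete, the following minimal sketch (ours; not the production code, and the parameter values are purely illustrative) integrates (\ref{phieq}) and (\ref{psieq}) out from the horizon in the probe limit, $\kappa^2=0$ with $\alpha=0$, $q=L=r_+=1$ and $m^2=-3$, so that $f=r^2-1/r^2$, $\nu=0$ and $\Delta_\pm=\{3,1\}$; a hairy solution corresponds to a zero of the source coefficient $C_-$:
\begin{verbatim}
# Probe-limit shooting sketch: integrate the scalar and gauge field
# equations out from the horizon of planar Schwarzschild-adS (r_+ = 1)
# and look for a zero of the source coefficient in psi ~ C_-/r + C_+/r^3.
import numpy as np
from scipy.integrate import solve_ivp

m2 = -3.0                                  # m^2 L_e^2 = -3  =>  Delta_+ = 3

def f(r):  return r**2 - 1.0/r**2          # background metric function
def fp(r): return 2.0*r + 2.0/r**3

def rhs(r, y):
    phi, dphi, psi, dpsi = y
    ddphi = -(3.0/r)*dphi + 2.0*psi**2*phi/f(r)
    ddpsi = -(3.0/r + fp(r)/f(r))*dpsi - (phi**2/f(r)**2 - m2/f(r))*psi
    return [dphi, ddphi, dpsi, ddpsi]

def C_minus(p, E, eps=1e-4, R=100.0):
    """Regular horizon data: phi(1) = 0, phi'(1) = E, psi(1) = p,
    psi'(1) = m^2 p / f'(1); returns the coefficient of the 1/r tail."""
    dpsi_h = m2*p/fp(1.0)
    y0 = [E*eps, E, p + dpsi_h*eps, dpsi_h]
    sol = solve_ivp(rhs, (1.0 + eps, R), y0, rtol=1e-8, atol=1e-10)
    psi, dpsi = sol.y[2, -1], sol.y[3, -1]
    return 0.5*(3.0*R*psi + R**2*dpsi)     # solves psi = C_-/R + C_+/R^3

E = 8.0                                    # phi'(r_+): illustrative value
for p in np.linspace(0.1, 2.0, 8):
    print("psi(r_+) = %.3f   C_- = %+.4e" % (p, C_minus(p, E)))
# A sign change of C_- brackets a hairy solution: root-find on p (e.g. with
# scipy.optimize.brentq), then read off the condensate C_+ from the tail.
\end{verbatim}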
\section{The boundary}
We wish to study the effect that the inclusion of backreaction has on the GB holographic superconductor for the full range of masses. We begin by analysing the operator, $\langle {\cal O}_{\Delta_+} \rangle $, as a function of temperature, as seen in figure \ref{CondensatePlot}. This plot shows that at high temperatures the expectation value of the boundary operator is zero, which corresponds to the scalar field having a trivially zero profile in the bulk. As the temperature drops below some critical temperature, $T_c$, the bulk scalar obtains a non-zero profile which, on the boundary, is interpreted as the operator `condensing' out of its vacuum. The critical temperature is a feature particular to each system and can simply be read off from such plots.
\FIGURE{
\centering
\input{OvsT.tex}
\caption{Plot of the condensate as a function of temperature; $\alpha=\kappa^2=0$, $m^2=-3/L_e^2$.}
\label{CondensatePlot}
}
Varying $\alpha$, $\kappa^2$ and $m^2$ produces qualitatively similar plots to those of figure \ref{CondensatePlot} with the key differences being a variation in $T_c$. As well as obtaining the exact value of $T_c$ from such numerically produced plots, a quicker, but rougher understanding can be obtained from an analytically generated lower bound on $T_c$ first introduced in \cite{Barclay:2010up}. This bound is found by studying the scalar field equation in the vicinity of $T_c$. At temperatures just below $T_c$, the scalar field is small, $\psi\ll 1$, and the metric and gauge field will have the form
\begin{align}
\phi_0(r) &= \frac{Q}{r_+^2} \left ( 1 - \frac{r_+^2}{r^2} \right),\label{eq:GBAdSRNBlackHole1}\\
f_0(r) &= \frac{r^2}{2\alpha} \left [ 1 \pm \sqrt{1 - \frac{4\alpha}{L^2} \left ( 1 - \frac{r_+^4}{r^4} \right )+ \frac{8\alpha\kappa^2 Q^2}{3r^4 r_+^2} \left ( 1 -\frac{r_+^2}{r^2}\right )} \right],\label{eq:GBAdSRNBlackHole}
\end{align}
up to corrections of order ${\cal O} (\psi^2)$. Thus, in this region, the scalar field satisfies a linear equation, (\ref{psieq}), with $f$ and $\phi$ taking their background values. Letting $Y = r^3\psi$ and rearranging, the field equation for $Y$ implies that {\it if} a solution exists, then the integral
\begin{align}
\int_{r_+}^\infty \frac{1}{r^3} \left [\frac{\phi_0^2}{f_0} + \frac{3}{L_e^2}
+ \frac{3f_0}{r^2} - \frac{3f_0'}{r} \right ]dr = - \int_{r_+}^\infty
\frac{f_0 Y^{\prime2}}{r^3Y^2}\, dr \leq 0
\label{eq:LowerBoundInt}
\end{align}
is negative. For much of parameter space this integral is negative at large $T$, and positive as $T\to0$; thus observing where it changes sign provides a lower bound on $T_c$. As is noted in \cite{Barclay:2010up}, the negativity of this integral does not imply the existence of a solution to the linearised equation near $T_c$; it is simply a necessary condition on one if it exists. Therefore, any result obtained from the lower bound must be supported by numerically calculated values of $T_c$.
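As an illustration of how this bound is evaluated in practice, the sketch below (ours; the values of $\alpha$ and $\kappa^2$ are illustrative) computes the integral in (\ref{eq:LowerBoundInt}) on the background (\ref{eq:GBAdSRNBlackHole1}) and (\ref{eq:GBAdSRNBlackHole}) while scanning $r_+$, with $L=Q=q=1$; the temperature at which the integral changes sign is the lower bound on $T_c$:
\begin{verbatim}
# Evaluate the lower-bound integral on the GB Reissner-Nordstrom background
# for a scan of horizon radii r_+ (L = Q = q = 1; alpha, kappa^2 illustrative).
import numpy as np
from scipy.integrate import quad

alpha, kap2 = 0.125, 0.05
Le2 = (1.0 + np.sqrt(1.0 - 4.0*alpha))/2.0          # effective AdS scale, L = 1
# For m^2 = -3/L_e^2 the 3/L_e^2 term in the integrand coincides with -m^2.

def f0(r, rp):
    disc = (1.0 - 4.0*alpha*(1.0 - rp**4/r**4)
            + 8.0*alpha*kap2/(3.0*r**4*rp**2)*(1.0 - rp**2/r**2))
    return r**2/(2.0*alpha)*(1.0 - np.sqrt(disc))   # Einstein branch

def bound_integral(rp, h=1e-6):
    phi0 = lambda r: (1.0 - rp**2/r**2)/rp**2
    df0  = lambda r: (f0(r + h, rp) - f0(r - h, rp))/(2.0*h)
    intg = lambda r: (phi0(r)**2/f0(r, rp) + 3.0/Le2
                      + 3.0*f0(r, rp)/r**2 - 3.0*df0(r)/r)/r**3
    val, _ = quad(intg, rp*(1.0 + 1e-6), 1e4*rp, limit=200)
    return val

def temperature(rp, h=1e-6):                        # T = f_0'(r_+)/(4 pi)
    return f0(rp + h, rp)/h/(4.0*np.pi)

for rp in np.linspace(0.6, 1.4, 9):                 # widen if no sign change
    print("r_+ = %.2f  T = %.4f  I = %+.4f"
          % (rp, temperature(rp), bound_integral(rp)))
\end{verbatim}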
Figures \ref{TcWithAlpha} to \ref{NegA} show both the analytic lower bound (as lines) and numerical values (as points) of $T_c$ for different values of $\alpha$, $\kappa^2$ and $m^2$. Figure \ref{TcWithAlpha} demonstrates the dependence of $T_c$ on $m^2$, focussing on $\alpha\geq0$. It is possible to find superconducting solutions for $m^2>0$, indeed we found solutions up to a mass of $m^2\approx0.4$ for $\kappa^2=0$, reducing slightly as $\kappa^2$ was increased. The findings of \cite{Kim:2009kb} suggest that solutions exist at even larger values but that numerical solutions become difficult to obtain due to an intriguing ``warping'' of the space of permissible boundary conditions. The solutions that we did obtain for a small positive mass were only marginally different to those of $m^2=0$ and so these plots have not been included. In the plots that are shown we see that in the majority of parameter space the effect of increasing backreaction is to reduce the value of $T_c$. However, as $\alpha\to L^2/4$ and $m^2\to -4/L_e^2$ the effect of backreaction is reversed and actually increases $T_c$. This can be explored in more detail by plotting $T_c$ as a function of $\kappa^2$, as seen in figure \ref{CondKappa}. This plot clearly shows that in this very narrow region of parameter space the effect of backreaction can be to increase the critical temperature of the system substantially above its value in the probe limit. The ability to reach super-planckian values of backreaction has been verified numerically up to $\kappa^2\approx150$. It is also interesting to note that as one approaches this regime the lower bound on the critical temperature becomes significantly less accurate.
\FIGURE{
\begin{tabular}{cc}
(a) $m^2=0$ & (b) $m^2=-1/L_e^2$\\
\includegraphics[width=7.25cm]{./TcVSAlphaM0.eps} &
\includegraphics[width=7.25cm]{./TcVSAlphaM1.eps} \\
(c) $m^2=-2/L_e^2$ & (d) $m^2=-3/L_e^2$ \\
\includegraphics[width=7.25cm]{./TcVSAlphaM2.eps} &
\includegraphics[width=7.25cm]{./TcVSAlphaM3.eps} \\
(e) $m^2=-3.75/L_e^2$ & (f) $m^2=-4/L_e^2$\\
\includegraphics[width=7.25cm]{./TcVSAlphaM3p75.eps} &
\includegraphics[width=7.25cm]{./TcVSAlphaM4.eps}
\end{tabular}
\caption{Plot of $T_c$ as a function of $\alpha$ for a variety of $\kappa^2$; Lines represent the analytic lower bound, and the points represent numerically obtained values. The solid black lines and circular points corresponds to $\kappa^2=0.0$, solid grey lines and square points to $\kappa^2=0.05$, black (large) dashed with triangular points to $\kappa^2=0.2$, grey (large) dashed and diamond points to $\kappa^2=1$ and black (small) dashed to $\kappa^2=5$.}
\label{TcWithAlpha}
}
\FIGURE{
\centering
\includegraphics[width=9cm]{./TcVSKappa.eps}
\caption{Plot of $T_c$ as a function of $\kappa^2$ at $\alpha=0.24999$ for different masses; For each mass the analytic lower bounds are represented as lines and numerical values as points. The black solid line with circular points corresponds to $m^2=-4/L_e^2$, grey with square points to $m^2=-3.75/L_e^2$ and the black dashed line with triangular points to $m^2=-3/L_e^2$.}
\label{CondKappa}
}
It is straightforward to extend this analysis to $\alpha<0$, as shown in figure \ref{NegA}. In this regime the effect of altering the mass is less marked, so a single mass of $m^2=-2/L_e^2$ has been chosen as a representative sample. These plots show that as $\alpha$ becomes more negative the critical temperature increases. Whilst this increase becomes more and more gradual as $\alpha$ is reduced, it appears that an arbitrarily large $T_c$ can be obtained by an appropriate choice of $\alpha$. These plots also show that the effect of backreaction is, in all cases, to reduce $T_c$, but as $\alpha$ becomes large and negative its effect is diminished. This can be understood by looking at the action, (\ref{action}). In the Einstein limit the curvature of the spacetime scales with $\kappa$. When $|\alpha|$ is large the higher order curvature terms dominate, meaning the curvature scales as $\sqrt{\kappa}$, and thus the effect of backreaction on the spacetime is reduced.
\FIGURE{
\begin{tabular}{cc}
(a) & (b) \\
\includegraphics[width=7.25cm]{TcVSNegAlpha2.eps} &
\includegraphics[width=7.25cm]{TcVSNegAlpha1.eps}
\end{tabular}
\caption{Plot of $T_c$ with $\alpha$ for $m^2=-2/L_e^2$. (a) shows the region $\alpha\in[-1,0.25]$ and (b) shows the same plot but for $\alpha\in[-100,0.25]$. The lines correspond to the lower bound, points to numerically obtained values of $T_c$. The black solid lines (and circular points) correspond to $\kappa^2=0$; solid grey (and square points) to $\kappa^2=0.05$; dashed black to $\kappa^2=0.2$ and (smaller) dashed black (with diamond points) to $\kappa^2=5$. }
\label{NegA}
}
In an attempt to provide a clearer picture of the characteristics noted above we can use the analytically calculated lower bound on $T_c$ and scan through the parameter space available to generate the lower bound on a surface of $T_c$ as seen in figure \ref{LowerBoundSurface}.
\FIGURE{
(a) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (b) \\
\includegraphics[scale=0.7125]{./SurfaceMfour_raster.eps}
\includegraphics[scale=0.7125]{./SurfaceM3_raster.eps} \\
\qquad\qquad(c) \\
\includegraphics[scale=0.7125]{./SurfaceM0_raster.eps}
\caption{Plots (a), (b) and (c) show the surface of a lower bound on $T_c$ for $m^2=-4/L_e^2$, $-3/L_e^2$ and $0$ respectively.}
\label{LowerBoundSurface}
} Whilst these plots are only a lower bound to the true surface of $T_c$ they do exhibit some of the interesting characteristics of the system that have been supported by exact numerical results. We see immediately how altering the mass of the scalar field dramatically alters the nature of this superconducting system.
\section{Zero Temperature Superconductors}
There is a great deal of interest in the zero temperature limit of these superconducting systems and in particular in the phase transitions that happen there. Most phase transitions are triggered by the thermal fluctuations of the system but at zero temperature, where there are no thermal fluctuations, phase transitions are triggered by the quantum fluctuations associated with Heisenberg's uncertainty principle. The critical points about which these zero temperature phase transitions occur are called quantum critical points (QCPs). It is thought that in certain regimes the effect of the QCP can extend to finite temperature giving rise to unusual physical phenomena. For real superconducting systems it is impossible to reach the absolute zero temperature required to study these QCPs. However, this is not the case with our theoretical models leading to a great deal of recent activity in this direction.
For holographic superconductors the temperature of the boundary theory is governed by the temperature of the black hole in the bulk spacetime. The temperature of a black hole in our system is given by
\begin{align}
T=\frac{1}{4\pi}f^\prime(r)e^{\nu(r)}\bigg|_{r=r_+}. \label{eq:HawkTemp}
\end{align}
In general the temperature of a black hole can approach zero in a variety of ways depending on the type of black hole. For example, above the critical temperature of our system the black holes are simply Reissner-Nordstr\"om black holes in GB gravity. Such black holes have
\begin{align}
f^\prime(r_+)=\frac{4r_+}{L^2}-\frac{4\kappa^2Q^2}{3r_+^5}
\end{align} which means that the mass and charge can balance such that the temperature goes to zero at finite $r_+$. This is not the case for the uncharged Schwarzschild black hole that arises when $\kappa^2=0$: this has $f^\prime(r_+)=4r_+/L^2$ and the zero temperature limit is approached as $r_+\rightarrow0$. To study this limit of the holographic superconductor we must investigate the hairy black hole. A priori it is not obvious how such black holes approach zero temperature: is it at some finite $r_+$ or as $r_+\rightarrow0$? From the numerical solutions calculated above it seems that the latter may be true, since the temperature is reduced by reducing $r_+$ and within the range studied there have been no apparent zero temperature solutions at finite $r_+$. However, it is numerically very difficult to approach $r_+=0$ from some finite value, and it is possible that a zero temperature, finite $r_+$ solution exists beyond the scope of the numerics. In \cite{Horowitz:2009ij} the authors calculated numerical results for holographic superconductors in a regime where $r_+$ is precisely zero, the results of which, reassuringly, seemed to correspond to the asymptote of their finite $r_+$ solutions. However, there are problems with this approach. Firstly, the $r_+=0$ spacetime is not continuously connected to the finite $r_+$ spacetime, which introduces an element of uncertainty into the results. Also, the results that they obtain are singular, which raises additional concerns.
We can attempt to find information about the true nature of the zero temperature superconductor with a little investigation of the field equations (\ref{phieq}) to (\ref{feq}). In particular, we will ask whether these equations permit the existence of regular, zero temperature solutions with a non-trivial scalar field. We will show that this is not the case. We extend the work of \cite{FernandezGracia:2009em} by showing that there are no regular zero temperature solutions, including those with $r_+=0$, for scalars with tachyonic masses. We also address scalars with $m^2\geq0$.
We begin by imposing that our system be regular. This will be true if the energy momentum tensor, $T_{\mu\nu}$, is non-singular in coordinates that are locally regular at the horizon, or indeed, at $r=0$ if there is no horizon. Using Eddington-Finkelstein coordinates defined by $v=t+r^{*}$ and $r=\rho$ where $r^{*}$ is the tortoise coordinate defined by
\begin{align}
& dr^*=\frac{dr}{fe^{\nu}},
\end{align}
the following combinations of the energy momentum tensor must be regular
\begin{align}
T_{vv}&=T_{tt}=fe^\nu T^t_t,\label{reg3}\\
T_{v\rho}&=\frac{-T_{tt}}{fe^\nu}=-e^\nu T^t_t,\\
T_{\rho\rho}&=T_{rr}+\frac{T_{tt}}{f^2e^{2\nu}}=\frac{1}{f}(T^t_t-T_r^r). \label{reg}
\end{align}
(\ref{reg}) gives the most restrictive constraint, namely that
\begin{align}
\frac{\phi^2\psi^2}{f^2e^{2\nu}}+{\psi^\prime}^2 <\infty
\end{align} must be finite and hence each of the individual terms must also be finite. We wish to assess whether the field equations permit these constraints to hold for a non-trivial solution at zero temperature. The field equations are unchanged by the coordinate transformation and we are free to use (\ref{phieq}) to (\ref{feq}) in our analysis.
Note that (\ref{nueq}), together with the regularity of $(T^t_t-T^r_r)/f$, implies that $\nu^\prime(r_+)$ is regular. If $\nu^\prime(r_+)$ is regular then $\nu(r_+)$ is regular and $e^{\nu(r_+)}\neq0$. Thus, from the definition of the temperature of our black hole, (\ref{eq:HawkTemp}), the requirement of zero temperature implies that $f^\prime(r_+)=0$.
We shall now study what effect this constraint has on the scalar field equation
\begin{align}
f\psi^{\prime\prime}+\left(\frac{3}{r}+\nu^\prime+\frac{f^\prime}{f}\right)f \psi^\prime+\left( \frac{q^2\phi^2}{f e^{2\nu}}- m^2\right)\psi=0. \label{eq:PsiEqTimesF}
\end{align}
The terms containing $\psi^{\prime\prime}$ and $\psi^{\prime}$ go to zero at the horizon by the regularity of $\psi^\prime(r_+)$ and the fact that $f^\prime(r_+)=0$, thus the last term must also go to zero. This implies that either $\psi(r_+)=0$ or $m^2=\frac{q^2\phi^2}{fe^{2\nu}}$. If $\psi(r_+)\neq0$ then, by (\ref{reg}), $\frac{q^2\phi^2}{fe^{2\nu}}\to 0 $ which implies that $m^2=0$. Our analysis does not rule out the existence of regular zero temperature solutions for this choice of mass. In fact, it seems likely that such solutions do exist in light of \cite{Horowitz:2009ij}, where similar solutions were found for a system in four dimensional Einstein gravity. We leave the search for such solutions in this system to future research.
To investigate non-zero masses we consider $\psi(r_+)=0$. If $m^2\leq0$ then all the leading order terms of (\ref{eq:PsiEqTimesF}) have the same sign and cannot balance irrespective of whether $r_+$ is finite or zero. Thus there can be no regular, superconducting solutions at zero temperature for scalar fields with tachyonic masses.
Turning to $m^2>0$, where $\psi(r_+)=0$, it is possible to place strict constraints on these masses if solutions exist. Using the field equation for $f$:
\begin{align}
\left(1-\frac{2\alpha f}{r^2} \right)f^\prime+\frac{2}{r}f
-\frac{4r}{L^2} =-\frac{2\kappa^2}{3}r\left[
\frac{\phi^{\prime2}}{2e^{2\nu}}+m^2\psi^2+
f\psi^{\prime2}+\frac{q^2\phi^2\psi^2}{fe^{2\nu}} \right], \label{eq:Fequation2}
\end{align}
we see that if $\phi^\prime(r_+)=0$ then $r_+=0$ and $f(r)\sim r^2/L_e^2$ as $r\to0$. From (\ref{eq:PsiEqTimesF}) we can then infer that $\phi(0)=0$ as otherwise $q^2\phi^2\psi/fe^{2\nu}$ would be the only term at leading order. Then from the field equation for $\phi$
\begin{align}
\phi^{\prime\prime}+\phi^\prime\left( \frac{3}{r} -\nu^\prime \right)-2q^2\frac{\psi^2}{f}\phi=0, \label{eq:PhiEq2}
\end{align}
we see that the last term is sub-dominant and the remaining terms cannot cancel.
If $\phi^\prime(r_+)\neq0$, (\ref{eq:PhiEq2}) implies that $r_+\neq0$. Then the leading and next to leading order terms of (\ref{eq:Fequation2}) give
\begin{align}
\frac{\phi^{\prime 2}}{e^{2\nu_+}}&=\frac{12}{L^2\kappa^2}, & & f^{\prime\prime}_+=\frac{24}{L^2}.
\end{align}
By using these expressions in (\ref{eq:PsiEqTimesF}) we obtain an equation for the allowed masses at zero temperature
\begin{align}
m^2=\frac{12}{L^2}(n^2+n)+\frac{q^2}{\kappa^2}, \label{eq:scalarMassZeroT}
\end{align}
where $n\geq1$ is the leading power of $(r-r_+)$ in an expansion of $\psi$ about $r=r_+$.
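Explicitly, with $f\simeq \tfrac{1}{2}f^{\prime\prime}_+(r-r_+)^2$, $\phi\simeq \phi^\prime_+(r-r_+)$ and $\psi\simeq \psi_n(r-r_+)^n$, the surviving terms of (\ref{eq:PsiEqTimesF}) at order $(r-r_+)^n$ are
\begin{align}
\frac{12}{L^2}\left[n(n-1)+2n\right]\psi_n+\left(\frac{q^2}{\kappa^2}-m^2\right)\psi_n=0\,,
\end{align}
which rearranges to (\ref{eq:scalarMassZeroT}).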
This expression shows that there can be no regular solutions for $0<m^2\leq24/L^2$. Thus if positive mass solutions do exist they can only be found at very large $m^2$ and/or backreaction; substantially above the values for which finite temperature solutions have been found. We also see that, unlike the finite temperature system, the ``allowed'' values of $m^2$ are directly related to $\kappa^2$. These observations suggest that this positive mass result may be spurious.
A key result of the above analysis is that the zero temperature limit of our superconducting systems with tachyonic
scalars is not regular. We now wish to investigate this in a little more detail. In \cite{Horowitz:2009ij} a zero temperature solution was presented in which the spacetime, with no black hole, possessed logarithmic divergences as $r\to0$.
Such solutions can be found for our system in the Einstein limit but it becomes clear that they cannot be consistent with the idea of GB gravity as a perturbative expansion of Einstein gravity. The reason is that the logarithmic divergences of the metric cause the curvature invariants, such as the Riemann tensor and Ricci scalar, to diverge at $r=0$. Since the GB terms involve higher order combinations of these invariants than Einstein gravity this singular behaviour will immediately be dominated by the GB terms as $\alpha$ becomes non-zero. If this is the case the concept of GB gravity being a perturbative correction to Einstein gravity is destroyed and the validity of such a solution must be questioned.
The manifestation of this problem on the fields themselves can be seen from a near horizon expansion. Following \cite{Horowitz:2009ij}, for $\alpha=0$, one can find a set of boundary conditions consistent with the field equations
\begin{align}
&\psi=\sqrt{\frac{3}{\kappa^2}}(-\log r)^{1/2}+...,& & f=\frac{m^2}{2} r^2 \log r+..., \nonumber \\
&\phi=\phi_0 r^\beta (-\log r)^{1/2}+...,& & e^{2\nu}=K(\log r)^{-1} +...\label{logExpansion}
\end{align}
where $\beta=-1+\sqrt{1-\frac{12q^2}{\kappa^2 m^2}}$ and $\phi_0/K$ is a free parameter that can be used to tune the system. This Ansatz is consistent with the field equations provided $4q^2>-m^2\kappa^2$ and, after integrating the fields out from the horizon, one finds the asymptotic profiles to be consistent with (\ref{r:boundary}). Unlike in the four dimensional system, we were unable to find an appropriate value of $\phi_0/K$ to remove the source of the boundary operator. As a result these solutions do not strictly describe a holographic superconductor. However, they are valid solutions of (\ref{phieq}) to (\ref{feq}) and can be used to demonstrate our point.
The problem for non-zero $\alpha$ arises because $\alpha$ appears in the equations of motion, (\ref{phieq}) to (\ref{feq}), like $(1-\frac{2\alpha f}{r^2})$. From (\ref{feq}) it is possible to show that if $f(r)=f_sr^s(-\log r)^t$ then $s\leq2$ which means that $f/r^2$ has at least a logarithmic singularity for $t>0$. Since for $\alpha=0$, $t=1>0$ this means turning on $\alpha$ immediately incorporates new, singular behaviour at $r=0$ which destroys the perturbative relation between GB and Einstein gravity.
We have shown that there can be no regular solutions to our system at zero temperature, except possibly for massless or very massive scalars, and we have also given cause for caution when considering non-regular solutions. It is possible that consistent, non-regular, zero temperature solutions can be found that respect the relation between Einstein gravity and GB gravity, but it seems unlikely. However, we can still extract information about the nature of this system in the zero temperature regime in the absence of such solutions. There are two analytic techniques which can provide bounds on the critical values of the constants at the QCP. The first bound is found by simply taking the zero temperature limit of the analytic lower bound argument for $T_c$, in other words, taking the zero temperature limit of plots such as those in figure \ref{LowerBoundSurface}. These are shown as blue lines in figure \ref{T0Bounds}.
Another bound can be found by studying the stability of the vacuum solution to forming scalar hair. Following \cite{Hartnoll:2009sz,Denef:2009tp} we study a
perturbation of $\psi$ about the vacuum solution by using the Ansatz
$\psi=\psi(r)e^{-i\omega t}$. Assuming $\psi(r) \ll 1$ the effects of backreaction can be ignored (since its effects occur at $\mathcal{O}(\psi^2)$) and the scalar field equation becomes
\begin{align}\label{stabilityEqn}
\psi^{\prime\prime}+\left(\frac{3}{r} +\frac{f^\prime}{f}
\right)\psi^\prime+\left(\frac{\omega^2}{f^2}-\frac{2q\phi\omega}{f^2}-\frac{q^2\phi^2}{f^2}-\frac{m^2}{f}\right)\psi=0,
\end{align}
with $\phi$ and $f(r)$ taking their vacuum values (\ref{eq:GBAdSRNBlackHole1}) and (\ref{eq:GBAdSRNBlackHole}).
The system is unstable if the field equation shows this small perturbation to diverge. This will be the case if there is a normalizable solution to (\ref{stabilityEqn}) with ingoing boundary conditions at the horizon such that $\omega$ has a positive imaginary part. In general this equation can be solved numerically providing us with a bound on the critical values of the constants for general $T$. However, we are just interested in the $T=0$ case for which an analytic expression can be obtained.
For zero temperature our (extremal) GB Reissner Nordstr\"om black hole has
\begin{align}
& \frac{r_+^6}{L^2}=\frac{\kappa^2Q^2}{3}
\label{extremalCondition}
\end{align} and
\begin{align}
&f(r) = \frac12
f_+^{\prime\prime}(r-r_+)^2+... \quad \quad\quad \text{with} \quad\quad
\quad f_+^{\prime\prime}=\frac{24}{L^2}.\label{extremalCondition2}
\end{align}
It is then a simple exercise to expand (\ref{stabilityEqn}) in the vicinity of $r_+$. It is suggested in \cite{Hartnoll:2009sz,Denef:2009tp} that since we are concerned only with the onset of an instability it is sufficient to consider only the threshold case of $\omega=0$. In this case we find the solution to the expanded field equation to be
\begin{align}
&\psi\rightarrow c_1(r-r_+)^{\xi_+} +c_2(r-r_+)^{\xi_-},\\
&\xi_\pm=\frac12\left(1 \pm \sqrt{1-\frac{64 q^2Q^2}{r_+^6
{f^{\prime\prime}_+}^2}+\frac{8m^2}{f^{\prime\prime}_+}} \right).
\end{align}
If the expression inside the square root becomes negative then the exponents $\xi_\pm$ acquire an imaginary part and $\psi$
oscillates infinitely many times before reaching the horizon, which, according to \cite{Denef:2009tp}, indicates an instability. This provides a criterion determining the onset of an instability at extremality.
Using (\ref{extremalCondition}) and (\ref{extremalCondition2}) if
\begin{align}\label{LowerBoundeq}
3+m^2L^2<\frac{q^2Q^2L^4}{3\,r_+^6}=\frac{q^2L^2}{\kappa^2}
\end{align}
the black hole is unstable to forming scalar hair\footnote{This same bound can be found by observing that the near horizon limit of the extremal RN black hole has topology $adS_2\times S_2$, which has a different, less negative, BF bound from the full spacetime. Observing when the effective mass of the scalar field in the vicinity of the horizon is more negative than the $adS_2$ BF bound leads to precisely (\ref{LowerBoundeq}).}.
Figure \ref{T0Bounds} shows both this bound (as grey lines) and that taken from the zero temperature limit of the plots in figure \ref{LowerBoundSurface} (blue lines).
\FIGURE{
(a) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (b)\\
\includegraphics[width=7.2cm]{./ZeroTMfour_raster.eps}
\includegraphics[width=7.2cm]{./ZeroTM3_raster.eps}\\
(c) \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (d)\\
\includegraphics[width=7.2cm]{./ZeroTM2_raster.eps}
\includegraphics[width=7.2cm]{./ZeroTM0_raster.eps}
\caption{Plots (a), (b), (c) and (d) show a bound, at $T=0$, on the critical value of $\kappa^2$ as a function of $\alpha$ for $m^2=-4/L_e^2$, $-3/L_e^2$, $-2/L_e^2$ and $0$ respectively. The region below each of the lines is the region where the system is unstable to forming scalar hair. The blue lines were generated using the lower bound argument from (\ref{eq:LowerBoundInt}) whilst the gray lines were generated using (\ref{LowerBoundeq}). The bounds continue to become less restrictive as $m^2$ increases above $0$. The red point in (a) indicates a system with $m^2=-4/L_e^2$, $\alpha=-4$ and $\kappa^2=1$ for which the critical temperature was found to be $T_c=0.268$.}
\label{T0Bounds}
}
The regions below each of the curves are the regions of parameter space for which the system is unstable to forming scalar hair, as indicated by each bound. It was suggested in \cite{Hartnoll:2009sz,Denef:2009tp} that the grey lines are not simply a bound but actually indicate the location of the QCPs in the system. Assuming that the true surface of critical temperature is continuous these plots immediately show that this cannot be the case as in each plot the two bounds cross. This has been further verified by the calculation of a non-zero critical temperature for a system with $m^2=-4/L_e^2$, $\alpha=-4$ and $\kappa^2=1$ indicated by the red point in plot (a), which is outside the region of instability as indicated by bound (\ref{LowerBoundeq}).
The correct way to view these curves is as complementary lower bounds on the critical value of $\kappa^2$ as a function of $\alpha$, bearing in mind that the true critical values could be some way above these combined bounds.
The plots do, however, indicate that $m^2$ and $\alpha$ have a significant effect on the zero temperature limit of the system, with both bounds exhibiting the opening up of a region of superconductivity at large $\kappa^2$ as $\alpha$ and $m^2$ approach their upper and lower bounds respectively. This observation fully supports that of figure \ref{CondKappa}, where numerically obtained values of $T_c$ were found at large, super-planckian backreaction. From figure \ref{T0Bounds} (a) we see that this is unsurprising, since in this region condensation must occur before the temperature drops to zero. What remains unclear, however, is why the critical temperature increases with backreaction here. Another interesting observation that can be made from these plots is that there can be no QCPs in the absence of backreaction.
It is also interesting to see how these bounds relate to equation (\ref{eq:scalarMassZeroT}), which expresses the values that $m^2$ must take if regular, positive mass, superconducting solutions exist at zero temperature. Inserting (\ref{eq:scalarMassZeroT}) into (\ref{LowerBoundeq}) shows that systems with these masses can never be in the unstable region indicated by bound (\ref{LowerBoundeq}), and it is only for large and negative $\alpha$ that they can be in the unstable region indicated by (\ref{eq:LowerBoundInt}). This does not prove that these solutions do not exist but indicates that their existence may be unlikely.
\section{Conductivity}
The conductivity of our boundary theory, $\sigma$, is calculated by studying perturbations of the gauge field, $A_\mu$. We set $A_i(t,r,x^i)=A(r)e^{-i\omega t}e_i$ to be our perturbation and solve Maxwell's equation,
\begin{equation} \label{CondEq}
A^{\prime\prime}+\left(\frac{f^\prime}{f}+\nu^\prime+\frac{1}{r}\right)A^\prime
+\left[\frac{\omega^2}{f^2e^{2\nu}}-\frac{2}{f}q^2\psi^2
-\frac{2\kappa^2r^2\phi^{\prime2}}{fe^{2\nu}\left(r^2-2\alpha f\right)}
\right]A =0 \;,
\end{equation}
with the physically imposed boundary condition of only in going radiation at the horizon:
\begin{align}
A(r) \sim f(r)^{-i\frac{\omega}{4\pi T_+}} \ .
\label{ingoing}
\end{align}
Here, $T_+$ is the Hawking temperature defined at $r=r_+$. In \cite{Horowitz:2009ij} a very elegant interpretation of the holographic conductivity in four spacetime dimensions is provided. It involves recasting their gauge equation into the form of a one dimensional Schr\"odinger equation
\begin{align}
-A,_{zz}+V(z)A=\omega^2 A,\label{Shrodinger}
\end{align} where $z$ is a new radial parameter. $\sigma$ is then interpreted as a combination of the reflection and transmission coefficients of a wave passing through the potential $V(z)$. Viewing it in this way allowed intuition from quantum mechanics to be used to understand many key aspects of the conductivity of their system. Unfortunately, due to the higher dimensionality of the system discussed in this paper, such a treatment has proven less straightforward. Transforming (\ref{CondEq}) into the form of (\ref{Shrodinger}) requires a change of radial coordinate to $dz=\frac{dr}{fe^{\nu}}$ followed by a change of variable $A=r^{-\frac{1}{2}}\tilde{A}$. Proper treatment of this system via the Schr\"odinger equation requires $\tilde{A}$ to be normalizable. Since $A(r\to\infty)$ is finite, $\tilde{A}(r\to\infty)$ is infinite and hence non-normalizable. Instead, to calculate the conductivity we shall follow \cite{Barclay:2010up} and expand the solutions to (\ref{CondEq}) in the vicinity of the adS boundary, $(r\rightarrow\infty)$, giving
\begin{align}
\label{genasoln}
A=a_0 + \frac{a_2}{r^2}
+\frac{a_0 L_e^4 \omega^2}{2r^2}
\log \frac{r}{L},
\end{align}
with $a_0$ and $a_2$ being integration constants that are used to calculate the conductivity:
\begin{align}\label{ConductivityEquation}
\sigma=\frac{2a_2}{i\omega L_e^4 a_0 }
+\frac{i\omega}{2} - i\omega \log \left ( \frac{L_e}{L} \right) \ .
\end{align}
As was noted in \cite{Barclay:2010up} and \cite{Horowitz:2008bn} there exists an arbitrariness of scale in the logarithmic term in (\ref{genasoln}) introduced during the holographic renormalization process \cite{Skenderis:2002wp}. As a result there exists an arbitrariness of scale in the imaginary part of $\sigma$. We shall take advantage of this fact in our numerical calculations by choosing an appropriate renormalization scale in order to present the characteristics of the conductivity most clearly.
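For concreteness, the following minimal Python sketch (not part of the original analysis; all names are illustrative) mirrors this procedure: equation (\ref{CondEq}) is integrated outwards from the horizon with the in-going condition (\ref{ingoing}), $a_0$ and $a_2$ are extracted by matching to (\ref{genasoln}) at large radius, and $\sigma$ is assembled via (\ref{ConductivityEquation}). It assumes the background fields and parameters have already been obtained numerically and are supplied as callables and numbers in a dictionary \verb|bg|.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def conductivity(omega, bg, r_plus, r_max=1.0e4, eps=1.0e-4):
    """Integrate the gauge perturbation A(r) and read off sigma."""
    f, fp, nu, nup, phip, psi = (bg[k] for k in
                                 ("f", "fp", "nu", "nup", "phip", "psi"))
    L, Le, alpha, kap2, q, Tp = (bg[k] for k in
                                 ("L", "Le", "alpha", "kappa2", "q", "T"))

    def rhs(r, y):          # the perturbation equation as a 1st-order system
        A, Ap = y
        App = -(fp(r)/f(r) + nup(r) + 1.0/r)*Ap \
              - (omega**2/(f(r)**2*np.exp(2*nu(r)))
                 - 2*q**2*psi(r)**2/f(r)
                 - 2*kap2*r**2*phip(r)**2
                   /(f(r)*np.exp(2*nu(r))*(r**2 - 2*alpha*f(r))))*A
        return [Ap, App]

    # in-going radiation: A ~ f(r)^(-i omega/(4 pi T_+)) just outside r_+
    r0 = r_plus*(1.0 + eps)
    ex = -1j*omega/(4.0*np.pi*Tp)
    A0 = np.exp(ex*np.log(f(r0)))
    sol = solve_ivp(rhs, (r0, r_max), [A0, A0*ex*fp(r0)/f(r0)],
                    rtol=1e-10, atol=1e-12)

    # match A = a0 + a2/r^2 + (a0 Le^4 w^2/2r^2) log(r/L) at two large radii
    r1, r2 = sol.t[-2], sol.t[-1]
    M = np.array([[1 + Le**4*omega**2*np.log(r1/L)/(2*r1**2), 1/r1**2],
                  [1 + Le**4*omega**2*np.log(r2/L)/(2*r2**2), 1/r2**2]],
                 dtype=complex)
    a0, a2 = np.linalg.solve(M, np.array([sol.y[0, -2], sol.y[0, -1]]))

    return 2*a2/(1j*omega*Le**4*a0) + 1j*omega/2 - 1j*omega*np.log(Le/L)
\end{verbatim}
Matching at just two radii is the simplest choice; in practice a least-squares fit of (\ref{genasoln}) over a range of large $r$ is more robust against the slow logarithmic falloff.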
Plotting $\sigma/T_c$ as a function of $\omega/T_c$ shows a number of key characteristics of the superconductor
\FIGURE{
\centering
\input{CondTempBFvaryT.tex}
\caption{Conductivity: A plot showing the real (solid lines) and imaginary (dashed lines) parts of the conductivity, $\sigma/T_c$, as a function of $\omega/T_c$ for $m^2=-2/L_e^2$, $\alpha=0.125$, $\kappa^2=0$ at a variety of temperatures. The grey, red and blue lines correspond to temperatures of $50\%$, $35\%$ and $25\%$ of the critical temperature respectively. The small oscillations at larger $\omega$ are a numerical artefact.}
\label{Cond-Diff-Temp-away}
} as seen in figure \ref{Cond-Diff-Temp-away}. This plot shows the real and imaginary parts of the conductivity for a boundary theory at $m^2=-2/L_e^2$, $\alpha=0.125$ and $\kappa^2=0$. The first thing to note is a pole in $\text{Im}(\sigma)$ at $\omega=0$. This, by the Kramers-Kronig relations which follow from causality, indicates the existence of a Dirac delta function in $\text{Re}(\sigma)$ at $\omega=0$. This delta function, which cannot be picked up numerically, represents the infinite conductivity of the system. The plot also clearly shows the presence of a step in $\text{Re}(\sigma)$ coinciding with the global minimum of $\text{Im}(\sigma)$, which is interpreted as an energy gap in the superconductor. Following \cite{Horowitz:2008bn} we let the value of this minimum be $\omega_g$, the value of the energy/frequency gap. This plot also shows the effect that temperature has on the conductivity, with the grey, red and blue lines corresponding to the measurement of $\sigma$ at $50\%$, $35\%$ and $25\%$ of the critical temperature respectively. This shows that reducing the temperature alters the plot only slightly, making the step and dip sharper and more pronounced, but does not change the value of $\omega_g$. Accessing lower temperatures has proved numerically very difficult. The absence of a reliable zero temperature solution means we can cast no light on what happens as $T\rightarrow 0$.
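As a small follow-up to the sketch above (again with hypothetical names), $\omega_g$ can be read off from a frequency scan as the location of the global minimum of $\text{Im}(\sigma)$:
\begin{verbatim}
omegas = np.linspace(0.1, 30.0, 600)          # illustrative frequency grid
sigmas = np.array([conductivity(w, bg, r_plus) for w in omegas])
omega_g = omegas[np.argmin(sigmas.imag)]      # frequency/energy gap estimate
\end{verbatim}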
\FIGURE{
\input{CondVaryTM4re.tex}\!\!\!\!\!\!\!\input{CondVaryTM4im.tex}
\caption{Conductivity: Plots showing the real (left) and imaginary (right) parts of the conductivity, $\sigma$, as a function of $\omega/T_c$ for $m^2=-4/L_e^2$, $\alpha=0.125$, $\kappa^2=0$ at a variety of temperatures. The grey, red and blue lines correspond to temperatures of $50\%$, $35\%$ and $25\%$ of the critical temperature respectively.}
\label{Cond-Diff-Temp-BF}
}
As we approach the BF bound the plot behaves quite differently, as can be seen in figure \ref{Cond-Diff-Temp-BF}. Now we see that lowering the temperature does dramatically alter the plot. At $T=0.5 T_c$ the plot looks very similar to that of figure \ref{Cond-Diff-Temp-away}, but as the temperature drops the step and dip shift to higher $\omega$, developing distinct peaks which turn into poles. These poles are interpreted as quasi-normal frequencies that have moved to the real axis from elsewhere in the complex plane \cite{Horowitz:2008bn,Horowitz:2009ij, Brattan:2010pq}. As the temperature drops further, more poles appear at higher values of $\omega/T_c$ (not shown). It is suggested in \cite{Siopsis:2010pi} that in the probe limit of Einstein gravity, as $T\rightarrow0$ the number of these poles diverges. Since such low temperature analysis is outside the scope of this paper, we can shed no light on whether or not this occurs away from the Einstein limit.
We are interested in observing the effect that varying $\alpha$, $\kappa^2$ and $m^2$ has on these phenomena. We will begin by looking at the first case, away from the BF bound, where temperature dependent effects are less prominent. In \cite{Barclay:2010up} the authors studied the effect of $\alpha$ and $\kappa^2$ for $m^2=-3/L_e^2$. They found that increasing $\alpha$ above the Einstein limit increased the value of $\omega_g$ and made the step and dip more pronounced. The effect of increasing $\kappa^2$ was to smooth out the features of the plot without affecting the value of $\omega_g$, that is, until the smoothing removes the presence of the hard gap\footnote{i.e. $\text{Re}(\sigma)$ is no longer zero for small $\omega$.}, at least within the temperature range studied. Studying the conductivity for larger masses we see very similar results, with quantitative rather than qualitative differences. The key information has been captured in a plot of $\omega_g$ against $T_c$, as seen in figure \ref{OmegaVsTc}.
\FIGURE{
\begin{tabular}{cc}
\includegraphics[width=7.25cm]{WgVSTc1.eps} &
\includegraphics[width=7.25cm]{WgVSTc2.eps}
\end{tabular}
\caption{ The left plot shows $\omega_g$ against $T_c$ for $\kappa^2=0$ and $m^2=0$, black (triangular) points; $m^2=-2/L_e^2$, red (square) points and $m^2=-3/L_e^2$, blue (circular) points. The right plot shows $\omega_g$ against $T_c$ for $m^2=-2/L_e^2$ for $\kappa^2=0$ red (square) points and $\kappa^2=0.05$ green (circular) points. In both plots from top to bottom the points correspond to $\alpha=0.24999$, $0.1875$, $0.125$, $0.0625$, $0$, $-0.25$, $-1$, $-10$, with the grey points corresponding to Einstein gravity. The dashed lines have been added to guide the eye. The straight line corresponds to $\omega_g=8T_c$.}
\label{OmegaVsTc}
}
The grey points in the left plot in figure \ref{OmegaVsTc} correspond to the probe, Einstein limit of the superconductor. One can see that, for the range of masses presented, the points all fall close to the line $\omega_g=8T_c$. This observation lent support to the speculation \cite{Horowitz:2008bn} that this may be a universal relation. This plot shows that such a relation is unstable to higher curvature corrections, as found in \cite{Barclay:2010up}.
The plot shows that increasing $\alpha$ increases $\omega_g$ and largely reduces $T_c$, except very close to $\alpha=1/4$. This has the effect of moving the point decidedly off the line. Decreasing the mass from $m^2=0$ increases $\omega_g$ and $T_c$, with the greatest differences occurring towards the upper bound of $\alpha$ where variations in $T_c$ are more pronounced. The right-hand plot shows the effect of backreaction. Increasing $\kappa^2$ has very little effect on $\omega_g$, with the majority of the effect coming from the reduction in $T_c$. As $\alpha$ gets large and negative the points converge, corresponding to the diminished effect of backreaction in this regime that was noted above. We were unable to extend these curves to much larger negative coupling as the numerical artefacts that arise in our calculation of the conductivity began to obscure the key features of the plots. However, since the calculation of the condensate seems possible for arbitrarily large negative $\alpha$, one might expect these curves to continue towards the axis without ever reaching it.
We now turn our attention to systems at the BF bound, and in particular what effect $\alpha$ and $\kappa^2$ have on the development of the quasi-normal modes. Figure \ref{QuasinormalvaryAlpha} shows $\text{Re}(\sigma)$, measured at $T=T_c/4$, for $m^2=-4/L_e^2$, $\kappa^2=0$ for a variety of values of $\alpha$.
\FIGURE{
\centering
\input{CondVaryA.tex}
\caption{Plot showing $\text{Re}(\sigma)$ measured at $T=T_c/4$, $m^2=-4/L_e^2$ and $\kappa^2=0$ for a range of values of $\alpha$. From left to right: red, $\alpha=-100$; blue, $\alpha=-1.0$; green, $\alpha=-0.25$; grey, $\alpha=0$; purple, $\alpha=0.125$ and black, $\alpha=0.24999$. The small oscillations are a numerical artefact.}
\label{QuasinormalvaryAlpha}
}
This plot shows that increasing or decreasing $\alpha$ does not seem to hinder the development of these quasi-normal modes. The dominant effect of, say, increasing the GB coupling constant is to shift the poles to higher $\omega/T_c$. This increase with $\alpha$ is particularly marked as you approach the upper limit of the coupling constant.
\FIGURE{
\centering
\input{CondVaryK.tex}
\caption{Plot showing $\text{Re}(\sigma)$ measured at $T=T_c/4$ at $m^2=-4/L_e^2$ and $\alpha=0.24999$ for a range of values of $\kappa^2$. From left to right: red $\kappa^2=0.1$; blue $\kappa^2=0.01$; green, $\kappa^2=0.001$ and grey, $\kappa^2=0.0001.$}
\label{QuasinormalvaryKappa}
}
Figure \ref{QuasinormalvaryKappa} shows the effect that backreaction has on the development of the quasi-normal modes with a plot of $\text{Re}(\sigma)$, measured at $T=T_c/4$, for $m^2=-4/L_e^2$ and $\alpha=0.24999$ for a variety of $\kappa^2$. We see that turning on backreaction very quickly removes the appearance of the poles, at least at this temperature; it is still quite conceivable that they may appear as the temperature is dropped. Analysis of this phenomenon at much lower values of $\alpha$ shows the existence of quasi-normal modes up to much higher values of $\kappa^2$, supporting the observation that the effect of backreaction diminishes as $\alpha$ is reduced.
\section{Conclusion}
The aim of this paper was to explore the dependence of the fully backreacting Gauss-Bonnet holographic superconductor on the mass of the scalar field. We began by studying the critical temperature, $T_c$, of the system. We found that in the majority of parameter space the effect of backreaction is to reduce $T_c$, but that in a narrow region where $m^2\to-4/L_e^2$ and $\alpha\to L^2/4$ its effect is reversed, actually increasing $T_c$. In this regime large, super-planckian values of backreaction are numerically attainable. We also found that as $\alpha$ becomes large and negative $T_c$ increases and the effect of backreaction is diminished, as the gravitational action is dominated by the higher curvature terms. Again, this provides a regime where large values of backreaction are attainable.
We studied the zero temperature limit, proving that the system does not permit regular solutions with a tachyonic scalar field, and placed strict constraints on positive mass solutions. Following \cite{Horowitz:2009ij} we relaxed the regularity constraint for a system without a black hole and permitted the fields to have logarithmic divergences. Such a solution was found in the Einstein limit but was shown to be inconsistent with the idea of Gauss-Bonnet gravity as a perturbative expansion of Einstein gravity. These findings show that a satisfactory zero temperature solution to this system has not yet been found and raise questions as to whether one can be found, except possibly if the scalar is massless.
We also studied the conductivity of the system. In the region away from the BF bound we saw that the prominent effect of, say, increasing $\alpha$ was to increase the frequency gap, $\omega_g$. In the vicinity of the BF bound, the effect of $\alpha$ was to shift the location of the quasi-normal modes that appear there, again shifting them to larger $\omega$ as $\alpha$ was increased. Otherwise $\alpha$ did not seem to affect their development. The effect of backreaction was more notable: increasing backreaction away from the probe limit quickly prevented the appearance of these quasi-normal modes, at least within the temperature range that we were able to study.
The findings of this research suggest a number of interesting avenues for future research, most notably in relation to the zero temperature limit. The first thing to do might be to either find, or disprove the existence of, positive mass solutions in this limit. If they can be found it would be very interesting to study the quantum phase transitions that may happen there.
Having shown that this system does not permit regular, superconducting, zero temperature, tachyonic solutions, it would be interesting to see if there exist non-regular solutions that are compatible with the perturbative relation between Einstein and GB gravity. If so, it would be interesting to find out how this non-regularity can be interpreted. It would also be interesting to study the conductivity of this system in such a case, particularly in light of \cite{Siopsis:2010pi} which suggests that an infinite tower of quasi-normal modes may appear on the real axis as the temperature goes to zero.
Another interesting line of enquiry would be to further investigate the regime of positive scalar masses at finite temperature. In this work we were only able to find solutions for very small $m^2$, which proved to be only marginally different from those of $m^2=0$. In \cite{Kim:2009kb}, which studied the $m^2>0$ case in the probe limit, the authors found solutions at larger masses but observed that solutions become difficult to attain, possibly due to an observed warping of the solution space which was dramatically enhanced by increasing $m^2$. It would be interesting to see how the inclusion of higher curvature terms and backreaction may affect this phenomenon.
Another interesting outcome of this paper is the observation that there exist two regions of parameter space where the numerical system is substantially more stable to the inclusion of backreaction. Since the inclusion of backreaction is an important and often complicated aspect of these systems, these regimes may provide testing grounds where the numerical analysis of backreaction is somewhat easier.
\section*{Acknowledgements}
I would like to thank Ruth Gregory for very helpful input during this work. I would also like to thank Sugumi Kanno and Paul Sutcliffe for previous collaboration and Danny Brattan, Simon Gentle and Laura Imrie for useful discussions. This work is supported by an STFC studentship.
\section{Introduction. Statement of the problem. Notations. Definitions. }
\vspace{3mm}
Let $ (\Omega, \cal{B}, {\bf P}) $ be a probability space and
$ (X, \cal{A}, \mu) $ a measurable space with a sigma-finite measure $ \mu. $\par
Let $ \{ f_j \} = \{ f_j(x) \}, \ j = 0,1,2,\ldots, N, \ x \in X $ be a {\it family }
of probability densities, i.e. a family of measurable non-negative functions $ f_j: X \to R $
such that
$$
\int_X f_j(x) \ d \mu(x) = 1.
$$
In the sequel
$$
\int = \int_X, \hspace{6mm} \int f d \mu = \int_X f(x) \ d \mu(x) = \int_X f(x) \ \mu(dx).
$$
Let also $ \xi = \xi(\omega), \ \omega \in \Omega $ be a random variable (r.v.) with values in the set (space) $ X. $
It may be a random vector or even a random process or field, etc.\par
\vspace{3mm}
{\bf Definition 1.1.} We define the following family of predicates (hypotheses)
$ H_j, \ j = 0,1,2,\ldots, N: $ the statement $ H_j $ means that the r.v. $ \xi $ has the density $ f_j(x): $
\vspace{3mm}
$$
H_j: \hspace{4mm} \nu_{\xi}(A) \stackrel{def}{=} {\bf P} (\xi \in A) = \int_A f_j(x) \ \mu(dx). \eqno(1.1)
$$
\vspace{3mm}
The condition in (1.1) that the distribution $ \nu_{\xi}(\cdot) $ is absolutely continuous relative to the measure $ \mu(\cdot) $ is
not essential: an arbitrary finite set of measures can always be dominated by a single measure. \par
\vspace{3mm}
One of the main problems of Cluster Analysis (CA) is the construction of a
{\it decision rule} that is optimal in one sense or another; see the classical monographs of M.R. Anderberg \cite{Anderberg1} and of P. Arabie, L.J. Hubert, and G. De Soete
\cite{Arabie1}.\par
\vspace{3mm}
Let us discuss this in more detail. The {\it deterministic } decision rule $ R = R(G) $ may be described as
a {\it partition} of the form
$$
G = \{ G_j \}, \hspace{5mm} G_j \subset X, \hspace{5mm} \cup_{j=0}^N G_j \subset X. \eqno(1.2)
$$
We choose the hypothesis $ H_i $ if and only if $ \xi \in G_i. $ \par
\vspace{3mm}
This rule is {\it unambiguous} iff
$$
\forall (i,k), \ k \ne i \ \Rightarrow G_i \cap G_k = \emptyset, \eqno(1.3)
$$
and {\it complete}, iff
$$
\cup_{j=1}^N G_j = X. \eqno(1.4)
$$
\vspace{3mm}
An arbitrary deterministic rule $ R = R(G) $ has error probabilities:
$$
\alpha_{i,k} = \alpha(R)_{i,k} \stackrel{def}{=} \int_{G_i} f_k(x) \ \mu(dx) = \int_{G_i} f_k \ d \mu, \ k \ne i. \eqno(1.5)
$$
Actually, if the true predicate is $ H_k, $ then $ \alpha_{i,k} $ is the probability of accepting the hypothesis $ H_i. $\par
In contrast, the {\it randomized } decision rule $ S = S(\phi) $ may be described as a collection of measurable functions
$ \phi = \{ \phi_j \}, \ \phi_j = \phi_j(x), \ x \in X, \ \phi_j: X \to [0,1] $ such that
$$
\alpha_{i,k} = \alpha(S)_{i,k} = \int_X \phi_i(x) \ f_k(x) \ \mu(dx). \eqno(1.6)
$$
The randomized strategy $ S, $ which includes the deterministic rule as a particular case, is complete iff
$$
\forall x \in X \ \Rightarrow \sum_{j=0}^N \phi_j(x) = 1 \eqno(1.7)
$$
and is unambiguous iff
$$
\forall x \in X, \ \forall k,i: k \ne i \ \Rightarrow \phi_i(x) \phi_k(x) = 0. \eqno(1.8)
$$
\vspace{3mm}
{\it In what follows we impose on all the considered decision rules both conditions (1.7) and (1.8). } \par
\vspace{3mm}
The meaning of formula (1.6) is evident: if the true predicate is $ H_k, $
then by means of an additional random mechanism independent of $ \Omega $ we accept the hypothesis $ H_i $ with
probability $ \alpha_{i,k}. $ \par
The probability of a {\it false alarm } $ Q_{fa} $ may be expressed through $ \{ \alpha_{i,k} \}: $
$$
Q_{fa} = \sum_{j=1}^N \alpha_{j,0},
$$
as well as the probability of {\it undetected faults } $ Q_{nd}: $
$$
Q_{nd} = \sum_{j=1}^N \alpha_{0,j}.
$$
A negative sense is also carried by the probabilities
$$
\overline{\alpha}_{i,i} = 1 - \alpha_{i,i};
$$
each of which represents the probability to {\it reject} the predicate $ H_i $ under the condition that it actually took place.\par
The statement and solution of various optimization problems formulated in the CA terms $ \{ \alpha_{i,k}(S) \} $ can be found in the
classical monographs \cite{Anderberg1}, \cite{Arabie1}; applications in statistics are described in the books
\cite{Leman1}, \cite{Rao1}. \par
For technical applications, e.g. in technical diagnosis, see \cite{Barzilowich1}, \cite{Barzilowich2}, \cite{Birger1}, \cite{Minakov1}.
Here the predicate $ H_0 $ of the technical diagnosis corresponds to the normal state of the object. \par
\vspace{3mm}
{\bf In the present paper the authors aim to highlight some new optimization problems concerning decision rules, to solve them,
and to discuss new applications, especially in philology. }\par
\vspace{3mm}
Note that in statistics the optimization problem is, as a rule, stated in a minimax form \cite{Leman1}, \cite{Rao1}.
The case when the domains $ G_j $ have a parallelepipedal form was considered in the article \cite{Minakov1}. This approach is
traditional in technical diagnosis, see \cite{Birger1}, where the sizes and the centers of the parallelepipeds are subject to optimization.\par
The solution obtained in \cite{Minakov1} is in the general case not complete. \par
\vspace{4mm}
\section{Main result: statement and solving of an optimization problem. }
\vspace{3mm}
{\bf A. Formation of objective function. } \par
\vspace{3mm}
Let $ v_{i,k}, \ i,k = 0, 1, \ldots, N, \ k \ne i $ be arbitrary non-negative non-trivial constants (weights), and
set formally $ v_{i,i} = 0, \ i = 0, 1, \ldots, N. $ \par
\vspace{3mm}
{\it We introduce the following objective function (more exactly, functional) }
$$
Z = Z(S) = \sum \sum_{i,k = 0}^N v_{i,k} \ \alpha_{i,k}(S). \eqno(2.1)
$$
\vspace{3mm}
For instance, the objective function may look like
$$
Z = Z(S) = \sum \sum_{i,k = 0}^N \ \alpha_{i,k}(S)
$$
or
$$
Z = Z(S) = v_1 Q_{nd} + v_2 Q_{fa}
$$
etc. \par
The weight coefficients may be proportional to the a priori probabilities of appearance of the different states $ j $
or to the economic damage from faults. \par
\vspace{3mm}
{\bf B. Statement of the optimization problem. } \par
\vspace{3mm}
The following statement of the optimization problem seems quite natural. \par
{\it Find the minimum of the functional} $ Z = Z(S): $
$$
Z = Z(S) = \sum \sum_{i,k = 0}^N v_{i,k} \ \alpha_{i,k}(S) \to \min_S \eqno(2.2)
$$
{\it under conditions}
$$
\phi_k(x) \in [0,1]; \ \sum_{i=0}^N \phi_i(x) = 1; \ \forall k,i: k \ne i \ \Rightarrow \phi_i(x) \phi_k(x) = 0, \eqno(2.3)
$$
(constrained optimization). \par
The problem (2.2) - (2.3) in the case when the decision rule is deterministic may be restated as follows. Find the partition
$ G = \{ G_j \} $ of the set $ X $ such that
$$
Z = Z(R) = \sum \sum_{i,k} v_{i,k} \int_{G_i} f_k(x) \ \mu(dx) \to \min_G \eqno(2.4)
$$
under natural conditions
$$
\forall (i,k), \ k \ne i \ \Rightarrow G_i \cap G_k = \emptyset, \hspace{7mm} \cup_{j=1}^N G_j = X. \eqno(2.5)
$$
{\bf C. Reducing to the transportation problem.} \par
\vspace{3mm}
Denote
$$
g_i(x) = \sum_{k=0}^N v_{k,i} f_k(x), \eqno(2.6)
$$
then
$$
Z(S) = \sum_{j=0}^N \int_X \phi_j(x) \ g_j(x) \ \mu(dx). \eqno(2.7)
$$
In particular, we can write for the deterministic decision rule
$$
Z(R) = \sum_{j=0}^N \int_{G_j} g_j(x) \ \mu(dx). \eqno(2.8)
$$
The functional $ Z = Z(S) $ is obviously linear in the collection of functions $ \phi = \{ \phi_i(x) \}. $
The discrete approximation of the functional $ Z $ over a discrete set $ \{ x_r \} $ of values of $ x, \ x \in X, $
looks like
$$
Z(R) \approx Z_{\Delta}(R) = \sum_r \sum_{j=0}^N \phi_j(x_r) \ g_j(x_r) \ \Delta_r.\eqno(2.9)
$$
We arrive at the following optimization problem:
$$
\sum_r \sum_{j=0}^N \phi_j(x_r) \ g_j(x_r) \ \Delta_r \to \min_{\phi_j(x_r)} \eqno(2.10)
$$
under conditions
$$
\forall r \ \Rightarrow \phi_j(x_r) \in [0,1]; \hspace{7mm} \sum_{j=0}^N \phi_j(x_r) = 1, \eqno(2.11)
$$
or correspondingly
$$
\forall r \ \Rightarrow \phi_j(x_r) \in \{0, \ 1 \}; \hspace{7mm} \sum_{j=0}^N \phi_j(x_r) = 1. \eqno(2.12)
$$
The problem (2.10) - (2.11) belongs to the class of well-known transportation problems of linear programming. It may be considered as an
approximation to the source problem (2.2) - (2.3) and may be used in practice. \par
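For illustration (this sketch is not part of the original text), the discretized problem (2.10) - (2.11) can be handed directly to a linear-programming solver. The Python sketch below assumes that the matrix \verb|G[r, j]| of values $ g_j(x_r) $ and the quadrature weights \verb|Delta[r]| are given; all names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_discretized(G, Delta):
    """Minimize sum_r sum_j phi[r, j] g_j(x_r) Delta_r subject to
       phi[r, j] in [0, 1] and sum_j phi[r, j] = 1 for every r."""
    R, N1 = G.shape
    c = (G*Delta[:, None]).ravel()         # objective coefficients
    A_eq = np.zeros((R, R*N1))
    for r in range(R):
        A_eq[r, r*N1:(r + 1)*N1] = 1.0     # completeness at each grid point
    res = linprog(c, A_eq=A_eq, b_eq=np.ones(R), bounds=(0, 1))
    return res.x.reshape(R, N1)
\end{verbatim}
Since the constraints decouple over the grid points $ x_r, $ the optimum is attained at a vertex with $ \phi_j(x_r) \in \{0,1\}, $ i.e. at a deterministic rule; this anticipates the theorem below.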
\vspace{3mm}
{\bf D. Solving of the optimization problem. Main result. } \par
\vspace{3mm}
{\bf Theorem. } {\it The optimal decision rule exists, is unique and deterministic, and has the form}
$$
G^{0}_j = \{ x, \ x \in X, \ g_j(x) = \min_k g_k(x) \}. \eqno(2.13)
$$
{\it Herewith }
$$
Z( \{ G^0_j \} ) = \min_{ \{G_j\} } Z( \{ G_j \} ) = \int_X \min_j g_j(x) \ \mu(dx). \eqno(2.14)
$$
\vspace{3mm}
{\bf Proof.} The equality
$$
Z( \{ G^0_j \} ) = \int_X \min_j g_j(x) \ \mu(dx)
$$
follows immediately from the definition of the partition $ G^0 = \{ G^0_j \}. $ \par
Let now $ \phi = \{ \phi_j(x) \} $ be any other randomized decision rule satisfying the conditions of unambiguousness
and completeness. We have:
$$
Z( \{\phi_j \} ) = \int \sum_{j=0}^N \phi_j(x) \ g_j(x) \ \mu(dx) \ge \int \sum_{j=0}^N \phi_j(x) \ \min_k g_k(x) \ \mu(dx) =
$$
$$
\int \ \min_k g_k(x) \ \mu(dx) = Z( \{ G^0_j \} ), \eqno(2.15)
$$
since $ \phi_k(x) \ge 0 $ and $ \sum_j \phi_j(x) = 1. $\par
\vspace{3mm}
This completes the proof of our theorem.\\
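The optimal rule (2.13) is straightforward to implement. A minimal Python sketch (names illustrative), together with a toy usage example, reads:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def decide(x, densities, v):
    """Optimal deterministic rule (2.13): accept H_j with
       j = argmin_j g_j(x), where g_j(x) = sum_k v[k, j] f_k(x)."""
    fvals = np.array([f(x) for f in densities])
    return int(np.argmin(v.T @ fvals))

# toy example: two unit-variance Gaussian densities, 0/1 weights
dens = [lambda x: norm.pdf(x, loc=0.0), lambda x: norm.pdf(x, loc=2.0)]
v = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(decide(0.3, dens, v))   # -> 0: H_0 is accepted
\end{verbatim}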
\vspace{3mm}
\section{Quasi - Gaussian distributions. Application in philology. }
\vspace{3mm}
We assumed above that the densities $ f_j = f_j(x) $ are known. For instance, they are approximately Gaussian in
technical diagnosis, see, e.g., \cite{Birger1}, \cite{Minakov1}.\par
In this section we describe the application in philology; in particular, we present the possible densities
which may appear therein.\par
The new so-called quasi-Gaussian distributions, which may appear in demography and philology, were discussed in the previous
paper of the authors \cite{Ostrovsky1}.
These distributions were substantiated by means of characterization properties under some natural conditions. \par
\vspace{3mm}
Let us discuss this in more detail.\par
\vspace{3mm}
There exist many characterizations of the two-dimensional, or, more generally, multidimensional Gaussian (normal)
distribution with independent coordinates. For example, a characterization by means of independence of linear functionals or through the
distribution of sums of coordinates, see the classical textbook of W. Feller \cite{Feller1}, p. 77, pp. 498 - 500; by means of the
properties of conditional distributions, \cite{Albajar1}, \cite{Kotlarski1}; a characterization by means of the properties of order statistics
\cite{Jian1}; a characterization by means of some inequalities \cite{Bobkov1}, \cite{Kac1}, etc.; see also the references therein.\par
The famous monograph of A.M.Kagan, Yu.V.Linnik, C.R.Rao \cite{Kagan1} is completely devoted to the characterisation problems
in Mathematical Statistics.\par
Usually, these characterizations are stable (robust), \cite{Meshalkin1}, \cite{Zolotarev1}. \par
\vspace{3mm}
Let us consider the following example.\\
\vspace{3mm}
{\bf Example.} As usual, for any measurable set $ A, \ A \subset R, $ we denote its indicator function by
$ I(A) = I_A(x): $\par
$$
I_A(x) = 1, \ x \in A; \hspace{5mm} I_A(x) = 0, \ x \notin A.\eqno(3.0)
$$
Let us introduce a {\it family} of functions
$$
\omega_{\alpha}(x) = \omega_{\alpha}(x; C_1, C_2) := C_1 \ |x|^{\alpha(1)} \ I_{(-\infty,0)}(x) + C_2 \ x^{\alpha(2)} \ I_{ (0,\infty)}(x),
$$
$$
x \in R, \ C_{1,2}= \const \ge 0, \ \alpha = \vec{\alpha} = (\alpha(1), \alpha(2)), \ \alpha(1), \alpha(2) = \const > -1, \eqno(3.1)
$$
so that $ \omega_{\alpha}(0) = 0, $ and a family of corresponding probability densities of the form (here $ f_{\sigma} $ denotes the centered Gaussian density with standard deviation $ \sigma $)
$$
g_{\alpha, \sigma}(x) = g_{\alpha, \sigma}(x; C_1, C_2) \stackrel{def}{=} \omega_{\alpha}(x; C_1, C_2) \ f_{\sigma}(x). \eqno(3.2)
$$
Since
$$
I_{\alpha(k)}(\sigma) := \int_0^{\infty} x^{\alpha(k)} \exp \left( -x^2/(2 \sigma^2) \right) \ dx = 2^{(\alpha(k) - 1)/2 } \
\sigma^{(\alpha(k) + 1)} \ \Gamma((\alpha(k) + 1)/2),
$$
where $ \Gamma(\cdot) $ is the ordinary Gamma function, there is the following interrelation between the constants $ C_1, C_2: $
$$
C_1 \ I_{\alpha(1)}(\sigma) + C_2 \ I_{\alpha(2)}(\sigma) = \sigma \ (2 \pi)^{1/2}, \eqno(3.3)
$$
so that the pair $ (C_1, C_2) $ has only one degree of freedom. In particular, the constant $ C_1 $ may be equal to zero;
in this case the r.v. $ \xi $ takes only non-negative values.\par
\vspace{3mm}
{\it We will denote by $ C_i, K_j $ finite non-negative constants which need not be the same in different places. }\\
\vspace{3mm}
{\bf \ Definition 3.1. } The one-dimensional distribution of a r.v. $ \xi $ with density function of the form
$ x \to g_{\alpha, \sigma}(x - a; C_1, C_2), \ a = \const \in R, $ is said to be quasi-Gaussian or, equivalently, quasi-normal. Notation:
$$
\Law(\xi) = QN(a,\alpha,\sigma, C_1, C_2). \eqno(3.4)
$$
\vspace{3mm}
Let us explain the ``physical'' sense of the introduced parameters of these distributions.
The value $ a $ in (3.2) may be called the {\it quasi-center} by analogy with the normal distribution; the value $ \alpha $
expresses the degree of concentration of this distribution about the center; and the value
$ \sigma, $ which may be called the {\it quasi-standard} of the r.v. $ \xi, $ expresses, as in the classical Gaussian
case, the degree of scattering. \\
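A minimal numerical sketch (Python; names illustrative) of the density (3.2), taking $ f_{\sigma} $ to be the centered Gaussian density and fixing $ C_2 $ through the normalization (3.3), may be useful here:
\begin{verbatim}
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def I(alpha, sigma):                # closed form of I_alpha(sigma) above
    return 2**((alpha - 1)/2)*sigma**(alpha + 1)*gamma((alpha + 1)/2)

def quasi_gaussian(a1, a2, sigma, C1):
    """Density g_{alpha,sigma}(x; C1, C2) with C2 fixed by (3.3)."""
    C2 = (sigma*np.sqrt(2*np.pi) - C1*I(a1, sigma))/I(a2, sigma)
    def g(x):
        f = np.exp(-x**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))
        w = np.where(x < 0, C1*np.abs(x)**a1, 0.0) \
          + np.where(x > 0, C2*np.abs(x)**a2, 0.0)
        return w*f
    return g

g = quasi_gaussian(a1=0.5, a2=1.0, sigma=1.0, C1=1.0)
total = quad(g, -np.inf, 0)[0] + quad(g, 0, np.inf)[0]
print(total)                        # ~ 1.0: the density is normalized
\end{verbatim}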
\vspace{2mm}
Note that there are some grounds to accept that the deviation of the touchdown point of an airplane from the center line of the
landing strip has a quasi-Gaussian distribution, see \cite{Mirzachmedov1}, \cite{Mirzachmedov2}.\par
\vspace{2mm}
Many properties of these distributions were studied earlier in \cite{Ostrovsky1}: moments, bilateral tail behavior etc. In particular,
it is proved that if the r.v. $ (\xi, \eta) $ are independent and both have a quasi-Gaussian distribution
with parameters $ a = 0, \ b = 0 $ (the ``quasi-centered'' case):
$$
\Law(\xi) = QN(0,\alpha,\sigma, C_1, C_2), \hspace{5mm} \Law(\eta) = QN(0,\beta,\sigma, C_3, C_4) \eqno(3.5)
$$
possibly with different parameters $ \alpha \ne \beta, \ C_1 \ne C_3, C_2 \ne C_4, $ but with the same value of the standard
$ \sigma, \ \sigma > 0, $ then their polar coordinates $ (\rho, \zeta) $ are also independent. \par
The converse was also proved in \cite{Ostrovsky1}, giving a characterization of the quasi-Gaussian distribution in demography and
philology: if the polar and Cartesian coordinates are independent, then, under some natural conditions,
the random variables $ \xi, \eta $ have a quasi-Gaussian distribution; this explains why this distribution describes
the word parameters in many languages. \par
\vspace{3mm}
It is possible to generalize our distributions to the multidimensional case. Indeed, let us consider the random vector
$ \xi = \vec{\xi} = (\xi_1, \xi_2, \ldots, \xi_d) $ with the density
$$
f_{\xi}(x_1, x_2, \ldots, x_d) = G( x_1, x_2, \ldots, x_d; \vec{\alpha}, \vec{\sigma}, \vec{ C_1 }, \vec{C_2 } ) \stackrel{def}{=}
$$
$$
\prod_{j=1}^d g_{\alpha_j, \sigma_j}(x_j; C_1^{(j)}, C_2^{(j)}), \eqno(3.6)
$$
where $ \alpha_j > -1, \ \sigma_j = \const > 0, \ C_i^{(j)} = \const \ge 0, $
$$
C_1^{(j)} \ I_{\alpha^{(j)}(1)}(\sigma_j) + C_2^{(j)} \ I_{\alpha^{(j)}(2)}(\sigma_j) = \sigma_j \ (2 \pi)^{1/2}. \eqno(3.7)
$$
\vspace{3mm}
The multidimensional version of our theorem is as follows; see \cite{Ostrovsky1}, Proposition 3.1:
\vspace{3mm}
{\it Assume that all the standards $ \sigma_j = \sigma $ do not depend on the index $ j. $ Then
the (Cartesian) coordinates of the vector $ \vec{\xi}, $ i.e. the random variables $ \{ \xi_j \}, $ are jointly independent,
and so are their polar coordinates. \par
The converse is also true: if
the Cartesian and polar coordinates of the vector $ \vec{\xi} $ are jointly independent and the random variables $ \{ \xi_j \} $
are regularly distributed, then the density has the form (3.6), with the same standard } $ \sigma. $ \par
\vspace{3mm}
Knowledge of the form of the densities $ f_j = f_j(x) $ of the possible distributions of $ \xi $ gives us a huge advantage
for clusterization; but we need to describe the method of parameter estimation. \par
\vspace{3mm}
\section{Estimation of parameters of quasi-Gaussian distribution.}
\vspace{3mm}
{\bf Definition 4.1: weighted (mixed) quasi-Gaussian distributions. } \par
\vspace{3mm}
Let $ W_k, \ k = 1,2, \ldots, N $ be positive numbers (weights) such that $ \sum_{k=1}^N W_k = 1. $ We define the {\it weighted}
or {\it mixed} quasi-Gaussian distribution by means of a multivariate density of the form
$$
G^{(W)}( x_1, x_2, \ldots, x_d) = G^{(W)} \left( x_1, x_2, \ldots, x_d; \{ a_j^{(k)} \}, \{ \alpha_j^{(k)} \}, \{ \sigma_j^{(k)} \}, \{ C_1^{(k)} \}\right) \stackrel{def}{=}
$$
$$
\sum_{k=1}^N W_k \ G \left( x_1-a_1^{(k)}, x_2-a_2^{(k)}, \ldots, x_d-a_d^{(k)}; \vec{\alpha}^{(k)}, \ \{ \sigma_j^{(k)} \}, \ \vec{ C_1 }^{(k)} \right) =
$$
$$
G^{(W)}( \vec{x}, \vec{\theta}), \eqno(4.1)
$$
where
$$
\vec{\theta} = \theta \stackrel{def}{=} \{ N; \{ \vec{a}_d \}, \ \{ \vec{\sigma}_j \}, \ \{ \vec{ C_1 }^{(k)} \} \}, \ d = \dim X, \ j,k = 0,1,2,\ldots,N.
$$
\vspace{3mm}
A more general form of this distribution has a discrete component and enjoys the same characterization property:
$$
G_0^{(W)}( \vec{x}) := W_0 \delta(\vec{x} - \vec{a_0}) +
$$
$$
\sum_{k=1}^N W_k \ G \left( x_1-a_1^{(k)}, x_2-a_2^{(k)}, \ldots, x_d-a_d^{(k)}; \vec{\alpha}^{(k)}, \vec{\sigma}^{(k)}, \ \vec{ C_1 }^{(k)} \right), \eqno(4.2)
$$
$$
W_0, W_1, \ldots, W_N > 0, \ \sum_{k=0}^N W_k = 1,
$$
where $ \delta(\vec{x}) $ is the classical Dirac delta function, so that
$$
{\bf P} (\vec{\xi} = \vec{a_0}) = W_0 > 0.
$$
\vspace{3mm}
{\it Statement of the problem:} given a sample $ \{ \eta_m \}, \ m = 1,2,\ldots, n, $ with $ n \gg 1, $ from the weighted multivariate
quasi-Gaussian distribution, we need to estimate its parameters: the number of clusters $ N, $ the centers $ \vec{a}_j, $ the degrees of
concentration $ \vec{\alpha}, $ etc. \par
\vspace{3mm}
Let us illustrate this by means of a {\it demography} analogy. Here the weights $ W_k $ are proportional to the
share of the $ k^{th} $ city in the general population of some country.\par
In contrast, in {\it philology} the parameters $ \{W_k \} $ are possibly unknown and
must be estimated from a sample.
\vspace{3mm}
Regarding the applications of the developed method in linguistics, let us consider a bunch of words of similar meaning (e.g.
hand, arm, palm, elbow, thumb, finger, to take, to give, to get, to bring, to catch, to hold, etc.), a so-called ``semantic field''. These words
are grouped around a semantic nucleus (here, the notion of hand/arm) and will be considered as a cluster. It may be compared with
other clusters in order to calculate lexical/semantic affinity on the basis of the proposed quasi-Gaussian distribution. The results may
suggest a common origin, provided the etymological analysis permits it.
\vspace{3mm}
The maximum likelihood equation (more precisely, system of equations) for the parameter estimation has the classical
form:
$$
\hat{\theta}_n = \argmax_{\theta} \sum_{m=1}^n \log G^{(W)}( \vec{\eta}_m, \vec{\theta}). \eqno(4.3)
$$
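For illustration only, in the simplest one-dimensional two-component case the problem (4.3) can be attacked by direct numerical optimization. The Python sketch below reuses \verb|quasi_gaussian| from the sketch in Section 3; the data array is a crude synthetic stand-in (a plain Gaussian mixture) rather than a sample from the model, and all names are illustrative.
\begin{verbatim}
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-1.0, 1.0, 500),   # stand-in data
                         rng.normal( 1.0, 1.0, 500)])

def neg_log_lik(theta, data, a1=0.5, a2=1.0, C1=1.0):
    """Negative log-likelihood of a two-component mixture;
       theta = (W_1, center_1, center_2, sigma)."""
    W, c1, c2, sigma = theta
    g = quasi_gaussian(a1, a2, sigma, C1)
    lik = W*g(data - c1) + (1 - W)*g(data - c2)
    return -np.sum(np.log(lik + 1e-300))

res = minimize(neg_log_lik, x0=[0.5, -1.0, 1.0, 1.0], args=(sample,),
               method="L-BFGS-B",
               bounds=[(1e-3, 1 - 1e-3), (None, None),
                       (None, None), (1e-3, None)])
theta_hat = res.x      # converges to theta_0 at the rate n^{-1/2}
\end{verbatim}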
It is well known that the rate of convergence of $ \hat{\theta}_n $ to the true value $ \theta_0 $ is $ n^{-1/2}. $
The non-asymptotic deviation probability
$$
{\bf P}_{\theta,n}(u) := {\bf P} \left( \sqrt{n} || \hat{\theta}_n - \theta_0 || > u \right)
$$
satisfies the estimate
$$
{\bf P}_{\theta,n}(u) \le \exp \left(- K(\theta) \ u^{ \gamma } \right) \approx \exp \left(- K(\theta_n) \ u^{ \gamma } \right), \ u \ge 1,
\ \gamma = \const > 0 \eqno(4.4)
$$
is studied in \cite{Ostrovsky2}.\par
Moreover,
$$
{\bf P} \left( \hat{N}_n \ne N \right) \le C_5(\vec{\theta}) \ q^n( \vec{\theta}), \hspace{6mm} 0 < q( \vec{\theta}) < 1. \eqno(4.5)
$$
The quasi-centers $ \{ a_j^{(k)} \} $ may be interpreted as coordinates of fundamental human notions: food, politics, medicine, economics, etc. \par
One of the important advantages of the approach offered above is the {\it automatic} estimation of the number of clusters $ N \approx \hat{N}_n, $ in
contrast to the classical methods of cluster analysis, see \cite{Anderberg1}, \cite{Arabie1}. \par
Note that the speed of convergence of $ \hat{N}_n $ to the true number of clusters $ N $ is very high, see (4.5). \par
We emphasize also that we do not use any distance between the values $ \eta_m. $ \par
\vspace{3mm}
This was made possible only because we deduced the possible form of the distributions $ f_j(x) $ in parametric form.\par
\vspace{3mm}
The classification based on the mixed quasi-Gaussian distribution may be useful, for example, in learning a foreign language.\par
\vspace{3mm}
Needless to say, this approach requires experimental verification.\par
\vspace{4mm}
\section{Introduction}
Dilute magnetic semiconductors (DMSs) are still a
topic of great current interest.\cite{Jungwirth,Zutic,ohno1,awschalom}
The theoretical description of transition-metal doped
semiconductors is challenging since localized states interact
significantly with delocalized states. Density functional theory (DFT)
in the local density or generalized gradient approximation (LDA or GGA)
for the exchange-correlation energy is not able to properly describe the
non-locality of the screened exchange interaction and, furthermore, possesses a sizeable
self-interaction error.\cite{kummel} These limitations are particularly severe in the
case of localized orbitals, \textit{e.g.} Mn-$3d$ states, which are described as too
shallow in energy resulting in a large
hybridization with anion $p$-states. As a result, the Mn-$3d$ states are over-delocalized.
The situation is particularly serious in the case of small band-gap semiconductors
(such as Ge) which are described as
metals in LDA/GGA, thus producing an overestimated hybridization among the valence
and conduction Ge states and Mn-$d$ states.
The physics of localized $d$ states can be partially described using a
DFT+$U$ formalism, which introduces a local correction $U$ to recover the proper
position of the Mn-$d$ states\cite{LOT}. However, in
Ge, the accurate electronic
properties are not completely recovered: \textit{e.g.} the half-metallicity of the compound is lost within the DFT+$U$ scheme.
Very recently, hybrid Hartree-Fock density functionals,
which mix a fraction of the exact Fock exchange with the DFT exchange,
have been widely applied to extended solid state systems.\cite{kummel,GaN,StroppaTMO,Togo,Kresse1,Kresse2,Scan1,Scan2,Scan3,Scan4,ZungerLast1,ZungerLast2,VanDeWalleLast,AleMF,ZungerLast,Cesare1,Cesare2,Hummer}
In this paper, we mainly focus on
Mn-doping in bulk Ge by performing hybrid-density functional theory calculations.
We will show that the HSE functional gives a satisfactory description of the structural, electronic and magnetic properties of Ge-based DMS, consistent with
experimental data. Furthermore, for a few selected properties, and for the sake of comparison, we will also include some results for Mn-doped silicon.
\section{Computational details}
The calculations were performed within the projector augmented-wave
(PAW) method\cite{paw} using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA)\cite{pbe} and
Heyd-Scuseria-Ernzerhof (HSE) hybrid
functional\cite{computational,erratum}, recently implemented in the VASP
code.\cite{computationalvasp,krakau} We also used the DFT+$U$ method for Mn within Dudarev's approach\cite{Dudarev}
fixing $U$ to 6 eV and $J$ to 1 eV (PBE was used for the DFT part).
The kinetic energy cutoff used for the orbitals was set to 300 eV.
Monkhorst-Pack $k$-point grids of $10\times 10 \times 10$ and $6 \times 6 \times 6$
were used
to sample the Brillouin zone of the Ge-bulk and of the 64-atom
unit cell, respectively. All the atomic internal
positions were relaxed.
In the following, we will focus on the Ge bulk system, single
(substitutional and interstitial) and double substitutional (dimer)
Mn impurities in a 64-atom Germanium cell. For Silicon, we will consider the bulk case and the Mn substitutional impurity in a 64-atom unit cell.
\section{Bulk Ge}
For bulk Ge, the calculated equilibrium properties
are in good agreement with experiments:
the calculated lattice constant is
5.703 \AA\ (HSE) and 5.792 \AA\ (PBE),
within 0.7 \% and 2.3 \%, respectively, of the experimental value of 5.660 \AA;\cite{Scheffler}
the HSE bulk modulus (731 kbar) improves over the PBE value (571 kbar) when compared to
experiment (768 kbar\cite{landolt}). Furthermore, we remark that
within HSE the energy gap is properly described to be indirect
(0.63 eV including SOC\cite{peralta}
compared to the experimental value of 0.74 eV\cite{madelung}).
A similar result was obtained within HSE for Si\cite{peralta}:
the calculated lattice constant is 5.444 \AA\cite{peralta} compared to the
experimental one of 5.430 \AA.\cite{landolt} The calculated indirect energy gap is 1.12 eV\cite{peralta} while the experimental one is 1.17 eV.\cite{landolt}
It is interesting to note that, for Silicon, self-interaction schemes
which are often used to improve
the electronic structure description,\cite{sic,kummel}
do not open the gap.\cite{filippetti}
This is important for our present study,
since a faithful description of the equilibrium
properties of the host semiconductor is at the basis of an appropriate
description of the doped system.
\section{Mn impurities in Ge}
In Tab.\ref{MnGe}, we summarize our main results, {\em i.e.}
formation energy\cite{VanDeWalle} ($\Delta\textrm{H}$),
Mn-Ge bond length
($d_{\textrm{Mn-Ge}}$) and the Mn magnetic moment ($\mu$) at their respective theoretical lattice
constants for the considered Mn-doping cases. The formation energies are evaluated
with respect to the calculated Ge and Mn equilibrium bulk phases (diamond Ge and AFM-fcc Mn).
Of course, while Ge-rich growth conditions can be safely assumed to fix the
Ge chemical potential to its bulk value, the same is not true for Mn so that
the formation energy is a function of the Mn chemical potential
$\mu_{\textrm{Mn}}$. In Table~\ref{MnGe} we report the value for Mn-rich
conditions fixing $\mu_{\textrm{Mn}}$ to the corresponding bulk value ($\alpha$-Mn).
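As a side note, the bookkeeping behind $\Delta\textrm{H}$ follows the standard convention of Ref.\ \onlinecite{VanDeWalle}; a minimal Python sketch (with placeholder inputs rather than values from this work) reads:
\begin{verbatim}
def formation_energy(E_defect, E_host, n_Ge_removed, n_Mn_added,
                     mu_Ge, mu_Mn):
    """Delta H = E_defect - E_host + n_Ge*mu_Ge - n_Mn*mu_Mn."""
    return E_defect - E_host + n_Ge_removed*mu_Ge - n_Mn_added*mu_Mn

# substitutional Mn in the 64-atom cell: one Ge removed, one Mn added;
# E_Ge63Mn, E_Ge64, mu_Ge_bulk, mu_Mn_bulk stand for the computed total
# energies and chemical potentials (placeholders, not values of this work):
# dH = formation_energy(E_Ge63Mn, E_Ge64, 1, 1, mu_Ge_bulk, mu_Mn_bulk)
\end{verbatim}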
We note that the experimental
evidence of \textit{local} Ge-lattice dilation upon Mn-doping is correctly described
using HSE yielding a Mn-Ge distance 2 \% larger than
the ideal Ge-Ge bond-length,
while PBE gives a local contraction of $-$2\%\,
and DFT+$U$ finds a smaller local dilation of
0.4 (0.8)\%\ when using the theoretical PBE (HSE or experimental)
lattice constant.\cite{kettaps,Tsui1,Tsui} The most recent extended x-ray absorption fine
structure
results\cite{LOT1,LOT2,Pochet} yield a Mn-Ge coordination distance of
2.50-2.51$\pm$ 0.03 \AA \ for the samples obtained at low temperature
which are thought to be the best candidates for Mn occupation on substitutional sites\cite{LOT2}. Clearly, these results match the HSE result and
the DFT+$U$ one as well. Similar results for bond-length contraction/dilation within
different DFT schemes were reported also for III-V based DMS\cite{Furdyna}.
\begin{figure}
\includegraphics[scale=.5,angle=0]{Orbital.3.eps}
\vspace{1cm}
\caption{(Color on line) Molecular energy diagram of the Mn-$d$ states (right)
interacting with the Ge-$sp^{3}$ hybrid orbitals (left).
The Mn-$s$ states are not shown for clarity (see text).
$b$, $ab$ and $nb$ subscripts label bonding, anti-bonding and non-bonding orbitals.
Arrows denote up/down electrons
while circles indicate holes. The central panel (Mn$_{\textrm{Ge}}$)
shows $p-d$ hybridization for substitutional Mn at Ge site.}
\label{orbit}
\end{figure}
A simple molecular orbital description, as sketched in Fig.~\ref{orbit}, can be useful in order
to describe the interaction of Mn in the tetrahedral Ge ligand field, as also done previously
for similar compounds\cite{Azunger}.
We recall that in diamond like semiconductors,
the $sp$ valence states arrange to form $sp^{3}$ hybrid orbitals,
each of them filled with a bonding electron pair.
If one Ge atom is removed creating a Ge vacancy,
4 $sp^{3}$ hybrids point towards the vacant Ge atom,
each filled with one electron (dangling bonds).
The Ge vacancy is now replaced by a Mn atom.
Due to the local tetrahedral symmetry, the Mn $d$-states
are split into 3-fold degenerate $t_{2g}$ and 2-fold degenerate
$e_{g}$-like states, further split by the local exchange field (see Fig.~\ref{orbit}, right part). From linear combinations of the four $sp^3$
Ge dangling bonds pointing towards the
transition-metal impurity, an $s$-like $a_{1\uparrow,\downarrow}$ orbital
and three $p$-like $t_{2g\uparrow,\downarrow}$ orbitals are
formed. The
host $a_{1\uparrow,\downarrow}$ orbital and the transition-metal $4s_{\uparrow,\downarrow}$ states form a doubly
occupied bonding state deep in the
semiconductor valence band, and an empty antibonding state high in the conduction band (not included in Fig.~\ref{orbit}).
For the majority component, the Mn-$t_{2g}$ orbitals are lower in energy than the Ge-$t_{2g}$ $sp^{3}$ hybrid states. They interact giving rise to
3 bonding states (3$\times$Mn-$t_{2g}$)$_{b}$ (see Fig.~\ref{orbit}) and 3 antibonding states
(3$\times$Ge-$sp^{3}$)$_{ab}$. The Mn-$e_{g}$ states do not hybridize because they are non-bonding in
a tetrahedral ligand field. For the minority component, the Ge-$t_{2g}$ $sp^{3}$ orbitals
are lower in energy than the Mn-$t_{2g}$ states. Upon interaction,
they give rise to 3 bonding orbitals (3$\times$Ge-$sp^{3}$)$_{b}$
and 3 anti-bonding orbitals (3$\times$Mn-$t_{2g}$)$_{ab}$.
The complex is characterized by a total of
11 electrons (4 from the nearest Ge atoms and 7 from the Mn impurity atom).
Disregarding the two electrons occupying the
lowest $a_1$ symmetry-like state, one needs to fill the orbitals with nine electrons
as shown in Fig.~\ref{orbit}: Clearly the Mn impurity is in a high spin state with 5
$d$-electrons
in the majority channel
(2 electrons in the $e_{g}$ and 3 in the $t_{2g}$-like states)
and zero $d$-electrons in the minority channel. However, while the minority
valence Ge-$sp^{3}$ states are fully occupied, the majority
states accommodate two holes.
This simple molecular picture suggests that:
i) the compound is half-metallic,
with the Fermi level falling within the Ge majority valence band;
ii) the total spin moment of the complex, \textit{i.e} $n_{\uparrow}-n_{\downarrow}$,
is 3 $\mu_{B}$; iii)
the local Mn $d$ spin moment is 5 $\mu_{B}$ partially compensated by
the holes in the $sp^{3}$ states;
iv) the induced spin
moment on the 4 nearest Ge atoms, \textit{i.e.}
$n^{{\rm Ge}-sp^{3}}_{\uparrow}-n^{{\rm Ge}-sp^{3}}_{\downarrow}$ should be sizeable
and opposite to the spin on the Mn atom.
In line with previous calculations~\cite{schult,aless,park}, the calculated results (see Tab.~\ref{MnGe}) confirm this picture,
finding a total spin moment of exactly 3 $\mu_{B}$ in the unit cell,
4.1 $\mu_{B}$ at the Mn atom, and
$-$0.11\ $\mu_{B}$ at the nearest Ge-atoms.
Furthermore the sizeable induced moments on Ge atoms suggest that the holes are rather delocalized.
The local angular momentum decomposed density of states (DOS) shown in Fig.~\ref{DOS}
is consistent with this orbital interaction diagram.
The top panel shows the HSE results, whereas the bottom panel
reports DFT+$U$ results for $U$=6 eV.
In the inset, we show the relation between
the center of mass of the Mn-$d$ majority states, $\langle \epsilon_d \rangle$,
and the $U$ value.
The horizontal line indicates the $\langle \epsilon_d \rangle$ value, which matches
the HSE result ($U$=6 eV).
Fig.~\ref{DOS} clearly confirms the interpretation discussed above.
In particular, integration of the majority
total density of states from the Fermi level up to the end of the Ge-valence
band exactly sums up to 2 electrons: these are the two holes
required to fill the Ge valence band.
Mn-substitution into the Ge host matrix does not produce
a Jahn-Teller ion,
but rather a Mn$^{2+}$ ionic state with a $d^{5}$ configuration and
two spin-polarized holes.
Let us now compare the HSE and DFT+$U$ DOS.
Due to the choice of the $U$ value, the Mn-$d$ states have the same energy
within HSE and DFT+$U$
but the hybridization between Ge-$sp$ and Mn $t_{2g}$ and $e_{g}$ states (this
latter symmetry-allowed away from $\Gamma$) is underestimated within DFT+$U$
compared to HSE. While the on-site $U$ mainly localizes the Mn $d$
states, HSE also acts through the screened exchange
on Ge-$p$ states, lowering their energy position
and leading to a larger Mn-$d$ Ge-$p$ hybridization.
As a matter of fact, the larger hybridization in HSE compared to DFT+$U$
can be recognized
just below $-4$~eV, where a peak
in the $e_g$ character (shaded region in Fig.~\ref{DOS}) is completely
absent in the present DFT+$U$ description.
We note that previous LDA+$U$ calculations\cite{LOT} with $U$=4 eV, although
reproducing the peak at $-4$~eV characteristic of the Mn-Ge bond,\cite{LOT}
gave a quite different
density of states for both the $t_{2g}$ and $e_{g}$ states, which is due to
the strong dependence of the localized $d$-states
description on the $U$ parameter.
Finally, as found in Ref.\ \onlinecite{LOT},
the half-metallic character of the compound is destroyed within
DFT+$U$: the energy position of the Mn-$t_{2g}$ Ge-$sp$
bonding minority states,
whose energy position is mainly determined by the atomic Ge-$sp$ levels,
is raised towards higher energies causing an incomplete filling of the
minority valence band.
In a previous study,\cite{aless} it was shown that the half-metallicity is favored
in Mn-doped Ge, while in a silicon matrix it is lost.
In Fig.~\ref{dos_Si}, we show the HSE DOS for substitutional Mn$_{\textrm{Si}}$
(top panel) and DFT+$U$, with $U$=6 eV, the same used for Mn$_{\textrm{Ge}}$
(bottom panel). The correction for the
self-interaction error has a larger effect on Si-$sp^{3}$ states, since they are quite localized. Therefore, they are pushed down in energy. According to the orbital energy diagram shown in Fig.\ \ref{orbit}, also the minority \textit{bonding}
(Si-$sp^{3}$)$_{b}$ are shifted down in energy, favoring the half-metallicity.
Obviously, the Mn-$d$ states are also corrected for the self-interaction error, and they are
pushed down in energy as well.
On the other hand, DFT+$U$ corrects only the Mn-$d$ states, but not the Si-$sp^{3}$ states.
This gives a near-half metallic structure and an underestimation of the hybridization of
Mn-Si states compared to the HSE description.
\begin{table}
\caption{Formation energy $\Delta \textrm{H}$, Mn-X distances $d_{\textrm{Mn-X}}$ and
magnetic moments $\mu$ for Mn-doped Ge for various structures. Distances in
parentheses specify the ideal Ge-Ge bond-length of the host (lines $d_{\textrm{Mn-X}}$).
Local Mn magnetic moment as well as total magnetic moment (in parentheses) are specified.}
\vspace{0.5cm}
\begin{tabular}{cccc} \hline \hline
& PBE & DFT+$U$ & HSE \\\hline \hline
{\bf Mn substitutional site} & & & \\
$\Delta \textrm{H}$ (eV/Mn) & 1.5 & & 0.9 \\
$d_{\textrm{Mn-Ge}}$ (\AA) & 2.46 (2.51) & 2.53 (2.51) & 2.52 (2.47) \\
$\mu$ ($\mu_B$) & 3.3 (3.1) & 4.1 (3.4) & 4.1 (3.0) \\
\hline
{\bf Mn interstitial site} & & & \\
$\Delta \textrm{H}$ (eV/Mn) & 2.1 & & 1.8 \\
$d_{\textrm{Mn-Ge}}$ (\AA) & 2.57 (2.51) & 2.63 (2.51) & 2.58 (2.47) \\
$\mu$ ($\mu_B$) & 3.4 (4.0) & 4.2 (4.8) & 3.8 (4.1) \\
\hline
{\bf Mn-Mn dimer} & & & \\
$\Delta \textrm{H}_{\textrm{AFM}}$ (eV/Mn-pair) & 2.9 & & 1.5 \\
$\Delta \textrm{H}_{\textrm{FM}}-\Delta \textrm{H}_{\textrm{AFM}}$ & 0.77 & 0.22 & 0.15 \\
$d_{\textrm{Mn-Mn (FM)}} $ (\AA) & 2.55 (2.51) & 2.97 (2.47) & 2.99 (2.47) \\
$d_{\textrm{Mn-Mn(AFM)}}$ (\AA) & 1.95 (2.51) & 2.71 (2.47) & 2.83 (2.47) \\
\hline\hline
\end{tabular}
\label{MnGe}
\end{table}
\subsection{The single interstitial impurity} This defect in a Germanium matrix
possesses twice the formation energy
of the substitutional impurity (HSE), hence
it is unlikely to form. Here we only note that a tendency to a local expansion
around Mn is found for all functionals. Interestingly,
the local magnetic moment is larger for DFT+$U$ than for HSE,
suggesting sizeable differences in the interaction of the impurity with the local environment.
\subsection{Mn-dimers} Double Mn
substitutions on two nearest-neighbouring Ge sites are of
particular interest, since they are inferred to occur at
experimental growth conditions~\cite{park,LOT1} leading to
nucleation of Mn-precipitates. In addition, they might also be detrimental
for the magnetic ordering, since dimers show antiferromagnetic (AFM) coupling
with no net spin moment.
Unfortunately, GGA-based
calculations are not entirely conclusive, since the dimer configuration
becomes stable at a too small bond-length (1.95 \AA) not compatible with
the Mn ionic radius ($\simeq$ 1.1-1.3 \AA) or bond distances in Mn-Ge compounds.
Thus, published results~\cite{park,imp2} often refer to the ideal
unrelaxed structure which, of course, strongly overestimates the heat of
formation of the dimer.
From the results reported in Tab.~\ref{MnGe}, assuming
thermodynamic equilibrium,
we can comment on the relative concentration of single substitutional sites and dimers.
At thermodynamic equilibrium, the concentration $c$ of a defect with $n_{\rm Mn}$ Mn atoms is roughly proportional to $e^{-(\Delta \textrm{H}- n_{\rm Mn}\Delta \mu_{\rm Mn})/k_{B}T}$, where $\Delta \textrm{H}$ is the formation energy, $k_{\textrm{B}}$ the Boltzmann constant, and $T$ the temperature. Supposing $\Delta \mu_{\textrm{Mn}}\approx0$~eV (thermodynamic equilibrium with $\alpha-$Mn), the probability of finding a substitutional Mn or a dimer is $e^{-0.9/k_{B}T}$ and $e^{-1.5/k_{B}T}$, respectively,
\textit{i.e.} monomers are more likely to form than dimers. More generally, for a specific $\Delta \mu_{\textrm{Mn}}$
the probabilities for single substitutions and dimers are,
\[
e^{(-0.9+\Delta \mu_{\textrm{Mn}})/k_{B}T} \quad \mbox{and} \quad e^{(-1.5+2\Delta \mu_{\textrm{Mn}})/k_{B}T}.
\]
Therefore the dimer concentration is larger than the monomer concentration only for
$\Delta \mu_{\textrm{Mn}}>0.6$~eV, \textit{i.e.} at extremely Mn-rich conditions,
where $\alpha-$Mn precipitates are anyway already preferred over the formation of monomers (or dimers).
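A quick numerical check of this crossover (Python; the temperature is purely illustrative) is:
\begin{verbatim}
import numpy as np

kT = 8.617e-5*900.0                 # k_B T in eV at an illustrative 900 K
for dmu in (0.0, 0.3, 0.6, 0.7):    # Delta mu_Mn in eV
    ratio = np.exp((-1.5 + 2*dmu)/kT)/np.exp((-0.9 + dmu)/kT)
    print(f"dmu = {dmu:3.1f} eV: dimer/monomer = {ratio:.2e}")
# the ratio equals exp((dmu - 0.6)/kT) and exceeds one only for dmu > 0.6 eV
\end{verbatim}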
Although kinetic effects might well hinder the nucleation of larger precipitates,
our calculated thermodynamics suggests a rather low dimer concentration.
It is important to note that the thermodynamic arguments presented here
are not enough to fully assess the relative probability of occurrence
of monomers with respect to dimers, as these systems are usually
grown out of thermodynamic equilibrium and kinetic effects may play an important role.
Finally, the stabilization energy of AFM over FM coupling of
nearby impurities is much lower within HSE than GGA.
Furthermore, for HSE (and DFT+$U$),
the calculated Mn-Mn distances for both FM and
AFM magnetic alignment are in line with experiments ---which always report local
lattice dilation--- as well as with the
Mn-Mn distances in the FM Mn$_5$Ge$_3$ compound (varying between 2.52
and 3.06 \AA)~\cite{mn5ge3,mn5ge3.stroppa.1,mn5ge3.stroppa.2}.
\begin{figure}
\includegraphics[scale=0.5,angle=0]{./dos_hse_new.eps}
\vspace{1cm}
\caption{Density of states projected on the Mn impurity site (top panel)
in symmetry resolved angular momentum components: $t_{2g}$ (dashed line) and $e_g$
(shadow) states
for Mn substitutional impurity.
The density of states projected on the $l=1$ component of the 4
Ge (bottom panel) coordinated with the Mn impurity is also shown (solid line).
The inset shows the Mn-$d$ center of mass as a function of the $U$ value. The horizontal line
indicates the value found within HSE.}
\label{DOS}
\end{figure}
\begin{figure}
\includegraphics[scale=.5,angle=0]{dos_hse_new.GevsSi.eps}
\vspace{1cm}
\caption{(Color on line) HSE Density of states projected on the Mn impurity
site (top panel) for Mn in Si. Labels as in Fig.\ \ref{DOS}. DFT+$U$, $U$=6 eV (bottom panel).}
\label{dos_Si}
\end{figure}
\section{Conclusions} In summary,
we have performed a comparative study of substitutional Mn$_{\textrm{Ge}}$
by using PBE, PBE+$U$ and HSE functional. The main focus is on the differences arising
from three different treatments of the exchange-correlation term, namely the PBE, DFT+$U$,
and HSE. As is well known, the PBE treatment cannot satisfactorily describe
the ground state properties of Mn in the host semiconductor matrix. Including the $U$ correction at the DFT level improves the description. However, some differences still remain when compared to HSE. For example,
the HSE Mn-$d$ peak position is found at $\sim-$5 eV with respect to the Fermi energy, \textit{i.e.} in the same
energy region as observed in photoemission experiments.\cite{LOT} When using the DFT+$U$ method and fixing the $U$ parameter in order to recover the experimental $d$-peak position, the hybridization with Ge-$p$ states around $-$4 eV is underestimated compared to HSE and the experimental photo-emission
peak.\cite{LOT} Furthermore for the same $U$,
the half-metallic character is not predicted by DFT+$U$.
The fact that HSE accurately describes the host semiconductor and, at the same time, the interaction of localized Mn states with the host valence states
makes this functional a valuable approach for studying transition metal defects in semiconductors. It is also true that HSE calculations are usually considerably more
computationally demanding than DFT+$U$. Therefore, whenever
a compromise between accuracy and computational effort is required in the calculations,
a preliminary HSE study may be useful for choosing an appropriate $U$ value, which is often not accessible from experiments.
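As a minimal illustration of this strategy, one can interpolate the DFT+$U$ Mn-$d$ band center as a function of $U$ (cf. the inset of Fig.~\ref{DOS}) and select the $U$ at which it matches the HSE reference; the sketch below uses hypothetical placeholder values for the band centers, not our calculated ones.
\begin{verbatim}
# Sketch: choose U so that the DFT+U Mn-d band center
# matches an HSE reference value (all numbers are placeholders).
import numpy as np

U_values = np.array([2.0, 4.0, 6.0, 8.0])      # eV (hypothetical)
d_center = np.array([-3.2, -4.1, -4.9, -5.6])  # eV (hypothetical)
hse_ref = -5.0                                 # eV (hypothetical)

# Linear interpolation of U as a function of the band center;
# np.interp requires an increasing x-grid, hence the reversal.
U_opt = np.interp(hse_ref, d_center[::-1], U_values[::-1])
print("U matching the HSE d-band center: %.1f eV" % U_opt)
\end{verbatim}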
\acknowledgments
This work was supported by the Austrian {\em Fonds
zur F\"orderung der wissenschaftlichen Forschung} and by a computing
grant at CINECA-HPC center.
\section{Introduction}\label{Introduction}
Core-collapse supernovae (CC-SNe) originate from the gravitational collapse of the iron cores formed by massive stars ($M \geq 8 \: \mbox{M$_{\odot}$}$) that cannot be supported by further exothermal thermonuclear reactions (\citealt{Iben1983}; \citealt{Woosley2002}). An important sub-class of CC-SNe is represented by Type II-plateau events (SNe II-P), characterized by the presence of hydrogen in their spectra \citep{Filippenko1997} and a luminosity ``plateau'' that lasts for $\sim 80 - 100$ days after the blue-band maximum of the light curve \citep{Barbon1979}. The plateau is powered by the recombination of hydrogen in the SN ejecta. When the recombination ends, the light curve drops sharply by several magnitudes in $\sim 30$ days (e.g. \citealt{Kasen2009}; \citealt{Olivares2010}).
This transition phase is followed by a linear ``radioactive tail'', where the light curve is powered by the radioactive decay of $^{56}$Co to $^{56}$Fe. In this phase the SN luminosity depends on the amount of $^{56}$Ni synthesized in the explosion (e.g. \citealt{Weaver1980}).\\
Both theoretical (e.g. \citealt{Grassberg1971}; \citealt{Litvinova1983}; \citealt{Utrobin2008}; \citealt{Pumo2011}; \citealt{Bersten2012}) and empirical (e.g. \citealt{Smarttetal2009}) investigations show that type II-P SNe are generally associated with red supergiants (RSGs). A minor fraction of them (less than $3-5\%$, e.g. \citealt{Smarttetal2009}; \citealt{Kleiser2011}; \citealt{Pastorello2012}) results from the explosion of a blue supergiant, similar to SN 1987A (\citealt{Gilmozzi1987}; \citealt{Kirshner1987}). Theoretical models predict that type II-P SNe are the final fate of progenitors between $8$ and $30$ $M_\odot $ (e.g. \citealt{Heger2003}; \citealt{Walmswell2012}).
Most progenitors identified in high-resolution archival images were found to be RSGs of initial masses between $\sim 8 \: \mbox{M$_{\odot}$}$ and $\sim 17 \: \mbox{M$_{\odot}$}$. The apparent lack of high-mass progenitors has been dubbed as the ``RSG problem'' (\citealt{Smarttetal2009}, and references therein).
The existence of this discrepancy has been further confirmed by studies of the massive star population in Local Group galaxies, for which RSGs have been found to have masses up to $25 \: \mbox{M$_{\odot}$}$ (\citealt{Massey2000}; \citealt{Massey2001}).\\
The reason for this lack of detection of massive RSG progenitors is still debated. A possible solution of the RSG problem was presented by \citet{Walmswell2012}. They speculate that an underestimation of the luminosity of the RSG SN progenitors (and therefore of their masses) might occur if we neglect the presence of an additional extinction due to dust production in the RSG winds. They estimated a new upper limit for the mass range of $21^{+2}_{-1} \mbox{M$_{\odot}$}$, which is, within the errors, marginally consistent with the range derived by \citet{Smartt2009}. \citet{Kochanek2012} pointed out that the use of standard interstellar extinction laws may overestimate the effects of the reddening.\\
A different approach to estimate the mass of Type II-P SN progenitors is based on the use of hydrodynamic modelling of the SN evolution. This allows us to determine the ejecta mass, explosion energy, pre-SN radius and Ni mass by performing a simultaneous comparison between the observed and simulated light curves, the evolution of line velocities and the continuum temperature (\citealt{Litvinova1983}; \citealt{Litvinova1985}; \citealt{Zampieri2005}; \citealt{Zampieri2007}).
The pre-explosion mass is calculated from the ejecta mass assuming the mass of a neutron star remnant ($1.4 \: \mbox{M$_{\odot}$}$) and mass loss through stellar winds. The hydrodynamic modelling of several well-observed Type II-P SNe (SNe 1997D, \citealt{Zampieri1998}; 1999em, \citealt{Elmhamdi2003}; 2003Z, \citealt{Utrobin2007} and \citealt{Spiro2014}; 2004et, \citealt{Maguire2010}; 2005cs, \citealt{Pastorello2009}; 2009kf, \citealt{Botticella2010}) determined higher masses for the progenitors than those derived from the analysis of pre-explosion images. This discrepancy points to systematic errors either in the analysis of the pre-explosion images or in the assumptions on the physics of the hydrodynamical modelling (\citealt{Utrobin1993}; \citealt{Blinnikov2000}; \citealt{Chugai2000}; \citealt{Zampieri2003}; \citealt{Pastorello2004}; \citealt{Utrobin2007a}; \citealt{Utrobin2007}; \citealt{Utrobin2008}; \citealt{Utrobin2009}; \citealt{Pastorello2009b}).\\
Another method to estimate the mass of the progenitor is the modelling of nebular-phase spectroscopic observations (\citealt{Jerkstrand2012}; \citealt{Jerkstrand2014}), which provides good agreement with estimates obtained from the analysis of pre-explosion images.\\
The astrophysical interest in Type II-P SNe is twofold: 1) observations show that Type II-P SNe are the most common explosions in the nearby Universe (e.g. \citealt{Cappellaro1999}; \citealt{Li2011}); and 2) starting from the pioneering suggestion by \citet{Kirshner1974}, Type II-P SNe have been proposed as robust distance indicators.
Two different approaches are used to derive distance measurements of SNe II-P.
The theoretical approach is based on spectral modelling like the expanding photosphere method (e.g. \citealt{Eastman1996}) or the spectral expanding atmosphere method (e.g., \citealt{Baron2004}).
Empirical approaches exploit the observed correlation between
the luminosity of a Type II-P SN and its expansion velocity (e.g., the standardized candle method, \citealt{Hamuy2002}) or the steepness of the light curve after the plateau phase \citep{Elmhamdi2003b}.
The \citet{Hamuy2002} method, refined for example by \citet{Nugent2006}, \citet{Poznanski2009}, and \citet{Olivares2010}, has an intrinsic accuracy of $\sim 10-12\%$ \citep{Hamuy2002}, slightly larger than the accuracy obtained for Type Ia SNe (e.g. \citealt{Tammann2013}). Importantly, Type II-P SNe can be observed out to cosmological distances (e.g. \citealt{Nugent2006}), with the advantage that they arise from a homogeneous progenitor population. The \citet{Hamuy2002} method can, therefore, be used as an independent health check of the SN Ia-based distance scale.
The main goal of this paper is to present the results of our photometric and spectroscopic monitoring campaign of SN 2012ec, which exploded in NGC 1084. The early data were collected via the Large Program ``Supernova Variety and Nucleosynthesis Yields'' (PI S. Benetti). A substantial fraction of the data has been collected via the ESO Public Survey PESSTO \footnote{www.pessto.org} (``Public ESO Spectroscopic Survey of Transient Objects'', PI S.J. Smartt). The observations of SN 2012ec were analysed in conjunction with the hydrodynamical codes described in \citet{Pumo2010} and \citet{Pumo2011}, and with information on the progenitor obtained from high-resolution pre-explosion images. The same analysis has already been performed for two other type II-P SNe: SN 2012A (\citealt{Tomasella2013}; \citealt{Roy2014}) and SN 2012aw (\citealt{Fraser2012}; \citealt{Bayless2013}; \citealt{Bose2013}; \citealt{DallOra2014}). This allows us to carry out a homogeneous comparative study of these three SNe, and to identify possible systematic discrepancies in the estimates of the masses of the progenitors derived from different techniques.
The paper is organized as follows: in Section 2 we present the discovery and the detection of the progenitor of SN 2012ec; in Section 3 we discuss the properties of the host galaxy, the distance and the extinction; in Section 4 we present the optical and near-infrared (NIR) photometric evolution of SN 2012ec, and compare its colour evolution and bolometric light curve with those of other Type II-P SNe. In Section 5 we present the optical and NIR spectroscopic observations. In Section 6 we discuss the results of the modeling of the data and in Section 7 we present a detailed comparison of SN 2012ec with the Type II-P SNe 2012A and 2012aw. In Section 8 we consider these three SNe in the context of the SCM and in Section 9 we discuss our results.
\section{Discovery and progenitor detection}\label{discovery}
SN 2012ec was discovered by \citet{Monard2012} in the almost face-on ($i=57^{\circ}$, \citealt{Moiseev2000}) spiral galaxy NGC 1084 on 2012 August 11.039 UT (MJD=56150.04). \citet{Childress2012} classified SN 2012ec as a very young type II-P SN, probably a few days post-explosion. In Fig. \ref{comp} we show this early spectrum of SN 2012ec (collected on 2012 August 13 with WiFeS, MJD $= 56152.2$), compared with SN 2006bp \citep{Quimby2007} at five different epochs. The spectrum of SN 2012ec is very similar to those of SN 2006bp obtained at $8$ and $10$ days after the explosion, implying that the SN was observed at $\sim +9$ days post-explosion and giving an explosion epoch of $\sim 7$ days before the discovery. We explicitly note that our estimate is slightly different from the one given by \citet{Maund2013}, who estimated the explosion date at $<6$ days before the discovery by comparison with spectra of SN~1999em.
The explosion epoch of SN 2006bp is much more tightly constrained than that of SN 1999em, because it is based on the detection of shock breakout (\citealt{Nakano2006}; \citealt{Quimby2007}). The estimates obtained by using either SN 2006bp or SN 1999em, as reference, are in agreement within the errors. We adopt, therefore, a conservative constraint on the explosion date of $7 \pm 2$ days prior to discovery and define the zero phase as our estimated explosion epoch of MJD $= 56143.0$.
\citet{Maund2013} identified a progenitor candidate in pre-explosion Hubble Space Telescope (HST) images. Photometry of the progenitor candidate was compared with synthetic photometry of MARCS spectral energy distributions (SED) \citep{Gustafsson2008}, which suggested that the progenitor of SN 2012ec was a RSG with an initial mass in the range $14-22 \: \mbox{M$_{\odot}$}$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{new_explosion.eps}
\end{center}
\caption{Comparison between a very early spectrum of SN 2012ec and $5$ spectra of SN 2006bp, from day $3$ to $16$.}
\label{comp}
\end{figure}
\section{Host galaxy, distance and extinction}\label{hostgalsec}
The SN is located $0.7"$E and $15.9"$N of the nucleus of the host galaxy NGC 1084 (see Fig. \ref{FC}). Details of NGC~1084 are presented in Table \ref{galprop}. NGC 1084 previously hosted $4$ known SNe: the Type II-P SN 2009H \citep{Li2009}, the Type II SNe 1998dl \citep{King1998} and 1996an \citep{Nakano1996}, and the Type Ia SN 1963P \citep{Kowal1968}.
The distances available in the literature for NGC 1084 are principally based on the Tully-Fisher relation, and we adopt the value $\mu = 31.19 \pm 0.13$ mag, available in the Extragalactic Distance Database \footnote{Extragalactic Distance Database, \hspace*{0.16cm} http:/$\!$/edd.ifa.hawaii.edu/} \citep{Tully2009}.
The Galactic reddening towards SN~2012ec was estimated from the \citet{Schlafly2011} dust maps to be $E(B-V)= 0.024$ mag.\footnote{We checked the consistency with the \citet{Schlegel1998} calibration, and the two agree within a few thousandths of a magnitude.} The internal reddening in NGC~1084 was derived using the measured equivalent width (EW) of the Na~I~D doublet ($5889$, $5895$ \AA), observed in a low-resolution spectrum at $+19$ days. The measured value was $\mathrm{EW(NaID) = 0.8 \pm 0.3}$ \AA, from which we obtained $E(B-V)= 0.12 ^{+0.15}_{-0.12}$ mag using the \citet{Poznanski2012} calibration and $E(B-V)=0.11$ mag using the \citet{Turatto2003} calibration. These two values are in good agreement, and we adopt $E(B-V)=0.12 ^{+0.15}_{-0.12}$ mag for the host galaxy reddening.
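For reference, the \citet{Poznanski2012} calibration for the blended Na~I~D doublet is approximately $\log_{10} E(B-V) \simeq 1.17 \times \mathrm{EW} - 1.85$, so that $\mathrm{EW}=0.8$~\AA\ gives
\[
E(B-V) \simeq 10^{\,1.17 \times 0.8 - 1.85} \simeq 0.12 \; {\rm mag},
\]
consistent with the value adopted above; the asymmetric uncertainties follow from propagating the $\pm 0.3$~\AA\ error on the EW together with the intrinsic scatter of the calibration.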
Assuming a \citet{Cardelli1989} reddening law ($R_{V}=3.1$), we estimate the total Galactic and host $V-$band extinction towards SN~2012ec to be $A_{V}= 0.45$ mag.
\begin{table}
\caption{Properties of NGC 1084.\label{galprop}}
\begin{footnotesize}
\begin{tabular}{ll}
\hline
$\alpha$ (2000) & $2^{h}43^{m}32.091$ \\
$\delta$ (2000) & $-07\degr 47\arcmin 16.76\arcsec$ \\
morphological type & SA(s)d \\
\texttt{z} & $0.004693 \pm 0.000013$ \\
$\mu$ & $31.19 \pm 0.13$ mag \\
v$_{Hel}$ & $1407 \pm 4 \mbox{$\rm{\,km\,s^{-1}}$}$ \\
$E(B-V)_{Galactic}$ & $0.024$ mag \\
$E(B-V)_{host}$ & $0.12$ mag \\
\hline
\end{tabular}
\\[1.5ex]
\end{footnotesize}
\end{table}
\section{Photometric evolution}\label{Photsec}
\subsection{Data sample and reduction}
A photometric and spectroscopic monitoring campaign for SN~2012ec, at optical and NIR wavelengths, was conducted over a period of $153$ days, covering $77$ epochs from $11$ to $164$ days post-explosion, using multiple observing facilities. Additional data collected in the nebular phase will be published in a companion paper (\citealt{Jerkstrand14b}, subm.).
$BVRI$ Johnson-Cousins data were collected with: the $2.0$m Liverpool Telescope (LT, Canary Islands, Spain) equipped with the IO:O camera ($BV$, $21$ epochs); the $3.58$m ESO New Technology Telescope (NTT, La Silla, Chile) equipped with the EFOSC2 (ESO Faint Object Spectrograph and Camera) camera ($BVRI$, $9$ epochs); the $1.82$m Copernico telescope (Asiago, Italy) equipped with AFOSC (Asiago Faint Object Spectrograph and Camera) ($BVRI$; $3$ epochs); the $0.6$m ESO TRAnsiting Planets and PlanetesImals Small Telescope (TRAPPIST, La Silla, Chile), equipped with TRAPPISTCAM ($BVR$, $4$ epochs); and the array of $0.41$m Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes (PROMPT, Cerro Tololo, Chile), equipped with Apogee U47p cameras, which employ E2V CCDs ($BVRI$, $21$ epochs).
$ugriz$ images were collected with: the LT equipped with the IO:O camera ($uriz$ $21$ epochs); the ESO NTT Telescope equipped with EFOSC2 ($ugriz$, $3$ epochs); the PROMPT telescopes ($griz$, $19$ epochs); and the $0.4$m telescope at the Wendelstein Observatory (Mount Wendelstein, Germany), equipped with a ST-10 CCD camera ($gri$, $7$ epochs).
$JHK_{s}$ observations were acquired with the ESO NTT telescope, equipped with the SOFI (Son Of ISAAC) camera ($8$ epochs).
A summary of the characteristics of the instruments and telescopes used for photometric follow up are presented in Table \ref{phtel}.
\begin{table*}
\caption{Summary of the characteristics of the instruments used for the photometric monitoring.\label{phtel}}
\begin{footnotesize}
\begin{tabular}{lclllcll}
\hline
Telescope &Camera & Pixel scale & Field of view & Filters$^{a}$ & \# of epochs \\
& & [arcsec/pix] & [arcmin] & \\
\hline
NTT (3.58m) & EFOSC2 & 0.24 & 4 $\times$ 4 & $B, V, R$; $u, g, r, i$ & 12 \\
NTT (3.58m) & SOFI & 0.28 & 5 $\times$ 5 & $J, H, K_{s}$ & 8 \\
LT (2.0m) & IO:O & 0.15 & 10 $\times$ 10 & $B, V$; $u, r, i, z$ & 21 \\
PROMPT (0.41m) & APU9 & 0.59 & 11 $\times$ 11 & $B, V, R, I$; $g, r, i, z$ & 21 \\
CAO (1.82m) & AFOSC & 0.46 & 8 $\times$ 8 & $B, V, R$; $i$ & 3 \\
SAO (0.97m) & SBIG & 0.86 & 57 $\times$ 38 & $R$ & 1 \\
WOT (0.4m) & SBIG ST-10 XME & 0.44 & 16 $\times$ 10 & $g, r, i$ & 7 \\
TRAPPIST (0.60m) & TRAPPISTCAM & 0.65 & 27 $\times$ 27 & $B$, $V$, $R$ & 4 \\
\hline
\end{tabular}
\\[1.5ex]
NTT = New Technology Telescope with the optical camera ESO Faint Object Spectrograph and Camera (EFOSC2) and with the near-infrared camera Son of ISAAC (SOFI); LT = the Liverpool Telescope with the optical CCD camera IO:O; PROMPT = Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes; CAO = the Copernico telescope at the Asiago Observatory with the Asiago Faint Object Spectrograph and Camera (AFOSC); SAO = the Schmidt telescope at the Asiago Observatory; WOT = the $0.4$m telescope at the Wendelstein Observatory; TRAPPIST = TRAnsiting Planets and PlanetesImals Small Telescope. \\
$^{a}$ The NTT and CAO $i$ filter is a Gunn filter.
\end{footnotesize}
\end{table*}
Data were pre-reduced using the respective instrument pipelines, where available, or following the standard procedures (bias, overscan and flat-field corrections, trimming) in the \texttt{IRAF} \footnote{IRAF is distributed by the National Optical Astronomical Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} environment. In particular, NIR images were pre-reduced by means of an \texttt{IRAF}-based custom pipeline based on the \texttt{XDIMSUM IRAF} package \citep{Coppola2011}, which conducts the background subtraction using a two-step technique based on a preliminary guess of the sky background and with a careful masking of unwanted sources in the sky images.
Johnson-Cousins $BVRI$ calibrated magnitudes of $18$ reference stars were obtained by averaging their photometry obtained on $12$ photometric nights, in conjunction with observations of \citet{Landolt1992} standard star fields. $ugriz$ calibrated photometry for $17$ reference stars were obtained on $11$ photometric nights with the LT and the NTT telescopes, in conjunction with observations of \citet{Smith2002} $u'g'r'i'z'$ standard star fields. Finally, calibrated NIR 2MASS $JHK$ photometry was obtained for $5$ reference stars, for which 2MASS \citep{Skrutskie2006} photometry was available. We did not correct NIR magnitudes for colour terms, since they are generally very small in the NIR bands (e.g. \citealt{Carpenter2001}). Our adopted reference stars showed no clear signs of variability.
The host galaxy and the SN position are shown in Fig. \ref{FC}, along with the local sequence stars adopted for the photometric calibration. The calibrated photometry for the local sequence stars is reported in Tables \ref{local} and \ref{local1}. In the following, the Johnson-Cousins $BVRI$ and NIR photometry are reported in Vega magnitudes, while the $ugriz$ photometry is reported in the AB magnitude system.
\begin{figure}
\centering
\includegraphics[scale=0.25]{sn2012ec_fc.eps}
\caption{An image of SN 2012ec and the host galaxy NGC 1084, acquired with the Liverpool Telescope and the IO:O camera. The field of view is $14.5 \times 14.5$ arcmin$^2$. Reference stars are circled and labeled (see Tables \ref{local} and \ref{local1}).}
\label{FC}
\end{figure}
\begin{table*}
\setlength{\tabcolsep}{3pt}
\caption{Positions and photometry of the local sequence reference stars in the $BVRI$ and in the $u'g'r'i'z'$ systems.\label{local}}
\begin{scriptsize}
\begin{tabular}{llllllllllll}
\hline
\# id & $ \alpha_{J2000.0} $ & $ \delta_{J2000.0} $ & $B$ & $V$ & $R$ & $I$ &$u'$ &$g'$ &$r'$ &$i'$ &$z'$ \\
& (deg) &(deg) & mag & mag & mag & mag & mag & mag & mag & mag & mag \\
\hline
1 & 41.5216674 & -7.5597940 & 17.98 (0.02) & 16.88 (0.02) & 16.19 (0.02) & 15.53 (0.03) & 19.85 (0.02) & 17.48 (0.04) & 16.41 (0.01) & 16.01 (0.02) & 15.86 (0.01) \\
2 & 41.5496917 & -7.6416869 & 16.84 (0.02) & 15.97 (0.02) & 15.46 (0.02) & 14.93 (0.03) & 18.27 (0.02) & 16.43 (0.02) & 15.67 (0.02) & 15.38 (0.01) & 15.32 (0.02) \\
3 & 41.5474764 & -7.6530580 & 17.14 (0.02) & 16.28 (0.02) & 15.81 (0.02) & 15.32 (0.02) & 18.45 (0.06) & 16.71 (0.03) & 16.04 (0.02) & 15.78 (0.01) & 15.72 (0.01) \\
4 & 41.5265649 & -7.6778087 & 15.58 (0.02) & 14.95 (0.02) & 14.66 (0.02) & & 16.48 (0.03) & 15.25 (0.02) & 14.80 (0.02) & 14.64 (0.02) & 14.65 (0.01) \\
5 & 41.5589242 & -7.6811761 & 14.27 (0.02) & 13.52 (0.02) & 13.23 (0.02) & & 15.18 (0.02) & 13.91 (0.02) & 13.31 (0.02) & 13.05 (0.02) & 12.94 (0.01) \\
6 & 41.5522025 & -7.6973300 & 17.01 (0.02) & 16.00 (0.02) & 15.43 (0.02) & & 18.82 (0.05) & 16.55 (0.03) & 15.62 (0.02) & 15.30 (0.02) & 15.18 (0.01) \\
7 & 41.5886692 & -7.5829251 & 14.62 (0.01) & 14.07 (0.01) & 13.84 (0.02) & & 15.36 (0.03) & 14.49 (0.02) & 13.98 (0.01) & 13.86 (0.02) & 13.87 (0.01) \\
9 & 41.6065545 & -7.5858940 & 15.16 (0.01) & 14.42 (0.01) & 14.08 (0.02) & & 16.11 (0.02) & 15.03 (0.03) & 14.25 (0.02) & 14.04 (0.02) & 14.01 (0.01) \\
11 & 41.6108998 & -7.5996059 & 16.99 (0.01) & 16.36 (0.01) & 16.04 (0.02) & & 17.64 (0.02) & 16.66 (0.03) & 16.21 (0.01) & 15.99 (0.02) & 15.97 (0.02) \\
12 & 41.5528922 & -7.5585795 & 18.32 (0.02) & 16.83 (0.02) & 15.93 (0.02) & 14.73 (0.02) & 20.12 (0.05) & 17.65 (0.01) & 16.19 (0.01) & 15.24 (0.02) & 14.90 (0.02) \\
16 & 41.5215760 & -7.5167457 & 16.06 (0.01) & 15.26 (0.01) & 14.90 (0.02) & 14.48 (0.01) & 17.25 (0.04) & 15.64 (0.03) & 15.10 (0.02) & 14.93 (0.02) & 14.86 (0.01) \\
17 & 41.4300180 & -7.5076398 & 14.74 (0.01) & 14.08 (0.01) & 13.81 (0.02) & & 14.92 (0.02) & 14.43 (0.02) & 14.18 (0.02) & 13.92 (0.01) & 13.95 (0.01) \\
18 & 41.5925440 & -7.5310037 & 16.06 (0.01) & 13.76 (0.01) & 14.90 (0.02) & & & & & & \\
\hline
\end{tabular}
\\[0.1ex]
\end{scriptsize}
\end{table*}
\begin{table*}
\caption{Positions and photometry of the local sequence reference stars in the 2MASS $JHK$ system.\label{local1}}
\begin{tabular}{llllll}
\hline
Star ID & $\alpha_{J2000.0}$ & $\delta_{J2000.0}$ & $J$ & $H$ & $K$ \\
& (deg) & (deg) & (mag)& (mag) & (mag) \\
\hline
1 & 41.5216674 & -7.5597940 & 14.82 (0.04) & 14.08 (0.05) & 13.94 (0.05) \\
2 & 41.5496917 & -7.6416869 & 14.32 (0.03) & 13.87 (0.04) & 13.73 (0.05) \\
3 & 41.5474764 & -7.6530580 & 14.71 (0.04) & 14.35 (0.05) & 14.14 (0.06) \\
12 & 41.5528922 & -7.5585795 & 13.63 (0.03) & 13.01 (0.03) & 12.81 (0.03) \\
\hline
\end{tabular}
\end{table*}
Photometric measurements were carried out with the \texttt{QUBA} pipeline \citep{Valenti2011}, which performs \texttt{DAOPHOT}-based \citep{Stetson1987} point-spread-function (PSF) fitting photometry on the SN and on the selected reference stars. Since SN 2012ec is embedded in a spiral arm of the host galaxy, the background was estimated with a polynomial model. We performed empirical tests for the best background subtraction, and in most cases we found that a $4\mathrm{th}$-order polynomial model of the background gave satisfactory results, owing to the high S/N ratio of the SN in these images. Only at the last few epochs was the S/N ratio of the SN too low to permit a satisfactory removal of the local background. We note, however, that even the subtraction of a template image would probably not yield a significant improvement, as in these cases the flux of the SN was only a few tens of counts above the local background. Photometric uncertainties were automatically estimated by the pipeline using artificial star experiments.
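As an illustration of this background treatment, the following sketch fits a low-order two-dimensional polynomial to the pixels surrounding the SN position; all arrays are placeholders, and the actual fits were performed within the \texttt{QUBA} pipeline.
\begin{verbatim}
# Sketch: 4th-order 2D polynomial background fit around the SN
# position (placeholder data; the real fit is done by QUBA).
import numpy as np

def poly2d_design(x, y, order=4):
    """Design matrix with terms x^i y^j for i + j <= order."""
    cols = [x**i * y**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

# Pixel coordinates and counts around the SN (SN itself excluded);
# random placeholder values standing in for real image data.
rng = np.random.default_rng(0)
x = rng.uniform(-20, 20, 500)
y = rng.uniform(-20, 20, 500)
counts = 100.0 + 0.5 * x - 0.2 * y + rng.normal(0.0, 3.0, 500)

A = poly2d_design(x, y)
coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
bkg_at_sn = poly2d_design(np.zeros(1), np.zeros(1)) @ coeffs
print("Background at the SN position: %.1f counts" % bkg_at_sn[0])
\end{verbatim}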
The photometric measurements of the SN in the $BVRI$, $u'g'r'i'z'$ and in the $JHK$ filter systems are reported in Table \ref{phot}.
\begin{table*}
\caption{Optical photometry in the Johnson-Cousins filters, in $u'g'r'i'z'$ bands and NIR photometry calibrated to the 2MASS system, with associated errors in parentheses.\label{phot}}
\centering
\resizebox{\textwidth}{!}{ %
\begin{tabular}{llcccccccccccc}
\hline
Date & $MJD$ & $B$ & $V$ & $R$ & $I$ & $u'$ & $g'$ & $r'$ & $i'$ & $z'$ & $J$ & $H$ & $K$ \\
& & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) \\
\hline
20120814 & 56154.22 & 14.99 (0.02) & 14.81 (0.02) & & & 15.02 (0.04) & & 14.78 (0.02) & 14.91 (0.02) & & & & \\
20120815 & 56155.22 & 14.99 (0.04) & 14.86 (0.04) & & & 15.09 (0.02) & & 14.81 (0.02) & 14.91 (0.02) & & & & \\
20120817 & 56157.59 & 15.12 (0.06) & 14.90 (0.06) & 14.74 (0.06) & 14.55 (0.05) & & & & & & & & \\
20120818 & 56158.23 & 15.10 (0.03) & 14.87 (0.03) & & & 15.28 (0.05) & & 14.82 (0.01) &14.94 (0.01) & & & & \\
20120819 & 56158.34 & 15.15 (0.06) & 14.95 (0.05) & 14.73 (0.06) & 14.53 (0.03) & & & 14.84 (0.05) & 14.86 & & & & \\
20120820 & 56159.31 & 15.29 (0.05) & 14.92 (0.04) & 14.62 (0.05) & 14.53 (0.05) & & 15.05 (0.07) & 14.78 (0.03) & 14.86 (0.06) & & & & \\
20120821 & 56160.30 & 15.18 (0.07) & 14.86 (0.06) & 14.65 (0.03) & 14.61 (0.06) & & 15.13 (0.10) & 14.81 (0.04) & 14.89 (0.04) & 14.92 (0.05) & & & \\
20120826 & 56165.28 & 15.47 (0.05) & 14.93 (0.04) & 14.64 (0.04) & & 16.35 (0.04) & 15.25 (0.08) & 14.80 (0.03) & 14.87 (0.03) & 14.92 (0.03) & 14.24 (0.02) & 14.04 (0.02) & 13.91 (0.02) \\
20120828 & 56168.20 & 15.55 (0.02) & 15.02 (0.02) & & & 16.52(0.06) & & 14.85 (0.02) & 14.93 (0.02) & & & & \\
20120831 & 56171.08 & 15.67 (0.06) & 14.98 (0.06) & & & 16.69 (0.08) & & 14.81 (0.02) & 14.94 (0.02) & & & & \\
20120902 & 56173.09 & 15.76 (0.04) & 14.99 (0.04) & & & 16.98 (0.06) & & 14.85(0.02) & 14.92 (0.02) & & & & \\
20120905 & 56176.13 & 15.76 (0.04) & 15.00 (0.05) & 14.65 (0.06) & 14.62 (0.06) &17.05 (0.07) & & 14.84 (0.03) & 14.86 (0.03) & 14.89 (0.03) & & & \\
20120909 & 56179.34 & 15.90 (0.06) & 15.10 (0.06) & 14.78 (0.04) & 14.45 (0.02) & & 15.54 (0.02) & & 14.92 & & 14.11 (0.03) & 13.89 (0.03) & 13.82 (0.03) \\
20120910 & 56180.92 & 15.95 (0.04) & 15.14 (0.03) & 14.76 (0.01) & 14.51 (0.01) & & & & & & & & \\
20120911 & 56181.59 & 16.05 (0.02) & 15.15 (0.02) & 14.79 (0.03) & 14.53 (0.03) & & & & & & & & \\
20120916 & 56186.20 & 16.06 (0.07) & 15.10 (0.06) & 14.75 (0.04) & 14.49 (0.02) & & 15.51 (0.04) & 14.89 (0.04) & 14.87 (0.02) & 14.85 (0.03) & & & \\
20120920 & 56190.24 & & 15.12 (0.03) & & 14.42 (0.05) & & & 14.89 (0.03) & 14.85 (0.03) & 14.87 (0.03) & & & \\
20120923 & 56194.87 & 16.15 (0.02) & 15.10 (0.02) & 14.78 (0.01) & 14.45 (0.01) & & & & & & & & \\
20120924 & 56195.23 & & & & & & & & & & 14.08 (0.03) & 13.89 (0.03) & 13.75 (0.03) \\
20120926 & 56196.20 & 16.16 (0.06) & 15.00 (0.05) & 14.72 (0.03) & & 17.96 (0.09) & 15.56 (0.03) & 14.81 (0.03) & 14.81 (0.03) & 14.78 (0.03) & & & \\
20120929 & 56199.29 & & 15.02 (0.03) & 14.74 (0.03) & 14.36 (0.01) & & & 14.88 (0.02) & 14.81 (0.02) & 14.79 (0.02) & & & \\
20121001 & 56202.01 & 16.19 (0.05) & 15.11 (0.05) & & & 17.98 (0.11) & & 14.89 (0.02) & 14.80 (0.03) & 14.82 (0.02) & & & \\
20121002 & 56202.20 & 16.28 (0.04) & 15.04 (0.04) & 14.74 (0.04) & 14.40 (0.02) & & 15.61 (0.06) & 14.93 (0.05) & 14.84 (0.04) & 14.85 (0.04) & & & \\
20121004 & 56204.21 & 16.33 (0.04) & 15.12 (0.03) & 14.73 (0.03) & 14.43 (0.02) & 18.07 (0.14) & 15.71 (0.04) & 14.90 (0.03) & 14.84 (0.02) & 14.84 (0.02) & & & \\
20121007 & 56208.04 & 16.23 (0.08) & 15.11 (0.08) & & & 18.07 (0.18) & & 14.85 (0.02) & 14.78 (0.02) & 14.81 (0.03) & & & \\
20121010 & 56211.05 & 16.35 (0.04) & 15.16 (0.04) & & & 18.26 (0.10) & 15.68 (0.03) & 14.89 (0.01) & 14.83 (0.01) & 14.85 (0.02) & & & \\
20121012 & 56212.19 & 16.46 (0.06) & 15.22 (0.07) & 14.78 (0.03) & 14.41 (0.02) & & & 14.94 (0.02) & 14.79 (0.03) & 14.87 (0.03) & & & \\
20121016 & 56216.35 & & & & & & & & & & 14.05 (0.03) & 13.80 (0.03) & 13.59 (0.03) \\
20121017 & 56217.15 & 16.50 (0.07) & 15.22 (0.06) & & & & & 14.95 (0.03) & 14.89 (0.03) & 14.77 (0.04) & & & \\
20121019 & 56220.42 & 16.63 (0.05) & 15.28 (0.05) & 14.77 (0.04) & 14.41 (0.03) & & & & & & & & \\
20121020 & 56221.06 & 16.58 (0.03) & 15.36 (0.03) & 14.8 (0.1) & 14.43 (0.02) &18.68 (0.08) & & 14.96 (0.02) & 14.86 (0.02) & 14.92 (0.02) & & & \\
20121022 & 56223.52 & & & & & & & 15.00 (0.02) & 14.85 (0.03) & & & & \\
20121024 & 56226.45 & & & & & & 15.99 (0.06) & 15.01 (0.02) &14.91 (0.03) & & & & \\
20121101 & 56232.13 & 16.79 (0.09) & 15.47 (0.08) & 14.95 (0.04) & 14.59 (0.02) & & 16.00 (0.06) & 15.15 (0.03) & & 15.05 (0.03) & & & \\
20121106 & 56237.12 & & 15.63 (0.03) & 15.04 (0.03) & 14.67 (0.03) & & 16.2 (0.1) & 15.26 (0.02) & 15.15 (0.02) & 15.16 (0.02) & 14.35 (0.06) & 14.12 (0.06) & 14.04 (0.04) \\
20121111 & 56242.13 & 17.1 (0.1) & 15.85 (0.08) & 15.26 (0.03) & 14.85 (0.03) & & 16.38 (0.06) & 15.39 (0.02) & 15.29 (0.02) & 15.26 (0.03) & & & \\
20121114 & 56245.20 & & & & & & & & & &14.58 (0.03) & 14.34 (0.03) & 14.28 (0.03) \\
20121115 & 56246.96 & 17.4 (0.1) & 16.09 (0.10) & & & & & 15.63 (0.02) & 15.47 (0.02) & 15.43 (0.02) & & & \\
20121117 & 56248.14 & & 16.26 (0.04) & 15.57 (0.05) & 15.15 (0.04) & & & & & & & & \\
20121119 & 56250.19 & 17.82 (0.09) & 16.49 (0.08) & 15.85 (0.05) & 15.37 (0.05) & & 17.24 (0.05) & 16.00 (0.02) & 15.89 (0.02) & 15.74 (0.03) & & & \\
20121122 & 56253.08 & 17.95 (0.10) & 17.16 (0.10) & 16.36 (0.07) & 15.63 (0.05) & & 17.45 (0.10) & 16.32 (0.03) & 16.29 (0.03) & 16.12 (0.12) & & & \\
20121204 & 56266.14 & & & & & & & & & & 15.73 (0.06) & 15.27 (0.08) & 15.40 (0.06) \\
20121205 & 56266.93 & 18.5 (0.2) & 17.3 (0.2) & & & & & 16.65 (0.07) & 16.51 (0.07) & 16.40 (0.06) & & & \\
20121207 & 56268.94 & 18.60 (0.15) & 17.40 (0.15) & & & & & 16.80(0.1) & 16.6 (0.1) & 16.50 (0.06) & & & \\
20121209 & 56270.95 & 18.70 (0.13) & 17.50 (0.13) & & & & & 16.9 (0.1) & 16.7 (0.1) & 16.6 (0.1) & & & \\
20121216 & 56277.99 & & & & & & & 17.0 (0.1) & 16.9 (0.1) & 16.8 (0.1) & & & \\
20121220 & 56282.94 & 18.8 (0.2) & 17.7 (0.2) & 16.9 (0.2) & & & & & & & & & \\
20121221 & 56283.10 & & & & & & & & & & 15.83 (0.03) & 15.41 (0.03) & 15.47 (0.03) \\
20121228 & 56290.00 & 19 (0.2) & 17.9 (0.2) & & & & & 17.1 (0.1) & 17.1 (0.1) & 16.9 (0.1) & & & \\
20130110 & 56302.81 & 19.15 (0.30) & 18.0 (0.3) & & & & & 17.2 (0.1) & 17.2 (0.1) & 17.0 (0.1) & & & \\
20130112 & 56305.66 & & 18.0 (0.3) & 17.15 (0.30) & 16.75 (0.30) & & & & & & & & \\
\hline
\end{tabular}}
\\[1.4ex]
\end{table*}
\subsection{Data analysis}\label{photoanalysis}
The photometric evolution of SN 2012ec in the $BVRI$, $JHK$ and in the $u'g'r'i'z'$ filter systems is shown in Fig. \ref{LC_opt}.
\begin{figure*}
\centering
\includegraphics[scale=0.4]{new_opt_bessel_2.eps}
\includegraphics[scale=0.4]{opt_sloan_new.eps}
\caption{Left panel: photometric evolution of SN 2012ec in the Johnson-Cousins $BVRI$ and $JHK$ filters. Right panel: photometric evolution of SN 2012ec in the $u'g'r'i'z'$ filters. A shift has been applied for clarity.}
\label{LC_opt}
\end{figure*}
SN 2012ec was already on the plateau in the $V, R, I, r', i'$ and $z'$ bands by $+13$ days. The average absolute magnitudes in the different bands during the plateau phase were $M_{V}=-16.54$ mag, $M_{R}=-16.75$ mag, $M_{I}=-16.96$ mag, $M_{r'}=-16.80$ mag, $M_{i'}=-16.93$ mag and $M_{z'}=-17.08$ mag.
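These absolute magnitudes follow directly from the adopted distance modulus and extinction; in the $V$ band, for instance,
\[
M_{V} = m_{V} - \mu - A_{V} = 15.10 - 31.19 - 0.45 = -16.54 \; {\rm mag},
\]
with the extinction in the other bands scaled according to the \citet{Cardelli1989} law.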
Using the definition of the plateau duration proposed by \citet{Olivares2010}, where the end of the plateau occurs at the knee of the light curve, we found that the plateau of SN 2012ec lasted almost $90$ days in $R, I, r', i', z'$ and almost $80$ days in $V$. This is shorter than the usual plateau duration of standard Type II-P SNe (e.g. SN 2004et, $\sim 100$ days, \citealt{Maguire2010}; SN 2012aw, $\sim 100$ days, \citealt{DallOra2014}; see also \citealt{Arcavi2012}). SN 2012ec began to fall from the plateau at $\sim +90$ days, while the photospheric phase, as traced by the observed spectroscopic evolution (see Sect. \ref{spec}), lasted until $\sim 160$ days. The decline in the light curve of SN 2012ec, from the plateau to the radioactive decay tail, lasted $\sim 30$ days, decreasing by $\sim 1.5$ mag in the $r'$, $i'$ and $V$ bands, $\sim 1$ mag in the $I$ band and $\sim 1.3$ mag in the $z'$ band. A list of the main characteristics of the light curve, for each filter, is reported in Table \ref{LCdata}.
\begin{table*}
\setlength{\tabcolsep}{5pt}
\caption{Epochs and apparent magnitudes of the light curve during the plateau in the $VRIr'i'z'$ bands.\label{LCdata}}
\setlength{\tabcolsep}{2.5pt}
\begin{footnotesize}
\begin{tabular}{lrrrrrrrrr}
\hline
& V & R & I & $r'$ & $i'$ & $z'$ & J & H & K \\
& mag & mag & mag & mag & mag & mag & mag & mag & mag \\
\hline
m$_\mathrm{plat}^{a}$ & 15.10 (0.02) & 14.78 (0.01) & 14.45 (0.01) & 14.89 (0.03) & 14.85 (0.03) & 14.87 (0.03) & 14.08 (0.03) & 13.89 (0.03) & 13.75 (0.03) \\
M$_\mathrm{plat}^{a}$ & -16.54 (0.17) & -16.75 (0.17) & -16.96 (0.17) & -16.80 (0.18) & -16.93 (0.18) & -17.08 (0.18) & -17.24 (0.18) & -17.38 (0.18) & -17.49 (0.18) \\
\hline
\end{tabular}
\\[1.5ex]
$^a$ The plateau phase refers to 59 days after the explosion, at $MJD=56202.0$.
\end{footnotesize}
\end{table*}
The NIR light curve exhibits a plateau of duration $\sim 90-100$ days, which subsequently drops over a period of $40$ days by $\sim 1.3$ mag in the $J$ band, $1.1$ mag in the $H$ band and $1.2$ mag in the $K$ band. This behaviour is similar to that observed for other Type II-P SNe (see for example, SN 2012A, \citealt{Tomasella2013}; SN 2012aw, \citealt{DallOra2014}).
The evolution of the $B-V$, $V-R$ and $V-K$ colours of SN 2012ec is shown in Fig. \ref{color_all}. The $B-V$ colour becomes progressively redder over the first $50$ days, rising from $B-V \sim 0$ to $\sim 1$ mag, before reaching a constant value by $\sim 160$ days. The $V-K$ colour starts from $0.7$ mag and increases slowly to $\sim 1$ mag at $\sim 100$ days, before increasing further from $\sim 1$ to $\sim 1.9$ mag in the period $100-130$ days.
The colour evolution of SN 2012ec is similar to those of other type II-P SNe (e.g. SN 2004et, \citealt{Maguire2010}; SN 1999em, \citealt{Elmhamdi2003}; SN 2009bw, \citealt{Inserra2012}).
The trends in the colour evolution are similar to those observed by \citet[][see their Fig. 10]{Faran2014} for a sample of 23 type II-P SNe.
\begin{figure}
\centering
\includegraphics[scale=0.4]{color_all.eps}
\caption{Colour evolution of SN 2012ec compared to other type II-P SNe.}
\label{color_all}
\end{figure}
\subsection{Bolometric light curve and $^{56}Ni$ mass}\label{bolometric}
A pseudo-bolometric light curve was calculated by integrating over the optical and NIR photometry.
The $u'Bg'Vr'Ri'Iz'JHK$ apparent magnitudes have been converted into monochromatic fluxes at the effective wavelength of each filter, and then corrected for extinction (Sect. \ref{hostgalsec}). The resulting SED was integrated over the entire wavelength range, assuming zero flux at the limits. The flux was estimated only at those phases for which $V$-band observations were available; when photometry in the other bands was not available at these phases, the magnitudes were estimated by interpolating the photometry acquired on adjacent nights.
The final integrated fluxes were converted to luminosities through the adopted distance modulus. The pseudo-bolometric light curve of SN 2012ec is shown in Fig. \ref{bolom_all}. The luminosity at the first epoch for which the calculation could be conducted ($14$ days) was $L= \mathrm{1.4 \times 10^{42} \: erg \: s^{-1}}$; this can be considered a lower limit for the bolometric luminosity. The SN luminosity settles onto the plateau by day $20$ ($L= \mathrm{0.9 \times 10^{42} \: erg \: s^{-1}}$), then begins to decrease significantly at $\sim 90$ days towards the radioactive tail, reached at day $\sim 130$ with a luminosity of $L= \mathrm{0.1 \times 10^{42} \: erg \: s^{-1}}$.
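A minimal sketch of the procedure is given below; the zero-point fluxes, effective wavelengths, magnitudes and extinction values are placeholders chosen only to illustrate the steps (magnitudes to dereddened monochromatic fluxes, trapezoidal integration with zero flux at the limits, and scaling by the distance).
\begin{verbatim}
# Sketch of the pseudo-bolometric integration (placeholder numbers).
import numpy as np

lam_eff = np.array([4400., 5500., 6600., 8000., 12350.])  # Angstrom
f_zero  = np.array([6.3e-9, 3.6e-9, 2.2e-9, 1.1e-9,
                    3.1e-10])       # erg/s/cm^2/A (illustrative)
mags    = np.array([16.06, 15.10, 14.78, 14.45, 14.08])
A_lam   = np.array([0.59, 0.45, 0.36, 0.26, 0.13])  # illustrative

# Magnitudes -> extinction-corrected monochromatic fluxes.
flux = f_zero * 10.0**(-0.4 * (mags - A_lam))

# Trapezoidal integration of the SED, zero flux at the limits.
lam = np.concatenate(([lam_eff[0] - 500.], lam_eff,
                      [lam_eff[-1] + 500.]))
sed = np.concatenate(([0.0], flux, [0.0]))
F_bol = np.trapz(sed, lam)                     # erg/s/cm^2

# Luminosity from the adopted distance modulus mu = 31.19 mag.
mu = 31.19
d_cm = 10.0**(1.0 + 0.2 * mu) * 3.086e18       # pc -> cm
L = 4.0 * np.pi * d_cm**2 * F_bol
print("L = %.2e erg/s" % L)
\end{verbatim}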
A comparison of the pseudo-bolometric light curve of SN~2012ec with other Type II-P SNe demonstrates a similar behaviour (e.g. SN 2012A, \citealt{Tomasella2013}; SN 2012aw, \citealt{DallOra2014}; SN 2009kf, \citealt{Botticella2010}; and SN 2005cs, \citealt{Pastorello2009}). From the pseudo-bolometric light curve of SN 2012ec, it is evident that its luminosity on the plateau is lower than those observed for SNe 2012aw and 2009kf, and that its plateau duration is shorter than those of the more luminous SNe.
SN 2012ec is more luminous than SNe 2012A and 2005cs, but its behaviour is more similar to that of SN 2012A: the two have comparable plateaux, although that of SN 2012A is slightly shorter.
SN 2005cs, instead, shows a different light curve evolution compared to SN 2012ec, in particular the fall from the plateau, which takes longer for SN 2005cs.
\begin{figure}
\centering
\includegraphics[scale=0.4]{bolom_all.eps}
\caption{Pseudo-bolometric light curve of SN 2012ec, along with those of other type II-P SNe. The pseudo-bolometric light curves account for the $UBVRIJHK$ contributions for SN 2012A, $UBgVrRiIzJHK$ for SN 2012aw, $griz$ for SN 2009kf and $UBVRIJHK$ for SN 2005cs.}
\label{bolom_all}
\end{figure}
We estimated the $\mathrm{^{56}Ni}$ mass synthesised during the explosion by comparing the luminosity of SN 2012ec with that of SN 1987A at similar late epochs.
Assuming a similar $\gamma$-ray deposition fraction, the mass of $\mathrm{^{56}Ni}$ was calculated using the relation of \citet{Bouchet1991}:
\begin{equation}
\label{eq:nichel}
M(^{56}Ni)_{12ec} = M(^{56}Ni)_{87A} \times \frac{L_{12ec}}{L_{87A}} (\mbox{M$_{\odot}$})
\end{equation}
For the $\mathrm{^{56}Ni}$ mass of SN 1987A we adopted the weighted mean of the values reported by \citet{Arnett1989} and \citet{Bouchet1991}, and for the bolometric luminosity we adopted the value of \citet{Bouchet1991} (see also \citealt{Suntzeff1988}). For SN 2012ec we calculated $M(^{56}Ni)_{12ec}= 0.040 \pm 0.015 \: \mbox{M$_{\odot}$}$, which is an average of the estimates made at $138$, $146$ and $158$ days (the reported uncertainty is the dispersion of the values computed at each epoch).
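As a minimal numerical sketch of Eq. (\ref{eq:nichel}), with illustrative input values rather than the exact luminosities adopted above, the scaling reads:
\begin{verbatim}
# Sketch of the 56Ni estimate via the 87A scaling (Eq. 1);
# the luminosities below are placeholders, not our measurements.
import numpy as np

M_ni_87A = 0.069    # Msun, commonly quoted for SN 1987A
L_87A    = 1.0e41   # erg/s at the comparison epochs (illustrative)
L_12ec   = np.array([5.8e40, 5.6e40, 5.4e40])  # days 138, 146, 158
                                               # (illustrative)

M_ni = M_ni_87A * L_12ec / L_87A
print("M(56Ni) = %.3f +/- %.3f Msun" % (M_ni.mean(), M_ni.std()))
\end{verbatim}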
The slope of the light curve at the last epochs of the dataset is $0.01 \pm 0.02 \mathrm{\: mag \: day^{-1}}$, in agreement with the $^{56}$Co decay rate.
The data from the nebular phase are published in a companion paper (\citealt{Jerkstrand14b}, submitted). \citet{Jerkstrand14b} estimate the nickel mass from photometry at 187 and 202 days, finding a value of $0.03 \pm 0.01 \: \mbox{M$_{\odot}$}$, which is in good agreement with our estimate.
The evolution of the SED of SN 2012ec, based on optical and NIR photometry, is shown in Fig. \ref{SED}. The observations covered the wavelength range $4000-23000$ \AA.
We evaluated the evolution of the SED and calculated blackbody continuum fits at each epoch. At $13$ days, the best fit gives a blackbody temperature of $9600 \pm 800$ K, which decreases to $5300 \pm 400$ K by day $106$. At early times, the fits were conducted using all available photometric observations. At later epochs, the bluest photometric observations were excluded from the fits, as metal line blanketing at these wavelengths, particularly due to Fe II and Ti II, causes significant departures from the ideal blackbody assumption \citep{Dessart2005}. The $u$-band data were excluded from the fits after $+20$ days and, in addition, the $B$ and $g$ bands were excluded after $+50$ days.
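As an illustration of this step, the sketch below fits a scaled Planck function to a single epoch of extinction-corrected photometric fluxes; the wavelengths and fluxes are placeholders, not our measurements, and the scale factor is solved analytically at each trial temperature.
\begin{verbatim}
# Sketch: blackbody fit to one epoch of the SED (placeholder data).
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

lam_eff = np.array([4400., 5500., 6600., 8000., 12350., 16620.])
flux = np.array([5.1e-15, 5.0e-15, 4.2e-15, 3.1e-15,
                 1.4e-15, 0.7e-15])        # erg/s/cm^2/A (illustrative)

def bb_shape(lam_A, T):
    """Planck function shape B_lambda(T), up to a constant."""
    lam = lam_A * 1e-8                     # Angstrom -> cm
    return 1.0 / (lam**5 * (np.exp(h*c/(lam*k*T)) - 1.0))

def model(lam_A, T):
    shape = bb_shape(lam_A, T)
    # Best-fit linear scale for this T (analytic least squares).
    scale = np.sum(flux * shape) / np.sum(shape**2)
    return scale * shape

popt, pcov = curve_fit(model, lam_eff, flux, p0=[8000.0])
print("T_BB = %.0f +/- %.0f K" % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}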
\begin{figure}
\centering
\includegraphics[scale=0.75]{fig_sed.eps}
\caption{The temporal evolution of the SED of SN 2012ec. Circles represent the fluxes at the central wavelengths of each filter. Solid lines represent blackbody continuum fits. Fluxes are corrected for distance and extinction.}
\label{SED}
\end{figure}
From the blackbody fit it was possible to evaluate the time evolution of the photospheric temperature of SN 2012ec.
The temperature drops rapidly in the first $30$ days from $9600 \pm 800$ K to $7000 \pm 500$ K, before decreasing slowly from $6500 \pm 500$ K to $5000 \pm 400$ K.
The values of the temperature estimated from the blackbody fits to the photometric data are in good agreement (within the uncertainties) with those derived from fits of the continuum in the observed spectra from $\mathrm{+30d}$. During the first $30$ days the spectroscopic temperature varies from $11000 \pm 900$ K to $8000 \pm 700$ K, decreasing to $6200 \pm 500$ K at $\sim 50$ days before reaching $5000 \pm 500$ K in the last epochs.
The slightly higher temperatures estimated from the spectra are due to the limited spectroscopic wavelength range ($4000-9000$ \AA) used for the continuum fits, compared to the wavelength range covered by the available photometric data.
We compared the estimated temperatures with those of SNe 2009bw \citep{Inserra2012} and 1999em \citep{Elmhamdi2003}.
SN 2012ec is cooler at earlier phases than SN 2009bw, which had an initial temperature of $\sim 12000$ K, and SN 1999em, which had a temperature of $\sim 14300$ K. At later phases, the temperatures of all three SNe converge to $\sim 5000$ K.
\section{Spectroscopic evolution}
\subsection{Data sample and reduction}\label{Specsec}
As a PESSTO follow-up target, SN 2012ec was scheduled for a dense spectroscopic monitoring campaign at the ESO NTT at La Silla, Chile. Ten epochs of optical spectroscopy were acquired with EFOSC2 and ten epochs of NIR spectroscopy were acquired with SOFI.
The optical dataset was supplemented with spectra from the following facilities: the $2.3$m telescope of the Siding Spring Observatory (SSO, New South Wales, Australia) equipped with the Wide Field Spectrograph WiFeS ($2$ epochs), the $2.5$m Nordic Optical Telescope (NOT, Canary Islands, Spain) equipped with the Andalucia Faint Object Spectrograph and Camera (ALFOSC) ($1$ epoch), the 1.82m Copernico Telescope (Asiago, Italy) equipped with AFOSC ($3$ epochs), the William Herschel Telescope (WHT, Canary Islands, Spain) equipped with the Intermediate dispersion Spectrograph and Imaging system (ISIS) ($1$ epoch), the 1.22m Galileo Telescope (Asiago, Italy) equipped with the Boller $\&$ Chivens spectrograph (B$\&$C) ($2$ epochs).
The spectroscopic observations cover $29$ epochs from day $8$ to day $161$.
Details of the spectroscopic observations and the characteristics of the instruments used are listed in Table \ref{logspec}.
\begin{table*}
\caption{Summary of instrumental sets-up used for the spectroscopic follow-up campaign.}\label{logspec}
\begin{footnotesize}
\begin{tabular}{lclllcll}
\hline
Telescope & Instrument & Grism & Range & Resolution & \# of epochs \\
& & & [ \AA\ ] & [ \AA\ ] & \\
\hline
NTT (3.58m) & EFOSC2 & Gr11, Gr16 & 3350-10000 & 12 & 10 \\
NTT (3.58m) & SOFI & GB & 9400-14000 & 20 & 7 \\
NTT (3.58m) & SOFI & GB, GR & 14000-25000 & 20 & 3 \\
CAO (1.82m) & AFOSC & Gr4 & 3500-8200 & 24 & 3 \\
Pennar (1.22m) & B$\&$C & Gr300 & 3400-7800 & 10 & 2 \\
NOT (2.56m) & ALFOSC & Gr4 & 3400-9000 & 14 & 1 \\
WHT (4.2m) & ISIS & R300B+R158R & 3500-10000 & 5 & 1 \\
ANU (2.3m) & WiFeS & B+R & 3300-9000 & 2 & 2 \\
\hline
\end{tabular}
\\[1.5ex]
NTT = New Technology Telescope with the optical camera ESO Faint Object Spectrograph and Camera (EFOSC2) and with the Near-Infrared Camera Son of ISAAC (SOFI); CAO = the Copernico telescope at the Asiago Observatory with the Asiago Faint Object Spectrograph and Camera (AFOSC); Pennar = Galileo telescope at the Asiago Observatory with the Boller $\&$ Chivens spectrograph; NOT = Nordic Optical Telescope with the Andalucia Faint Object Spectrograph and Camera (ALFOSC); WHT = William Herschel Telescope with the Intermediate dispersion Spectrograph and Imaging System (ISIS); ANU = Australian National University telescope with the Wide-Field Spectrograph (WiFeS).
\end{footnotesize}
\end{table*}
Spectra were pre-reduced (trimmed, overscan, bias and flat-field corrected) using the PESSTO pipeline (\citealt{Smartt2014}, submitted), based on standard IRAF tasks\footnote{Fast-reduced data are available on WISeREP \citep{Yaron2012} and fully reduced data can be accessed from the ESO Phase 3 archive; all details at www.pessto.org.}. The wavelength calibration was performed using comparison spectra of arc lamps acquired with the same instrumental configuration as the SN observations. The science observations were flux calibrated with respect to observations of spectrophotometric standard stars. Further corrections for atmospheric extinction were applied using tabulated extinction coefficients for each telescope site (in the pipeline archive).
The quality of the flux calibration was checked by comparison of synthetic $BV$ and $r$ photometry derived from the spectra, using the IRAF task \texttt{CALCPHOT}, with the observed photometry at comparable epochs. Calibrated spectra were finally dereddened for the total line-of-sight extinction and then corrected for the heliocentric velocity of the host galaxy (see Table \ref{galprop}).
\subsection{Data analysis}
\label{spec}
The time evolution of the optical spectrum of SN 2012ec, obtained from 8 to 161 days, is shown in Fig. \ref{spec_evol} and corresponding line identifications are presented in Fig. \ref{spec_id}.
\begin{figure*}
\includegraphics[scale=.9, angle=0]{optical_evol.eps}
\caption{The optical spectroscopic evolution of SN 2012ec during the photospheric phase, from $+8$ to $+161$ days.}
\label{spec_evol}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=.5, angle=0]{opt_id_final.eps}
\includegraphics[scale=.5, angle=0]{nir_id_final.eps}
\caption{Identifications of line features observed in optical (at three characteristic epochs; top panel) and NIR spectra (bottom panel) of SN 2012ec.}
\label{spec_id}
\end{figure*}
Fig. \ref{vel} shows the evolution of the velocities of H$_\alpha$, H$_\beta$, Fe II(5018 \AA) and Fe II(5169 \AA) for SN 2012ec. A list of line velocities is presented in Table \ref{tabvelocities}.
Spectra at early phases show a blue continuum, broad Balmer lines and He I at $5876$ \AA.
The lines show the typical P-Cygni profile, from which we estimate expansion velocities by measuring the position of the minimum of the absorption component.
At early times, the estimated velocities are $\mathrm{12200 \pm 150 \: km \: s^{-1}}$ for $H_{\alpha}$, $\mathrm{11000 \pm 150 \: km \: s^{-1}}$ for $H_{\beta}$ and $\mathrm{10500 \pm 150 \: km \: s^{-1}}$ for He~I.
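These velocities follow from the Doppler blueshift of the absorption minimum, $v \simeq c\,(\lambda_{0}-\lambda_{\rm min})/\lambda_{0}$; as a worked example, for $H_{\alpha}$ ($\lambda_{0}=6563$~\AA) a velocity of $12200 \: \mathrm{km \: s^{-1}}$ corresponds to an absorption minimum at
\[
\lambda_{\rm min} = \lambda_{0}\left(1-\frac{v}{c}\right) \simeq 6563 \times \left(1-\frac{12200}{3\times 10^{5}}\right) \simeq 6296~\mbox{\AA}
\]
in the host-galaxy rest frame.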
A blackbody fit to the continuum of these spectra, in the range $4000-9500$ \AA, yielded a temperature of $11900 \pm 900$ K.
Spectra from day 21 to day 44 show, in addition to the Balmer lines, lines of heavier elements such as Fe II (4629 \AA), Fe II (5018 \AA), Fe II (5169 \AA) and Sc II (6246 \AA). There is also a feature at $8200$ \AA\ due to the Ca II infrared triplet.
The $H_{\alpha}$ velocity decreases to $10000 \pm 120 \: km \: s^{-1}$, $H_{\beta}$ to $9000 \pm 120 \: km \: s^{-1}$, while the velocities for the Fe II(5018 \AA) and Fe II(5169 \AA) were measured to be $\sim 6000 \pm 100 \: km \: s^{-1}$. The temperatures derived from blackbody fits to the continuum show a decrease from $8000 \pm 500 \: K$ to $6000 \pm 300 \: K$.
Spectra from day 49 to day 138 show the appearance of lines of other heavy elements, such as Ba II (5981 \AA), Ba II (6142 \AA), Ti II (4100 \AA), and numerous blends of Fe II lines, while the Na~I~D absorption feature is no longer visible: at early times it is clearly seen as an absorption on the continuum, but at later times it becomes blended with complex broad features.
At these phases the velocities decrease for all elements: the velocity of $H_{\alpha}$ decreases to $5000 \pm 90 \: km \: s^{-1}$ and Fe II (5018 \AA) and Fe II (5169 \AA) decrease to $2000 \pm 120 \: km \: s^{-1}$.
The presence of the iron-group line blends prevents the detection of $H_{\beta}$. A fit to the continuum yields a temperature of $5000 \pm 400 \: K$.
At late times, the spectrum at 161 days shows the forbidden [O I] lines (6300, 6364 \AA) and the [Ca II] doublet (7291, 7324 \AA).
The ejecta velocities of SN 2012ec have been compared with those measured for other Type II-P SNe: SN 2012A, SN 2012aw, SN 2004et and SN 1999em (see Table \ref{vel_SN}).
At early phases, the $H_{\alpha}$ velocity is lower than that estimated for SN 2012aw ($\sim 14000 \: \mathrm{km \: s^{-1}}$, \citealt{DallOra2014}), but higher than that estimated for SN 2012A ($\sim 10200 \: \mathrm{km \: s^{-1}}$, \citealt{Tomasella2013}), and comparable with that of SN 1999em ($\sim 12000 \: \mathrm{km \: s^{-1}}$, \citealt{Elmhamdi2003}).
At later phases ($40$ days), the Fe II (5169 \AA) velocities are higher than those estimated for SN 2012A ($\sim 3500 \: km \: s^{-1} $), comparable with those of SN 2004et ($\sim 4000 \: km \: s^{-1} $) and SN 1999em ($\sim 4200 \: km \: s^{-1} $), but they are still lower than that of SN 2012aw ($\sim 5500 \: km \: s^{-1}$).
In summary, the ejecta velocities measured for SN 2012ec are similar to those measured for SNe 1999em and 2004et, but are consistently lower than those of SN 2012aw and higher than those of SN 2012A.
We also point out that the evolution of the Fe II(5169), $H_{\alpha}$ and $H_{\beta}$ velocities of SN 2012ec are in excellent agreement with the trends shown in Figure 16 of \citet{Faran2014}, based on a sample of 23 well-studied II-P SNe.
\begin{table*}
\caption{Expansion velocity of SN 2012ec at selected epochs, compared to other Type II-P SNe.\label{vel_SN}}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{lccccc}
\hline
& 2012aw & 2012ec & 1999em & 2004et & 2012A \\
\hline
$H_{\alpha}$ ($\sim 10$ d) & 14000 & 12200 & 12000 & & 10200 \\
Fe II ($\sim 40$ d) & 5500 & 4100 & 4200 & 4000 & 3500 \\
Fe II ($\sim 100$ d) & 3000 & 2400 & 2000 & 2000 & 2000 \\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[scale=0.45]{new_vel.eps}
\caption{Ejecta velocity evolution, estimated from the H$_\alpha$, H$_\beta$, Fe II(5018 \AA) and Fe II(5169 \AA) lines.}
\label{vel}
\end{figure}
\begin{table*}
\caption{Measured expansion velocities (from the minima of the P-Cygni absorptions) for SN 2012ec.
Estimated uncertainties are given in parentheses.}\label{tabvelocities}
\begin{scriptsize}
\begin{tabular}{rrrrrrrrr}
\hline
Date & MJD & $Epoch^{a}$ & $H_{\alpha}$ & $H_{\beta}$ & $Fe II(5018)$ & $Fe II(5169)$ & $ScII(5533)$ & $Ca II(8520)$ \\
& & (d) & $km \: s^{-1}$ & $km \: s^{-1}$ & $km \: s^{-1}$ & $km \: s^{-1}$ & $km \: s^{-1}$ & $km \: s^{-1}$ \\
\hline
20120812 & 56152 & 8 & 12200 (150) & 10600 (150) & & & & \\
20120813 & 56153 & 9 & 11800 (130) & 10400 (150) & & & & \\
20120817 & 56157 & 13 & 11000 (160) & 10300 (130) & & & & \\
20120818 & 56158 & 14 & 10600 (120) & 9100 (120) & & & & \\
20120820 & 56160 & 16 & 10100 (120) & 8800 (110) & & & & \\
20120826 & 56166 & 22 & 9400 (100) & 7600 (120) & 5800 (100) & 6200 (100) & & \\
20120907 & 56178 & 34 & 8400 (110) & 5900 (110) & 4700 (100) & 4700 (120) & 5000 (120) & \\
20120909 & 56180 & 36 & 8300 (110) & 5500 (130) & 4500 (110) & 4600 (100) & 4600 (130) & \\
20120916 & 56187 & 43 & 6800 (120) & 4900 (110) & 4100 (110) & 4100 (130) & & \\
20120922 & 56193 & 49 & 6600 (110) & & 3700 (100) & 3700 (100) & 3800 (140) & 5600 (120) \\
20121008 & 56209 & 56 & 5900 (110) & & 3000 (100) & 3000 (140) & 3100 (100) & 4900 (140) \\
20121017 & 56219 & 75 & 5800 (170) & & 2900 (110) & 2900 (150) & & \\
20121112 & 56244 & 100 & 5230 (120) & & 2300 (120) & 2400 (100) & 2100 (130) & 4100 (100) \\
20121122 & 56252 & 108 & 4800 (100) & & & 2200 (100) & 2000 (150) & 3700 (150) \\
20121203 & 56265 & 121 & 4500 (100) & & 2000 (110) & & & 3600 (130) \\
20121212 & 56270 & 126 & & & 1600 (100) & & & \\
20121220 & 56282 & 138 & 4400 (100) & & & & & 3500 (140) \\
\hline
\end{tabular}
\\[1.4ex]
$^{a}$ Epoch from the explosion.
\end{scriptsize}
\end{table*}
A close-up showing the time evolution of the $H_{\alpha}$, $H_{\beta}$ and Ca II line profiles for SN 2012ec is shown in Fig. \ref{lines}.
\begin{figure}
\centering
\includegraphics[scale=.4, angle=0]{line_evol.eps}
\caption{Time evolution of $H_{\alpha}$, $H_{\beta}$ and Ca II NIR triplet for SN 2012ec.}
\label{lines}
\end{figure}
The NIR spectra cover the period from day $21$ to day $161$ (Fig. \ref{spec_nir}). The H I Paschen lines are clearly visible at all epochs. Starting from day $68$, we also identify He I and Ca I lines and Br$_{\gamma}$.
The elements identified in the NIR spectra (Fig. \ref{spec_id}) are typical of Type II-P SNe, in particular the spectra at $71$ and $79$ days are similar to the NIR spectrum of SN 2012A at 72 days \citep{Tomasella2013}.
\begin{figure*}
\centering
\includegraphics[scale=.75, angle=0]{nir_evol_final.eps}
\caption{NIR spectroscopic evolution of SN 2012ec. Individual spectra have been shifted in flux for clarity. Numbers on the right indicate the epochs from explosion.}
\label{spec_nir}
\end{figure*}
\section{Hydrodynamic modeling}\label{modelling}
To constrain the main physical properties of the progenitor and the energetics of the explosion, we performed hydrodynamical modelling of SN 2012ec. Among the most important parameters we need to constrain are the ejected mass, the radius of the progenitor, the explosion energy and the ejected $\mathrm{^{56}Ni}$ mass (\citealt{Zampieri2003}; \citealt{Kasen2009}). These were found by comparing the observed bolometric luminosity, the evolution of line velocities and continuum temperature at the photosphere with the corresponding simulated quantities (\citealt{Zampieri2003}; \citealt{Pumo2010}). The comparison procedure consists of performing a simultaneous $\chi^{2}$ fit of all the relevant observables against those predicted by the model calculations. This approach was successfully adopted for other CC-SNe (e.g. SN 2007od, \citealt{Inserra2011}; SN 2009bw, \citealt{Inserra2012}; SN 2009E, \citealt{Pastorello2012}; SN 2012A, \citealt{Tomasella2013}; and SN 2012aw, \citealt{DallOra2014}).
The hydrodynamical modelling of the explosion was performed with two different codes: a semi-analytic code \citep{Zampieri2003}, which solves the energy balance equation for a constant-density envelope that expands homologously; and a radiation-hydrodynamics code \citep{Pumo2011}, which simulates the full radiative-hydrodynamical evolution of the ejected material. The latter code solves the hydrodynamic equations of a self-gravitating, relativistic fluid interacting with radiation, and incorporates an accurate treatment of radiative transfer and of the evolution of the ejected material, considering both the gravitational effect of the compact remnant and the heating effects related to the decays of the radioactive isotopes synthesized during the CC SN explosion. The first code is used to explore the most likely region of parameter space and to provide a robust first estimate of the best-fitting model. A more detailed and time-consuming search is then performed with the radiation-hydrodynamics code. This modelling is appropriate only if the emission from the CC SN is dominated by freely expanding ejecta. Clearly, interaction with the circumstellar medium (CSM) can affect the early evolution of the light curve in a way not presently predicted by the models.
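The simultaneous fit can be sketched schematically as follows; the model interface and the data arrays are placeholders, while the real comparison is performed against the full grid of semi-analytic (and then radiation-hydrodynamics) models.
\begin{verbatim}
# Sketch of the simultaneous chi^2 fit over the three observables
# (bolometric luminosity, photospheric velocity, temperature).
# The model dictionary and the data arrays are placeholders.
import numpy as np

def chi2(obs, err, sim):
    return np.sum(((obs - sim) / err) ** 2)

def total_chi2(model, t_L, L, dL, t_v, v, dv, t_T, T, dT):
    # 'model' holds simulated quantities on a time grid; each one
    # is interpolated to the epochs of the corresponding data set.
    L_sim = np.interp(t_L, model["t"], model["L"])
    v_sim = np.interp(t_v, model["t"], model["v"])
    T_sim = np.interp(t_T, model["t"], model["T"])
    return (chi2(L, dL, L_sim) + chi2(v, dv, v_sim)
            + chi2(T, dT, T_sim))

# best = min(model_grid, key=lambda m: total_chi2(m, *data))
\end{verbatim}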
An extended grid of semi-analytic models was computed, covering a wide range in mass. The $\chi^{2}$ distribution of the models as a function of the ejected mass, shown in Fig. \ref{mod-chi}, displays two comparable minima, one at $\sim 9.1 \: \mbox{M$_{\odot}$}$ and the other at $\sim 12.6 \: \mbox{M$_{\odot}$}$.
The best-fit model corresponding to the first minimum ($9.1 \pm 0.8 \: \mbox{M$_{\odot}$}$) has an initial radius of $(2.3 \pm 0.7) \times 10^{13}$ cm ($330 \pm 100 \: R_{\odot}$), a total explosion energy of $0.7 \pm 0.2$ foe and an ejected $^{56}$Ni mass of $\sim 0.035 \: \mbox{M$_{\odot}$}$.
The model corresponding to the second minimum has an initial radius of $(1.6 \pm 0.5) \times 10^{13}$ cm ($230 \pm 70 \: R_{\odot}$), a total explosion energy of $1.2 \pm 0.4$ foe, and an ejected $^{56}$Ni mass of $\sim 0.035 \: \mbox{M$_{\odot}$}$.
In light of the results of the progenitor detection in pre-explosion observations, we only consider the ``high-mass'' minimum further.
The best fit model corresponding to the second minimum is shown in Fig. \ref{mod-fit} and appears to be in good agreement with all the observables.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth ]{chi.eps}
\caption{$\chi^{2}$ distribution of the fit of the semi-analytical model to the observed quantities, as a function of the estimated ejected mass.}
\label{mod-chi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth ]{mod1882.eps}
\caption{Time evolution of the main observables of SN 2012ec (filled dots), compared to the ``high-mass'' best fit model (solid line). The top panel shows the fit of the bolometric light curve; the middle panel shows the fit of the Fe II velocity; and the bottom panel shows the fit of the continuum temperature.}
\label{mod-fit}
\end{figure}
\section{Homogeneous comparison with the two well-studied II-P SNe 2012A and 2012aw}
In this section, we present a detailed comparison of SN 2012ec with two well-studied Type II-P SNe: 2012A and 2012aw. In all three cases, a progenitor was detected in pre-explosion images and sufficient photometric and spectroscopic observations were available to permit a homogeneous analysis of the properties of the SNe using the same hydrodynamical code.
SN 2012ec was discovered 9 days after the explosion, while the other two SNe were discovered much sooner after explosion (see Table \ref{comp_SNe}). SN 2012aw was discovered in M95 at a distance modulus $\mu=29.96 \pm 0.04$ mag and with a total reddening of $E(B-V)=0.086$ mag, while SN 2012A was discovered in NGC 3239 at $\mu=29.96 \pm 0.15$ mag and $E(B-V)=0.037$ mag.
The estimates of the initial masses of the progenitors, through direct detection of the precursors, were: $M_{12aw}= 14-26 \; \mbox{M$_{\odot}$}$ \citep{Fraser2012}, $M_{12ec}$ in the range $14-22 \; \mbox{M$_{\odot}$}$ \citep{Maund2013} and $M_{12A}=8-15 \; \mbox{M$_{\odot}$}$ \citep{Tomasella2013}. In a separate analysis of the pre-explosion observations of SN~2012aw, \citet{Van2012} reported an initial mass of $15-20 \: \mbox{M$_{\odot}$}$. A major uncertainty in estimating the progenitor mass is the degeneracy between temperature and reddening. \citet{Kochanek2012} showed that a different treatment of the extinction results in a luminosity of $\log(L/L_{\odot})= 4.8-5.0$, corresponding to a progenitor main sequence mass of $13-16 \: \mbox{M$_{\odot}$}$ \citep{Jerkstrand2014}, which is in agreement with the nebular spectral modelling and the amount of oxygen produced by SN 2012aw.
Fig. \ref{R} shows the photometric evolution of the absolute magnitudes in the $R$ and $V$ bands of SN 2012ec, SN 2012aw and SN 2012A.
We note that SN 2012ec is intermediate between the more luminous SN 2012aw and the fainter SN 2012A. The plateau lasts longer in SN 2012aw, while in SN 2012A it is shorter and the post-plateau decline is steeper. Again, SN 2012ec shows an intermediate behaviour, with a rather short plateau but a slower post-plateau drop.
The absolute magnitudes in the $R$ band for these SNe on the plateau ($\sim 60$ days) were $M_{R}(12aw)= -17.1$ mag, $M_{R}(12ec)= -16.7$ mag and $M_{R}(12A)= -16.2$ mag.
\begin{figure}
\centering
\includegraphics[scale=0.4]{LC_R_final.eps}
\includegraphics[scale=0.4]{LC_V_final.eps}
\caption{Comparison of the light curves in the $R$ (top panel) and $V$ (bottom panel) bands of SN 2012ec, with SN 2012aw and SN 2012A.}
\label{R}
\end{figure}
A comparison of the colour evolution of SN 2012ec with those of SN 2012aw and SN 2012A is shown in Fig. \ref{color}. The colours of each SN have been corrected for reddening to allow a proper comparison.
The colour evolution of SN 2012ec has already been discussed in Sect. \ref{photoanalysis}.
From Fig. \ref{color}, we can see that the colour evolution of SN 2012ec is similar to that of the other two SNe.
\begin{figure}
\centering
\includegraphics[scale=0.4]{color_comp.eps}
\caption{Comparison of the colour evolution of SN 2012ec, in the $B-V$ (top panel), $V-R$ (middle panel), and $V-K$ (bottom panel), with SN 2012aw and SN 2012A.}
\label{color}
\end{figure}
Fig. \ref{bolom_comp} shows a comparison of the bolometric light curves of SNe 2012ec, 2012A and 2012aw, where SN 2012ec is of intermediate luminosity between the other two SNe. In particular, during the plateau phase, SN 2012ec is more luminous than SN 2012A and exhibits a longer plateau. Conversely, SN 2012aw is clearly of higher luminosity than SN 2012ec throughout the entirety of the photospheric phase and has a longer plateau of $\sim100$ days \citep{DallOra2014}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{bolom_comp.eps}
\caption{Pseudo-bolometric light curve of SN 2012ec, compared to SN 2012aw and SN 2012A.}
\label{bolom_comp}
\end{figure}
From the comparison of the $^{56}Ni$ masses estimated for the three SNe, we may note a sequence in the values: $M(^{56}Ni)_{12aw}= 0.056 \pm 0.013 \: \mbox{M$_{\odot}$}$, $M(^{56}Ni)_{12ec}= 0.040 \pm 0.015 \: \mbox{M$_{\odot}$}$ and $M(^{56}Ni)_{12A}= 0.011 \pm 0.004 \: \mbox{M$_{\odot}$}$.
In Fig. \ref{spec_comp} we show a comparison of the spectra of SN 2012ec with those of SN 2012aw and SN 2012A at three different epochs, highlighting the spectroscopic similarities between the three SNe at all epochs.
\begin{figure*}
\centering
\includegraphics[scale=.7, angle=0]{spec_comp.eps}
\caption{Comparison of the spectra of SN 2012ec, SN 2012aw and SN 2012A at three different epochs, i.e. at early times, during the plateau phase and at the end of the plateau.}
\label{spec_comp}
\end{figure*}
We also compared the ejecta velocities measured from H$_\alpha$ and Fe II (5169 \AA) for SN 2012ec with the velocities measured for other type II-P SNe (see Fig. \ref{vel_H}). SN 2012aw has an initial H$_\alpha$ velocity of $\sim 14000 \: km \: s^{-1}$, higher than measured for SN 2012ec ($\sim 12200 \: km \: s^{-1}$) and for SN 2012A ($\sim 10200 \: km \: s^{-1}$). After 100 days, the H$_\alpha$ velocity decreases to $\sim 6000 \: km \: s^{-1}$ for SN 2012aw, which is still higher than measured for SN 2012ec ($\sim 5000 \: km \: s^{-1}$) and for SN 2012A ($\sim 5000 \: km \: s^{-1}$). The initial Fe II (5169 \AA) velocity of SN 2012aw is $\sim 6500 \: km \: s^{-1}$, again higher than those of SN 2012ec ($\sim 6000 \: km \: s^{-1}$) and of SN 2012A ($\sim 5200 \: km \: s^{-1}$). After $\sim 100$ days it drops to $\sim 3000 \: km \: s^{-1}$ for SN 2012aw, to $\sim 2500 \: km \: s^{-1}$ for SN 2012ec and to $\sim 2000 \: km \: s^{-1}$ for SN 2012A. In terms of ejecta velocities, SN 2012ec is intermediate between SN 2012aw and SN 2012A.
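The velocities quoted above are measured from the blueshifted minima of the P-Cygni absorption troughs. A minimal sketch of this measurement, assuming the trough has already been isolated in a spectrum corrected to the host rest frame, is:
\begin{verbatim}
import numpy as np

C_KMS = 299792.458  # speed of light (km/s)

def line_velocity(wave, flux, lam_rest):
    # Expansion velocity from the wavelength of the absorption minimum:
    # v = c * (lam_rest - lam_min) / lam_rest (non-relativistic Doppler).
    lam_min = wave[np.argmin(flux)]
    return C_KMS * (lam_rest - lam_min) / lam_rest

# e.g. line_velocity(wave, flux, 5169.0) for Fe II and
#      line_velocity(wave, flux, 6562.8) for H-alpha.
\end{verbatim}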
\begin{figure}
\centering
\includegraphics[scale=0.4]{Ha_final.eps}
\includegraphics[scale=0.4]{Fe_final.eps}
\caption{Comparison of the ejecta velocities of SN 2012ec, SN 2012A and SN 2012aw, measured from the H$_\alpha$ (top panel) and Fe II(5169 \AA) lines (bottom panel).}
\label{vel_H}
\end{figure}
A comparison of the temperatures estimated via blackbody fitting of the SED evolution for the three SNe is presented in Fig. \ref{temp}, from which it is clear that the temperature evolutions of SN 2012ec and SN 2012A are similar, and significantly hotter than that of SN 2012aw (from $\sim 20-30$ days post-explosion).
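A minimal sketch of such a blackbody fit, assuming dereddened monochromatic fluxes at the filter effective wavelengths (in cgs units), is given below; the scale parameter absorbs the angular size of the photosphere.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # Planck, c, Boltzmann (cgs)

def planck_lambda(lam_cm, temp):
    # Planck function B_lambda(T) in cgs units.
    return (2.0 * H * C**2 / lam_cm**5) / np.expm1(H * C / (lam_cm * KB * temp))

def fit_temperature(lam_ang, flux, flux_err):
    # Fit scale * B_lambda(T) to the observed SED; returns (T, sigma_T).
    lam_cm = np.asarray(lam_ang, dtype=float) * 1e-8
    flux = np.asarray(flux, dtype=float)
    model = lambda lam, temp, scale: scale * planck_lambda(lam, temp)
    p0 = (1.0e4, flux[0] / planck_lambda(lam_cm[0], 1.0e4))
    popt, pcov = curve_fit(model, lam_cm, flux, p0=p0, sigma=flux_err)
    return popt[0], float(np.sqrt(pcov[0, 0]))
\end{verbatim}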
\begin{figure}
\centering
\includegraphics[scale=0.4]{temp_final.eps}
\caption{Comparison of the time evolution of the photospheric temperatures of SNe 2012ec, 2012A and 2012aw.}
\label{temp}
\end{figure}
The ejected mass calculated for SN 2012ec is $12.6 \: \mbox{M$_{\odot}$}$, which is comparable to the value estimated for SN 2012A ($12.5 \: \mbox{M$_{\odot}$}$, \citealt{Tomasella2013}), but lower than the value calculated for SN 2012aw ($20 \: \mbox{M$_{\odot}$}$, \citealt{DallOra2014}). Similarly, the initial radius of SN 2012ec is comparable to that of SN 2012A ($\sim 260 \: R_{\odot}$), but smaller than that of SN 2012aw ($\sim 400 \: R_{\odot}$). In contrast, the estimated explosion energy of SN 2012ec, $1.2 \: foe$, is higher than the value estimated for SN 2012A ($0.48 \: foe$) but similar to the energy of SN 2012aw ($1.5 \: foe$).
In summary, SN 2012ec is more luminous than SN 2012A, synthesised more $\mathrm{^{56}Ni}$ and has higher expansion velocities. The ejecta masses of the two SNe are comparable, but the pre-SN radii and the progenitor masses are slightly different. This indicates that the progenitor of SN 2012ec was likely more massive, but more compact, than the progenitor of SN 2012A. SN 2012aw had a larger initial radius, a more massive envelope and a more energetic explosion, which produced more $^{56}$Ni and higher ejecta velocities than SN 2012ec. It is interesting to compare these estimates with the analysis of \citet{Poznanski2013}, who suggested a simple scaling relation between the energy deposited in the exploding core and the mass of the progenitor which, in turn, translates into a linear correlation between mass and ejecta velocity. In particular, the positions of the ejected masses from the hydrodynamical code for SN 2012A and SN 2012aw in Figure 1 of \citet{Poznanski2013} are consistent with a steeper law $M \propto v^{1.5}$, while the ejected mass of SN 2012ec is much lower than expected from both the $M \propto v$ and $M \propto v^{1.5}$ relations. Since the hydrodynamical code estimates the ejecta masses, and not the progenitor masses, the discrepancy for SN 2012ec could be explained by a very efficient mass-loss mechanism. Unfortunately, the same argument cannot be invoked for SN 2012A and SN 2012aw. We also note that the \citet{Poznanski2013} analysis was based on progenitor masses estimated from stellar evolution models, which rely on different input physics than the hydrodynamical codes.
The main characteristics of the comparisons between the three SNe are summarised in Table \ref{comp_SNe}.
\begin{table}
\caption{Comparison of the main parameters of SNe 2012ec, 2012aw and 2012A.}\label{comp_SNe}
\begin{footnotesize}
\begin{tabular}{lccc}
\hline
& SN 2012aw & SN 2012ec & SN 2012A \\
\hline
$\mu$ (mag) & 29.96 & 31.19 & 29.96 \\
E(B-V) (mag) & 0.086 & 0.124 & 0.037 \\
$MJD_{expl}$ (d) & 56002 & 56151 & 55933 \\
$MJD_{disc}$ (d) & 56003 & 56143 & 55934 \\
$v_{Fe II}$ ($km \: s^{-1}$)$^{a}$ & $\sim 4200$ & $\sim 3700$ & $\sim 2800$ \\
$M_{R}$ (mag) & -17.1 & -16.7 & -16.2 \\
$L (10^{42} erg \: s^{-1})^{b}$ & 1.1 & 0.9 & 0.5 \\
Plateau duration (d) & 100 & 90 & 80 \\
$^{56}Ni$ ($\mbox{M$_{\odot}$}$) & 0.056 & 0.040 & 0.011 \\
E (foe)$^{c}$ & $ 1.5$ & 1.2 & 0.48 \\
R ($10^{13}$ cm) & 3 & 1.6 & 1.8 \\
$M_{eject}$ ($\mbox{M$_{\odot}$}$) & 20 & 12.6 & 12.5 \\
$M_{prog}$ ($\mbox{M$_{\odot}$}$)$^{d}$ & 13-16 & 14-22 & 8-15 \\
\hline
\end{tabular}
\\[1.5ex]
$^{a}$ at $\sim 50$ days \\
$^{b}$ at the plateau \\
$^{c}$ 1 foe= $10^{51} \: erg$ \\
$^{d}$ Mass of the progenitor as estimated from the pre-explosion images \\
\end{footnotesize}
\end{table}
\section{Type II-P SNe as Standard Candles}
The extragalactic distance scale is intimately connected with Type Ia SNe, up to cosmological distances, and it was through Type Ia SNe that the acceleration of the Universe was discovered \citep{Perlmutter1999,Riess1998,Schmidt1998}. Current facilities allow us to detect and study Type Ia SNe up to $z=1.7$ \citep{Rubin2013}, while the next generation of extremely large telescopes will allow us to study them up to $z\sim4$ \citep{Hook2013}. At high $z$, however, the number of Type Ia SNe may significantly decrease, due to the long lifetimes of their progenitors. Alternatively, the ubiquitous Type II (core-collapse) SNe could be an appealing choice to probe cosmological distances further. While Type Ia SNe are the product of an old to intermediate-age stellar population, Type II SNe come essentially from a young stellar population, and thus constitute a homogeneous sample with respect to the age of the stellar population. It should also be noted, however, that Type II SNe are significantly fainter than Type Ia SNe and that they explode in younger and dustier regions, making their discovery and study more difficult.
Although the characteristics of the light curves of the Type II SNe (peak luminosity, decline rate, presence and duration of the plateau) span a broad range of values, their use as distance indicators was already recognized by \citet{Kirshner1974}, who applied the Baade-Wesselink analysis to SN 1969L and SN 1970G through the Expanding Photosphere Method (EPM), and by \citet{Mitchell2002}, who modified the EPM method by introducing spectral synthesis analysis (Spectral-fitting Expanding Atmosphere Method, SEAM). Subsequently, \citet{Dessart2005} further exploited the EPM method by applying non-LTE atmospheric models. Both EPM and SEAM have been successfully applied to SNe at cosmological distances (e.g. \citealt{Baron2004}, \citealt{Schmidt1994}), but require well sampled light curves and high quality spectra.
More specifically, for type II-P SNe, \citet{Hamuy2002} found a tight empirical correlation between the bolometric luminosity and the expansion velocity of the ejecta during the plateau phase. The luminosity and the expansion velocity (as measured from the Fe~II (5169\AA) line) are estimated at approximately the ``half plateau'' phase, conventionally set at $50$ days. This method, dubbed the ``Standardized Candle Method'' (SCM), was subsequently investigated by \citet{Nugent2006}, \citet{Poznanski2009}, \citet{DAndrea2010} and \citet{Olivares2010}, and has the advantage that it requires less input data than both EPM and SEAM. The empirical correlation underlying the SCM was theoretically reproduced by \citet{Kasen2009}, who pointed out that the correlation relies on the simple behaviour of the expanding hydrogen envelope. They also warned, however, that the SCM may be sensitive to the progenitor metallicity and mass, which in turn could lead to systematic effects.
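Schematically, the $I$-band calibrations of the SCM compared below share the structure sketched here; the slope $\alpha$ and zero point $M_{0}$ are placeholders to be replaced by the coefficients of the specific calibration adopted, and the exact functional forms of the published equations differ in detail.
\begin{verbatim}
import numpy as np

def scm_distance_modulus(i_mag, a_i, v50_kms, alpha, m0):
    # Generic SCM-style distance modulus:
    #   mu = I - A_I + alpha * log10(v50 / 5000) - M0,
    # with I and A_I the I-band magnitude and extinction at ~50 days,
    # v50 the Fe II (5169 A) velocity at the same epoch, and
    # (alpha, M0) the calibration slope and zero point (placeholders).
    return i_mag - a_i + alpha * np.log10(v50_kms / 5000.0) - m0

# Illustrative call (all numbers hypothetical):
# mu = scm_distance_modulus(14.5, 0.2, 3700.0, alpha=5.8, m0=-17.0)
\end{verbatim}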
Almost all the quoted calibrations adopt $50$ days post-explosion as a reference phase, which roughly corresponds to the ``half plateau''. Other choices of the reference phase during the plateau can be made, but with the \textit{caveat} that the velocity measured from the Fe~II (5169\AA) line decreases moderately over the duration of the plateau and that the method requires knowledge of the epoch of the explosion. Only \citet{Olivares2010} adopted a ``custom'' reference phase for each SN, owing to the fact that the length of the plateau varies from SN to SN. For this reason, they suggested adopting a reference epoch $30$ days prior to the epoch at which the light curve has declined to a brightness midway between the plateau brightness and the brightness at which it joins the radioactive tail.
In this paper we take advantage of the homogeneous analysis of the three type II-P SNe (SNe 2012ec, 2012aw and 2012A) to perform a detailed comparison of the available calibrations of the SCM and assess the robustness of the method. More specifically, for the comparison we adopt the $I$-band calibrations of the SCM, namely: eq. $2$ of \citet{Hamuy2002}; eq. $1$ of \citet{Nugent2006}; eq. $2$ of \citet{Poznanski2009}; eq. $2$ of \citet{DAndrea2010}\footnote{In passing, we note that the \citet{Poznanski2010} recalibration of this work led to a Hubble diagram with a scatter of only $11\%$.}; and eq. $16$ of \citet{Olivares2010}. Our estimated distances to the three SNe are compared with a homogeneous set of distances, based on primary (Cepheids, Tip of the Red Giant Branch, or TRGB) and secondary distance indicators (Tully-Fisher, Surface Brightness Fluctuations or SBF), available in the Extragalactic Distance Database \citep{Tully2009}. In Table \ref{distances} we report, for each SN, the distance estimated with the above calibrations. Moreover, we show the difference between the SCM distance and the estimates from the primary (when available) and secondary distance indicators. Finally, for each calibration, we report the mean difference and dispersion of the SCM distances with respect to the estimates based on the primary and secondary distance indicators.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth ]{hamuyplot.eps}
\caption{Our studied sample of type II-P SNe: SN 2012ec (black), SN 2012aw (red) and SN 2012A (blue) in the original \citet{Hamuy2002} plane.}
\label{distance}
\end{figure}
\begin{table*}
\caption{Comparison of the SCM distances and the estimates from the primary and secondary distance indicators.\label{distances}}
\begin{tabular}{llcccrrr}
\hline
Calibration &SN & SCM & Primary & Secondary & SCM $-$ Primary& SCM - Secondary & Mean residual\\
& &(mag)& (mag)& (mag)& (mag)&(mag)& (mag) \\
\hline
& SN 2012ec & $31.22 \pm 0.3$ & & $31.19$ & & $0.03$ & \\
HP2002 & SN 2012aw & $29.96 \pm 0.3$ & $29.96$ & $30.00$ & $0.00$ & $-0.04$ & $0.01 \pm 0.04$\\
& SN 2012A & $30.05 \pm 0.3$ & & $30.00$ & & $0.05$ & \\
\hline
& SN 2012ec & $31.29 \pm 0.3$ & & $31.19$ & & $ 0.10$ & \\
Nugent06 & SN 2012aw & $30.03 \pm 0.3$ & $29.96$ & $30.00$ & $0.07$ & $ 0.03$ & $-0.03 \pm 0.14$\\
& SN 2012A & $29.77 \pm 0.3$ & & $30.00$ & & $-0.23$ & \\
\hline
& SN 2012ec & $31.15 \pm 0.2$ & & $31.19$ & & $-0.04$ & \\
Poznanski09 & SN 2012aw & $29.70 \pm 0.2$ & $29.96$ & $30.00$ & $-0.26$ & $-0.30$ & $-0.1 \pm 0.14$\\
& SN 2012A & $30.04 \pm 0.2$ & & $30.00$ & & $0.04$ & \\
\hline
& SN 2012ec & $31.11 \pm 0.2$ & & $31.19$ & & $-0.08$ & \\
Olivares10 & SN 2012aw & $29.58 \pm 0.2$ & $29.96$ & $30.00$ & $-0.38$ & $-0.42$ & $-0.01 \pm 0.37$\\
& SN 2012A & $30.47 \pm 0.2$ & & $30.00$ & & $0.47$ & \\
\hline
& SN 2012ec & $31.33 \pm 0.2$ & & $31.19$ & & $0.14 $ & \\
D'Andrea10 & SN 2012aw & $29.86 \pm 0.2$ & $29.96$ & $30.00$ & $-0.10$ & $-0.14$ & $0.09 \pm 0.17$\\
& SN 2012A & $30.27 \pm 0.2$ & & $30.00$ & & $0.27$ & \\
\hline
\end{tabular}
\\[1.5ex]
Quoted errors for the SCM distances are the standard deviations of the individual calibrations. The value of the distance from the primary indicators of SN 2012aw is the average from the Cepheids \citep{Freedman2001} and the TRGB \citep{Rizzi2007} estimates. Finally, the ``mean residual'' column shows the average of the SCM $-$ Secondary values, where the error is the standard deviation.
\end{table*}
Table \ref{distances} may suggest that the \citet{Hamuy2002} calibration gives more homogeneous results than the other calibrations. However, it must be noted that our test is based on only three SNe and that all the calibrations are consistent within the errors. We note that the \citet{Hamuy2002} calibration was derived assuming a value of $H_0 = 65$ km s$^{-1}$ Mpc$^{-1}$, significantly lower than the estimate of $H_0 = 73.8 \pm 2.4$ km s$^{-1}$ Mpc$^{-1}$ of \citet{Riess2011}, but in agreement with $H_0 = 63.7 \pm 2.3$ km s$^{-1}$ Mpc$^{-1}$ given by \citet{Tammann2013}. The large scatter in the \citet{Olivares2010} calibration could be due to the difficulty in estimating the reference phase when a well sampled light curve covering the end of the plateau is not available. All these calibrations rely on moderately distant SNe, embedded in the Hubble flow or for which SBF distances are available. However, these distances could still be affected by systematics that are not completely understood. For these reasons a new calibration of the SCM, based on nearby type II-P SNe for which primary (Cepheids and TRGB) and homogeneous secondary indicator distances are available, would be of great interest. Moreover, for these SNe the metallicity effects suggested by \citet{Kasen2009} could also be investigated. The average of the five individual estimates of the distances for SN 2012ec gives a distance modulus of $31.22 \pm 0.08$ mag, which we adopt as our final SCM-based distance. This value is in excellent agreement with the Tully-Fisher distance of $31.19 \pm 0.13$ mag adopted for our analysis.
\section{Conclusions}
We have presented the results of the Large Program ``Supernova Variety and Nucleosynthesis Yields'' and PESSTO photometric and spectroscopic monitoring campaigns of SN 2012ec. This is one of the most intensively observed and well investigated Type II-P SNe to date. The optical and spectroscopic monitoring during the photospheric phase lasted for $\sim 161$ days and allowed us to determine the evolution of the pseudo-bolometric luminosity, the expansion velocity, the photospheric temperature and the $^{56}$Ni mass.
These parameters, analysed in conjunction with hydrodynamical models, allowed us to estimate explosion parameters such as the explosion energy, the envelope mass and the pre-SN radius.
Correcting the data for reddening ($E(B-V)=0.14^{+0.15}_{-0.12}$ mag) and distance modulus ($\mu = 31.19 \pm 0.13$ mag), we estimated the plateau luminosity to be $L= 0.9 \times 10^{42} \: erg \: s^{-1}$ and evaluated the $^{56}$Ni mass to be $0.040 \pm 0.015 \: \mbox{M$_{\odot}$}$.
The spectra of SN 2012ec were dominated by Balmer lines at the early epochs; after 20 days the iron-group elements started to appear and became more prominent with time. The NIR spectra were dominated by Paschen lines and, starting from 68 days, it is possible to identify He I, Ca I and $Br_{\gamma}$. A blackbody fit to the continuum gives temperatures of $11900 \pm 900$ K at the early epochs, decreasing to $6200 \pm 500$ K at 50 days and $5000 \pm 500$ K at the last epochs.
From the spectroscopic dataset we estimate an initial velocity of $12200 \: km \: s^{-1}$ for the $H_{\alpha}$ line and $11000 \: km \: s^{-1}$ for $H_{\beta}$. The $H_{\alpha}$ velocity decreases to $5000 \; km \: s^{-1}$ by 50 days. At $\sim 25$ days the iron-group elements appear, for which we measure a velocity of $6000 \: km \: s^{-1}$ (for Fe II). The behaviour of SN 2012ec is similar to that seen in other II-P SNe, such as SN 1999em \citep{Elmhamdi2003} and SN 2004et \citep{Maguire2010}.\\
We estimated the physical parameters of SN 2012ec through the hydrodynamical modelling described in Sect. \ref{modelling}. The fit suggests an ejected mass of $M_{env}= 12.6 \: \mbox{M$_{\odot}$}$, a pre-SN radius of $R= 1.6 \times 10^{13} \: cm$, an explosion energy of $E=1.2 \: foe$ and an ejected $M(^{56}Ni)= 0.035 \: \mbox{M$_{\odot}$}$. The progenitor mass is in agreement with the independent estimate of \citet{Maund2013} ($M= 14-22 \: \mbox{M$_{\odot}$}$), obtained by analysing pre-explosion images, and with that of \citet{Jerkstrand14b}, submitted ($M= 13-15 \: \mbox{M$_{\odot}$}$), obtained from modelling of the spectra in the nebular phase.
Previously reported ejecta masses estimated from hydrodynamical modelling are generally too large compared to the initial masses estimated from direct detections of the progenitors in pre-explosion images \citep{Utrobin2008,Maguire2010}. In order to investigate this discrepancy, we performed a homogeneous comparison between three type II-P SNe, estimating the mass of the progenitor with the two different approaches.
The same methods and codes were used for all three objects in both cases, to facilitate a reliable comparison.
We analyze the bright SN 2012aw \citep{DallOra2014}, the low-luminosity SN 2012A \citep{Tomasella2013} and SN 2012ec.
Several observational and derived parameters have been compared for these three objects.
SN 2012aw ($M_{R}=-17.1$ mag, at plateau) is brighter than SN 2012ec ($M_{R}=-16.7$ mag), while SN 2012A is fainter ($M_{R}=-16.2$ mag). A comparison between the bolometric light curves shows that SN 2012ec has an intermediate luminosity between the high luminosity SN 2012aw and the fainter SN 2012A. The nickel masses synthesized by these SNe are $M(^{56}Ni)_{12aw}= 0.056 \pm 0.013 \: \mbox{M$_{\odot}$}$, $M(^{56}Ni)_{12ec}= 0.040 \pm 0.015 \: \mbox{M$_{\odot}$}$ and $M(^{56}Ni)_{12A}= 0.011 \pm 0.004 \: \mbox{M$_{\odot}$}$.
A spectroscopic comparison shows a similar time evolution at all epochs. The velocities of $H_{\alpha}$, $H_{\beta}$ and Fe II place SN 2012ec between the faster SN 2012aw and the slower SN 2012A at all times.
The estimated temperatures are comparable for the three objects within the first 20 days; at later times SN 2012ec tends to be similar to SN 2012A, and both are hotter than SN 2012aw.
SN 2012aw has a more energetic explosion ($E=1.5$ foe) than both SN 2012ec ($E=1.2$ foe) and SN 2012A ($E=0.48$ foe), while SN 2012ec is in turn more energetic than SN 2012A.
We finally compared the results of the direct detection of the progenitors of these three SNe with the masses estimated from the hydrodynamical modelling.
The progenitor mass estimated for SN 2012aw from the pre-explosion images ($M=13-16 \; \mbox{M$_{\odot}$}$) and from the hydrodynamical modelling ($M_{eject}= 20 \: \mbox{M$_{\odot}$}$) shows that the two methods are not in good agreement, and that SN 2012aw had a more massive progenitor than SN 2012ec, the latter having an ejecta mass comparable with that of SN 2012A ($M= 8-15 \: \mbox{M$_{\odot}$}$, $M_{eject}= 12.5 \: \mbox{M$_{\odot}$}$).
The estimated initial radius of SN 2012aw ($R= 3 \times 10^{13}$ cm) indicates a larger progenitor than for SN 2012ec ($R= 1.6 \times 10^{13}$ cm) and SN 2012A ($R= 1.8 \times 10^{13}$ cm).
The estimates of the initial radius from the hydrodynamical modelling for the three objects are lower than those from the pre-explosion images, and seem to be too low for RSG progenitors.
This homogeneous analysis finds a substantial match, within the errors, between the progenitor masses obtained with the two methods, mitigating the discrepancy that was pointed out in previous works \citep{Maguire2010}.
SN 2012ec, SN 2012aw and SN 2012A also follow the relation obtained by \citet{Hamuy2002}.
This fact, coupled with their high luminosity at UV wavelengths, makes Type II-P SNe interesting probes, observable with the next generation of telescopes up to high \textit{z}.
\section{Acknowledgements}
We warmly thank our referee, Dovi Poznanski, for his helpful comments, which significantly improved the content and the readability of our manuscript.
We thank E. Cappellaro for the useful discussions.
C.B. thanks the IRAP PhD program for the financial support.
The research of JRM is supported through a Royal Society Research Fellowship.
A.G.-Y. is supported by EU/FP7-ERC grant no. [307260], ``The Quantum Universe'' I-Core program of the Israeli Committee for Planning and Budgeting, the ISF, GIF, Minerva and Weizmann-UK grants, and the Kimmel award.
G.P. acknowledges partial support by proyecto interno UNAB DI-303-13/R.
G.P. and M.H. acknowledge support provided by the Millennium Institute of Astrophysics (MAS) through grant IC120009 of the Programa Iniciativa Cientifica Milenio del Ministerio de Economia, Fomento y Turismo de Chile.
M.D.V., M.L.P., S.B., A.P., L.T. and M.T. are partially supported by the PRIN-INAF 2011 with the project ``Transient Universe: from ESO Large to PESSTO''.
This work was partly supported by the European Union FP7 programme through ERC grant number 320360.
This work is based (in part) on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of PESSTO (the Public ESO Spectroscopic Survey of Transient Objects), ESO programmes 188.D-3003 and 191.D-0935.
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement n$^{\rm o}$ [291222] (PI : S. J. Smartt) and STFC grants ST/I001123/1 and ST/L000709/1.
The early SN 2012ec data have been collected via the ESO-NTT Large Program ``Supernova Variety and Nucleosynthesis Yields'' (184.D-1140), a European supernova collaboration led by Stefano Benetti (http://sngroup.oapd.inaf.it/esolarge.html).
This paper is partially based on observations collected at Copernico telescope (Asiago, Italy) of the INAF - Osservatorio Astronomico di Padova; at the Galileo 1.22m Telescope operated by Department of Physics and Astronomy of the University of Padova at Asiago; at the 2.56m Nordic Optical Telescope operated by The Nordic Optical Telescope Scientific Association (NOTSA); at the 4.3m William Herschel Telescope operated by the Isaac Newton Group of Telescopes; on observations obtained through the CNTAC proposal CN2012B-73 and on observations made with the Liverpool Telescope (programme OL12B) operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council.
\section{Introduction}
In Wu \cite{wu,wu2}, the coefficients appearing in the discretized primal and
dual problems are the function values of the coefficient functions taken at
the right-end points of the subdivided intervals. This simple type of formulation
can only be used to prove the strong duality theorem for the case of
piecewise continuous functions in which the discontinuities are left-continuities.
In this paper, we extend this result by proving the strong duality theorem for the
general situation of discontinuities. We propose a completely different type of
formulation for the discretized primal and dual problems: the coefficients in the
discretized primal and dual problems are taken to be the infimum and supremum
of the coefficient functions on the subdivided intervals, which is more complicated
than considering the function values of the coefficient functions at
the right-end points of the subdivided intervals as in Wu \cite{wu,wu2}.
The theory of continuous-time linear programming problem has received
considerable attention for a long time. Tyndall \cite{tyn65,tyn67}
treated rigorously a continuous-time linear programming problem with
constant matrices, which had originated from the ``bottleneck problem''
proposed by Bellman \cite{bel57}. Levinson \cite{lev} generalized the results
of Tyndall by considering time-dependent matrices in which the functions
appearing in the objective and constraints were assumed to be continuous on the time
interval $[0,T]$. Meidan and Perold \cite{mei}, Papageorgiou \cite{pap} and
Schechter \cite{sch} have also obtained
some interesting results for the continuous-time linear programming problem.
Anderson {\em et al.} \cite{and83,and94,and96}, Fleischer and Sethuraman \cite{fle}
and Pullan \cite{pul93,pul95,pul96,pul00,pul02} investigated a subclass of
continuous-time linear programming problems, which is called the separated
continuous-time linear programming problem and can be used to model
job-shop scheduling problems. Weiss \cite{wei} proposed a simplex-like algorithm to solve
the separated continuous-time linear programming problem.
This paper is organized as follows. In Section 2, the problem formulation is
presented, and the weak duality theorem is proved.
In Section 3, in order to study the strong duality theorem, we propose a perturbed
continuous-time linear programming problem. Many useful results that will be used
to prove the strong duality theorem are derived.
In Section 4, discretized problems are formulated in which the
partition of the time interval $[0,T]$ is not necessarily an equal subdivision of $[0,T]$.
We also derive many useful results that will be used
to prove the strong duality theorem.
In Section 5, the strong duality theorem is proved.
\section{Formulation}
Let $A$ be a matrix with entries denoted by $a_{ij}$. We define
$\parallel A\parallel =\sum_{i,j} |a_{ij}|$.
Let $L^{\infty}_{p}[0,T]$ be the space of all measurable and
essentially bounded functions from the compact interval $[0,T]$ into the
$p$-dimensional Euclidean space $\mathbb{R}^{p}$.
If $p=1$, then we simply write $L^{\infty}[0,T]$.
For $f\in L^{\infty}[0,T]$, we define
\[\parallel f\parallel_{\infty}=\mbox{ess}\sup_{t\in [0,T]}
|f(t)|=\inf\left\{k:|f(t)|\leq k\mbox{ a.e. in }[0,T]\right\},\]
where the Lebesgue measure is considered.
Therefore, we have $|f(t)|\leq\parallel f\parallel_{\infty}$
a.e. in $[0,T]$. For ${\bf f}=(f_{1},\cdots ,f_{p})\in L^{\infty}_{p}[0,T]$, we define
\[\parallel {\bf f}\parallel_{\infty}^{p}
=\max_{i=1,\cdots ,p}\parallel f_{i}\parallel_{\infty}.\]
We consider the following assumptions:
\begin{itemize}
\item ${\bf a}\in L^{\infty}_{q}[0,T]$ and ${\bf c}\in L^{\infty}_{p}[0,T]$;
\item $B$ and $K$ are time-dependent $p\times q$ matrices
defined on $[0,T]$ and $[0,T]\times [0,T]$, respectively, such that
each entry is in the spaces $L^{\infty}[0,T]$ and $L^{\infty}([0,T]\times [0,T])$,
respectively.
\end{itemize}
The continuous-time linear programming problem is formulated as follows:
\begin{eqnarray*}
(\mbox{CLP}^{*}) & \max & \int_{0}^{T} {\bf a}^{\top}(t){\bf z}(t)dt\\
& \mbox{subject to} & B(t){\bf z}(t)\leq {\bf c}(t)+\int_{0}^{t}
K(t,s){\bf z}(s)ds\mbox{ for all $t\in [0,T]$},\\
&& {\bf z}\in L^{\infty}_{q}[0,T]\mbox{ and }
{\bf z}(t)\geq {\bf 0}\mbox{ for all $t\in [0,T]$}.
\end{eqnarray*}
The dual problem of $(\mbox{CLP}^{*})$ is defined as follows:
\begin{eqnarray*}
(\mbox{DCLP}^{*}) & \min & \int_{0}^{T} {\bf c}^{\top}(t){\bf w}(t)dt\\
& \mbox{subject to} & B^{\top}(t){\bf w}(t)\geq {\bf a}(t)+\int_{t}^{T}
K^{\top}(s,t){\bf w}(s)ds\mbox{ for all $t\in [0,T]$},\\
&& {\bf w}\in L^{\infty}_{p}[0,T]\mbox{ and }
{\bf w}(t)\geq {\bf 0}\mbox{ for all $t\in [0,T]$}.
\end{eqnarray*}
In this paper, we shall consider the following problems:
\begin{eqnarray*}
\mbox{(CLP)} & \max & \int_{0}^{T} {\bf a}^{\top}(t){\bf z}(t)dt\\
& \mbox{subject to} & B(t){\bf z}(t)\leq {\bf c}(t)+\int_{0}^{t}
K(t,s){\bf z}(s)ds\mbox{ a.e. in $[0,T]$,}\\
&& {\bf z}\in L^{\infty}_{q}[0,T]\mbox{ and }
{\bf z}(t)\geq {\bf 0}\mbox{ a.e. in $[0,T]$}
\end{eqnarray*}
and
\begin{eqnarray*}
\mbox{(DCLP)} & \min & \int_{0}^{T} {\bf c}^{\top}(t){\bf w}(t)dt\\
& \mbox{subject to} & B^{\top}(t){\bf w}(t)\geq {\bf a}(t)+\int_{t}^{T}
K^{\top}(s,t){\bf w}(s)ds\mbox{ a.e. in $[0,T]$,}\\
&& {\bf w}\in L^{\infty}_{p}[0,T]\mbox{ and }
{\bf w}(t)\geq {\bf 0}\mbox{ a.e. in $[0,T]$},
\end{eqnarray*}
where the constraints are assumed to be satisfied in the sense of a.e. in $[0,T]$.
The weak duality theorem can be similarly established
although the primal and dual problems (CLP) and (DCLP)
are defined in the sense of a.e. in $[0,T]$.
\begin{Thm}{\label{optt198}}
{\em (Weak Duality Theorem)}
If ${\bf z}$ and ${\bf w}$ are any arbitrary feasible solutions of
the primal and dual problems {\em (CLP)} and {\em (DCLP)}, respectively, then
\[\int_{0}^{T} {\bf a}^{\top}(t){\bf z}(t)dt\leq\int_{0}^{T}
{\bf c}^{\top}(t){\bf w}(t)dt.\]
\end{Thm}
\begin{Proof}
According to the constraints of problems (CLP) and (DCLP), we have
\begin{equation}{\label{*clpeq56}}
\sum_{j=1}^{q}\int_{0}^{T}z_{j}(t)\left [a_{j}(t)-
\sum_{i=1}^{p}B_{ij}(t)w_{i}(t)+
\sum_{i=1}^{p}\int_{t}^{T} K_{ij}(s,t)w_{i}(s)ds\right ]dt\leq 0
\end{equation}
and
\begin{equation}{\label{*clpeq57}}
\sum_{i=1}^{p}\int_{0}^{T}w_{i}(t)\left [c_{i}(t)-\sum_{j=1}^{q}B_{ij}(t)z_{j}(t)
+\sum_{j=1}^{q}\int_{0}^{t} K_{ij}(t,s)z_{j}(s)ds\right ]dt\geq 0.
\end{equation}
By Fubini's theorem, we also have
\begin{equation}{\label{dclp112}}
\int_{0}^{T}\int_{t}^{T}K_{ij}(s,t)z_{j}(t)w_{i}(s)dsdt=
\int_{0}^{T}\int_{0}^{t}K_{ij}(t,s)z_{j}(s)w_{i}(t)dsdt.
\end{equation}
Therefore, in vectorial form, we obtain
\begin{align*}
0 & \geq\int_{0}^{T}{\bf z}^{\top}(t)\left [{\bf a}(t)
-B^{\top}(t){\bf w}(t)+\int_{t}^{T}K^{\top}(s,t){\bf w}(s)ds\right ]dt\\
& \quad -\int_{0}^{T}{\bf w}^{\top}(t)\left [{\bf c}(t)-B(t){\bf z}(t)
+\int_{0}^{t}K(t,s){\bf z}(s)ds\right ]dt
\mbox{ (by (\ref{*clpeq56}) and (\ref{*clpeq57}))}\\
& =\int_{0}^{T}{\bf z}^{\top}(t){\bf a}(t)dt+\int_{0}^{T}
{\bf w}^{\top}(t)\left [-B(t){\bf z}(t)+\int_{0}^{t}K(t,s){\bf z}(s)ds
\right ]dt\mbox{ (by (\ref{dclp112}))}\\
& \quad -\int_{0}^{T}{\bf c}^{\top}(t){\bf w}(t)dt-\int_{0}^{T}
{\bf w}^{\top}(t)\left [-B(t){\bf z}(t)+\int_{0}^{t}K(t,s){\bf z}(s)ds\right ]dt\\
& =\int_{0}^{T}{\bf z}^{\top}(t){\bf a}(t)dt
-\int_{0}^{T}{\bf c}^{\top}(t){\bf w}(t)dt.
\end{align*}
This completes the proof.
\end{Proof}
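To make the inequality concrete, the following minimal numerical sketch discretizes a scalar instance ($p=q=1$) of (CLP) and (DCLP) on a uniform grid, using piecewise constant functions and the data $a\equiv c\equiv B\equiv 1$ and $K\equiv 0.5$; both the instance and the naive discretization are illustrative only, and are not the formulation proposed in Section 4.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

T, n = 1.0, 50
dt = T / n
a = np.ones(n); c = np.ones(n); K = 0.5

L = np.tril(np.ones((n, n)), k=-1)   # strictly lower triangular part
U = np.triu(np.ones((n, n)), k=1)    # strictly upper triangular part

# Discretized (CLP):
#   max dt*a'z  s.t.  z_k - dt*K*sum_{l<k} z_l <= c_k,  z >= 0.
primal = linprog(-dt * a, A_ub=np.eye(n) - dt * K * L, b_ub=c)

# Discretized (DCLP):
#   min dt*c'w  s.t.  w_k - dt*K*sum_{l>k} w_l >= a_k,  w >= 0.
dual = linprog(dt * c, A_ub=-np.eye(n) + dt * K * U, b_ub=-a)

print(-primal.fun, dual.fun)  # primal value <= dual value
\end{verbatim}
Since these two finite problems are exact linear programming duals of each other, the printed values agree up to solver tolerance, which is consistent with the weak duality inequality above.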
In the sequel, we are going to prove the strong duality theorem between
(CLP) and (DCLP) although these problems are considered in the sense of a.e. in $[0,T]$.
\section{Perturbed Formulation}
Given any $\epsilon\geq 0$, we consider the following perturbed problems:
\begin{eqnarray*}
(\mbox{CLP}_{\epsilon}) & \max & \int_{0}^{T} {\bf a}^{\top}(t){\bf z}(t)dt\\
& \mbox{subject to} & B(t){\bf z}(t)\leq
\left [{\bf c}(t)+\mbox{\boldmath $\epsilon$}\right ]
+\int_{0}^{t} K(t,s){\bf z}(s)ds\mbox{ a.e. in $[0,T]$},\\
&& {\bf z}\in L^{\infty}_{q}[0,T]\mbox{ and }
{\bf z}(t)\geq {\bf 0}\mbox{ a.e. in $[0,T]$}
\end{eqnarray*}
and
\begin{eqnarray*}
(\mbox{DCLP}_{\epsilon}) & \min & \int_{0}^{T} {\bf c}^{\top}(t){\bf w}(t)dt\\
& \mbox{subject to} & B^{\top}(t){\bf w}(t)
\geq \left [{\bf a}(t)-\mbox{\boldmath $\epsilon$}\right ]
+\int_{t}^{T} K^{\top}(s,t){\bf w}(s)ds\mbox{ a.e. in $[0,T]$},\\
&& {\bf w}\in L^{\infty}_{p}[0,T]\mbox{ and }
{\bf w}(t)\geq {\bf 0}\mbox{ a.e. in $[0,T]$},
\end{eqnarray*}
where $\mbox{\boldmath $\epsilon$}$ is a vector with all entries $\epsilon$.
Although the vectors $\mbox{\boldmath $\epsilon$}$ in problems
$(\mbox{CLP}_{\epsilon})$ and $(\mbox{DCLP}_{\epsilon})$ have different dimensions,
we use the same notation for convenience.
If the constraints of $(\mbox{CLP}_{\epsilon})$ and
$(\mbox{DCLP}_{\epsilon})$ are assumed to be satisfied for all $t\in [0,T]$,
then the corresponding problems are denoted by $(\mbox{CLP}_{\epsilon}^{*})$ and
$(\mbox{DCLP}_{\epsilon}^{*})$.
Since each entry of ${\bf a}$, ${\bf c}$, $B$ and $K$ is measurable and essentially bounded
in $[0,T]$ and $[0,T]\times [0,T]$, respectively, we define
\begin{eqnarray}
&& \tau =\max_{j=1,\cdots ,q}\parallel a_{j}\parallel_{\infty};
\mbox{ that is, }|a_{j}(t)|\leq\tau
\mbox{ a.e. in $[0,T]$},\label{clpeq8}\\
&& \zeta =\max_{i=1,\cdots ,p}\parallel c_{i}\parallel_{\infty};
\mbox{ that is, }|c_{i}(t)|\leq\zeta\mbox{ a.e. in $[0,T]$},\label{clpeq2}\\
&& \eta =\max_{i=1,\cdots ,p;j=1,\cdots ,q}\parallel K_{ij}\parallel_{\infty};
\mbox{ that is, }|K_{ij}(t,s)|\leq\eta\mbox{ a.e. in $[0,T]\times [0,T]$},\label{*clpeq2}\\
&& \nu =\max_{j=1,\cdots ,q}\sum_{i=1}^{p}\parallel K_{ij}\parallel_{\infty};
\mbox{ that is, }\sum_{i=1}^{p}|K_{ij}(t,s)|
\leq\nu\mbox{ a.e. in $[0,T]\times [0,T]$},\label{clpeq50}\\
&& \phi =\max_{i=1,\cdots ,p}\sum_{j=1}^{q}\parallel K_{ij}\parallel_{\infty};
\mbox{ that is, }\sum_{j=1}^{q}|K_{ij}(t,s)|
\leq\phi\mbox{ a.e. in $[0,T]\times [0,T]$}.\label{clpeq3}
\end{eqnarray}
Let $\{f_{k}\}_{k=1}^{\infty}$ be a sequence of functions in $L^{\infty}[0,T]$.
We say that the sequence $\{f_{k}\}_{k=1}^{\infty}$ is uniformly essentially bounded
in $[0,T]$ if and only if there exists a positive constant $C$ such that
$\parallel f_{k}\parallel_{\infty}\leq C$ for each $k$.
If $\{{\bf f}_{k}\}_{k=1}^{\infty}$ is a sequence of vector-valued functions,
then we say that the sequence $\{{\bf f}_{k}\}_{k=1}^{\infty}$ is uniformly
essentially bounded if and only if there exists a positive constant $C$ such that
$\parallel f_{ik}\parallel_{\infty}\leq C$ for each $i$ and $k$, where $f_{ik}$ is
the $i$th entry of ${\bf f}_{k}$. For $f\in L^{2}[0,T]$, we recall
\[\parallel f\parallel_{2}=\left (\int_{0}^{T}f^{2}(t)dt\right )^{1/2}.\]
Then, the sequence $\{f_{k}\}_{k=1}^{\infty}$ of real-valued functions is uniformly
bounded in $[0,T]$ with respect to $\parallel\cdot\parallel_{2}$ if and only if
there exists a positive constant $C$ such that
$\parallel f_{k}\parallel_{2}\leq C$ for each $k$.
The concept of uniform boundedness of the sequence of vector-valued
functions $\{{\bf f}_{k}\}_{k=1}^{\infty}$ can be similarly defined.
We also see that if the sequence $\{{\bf f}_{k}\}_{k=1}^{\infty}$ is uniformly
essentially bounded in $[0,T]$, then it is also uniformly
bounded in $[0,T]$ with respect to $\parallel\cdot\parallel_{2}$.
We denote by ${\cal Z}_{\epsilon}$ and ${\cal W}_{\epsilon}$
the feasible sets of problems $(\mbox{CLP}_{\epsilon})$ and
$(\mbox{DCLP}_{\epsilon})$, respectively.
We say that the feasible set ${\cal Z}_{\epsilon}$ of
$(\mbox{CLP}_{\epsilon})$ is uniformly
essentially bounded if and only if there exists a positive constant $C$
such that each feasible solution of $(\mbox{CLP}_{\epsilon})$ is essentially
bounded by $C$. We are going to provide sufficient conditions to guarantee
that the feasible set ${\cal Z}_{\epsilon}$ of $(\mbox{CLP}_{\epsilon})$ is
uniformly essentially bounded.
Gronwall's lemma was provided by Levinson \cite{lev}; it can be proved similarly
in the sense of a.e. in $[0,T]$, as we now show.
\begin{Lem}{\label{optl196}}
{\em (Gronwall's lemma)}
Suppose that the real-valued function $g$ is integrable in $[0,T]$ and
$g(t)\geq 0$ a.e. in $[0,T]$ $($resp. for all $t\in [0,T])$.
If there exist constants $\theta_{1}\geq 0$ and $\theta_{2}>0$ such that
\begin{equation}{\label{opteq134}}
g(t)\leq\theta_{1}+\theta_{2}\cdot\int_{0}^{t}g(s)ds\mbox{ a.e. in $[0,T]$
$($resp. for all $t\in [0,T])$},
\end{equation}
then $g(t)\leq\theta_{1}\cdot e^{\theta_{2}t}$ a.e. in $[0,T]$
$($resp. for all $t\in [0,T])$.
\end{Lem}
\begin{Proof}
We are going to prove the case of a.e. on $[0,T]$.
For $t\in [0,T]$, we define
\[G(t)=\int_{0}^{t}g(s)ds.\]
Then, we see that $G$ is continuous on $[0,T]$ and
$G'(t)=g(t)$ a.e. on $[0,T]$ by Royden \cite{roy}.
From (\ref{opteq134}), we also have
\begin{equation}{\label{opteq135}}
g(t)\leq\theta_{1}+\theta_{2}G(t)\mbox{ a.e. on $[0,T]$}.
\end{equation}
Using (\ref{opteq135}), we also have
\begin{align*}
\frac{d}{dt}\left (e^{-\theta_{2}t}G(t)\right )
& =-\theta_{2}e^{-\theta_{2}t}G(t)+e^{-\theta_{2}t}g(t)\\
& \leq -\theta_{2}e^{-\theta_{2}t}G(t)+e^{-\theta_{2}t}\cdot\left (
\theta_{1}+\theta_{2}G(t)\right )
=\theta_{1}e^{-\theta_{2}t}\mbox{ a.e. on $[0,T]$}.
\end{align*}
By taking integration, for $t\in [0,T]$, we have
\begin{equation}{\label{opteq137}}
\int_{0}^{t}\frac{d}{ds}\left (e^{-\theta_{2}s}G(s)\right )ds\leq
\int_{0}^{t}\theta_{1}e^{-\theta_{2}s}ds.
\end{equation}
Since $e^{-\theta_{2}s}G(s)$ is continuous on $[0,T]$,
the Lebesgue integral and Riemann integral are identical
as given in (\ref{opteq137}). Therefore, we have
\[e^{-\theta_{2}t}G(t)-G(0)\leq-\frac{\theta_{1}}
{\theta_{2}}e^{-\theta_{2}t}+\frac{\theta_{1}}{\theta_{2}}\]
for each $t\in [0,T]$. Since $G(0)=0$, we have
\begin{equation}{\label{opteq136}}
G(t)\leq\frac{\theta_{1}}{\theta_{2}}\left (e^{\theta_{2}t}-1\right )
\end{equation}
for each $t\in [0,T]$. Using (\ref{opteq135}) and (\ref{opteq136}), we obtain
\[g(t)\leq\theta_{1}+\theta_{2}G(t)\leq\theta_{1}+\theta_{2}\cdot
\frac{\theta_{1}}{\theta_{2}}\left (e^{\theta_{2}t}-1\right )
=\theta_{1}e^{\theta_{2}t}\mbox{ a.e. on $[0,T]$}.\]
This completes the proof.
\end{Proof}
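For instance, taking $\theta_{1}=1$ and $\theta_{2}=2$, any nonnegative integrable
function $g$ satisfying $g(t)\leq 1+2\int_{0}^{t}g(s)ds$ a.e. in $[0,T]$ obeys
$g(t)\leq e^{2t}$ a.e. in $[0,T]$; since $g(t)=e^{2t}$ satisfies the hypothesis
with equality, the bound cannot be improved.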
\begin{Pro}{\label{optt120*}}
Suppose that there exist real-valued functions $\lambda_{i}$ satisfying
$0\leq\lambda_{i}(t)\leq 1$ a.e. in $[0,T]$ $($resp. for all $t\in [0,T])$ for
$i=1,\cdots ,p$ and a constant $\sigma >0$ satisfying
\[\min_{j=1,\cdots ,q}\left\{\sum_{i=1}^{p}\lambda_{i}(t)B_{ij}(t)
\right\}\geq\sigma\mbox{ a.e. in $[0,T]$ $($resp. for all $t\in [0,T])$}.\]
If the problem $(\mbox{\em CLP}_{\epsilon})$ is feasible, then
each feasible solution ${\bf z}^{(\epsilon)}(t)$ is bounded satisfying
\begin{align}
\left |z_{j}^{(\epsilon)}(t)\right | & \leq\parallel {\bf z}^{(\epsilon)}(t)\parallel
\leq\frac{p\cdot (\zeta +\epsilon )}{\sigma}\cdot
\exp\left (\frac{p\cdot\phi\cdot t}{\sigma}\right )\nonumber\\
& \leq\frac{p\cdot (\zeta +\epsilon )}{\sigma}\cdot
\exp\left (\frac{p\cdot\phi\cdot T}{\sigma}\right )\mbox{ a.e. in $[0,T]$
$($resp. for all $t\in [0,T])$}\label{clpeq337}
\end{align}
for $j=1,\cdots ,q$, where the constants $\zeta$ and $\phi$ are given in
$(\ref{clpeq2})$ and $(\ref{clpeq3})$, respectively.
In other words, the feasible set ${\cal Z}_{\epsilon}$ of
$(\mbox{\em CLP}_{\epsilon})$ is uniformly essentially bounded in $[0,T]$
when $\epsilon$ is fixed.
\end{Pro}
\begin{Proof}
We are going to prove the case of a.e. in $[0,T]$.
Let ${\bf z}^{(\epsilon)}$ be a feasible solution of
$(\mbox{CLP}_{\epsilon})$. According to the constraints, we have
\begin{align*}
& \sum_{j=1}^{q}\sum_{i=1}^{p}\lambda_{i}(t)\cdot B_{ij}(t)\cdot z_{j}^{(\epsilon)}(t)\\
& \quad\leq\sum_{i=1}^{p}\lambda_{i}(t)\cdot\left |c_{i}(t)+\epsilon\right |+
\int_{0}^{t}\sum_{j=1}^{q}\sum_{i=1}^{p}\lambda_{i}(t)\cdot
K_{ij}(t,s)\cdot z_{j}^{(\epsilon)}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{align*}
Therefore, we obtain
\begin{align*}
\sigma\cdot\parallel {\bf z}^{(\epsilon)}(t)\parallel
& \leq\sum_{j=1}^{q}\left [\left |z_{j}^{(\epsilon)}(t)\right |\cdot
\sum_{i=1}^{p}\lambda_{i}(t)B_{ij}(t)\right ]
=\sum_{j=1}^{q}\sum_{i=1}^{p}\lambda_{i}(t)\cdot B_{ij}(t)\cdot
\left |z_{j}^{(\epsilon)}(t)\right |\\
& \leq\sum_{i=1}^{p}\lambda_{i}(t)\cdot\left |c_{i}(t)+\epsilon\right |+
\int_{0}^{t}\sum_{i=1}^{p}\sum_{j=1}^{q}\lambda_{i}(t)\cdot
\left |K_{ij}(t,s)\right |\cdot\left |z_{j}^{(\epsilon)}(s)\right |ds\\
& \leq\sum_{i=1}^{p}\left |c_{i}(t)+\epsilon\right |+\int_{0}^{t}\sum_{i=1}^{p}
\sum_{j=1}^{q}\left |K_{ij}(t,s)\right |\cdot\left |z_{j}^{(\epsilon)}(s)\right |ds\\
& \leq p\cdot (\zeta +\epsilon )+p\cdot\phi\cdot\int_{0}^{t}
\parallel {\bf z}^{(\epsilon)}(s)\parallel ds\mbox{ a.e. in $[0,T]$}.
\end{align*}
By Gronwall's Lemma~\ref{optl196}, we obtain
\[\parallel {\bf z}^{(\epsilon)}(t)\parallel\leq
\frac{p\cdot (\zeta +\epsilon )}{\sigma}\cdot
\exp\left (\frac{p\cdot\phi\cdot t}{\sigma}\right )
\leq\frac{p\cdot (\zeta +\epsilon )}{\sigma}\cdot
\exp\left (\frac{p\cdot\phi\cdot T}{\sigma}\right )
\mbox{ a.e. in $[0,T]$}.\]
This completes the proof.
\end{Proof}
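As a simple illustration of the bound $(\ref{clpeq337})$, consider the scalar data
$p=q=1$, $B\equiv 1$, ${\bf c}\equiv 1$, $K\equiv 1/2$ and $\epsilon =0$, so that
one may take $\lambda_{1}\equiv 1$ and $\sigma =1$, with $\zeta =1$ and $\phi =1/2$.
Then the bound reads $|z(t)|\leq e^{t/2}$, and it is attained by the feasible
solution $z(t)=e^{t/2}$ of the integral equation
$z(t)=1+\frac{1}{2}\int_{0}^{t}z(s)ds$.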
The following lemmas are very useful.
\begin{Lem}{\label{optl138}}
{\em (Riesz and Sz.-Nagy \cite[p.64]{rie})}
Let $\{f_{k}\}_{k=1}^{\infty}$ be a sequence in $L^{2}[0,T]$.
If the sequence $\{f_{k}\}_{k=1}^{\infty}$ is uniformly bounded with respect to
$\parallel\cdot\parallel_{2}$, then there exists a subsequence
$\{f_{k_{r}}\}_{r=1}^{\infty}$ which weakly converges to some $f_{0}\in L^{2}[0,T]$.
In other words, for any $g\in L^{2}[0,T]$, we have
\[\lim_{r\rightarrow\infty}\int_{0}^{T}
f_{k_{r}}(t)g(t)dt=\int_{0}^{T}f_{0}(t)g(t)dt.\]
\end{Lem}
\begin{Lem}{\label{optl157}}
{\em (Levinson \cite{lev})}
If the sequence $\{f_{k}\}_{k=1}^{\infty}$ is uniformly bounded in $[0,T]$
with respect to $\parallel\cdot\parallel_{2}$ and weakly converges
to some $f_{0}\in L^{2}[0,T]$, then
\[f_{0}(t)\leq\limsup_{k\rightarrow\infty}f_{k}(t)\mbox{ a.e. in $[0,T]$}\]
and
\[f_{0}(t)\geq\liminf_{k\rightarrow\infty}f_{k}(t)\mbox{ a.e. in $[0,T]$}.\]
\end{Lem}
\begin{Pro}{\label{*clpp47}}
Consider the sequence $\{\epsilon_{k}\}_{k=1}^{\infty}$ with
$\epsilon_{k}\rightarrow 0+$ as $k\rightarrow\infty$. Assume that $B(t)\geq {\bf 0}$
a.e. in $[0,T]$. The following statements hold true.
\begin{enumerate}
\item [{\em (i)}] Suppose that each problem $(\mbox{\em CLP}_{\epsilon_{k}})$
is feasible, and that ${\bf z}^{(\epsilon_{k})}$ is a feasible solution of
problem $(\mbox{\em CLP}_{\epsilon_{k}})$ such that the sequence
$\{{\bf z}^{(\epsilon_{k})}\}_{k=1}^{\infty}$ is uniformly essentially bounded.
Then, there exists a subsequence $\{{\bf z}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$
which weakly converges to some feasible solution ${\bf z}^{(0)}\in L_{q}^{2}[0,T]$ of
$(\mbox{\em CLP}_{0})=\mbox{\em (CLP)}$. Moreover, there exists a feasible solution
$\bar{\bf z}$ of {\em (CLP)} such that $\bar{\bf z}(t)\geq {\bf 0}$ for all
$t\in [0,T]$ and $\bar{\bf z}(t)={\bf z}^{(0)}(t)$ a.e. in $[0,T]$.
\item [{\em (ii)}] Suppose that ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$,
and that $K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$ for each fixed $t_{0}\in [0,T]$.
If the sequence $\{{\bf z}^{(\epsilon_{k})}\}_{k=1}^{\infty}$ of feasible solutions of
problem $(\mbox{\em CLP}_{\epsilon_{k}})$ is uniformly essentially bounded,
then there exists a subsequence $\{{\bf z}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$
which weakly converges to some feasible solution ${\bf z}^{(0)}\in L_{q}^{2}[0,T]$ of
$\mbox{\em (CLP)}$. Moreover, there exists a feasible solution $\bar{\bf z}$
of $(\mbox{\em CLP}^{*})$ such that $\bar{\bf z}(t)={\bf z}^{(0)}(t)$ a.e. in $[0,T]$.
\end{enumerate}
\end{Pro}
\begin{Proof}
To prove part (i), since the sequence $\{{\bf z}^{(\epsilon_{k})}\}_{k=1}^{\infty}$
is uniformly essentially bounded in $[0,T]$, it follows
that this sequence is also uniformly bounded in $[0,T]$ with respect to
$\parallel\cdot\parallel_{2}$. Let $z_{j}^{(\epsilon_{k})}$ be the $j$th entry of
${\bf z}^{(\epsilon_{k})}$. Using Lemma~\ref{optl138}, there exists a subsequence of
$\{z_{j}^{(\epsilon_{k})}\}_{k=1}^{\infty}$
which weakly converges to some $z_{j}^{(0)}(t)\in L^{2}[0,T]$.
Therefore, we can construct a vector-valued subsequence
$\{{\bf z}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$ of
$\{{\bf z}^{(\epsilon_{k})}\}_{k=1}^{\infty}$
such that $\{z_{j}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$ weakly converges to
$z_{j}^{(0)}$ for $j=1,\cdots ,q$. For each $i=1,\cdots ,p$, the constraints say that
\begin{equation}{\label{*clpeq59}}
\sum_{j=1}^{q}B_{ij}(t)\cdot z_{j}^{(\epsilon_{k_{r}})}(t)\leq c_{i}(t)
+\epsilon_{k_{r}}+\sum_{j=1}^{q}\int_{0}^{t} K_{ij}(t,s)\cdot
z_{j}^{(\epsilon_{k_{r}})}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Using Lemma~\ref{optl157}, we also have
\begin{equation}{\label{clpeq1}}
\limsup_{r\rightarrow\infty}z_{j}^{(\epsilon_{k_{r}})}(t)\geq
z_{j}^{(0)}(t)\geq\liminf_{r\rightarrow\infty}z_{j}^{(\epsilon_{k_{r}})}(t)\geq 0
\mbox{ a.e. in $[0,T]$.}
\end{equation}
Since $\epsilon_{k_{r}}\rightarrow 0$ as $r\rightarrow\infty$ and
$B(t)\geq {\bf 0}$ a.e. in $[0,T]$, from (\ref{*clpeq59}) and (\ref{clpeq1}),
by taking the limit superior and using the weak convergence, we obtain
\begin{equation}{\label{opteq148}}
B(t){\bf z}^{(0)}(t)\leq\limsup_{r\rightarrow\infty}
B(t){\bf z}^{(\epsilon_{k_{r}})}(t)
\leq {\bf c}(t)+\int_{0}^{t} K(t,s){\bf z}^{(0)}(s)ds
\mbox{ a.e. in $[0,T]$.}
\end{equation}
This shows that ${\bf z}^{(0)}$ is a feasible solution of (CLP).
Let $N_{0j}=\{t\in [0,T]:z_{j}^{(0)}(t)<0\}$ and $N_{0}=\bigcup_{j=1}^{q}N_{0j}$.
Let $N_{1}$ be the subset of $[0,T]$ such that the inequality
(\ref{opteq148}) is violated. We define $N=N_{0}\cup N_{1}$.
Then from (\ref{clpeq1}) and (\ref{opteq148}), we see that the set $N$ has measure zero.
Now, we define
\begin{equation}{\label{*clpeq51}}
\bar{\bf z}(t)=\left\{\begin{array}{ll}
{\bf z}^{(0)}(t) & \mbox{if $t\not\in N$}\\
{\bf 0} & \mbox{if $t\in N$}.
\end{array}\right .
\end{equation}
Then, we see that $\bar{\bf z}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and $\bar{\bf z}(t)={\bf z}^{(0)}(t)$ a.e. in $[0,T]$.
For $t\not\in N$, from (\ref{opteq148}), we have
\begin{equation}{\label{*clpeq52}}
B(t)\bar{\bf z}(t)=B(t){\bf z}^{(0)}(t)
\leq {\bf c}(t)+\int_{0}^{t} K(t,s){\bf z}^{(0)}(s)ds
={\bf c}(t)+\int_{0}^{t} K(t,s)\bar{\bf z}(s)ds.
\end{equation}
This shows that $\bar{\bf z}$ is a feasible solution of (CLP).
To prove part (ii), under the assumptions on ${\bf c}(t)$ and $K(t,s)$,
it is obvious that the problem $(\mbox{CLP}_{\epsilon_{k}})$ is feasible
for each $\epsilon_{k}$ with the trivial feasible solution
${\bf z}(t)={\bf 0}$ for all $t\in [0,T]$.
We consider $\bar{\bf z}$ defined in (\ref{*clpeq51}).
For $t\in N$, we have $B(t)\bar{\bf z}(t)={\bf 0}$.
Since ${\bf z}^{(0)}(t)\geq {\bf 0}$ a.e. in $[0,T]$ and
$K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$ for each fixed $t_{0}\in [0,T]$,
we obtain
\[B(t)\bar{\bf z}(t)={\bf 0}\leq {\bf c}(t)+\int_{0}^{t} K(t,s){\bf z}^{(0)}(s)ds
={\bf c}(t)+\int_{0}^{t} K(t,s)\bar{\bf z}(s)ds.\]
By referring to (\ref{*clpeq52}) for $t\not\in N$,
we see that $\bar{\bf z}(t)$ satisfies all the constraints of
primal problem (CLP) for all $t\in [0,T]$. This completes the proof.
\end{Proof}
\begin{Pro}{\label{*clpr62}}
Assume that $B(t)\geq {\bf 0}$ a.e. in $[0,T]$. The following statements hold true.
\begin{enumerate}
\item [{\em (i)}] Suppose that the problem $(\mbox{\em CLP}_{\epsilon})$ is feasible.
For any uniformly essentially bounded sequence
$\{{\bf z}^{(k)}\}_{k=1}^{\infty}$ of feasible solutions of
$(\mbox{\em CLP}_{\epsilon})$, there exists a subsequence
$\{{\bf z}^{(k_{r})}\}_{r=1}^{\infty}$ which weakly converges to some feasible solution
${\bf z}^{(\epsilon)}\in L_{q}^{2}[0,T]$ of $(\mbox{\em CLP}_{\epsilon})$.
Moreover, there exists a feasible solution $\bar{\bf z}^{(\epsilon)}$ of
$(\mbox{\em CLP}_{\epsilon})$
such that $\bar{\bf z}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and $\bar{\bf z}^{(\epsilon)}(t)={\bf z}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
\item [{\em (ii)}] Suppose that ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$,
and that $K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$ for each fixed $t_{0}\in [0,T]$.
Given any uniformly essentially bounded sequence
$\{{\bf z}^{(k)}\}_{k=1}^{\infty}$ of
feasible solutions of $(\mbox{\em CLP}_{\epsilon})$, there exists a subsequence
$\{{\bf z}^{(k_{r})}\}_{r=1}^{\infty}$ which weakly converges
to some feasible solution ${\bf z}^{(\epsilon)}\in L_{q}^{2}[0,T]$ of
$(\mbox{\em CLP}_{\epsilon})$. Moreover, there exists a feasible solution
$\bar{\bf z}^{(\epsilon)}$ of $(\mbox{\em CLP}_{\epsilon}^{*})$ such that
$\bar{\bf z}^{(\epsilon)}(t)={\bf z}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
\end{enumerate}
\end{Pro}
\begin{Proof}
To prove part (i), since the sequence $\{{\bf z}^{(k)}\}_{k=1}^{\infty}$
is uniformly essentially bounded in $[0,T]$,
we see that this sequence is also uniformly bounded in $[0,T]$ with respect to
$\parallel\cdot\parallel_{2}$. Using Lemma~\ref{optl138}, there exists a subsequence of
$\{z_{j}^{(k)}\}_{k=1}^{\infty}$ which weakly converges to some $z_{j}^{(\epsilon)}\in L^{2}[0,T]$.
Therefore, we can construct a vector-valued subsequence
$\{{\bf z}^{(k_{r})}\}_{r=1}^{\infty}$ of $\{{\bf z}^{(k)}\}_{k=1}^{\infty}$
such that $\{z_{j}^{(k_{r})}\}_{r=1}^{\infty}$ weakly converges to
$z_{j}^{(\epsilon)}$ for $j=1,\cdots ,q$. For each $i=1,\cdots ,p$, the feasibility says that
\begin{equation}{\label{*clpeq59b}}
\sum_{j=1}^{q}B_{ij}(t)\cdot z_{j}^{(k_{r})}(t)\leq c_{i}(t)+\epsilon
+\sum_{j=1}^{q}\int_{0}^{t} K_{ij}(t,s)\cdot z_{j}^{(k_{r})}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Using Lemma~\ref{optl157}, we also have
\begin{equation}{\label{clpeq1b}}
\limsup_{r\rightarrow\infty}z_{j}^{(k_{r})}(t)\geq
z_{j}^{(\epsilon)}(t)\geq\liminf_{r\rightarrow\infty}z_{j}^{(k_{r})}(t)\geq 0
\mbox{ a.e. in $[0,T]$.}
\end{equation}
Since $B(t)\geq {\bf 0}$ a.e. in $[0,T]$, from (\ref{*clpeq59b}) and (\ref{clpeq1b}),
by taking the limit superior and using the weak convergence, we obtain
\begin{equation}{\label{opteq148b}}
B(t){\bf z}^{(\epsilon)}(t)\leq\limsup_{r\rightarrow\infty}B(t){\bf z}^{(k_{r})}(t)
\leq {\bf c}(t)+\mbox{\boldmath $\epsilon$}+\int_{0}^{t} K(t,s){\bf z}^{(\epsilon)}(s)ds
\mbox{ a.e. in $[0,T]$.}
\end{equation}
This shows that ${\bf z}^{(\epsilon)}$ is a feasible solution of $(\mbox{CLP}_{\epsilon})$.
Let $N_{0j}=\{t\in [0,T]:z_{j}^{(\epsilon)}(t)<0\}$ and $N_{0}=\bigcup_{j=1}^{q}N_{0j}$.
Let $N_{1}$ be the subset of $[0,T]$ such that the inequality
(\ref{opteq148b}) is violated. We define $N=N_{0}\cup N_{1}$.
Then from (\ref{clpeq1b}) and (\ref{opteq148b}), we see that the set $N$ has measure zero.
Now, we define
\begin{equation}{\label{*clpeq51b}}
\bar{\bf z}^{(\epsilon)}(t)=\left\{\begin{array}{ll}
{\bf z}^{(\epsilon)}(t) & \mbox{if $t\not\in N$}\\
{\bf 0} & \mbox{if $t\in N$}.
\end{array}\right .
\end{equation}
Then, we see that $\bar{\bf z}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and $\bar{\bf z}^{(\epsilon)}(t)={\bf z}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
For $t\not\in N$, from (\ref{opteq148b}), we have
\begin{equation}{\label{*clpeq52b}}
B(t)\bar{\bf z}^{(\epsilon)}(t)=B(t){\bf z}^{(\epsilon)}(t)
\leq {\bf c}(t)+\mbox{\boldmath $\epsilon$}+\int_{0}^{t} K(t,s){\bf z}^{(\epsilon)}(s)ds
={\bf c}(t)+\mbox{\boldmath $\epsilon$}+\int_{0}^{t} K(t,s)\bar{\bf z}^{(\epsilon)}(s)ds.
\end{equation}
This shows that $\bar{\bf z}^{(\epsilon)}$ is a feasible solution of $(\mbox{CLP}_{\epsilon})$.
To prove part (ii), under the assumptions on ${\bf c}(t)$ and $K(t,s)$,
it is obvious that the problem $(\mbox{CLP}_{\epsilon})$ is feasible
with the trivial feasible solution ${\bf z}(t)={\bf 0}$ for all $t\in [0,T]$.
We consider $\bar{\bf z}^{(\epsilon)}$ defined in (\ref{*clpeq51b}).
For $t\in N$, we have $B(t)\bar{\bf z}^{(\epsilon)}(t)={\bf 0}$.
Since ${\bf z}^{(\epsilon)}(t)\geq {\bf 0}$ a.e. in $[0,T]$ and
$K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$ for each fixed $t_{0}\in [0,T]$,
we obtain
\[B(t)\bar{\bf z}^{(\epsilon)}(t)={\bf 0}\leq {\bf c}(t)+\mbox{\boldmath $\epsilon$}
+\int_{0}^{t} K(t,s){\bf z}^{(\epsilon)}(s)ds
={\bf c}(t)+\mbox{\boldmath $\epsilon$}+\int_{0}^{t} K(t,s)\bar{\bf z}^{(\epsilon)}(s)ds.\]
By referring to (\ref{*clpeq52b}) for $t\not\in N$,
we see that $\bar{\bf z}^{(\epsilon)}(t)$ satisfies all the constraints of
primal problem $(\mbox{CLP}_{\epsilon})$ for all $t\in [0,T]$. This completes the proof.
\end{Proof}
\begin{Thm}{\label{optt120}}
Assume that $B(t)\geq {\bf 0}$ a.e. in $[0,T]$.
For any $\epsilon\geq 0$, the following results hold.
\begin{enumerate}
\item [{\em (i)}] Suppose that the problem $(\mbox{\em CLP}_{\epsilon})$
is feasible, and that the feasible set ${\cal Z}_{\epsilon}$ of
$(\mbox{\em CLP}_{\epsilon})$ is uniformly essentially bounded.
Then, there exists an optimal solution
$\bar{\bf z}^{(\epsilon)}$ of $(\mbox{\em CLP}_{\epsilon})$ such that
$\bar{\bf z}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$.
\item [{\em (ii)}] Suppose that ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$,
$K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$ for each fixed $t_{0}\in [0,T]$,
and that the feasible set ${\cal Z}_{\epsilon}$ of $(\mbox{\em CLP}_{\epsilon})$
is uniformly essentially bounded. Then, there exists a common optimal solution
$\bar{\bf z}^{(\epsilon)}$ of $(\mbox{\em CLP}_{\epsilon})$
and $(\mbox{\em CLP}_{\epsilon}^{*})$
such that both problems have the same optimal objective values.
\end{enumerate}
\end{Thm}
\begin{Proof}
To prove part (i), we define
\[M=\sup_{{\bf z}\in {\cal Z}_{\epsilon}}\int_{0}^{T}{\bf a}^{\top}(t){\bf z}(t)dt.\]
Then, there exists a sequence $\{{\bf z}^{(k)}\}_{k=1}^{\infty}$ in ${\cal Z}_{\epsilon}$ such that
\begin{equation}{\label{opteq146}}
\lim_{k\rightarrow\infty}\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{(k)}(t)dt=M.
\end{equation}
We claim that the supremum $M$ is attained by some feasible
solution of $(\mbox{CLP}_{\epsilon})$. Since the sequence $\{{\bf z}^{(k)}\}_{k=1}^{\infty}$
is uniformly essentially bounded in $[0,T]$ by the assumption on the feasible
set ${\cal Z}_{\epsilon}$, using part (i) of Proposition~\ref{*clpr62},
there exists a subsequence $\{{\bf z}^{(k_{r})}\}_{r=1}^{\infty}$ which
weakly converges to some feasible solution ${\bf z}^{(\epsilon)}\in L_{q}^{2}[0,T]$ of $(\mbox{CLP}_{\epsilon})$, and
there exists another feasible solution $\bar{\bf z}^{(\epsilon)}$ of $(\mbox{CLP}_{\epsilon})$
such that $\bar{\bf z}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and $\bar{\bf z}^{(\epsilon)}(t)={\bf z}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
From (\ref{opteq146}), we obtain
\begin{align*}
\int_{0}^{T}{\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt
& =\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{(\epsilon)}(t)dt
=\sum_{j=1}^{q}\int_{0}^{T}z_{j}^{(\epsilon)}(t)a_{j}(t)dt\\
& =\lim_{r\rightarrow\infty}\sum_{j=1}^{q}\int_{0}^{T}z_{j}^{(k_{r})}(t)a_{j}(t)dt
=\lim_{r\rightarrow\infty}\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{(k_{r})}(t)dt=M.
\end{align*}
This shows that $\bar{\bf z}^{(\epsilon)}$ is an optimal solution of $(\mbox{CLP}_{\epsilon})$.
To prove part (ii), since ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$ and
$K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$ for each fixed $t_{0}\in [0,T]$,
part (ii) of Proposition~\ref{*clpr62} says that we can take
$\bar{\bf z}^{(\epsilon)}$ as a feasible solution of $(\mbox{CLP}_{\epsilon}^{*})$.
Since the feasible set of $(\mbox{CLP}_{\epsilon}^{*})$ is contained in the
feasible set of $(\mbox{CLP}_{\epsilon})$, it follows that $\bar{\bf z}^{(\epsilon)}$
is an optimal solution of problem $(\mbox{CLP}_{\epsilon}^{*})$. This completes the proof.
\end{Proof}
Suppose that there exists a constant $\sigma >0$ such that
$\sum_{i=1}^{p}B_{ij}(t)\geq\sigma$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$.
We define a real-valued function
\begin{equation}{\label{opteq153}}
\rho_{\epsilon}(t)=\frac{\tau -\epsilon}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]
\mbox{ for }t\in [0,T]
\end{equation}
and define $\mbox{\boldmath $\rho$}_{\epsilon}(t)$ as a $p$-dimensional
vector-valued function with all entries $\rho_{\epsilon}(t)$.
In the sequel, we are going to study the existence of optimal
solutions of $(\mbox{DCLP}_{\epsilon})$.
We first establish the feasibility of the dual problem.
\begin{Pro}{\label{*clpp50}}
The following statements hold true.
\begin{enumerate}
\item [{\em (i)}] Suppose that there exists a constant $\sigma >0$ such that
$\sum_{i=1}^{p}B_{ij}(t)\geq\sigma$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$.
Then, the problem $(\mbox{\em DCLP}_{\epsilon})$ is feasible with the feasible
solution $\mbox{\boldmath $\rho$}_{\epsilon}$.
\item [{\em (ii)}] Suppose that there exists a constant $\sigma >0$ such that
$\sum_{i=1}^{p}B_{ij}(t)\geq\sigma$ for all $t\in [0,T]$ and for each $j=1,\cdots ,q$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$.
Then, the problem $(\mbox{\em DCLP}_{\epsilon}^{*})$ is feasible with the feasible
solution $\mbox{\boldmath $\rho$}_{\epsilon}$.
\end{enumerate}
\end{Pro}
\begin{Proof}
To prove part (i), from (\ref{opteq153}), we see that
\begin{equation}{\label{opteq154}}
\sigma\rho_{\epsilon}(t)=\tau -\epsilon +\nu\cdot\int_{t}^{T}\rho_{\epsilon}(s)ds
\mbox{ for $t\in [0,T]$.}
\end{equation}
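To make this explicit (a short check, assuming $\nu >0$ so that the integral can be
evaluated in closed form), we compute
\[\int_{t}^{T}\rho_{\epsilon}(s)ds=\frac{\tau -\epsilon}{\sigma}\int_{t}^{T}
\exp\left [\frac{\nu\cdot (T-s)}{\sigma}\right ]ds
=\frac{\tau -\epsilon}{\nu}\left (\exp\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]-1\right ),\]
so that $\tau -\epsilon +\nu\cdot\int_{t}^{T}\rho_{\epsilon}(s)ds
=(\tau -\epsilon )\cdot\exp\left [\nu\cdot (T-t)/\sigma\right ]=\sigma\rho_{\epsilon}(t)$,
which is exactly (\ref{opteq154}).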
For each $j=1,\cdots ,q$, using (\ref{opteq154}), we have
\begin{equation}{\label{opteq155}}
\sum_{i=1}^{p}B_{ij}(t)\rho_{\epsilon}(t)\geq\sigma\rho_{\epsilon}(t)\geq
a_{j}(t)-\epsilon +\sum_{i=1}^{p}\int_{t}^{T}
K_{ij}(s,t)\rho_{\epsilon}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
This shows that $\mbox{\boldmath $\rho$}_{\epsilon}$ is a feasible solution
of $(\mbox{DCLP}_{\epsilon})$.
To prove part (ii), by applying the assumptions to (\ref{opteq155}), we obtain
\begin{equation}{\label{opteq*155}}
\sum_{i=1}^{p}B_{ij}(t)\rho_{\epsilon}(t)\geq\sigma\rho_{\epsilon}(t)\geq
a_{j}(t)-\epsilon +\sum_{i=1}^{p}\int_{t}^{T}
K_{ij}(s,t)\rho_{\epsilon}(s)ds\mbox{ for all }t\in [0,T].
\end{equation}
This shows that $\mbox{\boldmath $\rho$}_{\epsilon}$ is a feasible solution of
$(\mbox{DCLP}_{\epsilon}^{*})$, which completes the proof.
\end{Proof}
\begin{Lem}{\label{*clpp52}}
Let ${\bf w}^{(\epsilon)}$ be a feasible solution of problem
$(\mbox{\em DCLP}_{\epsilon})$. Then, the following statements hold true.
\begin{enumerate}
\item [{\em (i)}] There exists a feasible solution $\bar{\bf w}^{(\epsilon)}$ of
$(\mbox{\em DCLP}_{\epsilon})$
such that $\bar{\bf w}^{(\epsilon)}(t)={\bf w}^{(\epsilon)}(t)$ a.e. in $[0,T]$
and $\bar{\bf w}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$.
If we further assume that there is a vector-valued function
${\bf v}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$
such that ${\bf w}^{(\epsilon)}(t)\leq
{\bf v}^{(\epsilon)}(t)$ a.e. in $[0,T]$, then
${\bf 0}\leq\bar{\bf w}^{(\epsilon)}(t)\leq{\bf v}^{(\epsilon)}(t)$ for all $t\in [0,T]$.
\item [{\em (ii)}] Suppose that there exists a constant $\sigma >0$ such that
$\sum_{i=1}^{p}B_{ij}(t)\geq\sigma$ for all $t\in [0,T]$ and for each $j=1,\cdots ,q$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$. If ${\bf w}^{(\epsilon)}(t)\leq
\mbox{\boldmath $\rho$}_{\epsilon}(t)$ a.e. in $[0,T]$, then there exists a feasible solution
$\bar{\bf w}^{(\epsilon)}$ of $(\mbox{\em DCLP}_{\epsilon}^{*})$
such that ${\bf 0}\leq\bar{\bf w}^{(\epsilon)}(t)\leq
\mbox{\boldmath $\rho$}_{\epsilon}(t)$ for all $t\in [0,T]$ and
$\bar{\bf w}^{(\epsilon)}(t)={\bf w}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
\end{enumerate}
\end{Lem}
\begin{Proof}
To prove part (i), we begin by observing that ${\bf w}^{(\epsilon)}(t)\geq {\bf 0}$
a.e. in $[0,T]$ and
\begin{equation}{\label{*clpeq54}}
B^{\top}(t){\bf w}^{(\epsilon)}(t)\geq
{\bf a}(t)-\mbox{\boldmath $\epsilon$}+\int_{t}^{T}
K^{\top}(s,t){\bf w}^{(\epsilon)}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Let $N_{0i}=\{t\in [0,T]:w_{i}^{(\epsilon)}(t)<0\}$
and $N_{0}=\bigcup_{i=1}^{p}N_{0i}$.
Let $N_{1}$ be the subset of $[0,T]$ on which the inequality
(\ref{*clpeq54}) is violated, and let $N=N_{0}\cup N_{1}$.
Then, we see that the set $N$ has measure zero. Now, we define
\[\bar{\bf w}^{(\epsilon)}(t)=\left\{\begin{array}{ll}
{\bf w}^{(\epsilon)}(t) & \mbox{if $t\not\in N$}\\
{\bf 0} & \mbox{if $t\in N$}.
\end{array}\right .\]
Then, we see that $\bar{\bf w}^{(\epsilon)}(t)\geq {\bf 0}$ for all $t\in [0,T]$ and
$\bar{\bf w}^{(\epsilon)}(t)={\bf w}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
For $t\not\in N$, from (\ref{*clpeq54}), we have
\begin{align}
B^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)
& =B^{\top}(t){\bf w}^{(\epsilon)}(t)\nonumber\\
& \geq {\bf a}(t)-\mbox{\boldmath $\epsilon$}
+\int_{t}^{T} K^{\top}(s,t){\bf w}^{(\epsilon)}(s)ds
={\bf a}(t)-\mbox{\boldmath $\epsilon$}
+\int_{t}^{T} K^{\top}(s,t)\bar{\bf w}^{(\epsilon)}(s)ds.\label{extdclp1}
\end{align}
This shows that $\bar{\bf w}^{(\epsilon)}$ is a feasible solution of the dual
problem $(\mbox{DCLP}_{\epsilon})$.
Now, we assume that ${\bf w}^{(\epsilon)}(t)\leq{\bf v}^{(\epsilon)}(t)$ a.e. in $[0,T]$.
Let $N_{0}$ and $N_{1}$ be the subsets of $[0,T]$ defined above,
$N_{2i}=\{t\in [0,T]:w_{i}^{(\epsilon)}(t)>v_{i}^{(\epsilon)}(t)\}$,
$N_{2}=\bigcup_{i=1}^{p}N_{2i}$, and $\hat{N}=N_{0}\cup N_{1}\cup N_{2}$.
Then, the set $\hat{N}$ has measure zero. Now, we define
\begin{equation}{\label{*clpeq55}}
\bar{\bf w}^{(\epsilon)}(t)=\left\{\begin{array}{ll}
{\bf w}^{(\epsilon)}(t) & \mbox{if $t\not\in\hat{N}$}\\
{\bf v}^{(\epsilon)}(t) & \mbox{if $t\in\hat{N}$}.
\end{array}\right .
\end{equation}
Then, we see that ${\bf 0}\leq\bar{\bf w}^{(\epsilon)}(t)\leq {\bf v}^{(\epsilon)}(t)$
for all $t\in [0,T]$ and $\bar{\bf w}^{(\epsilon)}(t)
={\bf w}^{(\epsilon)}(t)$ a.e. in $[0,T]$. For $t\not\in\hat{N}$, we have $t\not\in N$.
Using (\ref{extdclp1}), it follows that $\bar{\bf w}^{(\epsilon)}$ is a feasible
solution of the dual problem $(\mbox{DCLP}_{\epsilon})$.
To prove part (ii), from (\ref{opteq*155}), we can also obtain the following inequality
\begin{equation}{\label{clpeq266}}
B^{\top}(t)\mbox{\boldmath $\rho$}_{\epsilon}(t)\geq {\bf a}(t)
-\mbox{\boldmath $\epsilon$}+\int_{t}^{T}
K^{\top}(s,t)\mbox{\boldmath $\rho$}_{\epsilon}(s)ds\mbox{ for all $t\in [0,T]$.}
\end{equation}
We take $\bar{\bf w}^{(\epsilon)}(t)$ as defined in (\ref{*clpeq55}) with
$\mbox{\boldmath $\rho$}_{\epsilon}$ in place of ${\bf v}^{(\epsilon)}$.
Then, we see that ${\bf 0}\leq
\bar{\bf w}^{(\epsilon)}(t)\leq\mbox{\boldmath $\rho$}_{\epsilon}(t)$
for all $t\in [0,T]$. For $t\in\hat{N}$, using (\ref{clpeq266}), we obtain
\begin{align*}
B^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)
& =B^{\top}(t)\mbox{\boldmath $\rho$}_{\epsilon}(t)
\geq {\bf a}(t)-\mbox{\boldmath $\epsilon$}
+\int_{t}^{T} K^{\top}(s,t)\mbox{\boldmath $\rho$}_{\epsilon}(s)ds\\
& \geq {\bf a}(t)-\mbox{\boldmath $\epsilon$}
+\int_{t}^{T} K^{\top}(s,t)\bar{\bf w}^{(\epsilon)}(s)ds.
\end{align*}
For $t\not\in\hat{N}$, the argument of part (i) is still valid.
This shows that $\bar{\bf w}^{(\epsilon)}$ satisfies the constraints of
$(\mbox{DCLP}_{\epsilon})$ for all $t\in [0,T]$, and the proof is complete.
\end{Proof}
\begin{Lem}{\label{*clpr302}}
$K(t,s)\geq {\bf 0}$ a.e. on $[0,T]\times [0,T]$ if and only if the subset
\[N_{K}=\left\{t_{0}\in [0,T]:K(s,t_{0})\not\geq {\bf 0}
\mbox{ a.e. on $[0,T]$}\right\}\]
has measure zero; that is, for each fixed $t_{0}\in [0,T]\setminus N_{K}$,
$K(s,t_{0})\geq {\bf 0}$ a.e. on $[0,T]$.
\end{Lem}
\begin{Proof}
Since each entry of $K$ is measurable, the set
\[{\cal N}=\left\{(s,t)\in [0,T]\times [0,T]:K(s,t)\not\geq {\bf 0}\right\}\]
is measurable, and its sections are the sets
\[M_{t_{0}}\equiv\{s\in [0,T]:K(s,t_{0})\not\geq {\bf 0}\}.\]
By the Tonelli theorem, we have
\[(\mu\times\mu )({\cal N})=\int_{0}^{T}\mu\left (M_{t_{0}}\right )dt_{0}.\]
We also observe that $N_{K}=\left\{t_{0}\in [0,T]:\mu (M_{t_{0}})>0\right\}$.
Suppose that $K(t,s)\geq {\bf 0}$ a.e. on $[0,T]\times [0,T]$, i.e.,
$(\mu\times\mu )({\cal N})=0$. Then, the above identity says that
$\mu (M_{t_{0}})=0$ for almost every $t_{0}\in [0,T]$; that is, the set
$N_{K}$ has measure zero.
For the converse, suppose that $\mu (N_{K})=0$. Then, we have
$\mu (M_{t_{0}})=0$ for almost every $t_{0}\in [0,T]$, and the above identity
yields $(\mu\times\mu )({\cal N})=0$; that is, $K(t,s)\geq {\bf 0}$ a.e. on
$[0,T]\times [0,T]$. This completes the proof.
\end{Proof}
\begin{Lem}{\label{optl156}}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item $K(s,t)\geq {\bf 0}$ a.e. in $[0,T]\times [0,T]$;
\item $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\begin{equation}{\label{dclp117}}
B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .
\end{equation}
\end{itemize}
Consider the vector-valued function $\mbox{\boldmath $\rho$}_{\epsilon}$
defined in $(\ref{opteq153})$, and let ${\bf w}^{(\epsilon)}$ be a feasible solution
of problem $(\mbox{\em DCLP}_{\epsilon})$.
Then, there exists a feasible solution $\widehat{\bf w}^{(\epsilon )}$ of
$(\mbox{\em DCLP}_{\epsilon})$ such that
\begin{equation}{\label{dclp114}}
\widehat{\bf w}^{(\epsilon )}(t)\geq {\bf 0}\mbox{ a.e. in $[0,T]$}
\end{equation}
and
\begin{equation}{\label{dclp115}}
\widehat{\bf w}^{(\epsilon )}(t)\leq {\bf w}^{(\epsilon)}(t)\mbox{ and }
\widehat{\bf w}^{(\epsilon )}(t)\leq\mbox{\boldmath $\rho$}_{\epsilon}(t)
\mbox{ for all $t\in [0,T]$.}
\end{equation}
Moreover, if ${\bf w}^{(\epsilon)}$ is an optimal solution of
$(\mbox{\em DCLP}_{\epsilon})$, then $\widehat{\bf w}^{(\epsilon )}$ is
also an optimal solution of $(\mbox{\em DCLP}_{\epsilon})$.
\end{Lem}
\begin{Proof}
Under the assumptions on $B(t)$, it is easy to see that $B(t)\geq {\bf 0}$ a.e.
in $[0,T]$ and $\sum_{i=1}^{p}B_{ij}(t)\geq\sigma$ a.e. in $[0,T]$
for each $j=1,\cdots ,q$. Therefore, the dual problem $(\mbox{DCLP}_{\epsilon})$
is feasible by Proposition~\ref{*clpp50}.
Since ${\bf w}^{(\epsilon)}$ is a feasible solution of
dual problem $(\mbox{DCLP}_{\epsilon})$, for each $j=1,\cdots ,q$, we have
\begin{equation}{\label{opteq150}}
\sum_{i}B_{ij}(t)w_{i}^{(\epsilon)}(t)\geq a_{j}(t)-\epsilon+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)w_{i}^{(\epsilon)}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Now, for $t\in [0,T]$, we define
\[\widehat{w}_{i}^{(\epsilon )}(t)=\min\left\{w_{i}^{(\epsilon)}(t),
\rho_{\epsilon}(t)\right\}.\]
It is obvious that (\ref{dclp114}) and (\ref{dclp115}) are satisfied.
On the other hand, from (\ref{opteq150}) we also obtain
\begin{equation}{\label{opteq151}}
\sum_{i}B_{ij}(t)w_{i}^{(\epsilon)}(t)\geq a_{j}(t)-\epsilon+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)\widehat{w}_{i}^{(\epsilon )}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Let $\bar{N}_{1i}=\{t\in [0,T]:\widehat{w}_{i}^{(\epsilon )}(t)<0\}$ and
$\bar{N}_{1}=\bigcup_{i=1}^{p}\bar{N}_{1i}$.
Let $\bar{N}_{2ij}=\{t\in [0,T]:B_{ij}(t)<0\}$ and
$\bar{N}_{2}=\bigcup_{i=1}^{p}\bigcup_{j=1}^{q}\bar{N}_{2ij}$.
Let $\bar{N}_{3}$ be the subset of $[0,T]$ on which the statement (\ref{dclp117})
is violated. Let
\[\bar{N}=\bar{N}_{1}\cup\bar{N}_{2}\cup\bar{N}_{3}\cup N_{K},\]
where $N_{K}$ is defined in Lemma~\ref{*clpr302}. Then $\bar{N}$ has measure zero.
Let $N_{0}$ and $N_{1}$ be the subsets of $[0,T]$ on which
the inequalities (\ref{opteq155}) and (\ref{opteq151}) are violated, respectively. We take
\[N=N_{0}\cup N_{1}\cup\bar{N}.\]
Then, the set $N$ has measure zero. For any fixed $t\in [0,T]\setminus N$,
we define the index sets $I_{\leq}=\{i:w_{i}^{(\epsilon)}(t)\leq\rho_{\epsilon}(t)\}$
and $I_{>}=\{i:w_{i}^{(\epsilon)}(t)>\rho_{\epsilon}(t)\}$, and consider
\[\sum_{i}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)=\sum_{i\in I_{\leq}}
B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)
+\sum_{i\in I_{>}}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t).\]
Then, we have the following three cases.
\begin{itemize}
\item Suppose that $I_{>}=\emptyset$ (i.e., the second sum is zero).
Then, we see that $w_{i}^{(\epsilon)}(t)=\widehat{w}_{i}^{(\epsilon )}(t)$ for all $i$.
Therefore, from (\ref{opteq151}) we have
\[\sum_{i}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)=\sum_{i}B_{ij}(t)
w_{i}^{(\epsilon)}(t)\geq a_{j}(t)-\epsilon+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)\widehat{w}_{i}^{(\epsilon )}(s)ds.\]
\item Suppose that $I_{>}\neq\emptyset$ and $B_{ij}(t)=0$ for all $i\in I_{>}$.
Then, by (\ref{opteq151}), we also have
\[\sum_{i}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)
=\sum_{i}B_{ij}(t)w_{i}^{(\epsilon)}(t)
\geq a_{j}(t)-\epsilon+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)\widehat{w}_{i}^{(\epsilon )}(s)ds.\]
\item Suppose that $I_{>}\neq\emptyset$, and that there exists $i_{0}\in I_{>}$ with
$B_{i_{0}j}(t)\neq 0$, i.e., $B_{i_{0}j}(t)\geq\sigma$ by the
assumption on $B(t)$ (since $t\not\in\bar{N}_{3}$).
Since $t\not\in\bar{N}_{1}$ and $t\not\in\bar{N}_{2}$, it follows that
$\widehat{w}_{i}^{(\epsilon )}(t)\geq 0$ and $B_{ij}(t)\geq 0$
for $i=1,\cdots ,p$ and $j=1,\cdots ,q$. Therefore, we have
\begin{equation}{\label{opteq156}}
\sum_{i}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)\geq
\sum_{i\in I_{>}}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)=
\sum_{i\in I_{>}}B_{ij}(t)\rho_{\epsilon}(t)
\geq B_{i_{0}j}(t)\rho_{\epsilon}(t)\geq\sigma\rho_{\epsilon}(t).
\end{equation}
Since the fixed $t\not\in N_{K}$, it follows that $K_{ij}(s,t)\geq 0$ a.e. in
$[0,T]$ by Lemma~\ref{*clpr302}. Since $t\not\in N_{0}$,
using (\ref{opteq155}) and (\ref{opteq156}), we obtain
\begin{align*}
\sum_{i}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)
& \geq a_{j}(t)-\epsilon+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)\rho_{\epsilon}(s)ds\\
& \geq a_{j}(t)-\epsilon+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)\widehat{w}_{i}^{(\epsilon )}(s)ds.
\end{align*}
\end{itemize}
Therefore, we conclude that
\[\sum_{i}B_{ij}(t)\widehat{w}_{i}^{(\epsilon )}(t)\geq a_{j}(t)-\epsilon
+\sum_{i}\int_{t}^{T}
K_{ij}(s,t)\widehat{w}_{i}^{(\epsilon )}(s)ds\mbox{ a.e. in $[0,T]$.}\]
This shows that $\widehat{\bf w}^{(\epsilon )}$ is a feasible solution of
$(\mbox{DCLP}_{\epsilon})$.
Suppose that ${\bf w}^{(\epsilon)}$ is an optimal solution of
$(\mbox{DCLP}_{\epsilon})$. Since $(\mbox{DCLP}_{\epsilon})$
is a minimization problem and $\widehat{\bf w}^{(\epsilon )}(t)\leq
{\bf w}^{(\epsilon)}(t)$ for all $t\in [0,T]$, we have
\[\int_{0}^{T}\left ({\bf c}^{(\epsilon)}(t)\right )^{\top}\widehat{\bf w}^{(\epsilon )}(t)dt
\leq\int_{0}^{T}\left ({\bf c}^{(\epsilon)}(t)\right )^{\top}{\bf w}^{(\epsilon)}(t)dt
\leq\int_{0}^{T}\left ({\bf c}^{(\epsilon)}(t)\right )^{\top}\widehat{\bf w}^{(\epsilon )}(t)dt,\]
which says that $\widehat{\bf w}^{(\epsilon )}$ is an optimal solution of
$(\mbox{DCLP}_{\epsilon})$. This completes the proof.
\end{Proof}
\begin{Rem}{\label{*clpr97}}
{\em
We see that if the assumption regarding the time-dependent matrix $B(t)$ in
Lemma~\ref{optl156} is satisfied, then $\sum_{i=1}^{p}B_{ij}(t)\geq\sigma$
a.e. in $[0,T]$ for each $j=1,\cdots ,q$, which says that
the assumption of Proposition~\ref{optt120*} regarding
the time-dependent matrix $B(t)$ is also satisfied by taking
$\lambda_{i}(t)=1$ for all $i=1,\cdots ,p$ and $t\in [0,T]$.
In other words, the conclusions of Proposition~\ref{optt120*} remain valid
when the assumptions in Lemma~\ref{optl156} are satisfied.
}\end{Rem}
\begin{Pro}{\label{*clpp48*}}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item $K(s,t)\geq {\bf 0}$ a.e. in $[0,T]\times [0,T]$;
\item $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
\end{itemize}
Consider the sequence $\{\epsilon_{k}\}_{k=1}^{\infty}$ with
$\epsilon_{k}\rightarrow 0+$ as $k\rightarrow\infty$. For each $\epsilon_{k}$,
let ${\bf w}^{(\epsilon_{k})}$ be a feasible solution of problem
$(\mbox{\em DCLP}_{\epsilon_{k}})$. Then, for each $\epsilon_{k}$, there exists a
feasible solution $\widehat{\bf w}^{(\epsilon_{k})}$
of problem $(\mbox{\em DCLP}_{\epsilon_{k}})$ such that
the following properties hold true.
\begin{enumerate}
\item [{\em (i)}] The sequence $\{\widehat{\bf w}^{(\epsilon_{k})}\}_{k=1}^{\infty}$
is uniformly bounded.
\item [{\em (ii)}] $\widehat{\bf w}^{(\epsilon_{k})}(t)\leq
{\bf w}^{(\epsilon_{k})}(t)$ for all $t\in [0,T]$ and, for each $i=1,\cdots ,p$,
\[\widehat{w}_{i}^{(\epsilon_{k})}(t)\geq 0\mbox{ a.e. in }[0,T]\]
and
\[\widehat{w}_{i}^{(\epsilon_{k})}(t)\leq\frac{\tau -\epsilon_{k}}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\leq
\frac{\tau}{\sigma}\cdot\exp\left (\frac{\nu\cdot T}{\sigma}\right )
\mbox{ for all $t\in [0,T]$}.\]
\item [{\em (iii)}] There exists a subsequence
$\{\widehat{\bf w}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$
which weakly converges to some feasible solution ${\bf w}^{(0)}\in L_{p}^{2}[0,T]$ of problem
$(\mbox{\em DCLP}_{0})=(\mbox{\em DCLP})$. Moreover, there is also another
feasible solution $\bar{\bf w}$ of problem {\em (DCLP)} such that
$\bar{\bf w}(t)={\bf w}^{(0)}(t)$ a.e. in $[0,T]$ and, for each $i=1,\cdots ,p$,
\begin{equation}{\label{*clpeq65}}
0\leq\bar{w}_{i}(t)\leq\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\nu\cdot T}{\sigma}\right )
\mbox{ for all $t\in [0,T]$}.
\end{equation}
We further assume that the conditions regarding the time-dependent matrix $B(t)$
are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$.
Then $\bar{\bf w}(t)$ can be taken as a feasible solution of $(\mbox{\em DCLP}^{*})$.
\end{enumerate}
\end{Pro}
\begin{Proof}
By Lemma~\ref{optl156}, there exists a sequence
$\{\widehat{\bf w}^{(\epsilon_{k})}\}_{k=1}^{\infty}$ of feasible solutions
of problems $(\mbox{DCLP}_{\epsilon_{k}})$ such that
$\widehat{\bf w}^{(\epsilon_{k})}(t)\leq {\bf w}^{(\epsilon_{k})}(t)$
for all $t\in [0,T]$ and, for each $i=1,\cdots ,p$,
\begin{equation}{\label{dclp113}}
\widehat{w}_{i}^{(\epsilon_{k})}(t)\geq 0\mbox{ a.e. in }[0,T]
\end{equation}
and
\begin{equation}{\label{*clpeq64}}
\widehat{w}_{i}^{(\epsilon_{k})}(t)\leq\rho_{\epsilon_{k}}(t)
=\frac{\tau -\epsilon_{k}}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\nu\cdot T}{\sigma}\right )\mbox{ for all $t\in [0,T]$,}
\end{equation}
which says that the sequence $\{\widehat{\bf w}^{(\epsilon_{k})}\}_{k=1}^{\infty}$
is uniformly bounded in $[0,T]$. This proves parts (i) and (ii).
Now, using Lemma~\ref{optl138}, there exists a subsequence
$\{\widehat{\bf w}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$ which weakly converges
to some ${\bf w}^{(0)}\in L_{p}^{2}[0,T]$. Using Lemma~\ref{optl157}, we have
\begin{equation}{\label{opteq223}}
{\bf 0}\leq\liminf_{r\rightarrow\infty}
\widehat{\bf w}^{(\epsilon_{k_{r}})}(t)\leq {\bf w}^{(0)}(t)
\leq\limsup_{r\rightarrow\infty}
\widehat{\bf w}^{(\epsilon_{k_{r}})}(t)\mbox{ a.e. in $[0,T]$}.
\end{equation}
Since $\{\widehat{\bf w}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$ are feasible
solutions of problems $(\mbox{DCLP}_{\epsilon_{k_{r}}})$, we have
\begin{equation}{\label{*clpeq67}}
B^{\top}(t) \widehat{\bf w}^{(\epsilon_{k_{r}})}(t)\geq {\bf a}(t)
-\mbox{\boldmath $\epsilon$}_{k_{r}}+\int_{t}^{T}
K^{\top}(s,t)\widehat{\bf w}^{(\epsilon_{k_{r}})}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Taking the limit inferior in (\ref{*clpeq67}) and using the weak convergence,
together with $B(t)\geq {\bf 0}$ a.e. in $[0,T]$ and (\ref{opteq223}), we obtain
\[B^{\top}(t){\bf w}^{(0)}(t)\geq\liminf_{r\rightarrow\infty}
B^{\top}(t)\widehat{\bf w}^{(\epsilon_{k_{r}})}(t)
\geq {\bf a}(t)+\int_{t}^{T} K^{\top}(s,t){\bf w}^{(0)}(s)ds\mbox{ a.e. in $[0,T]$}.\]
This shows that ${\bf w}^{(0)}$ is a feasible solution of problem (DCLP).
Using (\ref{opteq223}) and (\ref{*clpeq64}), we have
\[0\leq w_{i}^{(0)}(t)\leq\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]=\rho_{0}(t)\mbox{ a.e. in $[0,T]$
for each $i=1,\cdots ,p$}.\]
Using part (i) of Lemma~\ref{*clpp52} by taking $\epsilon =0$ with
$\bar{w}_{i}\equiv\bar{w}_{i}^{(0)}$ and
\[v_{i}^{(0)}(t)\equiv\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]=\rho_{0}(t),\]
we obtain the desired result.
Finally, we further assume that the conditions regarding the time-dependent matrix $B(t)$
are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$. Then, the desired result
follows from part (ii) of Lemma~\ref{*clpp52} by taking $\epsilon =0$. This completes the proof.
\end{Proof}
\begin{Thm}{\label{p63}}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item $K(s,t)\geq {\bf 0}$ a.e. in $[0,T]\times [0,T]$;
\item $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
\end{itemize}
Then, the following results hold.
\begin{enumerate}
\item [{\em (i)}] The problem $(\mbox{\em DCLP}_{\epsilon})$ has an optimal solution
$\bar{\bf w}^{(\epsilon )}$ such that, for each $i=1,\cdots ,p$,
\begin{equation}{\label{*clpeq68}}
0\leq\bar{w}_{i}^{(\epsilon )}(t)\leq\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\nu\cdot T}{\sigma}\right )
\mbox{ for all $t\in [0,T]$}.
\end{equation}
\item [{\em (ii)}] We further assume that the conditions regarding the time-dependent matrix
$B(t)$ are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$.
Then, there exists a common optimal solution $\bar{\bf w}^{(\epsilon )}$ of problems
$(\mbox{\em DCLP}_{\epsilon})$ and $(\mbox{\em DCLP}_{\epsilon}^{*})$
such that the inequalities $(\ref{*clpeq68})$ are satisfied and
both problems have the same optimal objective values.
\end{enumerate}
\end{Thm}
\begin{Proof}
To prove part (i), using Proposition~\ref{*clpp50},
we see that problem $(\mbox{DCLP}_{\epsilon})$ is feasible, i.e., the feasible set
${\cal W}_{\epsilon}$ of problem $(\mbox{DCLP}_{\epsilon})$ is nonempty. Therefore, we define
\[M=\inf_{{\bf w}^{(\epsilon)}\in {\cal W}_{\epsilon}}\int_{0}^{T}
{\bf c}^{\top}(t){\bf w}^{(\epsilon)}(t)dt.\]
Then, there exists a sequence $\{{\bf w}^{(k)}\}_{k=1}^{\infty}$ in ${\cal W}_{\epsilon}$
such that
\begin{equation}{\label{opteq146*}}
\lim_{k\rightarrow\infty}\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{(k)}(t)dt=M.
\end{equation}
By Lemma~\ref{optl156}, there exists a sequence $\{\widehat{\bf
w}^{(k)}\}_{k=1}^{\infty}$ of feasible solutions of problems
$(\mbox{DCLP}_{\epsilon})$ such that $\widehat{\bf w}^{(k)}(t)\leq
{\bf w}^{(k)}(t)$ for all $t\in [0,T]$ and, for each $i=1,\cdots ,p$,
\begin{equation}{\label{2dclp113}}
\widehat{w}_{i}^{(k)}(t)\geq 0\mbox{ a.e. in }[0,T]
\end{equation}
and
\begin{equation}{\label{*2clpeq64}}
\widehat{w}_{i}^{(k)}(t)\leq\rho_{\epsilon}(t)
=\frac{\tau -\epsilon}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\nu\cdot T}{\sigma}\right )\mbox{ for all $t\in [0,T]$,}
\end{equation}
which says that the sequence $\{\widehat{\bf
w}^{(k)}\}_{k=1}^{\infty}$ is uniformly bounded in $[0,T]$. Now,
using Lemma~\ref{optl138}, there exists a subsequence
$\{\widehat{\bf w}^{(k_{r})}\}_{r=1}^{\infty}$ which weakly
converges to some ${\bf w}^{(\epsilon)}\in L_{p}^{2}[0,T]$. Using Lemma~\ref{optl157}, we have
\begin{equation}{\label{2opteq223}}
{\bf 0}\leq\liminf_{r\rightarrow\infty}
\widehat{\bf w}^{(k_{r})}(t)\leq {\bf w}^{(\epsilon)}(t)
\leq\limsup_{r\rightarrow\infty}
\widehat{\bf w}^{(k_{r})}(t)\mbox{ a.e. in $[0,T]$}.
\end{equation}
Since $\{\widehat{\bf w}^{(k_{r})}\}_{r=1}^{\infty}$ are feasible
solutions of problems $(\mbox{DCLP}_{\epsilon})$, we have
\begin{equation}{\label{*2clpeq67}}
B^{\top}(t) \widehat{\bf w}^{(k_{r})}(t)\geq {\bf a}(t)
-\mbox{\boldmath $\epsilon$}+\int_{t}^{T}
K^{\top}(s,t)\widehat{\bf w}^{(k_{r})}(s)ds\mbox{ a.e. in $[0,T]$}.
\end{equation}
Taking the limit inferior in (\ref{*2clpeq67}) and using the weak convergence,
together with $B(t)\geq {\bf 0}$ a.e. in $[0,T]$ and (\ref{2opteq223}), we obtain
\[B^{\top}(t){\bf w}^{(\epsilon)}(t)\geq\liminf_{r\rightarrow\infty}
B^{\top}(t)\widehat{\bf w}^{(k_{r})}(t)\geq {\bf a}(t)-\mbox{\boldmath $\epsilon$}
+\int_{t}^{T} K^{\top}(s,t){\bf w}^{(\epsilon)}(s)ds\mbox{ a.e. in $[0,T]$}.\]
This shows that ${\bf w}^{(\epsilon)}$ is a feasible solution of
problem $(\mbox{DCLP}_{\epsilon})$. Using (\ref{2opteq223}) and (\ref{*2clpeq64}), we have
\[0\leq w_{i}^{(\epsilon)}(t)\leq\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\mbox{ a.e. in $[0,T]$ for each $i=1,\cdots ,p$}.\]
Using part (i) of Lemma~\ref{*clpp52} by taking
\[v_{i}^{(\epsilon)}(t)\equiv\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ],\]
we obtain $\bar{\bf w}^{(\epsilon)}(t)={\bf w}^{(\epsilon)}(t)$ a.e. in $[0,T]$
for some feasible solution $\bar{\bf w}^{(\epsilon)}$ of problem $(\mbox{DCLP}_{\epsilon})$
satisfying (\ref{*clpeq68}). Therefore, using the weak convergence, we have
\begin{align*}
\int_{0}^{T}{\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt
& =\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{(\epsilon)}(t)dt
=\lim_{r\rightarrow\infty}\int_{0}^{T}
{\bf c}^{\top}(t)\widehat{\bf w}^{(k_{r})}(t)dt\\
& \leq\lim_{r\rightarrow\infty}\int_{0}^{T}
{\bf c}^{\top}(t){\bf w}^{(k_{r})}(t)dt=M.
\end{align*}
Since $\{\widehat{\bf w}^{(k_{r})}\}_{r=1}^{\infty}$ are feasible solutions of the
minimization problem $(\mbox{DCLP}_{\epsilon})$, we have
\[\int_{0}^{T}{\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt
=\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{(\epsilon)}(t)dt
=\lim_{r\rightarrow\infty}\int_{0}^{T}
{\bf c}^{\top}(t)\widehat{\bf w}^{(k_{r})}(t)dt\geq M.\]
This shows that $\bar{\bf w}^{(\epsilon)}$ is an optimal solution of
problem $(\mbox{DCLP}_{\epsilon})$ such that the inequalities (\ref{*clpeq68}) are satisfied.
To prove part (ii), under the further assumptions, using part (ii) of Lemma~\ref{*clpp52},
we can see that $\bar{\bf w}^{(\epsilon)}$ is also a feasible solution of
$(\mbox{DCLP}_{\epsilon}^{*})$.
Since the feasible set of $(\mbox{DCLP}_{\epsilon}^{*})$ is contained in the
feasible set of $(\mbox{DCLP}_{\epsilon})$, we conclude that $\bar{\bf w}^{(\epsilon)}$
is also an optimal solution of $(\mbox{DCLP}_{\epsilon}^{*})$. This completes the proof.
\end{Proof}
\section{Discretized Problems}
Now, we are going to consider the discretized versions of problems
(CLP) and (DCLP). Let us consider the Lebesgue measure $\mu$ on
$[0,T]$. Then, we have $\mu ([0,T])=T$. According to the constraints of problems
(CLP) and (DCLP), we see that there is a subset ${\cal T}$ of $[0,T]$ such that
all the constraints of (CLP) and (DCLP) are satisfied
for all $t\in {\cal T}$ and $\mu ({\cal T})=T$. Let
\[{\cal P}=\left\{0=t_{0},t_{1},t_{2},\cdots ,t_{N}=T\right\}\]
be a partition of $[0,T]$ such that $t_{u}\in {\cal T}$ for all
$u=1,\cdots ,N-1$. In this case, all the constraints of
(CLP) and (DCLP) are satisfied for all $t\in {\cal P}$.
\begin{Rem}{\label{clpr42}}
{\em
Suppose that some conditions are satisfied a.e. in $[0,T]$.
We can also construct a subset $\hat{\cal T}$ of $[0,T]$ such that
$\mu (\hat{\cal T})=T$ and define a new partition $\hat{\cal P}$ of $[0,T]$
such that these conditions are satisfied for all $t\in\hat{\cal P}$.
For example, some of these conditions are listed below:
\begin{itemize}
\item the essential boundedness shown in (\ref{clpeq8})-(\ref{clpeq3});
\item ${\bf c}(t)\geq {\bf 0}$ a.e. in $[0,T]$;
\item $K(t,s)\geq {\bf 0}$ a.e. on $[0,T]\times [0,T]$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma ;\]
\item there exist real-valued functions $\lambda_{i}$ satisfying
$0\leq\lambda_{i}(t)\leq 1$ a.e. in $[0,T]$ for $i=1,\cdots ,p$ and a constant
$\sigma >0$ satisfying
\[\min_{j=1,\cdots ,q}\left\{\sum_{i=1}^{p}\lambda_{i}(t)B_{ij}(t)
\right\}\geq\sigma\mbox{ a.e. in $[0,T]$}.\]
\end{itemize}
}\end{Rem}
Recall that a function $f$ is piecewise continuous on $[0,T]$ if $f$ is continuous on
$[0,T]$ except on a finite subset of $[0,T]$.
Let $H$ be a real-valued function defined on $[0,T]\times [0,T]$.
We consider the piecewise continuity for $H$ in the following sense:
there is a partition ${\cal P}_{H}=\left\{0=t_{0},t_{1},t_{2},\cdots ,t_{M}=T\right\}$
on $[0,T]$ such that the following conditions are satisfied:
\begin{itemize}
\item $H(s,t)$ is continuous on the open rectangles
$(t_{u-1},t_{u})\times (t_{v-1},t_{v})$ for $u=1,\cdots ,M$ and $v=1,\cdots ,M$;
\item for any fixed $s\not\in {\cal P}_{H}$, the single-variable function $H(s,\cdot )$ is
continuous on the open intervals $(t_{v-1},t_{v})$ for $v=1,\cdots ,M$;
\item for any fixed $t\not\in {\cal P}_{H}$, the single-variable function $H(\cdot ,t)$ is
continuous on the open intervals $(t_{u-1},t_{u})$ for $u=1,\cdots ,M$.
\end{itemize}
In order to prove the strong duality theorem, we assume further that each entry of
${\bf a}$, ${\bf c}$, $B$ and $K$ is piecewise continuous on $[0,T]$ and $[0,T]\times [0,T]$,
respectively; that is, $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$ are piecewise
continuous on $[0,T]$ and $[0,T]\times [0,T]$, respectively,
for each $i=1,\cdots ,p$ and $j=1,\cdots ,q$.
Under the above assumptions, we can take the partition ${\cal P}$
such that all the discontinuities of $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$
are contained in ${\cal P}$. In this case, each $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$
is continuous on the open subintervals $(t_{u-1},t_{u})$ and open rectangles
$(t_{u-1},t_{u})\times (t_{v-1},t_{v})$ for $u=1,\cdots ,N$ and $v=1,\cdots ,N$, respectively.
Given a partition ${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$ of $[0,T]$, let
\[\parallel {\cal P}\parallel =\max_{u=1,\cdots ,N}\left (t_{u}-t_{u-1}\right ),\]
where we consider partitions satisfying $\lim_{N\rightarrow\infty}\parallel {\cal P}\parallel =0$.
According to the above construction, we assume that the partition ${\cal P}$
satisfies the following conditions.
\begin{itemize}
\item The value $\parallel {\cal P}\parallel$ can be sufficiently small
such that there is a fixed constant $\kappa\geq 1$ satisfying
\begin{equation}{\label{*clpeq74}}
\parallel {\cal P}\parallel\leq\kappa T/N.
\end{equation}
\item All the constraints of (CLP) and (DCLP) are satisfied for all $t\in {\cal P}$.
\item All the assumptions regarding the functions $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$
are satisfied for all $t\in {\cal P}$.
\item All the discontinuities of $a_{j}$, $c_{i}$, $B_{ij}$ and $K_{ij}$
are contained in ${\cal P}$.
\item Remark~\ref{clpr42} is taken into account.
\end{itemize}
Given a partition ${\cal P}$ satisfying the above assumptions,
since each entry of ${\bf a}$ and ${\bf c}$ is piecewise continuous on $[0,T]$, i.e.,
each entry of ${\bf a}$ and ${\bf c}$ is continuous on each open interval
$(t_{u-1},t_{u})$ for $u=1,\cdots ,N$, we define
\begin{equation}{\label{dclp116}}
a_{j}^{(u)}=\inf_{t\in (t_{u-1},t_{u})}a_{j}(t)>-\infty\mbox{ and }
c_{i}^{(u)}=\inf_{t\in (t_{u-1},t_{u})}c_{i}(t)>-\infty.
\end{equation}
Then, by Remark~\ref{clpr42}, we see that
\begin{equation}{\label{extclp214}}
\tau\geq a_{j}^{(u)}\mbox{ and }\zeta\geq c_{i}^{(u)}.
\end{equation}
Now, we define the vectors ${\bf a}^{(u)}$ and ${\bf c}^{(u)}$ consisting
of the entries $a_{j}^{(u)}$ and $c_{i}^{(u)}$ for $j=1,\cdots ,q$ and $i=1,\cdots ,p$,
respectively.
Since each entry of the time-dependent matrices $B$ and $K$ is piecewise continuous
on $[0,T]$ and $[0,T]\times [0,T]$, respectively, for $u=1,\cdots ,N$, we define
\begin{equation}{\label{dclp103}}
B_{ij}^{(u)}=\sup_{t\in (t_{u-1},t_{u})}B_{ij}(t)<+\infty\mbox{ and }
K_{ij}^{(u,v)}=\inf_{(t,s)\in (t_{u-1},t_{u})\times (t_{v-1},t_{v})}
K_{ij}(t,s)>-\infty .
\end{equation}
Then, by (\ref{*clpeq2}), (\ref{clpeq50}) and Remark~\ref{clpr42}, we see that
\begin{equation}{\label{extclpeq213}}
K_{ij}^{(u,v)}\leq\eta\mbox{ and }\sum_{i=1}^{p}K_{ij}^{(u,v)}\leq\nu .
\end{equation}
We also define the matrices $B^{(u)}$ and $K^{(u,v)}$ consisting
of the entries $B_{ij}^{(u)}$ and $K_{ij}^{(u,v)}$ for $i=1,\cdots ,p$ and $j=1,\cdots ,q$,
respectively.
Let ${\bf z}^{(u)}$ and ${\bf w}^{(u)}$ denote $q$-dimensional and $p$-dimensional
decision vectors, respectively.
We consider the following finite-dimensional linear programming problems:
\begin{eqnarray}
(\mbox{LP}^{(N)}) & \max & \sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )
({\bf a}^{(u)})^{\top}{\bf z}^{(u)}\nonumber\\
& \mbox{subject to} & B^{(1)}{\bf z}^{(1)}\leq
{\bf c}^{(1)}\nonumber\\
&& B^{(u)}{\bf z}^{(u)}\leq {\bf c}^{(u)}
+\sum_{v=1}^{u-1}\left (t_{v}-t_{v-1}\right )K^{(u,v)}
{\bf z}^{(v)}\mbox{ for }u=2,\cdots ,N\label{clpeq41}\\
&& {\bf z}^{(u)}\geq {\bf 0}\mbox{ for }u=1,\cdots ,N\nonumber.
\end{eqnarray}
and
\begin{eqnarray*}
(\mbox{DLP}^{(*N)}) & \min & \sum_{u=1}^{N}
({\bf c}^{(u)})^{\top}\widehat{\bf w}^{(u)}\\
& \mbox{subject to} & (B^{(u)})^{\top}\widehat{\bf w}^{(u)}
\geq \left (t_{u}-t_{u-1}\right ){\bf a}^{(u)}\\
&& \hspace{5mm}+\left (t_{u}-t_{u-1}\right )\sum_{v=u+1}^{N}(K^{(v,u)})^{\top}
\widehat{\bf w}^{(v)}\mbox{ for }u=1,\cdots ,N-1\\
&& (B^{(N)})^{\top}\widehat{\bf w}^{(N)}\geq
\left (t_{N}-t_{N-1}\right ){\bf a}^{(N)}\nonumber\\
&& \widehat{\bf w}^{(u)}\geq {\bf 0}
\mbox{ for }u=1,\cdots ,N.\nonumber
\end{eqnarray*}
Based on the following matrices
\[A{\bf z}={\small \left [\begin{array}{ccccc}
B^{(1)} & {\bf 0} & {\bf 0} & {\bf 0} & {\bf 0}\\
-(t_{1}-t_{0})K^{(2,1)} & B^{(2)} & {\bf 0} & {\bf 0} & {\bf 0}\\
-(t_{1}-t_{0})K^{(3,1)} & -(t_{2}-t_{1})K^{(3,2)} & B^{(3)} & {\bf 0} & {\bf 0}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
-(t_{1}-t_{0})K^{(N,1)} & -(t_{2}-t_{1})K^{(N,2)}
& \cdots & -(t_{N-1}-t_{N-2})K^{(N,N-1)} & B^{(N)}
\end{array}\right ]
\left [\begin{array}{c}
{\bf z}^{(1)}\\
{\bf z}^{(2)}\\
{\bf z}^{(3)}\\
\vdots\\
{\bf z}^{(N)}
\end{array}\right ]}\]
and
\[A^{\top}\widehat{\bf w}=
{\small \left [\begin{array}{ccccc}
B^{(1)} & -(t_{1}-t_{0})K^{(2,1)} & -(t_{1}-t_{0})K^{(3,1)}
& \cdots & -(t_{1}-t_{0})K^{(N,1)}\\
{\bf 0} & B^{(2)} & -(t_{2}-t_{1})K^{(3,2)} & \cdots & -(t_{2}-t_{1})K^{(N,2)}\\
{\bf 0} & {\bf 0} & B^{(3)} & \cdots & -(t_{3}-t_{2})K^{(N,3)}\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
{\bf 0} & {\bf 0} & \cdots & B^{(N-1)} & -(t_{N-1}-t_{N-2})K^{(N,N-1)}\\
{\bf 0} & {\bf 0} & \cdots & {\bf 0} & B^{(N)}
\end{array}\right ]
\left [\begin{array}{c}
\widehat{\bf w}^{(1)}\\
\widehat{\bf w}^{(2)}\\
\widehat{\bf w}^{(3)}\\
\vdots\\
\widehat{\bf w}^{(N-1)}\\
\widehat{\bf w}^{(N)}
\end{array}\right ],}\]
we see that $(\mbox{LP}^{(N)})$ and $(\mbox{DLP}^{(*N)})$ form a finite-dimensional
primal--dual pair of linear programming problems. Now, let
\[{\bf w}^{(u)}=\left (\frac{1}{t_{u}-t_{u-1}}\right )\widehat{\bf w}^{(u)}.\]
Then, by dividing both sides of the constraints of the dual problem
$(\mbox{DLP}^{(*N)})$ by $t_{u}-t_{u-1}$, we obtain the following equivalent problem
\begin{eqnarray}
(\mbox{DLP}^{(N)}) & \min & \sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )
({\bf c}^{(u)})^{\top}{\bf w}^{(u)}\nonumber\\
& \mbox{subject to} & (B^{(u)})^{\top}{\bf w}^{(u)}
\geq {\bf a}^{(u)}\nonumber\\
&& \quad\quad +\sum_{v=u+1}^{N}\left (t_{v}-t_{v-1}\right )(K^{(v,u)})^{\top}
{\bf w}^{(v)}
\mbox{ for }u=1,\cdots ,N-1\label{clpeq888}\\
&& (B^{(N)})^{\top}{\bf w}^{(N)}\geq {\bf a}^{(N)}\nonumber\\
&& {\bf w}^{(u)}\geq {\bf 0}\mbox{ for }u=1,\cdots ,N.\nonumber
\end{eqnarray}
In the sequel, we shall use this equivalent dual problem $(\mbox{DLP}^{(N)})$.
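From a computational standpoint, the pair $(\mbox{LP}^{(N)})$ and $(\mbox{DLP}^{(N)})$
can be assembled and solved with any linear programming solver. The following sketch
is illustrative only and is not part of the formal development: it approximates the
data ${\bf a}^{(u)}$, ${\bf c}^{(u)}$, $B^{(u)}$ and $K^{(u,v)}$ by midpoint sampling
on a uniform partition (a simplification of the infima and suprema defined above),
and it assumes the availability of the solver \texttt{scipy.optimize.linprog};
the dual values are recovered from the solver's marginals.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_discretized_clp(a, c, B, K, T, N):
    # Illustrative sketch: solve (LP^(N)) on a uniform partition of [0,T].
    # a(t) -> (q,), c(t) -> (p,), B(t) -> (p,q), K(t,s) -> (p,q).
    t = np.linspace(0.0, T, N + 1)
    mid = 0.5 * (t[:-1] + t[1:])          # midpoints of (t_{u-1}, t_u)
    dt = np.diff(t)                       # t_u - t_{u-1}
    p, q = B(mid[0]).shape

    A = np.zeros((N * p, N * q))          # block lower-triangular matrix A
    rhs = np.concatenate([c(mid[u]) for u in range(N)])
    obj = np.concatenate([-dt[u] * a(mid[u]) for u in range(N)])  # max -> min
    for u in range(N):
        A[u*p:(u+1)*p, u*q:(u+1)*q] = B(mid[u])
        for v in range(u):                # blocks -(t_v - t_{v-1}) K^(u,v)
            A[u*p:(u+1)*p, v*q:(v+1)*q] = -dt[v] * K(mid[u], mid[v])

    res = linprog(obj, A_ub=A, b_ub=rhs, bounds=(0, None), method="highs")
    z = res.x.reshape(N, q)               # primal step values z^(u)
    w = -res.ineqlin.marginals.reshape(N, p) / dt[:, None]  # dual values w^(u)
    return z, w, -res.fun                 # optimal objective of (LP^(N))
\end{verbatim}
By weak duality of the finite-dimensional pair, the returned dual values provide a
certificate for the computed objective value; this is only a numerical illustration
of the construction above.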
\begin{Pro}{\label{optp172}}
Suppose that each entry of ${\bf a}$, ${\bf c}$, $B$ and $K$ is piecewise continuous
on $[0,T]$ and $[0,T]\times [0,T]$, respectively. The following statements hold true.
\begin{enumerate}
\item [{\em (i)}] If ${\bf c}(t)\geq {\bf 0}$ a.e. in $[0,T]$,
then the primal problem $(\mbox{\em LP}^{(N)})$ is feasible.
\item [{\em (ii)}] Suppose that $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each
$j=1,\cdots ,q$, and that there exists a constant $\sigma >0$ such that,
for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\begin{equation}{\label{dclp118}}
B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .
\end{equation}
Given a partition ${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$ of $[0,T]$, let
\begin{equation}{\label{clp2eq4}}
\mathfrak{w}_{u}=\frac{\tau}{\sigma}\cdot\left (1+\parallel {\cal P}\parallel\cdot
\frac{\nu}{\sigma}\right )^{N-u}\mbox{ for }u=1,\cdots ,N.
\end{equation}
We define the vector ${\bf w}^{(u)}$ with all entries $\mathfrak{w}_{u}$
for $u=1,\cdots ,N$. Then, $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$ is a feasible solution of
problem $(\mbox{\em DLP}^{(N)})$; that is,
the dual problem $(\mbox{\em DLP}^{(N)})$ is feasible.
Consequently, when the primal problem $(\mbox{\em LP}^{(N)})$ is also feasible
(e.g., under the assumption of part {\em (i)}), the strong duality theorem holds true
between problems $(\mbox{\em LP}^{(N)})$ and $(\mbox{\em DLP}^{(N)})$.
\end{enumerate}
\end{Pro}
\begin{Proof}
To prove part (i), since each entry of ${\bf c}$ is piecewise continuous on $[0,T]$,
according to the construction of partition ${\cal P}$, each entry $c_{i}$ is
continuous on the open subinterval $(t_{u-1},t_{u})$ for $u=1,\cdots ,N$.
Since $c_{i}(t)\geq 0$ a.e. on $(t_{u-1},t_{u})$, it follows that
$c_{i}(t)\geq 0$ for all $t\in (t_{u-1},t_{u})$ by the continuity.
From (\ref{dclp116}), we see that ${\bf c}^{(u)}\geq {\bf 0}$ for all $u=1,\cdots ,N$.
It is obvious that the primal problem $(\mbox{LP}^{(N)})$ is feasible with the
trivial feasible solution ${\bf z}^{(u)}={\bf 0}$ for $u=1,\cdots ,N$.
To prove part (ii), for each $j=1,\cdots ,q$, since the measure of open interval
$(t_{u-1},t_{u})$ is not zero, using the assumption for $B$ and referring to (\ref{dclp103}),
there exists $t^{*}\in (t_{u-1},t_{u})$ such that
$0\leq B_{ij}(t^{*})\leq B_{ij}^{(u)}$ for each $i=1,\cdots ,p$,
$\sum_{i=1}^{p}B_{ij}(t^{*})>0$ and the statement (\ref{dclp118})
is satisfied at $t^{*}$. Therefore,
there exists $i_{j}\in\{1,2,\cdots ,p\}$ such that $B_{i_{j}j}(t^{*})>0$, which implies
\[B_{i_{j}j}^{(u)}\geq B_{i_{j}j}(t^{*})\geq\sigma >0.\]
Since $B_{ij}^{(u)}\geq 0$ and $w_{i}^{(u)}=\mathfrak{w}_{u}\geq 0$ for
$i=1,\cdots ,p$ and $u=1,\cdots ,N$, we have
\begin{equation}{\label{clp2eq3}}
\sum_{i=1}^{p}B_{ij}^{(u)}\cdot w_{i}^{(u)}\geq B_{i_{j}j}^{(u)}\cdot
w_{i_{j}}^{(u)}=B_{i_{j}j}^{(u)}\cdot\frac{\tau}{\sigma}\cdot
\left (1+\parallel {\cal P}\parallel\cdot\frac{\nu}{\sigma}\right )^{N-u}
\geq\tau\cdot\left (1+\parallel {\cal P}\parallel\cdot\frac{\nu}{\sigma}\right )^{N-u}.
\end{equation}
Since
\begin{align*}
\sum_{i=1}^{p}\left (t_{v}-t_{v-1}\right )\cdot K_{ij}^{(v,u)}\cdot w_{i}^{(v)}
& \leq\sum_{i=1}^{p}\parallel {\cal P}\parallel\cdot K_{ij}^{(v,u)}\cdot
\frac{\tau}{\sigma}\cdot\left (1+\parallel {\cal P}\parallel\cdot
\frac{\nu}{\sigma}\right )^{N-v}\\
& \leq\parallel {\cal P}\parallel\cdot\frac{\nu\cdot\tau}{\sigma}
\left (1+\parallel {\cal P}\parallel\cdot\frac{\nu}{\sigma}\right )^{N-v}
\mbox{ (by (\ref{extclpeq213}))},
\end{align*}
it follows that, for $u=1,\cdots ,N-1$,
\begin{align}
& a_{j}^{(u)}+\sum_{i=1}^{p}\left [\sum_{v=u+1}^{N}
\left (t_{v}-t_{v-1}\right )\cdot K_{ij}^{(v,u)}\cdot w_{i}^{(v)}\right ]
=a_{j}^{(u)}+\sum_{v=u+1}^{N}\sum_{i=1}^{p}\left (t_{v}-t_{v-1}\right )\cdot
K_{ij}^{(v,u)}\cdot w_{i}^{(v)}\nonumber\\
& \quad\leq\tau+\sum_{v=u+1}^{N}\parallel {\cal P}\parallel\cdot
\frac{\nu\cdot\tau}{\sigma}\left (
1+\parallel {\cal P}\parallel\cdot\frac{\nu}{\sigma}\right )^{N-v}\nonumber\\
& \quad =\tau\cdot\left [1+\sum_{v=u+1}^{N}\parallel {\cal P}\parallel
\cdot\frac{\nu}{\sigma}\cdot\left (1+\parallel {\cal P}\parallel
\cdot\frac{\nu}{\sigma}\right )^{N-v}\right ]
=\tau\cdot\left (1+\parallel {\cal P}\parallel\cdot
\frac{\nu}{\sigma}\right )^{N-u}.\label{clp2eq5}
\end{align}
Therefore, from (\ref{clp2eq3}), (\ref{clp2eq5}) and (\ref{extclp214}), we obtain
\[\sum_{i=1}^{p}B_{ij}^{(u)}\cdot w_{i}^{(u)}\geq a_{j}^{(u)}
+\sum_{i=1}^{p}\sum_{v=u+1}^{N}\left (t_{v}-t_{v-1}\right )\cdot
K_{ij}^{(v,u)}\cdot w_{i}^{(v)}\mbox{ for $u=1,\cdots ,N-1$}\]
and
\[\sum_{i=1}^{p}B_{ij}^{(N)}\cdot w_{i}^{(N)}\geq\tau\geq a_{j}^{(N)}.\]
This completes the proof.
\end{Proof}
\begin{Lem}{\label{*clpl72}}
Let $\theta_{2}>0$, and suppose that the real numbers $x_{1},\cdots ,x_{N}$ satisfy
\[x_{1}\leq\theta_{1}\mbox{ and }
x_{u}\leq\theta_{1}+\theta_{2}\cdot\sum_{i=1}^{u-1}x_{i}\mbox{ for }u=2,\cdots ,N.\]
Then $x_{u}\leq\theta_{1}\cdot\left (1+\theta_{2}\right )^{u-1}$ for $u=1,\cdots ,N$.
\end{Lem}
\begin{Proof}
We prove this by induction. For $u=1$, the claim is obviously true.
Suppose that $x_{u}\leq\theta_{1}\left (1+\theta_{2}\right )^{u-1}$
holds true for $u=1,\cdots ,N-1$. Then, we have
\[\sum_{i=1}^{N-1}x_{i}\leq\theta_{1}\cdot\sum_{i=1}^{N-1}
\left (1+\theta_{2}\right )^{i-1}=\frac{\theta_{1}
\left [(1+\theta_{2})^{N-1}-1\right ]}{\theta_{2}}.\]
Therefore, we obtain
\[x_{N}\leq\theta_{1}+\theta_{2}\sum_{i=1}^{N-1}x_{i}
\leq\theta_{1}+\theta_{2}\cdot\frac{\theta_{1}
\left [(1+\theta_{2})^{N-1}-1\right ]}{\theta_{2}}
=\theta_{1}\left (1+\theta_{2}\right )^{N-1}.\]
This completes the proof.
\end{Proof}
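As a quick illustration of Lemma~\ref{*clpl72}, which is a discrete Gronwall-type
inequality, the following sketch (with arbitrarily chosen numerical values) generates
the sequence that saturates the hypothesis and confirms that the stated bound is then
attained with equality:
\begin{verbatim}
import numpy as np

# Saturate the recursion x_u = theta1 + theta2*sum_{i<u} x_i and compare
# with the bound theta1*(1+theta2)^(u-1) of Lemma *clpl72.
theta1, theta2, N = 0.7, 0.3, 12
x = [theta1]
for u in range(2, N + 1):
    x.append(theta1 + theta2 * sum(x))
bound = [theta1 * (1.0 + theta2) ** (u - 1) for u in range(1, N + 1)]
assert np.allclose(x, bound)  # equality when the hypothesis is saturated
\end{verbatim}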
\begin{Pro}{\label{*clpp32}}
Suppose that each entry of ${\bf a}$, ${\bf c}$, $B$ and $K$ is piecewise continuous
on $[0,T]$ and $[0,T]\times [0,T]$, respectively,
and that there exist real-valued functions $\lambda_{i}$ satisfying
$0\leq\lambda_{i}(t)\leq 1$ a.e. in $[0,T]$ for $i=1,\cdots ,p$,
and a constant $\sigma >0$ such that
\begin{equation}{\label{dclp104}}
\min_{j=1,\cdots ,q}\left\{\sum_{i=1}^{p}\lambda_{i}(t)B_{ij}(t)
\right\}\geq\sigma\mbox{ a.e. in $[0,T]$}.
\end{equation}
If $({\bf z}^{(1)},\cdots ,{\bf z}^{(N)})$
is a feasible solution of primal problem $(\mbox{\em LP}^{(N)})$, then
\begin{equation}{\label{opteq164}}
\parallel{\bf z}^{(1)}\parallel\leq\frac{n\zeta}{\sigma}
\mbox{ and }\parallel{\bf z}^{(u)}\parallel
\leq\frac{n\zeta}{\sigma}\cdot\exp\left (\frac{n\nu\kappa T}{\sigma}\right )
\end{equation}
for $u=2,\cdots ,N$. In other words, the bound for the feasible solutions
is independent of the partition ${\cal P}$.
\end{Pro}
\begin{Proof}
For $u=1,\cdots ,N$, we define
\[\lambda_{i}^{(u)}=\mbox{ess}\sup_{t\in [t_{u-1},t_{u}]}\left |\lambda_{i}(t)\right |
=\inf\left\{k:\left |\lambda_{i}(t)\right |\leq k\mbox{ a.e. in }
[t_{u-1},t_{u}]\right\}.\]
Since $0\leq\lambda_{i}(t)\leq 1$ a.e. in $[0,T]$ for
$i=1,\cdots ,p$ by the assumption, it follows that
\begin{equation}{\label{dclp106}}
0\leq\lambda_{i}(t)\leq\lambda_{i}^{(u)}\mbox{ a.e. in }[t_{u-1},t_{u}]
\end{equation}
and $0\leq\lambda_{i}^{(u)}\leq 1$
for all $i=1,\cdots ,p$ and $u=1,\cdots ,N$. Since
$\sum_{j=1}^{q}B_{ij}^{(1)}z_{j}^{(1)}\leq c_{i}^{(1)}$ by the feasibility,
multiplying both sides by $\lambda_{i}^{(1)}\geq 0$ and summing over $i=1,\cdots ,p$, we have
\begin{equation}{\label{dclp119}}
\sum_{i=1}^{p}\sum_{j=1}^{q}B_{ij}^{(1)}\lambda_{i}^{(1)}z_{j}^{(1)}\leq
\sum_{i=1}^{p}\lambda_{i}^{(1)}c_{i}^{(1)}.
\end{equation}
From (\ref{dclp103}), (\ref{dclp104}) and (\ref{dclp106}), we have
\begin{equation}{\label{dclp105}}
\min_{j=1,\cdots ,q}\left\{\sum_{i=1}^{p}\lambda_{i}^{(u)}B_{ij}^{(u)}
\right\}\geq\min_{j=1,\cdots ,q}\left\{\sum_{i=1}^{p}\lambda_{i}(t)B_{ij}(t)
\right\}\geq\sigma\mbox{ a.e. in $[t_{u-1},t_{u}]$}.
\end{equation}
Using (\ref{dclp119}), (\ref{dclp105}) and (\ref{extclp214}), we obtain
\[\sigma\cdot\parallel{\bf z}^{(1)}\parallel
=\sum_{j=1}^{q}\sigma\cdot z_{j}^{(1)}\leq\sum_{j=1}^{q}\left [z_{j}^{(1)}
\sum_{i=1}^{p}B_{ij}^{(1)}\lambda_{i}^{(1)}\right ]
\leq\sum_{i=1}^{p}\lambda_{i}^{(1)}c_{i}^{(1)}\leq\sum_{i=1}^{p}c_{i}^{(1)}
\leq n\zeta .\]
This shows that
\begin{equation}{\label{*clpeq73}}
\parallel{\bf z}^{(1)}\parallel\leq\frac{n\zeta}{\sigma}.
\end{equation}
From (\ref{clpeq41}), for each $u=2,\cdots ,N$, multiplying both sides by
$\lambda_{i}^{(u)}\geq 0$ and summing over $i=1,\cdots ,p$, we have
\begin{equation}{\label{copteq233}}
\sum_{i=1}^{p}\sum_{j=1}^{q}B_{ij}^{(u)}\lambda_{i}^{(u)}z_{j}^{(u)}\leq
\sum_{i=1}^{p}\lambda_{i}^{(u)}c_{i}^{(u)}+
\sum_{v=1}^{u-1}\sum_{i=1}^{p}\sum_{j=1}^{q}
\left (t_{v}-t_{v-1}\right )K_{ij}^{(u,v)}\lambda_{i}^{(u)}z_{j}^{(v)}.
\end{equation}
Since $0\leq\lambda_{i}^{(u)}\leq 1$, we obtain
\begin{align*}
\sigma\cdot\parallel{\bf z}^{(u)}\parallel
& =\sum_{j=1}^{q}\sigma\cdot z_{j}^{(u)}
\leq\sum_{j=1}^{q}\left [z_{j}^{(u)}
\sum_{i=1}^{p}B_{ij}^{(u)}\lambda_{i}^{(u)}\right ]\mbox{ (by (\ref{dclp105}))}\\
& \leq n\zeta +n\nu\parallel {\cal P}\parallel
\cdot\sum_{v=1}^{u-1}\parallel{\bf z}^{(v)}\parallel
\mbox{ (by (\ref{copteq233}), (\ref{clpeq50}) and (\ref{dclp103}))}.
\end{align*}
Let $\theta_{1}=n\zeta /\sigma$ and $\theta_{2}
=n\nu\parallel {\cal P}\parallel /\sigma$. We have
$\parallel{\bf z}^{(1)}\parallel\leq\theta_{1}$
by (\ref{*clpeq73}) and
\[\parallel{\bf z}^{(u)}\parallel\leq\theta_{1}
+\theta_{2}\cdot\sum_{v=1}^{u-1}\parallel{\bf z}^{(v)}\parallel
\mbox{ for $u=2,\cdots ,N$}.\]
According to Lemma~\ref{*clpl72}, we obtain
\begin{align*}
\parallel{\bf z}^{(u)}\parallel
& \leq\theta_{1}\left (1+\theta_{2}\right )^{u-1}
\leq\theta_{1}\left (1+\theta_{2}\right )^{N}\\
& \leq\theta_{1}\cdot\exp\left (\theta_{2}N\right )
\mbox{ (using the fact that $e^{t}\geq 1+t$)}\\
& =\frac{n\zeta}{\sigma}\cdot
\exp\left (\frac{n\nu N\parallel {\cal P}\parallel}{\sigma}\right )
\leq\frac{n\zeta}{\sigma}\cdot\exp\left (\frac{n\nu\kappa T}{\sigma}\right )
\mbox{ (using (\ref{*clpeq74}))}
\end{align*}
for $u=2,\cdots ,N$. This completes the proof.
\end{Proof}
\begin{Pro}{\label{coptp250}}
Suppose that each entry of ${\bf a}$, ${\bf c}$, $B$ and $K$ is piecewise continuous
on $[0,T]$ and $[0,T]\times [0,T]$, respectively,
and that there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$
and $j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
If $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ is a feasible
solution of dual problem $(\mbox{\em DLP}^{(N)})$, then there exists a
feasible solution $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$ of dual problem
$(\mbox{\em DLP}^{(N)})$ such that
\begin{equation}{\label{opteq163}}
{\bf w}^{(u)}\leq {\bf d}^{(u)}\mbox{ and }
0\leq w_{i}^{(u)}\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\eta T}{\sigma}\right )
\end{equation}
for $u=1,\cdots ,N$ and $i=1,\cdots ,p$, where the bound of the feasible solutions is
independent of the partition ${\cal P}$.
Moreover, if $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ is an optimal
solution of dual problem $(\mbox{\em DLP}^{(N)})$,
then $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$ is also an
optimal solution of dual problem $(\mbox{\em DLP}^{(N)})$.
\end{Pro}
\begin{Proof}
Let
\begin{equation}{\label{*clpeq3}}
\rho (t)=\frac{\tau}{\sigma}\cdot\exp\left [\frac{\eta (T-t)}{\sigma}\right ].
\end{equation}
Then, we have
\begin{equation}{\label{*clpeq4}}
\sigma\rho (t)=\tau +\eta \cdot\int_{t}^{T}\rho (s)ds.
\end{equation}
From (\ref{*clpeq3}), for $u=1,\cdots ,N$, we define
\[\rho_{u}=\rho (t_{u})=\frac{\tau}{\sigma}\cdot\exp\left [
\frac{\eta (T-t_{u})}{\sigma}\right ].\]
By taking $t=t_{u}$ in (\ref{*clpeq4}), since $\rho$ is a decreasing function, we have
\begin{align}
\sigma\rho_{u} & =\tau +\eta\cdot\int_{t_{u}}^{T}\rho (s)ds
=\tau +\eta\cdot\sum_{v=u}^{N-1}\int_{t_{v}}^{t_{v+1}}\rho (s)ds
\geq\tau +\eta\cdot\sum_{v=u}^{N-1}\int_{t_{v}}^{t_{v+1}}\rho (t_{v+1})ds\nonumber\\
& \geq a_{j}^{(u)}+\sum_{v=u+1}^{N}(t_{v}-t_{v-1})K_{ij}^{(v,u)}\rho_{v}
\mbox{ (by (\ref{extclpeq213}) and (\ref{extclp214}))}.\label{copteq248}
\end{align}
According to constraint (\ref{clpeq888}), for $j=1,\cdots ,q$ and
$u=1,\cdots ,N-1$, we have
\begin{equation}{\label{copteq246}}
\sum_{i}B_{ij}^{(u)}d_{i}^{(u)}\geq a_{j}^{(u)}
+\sum_{v=u+1}^{N}(t_{v}-t_{v-1})K_{ij}^{(v,u)}d_{i}^{(v)}.
\end{equation}
For $u=1,\cdots ,N$, we define $w_{i}^{(u)}=\min\{d_{i}^{(u)},\rho_{u}\}$.
Since $K_{ij}^{(v,u)}\geq 0$, from (\ref{copteq246}), we obtain
\begin{equation}{\label{copteq247}}
\sum_{i}B_{ij}^{(u)}d_{i}^{(u)}\geq a_{j}^{(u)}
+\sum_{v=u+1}^{N}(t_{v}-t_{v-1})K_{ij}^{(v,u)}w_{i}^{(v)}.
\end{equation}
For each fixed $u$, we define the index sets $I_{\leq}=\{i:d_{i}^{(u)}\leq\rho_{u}\}$
and $I_{>}=\{i:d_{i}^{(u)}>\rho_{u}\}$ and consider
\[\sum_{i}B_{ij}^{(u)}w_{i}^{(u)}=\sum_{i\in I_{\leq}}
B_{ij}^{(u)}w_{i}^{(u)}+\sum_{i\in I_{>}}B_{ij}^{(u)}w_{i}^{(u)}.\]
Then, we have the following three cases.
\begin{itemize}
\item Suppose that $I_{>}=\emptyset$ (i.e., the second sum is zero).
Then, we see that $d_{i}^{(u)}=w_{i}^{(u)}$ for all $i$.
Therefore, from (\ref{copteq247}), we have
\[\sum_{i}B_{ij}^{(u)}w_{i}^{(u)}=\sum_{i}B_{ij}^{(u)}d_{i}^{(u)}\geq a_{j}^{(u)}
+\sum_{v=u+1}^{N}(t_{v}-t_{v-1})K_{ij}^{(v,u)}w_{i}^{(v)}.\]
\item Suppose that $I_{>}\neq\emptyset$ and $B_{ij}^{(u)}=0$ for all $i\in I_{>}$. Then
\begin{align*}
\sum_{i}B_{ij}^{(u)}w_{i}^{(u)}
& =\sum_{i\in I_{\leq}}B_{ij}^{(u)}d_{i}^{(u)}+\sum_{i\in I_{>}}B_{ij}^{(u)}\rho_{u}\\
& =\sum_{i\in I_{\leq}}B_{ij}^{(u)}d_{i}^{(u)}+\sum_{i\in I_{>}}B_{ij}^{(u)}d_{i}^{(u)}
=\sum_{i}B_{ij}^{(u)}d_{i}^{(u)}\\
& \geq a_{j}^{(u)}+\sum_{v=u+1}^{N}(t_{v}-t_{v-1})K_{ij}^{(v,u)}w_{i}^{(v)}
\mbox{ (by (\ref{copteq247}))}.
\end{align*}
\item Suppose that $I_{>}\neq\emptyset$, and that there exists $i^{*}\in I_{>}$ with
$B_{i^{*}j}^{(u)}\neq 0$. Then, by the definition of $B_{i^{*}j}^{(u)}$, there exists
$t^{*}\in (t_{u-1},t_{u})$ such that $B_{i^{*}j}(t^{*})\neq 0$. Since $B_{i^{*}j}(t)\geq 0$
a.e. in $[t_{u-1},t_{u}]$ and $B_{i^{*}j}$ is continuous on $(t_{u-1},t_{u})$,
we must have $B_{i^{*}j}(t)\geq 0$ for all $t\in (t_{u-1},t_{u})$.
The facts of $B_{i^{*}j}(t^{*})\neq 0$ and the continuity of $B_{i^{*}j}$ on $(t_{u-1},t_{u})$
imply that there exists a subset $T_{u}$ of
$(t_{u-1},t_{u})$ with nonzero measure such that $B_{i^{*}j}(t)>0$ on $T_{u}$.
By the assumption on $B$, we also have $B_{i^{*}j}(t)\geq\sigma$ a.e. in $T_{u}$.
Since $B_{i^{*}j}(t)\leq B_{i^{*}j}^{(u)}$ for all $t\in (t_{u-1},t_{u})$,
we conclude that $B_{i^{*}j}^{(u)}\geq\sigma$. Therefore, we obtain
\begin{align*}
\sum_{i}B_{ij}^{(u)}w_{i}^{(u)} & \geq\sum_{i\in I_{>}}B_{ij}^{(u)}w_{i}^{(u)}=
\sum_{i\in I_{>}}B_{ij}^{(u)}\rho_{u}\geq B_{i^{*}j}^{(u)}\rho_{u}\geq\sigma\rho_{u}\\
& \geq a_{j}^{(u)}+\sum_{v=u+1}^{N}(t_{v}-t_{v-1})K_{ij}^{(v,u)}w_{i}^{(v)}
\mbox{ (by (\ref{copteq248}) and $\rho_{v}\geq w_{i}^{(v)}\geq 0$)}.
\end{align*}
\end{itemize}
This shows that $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$
is a feasible solution of dual problem $(\mbox{DLP}^{(N)})$.
Since $w_{i}^{(u)}\leq\rho_{u}$ for $u=1,\cdots ,N$, we also obtain (\ref{opteq163}).
Finally, since ${\bf w}^{(u)}\leq {\bf d}^{(u)}$
for $u=1,\cdots ,N$, if $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ is an optimal
solution of problem $(\mbox{DLP}^{(N)})$, then, considering the objective values,
we have
\[\sum_{u=1}^{N}(t_{u}-t_{u-1})({\bf c}^{(u)})^{\top}{\bf d}^{(u)}
\leq\sum_{u=1}^{N}(t_{u}-t_{u-1})({\bf c}^{(u)})^{\top}{\bf w}^{(u)}
\leq\sum_{u=1}^{N}(t_{u}-t_{u-1})({\bf c}^{(u)})^{\top}{\bf d}^{(u)},\]
which says that $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$
is also an optimal solution of problem $(\mbox{DLP}^{(N)})$.
This completes the proof.
\end{Proof}
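The truncation $w_{i}^{(u)}=\min\{d_{i}^{(u)},\rho_{u}\}$ used in the preceding proof
is also a convenient computational device for post-processing dual solutions. A minimal
sketch, assuming the dual values ${\bf d}^{(u)}$ are stacked in an $N\times p$ array and
that $\tau$, $\sigma$ and $\eta$ are the bounds introduced above:
\begin{verbatim}
import numpy as np

def truncate_dual(d, t_grid, tau, sigma, eta, T):
    # Clip the dual values d^(u) at rho_u = (tau/sigma)*exp(eta*(T-t_u)/sigma),
    # as in the proof above; d has shape (N, p), t_grid = (t_0, ..., t_N).
    rho = (tau / sigma) * np.exp(eta * (T - t_grid[1:]) / sigma)  # rho_u
    return np.minimum(d, rho[:, None])
\end{verbatim}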
\section{Strong Duality Theorem}
Let ${\bf z}^{(u)}=(z_{1}^{(u)},\cdots ,z_{q}^{(u)})$ for $u=1,\cdots ,N$ constitute
a feasible solution of the primal problem $(\mbox{LP}^{(N)})$ with the corresponding
partition ${\cal P}=\{0=t_{0},t_{1},\cdots ,t_{N}=T\}$ of $[0,T]$.
We define the step function
$\widehat{\bf z}(t)=(\widehat{z}_{1}(t),\cdots ,\widehat{z}_{q}(t))$ by
\begin{equation}{\label{copteq244}}
\widehat{z}_{j}(t)=\left\{\begin{array}{ll}
z_{j}^{(u)} & \mbox{if }t_{u-1}\leq t<t_{u}\mbox{ and }u=1,\cdots ,N\\
z_{j}^{(N)} & \mbox{if }t=T
\end{array}\right .\mbox{ for }j=1,\cdots ,q.
\end{equation}
Let ${\bf w}^{(u)}=(w_{1}^{(u)},\cdots ,w_{p}^{(u)})$ for $u=1,\cdots ,N$
constitute a feasible solution of the dual problem $(\mbox{DLP}^{(N)})$.
We similarly define the step function
$\widehat{\bf w}(t)=(\widehat{w}_{1}(t),\cdots ,\widehat{w}_{p}(t))$ by
\begin{equation}{\label{opteq184}}
\widehat{w}_{i}(t)=\left\{\begin{array}{ll}
w_{i}^{(u)} & \mbox{if }t_{u-1}\leq t<t_{u}\mbox{ and }u=1,\cdots ,N\\
w_{i}^{(N)} & \mbox{if }t=T
\end{array}\right .\mbox{ for }i=1,\cdots ,p.
\end{equation}
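For numerical purposes, the step functions $\widehat{\bf z}$ and $\widehat{\bf w}$ are
straightforward to realize. A minimal sketch, assuming the step values are stacked in an
$N\times q$ (respectively, $N\times p$) array:
\begin{verbatim}
import numpy as np

def step_function(t_grid, values):
    # Build the step function of (copteq244)/(opteq184): the value is
    # values[u-1] on [t_{u-1}, t_u) and values[N-1] at t = T.
    def evaluate(t):
        u = np.searchsorted(t_grid, t, side="right") - 1
        u = min(max(u, 0), len(values) - 1)
        return values[u]
    return evaluate
\end{verbatim}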
Next, we present some useful lemmas for further discussion.
\begin{Lem}{\label{extdclp516}}
Suppose that each entry of ${\bf a}$, ${\bf c}$, $B$ and $K$ is piecewise continuous on $[0,T]$
and $[0,T]\times [0,T]$, respectively.
Given any $\epsilon >0$, we can take a sufficiently small
$\parallel {\cal P}\parallel$ with ${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$ that satisfies
$(\ref{*clpeq74})$ such that the following statements hold:
\begin{itemize}
\item $a_{j}(t)-a_{j}^{(u)}<\epsilon$ for $j=1,\cdots ,q$,
$t_{u-1}<t<t_{u}$ and $u=1,\cdots ,N$;
\item $c_{i}(t)-c_{i}^{(u)}<\epsilon$ for $i=1,\cdots ,p$,
$t_{u-1}<t<t_{u}$ and $u=1,\cdots ,N$;
\item $B_{ij}^{(u)}-B_{ij}(t)<\epsilon$ for $i=1,\cdots ,p$, $j=1,\cdots ,q$,
$t_{u-1}<t<t_{u}$ and $u=1,\cdots ,N$;
\item for fixed $t$ with $t_{u-1}<t<t_{u}$ and $u=1,\cdots ,N-1$, we have
$K_{ij}(s,t)-K_{ij}^{(v,u)}<\epsilon$ for $i=1,\cdots ,p$, $j=1,\cdots ,q$,
$t_{v-1}<s<t_{v}$ and $v=u+1,\cdots ,N$.
\end{itemize}
\end{Lem}
\begin{Proof}
According to the construction of partition ${\cal P}$, we see that $a_{j}$ is
continuous on the open interval $E_{u}=(t_{u-1},t_{u})$.
We define the compact interval
\[E_{um}=\left [t_{u-1}+\frac{1}{m},t_{u}-\frac{1}{m}\right ].\]
Then
\begin{equation}{\label{clptmpceq16}}
E_{u}=\bigcup_{m=1}^{\infty}E_{um}\mbox{ and }
E_{um_{1}}\subseteq E_{um_{2}}\mbox{ for }m_{2}>m_{1}.
\end{equation}
Since $E_{um}\subset E_{u}$, it follows that $a_{j}$ is continuous
on each compact interval $E_{um}$, which also means that $a_{j}$ is
uniformly continuous on each compact interval $E_{um}$.
Therefore, given any $\epsilon >0$, there exists $\delta >0$ such that
$|t_{1}-t_{2}|<\delta$ implies
\begin{equation}{\label{clptmpceq44}}
\left |a_{j}(t_{1})-a_{j}(t_{2})\right |<\epsilon
\mbox{ for any $t_{1},t_{2}\in E_{um}$.}
\end{equation}
Since the length of $E_{u}$ is less than or equal to
$\parallel {\cal P}\parallel\leq \kappa T/N$ by (\ref{*clpeq74}),
we can consider a sufficiently large $N_{0}\in\mathbb{N}$ such that
$\kappa T/N_{0}<\delta$. In this case, the length of each $E_{u}$ for $u=1,\cdots ,N$
is less than $\delta$. In other words, if $N\geq N_{0}$,
then (\ref{clptmpceq44}) is satisfied for any $t_{1},t_{2}\in E_{um}$.
We consider the following cases.
\begin{itemize}
\item Suppose that the infimum $a_{j}^{(u)}$ is attained at
$t_{u}^{(*)}\in E_{u}$. From (\ref{clptmpceq16}), there exists $m^{*}$ such that
$t_{u}^{(*)}\in E_{um^{*}}$. Now, given any $t\in E_{u}$, we see that
$t\in E_{um_{0}}$ for some $m_{0}$. Let $m=\max\{m_{0},m^{*}\}$. From
(\ref{clptmpceq16}), it follows that $t,t_{u}^{(*)}\in E_{um}$. Then, we have
\[\left |a_{j}(t)-a_{j}^{(u)}\right |
=\left |a_{j}(t)-a_{j}\left (t_{u}^{(*)}\right )\right |<\epsilon\]
since the length of $E_{um}$ is less than $\delta$,
where $\epsilon$ is independent of $t$ because of the uniform continuity.
\item Suppose that the infimum $a_{j}^{(u)}$ is not attained at any point in
$E_{u}$. Since $a_{j}$ is continuous on the open interval $E_{u}$,
it follows that the infimum $a_{j}^{(u)}$ is either the
right-hand limit or the left-hand limit given by
\[a_{j}^{(u)}=\lim_{t\rightarrow t_{u-1}+}a_{j}(t)\mbox{ or }
a_{j}^{(u)}=\lim_{t\rightarrow t_{u}-}a_{j}(t).\]
Therefore, for sufficiently large $N_{0}$, i.e., when the open interval $E_{u}$
is sufficiently small so that its length is less than $\delta$, we have
\[\left |a_{j}(t)-a_{j}^{(u)}\right |<\epsilon\]
for all $t\in E_{u}$.
\end{itemize}
From the above two cases, since $a_{j}(t)\geq a_{j}^{(u)}$ for all
$t\in E_{u}$, we conclude that $a_{j}(t)-a_{j}^{(u)}<\epsilon$.
The remaining statements can be obtained similarly. This completes the proof.
\end{Proof}
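Lemma~\ref{extdclp516} can be illustrated numerically. In the following sketch, a hypothetical
piecewise continuous function plays the role of one entry $a_{j}$; uniform partitions whose
points include the discontinuity are used, in the spirit of $(\ref{*clpeq74})$ with $\kappa =1$,
and the worst gap $a_{j}(t)-a_{j}^{(u)}$ over the open subintervals is seen to shrink as
$\parallel {\cal P}\parallel\rightarrow 0$.
\begin{verbatim}
import numpy as np

# hypothetical piecewise continuous a_j with a single jump at t = 0.5
a = lambda t: np.where(t < 0.5, np.sin(2 * np.pi * t), 2.0 + 0.1 * t)

T = 1.0
for N in (10, 40, 160, 640):
    t_grid = np.linspace(0.0, T, N + 1)   # uniform partition, ||P|| = T/N
    worst_gap = 0.0
    for u in range(N):
        # sample the open interval (t_{u-1}, t_u) to estimate the infimum
        ts = np.linspace(t_grid[u], t_grid[u + 1], 201)[1:-1]
        vals = a(ts)
        worst_gap = max(worst_gap, (vals - vals.min()).max())
    print(N, worst_gap)                   # decreases towards 0 with N
\end{verbatim}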
\begin{Lem}{\label{optl185}}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item each entry of ${\bf a}$, ${\bf c}$ and $B$ is piecewise continuous
on $[0,T]$, and each entry of $K$ is piecewise continuous on $[0,T]\times [0,T]$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$
and $j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
\end{itemize}
Let $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ be a feasible solution
of dual problem $(\mbox{\em DLP}^{(N)})$.
Given any $\epsilon >0$, there exists a sufficiently small
$\parallel {\cal P}\parallel$ with ${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$
which depends on $\epsilon$ such that there exists another
feasible solution $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$
of dual problem $(\mbox{\em DLP}^{(N)})$ satisfying ${\bf w}^{(u)}\leq {\bf d}^{(u)}$
for $u=1,\cdots ,N$ and
\begin{equation}{\label{*clpeq30}}
\int_{0}^{T} {\bf c}^{\top}(t)\widehat{\bf w}(t)dt
\leq\sum_{u=1}^{N} (t_{u}-t_{u-1})({\bf c}^{(u)})^{\top}
{\bf w}^{(u)}+\epsilon ,
\end{equation}
where the step function $\widehat{\bf w}(t)$ is defined in $(\ref{opteq184})$.
If $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ is an optimal solution
of dual problem $(\mbox{\em DLP}^{(N)})$, then
$({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$ can be taken as the optimal
solution of dual problem $(\mbox{\em DLP}^{(N)})$.
\end{Lem}
\begin{Proof}
The existence of the feasible solution $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$
is guaranteed by Proposition~\ref{coptp250}.
From Lemma~\ref{extdclp516}, given any $\bar{\epsilon}>0$,
we can take a sufficiently small $\parallel {\cal P}\parallel$ with
${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$ that satisfies (\ref{*clpeq74})
such that $c_{i}(t)-c_{i}^{(u)}<\bar{\epsilon}$ for $i=1,\cdots ,p$,
$t_{u-1}<t<t_{u}$ and $u=1,\cdots ,N$, which implies
\begin{equation}{\label{opteq177}}
c_{i}(t) w_{i}^{(u)}-c_{i}^{(u)}w_{i}^{(u)}
\leq\bar{\epsilon}w_{i}^{(u)}\mbox{ for $t_{u-1}<t<t_{u}$ and $u=1,\cdots ,N$}
\end{equation}
by the fact that $w_{i}^{(u)}\geq 0$.
Since the integral is not affected by the values at the endpoints,
integrating (\ref{opteq177}), we obtain
\[\sum_{i=1}^{p}\int_{0}^{T}c_{i}(t)\widehat{w}_{i}(t)dt-
\sum_{i=1}^{p}\sum_{u=1}^{N}(t_{u}-t_{u-1})c_{i}^{(u)} w_{i}^{(u)}
\leq\parallel {\cal P}\parallel\bar{\epsilon}\cdot\sum_{i=1}^{p}
\sum_{u=1}^{N} w_{i}^{(u)}.\]
By the boundedness of $ w_{i}^{(u)}$ as shown in (\ref{opteq163}), we also have
\begin{align*}
\int_{0}^{T}{\bf c}^{\top}(t)\widehat{\bf w}(t)dt-
\sum_{u=1}^{N} (t_{u}-t_{u-1})({\bf c}^{(u)})^{\top}{\bf w}^{(u)}
& \leq Np\bar{\epsilon}\parallel {\cal P}\parallel\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\eta T}{\sigma}\right )\\
& \leq \kappa Tp\bar{\epsilon}\cdot\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\eta T}{\sigma}\right )\mbox{ (by (\ref{*clpeq74}))}
\end{align*}
which implies that, given any $\epsilon >0$, there exists a sufficiently small
$\parallel {\cal P}\parallel$ with ${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$ such that
\[\int_{0}^{T} {\bf c}^{\top}(t)\widehat{\bf w}(t)dt
-\sum_{u=1}^{N} (t_{u}-t_{u-1})({\bf c}^{(u)})^{\top}
{\bf w}^{(u)}\leq\epsilon .\]
This completes the proof.
\end{Proof}
\begin{Lem}{\label{optl183}}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item each entry of ${\bf a}$, ${\bf c}$ and $B$ is piecewise continuous on $[0,T]$,
and each entry of $K$ is piecewise continuous on $[0,T]\times [0,T]$;
\item $K(s,t)\geq {\bf 0}$ a.e. in $[0,T]\times [0,T]$;
\item $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$;
\item there exists a constant $\sigma >0$ such that,
for each $i=1,\cdots ,p$
and $j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
\end{itemize}
Let $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ be a feasible solution
of dual problem $(\mbox{\em DLP}^{(N)})$.
Given any $\epsilon >0$, there exists a sufficiently small
$\parallel {\cal P}\parallel$ which depends on $\epsilon$ such that there exists another
feasible solution $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$
of dual problem $(\mbox{\em DLP}^{(N)})$ satisfying
${\bf w}^{(u)}\leq {\bf d}^{(u)}$ for $u=1,\cdots ,N$ and
\[\sum_{i=1}^{p}B_{ij}(t)\widehat{w}_{i}(t)+\epsilon\geq a_{j}(t)
+\sum_{i=1}^{p}\int_{t}^{T}K_{ij}(s,t)\widehat{w}_{i}(s)ds\]
for all $t\in [0,T]\setminus {\cal P}$ and for $j=1,\cdots ,q$,
where the step function $\widehat{\bf w}(t)$ is defined in $(\ref{opteq184})$.
If $({\bf d}^{(1)},\cdots ,{\bf d}^{(N)})$ is an optimal
solution of $(\mbox{\em DLP}^{(N)})$, then
$({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$ can be taken as the optimal
solution of $(\mbox{\em DLP}^{(N)})$.
\end{Lem}
\begin{Proof}
The existence of the feasible solution $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$
is guaranteed by Proposition~\ref{coptp250}.
From Lemma~\ref{extdclp516}, given any $\bar{\epsilon}>0$,
we can take a sufficiently small $\parallel {\cal P}\parallel <\bar{\epsilon}$ with
${\cal P}=\{t_{0},t_{1},\cdots ,t_{N}\}$ that satisfies (\ref{*clpeq74})
such that, for $u=1,\cdots ,N$,
\begin{equation}{\label{opteq166}}
a_{j}(t)-a_{j}^{(u)}<\bar{\epsilon}\mbox{ and }B_{ij}^{(u)}-B_{ij}(t)<\bar{\epsilon}
\end{equation}
for $t_{u-1}<t<t_{u}$. Therefore, for $t_{u-1}<t<t_{u}$, using (\ref{opteq163}), we have
\begin{equation}{\label{opteq167}}
\sum_{i=1}^{p}B_{ij}^{(u)}w_{i}^{(u)}-\sum_{i=1}^{p}B_{ij}(t)\widehat{w}_{i}(t)
\leq\sum_{i=1}^{p}\bar{\epsilon}w_{i}^{(u)}
\leq p\bar{\epsilon}\cdot\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\eta T}{\sigma}\right )\equiv\epsilon_{1}.
\end{equation}
Also, from Lemma~\ref{extdclp516} again, for $t_{u-1}<t<t_{u}$, $u=1,\cdots ,N-1$
and $v=u+1,\cdots ,N$, we have $K_{ij}(s,t)-K_{ij}^{(v,u)}<\bar{\epsilon}$, which implies
\begin{align}
& \int_{t_{v-1}}^{t_{v}}K_{ij}(s,t)\cdot\widehat{w}_{i}(s)ds
-\left (t_{v}-t_{v-1}\right )K_{ij}^{(v,u)}w_{i}^{(v)}\nonumber\\
& \quad =\int_{t_{v-1}}^{t_{v}}\left (K_{ij}(s,t)-K_{ij}^{(v,u)}\right )w_{i}^{(v)}ds
\leq\bar{\epsilon}\left (t_{v}-t_{v-1}\right )w_{i}^{(v)}.\label{dclp130}
\end{align}
By referring to (\ref{*clpeq2}), for $t_{u-1}<t<t_{u}$, we obtain
\begin{equation}{\label{dclp108}}
\int_{t}^{t_{u}}K_{ij}(s,t)w_{i}^{(u)}ds
\leq\eta\parallel {\cal P}\parallel w_{i}^{(u)}
\leq\eta\bar{\epsilon}w_{i}^{(u)}.
\end{equation}
Now, for $u=1,\cdots ,N-1$, $t_{u-1}<t<t_{u}$ and $i=1,\cdots ,p$,
using (\ref{dclp130}) and (\ref{dclp108}), we have
\begin{align*}
& \int_{t}^{T}K_{ij}(s,t)\cdot\widehat{w}_{i}(s)ds
-\sum_{v=u+1}^{N}\left (t_{v}-t_{v-1}\right )K_{ij}^{(v,u)}w_{i}^{(v)}\\
& \quad\leq\bar{\epsilon}\cdot\left (\max_{u\leq v\leq N}w_{i}^{(v)}\right )\cdot\left [\eta +\sum_{v=u+1}^{N}\left (t_{v}-t_{v-1}\right )
\right ]\leq\bar{\epsilon}\cdot\left (\max_{u\leq v\leq N}w_{i}^{(v)}\right )\cdot\left (\eta +N\cdot\parallel {\cal P}\parallel \right )
\end{align*}
which implies, by using (\ref{opteq163}) and (\ref{*clpeq74}),
\begin{align}
& \sum_{i=1}^{p}\int_{t}^{T}K_{ij}(s,t)\cdot\widehat{w}_{i}(s)ds
-\sum_{i=1}^{p}\sum_{v=u+1}^{N}\left (t_{v}-t_{v-1}\right )
K_{ij}^{(v,u)}w_{i}^{(v)}\nonumber\\
& \quad\leq p\bar{\epsilon}\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\eta T}{\sigma}\right )\cdot\left (\eta +\kappa T\right )
\equiv\epsilon_{2}.\label{opteq168}
\end{align}
For $u=1,\cdots ,N-1$ and $t_{u-1}<t<t_{u}$, using (\ref{opteq166}), (\ref{opteq167}),
(\ref{opteq168}) and the feasibility of $({\bf w}^{(1)},\cdots ,{\bf w}^{(N)})$,
we can obtain
\[-\sum_{i=1}^{p}B_{ij}(t)\widehat{w}_{i}(t)+a_{j}(t)
+\sum_{i=1}^{p}\int_{t}^{T}K_{ij}(s,t)\widehat{w}_{i}(s)ds
\leq\bar{\epsilon}+\epsilon_{1}+\epsilon_{2},\]
which shows that, for $j=1,\cdots ,q$, given $\bar{\epsilon}_{1}>0$, there exists a
sufficiently small $\parallel {\cal P}\parallel$ such that
\begin{equation}{\label{dclp110}}
\sum_{i=1}^{p}B_{ij}(t)\widehat{w}_{i}(t)+\bar{\epsilon}_{1}\geq a_{j}(t)
+\sum_{i=1}^{p}\int_{t}^{T}K_{ij}(s,t)\widehat{w}_{i}(s)ds.
\end{equation}
For $t_{N-1}<t<T$, using a similar argument, we can show that,
given $\bar{\epsilon}_{2}>0$, there exists a
sufficiently small $\parallel {\cal P}\parallel$ such that
\begin{equation}{\label{dclp111}}
\sum_{i=1}^{p}B_{ij}(t)\widehat{w}_{i}(t)+\bar{\epsilon}_{2}\geq a_{j}(t)
+\sum_{i=1}^{p}\int_{t}^{T}K_{ij}(s,t)\widehat{w}_{i}(s)ds.
\end{equation}
From (\ref{dclp110}) and (\ref{dclp111}), we complete the proof.
\end{Proof}
Let $M(\epsilon )$ and $\widehat{M}(\epsilon )$ be the optimal
objective values of $(\mbox{CLP}_{\epsilon})$ and
$(\mbox{DCLP}_{\epsilon})$, respectively. Under the assumptions of
Theorems~\ref{optt120} and \ref{p63},
we see that there exist optimal solutions $\bar{\bf z}^{(\epsilon)}$ and
$\bar{\bf w}^{(\epsilon)}$ of problems $(\mbox{CLP}_{\epsilon})$
and $(\mbox{DCLP}_{\epsilon})$, respectively, such that
\[M(\epsilon )=\int_{0}^{T} {\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt
\mbox{ and }\widehat{M}(\epsilon )
=\int_{0}^{T} {\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt.\]
Also, by taking $\epsilon =0$ in Theorems~\ref{optt120} and \ref{p63},
there exist optimal solutions ${\bf z}^{*}$ and ${\bf w}^{*}$
of problems (CLP) and (DCLP), respectively, such that
${\bf z}^{*}$ and ${\bf w}^{*}$ satisfy the following inequalities:
\begin{equation}{\label{*clpeq98}}
z_{j}^{*}(t)\leq\frac{p\cdot\zeta}{\sigma}\cdot
\exp\left (\frac{p\cdot\phi\cdot T}{\sigma}\right )\mbox{ a.e. in $[0,T]$
for each $j=1,\cdots ,q$}
\end{equation}
from (\ref{clpeq337}) by taking $\epsilon =0$, and
\begin{equation}{\label{*clpeq95}}
w_{i}^{*}(t)\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\nu\cdot T}{\sigma}\right )\mbox{ for all $t\in [0,T]$
and for each $i=1,\cdots ,p$}
\end{equation}
from (\ref{*clpeq68}).
Let $M$ and $\widehat{M}$ be the optimal objective values of (CLP) and (DCLP),
respectively. Then, we see that
\[M(0)=M=\int_{0}^{T} {\bf a}^{\top}(t){\bf z}^{*}(t)dt
\mbox{ and }\widehat{M}(0)=\widehat{M}=\int_{0}^{T} {\bf c}^{\top}(t){\bf w}^{*}(t)dt.\]
We are going to show that the functions $M(\epsilon )$ and
$\widehat{M}(\epsilon )$ are right-continuous at $0$.
\begin{Pro}{\label{*clpp70}}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item each entry of ${\bf a}$, ${\bf c}$ and $B$ is piecewise continuous on $[0,T]$,
and each entry of $K$ is piecewise continuous on $[0,T]\times [0,T]$;
\item ${\bf c}(t)\geq {\bf 0}$ a.e. in $[0,T]$;
\item $K(t,s)\geq {\bf 0}$ a.e. in $[0,T]\times [0,T]$;
\item $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
\end{itemize}
Then, we have the following results.
\begin{enumerate}
\item [{\em (i)}] The function $M(\epsilon )$ is nondecreasing
and right-continuous at $0$, i.e., $M(0+)=M(0)$, and
\begin{equation}{\label{clpeq338}}
\lim_{\epsilon\rightarrow 0+}\int_{0}^{T}
{\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt=\int_{0}^{T}
{\bf a}^{\top}(t){\bf z}^{*}(t)dt,
\end{equation}
where ${\bf z}^{*}$ is an optimal solution of {\em (CLP)}
such that ${\bf z}^{*}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and the inequalities in $(\ref{*clpeq98})$ are satisfied.
Moreover, the following results hold.
\begin{itemize}
\item If ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$ and,
for each fixed $t_{0}\in [0,T]$, $K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$,
then there exists a common optimal solution
${\bf z}^{*}$ of {\em (CLP)} and $(\mbox{\em CLP}^{*})$
such that both problems have the same
optimal objective values and ${\bf z}^{*}$ satisfies the inequalities
$(\ref{*clpeq98})$;
\item If the conditions regarding the time-dependent matrix $B(t)$ are satisfied for all
$t\in [0,T]$, then the inequalities in $(\ref{*clpeq98})$ are satisfied for all $t\in [0,T]$.
\end{itemize}
\item [{\em (ii)}] The function $\widehat{M}(\epsilon )$ is nonincreasing and
right-continuous at $0$, i.e., $\widehat{M}(0+)=\widehat{M}(0)$, and
\begin{equation}{\label{clpeq339}}
\lim_{\epsilon\rightarrow 0+}\int_{0}^{T}
{\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt=\int_{0}^{T}
{\bf c}^{\top}(t){\bf w}^{*}(t)dt,
\end{equation}
where ${\bf w}^{*}$ is the optimal solution of {\em (DCLP)}
such that ${\bf w}^{*}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and the inequalities $(\ref{*clpeq95})$ are satisfied.
If we further assume that the conditions regarding the time-dependent matrix
$B(t)$ are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$,
then there exists a common optimal solution
${\bf w}^{*}$ of {\em (DCLP)} and $(\mbox{\em DCLP}^{*})$
such that both problems have the same optimal objective values
and the inequalities in $(\ref{*clpeq95})$ are satisfied.
\end{enumerate}
\end{Pro}
\begin{Proof}
We want to show that $M(\epsilon )$ is nondecreasing.
For $\epsilon_{1}<\epsilon_{2}$, there exist optimal solutions
$\bar{\bf z}_{\epsilon_{1}}$ and $\bar{\bf z}_{\epsilon_{2}}$ satisfying
\begin{equation}{\label{opteq190}}
B(t)\bar{\bf z}_{\epsilon_{2}}(t)\leq
{\bf c}(t)+\mbox{\boldmath $\epsilon$}_{2}
+\int_{0}^{t} K(t,s)\bar{\bf z}_{\epsilon_{2}}(s)ds\mbox{ for all $t\in [0,T]$}
\end{equation}
and
\begin{equation}{\label{opteq191}}
B(t)\bar{\bf z}_{\epsilon_{1}}(t)\leq {\bf c}(t)+\mbox{\boldmath $\epsilon$}_{1}
+\int_{0}^{t} K(t,s)\bar{\bf z}_{\epsilon_{1}}(s)ds\leq
{\bf c}(t)+\mbox{\boldmath $\epsilon$}_{2}
+\int_{0}^{t} K(t,s)\bar{\bf z}_{\epsilon_{1}}(s)ds\mbox{ for all $t\in [0,T]$}.
\end{equation}
From (\ref{opteq191}), we see that $\bar{\bf z}_{\epsilon_{1}}$
is a feasible solution of $(\mbox{CLP}_{\epsilon_{2}})$.
Therefore, we obtain $M(\epsilon_{1})\leq M(\epsilon_{2})$,
since $(\mbox{CLP}_{\epsilon_{2}})$ is a maximization problem.
This shows that $M(\epsilon )$ is indeed nondecreasing; in particular, the right-hand limit $M(0+)$ exists.
We also have
\begin{equation}{\label{opteq195}}
M=M(0)\leq M(0+).
\end{equation}
We consider the sequence $\{\epsilon_{k}\}_{k=1}^{\infty}$ such that
$\epsilon_{k}\rightarrow 0+$ as $k\rightarrow\infty$.
Using (\ref{clpeq337}) in Proposition~\ref{optt120*}
by taking $\lambda_{i}(t)=1$ for $i=1,\cdots ,p$ and Remark~\ref{*clpr97},
we see that the sequence $\{\bar{\bf z}^{(\epsilon_{k})}\}$ is
uniformly essentially bounded. Using part (i) of Proposition~\ref{*clpp47},
there exists a subsequence $\{\bar{\bf z}^{(\epsilon_{k_{r}})}\}$ which weakly
converges to some feasible solution $\bar{\bf z}^{(0)}\in L_{q}^{2}[0,T]$ of
$(\mbox{CLP}_{0})=\mbox{(CLP)}$. Moreover, there exists a feasible solution
${\bf z}^{*}$ of $(\mbox{CLP}_{0})=\mbox{(CLP)}$ such that
${\bf z}^{*}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and ${\bf z}^{*}(t)=\bar{\bf z}^{(0)}(t)$ a.e. in $[0,T]$.
Therefore, using the weak convergence, we have
\begin{equation}{\label{opteq194}}
\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{*}(t)dt
=\int_{0}^{T}{\bf a}^{\top}(t)\bar{\bf z}^{(0)}(t)dt
=\lim_{r\rightarrow\infty}\int_{0}^{T}
{\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon_{k_{r}})}(t)dt
=\lim_{r\rightarrow\infty}M(\epsilon_{k_{r}})=M(0+).
\end{equation}
Since ${\bf z}^{*}$ is a feasible solution of
$(\mbox{CLP}_{0})=\mbox{(CLP)}$, we also have
\begin{equation}{\label{opteq193}}
\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{*}(t)dt\leq M=M(0).
\end{equation}
Therefore, according to (\ref{opteq193}) and (\ref{opteq194}),
we obtain $M(0+)\leq M(0)$, which implies $M(0+)=M(0)=M$ by (\ref{opteq195}).
This says that ${\bf z}^{*}$ is an optimal solution of (CLP) and
proves equality (\ref{clpeq338}). If we further assume that
${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$ and,
for each fixed $t_{0}\in [0,T]$, $K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$,
using part (ii) of Proposition~\ref{*clpp47}, we see that
${\bf z}^{*}$ is also a feasible solution of $(\mbox{CLP}^{*})$.
Therefore, we conclude that ${\bf z}^{*}$ is also an optimal solution
of $(\mbox{CLP}^{*})$, since the feasible set of
$(\mbox{CLP}^{*})$ is contained in the feasible set of (CLP).
On the other hand, if the assumption regarding
the time-dependent matrix $B(t)$ is satisfied for all $t\in [0,T]$,
then, according to (\ref{clpeq337}) in Proposition~\ref{optt120*},
the inequalities in (\ref{*clpeq98})
are satisfied for all $t\in [0,T]$. This proves part (i).
To prove part (ii), we can similarly show that $\widehat{M}(\epsilon )$ is
nonincreasing and $\widehat{M}=\widehat{M}(0)\geq\widehat{M}(0+)$.
From (\ref{*clpeq68}) in Theorem~\ref{p63}, we have
\[0\leq\bar{w}_{i}^{(\epsilon)}(t)\leq\frac{\tau}{\sigma}\cdot\exp
\left [\frac{\nu\cdot (T-t)}{\sigma}\right ]\leq\frac{\tau}{\sigma}\cdot\exp
\left (\frac{\nu\cdot T}{\sigma}\right )
\mbox{ for all $t\in [0,T]$},\]
which says that the sequence $\{\bar{\bf w}^{(\epsilon_{k})}\}_{k=1}^{\infty}$ is
uniformly essentially bounded. Using Proposition~\ref{*clpp48*},
there exists a subsequence $\{\widehat{\bf w}^{(\epsilon_{k_{r}})}\}_{r=1}^{\infty}$
which weakly converges to some feasible solution $\widehat{\bf w}^{(0)}\in L_{p}^{2}[0,T]$ of
$(\mbox{DCLP}_{0})=\mbox{(DCLP)}$ such that $\widehat{\bf w}^{(\epsilon_{k_{r}})}(t)
\leq\bar{\bf w}^{(\epsilon_{k_{r}})}(t)$ a.e. in $[0,T]$.
Moreover, there exists a feasible solution
${\bf w}^{*}$ of $(\mbox{DCLP}_{0})=\mbox{(DCLP)}$ such that
${\bf w}^{*}(t)\geq {\bf 0}$ for all $t\in [0,T]$
and ${\bf w}^{*}(t)=\widehat{\bf w}^{(0)}(t)$ a.e. in $[0,T]$.
Therefore, using the weak convergence, we have
\begin{align*}
\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{*}(t)dt
& =\int_{0}^{T}{\bf c}^{\top}(t)\widehat{\bf w}^{(0)}(t)dt=
\lim_{r\rightarrow\infty}\int_{0}^{T}
{\bf c}^{\top}(t)\widehat{\bf w}^{(\epsilon_{k_{r}})}(t)dt\\
& \leq\lim_{r\rightarrow\infty}\int_{0}^{T}
{\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon_{k_{r}})}(t)dt
=\lim_{r\rightarrow\infty}\widehat{M}(\epsilon_{k_{r}})=\widehat{M}(0+).
\end{align*}
Since ${\bf w}^{*}$ is a feasible solution of
$(\mbox{DCLP}_{0})=\mbox{(DCLP)}$, we also have
\[\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{*}(t)dt\geq\widehat{M}=\widehat{M}(0).\]
Therefore, we obtain $\widehat{M}(0+)\geq\widehat{M}(0)$, which also implies
\[\widehat{M}(0+)=\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{*}(t)dt
=\widehat{M}(0)=\widehat{M}.\]
This shows that ${\bf w}^{*}$ is an optimal solution of (DCLP),
and proves the equality (\ref{clpeq339}).
We further assume that the conditions regarding the time-dependent matrix
$B(t)$ are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$. Then,
part (iii) of Proposition~\ref{*clpp48*} says that we can take
${\bf w}^{*}$ as a feasible solution of $(\mbox{DCLP}^{*})$.
Since the feasible set of $(\mbox{DCLP}^{*})$ is contained in the
feasible set of $(\mbox{DCLP})$, it follows that ${\bf w}^{*}$
is an optimal solution of problem $(\mbox{DCLP}^{*})$. This completes the proof.
\end{Proof}
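The monotone behaviour of $M(\epsilon )$ together with its right-continuity at $0$ can be
observed numerically. The following sketch treats a hypothetical scalar instance
($p=q=1$, $B=1$, ${\bf a}={\bf c}=1$, $K=0.5$, $T=1$; none of these values come from the text)
of a discretised $(\mbox{CLP}_{\epsilon})$; for this instance, since $K\geq 0$ and
${\bf a}\geq 0$, the maximiser saturates each constraint in turn, so the optimal value can be
computed by a forward recursion.
\begin{verbatim}
T, N = 1.0, 200
dt = T / N

def M(eps):
    """Optimal value of the discretised (CLP_eps) for the scalar instance:
    the constraint z_u <= c + eps + dt * sum_{v<u} K z_v is saturated
    at the optimum, since K >= 0 and a >= 0."""
    z, integral, acc = 0.0, 0.0, 0.0
    for _ in range(N):
        z = 1.0 + eps + 0.5 * acc     # saturate the relaxed constraint
        acc += dt * z                 # running value of the integral term
        integral += dt * z            # objective contribution
    return integral

for eps in (0.5, 0.1, 0.01, 0.001, 0.0):
    print(eps, M(eps))   # nondecreasing in eps, tending to M(0) as eps -> 0+
\end{verbatim}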
Now, we are in a position to prove the strong duality theorem.
\begin{Thm}{\label{optt203}}
{\em (Strong Duality Theorem)}
Suppose that the following conditions are satisfied:
\begin{itemize}
\item each entry of ${\bf a}$, ${\bf c}$ and $B$ is piecewise continuous on $[0,T]$,
and each entry of $K$ is piecewise continuous on $[0,T]\times [0,T]$;
\item ${\bf c}(t)\geq {\bf 0}$ a.e. in $[0,T]$;
\item $K(t,s)\geq {\bf 0}$ a.e. in $[0,T]\times [0,T]$;
\item the time-dependent matrix $B(t)$ satisfies the following conditions:
\begin{itemize}
\item $\sum_{i=1}^{p}B_{ij}(t)>0$ a.e. in $[0,T]$ for each $j=1,\cdots ,q$;
\item there exists a constant $\sigma >0$ such that, for each $i=1,\cdots ,p$ and
$j=1,\cdots ,q$, the following statement holds true a.e. in $[0,T]$:
\[B_{ij}(t)\neq 0\mbox{ implies }B_{ij}(t)\geq\sigma .\]
\end{itemize}
\end{itemize}
Then, there exist optimal solutions ${\bf z}^{*}$ and ${\bf w}^{*}$ of problems
{\em (CLP)} and {\em (DCLP)}, respectively, such that
\[\int_{0}^{T} {\bf a}^{\top}(t){\bf z}^{*}(t)dt=
\int_{0}^{T} {\bf c}^{\top}(t){\bf w}^{*}(t)dt,\]
where ${\bf z}^{*}(t)\geq {\bf 0}$ and ${\bf w}^{*}(t)\geq {\bf 0}$
for all $t\in [0,T]$, and the inequalities in
$(\ref{*clpeq98})$ and $(\ref{*clpeq95})$ are all satisfied.
Moreover, the following results hold.
\begin{itemize}
\item If ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$ and,
for each fixed $t_{0}\in [0,T]$, $K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$,
then there exists a common optimal solution
${\bf z}^{*}$ of problems {\em (CLP)} and $(\mbox{\em CLP}^{*})$
such that both problems have the same optimal objective value.
\item If we further assume that the conditions regarding the time-dependent matrix
$B(t)$ are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$, then there exists a common optimal solution
${\bf w}^{*}$ of {\em (DCLP)} and $(\mbox{\em DCLP}^{*})$
such that both problems have the same optimal objective value
and the inequalities $(\ref{*clpeq98})$ are satisfied for all $t\in [0,T]$.
\end{itemize}
\end{Thm}
\begin{Proof}
Since the primal and dual pair of linear programming problems $(\mbox{LP}^{(N)})$ and
$(\mbox{DLP}^{(N)})$ are feasible by Proposition~\ref{optp172}, the strong duality theorem
says that there exist optimal solutions $(\bar{{\bf z}}^{(1)},\cdots ,\bar{{\bf z}}^{(N)})$
and $(\bar{{\bf d}}^{(1)},\cdots ,\bar{{\bf d}}^{(N)})$ of problems $(\mbox{LP}^{(N)})$ and
$(\mbox{DLP}^{(N)})$, respectively, such that
\begin{equation}{\label{opteq*187}}
\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf a}^{(u)})^{\top}\bar{{\bf z}}^{(u)}
=\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf c}^{(u)})^{\top}\bar{{\bf d}}^{(u)}.
\end{equation}
Since the integral is not affected by the values at the endpoints, from (\ref{copteq244})
applied to the optimal solution $(\bar{{\bf z}}^{(1)},\cdots ,\bar{{\bf z}}^{(N)})$,
it is not hard to obtain
\begin{align}
\int_{0}^{T}{\bf a}^{\top}(t)\widehat{\bf z}(t)dt
& =\sum_{j=1}^{q}\int_{0}^{T}a_{j}(t)\widehat{z}_{j}(t)dt\nonumber\\
& \geq\sum_{j=1}^{q}\sum_{u=1}^{N}(t_{u}-t_{u-1})a_{j}^{(u)}\bar{z}_{j}^{(u)}
=\sum_{u=1}^{N} (t_{u}-t_{u-1})({\bf a}^{(u)})^{\top}\bar{\bf z}^{(u)}\label{*clpeq33}.
\end{align}
Considering $t\in [0,T]\setminus {\cal P}$,
since $K(t,s)$ is continuous on the open rectangles
$(t_{u-1},t_{u})\times (t_{v-1},t_{v})$ and $K(t,s)\geq {\bf 0}$ a.e. in
$(t_{u-1},t_{u})\times (t_{v-1},t_{v})$ for $u=1,\cdots ,N$ and $v=1,\cdots ,N$,
it follows that $K(t,s)\geq {\bf 0}$ for all $(t,s)\in
(t_{u-1},t_{u})\times (t_{v-1},t_{v})$ for $u=1,\cdots ,N$ and $v=1,\cdots ,N$,
which implies $K_{ij}^{(u,v)}\geq 0$ for $u=1,\cdots ,N$ and $v=1,\cdots ,N$.
Since $\bar{z}_{j}^{(v)}\geq 0$ and $K_{ij}(t,s)\geq K_{ij}^{(u,v)}$ for $t_{u-1}<t<t_{u}$
and $t_{v-1}<s<t_{v}$, we have $K_{ij}(t,s)\bar{z}_{j}^{(v)}\geq K_{ij}^{(u,v)}\bar{z}_{j}^{(v)}$.
Since the integral is not affected by the values at the endpoints, for $t_{u-1}<t<t_{u}$,
we obtain
\begin{equation}{\label{dclp125}}
-\sum_{j=1}^{q}\int_{0}^{t}K_{ij}(t,s)\widehat{z}_{j}(s)ds\leq
-\sum_{j=1}^{q}\sum_{v=1}^{u-1}\left (t_{v}-t_{v-1}\right )K_{ij}^{(u,v)}\bar{z}_{j}^{(v)}.
\end{equation}
Since $c_{i}(t)\geq c_{i}^{(u)}$ and $B_{ij}(t)\leq B_{ij}^{(u)}$,
using (\ref{dclp125}) and the feasibility of $(\bar{{\bf z}}^{(1)},\cdots ,\bar{{\bf z}}^{(N)})$,
after some calculations, we can obtain the following inequalities
\[\sum_{j=1}^{q}B_{ij}(t)\widehat{z}_{j}(t)\leq c_{i}(t)
+\sum_{j=1}^{q}\int_{0}^{t}K_{ij}(t,s)\widehat{z}_{j}(s)ds\]
for all $t\in [0,T]\setminus {\cal P}$ and for $i=1,\cdots ,p$, which implies, for $\epsilon >0$,
\begin{equation}{\label{*clpeq32}}
B(t)\widehat{\bf z}(t)\leq {\bf c}(t)+\mbox{\boldmath $\epsilon$}
+\int_{0}^{t} K(t,s)\widehat{\bf z}(s)ds\mbox{ for all $t\in [0,T]\setminus {\cal P}$}.
\end{equation}
Considering the dual problem $(\mbox{DLP}^{(N)})$,
applying Lemmas~\ref{optl185} and \ref{optl183}, there exists
a sufficiently small $\parallel {\cal P}\parallel$ which depends on $\epsilon$ such that
$(\bar{{\bf w}}^{(1)},\cdots ,\bar{{\bf w}}^{(N)})$ is an optimal solution of
problem $(\mbox{DLP}^{(N)})$ and the following inequalities are satisfied:
\begin{equation}{\label{*clpeq36}}
B^{\top}(t)\widehat{\bf w}(t)+\mbox{\boldmath $\epsilon$}\geq {\bf a}(t)
+\int_{t}^{T} K^{\top}(s,t)\widehat{\bf w}(s)ds
\mbox{ for all $t\in [0,T]\setminus {\cal P}$}
\end{equation}
and
\begin{equation}{\label{*clpeq202}}
\int_{0}^{T} {\bf c}^{\top}(t)\widehat{\bf w}(t)dt
\leq\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf c}^{(u)})^{\top}
\bar{{\bf w}}^{(u)}+\epsilon .
\end{equation}
From (\ref{opteq*187}), we also have
\begin{equation}{\label{opteq187}}
\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf a}^{(u)})^{\top}\bar{{\bf z}}^{(u)}
=\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf c}^{(u)})^{\top}\bar{{\bf w}}^{(u)}.
\end{equation}
The inequalities (\ref{*clpeq32}) and (\ref{*clpeq36}) say that
$\widehat{\bf z}$ and $\widehat{\bf w}$ are feasible solutions
of problems $(\mbox{CLP}_{\epsilon})$ and $(\mbox{DCLP}_{\epsilon})$, respectively.
By Theorems~\ref{optt120} and \ref{p63},
we see that there exist optimal solutions $\bar{\bf z}^{(\epsilon)}$ and
$\bar{\bf w}^{(\epsilon)}$ of $(\mbox{CLP}_{\epsilon})$
and $(\mbox{DCLP}_{\epsilon})$, respectively, such that
\begin{equation}{\label{opteq181}}
\int_{0}^{T} {\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt
\geq\int_{0}^{T} {\bf a}^{\top}(t)\widehat{\bf z}(t)dt
\mbox{ and }\int_{0}^{T} {\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt
\leq\int_{0}^{T} {\bf c}^{\top}(t)\widehat{\bf w}(t)dt.
\end{equation}
Using (\ref{opteq181}), (\ref{*clpeq33}) and (\ref{*clpeq202}), we have
\begin{equation}{\label{opteq182}}
\int_{0}^{T} {\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt
\geq\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf a}^{(u)})^{\top}
\bar{{\bf z}}^{(u)}
\end{equation}
and
\begin{equation}{\label{opteq186}}
\int_{0}^{T} {\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt
\leq\sum_{u=1}^{N}\left (t_{u}-t_{u-1}\right )({\bf c}^{(u)})^{\top}
\bar{{\bf w}}^{(u)}+\epsilon .
\end{equation}
From (\ref{opteq187}), (\ref{opteq182}) and (\ref{opteq186}), we obtain
\begin{equation}{\label{opteq188}}
\int_{0}^{T} {\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt
\leq\int_{0}^{T} {\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt+\epsilon .
\end{equation}
Recall from the discussion preceding Proposition~\ref{*clpp70} that
\begin{equation}{\label{opteq196}}
\int_{0}^{T}{\bf a}^{\top}(t)\bar{\bf z}^{(\epsilon)}(t)dt=M(\epsilon )
\mbox{ and }
\int_{0}^{T}{\bf c}^{\top}(t)\bar{\bf w}^{(\epsilon)}(t)dt=\widehat{M}(\epsilon ).
\end{equation}
Therefore, (\ref{opteq188}) says that $\widehat{M}(\epsilon )\leq M(\epsilon )+\epsilon$
for each $\epsilon >0$.
Since $M(\epsilon )$ and $\widehat{M}(\epsilon )$ are right-continuous at $0$
with $M(0+)=M(0)=M$ and $\widehat{M}(0+)=\widehat{M}(0)=\widehat{M}$
by Proposition~\ref{*clpp70}, letting $\epsilon\rightarrow 0+$, we obtain
\[\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{*}(t)dt=\widehat{M}
\leq M=\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{*}(t)dt.\]
By the weak duality Theorem~\ref{optt198}, we conclude that
\[\int_{0}^{T}{\bf c}^{\top}(t){\bf w}^{*}(t)dt=
\int_{0}^{T}{\bf a}^{\top}(t){\bf z}^{*}(t)dt.\]
Also, the inequalities regarding the bounds of optimal solutions
${\bf z}^{*}$ and ${\bf w}^{*}$ follow from Proposition~\ref{*clpp70} immediately.
If ${\bf c}(t)\geq {\bf 0}$ for all $t\in [0,T]$ and,
for each fixed $t_{0}\in [0,T]$, $K(t_{0},s)\geq {\bf 0}$ a.e. in $[0,T]$,
then part (i) of Proposition~\ref{*clpp70} says that ${\bf z}^{*}$ is a common optimal
solution of problems {(CLP)} and $(\mbox{CLP}^{*})$ such that both problems have the
same optimal objective value.
Finally, we further assume that the conditions regarding the time-dependent matrix
$B(t)$ are satisfied for all $t\in [0,T]$,
and that the function $\sum_{i=1}^{p}K_{ij}$ is bounded by $\nu$ and the function $a_{j}$
is bounded by $\tau$ for each $j=1,\cdots ,q$.
Then, part (ii) of Proposition~\ref{*clpp70} says that ${\bf w}^{*}$ is a common optimal
solution of problems {(DCLP)} and $(\mbox{DCLP}^{*})$ such that both problems have the
same optimal objective value. Also, part (i) of Proposition~\ref{*clpp70} says that
the inequalities $(\ref{*clpeq98})$ are satisfied for all $t\in [0,T]$.
This completes the proof.
\end{Proof}
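The conclusion of Theorem~\ref{optt203} can also be checked numerically at the
discretised level. The sketch below (again for the hypothetical scalar instance
$B=1$, ${\bf a}={\bf c}=1$, $K=0.5$, $T=1$, with a uniform partition) assembles
$(\mbox{LP}^{(N)})$ and $(\mbox{DLP}^{(N)})$ as finite-dimensional linear
programs and solves both with \texttt{scipy.optimize.linprog}; the two optimal
values coincide, approximating the common value $M=\widehat{M}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

T, N = 1.0, 100
dt = T / N
L = np.tril(np.ones((N, N)), k=-1)   # strictly lower triangular ones

# (LP^N): maximise dt*sum(z)  s.t.  (I - 0.5*dt*L) z <= 1, z >= 0
primal = linprog(c=-dt * np.ones(N),
                 A_ub=np.eye(N) - 0.5 * dt * L,
                 b_ub=np.ones(N), bounds=(0, None), method='highs')

# (DLP^N): minimise dt*sum(w)  s.t.  (I - 0.5*dt*L.T) w >= 1, w >= 0,
# rewritten with "<=" for linprog by negating both sides
dual = linprog(c=dt * np.ones(N),
               A_ub=-(np.eye(N) - 0.5 * dt * L.T),
               b_ub=-np.ones(N), bounds=(0, None), method='highs')

print(-primal.fun, dual.fun)   # equal optimal values of the LP pair
\end{verbatim}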
\section{Introduction}
\label{Intro}
Consider a free field, whether classical or quantum, over a background spacetime of specific geometry,
say that of a Minkowski spacetime. Besides the symmetries of that geometry -- thus those of the Poincar\'e group
in the instance of a Minkowski spacetime -- what are all the possible global symmetries of such a field dynamics?
Of course these symmetries include the global symmetries of the background spacetime geometry.
However there are far more symmetries possible still, certainly in the case of a free field.
As is well known -- a result which came as a surprise when it was first
discovered -- in Einstein's General Relativity an asymptotically flat spacetime possesses asymptotic global symmetries at null infinity
that include the Poincar\'e group but are in fact much larger with an infinite (countable) set of conserved quantities,
namely the generators of the so-called BMS symmetries
of super-translations and super-rotations\cite{BMS1, BMS2, BMS3,Hawking,Strominger,Compere}. More recently
it has been established that free fields in a Minkowski spacetime share the same BMS symmetries
as well\cite{Free1,Free2,Free3}.
In particular in Refs.\cite{Free2,Free3} it was shown that these BMS symmetries are related
to conserved charges and generators which are bilinear in the phase space fields but are, as a matter of fact,
spatially non local.
Because of their wave-like dynamics, Lorentz covariant fields are intrinsically spatially non local,
yet with a causally consistent time evolution.
On the other hand, by definition symmetries of a dynamical system are transformations of its degrees
of freedom that leave its equations of motion form-invariant, namely that map solutions into solutions
to the same equations of motion. However dynamical systems may possess symmetries larger than those
that leave their Lagrangian action invariant (up to total time derivatives), namely symmetry transformations which,
when acting on the associated phase space (rather than on the system's configuration and
conjugate momentum spaces separately), leave the system's Hamiltonian action invariant (up to total time derivatives).
Such symmetry transformations mixing configuration space degrees of freedom with their conjugate momenta
correspond generally to so-called dynamical symmetries\footnote{Such as the celebrated SO(4) symmetry of the
bound states of the Coulomb--Kepler problem, or the SU(3) symmetry of the spherically symmetric harmonic oscillator,
whether classical or quantum mechanical in 3 dimensional Euclidean geometry.}.
Obviously Noether's (first) theorem (for global symmetries) applies likewise
to the phase space parametrisation of a dynamics in terms of its Hamiltonian action, allowing for the identification
of the Noether conserved charges generating global symmetries over phase space\cite{Canonical}
through Poisson brackets. Consequently the general setting appropriate to studying the global symmetries
of a system is that of its canonical Hamiltonian formulation.
In the case of a free field, say a bosonic one, its action being quadratic and its equation of motion being
linear in the field, hence also its Hamiltonian equations of motion, the general transformations of the phase space
fields leaving these equations invariant, hence also the Hamiltonian action, are {\sl a priori} linear in the
phase space fields. The corresponding (Noether charge) generators thus need to be bilinear in these fields.
However because of the intrinsic spatially non local character of field dynamics, these generators need not
necessarily be local in space (as are for instance the total energy, momentum and angular-momentum
of the field when conserved), and may indeed be spatially bilocal, namely be given by a double spatial integral
of a bilinear form in the phase space fields.
This is thus the rationale to be implemented in the present paper when wanting to identify all the possible
global symmetries of a free field dynamics. Specifically the analysis to be presented applies to the
simplest case possible, namely that of a free and real bosonic scalar field in a 2+1 dimensional Minkowski space-time
(generalisation to Minkowski spacetimes of arbitrary dimension is of course feasible).
Furthermore in order to better understand the possible role of asymptotic behaviour at infinity
for the existence of BMS symmetries (in the spirit of their original discovery), in the present work
a spatially bounded domain will first\footnote{In a forthcoming paper we plan to present
a similar analysis for the free real scalar field in the full unbounded 2+1 dimensional Minkowski spacetime,
in which case the connection with the known BMS symmetries of that system will be established.} be considered,
namely that of a disk of finite radius $R>0$, $D(R)$, in order to at least preserve -- besides the invariance
under global time translations (thus the conservation of the field's total energy) -- the invariance under
spatial rotations around a fixed center as well (and thus the conservation of the field's total angular-momentum).
In other words the considered 2+1 dimensional spacetime has the topology of a plain cylinder
$\mathbb{R}\times D(R)$.
However the spatial boundary condition to be imposed on the scalar field will be as general as possible
in a manner consistent with rotational invariance, namely in terms of a radial Robin boundary condition
of uniform Robin parameter along the disk boundary, allowing for a smooth transition between a Dirichlet
and a Neumann boundary condition if required. This choice of boundary condition is to be implemented
in the field dynamics through a specific boundary term in the Lagrangian action of the free field.
With the purpose as well of specifying notations and establishing results for later use,
Section 2 discusses the free field dynamics based on an action principle, its Hamiltonian formulation and
eventually its canonical quantisation. Section 3 then implements the rationale outlined above,
to identify the complete ensemble of global symmetries of the considered free field dynamics,
specifically for the flat disk spatial topology and geometry. The comprehensive identification of the
complete global symmetry group is achieved in Section 4. Some final comments are
presented in the Conclusion.
\section{Free Field Dynamics and its Quantisation}
\label{Sect2}
Given the background 2+1 dimensional
Minkowski spacetime of topology $\mathbb{R}\times D(R)$, in natural units ($c=1=\hbar$) let us thus consider
a real scalar field $\phi(t,r,\theta)$, $(r,\theta)$ being polar coordinates in the disk ($0\le r\le R$ and $0\le\theta<2\pi$).
The dynamics of that field derives from the following choice of Lagrangian action which is purely quadratic in the field,
\begin{eqnarray}
S_0\left[\phi\right] = \int dt\,L_0 &=&\int dt\,\int_0^Rdr\,r \int_0^{2\pi}d\theta
\left[\frac{1}{2}\left(\partial_t\phi\right)^2-\frac{1}{2}\left(\partial_r\phi\right)^2
-\frac{1}{2}\left(\frac{1}{r}\partial_\theta\phi\right)^2-\frac{1}{2}\mu^2\phi^2\right]\ + \nonumber \\
&&+\int dt\,\int_0^{2\pi}d\theta\ \frac{1}{2}\lambda\ \phi^2(t,R,\theta),
\label{eq:Action1}
\end{eqnarray}
where $\mu\ge 0$ is a real mass parameter while $\lambda\in\mathbb{R}$ is the real, constant and dimensionless
Robin parameter determining the radial Robin boundary condition for the equation of motion of the free scalar field.
In the above expression the first line is the bulk contribution to the action while the second line is a boundary
contribution. Note that besides its invariances under global time translations and spatial rotations around the
center of the disk $D(R)$, this dynamics is also invariant under parity and time reversal transformations.
\subsection{Classical Solutions}
\label{Sect2.1}
Obviously given the above action the variational principle, which requires the action to be stationary only up
to a total time derivative, implies the usual Klein-Gordon wave equation for the scalar field in the disk,
but subjected as well to the radial Robin boundary condition of parameter~$\lambda$,
\begin{equation}
\left(\partial^2_t-\frac{1}{r}\partial_r\,r\partial_r -\frac{1}{r^2}\partial^2_\theta + \mu^2\right)\phi(t,r,\theta)=0,\qquad
\left(\partial_r\phi\right)(t,R,\theta)=\frac{\lambda}{R}\,\phi(t,R,\theta).
\end{equation}
Note that the value $\lambda=0$ reproduces a Neumann boundary condition,
while the limits $\lambda\rightarrow\pm\infty$ imply a Dirichlet boundary condition for $\phi(t,r,\theta)$.
The Klein-Gordon equation being linear in the field, through a separation of variables
it suffices to find a basis of solutions to construct its general solution by superposition.
The lack of spatial translational invariance in the present instance, however, implies that the usual
travelling plane waves are not available for such a basis. A basis of solutions needs rather to be
identified in terms of stationary waves. Given the invariances of the dynamics under global
translations in time and global rotations in the disk with a $2\pi$ periodicity in the $\theta$ dependence,
through a separation of variables such stationary waves of angular-momentum $\ell\in\mathbb{Z}$
and positive angular frequency $\omega_\ell>0$ may be considered in the following (complex) form,
\begin{equation}
\phi_\ell(t,r,\theta)=e^{-i\omega_\ell t}\,e^{i\ell\theta}\,f_\ell(r),\qquad
\omega_\ell>0,\qquad \ell\in\mathbb{Z}.
\end{equation}
Direct substitution into the Klein-Gordon equation then requires,
\begin{equation}
\left[\frac{d^2}{dx^2}+\frac{1}{x}\frac{d}{dx}+\left(1-\frac{\ell^2}{x^2}\right)\right]\,f_\ell(r)=0,\qquad x=k_\ell\, r,
\qquad k_\ell>0,
\end{equation}
where $\omega^2_\ell=k^2_\ell+\mu^2\ge \mu^2$, $k_\ell \equiv \sqrt{\omega^2_\ell-\mu^2}\ge 0$,
$\omega_\ell\ge\mu\ge 0$.
This is the differential equation defining the real Bessel and Neumann functions of order $\ell$,
$J_\ell(x)$ and $N_\ell(x)$. Since the field $\phi(t,r,\theta)$ must remain regular throughout the disk,
only the Bessel functions of the first kind, $J_\ell(x)$, may be retained as solution for the radial dependence
of the field, namely $f_\ell(r)=J_\ell(k_\ell r)$ up to a constant normalisation factor. Let us then also recall
the parity property $J_{-\ell}(x)=(-1)^\ell\,J_\ell(x)$ for $\ell\in\mathbb{Z}$, hence equivalently
$i^{-\ell} J_{-\ell}(x)=i^\ell J_\ell(x)$.
However the spectrum of wave number values $k_\ell$ is restricted by imposing the Robin boundary condition,
which leads to the following equation in $k_\ell$ ($J'_\ell(x)$ denotes the derivative of $J_\ell(x)$ relative
to its argument $x$),
\begin{equation}
x_R\,J'_\ell(x_R)\,-\,\lambda\,J_\ell(x_R)=0,\qquad x_R=k_\ell R,\qquad
k_\ell\,J'_\ell(k_\ell R)=\frac{\lambda}{R}\,J_\ell(k_\ell R).
\end{equation}
The properties of the positive roots of this equation in $x_R$ have been discussed
in detail in Refs.\cite{Roots1,Roots2}.
For generic values of the Robin parameter $\lambda$ and a given value of $\ell$,
the positive roots form an infinite discrete set of
non degenerate values, $x_{\ell,n}$, increasing with $n=1,2,\cdots$, the smallest of which is strictly positive,
$x_{\ell,1}>0$. Correspondingly the spectrum of wave numbers solving the Robin boundary condition is thus
given as $k_{\ell,n}=x_{\ell,n}/R$, with $\omega_{\ell,n}=\sqrt{k^2_{\ell,n}+\mu^2}$.
As a function of $R$ and for a given value of $\ell$,
it is only at most for a finite set of isolated values for $\lambda$ (if not a single such value)
that the smallest root, $x_{\ell,1}$, may be doubly degenerate. In this work we shall assume that
such a degenerate circumstance is not encountered nor that accidental degeneracies in the roots $x_{\ell,n}$
would occur for different values of $|\ell |$ (in which case otherwise the set of global
symmetries to be identified for the free field theory would become slightly enlarged as compared
to our final identification of the global symmetries of this system).
However, note that because of parity invariance, or equivalently time reversal invariance of the field,
these roots are doubly degenerate for opposite non vanishing values of $\ell$,
\begin{equation}
x_{-\ell,n}=x_{\ell,n},\quad
k_{-\ell,n}=k_{\ell,n},\qquad
\omega_{-\ell,n}=\omega_{\ell,n},\qquad \ell\ne 0,\quad \ell\in\mathbb{Z},
\end{equation}
as also follows from the fact that $J_{-\ell}(x)=(-1)^\ell J_\ell(x)$.
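In practice the roots $x_{\ell,n}$ must be determined numerically. The following Python sketch
(a minimal illustration relying on \texttt{scipy}; the value $\lambda =1$ is an arbitrary choice,
not singled out by the analysis) brackets the sign changes of $x\,J'_\ell(x)-\lambda\,J_\ell(x)$,
refines them, and confirms the double degeneracy $x_{-\ell,n}=x_{\ell,n}$ pointed out above.
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

def robin_roots(ell, lam, n_roots, x_max=60.0):
    """First positive roots x_{ell,n} of x*J'_ell(x) - lam*J_ell(x) = 0."""
    f = lambda x: x * jvp(ell, x) - lam * jv(ell, x)
    xs = np.linspace(1e-6, x_max, 4000)
    fs = f(xs)
    roots = [brentq(f, xs[i], xs[i + 1])
             for i in range(len(xs) - 1) if fs[i] * fs[i + 1] < 0]
    return np.array(roots[:n_roots])

for ell in (0, 1, 2):
    print(ell, robin_roots(ell, 1.0, 3))        # x_{ell,1}, x_{ell,2}, ...
# double degeneracy under ell <-> -ell, since J_{-ell}(x) = (-1)^ell J_ell(x)
print(np.allclose(robin_roots(2, 1.0, 3), robin_roots(-2, 1.0, 3)))
\end{verbatim}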
In conclusion the full spectrum of solutions to the Laplacian eigenvalue problem on the disk,
$\Delta\varphi(r,\theta)=-k^2\varphi(r,\theta)$ with $k\in\mathbb{R}$ and $k\ge 0$,
subjected to the rotationally invariant radial Robin boundary condition of parameter $\lambda$, namely,
\begin{equation}
\left(\frac{1}{r}\partial_r r\partial_r + \frac{1}{r^2}\partial^2_\theta\right)\varphi(r,\theta)=-k^2\,\varphi(r,\theta),\qquad
(\partial_r\varphi)(R,\theta)=\lambda\,\varphi(R,\theta),
\end{equation}
is given by the following basis of functions, with $x_{\ell,n}=k_{\ell,n}R$,
\begin{equation}
\varphi_{\ell,n}(r,\theta) = N_{\ell,n}\,e^{i\ell\theta}\,J_\ell\left(x_{\ell,n}\frac{r}{R}\right),\qquad
k_{\ell,n}\,J'_\ell(k_{\ell,n}R)=\frac{\lambda}{R}\,J_\ell(k_{\ell,n}R),\qquad x_{\ell,n}=k_{\ell,n}R,
\end{equation}
where $N_{\ell,n}$ are (possibly complex) constant normalisation factors still to be determined.
Requiring these functions to be orthonormalised in the disk, namely
\begin{equation}
\int_0^Rdr\, r\int_0^{2\pi}d\theta\,\varphi^*_{\ell_1,n_1}(r,\theta)\,\varphi_{\ell_2,n_2}(r,\theta)
=\delta_{\ell_1,\ell_2}\,\delta_{n_1,n_2},
\end{equation}
implies for the normalisation factors\footnote{Using the differential equation obeyed by $J_n(kx)$ (with
$n\in\mathbb{N}$)
it is possible to establish that $\int dx \, x J_n(kx)\,J_n(\ell x)=\frac{x}{k^2-\ell^2}\left(J_n(kx)\frac{dJ_n(\ell x)}{dx}
-\frac{dJ_n(k x)}{dx}J_n(\ell x)\right)$ for $k^2\ne \ell^2$, as well as
$\int dx\,x J^2_n(\ell x)=\frac{1}{2\ell^2}\left[\left(x\frac{dJ_n(\ell x)}{dx}\right)^2 + (\ell^2 x^2 - n^2) J^2_n(\ell x)\right]$.
Orthogonality of the functions $\varphi_{\ell,n}(r,\theta)$ for different values of $n$ (and a same value for $\ell$)
then follows by relying on the Robin boundary condition identity defining the roots $x_{\ell,n}$.},
\begin{equation}
{\cal N}_{\ell,n}\equiv
|N_{\ell,n}|=\left(\frac{\pi R^2}{x^2_{\ell,n}}\left(\lambda^2+x^2_{\ell,n}-\ell^2\right)\,J^2_\ell(x_{\ell,n})\right)^{-1/2}
=\frac{k_{\ell,n}}{\sqrt{\pi\left(\lambda^2+x^2_{\ell,n}-\ell^2\right)\,J^2_\ell(x_{\ell,n})}}.
\end{equation}
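As a numerical consistency check of this expression, the following sketch (with the arbitrary
illustrative values $R=1$, $\lambda =1$ and $\ell =2$) evaluates the radial orthonormality
integrals by quadrature, using the closed form of ${\cal N}_{\ell,n}$ above; the diagonal
integral returns $1$ while the off-diagonal one vanishes.
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq
from scipy.integrate import quad

R, lam, ell = 1.0, 1.0, 2
f = lambda x: x * jvp(ell, x) - lam * jv(ell, x)

# bracket and refine the first two positive roots x_{ell,1}, x_{ell,2}
xs = np.linspace(1e-6, 20.0, 2000)
fs = f(xs)
idx = np.nonzero(fs[:-1] * fs[1:] < 0)[0][:2]
x1, x2 = (brentq(f, xs[i], xs[i + 1]) for i in idx)
k1, k2 = x1 / R, x2 / R

# closed-form factor N = k / sqrt(pi*(lam^2 + x^2 - ell^2)*J_ell(x)^2)
N = lambda x: (x / R) / np.sqrt(np.pi * (lam**2 + x**2 - ell**2)
                                * jv(ell, x)**2)

# radial integrals; the angular integral supplies the remaining factor 2*pi
same, _ = quad(lambda r: 2 * np.pi * N(x1)**2 * r * jv(ell, k1 * r)**2, 0, R)
cross, _ = quad(lambda r: 2 * np.pi * N(x1) * N(x2) * r
                * jv(ell, k1 * r) * jv(ell, k2 * r), 0, R)
print(same, cross)   # ~1.0 (normalisation) and ~0.0 (orthogonality)
\end{verbatim}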
Finally because of the well known representation
\begin{equation}
e^{ix\cos\theta}=\sum_{\ell=-\infty}^\infty\,i^\ell\,e^{i\ell\theta}\,J_\ell(x),
\end{equation}
our choice of phase convention for the normalisation factors $N_{\ell,n}$ is such that
\begin{equation}
\varphi_{\ell,n}(r,\theta)={\cal N}_{\ell,n}\,i^\ell\,e^{i\ell\theta}\,J_\ell(k_{\ell,n}r).
\end{equation}
Besides the above orthonormality properties the functions $\varphi_{\ell,n}(r,\theta)$ also possess the following
completeness relation as a genuine basis of functions in polar coordinates on the disk
obeying the radial Robin boundary condition of parameter $\lambda$,
\begin{equation}
\sum_{\ell=-\infty}^\infty\,\sum_{n=1}^\infty\,\varphi_{\ell,n}(r_1,\theta_1)\,\varphi^*_{\ell,n}(r_2,\theta_2)=
\frac{1}{r_1}\,\delta(r_1 - r_2)\,\delta(\theta_1 - \theta_2).
\label{eq:complete}
\end{equation}
Note that because of the properties under the change of sign in $\ell$ pointed out above,
and the phase convention for $N_{\ell,n}$ specified above, we also have,
\begin{equation}
\varphi_{-\ell,n}(r,\theta)=(-1)^\ell\,\varphi^*_{\ell,n}(r,\theta),
\end{equation}
in a way similar to an analogous property obeyed by the usual spherical harmonics on the 2-sphere.
In conclusion the complete and general solution to the Klein-Gordon equation subjected to the considered
radial Robin boundary condition is thus of the form,
\begin{equation}
\phi(t,r,\theta)=\sum_{\ell=-\infty}^\infty\sum_{n=1}^\infty\frac{1}{\sqrt{2\omega_{\ell,n}}}
\left[e^{-i\omega_{\ell,n}t}\,\varphi_{\ell,n}(r,\theta)\,a(\ell,n)\,+\,
e^{+i\omega_{\ell,n} t}\,\varphi^*_{\ell,n}(r,\theta)\,a^\dagger(\ell,n)\right],
\label{eq:sol1}
\end{equation}
where the choice of normalisation is made for later convenience, while $a^\dagger(\ell,n)\equiv a^*(\ell,n)$
and $a(\ell,n)$ are complex valued integration constants corresponding to the excitation amplitudes and phases
of each of the possible excitation modes $(\ell,n)$ of standing waves in the disk, which for the quantised
field will correspond to the (normalised) Fock algebra creation and annihilation operators for the different quanta
possible for this free field theory.
Within the Hamiltonian formulation of the system (see the next Subsection),
in polar coordinates in the disk the momentum conjugate to the field, $\pi(t,r,\theta)$, is given
as $\pi(t,r,\theta)=r\partial_t\phi(t,r,\theta)$, namely,
\begin{equation}
\pi(t,r,\theta)=\sum_{\ell=-\infty}^\infty\sum_{n=1}^\infty\frac{-i\omega_{\ell,n}}{\sqrt{2\omega_{\ell,n}}}
\left[e^{-i\omega_{\ell,n}t}\,r\varphi_{\ell,n}(r,\theta)\,a(\ell,n)\,-\,
e^{+i\omega_{\ell,n} t}\,r\varphi^*_{\ell,n}(r,\theta)\,a^\dagger(\ell,n)\right].
\label{eq:sol2}
\end{equation}
The relevant inverse relations are then, for the corresponding mode amplitudes,
\begin{equation}
a(\ell,n)=\int_0^Rdr\,r\int_0^{2\pi}d\theta\frac{\sqrt{2\omega_{\ell,n}}}{2}\,e^{i\omega_{\ell,n}t}\,\varphi^*_{\ell,n}(r,\theta)
\left[\phi(t,r,\theta) + \frac{i}{\omega_{\ell,n}}\frac{1}{r}\pi(t,r,\theta)\right],
\label{eq:a}
\end{equation}
\begin{equation}
a^\dagger(\ell,n)=\int_0^Rdr\,r\int_0^{2\pi}d\theta\frac{\sqrt{2\omega_{\ell,n}}}{2}
\,e^{-i\omega_{\ell,n}t}\,\varphi_{\ell,n}(r,\theta)
\left[\phi(t,r,\theta) - \frac{i}{\omega_{\ell,n}}\frac{1}{r}\pi(t,r,\theta)\right].
\label{eq:adagger}
\end{equation}
\subsection{Hamiltonian Formulation}
\label{Sect2.2}
Given the polar coordinate parametrisation of the Lagrangian action in (\ref{eq:Action1}) the momentum conjugate
to the field configuration $\phi(t,r,\theta)$, defined by $\pi(t,r,\theta)=\delta L/\delta(\partial_t\phi(t,r,\theta))$,
is obtained as,
\begin{equation}
\pi(t,r,\theta)=r\,\partial_t\phi(t,r,\theta),\qquad
\partial_t\phi(t,r,\theta)=\frac{1}{r}\pi(t,r,\theta),
\end{equation}
for which the Poisson brackets are canonical, with the only non vanishing (equal time) bracket being
\begin{equation}
\{\phi(t,r_1,\theta_1),\pi(t,r_2,\theta_2)\}=\delta(r_1-r_2) \delta(\theta_1 - \theta_2).
\end{equation}
Correspondingly the canonical Hamiltonian of the system is given as, inclusive of the boundary term,
\begin{equation}
H=\int_0^R dr\,\int_0^{2\pi}d\theta
\left(\frac{1}{2r}\pi^2 + \frac{1}{2}r(\partial_r\phi)^2+\frac{1}{2r}(\partial_\theta\phi)^2+\frac{1}{2}\mu^2 r\phi^2\right)
\ +\ \int_0^{2\pi}d\theta\left(-\frac{1}{2}\lambda\,\phi^2(t,R,\theta)\right),
\label{eq:H}
\end{equation}
in terms of which the Hamiltonian first-order action is expressed as
\begin{equation}
S_H[\phi,\pi]=\int dt\,L_H = \int dt\,\left[\int_0^R dr\, \int_0^{2\pi}d\theta\,\partial_t \phi \, \pi\,-\,H\right].
\label{eq:SH}
\end{equation}
It should be noted that in view of its definition the conjugate momentum field obeys a rotationally invariant
radial Robin boundary condition of parameter $(\lambda+1)$, namely,
\begin{equation}
(\partial_r\phi)(t,R,\theta)=\frac{\lambda}{R}\,\phi(t,R,\theta),\qquad
(\partial_r\pi)(t,R,\theta)=\frac{\lambda+1}{R}\,\pi(t,R,\theta).
\end{equation}
These are thus the boundary conditions to be considered when solving the associated Hamiltonian equations
of motion, of which the solutions will be expressed in terms of the basis of functions $\varphi_{\ell,n}(r,\theta)$ and
$r\varphi_{\ell,n}(r,\theta)$ (see (\ref{eq:sol1}) and (\ref{eq:sol2})). Note that given the equation obeyed by the roots $x_{\ell,n}=k_{\ell,n}R$, we have indeed as it should,
\begin{equation}
\frac{d}{dr}J_\ell(k_{\ell,n}r)_{|_{r=R}}=\frac{\lambda}{R}\,J_\ell(k_{\ell,n}r)_{|_{r=R}},\qquad
\frac{d}{dr}\left(rJ_\ell(k_{\ell,n}r)\right)_{|_{r=R}}=\frac{\lambda+1}{R}\left(rJ_\ell(k_{\ell,n}r)\right)_{|_{r=R}}.
\end{equation}
Through a careful treatment of the boundary term contribution in the evaluation of Poisson brackets with
the above Hamiltonian $H$, it may readily be checked that the first-order and linear Hamiltonian equations
of motion read as
\begin{equation}
\partial_t \phi=\left\{\phi,H\right\}=\frac{1}{r}\pi,\qquad
\partial_t\pi=\left\{\pi,H\right\}=r\left(\frac{1}{r}\partial_r r\partial_r + \frac{1}{r^2}\partial^2_\theta - \mu^2\right)\phi
=r\left(\Delta - \mu^2\right)\phi,
\end{equation}
of which, given the appropriate Robin boundary conditions, the solutions are provided by the expressions
(\ref{eq:sol1}) and (\ref{eq:sol2}). Note that as it should, these same equations of motion also derive directly
from the variational principle applied to the above Hamiltonian action $S_H$, inclusive of spatial boundary
contributions that need to vanish as well.
Using the inverse relations in (\ref{eq:a}) and (\ref{eq:adagger}) as well as the orthonormality properties
of the basis functions $\varphi_{\ell,n}(r,\theta)$, one may also readily establish that the above canonical
Poisson brackets for the phase space fields $\phi(t,r,\theta)$ and $\pi(t,r,\theta)$ translate into
the following (only non vanishing) Poisson brackets for the mode amplitudes,
\begin{equation}
\left\{a(\ell_1,n_1),a^\dagger(\ell_2,n_2)\right\}=-i\,\delta_{\ell_1,\ell_2}\,\delta_{n_1,n_2}.
\end{equation}
Given the expression (\ref{eq:H}) and using the Robin boundary condition for $\phi$, through partial integrations
it is straightforward to establish the following purely bulk representation for the Hamiltonian which is spatially
local in phase space and quadratic in the phase space fields,
\begin{equation}
H=\int_0^R dr\, r \int_0^{2\pi}d\theta\left[\frac{1}{2}\left(\frac{1}{r}\pi\right)^2
-\frac{1}{2}\phi\left(\frac{1}{r}\partial_r r\partial_r + \frac{1}{r^2}\partial^2_\theta - \mu^2\right)\phi\right].
\label{eq:H2}
\end{equation}
Then using the fact that,
\begin{equation}
\left(\Delta - \mu^2\right)\varphi_{\ell,n}(r,\theta)=
\left(\frac{1}{r}\partial_r r\partial_r + \frac{1}{r^2}\partial^2_\theta - \mu^2\right)\varphi_{\ell,n}(r,\theta)
=-\omega^2_{\ell,n}\,\varphi_{\ell,n}(r,\theta),\quad
\omega^2_{\ell,n}=k^2_{\ell,n}+\mu^2,
\end{equation}
as well as the orthonormality properties of the basis functions $\varphi_{\ell,n}(r,\theta)$,
a straightforward evaluation establishes the following representation for the conserved total energy of the field,
in which none of the factors $a(\ell,n)$ and $a^\dagger(\ell,n)$ have been permuted with one another:
\begin{equation}
H=\sum_{\ell=-\infty}^\infty\sum_{n=1}^\infty\frac{1}{2}\omega_{\ell,n}
\left(a^\dagger(\ell,n) a(\ell,n) + a(\ell,n) a^\dagger(\ell,n)\right).
\end{equation}
Besides the global symmetry under constant translations in time for which the conserved Noether charge
is the total energy or the Hamiltonian $H$, the system also possesses a global symmetry under constant
rotations of the disk around its center at $r=0$, for which the conserved Noether charge is
the total angular-momentum of the field, $L$. A simple analysis identifies its spatially local phase space
expression which is quadratic in the phase space fields,
\begin{equation}
L=-\int_0^R dr\, r \int_0^{2\pi}d\theta\,\left(\frac{1}{r}\pi\right)\,\partial_\theta\phi.
\end{equation}
A straightforward substitution in terms of the mode amplitudes then gives,
\begin{equation}
L=\sum_{\ell=-\infty}^\infty\sum_{n=1}^\infty\,
\frac{1}{2}\ell\left(a^\dagger(\ell,n) a(\ell,n) + a(\ell,n) a^\dagger(\ell,n)\right).
\end{equation}
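For concreteness, the single-quantum spectrum $\{(\omega_{\ell,n},\ell)\}$ determined by these
expressions can be tabulated numerically; the sketch below (with illustrative values $R=1$,
$\lambda =1$ and $\mu =0.5$, which are not singled out by the text) lists the lowest levels and
makes the double degeneracy under $\ell\leftrightarrow -\ell$ explicit.
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp
from scipy.optimize import brentq

R, lam, mu = 1.0, 1.0, 0.5   # illustrative disk radius, Robin parameter, mass

def roots(ell, n_roots, x_max=40.0):
    f = lambda x: x * jvp(ell, x) - lam * jv(ell, x)
    xs = np.linspace(1e-6, x_max, 3000)
    fs = f(xs)
    out = [brentq(f, xs[i], xs[i + 1])
           for i in range(len(xs) - 1) if fs[i] * fs[i + 1] < 0]
    return out[:n_roots]

levels = []
for ell in range(-3, 4):
    for n, x in enumerate(roots(abs(ell), 2), start=1):  # x_{-ell,n}=x_{ell,n}
        omega = np.sqrt((x / R)**2 + mu**2)              # omega^2 = k^2 + mu^2
        levels.append((omega, ell, n))

for omega, ell, n in sorted(levels):
    print(f"omega = {omega:.4f}   ell = {ell:+d}   n = {n}")
# every ell != 0 energy appears twice: degeneracy under ell <-> -ell
\end{verbatim}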
\subsection{The Quantised Field}
\label{Sect2.3}
In terms of the results established above within the Hamiltonian formulation of the system,
its canonical quantisation is straightforward. Its Hilbert space of quantum states is spanned
by the tensor product of the Fock spaces generated from a Fock vacuum by the Fock algebras
of annihilation and creation operators, $a(\ell,n)$ and $a^\dagger(\ell,n)$, obeying the commutation relations
(with $\hbar=1$),
\begin{equation}
\left[a(\ell_1,n_1),a^\dagger(\ell_2,n_2)\right]=\delta_{\ell_1,\ell_2}\,\delta_{n_1,n_2}\,\mathbb{I},
\qquad \ell_1,\ell_2\in\mathbb{Z},\quad n_1,n_2=1,2,3,\cdots .
\end{equation}
Time dependence of the system is generated through the total quantum Hamiltonian, which we choose to be given
by the normal ordered form of the classical quantity, namely,
\begin{equation}
\hat{H}=\sum_{\ell=-\infty}^\infty\sum_{n=1}^\infty\,\omega_{\ell,n}\,a^\dagger(\ell,n) a(\ell,n).
\end{equation}
In particular in the Heisenberg picture the time dependent
quantum field operators $\hat{\phi}(t,r,\theta)$ and $\hat{\pi}(t,r,\theta)$
remain then given by the same expressions as in (\ref{eq:sol1}) and (\ref{eq:sol2}),
now in terms of the Fock operators.
Likewise the total angular-momentum operator of the system is represented by the normal ordered expression,
\begin{equation}
\hat{L}=\sum_{\ell=-\infty}^\infty\sum_{n=1}^\infty\,\ell\,a^\dagger(\ell,n) a(\ell,n).
\end{equation}
Hence the action of $a^\dagger(\ell,n)$ onto any quantum state of the system adds one more quantum
of energy $\omega_{\ell,n}$ and angular-momentum $\ell$
in the standing wave mode of the field of quantum numbers $(\ell,n)$
related to the root $x_{\ell,n}$ of the equation expressing the radial Robin boundary condition
for the angular-momentum mode $\ell$.
Because of the parity and time reversal invariance property that $x_{-\ell,n}=x_{\ell,n}$,
note well that the energy spectrum of the system is doubly degenerate under the
exchange $\ell\leftrightarrow -\ell$ for $\ell\ne 0$. In other words one may as well write,
\begin{equation}
\hat{H}=\sum_{n=1}^\infty \omega_{0,n}\, a^\dagger(0,n)a(0,n)\,+\,
\sum_{\ell=1}^\infty\sum_{n=1}^\infty\omega_{\ell,n}\left(a^\dagger(\ell,n)a(\ell,n)+a^\dagger(-\ell,n)a(-\ell,n)\right),
\label{eq:Hsu2}
\end{equation}
\begin{equation}
\hat{L}= \sum_{\ell=1}^\infty\sum_{n=1}^\infty \ell \left(a^\dagger(\ell,n)a(\ell,n)-a^\dagger(-\ell,n)a(-\ell,n)\right).
\label{eq:Lsu2}
\end{equation}
\section{Bilocal Global Dynamical Symmetries and their Charges}
\label{Sect3}
Given the rationale outlined in the Introduction, let us now consider the possibility that quantities
which are bilocal and bilinear in the phase space fields, hence bilinear in the creation and annihilation
operators, could generate (as the would-be associated conserved Noether charges) global symmetries
of the free field dynamics. In a compact notation such that ``1" or ``2" stand for the pairs of coordinates
$(r_1,\theta_1)$ and $(r_2,\theta_2)$, respectively, such candidate conserved charges are of the general form
\begin{eqnarray}
Q(t) &=& \int_0^R dr_1\int_0^{2\pi}d\theta_1\,\int_0^R dr_2\int_0^{2\pi}d\theta_2 \times \nonumber \\
&&\times \left(\frac{1}{2}f(1;2)\phi(t,1)\phi(t,2)+\frac{1}{2}g(1;2)\pi(t,1)\pi(t,2)+h(1;2)\phi(t,1)\pi(t,2)\right),
\end{eqnarray}
where the triplet of time independent functions $f(1;2)\equiv f(r_1,\theta_1;r_2,\theta_2)$,
$g(1;2)\equiv g(r_1,\theta_1;r_2,\theta_2)$ and $h(1;2)\equiv h(r_1,\theta_1;r_2,\theta_2)$
are real functions to be identified. In particular note that
the functions $f(1;2)$ and $g(1;2)$ may be taken symmetric under the exchange $1\leftrightarrow 2$
(only their symmetric parts contribute to $Q(t)$), while
$h(1;2)$ does not possess any specific symmetry under that exchange,
\begin{equation}
f(2;1)=f(1;2),\qquad
g(2;1)=g(1;2).
\end{equation}
Through Poisson brackets with the fields $\phi(t,r,\theta)$ and $\pi(t,r,\theta)$ such charges $Q(t)$
generate the following linear transformations mixing the phase space fields,
\begin{equation}
\delta \phi(t,r,\theta)=\int_0^R dr_1\int_0^{2\pi}d\theta_1
\left[g(r,\theta;r_1,\theta_1)\,\pi(t,r_1,\theta_1)\,+\,h(r_1,\theta_1;r,\theta)\,\phi(t,r_1,\theta_1)\right],
\label{eq:var1}
\end{equation}
\begin{equation}
\delta \pi(t,r,\theta)=-\int_0^R dr_1\int_0^{2\pi}d\theta_1
\left[f(r,\theta;r_1,\theta_1)\,\phi(t,r_1,\theta_1)\,+\,h(r,\theta;r_1,\theta_1)\,\pi(t,r_1,\theta_1)\right].
\label{eq:var2}
\end{equation}
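Here $\delta\phi\equiv\left\{\phi,Q(t)\right\}$ and $\delta\pi\equiv\left\{\pi,Q(t)\right\}$, on account of the canonical equal-time brackets $\left\{\phi(t,1),\pi(t,2)\right\}=\delta(r_1-r_2)\,\delta(\theta_1-\theta_2)$, a step made explicit here for the reader's convenience.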
In a form more convenient for later use, these transformations may also be expressed as,
\begin{equation}
\delta \phi(t,r,\theta)=\int_0^R dr_1\,r_1\int_0^{2\pi}d\theta_1
\left[g(r,\theta;r_1,\theta_1)\,\left(\frac{1}{r_1}\pi(t,r_1,\theta_1)\right)
\,+\,\frac{1}{r_1}h(r_1,\theta_1;r,\theta)\,\phi(t,r_1,\theta_1)\right],
\label{eq:var3}
\end{equation}
\begin{eqnarray}
\delta\left(\frac{1}{r}\pi(t,r,\theta)\right) &=& -\int_0^R dr_1\,r_1\int_0^{2\pi}d\theta_1 \times \nonumber \\
&&\ \times\left[ \frac{1}{r r_1}f(r,\theta;r_1,\theta_1)\,\phi(t,r_1,\theta_1)\,+\,
\frac{1}{r}h(r,\theta;r_1,\theta_1)\,\left(\frac{1}{r_1}\pi(t,r_1,\theta_1)\right)\right].
\label{eq:var4}
\end{eqnarray}
A first series of restrictions on the functions $f$, $g$ and $h$ is implied by the necessity that the transformed
fields still obey their relevant Robin boundary conditions of parameter $\lambda$ or $(\lambda+1)$, as the case
may be. Consequently one must require,
\begin{equation}
\partial_r g(r,\theta;r_1,\theta_1)_{|_{r=R}}=\frac{\lambda}{R}\ g(R,\theta;r_1,\theta_1),\qquad
\partial_r h(r_1,\theta_1;r,\theta)_{|_{r=R}}=\frac{\lambda}{R}\ h(r_1,\theta_1;R,\theta),
\end{equation}
\begin{equation}
\partial_r f(r,\theta;r_1,\theta_1)_{|_{r=R}}=\frac{\lambda+1}{R}\ f(R,\theta;r_1,\theta_1),\qquad
\partial_r h(r,\theta;r_1,\theta_1)_{|_{r=R}}=\frac{\lambda+1}{R}\ h(R,\theta;r_1,\theta_1).
\end{equation}
Note that since $\phi(t,r,\theta)$ and $\pi(t,r,\theta)/r$ both obey the radial Robin boundary condition
with a same parameter $\lambda$, as a matter of fact so do the functions $f(1;2)/(r_1 r_2)$,
$g(1;2)$ and $h(1;2)/r_1$, which is equivalent to the above boundary conditions
for the triplet of functions $(f,g,h)$.
Different strategies may be considered in order to determine under which conditions the charges $Q(t)$
generate global symmetries of the field dynamics. By definition a symmetry leaves the equations of motion
form-invariant; through a direct substitution into the Hamiltonian equations of motion of the above
transformations $\delta\phi$ and $\delta\pi$ one may find specific restrictions that the functions $f$, $g$ and $h$
would have to meet. However there would then still remain the necessity to check under which conditions
the composition (through Poisson brackets or commutators) of transformations generated by such charges $Q(t)$
for different such triples of functions $(f,g,h)$ closes on the same class of transformations.
A rather more efficient approach considers directly the Hamiltonian action of the system, $S_H$.
If the above transformations define a global symmetry leaving the equations of motion form-invariant,
necessarily they leave the action invariant possibly up to a total time derivative. If a total time derivative
is indeed thereby induced it could be that the associated Noether charge, even though conserved under its
complete equation of motion\footnote{An explicitly time dependent phase space observable, $Q(t)$,
may be a conserved quantity nevertheless, provided its Poisson bracket (or quantum commutator)
with the Hamiltonian is non-vanishing and such that $dQ(t)/dt=\partial Q(t)/\partial t+\left\{Q(t),H\right\}=0$,
namely $\left\{Q(t),H\right\}=-\partial Q(t)/\partial t\ne 0$.}, possesses an explicit time dependence as a function
over phase space~\cite{Canonical}, which in turn may lead to a classical central extension of the algebra spanned
by the Noether generators of such global symmetries of the system\footnote{Incidentally, on account of
Noether's (first) theorem, a (global) continuous symmetry generator is always an (on-shell) conserved quantity
over phase space. The converse, however, is not necessarily true: a conserved phase space quantity is
not necessarily the generator of a symmetry. We shall see an explicit example later on.}.
Let us thus consider the Hamiltonian action (\ref{eq:SH}) with the Hamiltonian given as in (\ref{eq:H2}) (without
a boundary term), and determine its linearised variation under the transformations $\delta\phi$ and $\delta\pi$
in (\ref{eq:var1}) and (\ref{eq:var2}). Through a series of careful integrations by parts in $r$ and in $\theta$,
and by exploiting the Robin boundary conditions that the triplet of functions $(f,g,h)$ must satisfy,
the linearised variation of the Hamiltonian action reduces to the expression,
\begin{eqnarray}
\delta S_H &=& \int dt \int_0^R dr_1\int_0^{2\pi}d\theta_1 \int_0^R dr_2\int_0^{2\pi} d\theta_2\times \nonumber \\
&\times&\Bigg\{\partial_t\left[\frac{1}{2}g(1;2)\pi(t,1)\pi(t,2)-\frac{1}{2}f(1;2)\phi(t,1)\phi(t,2)\right] + \Bigg. \nonumber \\
&&\qquad +\ \frac{1}{r_1}\pi(t,1)\pi(t,2)h(1;2) \ + \nonumber \\
&&\qquad +\ \frac{1}{r_2}\phi(t,1)\pi(t,2)f(1;2) + \phi(t,1)\pi(t,2)\,r_1(\Delta_1-\mu^2)g(1;2) \ + \nonumber \\
&&\qquad \Bigg.+\ \phi(t,1)\phi(t,2)\,r_1(\Delta_1-\mu^2) h(2;1)\Bigg\}.
\end{eqnarray}
In order for this variation to reduce solely to a total time derivative contribution, in addition to the properties
already listed above, the triplet of functions $(f,g,h)$ must satisfy the following further conditions,
\begin{eqnarray}
&&\frac{1}{r_1}h(1;2)+\frac{1}{r_2}h(2;1) = 0, \nonumber \\
&&f(1;2) = -r_1 r_2(\Delta_1 -\mu^2)g(1;2), \\
&&r_1(\Delta_1 - \mu^2) h(2;1) + r_2 (\Delta_2 -\mu^2)h(1;2) = 0. \nonumber
\end{eqnarray}
Incidentally recall that $f(1;2)$ needs to be symmetric under the exchange $1\leftrightarrow 2$.
Hence we have established so far that
\begin{eqnarray}
Q(t) &=& \int_0^R dr_1\,r_1\int_0^{2\pi}d\theta_1\,\int_0^Rdr_2\,r_2\int_0^{2\pi}d\theta_2 \times
\left\{\frac{1}{2}\left(\frac{1}{r_1}\pi(t,1)\right)\left(\frac{1}{r_2}\pi(t,2)\right)\,g(1;2)\,-\, \right. \nonumber \\
&& \qquad \left. - \frac{1}{2}\phi(t,1)\phi(t,2)\,(\Delta_1-\mu^2)\,g(1;2)\,+\,
\phi(t,1)\left(\frac{1}{r_2}\pi(t,2)\right)\,\left(\frac{1}{r_1}h(1;2)\right)\right\}.
\label{eq:Q2}
\end{eqnarray}
Note that the functions $g(1;2)$, $h(1;2)/r_1$, and $f(1;2)/(r_1r_2)$, all satisfy radial Robin boundary
conditions of parameter $\lambda$, in both their dependencies in $r_1$ and $r_2$. Given that the functions
$\varphi_{\ell,n}(r,\theta)$ provide a basis of functions defined on the disk $D(R)$ and obeying this very Robin
boundary condition, it proves relevant to expand in that basis the symmetry transformation functions just listed
in order to solve the different conditions they must meet, in the following form,
\begin{equation}
g(r_1,\theta_1;r_2,\theta_2)=\sum_{\ell_1,n_1}\sum_{\ell_2,n_2}\alpha(\ell_1,n_1;\ell_2,n_2)\,
\varphi_{\ell_1,n_1}(r_1,\theta_1)\,\varphi_{\ell_2,n_2}(r_2,\theta_2),
\end{equation}
and,
\begin{equation}
\frac{1}{r_1}\,h(r_1,\theta_1;r_2,\theta_2)=\sum_{\ell_1,n_1}\sum_{\ell_2,n_2}\beta(\ell_1,n_1;\ell_2,n_2)\,
\varphi_{\ell_1,n_1}(r_1,\theta_1)\,\varphi_{\ell_2,n_2}(r_2,\theta_2),
\end{equation}
where the complex coefficients $\alpha(\ell_1,n_1;\ell_2,n_2)$ and $\beta(\ell_1,n_1;\ell_2,n_2)$
are to be restricted in order to solve for all the necessary properties that the functions $g(1;2)$ and $h(1;2)/r_1$
must possess, while each of the double sums runs over $\ell\in\mathbb{Z}$ and $n=1,2,3,\cdots$.
The function $g(1;2)$ needs to be real valued and symmetric in $(1 \leftrightarrow 2)$,
while $f(1;2)=-r_1r_2(\Delta_1 - \mu^2)g(1;2)$ needs to be real and symmetric under that exchange as well.
These three conditions translate into the following restrictions for $\alpha(\ell_1,n_1;\ell_2,n_2)$,
\begin{eqnarray}
&&\alpha^*(\ell_1,n_1;\ell_2,n_2)=(-1)^{\ell_1+\ell_2}\,\alpha(-\ell_1,n_1;-\ell_2,n_2), \nonumber \\
&&\alpha(\ell_1,n_1;\ell_2,n_2)-\alpha(\ell_2,n_2;\ell_1,n_1)=0, \\
&&\left(\omega^2_{\ell_1,n_1}-\omega^2_{\ell_2,n_2}\right)\,\alpha(\ell_1,n_1;\ell_2,n_2)=0. \nonumber
\end{eqnarray}
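To make the origin of these restrictions explicit (assuming, as for the standard basis on the disk built from the Bessel functions $J_\ell(k_{\ell,n}r)$ and the phases $e^{i\ell\theta}$, that the basis functions obey $\varphi^*_{\ell,n}(r,\theta)=(-1)^\ell\,\varphi_{-\ell,n}(r,\theta)$): imposing $g^*(1;2)=g(1;2)$ on the expansion above and comparing coefficients term by term produces the first condition; the symmetry of $g(1;2)$ produces the second; while the symmetry of $f(1;2)=-r_1r_2(\Delta_1-\mu^2)g(1;2)$, combined with the eigenvalue property of $(\Delta-\mu^2)$ on the basis functions, produces the third.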
Given the distributions of the roots $x_{\ell,n}$ for which we assume (as a function of the value for $\lambda$)
the generic situation with only a double degeneracy under $\ell\leftrightarrow -\ell$ for fixed $n$,
the general solution to these conditions is as follows.
When $|\ell_1|\ne |\ell_2|$ all coefficients $\alpha(\ell_1,n_1;\ell_2,n_2)$ vanish identically.
When $|\ell_1|=|\ell_2|$ but $n_1\ne n_2$ again they all vanish. However, for $|\ell_1|=|\ell_2|$ and $n_1=n_2$
we have the following non-vanishing values, for $\ell\in\mathbb{Z}$ and $n=1,2,3,\cdots$,
\begin{eqnarray}
\alpha(\ell,n;\ell,n)\equiv\alpha_+(\ell,n) \in \mathbb{C},\qquad &{\rm with}&\quad \alpha^*_+(\ell,n)=\alpha_+(-\ell,n); \\
\alpha(\ell,n;-\ell,n)\equiv\alpha_-(\ell,n) \in\mathbb{R},\qquad &{\rm with}&\quad \alpha_-(-\ell,n)=\alpha_-(\ell,n),
\end{eqnarray}
where $\alpha_+(\ell,n)$ and $\alpha_-(\ell,n)$ are arbitrary complex and real coefficients, respectively.
Consequently one has,
\begin{equation}
g(r_1,\theta_1;r_2,\theta_2)=\sum_{\ell,n}\alpha_+(\ell,n)\,\varphi_{\ell,n}(r_1,\theta_1)\,\varphi_{\ell,n}(r_2,\theta_2)\,+\,
\sum_{\ell,n}\alpha_-(\ell,n)\,\varphi_{\ell,n}(r_1,\theta_1)\,\varphi_{-\ell,n}(r_2,\theta_2).
\end{equation}
The function $h(1;2)$ needs to be real, and to obey the two conditions
\begin{equation}
\frac{1}{r_1}\,h(1;2)\,+\,\frac{1}{r_2}\,h(2;1)=0,\qquad
\Delta_1\left(\frac{1}{r_2}\,h(2;1)\right)\,+\,\Delta_2\left(\frac{1}{r_1}\,h(1;2)\right)=0.
\end{equation}
These conditions then translate into the following restrictions for the coefficients $\beta(\ell_1,n_1;\ell_2,n_2)$,
\begin{eqnarray}
&&\beta(\ell_1,n_1;\ell_2,n_2)=(-1)^{\ell_1+\ell_2}\,\beta(-\ell_1,n_1;-\ell_2,n_2), \nonumber \\
&&\beta(\ell_1,n_1;\ell_2,n_2)\,+\,\beta(\ell_2,n_2;\ell_1,n_1)=0, \\
&&\left(k^2_{\ell_1,n_1}\,-\,k^2_{\ell_2,n_2}\right)\,\beta(\ell_1,n_1;\ell_2,n_2)=0. \nonumber
\end{eqnarray}
As a consequence all the coefficients $\beta(\ell_1,n_1;\ell_2,n_2)$ vanish identically unless
$\ell_1=\ell=-\ell_2$ and $n_1=n_2$, in which case one finds, with $\ell\in\mathbb{Z}$ and $n=1,2,3,\cdots$,
\begin{equation}
\beta(\ell,n;-\ell,n)=i\beta(\ell,n),\qquad {\rm with}\quad \beta(\ell,n)\in\mathbb{R}
\qquad {\rm and}\quad \beta(-\ell,n)=-\beta(\ell,n),
\end{equation}
where the $\beta(\ell,n)$ form a collection of arbitrary real constants (note that $\beta(0,n)=0$). Thus finally,
\begin{equation}
\frac{1}{r_1}\,h(r_1,\theta_1;r_2,\theta_2)= i\sum_{\ell,n}
\beta(\ell,n)\,\varphi_{\ell,n}(r_1,\theta_1)\,\varphi_{-\ell,n}(r_2,\theta_2).
\end{equation}
Having identified the triplet of functions $(f,g,h)$, a direct substitution into (\ref{eq:Q2}) determines
the explicit form for the general generator of all global (and dynamical) symmetries of the free field $\phi(t,r,\theta)$
in terms of its creation and annihilation operators. A patient but straightforward calculation then establishes that,
once brought into normal ordered form in the case of the quantum operator,
\begin{eqnarray}
Q(t) &=& \sum_{\ell,n}\alpha_+(\ell,n)\,(-1)^\ell\,\omega_{\ell,n}\,a^\dagger(\ell,n)a(-\ell,n)\ + \ \nonumber \\
&+&\sum_{\ell,n}\left(\alpha_-(\ell,n)\,\omega_{\ell,n} + \beta(\ell,n)\right)
(-1)^\ell\,a^\dagger(\ell,n) a(\ell,n).
\end{eqnarray}
Note that this operator is such that $Q^\dagger(t)=Q(t)$, as it should. Furthermore, since
one obviously has $[Q(t),\hat{H}]=0$ (for which the degeneracy property
$\omega_{-\ell,n}=\omega_{\ell,n}$ is crucial), even though our approach accounted for the possibility that
the generator of the general global symmetry could possess an explicit time dependence as a function
defined over phase space, it turns out that in fact the conserved quantity $Q(t)$ does not possess any explicit
time dependence.
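As for the hermiticity property noted above, it may be checked explicitly for the first sum by relabelling $\ell\to-\ell$ in its Hermitian conjugate and then using $\alpha^*_+(\ell,n)=\alpha_+(-\ell,n)$ together with $\omega_{-\ell,n}=\omega_{\ell,n}$ and $(-1)^{-\ell}=(-1)^\ell$, while the second sum is manifestly Hermitian since its coefficients are real.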
All the possible global symmetries of the free field $\phi(t,r,\theta)$ have thus been identified,
of which the complete set of conserved and time independent Noether generators is composed of
the following bilinear operators in the Fock algebra generators,
\begin{equation}
N_{\ell,n}=a^\dagger(\ell,n)\,a(\ell,n),\qquad
Q_{\ell,n}=a^\dagger(\ell,n)\,a(-\ell,n),\qquad \ell\in\mathbb{Z},\quad n=1,2,3,\cdots,
\end{equation}
such that
\begin{equation}
N^\dagger_{\ell,n}=N_{\ell,n},\qquad
Q^\dagger_{\ell,n}=Q_{-\ell,n},\qquad
Q_{0,n}=N_{0,n},
\end{equation}
and with, as constant group parameters, essentially the linearly independent and arbitrary coefficients
$\alpha_+(\ell,n)\in\mathbb{C}$ and
$\alpha_-(\ell,n), \beta(\ell,n)\in\mathbb{R}$ such that
$\alpha^*_+(\ell,n)=\alpha_+(-\ell,n)$, $\alpha_-(-\ell,n)=\alpha_-(\ell,n)$, and $\beta(-\ell,n)=-\beta(\ell,n)$
(thus $\beta(0,n)=0$).
\section{The Complete Global Symmetry Group}
\label{Sect4}
The algebra of all global symmetries of the 2+1 dimensional free scalar field theory on the disk $D(R)$
is thus spanned by the operators $N_{\ell,n}$ and $Q_{\ell,n}$, respectively the number operator and the
angular-momentum flipping operator for each of the $(\ell,n)$ modes of the stationary waves of the field
and its conjugate momentum inside the disk. It should be clear that these transformations do indeed transform a solution
to the Klein-Gordon equation into another solution to the same Klein-Gordon equation, and this without changing
the total energy of that field configuration since $\omega_{-\ell,n}=\omega_{\ell,n}$.
Furthermore in particular, note that the conserved total energy and angular-momentum of the
system are part of that large dynamical symmetry, which is much larger than the finite dimensional global symmetry
group of the underlying spacetime geometry with the topology of $\mathbb{R}\times D(R)$, namely
time translations and disk rotations, whose generators, when acting on the field and its conjugate momentum,
are, respectively\footnote{On account of the completeness relation (\ref{eq:complete})
these two conserved Noether charges correspond to the choices $g(1;2)=\delta(r_1-r_2)\,\delta(\theta_1 - \theta_2)/r_1$
and $h(1;2)=0$ in the case of $H$, and to the choices $g(1;2)=0$ and
$h(1;2)=\delta(r_1-r_2)\partial_{\theta_1}\delta(\theta_1-\theta_2)$ in the case of $L$, thereby being also
spatially local bilinear quantities in phase space.},
\begin{equation}
\hat{H}=\sum_{\ell,n}\omega_{\ell,n}\,N_{\ell,n},\qquad
\hat{L}=\sum_{\ell,n}\,\ell\,N_{\ell,n}.
\end{equation}
The commutator algebra generated by all these conserved Noether charges of course closes,
and is given by,
\begin{eqnarray}
\left[N_{\ell_1,n_1},N_{\ell_2,n_2}\right] &=& 0, \nonumber \\
\left[N_{\ell_1,n_1},Q_{\ell_2,n_2}\right] &=& \delta_{\ell_1,\ell_2}\delta_{n_1,n_2}\,Q_{\ell_1,n_1}\,-\,
\delta_{\ell_1,-\ell_2}\delta_{n_1,n_2}\,Q_{-\ell_1,n_1}, \\
\left[Q_{\ell_1,n_1},Q_{\ell_2,n_2}\right] &=&
\delta_{\ell_1,-\ell_2}\delta_{n_1,n_2}\left(N_{\ell_1,n_1} - N_{-\ell_1,n_1}\right).
\end{eqnarray}
In particular it also follows that,
\begin{equation}
\left[\hat{H},N_{\ell,n}\right]=0,\qquad
\left[\hat{L},N_{\ell,n}\right]=0,
\end{equation}
\begin{equation}
\left[\hat{H},Q_{\ell,n}\right]=0,\qquad
\left[\hat{L},Q_{\ell,n}\right]=2\ell\,Q_{\ell,n},
\end{equation}
as they should of course, while furthermore,
\begin{eqnarray}
\left[N_{\ell_1,n_1},a(\ell_2,n_2)\right]=-\delta_{\ell_1,\ell_2}\delta_{n_1,n_2}\,a(\ell_1,n_1) &,&
\left[N_{\ell_1,n_1},a^\dagger(\ell_2,n_2)\right]=+\delta_{\ell_1,\ell_2}\delta_{n_1,n_2}\,a^\dagger(\ell_1,n_1),
\nonumber \\
&& \\
\left[Q_{\ell_1,n_1},a(\ell_2,n_2)\right]=-\delta_{\ell_1,\ell_2}\delta_{n_1,n_2}\,a(-\ell_1,n_1) &,&
\left[Q_{\ell_1,n_1},a^\dagger(\ell_2,n_2)\right]=+\delta_{\ell_1,-\ell_2}\delta_{n_1,n_2}\,a^\dagger(\ell_1,n_1).
\nonumber
\end{eqnarray}
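This commutator algebra is readily verified numerically on truncated Fock spaces. The following minimal Python sketch, an independent sanity check added here for the reader's convenience rather than part of the derivation, confirms the relations for a single pair of modes $(\ell,n)$ and $(-\ell,n)$ with $\ell\ne 0$; the occupation cutoff is handled by projecting onto states below the truncation edge, where the truncated operators obey the exact algebra:
\begin{verbatim}
import numpy as np

dim = 6  # Fock space truncation per mode
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # annihilation operator
ad = a.conj().T                               # creation operator
I = np.eye(dim)

# Two modes: "+" for (l, n) and "-" for (-l, n), with l != 0.
a_p, ad_p = np.kron(a, I), np.kron(ad, I)
a_m, ad_m = np.kron(I, a), np.kron(I, ad)

N_p, N_m = ad_p @ a_p, ad_m @ a_m  # N_{l,n} and N_{-l,n}
Q_p, Q_m = ad_p @ a_m, ad_m @ a_p  # Q_{l,n} and Q_{-l,n} = Q_{l,n}^dag

def comm(A, B):
    return A @ B - B @ A

# Projector onto states with both occupation numbers below the cutoff.
p = np.diag((np.arange(dim) < dim - 1).astype(float))
P = np.kron(p, p)

print(np.allclose(comm(N_p, N_m), 0))                    # [N,N] = 0
print(np.allclose(comm(N_p, Q_p) @ P, Q_p @ P))          # [N_l,Q_l] = Q_l
print(np.allclose(comm(N_p, Q_m) @ P, -Q_m @ P))         # [N_l,Q_-l] = -Q_-l
print(np.allclose(comm(Q_p, Q_m) @ P, (N_p - N_m) @ P))  # [Q_l,Q_-l] = N_l - N_-l
\end{verbatim}
All four checks print {\tt True}.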
In order to identify the symmetry group associated to the above algebra generated by $N_{\ell,n}$ and $Q_{\ell,n}$
let us take as a clue the expressions for the total Hamiltonian and angular-momentum operators
in (\ref{eq:Hsu2}) and (\ref{eq:Lsu2}). Clearly quantum sectors with different values for $n=1,2,3,\cdots$
are all decoupled from one another, and so are the sectors with different values of $|\ell |$ for $\ell\in\mathbb{Z}$.
However given a fixed value for $n$, the sectors with opposite values of $\ell\ne 0$ and $-\ell$ are coupled
to one another through the action of the global symmetries generated by $N_{\ell,n}$ and $Q_{\ell,n}$.
Let us thus first consider the sector with $\ell=0$ and a given value for $n=n_0$. Since such a sector does
not contribute to the total angular-momentum $\hat{L}$, is left invariant by the sole symmetry generator
$N_{0,n_0}=Q_{0,n_0}=a^\dagger(0,n_0)a(0,n_0)$ and contributes to the total energy as
$\hat{H}(0,n_0)=\omega_{0,n_0}N_{0,n_0}$, any such sector is equivalent to that of an ordinary one-dimensional
harmonic oscillator of angular frequency $\omega_{0,n_0}$, of which the global symmetry is the
U(1)$_{0,n_0}$ phase symmetry generated by the number operator $N_{0,n_0}$. Hence the complete
global symmetry of all $\ell=0$ sectors of the free scalar field in the disk $D(R)$
is $\bigotimes_{n_0=1}^\infty U(1)_{0,n_0}$.
Consider now a specific non-vanishing and positive value of $\ell=1,2,3,\cdots$
as well as a given value for $n=n_\ell$. Let us then adapt the notations as follows,
in order to emphasize the analogy to be highlighted,
\begin{equation}
a_+\equiv a(\ell,n_\ell),\qquad
a_-\equiv a(-\ell,n_\ell),\qquad
a^\dagger_+\equiv a^\dagger(\ell,n_\ell),\qquad
a^\dagger_-\equiv a^\dagger(-\ell,n_\ell),
\end{equation}
and consider the following combinations of the symmetry generators for the sectors $(\ell,n_\ell)$ and $(-\ell,n_\ell)$,
\begin{eqnarray}
T_0 &=& T_0(\ell,n_\ell)\equiv
\frac{1}{2}\left(N_{\ell,n_\ell}+N_{-\ell,n_\ell}\right)=\frac{1}{2}\left(a^\dagger_+ a_+ + a^\dagger_- a_-\right),
\nonumber \\
T_3 &=& T_3(\ell,n_\ell)\equiv
\frac{1}{2}\left(N_{\ell,n_\ell}-N_{-\ell,n_\ell}\right)=\frac{1}{2}\left(a^\dagger_+ a_+ - a^\dagger_- a_-\right), \\
T_+&=& T_+(\ell,n_\ell)\equiv Q_{\ell,n_\ell}=a^\dagger_+ a_- \equiv T_1 + i T_2, \nonumber \\
T_-&=& T_-(\ell,n_\ell)\equiv Q_{-\ell,n_\ell}=a^\dagger_- a_+ \equiv T_1 - i T_2.
\end{eqnarray}
Obviously $2\omega_{\ell,n_\ell}T_0$ is the total contribution of the sectors $(\ell,n_\ell)$ and $(-\ell,n_\ell)$ to the
total energy of the system, while $2\ell T_3$ is that to its total angular-momentum. Once again $T_0$
generates a global U(1)$_{\ell,n_\ell}$ phase symmetry for these two sectors of the system.
However one has furthermore
\begin{equation}
[T_3,T_+]=+T_+,\qquad
[T_3,T_-]=-T_-,\qquad
[T_+,T_-]=2T_3,
\end{equation}
or equivalently,
\begin{equation}
[T_i,T_j]=i\epsilon_{ijk}\,T_k,\qquad \epsilon_{123}=+1,\qquad i,j,k=1,2,3,
\end{equation}
in which one recognises the SU(2) Lie algebra. Indeed the Fock algebras $(a_\pm,a^\dagger_\pm)$
are precisely those of the two-dimensional spherically symmetric harmonic oscillator in the Euclidean plane
(in the helicity basis), which is well known to possess an SU(2) dynamical symmetry with the above SU(2) algebra.
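For instance, the first of these relations follows in one line from the Fock algebra, a check spelled out here for completeness,
\begin{equation}
[T_3,T_+]=\frac{1}{2}\left[a^\dagger_+ a_+ - a^\dagger_- a_-\,,\,a^\dagger_+ a_-\right]
=\frac{1}{2}\left(a^\dagger_+ a_- + a^\dagger_+ a_-\right)=T_+,
\end{equation}
and similarly for the other two.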
Thus the sectors $(\ell,n_\ell)$ and $(-\ell,n_\ell)$ with $\ell\ne 0$ of the free scalar field in the disk $D(R)$
possess a global and dynamical SU(2)$_{\ell,n_\ell}$ symmetry, which is not spatially local in phase space.
Consequently one comes to the final conclusion that the complete global symmetry group of the free scalar field
in the disk $D(R)$ -- which for most of it is a dynamical and spatially non-local symmetry group -- is identified as
the following countably infinite symmetry group
\begin{equation}
\bigotimes_{n_0=1}^\infty U(1)_{0,n_0}\bigotimes_{\ell=1}^\infty\bigotimes_{n_\ell=1}^\infty U(2)_{\ell,n_\ell}.
\end{equation}
The finite dimensional symmetry group of the underlying spacetime geometry, namely the direct product
of global time translations and space rotations, is a specific subgroup of the above, with the following two abelian
generators which are also spatially local in phase space,
\begin{equation}
\hat{H}=\sum_{n_0=1}^\infty\omega_{0,n_0}N_{0,n_0}
+\sum_{\ell=1}^\infty\sum_{n_\ell=1}^\infty\,2\omega_{\ell,n_\ell}\,T_0(\ell,n_\ell),\qquad
\hat{L}=\sum_{\ell=1}^\infty\sum_{n_\ell=1}^\infty\,2\ell\,T_3(\ell,n_\ell).
\end{equation}
To make the above points somewhat more explicit, let us also consider now the finite symmetry transformations
generated by each of the generators $N_{\ell,n}$, $Q_{\ell,n}$ and $Q^\dagger_{\ell,n}=Q_{-\ell,n}$
to check that indeed they map solutions into other solutions to the equations of motion, by redefining
and mixing the mode amplitudes $a(\ell,n)$ and $a^\dagger(\ell,n)$ and thus the latter's contributions
to the field and its conjugate momentum, $\phi(t,r,\theta)$ and $\pi(t,r,\theta)$, in the Heisenberg picture.
Beginning with the Hermitian number operators $N_{\ell,n}=N^\dagger_{\ell,n}$, let us consider a specific
but otherwise arbitrary choice of values $\ell_0\in\mathbb{Z}$ and $n_0\in\mathbb{N}^+$ and the number operator
$N_{\ell_0,n_0}=a^\dagger(\ell_0,n_0)a(\ell_0,n_0)$ corresponding to the mode $(\ell_0,n_0)$ of the field.
Since the only sector $(\ell,n)$ and its Fock algebra which does not commute with $N_{\ell_0,n_0}$
is that of the mode $(\ell_0,n_0)$, all finite and global symmetries generated by that number operator
through the action of the unitary operator
\begin{equation}
U(\alpha)\equiv e^{i\alpha N_{\ell_0,n_0}},\qquad \alpha\in\mathbb{R},
\end{equation}
leave invariant all sectors $(\ell,n)\ne(\ell_0,n_0)$, and only act on the single sector $(\ell_0,n_0)$.
In particular given that
\begin{equation}
\left[N_{\ell_0,n_0},a(\ell_0,n_0)\right]=-a(\ell_0,n_0),\qquad
\left[N_{\ell_0,n_0},a^\dagger(\ell_0,n_0)\right]=+a^\dagger(\ell_0,n_0),
\end{equation}
one readily establishes that
\begin{eqnarray}
\tilde{a}(\ell_0,n_0) &\equiv& U(\alpha)\,a(\ell_0,n_0)\,U^\dagger(\alpha)=e^{-i\alpha}\,a(\ell_0,n_0), \nonumber \\
\tilde{a}^\dagger(\ell_0,n_0) &\equiv& U(\alpha)\,a^\dagger(\ell_0,n_0)\,U^\dagger(\alpha)
=e^{i\alpha}\,a^\dagger(\ell_0,n_0).
\end{eqnarray}
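These relations follow from the standard adjoint-action expansion, recalled here for completeness: since $\left[N_{\ell_0,n_0},a(\ell_0,n_0)\right]=-a(\ell_0,n_0)$, one has
\begin{equation}
U(\alpha)\,a(\ell_0,n_0)\,U^\dagger(\alpha)=\sum_{k=0}^\infty\frac{(i\alpha)^k}{k!}\,
{\rm ad}^k_{N_{\ell_0,n_0}}\,a(\ell_0,n_0)
=\sum_{k=0}^\infty\frac{(-i\alpha)^k}{k!}\,a(\ell_0,n_0)=e^{-i\alpha}\,a(\ell_0,n_0),
\end{equation}
where ${\rm ad}_N\,a\equiv[N,a]$.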
Note that these unitary transformations leave invariant the Fock algebra of the creation and annihilation
operators of the sector $(\ell_0,n_0)$, as well as its Fock vacuum. When acting on the phase space fields
as $U(\alpha) A\, U^\dagger(\alpha)$ where $A$ is any operator, clearly these transformations only lead
to a phase redefinition of the mode amplitudes $a(\ell_0,n_0)$ and $a^\dagger(\ell_0,n_0)$,
thereby indeed mapping any given solution to some other
solution of the same Hamiltonian equations of motion of the Klein-Gordon scalar field in the disk.
Turning now to the angular-momentum flipping operators $Q_{\ell,n}$, since $Q_{0,n}=N_{0,n}$
let us consider a specific but otherwise arbitrary choice of values $\ell_0\in\mathbb{N}^+$
and $n_0\in\mathbb{N}^+$ and the operators $Q_{\ell_0,n_0}=a^\dagger(\ell_0,n_0)a(-\ell_0,n_0)$
and $Q_{-\ell_0,n_0}=Q^\dagger_{\ell_0,n_0}=a^\dagger(-\ell_0,n_0)a(\ell_0,n_0)$
corresponding to the modes $(\ell_0,n_0)$ and $(-\ell_0,n_0)$ of the field. In this case all sectors
except for these two are left invariant under any of the finite global symmetry transformations
generated by these two operators. Let us recall that these transformations are indeed symmetries
because of the degeneracy $\omega_{-\ell_0,n_0}=\omega_{\ell_0,n_0}$.
In order to work with Hermitian operators, let us introduce the operators,
\begin{equation}
T_1(\ell_0,n_0)=\frac{1}{2}\left(Q_{\ell_0,n_0}+Q^\dagger_{\ell_0,n_0}\right),\qquad
T_2(\ell_0,n_0)=-\frac{1}{2}i\left(Q_{\ell_0,n_0} - Q^\dagger_{\ell_0,n_0}\right),
\end{equation}
such that $T^\dagger_1(\ell_0,n_0)=T_1(\ell_0,n_0)$ and $T^\dagger_2(\ell_0,n_0)=T_2(\ell_0,n_0)$.
Given that one has in this case,
\begin{eqnarray}
\left[T_1(\ell_0,n_0),a(\pm\ell_0,n_0)\right]=-\frac{1}{2}a(\mp\ell_0,n_0) &,&
\left[T_2(\ell_0,n_0),a(\pm\ell_0,n_0)\right]=\pm\frac{1}{2} i a(\mp\ell_0,n_0) , \nonumber \\
\left[T_1(\ell_0,n_0),a^\dagger(\pm\ell_0,n_0)\right]=+\frac{1}{2} a^\dagger(\mp\ell_0,n_0) &,&
\left[T_2(\ell_0,n_0),a^\dagger(\pm\ell_0,n_0)\right]=\pm\frac{1}{2} i a^\dagger(\mp\ell_0,n_0),
\end{eqnarray}
the finite global and unitary symmetry transformations
\begin{equation}
U(\alpha_1)\equiv e^{i\alpha_1 T_1(\ell_0,n_0)},\qquad
U(\alpha_2)\equiv e^{i\alpha_2 T_2(\ell_0,n_0)},\qquad
\alpha_1,\alpha_2\in\mathbb{R},
\end{equation}
generate the following redefinitions and mixings of the mode amplitudes of the sectors $(\ell_0,n_0)$
and $(-\ell_0,n_0)$, which are hence indeed once again genuine symmetry transformations mapping solutions
into other solutions to the dynamical equations of the field, namely first for transformations generated by $T_1$,
\begin{eqnarray}
\tilde{a}(\pm\ell_0,n_0) &\equiv& U(\alpha_1)\,a(\pm\ell_0,n_0)\,U^\dagger(\alpha_1)
=\cos\frac{\alpha_1}{2}\,a(\pm\ell_0,n_0)\,-\,i\sin\frac{\alpha_1}{2}\,a(\mp\ell_0,n_0), \nonumber \\
\tilde{a}^\dagger(\pm\ell_0,n_0) &\equiv& U(\alpha_1)\,a^\dagger(\pm\ell_0,n_0)\,U^\dagger(\alpha_1)
=\cos\frac{\alpha_1}{2}\,a^\dagger(\pm\ell_0,n_0)\,+\,i\sin\frac{\alpha_1}{2}\,a^\dagger(\mp\ell_0,n_0),
\end{eqnarray}
as well as for transformations generated by $T_2$,
\begin{eqnarray}
\tilde{a}(\pm\ell_0,n_0) &\equiv& U(\alpha_2)\,a(\pm\ell_0,n_0)\,U^\dagger(\alpha_2)
=\cos\frac{\alpha_2}{2}\,a(\pm\ell_0,n_0)\,\mp\,\sin\frac{\alpha_2}{2}\,a(\mp\ell_0,n_0), \nonumber \\
\tilde{a}^\dagger(\pm\ell_0,n_0) &\equiv& U(\alpha_2)\,a^\dagger(\pm\ell_0,n_0)\,U^\dagger(\alpha_2)
=\cos\frac{\alpha_2}{2}\,a^\dagger(\pm\ell_0,n_0)\,\mp\,\sin\frac{\alpha_2}{2}\,a^\dagger(\mp\ell_0,n_0).
\end{eqnarray}
Note again that these unitary transformations leave invariant the Fock algebras of both sectors
$(\ell_0,n_0)$ and $(-\ell_0,n_0)$, as well as the corresponding Fock vacua.
\section{Conclusions}
\label{SectConclusions}
In view of the double degeneracy in the energy spectrum under the exchange $\ell\leftrightarrow -\ell$ for $\ell\ne 0$,
as made explicit in (\ref{eq:Hsu2}) and (\ref{eq:Lsu2}), and given the
hindsight gained through the present analysis, it would appear rather obvious that the
system indeed possesses as finite global (and dynamical) symmetries the group identified above, namely
\begin{equation}
\bigotimes_{n_0=1}^\infty U(1)_{0,n_0}\bigotimes_{\ell=1}^\infty\bigotimes_{n_\ell=1}^\infty U(2)_{\ell,n_\ell}.
\end{equation}
However, that there do not exist any further global symmetries is less obvious, a conclusion which
requires an approach such as the one used herein, based on the rationale outlined
in the Introduction.
This is not to say that there do not exist other conserved quantities for the system. Rather it means
that there do not exist conserved quantities other than $N_{\ell,n}$ and $Q_{\ell,n}$ that would
also be generators of additional continuous global symmetries of the system.
To make this point more explicit, given specific but otherwise arbitrary values for $\ell_1$ and $\ell_2$
and for $n_1$ and $n_2$, and such that $|\ell_1|\ne|\ell_2|$, consider the following operators
bilinear in the creation and annihilation operators for the mode sectors $(\ell_1,n_1)$ and $(\ell_2,n_2)$,
\begin{eqnarray}
Q(\ell_1,n_1;\ell_2,n_2;t) &=& e^{i(\omega_{\ell_2,n_2}-\omega_{\ell_1,n_1})t}\,a^\dagger(\ell_1,n_1)\,a(\ell_2,n_2),
\nonumber \\
Q^\dagger(\ell_1,n_1;\ell_2,n_2;t) &=& e^{i(\omega_{\ell_1,n_1}-\omega_{\ell_2,n_2})t}\,
a^\dagger(\ell_2,n_2)\, a(\ell_1,n_1)=Q(\ell_2,n_2;\ell_1,n_1;t),
\end{eqnarray}
in which of course then $\omega_{\ell_1,n_1}\ne\omega_{\ell_2,n_2}$. Even though they do not commute
with the Hamiltonian, $\hat{H}$, of the system, these operators are conserved quantities because of their specific
explicit time dependence, such that their Heisenberg equation of motion reads,
\begin{equation}
i\frac{dQ(\ell_1,n_1;\ell_2,n_2;t)}{dt}=i\frac{\partial Q(\ell_1,n_1;\ell_2,n_2;t)}{\partial t}
\ +\ \left[Q(\ell_1,n_1;\ell_2,n_2;t),\hat{H}\right]=0.
\end{equation}
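Indeed, spelling out the two contributions, one has
\begin{equation}
i\frac{\partial Q(\ell_1,n_1;\ell_2,n_2;t)}{\partial t}=-\left(\omega_{\ell_2,n_2}-\omega_{\ell_1,n_1}\right)Q(\ell_1,n_1;\ell_2,n_2;t),\qquad
\left[Q(\ell_1,n_1;\ell_2,n_2;t),\hat{H}\right]=\left(\omega_{\ell_2,n_2}-\omega_{\ell_1,n_1}\right)Q(\ell_1,n_1;\ell_2,n_2;t),
\end{equation}
so that the two terms cancel identically.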
Consider now the two associated conserved Hermitian operators
\begin{eqnarray}
Q_+(t) &\equiv& \frac{1}{2}\left(Q(\ell_1,n_1;\ell_2,n_2;t) + Q^\dagger(\ell_1,n_1;\ell_2,n_2;t)\right), \nonumber \\
Q_-(t) &\equiv& -\frac{1}{2}i\left(Q(\ell_1,n_1;\ell_2,n_2;t) - Q^\dagger(\ell_1,n_1;\ell_2,n_2;t)\right).
\end{eqnarray}
Using the shorthand notations $\omega_1\equiv\omega_{\ell_1,n_1}$, $\omega_2\equiv\omega_{\ell_2,n_2}$,
$a_1\equiv a(\ell_1,n_1)$, $a_2\equiv a(\ell_2,n_2)$, $a^\dagger_1\equiv a^\dagger(\ell_1,n_1)$
and $a^\dagger_2\equiv a^\dagger(\ell_2,n_2)$, one then has the relations,
\begin{eqnarray}
\left[Q_+(t),a_1\right] = e^{i(\omega_2 -\omega_1)t}\,\frac{-1}{2}\,a_2 &,&
\left[Q_-(t),a_1\right] = e^{i(\omega_2 -\omega_1)t}\,\frac{i}{2}\,a_2, \nonumber \\
\left[Q_+(t),a_2\right] = e^{i(\omega_1 -\omega_2)t}\,\frac{-1}{2}\,a_1&,&
\left[Q_-(t),a_2\right] = e^{i(\omega_1 -\omega_2)t}\,\frac{-i}{2}\,a_1, \nonumber \\
\left[Q_+(t),a^\dagger_1\right] = e^{i(\omega_1 -\omega_2)t}\,\frac{1}{2}\,a^\dagger_2 &,&
\left[Q_-(t),a^\dagger_1\right] = e^{i(\omega_1 -\omega_2)t}\,\frac{i}{2}\,a^\dagger_2, \\
\left[Q_+(t),a^\dagger_2\right] = e^{i(\omega_2 -\omega_1)t}\,\frac{1}{2}\,a^\dagger_1 &,&
\left[Q_-(t),a^\dagger_2\right] = e^{i(\omega_2 -\omega_1)t}\,\frac{-i}{2}\,a^\dagger_1.
\end{eqnarray}
For finite transformations generated by $Q_+(t)$ and $Q_-(t)$ it then follows that, given $\alpha\in\mathbb{R}$,
\begin{eqnarray}
e^{i\alpha Q_+(t)}\,a_1\,e^{-i\alpha Q_+(t)} &=&
\cos\frac{\alpha}{2}\,a_1\ -\ i\sin\frac{\alpha}{2}\,e^{i(\omega_2 - \omega_1)t}\,a_2, \nonumber \\
e^{i\alpha Q_+(t)}\,a_2\,e^{-i\alpha Q_+(t)} &=&
\cos\frac{\alpha}{2}\,a_2\ -\ i\sin\frac{\alpha}{2}\,e^{i(\omega_1 - \omega_2)t}\,a_1, \nonumber \\
e^{i\alpha Q_+(t)}\,a^\dagger_1\,e^{-i\alpha Q_+(t)} &=&
\cos\frac{\alpha}{2}\,a^\dagger_1\ +\ i\sin\frac{\alpha}{2}\,e^{i(\omega_1 - \omega_2)t}\,a^\dagger_2, \\
e^{i\alpha Q_+(t)}\,a^\dagger_2\,e^{-i\alpha Q_+(t)} &=&
\cos\frac{\alpha}{2}\,a^\dagger_2\ +\ i\sin\frac{\alpha}{2}\,e^{i(\omega_2 - \omega_1)t}\,a^\dagger_1, \nonumber
\end{eqnarray}
as well as,
\begin{eqnarray}
e^{i\alpha Q_-(t)}\,a_1\,e^{-i\alpha Q_-(t)} &=&
\cos\frac{\alpha}{2}\,a_1\ -\ \sin\frac{\alpha}{2}\,e^{i(\omega_2 - \omega_1)t}\,a_2, \nonumber \\
e^{i\alpha Q_-(t)}\,a_2\,e^{-i\alpha Q_-(t)} &=&
\cos\frac{\alpha}{2}\,a_2\ +\ \sin\frac{\alpha}{2}\,e^{i(\omega_1 - \omega_2)t}\,a_1, \nonumber \\
e^{i\alpha Q_-(t)}\,a^\dagger_1\,e^{-i\alpha Q_-(t)} &=&
\cos\frac{\alpha}{2}\,a^\dagger_1\ -\ \sin\frac{\alpha}{2}\,e^{i(\omega_1 - \omega_2)t}\,a^\dagger_2, \\
e^{i\alpha Q_-(t)}\,a^\dagger_2\,e^{-i\alpha Q_-(t)} &=&
\cos\frac{\alpha}{2}\,a^\dagger_2\ +\ \sin\frac{\alpha}{2}\,e^{i(\omega_2 - \omega_1)t}\,a^\dagger_1. \nonumber
\end{eqnarray}
Note that these two classes of unitary transformations leave invariant the Fock algebras of the mode sectors
$(\ell_1,n_1)$ and $(\ell_2,n_2)$, as they should. However clearly they do not map solutions
to the Klein-Gordon equation into some other solutions to the same equation, and thus do not define symmetries
of the system. Even though conserved, the operators $Q_+(t)$ and $Q_-(t)$ are not generators of any global
symmetry of the system.
These considerations are indeed in perfect agreement with the conclusions of the detailed and general
analysis developed in the present paper, which established that all global (and dynamical) symmetries
of the system are generated by the conserved operators $N_{\ell,n}$ and $Q_{\ell,n}$, and only by those bilinear
operators in the creation and annihilation operators of the field's standing modes.
Considering all possible bilinear quantities in the phase space degrees of freedom, even those
that are not spatially local, and requiring the Hamiltonian action to be left invariant up to a total
time derivative under the variations of the phase space degrees of freedom generated by all such spatially bilocal
phase space bilinear quantities, provides a systematic rationale for identifying all possible
global (and dynamical) symmetries of this system with linear equations of motion.
Relying on the understanding achieved through that approach in the present case of a free field theory
restricted to a bounded spatial domain, in a forthcoming work we plan to
apply the same rationale to a free scalar field theory defined this time over an unbounded
Minkowski spacetime, in order then to also establish the relation between all global (and dynamical)
symmetries of that system and its BMS symmetries.
\section*{Acknowledgements}
DBI acknowledges the support of an ``extraordinary" postdoctoral Fellowship of the
{\it Acad\'emie de Recherche et d'Enseignement Sup\'erieur} (ARES) of the Wallonia-Brussels
Federation of Belgium towards a six-month stay at CP3 during which the present work was
completed. The work of JG is supported in part by the Institut Interuniversitaire des Sciences
Nucl\'eaires (IISN, Belgium).
\section{Introduction}
\label{sec:introduction}
In fault-tolerant quantum computation based on the surface code (a likely component of future error corrected quantum computers due to the surface code's comparatively high threshold and planar connectivity requirements \cite{Brav98,Denn02,Raus07,Raus07d,Fowl12f}), the cost of a quantum algorithm is well approximated by the number of non-Clifford operations.
This is due to the fact that non-Clifford operations are performed via magic state distillation \cite{bravyi2005}, and the cost of state distillation is large.
For example, the spacetime volume (qubit-seconds) of the T state factory from \cite{fowler2018} is two orders of magnitude larger than the volume of a CNOT operation between adjacent qubits \cite{horsman2012}.
The non-Clifford gate count will likely be particularly significant for the earliest error corrected quantum computers, which will not have enough space to distill magic states in parallel.
\begin{figure*}
\label{fig:overview-dataflow}
\resizebox{\textwidth}{!}{
\includegraphics{overview-dataflow.png}
}
\caption{
Overview of the spatial layout and data flow of the \factory{15|T\rangle}{35 \epsilon^3}{|T\rangle} construction from \cite{fowler2018} (left), our \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle} construction (middle), and our $|T\rangle$-catalyzed \factory{8|T\rangle}{28\epsilon^2}{2|T\rangle} (right).
The level 1 $T$ factories (green) are effectively the same as in \cite{fowler2018}, and are performed at half code distance to balance the contributions from distillation error and code error.
The distillation limited error rates assume a physical gate error rate of $10^{-3}$, that the injection technique of \cite{li2015} can create level 0 $|T\rangle$ states with approximately that level of error, and that the code distance is large enough for the dominant source of error in the outputs to be distillation error.
The minimal distance error rates include the effects of topological errors in the surface code itself, with a code distance of 7 for level 0 $|T\rangle$ state injection, code distance 15 for level 1 factories, and code distance 31 for everything else.
The factory from \cite{fowler2018} has significantly better error suppression, but the amount of suppression is overkill unless one wants to run century-long computations without a single error.
Our factories have smaller footprints, faster output, and an amount of suppression sufficient to run proposed algorithms beyond the classically simulable regime (e.g. \cite{babbush2018}).
The error rates of the catalyzed T factory have asterisks because its errors are correlated: if one error occurs it can poison the catalyst state and cause many more errors.
This means that this factory should be used in contexts where a single error is already considered a complete failure (e.g. at the level of an entire algorithm, not as an input to further distillations).
}
\end{figure*}
\begin{figure*}
\label{fig:overview-3d}
\resizebox{\textwidth}{!}{
\includegraphics{overview-3d.png}
}
\caption{
Size comparison of various factories producing two magic states, including output error rates.
Error rates are computed assuming a physical gate error rate of $10^{-3}$, and include topological errors from the surface code itself.
Includes the \factory{15|T\rangle}{35 \epsilon^3}{|T\rangle} construction using braids \cite{fowler2012bridge} (left) and lattice surgery \cite{fowler2018} (middle left), as well as our \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle} construction (middle right) and our $|T\rangle$-catalyzed \factory{8|T\rangle}{28\epsilon^2}{2|T\rangle} (right).
$|T\rangle$ output events are indicated with red cubes.
$|CCZ\rangle$ output events are indicated with a triplet of orange cubes.
The braided factory has been scaled to account for the fact that it uses the unrotated surface code instead of the rotated surface code \cite{horsman2012}.
The braided T factory's error rate is significantly higher because it uses an older injection technique, resulting in the level 0 T gates having an error rate of $10^{-2}$ instead of $2 \cdot 10^{-3}$.
The error rate of the catalyzed T factory has an asterisk because its errors are correlated: if one error occurs it can poison the catalyst state and cause many more errors.
This means that this factory should be used in contexts where a single error is already considered a complete failure (e.g. at the level of an entire algorithm, not as an input to further distillations).
}
\end{figure*}
\begin{figure*}
\label{fig:spreadsheet}
\resizebox{\textwidth}{!}{
\includegraphics{spreadsheet.png}
}
\caption{
Screenshot of the resource estimation spreadsheet included in the supplementary materials of this paper (file name ``calculator-CCZ-2T-resources.ods"), with various interesting cases pre-entered.
Assuming a physical gate error rate of $10^{-3}$, and minimal code distances, the $|CCZ\rangle$ factory is unlikely to fail when producing on the order of $10^{10}$ states.
This is sufficient to run classically intractable chemistry algorithms \cite{babbush2018}, but not quite sufficient to factor a 1024 bit number with a 50\% success rate (assuming that factoring an $n$ bit number requires $12 n^3$ Toffoli gates and $3n$ space \cite{zalka1998fast}).
However, if the physical gate error rate is improved slightly or (more plausibly) the factory is made slightly larger by increasing the level 1 code distance from 15 to 19, then the number of states that can be produced increases to be on the order of $10^{12}$.
This allows 4096 bit numbers to be factored (though we do not recommend using a single factory for this task, since it would take 5 years to produce the necessary magic states assuming a surface code cycle time of 1 microsecond).
}
\end{figure*}
\begin{figure*}
\label{fig:diagram-style-3d}
\resizebox{\textwidth}{!}{
\includegraphics{diagram-style-3d.png}
}
\caption{
Comparison of the to-scale diagram style from \cite{fowler2018} with the exaggerated-spacing diagram style used by this paper.
The to-scale diagram style emphasizes how things fit together and is ideal when reasoning geometrically.
The exaggerated-spacing diagram style emphasizes how things connect together and is ideal when reasoning topologically.
An even more abstract diagram style for lattice surgery is the ZX calculus \cite{de2017}.
In \fig{ccz-graph} we show how to translate a topological diagram into a ZX calculus graph.
}
\end{figure*}
Over the past decade, thanks to techniques such as block codes \cite{bravyi2012, fowler2013}, bridge compression \cite{fowler2012bridge}, and many others \cite{horsman2012, campbell2017, campbell2018, litinski2018}, the cost of magic state distillation has steadily decreased.
This paper adds catalyzed phasing to the pile of known techniques, continuing the tradition of gradually chipping away at the convenient approximation that magic states are the dominant cost in error-corrected quantum computation.
Note that, in this paper, we focus on optimizing the cost of distillation in the {\em single-factory regime}.
For example, we do not investigate whether there are block code factories that can use catalyzation.
We focus on the single-factory regime because we are interested in estimating the minimum number of physical qubits needed to run classically intractable instances of various quantum algorithms at a reasonable rate, and the single-factory regime is the relevant one for these kinds of estimates.
In \fig{overview-dataflow} and \fig{overview-3d}, we give a high-level view of this paper's improvements in footprint and spacetime volume, over previous factories in the single-factory regime.
The paper is organized as follows.
\sec{introduction} provides an overview of the paper, and explains the various notation and diagram conventions we will be using.
In \sec{ccz}, we explain how to construct an efficient $|CCZ\rangle$ factory by applying the techniques of \cite{fowler2018} to the construction of \cite{jones2013, eastin2013distilling}.
In \sec{catalysis}, we construct a circuit which can transform a $|CCZ\rangle$ state into two $|T\rangle$ states if a catalyst $|T\rangle$ state is present.
In \sec{generalize}, we show that this catalyzed circuit generalizes to other phase angles and note that this generalized circuit can produce two $|\sqrt{T}\rangle$ states using only five $|T\rangle$ states.
In \sec{full}, we combine constructions from the previous sections into an efficient $|T\rangle$ factory.
Finally, in \sec{conclusion}, we discuss applications of our constructions, summarize our contributions, and point towards future work.
In this paper we will refer to factories using the notation ``\factory{|\text{In}\rangle}{f(\epsilon)}{|\text{Out}\rangle}".
The left hand side is the state input into the factory, the right hand side is the state output from the factory, and the function above the arrow indicates the amount of error suppression up to leading terms (i.e. the $f(\epsilon)$ above the arrow is shorthand for the true suppression $f(\epsilon) + O(\epsilon f(\epsilon))$).
For example, we will refer to the $|T\rangle$ state distillation based on the 15-qubit Reed-Muller code \cite{bravyi2005} as the \factory{15|T\rangle}{35 \epsilon^3}{|T\rangle}.
We use three main types of diagram in this paper: circuit diagrams, time slice diagrams, and 3D topological diagrams.
The circuit diagrams demonstrate the functionality that the 3D topological diagrams are supposed to be implementing, and the time slice diagrams are a sequence of slices through the 3D topological diagrams, showing boundary information and which patches are being merged or split.
We often provide multiple diagrams of the same construction, with common labelling between the diagrams.
For example, \fig{ccz-circuit}, \fig{ccz-slices}, and \fig{ccz-3d} are all diagrams of our CCZ factory.
Discerning readers can use the labels common to all three diagrams to verify that they agree with each other.
In particular, those three diagrams all have three output qubits labelled 1 through 3, eight ancilla qubits labelled $a$ through $h$, and four ``stabilizer qubits" labelled by the stabilizer measurement they correspond to.
To make it possible to see the internal topological structure of the 3D topological diagrams, we have chosen to significantly exaggerate the amount of space between events.
We draw operations as if they had linear $O(d)$ separation (where $d$ is the code distance), but on actual hardware the operations have a constant $O(1)$ separation.
This exaggeration of separation does not change the topology, so interpreting the figures as if they were to scale will still produce the correct computation.
But it is important to account for the distortion when computing the footprint or depth of the computation.
\fig{diagram-style-3d} shows a comparison between the old to-scale diagram style and our new exaggerated spacing style diagrams.
We will sometimes refer to multi-qubit stabilizers using a concatenated-subscript notation such as $Z_{123}$.
Each subscript refers to a separate qubit, i.e. $Z_{123} = Z_1 Z_2 Z_3$.
We will often refer to $|T\rangle$ states as having a particular ``level", e.g. ``a level 1 $|T\rangle$ state" or equivalently ``a $|T_1\rangle$ state".
The level refers to the number of distillation steps used to produce the state.
We will also refer to factories by the level of their output.
For example, our starting point is level 0 $|T\rangle$ states produced using the post-selected state injection of Li \cite{li2015}.
These $|T_0\rangle$ states are then distilled by the level 1 T factory from \cite{fowler2018} into $|T_1\rangle$ states, which we can then feed into our $|CCZ\rangle$ factory.
Lastly, we wish to point out the useful supplementary materials included with this paper.
First, because it is significantly easier to understand 3D diagrams when one is able to move the camera, the supplementary materials include SketchUp files storing the models shown in the 3D topological diagrams.
Second, the supplementary materials include a spreadsheet (file name ``calculator-CCZ-2T-resources.ods") that can compute the overhead of computations that use our factories.
Interested readers can estimate the running time and number of physical qubits required by their algorithms by entering into the spreadsheet the number of T and Toffoli gates performed by the algorithm, how many qubits the algorithm uses, and an error budget.
\fig{spreadsheet} shows a screenshot of the spreadsheet.
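As a rough indication of the kind of estimate the spreadsheet automates (the spreadsheet itself tracks more details such as code distances, error budgets, and footprint), the following minimal Python sketch computes a magic-state-limited runtime, assuming a single factory producing one $|CCZ\rangle$ state every $5.5d$ cycles at code distance $d=31$ with a 1 microsecond cycle time; the function and parameter names are illustrative, not taken from the spreadsheet:
\begin{verbatim}
# Minimal sketch of a magic-state-limited runtime estimate.
def runtime_seconds(toffoli_count, d=31, cycle_time_s=1e-6,
                    cycles_per_state_in_d=5.5):
    # One |CCZ> state per 5.5*d surface code cycles from a single factory.
    return toffoli_count * cycles_per_state_in_d * d * cycle_time_s

# Factoring an n-bit number, assuming ~12*n^3 Toffoli gates (Zalka):
for n in (1024, 4096):
    toffolis = 12 * n**3
    years = runtime_seconds(toffolis) / 3.15e7
    print(f"n={n}: {toffolis:.2e} Toffolis, ~{years:.2f} years")
\end{verbatim}
For $n=4096$ this reproduces the roughly five year single-factory figure quoted above.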
\section{\texorpdfstring{
Lattice surgery construction of the \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle}
}{
Lattice surgery construction of the 8T to CCZ factory
}}
\label{sec:ccz}
\begin{figure*}
\label{fig:ccz-circuit}
\resizebox{\textwidth}{!}{
\includegraphics{ccz-circuit.png}
}
\caption{
Quantum circuit for the \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle}.
A rewrite of figure 3 from \cite{jones2013}.
The box with blue circles in the top right is a state display from the online simulator Quirk, with each circle representing an amplitude (the radius of the colored circle indicates the amplitude's magnitude, and the angle of the line rooted at the center of the circle indicates the phase).
The state display is showing that the output state is a $|CCZ\rangle$ state.
The small circled pluses in the circuit are X-axis controls (equivalent to a normal control surrounded by Hadamard gates); whenever one of these controls directly precedes a measurement the measurement corresponds to a Pauli product measurement.
The post-selection operation represents the classical control software determining if the an error was detected; if it fails the output must be discarded.
Pauli operations and classically-controlled Pauli operations appear here, but not in \fig{ccz-3d}, because they are performed entirely within classical control software.
The circuit can be opened in Quirk by \href{http://algassert.com/quirk\#circuit=\%7B\%22cols\%22\%3A\%5B\%5B\%22X\%22\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C\%22X\%22\%2C1\%2C\%22X\%22\%2C1\%2C\%22X\%22\%2C1\%2C1\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C\%22X\%22\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%2C\%22Z\%5E\%C2\%BC\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22Z\%22\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B1\%2C\%22Z\%22\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B1\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B1\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%5D\%2C\%5B\%22Amps3\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%7C0\%E2\%9F\%A9\%E2\%9F\%A80\%7C\%22\%5D\%5D\%7D}{following this link}.
Discerning readers can follow the link and edit the circuit in order to confirm that adding a single Z error by any T gate is caught by the post-selection, and also that all possible pairs of Z errors escape detection.
}
\end{figure*}
\begin{figure*}
\label{fig:ccz-slices-simple}
\resizebox{\textwidth}{!}{
\includegraphics{ccz-slices-simple.png}
}
\caption{
Time slices of lattice surgery activity during production of a single $|CCZ\rangle$ state.
Each red square corresponds to a qubit, and the label inside the red square identifies the qubit from \fig{ccz-circuit} that the square corresponds to.
Gray rectangles correspond to X stabilizer measurements between sets of qubits.
The red arrows labelled ``T" correspond to a noisy T state entering the system.
It is possible to double the throughput shown here by interleaving the production of two states (shown in \fig{ccz-slices}).
}
\resizebox{\textwidth}{!}{
\includegraphics{ccz-slices.png}
}
\caption{
Time slices of lattice surgery activity during production of $|CCZ\rangle$ states by the \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle}.
To maximize utilization, two states are produced concurrently.
Each red or blue square corresponds to a qubit, and the label inside the square identifies the qubit from \fig{ccz-circuit} that the square corresponds to.
Gray rectangles correspond to X stabilizer measurements between sets of qubits.
The red arrows labelled ``T" correspond to a noisy T state entering the system.
Blue squares correspond to qubits involved in producing one of the states, and red squares correspond to qubits involved in producing the other state.
The red squares in each step are exactly identical to the red squares shown in the matching step of \fig{ccz-slices-simple}.
See \fig{ccz-3d} for a 3D topological diagram corresponding to the time slices.
}
\label{fig:ccz-slices}
\end{figure*}
\begin{figure*}
\label{fig:ccz-3d}
\includegraphics[width=\textwidth,height=\dimexpr\textheight-14\baselineskip,keepaspectratio]{ccz-3d.png}
\caption{
3D topological diagram for our construction of the \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle}.
The spacing between qubits has been increased to make it possible to see the internal structure.
White (black) surfaces correspond to boundaries where chains of Z (X) errors can terminate.
Corresponds to the time slices from \fig{ccz-slices}.
Time increases from bottom to top.
The vertical poles correspond to qubits from \fig{ccz-circuit}.
Red boxes indicate connection points for noisy $|T_1\rangle$ states produced by a level 1 T factory.
The green boxes atop the columns are performing either an X or Y basis measurement at half code distance, as described in \cite{fowler2018}, by including or omitting an S gate performed using twists \cite{brown2017poking}.
Using half code distance is acceptable because, at the location in the factory where these operations are performed (i.e. after the T injections), individual errors are detected as distillation failures.
The labels along the right hand side indicate the stabilizer measurements occurring at each time.
The red/blue coloring of labels matches the red/blue coloring of \fig{ccz-slices}.
Note that inserting the $|T_1\rangle$ state has a depth of 1.5, unlike the other steps which have depth 1.
Each horizontal bar linking several vertical poles is a stabilizer measurement of a product of logical X observables.
The groups of three qubits highlighted orange and exiting left are the $|CCZ\rangle$ states being output (note that the middle pole of each $|CCZ\rangle$ state is rotated with respect to the others, with white on top instead of black on top).
The two instances of the factory that are shown differ slightly.
Their qubits have been permuted so that each factory's top layer fits into a void at the bottom of the following factory, saving a layer of depth.
}
\end{figure*}
\begin{figure*}
\label{fig:ccz-graph}
\resizebox{\textwidth}{!}{
\includegraphics{ccz-graph.png}
}
\caption{
A substitution procedure (left) for translating our 3D topological diagrams into (nearly) the ZX calculus \cite{de2017}, as well as an example translation of one of the CCZ factories from \fig{ccz-3d} (right).
We use black (white) nodes instead of green (red) nodes (the usual notation for the ZX calculus) so that the node colors match the boundary colors in the 3D topological diagrams.
Pieces with two ports are translated into edges or degree-2 nodes of either color.
Pieces with three or more ports are translated into a node of matching color.
The $Z \otimes Z$ measurement of a qubit against a $|T\rangle$ state, followed by measuring the qubit in the X or Y basis depending on the outcome of the parity measurement, is translated into a (non-standard) red node.
The red node can be expanded into a proper ZX calculus construction, but we do not attempt to do so.
The ZX calculus graph is more amenable to verification than the 3D diagram.
The reverse translation, from ZX calculus graph to 3D topological diagram, is often more difficult.
}
\end{figure*}
A key technique introduced in \cite{fowler2018} is a single-layer stabilizer measurement involving an arbitrary number of qubits.
We use this technique in order to quickly measure the 4 stabilizers of the error-detecting Toffoli distillation protocol \cite{jones2013, eastin2013distilling}.
See \fig{ccz-circuit} for a circuit diagram of the CCZ-distillation process.
The operations in the circuit are chosen in a way that trivially translates into lattice surgery.
In \fig{ccz-slices-simple} we show time slices of one possible translation of the circuit into lattice surgery (with matching qubit labels and operation labels), and then in \fig{ccz-slices} show the time slices of our CCZ factory (corresponding to two interleaved translations of the circuit).
We also provide an annotated 3D topological diagram of the CCZ factory (see \fig{ccz-3d}).
Our $|CCZ\rangle$ factory has a naive depth of $4$ (stabilizer measurements) + $1.5$ (T state injections) + $1$ (X or Y basis measurement, depending on T injection measurements) + $2$ (detect errors) = $8.5$.
We use the same technique as in \cite{fowler2018} to partially overlap executions of the factory, resulting in an effective depth of $5.5$.
The T state injections take $1.5d$ layers because they are performed at half code distance and it takes $0.5d$ layers to move the black side into position for a parity measurement, then $0.5d$ layers to perform the parity measurement, then $0.5d$ layers to return the black side to its original position.
It is acceptable to inject at half code distance because the incoming T states have an error rate larger than the topological error incurred from an injection at this distance.
Our $|CCZ\rangle$ factory produces magic states fast enough that algorithms will bottleneck on routing instead of magic state production unless special care is taken.
For example, suppose there are several Toffoli operations to perform on qubits all placed in a common area with exactly one entrance, capable of allowing exactly one qubit to enter or leave every $d$ cycles.
Because a new $|CCZ\rangle$ state is produced every $5.5d$ cycles, and each such state involves three qubits, the entrance will be occupied for $3d$ out of every $5.5d$ cycles moving magic state qubits into the common area to meet target qubits (or vice versa). This leaves only $2.5d$ cycles for other work requiring the entrance.
Furthermore, the $|CCZ\rangle$ teleportation process requires classically controlled CZ and CNOT operations.
If these operations also block the entrance, and are not done in a way that minimizes depth, they will use up the remaining $2.5d$ cycles and cause a routing bottleneck.
We see three ways for algorithms to avoid bottlenecking on routing and keep up with our $|CCZ\rangle$ factory:
\begin{enumerate}
\item
Increase the amount of space dedicated to routing.
Play it safe; do not have areas with narrow entrances or hallways that can only accommodate one qubit per $d$ cycles.
This strategy is simple and effective, but costly.
\item
Carefully distribute logical qubits across multiple disjoint areas with the goal of ensuring that Toffolis rarely target multiple qubits in the same area.
This avoids the bottleneck by having the magic state qubits pass through multiple different entrances, instead of one common entrance.
This strategy will not work for all algorithms, but it will work for some algorithms.
\item
Use generalized CCZ operations capable of targeting arbitrary stabilizers instead of individual qubits, and move Clifford work into the classical control system.
The generalized CCZ is performed in the same way that \cite{litinski2018} performs generalized T gates targeting arbitrary stabilizers.
The gate teleportation process is modified by replacing each $Z_t \otimes Z_m$ parity measurement between a target qubit $t$ and the magic state qubit $m$ with a many-body stabilizer measurement $P \otimes Z_m$, where $P$ is a vector of Pauli operations possibly involving every logical data qubit in the computation.
The main drawback of this approach is that there is a 2x space overhead associated with ensuring it is always fast to access the $X$, $Y$, and $Z$ observables of every qubit.
This can likely be avoided by interleaving single-qubit work between the Toffoli operations, but requires careful algorithm-by-algorithm consideration.
\end{enumerate}
Note that our CCZ factory's footprint includes an unused 2x4 area, adjacent to where the $|CCZ\rangle$ state exits the factory (see \fig{overview-dataflow}).
This area can be used to hold target qubits waiting for a Toffoli operation, which helps with the routing overhead. Our overhead spreadsheet assumes this space will be used in this manner.
In order to produce a $|CCZ\rangle$ state every $5.5d$ cycles, we need enough level 1 T factories to create 8 $|T\rangle$ states every $5.5d$ cycles.
The half-code-distance level 1 T factory from \cite{fowler2018} produces a $|T\rangle$ state every $3.25d$ cycles, except when distillation errors are detected.
Assuming a physical gate error rate of $10^{-3}$ and a level 1 code distance of 15, distillation errors will be detected approximately 3\% of the time (the $|T_0\rangle$ states have $\sim 10^{-3}$ error when injected, gain $\sim 10^{-3}$ error while the level 0 T gates are performed at distance 7, there are fifteen of them, and the most likely case is that a single one fails: $2 \cdot 10^{-3} \cdot 15 = 3\%$).
These failures reduce the effective output rate to a $|T\rangle$ state every $3.35d$ cycles, so five of these factories will produce $\sim 8.2$ $|T\rangle$ states every $5.5d$ cycles, which is sufficient to keep up with the $|CCZ\rangle$ factory.
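As a quick sanity check (our arithmetic, restating the numbers above rather than adding new assumptions), the effective period follows from dividing the ideal period by the success probability, and the aggregate output rate follows from multiplying by the number of factories:
$$\frac{3.25d}{1 - 0.03} \approx 3.35d, \qquad 5 \cdot \frac{5.5d}{3.35d} \approx 8.2.$$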
We accumulate a buffer of surplus level 1 $|T\rangle$ states in the small hallways between the $|CCZ\rangle$ factory and the level 1 $|T\rangle$ factories so that a single level 1 T factory failure does not delay the entire $|CCZ\rangle$ factory.
As shown in \fig{overview-dataflow}, the five level 1 factories are placed to either side of the $|CCZ\rangle$ factory.
Note that it is occasionally necessary to route the fifth factory's output to the opposite side, and that there is enough contiguous unused volume in the factory to do this when needed.
We compute the error rate of the $|CCZ\rangle$ states being produced by our factory in two different regimes: the large code distance regime where the factory is distillation limited, and the minimal code distance regime where the factory may be limited by topological errors in the surface code.
We assume a physical gate error rate of $10^{-3}$ in both cases, and assume that the post-selected state injection of Li \cite{li2015} creates $|T_0\rangle$ states with approximately this probability of error.
In the distillation limited regime, we run these states through the \factory{15|T\rangle}{35 \epsilon^3}{|T\rangle} and then through our \factory{8|T\rangle}{28\epsilon^2}{|CCZ\rangle} producing intermediate $|T_1\rangle$ states with error rate $\sim 3.5 \cdot 10^{-8}$ and then $|CCZ\rangle$ states with error rate $\sim 3.4 \cdot 10^{-14}$.
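These numbers are direct substitutions into the factory formulas; as a worked check,
$$\epsilon_{T_1} = 35\epsilon^3\big|_{\epsilon = 10^{-3}} = 3.5 \cdot 10^{-8}, \qquad \epsilon_{CCZ} = 28\epsilon^2\big|_{\epsilon = 3.5 \cdot 10^{-8}} \approx 3.4 \cdot 10^{-14}.$$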
In the minimal code distance regime, we must account for topological error introduced while performing T gates and the Clifford operations making up the factory.
For example, we assume that the error rate of the $|T_0\rangle$ states doubles while performing a level 0 T gate at distance 7.
This increases the effective error of the $|T_1\rangle$ states, but this contribution is overshadowed by the large size and proportionally small code distance of the level 1 T factory operating on these states.
The factory adds approximately $10^{-6}$ error to the output error, which is three to four times more than the distillation error.
We sum the two error rates, resulting in an estimated error rate for $|T_1\rangle$ states of $\sim 1.4 \cdot 10^{-6}$.
This is forty times more error than in the distillation limited case.
The CCZ factory has a code distance large enough that we are distillation limited, and the error rate of the final $|CCZ\rangle$ states is correspondingly $\sim 5.3 \cdot 10^{-11}$.
As shown in \fig{spreadsheet}, the minimal distance factory causes errors in more than 50\% of runs when attempting to factor a 1024 bit number, but can comfortably run classically intractable chemistry algorithms.
However, if one increases the level 1 code distance from 15 to 19 (increasing the footprint of the factory by roughly 20\%), then the level 1 error improves so much that it's possible to factor 4096 bit numbers.
\section{\texorpdfstring{
The $|T\rangle$-catalyzed $|CCZ\rangle \rightarrow 2|T\rangle$ factory
}{
The T-catalyzed CCZ to 2T Factory
}}
\label{sec:catalysis}
\begin{figure*}
\label{fig:catalysis-circuit-simple}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{catalysis-circuit-simple.png}
}
\caption{
A circuit that transforms a $|CCZ\rangle$ state into three $|T\rangle$ states by applying Clifford operations and a single T gate.
By using one of the outputs to fuel the next iteration, the circuit can be re-interpreted as a circuit that turns one $|CCZ\rangle$ into two $|T\rangle$ states when catalyzed by one $|T\rangle$ state.
The boxes with blue circles are state displays from the online simulator Quirk, with each circle representing an amplitude (the radius of the colored circle indicates the amplitude's magnitude, and the angle of the line rooted at the center of the circle indicates the phase).
The state displays are showing that the input state is a $|CCZ\rangle$ and the output states are $|T\rangle$ states.
The small circled pluses are X-axis controls (equivalent to a normal control surrounded by Hadamard gates).
The circuit can be opened in Quirk by \href{http://algassert.com/quirk\#circuit=\%7B\%22cols\%22\%3A\%5B\%5B\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%5D\%2C\%5B\%22\%E2\%80\%A2\%22\%2C\%22\%E2\%80\%A2\%22\%2C\%22Z\%22\%5D\%2C\%5B\%22Amps3\%22\%5D\%2C\%5B\%5D\%2C\%5B1\%2C1\%2C\%22X\%5E-\%C2\%BD\%22\%5D\%2C\%5B\%22X\%22\%2C\%22X\%22\%2C\%22\%E2\%97\%A6\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C\%22Z\%5E-\%C2\%BC\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B\%22Amps1\%22\%2C\%22Amps1\%22\%2C\%22Amps1\%22\%5D\%5D\%7D}{following this link}.
}
\end{figure*}
\begin{figure*}
\label{fig:catalysis-circuit}
\centering
\includegraphics[width=\textwidth,height=\dimexpr\textheight-11\baselineskip,keepaspectratio]{catalysis-circuit.png}
\caption{
A circuit for catalyzed $|T\rangle$ state production, specialized for lattice surgery.
Given a $|CCZ\rangle$ state (first three qubits) and a $|T\rangle$ state (fourth qubit), produces three $|T\rangle$ states.
Red areas correspond to a product-of-Paulis measurement.
The blue area happens entirely within classical control software.
The $S$ ancilla is preparing an $|S\rangle$ state that can be used to correct the T gate teleportation used to perform the $Z^{-1/4}$ gate from \fig{catalysis-circuit-simple}.
The $B$ ancilla is being used to perform the $X^{-1/2}$ gate from \fig{catalysis-circuit-simple}.
The $A$ ancilla is being used to perform the multi-target CNOT from \fig{catalysis-circuit-simple}.
The boxes with blue circles, at the beginning and end of the circuit, are state displays from the online simulator Quirk.
Each circle represents an amplitude (the radius of the colored circle indicates the amplitude's magnitude, and the angle of the line rooted at the center of the circle indicates the phase).
The state displays are showing that the input and output states are $|CCZ\rangle$ and $|T\rangle$ states as described.
The small circled pluses in the circuit are X-axis controls (equivalent to a normal control surrounded by Hadamard gates); whenever one of these controls directly precedes a measurement the measurement corresponds to a Pauli product measurement.
The circuit can be opened in Quirk by \href{http://algassert.com/quirk\#circuit=\%7B\%22cols\%22\%3A\%5B\%5B\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22Z\%5E\%C2\%BC\%22\%5D\%2C\%5B\%22\%E2\%80\%A2\%22\%2C\%22\%E2\%80\%A2\%22\%2C\%22Z\%22\%5D\%2C\%5B\%22Amps3\%22\%2C1\%2C1\%2C\%22Amps1\%22\%5D\%2C\%5B\%5D\%2C\%5B1\%2C1\%2C\%22X\%22\%2C1\%2C1\%2C\%22X\%22\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%5D\%2C\%5B\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%5E\%C2\%BD\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22H\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C\%22Measure\%22\%2C1\%2C\%22Measure\%22\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C\%22X\%5E-\%C2\%BD\%22\%5D\%2C\%5B1\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22H\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C\%22X\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C\%22\%E2\%8A\%96\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C\%22H\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22Measure\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B1\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B1\%2C1\%2C\%22Z\%22\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22Z\%22\%2C\%22Z\%22\%2C\%22Z\%22\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B1\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22Amps1\%22\%2C\%22Amps1\%22\%2C\%22Amps1\%22\%5D\%5D\%7D}{following this link}.
}
\end{figure*}
\begin{figure*}
\label{fig:catalysis-slices}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{catalysis-slices.png}
}
\caption{
Time slices of lattice surgery activity during transformation of a $|CCZ\rangle$ state (orange qubits labelled 1, 2, 3) into three $|T\rangle$ states (shown in red in last slice), catalyzed by a $|T\rangle$ state (bottom right qubit in red).
Black and dark gray bars correspond to stabilizer measurements.
Ancilla qubits are shown in blue.
The code distance of the ancilla qubits is doubled when single-qubit Clifford operations are being applied, to ensure there is sufficient suppression of errors.
The light gray ``(CCZ)" box to the left will be used by the CCZ factory producing $|CCZ\rangle$ states to be transformed.
See \fig{catalysis-3d} for a 3D topological diagram corresponding to the time slices.
Every step being performed can be matched up with a step from \fig{catalysis-circuit}, and the qubit labels shown here correspond to the qubit labels there.
}
\end{figure*}
\begin{figure*}
\label{fig:catalysis-3d}
\includegraphics[width=\textwidth,height=\dimexpr\textheight-6\baselineskip,keepaspectratio]{catalysis-3d.png}
\caption{
3D topological diagram of a lattice surgery circuit transforming a $|CCZ\rangle$ state (orange-tipped inputs at bottom) and a $|T\rangle$ state (bottom right red-tipped input) into three $|T\rangle$ states (red-tipped outputs at top).
We conservatively assume that the green boxes are large enough to perform any single-qubit Clifford with negligible error.
See \fig{ccz-3d} for details about how to interpret the diagram.
}
\end{figure*}
In \cite{jones2013}, it is shown how to perform a Toffoli gate by using Clifford operations, measurement, and four T gates.
That circuit can be rewritten into an inline circuit that transforms three $|+\rangle$ states into a $|CCZ\rangle$ state via Clifford operations and four T gates \cite{gidney2018}.
Then, by diagonalizing that circuit's stabilizer table, said circuit can be rewritten into a form where three of the T gates apply directly to an input $|+\rangle$ state.
Those three T gates can then be replaced by three $|T\rangle$ state inputs, resulting in a circuit that maps $|T\rangle^{\otimes 3}$ to $|CCZ\rangle$ using Clifford gates and one T gate.
This circuit contains no measurement, and therefore can be inverted.
The inverse circuit (shown in \fig{catalysis-circuit-simple}) maps a $|CCZ\rangle$ state to three $|T\rangle$ states using Clifford gates and one T gate.
Because $|T\rangle$ states can be used to perform T gates, the T gate used to transform the $|CCZ\rangle$ into three $|T\rangle$ states can be powered by a $|T\rangle$ state output from a previous iteration of the circuit.
If we keep feeding a $|T\rangle$ state output from iteration $k$ into iteration $k+1$, then we effectively have a circuit that takes a $|CCZ\rangle$ state and outputs two $|T\rangle$ states.
Under this interpretation of the circuit, the third $|T\rangle$ state is an ancillary state that is necessary for the transformation to be possible, but is not consumed by the transformation.
Thus, in keeping with terminology for ancillary states that enable LOCC communication tasks without being consumed \cite{jonathan1999}, and previous work \cite{campbell2011catalysis}, we refer to the third $|T\rangle$ as a catalyst.
We refer to the circuit as a whole as the $|T\rangle$-catalyzed $|CCZ\rangle \rightarrow 2|T\rangle$ factory, or ``C2T factory" for short.
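In reaction-style shorthand (our notation, not standard), a single iteration of the C2T factory reads
$$|CCZ\rangle + |T\rangle_{\text{cat}} \longrightarrow 3|T\rangle = 2|T\rangle + |T\rangle_{\text{cat}},$$
so the net effect per iteration is $|CCZ\rangle \rightarrow 2|T\rangle$.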
Beware that, although the catalyst $|T\rangle$ state is not consumed by the C2T factory, it does accumulate noise from the incoming $|CCZ\rangle$ states.
If a catalyst $|T\rangle$ has cycled through $n$ iterations of the C2T factory, and there is a probability $\epsilon$ of each $|CCZ\rangle$ containing an error, then there is a $\Theta(n \epsilon)$ chance that the catalyst has been poisoned and is causing the factory to produce bad outputs.
However, because every error in the catalyst ultimately traces back to an error in a $|CCZ\rangle$ state, the chance of there being {\em any} error grows like $\Theta(n \epsilon)$, instead of $\Theta(n^2 \epsilon)$ as would be expected from a naive calculation assuming uncorrelated errors.
Distillation protocols usually require inputs with uncorrelated errors, so it is important that we only use the C2T factory as the last step in a distillation chain.
In a sense, because of how we use the C2T factory, the correlation between errors is beneficial to us instead of detrimental.
It means that when we run an algorithm many times there will be a small number of runs with many errors, instead of many runs with a small number of errors.
We experience quadratically fewer whole-algorithm failures than would be expected from the fact that the expected number of errors is growing like $\Theta(n^2 \epsilon)$.
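The scaling claim can be checked numerically. Below is a minimal Monte-Carlo sketch (ours, not part of the construction itself) of a deliberately simplified error model in which the first bad incoming $|CCZ\rangle$ permanently poisons the catalyst; the parameters \texttt{n}, \texttt{eps}, and \texttt{trials} are illustrative.
\begin{verbatim}
import random

def run_factory(n, eps):
    # One run of n C2T iterations. Each incoming CCZ state is
    # independently bad with probability eps; the first bad state
    # poisons the catalyst, after which every output is bad.
    poisoned = False
    bad = 0
    for _ in range(n):
        if random.random() < eps:
            poisoned = True
        if poisoned:
            bad += 1
    return bad

n, eps, trials = 1000, 1e-4, 20000
runs = [run_factory(n, eps) for _ in range(trials)]
p_any = sum(r > 0 for r in runs) / trials
mean_bad = sum(runs) / trials
# P(any bad output) tracks n*eps; E[# bad outputs] tracks eps*n^2/2.
print(p_any, n * eps)
print(mean_bad, eps * n ** 2 / 2)
\end{verbatim}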
For other examples of correlation between errors being beneficial, we recommend reviewing hat guessing games \cite{paterson2010}.
The C2T factory circuit shown in \fig{catalysis-circuit-simple} is compact, but not in an ideal form for embedding into lattice surgery.
\fig{catalysis-circuit} fixes this by providing an equivalent circuit that, although it appears much more complicated, trivially translates into lattice surgery.
We show the result of this translation in \fig{catalysis-slices}, which has time slices of the lattice surgery operations occurring as the factory operates.
Finally, \fig{catalysis-3d} shows an annotated 3D topological diagram of the process.
\section{Arbitrary-Angle Phase Catalysis}
\label{sec:generalize}
The catalysis technique used in the C2T factory from the previous section generalizes to phasing angles other than the T gate's $45^\circ$.
In \fig{catalysis-circuit-generalized}, we show a generalization of \fig{catalysis-circuit-simple} that works for an arbitrary angle $\theta$.
This circuit performs two $Z^\theta$ operations by performing cheap stabilizer operations, performing one Toffoli gate, performing one $Z^{2 \theta}$ operation, and being catalyzed by one $Z^\theta |+\rangle$ state.
Contrast with gate teleportation \cite{gottesman1999}, which consumes a previously prepared $Z^\theta |+\rangle$ state in order to perform one $Z^\theta$ operation, with a 50\% chance of requiring a fixup $Z^{-2 \theta}$ operation.
One way to discover the generalized phase catalysis circuit is to start from the phase-gradient-via-addition circuit \cite{kitaev2002, gidney2018, nam2018}, which performs a series of rotations $Z$, $S$, $T$, $\sqrt{T}$, $\sqrt{\sqrt{T}}$, etc by adding a register containing the target qubits into a phase gradient catalyst state.
Include a carry bit input in the addition of the phase-gradient-via-addition circuit, truncate the circuit after the first ripple-carry step by using the correct fixup operation, and the result is a phase catalysis circuit for an angle $\theta=\pi/2^k$ which trivially generalizes to arbitrary angles.
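At the level of bit arithmetic, the reason a single ripple-carry step suffices is the identity $a + b = (a \oplus b) + 2(a \wedge b)$ for bits $a, b \in \{0, 1\}$, which (in our reading of the construction) manifests as
$$Z^\theta_a \, Z^\theta_b = Z^\theta_{a \oplus b} \, Z^{2\theta}_{a \wedge b},$$
where subscripts indicate the (possibly computed) bit the phase is conditioned on: the $a \wedge b$ carry bit is computed by the Toffoli, and the $Z^{2\theta}$ is the fixup rotation.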
The catalysis circuit can likely also be derived from synthillation parity-check circuits \cite{campbell2018}, which use similar magic states and have a similar structure but are used to perform distillation of existing states instead of producing additional states.
Specializing the generalized phase catalysis circuit to $\theta = 22.5^{\circ}$, i.e. to the $\sqrt{T}$ gate, produces the circuit shown in \fig{catalysis-circuit-sqrt-t}.
This specialized circuit creates two $|\sqrt{T}\rangle$ states by performing one Toffoli operation and one T gate.
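Counting T gates with the 4-T Toffoli of \cite{jones2013} gives the cost quoted in \fig{catalysis-circuit-sqrt-t}:
$$\mathrm{cost}\big(2\,|\sqrt{T}\rangle\big) = \underbrace{4}_{\text{Toffoli}} + \underbrace{1}_{\text{T gate}} = 5 \quad\Longrightarrow\quad \mathrm{cost}\big(|\sqrt{T}\rangle\big) \leq 2.5.$$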
This is significantly more efficient than previous techniques we were able to find and adapt to the task of producing $|\sqrt{T}\rangle$ states \cite{landahl2013complex, bocharov2014, mishra2014, kitaev2002, gidney2018, nam2018}, assuming a physical gate error rate of $10^{-3}$ and a target error rate of $10^{-10}$.
For example, according to figure 5 of \cite{bocharov2014}, repeat-until-success circuits use $\approx 45$ T gates to approximate a $\sqrt{T}$ gate to within precision $\epsilon = 10^{-10}$.
As another example, according to table III of \cite{mishra2014}, direct synthesis of $|\sqrt{T}\rangle$ states uses $\approx 25$ times more volume than direct synthesis of $|T\rangle$ states (though this ratio improves as the physical gate error rate improves).
A final example: the phase-gradient-via-addition operation described in \cite{kitaev2002,gidney2018} can perform a $\sqrt{T}$ gate with a 4-bit adder (which requires 3 $|CCZ\rangle$ states).
Phase-gradient-via-addition is the closest to competing with phase catalysis, which is perhaps not surprising since phase catalysis is an optimized form of this technique.
Other techniques appear to be very far behind, requiring an order of magnitude more spacetime volume.
\section{\texorpdfstring{
Lattice surgery construction of the \factory{8|T\rangle}{28\epsilon^2}{2|T\rangle}
}{
Lattice surgery construction of the 8T to 2T factory
}}
\label{sec:full}
\begin{figure*}
\label{fig:catalysis-circuit-generalized}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{catalysis-circuit-generalized.png}
}
\caption{
Generalized phase catalysis circuit.
Given a $Z^\theta |+\rangle$ catalyst, two $Z^\theta$ operations can be applied via stabilizer gates, one AND computation gate (notation from \cite{gidney2018}), and one $Z^{2 \theta}$ gate.
}
\end{figure*}
\begin{figure*}
\label{fig:catalysis-circuit-sqrt-t}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{catalysis-circuit-sqrt-t.png}
}
\caption{
Using a catalyst $|\sqrt{T}\rangle$ state to create 2 additional $|\sqrt{T}\rangle$ states using cheap stabilizer operations, one T gate, and one AND computation.
Has a T-cost of $5$ \cite{jones2013, gidney2018}, implying the T-cost of a $\sqrt{T}$ state is at most $2.5$.
}
\end{figure*}
We now combine the $|CCZ\rangle$ factory from \sec{ccz} with the C2T factory from \sec{catalysis}, producing a $|T\rangle$-catalyzed T factory that transforms eight noisy $|T\rangle$ states into two $|T\rangle$ states with quadratically less noise.
Note that this means we achieve a 4:1 ratio of input $|T\rangle$ states to output $|T\rangle$ states, which is competitive with the 3:1 ratio of block codes \cite{bravyi2012}.
This is surprising, because normally one has to work with a larger number of $|T\rangle$ states in order to achieve good ratios.
Note that we do not use exactly the same CCZ factory as in \sec{ccz}.
We re-order the stabilizer measurements and place the output qubits in a different location, so that it fits into the C2T factory from \sec{catalysis}.
Furthermore, we do not bother interleaving the factory with itself anymore.
There's no point; we would need five $T_1$ factories to run at the rate achieved by interleaving, but we now only have four factories (recall \fig{overview-dataflow}).
The details of the combined factory are covered in \fig{full-slices}, which shows the parallel operation of C2T factory and CCZ factory.
Recall that the qubit labels can be matched up with \fig{ccz-circuit} for verification that the correct stabilizers are being measured (though in a different order).
Our penultimate figure, \fig{full-3d}, shows a 3D topological diagram of the factory.
Note that the figure omits the level 1 T factories feeding in noisy $|T_1\rangle$ states, and exaggerates the spacing between qubits in order to make internal structures visible, but is otherwise complete.
To bootstrap the factory, an initial catalyst $|T\rangle$ state is made ``the hard way", using some less efficient $|T\rangle$ factory that can output $|T\rangle$ states with error no higher than the error rate of the $|CCZ\rangle$ factory.
Bootstrapping occurs once at the start of the computation, and any time the catalyst $|T\rangle$ state is lost.
Specifically, note that the $|CCZ\rangle$ state produced by the CCZ part of the factory is being consumed before it's known if it contained a distillation error.
Therefore, when a detected distillation error does occur, the $|T\rangle$ state catalyst must be discarded.
This has a negligible effect on the effective depth of the factory, because it occurs so rarely (approximately once per hundred thousand distillations).
There is a space towards the top right of the factory where a spare $|T\rangle$ catalyst could be placed, to be used as a backup when the main catalyst is lost.
The primary bottleneck on the output of this factory is the rate at which $|T_1\rangle$ states are produced.
As shown in \fig{overview-dataflow}, we assume there are four $|T_1\rangle$ factories present (one beside each pair of qubits requiring $|T_1\rangle$ states).
When functioning perfectly, each of these factories produces a pair of noisy $|T_1\rangle$ states every $6.5d$ cycles, which is just enough to feed the catalyzed T factory and keep it producing a pair of $|T_2\rangle$ states every $6.5d$ cycles.
Of course, the $|T_1\rangle$ factories do not always function perfectly.
They discard their output roughly 3\% of the time due to detecting an error (computed in \sec{ccz}).
In order to actually achieve a depth of $6.5d$ for the \factory{8|T\rangle}{28\epsilon^2}{2|T\rangle}, it is necessary to increase the $|T_1\rangle$ factory output rate by more than 3\% to compensate.
There are many ways to achieve such a small gain, and \fig{t1-3d} sketches one way to do so.
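Concretely (our arithmetic, using the depths quoted in \fig{t1-3d}), improving the factory depth from $13d/2$ to $12.5d/2$ increases the output rate by a factor of
$$\frac{13d/2}{12.5d/2} = 1.04 > \frac{1}{1 - 0.03} \approx 1.031,$$
i.e. a roughly 4\% gain, which more than compensates for the $\sim$3\% discard rate.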
Therefore the $T_1$ factories can keep up with the catalyzed T factory producing a pair of $|T_2\rangle$ states every $6.5d$ cycles.
\section{Conclusions}
\label{sec:conclusion}
\begin{figure*}
\label{fig:t1-3d}
\centering
\includegraphics[width=\textwidth,height=\dimexpr\textheight-12\baselineskip,keepaspectratio]{t1-3d.png}
\caption{
3D topological diagram of a re-arrangement of the level 1 T factory from \cite{fowler2018}.
Quantities are quoted in units of $d/2$ instead of $d$ because the factory is performed at half code distance.
Improves the depth from $13d/2$ to $12.5d/2$, increasing the output rate by roughly 4\%, which ensures four level 1 T factories are sufficient to feed our T-catalyzed factory.
There are two variants of the factory: the one shown with an output on the left (A) and the one shown with an output on the right (B).
The qubits of A and B have been permuted so that their first stabilizer measurement involves qubits that are all on the same side, allowing the stabilizer measurement to be performed without using a central bar.
The second measurement of B (the first measurement using the central bar) is over the back 8 qubits and the last measurement of A is over the front 8 qubits.
This allows B to be lowered by half of $d/2$, so that B rests on the level 0 $|T\rangle$ injections to the right of A.
The transition back from B to A cannot be lowered quite as far, because the top of B would intersect the first central bar used in A.
Overall this optimization saves $0.5d/2$ depth relative to the interleaving technique used in \cite{fowler2018}.
}
\end{figure*}
\begin{figure*}
\label{fig:full-slices}
\resizebox{\textwidth}{!}{
\includegraphics{full-slices.png}
}
\caption{
Time slices of activity during catalyzed $|T\rangle$ state distillation.
Every step being performed can be matched up with \fig{ccz-circuit} and \fig{catalysis-slices}.
See \fig{full-3d} for the 3D topological diagram these time slices come from.
}
\end{figure*}
\begin{figure*}
\label{fig:full-3d}
\includegraphics[width=\textwidth,height=\dimexpr\textheight-6\baselineskip,keepaspectratio]{full-3d.png}
\caption{
3D topological diagram of the full $|T\rangle$-catalyzed \factory{8|T\rangle}{28\epsilon^2}{2|T\rangle}.
Contrast with the time slices from \fig{full-slices}, and the circuit in \fig{ccz-circuit} combined with the circuit in \fig{catalysis-circuit-simple}.
Single-qubit Clifford gates that would affect the catalyst $|T\rangle$ if they failed are performed with extremely conservative code distance (large green boxes).
}
\end{figure*}
In this paper we presented two factories: a $|CCZ\rangle$ factory and a catalyzed $|T\rangle$ factory.
We compiled these factories all the way down to 3D topological diagrams (see \fig{continuous-operation-3d}) and gave detailed estimates of their spacetime volume, footprint, and error rates.
We also showed how to generalize the phase catalysis technique used by our $|T\rangle$ factory to apply to arbitrary angles, including the particularly-efficient angle of $\theta=22.5^\circ$.
Finally, we slightly improved the output rate of the level 1 T factories from \cite{fowler2018}, gave a simple procedure for transforming topological diagrams into ZX calculus graphs, provided a resource estimator spreadsheet, and gave working simulator links for verifying most of our circuit constructions.
Because it takes four $|T\rangle$ states to perform a Toffoli gate, but only one $|CCZ\rangle$ state to do the same, algorithms dominated by applying Toffolis, such as Shor's algorithm and the chemistry algorithm in \cite{babbush2018}, run five times as fast when using our $|CCZ\rangle$ factory instead of the $|T\rangle$ factory from \cite{fowler2018}.
However, we caution that it is often necessary to rework these algorithms' circuits to account for the much faster Toffoli rate.
Assuming that such a reworking is possible for \cite{babbush2018}, the runtimes at classically intractable sizes would be reduced from $\sim$10 hours (see table VII of \cite{babbush2018}) to $\sim$2 hours.
For algorithms dominated by performing T gates, our catalyzed T factory provides a more modest 2$\times$ speedup.
We believe it is possible to further decrease the volume of our factories.
For example, we suspect that the level 1 T state injection at the end of each factory can be partially merged with that factory's final stabilizer measurement.
If that is true, then the depth of the factories could be reduced by $1d$.
However, the effect of this optimization on the topological error rate is difficult to predict and we will use simulation to check the optimization's correctness before claiming it.
Another possible optimization is to eagerly route $|CCZ\rangle$ qubits emerging from the CCZ factory to their final destination (in preparation for a parity measurement), instead of holding them next to the factory until they are verified.
Removing the output-holding area reduces the CCZ factory's footprint by over 20\%, which is a large gain, but it is important to keep in mind that this is not a true reduction in volume but rather a reclassification of some of the factory's volume as routing volume.
Yet another possible optimization would be to carefully analyze how topological errors within the surface code propagate through the factory.
At any location where an error chain between two boundaries would result in a detected failure, the boundaries can be moved closer together.
A final idea that should be investigated is estimating logical error probabilities from the observed pattern of detection events produced by the surface code's stabilizer measurements.
For example, if there were a sudden burst of detection events crossing between two boundaries during the execution of a factory, the factory's output could be cautiously discarded even if the logical measurement results indicate there is no problem.
Assuming there is some metric that can be derived from the raw detection events, and that reliably correlates with the true failure probability, this would allow us to reduce the number of false negatives (where an undetected error escapes the factory) at the cost of increasing the number of false positives (where a run with no error is discarded).
In this paper we focused on making a low-volume factory in the single-factory regime, but it is also important to consider factories optimized to have a tiny footprint.
Early quantum computers will have limited space; it's worth sacrificing depth if it means the factory actually fits on the machine.
By combining techniques from this paper and low-footprint distillation techniques mentioned in \cite{litinski2018}, it should be possible to create factories covering fewer qubits but with roughly the same volume as ours.
Another interesting avenue to explore is the high-footprint / multi-factory regime, where factories based on block codes become possible.
Block factories should be able to outperform the efficiency of our factory, assuming enough states are being distilled in parallel.
But this raises the question of whether block factories can also be improved by catalysis; are there catalyzed block factories?
We don't know the answer to this question.
We suspect that the space of quantum circuits contains many other gems akin to the catalyzed phasing circuit.
We consider finding these circuits to be important, because they can be surprisingly efficient at their tasks.
It would be particularly useful to have a general framework for finding catalyzed circuits, to better understand what makes them efficient, and to understand the connection with related constructions such as distillation via parity-checks \cite{campbell2018}, synthillation \cite{campbell2017}, and phase gradient kickbacks \cite{kitaev2002, gidney2018, nam2018}.
Our guess as to the nature of the connection between these constructions is that there are a small number of circuit identities underlying all these related but different techniques, and that each technique is rewriting and interpreting the underlying circuit identities in a different way.
If the connection is actually of this form, then perhaps it is possible to write code that takes a circuit using one of these techniques, derives the identity the circuit is using, and then produces a whole related family of interesting circuits (perhaps including circuits that use catalysis).
\section{Acknowledgements}
We thank Earl T. Campbell for reviewing a draft of this paper and suggesting useful references.
\bibliographystyle{plainnat}
\section*{Introduction}
\subsection*{General setting}
In the late 1980's Makkai and Paré presented their book \textit{Accessible categories: the foundations of categorical model theory} \cite{Makkaipare}, providing a solid framework that could accommodate a large portion of categorical logic. In the fashion of abstract logic and abstract model theory, the book has two main aspects: one semantical and one syntactic. On the one hand they introduced the theory of accessible categories\footnote{Which had already appeared under a different name in the work of Lair and Rosický.}; these are abstract categories of models of some theory. On the other hand they presented the theory of \textit{sketches}\footnote{Which had been developed by the French school.}, which provide a categorical specification of infinitary first order theories. The interplay between sketches (syntax) and accessible categories (semantics) is a large portion of categorical model theory.
Since then, categorical model theory has evolved significantly, thanks to the contribution of several authors, including the authors of the above mentioned book. The study of accessible categories from the point of view of the model theorist has led to the identification of special classes of accessible categories that best suit the most natural constructions of model theory. Among the most common additional requirements, we find:
\begin{itemize}
\item the existence of directed colimits;
\item amalgamation property (AP);
\item joint embedding property (JEP);
\item every morphism is a monomorphism;
\item the existence of a (very) well behaved faithful functor $\ca \to \Set$ preserving directed colimits.
\end{itemize}
Each of these different assumptions is motivated by some model theoretic intuition. For example, the request that every morphism is a monomorphism is motivated by the focus on elementary embeddings, rather than homomorphisms of structures. The faithful functor into $\Set$ allows one to construct directed colimits of models as colimits of underlying structures. The combination of (AP), (JEP) and the existence of directed colimits allows the construction of saturated objects \cite{Rsaturated}.
Synthesizing the joint work of Beke, Rosický, Lieberman, Vasey et al. (see for example \cite{aec,internalsize,lieberman2015limits,LB2014,universal,vasey2019accessible}) in a sentence, accessible categories with directed colimits generalize Shelah's framework of abstract elementary classes, and are special enough to recover the main features of categorical model theory.
\subsection*{Our contribution}
In this paper we shift the focus from categorical model theory to \textit{formal model theory}. By this we mean that instead of studying the properties of a category of models via its objects and arrows, we study its relational behavior among categories of models of theories. We introduce two specific incarnations of formal model theory.
\begin{itemize}
\item[$\star$] the $2$-category $\text{Acc}_\omega$, of \textbf{accessible categories with directed colimits}, where $1$-cells are functors preserving directed colimits and $2$-cells are natural transformations.
\end{itemize}
The study of this $2$-category is very coherent with the classical tradition à la Makkai-Paré, and the additional assumptions that we have listed above will re-emerge in this setting, depending on the kind of constructions and behavior typical of model theory that we want to simulate.
\begin{itemize}
\item[$\star$] the $2$-category $\text{BIon}$, of (generalized) bounded ionads.
\end{itemize}
The notion of ionad was first introduced by Garner \cite{ionads}, mainly from a topological point of view. In this paper we introduce the notion of \textbf{ionad of models} of a geometric theory, and we give a ionadic interpretation of Makkai's Ultracategories.
On the syntactic side of this paper we find \textbf{topoi} and (lex) \textbf{geometric sketches}; these are both categorical specifications of geometric theories, as we discuss in the background section.
\begin{center}
\begin{tikzcd}
& \mathsf{LGSketches} \arrow[rrdddddd, "\gimel" description, bend left=20] \arrow[dddd, "\text{Mod}" description,] \arrow[ldddddd, "\mathbb{M}\mathbbm{od}" description, bend right=15] & & \\
& & & \\
& & & \\
& & & \\
& \text{Acc}_\omega \arrow[ldd, "\mathsf{ST}" description, dashed, bend right=20] \arrow[rrdd, "\mathsf{S}" description, dashed, bend left=20] & & \\
& & & \\
\text{BIon} \arrow[rrr, "\mathbb{O}" description, dashed, bend right=20] & & & \text{Topoi} \arrow[lluu, "\mathsf{pt}" description,] \arrow[lll, "\mathbbm{pt}" description,]
\end{tikzcd}
\end{center}
In this paper we study the interplay between theories (topoi and sketches) and categories of models (ionads and accessible categories), providing reconstruction results for both of them. This will amount to a complete description of the diagram above.
From a technical point of view we build on two previous papers of ours \cite{thcat,thgeo}, where we develop relationships between topoi, bounded ionads, and accessible categories with directed colimits.
\begin{center}
\begin{tikzcd}
& \text{Loc} \arrow[lddd, "\mathbbm{pt}" description, bend left=12] \arrow[rddd, "\mathsf{pt}" description, bend left=12] & \\
& & \\
& & \\
\text{Top} \arrow[ruuu, "\mathcal{O}" description, dashed, bend left=12] & & \text{Pos}_{\omega} \arrow[luuu, "\mathsf{S}" description, dashed, bend left=12] \arrow[ll, "\mathsf{ST}", dashed]
\end{tikzcd}
\qquad
\begin{tikzcd}
& \text{Topoi} \arrow[lddd, "\mathbbm{pt}" description, bend left=12] \arrow[rddd, "\mathsf{pt}" description, bend left=12] & \\
& & \\
& & \\
\text{BIon} \arrow[ruuu, "\mathbb{O}" description, dashed, bend left=12] & & \text{Acc}_{\omega} \arrow[luuu, "\mathsf{S}" description, dashed, bend left=12] \arrow[ll, "\mathsf{ST}", dashed]
\end{tikzcd}
\end{center}
In \cite{thgeo} we have categorified the Scott topology on a poset with directed joins and the Isbell duality between locales and topological spaces. We will briefly recall the results of those papers in the first sections, contextualizing them in the framework of Lawvere's functorial semantics.
\subsection*{Structure}
The exposition is organized as follows:
\begin{enumerate}
\item[\S \ref{back}] We put together the needed background on accessible categories, ionads, topoi and sketches. The section can be completely skipped by the reader that is well versed with this topic and is intended to be a soft introduction for those readers whose background is closer to classical model theory. We provide several references and we contextualize most of the definitions.
\item[\S \ref{logicgeneralizedaxiom}] We recall the most relevant results of \cite{thgeo}, the \textbf{Scott adjunction} and the \textbf{categorified Isbell duality}, putting them in the context of functorial semantics. We trace the Scott topos back to the seminal works of Linton and Lawvere on algebraic theories and algebraic varieties.
\item[\S \ref{logicclassifyingtopoi}] This section is dedicated to reconstruction results. First we answer the question \textit{is there any relation between Scott topoi and classifying topoi?} with a partially affirmative answer. Indeed every theory $\cs$ has a category of models $\mathsf{Mod}(\cs)$, but this category does not retain enough information to recover the theory, even when the theory has enough points (\Cref{classificatore}). That's why the Scott adjunction is not sharp enough. Nevertheless, every theory has a ionad of models $\mathbb{M}\mathbbm{od}(\cs)$, and the category of opens of such a ionad, $\mathbb{O}\mathbb{M}\mathbbm{od}(\cs)$, recovers theories with enough points (\Cref{isbellclassificatore}).
\item[\S \ref{logicaec}] This section describes the relation between the Scott adjunction and abstract elementary classes, providing a restriction of the Scott adjunction to one between accessible categories where every map is a monomorphism and locally decidable topoi (\Cref{LDCAECs}).
\item[\S \ref{logicsaturatedobjects}] In this section we give the definition of \textit{category of saturated objects} (CSO) and show that the Scott adjunction restricts to an adjunction between CSO and atomic topoi (\Cref{thmcategoriesofsaturated objects}). This section can be understood as an attempt to conceptualize the main result in \cite{simon}.
\end{enumerate}
\subsection*{Notations and conventions} \label{backgroundnotations}
Most of the notation will be introduced when needed and we will try to make it as natural and intuitive as possible, but we would like to settle some notation.
\begin{enumerate}
\item $\ca, \cb$ will always be accessible categories, very possibly with directed colimits.
\item $\cx, \cy$ will always be ionads.
\item $\mathsf{Ind}_\lambda$ is the free completion under $\lambda$-directed colimits.
\item $\ca_{\kappa}$ is the full subcategory of $\kappa$-presentable objects of $\ca$.
\item $\cg, \ct, \cf, \ce$ will be Grothendieck topoi.
\item In general, $C$ is used to indicate small categories.
\item $\eta$ is the unit of the Scott adjunction.
\item $\epsilon$ is the counit of the Scott adjunction.
\item $\P(X)$ is the category of small copresheaves of $X$.
\item An Isbell topos is a topos of the form $\mathbb{O}(\cx)$, for some bounded ionad $\cx$;
\item A Scott topos is a topos of the form $\mathsf{S}(\ca)$ for some accessible category $\ca$ with directed colimits.
\end{enumerate}
\begin{notation}[Presentation of a topos]\label{presentation of topos}
A presentation of a topos $\cg$ is the data of a geometric embedding into a presheaf topos $f^*: \Set^{C} \leftrightarrows \cg : f_*$. This means precisely that there is a suitable topology $\tau_f$ on $C$ that turns $\cg$ into the category of sheaves over $\tau_f$; in this sense $f$ \textit{presents} the topos as the category of sheaves over the site $(C, \tau_f)$.
\end{notation}
\section{Background: accessibility, sketches, topoi and ionads} \label{back}
This section is merely expository and can be skipped by the reader that is well versed with the topics that it introduces.
\subsection{Accessible categories} \label{backgroundLPAC}
The theory of accessible and locally presentable categories has gained quite some popularity along the years because of its natural ubiquity. Most of the categories of the \textit{working mathematician} are accessible, with a few (but still extremely important) exceptions. For example, the category $\mathsf{Top}$ of topological spaces is not accessible. In general, categories of algebraic structures are locally $\aleph_0$-presentable and many relevant categories of geometric nature are $\aleph_1$-accessible. A sound rule of thumb is that locally finitely presentable categories correspond to categories of models of essentially algebraic theories; in fact this is even a theorem in a proper sense \cite[Chap. 3]{adamekrosicky94}. A similar intuition is available for accessible categories too, but some technical price must be paid \cite[Chap. 5]{adamekrosicky94}. Accessible and locally presentable categories (especially the latter) are \textit{tame} enough to make many categorical wishes come true; that's the case, for example, of the adjoint functor theorem, which has a very easy to check version for locally presentable categories.
\begin{ach} In this section $\lambda$ is a regular cardinal.
\end{ach}
\begin{defn}[$\lambda$-accessible category]
A $\lambda$-accessible category $\ca$ is a category with $\lambda$-directed colimits, together with a set of $\lambda$-presentable objects that generates it under $\lambda$-directed colimits. An accessible category is a category that is $\lambda$-accessible for some $\lambda$.
\end{defn}
\begin{defn}[Locally $\lambda$-presentable category]
A locally $\lambda$-presentable category is a cocomplete $\lambda$-accessible category. A locally presentable category is a category that is locally $\lambda$-presentable for some $\lambda$.
\end{defn}
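Standard examples (see \cite[Chap. 1]{adamekrosicky94}): the categories
$$\Set, \quad \mathsf{Grp}, \quad R\text{-}\mathsf{Mod}, \quad \mathsf{Pos}$$
are all locally $\aleph_0$-presentable (i.e. locally finitely presentable).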
\begin{defn}[$\lambda$-presentable object]
An object $a \in \ca$ is $\lambda$-presentable if its covariant hom-functor $\ca(a, -): \ca \to \Set$ preserves $\lambda$-directed colimits.
\end{defn}
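For example, in $\Set$ the $\aleph_0$-presentable objects are precisely the finite sets: for a finite set $X$ and any $\aleph_0$-directed diagram $(Y_i)_{i \in I}$,
$$\Set(X, \operatorname{colim}_{i \in I} Y_i) \cong \operatorname{colim}_{i \in I} \Set(X, Y_i),$$
since a map out of a finite set factors through some stage of the colimit. Similarly, the $\aleph_0$-presentable objects of $\mathsf{Grp}$ are the finitely presented groups.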
\begin{defn}[$\lambda$-directed posets and $\lambda$-directed colimits]
A poset $P$ is $\lambda$-directed if it is non-empty and every $\lambda$-small\footnote{This means that its cardinality is strictly less than $\lambda$. For example $\aleph_0$-small means finite.} family of elements $\{p_i\} \subset P$ has an upper bound. A $\lambda$-directed colimit is the colimit of a diagram over a $\lambda$-directed poset (seen as a category).
\end{defn}
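For example, $(\mathbb{N}, \leq)$ is $\aleph_0$-directed, and for any set $X$ the poset
$$\mathcal{P}_{<\lambda}(X) = \{A \subseteq X \;:\; |A| < \lambda\},$$
ordered by inclusion, is $\lambda$-directed: the regularity of $\lambda$ guarantees that the union of a $\lambda$-small family of $\lambda$-small subsets is again $\lambda$-small, hence an upper bound.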
\begin{notation} For a category $\ca$, we will call $\ca_\lambda$ its full subcategory of $\lambda$-presentable objects.
\end{notation}
\subsubsection{Literature}
There are two main references for the theory of accessible and locally presentable categories, namely \cite{adamekrosicky94} and \cite{Makkaipare}. The first one is intended for a broader audience and appeared few years after the second one. The second one is mainly concerned with the logical aspects of this theory. Another good general exposition is \cite[Chap. 5]{BOR2}.
\subsubsection{Accessible categories and (infinitary) logic}
Accessible categories have been connected to (infinitary) logic in several (partially independent) ways. This story is recounted in Chapter 5 of \cite{adamekrosicky94}. Let us recall two of the most important results of that chapter.
\begin{enumerate}
\item As locally presentable categories, accessible categories are categories of models of theories, namely \textit{basic} theories \cite[Def. 5.31, Thm. 5.35]{adamekrosicky94}.
\item Given a theory $T$ in $L_\lambda$ the category $\mathsf{Elem}_\lambda(T)$ of models and $\lambda$-elementary embeddings is accessible \cite[Thm. 5.42]{adamekrosicky94}.
\end{enumerate}
Unfortunately, it is not true in general that the whole category of models and homomorphisms of a theory in $L_\lambda$ is accessible. It was later shown by Lieberman \cite{Lthesis} and independently by Rosický and Beke \cite{aec} that abstract elementary classes are accessible too. The reader that is interested in this connection might find \cite{vasey2019accessible} interesting, as its language is probably the closest to that of a model theorist.
\subsection{Sketches} \label{backgroundsketches}
\begin{defn}[Sketch]
A sketch is a quadruple $\mathcal{S}=(S, L,C, \sigma)$ where
\begin{enumerate}
\item[$S$] is a small category;
\item[$L$] is a class of diagrams in $S$, called \textit{limit} diagrams;
\item[$C$] is a class of diagrams in $S$, called \textit{colimit} diagrams;
\item[$\sigma$] is a function assigning to each diagram in $L$ a cone and to each diagram in $C$ a cocone.
\end{enumerate}
\end{defn}
\begin{defn}
A sketch is
\begin{itemize}
\item \textit{limit} if $C$ is empty;
\item \textit{colimit} if $L$ is empty;
\item \textit{mixed} (used only in an emphatic sense) if it is neither limit nor colimit;
\item \textit{geometric} if each cone is finite;
\item \textit{coherent} if it is geometric and every cocone is either finite or discrete, or it is a regular-epi specification\footnote{See \cite[D2.1.2]{elephant2}.}.
\end{itemize}
\end{defn}
\begin{defn}[Morphism of Sketches]
Given two sketches $\cs$ and $\ct$, a morphism of sketches $f: \cs \to \ct$ is a functor $f: S \to T$ mapping (co)limit diagrams to (co)limit diagrams and the specified (co)cones to (co)cones.
\end{defn}
\begin{defn}[$2$-category of Sketches]
The $2$-category of sketches has sketches as objects, morphism of sketches as $1$-cells and natural transformations as $2$-cells.
\end{defn}
\begin{defn}[Category of models of a sketch]
For a sketch $\cs$ and a (bicomplete) category $\cc$, the category $\mathsf{Mod}_{\cc}(\cs)$ of $\cc$-models of the sketch is the full subcategory of $\cc^\cs$ of those functors that are models. If it's not specified, by $\mathsf{Mod}(\cs)$ we mean $\mathsf{Mod}_{\Set}(\cs)$.
\end{defn}
\begin{defn}[Model of a sketch]
A model of a sketch $\cs$ in a category $\cc$ is a functor $f: \cs \to \cc$ mapping each specified (co)cone to a (co)limit (co)cone. If not specified otherwise, a model is a $\Set$-model.
\end{defn}
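A standard example, which we only sketch: the theory of monoids is captured by a limit sketch with objects $m^0, m^1, m^2, m^3$, cones exhibiting $m^k$ as the $k$-fold product of $m^1$, and arrows
$$\mu: m^2 \to m^1, \qquad e: m^0 \to m^1,$$
together with commutativity conditions encoding associativity and unitality. A $\Set$-model of this sketch must send the product cones to genuine products, and thus amounts precisely to a monoid; models in any category $\cc$ with finite products are the internal monoids of $\cc$.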
\subsubsection{Literature}
There exists a plethora of different and yet completely equivalent approaches to the theory of sketches. We stick to the one that best suits our setting, following mainly \cite[Chap. 5.6]{BOR2} or \cite[Chap. 2.F]{adamekrosicky94}. Other authors, such as \cite{Makkaipare} and \cite{elephant2}, use a different (and more classical) definition involving graphs. Sketches are normally used as a generalized notion of theory. From this perspective these approaches are completely equivalent, because the underlying categories of models are the same. \cite[page 40]{Makkaipare} stresses that the graph-definition is a bit more flexible in \textit{daily practice}. Sketches were introduced by C. Ehresmann. Guitart, Lair and Burroni should definitely be mentioned among the relevant contributors. This list of references does not do justice to the French school, which has been extremely prolific on this topic; yet, for the purpose of this paper, the literature above will be more than sufficient.
\subsubsection{Sketches: logic and sketchable categories}
Sketches became quite common among category theorists because of their expressiveness. In fact, they can be used as a categorical analog of those theories that can be axiomatized by (co)limit properties. For example, as recalled in the previous subsection, essentially algebraic theories are precisely those axiomatizable by finite limits.
\subsubsection{From theories to sketches}
We have mentioned that a sketch can be seen as a kind of theory. This is much more than a motto, or a motivational presentation of sketches. In fact, given a (infinitary) first order theory $\mathbb{T}$, one can always construct in a more or less canonical way a sketch $\mathcal S_{\mathbb{T}}$ whose models are precisely the models of $\mathbb{T}$. This is very well explained in \cite[D2.2]{elephant2}; for the sake of exemplification, let us state the theorem which is most relevant to our context.
\begin{thm}
If $\mathbb{T}$ is a (geometric) (coherent) theory, then there exists a (geometric) (coherent) sketch having the same category of models as $\mathbb{T}$.
\end{thm}
Some readers might be unfamiliar with geometric and coherent theories; these are just very specific fragments of first order (infinitary) logic. For a very detailed and clean treatment we suggest \cite[D1.1]{elephant2}. Sketches are quite a handy notion of theory because we can use morphisms of sketches as a notion of translation between theories.
\begin{prop}[{\cite[Ex. 5.7.14]{BOR2}}]
If $f: \mathcal{S} \to\mathcal{T} $ is a morphism of sketches, then composition with $f$ yields an (accessible) functor $\mathsf{Mod}(\cs) \to \mathsf{Mod}(\ct)$.
\end{prop}
\subsubsection{Sketchability}
It should not be surprising that sketches can be used to \textit{axiomatize} accessible and locally presentable categories too. The two following results appear, for example, in \cite[2.F]{adamekrosicky94}.
\begin{thm}
A category is accessible if and only if it's equivalent to the category of models of a mixed sketch.
\end{thm}
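Let us also recall the companion statement for locally presentable categories, together with a standard separating example; both can be found in \cite{adamekrosicky94}.
\begin{exa}[Fields]
A category is locally presentable if and only if it is equivalent to the category of models of a limit sketch. The category of fields separates the two statements: it is finitely accessible, hence sketchable by a mixed sketch, but it has no terminal object, so it is not locally presentable and cannot be sketched by a limit sketch.
\end{exa}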
\subsection{Topoi} \label{backgroundtopoi}
Topoi were defined by Grothendieck as a \textit{natural} generalization of the category of sheaves $\mathsf{Sh}(X)$ over a topological space $X$. Their geometric nature was thus the first to be explored and exploited. Yet, with time, many other properties and facets of them have emerged, making them one of the main concepts in category theory between the 1980s and the 1990s. Johnstone, in the preface of \cite{elephant1}, gives 9 different interpretations of what a topos \textit{can be}. In fact, this multi-faceted nature of the concept of topos motivates the title of his book. In this paper we will focus on two main aspects.
\begin{itemize}
\item A topos is a (categorification of the concept of) locale;
\item A topos is a (family of Morita-equivalent) geometric theory;
\end{itemize}
The first draws the connection with our previous results in \cite{thgeo}, while the second allows us to treat a topos as a placeholder for a geometric theory.
\begin{ach}
In this section by \textit{topos} we mean Grothendieck topos.
\end{ach}
\begin{defn}[Topos]
A topos $\ce$ is a lex-reflective\footnote{This means that it is a reflective subcategory and that the left adjoint preserves finite limits. Lex stands for \textit{left exact}, and was originally motivated by homological algebra.} subcategory\footnote{Up to equivalence of categories.} of a category of presheaves over a small category, $$i^*: \Set^{C^\circ} \leftrightarrows \ce : i_*. $$
\end{defn}
\begin{defn}[Geometric morphism]
A geometric morphism of topoi $f: \ce \to \cf$ is an adjunction $f^*: \cf \leftrightarrows \ce: f_*$\footnote{Notice that $f_*$ is the right adjoint.} whose left adjoint preserves finite limits (is left exact). We will make extensive use of the following terminology:
\begin{enumerate}
\item[$f^*$] is the inverse image functor;
\item[$f_*$] is the direct image functor.
\end{enumerate}
\end{defn}
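To anchor the definition, let us recall the motivating example, which is completely standard.
\begin{exa}
A continuous map $f: X \to Y$ of topological spaces induces a geometric morphism $\mathsf{Sh}(X) \to \mathsf{Sh}(Y)$: the direct image is the pushforward of sheaves, $(f_* F)(V) = F(f^{-1}V)$, and the inverse image is the usual pullback of sheaves, which is indeed left exact. In particular, the unique continuous map $X \to \bullet$ induces the global sections geometric morphism $\mathsf{Sh}(X) \to \Set$.
\end{exa}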
\begin{defn}[$2$-category of Topoi]
The $2$-category of topoi has topoi as objects, geometric morphisms as $1$-cells and natural transformations between left adjoints as $2$-cells.
\end{defn}
\subsubsection{Literature}
There are several standard references for the theory of topoi. Most of the technical content of the paper can be understood via \cite{sheavesingeometry}, a reference that we strongly suggest for a first encounter with topos theory. Unfortunately, the approach of \cite{sheavesingeometry} is a bit different from ours, and even though its content is sufficient for this paper, the intuition that it provides is not $2$-categorical enough for our purposes. The reader might have to supplement it with the encyclopedic \cite{elephant1,elephant2}. A couple of constructions that are quite relevant to us are contained only in \cite{borceux_19943}, which is otherwise very much equivalent to \cite{sheavesingeometry}.
\subsubsection{Topoi and Geometry}
It's a bit hard to convey the relationship between topos theory and geometry in a short subsection. We mainly address the reader to \cite{leinster2010informal}. Let us just mention that to every topological space $X$ one can associate its category of sheaves $\mathsf{Sh}(X)$ (and this category is a topos); moreover, this assignment is a very strong topological invariant. For this reason, the study of $\mathsf{Sh}(X)$ is equivalent to the study of $X$ from the perspective of the topologist, and is very convenient in algebraic geometry and algebraic topology. For example, the category of sets is the topos of sheaves over the one-point space, $$ \Set \cong \mathsf{Sh}(\bullet); $$ for this reason, category theorists sometimes call $\Set$ \textit{the point}. This intuition is consistent with the fact that $\Set$ is the terminal object in the category of topoi. Moreover, just as a point $p \in X$ of a topological space $X$ is a continuous function $p: \bullet \to X$, a point of a topos $\cg$ is a geometric morphism $p: \Set \to \cg$. Parallelisms of this kind have motivated most of the definitions of topos theory, and most have led to results very similar to those that were achieved in formal topology (namely the theory of locales). The class of points of a topos $\ce$ has a natural structure of a category $\mathsf{pt}(\ce)$, the arrows being natural transformations between the inverse images of the geometric morphisms.
\subsubsection{Topoi and Logic}
Geometric logic and topos theory are tightly bound together. Indeed, for a geometric theory $\mathbb{T}$ it is possible to build a topos $\Set[\mathbb{T}]$ (the classifying topos of $\mathbb{T}$) whose category of points is precisely the category of models of $\mathbb{T}$,
$$ \mathsf{Mod}(\mathbb{T}) \cong \mathsf{pt}(\Set[\mathbb T]). $$
This amounts to the theory of classifying topoi \cite[Chap. X]{sheavesingeometry}, and each topos classifies a geometric theory. This gives us a logical interpretation of a topos. Each topos is a \textit{geometric theory}, which in fact can be recovered from any of its sites of definition. Obviously, for each site that describes the same topos we obtain a different theory; yet, these theories have the same category of models (in any topos). In this paper we will exploit the construction of \cite{borceux_19943} to show that to each \textit{geometric} sketch (a kind of theory) one can associate a topos whose points are precisely the models of the sketch. This is another way to say that the category of topoi can internalize geometric logic.
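Let us record a minimal and standard instance of this correspondence.
\begin{exa}[The object classifier]
The presheaf topos $\Set^{\mathsf{Fin}}$ of covariant functors on finite sets classifies the geometric theory of a single sort with no axioms: geometric morphisms $\ce \to \Set^{\mathsf{Fin}}$ correspond to objects of $\ce$. In particular $\mathsf{pt}(\Set^{\mathsf{Fin}}) \simeq \Set$, which is indeed the category of models of that theory.
\end{exa}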
\subsubsection{Special classes of topoi}
In the paper we will study some relevant classes of topoi. In this subsection we recall all of them and give a good reference to check further details. These references will be repeated in the relevant sections.
\begin{table}[!htbp]
\centering
\begin{tabular}{|l|l|}
\hline
Topoi & Reference \\ \hline
connected & {\cite[C1.5.7]{elephant2}} \\ \hline
compact & {\cite[C3.2]{elephant2}} \\ \hline
atomic & {\cite[C3.5]{elephant2}} \\ \hline
locally decidable & {\cite[C5.4]{elephant2}} \\ \hline
coherent & {\cite[D3.3]{elephant2}} \\ \hline
boolean & {\cite[D3.4, D4.5]{elephant2}, \cite[A4.5.22]{elephant1}} \\ \hline
\end{tabular}
\end{table}
\subsection{Ionads} \label{backgroundionads}
Ionads were defined by Garner in \cite{ionads}; together with our \cite{thgeo}, this is all the literature available on the topic. Garner's definition is designed to generalize the definition of topological space. Indeed a topological space is the data of a set $X$ (of points) and an interior operator, $$\text{Int}: 2^X \to 2^X.$$ Garner builds on the well-known analogy between powerset and presheaf categories and extends the notion of interior operator to a presheaf category. The whole theory is extremely consistent with the expectations: while the poset of coalgebras for the interior operator of a topological space is its locale of open sets, the category of coalgebras of an ionad is a topos, a natural categorification of the concept of locale. In our paper \cite{thgeo} we have provided a more refined notion of ionad which will be very important for our constructions; Garner's notion would not apply to any of our cases.
\subsection{Garner's definitions}
\begin{defn}[Ionad]
An ionad $\cx = (X, \text{Int})$ is a set $X$ together with a comonad $\text{Int}: \Set^X \to \Set^X$ preserving finite limits.
\end{defn}
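Following \cite{ionads}, let us sketch the motivating example; we only outline the construction and refer to the original paper for the details.
\begin{exa}[Spaces as ionads]
Every topological space induces an ionad on its set of points $X$, where the comonad sends $A \in \Set^X$ to $$\text{Int}(A)(x) = \underset{U \ni x}{\mathrm{colim}} \; \prod_{y \in U} A(y),$$ with $U$ ranging over the (directed) poset of open neighborhoods of $x$. This comonad is lex because finite limits commute with both products and directed colimits, and its category of coalgebras recovers the topos $\mathsf{Sh}(X)$, in perfect analogy with the fact that the open sets of $X$ are the coalgebras of the interior operator.
\end{exa}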
\begin{defn}[Category of opens of an ionad]
The category of opens $\mathbb{O}(\cx)$ of an ionad $\cx = (X, \text{Int})$ is the category of coalgebras of $\text{Int}$. We shall denote by $U_{\cx}$ the forgetful functor $U_\cx: \mathbb{O}(\cx) \to \Set^X$.
\end{defn}
\begin{defn}[Morphism of Ionads]
A morphism of ionads $f: \cx \to \cy$ is a couple $(f, f^\sharp)$ where $f: X \to Y$ is a set function and $f^\sharp$ is a lift of $f^*$,
\begin{center}
\begin{tikzcd}
\mathbb{O}(\cy) \arrow[rr, "f^\sharp" description] \arrow[dd, "U_\cy" description] & & \mathbb{O}(\cx) \arrow[dd, "U_\cx" description] \\
& & \\
\Set^Y \arrow[rr, "f^*" description] & & \Set^X
\end{tikzcd}
\end{center}
\end{defn}
\begin{defn}[Specialization of morphisms of ionads]
Given two morphisms of ionads $f,g: \cx \to \cy$, a specialization $\alpha: f \Rightarrow g$ is a natural transformation between $f^\sharp$ and $g^\sharp$,
\begin{center}
\begin{tikzcd}
\mathbb{O}(\cy) \arrow[r, "f^\sharp" description, bend left=35, ""{name=U, below}]
\arrow[r,"g^\sharp" description, bend right=35, ""{name=D}]
& \mathbb{O}(\cx)
\arrow[Rightarrow, "\alpha" description, from=U, to=D] \end{tikzcd}
\end{center}
\end{defn}
\begin{defn}[$2$-category of Ionads]
The $2$-category of ionads has ionads as objects, morphisms of ionads as $1$-cells and specializations as $2$-cells.
\end{defn}
\begin{defn}[Bounded Ionads]
An ionad $\cx$ is bounded if $\mathbb{O}(\cx)$ is a topos.
\end{defn}
\subsection{A generalization of Garner's ionads}
In his paper, Garner mentions that in giving the definition of ionad he could have chosen a category instead of a set \cite[Rem. 2.4]{ionads}. For technical reasons we cannot be content with Garner's original definition; thus we will allow ionads over a category (as opposed to a set), even a locally small (but possibly large) one.
\begin{defn}[{\cite[Def 2.2.2]{thgeo}} Generalized Ionads]
A generalized ionad $\cx = (X, \text{Int})$ is a locally small (but possibly large) pre-finitely cocomplete category $X$ together with a lex comonad $\text{Int}: \P(X) \to \P(X)$.
\end{defn}
\begin{ach}
We will always omit the adjective \textit{generalized}.
\end{ach}
\begin{rem} The notion of generalized ionad has some delicate aspects that we cannot discuss in this paper; we refer to our discussion in \cite[Sec. 2]{thgeo} for a complete introduction to the topic. What is important to keep in mind is that, despite the technical issues, the very idea of Garner remains completely available, and all the other assumptions take care of set-theoretic subtleties generated by the fact that we study possibly large categories instead of crude sets.
\end{rem}
\begin{rem} The main results of \cite[Thm. 3.2.6 and 4.0.3]{thgeo} provide an adjunction between the $2$-category of ionads and the $2$-category of topoi which categorifies the adjunction between topological spaces and locales. Since the latter can be seen as a completeness theorem for propositional logic, we will use \cite[Thm. 3.2.6 and 4.0.3]{thgeo} in this paper to deduce completeness-like theorems for geometric logic.
\end{rem}
\section{Generalized axiomatizations and the Scott construction} \label{logicgeneralizedaxiom}
\begin{rem}\label{groups}
Let $\mathsf{Grp}$ be the category of groups and $\mathsf{U}: \mathsf{Grp} \to \Set$ be the forgetful functor. The historical starting point of a categorical understanding of universal algebra was precisely that one can recover (a maximal presentation of) the algebraic theory of groups from $\mathsf{U}$. Consider all the natural transformations of the form \[\mu: \mathsf{U}^n \Rightarrow \mathsf{U}^m, \]
these can be seen as implicitly defined operations of groups. If we gather these operations in an equational theory $\mathbb{T}_\mathsf{U}$, we see that the functor $\mathsf{U}$ lifts to the category of models $\mathsf{Mod}(\mathbb{T}_\mathsf{U})$ as indicated by the diagram below.
\begin{center}
\begin{tikzcd}
\mathsf{Grp} \arrow[rdd, "\mathsf{U}" description] \arrow[r, dotted] & \mathsf{Mod}(\mathbb{T}_\mathsf{U}) \arrow[dd, "|-|" description] \\
& \\
& \Set
\end{tikzcd}
\end{center}
It is a classical result that the comparison functor above is fully faithful and essentially surjective; thus we have axiomatized the category of groups (probably with a non-minimal family of operations).
\end{rem}
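To make the previous remark concrete, let us spell out the simplest instances of such natural transformations; this is folklore and is only recorded for the reader's convenience.
\begin{exa}
The group multiplication assembles into a natural transformation $\mu: \mathsf{U}^2 \Rightarrow \mathsf{U}$ whose component at a group $G$ is $\mu_G(x,y) = xy$: naturality at a homomorphism $f: G \to H$ is precisely the equation $f(xy)=f(x)f(y)$. Similarly, inversion gives $\iota: \mathsf{U} \Rightarrow \mathsf{U}$ and the neutral element gives $e: \mathsf{U}^0 \Rightarrow \mathsf{U}$, where $\mathsf{U}^0$ is the terminal functor. The group axioms become equations between composites of such transformations; for instance, associativity states that the two evident composites $\mathsf{U}^3 \Rightarrow \mathsf{U}$ built from $\mu$ coincide.
\end{exa}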
\begin{rem}\label{lawvereliterature}
The idea above was introduced in Lawvere's PhD thesis \cite{lawvere1963functorial} and later developed in great generality by Linton \cite{10.1007/978-3-642-99902-4_3,10.1007/BFb0083080}. The interested reader might find \cite[Chap. 3]{adamekrosicky94} and the expository paper \cite{HYLAND2007437} interesting. Nowadays this is a standard technique in categorical logic, and some generalizations of it were presented in \citep{infinitarylang} by Rosický and later again in \citep[Rem. 3.5]{LB2014}.
\end{rem}
\begin{rem}[Lieberman-Rosický construction]
In \citep[Rem. 3.5]{LB2014}, given a couple $(\ca, \mathsf{U})$ where $\ca$ is an accessible category with directed colimits together with a faithful functor $\mathsf{U}: \ca \to \Set$ preserving directed colimits, the authors form a category $\mathbb{U}$ whose objects are finitely accessible sub-functors of $\mathsf{U}^n$ and whose arrows are natural transformations between them. Of course there is a naturally attached signature $\Sigma_\mathsf{U}$ and a naturally attached first order theory $\mathbb{T}_\mathsf{U}$. In the same fashion as the previous remarks one finds a comparison functor $\ca \to \Sigma_\mathsf{U}\text{-Str}$. In \citep[Rem. 3.5]{LB2014} the authors stress that it is the most natural candidate to axiomatize $\ca$. A model of $\mathbb{T}_\mathsf{U}$ is the same as a functor $\mathbb{U} \to \Set$ preserving products and subobjects. Of course the functor $\ca \to \Sigma_\mathsf{U}\text{-Str}$ factors through $\text{Mod}(\mathbb{U})$ (seen as a sketch) \[l: \ca \to \text{Mod}(\mathbb{U}),\] but in \citep[Rem. 3.5]{LB2014} this was not the main concern of the authors.
\end{rem}
\begin{rem}[Rosický's remark]
Rem. \ref{groups} ascertains that the collection of functors $\{ \mathsf{U}^n\}_{n \in \mathbb{N}}$, together with all the natural transformations between them, retains all the information about the category of groups. Observe that in this specific case the functors $\mathsf{U}^n$ all preserve directed colimits, because finite limits commute with directed colimits. More generally, when $\ca$ does not come equipped with a special forgetful functor, or when we simply do not want to choose a specific one, we can follow the general strategy of the remarks above and collect \textit{all} the functors into $\Set$ preserving directed colimits in a category. This is the Scott construction.
\end{rem}
\begin{con}[The Scott construction]\label{defnS}
We recall the construction of $\mathsf{S}$ from \cite{simon} and \cite{thcat}. Let $\ca$ be an accessible category with directed colimits. $\mathsf{S}(\ca)$ is defined as the category of functors into sets preserving directed colimits, \[\mathsf{S}(\ca) = \text{Acc}_{\omega}(\ca, \Set).\]
The category $\mathsf{S}(\ca)$ is a Grothendieck topos and thus can be seen as a geometric theory. Following the discussion above, this is a candidate geometric axiomatization of $\ca$. In \cite{thcat} we study the Scott construction and show that it is functorial, providing a left adjoint for the functor of points.
\end{con}
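Let us record the simplest computation of a Scott topos; it is an instance of \cite[Rem. 2.13]{thcat}, which computes the Scott topos of any finitely accessible category.
\begin{exa}
Since $\Set$ is finitely accessible and $\Set_\omega = \mathsf{Fin}$, a functor $\Set \to \Set$ preserving directed colimits is determined by its restriction to finite sets, so that $$\mathsf{S}(\Set) \simeq \Set^{\mathsf{Fin}},$$ the object classifier recalled in the Background section. Consistently with the adjunction below, its category of points is $\Set$ itself.
\end{exa}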
\begin{rem}[The functor $\mathsf{pt}$]\label{pt}
The functor of points $\mathsf{pt}: \text{Topoi} \to \text{Acc}_\omega$ has been in the literature for quite some time; $\mathsf{pt}$ is the covariant hom functor $\text{Topoi}(\Set, - )$. It maps a Grothendieck topos $\cg$ to its category of points, \[\cg \mapsto \text{Cocontlex}(\cg, \Set).\]
Of course given a geometric morphism $f: \cg \to \ce$, we get an induced morphism $\mathsf{pt}(f): \mathsf{pt}(\cg) \to \mathsf{pt}{(\ce)}$ mapping $p^* \mapsto p^* \circ f^*$. The fact that $\text{Topoi}(\Set, \cg)$ is an accessible category with directed colimits appears in the classical reference by Borceux as \cite[Cor. 4.3.2]{borceux_19943}, while the fact that $\mathsf{pt}(f)$ preserves directed colimits follows trivially from the definition.
\end{rem}
\begin{rem}
When we identify the category of topoi with a localization of the category of geometric theories, the functor of points computes the (set-theoretic) models of the theory classified by the topos. It being a right adjoint is coherent with the intuition that its left adjoint computes the \textit{free theory} over an accessible category with directed colimits.
\end{rem}
\begin{thm}[{\cite[Prop. 2.3]{simon},\cite[Thm. 2.1]{thcat}} The Scott adjunction]\label{scottadj}
The $2$-functor of points $\mathsf{pt} :\text{Topoi} \to \text{Acc}_{\omega} $ has a left biadjoint $\mathsf{S}$, yielding the Scott biadjunction, $$\mathsf{S} : \text{Acc}_{\omega} \leftrightarrows \text{Topoi}: \mathsf{pt}. $$
\end{thm}
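For later use, let us also make the unit of this biadjunction explicit; the sketchy description below is consistent with \cite{thcat}, to which we refer for the precise statements.
\begin{rem}[The unit of the Scott adjunction]
The unit $\eta_\ca: \ca \to \mathsf{pt}\mathsf{S}(\ca)$ sends an object $a \in \ca$ to the evaluation functor $\mathsf{ev}_a: \mathsf{S}(\ca) \to \Set$, $f \mapsto f(a)$. Indeed $\mathsf{ev}_a$ preserves colimits and finite limits, as both are computed pointwise in $\mathsf{S}(\ca)$, and thus it is the inverse image of a point of $\mathsf{S}(\ca)$.
\end{rem}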
\begin{rem}[Back to the Lieberman-Rosický construction]
Going back to the Lieberman-Rosický construction, the previous discussion implies that the small category $\{ \mathsf{U}^n\}_{n \in \mathbb{N}}$ is a full subcategory of the Scott topos of the category of groups. In fact the vocabulary of the theory that we used to axiomatize the category of groups is made up of symbols coming from a full subcategory of the Scott topos.
\end{rem}
\begin{rem}[Generalized axiomatizations]
The generalized axiomatization of Lieberman and Rosický amounts to a sketch $\mathbb{U}$. As we mentioned, there exists an obvious inclusion of $\mathbb{U}$ in the Scott topos of $\ca$, $$i: \mathbb{U} \to \mathsf{S}(\ca)$$ which is a flat functor because finite limits in $\mathsf{S}(\ca)$ are computed pointwise in $\Set^\ca$. Thus, every point $p: \Set \to \mathsf{S}(\ca)$ induces a model of the sketch $\mathbb{U}$ by composition,
$$i^*: \mathsf{pt}(\mathsf{S}\ca) \to \text{Mod}(\mathbb{U})$$
$$p \mapsto p^* \circ i.$$
In particular this shows that the unit of the Scott adjunction lifts the comparison functor between $\ca$ and $\text{Mod}(\mathbb{U})$ along $i^*$ and thus the Scott topos provides a \textit{sharper} axiomatization of $\mathbb{T}_\mathsf{U}$.
\begin{center}
\begin{tikzcd}
& \ca \arrow[ldd, "\eta_\ca" description, bend right] \arrow[rdd, "l" description, bend left] & \\
& & \\
\mathsf{pt}\mathsf{S}(\ca) \arrow[rr, "i^*" description] & & \text{Mod}(\mathbb{U})
\end{tikzcd}
\end{center}
\end{rem}
\begin{rem}[Faithful functors are likely to generate the Scott topos] It should be noticed that when $\mathbb{U}$ is a generator in $\mathsf{S}(\ca)$, the functor $i^*$ is an equivalence of categories. As unlikely as it may sound, in all the examples that we can think of, a generator of the Scott topos is always given by a faithful forgetful functor $\mathsf{U}: \ca \to \Set$. This phenomenon is so pervasive that the author has believed for quite some time that an object in the Scott topos $\mathsf{S}(\ca)$ is a generator if and only if it is faithful and conservative. We still lack a counterexample, or a theorem proving such a statement.
\end{rem}
\section{Classifying topoi} \label{logicclassifyingtopoi}
This section is devoted to specifying the connection between Scott topoi, Isbell topoi and classifying topoi. Recall that for a geometric theory $\mathbb{T}$, a classifying topos $\Set[\mathbb{T}]$ is a topos representing the functor of models in topoi, \[ \mathsf{Mod}_{(-)}(\mathbb{T}) \cong \text{Topoi}(-, \Set[\mathbb{T}]).\] The theory of classifying topoi allows us to internalize geometric logic in the internal logic of the $2$-category of topoi. The reader that is not familiar with the theory of classifying topoi is encouraged to check the Appendix.
\subsection{Categories of models, Scott topoi and classifying topoi}
The Scott topos $\mathsf{S}(\mathsf{Grp})$ of the category of groups is $\Set^{\mathsf{Grp}_{\omega}}$; this follows from \cite[Rem 2.13]{thcat} and applies to $\mathsf{Mod}(\mathbb{T})$ for every Lawvere theory $\mathbb{T}$. It is well known that $\Set^{\mathsf{Grp}_{\omega}}$ is also the classifying topos of the theory of groups. This section is devoted to understanding whether this is just a coincidence, or whether the Scott topos is actually related to the classifying topos.
\begin{rem}
Let $\ca$ be an accessible category with directed colimits. In order to properly ask the question \textit{is $\mathsf{S}(\ca)$ the classifying topos?}, we should answer the question \textit{the classifying topos of what?} Indeed $\ca$ is just a category, while one can compute classifying topoi of theories. Our strategy is to introduce a quite general notion of theory that fits in the following diagram,
\begin{center}
\begin{tikzcd}
\text{Acc}_\omega \arrow[rr, "\mathsf{S}" description, bend right=10] & & \text{Topoi} \arrow[ll, "\mathsf{pt}" description, bend right=10] \\
& & \\
& & \\
& \mathsf{Theories} \arrow[dotted, luuu, "\mathsf{Mod}(-)" description, bend left=20] \arrow[dotted, ruuu, "\gimel(-)" description, bend right=20] &
\end{tikzcd}
\end{center}
in such a way that:
\begin{enumerate}
\item $\gimel(\mathbb{T})$ gives the classifying topos of $\mathbb{T}$;
\item $\mathsf{Mod}(-) \cong \mathsf{pt} \gimel (-)$.
\end{enumerate}
In this new setting we can reformulate our previous discussion in the following mathematical question: \[\gimel(-) \stackrel{?}{\cong} \mathsf{S} \mathsf{Mod}(-).\]
\end{rem}
\begin{rem}[Geometric Sketches]
The notion of theory that we plan to use is that of geometric sketch. The category of (small) sketches was described in \cite[3.1]{Makkaipare}, while a detailed study of geometric sketches was conducted in \cite{adamek_johnstone_makowsky_rosicky_1997,Admek1996OnGA}.
\begin{center}
\begin{tikzcd}
\text{Acc}_\omega \arrow[rr, "\mathsf{S}" description, bend right=10] & & \text{Topoi} \arrow[ll, "\mathsf{pt}" description, bend right=10] \\
& & \\
& & \\
& \mathsf{GSketches} \arrow[luuu, "\mathsf{Mod}(-)" description, bend left=20] \arrow[ruuu, "\gimel(-)" description, bend right=20] &
\end{tikzcd}
\end{center}
\end{rem}
\begin{rem}
Following \cite{Makkaipare}, there exists a natural way to generate a sketch from any accessible category. This construction, in principle, even gives a left adjoint for the functor $\mathsf{Mod}(-)$, but it lands in large sketches. Thus it is indeed true that for each accessible category there exists a sketch (a theory) canonically associated to it. We do not follow this line because the notion of large sketch, from a philosophical perspective, is a bit unnatural. Syntax should always be very frugal. From an operational perspective, presentations should always be as small as possible. It is possible to cut down the size of the sketch, but this construction cannot be defined functorially on the whole category of accessible categories with directed colimits. Since elegance and naturality are among the main motivations for this treatment of syntax-semantics dualities, we decided to avoid any kind of non-natural construction.
\end{rem}
\begin{rem}
Geometric sketches contain coherent sketches. In the dictionary between logic and geometry that is well motivated in the indicated papers (\cite{adamek_johnstone_makowsky_rosicky_1997,Admek1996OnGA}), these two classes correspond respectively to geometric and coherent theories. The latter essentially contain all first order theories via the process of Morleyization. These observations make our choice of geometric sketches a very general notion of theory and make us confident that it is a good notion to look at.
\end{rem}
We now proceed to describe the two functors labeled with the name of $\mathsf{Mod}$ and $\gimel$.
\begin{rem}[Mod]
This 2-functor is very easy to describe. To each sketch $\cs$ we associate its category of $\Set$-models, while it is quite evident that a morphism of sketches induces, by precomposition, a functor preserving directed colimits (see Sec. \ref{backgroundsketches} in the Background section).
\end{rem}
\begin{con}[$\gimel$]
The topos completion of a geometric sketch is a highly nontrivial object to describe. Among the possible constructions that appear in the literature, we refer to \citep[4.3]{borceux_19943}. Briefly, the idea behind this construction is the following.
\begin{enumerate}
\item By {\citep[4.3.3]{borceux_19943}}, every sketch $\cs$ can be completed to a sketch $\bar{\cs}$ whose underlying category is cartesian.
\item By {\citep[4.3.6]{borceux_19943}}, this construction is functorial and does not change the models of the sketch in any Grothendieck topos.
\item By {\citep[4.3.8]{borceux_19943}}, the completion of the sketch has a natural topology $\bar{J}$.
\item The correspondence $\cs \mapsto \bar{\cs} \mapsto (\bar{\cs}, \bar{J})$ transforms geometric sketches into sites and morphisms of sketches into morphisms of sites.
\item We compute sheaves over the site $(\bar{\cs}, \bar{J})$.
\item Define $\gimel$ to be $\cs \mapsto \bar{\cs} \mapsto (\bar{\cs}, \bar{J}) \mapsto \mathsf{Sh}(\bar{\cs}, \bar{J})$.
\end{enumerate}
\end{con}
\begin{rem}
While \citep[4.3.6]{borceux_19943} proves that $\mathsf{Mod}(-) \simeq \mathsf{pt} \gimel (-)$, and \citep[4.3.8]{borceux_19943} proves that $\gimel(\cs)$ is the classifying topos of $\cs$ among Grothendieck topoi, the main question of this section remains completely open: is $\gimel(\cs)$ equivalent to the Scott topos $\mathsf{S} \mathsf{Mod}(\cs)$ of the category of $\Set$-models of $\cs$? We answer this question with the following theorem.
\end{rem}
\begin{thm}\label{classificatore}
If the counit $\epsilon_{\gimel(\cs)}$ of the Scott adjunction is an equivalence of categories on $\gimel(\cs)$, then $\gimel(\cs)$ coincides with $\mathsf{S} \mathsf{Mod}(\cs)$.
\end{thm}
\begin{proof}
We introduced enough technology to make this proof incredibly slick. Recall the counit \[\mathsf{S} \mathsf{pt} (\gimel(\cs)) \to \gimel(\cs) \] and assume that it is an equivalence of categories. Now, since $\mathsf{Mod}(-) \simeq \mathsf{pt} \gimel (-)$, we obtain that \[\gimel(\cs) \simeq \mathsf{S}\mathsf{Mod}(\cs),\] which is indeed our thesis.
\end{proof}
\begin{rem}
\cite[Thm 4.1.3 and 4.3.3]{thgeo} characterize those topoi for which the counit is an equivalence of categories, providing a full description of those geometric sketches for which $\gimel(\cs)$ coincides with $\mathsf{S} \mathsf{Mod}(\cs)$.
\end{rem}
\subsection{Ionads of models, Isbell topoi and classifying topoi}
Indeed the main result of this section up to this point has been partially unsatisfactory. As happens sometimes, the answer is not as nice as expected because the question in the first place did not take into consideration some relevant factors. The category of models of a sketch does not retain enough information on the sketch. Fortunately, we will show that every sketch has an ionad of models (not just a category), and the category of opens of this ionad is a much better approximation of the classifying topos.
In this subsection, we switch our diagram of study to the one below.
\begin{center}
\begin{tikzcd}
\text{BIon} \arrow[rr, "\mathbb{O}" description, bend right=10] & & \text{Topoi} \arrow[ll, "\mathbbm{pt}" description, bend right=10] \\
& & \\
& & \\
& \mathsf{LGSketches} \arrow[luuu, "\mathbb{M}\mathbbm{od}(-)" description, bend left=20] \arrow[ruuu, "\gimel(-)" description, bend right=20] &
\end{tikzcd}
\end{center}
Of course, in order to study it, we need to introduce all its nodes and legs. We should say what we mean by $\mathsf{LGSketches}$ and $\mathbb{M}\mathbbm{od}(-)$. The adjunction $\mathbb{O} \dashv \mathbbm{pt}$ was introduced and studied in \cite{thgeo} and it relates topoi to bounded ionads, we refer to \cite[Sec. 3]{thgeo} for the construction, while an introduction to ionads can be found in the Appendix.
Whatever $\mathsf{LGSketches}$ and $\mathbb{M}\mathbbm{od}(-)$ will be, the main point of the section is to show that this diagram fixes the one of the previous section, in the sense that we will obtain the following result.
\begin{thm*} The following are equivalent:
\begin{itemize}
\item $\gimel(\cs)$ has enough points;
\item $\gimel(\cs)$ coincides with $\mathbb{O}\mathbb{M}\mathbbm{od}(\cs)$.
\end{itemize}
\end{thm*}
We decided to present this theorem separately from the previous one because an ionad of models is a much more complex object to study than a category of models; thus the results of the previous section remain very interesting, because they are easier to handle.
\begin{exa}[Motivating ionads of models: Ultracategories]
We are not completely used to thinking about ionads of models. Indeed a (bounded) ionad is quite complex data, and we do not completely have a logical intuition for its interior operator. \textit{In which sense does the interior operator equip a category of models with a topology?} One very interesting example, which to our knowledge has not appeared in the literature, is the case of ultracategories. Ultracategories were introduced by Makkai in \cite{AWODEY2013319} and later simplified by Lurie in \cite{lurieultracategories}. These objects are the data of a category $\ca$ together with an ultrastructure, that is, a family of functors \[\int_X:\beta(X) \times \ca^X \to \ca. \] We refer to \cite{lurieultracategories} for the precise definition. In a nutshell, each of these functors $\int_X$ defines a way to compute the ultraproduct of an $X$-indexed family of objects along some ultrafilter. Of course there is a notion of morphism of ultracategories, namely a functor $\ca \to \cb$ which is compatible with the ultrastructure \cite[Def. 1.41]{lurieultracategories}. Since the category of sets has a natural ultrastructure, for every ultracategory $\ca$ one can define $\text{Ult}(\ca, \Set)$, which obviously sits inside $\Set^\ca$. Lurie observes that the inclusion \[\iota: \text{Ult}(\ca, \Set) \to \Set^\ca\] preserves all colimits \cite[War. 1.4.4]{lurieultracategories}, and in fact also finite limits (the proof is the same). In particular, when $\ca$ is accessible and every ultrafunctor is accessible, the inclusion $\iota: \text{Ult}(\ca, \Set) \to \Set^\ca$ factors through $\P(\ca)$, and thus the ultrastructure over $\ca$ defines an idempotent lex comonad over $\P(\ca)$ by the adjoint functor theorem. This shows that every (good enough) accessible ultracategory yields an ionad, which is also \textit{compact} in the sense that its category of opens is a compact (coherent) topos. This example is really a step towards a categorified Stone duality involving compact ionads and boolean topoi.
\end{exa}
\subsubsection{$\mathsf{LGSketches}$ and $\mathbb{M}\mathbbm{od}(-)$}
\begin{defn}
A geometric sketch $\mathcal{S}$ is lex if its underlying category has finite limits and every limiting cone is in the limit class.
\end{defn}
\begin{rem}[Lex sketches are \textit{enough}]
\cite[4.3.3]{borceux_19943} shows that every geometric sketch can be replaced with a lex geometric sketch in such a way that the underlying category of models, and even the classifying topos, does not change. In this sense this full subcategory of geometric sketches is as expressive as the whole category of geometric sketches.
\end{rem}
\begin{prop}[$\mathbb{M}\mathbbm{od}(-)$ on objects]
Every lex geometric sketch $\mathcal{S}$ induces an ionad $\mathbb{M}\mathbbm{od}(\mathcal{S})$ over its category of models $\mathsf{Mod}(\mathcal{S})$.
\end{prop}
\begin{proof}
The underlying category of the ionad $\mathbb{M}\mathbbm{od}(\mathcal{S})$ is $\mathsf{Mod}(\mathcal{S})$. We must provide an interior operator (a lex comonad), $$\text{Int}_{\cs}: \P({\mathsf{Mod}(\mathcal{S})}) \to \P({\mathsf{Mod}(\mathcal{S})}). $$ In order to do so, we consider the evaluation pairing $\mathsf{eval}: \cs \times \mathsf{Mod}(\cs) \to \Set$ mapping $(s,p) \mapsto p(s)$. Let $\mathsf{ev}: \cs \to \Set^{\mathsf{Mod}(\cs)}$ be its mate. Similarly to \cite[Con. 3.2.3]{thgeo}, this functor takes values in $\P(\mathsf{Mod}(\cs))$. Because $\mathcal{S}$ is a lex sketch, this functor must preserve finite limits. Indeed, \[\mathsf{ev}(\lim s_i)(-) \cong (-)(\lim s_i) \cong \lim ((-)(s_i)) \cong \lim \mathsf{ev}(s_i)(-).\]
Now, the left Kan extension $\lan_y \mathsf{ev}$ (see the diagram below) is left exact because $\P(\mathsf{Mod}(\cs))$ is an infinitary pretopos and $\mathsf{ev}$ preserves finite limits.
\begin{center}
\begin{tikzcd}
\cs \arrow[rr, "\mathsf{ev}" description] \arrow[ddd, "y" description] & & \P(\mathsf{Mod}(\cs)) \\
& & \\
& & \\
\Set^{\cs^\circ} \arrow[rruuu, "\lan_y \mathsf{ev}" description, dashed, bend right] & &
\end{tikzcd}
\end{center}
Moreover it is cocontinuous because of the universal property of the presheaf construction. Because $\Set^{\cs^\circ}$ is a total category, $\lan_y \mathsf{ev}$ must have a right adjoint (and it must coincide with $\lan_{\mathsf{ev}} y$). The induced comonad must be left exact, because the left adjoint is left exact. Define \[\text{Int}_{\cs}:=\lan_y \mathsf{ev} \circ \lan_{\mathsf{ev}} y. \]
Observe that $\text{Int}_{\cs}$ coincides with the density comonad of $\mathsf{ev}$ by \cite[A.7]{liberti2019codensity}. This result dates back to \cite{appelgate1969categories}.
\end{proof}
\begin{rem}[$\mathbb{M}\mathbbm{od}(-)$ on morphisms of sketches]
This definition will not be given explicitly: in fact we will use the results of the next subsection to show that the ionad above is isomorphic to the one induced by $\gimel(\cs)$, and thus there exists a natural way to define $\mathbb{M}\mathbbm{od}(-)$ on morphisms.
\end{rem}
\subsubsection{Ionads of models and theories with enough points}
\begin{rem}
In the main result of the previous section, a relevant rôle was played by the fact that $\mathsf{pt}\gimel \simeq \mathsf{Mod}.$ The same must be true in this one. Thus we should show that $\mathbbm{pt}\gimel \simeq \mathbb{M}\mathbbm{od}.$ Indeed we only need to show that the interior operator is the same, because the underlying category is the same by the discussion in the previous section.
\end{rem}
\begin{prop}
\[\mathbbm{pt} \circ \gimel \simeq \mathbb{M}\mathbbm{od}.\]
\end{prop}
\begin{proof}
Let $\cs$ be a lex geometric sketch. Of course there is a map $j: \cs \to \gimel(\cs)$, because $\cs$ is a site of definition of $\gimel(\cs)$. Moreover, $j$ is obviously dense. In particular the evaluation functor that defines the ionad $\mathbbm{pt} \circ \gimel$, given by $ev^*: \gimel(\cs) \to \P({\mathsf{pt} \circ \gimel(\cs)})$, is uniquely determined by its composition with $j$. This means that the comonad $ev^*ev_*$ is isomorphic to the density comonad of the composition $ev^* \circ j$. Indeed, \[ev^*ev_* \cong \lan_{ev^*} ev^* \cong \lan_{ev^*} (\lan_j(ev^*j)) \cong \lan_{ev^*j}(ev^*j).\] Yet, $ev^*j$ is evidently $\mathsf{ev}$, and thus $ev^* ev_* \cong \text{Int}_\cs$ as desired.
\end{proof}
\begin{thm} \label{isbellclassificatore} The following are equivalent:
\begin{itemize}
\item $\gimel(\cs)$ has enough points;
\item $\gimel(\cs)$ coincides with $\mathbb{O}\mathbb{M}\mathbbm{od}(\cs)$.
\end{itemize}
\end{thm}
\begin{proof}
By \cite[Thm. 4.0.3]{thgeo}, $\gimel(\cs)$ has enough points if and only if the counit of the categorified Isbell duality $\rho: \mathbb{O}\mathbbm{pt}(\gimel(\cs)) \to \gimel(\cs)$ is an equivalence of topoi. Now, since $\mathbbm{pt} \circ \gimel \cong \mathbb{M}\mathbbm{od}$, we obtain the thesis.
\end{proof}
\section{Abstract elementary classes and locally decidable topoi} \label{logicaec}
\subsection{A general discussion}
This section is dedicated to the interaction between abstract elementary classes and the Scott adjunction. Abstract elementary classes were introduced in the 1970s by Shelah as a framework to encompass infinitary logics within the language of model theorists. In principle, an abstract elementary class $\ca$ should look like the category of models of a first order infinitary theory whose morphisms are elementary embeddings. The problem of relating abstract elementary classes and accessible categories has been tackled by Lieberman \citep{L2011}, and Beke and Rosický \cite{aec}, and lately has attracted the interest of model theorists such as Vasey, Boney and Grossberg \citep{everybody}. There are many partial, even very convincing, results towards this characterization. Let us recall at least one of them. For us, this characterization will be the definition of abstract elementary class.
\begin{thm}[{\citep[5.7]{aec}}]\label{AEC} A category $\ca$ is equivalent to an abstract elementary class if and only if it is an accessible category with directed colimits, whose morphisms are monomorphisms, and which admits an embedding $U$ into a finitely accessible category such that $U$ is full with respect to isomorphisms, nearly full, and preserves directed colimits and monomorphisms.
\end{thm}
\begin{defn}
A functor $\mathsf{U} : \ca \to \cb$ is nearly full if, given a commutative diagram,
\begin{center}
\begin{tikzcd}
\mathsf{U}(a) \arrow[rrd, "\mathsf{U}(f)" description] \arrow[dd, "h" description] & & \\
& & \mathsf{U}(c) \\
\mathsf{U}(b) \arrow[rru, "\mathsf{U}(g)" description] & &
\end{tikzcd}
\end{center}
in $\cb$, there is a map $\bar{h}$ in $\ca$ such that $h = \mathsf{U}(\bar{h})$ and $g\bar{h}=f$. Observe that when $\mathsf{U}$ is faithful such a filling has to be unique.
\end{defn}
\begin{rem}
In some references the notion of nearly full functor was called coherent, referring directly to the \textit{coherence axiom} of AECs that it incarnates. The word coherent is overloaded in category theory, and thus we do not adopt this terminology, even though nowadays it is getting more and more common.
\end{rem}
\begin{exa}[$\mathsf{pt}(\ce)$ is likely to be an AEC]\label{esempiobase}
Let $\ce$ be a Grothendieck topos and $f^*: \Set^C \leftrightarrows \ce :f_*$ a presentation of $\ce$. By a combination of \cite[\text{Prop.} 4.2]{thcat} and \cite[Rem. 2.12]{thcat}, applying the functor $\mathsf{pt}$ we get a fully faithful functor \[\mathsf{pt}(\ce) \stackrel{}{\to} \mathsf{pt}(\Set^C) \stackrel{}{\cong} \mathsf{Ind}(C) \]
into a finitely accessible category. Thus when every map in $\mathsf{pt}(\ce)$ is a monomorphism we obtain that $\mathsf{pt}(\ce)$ is an AEC via Thm. \ref{AEC}. We will see in the next section (Thm. \ref{LDCAECs}) that this happens when $\ce$ is locally decidable; thus the category of points of a locally decidable topos is always an AEC.
\end{exa}
\begin{exa}[$\eta_\ca$ behaves nicely on AECs]
When $\ca$ is an abstract elementary class, the unit of the Scott adjunction $\eta_\ca: \ca \to \mathsf{pt}\mathsf{S}(\ca)$ is faithful and iso-full. This follows directly from \cite[Prop 4.13]{thcat}.
\end{exa}
\begin{rem}
Even if this is the sharpest (available) categorical characterization of AECs, it is not hard to see how unsatisfactory it is. Among the most evident problems, one can see that it is hard to provide a categorical \textit{understanding} of nearly full functors and of fullness with respect to isomorphisms. Of course, another problem is that the list of requirements is pretty long and very hard to check: \textit{when does such a $U$ exist?}
\end{rem}
It is very hard to understand when such an embedding (a pseudo monomorphism in $\text{Acc}_{\omega}$) exists. That is why it is very useful to have a testing lemma for its existence.
\begin{thm}[Testing lemma]
Let $\ca$ be an object in $\text{Acc}_{\omega}$ where every morphism is a monomorphism. If $\eta_\ca$ is a nearly-full pseudo monomorphism, then $\ca$ is an AEC.
\end{thm}
\begin{proof}
The proof is relatively easy: choose a presentation $f^*: \Set^C \leftrightarrows \mathsf{S}(\ca): f_*$ of $\mathsf{S}(\ca)$. Now in
\[\ca \stackrel{\eta_\ca}{\to} \textsf{pt}\textsf{S}(\ca) \to \textsf{pt}(\Set^{C}) {\cong} \mathsf{Ind}(C),\]
by a combination of \cite[\text{Prop.} 4.2]{thcat} and \cite[Rem. 2.12]{thcat}, the composition is a faithful and nearly full functor preserving directed colimits from an accessible category to a finitely accessible category, and thus $\ca$ is an AEC because of Thm. \ref{AEC}.
\end{proof}
\subsection{Locally decidable topoi and AECs}
The main result of this subsection relates locally decidable topoi to AECs. The full subcategory of $\text{Acc}_\omega$ whose objects are AECs will be indicated by $\text{AECs}$. As in the previous sections, let us give the precise statement and then discuss it in better detail.
\begin{thm}\label{LDCAECs} The Scott adjunction restricts to locally decidable topoi and AECs.
\[\mathsf{S}: \text{AECs} \leftrightarrows \text{LDTopoi}: \mathsf{pt}\]
\end{thm}
\subsubsection{Locally decidable topoi}
The definition of locally decidable topos will appear obscure at first sight.
\begin{defn}[Decidable object]
An object $e$ in a topos $\ce$ is decidable if the diagonal map $e \to e \times e$ is a complemented subobject.
\end{defn}
\begin{defn}[Locally decidable topos]
An object $e$ in a topos $\ce$ is called locally decidable iff there is an epimorphism $e' \twoheadrightarrow e$ such that $e'$ is a decidable object. $\ce$ is locally decidable if every object is locally decidable.
\end{defn}
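Let us record the easiest sanity checks, both standard, before commenting on the definition.
\begin{exa}
In $\Set$ every object is decidable: the diagonal $s \subseteq s \times s$ is complemented by the set of pairs $(x,y)$ with $x \neq y$. More generally, in a boolean topos every subobject is complemented, so every object is decidable and every boolean topos is locally decidable (take the identity as the covering epimorphism).
\end{exa}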
In order to fully motivate the definition above, we should discuss decidable objects and their meaning at greater length. This is carried out in the literature, and it is not our intention to recall the whole theory of locally decidable topoi. Let us instead give the following characterization, which the reader may take as a definition.
\begin{thm}[{\citep[C5.4.4]{elephant2}}, Characterization of loc. dec. topoi] The following are equivalent:
\begin{enumerate}
\item $\ce$ is locally decidable;
\item there exists a site $(C,J)$ of presentation where every map is epic;
\item there exists a localic geometric morphism into a Boolean topos.
\end{enumerate}
\end{thm}
\begin{rem}
Recall that a localic topos $\ce$ is a topos of sheaves over a locale. The theorem above (which is due to Freyd \cite{Aspects}) shows that a locally decidable topos is still a topos of sheaves over a locale, but the locale is not in $\Set$. It is instead in some boolean topos. A boolean topos is the closest kind of topos we can think of to the category of sets itself. For more details, we redirect the reader to the Background section, where we give references to the literature.
\end{rem}
\begin{proof}[Proof of Thm. \ref{LDCAECs}]
\begin{itemize}
\item[]
\item Let $\ce$ be a locally decidable topos. By Exa. \ref{esempiobase}, it is enough to show that every map in $\mathsf{pt}(\ce)$ is a monomorphism. This is more or less a folklore result; let us give the shortest path to it given our technology. Recall that one of the possible characterizations of a locally decidable topos is that it has a localic geometric morphism into a boolean topos $\ce \to \cb$. Since $\cb$ is a boolean topos, every map in $\mathsf{pt}(\cb)$ is a monomorphism \cite[D1.2.10, last paragraph]{elephant2}. Now, the induced morphism below,
\[ \textsf{pt}(\ce) \to \textsf{pt}(\cb),\]
is faithful by \cite[Prop. 4.4]{thcat}. Thus every map in $\textsf{pt}(\ce)$ must be a monomorphism.
\item Let us show that the Scott topos of an AEC $\ca$ (indeed, of any $\ca$ in $\text{Acc}_\omega$ where every map is a monomorphism) is locally decidable. By \citep[C5.4.4]{elephant2}, it is enough to prove that $\mathsf{S}\ca$ has a site where every map is an epimorphism. Using \cite[Rem. 2.9]{thcat}, $\ca_\kappa^{\circ}$ is a site of definition of $\mathsf{S}\ca$, and since every map in $\ca$ is a monomorphism, every map in $\ca_\kappa^{\circ}$ is epic.
\end{itemize}
\end{proof}
The previous theorem admits an even sharper version.
\begin{thm} Let $\ca$ be an accessible category with directed colimits and a faithful functor $\mathsf{U}: \ca \to \Set$ preserving directed colimits. If $\mathsf{S}\ca$ is locally decidable, then every map in $\ca$ is a monomorphism.
\end{thm}
\begin{proof}
\begin{enumerate}
\item[]
\item[Step 1] If $\cg$ is a boolean topos, then every map in $\mathsf{pt}(\cg)$ is a monomorphism \cite[D1.2.10, last paragraph]{elephant2}.
\item[Step 2] Recall that one of the possible characterizations of a locally decidable topos is that it has a localic geometric morphism into a boolean topos $\mathsf{S}(\ca) \to \cg$.
\item[Step 3] In the following diagram
\[\ca \stackrel{\eta_\ca}{\to} \textsf{pt}\textsf{S}(\ca) \stackrel{}{\to} \textsf{pt}(\cg),\]
the composition is a faithful functor by \cite[Prop. 4.4 and 4.13]{thcat}. Thus $\ca$ has a faithful functor into a category where every map is a monomorphism. As a result every map in $\ca$ is a monomorphism.
\end{enumerate}
\end{proof}
\begin{rem}The following corollary gives a complete characterization of those continuous categories that are abstract elementary classes. Recall that continuous categories were defined in \cite{cont} in analogy with continuous posets in order to study exponentiable topoi. Among the possible characterizations, a category is continuous if and only if it is a reflective subcategory of a finitely accessible category whose right adjoint preserves directed colimits. We discussed continuous categories in the first section of \cite{thcat}.
\end{rem}
\begin{cor}[Continuous categories and AECs]
Let $\ca$ be a continuous category. The following are equivalent:
\begin{enumerate}
\item $\ca$ is an AEC.
\item Every map in $\ca$ is a monomorphism.
\item $\mathsf{S}(\ca)$ is locally decidable.
\end{enumerate}
\end{cor}
\begin{proof}
Since $\ca$ is a split subobject in $\text{Acc}_{\omega}$ of a finitely accessible category, the hypotheses of \citep[5.7]{aec} are met as soon as every map in $\ca$ is a monomorphism; together with the two theorems above, this yields the stated equivalences.
\end{proof}
\section{Categories of saturated objects, atomicity and categoricity} \label{logicsaturatedobjects}
\begin{rem}
In this section we define categories of saturated objects and study their connection with atomic topoi and categoricity. The connection between atomic topoi and categoricity was pointed out in \citep{Caramelloatomic}. This section corresponds to a kind of syntax-free counterpart of \citep{Caramelloatomic}. In the definition of \textit{category of saturated objects} we axiomatize the relevant properties of the inclusion $\iota: \Set_{\geq\kappa} \to \Set$ and we prove the following two theorems.
\begin{thm*}
\begin{enumerate}
\item[]
\item If $\ca$ is a category of saturated objects, then $\mathsf{S}(\ca)$ is an atomic topos.
\item If in addition $\ca$ has the joint embedding property, then $\mathsf{S}(\ca)$ is boolean and two valued.
\item If in addition $\eta_\ca$ is iso-full, faithful and surjective on objects, then $\ca$ is categorical in some presentability rank.
\end{enumerate}
\end{thm*}
\begin{thm*}
If $\ce$ is an atomic topos, then $\mathsf{pt}(\ce)$ is a \textit{candidate} category of saturated objects.
\end{thm*}
\end{rem}
Let us recall (or introduce) the notion of $\omega$-saturated object in an accessible category and the joint embedding property.
\begin{defn}
Let $\ca$ be an accessible category. We say that $s \in \ca$ is $\omega$-saturated if it is injective with respect to maps between finitely presentable objects. That is, given a morphism between finitely presentable objects $f: p \to p'$ and a map $p \to s$, there exists a lift as in the diagram below.
\begin{center}
\begin{tikzcd}
s & \\
p \arrow[u] \arrow[r] & p' \arrow[lu, dashed]
\end{tikzcd}
\end{center}
\end{defn}
\begin{rem}
In general, when we look at accessible categories from the perspective of model theory, every map in $\ca$ is a monomorphism, and this definition is implicitly adding the hypothesis that every morphism is \textit{injective}.
\end{rem}
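The following toy example may help; it is a rephrasing of the motivating example that we will recall later in this section.
\begin{exa}
Let $\ck$ be the category of sets and injective functions; it is finitely accessible and its finitely presentable objects are the finite sets. A set $s$ is $\omega$-saturated in $\ck$ precisely when it is infinite: given an inclusion of finite sets $p \subseteq p'$ and an injection $p \to s$ with $s$ infinite, there is always enough room in $s$ to extend it to an injection $p' \to s$, while a finite $s$ is not injective with respect to $s \hookrightarrow s \sqcup \{\ast\}$.
\end{exa}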
\begin{rem}
A very good paper to understand the categorical approach to saturation is \cite{Rsaturated}.
\end{rem}
\begin{defn}
Let $\ca$ be a category. We say that $\ca$ has the joint embedding property if given two objects $A,B$ there exist an object $C$ and two morphisms $A \to C$, $B \to C$.
\end{defn}
\begin{rem}
In \citep{simon}, Henry proves that there are AECs that cannot appear as the category of points of a topos, which means that they cannot be axiomatized in $\text{L}_{\infty, \omega}$. This answers a question initially asked by Rosický at the conference Category Theory 2014 and makes a step towards our understanding of the connection between accessible categories with directed colimits and axiomatizable classes.
The main tool that allows him to achieve this result is what the paper calls the \textit{Scott construction}; he proves that the Scott topos of $\Set_{\geq \kappa}$\footnote{The category of sets of cardinality at least $\kappa$ and injective functions.} is atomic. Even though we developed the relevant rudiments of the Scott construction together, the reason this result holds appeared enigmatic and mysterious to the author of this paper. With this motivation in mind, we\footnote{The author of this paper.} came to the conclusion that the Scott topos of $\Set_{\geq \kappa}$ is atomic because $\Set_{\geq \kappa}$ appears as a subcategory of saturated objects in $\Set$.
\end{rem}
\begin{rem}
As a direct corollary of the theorems in this section one gets back the main result of \citep{simon}, but this is not the main accomplishment of this section. Our main contribution is to present a conceptual understanding of \citep{simon} and a neat technical simplification of his proofs. We also improve our poor knowledge of the Scott adjunction, trying to collect and underline its main features. We feel that the Scott adjunction might serve as a tool to have a categorical understanding of Shelah's categoricity conjecture for accessible categories with directed colimits.
\end{rem}
\begin{rem}[What is categoricity and what about the categoricity conjecture?]
Recall that a category of models of some theory is categorical in some cardinality $\kappa$ if it has precisely one model of cardinality $\kappa$ up to isomorphism. Morley showed in 1965 that if a category of models of a countable first order theory is categorical in some uncountable cardinal, then it is categorical in every uncountable cardinal (\cite{chang1990model}). We will be more precise about Morley's result in the section about open problems. When abstract elementary classes were introduced in the 1970s, Shelah chose Morley's theorem as a sanity check result for his definition. Since then, many approximations of these results have appeared in the literature. The most up-to-date to our knowledge is contained in \cite{vasey2019categoricity}. We recommend the paper also as an introduction to this topic.
\end{rem}
\begin{defn}[(Candidate) categories of ($\omega$-)saturated objects] Let $\ca$ be a category in $\text{Acc}_{\omega}$. We say that $\ca$ is a category of (finitely) saturated objects if there is a topological embedding $j: \ca \to \ck$ in $\text{Acc}_{\omega}$ such that:
\begin{enumerate}
\item $\ck$ is a finitely accessible category.
\item $j\ca \subset \text{Sat}_\omega(\ck)$\footnote{The full subcategory of $\omega$-saturated objects.}.
\item $\ck_{\omega}$ has the amalgamation property\footnote{A category has the amalgamation property if every span can be completed to a square.}.
\end{enumerate}
We say that $\ca$ is a candidate category of (finitely) saturated objects if there exists a functor $j$ that verifies $(1)$-$(3)$.
\end{defn}
\begin{rem}
The notion of \textit{category of saturated objects} axiomatizes the properties of the inclusion $j:\Sat_\omega(\ck) \hookrightarrow \ck$; our motivating example was the inclusion $\Set_{\geq \kappa} \hookrightarrow \Set_{\geq \omega} \hookrightarrow \Set$. The fact that every object in $\Set_{\geq \kappa}$ is injective with respect to finite sets is essentially the axiom of choice. \citep{Rsaturated} describes a direct connection between saturation and the amalgamation property, which was also implied in \citep{Caramelloatomic}.
\end{rem}
In \citep{Caramelloatomic}, Caramello proves, essentially, that the category of points of an atomic topos is a category of saturated objects, and she observes that it is countably categorical. This shows that there is a deep connection between categoricity, saturation and atomic topoi. We recall the last notion before going on with the exposition.
\begin{defn}[Characterization of atomic topoi, {\cite[C3.5]{elephant2}}] Let $\cg$ be a Grothendieck topos; then the following are equivalent:
\begin{enumerate}
\item $\cg$ is atomic.
\item $\cg$ is the category of sheaves over an atomic site.
\item The subobject lattice of every object is a complete atomic boolean algebra.
\item Every object can be written as a disjoint union of atoms.
\end{enumerate}
\end{defn}
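A standard example to keep in mind is the following one.
\begin{exa}
For a discrete group $G$, the topos of $G$-sets is atomic: the subobjects of a $G$-set are its $G$-stable subsets, i.e.\ unions of orbits, so every subobject lattice is a complete atomic boolean algebra, and every $G$-set is the disjoint union of its orbits, which are precisely the atoms.
\end{exa}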
\begin{thm} \label{thmcategoriesofsaturated objects}
\begin{enumerate}
\item[]
\item If $\ca$ is a category of saturated objects, then $\mathsf{S}(\ca)$ is an atomic topos.
\item If in addition $\ca$ has the joint embedding property, then $\mathsf{S}(\ca)$ is boolean and two valued.
\item If in addition $\eta_\ca$ is iso-full, faithful and surjective on objects, then $\ca$ is categorical in some presentability rank.
\end{enumerate}
\end{thm}
\begin{proof}
\begin{enumerate}
\item[]
\item Let $\ca$ be a category of saturated objects, witnessed by a topological embedding $j: \ca \to \ck$. We must show that $\mathsf{S}(\ca)$ is atomic. The idea of the proof is very simple; we will show that:
\begin{itemize}
\item[(a)] $\mathsf{S}j$ presents $\ca$ as $j^*: \Set^{\ck_\omega} \leftrightarrows \mathsf{S}(\ca): j_*$;
\item[(b)] The induced topology on $\ck_\omega$ is atomic.
\end{itemize}
(a) follows directly from the definition of topological embedding and \cite[Rem. 2.13]{thcat}. (b) goes identically to \citep[Cor. 4.9]{simon}: note that for any map $k \to k' \in \ck_{\omega}$, the induced map $j^*yk \to j^*yk'$ is an epimorphism: indeed any map $k \to ja$ with $a \in \ca$ can be extended along $k \to k'$ because $j$ makes $\ca$ a category of saturated objects. So the induced topology on $\ck_{\omega}$ is the atomic topology (every non-empty sieve is a cover). The fact that $\ck_{\omega}$ has the amalgamation property is needed to make the atomic topology a proper topology.
\item Because $\ca$ has the joint embedding property, its Scott topos is connected. Indeed a topos is connected when the inverse image of the terminal map $t: \mathsf{S}(\ca) \to \Set$ is fully faithful. $t$ appears as $\mathsf{S}(\tau)$, where $\tau$ is the terminal map $\tau: \ca \to \cdot$. When $\ca$ has the JEP, and thus is connected, $\tau$ is a lax-epi, and $t^*$ is fully faithful by \cite[Prop. 4.6]{thcat}. Then, $\mathsf{S}(\ca)$ is atomic and connected. By \cite[4.2.17]{caramello2018theories} it is boolean and two-valued.
\item This follows from \cite[Prop. 4.13]{thcat} and \cite{Caramelloatomic}. In fact, Caramello has shown that $\mathsf{ptS}(\ca)$ must be countably categorical and the countable object is saturated (by construction). Thus, the unit of the Scott adjunction must reflect the (essential) unicity of such an object.
\end{enumerate}
\end{proof}
\begin{thm}
If $\ce$ is an atomic topos, then $\mathsf{pt}(\ce)$ is a \textit{candidate} category of saturated objects.
\end{thm}
\begin{proof}
Let $\ce$ be an atomic topos and $i: \ce \to \Set^C$ be a presentation of $\ce$ by an atomic site. It follows from \cite{Caramelloatomic} that $\mathsf{pt}(i)$ presents $\mathsf{pt}(\ce)$ as a candidate category of saturated objects.
\end{proof}
\subsection{Categories of $\kappa$-saturated objects}
Obviously the previous definitions can be generalized to the $\kappa$-case of the Scott adjunction, obtaining analogous results. Let us boldly state them.
\begin{defn}[(Candidate) categories of ($\kappa$-)saturated objects] Let $\ca$ be a category in $\text{Acc}_{\kappa}$. We say that $\ca$ is a category of $\kappa$-saturated objects if there is a topological embedding (for the $\mathsf{S}_\kappa$-adjunction) $j: \ca \to \ck$ in $\text{Acc}_{\kappa}$ such that:
\begin{enumerate}
\item $\ck$ is a $\kappa$-accessible category.
\item $j\ca \subset \text{Sat}_\kappa(\ck)$.
\item $\ck_{\kappa}$ has the amalgamation property.
\end{enumerate}
We say that $\ca$ is a candidate category of $\kappa$-saturated objects if there exists a functor $j$ that verifies $(1)$-$(3)$.
\end{defn}
\begin{thm} \label{kappasaturated}
\begin{enumerate}
\item[]
\item If $\ca$ is a category of $\kappa$-saturated objects, then $\mathsf{S}_\kappa(\ca)$ is an atomic $\kappa$-topos.
\item If in addition $\ca$ has the joint embedding property, then $\mathsf{S}_\kappa(\ca)$ is boolean and two valued.
\item If in addition $\eta_\ca$\footnote{the unit of the $\kappa$-Scott adjunction.} is iso-full, faithful and surjective on objects, then $\ca$ is categorical in some presentability rank.
\end{enumerate}
\end{thm}
\begin{thm}
If $\ce$ is an atomic $\kappa$-topos, then $\mathsf{pt}_\kappa(\ce)$ is a \textit{candidate} category of $\kappa$-saturated objects.
\end{thm}
\section*{Acknowledgements}
The content of the first section was substantially inspired by some private conversations with Jiří Rosický. I am grateful to Simon Henry for some very constructive discussions on Sec. 3 and 4. I am indebted to Axel Osmond for having read and commented on a preliminary draft of this paper.
\section{Introduction}
The state-of-the-art quantum control in systems such as photonics~\cite{photonics1,photonics3,photonics4}, cold atoms~\cite{luole,bryce,yan,NHSOCexp}, or trapped ions~\cite{trappedion3,chenion,trappedion4} offers unprecedented access to the rich dynamics and exotic phenomena in open quantum systems that undergo particle or energy exchange with their environment.
A non-Hermitian description applies therein, for instance, by imposing post selection~\cite{Non1,Uedareview,molmer,michael,weimer}, or by mapping the density-matrix dynamics to an enlarged Hilbert space~\cite{mastereqeff1,mastereqeff2,zhushiliang,tianyu}.
The resulting non-Hermitian physics provides an unconventional perspective of open systems, and has attracted extensive interest in recent years.
Dictated by a non-Hermitian effective Hamiltonian, exotic spectral or dynamic properties, such as the parity-time symmetry~\cite{PT1,photonics2}, enhanced sensing~\cite{sensor1,sensor2,sensor3} and topological transfer~\cite{photonics4,NHSOCexp}, non-Hermitian topology~\cite{nhtopot1,nhtopot2,nhtopot3,nhtopoe1,nhtopoe15,nhtopoe16} and so on, have been systematically studied and experimentally confirmed in a wide range of physical systems.
The recent discovery of the non-Hermitian skin effect has stimulated further research activities~\cite{nhtopot4,nhtopot5,murakami,nhse1,nhse2,nhse3,nhse4,nhse5,nhse6,nhsedy1,nhsedy2,nhsedy3}. Under the non-Hermitian skin effect, eigenstates of a system become exponentially localized at boundaries, leading to dramatic changes in the system's band and spectral topology~\cite{nhse3,nhse4}, dynamics~\cite{mastereqeff1,zhushiliang,tianyu,nhsedy1,nhsedy2,nhsedy3}, and spectral symmetry~\cite{longhipt,skinpt,chenpt}. Experimentally, the non-Hermitian skin effect and its consequences have been observed in classical and photonic systems~\cite{teskin,nhtopoe2,classical1,scienceskin}, as well as in a Bose-Einstein condensate of ultracold atoms~\cite{skinatom}.
In the last case, a dissipative Aharonov-Bohm (AB) chain was implemented in the momentum and hyperfine-spin space of the condensate atoms. As illustrated in Fig.~\ref{fig:fig1}, the AB chain consists of a series of triangular AB rings~\cite{yan}, each threaded by a synthetic magnetic flux, realized by engineering the phases of the nearest-neighbor hopping rates. Dissipation is introduced through on-site particle loss for each ring, such that dynamics of atoms that remain in the chain is driven by a non-Hermitian Hamiltonian that features non-trivial band topology. Importantly, the interplay of synthetic flux and dissipation gives rise to a non-reciprocal flow in the bulk that lies at the origin of the non-Hermitian skin effect.
In the experiment, the non-Hermitian skin effect was observed through a directional propagation of atoms along the chain, while the topological edge states were probed through an inverse spectroscopy, where atoms are injected into an empty dissipative AB chain from a bystander state.
Despite its experimental implementation, a systematic study of the topological properties of the dissipative AB chain is missing in the literature.
Further, given the intrinsic difficulty of detecting topological edge states in the presence of the non-Hermitian skin effect (as both are localized at the boundary), a greater variety of detection schemes is desirable.
\begin{figure}[tbp]
\includegraphics[width=8cm]{fig1}
\caption{Schematic illustration of a dissipative Aharonov-Bohm chain. Each unit cell consists of three sublattice sites $a$, $b$, and $c$, forming a triangular loop threaded by a flux $\phi$. The green, blue and red bonds denote hopping between adjacent sites. See main text for the definition of variables.}
\label{fig:fig1}
\end{figure}
In this work, we carry out a systematic study of the dissipative AB chain, focusing on its topology and detection. We identify an additional topological phase transition beyond the experimentally demonstrated parameter regime, and derive analytical expressions for the topological transition points using the non-Bloch band theory.
Invoking the theoretical framework of Feshbach projection~\cite{Non1,feshbach}, we derive the transfer rates of the experimentally implemented atom-injection spectroscopy, which are in good agreement with numerical simulations. In the experiment, atoms were injected into an open edge of the AB chain to detect the topological edge states. Here we show that by injecting atoms into a bulk site far away from the boundary, spectral information under the periodic boundary condition can be obtained from the transfer rate.
We then propose a dynamic detection scheme for the topological edge states.
The paper is organized as follows. In Sec.~II, we review the model Hamiltonian for the dissipative AB chain, and show that it has the non-Hermitian skin effect. In Sec.~III, we characterize its topological properties using the non-Bloch band theory. We then provide a theoretical characterization of the injection spectroscopy in Sec.~IV. In Sec.~V, we discuss the dynamic detection of topological edge states. We summarize in Sec.~VI.
\section{Model}
The non-Hermitian Hamiltonian of the dissipative AB chain illustrated in Fig.~\ref{fig:fig1} is given by~\cite{yan,skinatom}
\begin{equation}
\begin{aligned}
H=& \sum_{n=1}^{N}[J_pb^\dagger_na_n+J_sc^\dagger_nb_n+J_se^{i\phi}a^\dagger_nc_n+{H.c.}]\\&+\sum_{n=1}^{N-1} [J_ta^\dagger_{n+1}b_n+{H.c.}]+ \sum_{n=1}^{N}(\Delta-i\gamma)c_n^\dagger c_n.
\end{aligned}
\label{eq:Hrealspace}
\end{equation}
Here $a_n(a_n^\dagger)$, $b_n(b_n^\dagger)$ and $c_n(c_n^\dagger)$ are respectively the annihilation (creation) operators for the $a$, $b$ and $c$ sublattice sites of the $n$th unit cell; $J_p$, $J_s$ and $J_t$ are the nearest-neighbour hopping rates; $\Delta$ and $\gamma$ are respectively the on-site potential and the loss rate on site $c$; the phase $\phi\in [0,2\pi)$ corresponds to a synthetic magnetic flux.
\begin{figure}[tbp]
\includegraphics[width=9cm]{fig2}
\caption{Eigenspectra of a dissipative AB chain under the open boundary condition. We take $N=100$ unit cells, $J_s/J_p=2$, $\phi=\pi/2$, $\Delta/J_p=-2$, and $\gamma/J_p=1$ for numerical calculations. (a)(b) The real ($\text{Re}(E)$) and imaginary ($\text{Im}(E)$) components of eigenenergies as functions of $J_t$. The red and blue lines in (a)(b) denote topological edge states, each two-fold degenerate. The two topological transition points are $J_{t,c1}/J_p=1.56$ (associated with edge states in red) and $J_{t,c2}/J_p=3.41$ (associated with those in blue), respectively.}
\label{fig:fig2}
\end{figure}
Hamiltonian (\ref{eq:Hrealspace}) hosts topological edge states under the open boundary condition.
For finite $\gamma$ and $\phi\notin \{0,\pi\}$, all eigenstates accumulate to the boundaries under the non-Hermitian skin effect. As demonstrated in Ref.~\cite{yan}, this can be understood in the limit $\Delta,\gamma\gg J_s,J_p,J_t$, when Hamiltonian (\ref{eq:Hrealspace}) can be perturbatively reduced to a non-Hermitian Su-Schrieffer-Heeger model with asymmetric hopping. Physically, this is because the interplay of the synthetic flux and on-site loss gives rise to a non-reciprocal flow along the chain. Beyond such a limit, the dissipative AB chain is qualitatively different from a non-Hermitian Su-Schrieffer-Heeger model, particularly in its lack of chiral symmetry. Nevertheless, both the non-Hermitian skin effect and band topology persist as salient features of the dissipative AB chain.
In Fig.~\ref{fig:fig2}, we show typical eigenspectra of the model under the open boundary condition.
Two gap-closing points can be identified, particularly visible in $\text{Re}(E)$, where topological edge states emerge.
The topological transitions are robust under variations of $\phi$ and $\gamma$; their locations, however, depend sensitively on these parameters.
In the following, we denote the location of the topological phase transitions as $J_{t,c1}$ and $J_{t,c2}$, which are associated with the edge states in red and blue, respectively, in Fig.~\ref{fig:fig2}. We further denote the corresponding eigenenergies of the topological edge states as $E_{c1}$ and $E_{c2}$, respectively.
Note that the transition at $J_{t,c1}$ was experimentally probed in Ref.~\cite{skinatom}, but not the one at $J_{t,c2}$.
In Fig.~\ref{fig:fig3}, we show the spatial probability distribution of eigen wavefunctions under different boundary conditions. We choose the parameter $J_t/J_p=5$, such that two pairs of topological edge states (indicated by red and blue) exist under the open boundary condition. For finite $\gamma$ and $\phi\notin \{0,\pi\}$, all eigenstates are localized toward the boundaries, indicating the presence of the non-Hermitian skin effect. It follows that the topological edge states in Fig.~\ref{fig:fig2} can only be accounted for by non-Bloch topological invariants under the non-Bloch band theory.
\begin{figure}[tbp]
\includegraphics[width=8cm]{fig3}
\caption{Spatial distribution of the eigenstates for an AB chain with $N=100$. We take $J_t/J_p=5$, while other parameters are the same as those in Fig.~\ref{fig:fig2}. Gray: periodic boundary condition. Black: bulk eigenstates under the open boundary condition. Red and blue: degenerate topological edge states with eigenenergies $E_{c1}$ and $E_{c2}$, respectively.}
\label{fig:fig3}
\end{figure}
\section{Topology under the non-Bloch band theory}
Topological edge states of the dissipative AB chain are characterized by the non-Bloch band theory~\cite{nhtopot4,murakami}. The idea is to take into account the deformation of the bulk eigenstates under the non-Hermitian skin effect, replacing the phase factor $e^{ik}$ of the Bloch waves (under the periodic boundary condition) with a spatial mode factor $\beta(k)=|\beta(k)|e^{ik}$. Here the quasimomentum $k\in [0,2\pi)$, and the trajectory of $\beta(k)$ on the complex plane is known as the generalized Brillouin zone (GBZ), which can be calculated from the Schr\"{o}dinger equation as shown below.
In the spirit of the non-Bloch band theory, we write the non-Bloch Hamiltonian as
\begin{equation}
H(\beta)=\left(\begin{array}{ccc}
0 & J_p+J_t\beta^{-1} & J_se^{i\phi} \\
J_p+J_t\beta & 0 & J_s \\
J_se^{-i\phi} & J_s & \Delta-i\gamma
\end{array}\right).
\label{eq:hbeta}
\end{equation}
The Schr\"{o}dinger's equation in the GBZ is then $[H(\beta)-E]|\varphi^{R}_j(\beta)\rangle=0$, where
$E$ is the eigenenergy, $|\varphi^{R}_j(\beta)\rangle$ is the right eigenstate, and $j$ is the band index. Sending the determinant of the eigen equation to zero, we have
\begin{equation}
\begin{aligned}
&\left[J_t\beta+J_p+\frac{e^{-i\phi}J_s^2}{E-(\Delta-i\gamma)}\right]\left[J_t\beta^{-1}+J_p+\frac{e^{i\phi}J_s^2}{E-(\Delta-i\gamma)}\right]\\&-\left[E-\frac{J_s^2}{E-(\Delta-i\gamma)}\right]^2=0.
\end{aligned}
\label{eq:betaeq}
\end{equation}
The spatial mode functions $\beta(k)$ can be solved by requiring the two roots of Eq.~(\ref{eq:betaeq}) to have the same magnitude, with $|\beta_1|=|\beta_2|$. We then have~\cite{skinatom}
\begin{equation}
|\beta(k)|=\sqrt{\left|\frac{J_p+\frac{e^{-i\phi}J_s^2}{E-(\Delta-i\gamma)}}{J_p+\frac{e^{i\phi}J_s^2}{E-(\Delta-i\gamma)}}\right|}.
\label{eq:betamag}
\end{equation}
It is then straightforward to solve for $E$ and $|\beta(k)|$ from Eqs.~(\ref{eq:betaeq}) and (\ref{eq:betamag}) for each $k$. The resulting eigenenergy $E$ gives the eigenspectrum under an open boundary condition.
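For completeness, we sketch how Eq.~(\ref{eq:betamag}) arises. Abbreviating
\begin{equation*}
W_{1}=J_p+\frac{e^{-i\phi}J_s^2}{E-(\Delta-i\gamma)},\quad
W_{2}=J_p+\frac{e^{i\phi}J_s^2}{E-(\Delta-i\gamma)},\quad
D=E-\frac{J_s^2}{E-(\Delta-i\gamma)},
\end{equation*}
and multiplying Eq.~(\ref{eq:betaeq}) by $\beta/J_t$ (assuming $J_t\neq 0$), we obtain the quadratic
\begin{equation*}
W_{2}\beta^2+\left(J_t+\frac{W_{1}W_{2}-D^2}{J_t}\right)\beta+W_{1}=0,
\end{equation*}
whose two roots satisfy $\beta_1\beta_2=W_{1}/W_{2}$. Imposing $|\beta_1|=|\beta_2|$ then yields $|\beta(k)|=\sqrt{|W_{1}/W_{2}|}$, which is Eq.~(\ref{eq:betamag}). As consistency checks, for $\phi\in\{0,\pi\}$ we have $W_{1}=W_{2}$, while in the Hermitian limit $\gamma=0$ (where $E$ is real) we have $W_{2}=W_{1}^*$; in either case $|\beta(k)|=1$ and the conventional Bloch theory is recovered, consistent with the absence of the non-Hermitian skin effect in these limits. The factors $W_{1}$ and $W_{2}$ may also be read as the effective forward and backward intra-cell hoppings generated by eliminating the lossy $c$ site, whose asymmetry $|W_{1}|\neq|W_{2}|$ requires both finite $\gamma$ and $\phi\notin\{0,\pi\}$.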
In Fig.~\ref{fig:fig4}(a)(b)(c), we show the eigenspectra for different parameters, under both the periodic (dots) and the open boundary conditions (solid curves). Under the periodic boundary condition, the eigenenergies of each band form a closed spectral loop, consistent with the well-known spectral topology of the non-Hermitian skin effect. By contrast, under the open boundary condition, the eigenenergies collapse to open arcs within the closed loops. In Fig.~\ref{fig:fig4}(b)(c), the discrete red and blue dots outside the spectral loops correspond to the topological edge states in Fig.~\ref{fig:fig2}.
\begin{figure*}[tbp]
\includegraphics[width=15cm]{fig4}
\caption{(a)(b)(c) Eigenspectra of Hamiltonian Eq.~(\ref{eq:Hrealspace}) with $N=100$ unit cells on the complex plane. Black: eigenspectra under the periodic boundary condition. Orange, green, and purple: eigenspectra for three different bands under the open boundary condition. Red and blue triangles denote the topological edge states with eigenenergies $E_{c1}$ and $E_{c2}$, respectively.
(d)(e)(f) GBZs on the complex plane. Orange, green, and purple: GBZs for the three different bands in (a)(b)(c). Black dashed line is the unit circle, which corresponds to the conventional Brillouin zone. For (a)(d), $J_t/J_p=0.5$; for (b)(e), $J_t/J_p=2.5$; for (c)(f), $J_t/J_p=5$. Other parameters are the same as those in Fig.~\ref{fig:fig2}.}
\label{fig:fig4}
\end{figure*}
\begin{figure}[tbp]
\includegraphics[width=8cm]{fig5}
\caption{Bloch (gray) and non-Bloch (black) winding numbers. The vertical dashed lines in red and blue denote $J_{t,c1}$ and $J_{t,c2}$, respectively. All parameters are the same as those in Fig.~\ref{fig:fig2}.}
\label{fig:fig5}
\end{figure}
In Fig.~\ref{fig:fig4}(d)(e)(f), we plot the GBZs of the three bands under the parameters of Fig.~\ref{fig:fig4}(a)(b)(c), respectively. For all cases, the calculated $|\beta(k)|<1$, and the GBZs are within the unit circle. This indicates that under the open boundary condition, all eigenstates accumulate to the left boundary (toward small unit-cell index $n$).
\begin{figure*}[tbp]
\includegraphics[width=15cm]{fig6}
\caption{Transfer rate for the inverse spectroscopy, for $|f\rangle$ located at an open boundary. (a)(b)(c) The probe Hamiltonian couples the bystander state to the $b$ sublattice site of the $N$th unit cell (the right-most unit cell on the edge), with (a) $J_t/J_p=0.5$, (b) $J_t/J_p=2.5$, and (c) $\delta_{pb}=\mathrm{Re}(E_{c1})$. Here $E_{c1}$ is the energy of the topological edge state. The dashed vertical lines in (a)(b) correspond to $\mathrm{Re}(E_{c1})$, and the dashed line in (c) corresponds to $J_{t,c1}$.
(d)(e)(f) The probe Hamiltonian couples the bystander state to the $c$ sublattice site of the $N$th unit cell (the right-most unit cell on the edge), with (d) $J_t/J_p=2.5$, (e) $J_t/J_p=5$, and (f) $\delta_{pb}=\mathrm{Re}(E_{c2})$.
The dashed vertical lines in (d)(e) correspond to $\mathrm{Re}(E_{c2})$, and the dashed line in (f) corresponds to $J_{t,c2}$.
For all subplots, the black solid lines and the magenta dashed lines are respectively the theoretically predicted transfer rate using Eq.~(\ref{eq:tratio}), and the numerically simulated transfer rates from Eq.~(\ref{eq:tratioevo}). For all figures, $J_{pb}/J_p=0.01$, $\tau J_p=40\pi$. Other parameters are the same as those in Fig.~\ref{fig:fig2}.}
\label{fig:fig6}
\end{figure*}
\begin{figure}[tbp]
\includegraphics[width=9cm]{fig7}
\caption{Transfer rate for the inverse spectroscopy, as the probe Hamiltonian is coupled to the $b$ sublattice site of the $51$st unit cell, which is deep in the bulk.
(a)(b) Hermitian case with $\gamma=0$. (c)(d) Non-Hermitian case with $\gamma/J_p=1$. We also take $J_t/J_p=0.5$ in (a)(c), and $J_t/J_p=2.5$ in (b)(d), while other parameters are the same as those in Fig.~\ref{fig:fig6}. For all subplots, {$\tau J_p=40\pi$}, the black solid lines and the magenta dashed lines are respectively the theoretically predicted transfer rate using Eq.~(\ref{eq:tratio}), and the numerically simulated transfer rates from Eq.~(\ref{eq:tratioevo}). The shaded regions in gray indicate the real components of the spectra under the
periodic boundary condition.
}
\label{fig:fig7}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=9cm]{fig8}
\caption{Boundary dynamics for the detection of topological edge states.
(a)(b) Color contour for the normalized occupation distribution as a function of time. (c)(d) The normalized occupation distribution at the time $\tau J_p=6$. We take $J_t/J_p=1$ in (a)(c), and $J_t/J_p=5$ in (b)(d). Other parameters are the same as those in Fig.~\ref{fig:fig2}.}
\label{fig:fig8}
\end{figure}
We are now in a position to calculate the non-Bloch winding number, which can restore the bulk-boundary correspondence and predict the existence and number of topological edge states. Unlike the non-Hermitian Su-Schrieffer-Heeger model, the dissipative AB chain features three bands.
The non-Bloch winding number $\nu$ is {defined through the global Berry phase, which is the sum of the Berry phases} of all three bands, with
\begin{equation}
\nu=\frac{1}{2\pi}\sum_j\Theta_{j}.
\label{eq:windnum}
\end{equation}
Here the {Berry phase} of the $j$th band is given by
{\begin{equation}
\Theta_{j}=i \oint_{\mathrm{GBZ}_j} \mathrm{d}\beta\left\langle \varphi^L_{j}(\beta)\left|\partial_{\beta}\right| \varphi^R_{j}(\beta)\right\rangle,
\label{eq:berryphase}
\end{equation}}
where the right and left eigenstates of $H(\beta)$ are defined as $H(\beta)|\varphi_j^R(\beta)\rangle=E_j|\varphi_j^R(\beta)\rangle$ and $H^\dagger(\beta)|\varphi_j^L(\beta)\rangle=E_j^*|\varphi_j^L(\beta)\rangle$, respectively.
The integration in Eq.~(\ref{eq:berryphase}) is over { the GBZ of the $j$th band.}
When $\beta$ is replaced by $e^{ik}$ in Eq.~(\ref{eq:berryphase}), the non-Bloch winding number is reduced to the Bloch winding number which characterizes the band topology of the system under the periodic boundary condition.
In Fig.~\ref{fig:fig5}, we show the Bloch (gray) and non-Bloch (black) winding numbers. The non-Bloch winding number is quantized to integers, and changes its value at topological transitions that are consistent with the gap-closing points in Fig.~\ref{fig:fig2}. By contrast, the Bloch winding number can take half-integer values, and it does not indicate the topological transitions of the system under the open boundary condition. Note that half-integer winding numbers have previously been reported in the non-Hermitian, asymmetric Su-Schrieffer-Heeger model~\cite{nhse5,halfwinding}, where explicit geometric interpretations can be found based on its chiral symmetry. While chiral symmetry is absent in our model, the origin of these half-integer winding numbers, and their general relation to the non-Hermitian skin effect, are interesting open questions.
The topological transition points can be analytically determined from the gap-closing condition.
At the gap-closing point, the GBZs of two different bands intersect on the complex plane at the same eigenenergies.
It follows that Eq.~(\ref{eq:betaeq}) features a double root at the topological transition. The double-root condition is satisfied for
\begin{equation}
\begin{aligned}
J_{t,cj}=&\sqrt{\left|(J_p+\frac{e^{i\phi}J_s^2}{E_{cj}-(\Delta-i\gamma)})\cdot
(J_p+\frac{e^{-i\phi}J_s^2}{E_{cj}-(\Delta-i\gamma)})\right|},
\end{aligned}
\label{eq:jtc}
\end{equation}
with $j=1,2$, where the edge-state energies $E_{cj}$ are the two roots of $E-J_s^2/[E-(\Delta-i\gamma)]=0$, namely
\begin{align}
E_{c1}=\frac{(\Delta-i\gamma)+\sqrt{(\Delta-i\gamma)^2+4J_s^2}}{2},
\label{eq:ec1}\\
E_{c2}=\frac{(\Delta-i\gamma)-\sqrt{(\Delta-i\gamma)^2+4J_s^2}}{2}, \label{eq:ec2}
\end{align}
which correspond to the energies of the topological edge states emerging at the two transition points in Fig.~\ref{fig:fig2}.
Both $J_{t,cj}$ and $E_{cj}$ calculated from Eqs.~(\ref{eq:jtc})(\ref{eq:ec1})(\ref{eq:ec2}) are in excellent agreement with the numerically calculated eigenspectra in Fig.~\ref{fig:fig2}. Note that we take the positive branch for the square roots in Eqs.~(\ref{eq:ec1})(\ref{eq:ec2}).
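In terms of the quantities $W_{1}$, $W_{2}$ and $D$ introduced above, these expressions follow directly from the double-root condition: viewing Eq.~(\ref{eq:betaeq}) as a quadratic in $\beta$, a double root requires the discriminant to vanish,
\begin{equation*}
\left(J_t^2+W_{1}W_{2}-D^2\right)^2=4J_t^2\,W_{1}W_{2}.
\end{equation*}
The transitions quoted above correspond to the branch $D=0$, for which the condition collapses to $(J_t^2-W_{1}W_{2})^2=0$, i.e., $J_t^2=W_{1}W_{2}$, whose magnitude reproduces Eq.~(\ref{eq:jtc}). In turn, $D=0$ is equivalent to $E^2-(\Delta-i\gamma)E-J_s^2=0$, whose two roots are precisely Eqs.~(\ref{eq:ec1}) and (\ref{eq:ec2}).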
To close this section, we discuss the symmetry of Hamiltonian (\ref{eq:hbeta}).
While it does not have chiral symmetry, Hamiltonian (\ref{eq:hbeta}) is symmetric under the following transformation:
$\Gamma H^{T}(\beta) \Gamma^{-1}=H(\beta)$, where
$\Gamma=\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&e^{-i\phi} \end{array}\right)$. In the Hermitian limit with $\gamma=0$, the symmetry is reduced to $\Gamma H^{T}(k) \Gamma^{-1}=H(k)$, where $k$ is then the quasi-momentum in the conventional Brillouin zone. We have checked that such a symmetry protects the two-fold degeneracy of the topological edge states emerging from either phase transition. {Note that, while the Berry phases $\Theta_{j}$ are quantized to multiples of $\pi$ in the presence of such a symmetry, they are no longer so when the symmetry is broken. By contrast, the non-Bloch winding number is always quantized, since the global Berry phase, when integrated over the GBZ, is always an integer multiple of $2\pi$.}
In the real lattice space, the symmetry operation can be further decomposed into $\Gamma=PC_+$, where $P: a_n \rightarrow a_{N-n}, \,\, b_n \rightarrow b_{N-n}, \,\, c_n \rightarrow c_{N-n}$; $C_+: a_n \rightarrow b_{N-n}, \,\, b_n \rightarrow a_{N-n}, \,\, c_n \rightarrow e^{-i\phi}c_{N-n}$.
We identify $P$ and $C_+$ as the inversion and the non-Hermitian variant of the time-reversal operators, respectively. In particular, $C_+$ can be identified with the $\text{TRS}^\dag$ symmetry in Ref.~\cite{nhtopot2}. Physically, the combined inversion and time-reversal symmetry is understood from the observation that the dissipative AB chain remains invariant under a simultaneous reversal of the flux and the lattice, but not under either alone.
\section{Detecting topological edge states and band structure}
In the experiment~\cite{skinatom}, a momentum-resolved Bragg spectroscopy was applied to detect the topological edge states, where atoms are injected into an edge site of the AB chain. In this section, we provide a theoretical description for the atom-injection spectroscopy, and show that a similar detection scheme can be applied to probe the band structure. We then propose an alternative dynamic detection scheme for the topological edge states.
\subsection{Injection spectroscopy}
We consider coupling atoms in a bystander state $|d\rangle$ to a local site $|f\rangle$ of the dissipative AB chain. Site $|f\rangle$ can be any one of the sublattice sites $|a\rangle$, $|b\rangle$ or $|c\rangle$. It can be on the edge, as is the case in the experiment, or in the bulk, {far away} from any boundaries. The chain is originally empty, such that the scheme is similar in spirit to inverse radio-frequency spectroscopy. The probe Hamiltonian reads
\begin{equation}
H_{pb}=J_{pb}d^\dagger f+J_{pb}f^\dagger d+\delta_{pb} d^\dagger d.
\label{eq:Hpb}
\end{equation}
Here, $d$ ($d^\dagger$), $f$ ($f^\dagger$) respectively denote the annihilation (creation) operators for state $|d\rangle$ and $|f\rangle$. $J_{pb}$ is the coupling rate between $|d\rangle$ and $|f\rangle$, $\delta_{pb}$ is the detuning of the coupling frequency with respect to the transition $|d\rangle\rightarrow |f\rangle$. {Here the overall dynamics is governed by the Hamiltonian $H^\prime=H+H_{pb}$.}
Following the practice of Feshbach projection~\cite{feshbach,feshbach2,cohen}, we define the projection operators $P=|d\rangle\langle d|$ and $Q=\mathbf{I}-P$. The effective Hamiltonian in the subspace of the bystander state $|d\rangle$ is then
\begin{equation}
H_{\mathrm{eff}}(E) =H^\prime_{P P}+H^\prime_{P Q} \frac{1}{E-H^\prime_{Q Q}} H^\prime_{Q P},
\label{eq:Heffform}
\end{equation}
where
\begin{equation}
\begin{aligned}
H^\prime_{P P} =\delta_{pb}d^\dagger d,&\quad H^\prime_{P Q} =J_{pb}d^\dagger f, \\
H^\prime_{Q P} =J_{pb}f^\dagger d,&\quad H^\prime_{Q Q} =H.
\end{aligned}
\label{eq:hproj}
\end{equation}
It can be shown straightforwardly that
\begin{equation} H_{\mathrm{eff}}(E)=\left(\delta_{pb}+\sum_j\frac{J_{pb}^2\psi_{j,f}^R\psi_{j,f}^{L*}}{E-E_j}\right)d^\dagger d,
\label{eq:heffcal1}
\end{equation}
where $\psi_{j,f}^{R}=\langle f|\psi_j^R\rangle$ and $\psi_{j,f}^{L*}=\langle\psi_j^L|f\rangle$, and the right and left eigenstates of $H$ are defined as
$H|\psi_j^R\rangle=E_j|\psi_j^R\rangle, H^\dagger|\psi_j^L\rangle=E_j^*|\psi_j^L\rangle$.
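Equation~(\ref{eq:heffcal1}) follows by inserting into Eq.~(\ref{eq:Heffform}) the biorthogonal spectral resolution of $H$, assuming (as holds generically, away from exceptional points) that the eigenstates are complete and normalized as $\langle\psi_i^L|\psi_j^R\rangle=\delta_{ij}$, so that
\begin{equation*}
\frac{1}{E-H}=\sum_j\frac{|\psi_j^R\rangle\langle\psi_j^L|}{E-E_j}.
\end{equation*}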
The effective Hamiltonian Eq.~(\ref{eq:heffcal1}) is dissipative, where the dissipation is due to the atom transfer from state $|d\rangle$ to the AB chain. We define the transfer rate
\begin{equation}
T(\tau)=1-\exp[-2R(\delta_{pb})\tau],
\label{eq:tratio}
\end{equation}
which describes the probability for an atom to be transferred from $|d\rangle$ to the chain within the evolution time $\tau$. Here
\begin{equation}
R(\delta_{pb})=-\operatorname{Im}\left(\langle d|H_{\mathrm{eff}}(E)|d\rangle\right)=-\operatorname{Im} \sum_j\frac{J_{pb}^2\psi_{j,f}^R\psi_{j,f}^{L*}}{\delta_{pb}-E_j+i0^{+}}.
\label{eq:trate}
\end{equation}
The term $i0^+$ in the denominator ensures that Eq.~(\ref{eq:trate}) recovers the familiar form of Fermi's golden rule in the Hermitian case. Note that the same Eq.~(\ref{eq:trate}) can be derived using linear response theory (see Appendix).
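Concretely, in the Hermitian limit the eigenenergies $E_j$ are real and $|\psi_j^L\rangle=|\psi_j^R\rangle$, so that, using $\operatorname{Im}(x+i0^+)^{-1}=-\pi\delta(x)$, Eq.~(\ref{eq:trate}) reduces to
\begin{equation*}
R(\delta_{pb})=\pi J_{pb}^2\sum_j|\psi_{j,f}|^2\,\delta(\delta_{pb}-E_j),
\end{equation*}
which is Fermi's golden rule for the transfer $|d\rangle\to|f\rangle$. For finite $\gamma$, each delta function is essentially broadened into a Lorentzian of width $|\operatorname{Im}(E_j)|$, which underlies the broadened transfer-rate profiles discussed below.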
The transfer rate can also be evaluated through numerical simulation of the system dynamics. Initialized in the state $|d\rangle$ at the initial time $t=0$, the time-evolved state at time $\tau$ is then
$|\psi(\tau)\rangle=e^{-iH^\prime\tau}|d\rangle$. {Note that the non-normalized nature of $|\psi(\tau)\rangle$ corresponds to loss of atoms from the dissipative AB chain.}
The transfer rate can be expressed as
\begin{equation}
T_{evo}(\tau)=1- |\langle d|e^{-iH^\prime \tau}|d\rangle|^2.
\label{eq:tratioevo}
\end{equation}
Due to the perturbative nature of Eq.~(\ref{eq:tratio}), we expect that the transfer rates calculated from Eq.~(\ref{eq:tratio}) and Eq.~(\ref{eq:tratioevo}) are very close to each other, provided that $J_{pb}/J_p\ll 1$ and $J_{pb}^2\tau\ll 1\ll J_p\tau$. This is indeed the case as we show below.
For the detection of the topological edge states, we follow the practice of Ref.~\cite{skinatom}, and consider $|f\rangle$ to be on the edge of the AB chain. The calculated transfer rates are plotted in Fig.~\ref{fig:fig6}. Results from Eq.~(\ref{eq:tratio}) and Eq.~(\ref{eq:tratioevo}) agree well with one another, and signals of the edge states are visible near the appropriate detuning $\delta_{pb}$. Specifically, in Fig.~\ref{fig:fig6}(a)(b), we aim to detect the edge state with energy $E_{c1}$. Since the state has a large support on sublattice site $b$, we set $|f\rangle$ on site $b$ of the right-most unit cell on the edge. When $J_{t}<J_{t,c1}$, $T(\tau)$ exhibits a valley at $\delta_{pb}=\mathrm{Re}(E_{c1})$ [Fig.~\ref{fig:fig6}(a)], indicating the presence of a band gap. When $J_{t}>J_{t,c1}$, a peak emerges at $\delta_{pb}=\mathrm{Re}(E_{c1})$ [Fig.~\ref{fig:fig6}(b)], suggesting the presence of an in-gap edge state.
In Fig.~\ref{fig:fig6}(c), we show the transfer rate at $\delta_{pb}=\mathrm{Re}(E_{c1})$ as a function
of $J_t$, which clearly indicates a phase transition near $J_{t,c1}$.
Similarly, in Fig.~\ref{fig:fig6}(d)(e), we aim to probe the edge state with energy $E_{c2}$.
Since the edge states now have a large support on sublattice site $c$, we set $|f\rangle$ on site $c$ of the right-most unit cell.
We find that a peak appears near $\delta_{pb}=\mathrm{Re}(E_{c2})$ in the transfer rate when $J_{t}>J_{t,c2}$ [Fig.~\ref{fig:fig6}(d)(e)], consistent with the emergence of the topological edge states. Likewise,
the topological phase transition is clearly visible near $J_t=J_{t,c2}$ in Fig.~\ref{fig:fig6}(f).
Compared to the Bragg spectroscopy implemented in Ref.~\cite{skinatom}, for the numerical simulations here we consider a weaker probe ($J_{pb}\sim h\times 12$~Hz, $h$ being the Planck constant) and a longer probe time ($\tau\sim 16$~ms). Such optimization leads to a more faithful detection with a better resolution.
Alternatively, we can couple the bystander state to a sublattice site in the bulk to reveal the global spectral features under the periodic boundary condition. The results are plotted in Fig.~\ref{fig:fig7}. In the Hermitian case with $\gamma=0$ [Fig.~\ref{fig:fig7}(a)(b)], the transfer rate shows sharp edges at the band edges, revealing both the band continuum and the band gaps. For the dissipative AB chain with finite $\gamma$ [Fig.~\ref{fig:fig7}(c)(d)], the transfer-rate profiles are broadened due to the imaginary components of the eigenspectra. Nevertheless, the band gaps are still visible as valleys in the profile.
In relation to the experiment in Ref.~\cite{skinatom}, injecting atoms into the bulk offers a complementary detection scheme for the topological phase transition, by observing the closing of the band gaps.
\subsection{Dynamic detection of edge states}
Topological edge states can also be detected through dynamics close to the boundary.
Under the non-Hermitian skin effect, eigenstates of the AB chain accumulate to one of the edges.
The idea is to initialize the state near the opposite edge (i.e., the edge opposite to the one toward which the skin effect drives the population), and observe the time-dependent population along the chain. While the non-Hermitian skin effect would drive the population toward the other edge, the topological edge states should keep a portion of the population near the initial site. To quantitatively characterize this phenomenon, we define the normalized occupation
\begin{equation}
|\psi^\prime(\tau,n)|^2=\sum_{j=a,b,c}\frac{|\langle n,j|\psi(\tau)\rangle|^2}{\langle\psi(\tau)|\psi(\tau)\rangle},
\label{eq:norocu}
\end{equation}
which indicates the spatial distribution of the state at the time $\tau$.
In Fig.~\ref{fig:fig8}, we show the time evolution of the probability in the topologically trivial [Fig.~\ref{fig:fig8}(a)(c)] and non-trivial regimes [Fig.~\ref{fig:fig8}(b)(d)].
In the topologically trivial regime, the time-evolved state diffuses into the bulk without much occupation at the boundary. By contrast, in the topologically non-trivial regime, the time-evolved state still exhibits
a peak at the boundary, together with the diffusive dynamics into the bulk.
Note that the normalized occupation at the boundary decreases with time because there are bulk eigenstates that decay slower than the edge states. The detection scheme therefore works only at intermediate times. Nevertheless, such a dynamic detection is readily accessible in experiments, and provides a direct signal of the non-Hermitian skin effect.
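This can be made quantitative with a standard estimate: at late times the evolved state is dominated by the eigenstate $j^*$ with the largest $\operatorname{Im}(E_j)$, i.e., $|\psi(\tau)\rangle\simeq e^{-iE_{j^*}\tau}|\psi_{j^*}^R\rangle\langle\psi_{j^*}^L|\psi(0)\rangle$, so that the normalized occupation in Eq.~(\ref{eq:norocu}) relaxes to the spatial profile of $|\psi_{j^*}^R\rangle$. The edge-state signal is therefore visible only on timescales short compared with the inverse of the decay-rate difference $\operatorname{Im}(E_{j^*})-\operatorname{Im}(E_{\text{edge}})$.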
\section{Conclusion}
In conclusion, we have characterized the topological features of a dissipative AB chain in detail, and provided a theoretical description for the recently implemented atom-injection spectroscopy. We show that the injection spectroscopy can be applied to the bulk sites to resolve the band structure of the system under the periodic boundary condition. We further propose an alternative detection scheme for the topological edge states. Our results should be helpful for future experimental studies of the dissipative AB chain in relevant quantum simulation systems.
\begin{acknowledgments}
This work has been supported by the Natural Science Foundation of China (Grant No. 11974331) and the National Key R\&D Program (Grant Nos. 2016YFA0301700 and 2017YFA0304100).
\end{acknowledgments}
\section{Appendix of Section~\ref{section:properties}}
\label{app:proofs}
This section provides the detailed proofs of the main results.
\begin{definition}[Dictionary map]
To simplify definitions we sometimes use a functional notation ($\map[]{-}$)
for the dictionary-passing translation, defined as
\begin{align*}
\map{e} & = \lex e~\text{\it where }~\dict{e}{\lex{e}} \\
\map[]{P} & = \lex P~\text{\it where }~\dict[]{P}{\lex{P}} \\
\map{\Gamma} & = \dom{\Gamma} : \texttt{Any}, \multi{\texttt{dict} : \dictName{\Delta(\eta^{-1}(\texttt{dict}))}}
\end{align*}
While $\map{\Gamma}$ may seem complex at first glance, it simply
retypes all variables already in $\Gamma$ at \texttt{Any} while adding
the appropriate dictionary variables. This is done by finding
the type parameter name for $\texttt{dict}_i$ via $\eta^{-1}$
and then looking up the bound of that type parameter in $\Delta$;
this bound determines the type of the $\texttt{dict}_i$ variable.
Note that $\eta$ is bijective, since we assume all type parameter names and
dictionary variable names are unique; as such, $\eta^{-1}$ exists.
\end{definition}
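To fix intuition, the following minimal Go sketch illustrates the general dictionary-passing pattern that the formal definitions above capture. It is illustrative only: the names (\texttt{Dict\_Shape}, \texttt{Describe}, \ldots) are ours, and the generated code is simplified with respect to the actual translation (in particular, the runtime type metadata used for assertions is omitted).
\begin{verbatim}
package main

import "fmt"

// The bound of a type parameter.
type Shape interface{ Area() float64 }

// Plays the role of a dictionary type: one method abstractor per
// method of the bound (type metadata omitted).
type Dict_Shape struct {
	Area func(recv any) any
}

type Square struct{ Side float64 }

func (s Square) Area() float64 { return s.Side * s.Side }

// Source:      func Describe[T Shape](x T) float64 { return x.Area() }
// Translation: the type parameter is erased to `any` and a dictionary
// is passed explicitly; x.Area() becomes dict.Area(x).
func Describe(dict Dict_Shape, x any) any {
	return dict.Area(x)
}

func main() {
	// makeDict(Square, Shape): wrap the concrete method.
	d := Dict_Shape{Area: func(recv any) any {
		return recv.(Square).Area()
	}}
	fmt.Println(Describe(d, Square{Side: 3})) // prints 9
}
\end{verbatim}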
\lemprec*
\begin{proof}
\begin{enumerate}
\item This case is shown by the deterministic nature of $\longrightarrow$ and the fact
that $\Longrightarrow \subseteq \longrightarrow^+$.
\item This case is immediate for most $\ensuremath{\rho_{\text{dict}}}$ cases. Only the
$\method{mName}{t,m}\sytxBrace{}.\texttt{Apply}(\multi{e})$ case poses any
difficulty. Once we realise, however, that each $e_i$ is
used linearly in the method $\method{mName}{t,m}\sytxBrace{}.\texttt{Apply}$,
as created by $\method{meth\_ptr}{}$,
it becomes clear that this case does not interact with any
reductions in $\multi{e}$.
\end{enumerate}
\end{proof}
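To illustrate the linearity just used, a generated abstractor has roughly the following shape (again an illustrative Go sketch in the style of the example above, not the literal output of the translation): each parameter of \texttt{Apply} occurs exactly once in the body, so a reduction inside an argument can neither be duplicated nor discarded by the call.
\begin{verbatim}
// Hypothetical wrapper, in the spirit of meth_ptr, for method
// Area of Square.
type mName_Square_Area struct{}

func (mName_Square_Area) Apply(recv any) any {
	// recv is used exactly once: arguments are consumed linearly.
	return recv.(Square).Area()
}
\end{verbatim}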
\begin{restatable}[Type specialisation is resolved by $\dictred$ ]
{lemrest}{lempreposttype}
\label{lem:preposttype}
Let $\Delta = \multi{\alpha : \iType{\tau}}$,
expression $e$ be of type
$\wellTyped[\Delta; \Gamma]{e}{\tau}$,
and $\multi{\sigma}$ be a type actual such that $\subtypeMulti[\Delta]{\sigma}{\iType{\tau}}$.
Let the map $\eta = \{\multi{\alpha \mapsto \texttt{dict}}\}$.
Then
$\map{e}[
\multi{\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}}
] \dictred^*
\map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}$
\end{restatable}
\begin{proof}
By induction on $e$, we apply the substitution of each $\alpha_i$ in turn.
Note that the dictionary $\eta(\alpha_i)$ is either of the form $v.\texttt{dict}_i$, when
$\alpha_i$ comes from the receiver, or $\texttt{dict}_i$, when it is a method parameter.
We can limit our considerations to the latter case: if the former holds,
we can transform it to the latter, as
$\ensuremath{\rho_{\text{dict}}}$ resolves the desired receiver field access.
\begin{itemize}
\item[] \caseof{d-dictcall}\\
We assume that $\wellTyped[\alpha:\iType{\tau}; \Gamma]{e}{\alpha}$
such that the translation of
$e.m\typeActualMethod(\multi{e})$ is
\[\dict{e.m\typeActualMethod(\multi{e})}
{\eta(\alpha_i).m.\texttt{Apply}(\map{e}, \map{\psi}, \multi{\map{e}})}\]
As noted above, $\eta(\alpha_i)$ is either of the form $v.\texttt{dict}_i$ or $\texttt{dict}_i$, and we may assume the latter.
Let $\sigma = u[\phi]$ and $\iType{\tau} = \iType{t}[\phi']$. Then,
applying the substitution $\theta = [\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]$,
we produce the term
\[\makeDict[\emptyset; \emptyset]{u[\phi], \iType{t}[\phi']}.m.\texttt{Apply}(\map{e}[\theta], \map{\psi}[\theta], \multi{\map{e_2}[\theta]})\]
where method $m$ is in the method abstractors of the given dictionary, such that
\[\makeDict[\emptyset; \emptyset]{u[\phi], \iType{t}[\phi']} = \dictName{\iType{t}}\{\multi{v}, \method{mName}{u,m}, \multi{v}\}\]
$\ensuremath{\rho_{\text{dict}}}$ resolves as follows
\begin{flalign*}
\qquad \dictName{\iType{t}}\{\cdots\}.&m.\texttt{Apply}(\map{e}[\theta], \map{\psi}[\theta], \multi{\map{e}[\theta]}) &\\
& \dictred \method{mName}{u,m}.\texttt{Apply}(\map{e}[\theta], \map{\psi}[\theta], \multi{\map{e}[\theta]}) &\\
& \dictred \map{e}[\theta].(u).m(\map{\psi}[\theta], \multi{\map{e}[\theta]})
\end{flalign*}
By the induction hypothesis
\begin{flalign*}
\qquad \map{e}[\theta].(u)&.m(\map{\psi}[\theta], \multi{\map{e}[\theta]}) \dictred^* \\
& \map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}.(u).m(\map[\emptyset; \emptyset; \Gamma]{\psi[\multi{\alpha \mathbin{:=} \sigma}]}, \multi{\map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}})
\end{flalign*}
Note that we can only apply induction on the arguments to $m$
because $\ensuremath{\rho_{\text{dict}}}$ is defined on the pre-congruence evaluation
context $C$. We explicitly do not use $\ensuremath{\rho_{\text{sim}}}$
as part of this induction.
To resolve $\map[\emptyset; \emptyset; \Gamma]{e.m\typeActualMethod(\multi{e})[\multi{\alpha \mathbin{:=} \sigma}]}$
we first need to observe that the type substitution specifies all type variables,
by our assumptions. This means that the dictionary-passing translation uses
the homomorphic rule \rulename{d-call}. We also know that the type of $e$ ($\alpha$)
is mapped to $\sigma$ ($u[\phi]$).
\begin{flalign*}
\qquad & \map[\emptyset; \emptyset; \Gamma]{e.m\typeActualMethod(\multi{e})[\multi{\alpha \mathbin{:=} \sigma}]} &\\
&\qquad = \map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}].m[\psi[\multi{\alpha \mathbin{:=} \sigma}]](\multi{e[\multi{\alpha \mathbin{:=} \sigma}]})} &\\
&\qquad = \map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}.(u).m(\map[\emptyset; \emptyset; \Gamma]{\psi[\multi{\alpha \mathbin{:=} \sigma}]}, \multi{\map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}})
\end{flalign*}
\item[] \caseof{d-assert} $\wellTyped[\Delta, \Gamma]{e.(\alpha)}{\alpha}$\\
We start by considering the $e.(\alpha)[\alpha \mathbin{:=} \sigma]$ side.
\begin{flalign*}
\qquad & \map[\emptyset; \emptyset; \Gamma]{e.(\alpha)[\alpha \mathbin{:=} \sigma]}
= \map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma].(\sigma)}
= \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]}) &
\end{flalign*}
We now look at the lhs. Let $\zeta = (-.\texttt{\_type}) \circ \eta$
\begin{flalign*}
\qquad \map{e.(\alpha)} & = \typemeta{\alpha}.\texttt{tryCast}(\map{e}) &\\
& = \zeta(\alpha).\texttt{tryCast}(\map{e}) \\
& = (\eta(\alpha).\texttt{\_type}).\texttt{tryCast}(\map{e})\\
& = \texttt{dict}.\texttt{\_type}.\texttt{tryCast}(\map{e})
\end{flalign*}
If $\iType{\tau} = \iType{t}[\phi]$ then we have that
$\makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}} =
\dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta{\sigma}}$.
We also know that $\sigma$ is well typed in the empty environment ($\subtype[]{\sigma}{\iType{\tau}}$)
so $\typemeta{\sigma} = \typemeta[\emptyset]{\sigma}$.
With the previously derived $\map{e.(\alpha)} = \texttt{dict}.\texttt{\_type}.\texttt{tryCast}(\map{e})$
{\small
\begin{flalign*}
\quad \!& \texttt{dict}.\texttt{\_type}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}] &\\
& \quad = \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}.\texttt{\_type}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}])\\
& \quad \dictred \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}])
\end{flalign*}
}
We can now apply the induction hypothesis
\begin{flalign*}
\qquad & \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}]) &\\
& \quad \dictred^* \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})
\end{flalign*}
\item[] \caseof{d-assert} $\wellTyped[\Delta, \Gamma]{e.(\tau)}{\tau}$\\
If $\alpha \not\in \fv{\tau}$ then this case is immediate by induction.
If we instead assume that $\alpha \in \fv{\tau}$, then,
with $\zeta = (-.\texttt{\_type}) \circ \eta$,
\begin{flalign*}
\qquad & \map{e.(\tau)}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}] &\\
& \quad = \typemeta{\tau}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]
\end{flalign*}
We further assume that $\tau = t[\alpha, \multi{\sigma}]$. While $\alpha$ may
occur nested inside another type instantiation ($t[u[\alpha]]$),
this case does not significantly alter the proof, so we assume $\tau$
is as given. The same applies if $\alpha$ is used more than
once in $\tau$ (we assume $\alpha \not\in \fv{\multi{\sigma}}$).
Computing $\typemeta{\tau}$ we get $\metadataName{t}\sytxBrace{\zeta(\alpha), \multi{\typemeta{\sigma}}}$,\\
with $\zeta(\alpha) = \texttt{dict}.\texttt{\_type}$.
{\small
\begin{flalign*}
\qquad & = \typemeta{\tau}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]&\\
& \quad = \metadataName{t}\sytxBrace{\texttt{dict}.\texttt{\_type}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]\\
\end{flalign*}
}
Furthermore we know $\makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}} = \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta{\sigma}}$,
and so the above substitution becomes
{\small
\begin{flalign*}
\qquad & = \metadataName{t}\sytxBrace{\texttt{dict}.\texttt{\_type}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]&\\
& = \metadataName{t}\sytxBrace{\dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta{\sigma}} \\
& \qquad \qquad .\texttt{\_type}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}])\\
& \dictred \metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}])\\
\end{flalign*}
}
Applying induction
{\small
\begin{flalign*}
\qquad &\metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}])&\\
&\quad \dictred^* \metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})
\end{flalign*}
}
We now look to $\map[\emptyset; \emptyset; \Gamma]{e.(\tau)[\alpha \mathbin{:=} \sigma]}$.
We again assume that $\tau = t[\alpha, \multi{\sigma}]$ with $\alpha \not\in \fv{\multi{\sigma}}$
\begin{flalign*}
\qquad &\map[\emptyset; \emptyset; \Gamma]{e.(\tau)[\alpha \mathbin{:=} \sigma]} &\\
& \quad = \map[\emptyset; \emptyset; \Gamma]{e.(t[\alpha, \multi{\sigma}])[\alpha \mathbin{:=} \sigma]}\\
& \quad = \map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma].(t[\sigma, \multi{\sigma}])}\\
& \quad = \typemeta[\emptyset]{t[\sigma, \multi{\sigma}]}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})\\
& \quad = \metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})
\end{flalign*}
\end{itemize}
\end{proof}
\begin{restatable}[Subtype preservation]{lemrest}{lemsubtypepres}
\label{lem:sub:pres}
Let $\dict[]{P}{\lex{P}}$.
If $\subtype[\emptyset]{u[\psi]}{t\typeActualReceive}$ in
$P$ then
$u<:t$ in $\lex{P}$.
\end{restatable}
\begin{proof}
By case analysis on $\subtype[\emptyset]{u[\psi]}{t\typeActualReceive}$.
\end{proof}
\begin{restatable}[Value substitution is compositional up to $\dictred$ ]
{lemrest}{lemprepostval}
\label{lem:prepostval}
Let $\Gamma = \multi{x : \tau}$,
expression $e$ be of type
$\wellTyped[\emptyset; \Gamma]{e}{\tau'}$,
and expressions $\multi{v}$ be typed by
$\wellTypedMulti[\emptyset; \emptyset]{v}{\sType{\sigma}}$
such that
$\subtypeMulti[\emptyset]{\sType{\sigma}}{\tau}$.
We have that
$\map[\emptyset, \emptyset, \Gamma]{e}[
\multi{x\mathbin{:=} \map[\emptyset, \emptyset, \emptyset]{v}}
] \dictred^*
\map[\emptyset; \emptyset; \emptyset]{e[\multi{x\mathbin{:=} v}]}$
\end{restatable}
\begin{proof}
By induction on the translation rule used,
we apply the substitution of each $x_i$ in turn.
\begin{itemize}
\item[] \caseof{d-call}
\[
\namedRule{d-call}{
\infer{
\dict[\emptyset; \emptyset; \Gamma]{
e.m\typeActualMethod(\multi{e})
}{
\map[\emptyset;\emptyset;\Gamma]{e}.(t).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
}
}{
\wellTyped[\emptyset; \Gamma]{e}{t\typeActualReceive}
& \lex{\psi} = \makeDict[\emptyset; \emptyset]{\psi, \Psi}
& (m[\Psi](\multi{x~\tau})~\tau) \in \methods[\Delta](t\typeActualReceive)
}
}
\]
By the substitution lemma \cite[Lemma~4.2]{griesemer2020featherweight}
we have
\begin{pfsteps*}
\pf[1]{\wellTyped[\emptyset; \emptyset]{e[\multi{x\mathbin{:=} v}]}{u[\psi']}}
\pf[2]{\subtype[\emptyset]{u[\psi']}{t\typeActualReceive}}
\end{pfsteps*}
and
\begin{pfsteps*}
\pf[3]{\dict[\emptyset; \emptyset; \emptyset]{
e[\multi{x\mathbin{:=} v}].m\typeActualMethod(\multi{e[\multi{x\mathbin{:=} v}]})
}{
\map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}.(u).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}})
}}
\end{pfsteps*}
By Lemma~\ref{lem:sub:pres} applied to \pfref{2}, we have that $u<:t$.
We now have
\begin{pfsteps*}
\pf[4]{\map[\emptyset;\emptyset;\Gamma]{e}.(t).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
\dictred
\map[\emptyset;\emptyset;\Gamma]{e}.(u).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
}
\end{pfsteps*}
and by the induction hypothesis we have
\begin{pfsteps*}
\pf[5]{
\map[\emptyset;\emptyset;\Gamma]{e}.(u).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
\dictred^*
\map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}.(t).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}})
}
\end{pfsteps*}
\end{itemize}
\end{proof}
\begin{lemma}[Method specification simulation preserves substitution]
\label{lem:methspec}
Let $\wellFormedMulti[\alpha:\iType{\tau}]{M}$, and assume $\multi{\sigma}$ such that
$\subtypeMulti[\emptyset]{\sigma}{\iType{\tau}}$.
We also assume $\texttt{this} = \metadataName{\iType{t}}\sytxBrace{\multi{\typemeta[\emptyset]{\sigma}}}$.
For $\zeta = \{\multi{\alpha \mapsto \texttt{this}.\texttt{\_type}}\}$ it
holds that $\signatureMeta[\zeta]{M}~\red_{\text{s}}^*~\signatureMeta[\emptyset]{M[\alpha \mathbin{:=} \sigma]}$.
\end{lemma}
\begin{proof}
Since each $\alpha_i$ is distinct we can consider each separately.
We begin by noting that $\method{arity}{M[\alpha_i \mathbin{:=} \sigma_i]} = \method{arity}{M} = n$.
We also define a suitable $\zeta'$ for the $\texttt{param\_index}\{\}$ map,
such that $\alpha_i \not \in \dom{\zeta'}$.
\begin{flalign*}
\qquad & \signatureMeta[\emptyset]{M[\alpha_i \mathbin{:=} \sigma_i]}&\\
& \quad = \fnMeta{n}\sytxBrace{
\multi{\typemeta[\zeta']{\tau[\alpha_i \mathbin{:=} \sigma_i]}}
}\\
& \signatureMeta[\zeta]{M} \\
&\quad = \fnMeta{n}\sytxBrace{
\multi{\typemeta[\zeta', \zeta]{\tau}}
}
\end{flalign*}
Where $\multi{\tau} = \multi{\tau_0}, \multi{\tau_1}, \tau_2$ for
$M = [\multi{\beta ~ \tau_0}](\multi{x~\tau_1})\tau_2$.
It now suffices to show that, for all $\tau$, $\typemeta[\zeta', \zeta]{\tau} \red_{\text{s}}^*
\typemeta[\zeta']{\tau[\alpha_i \mathbin{:=} \sigma_i]}$. This is done by induction on $\tau$.
\begin{itemize}
\item[] \resetpfcounter\textbf{Case : } $\tau = \alpha_i$\\
The term $\typemeta[\zeta']{\alpha_i[\alpha_i \mathbin{:=} \sigma_i]}$ becomes $\typemeta[\zeta']{\sigma_i}$
which is equal to $\typemeta[\emptyset]{\sigma_i}$ since $\sigma_i$ is defined
outside the scope of $\zeta'$.
The other term $\typemeta[\zeta', \zeta]{\alpha_i}$ is equal to
$(\zeta', \zeta)(\alpha_i)$ which by the definition of $\zeta$ is\\
$\metadataName{\iType{t}}\sytxBrace{\multi{\typemeta[\emptyset]{\sigma_i}}}.\texttt{\_type}_i \red_{\text{s}} \typemeta[\emptyset]{\sigma_i}$.
\item[] \resetpfcounter\textbf{Case : } $\tau = \beta$ where $\beta \mathbin{!=} \alpha_i$\\
Both terms are immediately equal to $\zeta'(\beta)$.
\item[] \resetpfcounter\textbf{Case : } $\tau = t[\multi{\tau}]$\\
By induction on each $\tau_i$.
\end{itemize}
\end{proof}
\begin{lemma}
\label{lem:typeformalsmatchasparams}
If $\Phi \mathbin{:=}_\Delta \phi$ with $\eta$ such that
$\dom{\eta} = \dom{\Delta}$ then
$\vtype(\makeDict{\phi, \Phi}) <: \method{asParam}{\Phi}$.
\end{lemma}
\begin{proof}
Immediate from the definitions of $\makeDict[]{}$ and
$\method{asParam}{}$.
\end{proof}
\lemtypepres*
\begin{proof}
By induction on the type of $e$.
\begin{itemize}
\item[] \caseof{t-field}
\begin{pfsteps*}
\pf[1]{\wellTyped[\Delta; \Gamma]{e.f}{\tau}}
\pf[2]{\dict{e.f}{\lex{e}.(\sType{t}).f}}
\end{pfsteps*}
For \pfref{1} to hold $e$ must be of type $\sType{t}\typeActualReceive$.
By the induction hypothesis, either $\wellTyped[\map{\Gamma}]{\map{e}}{\texttt{Any}}$
or $\wellTyped[\map{\Gamma}]{\map{e}}{\sType{t}}$.
In either case $\map{e}.(\sType{t})$ is well typed by \rulename{t-assert$_S$}
or \rulename{t-stupid} (\emph{resp.}). Since $f$ is a field of
$\sType{t}\typeActualReceive$, it must also be a field of $\sType{t}$.
We get the final typing judgement $\wellTyped[\map{\Gamma}]{\map{e}.(\sType{t}).f}{\texttt{Any}}$.
\item[] \caseof{t-var}
Immediate by our definition of $\map{\Gamma}$.
\item[] \caseof{t-literal}
\begin{pfsteps*}
\pf[1]{\wellTyped[]{\sType{t}\typeActualReceive\sytxBrace{\multi{e}}}{\sType{t}\typeActualReceive}}
\pf[2]{\dict{\sType{t}\typeActualReceive\sytxBrace{\multi{e}}}{
\sType{t}\sytxBrace{\multi{\lex{e}}, \lex{\phi}}}}
\pf[3]{\lex{\phi} = \makeDict{\phi, \Phi}}
\pfstep{\inversion{t-literal}}
{4}{\type \sType{t}\typeFormalType \struct{x~\tau}}
\pfstep{\rulename{d-struct}}
{5}{\dict[]{\type \sType{t}\typeFormalType \struct{x~\tau}}{
\type \sType{t} \struct{x~\texttt{Any}, \method{asParam}{\Phi}}
}}
\end{pfsteps*}
Each $\lex{e_i}$ implements the $\texttt{Any}$ type, while by
Lemma~\ref{lem:typeformalsmatchasparams}, $\lex{\phi}$
implements $\method{asParam}{\Phi}$. As such,
\begin{pfsteps*}
\pf[6]{\wellTyped{\sType{t}\sytxBrace{\multi{\lex{e}}, \lex{\phi}}}{\sType{t}}}
\end{pfsteps*}
\item[] \caseof{t-call}
\begin{pfsteps*}
\pf[1]{\wellTyped{e.m\typeActualMethod(\multi{e})}{\tau}}
\pfSubcase{\wellTyped{e}{\alpha}}
\pf[2]{\dict{
e.m\typeActualMethod(\multi{e})
}{
\eta(\alpha).m.\texttt{Apply}(\lex{e}, \lex\psi, \multi{\lex{e}})
}}
\end{pfsteps*}
By \pfref{1}, the bound of $\alpha$
($\iType{t}\typeActualReceive = \Delta(\alpha)$)
contains the method $m$; since the type of the
dictionary $\eta(\alpha)$ is $\dictName{\iType{t}}$,
we know that $\dictName{\iType{t}}$ has a field $m$.
We further know that the field $m$ has type
$\nAryFunction{n}$, where $n = |\psi| + |\multi{e}|$.
Because all arguments to the $\texttt{Apply}$ method are of type
$\texttt{Any}$, the rhs of \pfref{2} is well typed.
\pfSubcase{\wellTyped{e}{t\typeActualReceive}}
\begin{pfsteps*}
\pf[3]{\dict{
e.m\typeActualMethod(\multi{e})
}{
\lex{e}.m(\lex\psi, \multi{\lex{e}})
}}
\end{pfsteps*}
We can combine the cases where $t$ is a structure
or an interface since \rulename{d-meth} and
\rulename{d-spec} both do the same thing.
If $m\typeFormalMethod(\multi{x~\tau})~\tau \in
\methods_{\Delta}(t\typeActualReceive)$ then the translation
produces $m(\lex\Psi, \multi{x~\texttt{Any}})~\texttt{Any} \in
\methods(t)$.
\end{itemize}
\end{proof}
\thmopcorrespond*
\begin{proof}
By induction over the assumed reduction.
\begin{itemize}
\item[] \caseof{r-fields} --- (a) direction
\begin{pfsteps*}
\pf[1]{\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.f_i \longrightarrow v_i}
\pf[2]{\dict[\emptyset; \emptyset; \emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.f_i
}{
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset; \emptyset]{\phi, \Phi}}.f_i
}}
\end{pfsteps*}
Inversion on \rulename{r-fields} \pfref{1} and the definition of
$\fields$ gives us
\begin{pfsteps*}
\pf[3]{(\multi{f~\tau})[\eta] = \fields(\sType{t}\typeActualReceive)}
\pf[4]{\type \sType{t}\typeFormalType \struct{\multi{f~\tau}}\in \multi{D}}
\end{pfsteps*}
Applying the dictionary translation rule \rulename{d-struct} to \pfref{4} we get
\begin{pfsteps*}
\pf[5]{\type \sType{t} \struct{\multi{f~\texttt{Any}}, \multi{\texttt{dict}~u}} \in \multi{\lex{D}}}
\pf[6]{(\multi{f~\texttt{Any}}, \multi{\texttt{dict}~u}) = \fields(\sType{t})}
\pfstep{\rulename{r-fields} \pfref{6, 2}}
{7}{\reduction{
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset; \emptyset]{\phi, \Phi}}.f_i
}{
\lex{v_i}
}}
\pfstep{\inversion{d-value} \pfref{2}}
{8}{
\dict[\emptyset;\emptyset;\emptyset]{v_i}{\lex{v_i}}
}
\end{pfsteps*}
\item[] \caseof{r-fields} --- The (b) direction is mostly the same as
the (a) direction, since there are no $\dictred$ reductions available to
$\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset; \emptyset]{\phi, \Phi}}.f_i$,
as both $\multi{\lex{v}}$ and $\makeDict[\emptyset;\emptyset]{\phi,\Phi}$ are values.
\item[] \caseof{r-call} --- (a) direction \\
We begin by stating our assumptions explicitly
\begin{pfsteps*}
\pf[1]{\reduction{
v.m\typeActualMethod(\multi{v})
}{
e[\theta][\texttt{this}\mathbin{:=} v, \multi{x\mathbin{:=} v}]
}
}
\pf[2]{
\wellTyped[\emptyset; \emptyset]{v.m\typeActualMethod(\multi{v})}{\tau[\theta]}
}
\end{pfsteps*}
with $v$ of the form
\begin{pfsteps*}
\pf[vform]{v = \sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}}
\pf[7]{\wellTyped[\emptyset; \emptyset]{v}{\sType{t}\typeActualReceive}}
\end{pfsteps*}
By analysing the proof tree of \pfref{1} using inversion on \rulename{r-call}
and the definition of $\body$ we get
\begin{pfsteps*}
\pf[4]{
(\texttt{this}:\sType{t}\typeActualReceive,~ \multi{x:\tau}).e[\theta]
= \body(\vtype(v).m\typeActualMethod)
}
\pf[5]{\theta = (\Phi, \Psi := \phi, \psi)}
\pf[6]{\funcDelc{\sType{t}\typeFormalReceive}{m\typeFormalMethod}{\multi{x~\tau}}{\tau}{\return~e} \in \multi{D}}
\end{pfsteps*}
and so $v.m\typeActualMethod(\multi{v})$ is translated using rule \rulename{d-call}
\begin{pfsteps*}
\pf[8]{\dict[\emptyset;\emptyset; \emptyset]{
v.m\typeActualMethod(\multi{v})
}{
\lex{v}.(\sType{t}).m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}})
}}
\end{pfsteps*}
where $\lex{v}$ is defined using \rulename{d-value}
\begin{pfsteps*}
\pf[vddagger]{
\dict[\emptyset; \emptyset; \emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}
}{
\sType{t}\sytxBrace{\multi{\lex{v_1}}, \makeDict[\emptyset, \emptyset]{\phi, \Phi}}
}
}
\end{pfsteps*}
With $\Phi = (\typeFormal)$ and
$\Psi = (\typeFormal[\multi{\beta~\iType{u}[\Psi']}])$,
the method definition \pfref{6} is translated using \rulename{d-meth}
\begin{pfsteps*}
\pf[9]{\eta = \multi{\alpha\mapsto\texttt{this}.\texttt{dict}}, \multi{\beta \mapsto \texttt{dict}}}
\pf[10]{\dict[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}{\lex{e}}}
\pf[11]{\dict[]{
\funcDelc{\sType{t}\typeFormalReceive}{m\typeFormalMethod}{\multi{x~\tau}}{\tau}{\return~e} $\\\qquad$
}{
\funcDelc{\sType{t}}{m}{\multi{\texttt{dict}~\dictName{\iType{u}}},~\multi{x~\texttt{Any}}}{\texttt{Any}}{\return~\lex{e}}
}}
\end{pfsteps*}
From here on we write $\lex{e}$ using the functional notation
\[\lex{e} = \map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}\]
Now that we have fleshed out the translation, we turn to
the translated term's reductions.
For our value $v$ of type $\sType{t}\typeActualReceive$, the translated
term $\lex{v}$ is both a value and of type $\sType{t}$;
this is immediately evident from \rulename{d-value}.
As such, the assertion is always resolved by $\red_{\text{e}}$.
\begin{pfsteps*}
\pf[12]{\lex{v}.(\sType{t}).m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}}) \red_{\text{e}}
\lex{v}.m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}})}
\end{pfsteps*}
resolving the method call to the implementation in \pfref{11}
\begin{pfsteps*}
\pf[13]{
\lex{v}.m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}})
$\\\qquad$\longrightarrow
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\texttt{this} \mathbin{:=} \lex{v},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\multi{x \mathbin{:=} \lex{v}}]
}
\end{pfsteps*}
By the definition of $\lex{v}$, we can separate the substitution $\texttt{this}\mathbin{:=} \lex{v}$ into\\
$\texttt{this}\mathbin{:=} \lex{v}, \multi{\texttt{this}.\texttt{dict} \mathbin{:=} \lex{v}.\texttt{dict}}$, meaning that we can
rewrite the reduced term and then apply Lemmas~\ref{lem:preposttype} and~\ref{lem:prepostval}
\begin{pfsteps*}
\pf[13]{
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\texttt{this} \mathbin{:=} \lex{v},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\multi{x \mathbin{:=} \lex{v}}]$\\\qquad$
=
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \lex{v}.\texttt{dict}},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\texttt{this} \mathbin{:=} \lex{v},
\multi{x \mathbin{:=} \lex{v}}]$\\\qquad$
=
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\multi{\texttt{this}.\texttt{dict}} \mathbin{:=} \makeDict[\emptyset;\emptyset]{\phi, \Phi},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\texttt{this} \mathbin{:=} \lex{v},
\multi{x \mathbin{:=} \lex{v}}]$\\\qquad$
\dictred^*
\map[\emptyset; \emptyset; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e[\theta]}
[ \texttt{this} \mathbin{:=} \lex{v},
\multi{x \mathbin{:=} \lex{v}}]
$\\\qquad$
\dictred^*
\map[\emptyset; \emptyset; \emptyset]{e[\theta][\texttt{this} \mathbin{:=} v,
\multi{x \mathbin{:=} v}]}
}
\end{pfsteps*}
\item[] \caseof{r-call} --- (b) direction
\begin{pfsteps*}
\pf[1]{\wellTyped[\emptyset; \emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}.m\typeActualMethod(\multi{v_2})
}{\tau[\theta]}}
\pf[2]{\dict[\emptyset;\emptyset;\emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}.m\typeActualMethod(\multi{v_2})
}{
\sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.(\sType{t}).m(\lex{\psi},\multi{\map[\emptyset;\emptyset;\emptyset]{v_2}})
}
}
\pf[3]{\lex{\phi} = \makeDict[\emptyset;\emptyset]{\phi, \Phi}}
\pf[4]{\lex{\psi} = \makeDict[\emptyset;\emptyset]{\psi, \Psi}}
\end{pfsteps*}
By the well-typedness of \pfref{1} we know that
\begin{pfsteps*}
\pf[5]{\funcDelc{\sType{t}\typeFormalReceive}{m\typeFormalMethod}{ \multi{x~\tau}}{\tau}{\return e} \in \multi{D}}
\pf[6]{\type \sType{t}\typeFormalType \struct{\multi{y~\sigma}} \in \multi{D}}
\end{pfsteps*}
Translating \pfref{5} with $\Delta = \Phi, \Psi$ where $\Phi = \multi{\alpha~\iType{\tau}}$,
$\Psi = \multi{\beta~\iType{\sigma}}$, and
$\eta = \multi{\alpha \mapsto \texttt{this}.\texttt{dict}}, \multi{\beta \mapsto \texttt{dict}}$ we get
\begin{pfsteps*}
\pf[7]{\funcDelc{\sType{t}}{m}{ \multi{x~\texttt{Any}}}{\texttt{Any}}{\return \map{e}} \in \multi{\lex D}}
\end{pfsteps*}
The (b) direction assumes a reduction on the translated term.
We first note that $\makeDict[\emptyset;\emptyset]{\cdots}$ is always
a value. We then consider the trivial $\red_{\text{e}}$ reduction available
before taking the \rulename{r-call} step.
\begin{pfsteps*}
\pf[8]{
\sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.(\sType{t}).m(\lex{\psi},\multi{\map[\emptyset;\emptyset;\emptyset]{v_2}})
$\\\qquad$ \red_{\text{e}}
\sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.m(\lex{\psi},\multi{\map[\emptyset;\emptyset;\emptyset]{v_2}})
$\\\qquad$ \longrightarrow
\map{e}[\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}},
\multi{\texttt{dict}} \mathbin{:=} \lex{\psi}]
$\\\qquad$ =
\map{e}[\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.\texttt{dict}},
\multi{\texttt{dict}} \mathbin{:=} \lex{\psi}]
[
\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}}
]
}
\end{pfsteps*}
When we consider the $\dictred$ reduction
we can relate $\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.\texttt{dict}}$
and $\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \lex{\phi}}$. This allows us to use
Lemmas~\ref{lem:preposttype} and~\ref{lem:prepostval}.
\begin{pfsteps*}
\pf[8]{
\map{e}[\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.\texttt{dict}},
\multi{\texttt{dict}} \mathbin{:=} \lex{\psi}]
[
\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}}
]
$\\\qquad$ \dictred^*
\map[\emptyset;\emptyset;\Gamma]{e[\Phi \mathbin{:=} \phi, \Psi \mathbin{:=} \psi]}
[
\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}}
]
$\\\qquad$ \dictred^*
\map[\emptyset;\emptyset;\emptyset]{e[\Phi \mathbin{:=} \phi, \Psi \mathbin{:=} \psi]
[\texttt{this} \mathbin{:=} \sType{t}[\phi]\sytxBrace{\multi{v_1}},
\multi{x \mathbin{:=} v_2}]
}
}
\end{pfsteps*}
We now look at the (only) reduction available to the original term
\begin{pfsteps*}
\pf[9]{\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}.m\typeActualMethod(\multi{v_2})
\longrightarrow e[\Phi \mathbin{:=} \phi, \Psi \mathbin{:=} \psi][
\texttt{this} \mathbin{:=} \sType{t}\typeActualReceive\sytxBrace{\multi{v_1}},
\multi{x \mathbin{:=} v_2}]}
\end{pfsteps*}
\item[] \caseof{r-assert} --- (a) direction
\begin{pfsteps*}
\pf[1]{\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)
\longrightarrow \sType{t}\typeActualReceive\sytxBrace{\multi{v}}}
\pf[2]{\dict[\emptyset;\emptyset;\emptyset]
{\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)}
{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})}}
\pf[3]{\type~\sType{t}\typeFormalType~T \in \multi{D}}
\pf[4]{\lex{\phi} = \makeDict[\emptyset;\emptyset]{\phi, \Phi}}
\pfstep{\inversion{r-assert}~\pfref{1}}
{5}{\subtype[\emptyset]{\sType{t}\typeActualReceive}{\tau}}
\end{pfsteps*}
As such we know that $\typemeta[\emptyset]{\tau}.\texttt{tryCast}$
should return (as opposed to panicking) if and only if
$\subtype[\emptyset]{\sType{t}\typeActualReceive}{\tau}$.
\pfSubcase{\tau = \iType{u}\typeActualMethod}
For \pfref{5} to hold, the following must be satisfied
\begin{pfsteps*}
\pf[6]{\methods_\emptyset(\sType{t}\typeActualReceive) \supseteq
\methods_\emptyset(\iType{u}\typeActualMethod)}
\pf[7]{\type~\iType{u}\typeFormalMethod~\interface{\multi{S}} \in \multi{D}}
\end{pfsteps*}
For all $mM_{u} \in \multi{S}$ there
must exist a function
\[\func~(\texttt{this}~\sType{t}\typeFormalReceive)~mM_t~\sytxBrace{\return e} \in \multi{D}\]
such that \[M_u[\Psi \mathbin{:=} \psi] = M_t[\Phi \mathbin{:=} \phi]\]
To show that this property is preserved, we first need to elaborate
a number of other definitions.
Let $\Psi = (\typeFormal[\multi{\beta~\iType{\sigma}}])$,
and the map $\zeta$ be $\{\multi{\beta \mapsto \texttt{this}.\texttt{\_type}}\}$.
\begin{pfsteps*}
\pf[9]{\dict[]{\type~\iType{u}\typeFormalMethod~\interface{\multi{S}}}{ $\\\qquad$
\type \iType{u} \interface{\multi{\lex{S}},~\multi{\method{spec\_mdata}{S}}}$\\\qquad$
\funcDelc{\metadataName{\iType{u}}}{\texttt{tryCast}}{x~\texttt{Any}}{\texttt{Any}}{$\\\qquad$
\qquad\left\{
\lit{if} (x.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;$\\\qquad$
\qquad \return x
$\\\qquad$
}
}}
\end{pfsteps*}
For $\Phi = (\typeFormal)$ and $\phi = \multi{\tau}$, let the map
$\zeta' = \{\alpha \mapsto \texttt{this}.\texttt{dict}_i.\texttt{\_type}\}$.
\begin{pfsteps*}
\pf[10]{\dict[]{
\func~(\texttt{this}~\sType{t}\typeFormalReceive)~mM_t~\sytxBrace{\return e}$\\\qquad$
}{
\func~(\texttt{this}~\sType{t})~\method{spec\_name}{m}()~\fnMeta{n}~\sytxBrace{\return \signatureMeta[\zeta']{M_t}}
}}
\pf[11]{\lex{\phi} = \makeDict[\emptyset;\emptyset]{\multi{\tau}, \typeFormal}
= \multi{\dictName{\iType{t}}\sytxBrace{\multi{ptr}, \typemeta[\emptyset]{\tau}} }}
\end{pfsteps*}
We may now consider the reduction of the translated term $\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})$
{\small
\begin{pfsteps*}
\pf[12]{
\typemeta[\emptyset]{\iType{u}\typeActualMethod}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})
$\\\qquad$ \longrightarrow $\\\qquad$
\left\{
\lit{if} (\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$\return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
We can now use Lemma \ref{lem:methspec} to resolve $\zeta$
{\small
\begin{pfsteps*}
\pf[13]{ \left\{
\lit{if} (\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$ \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}}
\end{pfsteps*}
}
Using $\ensuremath{\rho_{\text{sim}}}$ we can further reduce the term.
While these reductions would occur sequentially, we simplify the presentation by showing them together.
We begin by looking at $\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\iType{u})$.
Since $\subtype[\emptyset]{\sType{t}\typeActualReceive}{\iType{u}\typeActualMethod}$
we know that $\sType{t}$ must possess each method defined by $\iType{u}$.
{\small
\begin{pfsteps*}
\pf[14]{ \red_{\text{s}}^* \left\{
\lit{if} (\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$ \quad \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$
\red_{\text{s}}^*
\left\{
\lit{if} (\signatureMeta[\zeta']{M_t} \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$ \quad \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
We can now use Lemma \ref{lem:methspec} to resolve $\zeta'$
{\small
\begin{pfsteps*}
\pf[15]{
\red_{\text{s}}^*
\left\{
\lit{if} (\signatureMeta[\emptyset]{M_t[\Phi \mathbin{:=} \phi]} \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$\quad \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
Since $M_u[\Psi \mathbin{:=} \psi] = M_t[\Phi \mathbin{:=} \phi]$, no $\lit{if}$ is
triggered.
\begin{pfsteps*}
\pf[16]{
\red_{\text{s}}^*
\return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$
\red_{\text{s}} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}}
\end{pfsteps*}
which is the desired term.
\pfSubcase{\tau = \sType{t}[\phi]}
If $\tau$ is a structure type, then for \pfref{5} to hold it must
be precisely the same as the type of the asserted value.
\begin{pfsteps*}
\pf[17]{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})
$\\\qquad$
= \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\phi}}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})
$\\\qquad$
\longrightarrow \{\lit{if} ~ \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\phi}}.\texttt{\_type}_i \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
Once again we use $\ensuremath{\rho_{\text{sim}}}$ to resolve the assertion. We also
use the same proof simplification and ignore explicit sequentiality.
{\small
\begin{pfsteps*}
\pf[18]{\{\lit{if} ~ \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\phi}}.\texttt{\_type}_i \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \lex{\phi_i}.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \typemeta[\emptyset]{\phi_i}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
\item[] \caseof{r-assert} --- (b) direction\\
This direction closely follows the (a) direction, except that it
does not assume \\$\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)
\longrightarrow \sType{t}\typeActualReceive\sytxBrace{\multi{v}}$.
Yet by our assumption that $\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)$
does not produce a type assertion error, this reduction must exist. It
then suffices to show that the source and target terms' reductions match,
which is given in (a).
\item[] \caseof{r-assert} --- (c) direction\\
We first note that $e = v.(\tau)$ is the only case for (c)
as no other term can produce a panic, and that
$\Longrightarrow$ is defined as the greatest reduction available.
As such, for $\lex{e} \Longrightarrow e'$ there is no further $\red_{\text{s}}$ reduction from $e'$.
\begin{pfsteps*}
\pf[1]{v.(\tau)~\ensuremath{\mathsf{panic}}}
\pf[2]{\not\subtype[\emptyset]{\vtype(v)}{\tau}}
\pf[3]{\dict[]{v.(\tau)}{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\map[\emptyset;\emptyset;\emptyset]{v})}}
\pf[4]{v = \sType{t}\typeActualReceive\sytxBrace{\multi{v}}}
\pf[5]{\map[\emptyset;\emptyset;\emptyset]{v} =
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}}
\end{pfsteps*}
\pfSubcase{\tau = \iType{u}\typeActualMethod}\\
For \pfref{2} to hold there must be at least one method
$mM \in \methods_\emptyset(\tau)$
such that $mM \not\in \methods_\emptyset(\vtype(v))$.
Let $\Psi = (\typeFormal[\multi{\beta~\iType{\sigma}}])$,
and the map $\zeta$ be $\{\multi{\beta \mapsto \texttt{this}.\texttt{\_type}}\}$.
\begin{pfsteps*}
\pf[9]{\dict[]{\type~\iType{u}\typeFormalMethod~\interface{\multi{S}}}{ $\\\qquad$
\type \iType{u} \interface{\multi{\lex{S}},~\multi{\method{spec\_mdata}{S}}}$\\\qquad$
\funcDelc{\metadataName{\iType{u}}}{\texttt{tryCast}}{x~\texttt{Any}}{\texttt{Any}}{$\\\qquad$
\qquad\left\{
\lit{if} (x.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;$\\\qquad$
\qquad \return x
$\\\qquad$
}
}}
\end{pfsteps*}
The translated term will always be able to make the reduction
\begin{pfsteps*}
\pf[8]{
\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\map[\emptyset;\emptyset;\emptyset]{v})
\longrightarrow
\left\{
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ; $\\\qquad$\quad \return x
}
\end{pfsteps*}
For convenience we assume that the problematic method $m$ is the
first to be checked. If this is not the case then we may discharge
all passing checks using $\red_{\text{s}}$ as described in the \rulename{r-assert}
(a) case.
\begin{pfsteps*}
\pf[7]{
\left\{
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ; $\\\qquad$\quad \return x
$\\\qquad$
\red_{\text{s}}^*
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
} ; \cdots
}
\end{pfsteps*}
We now need to consider the two possible cases in which
$mM \not\in \methods_\emptyset(\vtype(v))$ could hold.
Either there is no method called $m$ in $\methods_\emptyset(\vtype(v))$,
or there is a method $m$ but with a different signature.
In the former case the assertion $E[\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u})]$
panics: since, by our assumption, the translation never introduces a
name collision, the method $m$ is not in $\methods(\sType{t})$
(the methods of $\vtype(\map[\emptyset;\emptyset;\emptyset]{v})$).
In the latter case we have
$mM_t[\Phi \mathbin{:=} \phi] \in \methods_\emptyset(\vtype(v))$
and $mM_u[\Psi \mathbin{:=} \psi] \in \methods_\emptyset(\iType{u}\typeActualMethod)$
with $M_t[\Phi \mathbin{:=} \phi] \mathbin{!=} M_u[\Psi \mathbin{:=} \psi]$,
so the $\lit{if}$ branches to $\lit{panic}$.
Let $\Phi = (\typeFormal)$, $\phi = \multi{\tau}$, and the map
$\zeta' = \{\alpha \mapsto \texttt{this}.\texttt{dict}_i.\texttt{\_type}\}$.
\begin{pfsteps*}
\pf[10]{\dict[]{
\func~(\texttt{this}~\sType{t}\typeFormalReceive)~mM_t~\sytxBrace{\return e}$\\\qquad$
}{
\func~(\texttt{this}~\sType{t})~\method{spec\_name}{m}()~\fnMeta{n}~\sytxBrace{\return \signatureMeta[\zeta']{M_t}}
}}
\pf[11]{\map[\emptyset;\emptyset;\emptyset]{v}
=
\sTypeInit{t}{\multi{\lex{v}}, \multi{\dictName{\iType{t}}\sytxBrace{\multi{ptr}, \typemeta[\emptyset]{\tau}} }}}
\pf[12]{
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.\method{spec\_name}{m}() \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} (\signatureMeta[\zeta']{M_t} \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
}
\end{pfsteps*}
We can now apply Lemma~\ref{lem:methspec}, first to the left-hand side and then to the right-hand side
\begin{pfsteps*}
\pf[13]{
\lit{if} (\signatureMeta[\zeta']{M_t} \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
$\\\qquad$
\red_{\text{s}}^*
\lit{if} (\signatureMeta[\emptyset]{M_t[\Phi\mathbin{:=}\phi]} \mathbin{!=} \signatureMeta[\emptyset]{M_u[\Psi\mathbin{:=}\psi]})~ \sytxBrace{ \lit{panic}} ; \cdots
}
\end{pfsteps*}
By $M_t[\Phi \mathbin{:=} \phi] \mathbin{!=} M_u[\Psi \mathbin{:=} \psi]$ this reduces to
the desired $\lit{panic}$.
\pfSubcase{\tau = \sType{u}\typeActualMethod}
{\small
\begin{pfsteps*}
\pf[14]{\dict[]{\type \sType{u}[\Phi] \struct{\cdots}$\\\qquad$}{
\type \metadataName{\sType{u}} \struct{\multi{\texttt{\_type}~\texttt{\_type\_mdata}}}
$\\\qquad$
\funcDelc{\texttt{this}~\metadataName{\sType{u}}}{\texttt{tryCast}}{x~\texttt{Any}}{\texttt{Any}}{
$\\\qquad$
\qquad x.(\sType{u})~;~\{\lit{if} ~ \texttt{this}.\texttt{\_type}_i \mathbin{!=} x.(\sType{u}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return x
$\\\qquad$
}
}}
\end{pfsteps*}
}
Let $\vtype(v) = \sType{t}\typeActualReceive$.
If $\tau$ is a struct type then there are two cases.
Either
$\sType{u} \mathbin{!=} \sType{t}$, or
$\sType{u} = \sType{t}$
but for $\phi = \multi{\sigma}$ and $\psi = \multi{\tau}$ there
exists an $i$ such that $\sigma_i \mathbin{!=} \tau_i$.
We first consider the case $\sType{u} \mathbin{!=} \sType{t}$.
Note that $\vtype(\map[\emptyset;\emptyset;\emptyset]{v}) = \sType{t}$.
\begin{pfsteps*}
\pf[20]{\typemeta[\emptyset]{\sType{u}\typeActualMethod}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
= \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
\longrightarrow \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{u})~;\cdots
}
\end{pfsteps*}
By our assumption $\sType{u} \mathbin{!=} \sType{t}$ we get the desired
\ensuremath{\mathsf{panic}}.
We now consider the case of $\sType{u} = \sType{t}$
but for $\phi = \multi{\sigma}$ and $\psi = \multi{\tau}$ there
exists an $i$ such that $\sigma_i \mathbin{!=} \tau_i$.
\begin{pfsteps*}
\pf[21]{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
= \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
\longrightarrow \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t})~;~\{\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \cdots
$\\\qquad$ \red_{\text{s}}
\{\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \cdots
}
\end{pfsteps*}
We once again only need to consider the (lowest) $i$ for which $\sigma_i \mathbin{!=} \tau_i$.
All prior $\lit{if}$ statements pass as per \rulename{r-assert} (a).
\begin{pfsteps*}
\pf[22]{
\{\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \cdots
$\\\qquad$
\red_{\text{s}}^*
\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\makeDict[\emptyset;\emptyset]{\sigma_i, \Phi_i}.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\typemeta[\emptyset]{\sigma_i}~\sytxBrace{\lit{panic}} ; \return \cdots
}
\end{pfsteps*}
By our assumption that $\tau_i \mathbin{!=} \sigma_i$ we get the desired
\ensuremath{\mathsf{panic}}.
\item[] \caseof{r-assert} --- (d) direction\\
Once again we need only consider $e= v.(\tau)$.
This case follows from case (c), but we must first
show that there exists at least one reduction $\lex{e}\red_{\text{e}}^*\longrightarrow d$.
This $d$ then reduces by $\red_{\text{s}}^*$ to $e'$, where
$e'$ is a type assertion error.
We know that $d$ exists by observing that the translation of $v.(\tau)$
will always reduce ($\longrightarrow$) by \rulename{r-call} on $\texttt{tryCast}$.
This $d$ will then reduce ($\red_{\text{s}}^*$) to $e'$, which by
the same logic as (c) is a type assertion error.
\item[] \caseof{r-context} --- (a) direction\\
The only non-immediate case for \rulename{r-context} is when
$E = \square.m\typeActualMethod(\multi{v})$.
\begin{pfsteps*}
\pf[1]{\infer{
e.m\typeActualMethod(\multi{v})
\longrightarrow
d.m\typeActualMethod(\multi{v})
}{
e\longrightarrow d
}}
\pf[2]{
\wellTyped[\emptyset; \emptyset]{ e.m\typeActualMethod(\multi{v}) }{\sigma}
}
\pfstep{\inversion{t-call}}
{3}{\wellTyped[\emptyset; \emptyset]{e}{t[\phi]}}
\end{pfsteps*}
By preservation (Theorem~\ref{thm:fggTypePreservation})
\begin{pfsteps*}
\pf[4]{\wellTyped[\emptyset; \emptyset]{d}{u[\phi']}}
\pf[5]{\subtype[\emptyset]{u[\phi']}{t[\phi]}}
\end{pfsteps*}
Translating $E[e]$ and $E[d]$ we get
\begin{pfsteps*}
\pf[6]{\map[\emptyset;\emptyset;\emptyset]{E[e]} =
\map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}} )
}
\pf[7]{\map[\emptyset;\emptyset;\emptyset]{E[d]} =
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}} )
}
\end{pfsteps*}
By the induction hypothesis on $e\longrightarrow d$
\begin{pfsteps*}
\pf[8]{\map[\emptyset;\emptyset;\emptyset]{e}
\Longrightarrow \dictred^\ast
\map[\emptyset;\emptyset;\emptyset]{d}}
\end{pfsteps*}
Using the evaluation context $\square.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})$
\begin{pfsteps*}
\pf[9]{\map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
\Longrightarrow \dictred^\ast
\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
}
\end{pfsteps*}
Using synthetic assertion specialisation and Lemma~\ref{lem:sub:pres} on \pfref{5}
\begin{pfsteps*}
\pf[10]{\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
\dictred
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})}
\end{pfsteps*}
\item[] \caseof{r-context} --- (b) direction\\
The only non-immediate case for \rulename{r-context}
is for $E = \square.m\typeActualMethod(\multi{v})$
\begin{pfsteps*}
\pf[0]{\wellTyped[\emptyset;\emptyset]{e}{t[\phi]}}
\pf[2]{\map[\emptyset;\emptyset;\emptyset]{E[e]} =
\map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}} )
}
\pf[1]{ \map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) \Longrightarrow e'.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) }
\end{pfsteps*}
By inversion on the reduction $\Longrightarrow$ using context
$E' = \square.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})$
\begin{pfsteps*}
\pf[3]{ \map[\emptyset;\emptyset;\emptyset]{e} \Longrightarrow e'}
\end{pfsteps*}
By the induction hypothesis on \pfref{3} there exists $d$
\begin{pfsteps*}
\pf[4]{e\longrightarrow d}
\pf[5]{e' \dictred^* \map[\emptyset;\emptyset;\emptyset]{d}}
\pfstep{Theorem~\ref{thm:fggTypePreservation} \pfref{0}}
{7}{\wellTyped[\emptyset;\emptyset]{d}{u[\phi']}}
\pf[8]{\subtype[\emptyset]{u[\phi']}{t[\phi]}}
\end{pfsteps*}
Applying \pfref{5} on context $C=E'$ we get that
\begin{pfsteps*}
\pf[6]{e'.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) \dictred^*
\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) }
\end{pfsteps*}
Using synthetic assertion specialisation and Lemma~\ref{lem:sub:pres} on \pfref{8}
\begin{pfsteps*}
\pf[6]{\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
\dictred^*
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) }
\end{pfsteps*}
Using \rulename{r-context} on \pfref{4} and context $E$ we get
\begin{pfsteps*}
\pf[10]{E[e]\longrightarrow E[d]}
\end{pfsteps*}
Finally, using the typing of $d$ \pfref{7} we get the translation of $E[d]$
\begin{pfsteps*}
\pf[11]{\map[\emptyset;\emptyset;\emptyset]{E[d]} =
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})}
\end{pfsteps*}
\end{itemize}
\end{proof}
\lemredrewrite*
\begin{proof}
(1) is immediate as if $d_1\longrightarrow d_2$ then $E[d_1] \longrightarrow E[d_2] \longrightarrow e_3$.\\
(2) is by case analysis on the reduction $e_2\longrightarrow e_3$.
\begin{itemize}
\item[] \caseof{r-field}:
We have that $e_2 = \sTypeInit{t}{\multi{v}}$.
There are two possible options for the congruence
evaluation context $C$: either it is $\square$, or it is
of the form $\sTypeInit{t}{\multi{v}, \square, \multi{v}}$.
In either case we get a contradiction as both cases
are captured by the standard evaluation context.
\item[] \caseof{r-call}:
Same logic as \rulename{r-field}.
\item[] \caseof{r-assert}:
Same logic as \rulename{r-field}.
\item[] \caseof{r-context}:
We begin with an assumption that the
congruence context $C$ deviated from the standard context
at the top level. Namely there does not exist $E'$, $C'$ such
that $C = E'[C']$. We do this for clarity.
In the situation that this does not
hold, we may simply add $E'$ where appropriate.
There are two cases for $C\neq E$. Either
$C$ is of the form
$\sTypeInit{t}{\multi{v}, e, \multi{e_1}, C_{\text{sub}}, \multi{e_2}}$ or
$v.m(\multi{v}, e, \multi{e_1}, C_{\text{sub}}, \multi{e_2})$.
We focus on the former.
The starting term $e_1$ is $\sTypeInit{t}{\multi{v}, e, \multi{e_1}, C_{\text{sub}}[d_1], \multi{e_2}}$,
the subsequent term $e_2$ is $\sTypeInit{t}{\multi{v}, e, \multi{e_1}, C_{\text{sub}}[d_2], \multi{e_2}}$,
and the final term $e_3$ is $\sTypeInit{t}{\multi{v}, d, \multi{e_1}, C_{\text{sub}}[d_2], \multi{e_2}}$
for some $d$ such that $e\longrightarrow d$.
Our initial term may make a standard reduction to
$e_2' = \sTypeInit{t}{\multi{v}, d, \multi{e_1}, C_{\text{sub}}[d_1], \multi{e_2}}$,
followed by a $\rightarrowtriangle$ reduction using $C'' = \sTypeInit{t}{\multi{v}, d, \multi{e_1}, C_{\text{sub}}, \multi{e_2}}$
to $e_3$.
\end{itemize}
\end{proof}
\lemredvalue*
\begin{proof}
Assume for contradiction that $\rightarrowtriangle \not\in \longrightarrow$.
Either
\begin{itemize}
\item[] \resetpfcounter\textbf{Case : } $C[e']\rightarrowtriangle C[d] = v$ where $C$ is not the
standard $\longrightarrow$ reduction context. Since there
must be another $\longrightarrow$ reduction from $C[d]$ using the
standard reduction context $E$, it cannot be a value.
\item[] \resetpfcounter\textbf{Case : } $C[e'.(u)]\rightarrowtriangle C[e'.(t)]$. Immediate as $ C[e'.(t)]$
is not a value.
\end{itemize}
\end{proof}
\section{Appendix: Motivating Example Translated Using Erasure}
\label{sec:erasure-example}
\input{figs/dyn/fgg-nomono.tex}
\section{Appendix: \glsentrylong{fg}}
\label{appendix:fg}
\label{app:fg}
For the reviewer's convenience, this section provides
fuller explanations of the syntax and the complete definitions of the typing
system of \cite{griesemer2020featherweight}.
\subsection{\glsentrylong{fg} Syntax}
\label{app:fg:syntax}
We explain
the syntax of \gls{fg} in
Figure~\ref{fig:fg:syntax}.
The meta variables for field ($f$), method ($m$), variable
($x$), structure type names ($t_S, u_S$), and interface type names
($t_I, u_I$) range over their respective namespaces. Types ($t, u$)
range over both structures and interfaces.
A program ($P$) is given by a sequence of declarations ($\multi{D}$)
along with a {\bf main} function which acts as the top-level expression.
We often shorten this as $P = \program{e}$.
Expressions in \gls{fg} are
variables ($x$), method calls ($e.m(\multi{e})$), structure literals
($t_S\{\multi{e}\}$), field selection ($e.f$), and type assertion
($e.(t)$).
Declarations ($D$) can take three forms:
\emph{structure}, \emph{interface}, or \emph{method declaration}. The
structure declaration ($\struct{\multi{f\ t}}$) gives a sequence of
typed fields whereas the interface declaration ($\interface{\multi{S}}$)
gives the method specifications which instances of that interface
should implement. A method specification ($m(\multi{x~t})~t$)
prescribes the name and type for implementing methods.
A method declaration ($\func (x\ t_S)\ m(\multi{x\ t})\ t_r\ \{b\}$)
defines a method $m$ on the structure $t_S$. This method accepts the
arguments $\multi{x\ t}$, which along with the receiver $x$ are passed
to the method body $b$. On a successful computation this method will
return a result of type $t_r$. The special $\lit{main}$ function acts
as the entry point, and thus has no receiver, arguments, or
return value.
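To make the syntax concrete, the following small \gls{fg} program (a
hypothetical example, not taken from \cite{griesemer2020featherweight})
declares a structure, an interface it implements, and a method, with a
method call as the top-level expression:
\begin{lstfcgg}
type Point struct { x int; y int }
type HasX interface { X() int }
func (this Point) X() int { return this.x }
func main() { Point{1, 2}.X() }
\end{lstfcgg}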
\subsection{\glsentrylong{fg} Typing}
\label{app:fg:types}
\input{figs/fg/fg-typing}
For the reviewer's convenience,
we reproduce the typing system with minimal explanations.
Figure~\ref{fig:fg:types} gives the \gls{fg} typing rules
and auxiliary functions.
Environment $\Gamma$ is a sequence
of typed variable names ($\multi{x : t}$).
We assume
all variables in $\Gamma$ are distinct, and
write $\Gamma,x : t$ if $x\not\in \dom{\Gamma}$.
\myparagraph{Implements}
The \emph{implements relation} ($t <: u$) holds if type $t$ is a subtype of type $u$,
understood as a relationship in which a variable of type $u$ can be
substituted by any variable of type $t$.
A structure can only be
implemented by itself (\rulename{<:s}). An interface $t_I$ can be
implemented by any type that possesses at least the same methods as
$t_I$ (\rulename{<:i}).
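For instance (hypothetical declarations), since the method set of
\inlinelstfcgg{ReadWriter} below is a superset of that of
\inlinelstfcgg{Reader}, we have \inlinelstfcgg{ReadWriter} $<:$
\inlinelstfcgg{Reader} by \rulename{<:i}; likewise, any structure declaring
both methods implements both interfaces:
\begin{lstfcgg}
type Reader interface { Read() int }
type ReadWriter interface { Read() int; Write(x int) int }
\end{lstfcgg}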
\myparagraph{Well-formedness}
A well-formed term is one that is not only syntactically correct, but
one that also has semantic meaning.
$x \operatorname{\var{ok}}$ holds if $x$ is well-formed
according to the typing rules, with the extension
$\wellFormed{x}$ if the term is well-formed in the environment
$\Gamma$.
A type declaration is
well-formed when the type it declares is well-formed (\rulename{t-type}),
which happens when it is either a structure with distinct and
well-formed fields (\rulename{t-struct}), or an interface with unique and
well-formed method specifications (\rulename{t-interface}). Method
specifications are well-formed when all argument types and the return
type are well-formed (\rulename{t-specification}).
\myparagraph{Method body and statement type checking}
The typing judgement $\wellTyped{x}{t}$ holds if the term $x$ has type $t$
in the environment $\Gamma$. A method
($\textbf{func}~(x~t_S)~m(\multi{x~t})~u~\{b\}$) is well-formed if the
type of the receiver, the return, and all arguments are well-formed,
with all names being distinct from one
another.
A structure literal ($\sType{t}\{\multi{e}\}$) is well-typed when each
field instantiation ($\multi{e}$) subtypes the field's
declared type (\rulename{t-literal}).
Field
assignment and access follow the order of declaration.
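For instance (a hypothetical literal, reusing the declarations above), given
\inlinelstfcgg{type Pair struct \{ fst HasX; snd HasX \}}, the literal
\inlinelstfcgg{Pair\{Point\{1, 2\}, Point\{3, 4\}\}} is well-typed by
\rulename{t-literal}, since \inlinelstfcgg{Point} $<:$ \inlinelstfcgg{HasX}
and the field instantiations follow the order of declaration.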
\myparagraph{Expression type checking}
Given an expression $e$ of type $u$ the type assertion $e.(t)$ casts the expression to
type $t$. There are three non-overlapping type assertion rules.
The Go specification only permits type assertions from an interface type, which
informs rule \rulename{t-assert$_I$}.
An assertion between two interface types (\rulename{t-assert$_I$}) does not statically
check the assertion since the expression $e$ could evaluate to a term that
implements the target type $t$.
Assertion from an interface type~$u$ to a non-interface type~$t$
is allowed only if $t$~implements~$u$ (\rulename{t-assert$_S$}).
Not part of the Go specification and not needed for compile-time
checking,
the rule \rulename{tr-stupid}
is only used for the type assertion $e.(t)$ where $e$ has evaluated to a concrete non-interface type.
This assertion provides no utility at compile time, as an assertion from a non-interface type is either a no-op
or unnecessarily erases type information -- yet without this rule a term may become ill-typed during evaluation.
More detailed explanations can be found in
\cite[\S~3.3]{griesemer2020featherweight}.
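As a small illustration (a hypothetical program, assuming the usual empty
interface \inlinelstfcgg{Any}) of why \rulename{tr-stupid} is needed,
consider:
\begin{lstfcgg}
type Unit struct {}
type Box struct { f Any }
func main() { Box{Unit{}}.f.(Unit) }
\end{lstfcgg}
The source assertion is typed by \rulename{t-assert$_S$}, since the field
access \inlinelstfcgg{Box\{Unit\{\}\}.f} has interface type
\inlinelstfcgg{Any} and \inlinelstfcgg{Unit} implements \inlinelstfcgg{Any}.
After \rulename{r-field} fires, the residual term
\inlinelstfcgg{Unit\{\}.(Unit)} asserts on an expression of concrete struct
type, and is typeable only by \rulename{tr-stupid}.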
\begin{theorem}[Preservation]
\label{fcgSubjtCong}
\label{thm:fgTypePreservation}
{\rm (Theorem 3.3 in \cite[\S~3.3]{griesemer2020featherweight})}
If $\wellTyped[\emptyset]{e}{u}$ and $\reduction{e}{e'}$
then $\wellTyped[\emptyset]{e'}{t}$ for some $t <: u$.
\end{theorem}
\begin{theorem}[Progress]
\label{thm:fgProgress}
{\rm (Theorem 3.4 in \cite[\S~3.3]{griesemer2020featherweight})}
If $\wellTyped[\emptyset]{e}{u}$ then
$e$ is either a value, $\reduction{e}{e'}$ for some $e'$ or
$e$ panics.
\end{theorem}
\section{Appendix: \glsentrylong{fgg}}
\label{app:fgg}
\label{appendix:fgg}
\input{figs/fgg/fgg-typing}
For the reviewer's convenience, this section provides
the definitions and fuller explanations of the typing
system of \cite{griesemer2020featherweight}.
\subsection{\glsentrylong{fgg} Typing Rules}
Judgements are extended with the type environment $\Delta$, which
relates type parameters to their bounds rather than plain type names.
The subtyping $\subtype{\tau}{\sigma}$ uses $\Delta$ where both $\tau$
and $\sigma$ may have type parameters in $\Delta$;
judgement $\wellFormed[\Delta]{\tau}$ says
that the type $\tau$ is well-formed
w.r.t. all type parameters declared in $\Delta$;
a method declaration is well-formed if
$\Phi$ and $\Psi$ are well-formed types formal of the receiver and the
method, yielding $\Delta$ ($\Phi;~\Psi\operatorname{\var{ok}}~\Delta$) and the receiver's type is
declared by $\Phi'$ such that $\Phi <: \Phi'$.
Judgements for expressions,
method calls and processes are extended w.r.t. $\Delta$, accordingly.
This is so that interface subtyping (\rulename{<:i}) may ensure that
type parameters still implement all methods that an interface
requires.
The type formal subtyping rule ($\Phi <: \Psi$) ensures that if the
type substitution $\Phi :=_\Delta \multi{\tau}$ is well-defined,
then $\Psi :=_\Delta \multi{\tau}$ is well-defined.
We deviate from \cite{griesemer2020featherweight}
in our typing for \rulename{t-func}. We require that
receiver types formal are identical to those in the structure's
declaration. This more closely follows the official Go proposal \cite{type-parameters-proposal}.
Rather than require the developer to write a full type formal
which must exactly match the structure's declaration, they instead provide
a receiver type parameter list, which is converted to a type formal by
looking up the structure's type formal.
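For example (a hypothetical declaration), a method on a generic structure
repeats only the names of the receiver's type parameters; their bounds are
recovered from the structure's declaration:
\begin{lstfcgg}
type Pair[$\alpha$ Any, $\beta$ Any] struct { left $\alpha$; right $\beta$ }
func (this Pair[$\alpha$, $\beta$]) Left() $\alpha$ { return this.left }
\end{lstfcgg}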
When looking at method typing it becomes necessary to consider two
types formal, with the method's type formal ($\Psi$) depending on the
receiver's ($\Phi$). A method's type environment $\Delta$ is
constructed by the well-formed composition of the receiver's and
method's types formal ($\Phi;~\Psi\operatorname{\var{ok}}~\Delta$). This environment is
well-formed when $\Phi$ is well-formed in an empty environment while
the method's type formal $\Psi$ is well-formed under $\Phi$. Type formal
well-formedness ($\wellFormed[\Phi]{\Psi}$) holds when there is no
repetition in the type parameters between $\Phi$ and $\Psi$ and all
bounds in $\Psi$ are well-formed in the $\Phi, \Psi$ environment. This
definition allows mutually recursive bounds in $\Psi$.
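As a hypothetical illustration, in the first method below the bound of the
method's type parameter $\beta$ mentions the receiver's parameter $\alpha$,
so $\Psi$ is well-formed only under $\Phi$; the second method shows mutually
recursive bounds within $\Psi$:
\begin{lstfcgg}
type Comparer[$\alpha$ Any] interface { CompareTo(that $\alpha$) int }
type Box[$\alpha$ Any] struct { val $\alpha$ }
func (this Box[$\alpha$]) Find[$\beta$ Comparer[$\alpha$]](x $\beta$) int { return x.CompareTo(this.val) }
func (this Box[$\alpha$]) Pick[$\beta$ Comparer[$\gamma$], $\gamma$ Comparer[$\beta$]](l $\beta$, r $\gamma$) int { return l.CompareTo(r) }
\end{lstfcgg}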
A type declaration includes a type formal; this type formal is the
environment under which the declared structure or interface must be
well-formed. A structure is well-formed in an environment $\Phi$ when each
of its fields is well-formed under $\Phi$. An interface is well-formed
in $\Phi$ when each method it specifies is also well-formed under
$\Phi$.
A method specification is well-formed
($\wellFormed[\Phi]{m\typeFormalMethod(\multi{x~\tau})~\tau}$) when
its composite type environment is well-formed ($\Phi;~\Psi\operatorname{\var{ok}}~\Delta$)
and its argument and return types are well-formed under that
composite type environment.
\begin{theorem}[Preservation]
{\rm (Theorem 4.3 in \cite[\S~4]{griesemer2020featherweight})}
\label{lemma:fcgg:subjReduction:Expression}
\label{thm:fggTypePreservation}
If $\wellTyped[\emptyset; \emptyset]{e}{\tau}$
and $\reduction{e}{e'}$
then $\wellTyped[\emptyset; \emptyset]{e'}{\tau'}$,
for some $\tau'$ such that $\subtype[\emptyset; \emptyset]{\tau'}{\tau}$.
\end{theorem}
\begin{theorem}[Progress]
\label{thm:fggProgress}
{\rm (Theorem 4.4 in \cite[\S~4]{griesemer2020featherweight})}
If $\wellTyped[\emptyset;\emptyset]{e}{\tau}$ then
$e$ is either a value, $\reduction{e}{e'}$ for some $e'$ or
$e$ panics.
\end{theorem}
\pagebreak
\section{Appendix: Generic Implementations of Top 16 Statically Typed Generic Programming Languages}
\label{app:implementations}
\begin{table}[!htp]\centering
\scriptsize
\begin{tabular}{llll}\toprule
Programming Language &Mainstream Implementation &Memory Management &Runtime Environment \\\cmidrule{1-4}
Java &Erasure &Garbage Collection &JVM \\
Kotlin &Erasure &Garbage Collection &JVM \\
Scala &Erasure &Garbage Collection &JVM \\
C\# &Just-In-Time Specialisation + Non-Specialised Dictionary &Garbage Collection &.NET CLR \\
Visual Basic &Just-In-Time Specialisation + Non-Specialised Dictionary &Garbage Collection &.NET CLR \\
Dart &Erasure &Garbage Collection &Virtual Machine \\
Swift &Non-Specialised Dictionary/Monomorphisation* &Reference Counting &Native \\
Objective-C &Non-Specialised Dictionary/Monomorphisation* &Reference Counting &Native \\
Haskell &Non-Specialised Dictionary &Garbage Collection &Native \\
Go & Monomorphisation + Specialised Dictionary &Garbage Collection &Native \\
D & Monomorphisation &Garbage Collection &Native \\
C++ & Monomorphisation &Manual &Native \\
Rust & Monomorphisation &Manual &Native \\
Delphi & Monomorphisation &Manual &Native \\
Ada & Monomorphisation &Manual &Native \\
Fortran & Monomorphisation &Manual &Native \\
\bottomrule
\end{tabular}
\caption{Generic implementations of top 16 statically typed programming languages with generics. Languages are selected from the top 40 languages by IEEE Spectrum in 2021~\cite{TopProgr17:online}.
(*when source code available or specified by users.)
}\label{tab:app-pl-table}
\end{table}
\section{Conclusion}
\label{section:conclusion}
In this paper, we design and formalise a new source-to-source,
non-specialised call-site
dictionary-passing translation of Go, and prove
essential correctness properties,
introducing a novel and general \emph{bisimulation up to} technique.
The theory guides a correct implementation of
the translation,
which we empirically compare along with the recently released Go~1.18\xspace,
an erasure translator, and two existing monomorphisation
translators~\cite{gotogo,griesemer2020featherweight},
with micro and real-world benchmarks.
We demonstrate that our dictionary-passing translator handles
an important class of Go programs (\textit{F}-bounded polymorphism and {\it nomono}{}
programs) beyond the capability of Go~1.18\xspace
and existing translations \cite{gotogo,griesemer2020featherweight},
and provide several crucial findings and implications
for future compiler developers.
For instance, Go~1.18\xspace requires further improvements to GC shapes in order to
effectively generate small binary code
(see \S~\ref{section:discussion}
for a more detailed discussion).
Beyond the Go language,
many dynamically typed languages (such as Python, JavaScript, and
Erlang) type-check at runtime, and, similarly to Go, their engines cannot
easily decide an object's
implemented methods nominally.
Consequently,
many
of their implementations~\cite{salib2004starkiller,castanos2012benefits, gal2009trace}
apply approaches similar to monomorphisation to optimise
execution speed.
Rust also supports generics via monomorphisation,
yet this is considered a major reason for its slow compilation.
Our work can help in choosing alternative
optimisations for these languages to reduce
code size and compilation time.
In the future, we plan to inspect how other important Go language features
(e.g., \emph{reflection},
\emph{packages}, \emph{first-class} and \emph{anonymous} functions) interact with generics
by proving the correctness and examining the trade-offs among runtime performance, code sizes, and compilation times.
\section{Call-Site, Non-Specialising Dictionary-Passing Translation}
\label{section:dictionary}
This section presents
our new dictionary-passing translation from \gls{fgg} to \gls{fg}.
\myparagraph{High level overview}
Our call-site, non-specialising dictionary-passing translation can be
split into a number of parts, each tackling a
different challenge. Specifically, we consider:
the preservation of typeability,
the use of dictionaries to resolve
generic method implementations,
the creation of dictionaries, and
the preservation of type assertion behaviour.
These challenges have been discussed in other works,
yet the structural type system of Go
prevents the direct reuse of existing solutions.
We explain the key ideas and challenges in \S~\ref{sec:dictexample},
and detail the formal translation rules in \S~\ref{sec:dictexp}.
\subsection{Dictionary-Passing by Example}
\label{sec:dictexample}
\subsubsection{\bf Structural Subtyping and Type Erasure}
\label{paragraph:structure}
The first challenge we encounter is that
subtypes must be preserved. If, in
the source program, expression $e$ can
be used as an argument to \inlinelstfcgg{Foo}, then
the translation of $e$ should likewise
be usable as an argument to the translation of \inlinelstfcgg{Foo}.
We also require that non-subtype relations
are preserved; we defer this challenge to \S~\ref{subsubsec:typecollision}.
As a first naive attempt at removing
polymorphic types, we might observe that
regardless of the value we pass to a
polymorphic argument, it must implement the \inlinelstfcgg{Any} type.
From this, we could -- again, naively --
conclude that lifting all polymorphic
arguments to the \inlinelstfcgg{Any} type solves our problem.
Unfortunately, such a solution fails upon closer inspection.
Consider the code in Figure~\ref{code:fgg:example}.
By erasing the polymorphic types in
\inlinelstfcgg{Function}, we lose the subtype
\inlinelstfcgg{GtFunc[int]} $<:$ \inlinelstfcgg{Function[int, bool]}
(The naively erased \inlinelstfcgg{GtFunc} implements \inlinelstfcgg{Apply(in Any) bool},
while the erased
\inlinelstfcgg{Function} demands \inlinelstfcgg{Apply(in Any) Any}).
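Concretely (a sketch of the naive erasure, not our actual translation), the
erased declarations would take the following shapes, with the mismatched
\inlinelstfcgg{Apply} return types breaking the subtype:
\begin{lstfcgg}
type Function interface { Apply(in Any) Any }
type GtFunc struct { val Any }
func (this GtFunc) Apply(in Any) bool {$\cdots$}
\end{lstfcgg}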
This issue is noted in \citet[\S~4.4]{Igarashi99FJ}.
Their solution, however, is inappropriate in a
structurally typed language such as Go.
In nominally typed languages like Java,
it is clear that one type subtypes another.
One need only inspect the implementing type's
declaration, as a subtype exists only when
it is explicitly declared.
\citet{Igarashi99FJ} insert \emph{bridge methods} to
handle cases such as the \inlinelstfcgg{GtFunc}-\inlinelstfcgg{Function} example.
A bridge method is an overloaded method added
to the subtype whose type matches the erased
method as specified by the supertype, \ie{} adding an
overloaded method of type \inlinelstfcgg{Apply(in Any) Any} to \inlinelstfcgg{GtFunc}.
This method is immediately inappropriate as Go does
not allow method overloading.
The bridge method solution would still be inappropriate
were we to simulate
overloading using name mangling.
To add bridge methods, we need to
know -- statically -- that a subtype exists.
In \gls{fgg}, we need to know how two
types are instantiated before we can conclude
that a subtype relation exists.
This requires the kind of potentially infinite
whole program analysis (\S~\ref{section:nomono})
that we wished to avoid in our dictionary-passing translation.
Instead, we ensure that subtypes are
preserved by erasing \emph{all} method types, rather
than just polymorphic types.
As with \inlinelstfcgg{GtFunc}'s \inlinelstfcgg{Apply}
method in Figure~\ref{code:fg:example},
when a variable
of a known type is used, we assert it to that
type; unlike in Figure~\ref{code:fg:example}, however, the \gls{fgg} type
checker has already ensured the safety of these synthetic assertions.
\subsubsection{\bf Dictionaries}
\label{subsubsec:dictionaries}
\begin{figure}
\begin{minipage}[t]{0.4\linewidth }
\vspace{-3mm}
\begin{center}
\begin{lstfcgg}
type Ord[T Ord[T]] interface {
Gt[](that T) bool
}
type GtFunc[T Ord[T]] struct { val T }
func (this GtFunc[T]) Apply(in T) bool {
return this.val.Gt[](in)
}
type Max struct {}
func (this Max) Of[T Ord[T]](l T, r T) T {
$\cdots$ l.Gt(r) $\cdots$
}
func main() { GtFunc[int]{5}.Apply(7) }
\end{lstfcgg}
\end{center}
\end{minipage}\hspace*{-2mm}
\begin{minipage}[t] {0.59\linewidth }
\vspace{-3mm}
\lstset{firstnumber=1}
\begin{lstfcgg}
type Ord interface{ Gt(that Any) Any }
type OrdDict struct {
Gt func(rec Any, in Any) Any ; //Gt method pointer
/*Simulated type*/ }
type GtFunc struct { val Any ; dict OrdDict }
func (this GtFunc) Apply(in Any) Any {
return this.dict.Gt(this.val /*Receiver*/, in) (*\label{code:dictexample:resolve1}*)
}
func (this Max) Of(dict OrdDict, l Any, r Any) Any {
$\cdots$ dict.Gt(l, r) $\cdots$ }(*\label{code:dictexample:resolve2}*)
func main() {
od := OrdDict{Gt: func(rec Any, in Any) Any { rec.(int).Gt(in)}}
GtFunc{5, od}.Apply(7) }
\end{lstfcgg}
\end{minipage}
\vspace*{-.5cm}
\caption{Dictionary-passing translation example extending
Figure~\ref{code:fg:example}.
\glsentryshort{fgg} source (Left),
\glsentryshort{fg} translation (Right)}
\vspace*{-.4cm}
\label{code:dict:passing:example}
\end{figure}
We are now confronted with the primary
challenge of concern to dictionary-passing
translations: how do we resolve generic
method calls without polymorphic type information?
A dictionary is, at its simplest,
a map from method names to their
specific implementation for some type.
A dictionary-passing translation, then,
is one which substitutes the specialisation
of type parameters with the passing of
dictionaries as supplementary value-arguments.
One may then resolve a method call on
a generic value by performing a dictionary lookup.
Presently, we consider the structure
and usage of dictionaries while delaying
our discussion of call-site dictionary
construction and type simulation until
\S~\ref{subsubsec:dynamicdict} and \S~\ref{subsubsec:typecollision}, \emph{resp}.
Consider Figure~\ref{code:dict:passing:example} (left) extending a fragment of
Figure~\ref{code:fg:example} with a \inlinelstfcgg{Max.Of} method.
For us to call \inlinelstfcgg{Gt} in \inlinelstfcgg{GtFunc[T].Apply}
or \inlinelstfcgg{Max.Of[T]}, we need to know the
concrete type of \inlinelstfcgg{T}. This information
is lost during erasure.
The translation (right) includes a
fresh struct \inlinelstfcgg{OrdDict} which is,
quite naturally, the dictionary for
\inlinelstfcgg{Ord} bounded type parameters.
Dictionaries contain a method pointer
field for each method in the original
interface, along with a \emph{type-rep} which
shall be discussed in \S~\ref{subsubsec:typecollision}.
\gls{fg} does not include method pointers;
instead, we must simulate them using
higher order functions with the
first argument being the receiver.
While this adds a small amount of
complexity to the final correctness
proofs, we see this as a worthwhile
compromise, as it allows us to focus
on the translation of generics alone,
rather than on generics \emph{and} on
a translation to some low level language.
By containing each method specified
by the \gls{fgg} bounding
interface, dictionaries have a fixed internal representation.
This reflects real-world dictionary-passing implementations
and allows entries to be accessed efficiently~\cite{driesen1996direct}.
\begin{wrapfigure}{r}{0.42\linewidth}
\vspace*{-.7cm}
\lstset{xleftmargin=5pt}
\begin{lstfcgg}
type Foo[$\alpha$ Any] interface {
do[$\beta$ Any](a $\beta$, b bool) $\alpha$
}
type Bar[$\alpha$ Any] struct {}
func (x Bar[$\alpha$]) do[$\beta$ Any](a $\beta$, b $\alpha$) int {$\cdots$}
func main() {
Bar[bool]{}.(Foo[int]); (*\label{code:assertion:source}*)
Bar[bool]{}.(Foo[bool]) (*\label{code:assertion:source2}*)
}
\end{lstfcgg}
\vspace*{-.5cm}
\caption{Type-rep example. \glsentryshort{fgg} source}
\label{fig:code:dict:type-rep:fgg}
\vspace*{-.3cm}
\end{wrapfigure}
Dictionaries are passed to methods via
two mechanisms, namely the method's receiver,
and as regular value-arguments.
Generic structures, \eg \inlinelstfcgg{GtFunc},
possess a dictionary for each type parameter.
When used as a receiver, these dictionaries can
be accessed using standard field destructuring.
Method dispatch then takes the form of a dictionary
lookup and method invocation as seen on lines~\ref{code:dictexample:resolve1}
and \ref{code:dictexample:resolve2} (right).
\subsubsection{\bf Type Collision}
\label{subsubsec:typecollision}
Here we consider the challenge of ensuring that
type assertion behaviour is preserved by our translation.
Erasing type parameters may
introduce new subtypes which did not
exist in the source program.
Consider the expression
\inlinelstfcgg{GtFunc[int]\{5\}.(Function[bool, bool])}
where \inlinelstfcgg{GtFunc} and \inlinelstfcgg{Function} are
defined in Figure~\ref{code:fgg:example}.
Upon evaluation, this expression
produces a runtime type assertion
error as \inlinelstfcgg{GtFunc[int]\{5\}} is
not a subtype of \inlinelstfcgg{Function[bool, bool]}.
The erased types as described in
\S~\ref{paragraph:structure}, however, form a subtype relation,
meaning the error will not
occur in the translated code.
This behaviour would be incorrect.
To ensure that type assertion errors
are correctly preserved we simulate the \gls{fgg} type
assertion system inside the
translated \gls{fg} code via type-reps~\cite{crary1998intensional}.
A simulated \gls{fgg} type implements \inlinelstfcgg{_type_metadata}
by specifying a method,
\inlinelstfcgg{tryCast}, which throws an error
if and only if the \gls{fgg} assertion would have failed.
\begin{figure}
\lstset{xleftmargin=10pt}
\begin{lstfcgg}
type _type_metadata interface { tryCast (in Any) Any }
type AnyDict struct {_type _type_metadata}
type Foo interface { do(dict$_0$ AnyDict, in Any) Any ; spec_do() spec_metadata$_4$ }
type Foo_meta struct { _type$_0$ _type_metadata }
func (this Foo_meta) tryCast(x Any) Any { (*\label{code:trycast}*) // Type formal, Parametrised arg, Literal arg, return type $\alpha$
if (x.(Foo).spec_do() $!=$ spec_metadata$_4${Any_meta{}, param_index$_0${}, Bool_meta{}, this._type$_0$ }) { panic }
return x }
type Bar struct {dict$_0$ AnyDict}
func (this Bar) spec_do() spec_metadata$_4$ { // Type formal, Parametrised arg, Arg type $\alpha$, return type literal
return spec_metadata$_4${Any_meta{}, param_index$_0${}, this.dict$_0$._type, Int_meta{}}}(*\label{code:specdo}*)
func main() {
Foo_meta{Int_meta{}}.tryCast(Bar{AnyDict{Bool_meta{}}}) (*\label{code:assertion:target}*)
Foo_meta{Bool_meta{}}.tryCast(Bar{AnyDict{Bool_meta{}}}) } (*\label{code:assertion:target2}*)
\end{lstfcgg}
\vspace*{-.5cm}
\caption{Type-rep example. \glsentryshort{fg} translation}
\label{fig:code:dict:type-rep:fg}
\vspace*{-.4cm}
\end{figure}
Consider the code in Figure~\ref{fig:code:dict:type-rep:fgg}.
The source \gls{fgg} code contains two assertions;
the one on line~\ref{code:assertion:source}
passes, while line~\ref{code:assertion:source2}
produces a type assertion error.
A struct implements an
interface when it correctly implements
each method specified by the interface.
This means that not only does the struct
define a method of the same name, but
also of precisely the same type.
Assertion to an interface, then, need
only ensure that each method is correctly implemented.
Assertion to a structure is a simple type equality check.
The translated interface, Figure~\ref{fig:code:dict:type-rep:fg},
now includes the meta method \inlinelstfcgg{spec_do},
returning simulated \gls{fgg} type information for a struct's
\inlinelstfcgg{do} implementation.
The \inlinelstfcgg{spec_metadata$_4${}} object
returned by \inlinelstfcgg{spec_do} on
line~\ref{code:specdo} of the target code is a four-element
tuple containing: type parameter bounds,
argument types, and the return type. This object
simulates the \gls{fgg} method type for
\inlinelstfcgg{do} on \inlinelstfcgg{Bar[$\tau$]} for some $\tau$, \ie
\inlinelstfcgg{do[$\beta\ $ Any](a $\ \beta$, b $\ \tau$) Int[]}.
The first entry \inlinelstfcgg{Any_meta\{\}} gives the simulated
type bound of the source method's type parameter $\beta$.
The next gives the type of argument \inlinelstfcgg{a}, namely $\beta$.
As there is no suitable concrete metadata type for $\beta$, we
use an index \inlinelstfcgg{param_index$_0${}} to indicate
that \inlinelstfcgg{a}'s type is the method's first type parameter.
The third, that of \inlinelstfcgg{b}, is not known at compile time,
but is rather given by the
type parameter of the receiver.
Finally, the return type is given by the constant \inlinelstfcgg{Int_meta\{\}}.
The type assertion on line~\ref{code:assertion:target}
uses the \inlinelstfcgg{Foo_meta}'s \inlinelstfcgg{tryCast} method defined on line~\ref{code:trycast}.
This method first checks that the erased types are compatible, \ie{} that \inlinelstfcgg{Bar}
implements all erased methods in \inlinelstfcgg{Foo}. The \inlinelstfcgg{spec_do} method is then
used to check the simulated method type matches the interface specification.
If any of these checks fails, then the assertion
fails and a {$\ensuremath{\mathsf{panic}}$} is thrown.
\subsubsection{\bf Call-Site Dictionary Creation}
\label{subsubsec:dynamicdict}
As discussed in \S~\ref{section:nomono},
the approach taken by Go~1.18\xspace
is fundamentally limited by its use of call-graph based
dictionary construction.
In contrast, we consider the challenge of
constructing dictionaries at the call-site
in a structurally typed language.
Our approach
overcomes the aforementioned limitation of
\cite{griesemer2020featherweight} and Go~1.18\xspace.
We note a few key facts.
A $\tau$-dictionary provides all
the methods specified by the
type bound $\tau$, and we may build a
dictionary for any specialising type
which is a subtype of $\tau$.
We can also use a type variable to
specialise some other type variable as
long as the bound of the latter is a
supertype of that of the former. In a translation
this \emph{dictionary-supertyping}
involves using a $\tau$-dictionary to
build a potentially different $\sigma$-dictionary.
In a nominally typed
language the explicit, and fixed,
hierarchy allows a dictionary-passing translation
to easily structure and construct dictionaries
according to the subtype hierarchy.
Dictionary-supertyping in nominally typed languages
is generally a
matter of extracting the appropriate sub-dictionary~\cite{bottu2019coherence}.
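For intuition, the following sketch (hypothetical names; a Go rendering of
the nominal setting) shows how a fixed hierarchy lets an \inlinelstfcgg{Ord}
dictionary embed, and then project out, its \inlinelstfcgg{Eq} sub-dictionary:
\begin{lstfcgg}
type EqDict struct { Equal func(rec Any, that Any) Any }
type OrdDict struct {
    Eq EqDict // the hierarchy is fixed, so the sub-dictionary can be nested up front
    Gt func(rec Any, that Any) Any
}
func toEq(d OrdDict) EqDict { return d.Eq } // supertyping = extracting the sub-dictionary
\end{lstfcgg}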
In a structurally typed language, however, there is not
a fixed subtype hierarchy. Recall that in order
to infer subtype relationships, we first need the specific
type instances. We
have two choices: either explore the entire
call graph to discover all type instantiations and
construct our dictionaries according to the call-graph,
or construct/supertype our dictionaries at the call-site where
specialisation would have happened.
The former approach was taken by Go~1.18\xspace;
beyond the significant static analysis
required, it also suffers from the
same finiteness limitation encountered
by monomorphisation approaches \cite{griesemer2020featherweight}.
\begin{figure}
\noindent
\begin{minipage}{0.33\linewidth}
\begin{lstfcgg}
type Eq[$\alpha$ Eq[$\alpha$]] interface {
Equal(that $\alpha$) bool
}
type Ord[$\alpha$ Ord[$\alpha$]] interface {
Gt(that $\alpha$) bool;
Equal(that $\alpha$) bool
}
func Foo[$\beta$ Ord[$\beta$]](val $\beta$) Any {
return Bar[$\beta$](val)
}
func Bar[$\beta$ Eq[$\beta$]](val $\beta$) Any {$\cdots$}
func main() { Foo[int](5) }
\end{lstfcgg}
\end{minipage}
\begin{minipage}{0.64\linewidth}
\begin{lstfcgg}
type EqDict struct { Equal func(rec Any, that Any) Any }
type OrdDict struct {
Equal func(rec Any, that Any) Any ;
Gt func(rec Any, that Any) Any
}
func Foo(dict OrdDict, val Any) Any {return Bar(EqDict{dict.Equal}, val)}
func Bar(dict EqDict, val Any) Any { $\cdots$ }
func main() {
old_dict := OrdDict{
Equal : func(rec Any, that Any) Any { return rec.(int).Equal(that) },
Gt : func(rec Any, that Any) Any { return rec.(int).Gt(that) } }
Foo(old_dict, 5) }
\end{lstfcgg}
\end{minipage}
\vspace*{-.3cm}
\caption{Call-site dictionary creation example.
\glsentryshort{fgg} source (Left).
\glsentryshort{fg} translation (Right)}
\label{fig:code:dict:dynamic}
\vspace*{-.4cm}
\end{figure}
We demonstrate our call-site approach in Figure~\ref{fig:code:dict:dynamic}.
This example consists
of two interfaces, \inlinelstfcgg{Eq} and
\inlinelstfcgg{Ord}, which form a
subtype relation along with a method \inlinelstfcgg{Foo} which
uses a type parameter bounded by \inlinelstfcgg{Ord} to
instantiate a type parameter bounded by \inlinelstfcgg{Eq}.
If, in the source program, there are two types $\sigma$ and $\tau$
where there exists an instantiation creating a subtype relation, then
the two erased types form a subtype relation.
This is precisely the result discussed in \S~\ref{paragraph:structure}.
When initially creating a dictionary, we
populate it with the required method pointers
for the known instantiating type.
If, however, we are creating a
$\tau$-dictionary for type parameter $\beta$ bounded by $\sigma$,
then the methods contained in the supertyping
$\tau$-dictionary (\inlinelstfcgg{Eq}) are a subset of
those in the $\sigma$-dictionary (\inlinelstfcgg{Ord}) for type parameter $\alpha$.
Dictionary-supertyping then consists of destructuring the
subtype's dictionary and -- along with the type-rep --
adding all required method pointers to a
new supertype-dictionary.
While conceptually simple,
our call-site approach directly addresses the
unique issues raised by structural typing systems and
allows us to
overcome the limitation discussed in \S~\ref{section:nomono}
that afflicts
both monomorphisation~\cite{griesemer2020featherweight} and Go~1.18\xspace.
\subsection{Dictionary-Passing Judgement}
\label{sec:dictexp}
This subsection is technical:
readers who are not interested in the formal translation rules
can safely skip this subsection.
We define the judgement $\dict[]{P}{\lex{P}}$ as the
dictionary-passing translation from $P$ in \gls{fgg} to
$\lex{P}$ in \gls{fg}.
The expression judgement $\dict{e}{\lex{e}}$ is parametrised by
variable and type variable environments
($\Gamma$ and $\Delta$ \emph{resp.})
as well as a dictionary map $\eta$ from type variable names to
dictionary variables.
We provide auxiliary functions in Figure~\ref{fig:dict:aux} and translation rules in
Figure~\ref{fig:dict:prog}.
\myparagraph{Name constants}
We introduce a set of maps from name constants in \gls{fgg} to unique
\gls{fg} names which are assumed
to never produce a collision,
\begin{enumerate*}
\item $\dictName{\iType{t}}$ --- from a type bound (interface) to the
dictionary struct name for that bound;
\item $\metadataName{t}$ --- from a type name to a simulated type name;
\item $\method{spec\_name}{m}$ --- from a method name to a method producing simulated specification; and
\item $\method{mName}{t,m}$ --- the method applicator (pointer) for method $m$ on type $t$.
\end{enumerate*}
\myparagraph{Auxiliary functions}
\input{figs/dict/aux}
Figure \ref{fig:dict:aux} provides a number of auxiliary functions used in
the dictionary-passing translation.
The overloaded $\method{arity}{}$ function
computes the number of type and value parameters required by each
method signature, including method signatures in an interface's specifications.
Function $\method{maxFormal}{D}$ computes the number of type parameters expected
by the largest type formal.
Function $\method{asParam}{\Phi}$ converts a type formal into dictionary arguments.
The function $\method{meth\_ptr}{t, mM}$ constructs the
simulated method pointer struct and implementation
-- called the \emph{abstractor/applicator pair} -- for method $m$ on type $t$.
To build a type simulation of type $\tau$ we call $\typemeta{\tau}$ where $\zeta$ is a
map from type variables to existing simulated types.
When simulating the type assertion to an interface in
\S~\ref{subsubsec:typecollision}, we used the
$\method{spec\_name}{m}$ method \inlinelstfcgg{spec_do} to produce
the instantiated simulated signature for method $m$.
The $\method{spec\_mdata}{mM}$ function takes an interface's method specification
and produces the specification for the $\method{spec\_name}{m}$ method.
Simulated method signatures are built using $\signatureMeta{M}$.
This function takes a map $\zeta$
from type variables to simulated types,
and extends $\zeta$ with the indexing structs for the method's type formal.
A set of simulated method signature ($\fnMeta{n}$) and type parameter index
($\texttt{param\_index}_i$) structs are created by the program translation
rule (\rulename{d-program}); $\method{arity}{\multi{D}}$ and $\method{maxFormal}{D}$
are used to ensure that all needed structs are constructed. $\fnMeta{n}$ is an $(n+1)$-tuple
used in interface assertion simulation; it describes a method signature of arity $n$,
giving the
type parameter bounds,
value argument types, and the method's return type.
To allow interface assertion simulation to reference type
variables, we use $\texttt{param\_index}_i\{\}$ to reference a method's $i^{\text{\tiny th}}$
type parameter.
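For the running example of \S~\ref{subsubsec:typecollision}, a plausible
\gls{fg} shape for the structs behind \inlinelstfcgg{spec_metadata$_4$} and
\inlinelstfcgg{param_index$_0$} is sketched below (field names and types are
our guesses, not the formal definitions):
\begin{lstfcgg}
// the 4-tuple simulating do's signature: one bound, two argument types, one return type
type spec_metadata$_4$ struct { bound$_0$, arg$_0$, arg$_1$, ret _type_metadata }
// stands for "the method's 0th type parameter"; compared by plain struct equality
type param_index$_0$ struct {}
\end{lstfcgg}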
Given a type environment $\Delta$ and a map $\eta$ from type variables to existing dictionaries,
we build a $\iType{\tau}$-dictionary for type~$\sigma$ using the
\makeDict{\sigma, \iType{\tau}} function. In the case that $\sigma$ is already
a type variable $\alpha$, then the map $\eta$ must contain a dictionary for $\alpha$.
When $\alpha$ is bounded by $\iType{\tau}$ in $\Delta$, we are done,
whereas if the bound of $\alpha$ is a subtype of $\iType{\tau}$
but not
equal to it, then we need to copy the
method pointers required by the new (and smaller) $\iType{\tau}$-dictionary.
A new dictionary is built for a constant type $\sigma$ by providing a method pointer (abstractor) for each
method specified by $\iType{\tau}$ and the simulated type of $\sigma$.
\input{figs/dict/trans}
\myparagraph{Program translation} Rule \rulename{d-program}
introduces new declarations required for method pointers
and type simulations as described in \S~\ref{sec:dictexample}, and
the \texttt{Any}\ interface to provide a uniform, erased, type representation.
Each method applicator must implement an $n$-arity function interface
$\nAryFunction{n}$, that accepts the receiver and the $n$ arguments for the
desired method call. The arity of a method includes both the regular value
arguments as well as the dictionary arguments.
A simulated type implements the $\texttt{\_type\_metadata}$ interface by providing
an assertion simulation method (\texttt{tryCast}), which panics if
the assertion is invalid.
The $\fnMeta{n}$ and
$\texttt{param\_index}_i$ structs are created as required by
the $\method{arity}{\multi{D}}$ and $\method{maxFormal}{D}$
functions, respectively.
Each declaration is translated to multiple declarations; we use $\mathcal{D}$
to indicate this.
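For instance, for a program whose largest method arity is two, the new
top-level declarations introduced by \rulename{d-program} take roughly the
following shape (a sketch; we write \inlinelstfcgg{Function$_2$} for
$\nAryFunction{2}$):
\begin{lstfcgg}
type Any interface {} // uniform erased type
type Function$_2$ interface { Apply(rec Any, x$_1$ Any, x$_2$ Any) Any } // 2-arity applicators
type _type_metadata interface { tryCast(in Any) Any } // simulated types
\end{lstfcgg}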
\myparagraph{Interface and dictionary construction}
The translation of
interfaces produces a number
of \gls{fg} declarations (\rulename{d-interface}).
They are (1) an \gls{fg} interface, and
(2) a dictionary for that interface.
The interface $\iType{t}\typeFormalType$ becomes the erased type
$\iType{t}$ (1).
For each method specification $S$ defined by the source interface, we produce
two specifications in the target; the first is defined by \rulename{d-spec}
and replaces types formal with appropriate dictionaries while erasing all other
types, and the second defines a method producing the simulated
\gls{fgg} method specification.
Since \rulename{d-meth} produces such a simulated specification method for
each method, it is guaranteed that any type which
implements the former will implement the latter.
The dictionary (2) for an interface $\iType{t}$ is given by a new struct
$\dictName{\iType{t}}$, which contains a method pointer (abstractor) for
each specified method and the simulated type ($\texttt{\_type}$)
for the type parameter's specialising type.
Type simulation is also defined here.
For type $\iType{t}\typeFormalType$, the simulation struct
($\metadataName{\iType{t}}$) contains a field for each type parameter in $\Phi$.
The \texttt{tryCast}\ method
checks that each specified method is implemented correctly by the target of the
assertion (See \S~\ref{subsubsec:typecollision}).
For clarity of presentation, we assume a number of extra language features that can
be easily implemented in \gls{fg}, including: if-statements, struct inequality,
explicit panic, and sequencing \cite{griesemer2020featherweight}.
\myparagraph{Struct declaration}
To translate $\sType{t}\typeFormalType$,
we erase all field types
and add a new dictionary field for each
type parameter in $\Phi$. The simulated type $\metadataName{\sType{t}}$
is constructed with a
variable for each type parameter, and $\texttt{tryCast}$ checks that the
target value is exactly the assertion type.
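As an illustration, a generic pair struct would translate roughly as follows
(a sketch reusing the naming conventions of Figure~\ref{fig:code:dict:type-rep:fg}):
\begin{lstfcgg}
// source: type Pair[$\alpha$ Any, $\beta$ Any] struct { left $\alpha$; right $\beta$ }
type Pair struct { left Any; right Any; dict$_0$ AnyDict; dict$_1$ AnyDict }
type Pair_meta struct { _type$_0$ _type_metadata; _type$_1$ _type_metadata }
func (this Pair_meta) tryCast(x Any) Any { // exact equality, including type arguments
    if (x.(Pair).dict$_0$._type $!=$ this._type$_0$) { panic }
    if (x.(Pair).dict$_1$._type $!=$ this._type$_1$) { panic }
    return x }
\end{lstfcgg}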
\myparagraph{Method declaration}
Judgement on method $m\typeFormalMethod(\multi{x~\tau})~\tau$ (\rulename{d-meth})
produces a primary method, a method returning the simulated method type,
and an abstractor/applicator pair.
The primary method and simulation method's types match those from \rulename{d-interface}.
The body of the implementing method is translated in the
$\Delta;\eta;\Gamma$ environments,
where $\Delta$ and $\Gamma$ are built
according to the typing system.
There are two locations for type variables -- and thus dictionaries -- to be
passed into a method, namely in the receiver or as an argument;
consequently, $\eta$ may map into either a dictionary argument ($\texttt{dict}_i$) or
a receiver's dictionary field ($\texttt{this}.\texttt{dict}_i$).
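For example, a method with one receiver type parameter and one method-level
type parameter is translated so that its body can reach both dictionaries
(a sketch with hypothetical names):
\begin{lstfcgg}
// source: func (this Holder[$\alpha$]) convert[$\beta$ Any](x $\beta$) Any { $\cdots$ }
func (this Holder) convert(dict$_0$ AnyDict, x Any) Any {
    // here $\eta$ maps $\alpha$ to this.dict$_0$ (receiver field) and $\beta$ to dict$_0$ (argument)
    return $\cdots$ }
\end{lstfcgg}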
\myparagraph{Expressions}
The struct literal ($\sType{t}\typeActualReceive\{\multi{e}\}$) is translated by
first translating each field assignment and then
building an appropriate dictionary for each type in $\phi$ using
\textit{makeDict} (\rulename{d-value}).
Method calls are translated in one of two ways. The first (\rulename{d-call})
is the immediate structural translation of sub terms and creation of appropriate dictionaries; this
translation is only possible if the type of the receiver is not a type variable, although
it does not need to be a closed type.
The second (\rulename{d-dictcall}) translates arguments and creates dictionaries in the
same way as the former, but needs to resolve the
method implementation using a dictionary lookup.
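To illustrate the difference using the dictionaries of
Figure~\ref{fig:code:dict:dynamic} (a sketch with function-typed dictionary
fields, as in our implementation):
\begin{lstfcgg}
// d-call: the receiver's type is not a type variable, so the call is kept direct
x.Equal(y)       // source receiver x has a concrete type, e.g. int
// d-dictcall: the receiver's type is a type variable $\alpha$, so the implementation
// is looked up in $\alpha$'s dictionary and applied to the (erased) receiver
dict.Equal(x, y) // source receiver x : $\alpha$, with $\eta(\alpha)$ = dict
\end{lstfcgg}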
\section{Implementation and Evaluation}
\label{sec:exp}
Besides the \gls{dict},
we also implement an \gls{erasure}.
We compare the two implementations with three existing translators: \gls{mono}~\cite{griesemer2020featherweight}, \gls{gotogo}
(the initial prototype based on a source-to-source monomorphisation),
and Go~1.18\xspace~\cite{go118} (the official generic type
implementation, released on March 15th,
2022).
This section first discusses the two implementations,
then describes the evaluation methodology,
before finally presenting the evaluation results.
\subsection{Implementation of Dictionary-Passing Translation}
\label{subsec:imple}
\input{figs/eval/fig-venn.tex}
We implement the dictionary-passing translator
(\glsentryshort{dict})
and the erasure-based translator (\glsentryshort{erasure}) based on
the \gls{fgg} artifact~\cite{fgg-artifact} in Go 1.16.
We have fully tested both implementations
using purpose-designed unit tests.
Figure~\ref{fig:venn} shows the code coverage difference across the five translators.
\gls{fgg} is the calculus presented in
\cite{griesemer2020featherweight};
\glsentryshort{dict} does not
cover receiver type formal subtyping;
\glsentryshort{erasure} does not cover \gls{fgg} type assertions;
\glsentryshort{mono} does not cover a class
of recursive ({\it nomono}) programs
\cite{griesemer2020featherweight};
\glsentryshort{gotogo} is a
source-to-source monomorphisation translator implemented by the Go Team,
and does not cover \textit{F}-bounded polymorphism, method parametrisation,
receiver type formal subtyping,
or recursive ({\it nomono}) programs; and
Go~1.18\xspace is the
official release with generics and has the same limitations
as \glsentryshort{gotogo}. Both Go~1.18\xspace and \gls{gotogo}
target the full Go language, including features
not considered by \gls{fgg}.
We implement \glsentryshort{dict} following the rules in
\S~\ref{section:dictionary}.
Rather than strictly follow the formalisations of \gls{fg} and \gls{dict}
translation, we leverage the first-class functions support in Go and use
function types~\cite{go-function-types} as dictionary fields, similar
to using function pointers in C/C++.
We also ignore unnecessary type assertions in \rulename{d-field}
and \rulename{d-call} when the translation is not on an interface.
We memoise expression typing results to accelerate compilation.
We exclude type simulation (\S~\ref{subsubsec:typecollision})
of non-generic types (\ie{} the size of
the type formal is zero), and directly use type assertion for \rulename{d-assert}
for better runtime performance. We also detect whether type metadata are used, and remove them when possible. Users can additionally disable all
type metadata copies when the input program contains no type assertions.
In total, \glsentryshort{dict} contains 1160 lines of Go code.
\glsentryshort{erasure} is an alternative homogeneous translation implementation from \gls{fgg}.
This implementation erases generic type information and uses the underlying interface type, similar to the erasure implementations for Java~\cite{odersky2000two, Igarashi99FJ}.
When calling a method,
the erased object is directly used as the receiver (If
$\wellTyped[\Delta, \alpha{:}\iType{t}\typeActualReceive; \Gamma]{e}{\alpha}$ then $\dict[\Delta, \alpha{:}\iType{t}\typeActualReceive; \Gamma]{
e.m\typeActualMethod(\multi{e})
}{
\lex{e}.(\iType{t}).m(\multi{\lex{e}})
}$), in contrast to \glsentryshort{dict}'s dictionary lookup (\rulename{d-dictcall}).
For example, \inlinelstfcgg{func f[a Foo](x a) \{x.Bar()\}} translates to \inlinelstfcgg{func f(x Any) \{x.(Foo).Bar()\}}, while \glsentryshort{dict} calls the corresponding function in a dictionary field.
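For comparison, a sketch of \glsentryshort{dict}'s output for the same
source, assuming a dictionary struct \inlinelstfcgg{FooDict} with one
function-typed field:
\begin{lstfcgg}
type FooDict struct { Bar func(rec Any) Any }
func f(dict FooDict, x Any) { dict.Bar(x) } // the method comes from the dictionary, not dynamic dispatch
\end{lstfcgg}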
As in \S~\ref{paragraph:structure}, naively erasing type parameters breaks type assertion preservation (Definition~\ref{def:type:preservation}).
An example of \glsentryshort{erasure} is provided in
\ifnotsplit{Appendix~\ref{sec:erasure-example}}{the full version of this paper~\cite{fullversion}}.
Compared with \glsentryshort{dict}, \glsentryshort{erasure}
provides a concise translation of generics that is fully based
on Go's existing dynamic dispatch mechanism.
When calling a method of a generic object as though
it were an interface, the Go runtime looks up
the actual method to call from a list of methods~\cite{go-interface-slow-1,go-interface-slow-2},
while \glsentryshort{dict} finds the actual method from the dictionary.
The implementation of \glsentryshort{erasure} contains 765 lines of Go
code.
\subsection{Evaluation Methodology}
\label{subsec:evaluation}
\myparagraph{Benchmarks}
We build two benchmark
suites to conduct head-to-head comparisons for the
five translators.
\textit{1) Micro Benchmarks:}
we design five micro benchmark sets.
Each has a configuration parameter
to demonstrate how the translated code scales with a particular
aspect of \gls{fgg}/Go programs.
\textit{2) Real-World Benchmarks:}
we reimplement all benchmarks
in previous papers about generics in
Java and Scala~\cite{odersky2000two, ureche2013miniboxing}.
Go~1.18\xspace officially released generics on March 15th, 2022, so
it is not yet feasible to find uses of generics in real Go programs.
The second benchmark suite is a reasonable substitute to reveal how the five
translators behave in reality.
\input{figs/eval/fig-toy2-sep}
\myparagraph{Micro Benchmarks}
The five sets of micro benchmarks, \benchname{Program}~\mycircledtext{a}-\mycircledtext{e}, are all derived
from a base program. Figure~\ref{fig:benchmark-prog-pdf} shows the base program and how the five
benchmark sets are derived from it.
In the base program,
lines 29--32 enumerate all possible combinations of types actual for \gocode{f$_1$()}.
Function \gocode{f$_1$()} takes two parameters and uses them
to call \gocode{f$_2$()} on line~20, which in turn calls \gocode{CallBase()}
on line~14.
Function \gocode{CallBase()} calls \gocode{Ops()} on line 5, which further calls \gocode{Op()}
twice to represent two non-generic operations. All methods of interface \gocode{Base}
(\gocode{g$_1$()} and \gocode{g$_2$()}) are implemented by struct \gocode{Derived},
and called on line 17, from receiver variable \gocode{x}
with generic type \gocode{base}.
Function \gocode{main()} calls \gocode{DoIt()} 10,000 times (line~36)
to provide stable performance results.
The set of \benchname{Program}~\mycircledtext{a} extends the number of methods of \gocode{Base} (lines 39--42)
and \inlinelstfcgg{Derived} (lines 45--47) in the base program, from 2 to $n$.
\benchname{Program}~\mycircledtext{b} repeats the non-generic operation $c$ times on line 56, instead of two.
In \benchname{Program}~\mycircledtext{c}, we increase the number of type parameters from 2 to $m$ (lines 59, 60, 63, and 65),
and enumerate all $2^m$ type actual combinations (lines 67--70).
\benchname{Program}~\mycircledtext{d} increases the length of the call chain between \gocode{DoIt()}
and \inlinelstfcgg{CallBase()} from 2 to $p$ (lines 49--55).
\benchname{Program}~\mycircledtext{e} is particularly designed to expose the exponential complexity
of monomorphisation (lines 72--88).
Its configuration parameter $m$ controls both
the type parameter number of \gocode{Base} (and \gocode{Derived}) and
the number of functions called in between \gocode{DoIt()}
and \gocode{CallBase()} along the call chain.
For the $m$ functions in between \gocode{DoIt()}
and \gocode{CallBase()}, we further configure each caller to call its callee twice,
and each callee to have one more parameter than its caller (e.g., function body of \inlinelstfcgg{f$_1$} and \inlinelstfcgg{f$_2$} on lines 77--84).
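Concretely, two adjacent functions in the chain have roughly the following
shape (a sketch modelled on the description above; the exact code is on
lines 77--84 of Figure~\ref{fig:benchmark-prog-pdf}):
\begin{lstfcgg}
func f$_1$[$\alpha_1$ Any](x$_1$ $\alpha_1$) Any {
    f$_2$[$\alpha_1$, Red](x$_1$, Red{})          // first call: parameters plus Red
    return f$_2$[$\alpha_1$, Blue](x$_1$, Blue{}) // second call: parameters plus Blue
}
func f$_2$[$\alpha_1$ Any, $\alpha_2$ Any](x$_1$ $\alpha_1$, x$_2$ $\alpha_2$) Any { $\cdots$ }
\end{lstfcgg}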
\myparagraph{Real-World Benchmarks}
We reimplement the Java and Scala programs using
\glsentryshort{gotogo}, Go~1.18\xspace, and \glsentryshort{fgg} for our evaluation.
Since \glsentryshort{fgg} does not support all syntax in the programs,
we first use \glsentryshort{fgg}
to reimplement as many functionalities as possible. Then,
we translate the \glsentryshort{fgg} code to Go
and manually insert the missed non-generic functionalities.
On the other hand,
\glsentryshort{gotogo} and Go~1.18\xspace support all required syntax, so
we use them to reimplement each whole program.
We manually test the reimplementations with designed testing inputs
and compare their outputs with the original versions in Java or Scala.
Our tests achieve 100\% code coverage.
The benchmarks' functionalities are explained as follows.
\benchname{List} \cite{ureche2013miniboxing}
is an implementation of a linked list. It supports insert and search operations
on the linked list.
\benchname{ResizableArray} \cite{ureche2013miniboxing} implements a resizable array.
It inserts elements into the array,
reverses the array, and searches elements in the array.
\benchname{ListReverse} \cite{odersky2000two} constructs a linked list and reverses it.
It contains two reversing implementations.
\benchname{VectorReverse} \cite{odersky2000two} reverses an array;
like \benchname{ListReverse}, it implements the reversing
functionality in two different ways.
\benchname{Cell} \cite{odersky2000two}
implements a generic container.
\benchname{Hashtable} \cite{odersky2000two} accesses elements in a hash table.
\myparagraph{Metrics}
We consider \emph{code size}, \emph{execution time}, and \emph{compilation time} as our metrics.
For code size, we compile each translated benchmark program into a binary executable and
disassemble the executable using objdump~\cite{objdump120:online}.
Next, we count the number of assembly instructions compiled from the benchmark program as its code size, while excluding the assembly instructions
of linked libraries.
To measure execution time,
we compile each translated \gls{fg} program using
the Go compiler and compute the average execution time over {\em ten} runs.
We consider the time spent on the source-to-source translation and the compilation
from a \gls{fg} program to an executable as the compilation time for the four source-to-source translators. For Go~1.18\xspace, we measure its compilation time directly.
We compile each benchmark program with each translator {\em ten} times
and report the
average compilation time.
\myparagraph{Platform \& Configurations}
All our experiments are conducted on a desktop machine,
with AMD Ryzen 5 2600 CPU, 32GB RAM, and Ubuntu-18.04.
To focus more on the impact of different translations for generics,
we disable garbage collection and compiler optimisations for all translators.
No benchmark requires type simulation.
Thus, we disable this option in \gls{dict},
allowing us to better understand the impact of method translation and dispatch.
\subsection{Evaluation Results}
\label{subsec:benchmark}
}%\vspace{-1mm}
\subsubsection{Micro benchmarks}
\input{figs/eval/fig-prog-a-result.tex}
\myparagraph{Program \mycircledtext{a}}
We change $n$ from $2$ to $40$ to
analyse how the method number of a generic interface impacts
the five translators.
As shown in Figure~\ref{fig:ssa-n}, the code size (number of assembly instructions)
of translated \gls{fg} programs has a linear relationship with $n$ for all five translators.
However, different translators have different coefficients.
The coefficients of \glsentryshort{mono} ($328.8$), \glsentryshort{gotogo} ($300.8$), and Go~1.18\xspace ($297.8$) are much larger
than the coefficients of \glsentryshort{dict} ($117.9$) and \glsentryshort{erasure} ($103.8$).
Figure~\ref{fig:time-n} shows
the execution time of translated programs.
The programs translated by
\glsentryshort{dict} and Go~1.18\xspace
have a similar performance.
They are slower
than the corresponding programs translated by \glsentryshort{mono} and \glsentryshort{gotogo}.
This is largely due to the usage of dictionaries.
The programs generated by \glsentryshort{erasure} have the worst performance,
since the structural typing conducted by \glsentryshort{erasure} when it translates generic method calls to
polymorphic method calls is very slow~\cite{go-interface-slow-1,go-interface-slow-2}.
Figure~\ref{fig:compile-time-n} shows the compilation
time.
\glsentryshort{mono} is significantly slower than the other four translators,
and its compilation time is not even linear in $n$.
The compilation times of the other four translators are similar to each other.
\myparagraph{Programs \mycircledtext{b} and \mycircledtext{d}}
The number of non-generic operations and the length of the call chain impact the three metrics in much the same way as the number of methods of
generic interface \gocode{Base} does in \mycircledtext{a}.
In particular, the code size, execution time,
and compilation time are all in a linear relationship with
the two configuration parameters, except for the compilation time of \glsentryshort{mono}.
Comparing \mycircledtext{b} with \mycircledtext{a},
one important difference to note is that
for \mycircledtext{b},
the programs translated by \glsentryshort{dict} have an execution time similar
to that of the corresponding programs translated by \glsentryshort{erasure},
and larger than that of the programs translated by Go~1.18\xspace.
However, in Figure~\ref{fig:time-n} for \mycircledtext{a},
the line of \glsentryshort{dict} is almost identical to the line
of Go~1.18\xspace, indicating that their execution times are similar,
and the line of \glsentryshort{dict} is lower
than the line of \glsentryshort{erasure}.
The reason is that when \glsentryshort{dict} translates \glsentryshort{fgg} to \glsentryshort{fg},
it also synthesises type assertions for the non-generic
operations in \glsentryshort{fgg} (line 56 in Figure~\ref{fig:benchmark-prog-pdf}).
The type assertions slow down the translated \glsentryshort{fg} programs.
\myparagraph{Program \mycircledtext{c}}
The code size, execution time, and compilation time all scale
exponentially with $m$ for the five translators.
The underlying reason is that function \gocode{DoIt()}
calls \gocode{f$_1$()} $2^m$ times
in each input \glsentryshort{fgg} program.
After normalising the three metrics
with the number of characters in the \glsentryshort{fgg} programs,
we find that the three metrics are in a linear relationship with $m$.
Among the five translators, \glsentryshort{erasure}'s translated programs
have the longest execution time. \glsentryshort{dict} and \glsentryshort{erasure}
spend a similar compilation time, which is much shorter than \glsentryshort{mono}, \glsentryshort{gotogo}, and Go~1.18\xspace.
\glsentryshort{dict}'s translated programs are similar in size to
\glsentryshort{erasure}'s translated programs, but they are smaller
compared with the programs translated by \glsentryshort{mono}, \glsentryshort{gotogo}, and Go~1.18\xspace.
\input{figs/eval/fig-prog-b-and-real-result.tex}
\myparagraph{Program \mycircledtext{e}}
As shown in Figures~\ref{fig:ssa-m}~and~\ref{fig:compile-m},
both the code size of the translated programs
and the compilation time scale exponentially with $m$
for \glsentryshort{mono},
\glsentryshort{gotogo}, and Go~1.18\xspace.
The reason is that \gocode{f$_m$()} essentially calls \gocode{CallBase()} $2^m$ times with $2^m$
distinct parameter combinations, because for $i \in [2, m)$, \gocode{f$_i$()} calls \gocode{f$_{i+1}$()}
twice, with its input parameters plus \gocode{Red} for the first time and its parameters
plus \gocode{Blue} for the second time, leading the three translators to copy
\gocode{CallBase()} $2^m$ times.
However, neither \glsentryshort{dict} nor \glsentryshort{erasure}
makes any copy of \gocode{CallBase()},
and the code size of their translated programs is in a polynomial relationship with $m$
(e.g., for \glsentryshort{dict}'s translated programs, $\textit{size} = 12.8m^2 + 34.5m + 381$, $p<0.001$).
Contrary to intuition, as shown in Figure~\ref{fig:time-m},
the programs translated by \glsentryshort{mono}
have a worse execution performance compared with the corresponding programs translated by \glsentryshort{dict},
when $m$ is larger than 7.
The reason is that when $m$ is large, a program synthesised by \glsentryshort{mono}
has a large code size, and thus many cache misses occur during its execution.
For example, when $m$ is 9, the size of the executable file translated by \glsentryshort{mono} is 6.3MB,
and the executable triggers 6,058,156 cache misses in one run,
while the program translated by \glsentryshort{dict} only
causes 93,695 cache misses.
\myparagraph{Type simulation}
As we discussed earlier, we disable the metadata copy of type simulation.
If we enable the copy, then the translated programs
become slower (e.g., 10\% slower for \mycircledtext{a} when $n$ is $2$); the slowdown becomes negligible when $n$ reaches $40$.%
\subsubsection{Real-world benchmarks}
The evaluation results of real-world benchmarks are shown in Table~\ref{tab:real-benchmark-results}.
Overall, the translated programs of \glsentryshort{dict} and \glsentryshort{erasure}
have a smaller code size, but a longer execution time, compared with the corresponding programs translated by
\glsentryshort{gotogo}, \glsentryshort{mono}, and Go~1.18\xspace, which is consistent
with the results on the micro benchmarks.
However, the compilation time does not change significantly across different translators,
because all real-world benchmarks are small and do not have many usages of generics.
\input{figs/eval/exp-result-table.tex}
}%\vspace{-2mm}
\subsection{Discussion and Limitations}
\label{section:discussion}
Our experimental results largely reflect the common intuition that
monomorphisation translators (\gls{mono}, \gls{gotogo}, and Go~1.18\xspace) generate programs
with a better runtime performance,
while non-specialising translators (\gls{dict} and \gls{erasure}) synthesise programs in a smaller code size.
However, our evaluation also pinpoints cases where monomorphisation generates
extremely large programs, which
trigger excessive cache misses during execution and thus have very poor runtime performance.
On the other hand,
our experimental results motivate the introduction and use of Go generics:
without generics,
Go programmers have to implement polymorphism
using interfaces, which is exactly what the programs translated by \gls{erasure} do,
and our experimental results show that those programs are slow.
In practice, our dictionary-passing translator (\gls{dict}) consistently
generates programs with a smaller code size and takes less (or comparable) compilation time
than all existing translators (including Go~1.18\xspace,
the official generic type implementation).
Thus, it provides an alternative for real-world users of Go generics to
strike their desired trade-off.
Moreover, our implementation and evaluation experience show
that type simulation is an important component of \gls{dict},
and that type metadata incurs extra runtime overhead.
Thus, the corresponding data structures and algorithms need to be carefully designed
to produce better translated programs.
For instance, link-time optimisation can be applied to remove unused type metadata.
\myparagraph{Possible improvements for Go~1.18\xspace}
First, Go~1.18\xspace is very conservative in its support for GC shapes --
only considering pointers to have the same GC shape.
In our experiments, we do not observe the reuse of method implementations,
or synthesis and use of dictionaries.
Thus, to make full use of dictionaries and GC shape stenciling~\cite{go118},
it is necessary for the Go team to improve the current implementation and support
more GC shapes.
Second, the Go team can
consider dictionary-passing-based homogeneous compilation, as proposed
in this paper, since it supports polymorphic recursion, provides a faster compilation speed,
generates programs with a smaller code size, and enables separate compilation.
\myparagraph{Limitations}
Since the official generic type implementation was only released on
March 15th, 2022,
there does not yet exist generic Go code in
large, production Go software (e.g.,~Docker, Kubernetes, etcd).
We build the two benchmark suites to explore the translators'
asymptotic behaviours
and inspect how they perform on representative generic programs in other languages,
which is our best effort in conducting the evaluation.
We formalise \gls{dict} as a source-to-source translator
to clarify design choices for future implementations and
aid our proof of correctness (Theorem~\ref{thm:main:correctness}).
However, this choice limits the performance of our implementation, and the evaluation results
may not reflect the true capability of dictionary-passing
translation for two reasons:
first, we erase all types to \gocode{Any} to ensure type preservation,
which is slow at runtime;
and second, Go does not allow the creation of global constant dictionaries in source code,
but those dictionaries can potentially be created by the Go compiler
and leveraged by translated programs for a better runtime performance.
\section{\glsentrylong{fg}}
\label{section:fg}
\label{sec:fg}
We briefly summarise the \glsfirst{fg} language~\cite[\S~3]{griesemer2020featherweight},
highlighting the key points
related to the dictionary translation.
\subsection{\glsentrylong{fg} by Examples}
\label{sec:fg:example}
\input{figs/fg/fg-code-function}
\gls{fg} is a core subset
of the (non-generic) Go 1.16 language containing \emph{structures}, \emph{interfaces},
\emph{methods}, and \emph{type assertions}.
In \gls{fg}, there are two kinds of named types:
\emph{interfaces} (\inlinelstfcgg{interface}), which specify a collection
of methods that any implementing type must possess, and
\emph{structures} (\inlinelstfcgg{struct}), which are
data objects containing a fixed
collection of typed fields.
\emph{Methods} are functions that apply to a
specific structure, called the method's \emph{receiver}.
Finally, \emph{type assertions} ask whether a structure can be used
as a specific type. If it cannot, then \gls{fg} will produce a
\emph{type assertion error}.
In contrast to nominally typed languages, Go uses
\emph{structural subtyping}.
As we shall see in \S~\ref{section:dictionary},
it is this
distinctive feature that makes our dictionary-passing translation
challenging and non-trivial.
In a nominally typed language, such as Java, one type implements (subtypes)
another only when it explicitly declares that it does.
In Go, we do not declare that one type implements another.
Rather, one type implements another precisely when it implements
(at least) all of the prescribed methods.
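For example, in the following snippet, \inlinelstfcgg{Cat} implements
\inlinelstfcgg{Greeter} with no declaration to that effect:
\begin{lstfcgg}
type Greeter interface { Greet() Any }
type Cat struct {}
func (c Cat) Greet() Any { return c } // Cat <: Greeter purely because the signatures match
\end{lstfcgg}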
Consider the example Go code in
Figure~\ref{code:fg:example}, which
simulates higher order functions, lists, and mapping.
For simplicity of presentation, we assume
that there are primitive \inlinelstfcgg{int} and \inlinelstfcgg{bool}
types along with a $<$ operation.
The \inlinelstfcgg{Any} interface does not specify any methods; as such,
all other types are its subtypes, meaning that any object may be used
when an \inlinelstfcgg{Any} is expected, but also that we cannot
apply any methods to an \inlinelstfcgg{Any} object without first
asserting it to some more specific type -- an action which may fail at runtime.
The \inlinelstfcgg{Function} interface specifies a single method,
which is given by the \emph{method signature}
\inlinelstfcgg{Apply(x Any) Any}.
Any structure implementing
an \inlinelstfcgg{Apply} method
that takes an argument of type \inlinelstfcgg{Any}
and returns a value, also of type \inlinelstfcgg{Any},
is said to
implement the \inlinelstfcgg{Function} interface.
Our example code simulates the \emph{greater than} function
as a structure (\inlinelstfcgg{GtFunc}) containing a single \inlinelstfcgg{Ord}
field. Its \inlinelstfcgg{Apply} method then calls the \inlinelstfcgg{Gt}
method provided by the struct's field.
The \inlinelstfcgg{Ord} interface, however, specifies that \inlinelstfcgg{Gt}
should accept a single argument of type \inlinelstfcgg{Ord}.
Before the \inlinelstfcgg{Apply} method of \inlinelstfcgg{GtFunc} can call
\inlinelstfcgg{Gt} it must, then, assert its argument to
type \inlinelstfcgg{Ord}.
If the argument does not implement \inlinelstfcgg{Ord}, then a \emph{type assertion error}
occurs.
We assume that only one implementation of \inlinelstfcgg{Ord} exists, that being
\inlinelstfcgg{int}, which itself uses a risky type assertion.
The example also includes a \inlinelstfcgg{List} interface specifying
a \inlinelstfcgg{Map} method. We provide a cons list implementation
of \inlinelstfcgg{List}.
In \gls{fg}, there is a single top-level \inlinelstfcgg{main} function
that acts as the program's entry point.
Our program initially builds a simple three
value \inlinelstfcgg{int} list on line~\ref{fg:code:function:build},
and then uses the simulated greater than function (\inlinelstfcgg{GtFunc}) to
map the list to a \inlinelstfcgg{bool} list.
When, however, we
attempt to map this \inlinelstfcgg{bool} list using the same function, we
encounter a runtime type assertion error on line~\ref{fg:code:function:topanic}.
While we could catch this error at compile time by
increasing the specificity of the \inlinelstfcgg{Apply}, \inlinelstfcgg{Gt}, and
\inlinelstfcgg{Map} functions using \inlinelstfcgg{int} and
\inlinelstfcgg{bool} instead of \inlinelstfcgg{Any},
this would severely limit
code reusability.
}%\vspace{-2mm}
\subsection{\glsentrylong{fg} Syntax and Semantics}
\label{sec:fg:syntax}
\input{figs/fg/fg-syntax}
Figure~\ref{fig:fg:syntax} presents the syntax of \gls{fg} from
\cite{griesemer2020featherweight}.
We use the $\multi{x}$ notation for a sequence of $x$, namely $x_0, x_1,
\dots, x_n$.
A program ($P$) is given by a sequence of declarations ($\multi{D}$)
along with a {\bf main} function which acts as the top-level expression ($e$);
we shorten this to $P = \program{e}$.
\gls{fg} is statically typed:
all \gls{fg} typing rules follow the Go 1.16 specification.
If, in the variable-type
environment $\Gamma$,
an expression $e$ is of type $t$,
then it satisfies the judgement $\wellTyped[\Gamma]{e}{t}$.
We assume that all programs $P$ are \emph{well-formed}, written
$P\operatorname{\var{ok}}$.
Since the rules/notations are identical to those
in \cite{griesemer2020featherweight},
we omit them here, but provide
definitions and details in
\ifnotsplit{Appendix~\ref{app:fg}}{the full version of this paper~\cite{fullversion}}.
\label{section:fg:reduction}
Figure~\ref{fig:fg:semantics} presents the \gls{fg} semantics with values and
evaluation contexts.
\textbf{\emph{Evaluation context}} $E$ defines the left-to-right call-by-value semantics
for expressions.
\textbf{\emph{Reductions}} are defined by the field selection rule
\rulename{r-fields}, type assertion rule \rulename{r-assert},
and the method
invocation \rulename{r-call}, with \rulename{r-context} for the context
evaluation. We use $\longrightarrow^\ast$ to denote a multi-step reduction.
\gls{fg} satisfies type preservation and progress properties
(see \cite[Theorems 3.3 and 3.4]{griesemer2020featherweight}).
\input{figs/fg/fg-semantics}
\section{\glsentrylong{fgg} and the limitations of monomorphisation and
Go~1.18\xspace}
\label{section:fgg}
As with \S~\ref{section:fg}, we briefly summarise the
\glsentryfirst{fgg} language~
\cite[\S~4]{griesemer2020featherweight}.
This section concludes with
a discussion of limitations in existing generic Go translations and Go~1.18\xspace.
}%\vspace{-2mm}
\subsection{\glsentrylong{fgg} by Example}
\input{figs/fgg/fg-code-function}
Figure~\ref{code:fgg:example} extends
Figure~\ref{code:fg:example} with generics.
As we saw in \S~\ref{sec:fg:example}, there was
a critical flaw in the original, non-generic, \gls{fg}
code. One part of the logic was polymorphic
(\ie \inlinelstfcgg{Map} is a natural transformation) while the
other was not (\ie \inlinelstfcgg{Gt}). We concluded that
section by observing
two options: either we cater to the strict type
discipline demanded by \inlinelstfcgg{Gt}, reducing
reusability, or we force an excessively permissive
polymorphism on \inlinelstfcgg{Gt} and risk runtime type assertion errors.
Generics, or bounded parametric polymorphism,
provide us with a third solution via the
precise definition and tracking of polymorphic types in
structures, interfaces, and methods.
As we shall see momentarily, in \gls{fgg}, each of
these constructs may now accept any number of
type variables (type parameters) as a type
formal, which must then be instantiated upon use.
Each type variable has a bound, an interface, that
any instantiating type must satisfy, \ie be an instance of.
The type formal \inlinelstfcgg{[T Any]} is read as ``type parameter
\inlinelstfcgg{T} is bound by type \inlinelstfcgg{Any}''.
Objects with a generic type can use all methods
specified by the type variable's bound.
Type variables can be bound by any interface type, and may be mutually recursive within a type formal.
Take, for example, the type bound of \inlinelstfcgg{Ord} in Figure~\ref{code:fgg:example}.
\inlinelstfcgg{Ord} is bound by \inlinelstfcgg{Ord} itself and is used recursively in the
type bound for \inlinelstfcgg{GtFunc}.
For a type (\eg \inlinelstfcgg{int}) to instantiate type variable
\inlinelstfcgg{T} in \inlinelstfcgg{[T Ord[T]]},
its \inlinelstfcgg{Gt} method must not only take an argument of \inlinelstfcgg{Ord},
but must be precisely the same \inlinelstfcgg{Ord}-implementing type.
This kind of self-referential type bound is known as
\emph{F-bounded polymorphism} \cite{canning1989f}.
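As a minimal sketch (we use a wrapper struct, since methods cannot be
declared on the primitive \inlinelstfcgg{int}):
\begin{lstfcgg}
type MyInt struct { val int }
// MyInt may instantiate T in [T Ord[T]]: Gt takes exactly MyInt, not merely some Ord
func (x MyInt) Gt(that MyInt) bool { return that.val < x.val }
\end{lstfcgg}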
The interface \inlinelstfcgg{Function}
is now defined over two type variables (\inlinelstfcgg{T} and
\inlinelstfcgg{R}, both bounded by \inlinelstfcgg{Any}),
which are used by the specified \inlinelstfcgg{Apply}
method to type the simulated function's domain and codomain, respectively,
e.g., a type implementing \inlinelstfcgg{Function[int, bool]} must
implement the method \inlinelstfcgg{Apply(x int) bool}.
Unlike the original \gls{fg} code, we do not need \inlinelstfcgg{GtFunc}
to simulate any arbitrary function, but rather just functions from
some generic \inlinelstfcgg{Ord} type
to \inlinelstfcgg{bool}.
Instantiating \inlinelstfcgg{GtFunc} with \inlinelstfcgg{int},
written \inlinelstfcgg{GtFunc[int]},
gives an implementation of \inlinelstfcgg{Function[int,bool]}.
A type bound not only limits which types may specialise a
type parameter, but also what methods are available to
polymorphic values, \ie
given that all valid specialisations of \inlinelstfcgg{T}
in \inlinelstfcgg{GtFunc[T]}
must implement \inlinelstfcgg{Ord[T]}, we know that the
\inlinelstfcgg{val} field must always
possess the \inlinelstfcgg{Gt} method, allowing
us to call \inlinelstfcgg{Gt} on line~\ref{fgg:code:function:apply:gt}
without a type assertion.
The definition of \inlinelstfcgg{List} tracks not only the type of
the list, but also the type of the list created
by \inlinelstfcgg{Map}. The \inlinelstfcgg{Map}
method accepts a type parameter along
with a \inlinelstfcgg{Function} argument; this type parameter is then
used as the codomain of the \inlinelstfcgg{Function} argument, and
instantiates the \inlinelstfcgg{List} return type.
Line~\ref{fgg:code:function:typefail} thus fails during type
checking because \inlinelstfcgg{GtFunc} does not
implement \inlinelstfcgg{Function[bool, bool]}.
}%\vspace{-2mm}
\subsection{\glsentrylong{fgg} Syntax and Semantics}
\label{section:fgg:syntax}
\input{figs/fgg/fgg-syntax}
Figure~\ref{fig:fgg:syntax} presents the syntax of \gls{fgg}.
The key differences from \gls{fg} are the addition of types formal
($\Psi, \Phi$) for method signatures and declarations.
A type formal ($\typeFormal$) is a sequence of pairs,
each of which contains
a type parameter ($\alpha$) and parameter bound ($\iType{\tau}$).
Type bounds are interface types that
can be mutually recursive, in that any bound in a type
formal may depend upon any type parameter in that type formal, including itself.
Type parameters are instantiated by a type actual ($\psi, \phi$) -- a
sequence of types that satisfy the requirements imposed by the type
formal. A type ($\tau$)
in \gls{fgg} is either a type parameter or
a declared type that has been instantiated
($t\typeActualReceive$).
We simplify method declaration from
\gls{fgg}~\cite{griesemer2020featherweight}, following
the Go~1.18\xspace syntax.
\label{section:fcgg:typing}
The type system in \gls{fgg} extends
\gls{fg} with the addition of a new type variable context $\Delta$
mapping each type variable to its bound.
Expression $e$ of type $\tau$ is now given by the judgement
$\wellTyped[\Delta;\Gamma]{e}{\tau}$. Program well-formedness is given
by $P \operatorname{\var{ok}}$.
The typing rules follow those
given in \cite[Figure~15]{griesemer2020featherweight}, which can
be found in
\ifnotsplit{Appendix~\ref{appendix:fgg}}{the full version of this paper~\cite{fullversion}}.
\label{section:fgg:reduction}
The reduction semantics of \gls{fgg} are defined in
Figure~\ref{fig:fgg:semantics}.
They extend those of \gls{fg}; notably, \rulename{r-call}
(via the $\body$ auxiliary function) specialises generic types
in the resolved method body.
\gls{fgg} satisfies
type preservation and progress properties
(see \cite[Theorems 4.3 and 4.4]{griesemer2020featherweight}).
\input{figs/fgg/fgg-semantics}
\section{Introduction}
\label{sec:introduction}
Since its creation in 2009, the Go programming language
has placed a key emphasis on simplicity, safety, and efficiency.
Based on the
\citet{stackoverflow-developer-survey} survey, Go is the 5th most beloved
language, and is used to build
large systems, \eg \citet{docker},
\citet{kubernetes}, and \citet{grpc}.
The recent Go release (Go~1.18\xspace, released on March 15th, 2022)
added \emph{generics}, which Go programmers
and developers have long considered
Go's most critical missing
feature~\cite{go-developer-survey}.
\citet{go-release-notes}, however, has posted
that much work is still needed to ensure that
generics in Go are well-implemented.
The work on implementing generics in Go began in earnest with
\citet{griesemer2020featherweight},
in which they formalised two core calculi of (generic) Go; \gls{fgg} and \gls{fg},
as well as formalising
a \emph{monomorphisation translation} from \gls{fgg} to \gls{fg}.
Monomorphisation statically explores
a program's call graph and generates multiple
implementations of each generic
type and method according to each
specialisation of that type, or method, required at runtime.
The Go team informally proposed three approaches:
\begin{enumerate*}
\item Stencilling (monomorphisation)~\cite{google-mono},
\item Call-graph dictionary-passing~\cite{go-dict-proposal}, and
\item GC shape stencilling (hybrid of (1) and (2))~\cite{google-hybrid}.
\end{enumerate*}
A monomorphisation-based source-to-source prototype (\gls{gotogo})
has been implemented by
\citet{gotogo}, following the stencilling proposal (1) and
\cite{griesemer2020featherweight}.
The current Go~1.18\xspace implementation
extends (3)~\cite{go118}.
Unlike more traditional \emph{non-specialising}
dictionary approaches
(\eg dictionary-passing in Haskell and vtables in C++),
Go~1.18\xspace uses an optimised form of monomorphisation to allow types
in the same GC shape group to share specialised method and type instances.
In theory, all objects in a GC shape group have an equivalent
memory footprint and layout, although currently, Go~1.18\xspace
only groups pointers.
As multiple types may share the same GC shape group,
their dictionaries provide information lost during monomorphisation, \eg
concrete types and method pointers.
Moreover, Go~1.18\xspace builds a monolithic dictionary based on
the program's \emph{call-graph}.
Monomorphisation has a
number of well-known limitations;
it can substantially increase code
size, it can be prohibitively slow
during compilation~\cite{jones1995dictionary, StroustrupB:cpppl},
and it does not cover all programs~\cite{griesemer2020featherweight}.
Concretely, there are two core limitations with all the Go team proposals
(1--3), the current Go~1.18\xspace implementation, and the proposal of \citet{griesemer2020featherweight}.
\input{figs/fgg/fgg-nomono.tex}
\textit{1) Non-monomorphisable programs.\ }
All current implementations and proposals for generics in
Go suffer from the inability
to handle a class of programs that
use recursive instantiations, \eg the
list permutation example\footnote{
See \cite{gitchanderpermute} for an
efficient but type unsafe implementation of list permutation.}
provided in
Figure~\ref{fig:code:list:perm}.
This program cannot be monomorphised, as
a list of integers
\inlinelstfcgg{List[int]} has a
\inlinelstfcgg{permute} method which returns a list of type
\inlinelstfcgg{List[List[int]]}, which in turn has a
\inlinelstfcgg{permute} method
that returns type \inlinelstfcgg{List[List[List[int]]]}, and on
\emph{ad infinitum}.
Monomorphisation cannot explore
this infinite set of types in finite time, and
so cannot specialise a method for each instance.
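The essence of the example is a method whose result type instantiates the
receiver's type at a strictly larger type, roughly as sketched below (the
full program is in Figure~\ref{fig:code:list:perm}):
\begin{lstfcgg}
type List[$\alpha$ Any] interface {
    permute() List[List[$\alpha$]] // each instantiation of permute demands a strictly larger one
}
\end{lstfcgg}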
{\textit{2) Specialising translation.\ }
All currently realised approaches to generics in Go are
based on method/type specialisation.
This stands in contrast
to the approaches taken by other languages with automatic
memory management, such as Haskell, C\#, and Java.
Go uses garbage collection for automatic
memory management.
In the top 16 statically typed languages
with generics~\cite{TopProgr17:online},
we find a consistent theme:
languages with automatic memory management use
non-specialising implementations such as
dictionary-passing or erasure, and those without
use monomorphisation (see
\ifnotsplit{Appendix~\ref{app:implementations}}{the full version of this paper~\cite{fullversion}} for a breakdown of language implementations).
\myparagraph{Challenges and contributions}
We develop and implement a new non-specialising, call-site dictionary-passing translation
from Go with generics (\gls{fgg})
to Go (\gls{fg}), and prove its correctness. We then create micro and
real-world benchmarks for generic Go,
and examine the trade-offs
between the different translations to suggest
improvements for Go~1.18\xspace.
{\textit{1) The first challenge is to design and build a
non-specialising call-site dictionary-passing translation for Go.\ }
Go's distinctive
structural subtyping adds an extra level of complexity that requires careful consideration.
Our first contribution in \S~\ref{section:dictionary} and \S~\ref{subsec:imple}
is the formalisation and implementation of a new dictionary-passing translation
that is specifically designed for the unique qualities of Go.
{\textit{2) The second challenge is to overcome the
non-monomorphisability limitation} of
the current implementations and translate previously untranslatable
programs such as \inlinelstfcgg{permute}.
A key aspect of our dictionary design is that it is \emph{call-site}---each polymorphic type parameter
is represented by its own dictionary, which in turn is created at
the call-site where that type parameter would have been instantiated.
This allows any well-formed \gls{fgg} program to be translated.
{\textit{3) The third challenge we meet is
to establish semantic correctness of our
translation}. Historically, dictionary-passing translations
have been proven correct using value preservation~\cite{yu2004formalization,yu2004thesis,sulzmann2021dictionary},
an approach that cannot ensure
termination preservation or
generalise to more advanced language
features (\eg concurrency in Go).
We instead use a fine-grained behavioural equivalence
guided by the work of \citet{Igarashi99FJ}.
Unfortunately, proving the \emph{bisimulation} result in
\cite[Theorem 5.4]{griesemer2020featherweight}
is insufficient due to intermediate states
created by dictionary-passing.
We propose a novel \emph{bisimulation up to dictionary resolution} reduction,
and use this relation to prove that the translation preserves
essential properties of the source language (\S~\ref{section:properties}).
This proof technique is general and translation-agnostic,
and is useful in other contexts where a standard bisimulation
is inadequate.
{\textit{4) The fourth challenge is to find an effective evaluation for implementations of
generics in Go}.
We compare the five implementations---
\begin{enumerate*}
\item our call-site, non-specialising dictionary-passing
translation;
\item an erasure translation
built by us for empirical evaluation;
\item a monomorphisation translation by \citet{griesemer2020featherweight};
\item the initial source-to-source monomorphisation prototype translation \gls{gotogo} by the Go team; and
\item Go~1.18\xspace
\end{enumerate*}
---along three dimensions:
\begin{enumerate*}
\item compilation time,
\item translated code size, and
\item performance of compiled executables.
\end{enumerate*}
As Go~1.18\xspace was just released,
\emph{there currently exists no real-world Go program
with generics}.
In \S~\ref{subsec:evaluation}, we contribute a number of benchmarks to overcome this
deficit: we construct micro benchmarks to examine the effect of different
forms of complexity in generic programs;
and reimplement the real-world benchmarks from
\cite{odersky2000two,ureche2013miniboxing} in Go.
{\textit{5) The final challenge is to examine
the trade-offs between the different translations, which
suggest future improvements of Go~1.18\xspace}.
We observe,
in general, that monomorphisation leads
to better execution performance, while
non-specialisation (dictionary-passing) produces
smaller executables in less compilation time.
We also observe that on the micro benchmarks our dictionary-passing
translation can generate programs that are comparable in efficiency
to Go~1.18\xspace.
Overall, our results show that Go~1.18\xspace has much scope for improvement, and
demonstrate the usefulness of non-specialised call-site dictionary-passing translations for languages such as Go.
We provide concrete suggestions in \S~\ref{section:discussion}.
\myparagraph{\emph{Outline}}
\S~\ref{section:fg} and \S~\ref{section:fgg} summarise \gls{fg} and
\gls{fgg}; \S~\ref{section:dictionary} proposes a new
dictionary-passing translation;
\S~\ref{section:properties} proves
its semantic correctness;
\S~\ref{sec:exp} describes our implementations
and measures the trade-offs between the five translators;
\S~\ref{section:related} gives related work; and \S~\ref{section:conclusion}
concludes.
Proofs and
omitted definitions can
be found in
\ifnotsplit{the Appendix to this paper.}{the full version of the paper \cite{fullversion}.}
The dictionary-passing/erasure translators
and benchmarks are available in the artifact to this paper~\mbox{\cite{aritfact-entry}}.
Source code is available on GitHub~\cite{zhu22github}
and Software Heritage~\cite{zhu22heritage}.
\subsection{The Limitation of Monomorphisation}
\label{section:nomono}
\citet{griesemer2020featherweight}
define
a class of programs that their monomorphisation approach
cannot translate.
This limitation also applies to the Go~1.18\xspace call-graph based dictionary
implementation, for the same reason.
Consider the model non-monomorphisable
program in Figure~\ref{fig:example:nomono}.
\begin{wrapfigure}{l}{0.32\textwidth}
\vspace*{-.2cm}
\begin{lstfcgg}
type Box[$\alpha$ Any] struct { value $\alpha$ }
func (b Box[$\alpha$]) Nest(n int) Any {
if (n > 0) {
return Box[Box[$\alpha$]]{b}.Nest(n-1)
} else { return b }
}
\end{lstfcgg}
\vspace*{-.3cm}
\caption{\inlinelstfcgg{Box} example\\\cite[Figure~10]{griesemer2020featherweight}}
\label{fig:example:nomono}
\vspace*{-.3cm}
\end{wrapfigure}
Intuitively, the fundamental issue with this
deceptively simple program
is that \emph{instance set discovery} is
non-terminating.
To monomorphise a program,
we first need to discover all possible type instantiations used in
said program.
Perfectly well-behaved programs may however
produce infinitely many type instantiations.
This occurs when an instance of a (mutually)
recursive method eventually depends upon a greater
instantiation of itself, which in turn depends on an even
greater instantiation of itself \textit{ad infinitum}, \eg
\inlinelstfcgg{Box[int].Nest()} depends
upon the specialisation \inlinelstfcgg{Box[Box[int]].Nest()}.
In \cite{griesemer2020featherweight}, such programs are called
{\it nomono}.
\subsection{Go~1.18\xspace Implementation}
The official release of Go~1.18\xspace uses an optimised version of monomorphisation called
\emph{dictionaries and GC shape stenciling}~\cite{go118}.
When possible, their implementation reuses monomorphised functions to reduce code size.
Two objects may share the same specialised method
implementation when they have the same GC shape.
In the current implementation, the criterion for having the same
GC shape is that the two types are the same data type, or that both are pointers.
Each function therefore must have a dictionary to differentiate
concrete types at runtime.
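To illustrate the criterion, consider the following hypothetical sketch in Go~1.18 syntax (the identifiers are ours, and the sharing decision ultimately rests with the compiler): the two pointer instantiations below have the same GC shape and so may share one compiled method body, whereas the \inlinelstfcgg{int} instantiation has its own.
\begin{lstfcgg}
package shapes

// List is a minimal generic container used to illustrate GC shapes.
type List[T any] struct { head T }

func (l List[T]) Head() T { return l.head }

// *int and *string are both pointer-shaped, so their Head methods
// may be compiled to one shared body selected via a dictionary;
// List[int] has a distinct GC shape and its own instance.
var a = List[*int]{}.Head()
var b = List[*string]{}.Head()
var c = List[int]{}.Head()
\end{lstfcgg}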
A dictionary contains (1) the runtime type information of
generic type parameters, as well as (2) their derived types used in the function.
In the function body, each generic function call that
depends on the generic type parameters also needs a dictionary;
(3) these sub-dictionaries required by the method calls are also provided in the dictionary.
Additionally, the dictionary provides each generic object
with (4) the data structure that the Go runtime uses to conduct method calls.
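As a rough sketch of this layout, a dictionary can be pictured as the following Go structure, where the type and field names are placeholders we introduce for illustration rather than the runtime's actual representation:
\begin{lstfcgg}
package dictsketch

type runtimeType struct { name string } // stand-in for runtime type metadata
type itab struct {}                     // stand-in for an interface method table

// dict mirrors items (1)-(4) above.
type dict struct {
  typeParams   []*runtimeType // (1) runtime types of the type parameters
  derivedTypes []*runtimeType // (2) derived types used in the function
  subDicts     []*dict        // (3) dictionaries for nested generic calls
  itabs        []*itab        // (4) method-call data used by the Go runtime
}
\end{lstfcgg}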
Go~1.18\xspace would also need to create an infinite call-graph dictionary
for the \inlinelstfcgg{Box} example in Figure~\ref{fig:example:nomono},
as well as for
the \inlinelstfcgg{permute} example in
Figure~\ref{fig:code:list:perm}. Hence, Go~1.18\xspace cannot handle either
example. Our call-site dictionary-passing approach does
not suffer this limitation.
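For instance, the following direct Go~1.18 transliteration of the \inlinelstfcgg{Box} example is rejected at compile time with an instantiation-cycle error (the exact diagnostic may vary between compiler versions):
\begin{lstfcgg}
package main

type Box[T any] struct { value T }

// Each call instantiates Box[Box[T]].Nest, so the set of required
// instantiations is unbounded and compilation is rejected.
func (b Box[T]) Nest(n int) any {
  if n > 0 {
    return Box[Box[T]]{b}.Nest(n - 1)
  }
  return b
}

func main() { _ = Box[int]{42}.Nest(3) }
\end{lstfcgg}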
\section{Correctness of Dictionary-Passing Translation}
\label{section:properties}
In this section, we define, justify, and prove
the correctness of our dictionary-passing translation
using a behavioural equivalence.
We first introduce general
\emph{correctness criteria}
that good translations should satisfy.
We then propose a novel \emph{bisimulation up to} technique
to prove that translated programs are behaviourally equivalent to their source program.
We use this result to prove the correctness of our dictionary-passing translation.
Full proofs can be found in
\ifnotsplit{Appendix~\ref{app:proofs}}{the full version of this paper~\cite{fullversion}}.
\subsection{Correctness Criteria}
\label{section:properties:correctness}
The correctness criteria are defined
using a number of preliminary
predicates, provided below.
\begin{definition}[Type assertion errors]
\label{def:panics}
We say expression $e$ in \gls{fg} is a \emph{type assertion error}
(\emph{panic} in \cite{griesemer2020featherweight})
if there exists an evaluation
context $E$, value $v$, and
type $t$ such that
$e=E[v.(t)]$
and $\vtype(v)\not <: t$.
We say expression $e$ gets
a \emph{type assertion error} (denoted by
$e\Downarrow_\ensuremath{\mathsf{panic}}$)
if it reduces to an expression that contains a type assertion error,
\ie{} $e\longrightarrow^\ast e'$ and $e'$
is a type assertion error.
We write $P\Downarrow_\ensuremath{\mathsf{panic}}$ when
$P = \program{e}$ and $e\Downarrow_\ensuremath{\mathsf{panic}}$.
Similarly, we define
$e\Downarrow_\ensuremath{\mathsf{panic}}$ and $P\Downarrow_\ensuremath{\mathsf{panic}}$ for \gls{fgg}.
\end{definition}
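For intuition, a type assertion error corresponds to Go's runtime panic on a failed type assertion. The following minimal Go sketch compiles but panics at runtime with an interface-conversion error, since \inlinelstfcgg{int} has no \inlinelstfcgg{String} method:
\begin{lstfcgg}
package main

type Stringer interface { String() string }

func main() {
  var v any = 42
  _ = v.(Stringer) // int is not a Stringer: panics at runtime
}
\end{lstfcgg}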
We write $e\Downarrow v$ if
$e\longrightarrow^\ast v$ for some value $v$, and extend this predicate to $P$.
We abbreviate $\dict[\emptyset;\emptyset;\emptyset]{e}{\lex{e}}$ to
$\dict[]{e}{\lex{e}}$.
We define the following general correctness
criteria
related to typability, error correctness,
and preservation of a program's final result.
\begin{definition}[Preservation properties]
\label{def:type:preservation}
Let $P \operatorname{\var{ok}} $ in \gls{fgg},
and let there exist $\lex{P}$ such that $\dict[]{P}{\lex{P}}$.
A translation is:
\begin{enumerate}
\item \textbf{\emph{type preserving}}: if
$P \operatorname{\var{ok}}$, then $\lexP\operatorname{\var{ok}}$.
\item \textbf{\emph{type assertion error preserving}}:
$P\Downarrow_\ensuremath{\mathsf{panic}}$ iff $\lex{P}\Downarrow_\ensuremath{\mathsf{panic}}$.
\item \textbf{\emph{value preserving}}:
$P\Downarrow v$ iff $\lex{P}\Downarrow \lex{v}$
with
$\dict[]{v}{\lex{v}}$.
\end{enumerate}
\end{definition}
We only require the left-to-right direction for
type preservation, as due to type erasure
(\S~\ref{paragraph:structure}),
we cannot obtain the right-to-left
direction for dictionary-passing.
Our type preservation criterion matches that defined in
\citet[Theorem~5.3]{griesemer2020featherweight}.
We can, however, show
that type assertions are precisely
simulated (\S~\ref{subsubsec:typecollision}).
\subsection{Behavioural Equivalence -- Bisimulation up to Dictionary Resolution}
\label{subsec:prop:beh}
\citet[Theorem 5.4]{griesemer2020featherweight} prove the correctness
of the monomorphisation translation using
a simple (strong) bisimulation:
the binary relation $\Re$ is a \emph{bisimulation} iff
for every pair of $\ENCan{e,d}$ in $\Re$, where $e$ is a \gls{fgg} expression
and $d$ is a \gls{fg} expression:
\begin{enumerate*}
\item if $e \longrightarrow e'$, then $d \longrightarrow d'$ such that
$\ENCan{e',d'}\in \Re$; and
\item if $d \longrightarrow d'$, then $e \longrightarrow e'$ such that
$\ENCan{e',d'}\in \Re$.
\end{enumerate*}
This strong bisimulation suffices for translations that
preserve a simple one-to-one reduction-step correspondence.
Unlike monomorphisation,
dictionary-passing relies
on runtime computation, which prevents such a simple
correspondence.
We can, however, distinguish between reductions
introduced by dictionary-passing and those
inherited from the source program. This distinction allows
us to construct
a one-to-one correspondence relation
\emph{up to} dictionary resolution.
The formulation is non-trivial since, in \gls{fg},
dictionary resolution can occur at any point in a subterm.
\begin{wrapfigure}{r}{.67\linewidth}
\begin{minipage}[t]{0.37\linewidth }
\vspace{-4mm}
\begin{lstfcgg}
func foo[$\alpha$ Num](a $\alpha$) $\alpha$ {
  return a.Add(bar($\cdots$))
}
func main() {
  foo[Int](Zero{})
}
\end{lstfcgg}
\end{minipage}
\begin{minipage}[t]{0.62\linewidth }
\vspace{-4mm}
\lstset{firstnumber=1}
\begin{lstfcgg}
func foo(dict NumDict, a Any) Any {
  return dict.Add.Apply(a, bar($\cdots$)) }
type Int_Add struct {} // method pointer
func (i Int_Add) Apply(this Any, a Any) Any {
  return this.(Int).Add(a) }
func main() {
  foo(NumDict{Int_Add{}}, Zero{}) }
\end{lstfcgg}
\end{minipage}
\vspace{-3mm}
\caption{Non-trivial dictionary example. Source (Left). Translation (Right)}
\label{fig:example:nontriv}
\vspace{-3mm}
\end{wrapfigure}
We demonstrate this issue by evaluating the example in Figure~\ref{fig:example:nontriv}.
Importantly, the translated
function \inlinelstfcgg{foo}
cannot resolve the generic \inlinelstfcgg{Add} method
from dictionary \inlinelstfcgg{dict}
until \emph{after} expression \inlinelstfcgg{bar($\cdots$)}
is fully evaluated.
After one step, the \gls{fgg} program (left) is
\inlinelstfcgg{Zero<<>>.Add(bar($\cdots$))}.
If we translate this reduced term, we get
\inlinelstfcgg{Zero<<>>.(Zero).Add(bar($\cdots$))} ($\lex{Q_0}$).
But reducing the translated \gls{fg} program (right), we obtain
\inlinelstfcgg{NumDict<<Int_Add<<>>>>.Add.Apply(Zero<<>>, bar($\cdots$))} ($\lex{Q_1}$).
To show $\lex{Q_0}$ equivalent to $\lex{Q_1}$ using
the standard \gls{fg} reduction,
we would first have to fully resolve
\inlinelstfcgg{bar($\cdots$)} before we could start
to resolve the dictionary
access in $\lex{Q_1}$.
We might attempt to show that the translation in Figure~\ref{fig:example:nontriv}
is correct using a many-to-many reduction-step relation, \ie
some binary relation $\Re$ where
for every pair of $\ENCan{e,d}$ in $\Re$ it holds that
\begin{enumerate*}
\item if $e \longrightarrow^\ast e'$, then $d \longrightarrow^\ast d'$ such that
$\ENCan{e',d'}\in \Re$; and
\item if $d \longrightarrow^\ast d'$, then $e \longrightarrow^\ast e'$ such that
$\ENCan{e',d'}\in \Re$.
\end{enumerate*}
This approach is complicated by the presence of non-termination: \eg
if \inlinelstfcgg{bar($\cdots$)} does not return a value,
then we could never show that
$\lex{Q_0}$ and $\lex{Q_1}$ are related.
More importantly, many-to-many relations give less information
about the nature of a translation
than one-to-one relations.
If we consider just the
\inlinelstfcgg{NumDict<<Int_Add<<>>>>.Add.Apply($\cdots$)}
portion of $\lex{Q_1}$, we observe that,
using a pre-congruence reduction,
$\lex{Q_1}$ resolves to \inlinelstfcgg{Zero<<>>.(Int).Add(bar($\cdots$))}.
We may then safely increase the accuracy of the
assertion \inlinelstfcgg{Zero<<>>.(Int)} to \inlinelstfcgg{Zero<<>>.(Zero)}
without altering the semantics of the term.
The latter step is required because while the
dictionary stored the information
that \inlinelstfcgg{Zero<<>>} was passed to \inlinelstfcgg{foo}
as type \inlinelstfcgg{Int}, the reduction of the \gls{fgg} term
forgot this
information.
We call these two steps \emph{dictionary resolution},
as they resolve only those computations introduced by the use
of dictionaries for method resolution.
$\lex{Q_0}$ is equivalent to $\lex{Q_1}$
\emph{up to dictionary resolution}.
Our translation also adds type simulation computations
and type assertions.
Unlike dictionary resolution,
these extra computation steps are subsumed by the
standard \gls{fg} reduction.
\begin{definition}[Dictionary resolution]
\label{def:dictreso}
We define three pattern sets in \gls{fg}:
$\ensuremath{\rho_{\text{erase}}}$
(type assertions as a result of erasure),
$\ensuremath{\rho_{\text{sim}}}$ (type assertion simulation),
and $\ensuremath{\rho_{\text{dict}}}$ (dictionary resolution):
\vspace{-2.2mm}
{\small
\begin{flalign*}
\ensuremath{\rho_{\text{erase}}} ::=& \left\{\begin{array}{l}
v.(t)
\end{array}\right\}\\[-1.5mm]
\ensuremath{\rho_{\text{sim}}} ::=& \left\{\begin{array}{l}
v.\texttt{\_type}_i,\
v.\texttt{\_type},\
v.(t),\
v.\method{spec\_name}{m}(), \
v.\texttt{dict}_i, \
\lit{if} ~ v \mathbin{!=} v~\sytxBrace{\lit{panic}}, \
\return v
\end{array}\right\}&\\[-1.5mm]
\ensuremath{\rho_{\text{dict}}} ::=& \left\{\begin{array}{l}
\dictName{t}\{\multi{v}\}.f, \
v.\texttt{dict}_i, \
v.\texttt{\_type}_i,\
v.\texttt{\_type},\\%[-0.5mm]
\method{mName}{t,m}\{\}.\texttt{Apply}(\multi{e}),\
\dictName{t}\{\multi{v}\}.(\dictName{t})
\end{array}\right\}
\end{flalign*}
}
From these patterns, we define a number of reductions.
We define the first of these as
$E[e]\red_{\text{e}} E[e']$
if $e\longrightarrow e'$ with $e\in \ensuremath{\rho_{\text{erase}}}$; and
$E[e]\red_{\text{s}} E[e']$
if $e\longrightarrow e'$ with $e\in \ensuremath{\rho_{\text{sim}}}$.
We write $d\Longrightarrow d'$ if
$d\red_{\text{e}}^\ast \longrightarrow \red_{\text{s}}^\ast d'\not\red_{\text{s}}$.
Let $C$ be the context:
{$C::=\square \bnfsep
C.f \bnfsep
C.(t) \bnfsep
\sTypeInit{t}{\multi{e}, C, \multi{e}'} \bnfsep
C.m(\multi{e}) \bnfsep
e.m(\multi{e},C,\multi{e}')$}.
We define the dictionary resolution reduction $\dictred$ as
\begin{enumerate*}
\item $C[e]\dictred C[e']$
if $\reduction{e}{e'}$ where $e\in\ensuremath{\rho_{\text{dict}}}$; and
\item $C[e.(t)]\dictred C[e.(u)]$ if
$\wellTyped[]{e}{u}$ and $u<: t$.
\end{enumerate*}
\end{definition}
Notice that if $e \Longrightarrow e'$, then $e \longrightarrow^+ e'$; and that
$\Longrightarrow$ can be viewed as
a one-step reduction that corresponds
to a single step of the source language.
Reduction $\red_{\text{s}}$ only occurs following a
call to $\texttt{tryCast}$, and simulates whether or not the source \gls{fgg} assertion
is a type assertion error (see \S~\ref{subsubsec:typecollision}).
The reduction $\red_{\text{e}}$ resolves only the assertions
introduced during the type erasure step (see \S~\ref{paragraph:structure}).
The dictionary resolution reduction $\dictred$
occurs following a method call
\rulename{r-call} and simulates type parameter specialisation.
As demonstrated in the above example,
the $\dictred$ reduction may reduce any subterm matching
$\ensuremath{\rho_{\text{dict}}}$ or refine any type assertion.
\begin{restatable}{lemrest}{lemprec}
\label{lem:rec}
Let $e$ be an \gls{fg} expression.
Assume $\wellTyped[\emptyset]{e}{u}$.
\begin{enumerate}
\item $\Longrightarrow$ is deterministic, \ie if $e \Longrightarrow e_1$ and
$e \Longrightarrow e_2$, then $e_1=e_2$.
\item $\dictred$ is confluent, \ie if $e \dictred e_1$ and
$e \dictred e_2$, then there exists $e'$ such that $e_1 \dictred
e'$ and $e_2 \dictred e'$.
\end{enumerate}
\end{restatable}
We now extend the bisimulation relation to
bisimulation up to dictionary resolution.
\begin{definition}[Bisimulation up to dictionary resolution] \label{def:sb:upto}
\ \\
\begin{tabular}{ll}
\begin{tabular}{l}
The relation $\Re$ is
a \emph{bisimulation up to dictionary}
\\
\emph{resolution} if
$\Re\cdot (\leftarrowtriangle)^\ast$ is a bisimulation,
\\
\ie
if $P \operatorname{\var{ok}} $ in \gls{fgg}
and $\dict[]{P}{\lex{P}}$
\\
where
$P = \program{e}$ and
$\lex{P} = \program[\lex{D}]{\lex{e}}$
\\
then the diagram (right)
commutes.
\end{tabular}
&
\hspace{-0.5cm}
\begin{tabular}{l}
\small
\input{figs/fig-bisim.tex}
\end{tabular}
\end{tabular}
\end{definition}
\hspace{.15cm}
Intuitively, our translation forms a bisimulation
up to dictionary resolution
if
\begin{enumerate*}
\item each step that the source program takes can be mimicked
by the translated program; and
\item conversely, if the translated program
reduces,
then the source program must have been able to make an equivalent step
\end{enumerate*}
-- albeit with the translated program still needing
to evaluate the added dictionary resolution computations
at some future point during computation.
By considering the observable behaviour of a program
to be non-dictionary resolution reduction steps,
type assertion errors, and
termination (value production), we ensure that the translated program
is behaviourally equivalent to that of the source
program.
Note that
this formulation may be extended to a concurrent or effectful
fragment of Go
with the standard addition of \emph{barbs}~\cite{MiSa92} or
transition labels.
Finally, we arrive at our main theorem --- that the translation
satisfies the correctness criteria.
\begin{restatable}[Correctness of dictionary-passing]{thmrest}{thmcorrect}
\label{thm:main:correctness}
Let $P \operatorname{\var{ok}} $ in \gls{fgg}
and $\dict[]{P}{\lex{P}}$ with
$P = \program{e}$ and
$\lex{P} = \program[\lex{D}]{\lex{e}}$.%
\begin{enumerate*}
\item Dictionary-passing translation
$\lex{(-)}$ is type preserving;
\item $e$ and $\lex{e}$ are bisimilar up to dictionary resolution;
\item $\lex{(-)}$ is
type assertion error
preserving; and
\item $\lex{(-)}$ is value preserving.
\end{enumerate*}
\end{restatable}
Theorem~\ref{thm:main:correctness}
states that
our translation is correct:
translated programs behave exactly as their source programs would have behaved,
and any extra computations are accounted for by
the machinery introduced for dictionary-passing.
It is worth stressing that
our statement
is \emph{stronger} than the various definitions of
dictionary-passing translation correctness considered in the
literature (see \S~\ref{section:related}), which limit themselves
to non-termination preserving versions of value preservation.
By providing an account of intermediate state equivalence,
Theorem~\ref{thm:main:correctness}(2) not only gives a
meaningful equivalence for non-terminating programs, but
may also be extended to languages with non-determinism or concurrency.
\subsection{Proof of Theorem \ref{thm:main:correctness}}
We provide the key lemmata, theorems, and corollaries used in the proof
of Theorem \ref{thm:main:correctness}. All omitted proofs
may be found in
\ifnotsplit{Appendix~\ref{app:proofs}}{the full version of this paper~\cite{fullversion}}.
\myparagraph{Type preservation}
The type preservation criteria given in Definition~\ref{def:type:preservation}
only consider whole programs.
We must first show that the dictionary-passing translation
is type preserving for expressions. Note that translated
structure literals are
the only expressions not typed \texttt{Any}.
\begin{restatable}[Type preservation of expressions]{lemrest}{lemtypepres}
\label{lem:type:pres:exp}
Let $\dict{e}{\lex{e}}$ and $\map{\Gamma}$ be the \gls{fg} environment
where all variables in $\Gamma$ are erased (\texttt{Any}) and each dictionary
in $\eta$ is appropriately typed according to the bound in $\Delta$.
If $\wellTyped[\Delta;\Gamma]{e}{\tau}$ then
\begin{enumerate*}
\item If $\tau = \alpha$ or $\iType{\tau}$,
then $\wellTyped[\map{\Gamma}]{\lex{e}}{\texttt{Any}}$.
\item If $\tau = \sType{t}\typeActualReceive$, then
either $\wellTyped[\map{\Gamma}]{\lex{e}}{\texttt{Any}}$
or $\wellTyped[\map{\Gamma}]{\lex{e}}{\sType{t}}$.
\end{enumerate*}
\end{restatable}
\begin{restatable}[Type preservation (Theorem \ref{thm:main:correctness} (1))]{correst}{corprogtypepres}
\label{lem:type:pres:prog}
If $P \operatorname{\var{ok}}$, then $\lexP\operatorname{\var{ok}}$.
\end{restatable}
\begin{proof}
By the assumption that name constant functions are distinct, and by
Lemma~\ref{lem:type:pres:exp}.
\end{proof}
\myparagraph{Bisimulation and error preservation}
The operational correspondence theorem describes the behaviour
of a source program and its translation as four non-overlapping
cases. Note that $\lex{e} \Longrightarrow e'$
is the maximal reduction without another
type assertion simulation reduction
($e'\not\red_{\text{s}}$).
\begin{restatable}[Operational correspondence]{thmrest}{thmopcorrespond}
\label{thm:operational:correspondence}
Let $P \operatorname{\var{ok}}$ where $P = \program{e}$
and let $\dict[]{\program{e}}{\program[\lex{D}]{\lex{e}}}$.
\begin{enumerate}[(a)]
\item If $\reduction{e}{d}$, then there
exists $\lex{d}$ such that
$\dict[\emptyset; \emptyset; \emptyset]{d}{\lex{d}}$ and
$\lex{e} \Longrightarrow{\dictred^\ast} \lex{d}$.
\item If $\lex{e} \Longrightarrow e'$ where $e$ is not a type assertion error,
then there exists $d$ such that
$\reduction{e}{d}$ and there exists $\lex{d}$ such that
$\dict[\emptyset; \emptyset; \emptyset]{d}{\lex{d}}$ and
$e' \dictred^* \lex{d}$.
\item
If $\lex{e} \Longrightarrow e'$ where $e$ is a type assertion error,
then $e'$ is a type assertion error.
\item
If $e$ is a type assertion error, then there exists an $e'$
such that $\lex{e} \Longrightarrow e'$ and $e'$ is a type assertion error.
\end{enumerate}
\end{restatable}
\begin{proof}
By induction over the assumed reduction. Full proof is provided in
\ifnotsplit{Appendix~\ref{app:proofs}}{the full version of this paper~\cite{fullversion}}.
\end{proof}
\begin{restatable}[Bisimulation up to dictionary resolution (Theorem \ref{thm:main:correctness} (2))]{correst}{corbisim}
\label{cor:bisim}
Let $P \operatorname{\var{ok}} $
and $\dict[]{P}{\lex{P}}$ with
$P = \program{e}$ and
$\lex{P} = \program[\lex{D}]{\lex{e}}$.
Then $e$ and $\lex{e}$ are bisimilar up to dictionary resolution.
\end{restatable}
\begin{proof}
By Theorem \ref{thm:operational:correspondence}.
%
Let $\Re$ be the least relation such that all source expressions
are paired with their translation.
$\Re$ is a bisimulation up to dictionary resolution.
%
Namely, for each element $\ENCan{e,\lex{e}}\in \Re$, we have that:
\begin{enumerate}
\item If $e\longrightarrow e'$, then by Theorem~\ref{thm:operational:correspondence} (a)
there exists a $\ENCan{e',d}\in \Re$ such
that $\lex{e} \Longrightarrow{\dictred^\ast} d$.
\item If $\lex{e} \Longrightarrow{\dictred^\ast} d$, then by
Theorem~\ref{thm:operational:correspondence} (b)
there exists a $\ENCan{e',d}\in \Re$ such
that $e\longrightarrow e'$.
\end{enumerate}%
\end{proof}
\begin{restatable}[Type error preservation (Theorem \ref{thm:main:correctness} (3))]{correst}{corerrorpres}
\label{cor:error:preservation}
Let $P \operatorname{\var{ok}}$ and $\dict[]{P}{\lex{P}}$.
$P\Downarrow_\ensuremath{\mathsf{panic}}$ iff $\lex{P}\Downarrow_\ensuremath{\mathsf{panic}}$.
\end{restatable}
\begin{proof}
For this proof, we define $\lex{P}$ as resolving into a type assertion error
if $\lex{P}\Longrightarrow P'$ and $P'$ is a type assertion error. This happens
precisely when $P$ is a type assertion error, by Theorem~\ref{thm:operational:correspondence} (c) and (d).
By induction on the reductions in $\Downarrow$.
\begin{itemize}
\item[] \resetpfcounter\textbf{Case : } Left to right (base):
By Theorem~\ref{thm:operational:correspondence} (d).
\item[] \resetpfcounter\textbf{Case : } Right to left (base):
By Theorem~\ref{thm:operational:correspondence} (c).
\item[] \resetpfcounter\textbf{Case : } Left to right (induction): \\
If $P$ is not a type assertion
error, then it reduces to $Q$ where $Q\Downarrow_\ensuremath{\mathsf{panic}}$.
By Theorem~\ref{thm:operational:correspondence} (a), $\lex{P}\Longrightarrow\dictred^\ast\lex{Q}$
where $\dict[]{Q}{\lex{Q}}$.
We then apply the induction hypothesis: if $Q\Downarrow_\ensuremath{\mathsf{panic}}$ then $\lex{Q}\Downarrow_\ensuremath{\mathsf{panic}}$.
\item[] \resetpfcounter\textbf{Case : } Right to left (induction): \\
We assume that $\lex{P}$ does not resolve into a type assertion error,
\ie $\lex{P}\Longrightarrow Q'$ where $Q'$ is not a type assertion error.
Since $\dictred$ cannot cause a type assertion
error, we also get that $Q' \dictred^* \lex{Q}$ where $\lex{Q}$
is not a type assertion error.
By Theorem~\ref{thm:operational:correspondence} (b), $P\longrightarrow Q$.
We then apply the induction hypothesis: if $\lex{Q}\Downarrow_\ensuremath{\mathsf{panic}}$ then $Q\Downarrow_\ensuremath{\mathsf{panic}}$.
\end{itemize}
\end{proof}
\myparagraph{Value preservation}
Finally, the value preservation property follows from dictionary-passing
being a bisimulation up to dictionary resolution, as the
dictionary resolution steps are eager reductions that can equivalently
be delayed until they become standard reductions.
\begin{restatable}[Reduction rewrite]{lemrest}{lemredrewrite}
\label{lem:red:rewrite}
Let $e_1\rightarrowtriangle e_2 \longrightarrow e_3$ where $e_1=C[d_1]$, $e_2=C[d_2]$, and $d_1\longrightarrow d_2$.
\begin{enumerate}
\item If there exists an $E$ such that $C=E$, then $e_1 \longrightarrow^2 e_3$.
\item If there does not exist an $E$ such that $C=E$, then $e_1 \longrightarrow \rightarrowtriangle e_3$.
\end{enumerate}
\end{restatable}
\begin{restatable}[Resolution to value]{lemrest}{lemredvalue}
\label{lem:red:val}
If $e\rightarrowtriangle v$ then $e\longrightarrow v$.
\end{restatable}
\begin{restatable}[Value preservation (Theorem \ref{thm:main:correctness} (4))]
{correst}{corvalue}
\label{cor:valpres}
Let $\program \operatorname{\var{ok}}$ and $\dict[]{P}{\lex{P}}$.
$P\Downarrow v$ iff $\lex{P}\Downarrow \lex{v}$ where
$\dict[]{v}{\lex{v}}$.
\end{restatable}
\tikzset{|/.tip={Bar[width=.8ex,round]}}
\begin{proof}
By Corollary~\ref{cor:bisim} we have
the following diagram (where $\Re$ is created by $\Mapsto$)
\[
\begin{tikzpicture}[line width=rule_thickness,
arrowlabel/.style={inner sep=.5,fill=white},
]
\node (dagone) [] {$\lex{e}_1$} ;
\node (dagtwo) [right=1 of dagone] {$\lex{e}_2$} ;
\node (dagthree) [right=1 of dagtwo] {$\lex{e}_3$} ;
\node (dagdots) [right=1 of dagthree] {$\cdots$\vphantom{$\lex{e}_3$}} ;
\node (dagvee) [right=1 of dagdots] {$\lex{v}$\vphantom{$\lex{e}_3$}} ;
\node (eone) [above=.4 of dagone] {$e_1$} ;
\node (etwo) [above=.4 of dagtwo] {$e_2$} ;
\node (ethree) [above=.4 of dagthree] {$e_3$} ;
\node (edots) [above=.4 of dagdots] {$\cdots$\vphantom{$e_3$}} ;
\node (evee) [above=.4 of dagvee] {$v$\vphantom{$e_3$}} ;
\draw[->] (eone) to (etwo);
\draw[->] (etwo) to (ethree);
\draw[->] (ethree) to (edots);
\draw[->] (edots) to (evee);
\coordinate (onetwo) at ($ (dagone) !.5! (dagtwo) $);
\draw[-{Implies},double] (dagone) to (onetwo);
\draw[-{Latex[open]}] (onetwo) to node[very near end, yshift=.8mm] {${}^*$} (dagtwo);
\coordinate (twothree) at ($ (dagtwo) !.5! (dagthree) $);
\draw[-{Implies},double] (dagtwo) to (twothree);
\draw[-{Latex[open]}] (twothree) to node[very near end, yshift=.8mm] {${}^*$} (dagthree);
\coordinate (threedots) at ($ (dagthree) !.5! (dagdots) $);
\draw[-{Implies},double] (dagthree) to (threedots);
\draw[-{Latex[open]}] (threedots) to node[very near end, yshift=.8mm] {${}^*$} (dagdots);
\coordinate (dotsvee) at ($ (dagdots) !.5! (dagvee) $);
\draw[-{Implies},double] (dagdots) to (dotsvee);
\draw[-{Latex[open]}] (dotsvee) to node[very near end, yshift=.8mm] {${}^*$} (dagvee);
\draw[|-{Implies},double] (eone) to (dagone);
\draw[|-{Implies},double] (etwo) to (dagtwo);
\draw[|-{Implies},double] (ethree) to (dagthree);
\draw[|-{Implies},double] (evee) to (dagvee);
\end{tikzpicture}
\]
By Lemma~\ref{lem:red:rewrite} and \ref{lem:red:val}
each dictionary resolution reduction $\rightarrowtriangle$ is
either subsumed by $\longrightarrow$ or may be delayed using
reduction rewriting until
it becomes a $\longrightarrow$ reduction.
In other words,
$e_1 \longrightarrow e_2 \longrightarrow \cdots \longrightarrow v$ iff
$\lex{e_1} \Longrightarrow\rightarrowtriangle \lex{e_2} \Longrightarrow\rightarrowtriangle \cdots
\Longrightarrow\rightarrowtriangle \lex{v}$.
We use the fact that
$\rightarrowtriangle$ can be
delayed ($d \rightarrowtriangle\Longrightarrow d'$ implies $d \Longrightarrow\rightarrowtriangle d'$
or $d \longrightarrow\Longrightarrow d'$),
hence
$\lex{e_1} \Longrightarrow^+\rightarrowtriangle^+ \lex{v}$.
Finally, since $e\rightarrowtriangle^+ v$ implies $e\longrightarrow^+ v$, we have that
$e_1\Downarrow v$ iff
$\lex{e_1}\Downarrow \lex{v}$.
\end{proof}
The proof of Theorem~\ref{thm:main:correctness} is given by
Corollaries~\ref{lem:type:pres:prog},
\ref{cor:bisim},
\ref{cor:error:preservation}, and
\ref{cor:valpres}.
\section{Related Work}
\label{section:related}
\textbf{\emph{Implementation and benchmarks of generics}.}
\begin{wrapfigure}{r}{0.70\linewidth}
\footnotesize
\begin{tabular}{@{\hspace{0pt}}c@{\hspace{3pt}}|@{\hspace{3pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l}
&Language &Translation(s) \ \ & Optimal & Optimal \\
& & & Exec.~Time \ & Code Size \\\midrule
Our work
& \gls{fgg} (Go)
& Dict/Mono/
& Mono (1st)
& Erasure$^\dagger$ (1st)\\
&
& Erasure$^\dagger$
& Dict (2nd)
& Dict (2nd)\\
\midrule
Go team & Go & Mono/Hybrid & Mono & Mono \\
\midrule
\cite{ureche2013miniboxing}
& Scala (JVM)\
& Hybrid
& Hybrid
& Hybrid
\\
\cite{odersky2000two}
& Pizza (Java)\
& Mono/Erasure
& Mono
& Erasure
\\
\cite{kennedy2001design}
& .NET CLR
& Hybrid
& Hybrid
& N/A
\\
\cite{Jones93}
& Haskell
& Dict/Mono
& Mono
& Mono
\\
\end{tabular}
\\[1mm]
($\dagger$) \gls{fgg} Erasure is not type preserving.
\vspace*{-2mm}
\caption{Implementations and benchmarks}
\label{fig:tab-benchmark-works}
\vspace*{-3mm}
\end{wrapfigure}
To the best of our knowledge,
there is no existing work comparing implementations
of generics in Go.
The closest ones target JVM languages \cite{odersky2000two,ureche2013miniboxing},
.NET common language runtime (CLR) \cite{kennedy2001design},
and Haskell \cite{jones1995dictionary}.
\citet{odersky2000two}
benchmark a homogeneous
(similar to \acrshort{erasure}) and a
heterogeneous (similar to \acrshort{mono}) translation
for Pizza (an extension of Java with generics).
They find that heterogeneity
reduces execution time, but also increases code size.
\citet{jones1995dictionary} give a similar comparison for Haskell, reporting
that monomorphisation produces a smaller code size;
our work shows the opposite result.
One major reason is that unnecessary dictionary
fields and the manipulation of dictionary parameters require
more assembly instructions in Haskell than in Go, as
Go targets
low-level efficiency.
\citet{kennedy2001design} apply a hybrid
dictionary and monomorphisation approach targeting the
Just-In-Time (JIT) .NET CLR compiler.
Object instantiation
is conducted lazily at runtime according to an object's code
and structure (\eg memory layout and garbage
collection shape). Each object contains
a pointer to a dictionary (vtable), which provides
method entry points and type information.
With the help of lazy instantiation during runtime,
.NET CLR supports abundant language features,
including but not limited to $F$-bounded polymorphism
and polymorphic recursion.
They compare their design with equivalent non-generic
implementations using \texttt{Object}s and
hand-specialised code. Their execution speed is
close to that of the hand-specialised versions.
The Go~1.18\xspace approach is similar to that of .NET CLR,
but unlike .NET CLR, its instantiation happens at
compile time. Due to structural typing, dictionaries are instantiated
through an approach similar to instance discovery
in monomorphisation.
Hence, Go~1.18\xspace suffers from an inability to support polymorphic
recursion (\ie constrained by \textit{nomono}, \S~\ref{section:nomono}) and
the large code size of monomorphisation (\S~\ref{sec:exp}).
\citet{ureche2013miniboxing} propose an optimised
monomorphisation approach called miniboxing, which uses
one monomorphised instance for
types of different sizes to reduce code size.
Methods of different types are specialised at runtime
using a custom classloader.
They benchmark seven different settings, one achieving
up to a 22-fold speedup over the default
generics translation in Scala.
The main design goal of their benchmarks is the
performance of reading and writing miniboxed objects
allocated on heap by the JVM.
They test the different combinations of concrete
types for generics (``Multi Context''), which is
similar to the scenario of \benchname{Program}~\mycircledtext{c} (in \S~\ref{subsec:evaluation}),
but their goal is to test the historical
paths executed in the HotSpot JVM.
They also test the speed of a single method call,
\texttt{hashCode}, on generic types.
In comparison, our benchmarks test how various
factors impact performance (\eg the number
of methods in an interface).
\myparagraph{Formal translations of generics}
Formal translations of generics
can be split into three main techniques:
\emph{erasure},
\emph{dictionary-passing}, and
\emph{monomorphisation}.
We consider the most relevant work,
a breakdown of which
is provided in Figure~\ref{table:trans:theory}.
Where these works formally prove the correctness of their translation,
we observe that they can be grouped as
\emph{behavioural equivalence}~\cite{griesemer2020featherweight, Igarashi99FJ}
and \emph{value preservation}~\cite{yu2004formalization}.
The former demands that during evaluation the source and target
programs are still related, whereas the latter merely requires that
the result of a productive program be preserved.
In general, behavioural equivalence is a more fine-grained equivalence, as
it can be used to show value preservation.
In this paper, we formalised and then
proved our dictionary-passing translation
correct using bisimulation up to dictionary resolution, which is
categorised as
a behavioural equivalence.
\citet{yu2004formalization}
formalise a hybrid dictionary and
monomorphisation translation for the .NET~CLR.
\begin{wrapfigure}{r}{0.66\linewidth}
\footnotesize
\vspace{-3mm}
\begin{tabular}{@{\hspace{0pt}}c@{\hspace{3pt}}|@{\hspace{3pt}}lllc}
& Language & \hspace{-2mm}Approach & \hspace{-2mm}Translation(s)\ &
\hspace{-5mm}Formalised \\ \midrule
Our work
& \gls{fgg} (Go)
& S-to-S
& Dict
& \CheckmarkBold \\
\midrule
\cite{griesemer2020featherweight}
& \gls{fgg} (Go)
& S-to-S
& Mono
& \CheckmarkBold \\
\cite{Igarashi99FJ}
& Java
& S-to-S
& Erasure
& \CheckmarkBold \\
\cite{yu2004formalization}
& .NET CLR
& IR-to-IR
& Hybrid
& \CheckmarkBold \\
\cite{bottu2019coherence}
& Haskell
& S-to-IR
& Dict
& \CheckmarkBold \\
\cite{OW97}
& Pizza
& S-to-S
& Mono/Erasure
& \XSolidBrush
\end{tabular}
\\
\begin{center}
S-to-S$=$Source to Source; IR$=$Intermediate representation
\end{center}
\vspace{-3mm}
\caption{Related Work: Theory}
\label{table:trans:theory}
\vspace{-5mm}
\end{wrapfigure}
They mostly follow the design of \cite{kennedy2001design}.
They consider a target language which can, using an object's type, request the
specific dictionary from an assumed infinite map.
This is justified for the .NET~CLR as method dictionaries
are created on-demand using an object's type.
Compare this to our translation in which we must eagerly
construct dictionaries and pass
them in addition to the objects that they describe.
\citet[Theorem 5]{yu2004formalization} show that their
translation is value preserving;
for expression $e$, and value $v$,
if $e$ evaluates to $v$ ($e \Downarrow v$)
then there is a reduction
such that $\mapother{e} \longrightarrow^* \mapother{v}$
(where $\mapother{-}$ is their translation).
\citet{bottu2019coherence} formalise
dictionary-passing in Haskell.
Their work focuses on proving a \emph{coherency theorem}.
They motivate this work by observing that nominally typed languages
featuring multiple inheritance (\ie Haskell)
suffer from an ambiguity in dictionary resolution, such that
the translation of
a single source program may
\emph{non-deterministically}
produce different terms in the target language.
A translation is coherent when these target terms
are contextually equivalent.
We need not consider this issue, as Go's structural typing
system does not support the multiplicity of superclass implementations
that causes incoherence.
\citet{bottu2019coherence} do not prove the correctness
of their dictionary-passing translation using an equivalence
between the source and target language.
\citet{griesemer2020featherweight} formalise the \gls{fg} and \gls{fgg}
languages, as well as the \gls{mono} translation used in \S~\ref{sec:exp}.
This work defines a class of \gls{fgg}
programs that can be monomorphised,
and proves that class membership is decidable.
Finally, they prove that their translation forms
a one-to-one bisimulation.
Their behavioural equivalence is
straightforward and does not require any up to
techniques, as monomorphisation does not
introduce runtime computations.
\citet{OW97} describe, but do not formalise, two
alternative approaches -- erasure and monomorphisation --
to implementing
generics in the Pizza language, a generic variant of Java.
\citet{Igarashi99FJ} build on the erasure technique
developed in \cite{OW97}.
Their work formalises
Featherweight Generic Java and
proves a formal erasure translation
to Featherweight Java.
They prove the correctness of their erasure
translation using a behavioural equivalence,
although their translation introduces
\emph{synthetic casts} (assertions), which complicates
the correctness theorems.
To resolve this issue, they introduce a reduction
for their proofs which freely adds,
removes, or safely alters any required synthetic casts.
Correctness of their translation
is split
into two directions, called
\emph{weak completeness} and
\emph{soundness} \cite[Theorem~4.5.4 and Theorem~4.5.5]{Igarashi99FJ},
which uses a behavioural equivalence up to
the cast reduction.
As with our paper, they use these theorems to show a
value preservation corollary.
\citet[Corollary~4.5.6]{Igarashi99FJ} also prove
that their erasure translation is type assertion error preserving
-- in contrast to our \gls{erasure} translation, which does
not preserve type assertions. This disparity is due to
a limitation on the expressivity of assertions in Generic Java.
The inclusion of this limitation has been an area of contention, with other
authors suggesting that it could be overcome with the
use of type-reps~\cite{allen02thecase,agesen1997adding,solorzano98reflection,Viroli00reflectiveGJ,crary1998intensional}.
\myparagraph{Formal non-generics dictionary translation}
\citet{sulzmann2021dictionary} propose a
dictionary-passing translation from the non-generic \gls{fg} to an
\emph{untyped} variant of the $\lambda$-calculus
with pattern matching.
They use a dictionary-passing approach to investigate
Go's resolution mechanism for overloaded methods and
structural subtyping.
\citet{sulzmann2021dictionary}
prove that their translation is value preserving
using a step-indexed logical relation.
Intuitively, \citet{sulzmann2021dictionary} use an inductive
proof technique that, using two related values $v$ and $v'$ at type $t$,
relates any terms ($e$ and $\mapother{e}$)
that can reduce to
$v$ and $v'$ (\emph{resp.}) within $k$~reduction-steps.
Step-indexed logical relations are a sophisticated extension to
logical relations (\eg \cite{bottu2019coherence}),
and are applicable for languages with recursion.
\citet{sulzmann2021dictionary} left
a type-preserving translation from \gls{fg} and
a translation from \gls{fgg}
as their future work.
No implementation or evaluation of their translation is provided.
\myparagraph{Alternatives to bisimulation up to}
In our motivating example for \emph{up to dictionary resolution}
(Figure~\ref{fig:example:nontriv}),
we briefly discuss potential alternate many-to-many bisimulation approaches.
One such approach is the
\emph{stuttering bisimulation}~\cite{browne1988charachterizing},
which has been studied extensively in the domain of model checking~\cite{baier2008principles}.
The stutter bisimulation relates two terms when they
both reduce to related terms in an unbounded, but finite, number of steps.
Formally, $e$ and $\mapother{e}$ are related by a
\emph{stutter bisimulation}
when \begin{enumerate*}
\item $e\longrightarrow e'$ implies that there exists a finite reduction
$\mapother{e}\longrightarrow d_0 \longrightarrow \cdots \longrightarrow d_n$ ($n\ge 0$)
where each intermediate state $d_i$ is related to $e'$; and symmetrically,
\item $\mapother{e}\longrightarrow d$ implies that there is a finite reduction from $e$
with each element being related to $d$.
\end{enumerate*}
This approach works well for finite models, but becomes \emph{undecidable}
when applied to Turing complete languages such as \gls{fgg}.
To overcome this issue, the works in \cite{hur2014logical,leroy2009formally}
consider restricted, decidable, variants of
the stutter bisimulation to show the correctness of their translations.
\citet{leroy2009formally} formulates the non-symmetric
\emph{``star''-simulation}, which requires a well-founded ordering on reducing terms
to ensure that either \begin{enumerate*}
\item both source and target terms reduce infinitely; or
\item the source cannot reduce infinitely while the target is stuck.
\end{enumerate*}
In practice, the well-founded ordering used in (2)
is approximated using fixed parametric bounds.
\citet{hur2014logical} formulate this idea
using \emph{stuttering parametric bisimulation},
which bounds the number of steps that two related
terms can take before their reductions are related.
Such restricted variants of the stutter bisimulation
cannot provide a sound and complete correctness proof for \gls{dict}.
More generally, our use of a fine-grained up-to bisimulation
not only builds on existing correctness theorems
for the translation of generics
\cite{Igarashi99FJ,griesemer2020featherweight}, but it
can also be readily extended to include advanced language features
such as concurrency and side effects in Go.
\section{Appendix of Section~\ref{section:properties}}
\label{app:proofs}
This section provides the detailed proofs for the main theorem.
\begin{definition}[Dictionary map]
To simplify definitions we sometimes use a functional notation ($\map[]{-}$)
for the dictionary-passing translation, defined as
\begin{align*}
\map{e} & = \lex e~\text{\it where }~\dict{e}{\lex{e}} \\
\map[]{P} & = \lex P~\text{\it where }~\dict[]{P}{\lex{P}} \\
\map{\Gamma} & = \dom{\Gamma} : \texttt{Any}, \multi{\texttt{dict} : \dictName{\Delta(\eta^{-1}(\texttt{dict}))}}
\end{align*}
While $\map{\Gamma}$ seems complex at first glance, it simply
erases all variables already in $\Gamma$ while adding
appropriate dictionary variables. This is done by finding
the type parameter name for $\texttt{dict}_i$ from $\eta^{-1}$
and then getting the type bound of that type parameter from $\Delta$.
This type bound is used to decide the type of the $\texttt{dict}_i$ variable.
Note that $\eta$ is bijective since we assume all type parameter names and
dictionary variable names are unique; as such, $\eta^{-1}$ exists.
\end{definition}
\lemprec*
\begin{proof}
\begin{enumerate}
\item This case is shown by the deterministic nature of $\longrightarrow$ and the fact
that $\Longrightarrow \subseteq \longrightarrow^+$.
\item This case is immediate for most $\ensuremath{\rho_{\text{dict}}}$ cases. Only the
$\method{mName}{t,m}\sytxBrace{}.\texttt{Apply}(\multi{e})$ case presents
a complication. Once we realise, however, that each $e_i$ is
used linearly in the method $\method{mName}{t,m}\sytxBrace{}.\texttt{Apply}$,
as created by $\method{meth\_ptr}{}$,
it becomes clear that this case does not interact with any
reductions in $\multi{e}$.
\end{enumerate}
\end{proof}
\begin{restatable}[Type specialisation is resolved by $\dictred$ ]
{lemrest}{lempreposttype}
\label{lem:preposttype}
Let $\Delta = \multi{\alpha : \iType{\tau}}$,
expression $e$ be of type
$\wellTyped[\Delta; \Gamma]{e}{\tau}$,
and $\multi{\sigma}$ be a type actual such that $\subtypeMulti[\Delta]{\sigma}{\iType{\tau}}$.
Let the map $\eta = \{\multi{\alpha \mapsto \texttt{dict}}\}$.
Then
$\map{e}[
\multi{\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}}
] \dictred^*
\map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}$
\end{restatable}
\begin{proof}
By induction on $e$, we apply the substitution of each $\alpha_i$ in turn.
Note that dictionary $\eta(\alpha_i)$ is either of the form $v.\texttt{dict}_i$ when
$\alpha_i$ comes from the receiver or $\texttt{dict}_i$ when it is a method parameter.
We can limit our considerations to the latter case since,
if the former holds, we can transform it to the latter, as
$\ensuremath{\rho_{\text{dict}}}$ resolves the desired receiver field access.
\begin{itemize}
\item[] \caseof{d-dictcall}\\
We assume that $\wellTyped[\alpha:\iType{\tau}; \Gamma]{e}{\alpha}$
such that the translation of
$e.m\typeActualMethod(\multi{e})$ is
\[\dict{e.m\typeActualMethod(\multi{e})}
{\eta(\alpha_i).m.\texttt{Apply}(\map{e}, \map{\psi}, \multi{\map{e}})}\]
As above, $\eta(\alpha_i)$ is either of the form $v.\texttt{dict}_i$ when
$\alpha_i$ comes from the receiver or $\texttt{dict}_i$ when it is a method parameter.
In the former case $\ensuremath{\rho_{\text{dict}}}$ resolves the desired receiver field access,
so we may limit our considerations to the latter.
Let $\sigma = u[\phi]$ and $\iType{\tau} = \iType{t}[\phi']$. Then,
applying the substitution $\theta = [\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]$,
we produce the term
\[\makeDict[\emptyset; \emptyset]{u[\phi], \iType{t}[\phi']}.m.\texttt{Apply}(\map{e}[\theta], \map{\psi}[\theta], \multi{\map{e_2}[\theta]})\]
where method $m$ is in the method abstractors of the given dictionary, such that
\[\makeDict[\emptyset; \emptyset]{u[\phi], \iType{t}[\phi']} = \dictName{\iType{t}}\{\multi{v}, \method{mName}{u,m}, \multi{v}\}\]
$\ensuremath{\rho_{\text{dict}}}$ resolves as follows
\begin{flalign*}
\qquad \dictName{\iType{t}}\{\cdots\}.&m.\texttt{Apply}(\map{e}[\theta], \map{\psi}[\theta], \multi{\map{e}[\theta]}) &\\
& \dictred \method{mName}{u,m}.\texttt{Apply}(\map{e}[\theta], \map{\psi}[\theta], \multi{\map{e}[\theta]}) &\\
& \dictred \map{e}[\theta].(u).m(\map{\psi}[\theta], \multi{\map{e}[\theta]})
\end{flalign*}
By the induction hypothesis
\begin{flalign*}
\qquad \map{e}[\theta].(u)&.m(\map{\psi}[\theta], \multi{\map{e}[\theta]}) \dictred^* \\
& \map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}.(u).m(\map[\emptyset; \emptyset; \Gamma]{\psi[\multi{\alpha \mathbin{:=} \sigma}]}, \multi{\map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}})
\end{flalign*}
Note that we can only apply induction on the arguments to $m$
because $\ensuremath{\rho_{\text{dict}}}$ is defined on the pre-congruence evaluation
context $C$. We explicitly do not use $\ensuremath{\rho_{\text{sim}}}$
as part of this induction.
To resolve $\map[\emptyset; \emptyset; \Gamma]{e.m\typeActualMethod(\multi{e})[\multi{\alpha \mathbin{:=} \sigma}]}$
we first need to observe that the type substitution specifies all type variables
by our assumptions. This means that the dictionary-passing translation uses
the homomorphic rule \rulename{d-call}. We also know that the type of $e$ ($\alpha$)
is mapped to $\sigma$ ($u[\phi]$).
\begin{flalign*}
\qquad & \map[\emptyset; \emptyset; \Gamma]{e.m\typeActualMethod(\multi{e})[\multi{\alpha \mathbin{:=} \sigma}]} &\\
&\qquad = \map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}].m[\psi[\multi{\alpha \mathbin{:=} \sigma}]](\multi{e[\multi{\alpha \mathbin{:=} \sigma}]})} &\\
&\qquad = \map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}.(u).m(\map[\emptyset; \emptyset; \Gamma]{\psi[\multi{\alpha \mathbin{:=} \sigma}]}, \multi{\map[\emptyset; \emptyset; \Gamma]{e[\multi{\alpha \mathbin{:=} \sigma}]}})
\end{flalign*}
\item[] \caseof{d-assert} $\wellTyped[\Delta, \Gamma]{e.(\alpha)}{\alpha}$\\
We start by considering the $e.(\alpha)[\alpha \mathbin{:=} \sigma]$ side.
\begin{flalign*}
\qquad & \map[\emptyset; \emptyset; \Gamma]{e.(\alpha)[\alpha \mathbin{:=} \sigma]}
= \map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma].(\sigma)}
= \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]}) &
\end{flalign*}
We now look at the left-hand side. Let $\zeta = (-.\texttt{\_type}) \circ \eta$.
\begin{flalign*}
\qquad \map{e.(\alpha)} & = \typemeta{\alpha}.\texttt{tryCast}(\map{e}) &\\
& = \zeta(\alpha).\texttt{tryCast}(\map{e}) \\
& = (\eta(\alpha).\texttt{\_type}).\texttt{tryCast}(\map{e})\\
& = \texttt{dict}.\texttt{\_type}.\texttt{tryCast}(\map{e})
\end{flalign*}
If $\iType{\tau} = \iType{t}[\phi]$ then we have that
$\makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}} =
\dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta{\sigma}}$.
We also know that $\sigma$ is well typed in the empty environment ($\subtype[]{\sigma}{\iType{\tau}}$)
so $\typemeta{\sigma} = \typemeta[\emptyset]{\sigma}$.
Using the previously derived equation $\map{e.(\alpha)} = \texttt{dict}.\texttt{\_type}.\texttt{tryCast}(\map{e})$, we compute
{\small
\begin{flalign*}
\quad \!& \texttt{dict}.\texttt{\_type}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}] &\\
& \quad = \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}.\texttt{\_type}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}])\\
& \quad \dictred \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}])
\end{flalign*}
}
We can now apply the induction hypothesis
\begin{flalign*}
\qquad & \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta[\emptyset]{\sigma}}]) &\\
& \quad \dictred^* \typemeta[\emptyset]{\sigma}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})
\end{flalign*}
\item[] \caseof{d-assert} $\wellTyped[\Delta, \Gamma]{e.(\tau)}{\tau}$\\
If $\alpha \not\in \fv{\tau}$, then this proof is immediate by induction.
If we instead assume that $\alpha \in \fv{\tau}$ then
with $\zeta = (-.\texttt{\_type}) \circ \eta$
\begin{flalign*}
\qquad & \map{e.(\tau)}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}] &\\
& \quad = \typemeta{\tau}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]
\end{flalign*}
We further assume that $\tau = t[\alpha, \multi{\sigma}]$. While it may be that
$\alpha$ is a type specialisation of a type specialisation ($t[u[\alpha]]$),
this case does not significantly alter the proof, so we assume $\tau$
is as given. Naturally this also applies if $\alpha$ is used more than
once in $\tau$ (we assume $\alpha \not\in \fv{\multi{\sigma}}$).
Computing $\typemeta{\tau}$ we get $\metadataName{t}\sytxBrace{\zeta(\alpha), \multi{\typemeta{\sigma}}}$,\\
with $\zeta(\alpha) = \texttt{dict}.\texttt{\_type}$.
{\small
\begin{flalign*}
\qquad & = \typemeta{\tau}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]&\\
& \quad = \metadataName{t}\sytxBrace{\texttt{dict}.\texttt{\_type}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]\\
\end{flalign*}
}
Furthermore we know $\makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}} = \dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta{\sigma}}$,
and so the above substitution becomes
{\small
\begin{flalign*}
\qquad & = \metadataName{t}\sytxBrace{\texttt{dict}.\texttt{\_type}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e})[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}]&\\
& = \metadataName{t}\sytxBrace{\dictName{\iType{t}}\sytxBrace{\multi{v}, \typemeta{\sigma}} \\
& \qquad \qquad .\texttt{\_type}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}])\\
& \dictred \metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}])\\
\end{flalign*}
}
Applying induction
{\small
\begin{flalign*}
\qquad &\metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map{e}[\texttt{dict} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\sigma, \iType{\tau}}])&\\
&\quad \dictred^* \metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})
\end{flalign*}
}
We now look to $\map[\emptyset; \emptyset; \Gamma]{e.(\tau)[\alpha \mathbin{:=} \sigma]}$.
We again assume that $\tau = t[\alpha, \multi{\sigma}]$ with $\alpha \not\in \fv{\multi{\sigma}}$
\begin{flalign*}
\qquad &\map[\emptyset; \emptyset; \Gamma]{e.(\tau)[\alpha \mathbin{:=} \sigma]} &\\
& \quad = \map[\emptyset; \emptyset; \Gamma]{e.(t[\alpha, \multi{\sigma}])[\alpha \mathbin{:=} \sigma]}\\
& \quad = \map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma].(t[\sigma, \multi{\sigma}])}\\
& \quad = \typemeta[\emptyset]{t[\sigma, \multi{\sigma}]}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})\\
& \quad = \metadataName{t}\sytxBrace{\typemeta{\sigma}, \multi{\typemeta{\sigma}}}.\texttt{tryCast}(\map[\emptyset; \emptyset; \Gamma]{e[\alpha \mathbin{:=} \sigma]})
\end{flalign*}
\end{itemize}
\end{proof}
\begin{restatable}[Subtype preservation]{lemrest}{lemsubtypepres}
\label{lem:sub:pres}
Let $\dict[]{P}{\lex{P}}$.
If $\subtype[\emptyset]{u[\psi]}{t\typeActualReceive}$ in
$P$ then
$u<:t$ in $\lex{P}$.
\end{restatable}
\begin{proof}
By case analysis on $\subtype[\emptyset]{u[\psi]}{t\typeActualReceive}$.
\end{proof}
\begin{restatable}[Value substitution is compositional up to $\dictred$]
{lemrest}{lemprepostval}
\label{lem:prepostval}
Let $\Gamma = \multi{x : \tau}$,
expression $e$ be of type
$\wellTyped[\emptyset; \Gamma]{e}{\tau'}$,
and expressions $\multi{v}$ be typed by
$\wellTypedMulti[\emptyset; \emptyset]{v}{\sType{\sigma}}$
such that
$\subtypeMulti[\emptyset]{\sType{\sigma}}{\tau}$.
We have that
$\map[\emptyset, \emptyset, \Gamma]{e}[
\multi{x\mathbin{:=} \map[\emptyset, \emptyset, \emptyset]{v}}
] \dictred^*
\map[\emptyset; \emptyset; \emptyset]{e[\multi{x\mathbin{:=} v}]}$
\end{restatable}
\begin{proof}
By induction on the translation rule used,
we apply the substitution of each $x_i$ in turn.
\begin{itemize}
\item[] \caseof{d-call}
\[
\namedRule{d-call}{
\infer{
\dict[\emptyset; \emptyset; \Gamma]{
e.m\typeActualMethod(\multi{e})
}{
\map[\emptyset;\emptyset;\Gamma]{e}.(t).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
}
}{
\wellTyped[\emptyset; \Gamma]{e}{t\typeActualReceive}
& \lex{\psi} = \makeDict[\emptyset; \emptyset]{\psi, \Psi}
& (m[\Psi](\multi{x~\tau})~\tau) \in \methods[\Delta](t\typeActualReceive)
}
}
\]
By the substitution lemma \cite[Lemma~4.2]{griesemer2020featherweight}
we have
\begin{pfsteps*}
\pf[1]{\wellTyped[\emptyset; \emptyset]{e[\multi{x\mathbin{:=} v}]}{u[\psi']}}
\pf[2]{\subtype[\emptyset]{u[\psi']}{t\typeActualReceive}}
\end{pfsteps*}
and
\begin{pfsteps*}
\pf[3]{\dict[\emptyset; \emptyset; \emptyset]{
e[\multi{x\mathbin{:=} v}].m\typeActualMethod(\multi{e[\multi{x\mathbin{:=} v}]})
}{
\map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}.(u).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}})
}}
\end{pfsteps*}
By lemma~\ref{lem:sub:pres} \pfref{2} we have that $u<: t$.
We now have
\begin{pfsteps*}
\pf[4]{\map[\emptyset;\emptyset;\Gamma]{e}.(t).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
\dictred
\map[\emptyset;\emptyset;\Gamma]{e}.(u).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
}
\end{pfsteps*}
and by the induction we have
\begin{pfsteps*}
\pf[5]{
\map[\emptyset;\emptyset;\Gamma]{e}.(u).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\Gamma]{e}})
\dictred^*
\map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}.(t).m(\lex{\psi}, \multi{ \map[\emptyset;\emptyset;\emptyset]{e[\multi{x\mathbin{:=} v}]}})
}
\end{pfsteps*}
\end{itemize}
\end{proof}
\begin{lemma}[Method specification simulation preserves substitution]
\label{lem:methspec}
Let $\wellFormedMulti[\alpha:\iType{\tau}]{M}$, and assume $\multi{\sigma}$ such that
$\subtypeMulti[\emptyset]{\sigma}{\iType{\tau}}$.
We also assume $\texttt{this} = \metadataName{\iType{t}}\sytxBrace{\multi{\typemeta[\emptyset]{\sigma}}}$.
For $\zeta = \{\multi{\alpha \mapsto \texttt{this}.\texttt{\_type}}\}$ it
holds that $\signatureMeta[\zeta]{M}~\red_{\text{s}}^*~\signatureMeta[\emptyset]{M[\alpha \mathbin{:=} \sigma]}$.
\end{lemma}
\begin{proof}
Since each $\alpha_i$ is distinct, we can consider each separately.
We begin by noting that $\method{arity}{M[\alpha_i \mathbin{:=} \sigma_i]} = \method{arity}{M} = n$.
We also define a suitable $\zeta'$ for the $\texttt{param\_index}\{\}$ map,
such that $\alpha_i \not \in \dom{\zeta'}$.
\begin{flalign*}
\qquad & \signatureMeta[\emptyset]{M[\alpha_i \mathbin{:=} \sigma_i]}&\\
& \quad = \fnMeta{n}\sytxBrace{
\multi{\typemeta[\zeta']{\tau[\alpha_i \mathbin{:=} \sigma_i]}}
}\\
& \signatureMeta[\zeta]{M} \\
&\quad = \fnMeta{n}\sytxBrace{
\multi{\typemeta[\zeta', \zeta]{\tau}}
}
\end{flalign*}
Here $\multi{\tau} = \multi{\tau_0}, \multi{\tau_1}, \tau_2$ for
$M = [\multi{\beta ~ \tau_0}](\multi{x~\tau_1})\tau_2$.
It now suffices to show that, for all $\tau$, $\typemeta[\zeta', \zeta]{\tau} \red_{\text{s}}^*
\typemeta[\zeta']{\tau[\alpha_i \mathbin{:=} \sigma_i]}$. This is done by induction on $\tau$.
\begin{itemize}
\item[] \resetpfcounter\textbf{Case : } $\tau = \alpha_i$\\
The term $\typemeta[\zeta']{\alpha_i[\alpha_i \mathbin{:=} \sigma_i]}$ becomes $\typemeta[\zeta']{\sigma_i}$
which is equal to $\typemeta[\emptyset]{\sigma_i}$ since $\sigma_i$ is defined
outside the scope of $\zeta'$.
The other term $\typemeta[\zeta', \zeta]{\alpha_i}$ is equal to
$(\zeta', \zeta)(\alpha_i)$ which by the definition of $\zeta$ is\\
$\metadataName{\iType{t}}\sytxBrace{\multi{\typemeta[\emptyset]{\sigma_i}}}.\texttt{\_type}_i \red_{\text{s}} \typemeta[\emptyset]{\sigma_i}$.
\item[] \resetpfcounter\textbf{Case : } $\tau = \beta$ where $\beta \mathbin{!=} \alpha_i$\\
Both terms are immediately equal to $\zeta'(\beta)$.
\item[] \resetpfcounter\textbf{Case : } $\tau = t[\multi{\tau}]$\\
By induction on each $\tau_i$.
\end{itemize}
\end{proof}
\begin{lemma}
\label{lem:typeformalsmatchasparams}
If $\Phi \mathbin{:=}_\Delta \phi$ with $\eta$ such that
$\dom{\eta} = \dom{\Delta}$ then
$\vtype(\makeDict{\phi, \Phi}) <: \method{asParam}{\Phi}$.
\end{lemma}
\begin{proof}
Immediate from the definition of $\makeDict[]{}$ and
$\method{asParam}{}$.
\end{proof}
\lemtypepres*
\begin{proof}
By induction on the type of $e$.
\begin{itemize}
\item[] \caseof{t-field}
\begin{pfsteps*}
\pf[1]{\wellTyped[\Delta; \Gamma]{e.f}{\tau}}
\pf[2]{\dict{e.f}{\lex{e}.(\sType{t}).f}}
\end{pfsteps*}
For \pfref{1} to hold $e$ must be of type $\sType{t}\typeActualReceive$.
By the induction hypothesis, either $\wellTyped[\map{\Gamma}]{\map{e}}{\texttt{Any}}$
or $\wellTyped[\map{\Gamma}]{\map{e}}{\sType{t}}$.
In either case $\map{e}.(\sType{t})$ is well typed by \rulename{t-assert$_S$}
or \rulename{t-stupid} (\emph{resp.}). Since $f$ is a field of
type $\sType{t}\typeActualReceive$ it must also be a field of $\sType{t}$.
We get the final typing judgement $\wellTyped[\map{\Gamma}]{\map{e}.(\sType{t}).f}{\texttt{Any}}$.
\item[] \caseof{t-var}
Immediate by our definition of $\map{\Gamma}$.
\item[] \caseof{t-literal}
\begin{pfsteps*}
\pf[1]{\wellTyped[]{\sType{t}\typeActualReceive\sytxBrace{\multi{e}}}{\sType{t}\typeActualReceive}}
\pf[2]{\dict{\sType{t}\typeActualReceive\sytxBrace{\multi{e}}}{
\sType{t}\sytxBrace{\multi{\lex{e}}, \lex{\phi}}}}
\pf[3]{\lex{\phi} = \makeDict{\phi, \Phi}}
\pfstep{\inversion{t-literal}}
{4}{\type \sType{t}\typeFormalType \struct{x~\tau}}
\pfstep{\rulename{d-struct}}
{5}{\dict[]{\type \sType{t}\typeFormalType \struct{x~\tau}}{
\type \sType{t} \struct{x~\texttt{Any}, \method{asParam}{\Phi}}
}}
\end{pfsteps*}
Each $\lex{e_i}$ implements the $\texttt{Any}$ type, while by
Lemma~\ref{lem:typeformalsmatchasparams}, $\lex{\phi}$
implements $\method{asParam}{\Phi}$. As such
\begin{pfsteps*}
\pf[6]{\wellTyped{\sType{t}\sytxBrace{\multi{\lex{e}}, \lex{\phi}}}{\sType{t}}}
\end{pfsteps*}
\item[] \caseof{t-call}
\begin{pfsteps*}
\pf[1]{\wellTyped{e.m\typeActualMethod(\multi{e})}{\tau}}
\pfSubcase{\wellTyped{e}{\alpha}}
\pf[2]{\dict{
e.m\typeActualMethod(\multi{e})
}{
\eta(\alpha).m.\texttt{Apply}(\lex{e}, \lex\psi, \multi{\lex{e}})
}}
\end{pfsteps*}
Since \pfref{1} holds, we know that the bound of $\alpha$
($\iType{t}\typeActualReceive = \Delta(\alpha)$)
contains the method $m$; since the type of the
dictionary $\eta(\alpha)$ is $\dictName{\iType{t}}$,
we know that $\dictName{\iType{t}}$ has a field $m$.
We further know that the field $m$ has type
$\nAryFunction{n}$ where $n = |\psi| + |\multi{e}|$.
Because all arguments to the $\texttt{Apply}$ method are of type
$\texttt{Any}$ the rhs of \pfref{2} is well typed.
\pfSubcase{\wellTyped{e}{t\typeActualReceive}}
\begin{pfsteps*}
\pf[3]{\dict{
e.m\typeActualMethod(\multi{e})
}{
\lex{e}.m(\lex\psi, \multi{\lex{e}})
}}
\end{pfsteps*}
We can combine the cases where $t$ is a structure
or an interface since \rulename{d-meth} and
\rulename{d-spec} both do the same thing.
If $m\typeFormalMethod(\multi{x~\tau})~\tau \in
\methods_{\Delta}(t\typeActualReceive)$ then the translation
produces $m(\lex\Psi, \multi{x~\texttt{Any}})~\texttt{Any} \in
\methods(t)$.
\end{itemize}
\end{proof}
\thmopcorrespond*
\begin{proof}
By induction over the assumed reduction.
\begin{itemize}
\item[] \caseof{r-fields} --- (a) direction
\begin{pfsteps*}
\pf[1]{\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.f_i \longrightarrow v_i}
\pf[2]{\dict[\emptyset; \emptyset; \emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.f_i
}{
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset; \emptyset]{\phi, \Phi}}.f_i
}}
\end{pfsteps*}
Inversion on \rulename{r-fields} \pfref{1} and the definition of
$\fields$ gives us
\begin{pfsteps*}
\pf[3]{(\multi{f~\tau})[\eta] = \fields(\sType{t}\typeActualReceive)}
\pf[4]{\type \sType{t}\typeFormalType \struct{\multi{f~\tau}}\in \multi{D}}
\end{pfsteps*}
Applying the dictionary translation rule \rulename{d-struct} to \pfref{4} we get
\begin{pfsteps*}
\pf[5]{\type \sType{t} \struct{\multi{f~\texttt{Any}}, \multi{\texttt{dict}~u}} \in \multi{\lex{D}}}
\pf[6]{(\multi{f~\texttt{Any}}, \multi{\texttt{dict}~u}) = \fields(\sType{t})}
\pfstep{\rulename{r-fields} \pfref{6, 2}}
{7}{\reduction{
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset; \emptyset]{\phi, \Phi}}.f_i
}{
\lex{v_i}
}}
\pfstep{\inversion{d-value} \pfref{2}}
{8}{
\dict[\emptyset;\emptyset;\emptyset]{v_i}{\lex{v_i}}
}
\end{pfsteps*}
\item[] \caseof{r-fields} --- (b) direction\\
Mostly the same as the (a) direction, since there are no $\dictred$
reductions available to
$\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset; \emptyset]{\phi, \Phi}}.f_i$,
as both $\multi{\lex{v}}$ and $\makeDict[\emptyset;\emptyset]{\phi,\Phi}$ are values.
\item[] \caseof{r-call} --- (a) direction \\
We begin by stating our assumptions explicitly
\begin{pfsteps*}
\pf[1]{\reduction{
v.m\typeActualMethod(\multi{v})
}{
e[\theta][\texttt{this}\mathbin{:=} v, \multi{x\mathbin{:=} v}]
}
}
\pf[2]{
\wellTyped[\emptyset; \emptyset]{v.m\typeActualMethod(\multi{v})}{\tau[\theta]}
}
\end{pfsteps*}
with $v$ of the form
\begin{pfsteps*}
\pf[vform]{v = \sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}}
\pf[7]{\wellTyped[\emptyset; \emptyset]{v}{\sType{t}\typeActualReceive}}
\end{pfsteps*}
By analysing the proof tree of \pfref{1} using inversion on \rulename{r-call}
and the definition of $\body$ we get
\begin{pfsteps*}
\pf[4]{
(\texttt{this}:\sType{t}\typeActualReceive,~ \multi{x:\tau}).e[\theta]
= \body(\vtype(v).m\typeActualMethod)
}
\pf[5]{\theta = (\Phi, \Psi := \phi, \psi)}
\pf[6]{\funcDelc{\sType{t}\typeFormalReceive}{m\typeFormalMethod}{\multi{x~\tau}}{\tau}{\return~e} \in \multi{D}}
\end{pfsteps*}
and so $v.m\typeActualMethod(\multi{v})$ is translated using rule \rulename{d-call}
\begin{pfsteps*}
\pf[8]{\dict[\emptyset;\emptyset; \emptyset]{
v.m\typeActualMethod(\multi{v})
}{
\lex{v}.(\sType{t}).m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}})
}}
\end{pfsteps*}
where $\lex{v}$ is defined using \rulename{d-value}
\begin{pfsteps*}
\pf[vddagger]{
\dict[\emptyset; \emptyset; \emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}
}{
\sType{t}\sytxBrace{\multi{\lex{v_1}}, \makeDict[\emptyset, \emptyset]{\phi, \Phi}}
}
}
\end{pfsteps*}
With $\Phi = (\typeFormal)$ and
$\Psi = (\typeFormal[\multi{\beta~\iType{u}[\Psi']}])$,
the method definition \pfref{6} is translated using \rulename{d-meth}
\begin{pfsteps*}
\pf[9]{\eta = \multi{\alpha\mapsto\texttt{this}.\texttt{dict}}, \multi{\beta \mapsto \texttt{dict}}}
\pf[10]{\dict[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}{\lex{e}}}
\pf[11]{\dict[]{
\funcDelc{\sType{t}\typeFormalReceive}{m\typeFormalMethod}{\multi{x~\tau}}{\tau}{\return~e} $\\\qquad$
}{
\funcDelc{\sType{t}}{m}{\multi{\texttt{dict}~\dictName{\iType{u}}},~\multi{x~\texttt{Any}}}{\texttt{Any}}{\return~\lex{e}}
}}
\end{pfsteps*}
From here on we write $\lex{e}$ using the functional notation
\[\lex{e} = \map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}\]
Now that we have fleshed out the translation we begin to look at
the translated term's reductions.
For our value $v$ of type $\sType{t}\typeActualReceive$, the translated
term $\lex{v}$ is both a value and of type $\sType{t}$.
This is immediately evident by \rulename{d-value}.
As such the assertion is always resolved by $\red_{\text{e}}$.
\begin{pfsteps*}
\pf[12]{\lex{v}.(\sType{t}).m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}}) \red_{\text{e}}
\lex{v}.m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}})}
\end{pfsteps*}
resolving the method call to the implementation in \pfref{11}
\begin{pfsteps*}
\pf[13]{
\lex{v}.m(\makeDict[\emptyset; \emptyset]{\psi, \Psi}, \multi{\lex{v}})
$\\\qquad$\longrightarrow
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\texttt{this} \mathbin{:=} \lex{v},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\multi{x \mathbin{:=} \lex{v}}]
}
\end{pfsteps*}
By the definition of $\lex{v}$, we can separate the substitution $\texttt{this}\mathbin{:=} \lex{v}$ into\\
$\texttt{this}\mathbin{:=} \lex{v}, \multi{\texttt{this}.\texttt{dict} \mathbin{:=} \lex{v}.\texttt{dict}}$, meaning that we can
rewrite the reduced term and then apply Lemmas~\ref{lem:preposttype} and~\ref{lem:prepostval}
\begin{pfsteps*}
\pf[13]{
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\texttt{this} \mathbin{:=} \lex{v},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\multi{x \mathbin{:=} \lex{v}}]$\\\qquad$
=
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \lex{v}.\texttt{dict}},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\texttt{this} \mathbin{:=} \lex{v},
\multi{x \mathbin{:=} \lex{v}}]$\\\qquad$
=
\map[\Phi,\Psi; \eta; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e}
[\multi{\texttt{this}.\texttt{dict}} \mathbin{:=} \makeDict[\emptyset;\emptyset]{\phi, \Phi},
\multi{\texttt{dict}} \mathbin{:=} \makeDict[\emptyset; \emptyset]{\psi, \Psi},
\texttt{this} \mathbin{:=} \lex{v},
\multi{x \mathbin{:=} \lex{v}}]$\\\qquad$
\dictred^*
\map[\emptyset; \emptyset; \texttt{this} : {\sType{t}[\multi{\alpha}]}, \multi{x:\tau}]{e[\theta]}
[ \texttt{this} \mathbin{:=} \lex{v},
\multi{x \mathbin{:=} \lex{v}}]
$\\\qquad$
\dictred^*
\map[\emptyset; \emptyset; \emptyset]{e[\theta][\texttt{this} \mathbin{:=} v,
\multi{x \mathbin{:=} v}]}
}
\end{pfsteps*}
\item[] \caseof{r-call} --- (b) direction
\begin{pfsteps*}
\pf[1]{\wellTyped[\emptyset; \emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}.m\typeActualMethod(\multi{v_2})
}{\tau[\theta]}}
\pf[2]{\dict[\emptyset;\emptyset;\emptyset]{
\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}.m\typeActualMethod(\multi{v_2})
}{
\sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.(\sType{t}).m(\lex{\psi},\multi{\map[\emptyset;\emptyset;\emptyset]{v_2}})
}
}
\pf[3]{\lex{\phi} = \makeDict[\emptyset;\emptyset]{\phi, \Phi}}
\pf[4]{\lex{\psi} = \makeDict[\emptyset;\emptyset]{\psi, \Psi}}
\end{pfsteps*}
By the well-typedness of \pfref{1} we know that
\begin{pfsteps*}
\pf[5]{\funcDelc{\sType{t}\typeFormalReceive}{m\typeFormalMethod}{ \multi{x~\tau}}{\tau}{\return e} \in \multi{D}}
\pf[6]{\type \sType{t}\typeFormalType \struct{\multi{y~\sigma}} \in \multi{D}}
\end{pfsteps*}
Translating \pfref{5} with $\Delta = \Phi, \Psi$ where $\Phi = \multi{\alpha~\iType{\tau}}$,
$\Psi = \multi{\beta~\iType{\sigma}}$, and
$\eta = \multi{\alpha \mapsto \texttt{this}.\texttt{dict}}, \multi{\beta \mapsto \texttt{dict}}$ we get
\begin{pfsteps*}
\pf[7]{\funcDelc{\sType{t}}{m}{ \multi{x~\texttt{Any}}}{\texttt{Any}}{\return \map{e}} \in \multi{\lex D}}
\end{pfsteps*}
The (b) direction assumes a reduction on the translated term.
We first note that $\makeDict[\emptyset;\emptyset]{\cdots}$ is always
a value. We then consider the trivial $\red_{\text{e}}$ reduction available
before taking the \rulename{r-call} step.
\begin{pfsteps*}
\pf[8]{
\sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.(\sType{t}).m(\lex{\psi},\multi{\map[\emptyset;\emptyset;\emptyset]{v_2}})
$\\\qquad$ \red_{\text{e}}
\sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.m(\lex{\psi},\multi{\map[\emptyset;\emptyset;\emptyset]{v_2}})
$\\\qquad$ \longrightarrow
\map{e}[\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}},
\multi{\texttt{dict}} \mathbin{:=} \lex{\psi}]
$\\\qquad$ =
\map{e}[\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.\texttt{dict}},
\multi{\texttt{dict}} \mathbin{:=} \lex{\psi}]
[
\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}}
]
}
\end{pfsteps*}
When we consider the $\dictred$ reduction
we can relate $\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.\texttt{dict}}$
and $\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \lex{\phi}}$. This allows us to use
Lemmas~\ref{lem:preposttype} and~\ref{lem:prepostval}.
\begin{pfsteps*}
\pf[8]{
\map{e}[\multi{\texttt{this}.\texttt{dict} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}}.\texttt{dict}},
\multi{\texttt{dict}} \mathbin{:=} \lex{\psi}]
[
\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}}
]
$\\\qquad$ \dictred^*
\map[\emptyset;\emptyset;\Gamma]{e[\Phi \mathbin{:=} \phi, \Psi \mathbin{:=} \psi]}
[
\texttt{this} \mathbin{:=} \sType{t}\sytxBrace{\multi{\map[\emptyset;\emptyset;\emptyset]{v_1}}, \lex{\phi}},
\multi{x \mathbin{:=} \map[\emptyset;\emptyset;\emptyset]{v_2}}
]
$\\\qquad$ \dictred^*
\map[\emptyset;\emptyset;\emptyset]{e[\Phi \mathbin{:=} \phi, \Psi \mathbin{:=} \psi]
[\texttt{this} \mathbin{:=} \sType{t}[\phi]\sytxBrace{\multi{v_1}},
\multi{x \mathbin{:=} v_2}]
}
}
\end{pfsteps*}
We now look at the (only) reduction available to the original term
\begin{pfsteps*}
\pf[9]{\sType{t}\typeActualReceive\sytxBrace{\multi{v_1}}.m\typeActualMethod(\multi{v_2})
\longrightarrow e[\Phi \mathbin{:=} \phi, \Psi \mathbin{:=} \psi][
\texttt{this} \mathbin{:=} \sType{t}\typeActualReceive\sytxBrace{\multi{v_1}},
\multi{x \mathbin{:=} v_2}]}
\end{pfsteps*}
\item[] \caseof{r-assert} --- (a) direction
\begin{pfsteps*}
\pf[1]{\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)
\longrightarrow \sType{t}\typeActualReceive\sytxBrace{\multi{v}}}
\pf[2]{\dict[\emptyset;\emptyset;\emptyset]
{\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)}
{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})}}
\pf[3]{\type~\sType{t}\typeFormalType~T \in \multi{D}}
\pf[4]{\lex{\phi} = \makeDict[\emptyset;\emptyset]{\phi, \Phi}}
\pfstep{\inversion{r-assert}~\pfref{1}}
{5}{\subtype[\emptyset]{\sType{t}\typeActualReceive}{\tau}}
\end{pfsteps*}
As such we know that $\typemeta[\emptyset]{\tau}.\texttt{tryCast}$
should return (as opposed to panicking) if and only if
$\subtype[\emptyset]{\sType{t}\typeActualReceive}{\tau}$.
\pfSubcase{\tau = \iType{u}\typeActualMethod}
For \pfref{5} to hold, the following must be satisfied
\begin{pfsteps*}
\pf[6]{\methods_\emptyset(\sType{t}\typeActualReceive) \supseteq
\methods_\emptyset(\iType{u}\typeActualMethod)}
\pf[7]{\type~\iType{u}\typeFormalMethod~\interface{\multi{S}} \in \multi{D}}
\end{pfsteps*}
For all $mM_{u} \in \multi{S}$ there
must exist a function
\[\func~(\texttt{this}~\sType{t}\typeFormalReceive)~mM_t~\sytxBrace{\return e} \in \multi{D}\]
such that \[M_u[\Psi \mathbin{:=} \psi] = M_t[\Phi \mathbin{:=} \phi]\]
To show that this property is preserved we first need to elaborate
a number of other definitions.
Let $\Psi = (\typeFormal[\multi{\beta~\iType{\sigma}}])$,
and the map $\zeta$ be $\{\multi{\beta \mapsto \texttt{this}.\texttt{\_type}}\}$.
\begin{pfsteps*}
\pf[9]{\dict[]{\type~\iType{u}\typeFormalMethod~\interface{\multi{S}}}{ $\\\qquad$
\type \iType{u} \interface{\multi{\lex{S}},~\multi{\method{spec\_mdata}{S}}}$\\\qquad$
\funcDelc{\metadataName{\iType{u}}}{\texttt{tryCast}}{x~\texttt{Any}}{\texttt{Any}}{$\\\qquad$
\qquad\left\{
\lit{if} (x.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;$\\\qquad$
\qquad \return x
$\\\qquad$
}
}}
\end{pfsteps*}
For $\Phi = (\typeFormal)$ and $\phi = \multi{\tau}$, let the map
$\zeta' = \{\alpha_i \mapsto \texttt{this}.\texttt{dict}_i.\texttt{\_type}\}$.
\begin{pfsteps*}
\pf[10]{\dict[]{
\func~(\texttt{this}~\sType{t}\typeFormalReceive)~mM_t~\sytxBrace{\return e}$\\\qquad$
}{
\func~(\texttt{this}~\sType{t})~\method{spec\_name}{m}()~\fnMeta{n}~\sytxBrace{\return \signatureMeta[\zeta']{M_t}}
}}
\pf[11]{\lex{\phi} = \makeDict[\emptyset;\emptyset]{\multi{\tau}, \typeFormal}
= \multi{\dictName{\iType{t}}\sytxBrace{\multi{ptr}, \typemeta[\emptyset]{\tau}} }}
\end{pfsteps*}
We may now consider the reduction of the translated term $\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})$
{\small
\begin{pfsteps*}
\pf[12]{
\typemeta[\emptyset]{\iType{u}\typeActualMethod}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})
$\\\qquad$ \longrightarrow $\\\qquad$
\left\{
\lit{if} (\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$\return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
We can now use Lemma \ref{lem:methspec} to resolve $\zeta$
{\small
\begin{pfsteps*}
\pf[13]{ \left\{
\lit{if} (\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$ \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}}
\end{pfsteps*}
}
Using $\ensuremath{\rho_{\text{sim}}}$ we can further reduce the term.
While this would happen in sequential order, we simplify the presentation of the proof.
We begin by looking at $\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\iType{u})$.
Since $\subtype[\emptyset]{\sType{t}\typeActualReceive}{\iType{u}\typeActualMethod}$
we know that $\sType{t}$ must possess each method defined by $\iType{u}$.
{\small
\begin{pfsteps*}
\pf[14]{ \red_{\text{s}}^* \left\{
\lit{if} (\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$ \quad \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$
\red_{\text{s}}^*
\left\{
\lit{if} (\signatureMeta[\zeta']{M_t} \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$ \quad \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
We can now use Lemma \ref{lem:methspec} to resolve $\zeta'$
{\small
\begin{pfsteps*}
\pf[15]{
\red_{\text{s}}^*
\left\{
\lit{if} (\signatureMeta[\emptyset]{M_t[\Phi \mathbin{:=} \phi]} \mathbin{!=}
\signatureMeta[\emptyset]{M_u[\Psi \mathbin{:=} \psi]}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;~ $\\\qquad$\quad \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
Since $M_u[\Psi \mathbin{:=} \psi] = M_t[\Phi \mathbin{:=} \phi]$, no $\lit{if}$ is
triggered.
\begin{pfsteps*}
\pf[16]{
\red_{\text{s}}^*
\return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$
\red_{\text{s}} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}}
\end{pfsteps*}
which is the desired term.
\pfSubcase{\tau = \sType{t}[\phi]}
For \pfref{5} to hold when $\tau$ is a structure type, it must
be precisely the same type as the target of the assertion.
\begin{pfsteps*}
\pf[17]{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})
$\\\qquad$
= \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\phi}}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}})
$\\\qquad$
\longrightarrow \{\lit{if} ~ \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\phi}}.\texttt{\_type}_i \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
Once again we use $\ensuremath{\rho_{\text{sim}}}$ to resolve the assertion. We also
use the same proof simplification and ignore explicit sequentiality.
{\small
\begin{pfsteps*}
\pf[18]{\{\lit{if} ~ \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\phi}}.\texttt{\_type}_i \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}.\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \lex{\phi_i}.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\{\lit{if} ~ \typemeta[\emptyset]{\phi_i} \mathbin{!=} \typemeta[\emptyset]{\phi_i}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
$\\\qquad$ \red_{\text{s}}^*
\sTypeInit{t}{\multi{\lex{v}}, \lex{\phi}}
}
\end{pfsteps*}
}
\item[] \caseof{r-assert} --- (b) direction\\
This direction closely follows the (a) direction, except that it
does not assume \\$\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)
\longrightarrow \sType{t}\typeActualReceive\sytxBrace{\multi{v}}$.
Yet by our assumption that $\sType{t}\typeActualReceive\sytxBrace{\multi{v}}.(\tau)$
does not produce a type assertion error, this reduction must exist. It
then suffices to show that the source and target terms' reductions match,
which is given in (a).
\item[] \caseof{r-assert} --- (c) direction\\
We first note that $e = v.(\tau)$ is the only case for (c)
as no other term can produce a panic, and that
$\Longrightarrow$ is defined as the greatest reduction available.
As such, for $\lex{e} \Longrightarrow e'$ there is no further $\red_{\text{s}}$ reduction from $e'$.
\begin{pfsteps*}
\pf[1]{v.(\tau)~\ensuremath{\mathsf{panic}}}
\pf[2]{\neg\,(\subtype[\emptyset]{\vtype(v)}{\tau})}
\pf[3]{\dict[]{v.(\tau)}{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\map[\emptyset;\emptyset;\emptyset]{v})}}
\pf[4]{v = \sType{t}\typeActualReceive\sytxBrace{\multi{v}}}
\pf[5]{\map[\emptyset;\emptyset;\emptyset]{v} =
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}}
\end{pfsteps*}
\pfSubcase{\tau = \iType{u}\typeActualMethod}\\
For \pfref{2} to hold there must be at least one method
$mM \in \methods_\emptyset(\tau)$
such that $mM \not\in \methods_\emptyset(\vtype(v))$.
Let $\Psi = (\typeFormal[\multi{\beta~\iType{\sigma}}])$,
and the map $\zeta$ be $\{\multi{\beta \mapsto \texttt{this}.\texttt{\_type}}\}$.
\begin{pfsteps*}
\pf[9]{\dict[]{\type~\iType{u}\typeFormalMethod~\interface{\multi{S}}}{ $\\\qquad$
\type \iType{u} \interface{\multi{\lex{S}},~\multi{\method{spec\_mdata}{S}}}$\\\qquad$
\funcDelc{\metadataName{\iType{u}}}{\texttt{tryCast}}{x~\texttt{Any}}{\texttt{Any}}{$\\\qquad$
\qquad\left\{
\lit{if} (x.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ;$\\\qquad$
\qquad \return x
$\\\qquad$
}
}}
\end{pfsteps*}
The translated term will always be able to make the reduction
\begin{pfsteps*}
\pf[8]{
\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\map[\emptyset;\emptyset;\emptyset]{v})
\longrightarrow
\left\{
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ; $\\\qquad$\quad \return x
}
\end{pfsteps*}
For convenience we assume that the problematic method $m$ is the
first to be checked. If this is not the case then we may first
discharge all passing checks using $\red_{\text{s}}$, as described in the \rulename{r-assert}
(a) case.
\begin{pfsteps*}
\pf[7]{
\left\{
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
}
~\middle|~
\begin{matrix}
mM_u \in \multi{S}
\end{matrix}
\right\} ; $\\\qquad$\quad \return x
$\\\qquad$
\red_{\text{s}}^*
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=}
\signatureMeta{M_u}
)~ \sytxBrace{ \lit{panic}
} ; \cdots
}
\end{pfsteps*}
We now need to consider the two possible cases in which
$mM \not\in \methods_\emptyset(\vtype(v))$ could hold.
Either there is no method called $m$ in $\methods_\emptyset(\vtype(v))$,
or there is a method $m$ but with a different method signature.
In the former case the assertion $E[\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u})]$
will panic since, by our assumption that the translation never introduces a
name collision, the method $m$ will not be in $\methods(\sType{t})$
(the methods of $\vtype(\map[\emptyset;\emptyset;\emptyset]{v})$).
In the latter case there are
$mM_t[\Phi \mathbin{:=} \phi] \in \methods_\emptyset(\vtype(v))$
and $mM_u[\Psi \mathbin{:=} \psi] \in \methods_\emptyset(\iType{u}\typeActualMethod)$
such that $M_t[\Phi \mathbin{:=} \phi] \mathbin{!=} M_u[\Psi \mathbin{:=} \psi]$;
we show that the $\lit{if}$ then branches to $\lit{panic}$.
Let $\Phi = (\typeFormal)$, $\phi = \multi{\tau}$, and the map
$\zeta' = \{\alpha_i \mapsto \texttt{this}.\texttt{dict}_i.\texttt{\_type}\}$.
\begin{pfsteps*}
\pf[10]{\dict[]{
\func~(\texttt{this}~\sType{t}\typeFormalReceive)~mM_t~\sytxBrace{\return e}$\\\qquad$
}{
\func~(\texttt{this}~\sType{t})~\method{spec\_name}{m}()~\fnMeta{n}~\sytxBrace{\return \signatureMeta[\zeta']{M_t}}
}}
\pf[11]{\map[\emptyset;\emptyset;\emptyset]{v}
=
\sTypeInit{t}{\multi{\lex{v}}, \multi{\dictName{\iType{t}}\sytxBrace{\multi{ptr}, \typemeta[\emptyset]{\tau}} }}}
\pf[12]{
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.(\iType{u}).\method{spec\_name}{m}() \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} (\map[\emptyset;\emptyset;\emptyset]{v}.\method{spec\_name}{m}() \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} (\signatureMeta[\zeta']{M_t} \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
}
\end{pfsteps*}
We can now apply Lemma~\ref{lem:methspec}, first to the lhs and then to the rhs
\begin{pfsteps*}
\pf[13]{
\lit{if} (\signatureMeta[\zeta']{M_t} \mathbin{!=} \signatureMeta{M_u})~ \sytxBrace{ \lit{panic}} ; \cdots
$\\\qquad$
\red_{\text{s}}^*
\lit{if} (\signatureMeta[\emptyset]{M_t[\Phi\mathbin{:=}\phi]} \mathbin{!=} \signatureMeta[\emptyset]{M_u[\Psi\mathbin{:=}\psi]})~ \sytxBrace{ \lit{panic}} ; \cdots
}
\end{pfsteps*}
By $M_t[\Phi \mathbin{:=} \phi] \mathbin{!=} M_u[\Psi \mathbin{:=} \psi]$ this reduces to
the desired $\lit{panic}$.
\pfSubcase{\tau = \sType{u}\typeActualMethod}
{\small
\begin{pfsteps*}
\pf[14]{\dict[]{\type \sType{u}[\Phi] \struct{\cdots}$\\\qquad$}{
\type \metadataName{\sType{u}} \struct{\multi{\texttt{\_type}~\texttt{\_type\_mdata}}}
$\\\qquad$
\funcDelc{\texttt{this}~\metadataName{\sType{u}}}{\texttt{tryCast}}{x~\texttt{Any}}{\texttt{Any}}{
$\\\qquad$
\qquad x.(\sType{u})~;~\{\lit{if} ~ \texttt{this}.\texttt{\_type}_i \mathbin{!=} x.(\sType{u}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return x
$\\\qquad$
}
}}
\end{pfsteps*}
}
Let $\vtype(v) = \sType{t}\typeActualReceive$.
If $\tau$ is a struct then there are two cases:
either
$\sType{u} \mathbin{!=} \sType{t}$, or
$\sType{u} = \sType{t}$
but, for $\phi = \multi{\sigma}$ and $\psi = \multi{\tau}$, there
exists an $i$ such that $\sigma_i \mathbin{!=} \tau_i$.
We first consider the case $\sType{u} \mathbin{!=} \sType{t}$.
Note that $\vtype(\map[\emptyset;\emptyset;\emptyset]{v}) = \sType{t}$.
\begin{pfsteps*}
\pf[20]{\typemeta[\emptyset]{\sType{u}\typeActualMethod}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
= \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
\longrightarrow \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{u})~;\cdots
}
\end{pfsteps*}
By our assumption $\sType{u} \mathbin{!=} \sType{t}$ we get the desired
\ensuremath{\mathsf{panic}}.
We now consider the case of $\sType{u} = \sType{t}$
where, for $\phi = \multi{\sigma}$ and $\psi = \multi{\tau}$, there
exists an $i$ such that $\sigma_i \mathbin{!=} \tau_i$.
\begin{pfsteps*}
\pf[21]{\typemeta[\emptyset]{\tau}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
= \metadataName{\sType{t}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{tryCast}(\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}})
$\\\qquad$
\longrightarrow \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t})~;~\{\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \cdots
$\\\qquad$
\{\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \cdots
}
\end{pfsteps*}
We once again only need to consider the (lowest) $i$ for which $\sigma_i \mathbin{!=} \tau_i$.
All prior $\lit{if}$ statements pass as per \rulename{r-assert} (a).
\begin{pfsteps*}
\pf[22]{
\{\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}}\}_{i<n} ; \return \cdots
$\\\qquad$
\red_{\text{s}}^*
\lit{if} ~ \metadataName{\sType{u}}\sytxBrace{\typemeta[\emptyset]{\psi}}.\texttt{\_type}_i \mathbin{!=}
$\\\qquad$ \qquad \qquad \sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.(\sType{t}).\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\sTypeInit{t}{\multi{\lex{v}}, \makeDict[\emptyset;\emptyset]{\phi, \Phi}}.\texttt{dict}_i.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\makeDict[\emptyset;\emptyset]{\sigma_i, \Phi_i}.\texttt{\_type}~\sytxBrace{\lit{panic}} ; \return \cdots
$\\\qquad$
\red_{\text{s}}
\lit{if} ~ \typemeta[\emptyset]{\tau_i} \mathbin{!=}
\typemeta[\emptyset]{\sigma_i}~\sytxBrace{\lit{panic}} ; \return \cdots
}
\end{pfsteps*}
By our assumption that $\tau_i \mathbin{!=} \sigma_i$ we get the desired
\ensuremath{\mathsf{panic}}.
\item[] \caseof{r-assert} --- (d) direction\\
Once again we need only consider $e = v.(\tau)$.
This case follows from case (c), but we must first
show that there exists at least one reduction $\lex{e}\red_{\text{e}}^*\longrightarrow d$.
This $d$ then reduces by $\red_{\text{s}}^*$ to $e'$, where
$e'$ is a type assertion error.
We know that $d$ exists by observing that the translation of $v.(\tau)$
will always reduce ($\longrightarrow$) by \rulename{r-call} on $\texttt{tryCast}$.
This $d$ will then reduce ($\red_{\text{s}}^*$) to $e'$, which by
the same logic as (c) is a type assertion error.
\item[] \caseof{r-context} --- (a) direction\\
The only non-immediate case for \rulename{r-context} is when
$E = \square.m\typeActualMethod(\multi{v})$.
\begin{pfsteps*}
\pf[1]{\infer{
e.m\typeActualMethod(\multi{v})
\longrightarrow
d.m\typeActualMethod(\multi{v})
}{
e\longrightarrow d
}}
\pf[2]{
\wellTyped[\emptyset; \emptyset]{ e.m\typeActualMethod(\multi{v}) }{\sigma}
}
\pfstep{\inversion{t-call}}
{3}{\wellTyped[\emptyset; \emptyset]{e}{t[\phi]}}
\end{pfsteps*}
By preservation (Theorem~\ref{thm:fggTypePreservation})
\begin{pfsteps*}
\pf[4]{\wellTyped[\emptyset; \emptyset]{d}{u[\phi']}}
\pf[5]{\subtype[\emptyset]{u[\phi']}{t[\phi]}}
\end{pfsteps*}
Translating $E[e]$ and $E[d]$ we get
\begin{pfsteps*}
\pf[6]{\map[\emptyset;\emptyset;\emptyset]{E[e]} =
\map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}} )
}
\pf[7]{\map[\emptyset;\emptyset;\emptyset]{E[d]} =
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}} )
}
\end{pfsteps*}
By the induction hypothesis on $e\longrightarrow d$
\begin{pfsteps*}
\pf[8]{\map[\emptyset;\emptyset;\emptyset]{e}
\Longrightarrow \dictred^\ast
\map[\emptyset;\emptyset;\emptyset]{d}}
\end{pfsteps*}
Using the evaluation context $\square.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})$
\begin{pfsteps*}
\pf[9]{\map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
\Longrightarrow \dictred^\ast
\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
}
\end{pfsteps*}
Using synthetic assertion specialisation and Lemma~\ref{lem:sub:pres} on \pfref{5}
\begin{pfsteps*}
\pf[10]{\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
\dictred
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})}
\end{pfsteps*}
\item[] \caseof{r-context} --- (b) direction\\
The only non-immediate case for \rulename{r-context}
is for $E = \square.m\typeActualMethod(\multi{v})$
\begin{pfsteps*}
\pf[0]{\wellTyped[\emptyset;\emptyset]{e}{t[\phi]}}
\pf[2]{\map[\emptyset;\emptyset;\emptyset]{E[e]} =
\map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}} )
}
\pf[1]{ \map[\emptyset;\emptyset;\emptyset]{e}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) \Longrightarrow e'.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) }
\end{pfsteps*}
By inversion on the reduction $\Longrightarrow$ using context
$E' = \square.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})$
\begin{pfsteps*}
\pf[3]{ \map[\emptyset;\emptyset;\emptyset]{e} \Longrightarrow e'}
\end{pfsteps*}
By the induction hypothesis on \pfref{3} there exists $d$
\begin{pfsteps*}
\pf[4]{e\longrightarrow d}
\pf[5]{e' \dictred^* \map[\emptyset;\emptyset;\emptyset]{d}}
\pfstep{Theorem~\ref{thm:fggTypePreservation} \pfref{0}}
{7}{\wellTyped[\emptyset;\emptyset]{d}{u[\phi']}}
\pf[8]{\subtype[\emptyset]{u[\phi']}{t[\phi]}}
\end{pfsteps*}
Applying \pfref{5} on context $C=E'$ we get that
\begin{pfsteps*}
\pf[6]{e'.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) \dictred^*
\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) }
\end{pfsteps*}
Using synthetic assertion specialisation and Lemma~\ref{lem:sub:pres} on \pfref{8}
\begin{pfsteps*}
\pf[6]{\map[\emptyset;\emptyset;\emptyset]{d}.(t).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})
\dictred^*
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}}) }
\end{pfsteps*}
Using \rulename{r-context} on \pfref{4} and context $E$.
\begin{pfsteps*}
\pf[10]{E[e]\longrightarrow E[d]}
\end{pfsteps*}
Finally, using the typing of $d$ \pfref{7} we get the translation of $E[d]$
\begin{pfsteps*}
\pf[11]{\map[\emptyset;\emptyset;\emptyset]{E[d]} =
\map[\emptyset;\emptyset;\emptyset]{d}.(u).m(\lex{\psi},
\multi{\map[\emptyset;\emptyset;\emptyset]{e}})}
\end{pfsteps*}
\end{itemize}
\end{proof}
\lemredrewrite*
\begin{proof}
(1) is immediate as if $d_1\longrightarrow d_2$ then $E[d_1] \longrightarrow E[d_2] \longrightarrow e_3$.\\
(2) is by case analysis on the reduction $e_2\longrightarrow e_3$.
\begin{itemize}
\item[] \caseof{r-field}:
We have that $e_2 = \sTypeInit{t}{\multi{v}}$.
There are two possible options for the congruence
evaluation context $C$: either it is $\square$, or it is
of the form $\sTypeInit{t}{\multi{v}, \square, \multi{v}}$.
In either case we get a contradiction, as both cases
are captured by the standard evaluation context.
\item[] \caseof{r-call}:
Same logic as \rulename{r-field}.
\item[] \caseof{r-assert}:
Same logic as \rulename{r-field}.
\item[] \caseof{r-context}:
We begin with the assumption that the
congruence context $C$ deviates from the standard context
at the top level; namely, there do not exist $E'$, $C'$ such
that $C = E'[C']$. We do this for clarity.
In the situation where this does not
hold, we may simply add $E'$ where appropriate.
There are two cases for $C\neq E$. Either
$C$ is of the form
$\sTypeInit{t}{\multi{v}, e, \multi{e_1}, C_{\text{sub}}, \multi{e_2}}$ or
$v.m(\multi{v}, e, \multi{e_1}, C_{\text{sub}}, \multi{e_2})$.
We focus on the former.
The starting term $e_1$ is $\sTypeInit{t}{\multi{v}, e, \multi{e_1}, C_{\text{sub}}[d_1], \multi{e_2}}$,
the subsequent term $e_2$ is $\sTypeInit{t}{\multi{v}, e, \multi{e_1}, C_{\text{sub}}[d_2], \multi{e_2}}$,
and the final term $e_3$ is $\sTypeInit{t}{\multi{v}, d, \multi{e_1}, C_{\text{sub}}[d_2], \multi{e_2}}$
for some $d$ such that $e\longrightarrow d$.
Our initial term may make a standard reduction to
$e_2' = \sTypeInit{t}{\multi{v}, d, \multi{e_1}, C_{\text{sub}}[d_1], \multi{e_2}}$,
followed by a $\rightarrowtriangle$ reduction using $C'' = \sTypeInit{t}{\multi{v}, d, \multi{e_1}, C_{\text{sub}}, \multi{e_2}}$
to $e_3$.
\end{itemize}
\end{proof}
\lemredvalue*
\begin{proof}
Assume for contradiction that $\rightarrowtriangle \not\in \longrightarrow$.
Either
\begin{itemize}
\item[] \resetpfcounter\textbf{Case : } $C[e']\rightarrowtriangle C[d] = v$ where $C$ is not the
standard $\longrightarrow$ reduction context. Since there
must be another $\longrightarrow$ reduction from $C[d]$ using the
standard reduction context $E$, the term $C[d]$ cannot be a value.
\item[] \resetpfcounter\textbf{Case : } $C[e'.(u)]\rightarrowtriangle C[e'.(t)]$. Immediate as $ C[e'.(t)]$
is not a value.
\end{itemize}
\end{proof}
\section{Appendix: Motivating Example Translated Using Erasure}
\label{sec:erasure-example}
\input{figs/dyn/fgg-nomono.tex}
\section{Appendix: \glsentrylong{fg}}
\label{appendix:fg}
\label{app:fg}
For the reviewer's convenience, this section provides
further explanations of the syntax and the full definitions of the typing
system from \cite{griesemer2020featherweight}.
\subsection{\glsentrylong{fg} Syntax}
\label{app:fg:syntax}
We explain
the syntax of \gls{fg} in
Figure~\ref{fig:fg:syntax}.
The meta variables for field ($f$), method ($m$), variable
($x$), structure type names ($t_S, u_S$), and interface type names
($t_I, u_I$) range over their respective namespaces. Types ($t, u$)
range over both structures and interfaces.
A program ($P$) is given by a sequence of declarations ($\multi{D}$)
along with a {\bf main} function which acts as the top-level expression.
We often shorten this as $P = \program{e}$.
Expressions in \gls{fg} are
variables ($x$), method calls ($e.m(\multi{e})$), structure literals
($t_S\{\multi{e}\}$), field selection ($e.f$), and type assertion
($e.(t)$).
Declarations ($D$) can take three forms:
\emph{structure}, \emph{interface}, or \emph{method declaration}. The
structure declaration ($\struct{\multi{f\ t}}$) gives a sequence of
typed fields whereas the interface declaration ($\interface{\multi{S}}$)
gives the method specifications which instances of that interface
should implement. A method specification ($m(\multi{x~t})~t$)
prescribes the name and type for implementing methods.
A method declaration ($\func (x\ t_S)\ m(\multi{x\ t})\ t_r\ \{b\}$)
defines a method $m$ on the structure $t_S$. This method accepts the
arguments $\multi{x\ t}$, which along with the receiver $x$ are passed
to the method body $b$. On a successful computation this method will
return a result of type $t_r$. The special $\lit{main}$ function acts
as the entrance function, and thus has no receiver, arguments or
return value.
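To make the syntax concrete, the following minimal \gls{fg} program (a
hypothetical example of ours, not taken from
\cite{griesemer2020featherweight}) exercises each construct: structure
and interface declarations, a method declaration, and a $\lit{main}$
function whose body combines a structure literal, a field selection, a
method call, and a type assertion.
\begin{lstfcgg}
type Any interface {}                 // implemented by every type
type Unit struct {}                   // a structure with no fields
type Pair struct { fst Any; snd Any } // a structure with two fields
type HasFst interface { Fst() Any }   // one method specification
func (this Pair) Fst() Any { return this.fst }
func main() {
  Pair{Unit{}, Unit{}}.(HasFst).Fst() // literal, assertion, call
}
\end{lstfcgg}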
\subsection{\glsentrylong{fg} Typing}
\label{app:fg:types}
\input{figs/fg/fg-typing}
For the reviewer's convenience,
we reproduce the typing system with minimal explanations.
Figure~\ref{fig:fg:types} gives the \gls{fg} typing rules
and auxiliary functions.
Environment $\Gamma$ is a sequence
of typed variable names ($\multi{x : t}$).
We assume
all variables in $\Gamma$ are distinct, and
write $\Gamma,x : t$ if $x\not\in \dom{\Gamma}$.
\myparagraph{Implements}
The \emph{implements relation} ($t <: u$) holds if type $t$ is a subtype of type $u$,
understood as a relationship in which a variable of type $u$ can be
substituted by any variable of type $t$.
A structure can only be
implemented by itself (\rulename{<:s}). An interface $t_I$ can be
implemented by any type that possesses at least the same methods as
$t_I$ (\rulename{<:i}).
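As an illustration (our own, hypothetical declarations), the subtype
relation below holds purely structurally; \inlinelstfcgg{Pt} never
mentions \inlinelstfcgg{HasX}:
\begin{lstfcgg}
type Any interface {}
type HasX interface { X() Any }
type Pt struct { x Any }
func (this Pt) X() Any { return this.x }
// Pt <: HasX by <:i, since Pt possesses the method X() Any;
// Pt <: Pt by <:s, and Pt implements no other structure.
\end{lstfcgg}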
\myparagraph{Well-formedness}
A well-formed term is one that is not only syntactically correct, but
one that also has semantic meaning.
$x \operatorname{\var{ok}}$ holds if $x$ is well
formed according to the typing rules, with the extension
$\wellFormed{x}$ if the term is well-formed in the environment
$\Gamma$.
A type declaration is
well-formed when the type it declares is well-formed (\rulename{t-type})
which happens when it is either a structure with distinct and well
formed fields (\rulename{t-struct}), or an interface with unique and
well-formed method specifications (\rulename{t-interface}). Method
specifications are well-formed when all argument types and the return
type are well-formed (\rulename{t-specification}).
\myparagraph{Method body and statement type checking}
The typing judgement $\wellTyped{x}{t}$ holds if the term $x$ has type $t$
in the environment $\Gamma$. A method
($\textbf{func}~(x~t_S)~m(\multi{x~t})~u~\{b\}$) is well-formed if the
type of the receiver, the return, and all arguments are well-formed,
with all names being distinct from one
another.
A structure literal ($\sType{t}\{\multi{e}\}$) is well-typed when each
field instantiation ($\multi{e}$) subtypes the field's
declared type (\rulename{t-literal}).
Field
assignment and access follow the order of declaration.
\myparagraph{Expression type checking}
Given an expression $e$ of type $u$, the type assertion $e.(t)$ casts the expression to
type $t$. There are three non-overlapping type assertion rules.
The Go specification only permits type assertions from an interface type, which
informs rules \rulename{t-assert$_I$} and \rulename{t-assert$_S$}.
An assertion between two interface types (\rulename{t-assert$_I$}) does not statically
check the assertion since the expression $e$ could evaluate to a term that
implements the target type $t$.
Assertion from an interface type~$u$ to a non-interface type~$t$
is allowed only if $t$~implements~$u$ (\rulename{t-assert$_S$}).
Not part of the Go specification and not needed for compile-time
checking,
the rule \rulename{t-stupid}
is only used for the type assertion $e.(t)$ where $e$ has evaluated to a concrete non-interface type.
This assertion provides no utility at compile time as an assertion from a non-interface type is either a no-op
or it unnecessarily erases type information -- yet without this rule a term may become ill-typed during evaluation.
More detailed explanations can be found in
\cite[\S~3.3]{griesemer2020featherweight}.
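The following sketch (a hypothetical example of ours) shows how such a
term arises during evaluation:
\begin{lstfcgg}
type Any interface {}
type A struct {}
type Box struct { v Any }
func main() {
  // Statically, Box{A{}}.v has the interface type Any, so the
  // assertion is typed by t-assert$_S$. Once the field selection
  // reduces, the residual term A{}.(A) has a concrete non-interface
  // receiver and remains typeable only via t-stupid.
  Box{A{}}.v.(A)
}
\end{lstfcgg}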
\begin{theorem}[Preservation]
\label{fcgSubjtCong}
\label{thm:fgTypePreservation}
{\rm (Theorem 3.3 in \cite[\S~3.3]{griesemer2020featherweight})}
If $\wellTyped[\emptyset]{e}{u}$ and $\reduction{e}{e'}$
then $\wellTyped[\emptyset]{e'}{t}$ for some $t <: u$.
\end{theorem}
\begin{theorem}[Progress]
\label{thm:fgProgress}
{\rm (Theorem 3.4 in \cite[\S~3.3]{griesemer2020featherweight})}
If $\wellTyped[\emptyset]{e}{u}$ then
$e$ is either a value, $\reduction{e}{e'}$ for some $e'$ or
$e$ panics.
\end{theorem}
\section{Appendix: \glsentrylong{fgg}}
\label{app:fgg}
\label{appendix:fgg}
\input{figs/fgg/fgg-typing}
For the reviewer's convenience, this section provides
the definitions and more explanations of the typing
system from those in \cite{griesemer2020featherweight}.
\subsection{\glsentrylong{fgg} Typing Rules}
Judgements are extended with the type environment $\Delta$, which
tracks type parameters and their bounds rather than type names.
The subtyping $\subtype{\tau}{\sigma}$ uses $\Delta$ where both $\tau$
and $\sigma$ may have type parameters in $\Delta$;
judgement $\wellFormed[\Delta]{\tau}$ says
the type ($\tau$) is well-formed
w.r.t. all type parameters declared in $\Delta$;
a method declaration is well-formed if
$\Phi$ and $\Psi$ are well-formed types formal of the receiver and the
method, yielding $\Delta$ ($\Phi;~\Psi\operatorname{\var{ok}}~\Delta$) and the receiver's type is
declared by $\Phi'$ such that $\Phi <: \Phi'$.
Judgements for expressions,
method calls and processes are extended w.r.t. $\Delta$, accordingly.
This is so that interface subtyping (\rulename{<:i}) may ensure that
type parameters still implement all methods that an interface
requires.
The type formal subtyping rule ($\Phi <: \Psi$) ensures that if the
type substitution $\Phi :=_\Delta \multi{\tau}$ is well-defined,
then $\Psi :=_\Delta \multi{\tau}$ is well-defined.
We deviate from \cite{griesemer2020featherweight}
in our typing for \rulename{t-func}. We require that
receiver types formal are identical to those in the structure's
declaration. This more closely follows the official Go proposal \cite{type-parameters-proposal}.
Rather than require the developer to write a full type formal
which must exactly match the structure's declaration, they instead provide
a receiver type parameter list which is converted to a type formal by
looking up the structure's type formal.
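Concretely, for a hypothetical declaration:
\begin{lstfcgg}
type Pair[$\alpha$ Any, $\beta$ Any] struct { left $\alpha$; right $\beta$ }
// The receiver lists only the type parameter names; their bounds
// (here Any, Any) are recovered from Pair's declaration.
func (this Pair[$\alpha$, $\beta$]) Left[]() $\alpha$ { return this.left }
\end{lstfcgg}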
When looking at method typing it becomes necessary to consider two
types formal, with the method's type formal ($\Psi$) depending on the
receiver's ($\Phi$). A method's type environment $\Delta$ is
constructed by the well-formed composition of the receiver's and
method's types formal ($\Phi;~\Psi\operatorname{\var{ok}}~\Delta$). This environment is
well-formed when $\Phi$ is well-formed in an empty environment while
the method's type formal $\Psi$ is well-formed under $\Phi$. Type formal
well-formedness ($\wellFormed[\Phi]{\Psi}$) holds when there is no
repetition in the type parameters between $\Phi$ and $\Psi$ and all
bounds in $\Psi$ are well-formed in the $\Phi, \Psi$ environment. This
definition allows mutually recursive bounds in $\Psi$.
A type declaration includes a type formal; this type formal is the
environment under which the declared structure or interface must be
well-formed. A structure is well-formed in an environment $\Phi$ when each
of its fields is well-formed under $\Phi$. An interface is well-formed
in $\Phi$ when each method it specifies is also well-formed under
$\Phi$.
A method specification is well-formed
($\wellFormed[\Phi]{m\typeFormalMethod(\multi{x~\tau})~\tau}$) when
its composite type environment is well-formed ($\Phi;~\Psi\operatorname{\var{ok}}~\Delta$)
and its argument and return types are well-formed under that
composite type environment.
\begin{theorem}[Preservation]
{\rm (Theorem 4.3 in \cite[\S~4]{griesemer2020featherweight})}
\label{lemma:fcgg:subjReduction:Expression}
\label{thm:fggTypePreservation}
If $\wellTyped[\emptyset; \emptyset]{e}{\tau}$
and $\reduction{e}{e'}$
then $\wellTyped[\emptyset; \emptyset]{e'}{\tau'}$,
for some $\tau'$ such that $\subtype[\emptyset; \emptyset]{\tau'}{\tau}$.
\end{theorem}
\begin{theorem}[Progress]
\label{thm:fggProgress}
{\rm (Theorem 4.4 in \cite[\S~4]{griesemer2020featherweight})}
If $\wellTyped[\emptyset;\emptyset]{e}{\tau}$ then
$e$ is either a value, $\reduction{e}{e'}$ for some $e'$ or
$e$ panics.
\end{theorem}
\pagebreak
\section{Appendix: Generic Implementations of Top 16 Statically Typed Generic Programming Languages}
\label{app:implementations}
\begin{table}[!htp]\centering
\scriptsize
\begin{tabular}{llll}\toprule
Programming Language &Mainstream Implementation &Memory Management &Runtime Environment \\\cmidrule{1-4}
Java &Erasure &Garbage Collection &JVM \\
Kotlin &Erasure &Garbage Collection &JVM \\
Scala &Erasure &Garbage Collection &JVM \\
C\# &Just-In-Time Specialisation + Non-Specialised Dictionary &Garbage Collection &.NET CLR \\
Visual Basic &Just-In-Time Specialisation + Non-Specialised Dictionary &Garbage Collection &.NET CLR \\
Dart &Erasure &Garbage Collection &Virtual Machine \\
Swift &Non-Specialised Dictionary/Monomorphisation* &Reference Counting &Native \\
Objective-C &Non-Specialised Dictionary/Monomorphisation* &Reference Counting &Native \\
Haskell &Non-Specialised Dictionary &Garbage Collection &Native \\
Go & Monomorphisation + Specialised Dictionary &Garbage Collection &Native \\
D & Monomorphisation &Garbage Collection &Native \\
C++ & Monomorphisation &Manual &Native \\
Rust & Monomorphisation &Manual &Native \\
Delphi & Monomorphisation &Manual &Native \\
Ada & Monomorphisation &Manual &Native \\
Fortran & Monomorphisation &Manual &Native \\
\bottomrule
\end{tabular}
\caption{Generic implementations of top 16 statically typed programming languages with generics. Languages are selected from the top 40 languages by IEEE Spectrum in 2021~\cite{TopProgr17:online}.
(*when source code available or specified by users.)
}\label{tab:app-pl-table }
\end{table}
\section{Conclusion}
\label{section:conclusion}
In this paper, we design and formalise a new source-to-source,
non-specialised, call-site
dictionary-passing translation of Go, and prove
essential correctness properties by
introducing a novel and general \emph{bisimulation up to} technique.
The theory guides a correct implementation of
the translation,
which we empirically compare along with the recently released Go~1.18\xspace,
an erasure translator, and two existing monomorphisation
translators~\cite{gotogo,griesemer2020featherweight},
with micro and real-world benchmarks.
We demonstrate that our dictionary-passing translator handles
an important class of Go programs (\textit{F}-bounded polymorphism and {\it nomono}{}
programs) beyond the capability of Go~1.18\xspace
and existing translations \cite{gotogo,griesemer2020featherweight},
and provide several crucial findings and implications
for future compiler developers to refer to.
For instance, Go~1.18\xspace requires more improvements on GC shapes in order to
effectively generate small binary code
(see \S~\ref{section:discussion}
for a more detailed discussion).
Beyond the Go language,
many dynamically typed languages (such as Python, JavaScript, and
Erlang) type-check at runtime and, similarly to Go, their engines cannot
easily decide an object's
implemented methods nominally.
Consequently,
many
of their implementations~\cite{salib2004starkiller,castanos2012benefits, gal2009trace}
apply similar approaches to monomorphisation to optimise
execution speed.
Rust also supports generics via monomorphisation,
yet this is considered a major reason for its slow compilation.
Our work can help in choosing alternative
optimisations for these languages to reduce
code size and compilation time.
In the future, we plan to inspect how other important Go language features
(e.g., \emph{reflection},
\emph{packages}, and \emph{first-class} and \emph{anonymous} functions) interact with generics
by proving the correctness and examining the trade-offs among runtime performance, code sizes, and compilation times.
\section{Call-Site, Non-Specialising Dictionary-Passing Translation}
\label{section:dictionary}
This section presents
our new dictionary-passing translation from \gls{fgg} to \gls{fg}.
\myparagraph{High level overview}
Our call-site, non-specialising dictionary-passing translation can be
split into a number of parts, each tackling a
different challenge. Specifically, we consider:
the preservation of typeability,
the use of dictionaries to resolve
generic method implementations,
the creation of dictionaries, and
the preservation of type assertion behaviour.
These challenges may have been discussed in other works,
yet the structural type system of Go serves to
hinder any existing solutions.
We explain the key ideas and challenges in \S~\ref{sec:dictexample},
and detail the formal translation rules in \S~\ref{sec:dictexp}.
\subsection{Dictionary-Passing by Example}
\label{sec:dictexample}
\subsubsection{\bf Structural Subtyping and Type Erasure}
\label{paragraph:structure}
The first challenge we encounter is that
subtypes must be preserved. If, in
the source program, expression $e$ can
be used as an argument to \inlinelstfcgg{Foo}, then
the translation of $e$ should likewise
be usable as an argument to the translation of \inlinelstfcgg{Foo}.
We should also desire that any non-subtypes
are preserved; we leave this challenge to \S~\ref{subsubsec:typecollision}.
As a first naive attempt at removing
polymorphic types, we might observe that
regardless of the value we pass to a
polymorphic argument, it must implement the \inlinelstfcgg{Any} type.
From this, we could -- again, naively --
conclude that lifting all polymorphic
arguments to the \inlinelstfcgg{Any} type solves our problem.
Unfortunately, such a solution fails upon closer inspection.
Consider the code in Figure~\ref{code:fgg:example}.
By erasing the polymorphic types in
\inlinelstfcgg{Function}, we lose the subtype
\inlinelstfcgg{GtFunc[int]} $<:$ \inlinelstfcgg{Function[int, bool]}
(The naively erased \inlinelstfcgg{GtFunc} implements \inlinelstfcgg{Apply(in Any) bool},
while the erased
\inlinelstfcgg{Function} demands \inlinelstfcgg{Apply(in Any) Any}).
This issue is noted in \citet[\S~4.4]{Igarashi99FJ}.
Their solution, however, is inappropriate in a
structurally typed language such as Go.
In nominally typed languages like Java,
it is clear that one type subtypes another.
One need only inspect the implementing type's
declaration, as a subtype exists only when
it is explicitly declared.
\citet{Igarashi99FJ} insert \emph{bridge methods} to
handle cases such as the \inlinelstfcgg{GtFunc}-\inlinelstfcgg{Function} example.
A bridge method is an overloaded method added
to the subtype whose type matches the erased
method as specified by the supertype, \ie{} adding an
overloaded method of type \inlinelstfcgg{Apply(in Any) Any} to \inlinelstfcgg{GtFunc}.
This method is immediately inappropriate as Go does
not allow method overloading.
The bridge method solution would still be inappropriate
were we to simulate
overloading using name mangling.
To add bridge methods, we need to
know -- statically -- that a subtype exists.
In \gls{fgg}, we need to know how two
types are instantiated before we can conclude
that a subtype relation exists.
This requires the kind of potentially infinite
whole program analysis (\S~\ref{section:nomono})
that we wished to avoid in our dictionary-passing translation.
Instead, we ensure that subtypes are
preserved by erasing \emph{all} method types, rather
than just polymorphic types.
As with \inlinelstfcgg{GtFunc}'s \inlinelstfcgg{Apply}
method in Figure~\ref{code:fg:example},
when a variable
of a known type is used, we assert it to that
type; although unlike Figure~\ref{code:fg:example}, the \gls{fgg} type
checker has already ensured the safety of these synthetic assertions.
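In the spirit of Figure~\ref{code:fg:example}, the condensed sketch
below (ours, simplified; the dictionaries of the next subsection are
elided) shows both aspects: every method type is erased to
\inlinelstfcgg{Any}, so subtyping survives erasure, and a synthetic
assertion recovers the concrete type at its use site.
\begin{lstfcgg}
type Function interface { Apply(in Any) Any } // fully erased signature
type GtFive struct {}
func (this GtFive) Apply(in Any) Any {
  // 'in' arrives erased to Any but is used at the known type int, so
  // the translation asserts it; the FGG type checker has already
  // ensured the synthetic assertion in.(int) is safe.
  return in.(int).Gt(5)
}
\end{lstfcgg}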
\subsubsection{\bf Dictionaries}
\label{subsubsec:dictionaries}
\begin{figure}
\begin{minipage}[t]{0.4\linewidth }
\vspace{-3mm}
\begin{center}
\begin{lstfcgg}
type Ord[T Ord[T]] interface {
Gt[](that T) bool
}
type GtFunc[T Ord[T]] struct { val T }
func (this GtFunc[T]) Apply(in T) bool {
return this.val.Gt[](in)
}
type Max struct {}
func (this Max) Of[T Ord[T]](l T, r T) T {
$\cdots$ l.Gt(r) $\cdots$
}
func main() { GtFunc[int]{5}.Apply(7) }
\end{lstfcgg}
\end{center}
\end{minipage}\hspace*{-2mm}
\begin{minipage}[t] {0.59\linewidth }
\vspace{-3mm}
\lstset{firstnumber=1}
\begin{lstfcgg}
type Ord interface{ Gt(that Any) Any }
type OrdDict struct {
Gt func(rec Any, in Any) Any ; //Gt method pointer
/*Simulated type*/ }
type GtFunc struct { val Any ; dict OrdDict }
func (this GtFunc) Apply(in Any) Any {
return this.dict.Gt(this.val /*Receiver*/, in) (*\label{code:dictexample:resolve1}*)
}
func (this Max) Of(dict OrdDict, l Any, r Any) Any {
$\cdots$ dict.Gt(l, r) $\cdots$ }(*\label{code:dictexample:resolve2}*)
func main() {
od := OrdDict{Gt: func(rec Any, in Any) Any { rec.(int).Gt(in)}}
GtFunc{5, od}.Apply(7) }
\end{lstfcgg}
\end{minipage}
\vspace*{-.5cm}
\caption{Dictionary-passing translation example extending
Figure~\ref{code:fg:example}.
\glsentryshort{fgg} source (Left),
\glsentryshort{fg} translation (Right)}
\vspace*{-.4cm}
\label{code:dict:passing:example}
\end{figure}
We are now confronted with the primary
challenge facing dictionary-passing
translations: how do we resolve generic
method calls without polymorphic type information?
A dictionary is, at its simplest,
a map from method names to their
specific implementation for some type.
A dictionary-passing translation, then,
is one which replaces the specialisation
of type parameters with the passing of
dictionaries as supplementary value-arguments.
One may then resolve a method call on
a generic value by performing a dictionary lookup.
Presently, we consider the structure
and usage of dictionaries while delaying
our discussion of call-site dictionary
construction and type simulation until
\S~\ref{subsubsec:dynamicdict} and \S~\ref{subsubsec:typecollision}, \emph{resp}.
Consider Figure~\ref{code:dict:passing:example} (left) extending a fragment of
Figure~\ref{code:fg:example} with a \inlinelstfcgg{Max.Of} method.
For us to call \inlinelstfcgg{Gt} in \inlinelstfcgg{GtFunc[T].Apply}
or \inlinelstfcgg{Max.Of[T]}, we need to know the
concrete type of \inlinelstfcgg{T}. This information
is lost during erasure.
The translation (right) includes a
fresh struct \inlinelstfcgg{OrdDict} which is,
quite naturally, the dictionary for
\inlinelstfcgg{Ord}-bounded type parameters.
Dictionaries contain a method pointer
field for each method in the original
interface, along with a \emph{type-rep} which
shall be discussed in \S~\ref{subsubsec:typecollision}.
\gls{fg} does not include method pointers;
instead, we must simulate them using
higher order functions with the
first argument being the receiver.
While this adds a small amount of
complexity to the final correctness
proofs, we see this as a worthwhile
compromise, as it allows us to focus
on the translation of generics alone,
rather than on generics \emph{and} on
a translation to some low level language.
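For instance, the simulated method pointer for \inlinelstfcgg{int}'s
\inlinelstfcgg{Gt} method is simply a function value that takes the receiver
explicitly (a sketch matching the \inlinelstfcgg{od} dictionary entry in
Figure~\ref{code:dict:passing:example}):
\begin{lstfcgg}
// A higher-order function standing in for a method pointer:
// the first argument plays the role of the receiver.
func(rec Any, in Any) Any { return rec.(int).Gt(in) }
\end{lstfcgg}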
By containing each method specified
by the \gls{fgg} bounding
interface, dictionaries have a fixed internal representation.
This reflects real-world dictionary-passing implementations
and allows entries to be accessed efficiently~\cite{driesen1996direct}.
\begin{wrapfigure}{r}{0.42\linewidth}
\vspace*{-.7cm}
\lstset{xleftmargin=5pt}
\begin{lstfcgg}
type Foo[$\alpha$ Any] interface {
do[$\beta$ Any](a $\beta$, b bool) $\alpha$
}
type Bar[$\alpha$ Any] struct {}
func (x Bar[$\alpha$]) do[$\beta$ Any](a $\beta$, b $\alpha$) int {$\cdots$}
func main() {
Bar[bool]{}.(Foo[int]); (*\label{code:assertion:source}*)
Bar[bool]{}.(Foo[bool]) (*\label{code:assertion:source2}*)
}
\end{lstfcgg}
\vspace*{-.5cm}
\caption{Type-rep example. \glsentryshort{fgg} source}
\label{fig:code:dict:type-rep:fgg}
\vspace*{-.3cm}
\end{wrapfigure}
Dictionaries are passed to methods via
two mechanisms, namely the method's receiver,
and as regular value-arguments.
Generic structures, \eg \inlinelstfcgg{GtFunc},
possess a dictionary for each type parameter.
When used as a receiver, these dictionaries can
be accessed using standard field destructuring.
Method dispatch then takes the form of a dictionary
lookup and method invocation as seen on lines~\ref{code:dictexample:resolve1}
and \ref{code:dictexample:resolve2} (right).
\subsubsection{\bf Type Collision}
\label{subsubsec:typecollision}
Here we consider the challenge of ensuring that
type assertion behaviour is preserved by our translation.
Erasing type parameters may
introduce new subtypes which did not
exist in the source program.
Consider the expression
\inlinelstfcgg{GtFunc[int]\{5\}.(Function[bool, bool])}
where \inlinelstfcgg{GtFunc} and \inlinelstfcgg{Function} are
defined in Figure~\ref{code:fgg:example}.
Upon evaluation, this expression
produces a runtime type assertion
error as \inlinelstfcgg{GtFunc[int]\{5\}} is
not a subtype of \inlinelstfcgg{Function[bool, bool]}.
The erased types as described in
\S~\ref{paragraph:structure}, however, form a subtype relation,
meaning the error will not
occur in the translated code.
This behaviour would be incorrect.
To ensure that type assertion errors
are correctly preserved, we simulate the \gls{fgg} type
assertion system inside the
translated \gls{fg} code via type-reps~\cite{crary1998intensional}.
A simulated \gls{fgg} type implements \inlinelstfcgg{_type_metadata}
by specifying a method,
\inlinelstfcgg{tryCast}, which throws an error
if and only if the \gls{fgg} assertion would have failed.
\begin{figure}
\lstset{xleftmargin=10pt}
\begin{lstfcgg}
type _type_metadata interface { tryCast (in Any) Any }
type AnyDict struct {_type _type_metadata}
type Foo interface { do(dict$_0$ AnyDict, in Any) Any ; spec_do() spec_metadata$_4$ }
type Foo_meta struct { _type$_0$ _type_metadata }
func (this Foo_meta) tryCast(x Any) Any { (*\label{code:trycast}*) // Type formal, Parametrised arg, Literal arg, return type $\alpha$
if (x.(Foo).spec_do() $!=$ spec_metadata$_4${Any_meta{}, param_index$_0${}, Bool_meta{}, this._type$_0$ }) { panic }
return x }
type Bar struct {dict$_0$ AnyDict}
func (this Bar) spec_do() spec_metadata$_4$ { // Type formal, Parametrised arg, Arg type $\alpha$, return type literal
return spec_metadata$_4${Any_meta{}, param_index$_0${}, this.dict$_0$._type, Int_meta{}}}(*\label{code:specdo}*)
func main() {
Foo_meta{Int_meta{}}.tryCast(Bar{AnyDict{Bool_meta{}}}) (*\label{code:assertion:target}*)
Foo_meta{Bool_meta{}}.tryCast(Bar{AnyDict{Bool_meta{}}}) } (*\label{code:assertion:target2}*)
\end{lstfcgg}
\vspace*{-.5cm}
\caption{Type-rep example. \glsentryshort{fg} translation}
\label{fig:code:dict:type-rep:fg}
\vspace*{-.4cm}
\end{figure}
Consider the code in Figure~\ref{fig:code:dict:type-rep:fgg}.
The source \gls{fgg} code contains two assertions;
the one on line~\ref{code:assertion:source}
passes, while line~\ref{code:assertion:source2}
produces a type assertion error.
A struct implements an
interface when it correctly implements
each method specified by the interface.
This means that not only does the struct
define a method of the same name, but
also of precisely the same type.
Assertion to an interface, then, need
only ensure that each method is correctly implemented.
Assertion to a structure is a simple type equality check.
The translated interface, Figure~\ref{fig:code:dict:type-rep:fg},
now includes the meta method \inlinelstfcgg{spec_do},
returning simulated \gls{fgg} type information for a struct's
\inlinelstfcgg{do} implementation.
The \inlinelstfcgg{spec_metadata$_4${}} object
returned by \inlinelstfcgg{spec_do} on
line~\ref{code:specdo} of the target code is a four-element
tuple containing: type parameter bounds,
argument types, and the return type. This object
simulates the \gls{fgg} method type for
\inlinelstfcgg{do} on \inlinelstfcgg{Bar[$\tau$]} for some $\tau$, \ie
\inlinelstfcgg{do[$\beta\ $ Any](a $\ \beta$, b $\ \tau$) int}.
The first entry \inlinelstfcgg{Any_meta\{\}} gives the simulated
type bound of the source method's type parameter $\beta$.
The next gives the type of argument \inlinelstfcgg{a}, namely $\beta$.
As there is no suitable concrete metadata type for $\beta$, we
use an index \inlinelstfcgg{param_index$_0${}} to indicate
that \inlinelstfcgg{a}'s type is the method's first type parameter.
The third, that of \inlinelstfcgg{b}, is not known at compile time,
but is rather given by the
type parameter of the receiver.
Finally, the return type is given by the constant \inlinelstfcgg{Int_meta\{\}}.
The type assertion on line~\ref{code:assertion:target}
uses the \inlinelstfcgg{Foo_meta}'s \inlinelstfcgg{tryCast} method defined on line~\ref{code:trycast}.
This method first checks that the erased types are compatible, \ie{} that \inlinelstfcgg{Bar}
implements all erased methods in \inlinelstfcgg{Foo}. The \inlinelstfcgg{spec_do} method is then
used to check the simulated method type matches the interface specification.
If any of these checks fails, then the assertion
fails and a {$\ensuremath{\mathsf{panic}}$} is thrown.
\subsubsection{\bf Call-Site Dictionary Creation}
\label{subsubsec:dynamicdict}
As discussed in \S~\ref{section:nomono},
the approach taken by Go~1.18\xspace
is fundamentally limited by its use of call-graph based
dictionary construction.
In contrast we consider the challenge of the call-site
construction of dictionaries
in a structurally typed language.
Our approach
overcomes the aforementioned limitation of
\cite{griesemer2020featherweight} and Go~1.18\xspace.
We note a few key facts.
A $\tau$-dictionary provides all
the methods specified by the
type bound $\tau$, and we may build a
dictionary for any specialising type
which is a subtype of $\tau$.
We can also use a type variable to
specialise some other type variable, as
long as the bound of the latter is a
supertype of the bound of the former. In a translation
this \emph{dictionary-supertyping}
involves using a $\tau$-dictionary to
build a potentially different $\sigma$-dictionary.
In a nominally typed
language the explicit, and fixed,
hierarchy allows a dictionary-passing translation
to easily structure and construct dictionaries
according to the subtype hierarchy.
Dictionary-supertyping in nominally typed languages
is generally a
matter of extracting the appropriate sub-dictionary~\cite{bottu2019coherence}.
In a structurally typed language, however, there is not
a fixed subtype hierarchy. Recall that in order
to infer subtype relationships, we first need the specific
type instances. We
have two choices: either explore the entire
call graph to discover all type instantiations and
construct our dictionaries according to the call-graph,
or construct/supertype our dictionaries at the call-site where
specialisation would have happened.
The former approach was taken by Go~1.18\xspace
and beyond the significant static analysis
required, this approach also suffers from the
same finiteness limitation encountered
by monomorphisation approaches \cite{griesemer2020featherweight}.
\begin{figure}
\noindent
\begin{minipage}{0.33\linewidth}
\begin{lstfcgg}
type Eq[$\alpha$ Eq[$\alpha$]] interface {
Equal(that $\alpha$) bool
}
type Ord[$\alpha$ Ord[$\alpha$]] interface {
Gt(that $\alpha$) bool;
Equal(that $\alpha$) bool
}
func Foo[$\beta$ Ord[$\beta$]](val $\beta$) Any {
return Bar[$\beta$](val)
}
func Bar[$\beta$ Eq[$\beta$]](val $\beta$) Any {$\cdots$}
func main() { Foo[int](5) }
\end{lstfcgg}
\end{minipage}
\begin{minipage}{0.64\linewidth}
\begin{lstfcgg}
type EqDict struct { Equal func(rec Any, that Any) Any }
type OrdDict struct {
Equal func(rec Any, that Any) Any ;
Gt func(rec Any, that Any) Any
}
func Foo(dict OrdDict, val Any) Any {return Bar(EqDict{dict.Equal}, val)}
func Bar(dict EqDict, val Any) Any { $\cdots$ }
func main() {
ord_dict := OrdDict{
Equal : func(rec Any, that Any) Any { return rec.(int).Equal(that) },
Gt : func(rec Any, that Any) Any { return rec.(int).Gt(that) } }
Foo(ord_dict, 5) }
\end{lstfcgg}
\end{minipage}
\vspace*{-.3cm}
\caption{Call-site dictionary creation example.
\glsentryshort{fgg} source (Left).
\glsentryshort{fg} translation (Right)}
\label{fig:code:dict:dynamic}
\vspace*{-.4cm}
\end{figure}
We demonstrate our call-site approach in Figure~\ref{fig:code:dict:dynamic}.
This example consists
of two interfaces, \inlinelstfcgg{Eq} and
\inlinelstfcgg{Ord}, which form a
subtype relation along with a method \inlinelstfcgg{Foo} which
uses a type parameter bounded by \inlinelstfcgg{Ord} to
instantiate a type parameter bounded by \inlinelstfcgg{Eq}.
If, in the source program, there are two types $\sigma$ and $\tau$
where there exists an instantiation creating a subtype relation, then
the two erased types form a subtype relation.
This is precisely the result discussed in \S~\ref{paragraph:structure}.
When initially creating a dictionary, we
populate it with the required method pointers
for the known instantiating type.
If, however, we are creating a
$\tau$-dictionary for type parameter $\beta$ bounded by $\sigma$,
then the methods contained by the supertyping
$\tau$-dictionary (\inlinelstfcgg{Eq}) are a subset of those in
the $\sigma$-dictionary (\inlinelstfcgg{Ord}) for type parameter $\alpha$.
Dictionary-supertyping then consists of destructuring the
subtype's dictionary and -- along with the type-rep --
adding all required method pointers to a
new supertype-dictionary.
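In Figure~\ref{fig:code:dict:dynamic}, the call
\inlinelstfcgg{Bar(EqDict\{dict.Equal\}, val)} performs precisely this
destructuring; with type simulation enabled, the construction would also copy
the type-rep (a sketch, assuming the \inlinelstfcgg{_type} field of
\S~\ref{subsubsec:typecollision}):
\begin{lstfcgg}
// Build the (smaller) Eq-dictionary from an Ord-dictionary by copying
// the shared method pointer and, when present, the type-rep.
EqDict{Equal: dict.Equal, _type: dict._type}
\end{lstfcgg}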
While conceptually simple,
our call-site approach directly addresses the
unique issues raised by structural typing systems and
allows us to
overcome the limitation discussed in \S~\ref{section:nomono}
that afflicts
both monomorphisation~\cite{griesemer2020featherweight} and Go~1.18\xspace.
\subsection{Dictionary-Passing Judgement}
\label{sec:dictexp}
This subsection is technical:
readers who are not interested in the formal translation rules
can safely skip this subsection.
We define the judgement $\dict[]{P}{\lex{P}}$ as the
dictionary-passing translation from $P$ in \gls{fgg} to
$\lex{P}$ in \gls{fg}.
The expression judgement $\dict{e}{\lex{e}}$ is parametrised by
variable and type variable environments
($\Gamma$ and $\Delta$ \emph{resp.})
as well as a dictionary map $\eta$ from type variable names to
dictionary variables.
We provide auxiliary functions in Figure~\ref{fig:dict:aux} and translation rules in
Figure~\ref{fig:dict:prog}.
\myparagraph{Name constants}
We introduce a set of maps from name constants in \gls{fgg} to unique
\gls{fg} names which are assumed
to never produce a collision,
\begin{enumerate*}
\item $\dictName{\iType{t}}$ --- from a type bound (interface) to the
dictionary struct name for that bound;
\item $\metadataName{t}$ --- from a type name to a simulated type name;
\item $\method{spec\_name}{m}$ --- from a method name to a method producing simulated specification; and
\item $\method{mName}{t,m}$ --- the method applicator (pointer) for method $m$ on type $t$.
\end{enumerate*}
\myparagraph{Auxiliary functions}
\input{figs/dict/aux}
Figure~\ref{fig:dict:aux} provides a number of auxiliary functions used in
the dictionary-passing translation.
The overloaded $\method{arity}{}$ function
computes the number of type and value parameters required by each
method signature, including method signatures in an interface's specifications.
Function $\method{maxFormal}{D}$ computes the number of type parameters expected
by the largest type formal.
Function $\method{asParam}{\Phi}$ converts a type formal into dictionary arguments.
The function $\method{meth\_ptr}{t, mM}$ constructs the
simulated method pointer struct and implementation
-- called the \emph{abstractor/applicator pair} -- for method $m$ on type $t$.
To build a type simulation of type $\tau$ we call $\typemeta{\tau}$ where $\zeta$ is a
map from type variables to existing simulated types.
When simulating the type assertion to an interface in
\S~\ref{subsubsec:typecollision}, we used the
$\method{spec\_name}{m}$ method \inlinelstfcgg{spec_do} to produce
the instantiated simulated signature for method $m$.
The $\method{spec\_mdata}{mM}$ function takes an interface's method specification
and produces the specification for the $\method{spec\_name}{m}$ method.
Simulated method signatures are built using $\signatureMeta{M}$.
This function takes a map $\zeta$
from type variables to simulated types,
and extends $\zeta$ with the indexing structs for the method's type formal.
A set of simulated method signature ($\fnMeta{n}$) and type parameter index
($\texttt{param\_index}_i$) structs are created by the program translation
rule (\rulename{d-program}); $\method{arity}{\multi{D}}$ and $\method{maxFormal}{D}$
are used to ensure that all needed structs are constructed. $\fnMeta{n}$ is an $(n+1)$-tuple
used in interface assertion simulation, and describes a method signature of arity $n$
and gives
type parameter bounds,
value argument types, and the method's return type.
To allow interface assertion simulation to reference type
variables, we use $\texttt{param\_index}_i\{\}$ to reference a method's $i^{\text{\tiny th}}$
type parameter.
Given a type environment $\Delta$ and a map $\eta$ from type variables to existing dictionaries,
we build a $\iType{\tau}$-dictionary for type~$\sigma$ using the
\makeDict{\sigma, \iType{\tau}} function. In the case that $\sigma$ is itself
a type variable $\alpha$, then the map $\eta$ must contain a dictionary for $\alpha$.
When $\alpha$ is bounded by $\iType{\tau}$ in $\Delta$, we are done,
whereas if $\alpha$'s bound is a proper subtype of $\iType{\tau}$
(rather than $\iType{\tau}$ itself), then we need to copy the
method pointers required by the new (and smaller) $\iType{\tau}$-dictionary.
A new dictionary is built for a constant type $\sigma$ by providing a method pointer (abstractor) for each
method specified by $\iType{\tau}$ and the simulated type of $\sigma$.
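For instance, building an \inlinelstfcgg{Ord}-dictionary for the constant
type \inlinelstfcgg{int} yields the following (a sketch; the dictionaries in
Figure~\ref{fig:code:dict:dynamic} elide the simulated type):
\begin{lstfcgg}
OrdDict{
  Equal: func(rec Any, that Any) Any { return rec.(int).Equal(that) },
  Gt:    func(rec Any, that Any) Any { return rec.(int).Gt(that) },
  _type: Int_meta{} /*Simulated type of the specialising type*/ }
\end{lstfcgg}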
\input{figs/dict/trans}
\myparagraph{Program translation} Rule \rulename{d-program}
introduces new declarations required for method pointers
and type simulations as described in \S~\ref{sec:dictexample}, and
the \texttt{Any}\ interface to provide a uniform, erased, type representation.
Each method applicator must implement an $n$-arity function interface
$\nAryFunction{n}$, that accepts the receiver and the $n$ arguments for the
desired method call. The arity of a method includes both the regular value
arguments as well as the dictionary arguments.
A simulated type implements the $\texttt{\_type\_metadata}$ interface by providing
an assertion simulation method (\texttt{tryCast}), which panics if
the assertion is invalid.
The $\fnMeta{n}$ and
$\texttt{param\_index}_i$ structs are created as required by
the $\method{arity}{\multi{D}}$ and $\method{maxFormal}{D}$
functions, respectively.
Each declaration is translated to multiple declarations; we use $\mathcal{D}$
to indicate this.
\myparagraph{Interface and dictionary construction}
The translation of
interfaces produces a number
of \gls{fg} declarations (\rulename{d-interface}).
They are (1) an \gls{fg} interface, and
(2) a dictionary for that interface.
The interface $\iType{t}\typeFormalType$ becomes the erased type
$\iType{t}$ (1).
For each method specification $S$ defined by the source interface, we produce
two specifications in the target; the first is defined by \rulename{d-spec}
and replaces types formal with appropriate dictionaries while erasing all other
types, and the second defines a method producing the simulated
\gls{fgg} method specification.
Since \rulename{d-meth} produces such a simulated specification method for
each method, it is guaranteed that any type which
implements the former will implement the latter.
The dictionary (2) for an interface $\iType{t}$ is given by a new struct
$\dictName{\iType{t}}$, which contains a method pointer (abstractor) for
each specified method and the simulated type ($\texttt{\_type}$)
for the type parameter's specialising type.
Type simulation is also defined here.
For type $\iType{t}\typeFormalType$, the simulation struct
($\metadataName{\iType{t}}$) contains a field for each type parameter in $\Phi$.
The \texttt{tryCast}\ method
checks that each specified method is implemented correctly by the target of the
assertion (See \S~\ref{subsubsec:typecollision}).
For clarity of presentation, we assume a number of extra language features that can
be easily implemented in \gls{fg}, including: if-statements, struct inequality,
explicit panic, and sequencing \cite{griesemer2020featherweight}.
\myparagraph{Struct declaration}
To translate $\sType{t}\typeFormalType$,
we erase all field types
and add a new dictionary field for each
type parameter in $\Phi$. The simulated type $\metadataName{\sType{t}}$
is constructed with a
variable for each type parameter, and $\texttt{tryCast}$ checks that the
target value is exactly the assertion type.
\myparagraph{Method declaration}
Judgement on method $m\typeFormalMethod(\multi{x~\tau})~\tau$ (\rulename{d-meth})
produces a primary method, a method returning the simulated method type,
and an abstractor/applicator pair.
The primary method and simulation method's types match those from \rulename{d-interface}.
The body of the implementing method is translated in the
$\Delta;\eta;\Gamma$ environments,
where $\Delta$ and $\Gamma$ are built
according to the typing system.
There are two locations for type variables -- and thus dictionaries -- to be
passed into a method, namely in the receiver or as an argument;
consequently, $\eta$ may map into either a dictionary argument ($\texttt{dict}_i$) or
a receiver's dictionary field ($\texttt{this}.\texttt{dict}_i$).
\myparagraph{Expressions}
The struct literal ($\sType{t}\typeActualReceive\{\multi{e}\}$) is translated by
first translating each field assignment and then
building an appropriate dictionary for each type in $\phi$ using
\textit{makeDict} (\rulename{d-value}).
Method calls are translated in one of two ways. The first (\rulename{d-call})
is the immediate structural translation of subterms and creation of appropriate dictionaries; this
translation is only possible if the type of the receiver is not a type variable, although
it does not need to be a closed type.
The second (\rulename{d-dictcall}) translates arguments and creates dictionaries in the
same way as the former, but needs to resolve the
method implementation using a dictionary lookup.
\section{Implementation and Evaluation}
\label{sec:exp}
Besides the \gls{dict},
we also implement an \gls{erasure}.
We compare the two implementations with three existing translators: \gls{mono}~\cite{griesemer2020featherweight}, \gls{gotogo}
(the initial prototype based on a source-to-source monomorphisation),
and Go~1.18\xspace~\cite{go118} (the official generic type
implementation released on 15th
March 2022).
This section first discusses the two implementations,
then describes the evaluation methodology,
before finally presenting the evaluation results.
\subsection{Implementation of Dictionary-Passing Translation}
\label{subsec:imple}
\input{figs/eval/fig-venn.tex}
We implement the dictionary-passing translator
(\glsentryshort{dict})
and the erasure-based translator (\glsentryshort{erasure}) based on
the \gls{fgg} artifact~\cite{fgg-artifact} in Go 1.16.
We have fully tested the implementations
using purpose-built unit tests.
Figure~\ref{fig:venn} shows the code coverage difference across the five translators.
\gls{fgg} is the calculus presented in
\cite{griesemer2020featherweight};
\glsentryshort{dict} does not
cover receiver type formal subtyping;
\glsentryshort{erasure} does not cover \gls{fgg} type assertions;
\glsentryshort{mono} does not cover a class
of recursive ({\it nomono}) programs
\cite{griesemer2020featherweight};
\glsentryshort{gotogo} is a
source-to-source monomorphisation translator implemented by the Go Team,
and does not cover \textit{F}-bounded polymorphism, method parametrisation,
receiver type formal subtyping,
or recursive ({\it nomono}) programs; and
Go~1.18\xspace is the
official release with generics and has the same limitations
as \glsentryshort{gotogo}. Both Go~1.18\xspace and \gls{gotogo}
target the full Go language, including features
not considered by \gls{fgg}.
We implement \glsentryshort{dict} following the rules in
\S~\ref{section:dictionary}.
Rather than strictly follow the formalisations of \gls{fg} and \gls{dict}
translation, we leverage the first-class functions support in Go and use
function types~\cite{go-function-types} as dictionary fields, similar
to using function pointers in C/C++.
We also ignore unnecessary type assertions in \rulename{d-field}
and \rulename{d-call} when the translation is not on an interface.
We memoise expression typing results to accelerate compilation.
We exclude type simulation (\S~\ref{subsubsec:typecollision})
for non-generic types (\ie{} when the size of
the type formal is zero), and directly use Go's type assertions for \rulename{d-assert}
for better runtime performance. We also detect whether type metadata are actually used, and remove them when possible. Users can also disable all
type metadata copies if there are no type assertions in the input program.
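For example, a translated dictionary in the generated Go code might be
declared as follows (a sketch, using \gocode{interface\{\}} as the erased
\inlinelstfcgg{Any} type in Go 1.16):
\begin{lstfcgg}
// Function-typed fields play the role of method pointers,
// similar to function pointers in C/C++.
type OrdDict struct {
  Gt    func(rec interface{}, that interface{}) interface{}
  Equal func(rec interface{}, that interface{}) interface{}
}
\end{lstfcgg}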
In total, \glsentryshort{dict} contains 1160 lines of Go code.
\glsentryshort{erasure} is an alternative homogeneous translation implementation from \gls{fgg}.
This implementation erases generic type information and uses the underlying interface type, similar to the erasure implementations for Java~\cite{odersky2000two, Igarashi99FJ}.
When calling a method,
the erased object is directly used as the receiver (If
$\wellTyped[\Delta, \alpha{:}\iType{t}\typeActualReceive; \Gamma]{e}{\alpha}$ then $\dict[\Delta, \alpha{:}\iType{t}\typeActualReceive; \Gamma]{
e.m\typeActualMethod(\multi{e})
}{
\lex{e}.(\iType{t}).m(\multi{\lex{e}})
}$), in contrast to \glsentryshort{dict}'s dictionary lookup (\rulename{d-dictcall}).
For example, \inlinelstfcgg{func f[a Foo](x a) \{x.Bar()\}} translates to \inlinelstfcgg{func f(x Any) \{x.(Foo).Bar()\}}, while \glsentryshort{dict} calls the corresponding function in a dictionary field.
As in \S~\ref{paragraph:structure}, naively erasing type parameters breaks type assertion preservation (Definition~\ref{def:type:preservation}).
An example of \glsentryshort{erasure} is provided in
\ifnotsplit{Appendix~\ref{sec:erasure-example}}{the full version of this paper~\cite{fullversion}}.
Compared with \glsentryshort{dict}, \glsentryshort{erasure}
provides a concise translation of generics that is fully based
on Go's existing dynamic dispatch mechanism.
When calling a method of a generic object as though
it were an interface, the Go runtime looks up
the actual method to call from a list of methods~\cite{go-interface-slow-1,go-interface-slow-2},
while \glsentryshort{dict} finds the actual method from the dictionary.
The implementation of \glsentryshort{erasure} contains 765 lines of Go
code.
\subsection{Evaluation Methodology}
\label{subsec:evaluation}
\myparagraph{Benchmarks}
We build two benchmark
suites to conduct head-to-head comparisons for the
five translators.
\textit{1) Micro Benchmarks:}
we design five micro benchmark sets.
Each has a configuration parameter
to demonstrate how the translated code scales with a particular
aspect of \gls{fgg}/Go programs.
\textit{2) Real-World Benchmarks:}
we reimplement all benchmarks
in previous papers about generics in
Java and Scala~\cite{odersky2000two, ureche2013miniboxing}.
Go~1.18\xspace only officially released generics on March 15th, 2022, so
it is infeasible for us to find usages of generics in real Go programs.
The second benchmark suite is a reasonable substitute to reveal how the five
translators behave in reality.
\input{figs/eval/fig-toy2-sep}
\myparagraph{Micro Benchmarks}
The five sets of micro benchmarks, \benchname{Program}~\mycircledtext{a}-\mycircledtext{e}, are all derived
from a base program. Figure~\ref{fig:benchmark-prog-pdf} shows the base program and how the five
benchmark sets are derived from it.
In the base program,
lines 29--32 enumerate all possible combinations of types actual for \gocode{f$_1$()}.
Function \gocode{f$_1$()} takes two parameters and uses them
to call \gocode{f$_2$()} on line~20, which in turn calls \gocode{CallBase()}
on line~14.
Function \gocode{CallBase()} calls \gocode{Ops()} on line 5, which further calls \gocode{Op()}
twice to represent two non-generic operations. All methods of interface \gocode{Base}
(\gocode{g$_1$()} and \gocode{g$_2$()}) are implemented by struct \gocode{Derived},
and called on line 17, from receiver variable \gocode{x}
with generic type \gocode{base}.
Function \gocode{main()} calls \gocode{DoIt()} 10,000 times (line~36)
to provide stable performance results.
The set of \benchname{Program}~\mycircledtext{a} extends the number of methods of \gocode{Base} (lines 39--42)
and \gocode{Derived} (lines 45--47) in the base program, from 2 to $n$.
\benchname{Program}~\mycircledtext{b} repeats the non-generic operation $c$ times on line 56, instead of two.
In \benchname{Program}~\mycircledtext{c}, we increase the number of type parameters from 2 to $m$ (lines 59, 60, 63, and 65),
and enumerate all $2^m$ type actual combinations (lines 67--70).
\benchname{Program}~\mycircledtext{d} increases the length of the call chain between \gocode{DoIt()}
and \gocode{CallBase()} from 2 to $p$ (lines 49--55).
\benchname{Program}~\mycircledtext{e} is particularly designed to expose the exponential complexity
of monomorphisation (lines 72--88).
Its configuration parameter $m$ controls both
the type parameter number of \gocode{Base} (and \gocode{Derived}) and
the number of functions called in between \gocode{DoIt()}
and \gocode{CallBase()} along the call chain.
For the $m$ functions in between \gocode{DoIt()}
and \gocode{CallBase()}, we further configure each caller to call its callee twice,
and each callee to have one more parameter than its caller (e.g., function body of \gocode{f$_1$} and \gocode{f$_2$} on lines 77--84).
\myparagraph{Real-World Benchmarks}
We reimplement the Java and Scala programs using
\glsentryshort{gotogo}, Go~1.18\xspace, and \glsentryshort{fgg} for our evaluation.
Since \glsentryshort{fgg} does not support all syntax in the programs,
we first use \glsentryshort{fgg}
to reimplement as many functionalities as possible. Then,
we translate the \glsentryshort{fgg} code to Go
and manually insert the missed non-generic functionalities.
On the other hand,
\glsentryshort{gotogo} and Go~1.18\xspace support all required syntax, so
we use them to reimplement each whole program.
We manually test the reimplementations with designed testing inputs
and compare their outputs with the original versions in Java or Scala.
Our tests achieve 100\% code coverage.
The benchmarks' functionalities are explained as follows.
\benchname{List} \cite{ureche2013miniboxing}
is an implementation of a linked list. It supports insert and search operations
on the linked list.
\benchname{ResizableArray} \cite{ureche2013miniboxing} implements a resizable array.
It inserts elements into the array,
reverses the array, and searches elements in the array.
\benchname{ListReverse} \cite{odersky2000two} constructs a linked list and reverses it.
It contains two reversing implementations.
\benchname{VectorReverse} \cite{odersky2000two} reverses an array.
Like \benchname{ListReverse}, it implements the reversing
functionality in two different ways.
\benchname{Cell} \cite{odersky2000two}
implements a generic container.
\benchname{Hashtable} \cite{odersky2000two} accesses elements in a hash table.
\myparagraph{Metrics}
We consider \emph{code size}, \emph{execution time}, and \emph{compilation time} as our metrics.
For code size, we compile each translated benchmark program into a binary executable and
disassemble the executable using objdump~\cite{objdump120:online}.
Next, we count the number of assembly instructions compiled from the benchmark program as its code size, while excluding the assembly instructions
of linked libraries.
To measure execution time,
we compile each translated \gls{fg} program using
the Go compiler and compute the average execution time over {\em ten} runs.
We consider the time spent on the source-to-source translation and the compilation
from a \gls{fg} program to an executable as the compilation time for the four source-to-source translators. For Go~1.18\xspace, we measure its compilation time directly.
We compile each benchmark program with each translator {\em ten} times
and report the
average compilation time.
\myparagraph{Platform \& Configurations}
All our experiments are conducted on a desktop machine
with an AMD Ryzen 5 2600 CPU, 32GB of RAM, and Ubuntu 18.04.
To focus more on the impact of different translations for generics,
we disable garbage collection and compiler optimisations for all translators.
No benchmark requires type simulation.
Thus, we disable this option in \gls{dict},
allowing us to better understand the impact of method translation and dispatch.
\subsection{Evaluation Results}
\label{subsec:benchmark}
}%\vspace{-1mm}
\subsubsection{Micro benchmarks}
\input{figs/eval/fig-prog-a-result.tex}
\myparagraph{Program \mycircledtext{a}}
We change $n$ from $2$ to $40$ to
analyse how the number of methods of a generic interface impacts
the five translators.
As shown in Figure~\ref{fig:ssa-n}, the code size (number of assembly instructions)
of translated \gls{fg} programs has a linear relationship with $n$ for all five translators.
However, different translators have different coefficients.
The coefficients of \glsentryshort{mono} ($328.8$), \glsentryshort{gotogo} ($300.8$), and Go~1.18\xspace ($297.8$) are much larger
than the coefficients of \glsentryshort{dict} ($117.9$) and \glsentryshort{erasure} ($103.8$).
Figure~\ref{fig:time-n} shows
the execution time of translated programs.
The programs translated by
\glsentryshort{dict} and Go~1.18\xspace
have a similar performance.
They are slower
than the corresponding programs translated by \glsentryshort{mono} and \glsentryshort{gotogo}.
This is largely due to the usage of dictionaries.
The programs generated by \glsentryshort{erasure} have the worst performance,
since the runtime structural-typing checks that \glsentryshort{erasure} incurs when translating generic method calls into
dynamically dispatched method calls are very slow~\cite{go-interface-slow-1,go-interface-slow-2}.
Figure~\ref{fig:compile-time-n} shows the compilation
time.
\glsentryshort{mono} is significantly slower than the other four translators,
and its compilation time is not even linear in $n$.
The compilation times of the other four translators are similar to each other.
\myparagraph{Programs \mycircledtext{b} and \mycircledtext{d}}
The number of non-generic operations (\mycircledtext{b}) and the length of the call chain (\mycircledtext{d}) impact the three metrics in much the same way as the number of methods of
the generic interface \gocode{Base} does in \mycircledtext{a}.
In particular, the code size, execution time,
and compilation time are all in a linear relationship with
the two configuration parameters, except for the compilation time of \glsentryshort{mono}.
Comparing \mycircledtext{b} with \mycircledtext{a},
one important difference to note is that
for \mycircledtext{b},
the programs translated by \glsentryshort{dict} have execution times similar
to those of the corresponding programs translated by \glsentryshort{erasure},
and larger than those of the programs translated by Go~1.18\xspace.
However, in Figure~\ref{fig:time-n} for \mycircledtext{a},
the line of \glsentryshort{dict} is almost identical to the line
of Go~1.18\xspace, indicating that their execution times are similar,
and the line of \glsentryshort{dict} is lower
than the line of \glsentryshort{erasure}.
The reason is that when \glsentryshort{dict} translates \glsentryshort{fgg} to \glsentryshort{fg},
it also synthesises type assertions for the non-generic
operations in \glsentryshort{fgg} (line 56 in Figure~\ref{fig:benchmark-prog-pdf}).
The type assertions slow down the translated \glsentryshort{fg} programs.
\myparagraph{Program \mycircledtext{c}}
The code size, execution time, and compilation time all scale
exponentially with $m$ for the five translators.
The underlying reason is that function \gocode{DoIt()}
calls \gocode{f$_1$()} $2^m$ times
in each input \glsentryshort{fgg} program.
After normalising the three metrics
with the number of characters in the \glsentryshort{fgg} programs,
we find that the three metrics are in a linear relationship with $m$.
Among the five translators, \glsentryshort{erasure}'s translated programs
have the longest execution time. \glsentryshort{dict} and \glsentryshort{erasure}
spend a similar compilation time, which is much shorter than \glsentryshort{mono}, \glsentryshort{gotogo}, and Go~1.18\xspace.
\glsentryshort{dict}'s translated programs are similar in size to
\glsentryshort{erasure}'s translated programs, but they are smaller
compared with the programs translated by \glsentryshort{mono}, \glsentryshort{gotogo}, and Go~1.18\xspace.
\input{figs/eval/fig-prog-b-and-real-result.tex}
\myparagraph{Program \mycircledtext{e}}
As shown in Figures~\ref{fig:ssa-m}~and~\ref{fig:compile-m},
both the code size of the translated programs
and the compilation time scale exponentially with $m$
for \glsentryshort{mono},
\glsentryshort{gotogo}, and Go~1.18\xspace.
The reason is that \gocode{f$_m$()} essentially calls \gocode{CallBase()} $2^m$ times with $2^m$
distinct parameter combinations, because for $i\in[2,m), $ \gocode{f$_i$()} calls \gocode{f$_{i+1}$()}
twice, with its input parameters plus \gocode{Red} for the first time and its parameters
plus \gocode{Blue} for the second time, leading the three translators to copy
\gocode{CallBase()} $2^m$ times.
However, neither \glsentryshort{dict} nor \glsentryshort{erasure}
makes any copy of \gocode{CallBase()},
and the code size of their translated programs is in a polynomial relationship with $m$
(e.g., for \glsentryshort{dict}'s translated programs, $\textit{size} = 12.8m^2 + 34.5m + 381$, $p<0.001$).
Contrary to intuition, as shown in Figure~\ref{fig:time-m},
the programs translated by \glsentryshort{mono}
have worse execution performance than the corresponding programs translated by \glsentryshort{dict}
when $m$ is larger than 7.
The reason is that when $m$ is large, a program synthesised by \glsentryshort{mono}
has a large code size, and thus many cache misses occur during its execution.
For example, when $m$ is 9, the size of the executable file translated by \glsentryshort{mono} is 6.3MB,
and the executable triggers 6,058,156 cache misses in one run,
while the program translated by \glsentryshort{dict} only
causes 93,695 cache misses.
\myparagraph{Type simulation}
As we discussed earlier, we disable the metadata copy of type simulation.
If we enable the copy, then the translated programs
become slower (e.g., 10\% slower for \mycircledtext{a} when $n$ is $2$); the slowdown becomes negligible when $n$ reaches $40$.%
\subsubsection{Real-world benchmarks}
The evaluation results of real-world benchmarks are shown in Table~\ref{tab:real-benchmark-results}.
Overall, the translated programs of \glsentryshort{dict} and \glsentryshort{erasure}
have a smaller code size, but a longer execution time, compared with the corresponding programs translated by
\glsentryshort{gotogo}, \glsentryshort{mono}, and Go~1.18\xspace, which is consistent
with the results on the micro benchmarks.
However, the compilation time does not change significantly across different translators,
because all real-world benchmarks are small and do not have many usages of generics.
\input{figs/eval/exp-result-table.tex}
}%\vspace{-2mm}
\subsection{Discussion and Limitations}
\label{section:discussion}
Our experimental results largely reflect the common intuition that
monomorphisation translators (\gls{mono}, \gls{gotogo}, and Go~1.18\xspace) generate programs
with a better runtime performance,
while non-specialising translators (\gls{dict} and \gls{erasure}) synthesise programs in a smaller code size.
However, our evaluation also pinpoints cases where monomorphisation generates
extremely large programs,
which trigger excessive cache misses during execution and thus exhibit very poor runtime performance.
On the other hand,
our experimental results motivate the introduction and usage of Go generics,
since without generics,
Go programmers have to implement polymorphism
using interfaces, which yields exactly the code produced by \gls{erasure},
and our experimental results show that such programs are slow.
In practice, our dictionary-passing translator (\gls{dict}) consistently
generates smaller programs and takes less (or comparable) compilation time
than all existing translators (including Go~1.18\xspace,
the official generic type implementation).
Thus, it provides an alternative for real-world users of Go generics to
strike their desired tradeoff.
Moreover, our implementation and evaluation experience show
that type simulation is an important component of \gls{dict},
and that type metadata incurs extra runtime overhead.
Thus, the corresponding data structures and algorithms need to be carefully designed
to produce better translated programs.
For instance, link-time optimisation can be applied to remove unused type metadata.
\myparagraph{Possible improvements for Go~1.18\xspace}
First, Go~1.18\xspace is very conservative in its support for GC shapes --
only considering pointers to have the same GC shape.
In our experiments, we do not observe any reuse of method implementations,
nor the synthesis and use of dictionaries.
Thus, to make full use of dictionaries and GC shape stenciling~\cite{go118},
it is necessary for the Go team to improve the current implementation and support
more GC shapes.
Second, the Go team can
consider dictionary-passing-based homogeneous compilation, as proposed
in this paper, since it supports polymorphic recursion, provides a faster compilation speed,
generates programs with a smaller code size, and enables separate compilation.
\myparagraph{Limitations}
Since the official generic type implementation was only released on
March 15th, 2022,
there does not yet exist generic Go code in
large, production-run Go software (e.g.,~Docker, Kubernetes, etcd).
We build the two benchmark suites to explore the translators'
asymptotic behaviours
and inspect how they perform on representative generic programs in other languages,
which is our best effort in conducting the evaluation.
We formalise \gls{dict} as a source-to-source translator
to clarify design choices for future implementations and
aid our proof of correctness (Theorem~\ref{thm:main:correctness}).
However, this choice limits the performance of our implementation, and the evaluation results
may not reflect the true capability of dictionary-passing
translation for two reasons:
first, we erase all types to \gocode{Any} to ensure type preservation,
which is slow at runtime;
and second, Go does not allow the creation of global constant dictionaries in source code,
but those dictionaries can potentially be created by the Go compiler
and leveraged by translated programs for a better runtime performance.
\section{\glsentrylong{fg}}
\label{section:fg}
\label{sec:fg}
We briefly summarise the \glsfirst{fg} language~\cite[\S~3]{griesemer2020featherweight},
highlighting the key points
related to the dictionary-passing translation.
\subsection{\glsentrylong{fg} by Examples}
\label{sec:fg:example}
\input{figs/fg/fg-code-function}
\gls{fg} is a core subset
of the (non-generic) Go 1.16 language containing \emph{structures}, \emph{interfaces},
\emph{methods}, and \emph{type assertions}.
In \gls{fg}, there are two kinds of named types:
\emph{interfaces} (\inlinelstfcgg{interface}), which specify a collection
of methods that any implementing type must possess, and
\emph{structures} (\inlinelstfcgg{struct}), which are
data objects containing a fixed
collection of typed fields.
\emph{Methods} are functions that apply to a
specific structure, called the method's \emph{receiver}.
Finally, \emph{type assertions} ask whether a structure can be used
as a specific type. If it cannot, then \gls{fg} will produce a
\emph{type assertion error}.
In contrast to nominally typed languages, Go uses
\emph{structural subtyping}.
As we shall see in \S~\ref{section:dictionary},
it is this
distinctive feature that makes our dictionary-passing translation
challenging and non-trivial.
In a nominally typed language, such as Java, one type implements (subtypes)
another only when it explicitly declares that it does.
In Go, we do not declare that one type implements another.
Rather, one type implements another precisely when it implements
(at least) all of the prescribed methods.
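For example (a hypothetical sketch), the struct \inlinelstfcgg{English}
below implements \inlinelstfcgg{Greeter} without mentioning it anywhere:
\begin{lstfcgg}
type Greeter interface { Hello() Any }
type English struct {}
// English <: Greeter holds implicitly: English possesses every
// method that Greeter prescribes, so no declaration is needed.
func (this English) Hello() Any { $\cdots$ }
\end{lstfcgg}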
Consider the example Go code in
Figure~\ref{code:fg:example}, which
simulates higher order functions, lists, and mapping.
For simplicity of presentation, we assume
that there are primitive \inlinelstfcgg{int} and \inlinelstfcgg{bool}
types along with a $<$ operation.
The \inlinelstfcgg{Any} interface does not specify any methods; as such,
all other types are its subtypes, meaning that any object may be used
when an \inlinelstfcgg{Any} is expected, but also that we cannot
apply any methods to an \inlinelstfcgg{Any} object without first
asserting it to some more specific type -- an action which may fail at runtime.
The \inlinelstfcgg{Function} interface specifies a single method,
which is given by the \emph{method signature}
\inlinelstfcgg{Apply(x Any) Any}.
Any structure implementing
an \inlinelstfcgg{Apply} method
that takes an argument of type \inlinelstfcgg{Any}
and returns a value, also of type \inlinelstfcgg{Any},
is said to
implement the \inlinelstfcgg{Function} interface.
Our example code simulates the \emph{greater than} function
as a structure (\inlinelstfcgg{GtFunc}) containing a single \inlinelstfcgg{Ord}
field. Its \inlinelstfcgg{Apply} method then calls the \inlinelstfcgg{Gt}
method provided by the struct's field.
The \inlinelstfcgg{Ord} interface, however, specifies that \inlinelstfcgg{Gt}
should accept a single argument of type \inlinelstfcgg{Ord}.
Before the \inlinelstfcgg{Apply} method of \inlinelstfcgg{GtFunc} can call
\inlinelstfcgg{Gt} it must, then, assert its argument to
type \inlinelstfcgg{Ord}.
If the argument does not implement \inlinelstfcgg{Ord}, then a \emph{type assertion error}
occurs.
We assume that only one implementation of \inlinelstfcgg{Ord} exists, that being
\inlinelstfcgg{int}, which itself uses a risky type assertion.
The example also includes a \inlinelstfcgg{List} interface specifying
a \inlinelstfcgg{Map} method. We provide a cons list implementation
of \inlinelstfcgg{List}.
In \gls{fg}, there is a single top-level \inlinelstfcgg{main} function
that acts as the program's entry point.
Our program initially builds a simple three
value \inlinelstfcgg{int} list on line~\ref{fg:code:function:build},
and then uses the simulated greater than function (\inlinelstfcgg{GtFunc}) to
map the list to a \inlinelstfcgg{bool} list.
When, however, we
attempt to map this \inlinelstfcgg{bool} list using the same function, we
encounter a runtime type assertion error on line~\ref{fg:code:function:topanic}.
While we could catch this error at compile time by
increasing the specificity of the \inlinelstfcgg{Apply}, \inlinelstfcgg{Gt}, and
\inlinelstfcgg{Map} functions using \inlinelstfcgg{int} and
\inlinelstfcgg{bool} instead of \inlinelstfcgg{Any},
this would severely limit
code reusability.
}%\vspace{-2mm}
\subsection{\glsentrylong{fg} Syntax and Semantics}
\label{sec:fg:syntax}
\input{figs/fg/fg-syntax}
Figure~\ref{fig:fg:syntax} presents the syntax of \gls{fg} from
\cite{griesemer2020featherweight}.
We use the $\multi{x}$ notation for a sequence of $x$, namely $x_0, x_1,
\dots, x_n$.
A program ($P$) is given by a sequence of declarations ($\multi{D}$)
along with a {\bf main} function which acts as the top-level expression ($e$).
We shorten this to $P = \program{e}$.
\gls{fg} is statically typed:
all \gls{fg} typing rules follow the Go 1.16 specification.
If, in the variable-type
environment $\Gamma$,
an expression $e$ is of type $t$,
then it satisfies the judgement $\wellTyped[\Gamma]{e}{t}$.
We assume that all programs $P$ are \emph{well-formed}, written
$P\operatorname{\var{ok}}$.
Since the rules/notations are identical to those
in \cite{griesemer2020featherweight},
we omit them here, but provide
definitions and details in
\ifnotsplit{Appendix~\ref{app:fg}}{the full version of this paper~\cite{fullversion}}.
\label{section:fg:reduction}
Figure~\ref{fig:fg:semantics} presents the \gls{fg} semantics with values and
evaluation contexts.
\textbf{\emph{Evaluation context}} $E$ defines the left-to-right call-by-value semantics
for expressions.
\textbf{\emph{Reductions}} are defined by the field selection rule
\rulename{r-fields}, type assertion rule \rulename{r-assert},
and the method
invocation \rulename{r-call}, with \rulename{r-context} for the context
evaluation. We use $\longrightarrow^\ast$ to denote a multi-step reduction.
\gls{fg} satisfies type preservation and progress properties
(see \cite[Theorems 3.3 and 3.4]{griesemer2020featherweight}).
\input{figs/fg/fg-semantics}
\section{\glsentrylong{fgg} and the limitations of monomorphisation and
Go~1.18\xspace}
\label{section:fgg}
As with \S~\ref{section:fg}, we briefly summarise the
\glsentryfirst{fgg} language~
\cite[\S~4]{griesemer2020featherweight}.
This section concludes with
a discussion of limitations in existing generic Go translations and Go~1.18\xspace.
}%\vspace{-2mm}
\subsection{\glsentrylong{fgg} by Example}
\input{figs/fgg/fg-code-function}
Figure~\ref{code:fgg:example} extends
Figure~\ref{code:fg:example} with generics.
As we saw in \S~\ref{sec:fg:example}, there was
a critical flaw in the original, non-generic, \gls{fg}
code. One part of the logic was polymorphic
(\ie \inlinelstfcgg{Map} is a natural transformation) while the
other was not (\ie \inlinelstfcgg{Gt}). We concluded that
section by observing
two options: either we cater to the strict type
discipline demanded by \inlinelstfcgg{Gt}, reducing
reusability, or force an excessively permissive
polymorphism on \inlinelstfcgg{Gt} and risk runtime type assertion errors.
Generics, or bounded parametric polymorphism,
provide us with a third solution via the
precise definition and tracking of polymorphic types in
structures, interfaces, and methods.
As we shall see momentarily, in \gls{fgg}, each of
these constructs may now accept any number of
type variables (type parameters) as a type
formal, which must then be instantiated upon use.
Each type variable has a bound, an interface, that
any instantiating type must satisfy, \ie be an instance of.
Type formal \inlinelstfcgg{[T Any]} is read as type parameter
\inlinelstfcgg{T} is bound by type \inlinelstfcgg{Any}.
Objects with a generic type can use all methods
specified by the type variable's bound.
Type variables can be bound by any interface type, and may be mutually recursive within a type formal.
Take, for example, the type bound of \inlinelstfcgg{Ord} in Figure~\ref{code:fgg:example}.
\inlinelstfcgg{Ord}'s type parameter is bound by \inlinelstfcgg{Ord} itself, and \inlinelstfcgg{Ord} is used recursively in the
type bound for \inlinelstfcgg{GtFunc}.
For a type (\eg \inlinelstfcgg{int}) to instantiate type variable
\inlinelstfcgg{T} in \inlinelstfcgg{[T Ord[T]]},
its \inlinelstfcgg{Gt} method must not only take an argument of \inlinelstfcgg{Ord},
but must be precisely the same \inlinelstfcgg{Ord}-implementing type.
This kind of self-referential type bound is known as
\emph{F-bounded polymorphism} \cite{canning1989f}.
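Concretely (a sketch, under the paper's running assumption that
\inlinelstfcgg{int} implements \inlinelstfcgg{Ord}), \inlinelstfcgg{int}
may instantiate \inlinelstfcgg{T} in \inlinelstfcgg{[T Ord[T]]} only because
its \inlinelstfcgg{Gt} method takes \inlinelstfcgg{int} itself:
\begin{lstfcgg}
// int satisfies [T Ord[T]]: Gt's argument type is int itself.
// A signature Gt(that Any) bool would not satisfy this bound.
func (this int) Gt(that int) bool { $\cdots$ }
\end{lstfcgg}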
The interface \inlinelstfcgg{Function}
is now defined over two type variables (\inlinelstfcgg{T} and
\inlinelstfcgg{R}, both bounded by \inlinelstfcgg{Any}),
which are used by the specified \inlinelstfcgg{Apply}
method to type the simulated function's domain and codomain, respectively,
e.g., a type implementing \inlinelstfcgg{Function[int, bool]} must
implement the method \inlinelstfcgg{Apply(x int) bool}.
Unlike the original \gls{fg} code, we do not need \inlinelstfcgg{GtFunc}
to simulate any arbitrary function, but rather just functions from
some generic \inlinelstfcgg{Ord} type
to \inlinelstfcgg{bool}.
Instantiating \inlinelstfcgg{GtFunc} with \inlinelstfcgg{int},
written \inlinelstfcgg{GtFunc[int]},
gives an implementation of \inlinelstfcgg{Function[int,bool]}.
A type bound not only limits which types may specialise a
type parameter, but also what methods are available to
polymorphic values, \ie
given that all valid specialisations of \inlinelstfcgg{T}
in \inlinelstfcgg{GtFunc[T]}
must implement \inlinelstfcgg{Ord[T]}, we know that the
\inlinelstfcgg{val} field must always
possess the \inlinelstfcgg{Gt} method, allowing
us to call to \inlinelstfcgg{Gt} on line~\ref{fgg:code:function:apply:gt}
without a type assertion.
The definition of \inlinelstfcgg{List} tracks not only the type of
the list, but also the type of the list created
by \inlinelstfcgg{Map}. The \inlinelstfcgg{Map}
method accepts a type parameter along
with a \inlinelstfcgg{Function} argument; this type parameter is then
used as the codomain of the \inlinelstfcgg{Function} argument, and
instantiates the \inlinelstfcgg{List} return type.
Line~\ref{fgg:code:function:typefail} thus fails during type
checking because \inlinelstfcgg{GtFunc} does not
implement \inlinelstfcgg{Function[bool, bool]}.
}%\vspace{-2mm}
\subsection{\glsentrylong{fgg} Syntax and Semantics}
\label{section:fgg:syntax}
\input{figs/fgg/fgg-syntax}
Figure~\ref{fig:fgg:syntax} presents the syntax of \gls{fgg}.
The key differences from \gls{fg} are the addition of types formal
($\Psi, \Phi$) for method signatures and declarations.
A type formal ($\typeFormal$) is a sequence of pairs,
each of which contains
a type parameter ($\alpha$) and its parameter bound ($\iType{\tau}$).
Type bounds are interface types that
can be mutually recursive, in that any bound in a type
formal may depend upon any type parameter in that type formal, including itself.
Type parameters are instantiated by a type actual ($\psi, \phi$) -- a
sequence of types that satisfy the requirements imposed by the type
formal. A type ($\tau$)
in \gls{fgg} is either a type parameter or
a declared type that has been instantiated
($t\typeActualReceive$).
We simplify method declaration from
\gls{fgg}~\cite{griesemer2020featherweight}, following
the Go~1.18\xspace syntax.
\label{section:fcgg:typing}
The type system in \gls{fgg} extends
\gls{fg} with the addition of a new type variable context $\Delta$
mapping type variables to their bounds.
Expression $e$ of type $\tau$ is now given by the judgement
$\wellTyped[\Delta;\Gamma]{e}{\tau}$. Program well-formedness is given
by $P \operatorname{\var{ok}}$.
The typing rules follow those
given in \cite[Figure~15]{griesemer2020featherweight}, which can
be found in
\ifnotsplit{Appendix~\ref{appendix:fgg}}{the full version of this paper~\cite{fullversion}}.
\label{section:fgg:reduction}
The reduction semantics of \gls{fgg} are defined in
Figure~\ref{fig:fgg:semantics}.
They extend those of \gls{fg}; notably, \rulename{r-call}
(via the $\body$ auxiliary function) specialises generic types
in the resolved method body.
\gls{fgg} satisfies
type preservation and progress properties
(see \cite[Theorems 4.3 and 4.4]{griesemer2020featherweight}).
\input{figs/fgg/fgg-semantics}
\section{Introduction}
\label{sec:introduction}
Since its creation in 2009, the Go programming language
has placed a key emphasis on simplicity, safety, and efficiency.
Based on the
\citet{stackoverflow-developer-survey} survey, Go is the 5th most beloved
language, and is used to build
large systems, \eg \citet{docker},
\citet{kubernetes}, and \citet{grpc}.
The recent Go release (Go~1.18\xspace released on the 15th of March 2022)
added \emph{generics}, which has been considered Go's
most critical missing and
long awaited feature by
Go programmers and developers~\cite{go-developer-survey}.
\citet{go-release-notes}, however, has posted
that much work is still needed to ensure that
generics in Go are well-implemented.
The work on implementing generics in Go began in earnest with
\citet{griesemer2020featherweight},
in which they formalised two core calculi of (generic) Go, \gls{fgg} and \gls{fg},
as well as formalising
a \emph{monomorphisation translation} from \gls{fgg} to \gls{fg}.
Monomorphisation statically explores
a program's call graph and generates multiple
implementations of each generic
type and method according to each
specialisation of that type, or method, required at runtime.
The Go team informally proposed three approaches:
\begin{enumerate*}
\item Stencilling (monomorphisation)~\cite{google-mono},
\item Call-graph dictionary-passing~\cite{go-dict-proposal}, and
\item GC shape stencilling (hybrid of (1) and (2))~\cite{google-hybrid}.
\end{enumerate*}
A monomorphisation-based source-to-source prototype (\gls{gotogo})
has been implemented by
\citet{gotogo}, following the stencilling proposal (1) and
\cite{griesemer2020featherweight}.
The current Go~1.18\xspace implementation
extends (3)~\cite{go118}.
Unlike more traditional \emph{non-specialising}
dictionary approaches
(\eg dictionary-passing in Haskell and vtables in C++),
Go~1.18\xspace uses an optimised form of monomorphisation to allow types
in the same GC shape group to share specialised method and type instances.
In theory, all objects in a GC shape group have an equivalent
memory footprint and layout, although currently, Go~1.18\xspace
only groups pointers.
As multiple types may share the same GC shape group,
their dictionaries provide information lost during monomorphisation, \eg
concrete types and method pointers.
Moreover, Go~1.18\xspace builds a monolithic dictionary based on
the program's \emph{call-graph}.
Monomorphisation has a
number of well-known limitations:
it can substantially increase code
it can substantially increase code
size, it can be prohibitively slow
during compilation~\cite{jones1995dictionary, StroustrupB:cpppl},
and it does not cover all programs~\cite{griesemer2020featherweight}.
Concretely, there are two core limitations with all the Go team proposals
(1--3), the current Go~1.18\xspace implementation, and the proposal of \citet{griesemer2020featherweight}.
\input{figs/fgg/fgg-nomono.tex}
\textit{1) Non-monomorphisable programs.\ }
All current implementations and proposals for generics in
Go suffer from the inability
to handle a class of programs that
use recursive instantiations, \eg the
list permutation example\footnote{
See \cite{gitchanderpermute} for an
efficient but type unsafe implementation of list permutation.}
provided in
Figure~\ref{fig:code:list:perm}.
This program cannot be monomorphised, as
a list of integers
\inlinelstfcgg{List[int]} has a
\inlinelstfcgg{permute} method which returns a list of type
\inlinelstfcgg{List[List[int]]}, which in turn has a
\inlinelstfcgg{permute} method
that returns type \inlinelstfcgg{List[List[List[int]]]}, and on
\emph{ad infinitum}.
Monomorphisation cannot explore
this infinite set of types in finite time, and
so cannot specialise a method for each instance.
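The following condensed sketch (ours; the full program is in Figure~\ref{fig:code:list:perm}, and all names here are illustrative) shows the shape of the recursive instantiation:
\begin{lstfcgg}
type List[$\alpha$ any] struct { head $\alpha$; tail *List[$\alpha$] }

// permute on List[$\alpha$] forces an instantiation at List[List[$\alpha$]],
// which forces one at List[List[List[$\alpha$]]], and so on: the required
// instance set {List[int], List[List[int]], ...} is infinite.
func (l *List[$\alpha$]) permute() *List[List[$\alpha$]] {
  if l == nil { return nil }
  _ = (&List[List[$\alpha$]]{}).permute() // the next, deeper instantiation
  return &List[List[$\alpha$]]{}
}
\end{lstfcgg}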
\textit{2) Specialising translation.\ }
All currently realised approaches to generics in Go are
based on method/type specialisation.
This stands in contrast
to the approaches taken by other languages with automatic
memory management, such as Haskell, C\#, and Java.
Go uses garbage collection for automatic
memory management.
In the top 16 statically typed languages
with generics~\cite{TopProgr17:online},
we find a consistent theme;
languages with automatic memory management use
non-specialising implementations such as
dictionary-passing or erasure, and those without
use monomorphisation (see
\ifnotsplit{Appendix~\ref{app:implementations}}{the full version of this paper~\cite{fullversion}} for a breakdown of language implementations).
\myparagraph{Challenges and contributions}
We develop and implement a new non-specialising, call-site dictionary-passing translation
from Go with generics (\gls{fgg})
to Go (\gls{fg}), and prove its correctness. We then create micro and
real-world benchmarks for generic Go,
and examine the trade-offs
between the different translations to suggest
improvements for Go~1.18\xspace.
\textit{1) The first challenge is to design and build a
non-specialising call-site dictionary-passing translation for Go.\ }
Go's distinctive
structural subtyping adds an extra level of complexity that requires careful consideration.
Our first contribution in \S~\ref{section:dictionary} and \S~\ref{subsec:imple}
is the formalisation and implementation of a new dictionary-passing translation
that is specifically designed for the unique qualities of Go.
\textit{2) The second challenge is to overcome the
non-monomorphisability limitation} of
the current implementations and translate previously untranslatable
programs such as \inlinelstfcgg{permute}.
A key aspect of our dictionary design is \emph{call-site}---each polymorphic type parameter
is represented by its own dictionary, which in turn is created at
the call-site where that type parameter would have been instantiated.
This allows any well-formed \gls{fgg} program to be translated.
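To make the design concrete, the following sketch (ours; all names are illustrative, and the actual translation is defined in \S~\ref{section:dictionary}) contrasts a generic call with its call-site translation:
\begin{lstfcgg}
// Source: func head[$\alpha$ any](xs []$\alpha$) $\alpha$, called as head[int](nums).

// Translation (sketch): the type parameter becomes an ordinary
// dictionary argument, built at the call-site of the instantiation.
type Any interface{}
type TypeDict struct { name string } // illustrative runtime type info

func head(dict TypeDict, xs []Any) Any { return xs[0] }

func example(nums []Any) Any {
  return head(TypeDict{"int"}, nums) // dictionary created at the call-site
}
\end{lstfcgg}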
\textit{3) The third challenge we meet is
to establish semantic correctness of our
translation}. Historically, dictionary-passing translations
have been proven correct using value preservation~\cite{yu2004formalization,yu2004thesis,sulzmann2021dictionary},
an approach that cannot ensure
termination preservation or
generalise to more advanced language
features (\eg concurrency in Go).
We instead use a fine-grained behavioural equivalence
guided by the work of \citet{Igarashi99FJ}.
Unfortunately, a plain \emph{bisimulation} in the style of
\cite[Theorem 5.4]{griesemer2020featherweight}
is insufficient due to intermediate states
created by dictionary-passing.
We propose a novel \emph{bisimulation up to dictionary resolution} reduction,
and use this relation to prove that the translation preserves
essential properties of the source language (\S~\ref{section:properties}).
This proof technique is general and translation-agnostic,
and is useful in other contexts where a standard bisimulation
is inadequate.
\textit{4) The fourth challenge is to find an effective evaluation for implementations of
generics in Go}.
We compare the five implementations---
\begin{enumerate*}
\item our call-site, non-specialising dictionary-passing
translation;
\item an erasure translation
built by us for empirical evaluation;
\item a monomorphisation translation by \citet{griesemer2020featherweight};
\item the initial source-to-source monomorphisation prototype translation \gls{gotogo} by the Go team; and
\item Go~1.18\xspace
\end{enumerate*}
---along three dimensions;
\begin{enumerate*}
\item compilation time,
\item translated code size, and
\item performance of compiled executables.
\end{enumerate*}
As Go~1.18\xspace was just released,
\emph{there currently exists no real-world Go program
with generics}.
In \S~\ref{subsec:evaluation}, we contribute a number of benchmarks to overcome this
deficit: we construct micro benchmarks to examine the effect of different
forms of complexity in generic programs;
and reimplement the real-world benchmarks from
\cite{odersky2000two,ureche2013miniboxing} in Go.
\textit{5) The final challenge is to examine
the trade-offs between the different translations, which
suggest future improvements of Go~1.18\xspace}.
We observe,
in general, that monomorphisation leads
to better execution performance, while
non-specialisation (dictionary-passing) produces
smaller executables in less compilation time.
We also observe that on the micro benchmarks our dictionary-passing
translation can generate programs that are comparable in efficiency
to Go~1.18\xspace.
Overall, our results show that Go~1.18\xspace has much scope for improvement, and
demonstrate the usefulness of non-specialised call-site dictionary-passing translations for languages such as Go.
We provide concrete suggestions in \S~\ref{section:discussion}.
\myparagraph{\emph{Outline}}
\S~\ref{section:fg} and \S~\ref{section:fgg} summarise \gls{fg} and
\gls{fgg}; \S~\ref{section:dictionary} proposes a new
dictionary-passing translation;
\S~\ref{section:properties} proves
its semantic correctness;
\S~\ref{sec:exp} describes our implementations
and measures the trade-offs between the five translators;
\S~\ref{section:related} gives related work; and \S~\ref{section:conclusion}
concludes.
Proofs and
omitted definitions can
be found in
\ifnotsplit{the Appendix to this paper.}{the full version of the paper \cite{fullversion}.}
The dictionary-passing/erasure translators
and benchmarks are available in the artifact accompanying this paper~\mbox{\cite{aritfact-entry}}.
Source code is available on GitHub~\cite{zhu22github}
and Software Heritage~\cite{zhu22heritage}.
\subsection{The Limitation of Monomorphisation}
\label{section:nomono}
\citet{griesemer2020featherweight}
define
a class of programs that their monomorphisation approach
cannot translate.
This limitation also applies to the Go~1.18\xspace call-graph based dictionary
implementation, for the same reason.
Consider the model non-monomorphisable
program in Figure~\ref{fig:example:nomono}.
\begin{wrapfigure}{l}{0.32\textwidth}
\vspace*{-.2cm}
\begin{lstfcgg}
type Box[$\alpha$ Any] struct { value $\alpha$ }
func (b Box[$\alpha$]) Nest(n int) Any {
if (n > 0) {
return Box[Box[$\alpha$]]{b}.Nest(n-1)
} else { return b }
}
\end{lstfcgg}
\vspace*{-.3cm}
\caption{\inlinelstfcgg{Box} example\\\cite[Figure~10]{griesemer2020featherweight}}
\label{fig:example:nomono}
\vspace*{-.3cm}
\end{wrapfigure}
Intuitively, the fundamental issue with this
deceptively simple program
is that \emph{instance set discovery} is
non-terminating.
To monomorphise a program,
we first need to discover all possible type instantiations used in
said program.
Perfectly well-behaved programs may however
produce infinitely many type instantiations.
This occurs when an instance of a (mutually)
recursive method eventually depends upon a greater
instantiation of itself, which in turn depends on an even
greater instantiation of itself \textit{ad infinitum}, \eg
\inlinelstfcgg{Box[int].Nest()} depends
upon the specialisation \inlinelstfcgg{Box[Box[int]].Nest()}.
In \cite{griesemer2020featherweight}, such programs are called
{\it nomono}.
\subsection{Go~1.18\xspace Implementation}
The official release of Go~1.18\xspace uses an optimised version of monomorphisation called
\emph{dictionaries and GC shape stenciling}~\cite{go118}.
When possible, their implementation reuses monomorphised functions to reduce code size.
Two objects may share the same specialised method
implementation when they have the same GC shape.
In the current implementation, the criterion for having the same
GC shape is that both objects are of the same data type, or both are pointers.
Each function therefore must have a dictionary to differentiate
concrete types at runtime.
A dictionary contains (1) the runtime type information of
generic type parameters, as well as (2) their derived types used in the function.
In the function body, each generic function call that
depends on the generic type parameters also needs a dictionary;
(3) these sub-dictionaries required by the method calls are also provided in the dictionary.
Additionally, the dictionary provides each generic object
with (4) the data structure that the Go runtime uses to conduct method calls.
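A hedged sketch of these four kinds of entries (ours; the field names and types are illustrative placeholders, not the actual Go runtime representation):
\begin{lstfcgg}
type rtype struct{} // placeholder: runtime type metadata
type itab struct{}  // placeholder: interface method table

type dictionary struct {
  typeParams []*rtype      // (1) runtime types of the generic type parameters
  derived    []*rtype      // (2) derived types used in the function body
  subDicts   []*dictionary // (3) dictionaries for generic calls in the body
  methodData []*itab       // (4) structures the runtime uses for method calls
}
\end{lstfcgg}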
Go~1.18\xspace would also need to create an infinite call-graph dictionary
for the \inlinelstfcgg{Box} example in Figure~\ref{fig:example:nomono},
as well as for
the \inlinelstfcgg{permute} example in
Figure~\ref{fig:code:list:perm}. Hence, Go~1.18\xspace cannot handle either
example. Our call-site dictionary-passing approach does
not suffer from this limitation.
\section{Correctness of Dictionary-Passing Translation}
\label{section:properties}
In this section, we define, justify, and prove
the correctness of our dictionary-passing translation
using a behavioural equivalence.
We first introduce general
\emph{correctness criteria}
which good translations should satisfy.
We then propose a novel \emph{bisimulation up to} technique
to prove that translated programs are behaviourally equivalent to their source program.
We use this result to prove the correctness of our dictionary-passing translation.
Full proofs can be found in
\ifnotsplit{Appendix~\ref{app:proofs}}{the full version of this paper~\cite{fullversion}}.
\subsection{Correctness Criteria}
\label{section:properties:correctness}
The correctness criteria are defined
using a number of preliminary
predicates provided below.
\begin{definition}[Type assertion errors]
\label{def:panics}
We say expression $e$ in \gls{fg} is a \emph{type assertion error}
(\emph{panic} in \cite{griesemer2020featherweight})
if there exists an evaluation
context $E$, value $v$, and
type $t$ such that
$e=E[v.(t)]$
and $\vtype(v)\not <: t$.
We say expression $e$ gets
a \emph{type assertion error} (denoted by
$e\Downarrow_\ensuremath{\mathsf{panic}}$)
if it reduces to an expression that contains a type assertion error,
\ie{} $e\longrightarrow^\ast e'$ and $e'$
is a type assertion error.
We write $P\Downarrow_\ensuremath{\mathsf{panic}}$ when
$P = \program{e}$ and $e\Downarrow_\ensuremath{\mathsf{panic}}$.
Similarly, we define
$e\Downarrow_\ensuremath{\mathsf{panic}}$ and $P\Downarrow_\ensuremath{\mathsf{panic}}$ for \gls{fgg}.
\end{definition}
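In Go syntax, a type assertion error corresponds to a runtime panic on a failed assertion; a minimal self-contained sketch (ours):
\begin{lstfcgg}
type Any interface{}
type A struct{}
type B struct{}

func main() {
  var v Any = A{}
  _ = v.(B) // vtype(v) = A is not a subtype of B: panics at runtime
}
\end{lstfcgg}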
We write $e\Downarrow v$ if there exists $v$ such that
$e\longrightarrow^\ast v$ and extend this predicate to $P$.
We abbreviate $\dict[\emptyset;\emptyset;\emptyset]{e}{\lex{e}}$ to
$\dict[]{e}{\lex{e}}$.
We define the following general correctness
criteria
related to typability, error correctness,
and preservation of a program's final result.
\begin{definition}[Preservation properties]
\label{def:type:preservation}
Let $P \operatorname{\var{ok}} $ in \gls{fgg},
and let there exist $\lex{P}$ such that $\dict[]{P}{\lex{P}}$.
A translation is:
\begin{enumerate}
\item \textbf{\emph{type preserving}}: if
$P \operatorname{\var{ok}}$, then $\lexP\operatorname{\var{ok}}$.
\item \textbf{\emph{type assertion error preserving}}:
$P\Downarrow_\ensuremath{\mathsf{panic}}$ iff $\lex{P}\Downarrow_\ensuremath{\mathsf{panic}}$.
\item \textbf{\emph{value preserving}}:
$P\Downarrow v$ iff $\lex{P}\Downarrow \lex{v}$
with
$\dict[]{v}{\lex{v}}$.
\end{enumerate}
\end{definition}
We only require the left-to-right direction for
type preservation, as due to type erasure
(\S~\ref{paragraph:structure}),
we cannot obtain the right-to-left
direction for dictionary-passing.
Our type preservation criterion matches that defined in
\citet[Theorem~5.3]{griesemer2020featherweight}.
We can, however, show
that type assertions are precisely
simulated (\S~\ref{subsubsec:typecollision}).
\subsection{Behavioural Equivalence -- Bisimulation up to Dictionary Resolution}
\label{subsec:prop:beh}
\citet[Theorem 5.4]{griesemer2020featherweight} prove the correctness
of the monomorphisation translation using
a simple (strong) bisimulation:
the binary relation $\Re$ is a \emph{bisimulation} iff
for every pair of $\ENCan{e,d}$ in $\Re$, where $e$ is a \gls{fgg} expression
and $d$ is a \gls{fg} expression:
\begin{enumerate*}
\item if $e \longrightarrow e'$, then $d \longrightarrow d'$ such that
$\ENCan{e',d'}\in \Re$; and
\item if $d \longrightarrow d'$, then $e \longrightarrow e'$ such that
$\ENCan{e',d'}\in \Re$.
\end{enumerate*}
This strong bisimulation suffices for translations that
preserve a simple one-to-one reduction-step correspondence.
Unlike monomorphisation,
dictionary-passing relies
on runtime computation, which prevents such a simple
correspondence.
We can, however, distinguish between reductions
introduced by dictionary-passing and those
inherited from the source program. This distinction allows
us to construct
a one-to-one correspondence relation
\emph{up to} dictionary resolution.
The formulation is non-trivial since, in \gls{fg},
dictionary resolution can occur at any point in a subterm.
\begin{wrapfigure}{r}{.67\linewidth}
\begin{minipage}[t]{0.37\linewidth }
\vspace{-4mm}
\begin{lstfcgg}
func foo[$\alpha$ Num](a $\alpha$) $\alpha$ {
return a.Add(bar($\cdots$))
}
func main() {
foo[Int](Zero{})
}
\end{lstfcgg}
\end{minipage}
\begin{minipage}[t]{0.62\linewidth }
\vspace{-4mm}
\lstset{firstnumber=1}
\begin{lstfcgg}
func foo(dict NumDict, a Any) Any {
return dict.Add.Apply(a, bar($\cdots$)) }
type Int_Add struct {} // method pointer
func (i Int_Add) Apply(this Any, a Any) Any {
return this.(Int).Add(a) }
func main() {
foo(NumDict{Int_Add{}}, Zero{}) }
\end{lstfcgg}
\end{minipage}
\vspace{-3mm}
\caption{Non-trivial dictionary example. Source (Left). Translation (Right)}
\label{fig:example:nontriv}
\vspace{-3mm}
\end{wrapfigure}
We demonstrate this issue by evaluating the example in Figure~\ref{fig:example:nontriv}.
Importantly, the translated
function \inlinelstfcgg{foo}
cannot resolve the generic \inlinelstfcgg{Add} method
from dictionary \inlinelstfcgg{dict}
until \emph{after} expression \inlinelstfcgg{bar($\cdots$)}
is fully evaluated.
After one step, the \gls{fgg} program (left) is
\inlinelstfcgg{Zero<<>>.Add(bar($\cdots$))}.
If we translate this reduced term, we get
\inlinelstfcgg{Zero<<>>.(Zero).Add(bar($\cdots$))} ($\lex{Q_0}$).
But reducing the translated \gls{fg} program (right), we obtain
\inlinelstfcgg{NumDict<<Int_Add<<>>>>.Add.Apply(Zero<<>>, bar($\cdots$))} ($\lex{Q_1}$).
To show $\lex{Q_0}$ equivalent to $\lex{Q_1}$ using
the standard \gls{fg} reduction,
we would first have to fully resolve
\inlinelstfcgg{bar($\cdots$)} before we could start
to resolve the dictionary
access in $\lex{Q_1}$.
We might attempt to show that the translation in Figure~\ref{fig:example:nontriv}
is correct using a many-to-many reduction-step relation, \ie
some binary relation $\Re$ where
for every pair of $\ENCan{e,d}$ in $\Re$ it holds that
\begin{enumerate*}
\item if $e \longrightarrow^\ast e'$, then $d \longrightarrow^\ast d'$ such that
$\ENCan{e',d'}\in \Re$; and
\item if $d \longrightarrow^\ast d'$, then $e \longrightarrow^\ast e'$ such that
$\ENCan{e',d'}\in \Re$.
\end{enumerate*}
This approach is complicated by the presence of non-termination, \eg
if \inlinelstfcgg{bar($\cdots$)} does not return a value,
then we could never show that
$\lex{Q_0}$ and $\lex{Q_1}$ are related.
More importantly, many-to-many relationships give less information
about the nature of a translation
than one-to-one relationships.
Were we to consider just the
\inlinelstfcgg{NumDict<<Int_Add<<>>>>.Add.Apply($\cdots$)}
portion of $\lex{Q_1}$, we would observe that,
using a pre-congruence reduction,
$\lex{Q_1}$ resolves to \inlinelstfcgg{Zero<<>>.(Int).Add(bar($\cdots$))}.
We may then safely increase the accuracy of the
assertion \inlinelstfcgg{Zero<<>>.(Int)} to \inlinelstfcgg{Zero<<>>.(Zero)}
without altering the semantics of the term.
The latter step is required because, while the
dictionary stored the information
that \inlinelstfcgg{Zero<<>>} was passed to \inlinelstfcgg{foo}
as type \inlinelstfcgg{Int}, the reduction of the \gls{fgg} term
forgot this
information.
We call these two steps \emph{dictionary resolution},
as they resolve only those computations introduced by the use
of dictionaries for method resolution.
$\lex{Q_0}$ is equivalent to $\lex{Q_1}$
\emph{up to dictionary resolution}.
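Schematically, restating the discussion above, the two dictionary resolution steps relate the two programs as
\[
\lex{Q_1}
\;\dictred\;
\texttt{Zero\{\}.(Int).Add(bar(...))}
\;\dictred\;
\texttt{Zero\{\}.(Zero).Add(bar(...))}
\;=\; \lex{Q_0}.
\]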
Our translation also adds type simulation computations
and type assertions.
Unlike dictionary resolution,
these extra computation steps are subsumed by the
standard \gls{fg} reduction.
\begin{definition}[Dictionary resolution]
\label{def:dictreso}
We define three pattern sets in \gls{fg}:
$\ensuremath{\rho_{\text{erase}}}$
(type assertions as a result of erasure),
$\ensuremath{\rho_{\text{sim}}}$ (type assertion simulation),
and $\ensuremath{\rho_{\text{dict}}}$ (dictionary resolution):
\vspace{-2.2mm}
{\small
\begin{flalign*}
\ensuremath{\rho_{\text{erase}}} ::=& \left\{\begin{array}{l}
v.(t)
\end{array}\right\}\\[-1.5mm]
\ensuremath{\rho_{\text{sim}}} ::=& \left\{\begin{array}{l}
v.\texttt{\_type}_i,\
v.\texttt{\_type},\
v.(t),\
v.\method{spec\_name}{m}(), \
v.\texttt{dict}_i, \
\lit{if} ~ v \mathbin{!=} v~\sytxBrace{\lit{panic}}, \
\return v
\end{array}\right\}&\\[-1.5mm]
\ensuremath{\rho_{\text{dict}}} ::=& \left\{\begin{array}{l}
\dictName{t}\{\multi{v}\}.f, \
v.\texttt{dict}_i, \
v.\texttt{\_type}_i,\
v.\texttt{\_type},\\%[-0.5mm]
\method{mName}{t,m}\{\}.\texttt{Apply}(\multi{e}),\
\dictName{t}\{\multi{v}\}.(\dictName{t})
\end{array}\right\}
\end{flalign*}
}
From these patterns, we define a number of reductions.
We define the first of these as
$E[e]\red_{\text{e}} E[e']$
if $e\longrightarrow e'$ with $e\in \ensuremath{\rho_{\text{erase}}}$; and
$E[e]\red_{\text{s}} E[e']$
if $e\longrightarrow e'$ with $e\in \ensuremath{\rho_{\text{sim}}}$.
We write $d\Longrightarrow d'$ if
$d\red_{\text{e}}^\ast \longrightarrow \red_{\text{s}}^\ast d'\not\red_{\text{s}}$.
Let $C$ be the context:
{$C::=\square \bnfsep
C.f \bnfsep
C.(t) \bnfsep
\sTypeInit{t}{\multi{e}, C, \multi{e}'} \bnfsep
C.m(\multi{e}) \bnfsep
e.m(\multi{e},C,\multi{e}')$}.
We define the dictionary resolution reduction $\dictred$ as
\begin{enumerate*}
\item $C[e]\dictred C[e']$
if $\reduction{e}{e'}$ where $e\in\ensuremath{\rho_{\text{dict}}}$; and
\item $C[e.(t)]\dictred C[e.(u)]$ if
$\wellTyped[]{e}{u}$ and $u<: t$.
\end{enumerate*}
\end{definition}
Notice that if $e \Longrightarrow e'$, then $e \longrightarrow^+ e'$; and that
$\Longrightarrow$ can be viewed as
a one-step reduction which corresponds
to a single step of the source language.
Reduction $\red_{\text{s}}$ only occurs following a
call to $\texttt{tryCast}$, and simulates whether or not the source \gls{fgg} assertion
is a type assertion error (see \S~\ref{subsubsec:typecollision}).
The reduction $\red_{\text{e}}$ resolves only the assertions
introduced during the type erasure step (see \S~\ref{paragraph:structure}).
The dictionary resolution reduction $\dictred$
occurs following a method call
\rulename{r-call} and simulates the type parameter specialisation.
As demonstrated in the above example,
the $\dictred$ reduction may reduce any subterm matching
$\ensuremath{\rho_{\text{dict}}}$ or refine any type assertion.
\begin{restatable}{lemrest}{lemprec}
\label{lem:rec}
Let $e$ be an \gls{fg} expression.
Assume $\wellTyped[\emptyset]{e}{u}$.
\begin{enumerate}
\item $\Longrightarrow$ is deterministic, \ie if $e \Longrightarrow e_1$ and
$e \Longrightarrow e_2$, then $e_1=e_2$.
\item $\dictred$ is confluent, \ie if $e \dictred e_1$ and
$e \dictred e_2$, then there exists $e'$ such that $e_1 \dictred
e'$ and $e_2 \dictred e'$.
\end{enumerate}
\end{restatable}
We now extend the bisimulation relation to
bisimulation up to dictionary resolution.
\begin{definition}[Bisimulation up to dictionary resolution] \label{def:sb:upto}
\ \\
\begin{tabular}{ll}
\begin{tabular}{l}
The relation $\Re$ is
a \emph{bisimulation up to dictionary}
\\
\emph{resolution} if
$\Re\cdot (\leftarrowtriangle)^\ast$ is a bisimulation,
\\
\ie
if $P \operatorname{\var{ok}} $ in \gls{fgg}
and $\dict[]{P}{\lex{P}}$
\\
where
$P = \program{e}$ and
$\lex{P} = \program[\lex{D}]{\lex{e}}$
\\
then the diagram (right)
commutes.
\end{tabular}
&
\hspace{-0.5cm}
\begin{tabular}{l}
\small
\input{figs/fig-bisim.tex}
\end{tabular}
\end{tabular}
\end{definition}
\hspace{.15cm}
Intuitively, our translation forms a bisimulation
up to dictionary resolution
if
\begin{enumerate*}
\item each step that the source program takes can be mimicked
by the translated program; and
\item conversely, if the translated program
reduces,
then the source program must have been able to make an equivalent step
\end{enumerate*}
-- albeit with the translated program still needing
to evaluate the added dictionary resolution computations
at some future point during computation.
By considering the observable behaviour of a program
to be non-dictionary resolution reduction steps,
type assertion errors, and
termination (value production), we ensure that the translated program
is behaviourally equivalent to that of the source
program.
Note that
this formulation may be extended to a concurrent or effectful
fragment of Go
with the standard addition of \emph{barbs}~\cite{MiSa92} or
transition labels.
Finally, we arrive at our main theorem --- that the translation
satisfies the correctness criteria.
\begin{restatable}[Correctness of dictionary-passing]{thmrest}{thmcorrect}
\label{thm:main:correctness}
Let $P \operatorname{\var{ok}} $ in \gls{fgg}
and $\dict[]{P}{\lex{P}}$ with
$P = \program{e}$ and
$\lex{P} = \program[\lex{D}]{\lex{e}}$.%
\begin{enumerate*}
\item Dictionary-passing translation
$\lex{(-)}$ is type preserving;
\item $e$ and $\lex{e}$ are bisimilar up to dictionary resolution;
\item $\lex{(-)}$ is
type assertion error
preserving; and
\item $\lex{(-)}$ is value preserving.
\end{enumerate*}
\end{restatable}
Theorem~\ref{thm:main:correctness}
states that
our translation is correct: translated programs behave exactly as the source program
would have behaved, and any extra computations are accounted for by
the machinery introduced for dictionary-passing.
It is worth stressing that
our statement
is \emph{stronger} than the various definitions of
dictionary-passing translation correctness considered in the
literature (see \S~\ref{section:related}), which limit themselves
to versions of value preservation that do not preserve termination.
By providing an account of intermediate state equivalence,
Theorem~\ref{thm:main:correctness}(2) not only gives a
meaningful equivalence for non-terminating programs, but
may also be extended to languages with non-determinism or concurrency.
\subsection{Proof of Theorem \ref{thm:main:correctness}}
We provide the key lemmata, theorems, and corollaries used in the proof
of Theorem \ref{thm:main:correctness}. All omitted proofs
may be found in
\ifnotsplit{Appendix~\ref{app:proofs}}{the full version of this paper~\cite{fullversion}}.
\myparagraph{Type preservation}
The type preservation criterion given in Definition~\ref{def:type:preservation}
only considers whole programs.
We must first show that the dictionary-passing translation
is type preserving for expressions. Note that translated
structure literals are
the only expressions not typed \texttt{Any}.
\begin{restatable}[Type preservation of expressions]{lemrest}{lemtypepres}
\label{lem:type:pres:exp}
Let $\dict{e}{\lex{e}}$ and $\map{\Gamma}$ be the \gls{fg} environment
where all variables in $\Gamma$ are erased (\texttt{Any}) and each dictionary
in $\eta$ is appropriately typed according to the bound in $\Delta$.
If $\wellTyped[\Delta;\Gamma]{e}{\tau}$ then
\begin{enumerate*}
\item If $\tau = \alpha$ or $\iType{\tau}$,
then $\wellTyped[\map{\Gamma}]{\lex{e}}{\texttt{Any}}$.
\item If $\tau = \sType{t}\typeActualReceive$, then
either $\wellTyped[\map{\Gamma}]{\lex{e}}{\texttt{Any}}$
or $\wellTyped[\map{\Gamma}]{\lex{e}}{\sType{t}}$.
\end{enumerate*}
\end{restatable}
\begin{restatable}[Type preservation (Theorem \ref{thm:main:correctness} (1))]{correst}{corprogtypepres}
\label{lem:type:pres:prog}
If $P \operatorname{\var{ok}}$, then $\lexP\operatorname{\var{ok}}$.
\end{restatable}
\begin{proof}
By the assumption that name constant functions are distinct and
Lemma~\ref{lem:type:pres:exp}.
\end{proof}
\myparagraph{Bisimulation and error preservation}
The operational correspondence theorem describes the behaviour
of a source program and its translation as four non-overlapping
cases. Note that $\lex{e} \Longrightarrow e'$
is the maximum reduction without another
type assertion simulation reduction
($e'\not\red_{\text{s}}$).
\begin{restatable}[Operational correspondence]{thmrest}{thmopcorrespond}
\label{thm:operational:correspondence}
Let $P \operatorname{\var{ok}}$ where $P = \program{e}$
and let $\dict[]{\program{e}}{\program[\lex{D}]{\lex{e}}}$.
\begin{enumerate}[(a)]
\item If $\reduction{e}{d}$, then there
exists $\lex{d}$ such that
$\dict[\emptyset; \emptyset; \emptyset]{d}{\lex{d}}$ and
$\lex{e} \Longrightarrow{\dictred^\ast} \lex{d}$.
\item If $\lex{e} \Longrightarrow e'$ where $e$ is not a type assertion error,
then there exists $d$ such that
$\reduction{e}{d}$ and there exists $\lex{d}$ such that
$\dict[\emptyset; \emptyset; \emptyset]{d}{\lex{d}}$ and
$e' \dictred^* \lex{d}$.
\item
If $\lex{e} \Longrightarrow e'$ where $e$ is a type assertion error,
then $e'$ is a type assertion error.
\item
If $e$ is a type assertion error, then there exists an $e'$
such that $\lex{e} \Longrightarrow e'$ and $e'$ is a type assertion error.
\end{enumerate}
\end{restatable}
\begin{proof}
By induction over the assumed reduction. Full proof is provided in
\ifnotsplit{Appendix~\ref{app:proofs}}{the full version of this paper~\cite{fullversion}}.
\end{proof}
\begin{restatable}[Bisimulation up to dictionary resolution (Theorem \ref{thm:main:correctness} (2))]{correst}{corbisim}
\label{cor:bisim}
Let $P \operatorname{\var{ok}} $
and $\dict[]{P}{\lex{P}}$ with
$P = \program{e}$ and
$\lex{P} = \program[\lex{D}]{\lex{e}}$.
Then $e$ and $\lex{e}$ are bisimilar up to dictionary resolution.
\end{restatable}
\begin{proof}
By Theorem \ref{thm:operational:correspondence}.
%
Let $\Re$ be the least relation such that all source expressions
are paired with their translation.
$\Re$ is a bisimulation up to dictionary resolution.
%
Namely, for each element $\ENCan{e,\lex{e}}\in \Re$, we have that:
\begin{enumerate}
\item If $e\longrightarrow e'$, then by Theorem~\ref{thm:operational:correspondence} (a)
there exists a $\ENCan{e',d}\in \Re$ such
that $\lex{e} \Longrightarrow{\dictred^\ast} d$.
\item If $\lex{e} \Longrightarrow{\dictred^\ast} d$, then by
Theorem~\ref{thm:operational:correspondence} (b)
there exists a $\ENCan{e',d}\in \Re$ such
that $e\longrightarrow e'$.
\end{enumerate}%
\end{proof}
\begin{restatable}[Type assertion error preservation (Theorem \ref{thm:main:correctness} (3))]{correst}{corerrorpres}
\label{cor:error:preservation}
Let $P \operatorname{\var{ok}}$ and $\dict[]{P}{\lex{P}}$.
$P\Downarrow_\ensuremath{\mathsf{panic}}$ iff $\lex{P}\Downarrow_\ensuremath{\mathsf{panic}}$.
\end{restatable}
\begin{proof}
For this proof, we say that $\lex{P}$ resolves into a type assertion error
if $\lex{P}\Longrightarrow P'$ and $P'$ is a type assertion error; by
Theorem~\ref{thm:operational:correspondence} (c) and (d), this happens exactly when $P$ is a type assertion error.
By induction on the reductions in $\Downarrow$.
\begin{itemize}
\item[] \resetpfcounter\textbf{Case : } Left to right (base):
By Theorem~\ref{thm:operational:correspondence} (d).
\item[] \resetpfcounter\textbf{Case : } Right to left (base):
By Theorem~\ref{thm:operational:correspondence} (c).
\item[] \resetpfcounter\textbf{Case : } Left to right (induction): \\
If $P$ is not a type assertion
error, then it reduces to $Q$ where $Q\Downarrow_\ensuremath{\mathsf{panic}}$.
By Theorem~\ref{thm:operational:correspondence} (a), $\lex{P}\Longrightarrow\dictred^\ast\lex{Q}$
where $\dict[]{Q}{\lex{Q}}$.
Apply the induction hypothesis: if $Q\Downarrow_\ensuremath{\mathsf{panic}}$ then $\lex{Q}\Downarrow_\ensuremath{\mathsf{panic}}$.
\item[] \resetpfcounter\textbf{Case : } Right to left (induction): \\
We assume that $\lex{P}$ does not resolve into a type assertion error,
\ie $\lex{P}\Longrightarrow Q'$ where $Q'$ is not a type assertion error.
Since $\dictred$ cannot cause a type assertion
error, we also get that $Q' \dictred^\ast \lex{Q}$ where $\lex{Q}$
is not a type assertion error.
By Theorem~\ref{thm:operational:correspondence} (b), $P\longrightarrow Q$.
Apply the induction hypothesis: if $\lex{Q}\Downarrow_\ensuremath{\mathsf{panic}}$ then $Q\Downarrow_\ensuremath{\mathsf{panic}}$.
\end{itemize}
\end{proof}
\myparagraph{Value preservation}
Finally, the value preservation property follows from dictionary-passing
being a bisimulation up to dictionary resolution, as the
dictionary resolution steps are eager reductions that can equivalently
be delayed until they become standard reductions.
\begin{restatable}[Reduction rewrite]{lemrest}{lemredrewrite}
\label{lem:red:rewrite}
Let $e_1\rightarrowtriangle e_2 \longrightarrow e_3$ where $e_1=C[d_1]$, $e_2=C[d_2]$, and $d_1\longrightarrow d_2$.
\begin{enumerate}
\item If there exists an $E$ such that $C=E$, then $e_1 \longrightarrow^2 e_3$.
\item If there does not exist an $E$ such that $C=E$, then $e_1 \longrightarrow \rightarrowtriangle e_3$.
\end{enumerate}
\end{restatable}
\begin{restatable}[Resolution to value]{lemrest}{lemredvalue}
\label{lem:red:val}
If $e\rightarrowtriangle v$ then $e\longrightarrow v$.
\end{restatable}
\begin{restatable}[Value preservation (Theorem \ref{thm:main:correctness} (4))]
{correst}{corvalue}
\label{cor:valpres}
Let $P \operatorname{\var{ok}}$ and $\dict[]{P}{\lex{P}}$.
$P\Downarrow v$ iff $\lex{P}\Downarrow \lex{v}$ where
$\dict[]{v}{\lex{v}}$.
\end{restatable}
\tikzset{|/.tip={Bar[width=.8ex,round]}}
\begin{proof}
By Corollary~\ref{cor:bisim} we have
the following diagram (where $\Re$ is created by $\Mapsto$)
\[
\begin{tikzpicture}[line width=rule_thickness,
arrowlabel/.style={inner sep=.5,fill=white},
]
\node (dagone) [] {$\lex{e}_1$} ;
\node (dagtwo) [right=1 of dagone] {$\lex{e}_2$} ;
\node (dagthree) [right=1 of dagtwo] {$\lex{e}_3$} ;
\node (dagdots) [right=1 of dagthree] {$\cdots$\vphantom{$\lex{e}_3$}} ;
\node (dagvee) [right=1 of dagdots] {$\lex{v}$\vphantom{$\lex{e}_3$}} ;
\node (eone) [above=.4 of dagone] {$e_1$} ;
\node (etwo) [above=.4 of dagtwo] {$e_2$} ;
\node (ethree) [above=.4 of dagthree] {$e_3$} ;
\node (edots) [above=.4 of dagdots] {$\cdots$\vphantom{$e_3$}} ;
\node (evee) [above=.4 of dagvee] {$v$\vphantom{$e_3$}} ;
\draw[->] (eone) to (etwo);
\draw[->] (etwo) to (ethree);
\draw[->] (ethree) to (edots);
\draw[->] (edots) to (evee);
\coordinate (onetwo) at ($ (dagone) !.5! (dagtwo) $);
\draw[-{Implies},double] (dagone) to (onetwo);
\draw[-{Latex[open]}] (onetwo) to node[very near end, yshift=.8mm] {${}^*$} (dagtwo);
\coordinate (twothree) at ($ (dagtwo) !.5! (dagthree) $);
\draw[-{Implies},double] (dagtwo) to (twothree);
\draw[-{Latex[open]}] (twothree) to node[very near end, yshift=.8mm] {${}^*$} (dagthree);
\coordinate (threedots) at ($ (dagthree) !.5! (dagdots) $);
\draw[-{Implies},double] (dagthree) to (threedots);
\draw[-{Latex[open]}] (threedots) to node[very near end, yshift=.8mm] {${}^*$} (dagdots);
\coordinate (dotsvee) at ($ (dagdots) !.5! (dagvee) $);
\draw[-{Implies},double] (dagdots) to (dotsvee);
\draw[-{Latex[open]}] (dotsvee) to node[very near end, yshift=.8mm] {${}^*$} (dagvee);
\draw[|-{Implies},double] (eone) to (dagone);
\draw[|-{Implies},double] (etwo) to (dagtwo);
\draw[|-{Implies},double] (ethree) to (dagthree);
\draw[|-{Implies},double] (evee) to (dagvee);
\end{tikzpicture}
\]
By Lemmas~\ref{lem:red:rewrite} and \ref{lem:red:val},
each dictionary resolution reduction $\rightarrowtriangle$ is
either subsumed by $\longrightarrow$ or may be delayed using
reduction rewriting until
it becomes a $\longrightarrow$ reduction.
In other words,
$e_1 \longrightarrow e_2 \longrightarrow \cdots \longrightarrow v$ iff
$\lex{e_1} \Longrightarrow\rightarrowtriangle \lex{e_2} \Longrightarrow\rightarrowtriangle \cdots
\Longrightarrow\rightarrowtriangle \lex{v}$.
We use that
$\rightarrowtriangle$ can be
delayed ($d \rightarrowtriangle\Longrightarrow d'$ implies $d \Longrightarrow\rightarrowtriangle d'$
or $d \longrightarrow\Longrightarrow d'$),
hence
$\lex{e_1} \Longrightarrow^+\rightarrowtriangle^+ \lex{v}$.
Finally, from $e\rightarrowtriangle^+ v$ implies $e\longrightarrow^+ v$, we have that
$e_1\Downarrow v$ iff
$\lex{e_1}\Downarrow \lex{v}$.
\end{proof}
The proof of Theorem~\ref{thm:main:correctness} is given by
Corollaries~\ref{lem:type:pres:prog},
\ref{cor:bisim},
\ref{cor:error:preservation}, and
\ref{cor:valpres}.
\section{Related Work}
\label{section:related}
\textbf{\emph{Implementation and benchmarks of generics}.}
\begin{wrapfigure}{r}{0.70\linewidth}
\footnotesize
\begin{tabular}{@{\hspace{0pt}}c@{\hspace{3pt}}|@{\hspace{3pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l@{\hspace{2pt}}l}
&Language &Translation(s) \ \ & Optimal & Optimal \\
& & & Exec.~Time \ & Code Size \\\midrule
Our work
& \gls{fgg} (Go)
& Dict/Mono/
& Mono (1st)
& Erasure$^\dagger$ (1st)\\
&
& Erasure$^\dagger$
& Dict (2nd)
& Dict (2nd)\\
\midrule
Go team & Go & Mono/Hybrid & Mono & Mono \\
\midrule
\cite{ureche2013miniboxing}
& Scala (JVM)\
& Hybrid
& Hybrid
& Hybrid
\\
\cite{odersky2000two}
& Pizza (Java)\
& Mono/Erasure
& Mono
& Erasure
\\
\cite{kennedy2001design}
& .NET CLR
& Hybrid
& Hybrid
& N/A
\\
\cite{Jones93}
& Haskell
& Dict/Mono
& Mono
& Mono
\\
\end{tabular}
\\[1mm]
($\dagger$) \gls{fgg} Erasure is not type preserving.
\vspace*{-2mm}
\caption{Implementations and benchmarks}
\label{fig:tab-benchmark-works}
\vspace*{-3mm}
\end{wrapfigure}
To the best of our knowledge,
there is no existing work comparing implementations
of generics in Go.
The closest ones target JVM languages \cite{odersky2000two,ureche2013miniboxing},
.NET common language runtime (CLR) \cite{kennedy2001design},
and Haskell \cite{jones1995dictionary}.
\citet{odersky2000two}
benchmark a homogeneous
(similar to \acrshort{erasure}) and a
heterogeneous (similar to \acrshort{mono}) translation
for Pizza (an extension of Java with generics).
They find that heterogeneity
reduces execution time, but also increases code size.
\citet{jones1995dictionary} gave a similar comparison for Haskell, reporting
that monomorphisation produces a smaller code size;
our work shows the opposite result.
One major reason is that unnecessary dictionary
fields and the manipulation of dictionary parameters require
more assembly instructions in Haskell than in Go, as
Go targets
low-level efficiency.
\citet{kennedy2001design} apply a hybrid
dictionary and monomorphisation approach targeting the
Just-In-Time (JIT) .NET CLR compiler.
Object instantiation
is conducted lazily at runtime according to an object's code
and structure (\eg memory layout and garbage
collection shape). Each object contains
a pointer to a dictionary (vtable), which provides
method entry points and type information.
With the help of lazy instantiation during runtime,
.NET CLR supports abundant language features,
including but not limited to $F$-bounded polymorphism
and polymorphic recursion.
They compare their design with equivalent non-generic
implementations using \texttt{Object}s and
hand-specialised code. Their execution speed is
close to that of the hand-specialised versions.
The Go~1.18\xspace approach is similar to .NET CLR,
but unlike .NET CLR, its instantiation happens at
compile time. Due to structural typing, dictionaries are instantiated
through an approach similar to instance discovery
in monomorphisation.
Hence, Go~1.18\xspace suffers from an inability to support polymorphic
recursion (\ie constrained by \textit{nomono}, \S~\ref{section:nomono}) and
the large code size of monomorphisation (\S~\ref{sec:exp}).
\citet{ureche2013miniboxing} propose an optimised
monomorphisation approach called miniboxing, which uses
one monomorphised instance for
types of different sizes to reduce code size.
Methods of different types are specialised at runtime
using a custom classloader.
They benchmark seven different settings, the best achieving
a speedup of up to 22 times over the default
generics translation in Scala.
The main design goal of their benchmarks is the
performance of reading and writing miniboxed objects
allocated on the heap by the JVM.
They test the different combinations of concrete
types for generics (``Multi Context''), which is
similar to the scenario of \benchname{Program}~\mycircledtext{c} (in \S~\ref{subsec:evaluation}),
but their goal is to test the historical
paths executed in the HotSpot JVM.
They also test the speed of one method call
\texttt{hashCode} from generics types.
In comparison, our benchmarks test how various
factors impact performance (\eg the number of
methods in an interface).
\myparagraph{Formal translations of generics}
Formal translations of generics
can be split into three main techniques;
\emph{Erasure},
\emph{dictionary-passing}, and
\emph{monomorphisation}.
We consider the most relevant work,
a breakdown of which
is provided in Figure~\ref{table:trans:theory}.
Where these works formally prove the correctness of their translation,
we observe that they can be grouped as
\emph{behavioural equivalence}~\cite{griesemer2020featherweight, Igarashi99FJ}
and \emph{value preservation}~\cite{yu2004formalization}.
The former demands that during evaluation the source and target
programs are still related, whereas the latter merely requires that
the result of a productive program be preserved.
In general, behavioural equivalence is a more fine-grained equivalence, as
it can be used to show value preservation.
In this paper, we formalised and then
proved our dictionary-passing translation
correct using bisimulation up to dictionary resolution, which is
categorised as
a behavioural equivalence.
\citet{yu2004formalization}
formalise a hybrid dictionary and
monomorphisation translation for the .NET~CLR.
\begin{wrapfigure}{r}{0.66\linewidth}
\footnotesize
\vspace{-3mm}
\begin{tabular}{@{\hspace{0pt}}c@{\hspace{3pt}}|@{\hspace{3pt}}lllc}
& Language & \hspace{-2mm}Approach & \hspace{-2mm}Translation(s)\ &
\hspace{-5mm}Formalised \\ \midrule
Our work
& \gls{fgg} (Go)
& S-to-S
& Dict
& \CheckmarkBold \\
\midrule
\cite{griesemer2020featherweight}
& \gls{fgg} (Go)
& S-to-S
& Mono
& \CheckmarkBold \\
\cite{Igarashi99FJ}
& Java
& S-to-S
& Erasure
& \CheckmarkBold \\
\cite{yu2004formalization}
& .NET CLR
& IR-to-IR
& Hybrid
& \CheckmarkBold \\
\cite{bottu2019coherence}
& Haskell
& S-to-IR
& Dict
& \CheckmarkBold \\
\cite{OW97}
& Pizza
& S-to-S
& Mono/Erasure
& \XSolidBrush
\end{tabular}
\\
\begin{center}
S-to-S$=$Source to Source; IR$=$Intermediate representation
\end{center}
\vspace{-3mm}
\caption{Related Work: Theory}
\label{table:trans:theory}
\vspace{-5mm}
\end{wrapfigure}
They mostly follow the design of \cite{kennedy2001design}.
They consider a target language which can, using an object's type, request the
specific dictionary from an assumed infinite map.
This is justified for the .NET~CLR as method dictionaries
are created on-demand using an object's type.
Compare this to our translation in which we must eagerly
construct dictionaries and pass
them in addition to the objects that they describe.
\citet[Theorem 5]{yu2004formalization} show that their
translation is value preserving;
for expression $e$, and value $v$,
if $e$ evaluates to $v$ ($e \Downarrow v$)
then there is a reduction
such that $\mapother{e} \longrightarrow^* \mapother{v}$
(where $\mapother{-}$ is their translation).
\citet{bottu2019coherence} formalise
dictionary-passing in Haskell.
Their work focuses on proving a \emph{coherency theorem}.
They motivate this work by the observation that nominally typed languages
featuring multiple inheritance (\ie Haskell)
suffer from an ambiguity in dictionary resolution such that
the translation of
a single source program may
\emph{non-deterministically}
produce different terms in the target language.
A translation is coherent when these target terms
are contextually equivalent.
We need not consider this issue, as Go's structural typing
system does not support the multiplicity of superclass implementations
that causes incoherence.
\citet{bottu2019coherence} do not prove the correctness
of their dictionary-passing translation using an equivalence
between the source and target language.
\citet{griesemer2020featherweight} formalised the \gls{fg} and \gls{fgg}
languages, as well as the \gls{mono} translation used in \S~\ref{sec:exp}.
This work defines a class of \gls{fgg}
programs that can be monomorphised,
and proves that class membership is decidable.
Finally, they prove that their translation forms
a one-to-one bisimulation.
Their behavioural equivalence is
straightforward and does not require any up to
techniques, as monomorphisation does not
introduce runtime computations.
\citet{OW97} describe, but do not formalise, two
alternative approaches -- erasure and monomorphisation --
to implementing
generics in the Pizza language, a generic variant of Java.
\citet{Igarashi99FJ} build on the erasure technique
developed in \cite{OW97}.
Their work formalises
Featherweight Generic Java and
proves a formal erasure translation
to Featherweight Java.
They prove the correctness of their erasure
translation using a behavioural equivalence,
although their translation introduces
\emph{synthetic casts} (assertions), which complicates
the correctness theorems.
To resolve this issue, they introduce a reduction
for their proofs which freely adds,
removes, or safely alters any required synthetic casts.
Correctness of their translation
is split
into two directions, called
\emph{weak completeness} and
\emph{soundness} \cite[Theorem~4.5.4 and Theorem~4.5.5]{Igarashi99FJ},
which use a behavioural equivalence up to
the cast reduction.
As with our paper, they use these theorems to show a
value preservation corollary.
\citet[Corollary~4.5.6]{Igarashi99FJ} also prove
that their erasure translation is type assertion error preserving
-- in contrast to our \gls{erasure} translation, since ours does
not preserve type assertions. This disparity is due to
a limitation on the expressivity of assertions in Generic Java.
The inclusion of this limitation has been an area of contention, with other
authors suggesting that it could be overcome with the
use of type-reps~\cite{allen02thecase,agesen1997adding,solorzano98reflection,Viroli00reflectiveGJ,crary1998intensional}.
\myparagraph{Formal non-generics dictionary translation}
\citet{sulzmann2021dictionary} propose a
dictionary-passing translation from the non-generic \gls{fg} to an
\emph{untyped} variant of the $\lambda$-calculus
with pattern matching.
They use a dictionary-passing approach to investigate
Go's resolution mechanism for overloaded methods and
structural subtyping.
\citet{sulzmann2021dictionary}
prove that their translation is value preserving
using a step-indexed logical relation.
Intuitively, \citet{sulzmann2021dictionary} use an inductive
proof technique that, using two related values $v$ and $v'$ at type $t$,
relates any terms ($e$ and $\mapother{e}$)
that can reduce to
$v$ and $v'$ (\emph{resp.}) within $k$~reduction-steps.
Step-indexed logical relations are a sophisticated extension to
logical relations (\eg \cite{bottu2019coherence}),
and are applicable for languages with recursion.
\citet{sulzmann2021dictionary} left
a type-preserving translation from \gls{fg} and
a translation from \gls{fgg}
as their future work.
No implementation or evaluation of their translation is provided.
\myparagraph{Alternatives to bisimulation up to}
In our motivating example for \emph{up to dictionary resolution}
(Figure~\ref{fig:example:nontriv}),
we briefly discuss potential alternate many-to-many bisimulation approaches.
One such approach is the
\emph{stuttering bisimulation}~\cite{browne1988charachterizing},
which has been studied extensively in the domain of model checking~\cite{baier2008principles}.
The stutter bisimulation relates two terms when they
both reduce to related terms in an unbounded, but finite, number of steps.
Formally, $e$ and $\mapother{e}$ are related by a
\emph{stutter bisimulation}
when \begin{enumerate*}
\item $e\longrightarrow e'$ implies that there exists a finite reduction
$\mapother{e}\longrightarrow d_0 \longrightarrow \cdots \longrightarrow d_n$ ($n\ge 0$)
where each intermediate state $d_i$ is related to $e'$; and symmetrically,
\item $\mapother{e}\longrightarrow d$ implies that there is a finite reduction from $e$
with each element being related to $d$.
\end{enumerate*}
This approach works well for finite models, but becomes \emph{undecidable}
when applied to Turing complete languages such as \gls{fgg}.
To overcome this issue, the works in \cite{hur2014logical,leroy2009formally}
consider restricted, decidable, variants of
the stutter bisimulation to show the correctness of their translations.
\citet{leroy2009formally} formulates the non-symmetric
\emph{``star''-simulation}, which requires a well-founded ordering on reducing terms
to ensure that either \begin{enumerate*}
\item both source and target terms reduce infinitely; or
\item the source cannot reduce infinitely while the target is stuck.
\end{enumerate*}
In practice, the well-founded ordering used in (2)
is approximated using fixed parametric bounds.
\citet{hur2014logical} formulate this idea
using \emph{stuttering parametric bisimulation},
which bounds the number of steps that two related
terms can take before their reductions are related.
Such restricted variants of the stutter bisimulation
cannot provide a sound and complete correctness proof for \gls{dict}.
More generally, our use of a fine-grained up to bisimulation
not only builds on existing correctness theorems
for the translation of generics
\cite{Igarashi99FJ,griesemer2020featherweight}, but it
can also be readily extended to include advanced language features
such as concurrency and side effects in Go.
\section{Introduction}
\label{intro}
One of the arithmetic features of modular and quasi-modular forms is the integrality of the coefficients in their Fourier expansions. This is trivially seen on the generators
\begin{equation}
E_2(\tau)=1-24\sum_{n=1}^\infty\frac{nq^n}{1-q^n},
\quad
E_4(\tau)=1+240\sum_{n=1}^\infty\frac{n^3q^n}{1-q^n},
\quad
E_6(\tau)=1-504\sum_{n=1}^\infty\frac{n^5q^n}{1-q^n}
\label{eis-ser}
\end{equation}
of the ring of quasi-modular forms, as well as on the `discriminant' cusp form
$$
\Delta(\tau)=q\prod_{m=1}^\infty(1-q^m)^{24}=\frac{E_4^3-E_6^2}{1728},
$$
where $q=q(\tau)=e^{2\pi i\tau}$ for $\tau$ from the upper half-plane $\Im\tau>0$.
All $q$-expansions above converge for $q$ inside the unit disk, and in fact have polynomial growth of the coefficients.
A more surprising fact, brought to the mathematical community by Ramanujan \cite{Ra16} more than 100 years ago, is that the three Eisenstein series in \eqref{eis-ser} satisfy the algebraic system of differential equations
\begin{equation}
\delta E_2=\frac1{12}(E_2^2-E_4), \quad \delta E_4=\frac13(E_2E_4-E_6), \quad \delta E_6=\frac12(E_2E_6-E_4^2),
\label{rama-DE}
\end{equation}
where
$$
\delta=\frac1{2\pi i}\frac{\d}{\d\tau}=q\frac{\d}{\d q}.
$$
Ramanujan's notation for the Eisenstein series \eqref{eis-ser} was $P(q),Q(q),R(q)$, respectively, as he mainly viewed them as functions of the $q$-nome.
Since the functions $E_2,E_4,E_6$ are algebraically independent over $\mathbb C$, and even over $\mathbb C(q)$ and over $\mathbb C(\tau,q)$ \cite{Ma69,Re66}, this fine structure gives rise to remarkable applications in transcendental number theory to the values of quasi-modular forms.
One particular notable example in this direction is a famous theorem of Nesterenko \cite{Ne96}, which states that, given a complex number $q$ with $0<|q|<1$, at least three of the four quantities $q,P(q),Q(q),R(q)$ are algebraically independent over~$\mathbb Q$.
Inspired by an arithmetic (ex-)conjecture from \cite{BZ19}, Li and Neururer observed in \cite{LN19} that the formal anti-derivative
$$
\tilde F_{4a}=\delta^{-1}\biggl(\frac{\Delta}{E_4^2}\biggr)=\int_0^q\frac{\Delta}{E_4^2}\,\frac{\d q}q
$$
of the meromorphic modular form $F_{4a}(\tau)=\Delta/E_4^2$ has integer coefficients in its $q$-expansion. (They proved a slightly weaker version, about the integrality of the anti-derivative of $64\Delta/E_4^2$.) The function $F_{4a}(\tau)$ has weight~4 and a double pole at $\tau=\rho=e^{2\pi i/3}$ in the fundamental domain, and a simple analysis reveals that it is not the image under $\delta$ of an element from the (differentially closed) field $\mathbb C(q,E_2,E_4,E_6)$. This implies that the anti-derivative $\tilde F_{4a}=\delta^{-1}F_{4a}$ is transcendental over the field, hence adding $\tilde F_{4a}$ to the latter increases the transcendence degree by~1. Following the background in \cite{BZ19}, Li and Neururer coined the name `magnetic modular form' for a meromorphic modular form like $F_{4a}$.
A principal goal of this note is to investigate the `magnetic modular' phenomenon further and to give more examples of such forms.
\begin{theorem}
\label{th1}
The meromorphic modular forms $F_{4a}(\tau)=\Delta/E_4^2$ and $F_{4b}(\tau)=E_4\Delta/E_6^2$ of weight $4$ are magnetic.
In other words, their anti-derivatives $\delta^{-1}F_{4a}$ and $\delta^{-1}F_{4b}$ have integral $q$-expansions.
\end{theorem}
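Since $\delta q^n=nq^n$ for $n\ge1$, Theorem~\ref{th1} admits an elementary restatement as a divisibility property: if
\[
F(\tau)=\sum_{n=1}^\infty a_nq^n,
\quad\text{then}\quad
\delta^{-1}F=\sum_{n=1}^\infty\frac{a_n}{n}\,q^n,
\]
so a form $F\in q\mathbb Z[[q]]$ is magnetic precisely when $n\mid a_n$ for all $n\ge1$.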
\begin{theorem}
\label{th2}
The meromorphic modular form $F_6(\tau)=E_6\Delta/E_4^3$ of weight $6$ is doubly magnetic\textup:
its first and second anti-derivatives $\delta^{-1}F_6$ and $\delta^{-2}F_6$ have integral $q$-ex\-pan\-sions.
\end{theorem}
There are other instances in the literature of related integrality phenomena;
however, the existing methods of proof seem to be quite different from what we use below.
Investigating the solution space of the linear differential equation
\begin{equation*}
D_kf(\tau)=0, \quad\text{where}\;
D_k=\delta^2-\frac{k+1}6E_2(\tau)\delta+\frac{k(k+1)}{12}\delta E_2(\tau),
\end{equation*}
in \cite{HK12} Honda and Kaneko found that, when $k=4$, it is spanned by $E_4$ and
$$
\tilde E_4=E_4\cdot\delta^{-1}\biggl(\frac{\Delta^{5/6}}{E_4^2}\biggr)\in q^{5/6}\mathbb Q[[q]].
$$
They numerically observed and proved some related results about the $p$-integrality of $\tilde E_4$ for primes $p\equiv1\bmod3$. This theme was later analysed and generalised in \cite{AA14,Gu13,Gu20}.
Drawing a parallel to those investigations, it is easy to check that the functions $E_4$ and $E_4\,\delta^{-1}(\Delta/E_4^2)$ (both with integer coefficients in their $q$-expansions!) span the solution space of the differential equation $Df=0$, where
$$
D=\delta^2-E_2\delta+\frac1{36}\biggl(7E_2^2-5E_4-2\frac{E_2E_6}{E_4}\biggr)
=D_5+\frac16\biggl(E_2\frac{\delta E_4}{E_4}-5\delta E_2\biggr).
$$
At the same time, the only quasi-modular solutions of $D_5y=0$ are spanned by $\delta E_4$ (see \cite[Theorem 2]{KK03}).
A somewhat different account of strong divisibility of the coefficients of modular forms shows up in the context of arithmetic properties of traces of singular moduli initiated in Zagier's work \cite{Za02}.
As this topic remains quite popular, we only list a selection of contributions \cite{Ah12,AO05,DJ08,Ed05,Gu07,Je05}.
The methods involved make use of the Shimura correspondence, which is also the main ingredient of our proof of Theorems~\ref{th1} and~\ref{th2}.
\section{Magnetic quasi-modular forms}
\label{V4}
In this part we formalise the notion of magnetic forms and give results which may be thought of as generalisations of Theorems~\ref{th1} and \ref{th2}, but which use the theorems as principal steps.
Consider the family
$$
f_{a,b,c}=E_2^aE_4^bE_6^c, \quad\text{where}\; a,b,c\in\mathbb Z, \; a\ge0,
$$
of meromorphic quasi-modular forms. Their $q$-expansions all belong to $\mathbb Z[[q]]$.
For $k\in\mathbb Z$ even, denote by $W_k$ the $\mathbb{Q}$-vector space in $\mathbb Q\otimes_{\mathbb{Z}}\mathbb Z[[q]]$ (the $q$-series $f\in\mathbb Q[[q]]$ with $Nf\in\mathbb Z[[q]]$ for some $N\in\mathbb Z_{>0}$) spanned by the $q$-expansions of the forms $f_{a,b,c}$ of weight $k$, that is, with $2a+4b+6c=k$.
Because
\begin{equation}
\delta f_{a,b,c}=\frac{k-a}{12}f_{a+1,b,c}-\frac{a}{12}f_{a-1,b+1,c}-\frac{b}{3} f_{a,b-1,c+1}-\frac{c}{2} f_{a,b+2,c-1},
\label{delta}
\end{equation}
the differential operator $\delta$ induces a well-defined map $W_k\to W_{k+2}$.
Clearly, the image $\delta W_k$ in $W_{k+2}$ is a $\mathbb Q$-subspace in $\mathbb Q\otimes_{\mathbb{Z}}q\mathbb{Z}[[q]]$; we will call $W_{k+2}^0$ the cuspidal subspace of $W_{k+2}$, that is, the set of all elements in $W_{k+2}$ with vanishing constant term in their $q$-expansion.
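For completeness, we record the short computation behind \eqref{delta}; it is a direct application of the Leibniz rule and Ramanujan's system \eqref{rama-DE}:
\[
\delta f_{a,b,c}
=\frac{a}{12}\,(f_{a+1,b,c}-f_{a-1,b+1,c})
+\frac{b}{3}\,(f_{a+1,b,c}-f_{a,b-1,c+1})
+\frac{c}{2}\,(f_{a+1,b,c}-f_{a,b+2,c-1}),
\]
and collecting the coefficient of $f_{a+1,b,c}$ gives $\frac a{12}+\frac b3+\frac c2=\frac{a+4b+6c}{12}=\frac{k-a}{12}$.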
We will say that an element $v\in W_k^0$ is \emph{magnetic} if its formal anti-derivative
$$
\delta^{-1}v=\int_0^q v\,\frac{\d q}{q}\in \mathbb{Q}\otimes_{\mathbb{Z}} q\mathbb{Z}[[q]].
$$
We also call it \emph{strongly magnetic} if $\delta^{-1}v\in q\mathbb{Z}[[q]]$.
Using the magnetic property, we define an equivalence relation $\sim$ on $W_k$, writing $v\sim w$ if and only if the difference $v-w$ lies in $W_k^0$ and is magnetic.
Let $V_k$ (respectively, $V_k^0$) be the $\mathbb{Q}$-vector subspace of $W_k$ (respectively, of $W_k^0$) generated by the forms $f_{a,b,c}$ with $a\in\{0,1,\dots,k-2\}$.
Notice that $\delta V_2\subseteq V_{4}^0$.
\begin{theorem}
\label{th:w4}
Any element of $V_4^0$ is magnetic.
\end{theorem}
\begin{remark}
\label{rem:w4}
It seems that the only magnetic elements of $W_4^0$ with $a>2$ (that is, outside the range assumed in $V_4^0$) are linear combinations from $\delta W_2$. In other words, we expect the choice of $V_4^0$ in the theorem as a magnetic space of weight~4 to be sharp.
\end{remark}
\begin{proof}[Derivation of Theorem~\textup{\ref{th:w4}} from Theorem~\textup{\ref{th1}}]
It follows from Theorem~\textup{\ref{th1}} that the forms
$$
f_{0,1,0}-f_{0,-2,2}=1728F_{4a}
\quad\text{and}\quad
f_{1,2,-1}-f_{0,1,0}
=6\delta f_{0,2,-1}-5184F_{4b}
$$
are magnetic; in other words, we have the equivalences $f_{0,-2,2}\sim f_{0,1,0}$ and $f_{1,2,-1}\sim f_{0,1,0}$.
Any element in $V_4$ can be written as $E_2^aP(E_4,E_6)/(E_4^mE_6^n)$ for some non-negative integers $a,m,n$ with $a\le2$ and some $P(x,y)\in\mathbb Q[x,y]$. Such an expression clearly splits into a linear combination of forms $f_{a,b,c}\in V_4$ with $0\le a\le2$ and either $b\ge0$ or $c\ge0$.
If both $b\ge0$ and $c\ge0$ then we get only two elements in $V_4$, namely, $f_{0,1,0}$ and $f_{2,0,0} = f_{0,1,0} + 12\delta f_{1,0,0}$, both equivalent to $f_{0,1,0}$. Therefore, we only need to prove the theorem in two situations:
$b\ge0$ and $c<0$, or $b<0$ and $c\ge0$.
If $b\ge0$ and $c<0$, then there is only one form $f_{a,b,c}\in V_4$ with $c=-1$.
Indeed, solving $4=2a+4b+6c=2a+4b-6$ we get $a=1$, $b=2$. By the hypothesis, this form $f_{1,2,-1}\sim f_{0,1,0}$.
For $c\le-2$ we use equation \eqref{delta} (with $k=2$) in the form
\begin{equation*}
\frac{c+1}{2}f_{a,b,c}=-\delta f_{a,b-2,c+1}
-\frac{a}{12}f_{a-1,b-1,c+1}-\frac{b-2}{3} f_{a,b-3,c+2}-\frac{a-2}{12} f_{a+1,b-2,c+1},
\end{equation*}
and induction on $-c$ to conclude that $f_{a,b,c}$ is equivalent to a linear combination of $f_{1,2,-1}$ and $f_{0,1,0}$, hence to $f_{0,1,0}$ alone. (Notice that the prefactors $a/12$ and $(a-2)/12$ leave the terms on the right-hand side in~$V_4$.)
If $b<0$ (and $c\ge0$), we use equation \eqref{delta} in the form
\begin{equation}
\frac{b+1}{3} f_{a,b,c}
=-\delta f_{a,b+1,c-1}
-\frac{a-2}{12}f_{a+1,b+1,c-1}
-\frac{a}{12}f_{a-1,b+2,c-1}
-\frac{c-1}{2} f_{a,b+3,c-2}.
\label{delta2}
\end{equation}
When $b=-1$ or $b=-2$, the only forms $f_{a,b,c}\in V_4$ possible with $c\ge0$ are $f_{1,-1,1}$ and $f_{0,-2,2}$, respectively. Substituting $a=0$, $b=-2$, $c=2$ in \eqref{delta2} leads to
$$
-\frac{1}{3} f_{0,-2,2}
=-\delta f_{0,-1,1}
+\frac{1}{6}f_{1,-1,1}
-\frac{1}{2} f_{0,1,0}
$$
implying $f_{1,-1,1}\sim f_{0,-2,2}\sim f_{0,1,0}$ from the hypothesis.
For $b\le-3$ we use \eqref{delta2} to conclude by induction on $-b$ that any such $f_{a,b,c}$ is equivalent to a linear combination of $f_{0,-2,2}$, $f_{1,-1,1}$ and $f_{0,1,0}$, hence to $f_{0,1,0}$.
This completes the proof of the theorem.
\end{proof}
\begin{remark}
\label{rem2}
It follows from the proof that we can replace the generator $f_{0,-2,2}-f_{0,1,0}$ with $f_{1,-1,1}-f_{0,1,0}$.
Furthermore, alternative choices for $f_{0,-2,2}-f_{0,1,0}$ and $f_{1,2,-1}-f_{0,1,0}$ are $\tilde F_j=E_2\cdot(\delta E_j)/E_j$ or $\hat F_j=(\delta^2E_j)/E_j$ for $j=4,6$.
\end{remark}
For weight $6$ the situation is slightly different. Only the following is true.
\begin{theorem}
\label{th:w6}
Let $U_6$ be the subspace of $V_6$ spanned over $\mathbb Q$ by $f_{a,b,c}$ with the additional constraint $c\ge 0$,
and $U_6^0=U_6\cap V_6^0$ its cuspidal subspace.
Then any element of $U_6^0$ is magnetic.
\end{theorem}
\begin{remark}
\label{rem3}
In fact, it seems that the space $U_6^0$ possesses the \emph{strongly} magnetic property: the anti-derivative of any difference of two $f_{a,b,c}$ from $U_6$ has an integral $q$-expansion.
\end{remark}
\begin{proof}
For $c=0$, we only have two elements $f_{3,0,0}$ and $f_{1,1,0}$ in $U_6$,
and $f_{3,0,0}\sim f_{1,1,0}$ since $ f_{3,0,0}- f_{1,1,0}=6\delta f_{2,0,0}$.
Moreover, they are both strongly equivalent to $f_{0,0,1}$, because $f_{1,1,0}-f_{0,0,1}=3\delta E_4$.
For $c=1$, we find that $f_{0,0,1}$, $f_{2,-1,1}$ and $f_{4,-2,1}$ are in $U_6$.
Then $f_{4,-2,1}$ is strongly equivalent to any of $f_{3,0,0}$, $f_{1,1,0}$ and $f_{0,0,1}$ in accordance with $ f_{4,-2,1}-f_{3,0,0}=3\delta f_{4,-1,0}$ and the above.
With the help of Theorem~\ref{th2} and the relation
$$
f_{2,-1,1}-f_{0,0,1}
=4\delta f_{1,-1,1}-4\delta f_{0,-2,2}+2(f_{1,1,0}-f_{0,0,1})-2304F_6,
$$
we see that the same is true for $f_{2,-1,1}$.
We have just shown that any element of $U_6^0$ lying in the subspace generated by the $f_{a,b,c}$ with $c\in\{0,1\}$ does have the (strongly) magnetic property.
For the rest of our theorem, we proceed by induction over $c$ using the following consequence of equation~\eqref{delta}:
\begin{equation*}
\frac{b}{3} f_{a,b-1,c+1}=-\delta f_{a,b,c}+\frac{4-a}{12}f_{a+1,b,c}-\frac{a}{12}f_{a-1,b+1,c}-\frac{c}{2} f_{a,b+2,c-1}.
\qedhere
\end{equation*}
\end{proof}
\section{A magnetic extension of the field of quasi-modular forms}
\label{non-surj}
The functions $\tau,q,E_2,E_4,E_6$ are algebraically independent over $\mathbb C$ (see \cite{Ma69,Re66}).
We can identify the differential field $\mathbb C\<\tau,q,E_2,E_4,E_6\>$ generated by them over $\mathbb C$ with the differential field $\mathcal{K}=\mathbb C\<\tau,q,X,Y,Z\>$ equipped with the derivation
$$
D=\frac1{2\pi i}\,\frac\partial{\partial\tau}+q\frac\partial{\partial q}
+\frac1{12}(X^2-Y)\frac\partial{\partial X}+\frac13(XY-Z)\frac\partial{\partial Y}+\frac12(XZ-Y^2)\frac\partial{\partial Z}.
$$
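As a consistency check, the relation $\delta\Delta=E_2\Delta$ for the discriminant $1728\Delta=E_4^3-E_6^2$ is reproduced by $D$:
$$
D(Y^3-Z^2)=3Y^2\cdot\tfrac13(XY-Z)-2Z\cdot\tfrac12(XZ-Y^2)=X(Y^3-Z^2).
$$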
Our goal is to demonstrate that the elements
$$
v_1=\frac{XZ}{Y}-Y \quad\text{and}\quad v_2=\frac{XY^2}{Z}-Y,
$$
corresponding to $f_{1,-1,1}-f_{0,1,0}$ and $f_{1,2,-1}-f_{0,1,0}$, do not have $D$-anti-derivatives in $\mathcal{K}$
(not even in $\mathcal{K}\<D^{-1}v_2\>$ and $\mathcal{K}\<D^{-1}v_1\>$, respectively).
This follows trivially from noticing that $\operatorname{ord}_Yv_1=-1$ and $\operatorname{ord}_Zv_2=-1$, so that if either $D^{-1}v_1$ or $D^{-1}v_2$ existed then $\operatorname{ord}_YD^{-1}v_1<0$ and $\operatorname{ord}_ZD^{-1}v_2<0$, hence $\operatorname{ord}_Yv_1=\operatorname{ord}_YD(D^{-1}v_1)\le-2$ and similarly $\operatorname{ord}_Zv_2\le-2$, contradiction.
By \cite[Lemma 3.9]{Ka57} applied twice, the anti-derivatives
$$
\tilde E_{4a}=\delta^{-1}(f_{1,-1,1}-f_{0,1,0})
\quad\text{and}\quad
\tilde E_{4b}=\delta^{-1}(f_{1,2,-1}-f_{0,1,0})
$$
are algebraically independent over the field $\mathbb C\<\tau,q,E_2,E_4,E_6\>$; thus, the extended differential field
$$
\mathbb C\<\tau,q,E_2,E_4,E_6,\tilde E_{4a},\tilde E_{4b}\>
$$
has transcendence degree 7 over $\mathbb C$ and is a Picard--Vessiot extension of the differential field $\mathbb C\<\tau,q,E_2,E_4,E_6\>$. Again, by identifying the latter through the isomorphism
$$
\varphi\colon E_2\mapsto X, \; E_4\mapsto Y, \; E_6\mapsto Z, \; \tilde E_{4a}\mapsto S, \; \tilde E_{4b}\mapsto T
$$
with the differential field $\hat{\mathcal{K}}=\mathbb C\<\tau,q,X,Y,Z,S,T\>$ equipped with the derivation
\begin{align*}
\hat D&=\frac1{2\pi i}\,\frac\partial{\partial\tau}+q\frac\partial{\partial q}
+\frac1{12}(X^2-Y)\frac\partial{\partial X}+\frac13(XY-Z)\frac\partial{\partial Y}+\frac12(XZ-Y^2)\frac\partial{\partial Z}
\\ &\qquad
+\biggl(\frac{XZ}{Y}-Y\biggr)\frac\partial{\partial S}
+\biggl(\frac{XY^2}{Z}-Y\biggr)\frac\partial{\partial T},
\end{align*}
we want to demonstrate that the element
$$
v_3=\frac{X^2Z}{Y}-Z
$$
corresponding to $f_{2,-1,1}-f_{0,0,1}$ does not have a $\hat D$-anti-derivative in $\hat{\mathcal{K}}$.
Assume on the contrary that there is an element $u_3\in\hat{\mathcal{K}}$ such that $\hat Du_3=v_3$.
Notice that the functions $\tau$, $q=e^{2\pi i\tau}$, $E_2(\tau)$, $E_4(\tau)$ and $E_6(\tau)$ are all analytic at $\tau=\rho=e^{2\pi i/3}$, the latter three having the values
$$
E_2(\rho)=\frac{2\sqrt3}{\pi}, \quad E_4(\rho)=0, \quad E_6(\rho)=\biggl(\frac{3\Gamma(\frac13)^6}{8\pi^4}\biggr)^3.
$$
With the help of Ramanujan's system \eqref{rama-DE} we find that
$$
E_4(\tau)=-\frac{2\pi i}3\,E_6(\rho)(\tau-\rho)+O\bigl((\tau-\rho)^2\bigr) \quad\text{as}\; \tau\to\rho,
$$
so that
$$
\begin{aligned}
f_{1,-1,1}-f_{0,1,0}
&=\frac{3iE_2(\rho)}{2\pi}\,\frac1{\tau-\rho}+O(1),
\\
f_{2,-1,1}-f_{0,0,1}
&=\frac{3iE_2(\rho)^2}{2\pi}\,\frac1{\tau-\rho}+O(1)
\end{aligned}
\quad\text{as}\; \tau\to\rho
$$
and $f_{1,2,-1}-f_{0,1,0}$ is analytic at $\tau=\rho$. In turn, this implies that
$$
\begin{aligned}
\tilde E_{4a}&=\frac{3iE_2(\rho)}{2\pi}\,\ln(\tau-\rho)+g_1(\tau),
\\
\delta^{-1}(f_{2,-1,1}-f_{0,0,1})
&=\frac{3iE_2(\rho)^2}{2\pi}\,\ln(\tau-\rho)+g_3(\tau)
\end{aligned}
\quad\text{as}\; \tau\to\rho
$$
for some functions $g_1(\tau)$ and $g_3(\tau)$ analytic at $\tau=\rho$, while $\tilde E_{4b}(\tau)$ is analytic there.
To summarise, the function
$$
\delta^{-1}(f_{2,-1,1}-f_{0,0,1})
-\frac{2\sqrt3}\pi\,\tilde E_{4a}(\tau)
=\delta^{-1}(f_{2,-1,1}-f_{0,0,1})
-E_2(\rho)\tilde E_{4a}(\tau)
$$
is analytic at $\tau=\rho$, hence only representable as a rational function of $\tau,q,E_2,E_4,\allowbreak E_6,\tilde E_{4b}$.
Using the isomorphism $\varphi$ we conclude that
$$
u=u_3-\frac{2\sqrt3}\pi\,S\in\hat{\mathcal{K}}
$$
is a polynomial in $\tau,q,X,Y,Z,T$. The latter is seen to be impossible after the operator $\hat D$ is applied to $u=u_3-\dfrac{2\sqrt3}\pi\,S$, leading to a rational expression of $S$ in terms of the other generators of~$\hat{\mathcal{K}}$.
The contradiction we arrive at implies that the anti-derivative
$$
\tilde E_6=\delta^{-1}(f_{2,-1,1}-f_{0,0,1})
$$
is transcendental over the field $\mathbb C\<\tau,q,E_2,E_4,E_6,\tilde E_{4a},\tilde E_{4b}\>$.
On replacing the generators of the latter with the anti-derivatives of magnetic modular forms from Theorems~\ref{th1} and~\ref{th2} we obtain the following result.
\begin{theorem}
\label{th555}
The differential field
$$
\mathbb C\<\tau,q,E_2,E_4,E_6,\tilde F_{4a},\tilde F_{4b},\tilde F_6\>,
$$
generated by $\tau$, $q=e^{2\pi i\tau}$, the Eisenstein series \eqref{eis-ser} and the anti-derivatives
$$
\tilde F_{4a}=\delta^{-1}\biggl(\frac{\Delta}{E_4^2}\biggr), \quad
\tilde F_{4b}=\delta^{-1}\biggl(\frac{E_4\Delta}{E_6^2}\biggr), \quad
\tilde F_6=\delta^{-1}\biggl(\frac{E_6\Delta}{E_4^3}\biggr)
$$
with integral coefficients in their $q$-expansions,
has transcendence degree $8$ over $\mathbb C$.
\end{theorem}
\begin{remark}
\label{rem-alt}
Another way to see that no $u_3$ exists in $\hat{\mathcal{K}}$ such that $\hat Du_3=v_3$ is by casting $u_3$ in the form $p/q$ with
$p,q$ in the ring $\mathcal R[S]$, where $\mathcal R=\mathbb C\<\tau,q,X,Y,Z,T\>$, and $\gcd(p,q)=1$.
After clearing the denominators in $\hat D(p/q)=v_3$ and comparing the degree in $S$ on both sides, one concludes that $\hat Dq=uq$ for some $u\in\mathcal R$ (that is, independent of~$S$).
This leads to the conclusion that $q\in\mathcal R$, so that $u_3$ is a polynomial in~$S$.
Finally, the equation $\hat Du_3=X^2Z/Y-Z$ is seen to be impossible by comparing the order in~$Y$ on both sides.
\end{remark}
\begin{exercise}
\label{ex1}
We leave it to the reader to prove that the anti-derivative of $\tilde F_6$ (that is, the second anti-derivative of $F_6$) is transcendental over the field in Theorem~\ref{th555}.
\end{exercise}
\section{Half-integral weight weakly holomorphic modular forms}
\label{w5/2}
Following the ideas in \cite{LN19}, we will cast magnetic modular forms of weight $2k$ as the images of weakly holomorphic eigenforms of weight $k+1/2$ under the Shimura--Borcherds lift.
In our setting, an input for the lift is a form $f(\tau)=\sum_{n\gg-\infty}a(n)q^n$ from the Kohnen plus space $M_{k+1/2}^{!,+}$ (meaning that $a(n)$ vanishes when $(-1)^kn\not\equiv0,1\bmod4$);
the output is the meromorphic modular form $\Psi(f)(\tau)=\sum_{n>0}A(n)q^n$ with
\begin{equation}
A(n)=\sum_{d\mid n}\bigg(\frac dD\bigg)d^{k-1}a(|D|\,n^2/d^2),
\label{a->A}
\end{equation}
where $D=D_k=1$ for $k$ even (so that the Kronecker--Jacobi symbol $\big(\frac dD\big)$ is always~1) and $D=D_k=-3$ for $k$~odd.
In other words,
\begin{equation}
\Psi=\Psi_k\colon f=\sum_{n\gg-\infty}a(n)q^n
\mapsto F=\sum_{n>0}q^n\sum_{d\mid n}\bigg(\frac d{D_k}\bigg)d^{k-1}a(|D_k|\,n^2/d^2),
\label{SB}
\end{equation}
and the latter expression is just $F=\sum_{n>0}q^n\sum_{d\mid n}d^{k-1}a(n^2/d^2)$ when $k$~is even.
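In computations it is convenient to have the lift \eqref{SB} in executable form. Here is a minimal sketch (Python; the dictionary encoding of $q$-expansions and the function names are ours, and the Kronecker--Jacobi symbol is implemented only for the two moduli $D\in\{1,-3\}$ used in the text):
\begin{verbatim}
def kronecker_D(d, D):
    # (d/D) for d > 0: trivial for D = 1, the quadratic
    # character modulo 3 for D = -3
    if D == 1:
        return 1
    if D == -3:
        return [0, 1, -1][d % 3]
    raise ValueError("only D = 1 and D = -3 are supported")

def sb_lift(a, k, nmax):
    # a: dict n -> a(n) encoding f; returns the dict n -> A(n)
    # of Psi(f) for 1 <= n <= nmax, following (a->A)
    D = 1 if k % 2 == 0 else -3
    return {n: sum(kronecker_D(d, D) * d**(k - 1)
                   * a.get(abs(D) * (n // d)**2, 0)
                   for d in range(1, n + 1) if n % d == 0)
            for n in range(1, nmax + 1)}
\end{verbatim}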
We will also distinguish the Kohnen plus cuspidal space $S_{k+1/2}^{!,+}$ in $M_{k+1/2}^{!,+}$ by imposing the additional constraint $a(0)=0$.
Our examples of forms from $M_{k+1/2}^{!,+}$ with $k=2$ involved in the proof of Theorem~\ref{th1} are the following three:
\begin{align*}
g_0(\tau)&=\theta(\tau)\,(\theta(\tau)^4-20 E_{2,4}(\tau))
\\
&=1 - 10q - 70q^4 - 48q^5 - 120q^8 - 250q^9 - \dotsb - 550q^{16} - \dotsb
\\ &\quad
- 1210q^{25} - \dotsb - 1750q^{36} - \dotsb - 3370q^{49} -\dotsb,
\displaybreak[2]\\
g_1(\tau)&=\frac{\theta(\tau)E_4(4\tau)^2E_6(4\tau)}{\Delta(4\tau)}
\\
&=q^{-4} + 2q^{-3} + 2 - 196884q^4 - \dotsb - 85975040q^9 - \dotsb
\\ &\quad
- 86169224844q^{16} - \dotsb - 51186246451200q^{25} - \dotsb
\\ &\quad
- 35015148280961780q^{36} - \dotsb - 21434928162930081792q^{49} - \dotsb,
\displaybreak[2]\\
g_2(\tau)&=\frac{g_0(\tau)E_4(4\tau)^3}{\Delta(4\tau)}
\\
&=q^{-4} - 10q^{-3} + 674 - 7488q + 144684q^4 - \dotsb - 224574272q^9 - \dotsb
\\ &\quad
- 42882054732q^{16} - \dotsb - 63793268216640q^{25} - \dotsb
\\ &\quad
- 31501841125150388q^{36} - \dotsb - 22385069000981561664q^{49} - \dotsb,
\end{align*}
where $\theta(\tau)=\sum_{n\in\mathbb Z}q^{n^2}$ and
\begin{equation}
E_{2,4}(\tau)
=\frac{-E_2(\tau)+3E_2(2\tau)-2E_2(4\tau)}{24}
=\sum_{\substack{n=1\\n\;\text{odd}}}^\infty q^n\sum_{d\mid n}d.
\label{E24}
\end{equation}
The modular form $g_0(\tau)$ is known as the normalised Cohen--Eisenstein series of weight~$5/2$.
\begin{lemma}
\label{lem1}
\begin{enumerate}[(a)]
\item[\textup{(a)}]
The weight $5/2$ weakly holomorphic modular form
$$
f_{4a}(\tau) = \frac{7}{8} g_0(\tau) + \frac{1}{768} g_1(\tau) - \frac{1}{768} g_2(\tau)
= \frac{1}{64}q^{-3} + q - 506q^4 + \dotsb
$$
lies in the Kohnen plus cuspidal space $S^{!,+}_{5/2}$ and its Shimura--Borcherds lift $\Psi(f_{4a})$ is $F_{4a}=\Delta/E_4^2$.
\item[\textup{(b)}]
The weight $5/2$ weakly holomorphic modular form
$$
f_{4b}(\tau) = \frac{19}{18} g_0(\tau) -\frac{5}{648} g_1(\tau) - \frac{1}{648} g_2(\tau)
= -\frac{1}{108}q^{-4} + q + 1222q^4 + \dotsb
$$
lies in the Kohnen plus cuspidal space $S^{!,+}_{5/2}$ and its Shimura--Borcherds lift $\Psi(f_{4b})$ is $E_4\Delta/E_6^2$.
\end{enumerate}
\noindent
Moreover, $f_{4a}\in \frac{1}{64} q^{-3}\mathbb{Z}[[q]]$ and $f_{4b}\in \frac{1}{108} q^{-4}\mathbb{Z}[[q]]$.
\end{lemma}
The identification $\Psi(f_{4a})=F_{4a}$ is already in Borcherds' \cite[Example~14.4]{Bo98}.
\begin{proof}
Indeed, we only need to check that $f_{4a},f_{4b}$ have vanishing constant term and that the first three coefficients in the $q$-expansions of $\Psi(f_{4a})$, $\Psi(f_{4b})$ agree with those of the predicted meromorphic modular forms; we choose to check the first seven coefficients.
For the integrality statement, we use the alternative expressions
$$
64f_{4a}(\tau)= \frac{f_{14+1/2}^*(\tau)}{\Delta(4\tau)}
$$
and
$$
-108f_{4b}(\tau)= \frac{ f_{14+1/2}(\tau)\,(j(4\tau)-674)+10 f_{14+1/2}^*(\tau)}{\Delta(4\tau)},
$$
where the forms $f_{b+1/2}(\tau), f^*_{b+1/2}(\tau)$ are the holomorphic modular forms of weight $b+1/2$ with integral $q$-expansions from the table in \cite[Appendix]{DJ08} and $j(\tau)=E_4(\tau)^3/\Delta(\tau)$ is the elliptic modular invariant.
\end{proof}
As we will see further, for certain forms $\sum_{n\gg-\infty}a(n)q^n\in S^{!,+}_{5/2}$ with integral $q$-expansions (in particular, for the forms $64f_{4a}$ and $108f_{4b}$) one can make use of Hecke operators to deduce the divisibility $n\mid a(n^2)$ for $n>0$.
This readily implies that $64F_{4a}$ and $108F_{4b}$ in Theorem~\ref{th1} are strongly magnetic modular forms, since the relation in~\eqref{a->A} translates the divisibility into
$$
\frac{A(n)}n
=\sum_{d\mid n}\frac{a(n^2/d^2)}{n/d}
=\sum_{d\mid n}\frac{a(d^2)}{d}\in\mathbb Z.
$$
A detailed analysis below reveals that the factors $64$ and $108$ can also be removed.
\section{The square part and Hecke operators}
\label{sq}
We refer the reader to \cite{DJ08} and \cite{BGK15} for the definition of Hecke operators $\mathcal{T}_p$ and $T_{p^2}$ on integral weight $2k$ and half-integral weight $k+1/2$ modular forms (including weakly holomorphic or meromorphic), respectively.
As in the case of the Shimura--Borcherds lift $\Psi=\Psi_k$ in \eqref{SB}, these definitions make perfect sense for \emph{any} Laurent series $f=\sum_{n\gg-\infty}a(n)q^n$, not necessarily of modular origin but with the weight $2k$ or $k+1/2$ additionally supplied.
We refer to the finite sum $\sum_{n<0}a(n)q^n$ as the principal part of~$f$.
We take
$$
f\kern1.5pt|\kern1.5pt U_p=\sum_{n\gg-\infty}a(np)q^n,
\quad
f\kern1.5pt|\kern1.5pt V_p=\sum_{n\gg-\infty}a(n)q^{np},
\quad
f\kern1.5pt|\kern1.5pt\chi=\sum_{n\gg-\infty}\chi(n)a(n)q^n
$$
for a character $\chi\colon\mathbb{Z}\to\mathbb C$, and define
$$
f\kern1.5pt|\kern1.5pt \mathcal{T}_p=f\kern1.5pt|\kern1.5pt(\mathcal{T}_p,2k)=f\kern1.5pt|\kern1.5pt U_p+p^{2k-1}\, f\kern1.5pt|\kern1.5pt V_p
$$
and
$$
f\kern1.5pt|\kern1.5pt T_{p^2}=f\kern1.5pt|\kern1.5pt(T_{p^2},k+1/2)=f\kern1.5pt|\kern1.5pt U_p^2+p^{k-1}\, f\kern1.5pt|\kern1.5pt\chi_p+p^{2k-1}\, f\kern1.5pt|\kern1.5pt V_p^2,
$$
where $\chi_p(n)=\chi_{p,k}(n)=\big(\frac{(-1)^kn}p\big)$ is the Kronecker--Jacobi symbol.
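These operators act termwise on (truncated) Laurent series and are straightforward to implement; a minimal sketch in the same dictionary encoding as before (our own code, with $\chi_p$ evaluated by Euler's criterion and hence valid for odd primes $p$ only):
\begin{verbatim}
def op_U(a, p):
    # f|U_p: the coefficient of q^n becomes a(np)
    return {n // p: c for n, c in a.items() if n % p == 0}

def op_V(a, p):
    # f|V_p: substitutes q -> q^p
    return {n * p: c for n, c in a.items()}

def op_chi(a, p, k):
    # twist by chi_p(n) = ((-1)^k n / p), p an odd prime
    def chi(n):
        m = ((-1)**k * n) % p
        if m == 0:
            return 0
        return 1 if pow(m, (p - 1) // 2, p) == 1 else -1
    return {n: chi(n) * c for n, c in a.items() if chi(n) != 0}

def add(*series):
    out = {}
    for s in series:
        for n, c in s.items():
            out[n] = out.get(n, 0) + c
    return out

def scale(a, t):
    return {n: t * c for n, c in a.items()}

def T_p2(a, p, k):
    # f|T_{p^2} = f|U_{p^2} + p^(k-1) f|chi_p + p^(2k-1) f|V_{p^2}
    return add(op_U(a, p * p),
               scale(op_chi(a, p, k), p**(k - 1)),
               scale(op_V(a, p * p), p**(2 * k - 1)))
\end{verbatim}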
A simple calculation shows that $\Psi_k(f)\kern1.5pt|\kern1.5pt(\mathcal{T}_p,2k)=\Psi_k\big(f\kern1.5pt|\kern1.5pt(T_{p^2},k+1/2)\big)$, which we can reproduce in a simplified form
\begin{equation*}
\Psi(f)\kern1.5pt|\kern1.5pt\mathcal{T}_p=\Psi(f\kern1.5pt|\kern1.5pt T_{p^2})
\end{equation*}
when $k$ is fixed.
\begin{lemma}\label{Cong1}
Given a positive integer $k$, assume that there are no cusp forms of weight $2k$.
For a prime $p$, let $f\in M^{!,+}_{k+1/2}$ have $p$-integral coefficients and satisfy $p^2>-\operatorname{ord}_q(f)$.
Then
$$
f\kern1.5pt|\kern1.5pt T_{p^2}^n\equiv 0 \bmod p^{(k-1)n}.
$$
\end{lemma}
\begin{proof}
Following the argument in \cite[proof of Lemma~3.1]{BGK15}, we can write
\begin{equation}
T_{p^2}^n
=\sum_{\substack{a,b,c,r\ge0\\a+b+c=n\\r\le\min\{a,c\}}}
\alpha_{a,b,c,r}\cdot p^{(2k-1)c+(k-1)b}\cdot U_{p^2}^{a-r}\chi_p^bV_{p^2}^{c-r},
\label{BGK}
\end{equation}
where $\alpha_{a,b,c,r}$ are some integers.
This expression can be easily deduced from $V_{p^2}\chi_p =\chi_p U_{p^2}=0$ and the fact that $V_{p^2}U_{p^2}$ is the identity.
We only need to analyse the principal part of $f\kern1.5pt|\kern1.5pt T_{p^2}^n$ which, by the hypothesis $\dim S_{2k}=0$, determines it uniquely.
If $r<a$, then $f\kern1.5pt|\kern1.5pt U_{p^2}^{a-r}\chi_p^b V_{p^2}^{c-r}$ has no principal part, because the latter is killed by a single action of $U_{p^2}$ (since $a(-p^2m)=0$ for any $m\ge 1$, by the hypothesis $p^2>-\operatorname{ord}_q(f)$).
Therefore, we may assume that $a=r\le c$. This implies that $(2k-1)c+(k-1)b\ge (k-1)(2c+b)\ge(k-1)n$, hence the principal part of $f\kern1.5pt|\kern1.5pt T_{p^2}^n$ is divisible by $p^{(k-1)n}$.
This in turn implies that $f\kern1.5pt|\kern1.5pt T_{p^2}^n=p^{(k-1)n}\cdot g$ for some $g\in M_{k+1/2}^{!,+}$ with $p$-integral coefficients, since there is a basis $\{g_m=q^m+O(q):m\in\mathbb{Z},\;(-1)^km\equiv0,1\bmod4\}$ of $M^{!,+}_{k+1/2}$ whose elements have all coefficients integral (see \cite[Proposition~2]{DJ08}).
\end{proof}
In parallel with \eqref{SB}, define
\begin{equation*}
\Phi=\Phi_k\colon g=\sum_{n\gg-\infty}b(n)q^n
\mapsto \sum_{n>0}q^{|D_k|n^2}\sum_{d\mid n}\bigg(\frac d{D_k}\bigg)d^{k-1}\mu(d)b(n/d),
\end{equation*}
where $\mu(\,\cdot\,)$ is the M\"obius function and, as before, $D_k=2\cdot(-1)^k-1\in\{1,-3\}$.
We further define the `square part' of a Laurent series $f=\sum_{n\gg-\infty}a(n)q^n$ as
$$
f^{\square}=\sum_{n>0} a(|D_k|n^2) q^{|D_k| n^2}.
$$
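In the same encoding, the square part admits a short implementation (our sketch):
\begin{verbatim}
from math import isqrt

def square_part(a, k):
    # f^square: keep the coefficients sitting at the
    # indices |D_k| n^2 with n > 0
    D = 1 if k % 2 == 0 else 3  # this is |D_k|
    return {n: c for n, c in a.items()
            if n > 0 and n % D == 0
            and isqrt(n // D)**2 == n // D}
\end{verbatim}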
The definitions immediately lead to the following conclusions.
\begin{lemma}\label{sqr}
We have $\Phi (\Psi(f))=f^{\square}$.
In particular, if $\Psi(f)\in q\mathbb{Z}[[q]]$, then $f^{\square}\in q\mathbb{Z}[[q]]$.
\end{lemma}
Notice that $f_{4a}^\square, f_{4b}^\square \in q\mathbb{Z}[[q]]$ by this lemma, because both $F_{4a}=\Psi(f_{4a})$ and $F_{4b}=\Psi(f_{4b})$ are in $q\mathbb{Z}[[q]]$.
In addition to this, we list some other easily verifiable properties about the interaction of Hecke operators and square parts.
\begin{lemma}\label{Heck}
Given a Laurent series $f=\sum_{n\gg-\infty}a(n)q^n$ and positive integer $k$, the following statements are true.
\begin{enumerate}
\item[\textup{(a)}] $\Psi(f)\kern1.5pt|\kern1.5pt\mathcal{T}_p^n=\Psi(f\kern1.5pt|\kern1.5pt T_{p^2}^n)$ for $n=1,2,\dots$\,.
\item[\textup{(b)}] $\Psi(f)=\Psi(f^\square)$.
\item[\textup{(c)}]
$(f\kern1.5pt|\kern1.5pt T_{p^2})^{\square}=f^{\square}\kern1.5pt|\kern1.5pt T_{p^2}$ termwise, that is,
$(f\kern1.5pt|\kern1.5pt U_{p^2})^{\square}=f^{\square}\kern1.5pt|\kern1.5pt U_{p^2}$,
$(f\kern1.5pt|\kern1.5pt V_{p^2})^{\square}=f^{\square}\kern1.5pt|\kern1.5pt V_{p^2}$ and
$(f\kern1.5pt|\kern1.5pt \chi_p)^{\square}=f^{\square}\kern1.5pt|\kern1.5pt \chi_p$.
\item[\textup{(d)}] If the coefficients of $f$ are integral and $k\ge 2$, then $f\kern1.5pt|\kern1.5pt T_{p^2}\equiv f\kern1.5pt|\kern1.5pt U_p^2\bmod p$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Theorem \textup{\ref{th1}}]
Consider $f\in\{f_{4a},f_{4b}\}$. For a prime $p\ge5$, the form $f$ is $p$-integral and we have $\operatorname{ord}_q(f)\ge-4$;
therefore Lemma~\ref{Cong1} with $k=2$ applies to result in
$$
f\kern1.5pt|\kern1.5pt T_{p^2}^n\equiv 0 \bmod p^n.
$$
Applying the Shimura--Borcherds map \eqref{SB} we deduce that, for $F=\Psi(f)\in\{F_{4a},F_{4b}\}$, we have $F\kern1.5pt|\kern1.5pt \mathcal{T}_p^n\equiv 0 \bmod p^{n}$ for all $n\ge 1$,
hence $F\kern1.5pt|\kern1.5pt U_{p}^n\equiv 0 \bmod p^{n}$; in other words, $F=\sum_{m>0}A(m)q^m$ has the strong $p$-magnetic property:
\begin{equation}\label{mag4}
p^n\mid m\implies p^n\mid A(m)
\end{equation}
for any prime $p\ge 5$.
This argument also works for $f=f_{4a}$ in the case of $p=3$, because $f_{4a}$ is $3$-integral.
Consider now $p=3$ and $f=f_{4b}$, in which case we only know that $27f$ is $3$-integral.
Take the (unique!) element $g_r\in M_{5/2}^{!, +}$ with $q$-expansion $g_r=q^{-4\cdot 9^r}+O(q)$;
by \cite[Proposition~2]{DJ08} it has integral coefficients.
We first show that $g_0^\square\kern1.5pt|\kern1.5pt T_9^n\equiv 0 \bmod 3^{n+3}$.
For $n=0$ this is true, because $g_0=-108\cdot f_{4b}$ and $f_{4b}^\square$ is in $q\mathbb{Z}[[q]]$.
For $n=1$ we observe that $\Psi(-\frac{1}{108}g_0\kern1.5pt|\kern1.5pt T_9)=F_{4b}\kern1.5pt|\kern1.5pt \mathcal{T}_3$ and $F_{4b}\equiv \Delta\bmod 3$ (since both $E_4,E_6\equiv 1\bmod 3$).
This implies that $F_{4b}\kern1.5pt|\kern1.5pt \mathcal{T}_3\equiv\Delta\kern1.5pt|\kern1.5pt \mathcal{T}_3\equiv 0 \bmod 3$, hence
$$
-\frac{1}{108}g_0^\square\kern1.5pt|\kern1.5pt T_9=\Phi(F_{4b}\kern1.5pt|\kern1.5pt \mathcal{T}_3)\equiv 0\bmod 3
$$
meaning that $g_0^\square\kern1.5pt|\kern1.5pt T_9^n\equiv 0 \bmod 3^{n+3}$ is true when $n=1$.
Since $g_0\kern1.5pt|\kern1.5pt T_9=27 g_1-3g_0$ we also deduce from this that $g_1^\square\equiv 0 \bmod 3$.
For $n$ general, we want to write $g_0\kern1.5pt|\kern1.5pt T_9^n$ as a $\mathbb{Z}$-linear combination of $g_r$ with $r=0,1,\dots,n$.
Looking at the principal part of $g_0\kern1.5pt|\kern1.5pt T_9^n$, one finds that only terms of the form $q^{-4\cdot 3^{2m}}$ appear, so that subtracting the related linear combination of the $g_r$ leads to a holomorphic cusp form, which then must vanish. To examine this linear combination in more detail we proceed as in the proof of Lemma~\ref{Cong1}:
$$
g_0\kern1.5pt|\kern1.5pt T_9^n
=\sum_{a,b,c,r} \alpha_{a,b,c,r}\cdot 3^{3c+b}\cdot g_0\kern1.5pt|\kern1.5pt U_9^{a-r}\chi_3^bV_9^{c-r}
$$
(see equation~\eqref{BGK}).
As already noticed in that proof, only the terms with $r=a\le c$ contribute to the principal part, thus to the linear combination; the terms with $r=a$ contribute the subsum
$$
\sum_{a,b,c} \alpha_{a,b,c,a}\cdot (-1)^b\cdot 3^{3c+b}\cdot g_{c-a}.
$$
Now notice that if $2c\ge a+3$, then the coefficient is divisible by $3^{n+3}$. In the remaining situations we have $2a\le 2c<a+3$, in particular $a\in\{0,1,2\}$, and we use the following analysis:
\begin{enumerate}[(a)]
\item If $a=2$, then the inequalities imply that $c=2$, hence $b=n-4$; the corresponding term is then a multiple of $3^{3\cdot 2+n-4}g_0$.
\item If $a=1$, then $c=1$, hence $b=n-2$; the corresponding term happens to be a multiple of $3^{3\cdot 1+n-2}g_0$.
\item If $a=0$, then $c\in\{0,1\}$. The term corresponding to $c=0$ is a multiple of $3^ng_0$, while the term corresponding to $c=1$ is a multiple of $3^{n+2}\cdot g_1$.
\end{enumerate}
Gathering all the terms, we end up with an expression
$$
g_0\kern1.5pt|\kern1.5pt T_9^n=3^{n+3} g+3^{n+2}\alpha\cdot g_1+3^{n}\beta\cdot g_0,
$$
where $g$ is integral and both $\alpha$ and $\beta$ are integers.
Taking the square parts on both sides and using the results for $n=0,1$ we deduce that $g_0^\square\kern1.5pt|\kern1.5pt T_9^n\equiv 0 \bmod 3^{n+3}$ for any $n=0,1,\dots$\,.
Finally, we apply the Shimura--Borcherds map to this congruence to deduce that $F_{4b}\kern1.5pt|\kern1.5pt \mathcal{T}_3^n\equiv 0 \bmod 3^n$ for all $n\ge 0$. In other words, this implies the congruences \eqref{mag4} for $p=3$.
Turning now our attention to the prime $p=2$, notice that the Hecke operator $T_4$ does not respect the Kohnen plus space. However, if we define the projection
$$
K^+=K_k^+\colon \sum_{n\in\mathbb Z} a(n) q^n \mapsto \sum_{\substack{n\in\mathbb Z\\(-1)^kn\equiv0,1\bmod4}} a(n) q^n,
$$
then the operator $T_4'=K^+\circ T_4$ maps the space $M_{k+1/2}^{!,+}$ to itself and inherits all the properties used above for $T_{p^2}$ when $p>2$.
We use this operator~$T_4'$ in place of $T_4$ to complete the proof of our Theorem~\ref{th1}.
Notice that in both cases $f=f_{4a}$ and $f=f_{4b}$ the form $f$ has powers of $2$ in the denominator of its leading term.
For ease of the argument, we treat the two cases separately, though the same strategy is used for both, along the lines of the proof of relation \eqref{mag4} for $p=3$ given above.
When $f=f_{4b}$, we need to prove that $F_{4b}\kern1.5pt|\kern1.5pt \mathcal{T}_{2}^n\equiv 0 \bmod 2^n$, which is in turn implied by the congruence $f_{4b}^\square\kern1.5pt|\kern1.5pt {T_4'}^n\equiv 0\bmod 2^n$. Introduce $g_r=q^{-4\cdot4^r}+O(q)\in M_{5/2}^{!,+}$ with integral $q$-expansions for $r=0,1,\dots$ and notice that $f_{4b}=-\frac{1}{108}\cdot g_0$.
Induction on $r\ge0$ shows that the recursion $g_r\kern1.5pt|\kern1.5pt T_4'=8g_{r+1}+g_{r-1}$ holds, with the convention that $g_{-1}=0$.
This in turn leads to
$$
g_0\kern1.5pt|\kern1.5pt {T_4'}^n=2^{n+2}g+2^{n+1}\alpha\cdot g_1+2^n\beta\cdot g_0
$$
for some integral $g\in M_{5/2}^{!,+}$ and $\alpha,\beta\in\mathbb{Z}$.
Taking the square parts on both sides and using that $F_{4b}\equiv \Delta \bmod 8$, hence $\Phi(F_{4b}\kern1.5pt|\kern1.5pt\mathcal{T}_2)\equiv\Phi(\Delta\kern1.5pt|\kern1.5pt\mathcal{T}_2)\equiv0\bmod8$, we conclude with $g_0^\square\kern1.5pt|\kern1.5pt {T_4'}^n\equiv 0\bmod 2^{n+2}$, hence with \eqref{mag4} for $p=2$ and $F=F_{4b}$.
For $f=f_{4a}$, we introduce the family $g_r=q^{-3\cdot 4^r}+O(q)\in M_{5/2}^{!,+}$, where $r=0,1,\dots$, which is invariant under the action of the operator $T_4'$, and proceed similarly to get exactly the same recursion $g_r\kern1.5pt|\kern1.5pt T_4'=8g_{r+1}+g_{r-1}$ for $r\ge0$ with $g_{-1}=0$. On using $g_0=64f_{4a}$,
$$
g_0\kern1.5pt|\kern1.5pt {T_4'}^n=2^{n+6}g+2^{n+5}\alpha\cdot g_2+2^{n+4}\beta\cdot g_1+2^{n+3}\gamma\cdot g_0
$$
for $n\ge 3$, and $F_{4a}\equiv \Delta \bmod 8$, we conclude with $g_0^\square\kern1.5pt|\kern1.5pt {T_4'}^n\equiv 0\bmod 2^{n+6}$ implying $F_{4a}\kern1.5pt|\kern1.5pt \mathcal{T}_{2}^n\equiv 0 \bmod 2^n$ as required.
\end{proof}
\begin{proof}[Proof of Theorem~\textup{\ref{th2}}]
We now work with $k=3$.
Consider
$$
f(\tau)=-\frac{1}{384}\,\frac{f_{15+1/2}^*(\tau)}{\Delta(4\tau)}\in M_{k+1/2}^{!,+},
$$
where $f_{b+1/2}^*$ is the weight $b+1/2$ modular form from the table in \cite[Appendix]{DJ08}.
One can easily check (through the first few coefficients) that $\Psi(f)=F_6$ and from the expression above we also know that $f$ has $p$-integral coefficients for any $p\ge 5$.
It follows from Lemma~\ref{Cong1} (applied this time with $k=3$) that $f\kern1.5pt|\kern1.5pt T_{p^2}^n\equiv 0 \bmod p^{2n}$.
Therefore, $F_6\kern1.5pt|\kern1.5pt \mathcal{T}_p^n\equiv 0\bmod p^{2n}$ for all $n\ge 0$ implying that $F_6\kern1.5pt|\kern1.5pt U_p^n\equiv 0 \bmod p^{2n}$ and that for $F_6=\sum_{m>0} A(m) q^m$ we have
\begin{equation}\label{cong2}
p^n\mid m\implies p^{2n}\mid A(m)
\end{equation}
for any prime $p\ge5$.
Since $384=3\cdot2^7$, for $p=3$ we see that $3f$ is $3$-integral.
Repeating the argument from Lemma~\ref{Cong1} and using the fact that $f$ is a multiple of the unique element in $M_{7/2}^{!,+}$ with the integral $q$-expansion $q^{-1}+O(q)$, we deduce that $f\kern1.5pt|\kern1.5pt T_9^n=3^{2n}\cdot (g+ \alpha f)$ with $\alpha$ an integer and $g$ a $3$-integral modular form.
Indeed, the principal part of $f\kern1.5pt|\kern1.5pt T_9^n$ is a $\mathbb{Z}$-linear combination of the principal parts of
$$
3^{(2\cdot 3-1)c+(3-1)b}\cdot f\kern1.5pt|\kern1.5pt \chi_{3}^b V_9^{c-a}=3^{2n}\cdot (3^{3c-2a} f)\kern1.5pt|\kern1.5pt \chi_{3}^b V_9^{c-a},
$$
where $a+b+c=n$ and $a\le c$, so that $3c-2a\ge c-a\ge0$.
If $c-a\ge 1$ then $3c-2a\ge1$, so the principal part of $(3^{3c-2a} f)\kern1.5pt|\kern1.5pt \chi_{3}^b V_9^{c-a}$ is $3$-integral; when $c=a$ the principal part of $(3^{a}f)\kern1.5pt|\kern1.5pt \chi_{3}^b$ is an integral multiple of the principal part of~$f$.
Thus, $f\kern1.5pt|\kern1.5pt T_9^n=3^{2n}\cdot (g+ \alpha \cdot f)$ implies (applying the Shimura--Borcherds lift to both sides) that $F_6\kern1.5pt|\kern1.5pt \mathcal{T}_3^n\equiv 0 \bmod 3^{2n}$, hence we deduce that \eqref{cong2} is true also for $p=3$.
To prove the relation \eqref{cong2} for $p=2$, we proceed as in the proof of Theorem~\ref{th1}.
We introduce the $T_4'$-invariant family of weight $7/2$ weakly holomorphic modular forms $g_r=q^{-4^r}+O(q)$ with integral $q$-expansions with the help of \cite[Proposition~2]{DJ08}.
Again, we write the expression of $g_0\kern1.5pt|\kern1.5pt {T_4'}^n$ as a $\mathbb{Z}$-linear combination of $g_r$ with $r=0,1,\dots,n$ and analyse the powers of $2$ appearing in the coefficients; similarly, we can prove that $g_0^\square\kern1.5pt|\kern1.5pt {T_4'}^n\equiv 0 \bmod 2^{2n+7}$ for any $n\ge 0$. For $n=0$ this comes from the integrality of $f^\square$, while for $n=1$ we get it, again, by noticing that $F_6\equiv E_6\Delta \bmod 2^{4}$ and that $E_6\Delta$ is an eigenform of weight 18 with slope $4$ at the prime~$2$.
The induction argument follows \emph{mutatis mutandis} as in the proof of Theorem~\ref{th1}.
\end{proof}
\section{Miscellanea on half-integral weight modular forms}
\label{w1/2}
In this part, not directly related to the proofs of Theorems~\ref{th1} and~\ref{th2}, we indicate a different strategy for constructing half-integral weight weakly holomorphic modular forms using a traditional raising operator.
Standard examples of weight $1/2$ modular forms (see \cite[Sect.~14, Example 2]{Bo95}) include the theta function $\theta(\tau)=\sum_{n\in\mathbb Z}q^{n^2}$ and
\begin{align*}
h_0(\tau)
&=\frac{ E_{2,4}(\tau) \theta(\tau) \,(\theta(\tau)^4-2E_{2,4}(\tau))\,(\theta(\tau)^4-16E_{2,4}(\tau))\,E_6(4\tau)}{\Delta(4\tau)}+56\theta(\tau)
\\
&=q^{-3}-248q+26752q^4+\dotsb,
\end{align*}
where $E_{2,4}(\tau)$ is given in \eqref{E24}. The images of $12\theta$ and $4\theta+h_0$ under the multiplicative Borcherds lift
$$
\Psi^{\text{mult}}\colon\sum_{n\gg-\infty}c(n)q^n
\mapsto q^{-h}\prod_{n>0}(1-q^n)^{c(n^2)}
$$
are the modular forms $\Delta(\tau)$ and $E_4(\tau)$, respectively (see \cite[Theorem~14.1]{Bo95} for the definition of~$h$).
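The infinite product is easy to expand on a computer; a minimal sketch (our own Python code, treating $h$ as an input and using integer arithmetic only):
\begin{verbatim}
def mult_lift(c, h, N):
    # q^{-h} * prod_{n>=1} (1 - q^n)^{c(n^2)}, the product
    # truncated at order q^{N-1}; c is a dict n -> c(n)
    coeff = [0] * N
    coeff[0] = 1
    for n in range(1, N):
        e = c.get(n * n, 0)
        for _ in range(abs(e)):
            if e > 0:   # multiply once by (1 - q^n)
                coeff = [coeff[i] - (coeff[i - n] if i >= n else 0)
                         for i in range(N)]
            else:       # multiply once by 1/(1-q^n) = 1 + q^n + ...
                for i in range(n, N):
                    coeff[i] += coeff[i - n]
    return {i - h: coeff[i] for i in range(N)}  # the q^{-h} shift
\end{verbatim}
For example, \texttt{mult\_lift(\{n*n: 24 for n in range(1, 20)\}, -1, 20)} returns the expansion $q-24q^2+252q^3-\dotsb$ of $\Delta$, in agreement with $\Psi^{\text{mult}}(12\theta)=\Delta$.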
Although it is not useful for our results in this note, we remark that the two weakly holomorphic modular forms can serve to construct some of the weight $5/2$ modular forms from Section~\ref{w5/2}.
\begin{lemma}
\label{lem-D}
The raising operator
$$
\mathcal{D}=\mathcal{D}_k\colon f\mapsto \delta f- \frac{2k+1}{6} E_2(4\tau)\cdot f
$$
maps ${M}_{k+1/2}^{!, +}$ to ${M}_{k+5/2}^{!, +}$.
\end{lemma}
\begin{proof}
Observe that $E_2(\tau)-4E_2(4\tau)$ is a modular form of weight 2 for $\Gamma_0(4)$, so that the difference between the usual raising operator and $\mathcal{D}$ is the multiplication by a weight 2 modular form, thus indeed $\mathcal{D}\colon{M}_{k+1/2}^{!}\to {M}_{k+5/2}^{!}$. On the other hand, both $\delta$ and multiplication by any modular form $f(4\tau)$ preserve the Kohnen plus space condition, and the lemma follows.
\end{proof}
For the functions $g_0$, $f_{4a}$ and $f_{4b}$ in Section~\ref{w5/2} we find that
$$
g_0= -6 \mathcal{D}\theta,
\quad
64f_{4a}=-\frac{6}{19} \mathcal{D}h_0
$$
and
$$
108 f_{4b}=\frac{3}{25}\mathcal{D}\biggl(-3 h_0+2012\theta+\frac{2\theta E_6(4\tau)^2}{\Delta(4\tau)}\biggr).
$$
\section{Concluding remarks}
Though we expect that our discussion above exhausts all elements with the magnetic property in $W_4^0$, many such exist for $W_{2k}^0$ with $k>2$; for example, the $q$-series $E_2^m\cdot(\delta E_j)/E_j$ for $j=4,6$ and $m=1,2,3,4,6$ (but not for $m=5$). Constructing magnetic \emph{modular} forms (that is, meromorphic ones with poles at quadratic irrationalities from the upper half-plane) is routine on the basis of the Shimura--Borcherds (SB) lift~\eqref{SB};
Table~\ref{tab1} lists a few instances of this production explicitly in terms of the $j$-invariant $j(\tau)=E_4^3/\Delta$.
Generating forms with a multiple magnetic property in higher weights is a tougher task;
one such example, $E_4^2(j -3\cdot 2^{10})/j^2$, can be found in the more recent work \cite{LS20} of L\"obrich and Schwagenscheidt; another example, a triply magnetic form of weight~8, is
$$
E_4^2\,\frac{13 j^3 - 443556 j^2 + 1610452125 j - 98280\cdot 15^6}{(j+15^3)^4}.
$$
We have observed that in all such instances the related numerators, viewed as polynomials in~$j$, have real zeroes only.
Furthermore, there are weaker divisibility conditions (resembling the Honda--Kaneko congruences \cite{HK12}) for individual summands of magnetic forms; for example, the anti-derivatives of
$$
\frac{E_4j}{(j-2\cdot 30^3)^2} \quad\text{and}\quad \frac{E_4}{(j-2\cdot 30^3)^2}
$$
are already $p$-integral for primes $p\equiv 5\pmod6$.
We have not tried to investigate this arithmetic subphenomenon.
There is a good reason to believe that all such magnetic forms originate from suitable Shimura--Borcherds lifts.
But, maybe, there is more to this story; time will tell.
\begin{table}[h]
\caption{Strongly magnetic modular forms of weight $4$ (where $f_m=q^{-m}+O(q)$ denotes the unique weakly holomorphic cusp form in $M_{5/2}^{!,+}$)}
\begin{tabular}{|c|c|}
\hline
SB lift & description in terms of $E_4$ and $j=E_4^3/\Delta$ \\
\hline
\hline
$\Psi(3^{-3} f_7)$ & $\displaystyle E_4\,\frac{19 j- 8\cdot 15^3}{(j+15^3)^2}$ \\
\hline
$\Psi(-2^{-3} f_8)$ & $\displaystyle E_4\,\frac{101 j- 3\cdot 20^3}{(j-20^3)^2}$ \\
\hline
$\Psi(2^{-6} f_{11})$ & $\displaystyle E_4\,\frac{43 j- 6\cdot 32^3}{(j+32^3)^2}$ \\
\hline
$\Psi(48^{-2}f_3\kern1.5pt|\kern1.5pt T_4)$ & $\displaystyle E_4\,\frac{14 j+ 18\cdot 15^3}{(j-2\cdot 30^3)^2}$ \\
\hline
$\Psi(108^{-1}f_4\kern1.5pt|\kern1.5pt (1-\tfrac{1}{2}T_4))$ & $\displaystyle E_4\,\frac{611 j+ 404\cdot 33^3}{(j- 66^3)^2}$ \\
\hline
$\Psi(3^{-3} f_7\kern1.5pt|\kern1.5pt (2-\tfrac{1}{2}T_4))$ & $\displaystyle E_4\,\frac{82451 j+ 5272\cdot 255^3}{(j- 255^3)^2}$ \\
\hline
$\Psi(12^{-3} f_{19})$ & $\displaystyle E_4\,\frac{25 j- 2\cdot 96^3}{(j+96^3)^2}$ \\
\hline
$\Psi(12^{-3} f_{43})$ & $\displaystyle E_4\,\frac{11329 j- 578\cdot 960^3}{(j+960^3)^2}$ \\
\hline
$\Psi(12^{-3} f_{67})$ & $\displaystyle E_4\,\frac{1221961 j- 49442\cdot 5280^3}{(j+5280^3)^2}$ \\
\hline
$\Psi(12^{-3} f_{163})$ & $\displaystyle E_4\,\frac{908855380249 j- 23238932978\cdot 640320^3}{(j+640320^3)^2}$ \\
\hline
$\Psi(15^{-1} f_{15})$ & $\displaystyle E_4\,\frac{785 j^3-15219684 j^2+28709816985 j+837864\cdot 495^3}{(j^2 + 191025 j - 495^3)^2}$ \\
\hline
$\Psi(-80^{-1} f_{20})$ & $\displaystyle E_4\,\frac{733 j^3+72767680 j^2-984198615040 j+123\cdot 20^3\cdot 880^3}{(j^2-158\cdot 20^3 j-880^3)^2}$ \\
\hline
$\Psi(- f_{23})$ & $\displaystyle E_4\,\frac{P_{23}(j)}{(j^3+27934\cdot 5^3 j^2-329683\cdot 5^6 j+187^3\cdot 5^9)^2}$ \\
\hline
\end{tabular}
\hbox to\hsize{where\hfill}
\begin{align*}
P_{23}(j)&=141826 j^5-286458244\cdot 5^3 j^4+5214621227\cdot 5^6 j^3+3414887843776\cdot 5^9 j^2
\\ &\qquad
-47816219216827\cdot 5^{12} j+4378632\cdot 187^3\cdot 5^{15}
\end{align*}
\label{tab1}
\end{table}
\section{Introduction} \label{sec:intro}
Image annotation is a major challenge in computer vision, which aims to describe pictures with multiple semantic concepts taken from large vocabularies (also known as keywords, classes or categories) \cite{Bernard2003, Makadia2008,boujemaa2004visual,vo2012transductive,sahbi2013cnrs}. This problem is motivated by different visual recognition tasks (including multimedia information access, human computer interaction, robotics, autonomous driving, etc.) and also by the exponential growth of visual content on the web and social media. Image annotation is also very challenging due to the difficulty of characterizing and discriminating a large number of highly variable concepts (including abstract notions) which are widely used in visual recognition.\\
\indent Existing annotation methods are usually based on machine learning; first, they learn intricate relationships between concepts and image features (e.g.~color, shape, deep features, etc.) using different classifiers (SVMs, \cite{Goh2005, Qi2007,sahbi2002coarse}, nearest neighbors~\cite{Guillaumin2009, Verma2012}, etc.), then they assign concepts to new images depending on the scores of the learned classifiers. Among these classification techniques, SVMs are known to be effective in image annotation \cite{Villegas2013}, but their success is highly dependent on the choice of kernels \cite{ShaweTaylor2004}. The latter, defined as symmetric and positive semi-definite functions, should provide high values when images share similar semantic contents and vice-versa; among existing kernels, linear, polynomial, gaussian and histogram intersection are particularly popular. In addition to these widely used standard kernels, many existing algorithms have been proposed in the literature in order to combine and learn discriminant kernels (from these standard functions) that capture better semantic similarity such as multiple kernel learning \cite{Bach2004, Jiu2015, Jiu2016a} and additive kernels \cite{Vedaldi2012}. However, all these standard kernels and their combinations rely on the local visual content of images, which is clearly insufficient to capture their semantic especially when this content is highly variable. In this work, we focus on learning better kernels with a high discrimination power, by including {\it not only content but also context} which is also learned instead of being handcrafted. Moreover, the design principle of our method allows us to obtain the explicit form of the learned kernels and this makes their evaluation highly efficient, especially for large scale databases.
Considering a collection of images, each one seen as a constellation of cells (grid of patches \cite{thiemert2005applying}) with each cell encoded with appearance features (e.g., Bag-of-Words, deep VGG-Net features, etc.), our goal is to learn accurate kernel functions that measure similarity between images by comparing their constellations of cells. The design principle of our method consists in learning appropriate kernels (for cell comparison) that return relevant values; this is achieved not only by comparing content of cells, but also their spatial geometric context which provides complementary clues for classification (see also \cite{Belongie01shapematching, Hecvpr2004, Sahbi2011, Jiuprl2014,li2011superpixel}). In our proposed solution (in section \ref{sec:unslearning}), two cells\footnote{belonging to two images.} with similar visual contents should be declared as different if their contexts (i.e., their spatial neighborhoods) are different while two cells with different visual contents could be declared as similar if their contexts are similar. In this paper, we implement this principle by designing context-aware kernels as the solution of an optimization problem which includes three criteria: a fidelity term that measures the similarity between cells using local content only, a context term that strengthens (or weakens) the similarity between cells depending on their spatial neighbors and finally a regularization criterion that controls the smoothness of the learned kernel values. The initial formulation of this optimization problem (see also \cite{Sahbi2013icvs, Sahbi2011, Sahbi2014,sahbi2010context,yuan2012mid}), considers handcrafted context in kernel design which makes it possible to clearly enhance the performance of image annotation. As an extension of this work, we consider instead a {\it learned context} as a part of our kernel design framework
using deep learning \cite{Hinton2006, Benjio2007, Bengio2009, Krizhevsky2012}. Indeed, as an alternative to the fixed context, we consider a deep network (DN) whose weights correspond to the learned context; high weights in this network characterize the most discriminant parts of the learned context that favor particular spatial geometric relationships while small weights attenuate other relationships. Context update (i.e., DN weights update) is achieved using an ``end-to-end'' framework that back-propagates the gradient of a regularized objective function (which seeks to reduce the classification error of the SVMs built on top of the learned context-aware kernels).
Note that context learning has recently attracted some attention for different tasks; for instance, 3D holistic scene understanding~\cite{ZhangBaiICCV2107}, scene parsing~\cite{HungTsaiICCV2107} and person re-identification~\cite{HungTsaiCVPR2107}, demonstrating that context is an important clue to improve performance. Our contribution is very different from these works as we consider deep context learning as a part of similarity (and kernel feature) design for 2D image annotation. Our work is rather more related to Convolutional Kernel Networks (CKN) \cite{Mairal2014} which learn kernel maps for gaussian functions using convolutional networks. However, our proposed contribution, in this paper, is conceptually different from CKN: on the one hand, our solution is meant to learn a more general class of kernels that capture context using DN. On the other hand, the particularity of our work w.r.t CKN (and also w.r.t \cite{Jiu2016b}) is to build deep kernel map networks while also {\it learning and optimizing context}.
The rest of this paper is organized as follows: first, we briefly remind in Section~\ref{sec:unslearning} our previous context-aware kernel map learning \cite{Sahbi2013icvs, Sahbi2015} and we introduce our new contribution in Section~\ref{sec:deepconstruction}: a deep network that allows us to design kernels and enables us to automatically learn better context. The experimental results on the ImageCLEF annotation benchmark are shown in Section~\ref{sec:experiments}, followed by conclusions in Section~\ref{concl}.
\section{Context-aware kernel maps}\label{sec:unslearning}
Let $\{{\cal I}_p\}_p$ be a collection of images and let ${\cal S}_p=\{ {\bf x}_1^p, \ldots, {\bf x}_n^p \}$ be a list of non-overlapping cells taken from a regular grid of ${\cal I}_p$; without a loss of generality, we assume that $n$ is constant for all images. We measure the similarity between any two given images ${\cal I}_p$ and ${\cal I}_q$ using the convolution kernel, which is defined as the sum of the similarities between all the pairs of cells in ${\cal S}_p$ and ${\cal S}_q$: ${\cal K}( {\cal S}_p, {\cal S}_q) = \sum_{i, j} \kappa ({\bf x}_i^p, {\bf x}_j^q)$. Here $\kappa ({\bf x}_i^p, {\bf x}_j^q)$ is a symmetric and positive semi-definite (p.s.d) function that returns the similarity between two cells. Resulting from the closure of the p.s.d with respect to the convolution, the latter is also positive semi-definite. In this definition, $\kappa$ usually corresponds to standard kernels (such as polynomial or gaussian) which only rely on the visual content of the cells; in other words, cells
are compared independently. As context may also carry out discriminant clues for classification, a {\it more relevant} kernel $\kappa$ should provide a high similarity between pairs of cells, not only when they share similar content, but also similar context.
\indent Following our previous work~\cite{Sahbi2013icvs, Sahbi2015}, our goal is to learn the kernel $\kappa$ (or equivalently its gram matrix $\mathbf{K}$) that returns similarity between all data in ${\cal X}=\cup_p {\cal S}_p$. The objective function, used to build ${\mathbf{K}}$, is defined as
\begin{equation}
\min_{\mathbf{K}} {\bf \textrm{tr}}(-\mathbf{K}\mathbf{S}') - \alpha \sum_{c=1}^C {\bf \textrm{tr}} (\mathbf{K} \mathbf{P}_c \mathbf{K}' \mathbf{P}_c^{'} ) + \frac{\beta}{2} ||\mathbf{K}||^2_2,
\label{equa:kernelfunction}
\end{equation}
\noindent here $\beta > 0$, $\alpha \geq 0$, $\kappa({\mathbf{x}, \mathbf{x}'})=\mathbf{K}_{\mathbf{x}, \mathbf{x}'}$ (with $\mathbf{K}_{\mathbf{x}, \mathbf{x}'}$ being an entry of $\mathbf{K}$), $\mathbf{S}$ is a (context-free) similarity matrix between data in $\cal X$, $'$ and ${\bf \textrm{tr}}$ denote matrix transpose and the trace operator respectively. The first term of Eq. (\ref{equa:kernelfunction}) seeks to maximize kernel values between any pair of cells ${\mathbf{x}, \mathbf{x}'}$ with a high similarity $\mathbf{S}_{\mathbf{x}, \mathbf{x}'}$ while the second term aims to maximize kernel values between cells whose neighbors are highly similar too (and vice-versa). Finally, the third term acts as a regularizer that also helps getting a smooth closed-form kernel solution (see details subsequently).
\noindent In the above objective function, the context of a cell refers to the set of its neighbors (see also Fig.~\ref{fig:imagegrids}); the intrinsic adjacency matrices $\{\mathbf{P}_c\}_c$ model this context between cells. More precisely, for a given $c \in \{ 1, \dots, C\}$ ($C=4$ in practice), the matrix $\mathbf{P}_c$ captures a particular geometric relationship between neighboring cells; for instance when $c=1$, $\mathbf{P}_{c,\mathbf{x}, \mathbf{x}'}\leftarrow 1$ iff $\mathbf{x}$, $\mathbf{x}'$ belong to the same image and $\mathbf{x}'$ corresponds to one of the {\it left} neighbors of $\mathbf{x}$ in the regular grid, otherwise $\mathbf{P}_{c,\mathbf{x}, \mathbf{x}'} \leftarrow 0$.
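For concreteness, here is how the matrices $\{\mathbf{P}_c\}_c$ can be instantiated (a minimal sketch of ours in Python/NumPy, restricted to the radius-one neighbourhood with $C=4$ directions; the row-major cell indexing is our convention):
\begin{verbatim}
import numpy as np

def adjacency_matrices(h, w):
    # one n x n matrix per direction (left, right, up, down)
    # for the cells of an h x w grid, indexed row-major
    n = h * w
    offsets = [(0, -1), (0, 1), (-1, 0), (1, 0)]
    P = [np.zeros((n, n)) for _ in offsets]
    for r in range(h):
        for col in range(w):
            for c, (dr, dc) in enumerate(offsets):
                rr, cc = r + dr, col + dc
                if 0 <= rr < h and 0 <= cc < w:
                    P[c][r * w + col, rr * w + cc] = 1.0
    return P
\end{verbatim}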
\begin{figure}[tbp]
\centering
\includegraphics[width=0.3\linewidth]{figures/figure1}
\caption{\small This figure shows the handcrafted neighborhood system used in order to define the context-aware kernels. Red cross stands for a particular cell in the regular grid, and colored circles stand for its 4 different types of neighbors. Here we consider a disk with 4 sectors.}
\label{fig:imagegrids}
\end{figure}
\indent We can show that the optimization problem in Eq.~(\ref{equa:kernelfunction}) admits the following closed-form kernel solution (details about the proof are omitted in this paper)
\begin{equation}
\mathbf{K}^{(t+1)} = \mathbf{S} + \gamma \sum_{c=1}^C \mathbf{P}_c \mathbf{K}^{(t)} \mathbf{P}_c^{'},
\label{equa:kernelsolution}
\end{equation}
\noindent here $\mathbf{K}^{(0)}=\mathbf{S}$, $\gamma = \alpha / \beta$ with $\alpha / \beta$ chosen in order to guarantee the convergence of the closed-form solution (\ref{equa:kernelsolution}) to a fixed-point (which is always observed when $\gamma$ is not very large\footnote{i.e., $\gamma$ is chosen to make the norm of the right-hand side term in Eq. \ref{equa:kernelsolution} not very large compared to the left-hand side term.}). Note that when $\mathbf{S}$ is p.s.d, all the resulting closed-form solutions (for different $t$) will also be p.s.d and this results from the closure of the positive semi-definiteness w.r.t. different operations including sum and product. Hence, this kernel solution can be expressed as an inner product $\mathbf{K}^{(t+1)} =\mathbf{\Phi}^{(t+1)'} \mathbf{\Phi}^{(t+1)}$ involving maps that take data from their input space to a high dimensional space; one may show that this map $\mathbf{\Phi}^{(t+1)}$ is explicitly and recursively given as
\begin{equation}
\mathbf{\Phi}^{(t+1)} = \Big( \mathbf{\Phi}^{'(0)} \ \ \gamma^{\frac{1}{2}} \mathbf{P}_1 \mathbf{\Phi}^{'(t)} \ \ldots \ \ \gamma^{\frac{1}{2}} \mathbf{P}_C \mathbf{\Phi}^{'(t)} \Big)'.
\label{equa:mapsolution}
\end{equation}
\noindent Here ${\bf \Phi}^{(0)}_\x= {\bf V}_\x$ (with ${\bf S}_{\x,\x'}={\bf V}'_\x {\bf V}_{\x'}$) and the subscript in ${\bf {\Phi}}_{\x}$ denotes the restriction of the map ${\bf {\Phi}}$ to a point $\x$. According to Eq.~(\ref{equa:mapsolution}), it is clear that the mapping ${\bf \Phi}^{(t+1)}$ is not equal to ${\bf \Phi}^{(t)}$ since the dimensionality of the map increases w.r.t. $t$. However, the convergence of the inner product ${\bf \Phi'}^{(t+1)}{\bf \Phi}^{(t+1)}$ to a fixed-point is again guaranteed when $\gamma$ is bounded, i.e., the gram matrices of the designed kernel maps are convergent. \\
\noindent Resulting from the definition of the adjacency matrices $\{{\bf P}_c\}_c$ and from Eq.~(\ref{equa:mapsolution}), it is easy to see that building kernel maps could be achieved image by image with obviously the same number of iterations, i.e., the evaluation of kernel maps of a given image is independent of the others and hence not transductive.
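As an illustration, the per-image evaluation of Eq.~(\ref{equa:mapsolution}) can be sketched as follows (Python/NumPy; stacking the per-cell maps as rows of a matrix is our convention, and \texttt{phi0}, \texttt{P}, \texttt{gamma}, \texttt{T} are our names):
\begin{verbatim}
import numpy as np

def context_aware_maps(phi0, P, gamma, T):
    # phi0: n x d matrix whose rows are the initial maps of the
    # n cells; P: list of n x n adjacency matrices {P_c};
    # gamma: context weight; T: number of iterations
    phi = phi0
    for _ in range(T):
        blocks = [phi0] + [np.sqrt(gamma) * Pc @ phi for Pc in P]
        phi = np.concatenate(blocks, axis=1)  # dimension grows with t
    return phi

# the image representation (the map phi_K of the convolution
# kernel) is then the pooled vector phi.sum(axis=0)
\end{verbatim}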
\begin{figure}[htbp]
\centering
\scalebox{0.45}{\input{deep.pdf_t}}
\caption{\small This figure shows that the learned kernel maps can be seen as multi-layered maps of increasing dimensionality corresponding to larger and more influencing geometrical context.}\label{deep1}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.35]{figures/figure2.pdf}
\end{center}
\caption{\small \textcolor{black}{This figure shows the whole architecture and flowchart of our deep context learning. Given an input image (divided into $8\times10$ cells; as also shown in experiments), cells are first described using the pre-trained VGG CNN. Afterwards, the context-based kernel map of a given cell (for instance cell 0), at a given iteration, is obtained by combining the kernel maps of its neighboring cells (namely cells 1, 2, 3 and 4), obtained at the previous iteration, as shown in the red dashed rectangle and also in Eq.~(\ref{equa:mapsolution}). At the end of this iterative process, the kernel maps of all the cells are pooled together in order to obtain the global representation of the input image, prior to achieve its classification. Note that the network shown in the red rectangle, together with the pooling layer, correspond to the DN shown in Fig. \ref{deep1}.}} \label{fig:flowchart}
\end{figure*}
\section{Context Learning with Deep Networks} \label{sec:deepconstruction}
The framework presented in Section~\ref{sec:unslearning} is able to design more effective kernels (compared to context-free ones) by taking into account the context. However, this framework is totally unsupervised, so it does not benefit from existing labeled data in order to produce more discriminating kernels; furthermore, the context (and its matrices $\{\mathbf{P}_c\}_c$) is completely handcrafted. Considering these issues, we propose in this section a method that considers context learning as a part of kernel design using deep networks.
\subsection{From context-aware kernels to deep networks}
Considering any two samples $\x$, $\x'$ in $\cal X$ and also the kernel definitions in Eqs.~(\ref{equa:kernelsolution}) and (\ref{equa:mapsolution}), one may rewrite ${{\bf K}}_{\x,\x'}^{(t)}$ as
\begin{equation*}
\begin{array}{l}
{{\bf K}}_{\x,\x'}^{(t)}=\underbrace{\phi_t(\phi_{t-1}(...\phi_1(\phi_0(\x))))}_{t \ \textrm{times}} . \underbrace{\phi_t(\phi_{t-1}(...\phi_1(\phi_0(\x'))))}_{t \ \textrm{times}},
\end{array}
\end{equation*}
\noindent with $\phi_{t}(\x)={\bf \Phi}_\x^{(t)}$. Following the definition of the convolution kernel ${\cal K}$, at iteration $t$, we have
\begin{equation}
\begin{array}{ll}\label{sck}
\displaystyle {\cal K}( {\cal S}_p, {\cal S}_q) & = \displaystyle \sum_{\x \in {\cal S}_p} \sum_{\x' \in {\cal S}_q} {{\bf K}}_{\x,\x'}^{(t)} \\
& = \displaystyle \sum_{\x \in {\cal S}_p} \underbrace{\phi_t(\phi_{t-1}(...\phi_1(\phi_0(\x))))}_{t \ \textrm{times}} \\
& \displaystyle \ \ \ \ \ \ \ \ \ \ . \sum_{\x' \in {\cal S}_q} \underbrace{\phi_t(\phi_{t-1}(...\phi_1(\phi_0(\x'))))}_{t \ \textrm{times}}.
\end{array}
\end{equation}
Each side of the convolution kernel, in Eq. (\ref{sck}), corresponds to multi-layered feature maps of increasing dimensionality that capture larger and more influencing context, related to points and their geometrical configurations (see \textcolor{black}{Fig.~\ref{deep1}}). We observe that the architecture in this figure is very similar to the one widely used in deep learning, with some differences residing in the definition of the weights. Indeed, the latter correspond to the values of matrices $\{{\bf P}^{(t)}_c\}_{t,c}$ for different iterations and the weights in the final layer correspond to pooling (summation) weights; the latter are equal to $1$ following Eq.~(\ref{sck}). In contrast to many usual deep learning algorithms, the weights of this {deep network} are easy to interpret.
\noindent Note that the {\it appropriate} number of layers in this network is defined by the asymptotic behavior of our context-aware kernel; indeed, the number of iterations necessary in order to obtain convergence is exactly the number of layers in this deep network. In practice, convergence happens in less than five iterations \cite{Sahbi2015}. Now, considering ${\bf \tilde{K}}$ as the limit of Eq.~(\ref{equa:kernelsolution}) (for some $t \leadsto T$) and ${\bf \tilde{\Phi}}$ as the underlying kernel map (using Eq.~(\ref{equa:mapsolution})), the new form of the convolution kernel $\cal K$ between two sets of points ${\cal S}_p$, ${\cal S}_q$ can be rewritten ${\cal K}({\cal S}_p,{\cal S}_q) = \sum_{(\x,\x') \in {\cal S}_p \times {\cal S}_q } \langle {\bf \tilde{\Phi}}_{\x},{\bf \tilde{\Phi}}_{\x'} \rangle$. It is easy to see that $\cal K$ is a p.s.d kernel as it can be rewritten as a dot product involving finite dimensional and explicit maps, i.e., ${\cal
K}({\cal S}_p,{\cal S}_q) = \langle \phi_{\cal K}({\cal S}_p), \phi_{\cal K}({\cal S}_q) \rangle$, with $\phi_{\cal K}({\cal S}_p) = \sum_{\x \in {\cal S}_p} {\bf \tilde{\Phi}}_{\x}$, which clearly shows that each constellation of points ${\cal S}_p$ can be {\it deeply} represented with the explicit kernel map $\phi_{\cal K}({\cal S}_p)$ (i.e., the output of the deep network in the \textcolor{black}{Fig.~\ref{deep1}}).
It is worth noticing that our context learning framework, in spite of being targeted to kernel design, can be achieved ``end-to-end'' thanks to the existence of the maps $\phi_0(.)$ for many kernels (e.g. linear, histogram intersection, polynomial kernel and other complex kernels); see for instance \cite{Sahbi2013icvs}. Therefore, we adopt back-propagation in order to update the weights of the network in Fig.~\ref{deep1} as described subsequently (see also the flowchart of the whole framework in Fig.~\ref{fig:flowchart}); interestingly, this framework could also benefit from pre-trained convolutional neural networks (CNN) as input to the kernel map $\phi_0(.)$ as also shown through experiments in Section~\ref{sec:experiments}.
\begin{table*}[t]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{|c|c|cc|cc|}
\hline
\multirow{2}{*}{$r$} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{BoW features} & \multicolumn{2}{c|}{VGG-CNN features} \\
\cline{3-6}
& & Linear kernel & HI kernel & Linear kernel& HI kernel \\
\hline
& Context-free & 39.7/24.4/46.6 & 41.3/25.1/49.5 & 45.3/30.8/56.4 & 45.5/30.1/57.9 \\
\hline
\multirow{2}{*}{1} & fixed Context-aware (\cite{Sahbi2013icvs}) & 40.6/24.6/48.3 & 42.6/26.3/50.5 & 45.8/31.2/57.6 & 46.4/30.7/58.5 \\
& learned Context-aware & 42.7/26.4/50.5 & 45.2/26.4/53.9 & 47.5/32.7/58.7 & \textbf{48.8/32.7/59.9} \\
\hline
\multirow{2}{*}{5} & fixed Context-aware (\cite{Sahbi2013icvs}) & 41.0/25.3/48.9 & 42.9/26.7/51.3 & 46.8/31.8/57.9 & 46.9/31.1/58.7 \\
& learned Context-aware & \textbf{44.0/26.6/52.0} & \textbf{45.6/26.2/54.0} & \textbf{47.9/33.2/58.8} & 48.4/32.7/59.5 \\
\hline
\end{tabular}
}
\vspace{0.1cm}
\caption{\small \textcolor{black}{The performance (in $\%$) of different kernels on ImageCLEF database. A triple $\cdot/\cdot/\cdot$ stands for MFS/MFC/MAP. In these experiments $r$ corresponds to the radius of the disk that supports context.} \label{tab:imageclefresults}}
\end{table*}
\subsection{Context learning}
Considering a multi-class problem with $K$ classes and $N$ training images $\{{\cal I}_p\}_{p=1}^N$, we define $\Y_k^p$ as the membership of the image ${\cal I}_p$ to the class $k \in \{1,\dots,K\}$; here $\Y_k^p=+1$ iff ${\cal I}_p$ belongs to class $k$ and $\Y_k^p=-1$ otherwise. We consider in this section, a dynamic and discriminative update of matrices $\{{\bf P}_c^{(t)}\}_{c,t}$; first, we plug the explicit form of $\tilde{{\bf \Phi}}$ into $\phi_{\cal K}({\cal S}_p)$, then we optimize the following objective function (w.r.t. $\{{\bf P}_c^{(t)}\}_{c,t}$ with $t=0,\dots,T-1$ and the SVM parameters denoted $\{w_k\}_k$)\footnote{For ease of writing and unless confusing, the superscript $t$ is sometimes omitted in the notation.}. This objective function includes the following regularization term and hinge loss
\begin{equation}\label{eq8}
\min_{{\bf P}_c,w_k} \displaystyle \sum_{k=1}^K \frac{1}{2} ||w_k||^2 + C_{k} \sum_{p=1}^{N} \max(0, 1-\Y_k^p w_{k}' \phi_{\cal K}({\cal S}_p)).
\end{equation}
\noindent As the optimization of the above objective function w.r.t. the two sets of parameters together is difficult, we adopt an alternating optimization procedure; this is iteratively achieved by fixing one of the two sets of variables and solving w.r.t. the other. When fixing $\{{\bf P}_c^{(t)}\}_{c,t}$, the goal is to learn the parameters $\{w_k\}_k$ of $K$ binary SVM classifiers (denoted $\{f_k\}_{k=1}^K$). Since the kernel map $\phi_{\cal K}({\cal S}_p)$ of a given image ${\cal I}_p$ is explicitly given as the output of the deep network in Fig.~\ref{deep1}, each classifier $f_k$ can be written as $f_k({\cal I}_p) = w_k' \phi_{\cal K}({\cal S}_p)$, where $w_k$ corresponds to the parameters of the SVM. The optimization of Eq.~(\ref{eq8}) w.r.t. these parameters $\{w_k\}_k$ is achieved using LIBSVM \cite{Chang2011}.
When fixing $\{w_k\}_k$, the optimization of Eq.~(\ref{eq8}) is achieved (w.r.t. $\{{\bf P}_c^{(t)}\}_{c,t}$) using gradient descent. Let ${E}$ denote the objective function in Eq.~(\ref{eq8}), the gradient of ${E}$ w.r.t. the final kernel map $\phi_{\cal K}$ (i.e., output of the DN) is
\begin{equation}
\frac{\partial E}{\partial \phi_{\cal K}} = - \sum_{p=1}^N \sum_{k=1}^K C_k \Y_k^p w_k \mathbbm{1}_{\{1-\Y_k^p w_{k}' \phi_{\cal K}({\cal S}_p)>0\}},
\label{equa:gradientl1}
\end{equation}
\noindent where $\mathbbm{1}_{\{\cdot\}}$ is the indicator function, equal to $1$ when the hinge loss is active, i.e., when $1-\Y_k^p w_{k}' \phi_{\cal K}({\cal S}_p)>0$, and $0$ otherwise. From Eq.~(\ref{equa:mapsolution}), it can be seen that the gradient of ${E}$ w.r.t. $\{{\bf P}_c^{(T-1)}\}_{c,T-1}$ can be computed as the sum of gradients over all the cells in the regular grid. This gradient is backward propagated to the previous layers using the chain rule \cite{LeCun98} in order to (i) obtain gradients w.r.t. the adjacency matrices $\{{\bf P}_c^{(t)}\}_{c,t}$, for $t=T-1,\dots, 0$ (weights of the deep network in Fig.~\ref{deep1}), and (ii) update these weights using gradient descent. \\
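Continuing the toy sketch above, the gradient step (back-propagation of Eq.~(\ref{equa:gradientl1}) through the $T$ layers, followed by one descent update) can be written as:
\begin{verbatim}
# Analytic gradient of E w.r.t. P for the toy propagation rule above,
# obtained by the chain rule, layer by layer (C_k = C for all classes).
def grad_P(P, W, C=1.0):
    Phis, phi = forward(P)
    margin = 1.0 - Y * (phi @ W.T)            # (N, K) hinge margins
    dphi = -C * ((margin > 0) * Y) @ W        # dE/dphi_K, active terms only
    G = np.repeat(dphi[:, None, :], n_cells, axis=1) / n_cells  # pooling
    dP = np.zeros_like(P)
    for t in range(T, 0, -1):                 # back-propagate through layers
        dP += gamma * np.einsum('nid,njd->ij', G, Phis[t - 1])
        G = gamma * np.einsum('ji,njd->nid', P, G)
    return dP

P -= 1e-3 * grad_P(P, W)                      # one gradient-descent update
\end{verbatim}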
\indent Finally, the two steps of this iterative process are repeated until convergence is reached, i.e., when the values of these two sets of parameters remain unchanged. In practice, this convergence is observed in less than 100 iterations.
\section{Experimental Validation} \label{sec:experiments}
We evaluate the performance of our context and kernel learning method on the challenging ImageCLEF benchmark~\cite{Villegas2013}. The targeted task is image annotation: given an image, the goal is to predict the list of concepts present in that image. These concepts are declared as present iff the underlying SVM scores are positive.
The dataset of this benchmark is large and includes more than 250k images; we only use the dev set (which includes 1,000 images) as the ground truth is released for this subset only. Images of this subset belong to 95 categories (concepts); as these concepts are not exclusive, each image may be annotated with one or multiple concepts. We randomly split the dev set into two subsets (for training and testing). We process each image in the training and testing folds by re-scaling its dimensions to $400 \times 500$ pixels\footnote{This corresponds to the median dimension of images in the dev set.} and partitioning its spatial extent into $8 \times 10$ cells; each cell includes $50 \times 50$ pixels. \textcolor{black}{Two different representations are adopted in order to describe each cell: i) Bag-of-Words (BoW) histogram (of 500 dimensions) over SIFT features, where the codewords of this histogram are pre-trained offline using \textit{K}-means; and ii) Deep features based on the pre-trained VGG model on the ImageNet database (``imagenet-vgg-m-1024'') \cite{Chatfield14}. Our purpose here is to investigate the adaptability of the proposed framework both with handcrafted and pre-trained CNN features.}
\begin{table}[htb]
\centering
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{c|ccc}
\hline
Kernel & MFS & MFC & MAP \\
\hline
\hline
GMKL(\cite{Varma2009}) & 41.3 & 24.3 & 49.1 \\
2LMKL(\cite{Zhuang2011a}) & 45.0 & 25.8 & 54.0 \\
LDMKL (\cite{Jiutip2017}) & 47.8 & 30.0 & 58.6 \\
The proposed context-aware & 48.8 & 32.7 & 59.9 \\
\hline
\end{tabular}
}
\vspace{0.1cm}
\caption{\small \textcolor{black}{Comparison of performance (in \%) between different kernel-learning methods.} \label{tab:comparsionresults}}
\end{table}
\begin{figure*}[thbp]
\begin{center}
\includegraphics[angle=0,width=1\linewidth]{figures/figure3.pdf}
\end{center}
\caption{\small Examples of annotation results using context-free kernels (``CF''), context-aware kernels with fixed and learned context (resp. ``FCA'' and ``LCA''). ``GT'' refers to ground truth annotation while the stars stand for the presence of a given concept in the test image.} \label{fig:annotationexamples}
\end{figure*}
\begin{figure}[thbp]
\begin{center}
\includegraphics[scale=0.33]{figures/figure4.pdf}
\end{center}
\caption{\small This figure shows original images (left), handcrafted (middle) and learned contexts (right) between $8\times10$ cells (with $r=1$). For each cell, the importance of its context is shown with colored connections to its neighboring cells; warm colors stand for important relationships while cold colors stand for less important ones. In the handcrafted context, the adjacency matrices $\{\mathbf{P}_{c}\}_c$ are set to be row-stochastic; this is obtained by normalizing each row (cell) in these matrices by the number of its spatial neighbors, which is why cells in the four corners have larger values (better viewed/zoomed in PDF).} \label{fig:contextexamples}
\end{figure}
We consider four types of geometric relationships in order to build our context-aware kernel maps (see again Fig. \ref{fig:imagegrids}). As the dimensionality of the DN (in Fig. \ref{deep1}) increases rapidly with the layers, we adopt in our experiments an architecture depth that provides a reasonable balance between dimensionality, performance and convergence of the kernel map evaluation\footnote{Again as mentioned earlier in Section~\ref{sec:unslearning}, $\gamma$ and the number of iterations $T$ should be set appropriately in order to obtain convergence of kernel evaluation; in practice, with $\gamma=1$ convergence is well approached with only $T=3$ iterations of kernel map evaluation ($T+1$ is then the chosen number of layers in our DN).}; hence, we chose an architecture with 3+1 layers where the last one corresponds to pooling (again following Fig.~\ref{deep1}). In these experiments, we consider linear and histogram intersection maps as context-free kernel-map initializations of our DN; note that {the explicit maps of the linear and histogram intersection kernels correspond to identity and decimal-to-unary mappings respectively (see~\cite{Sahbi2015} for more details about these maps).}
\indent Using the setting above, we learn multi-class SVMs for different concepts (using the training fold) on top of the learned context-aware kernel maps, and we evaluate their performances on the testing fold. These performances are measured using the F-scores (harmonic means of recall and precision) at the sample and concept levels (denoted MFS and MFC respectively) as well as the mean average precision (MAP); higher values of these measures imply better performances.
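For reference, a minimal sketch of these measures is given below; it follows our reading of the standard sample-wise/concept-wise F-scores and of the mean average precision, and all names are ours:
\begin{verbatim}
# MFS / MFC / MAP for multi-label annotation; Y_true, Y_pred are binary
# (N, K) arrays and scores holds the real-valued SVM outputs.
import numpy as np
from sklearn.metrics import average_precision_score

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def mf(Y_true, Y_pred, axis):
    tp = ((Y_true == 1) & (Y_pred == 1)).sum(axis)
    fp = ((Y_true == 0) & (Y_pred == 1)).sum(axis)
    fn = ((Y_true == 1) & (Y_pred == 0)).sum(axis)
    return np.mean([f1(*v) for v in zip(tp, fp, fn)])

def evaluate(Y_true, Y_pred, scores):
    mfs = mf(Y_true, Y_pred, axis=1)   # F-score per sample, then averaged
    mfc = mf(Y_true, Y_pred, axis=0)   # F-score per concept, then averaged
    mean_ap = np.mean([average_precision_score(Y_true[:, k], scores[:, k])
                       for k in range(Y_true.shape[1])])
    return mfs, mfc, mean_ap
\end{verbatim}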
Tab.~\ref{tab:imageclefresults} shows a comparison of our context-aware kernels against context-free ones, with handcrafted and learned contexts on top of the BoW features \textcolor{black}{as well as the deep VGG features}. From these results the gain obtained with the learned context is clear both w.r.t. handcrafted context and when no context is used. \textcolor{black}{Tab.~\ref{tab:comparsionresults} shows performance comparison of our learned context-aware kernel against other kernel learning methods on the ImageCLEF database; from these results a clear gain is obtained when learning context.} Fig.~\ref{fig:annotationexamples} shows a sample of images and their annotation results using context-free and context-aware kernels with fixed (handcrafted) and learned context. Finally, Fig.~\ref{fig:contextexamples} is a visualization of handcrafted and learned contexts superimposed on three images from ImageCLEF; when context is learned, it is clear that some spatial cell context relationships are {\it amplified} while others are {\it attenuated} and this reflects their importance in kernel learning and image classification.
\section{Conclusion}\label{concl}
We introduced in this paper a novel method that considers context learning as a part of kernel design. The method is based on a particular deep network architecture which corresponds to the map of our context-aware kernel solution. In this contribution, relevant context is selected by learning the weights of a DN; indeed, high weights correspond to the relevant parts of the context while small weights are related to the irrelevant parts. Experiments conducted on the challenging ImageCLEF benchmark show a clear gain of SVMs trained on top of kernel maps (with learned context) w.r.t. SVMs trained on top of kernel maps (with handcrafted context) and context-free kernels.
\section*{ACKNOWLEDGMENT}
{This work was partially supported by a grant from the research agency ANR (Agence Nationale de la Recherche) under the MLVIS project (Machine Learning for Visual Annotation in Social-media, ANR-11-BS02-0017). Mingyuan Jiu was with T\'el\'ecom ParisTech as a postdoc, working with Hichem Sahbi, when this work was discussed and part of it achieved and written (under the MLVIS project).}
\section{Expressing the Hamiltonians with Casimir operators}
In this section, we express the linear operators,
\begin{gather}
\label{eq:7}
H^{\mathrm{W}}=\frac{1}{n_{\ensuremath{\mathrm{L}}}n_{\ensuremath{\mathrm{R}}}}\sum_{i\in L,j\in R}F_{ij},\ \text{and}\\
H^{\mathrm{I}}=-\frac{1}{n_{\ensuremath{\mathrm{L}}}n_{\ensuremath{\mathrm{R}}}}\sum_{i\in L, j\in R}F^{t_{R}}_{ij},
\end{gather}
with tensor product representations of the quadratic Casimir operator of \(\ensuremath{\mathrm{SU}}(d)\).
First, we must formulate the quadratic Casimir operator itself. We use the generators,
\begin{equation}
\label{eq:8}
S^{\alpha\beta}=|\beta \rangle \langle\alpha|-\frac{1}{d}\delta_{\beta\alpha}\ensuremath{1\!\!1},\quad \alpha,\beta=1, \ldots ,d,
\end{equation}
where \(\{ |\alpha \rangle\}_{\alpha=1}^{d}\) is a fixed, orthonormal basis of \(\mathbb{C}^{d}\). Since \(\sum_{\alpha=1}^d S^{\alpha\alpha}=0\), only \(d-1\) of the \(d\) diagonal generators are independent. With our choice of generators, the quadratic Casimir element is expressed as,
\begin{equation}
\label{eq:9}
C=\sum_{\alpha,\beta=1}^{d}S^{\alpha\beta}S^{\beta\alpha}=\frac{d^{2}-1}{d}\ensuremath{1\!\!1},
\end{equation}
where the last equality is due to Schur's lemma.
The \(N\)-fold tensor product of the defining representation of \(\ensuremath{\mathrm{SU}}(d)\) maps \(C\) into,
\begin{equation}
\label{eq:10}
C_{n}=\sum_{\alpha,\beta=1}^d \left( \sum_{i=1}^n S_{i}^{\alpha\beta} \right)\left( \sum_{j=1}^n S^{\beta\alpha}_{j} \right)=2 \sum_{\alpha,\beta=1}^d \sum_{\substack{i,j=1\\i<j}}^n S_{i}^{\alpha\beta}S_{j}^{\beta\alpha}+n\frac{d^{2}-1}{d}\ensuremath{1\!\!1},
\end{equation}
where the lower indices \(i,j\) denote which tensor components the generators act on.
The two-fold tensor product representation of \(\ensuremath{\mathrm{SU}}(d)\) decomposes into two irreps, \((2,0)\) and \((1,1)\). As the flip operator \(F_{ij}\) is invariant to this representation, it can be expressed as a linear combination of two independent, \(\ensuremath{\mathrm{SU}}(d)\) invariant two-particle operators. For this role, we choose the identity and \(\sum_{\alpha,\beta=1}^d S_{i}^{\alpha\beta}S_{j}^{\beta\alpha}\),
\begin{equation}
\label{eq:11}
F_{ij}=\sum_{\alpha,\beta=1}^d S_{i}^{\alpha\beta}S_{j}^{\beta\alpha}+\frac{\ensuremath{1\!\!1}}{d}.
\end{equation}
According to Eqs.~\eqref{eq:10} and~\eqref{eq:11}, a permutation-symmetric linear combination of flip operators involving \(n\) tensor components, such as the ones appearing in Eq.~\eqref{eq:7}, is a linear combination of \(C_{n}\) and the identity. Making use of this, we re-express \(H^{\mathrm{W}}\) as,
\begin{equation}
\label{eq:12}
H^{\mathrm{W}}=\frac{1}{2n_{\ensuremath{\mathrm{L}}}n_{\ensuremath{\mathrm{R}}}}\left( C_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}-C_{\ensuremath{\mathrm{L}}}-C_{\ensuremath{\mathrm{R}}}\right)+\frac{\ensuremath{1\!\!1}}{d},
\end{equation}
where \(C_{\ensuremath{\mathrm{L}}}, C_{\ensuremath{\mathrm{R}}}\) and \(C_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\) denote \(C\) in the \(n_{\ensuremath{\mathrm{L}}}\), \(n_{\ensuremath{\mathrm{R}}}\) and \(n_{\ensuremath{\mathrm{L}}}+n_{\ensuremath{\mathrm{R}}}\)-fold product representations acting on the left, right subsystems and the entire system respectively.
When in a similar fashion, we express \(H^{\mathrm{I}}\) with the generators, the partial transposition of the flip operators leads to terms proportional to \(S_{i}^{\alpha\beta}S_{j}^{\alpha\beta}\) appearing in the expression. We can relate the second, transposed generator with the dual of the defining representation, which is generated by \(-\{S^{\beta\alpha}\}_{\alpha,\beta=1}^{d}\). Indeed, terms of the form we are looking for appear when we express the Casimir operator in a tensor product of the defining representation and its dual. Let \(\tilde{C}_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\) denote \(C\) in the \(n_{\ensuremath{\mathrm{L}}}+n_{\ensuremath{\mathrm{R}}}\)-fold tensor product representation in which the defining representation is used on the left subsystems, and its dual on the right subsystems,
\begin{equation}
\label{eq:12+}
\tilde{C}_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}=\sum_{\alpha,\beta=1}^{d}\left( \sum_{k\in \ensuremath{\mathrm{L}}}S^{\alpha\beta}_{k}-\sum_{l\in \ensuremath{\mathrm{R}}} S^{\beta\alpha}_{l} \right)\left( \sum_{k\in \ensuremath{\mathrm{L}}} S^{\beta\alpha}_{k}-\sum_{l\in \ensuremath{\mathrm{R}}} S^{\alpha\beta}_{l} \right)=
-2\sum_{\alpha,\beta=1}^{d}\sum_{k\in \ensuremath{\mathrm{L}}}\sum_{l\in \ensuremath{\mathrm{R}}} S_{k}^{\alpha\beta}S_{l}^{\alpha\beta}+C_{\ensuremath{\mathrm{L}}}+C_{\ensuremath{\mathrm{R}}}.
\end{equation}
Using \(\tilde{C}_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\), we can express \(H^{\mathrm{I}}\) as,
\begin{equation}\label{eq:16} H^{\mathrm{I}}=\frac{1}{2n_{\ensuremath{\mathrm{L}}}n_{\ensuremath{\mathrm{R}}}}\left(C_{\ensuremath{\mathrm{L}}}+C_{\ensuremath{\mathrm{R}}}-\tilde{C}_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\right)+\frac{\ensuremath{1\!\!1}}{d}.
\end{equation}
\section{The minimum product diagram}
In this section, we determine the irreducible constituent, \(\hat{\lambda}^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}(\lambda^{(\ensuremath{\mathrm{L}})}, \lambda^{(\ensuremath{\mathrm{R}})})\), of the product of two arbitrary, fixed \(\ensuremath{\mathrm{SU}}(d)\) irreps, \(\lambda^{(\ensuremath{\mathrm{L}})}\otimes \lambda^{(\ensuremath{\mathrm{R}})}\), that is the minimum w.r.t.~the dominance order of partitions.
In order to find the partition we are looking for, we first must look at the problem from a different angle, by fixing a different pair of \(\ensuremath{\mathrm{SU}}(d)\) irreps in the product. For a pair of irreps labeled by \(\lambda^{(\ensuremath{\mathrm{L}})}\vdash_{d}n_{\ensuremath{\mathrm{L}}}\) and \(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{L}}}+n_{\ensuremath{\mathrm{R}}}\) for which \(\lambda^{(\ensuremath{\mathrm{L}})}_{i}\le \lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}_{i}\) for
all \(i=1,2, \ldots,d\), we define \(LR(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}})})\) as the set of all \(\ensuremath{\mathrm{SU}}(d)\) irreps labeled by \(\lambda^{(\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{R}}}\) for which \(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}\) is an irreducible constituent of \(\lambda^{(\ensuremath{\mathrm{L}})}\otimes\lambda^{(\ensuremath{\mathrm{R}})}\). We further define the difference partition \(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}-\lambda^{(\ensuremath{\mathrm{L}})}\vdash_{d}n_{\ensuremath{\mathrm{R}}}\) as the integer partition created from the pointwise difference of the two partitions by sorting its elements into decreasing order.
\begin{equation}
\label{eq:2}
\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}-\lambda^{(\ensuremath{\mathrm{L}})}=\text{sort}\{\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}_{i}-\lambda^{(\ensuremath{\mathrm{L}})}_{i}\}_{i=1}^{d}.
\end{equation}
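For illustration, the difference partition of Eq.~\eqref{eq:2} is straightforward to compute; a minimal Python sketch, with partitions given as length-\(d\) non-increasing integer lists, reads:
\begin{verbatim}
# Difference partition: pointwise difference, sorted into decreasing order.
def diff_partition(lam_LR, lam_L):
    assert all(a >= b for a, b in zip(lam_LR, lam_L))
    return sorted((a - b for a, b in zip(lam_LR, lam_L)), reverse=True)

# example with d = 3: diff_partition([5, 3, 2], [2, 2, 1]) -> [3, 1, 1]
\end{verbatim}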
By using the symmetry properties of the skew-diagrams that appear in the Littlewood-Richardson algorithm describing the fusion rules of \(\ensuremath{\mathrm{SU}}(d)\), Azenhas showed in Ref.~[] that \(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}-\lambda^{(\ensuremath{\mathrm{L}})}\in LR(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}})})\); furthermore, for all \(\lambda^{(\ensuremath{\mathrm{R}})}\in LR(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}})})\),
\begin{equation}
\label{eq:1}
\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}-\lambda^{(\ensuremath{\mathrm{L}})}\unlhd \lambda^{(\ensuremath{\mathrm{R}})}.
\end{equation}
In order to obtain the partition \(\hat{\lambda}^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}(\lambda^{(\ensuremath{\mathrm{L}})}, \lambda^{(\ensuremath{\mathrm{R}})})\) from this result, we make use of the dual symmetry of the fusion rules of \(\ensuremath{\mathrm{SU}}(d)\). The irrep decomposition of \(\lambda^{(\ensuremath{\mathrm{L}})}\otimes \lambda^{(\ensuremath{\mathrm{R}})}\) is described by the multiplicities \(m(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})})\) in,
\begin{equation}
\label{eq:3}
\lambda^{(\ensuremath{\mathrm{L}})}\otimes \lambda^{(\ensuremath{\mathrm{R}})}=\bigoplus_{\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}}m(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})})\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}.
\end{equation}
These multiplicities are invariant to exchanging \(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}\) and \(\lambda^{(\ensuremath{\mathrm{R}})}\), then replacing both representations with their respective duals. This symmetry is evident if one expresses the multiplicities using the inner product of characters,
\begin{equation}
\label{eq:4}
\begin{split}
m(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}) = \langle \chi_{\lambda^{(\ensuremath{\mathrm{L}})}}\chi_{\lambda^{(\ensuremath{\mathrm{R}})}}, \chi_{\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}} \rangle= \int_{\ensuremath{\mathrm{SU}}(d)} \chi_{\lambda^{(\ensuremath{\mathrm{L}})}}(u)\chi_{\lambda^{(\ensuremath{\mathrm{R}})}}(u)\overline{\chi}_{\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}}(u)du\\
=\int_{\ensuremath{\mathrm{SU}}(d)} \chi_{\lambda^{(\ensuremath{\mathrm{L}})}}(u)\chi_{\overline{\lambda}^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}}(u)\overline{\chi}_{\overline{\lambda}^{(\ensuremath{\mathrm{R}})}}{(u)}du=m(\lambda^{(\ensuremath{\mathrm{L}})},\overline{\lambda}^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})},\overline{\lambda}^{(\ensuremath{\mathrm{R}})}),
\end{split}
\end{equation}
where by \(\chi_{\lambda}\) we denote the character of an irrep \(\lambda\), and we integrate over the invariant Haar measure of \(\ensuremath{\mathrm{SU}}(d)\).
The exchange of \(\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}\) and \(\lambda^{(\ensuremath{\mathrm{R}})}\) in Eq.~\eqref{eq:4} gives us a way to obtain the minimum product diagram from Eq.~\eqref{eq:1}, but first, we must make sure that the operation of mapping an irrep to its dual does not influence dominance order. If \(\lambda,\mu\vdash_{d} n\), \(\lambda\unrhd \mu\) and we take the \(M\)-dual of both partitions, so as to make sure that dominance order is well defined between the results, then for all \(1\le l\le d\),
\begin{equation}
\label{eq:7+}
n-(d-l)M+\sum_{i=1}^{d-l}\overline{\mu}_{i}=
n-\sum_{i=l+1}^d\mu_{i} =\sum_{i=1}^l \mu_{i}\le \sum_{i=1}^l \lambda_{i}=n-\sum_{i=l+1}^d \lambda_{i}=n-(d-l)M+\sum_{i=1}^{d-l}\overline{\lambda}_{i};
\end{equation}
therefore, \(\overline{\lambda}\unrhd \overline{\mu}\).
Combining Eqs.~\eqref{eq:1} and~\eqref{eq:4}, we obtain
\begin{equation}
\label{eq:5}
\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}-\lambda^{(\ensuremath{\mathrm{L}})} =\min\{\lambda^{(\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{R}}} : m(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})})>0\}=\min\{\lambda^{(\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{R}}} : m({\lambda^{(\ensuremath{\mathrm{L}})},\overline{\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}}, \overline{\lambda^{(\ensuremath{\mathrm{R}})}}})>0\}.
\end{equation}
As the dual does not influence dominance order, relabeling the partitions gives us the result we were looking for,
\begin{equation}
\label{eq:6}
\hat{\lambda}^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}(\lambda^{(\ensuremath{\mathrm{L}})}, \lambda^{(\ensuremath{\mathrm{R}})})=\min\{\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{L}}}+n_{\ensuremath{\mathrm{R}}}:m(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})},\lambda^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})})>0\}=\overline{(\overline{\lambda}^{(\ensuremath{\mathrm{R}})}-\lambda^{(\ensuremath{\mathrm{L}})})}=\text{sort}{\{\lambda^{(\ensuremath{\mathrm{L}})}_i+\lambda^{(\ensuremath{\mathrm{R}})}_{d-i+1}\}}_{i=1}^{d},
\end{equation}
where \(n'_{\ensuremath{\mathrm{R}}}=Md-n_{\ensuremath{\mathrm{R}}}\) and \(\overline{\lambda}^{(\ensuremath{\mathrm{R}})}\vdash_{d}n'_{\ensuremath{\mathrm{R}}}\).
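A small computational sketch of the closed form in Eq.~\eqref{eq:6}, with the same partition convention as above, is:
\begin{verbatim}
# Dominance-order minimum irreducible constituent of the product
# lambda^(L) (x) lambda^(R): sort{ lambda^(L)_i + lambda^(R)_{d-i+1} }.
def min_product_diagram(lam_L, lam_R):
    d = len(lam_L)
    return sorted((lam_L[i] + lam_R[d - 1 - i] for i in range(d)),
                  reverse=True)

# example for SU(3): min_product_diagram([3, 1, 0], [2, 2, 0]) -> [3, 3, 2]
\end{verbatim}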
\section{The ground state problem of Werner states}
In this section, we show that the pair of partitions that minimizes the eigenvalue of \(H^{\mathrm{W}}\) from Eq.~\eqref{eq:12} corresponding to the triple of \(\ensuremath{\mathrm{SU}}(d)\) irreps \((\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})}, \hat{\lambda}^{(\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}})}(\lambda^{(\ensuremath{\mathrm{L}})}, \lambda^{(\ensuremath{\mathrm{R}})}))\),
\begin{equation}
\label{eq:20}
E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})})= \frac{1}{n_{\ensuremath{\mathrm{L}}}n_{\ensuremath{\mathrm{R}}}}\sum_{i=1}^d \left[ \lambda^{(\ensuremath{\mathrm{L}})}_{i}\lambda^{(\ensuremath{\mathrm{R}})}_{d-i+1}-i(\text{sort}{({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})}_{d-j+1}\}}_{j=1}^{d})}_{i}\right.- \left.(\lambda^{(\ensuremath{\mathrm{L}})}_{i}+\lambda^{(\ensuremath{\mathrm{R}})}_{i})) \right],
\end{equation}
is composed of the dominance order minima of all integer \(\hat{d}\)-partitions of \(n_{\ensuremath{\mathrm{L}}}\) and \((d-\hat{d})\)-partitions of \(n_{\ensuremath{\mathrm{R}}}\) for some \(\hat{d}=1,2, \ldots, d-1\). That is,
\begin{equation}
\label{eq:24}
\min_{\lambda^{(\ensuremath{\mathrm{L}})}\vdash_{d}n_{\ensuremath{\mathrm{L}}}, \lambda^{(\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{R}}}}E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})})=E^{\mathrm{W}}(\hat{\lambda}^{(\ensuremath{\mathrm{L}})}(\hat{d}),\hat{\lambda}^{(\ensuremath{\mathrm{R}})}(\hat{d})),
\end{equation}
where,
\begin{equation}
\label{eq:25}
\begin{split}
&\hat{\lambda}^{(\ensuremath{\mathrm{L}})}(\hat{d})=\left(\left\lceil\frac{n_{\ensuremath{\mathrm{L}}}}{\hat{d}}\right\rceil, \left\lceil\frac{n_{\ensuremath{\mathrm{L}}}}{\hat{d}}\right\rceil, \ldots , \overset{n_{\ensuremath{\mathrm{L}}}\bmod \hat{d}\text{-th}}{\left\lceil\frac{n_{\ensuremath{\mathrm{L}}}}{\hat{d}}\right\rceil}\right.,
\left.\left\lfloor \frac{n_{\ensuremath{\mathrm{L}}}}{\hat{d}} \right\rfloor,\left\lfloor \frac{n_{\ensuremath{\mathrm{L}}}}{\hat{d}} \right\rfloor, \ldots , \overset{\hat{d}\text{-th}}{\left\lfloor \frac{n_{\ensuremath{\mathrm{L}}}}{\hat{d}} \right\rfloor}\right),\\
&\hat{\lambda}^{(\ensuremath{\mathrm{R}})}(\hat{d})=\left(\left\lceil\frac{n_{\ensuremath{\mathrm{R}}}}{d-\hat{d}}\right\rceil,\left\lceil\frac{n_{\ensuremath{\mathrm{R}}}}{d-\hat{d}}\right\rceil, \ldots , \overset{n_{\ensuremath{\mathrm{R}}}\bmod (d-\hat{d})\text{-th}}{\left\lceil\frac{n_{\ensuremath{\mathrm{R}}}}{d-\hat{d}}\right\rceil}\right., \left.\left\lfloor \frac{n_{\ensuremath{\mathrm{R}}}}{d-\hat{d}} \right\rfloor,\left\lfloor \frac{n_{\ensuremath{\mathrm{R}}}}{d-\hat{d}} \right\rfloor, \ldots,\overset{d-\hat{d}\text{-th}}{\left\lfloor \frac{n_{\ensuremath{\mathrm{R}}}}{d-\hat{d}} \right\rfloor}\right).
\end{split}
\end{equation}
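Before turning to the proof, we note that Eqs.~\eqref{eq:20}--\eqref{eq:25} are easy to check numerically for small \(d\), \(n_{\ensuremath{\mathrm{L}}}\) and \(n_{\ensuremath{\mathrm{R}}}\); the following sketch literally transcribes the eigenvalue formula and the balanced partitions (all names are ours):
\begin{verbatim}
# E^W for a pair of partitions (literal transcription of the formula above)
# and the most balanced partition with a prescribed number of rows.
def E_W(lam_L, lam_R):
    d, nL, nR = len(lam_L), sum(lam_L), sum(lam_R)
    srt = sorted((lam_L[j] + lam_R[d - 1 - j] for j in range(d)),
                 reverse=True)
    s = sum(lam_L[i] * lam_R[d - 1 - i]
            - (i + 1) * (srt[i] - (lam_L[i] + lam_R[i])) for i in range(d))
    return s / (nL * nR)

def balanced(n, rows, d):
    q, r = divmod(n, rows)
    return [q + 1] * r + [q] * (rows - r) + [0] * (d - rows)

# example: balanced(7, 3, 4) -> [3, 2, 2, 0]
\end{verbatim}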
To prove our statement, we will create a path of diagram pairs starting from any \(\lambda^{(\ensuremath{\mathrm{L}})}\vdash_{d} n_{\ensuremath{\mathrm{L}}}\) and \(\lambda^{(\ensuremath{\mathrm{R}})}\vdash_{d}n_{\ensuremath{\mathrm{R}}}\), that terminates in one of the pairs in Eq.~\eqref{eq:25}, along which the value of \(E^{\mathrm{W}}\) is weakly decreasing.
We denote the reversal of a partition \(\lambda\) with \(\lambda^{r}\), i.e., \(\lambda^{r}_{i}=\lambda_{d-i+1}\); furthermore, we denote the number of overlapping rows between \(\lambda^{(\ensuremath{\mathrm{L}})}\) and \(\lambda^{(\ensuremath{\mathrm{R}})r}\) by \(d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\), and the numbers of non-overlapping rows of \(\lambda^{(\ensuremath{\mathrm{L}})}\) and \(\lambda^{(\ensuremath{\mathrm{R}})r}\) by \(d_{\ensuremath{\mathrm{L}}}\) and \(d_{\ensuremath{\mathrm{R}}}\) respectively. Therefore, the number of rows of \(\lambda^{(\ensuremath{\mathrm{L}})}\) is \(d_{\ensuremath{\mathrm{L}}}+d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\le d\) and that of \(\lambda^{(\ensuremath{\mathrm{R}})}\) is \(d_{\ensuremath{\mathrm{R}}}+d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\le d\), where \(d_{\ensuremath{\mathrm{L}}}+d_{\ensuremath{\mathrm{R}}}+d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}=d\) and \(\lambda^{(\ensuremath{\mathrm{L}})}_{i}\lambda^{(\ensuremath{\mathrm{R}})r}_{i}\neq 0\) iff \(d_{\ensuremath{\mathrm{L}}}+1\le i\le d_{\ensuremath{\mathrm{L}}}+d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\), see Figure~\ref{fig:wernersolution1}. Moreover, we denote the number of boxes in the non-overlapping parts by \(n'_{\ensuremath{\mathrm{L}}}=\sum_{i=1}^{d_{\ensuremath{\mathrm{L}}}}\lambda^{(\ensuremath{\mathrm{L}})}_{i}\) and \(n'_{\ensuremath{\mathrm{R}}}=\sum_{i=1}^{d_{\ensuremath{\mathrm{R}}}}\lambda^{(\ensuremath{\mathrm{R}})}_{i}\). We build the path from two different types of steps. In the first one, we transform the non-overlapping parts of the two diagrams into a standard form. In the second one, we take a pair of diagrams for which the non-overlapping parts are in the standard form, and move boxes from the overlapping into the non-overlapping parts.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{wernersolution1.pdf}
\caption{\label{fig:wernersolution1} An example for a transformation that decreases the non-overlapping part of \(\lambda^{(\ensuremath{\mathrm{L}})}\) in dominance order. The distance between the two affected rows in \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\) is always larger than or equal to that in \(\lambda^{(\ensuremath{\mathrm{L}})}\), as the sorting can shuffle in additional rows between the ones present in \(\lambda^{(\ensuremath{\mathrm{L}})}\). These are indicated with a darker color.}
\end{figure}
Consider the transformation that consists of moving a single box downward within the non-overlapping part of \(\lambda^{(\ensuremath{\mathrm{L}})}\) in a way that results in a valid integer partition. That is, we transform \(\lambda^{(\ensuremath{\mathrm{L}})}\) into \(\lambda^{(\ensuremath{\mathrm{L}})'}\), where \(\lambda^{(\ensuremath{\mathrm{L}})'}_{i}=\lambda^{(\ensuremath{\mathrm{L}})}_{i}-1\), \(\lambda^{(\ensuremath{\mathrm{L}})'}_{j}=\lambda^{(\ensuremath{\mathrm{L}})}_{j}+1\) for some \(1\le i<j\le d_{\ensuremath{\mathrm{L}}}\) and all other rows of \(\lambda^{(\ensuremath{\mathrm{L}})}\) stay unchanged, see Figure~\ref{fig:wernersolution1}. An important detail to note here is that we can always choose the order of the rows of \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\) in a way that makes it invariant to the transformation. If there is ambiguity in the order, i.e., \({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d}\) has multiple elements equal to \(\lambda^{(\ensuremath{\mathrm{L}})}_{i}\) or \(\lambda^{(\ensuremath{\mathrm{L}})}_{j}\), we choose \(\lambda^{(\ensuremath{\mathrm{L}})}_{i}\) to be the bottommost, and \(\lambda^{(\ensuremath{\mathrm{L}})}_{j}\) to be the topmost of them.
Let us compute the change in \(E^{\mathrm{W}}\) after transforming \(\lambda^{(\ensuremath{\mathrm{L}})}\) into \(\lambda^{(\ensuremath{\mathrm{L}})'}\). Since the overlapping part stays unchanged, the quadratic term of Eq.~\eqref{eq:20} has no contribution. The contribution of the remaining terms depends on the distance between the two changed rows in \(\lambda^{(\ensuremath{\mathrm{L}})}\), and in \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\),
\begin{multline}\label{eq:26}
E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})'},\lambda^{(\ensuremath{\mathrm{R}})})-E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})})=\\
\frac{1}{n_{\ensuremath{\mathrm{L}}}n_{\ensuremath{\mathrm{R}}}}\left[j-i-\right. \left.\left(\text{first}(\lambda^{(\ensuremath{\mathrm{L}})}_{j},\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})) - \text{last}(\lambda^{(\ensuremath{\mathrm{L}})}_{i},\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d}))\right)\right]\le0,
\end{multline}
where \(\text{first}(x,y)\) and \(\text{last}(x,y)\) denote the first and last positions of \(x\) in the sequence \(y\). The distance between the last occurrence of \(\lambda^{(\ensuremath{\mathrm{L}})}_{i}\) and the first occurrence of \(\lambda^{(\ensuremath{\mathrm{L}})}_{j}\) in \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\) must be at least \(j-i\), since \(\lambda^{(\ensuremath{\mathrm{L}})}_{i+1},\lambda^{(\ensuremath{\mathrm{L}})}_{i+2}, \ldots ,\lambda^{(\ensuremath{\mathrm{L}})}_{j-1}\) all lie between the two elements. Additionally, since \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})=\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})r}_{j}+\lambda^{(\ensuremath{\mathrm{R}})}_{j}\}}_{j=1}^{d})\), performing the analog of this transformation on \(\lambda^{(\ensuremath{\mathrm{R}})}\) weakly decreases \(E^{\mathrm{W}}\) as well.
Repeating the transformation just described on both \(\lambda^{(\ensuremath{\mathrm{L}})}\) and \(\lambda^{(\ensuremath{\mathrm{R}})}\) creates sequences of diagrams that are strictly decreasing in dominance order. Continuing until there is no legal way left to move boxes downwards within the non-overlapping parts of \(\lambda^{(\ensuremath{\mathrm{L}})}\) and \(\lambda^{(\ensuremath{\mathrm{R}})}\) transforms these parts into the minimum elements of \(\{\lambda\vdash_{d_{\ensuremath{\mathrm{L}}}}n'_{\ensuremath{\mathrm{L}}}\}\) and \(\{\lambda\vdash_{d_{\ensuremath{\mathrm{R}}}}n'_{\ensuremath{\mathrm{R}}}\}\) respectively, i.e., the end results of the repeated transformations are,
\begin{equation}\label{eq:27}
\begin{split}
&\hat{\lambda}^{(\ensuremath{\mathrm{L}})''}=\left(\left\lceil\frac{n'_{\ensuremath{\mathrm{L}}}}{d_{\ensuremath{\mathrm{L}}}}\right\rceil, \left\lceil\frac{n'_{\ensuremath{\mathrm{L}}}}{d_{\ensuremath{\mathrm{L}}}}\right\rceil, \ldots , \overset{n'_{\ensuremath{\mathrm{L}}}\bmod d_{\ensuremath{\mathrm{L}}}\text{-th}}{\left\lceil\frac{n'_{\ensuremath{\mathrm{L}}}}{d_{\ensuremath{\mathrm{L}}}}\right\rceil}\right.,
\left.\left\lfloor \frac{n'_{\ensuremath{\mathrm{L}}}}{d_{\ensuremath{\mathrm{L}}}} \right\rfloor,\left\lfloor \frac{n'_{\ensuremath{\mathrm{L}}}}{d_\ensuremath{\mathrm{L}}} \right\rfloor, \ldots , \overset{d_\ensuremath{\mathrm{L}}\text{-th}}{\left\lfloor \frac{n'_{\ensuremath{\mathrm{L}}}}{d_\ensuremath{\mathrm{L}}} \right\rfloor}, \lambda^{(\ensuremath{\mathrm{L}})}_{d_{\ensuremath{\mathrm{L}}}+1}, \lambda^{(\ensuremath{\mathrm{L}})}_{d_{\ensuremath{\mathrm{L}}}+2}, \ldots \lambda^{(\ensuremath{\mathrm{L}})}_{d}\right),\\
&\hat{\lambda}^{(\ensuremath{\mathrm{R}})''}=\left(\left\lceil\frac{n'_{\ensuremath{\mathrm{R}}}}{d_{\ensuremath{\mathrm{R}}}}\right\rceil,\left\lceil\frac{n'_{\ensuremath{\mathrm{R}}}}{d_{\ensuremath{\mathrm{R}}}}\right\rceil, \ldots , \overset{n'_{\ensuremath{\mathrm{R}}}\bmod d_{\ensuremath{\mathrm{R}}}\text{-th}}{\left\lceil\frac{n'_{\ensuremath{\mathrm{R}}}}{d_{\ensuremath{\mathrm{R}}}}\right\rceil}\right., \left.\left\lfloor \frac{n'_{\ensuremath{\mathrm{R}}}}{d_{\ensuremath{\mathrm{R}}}} \right\rfloor,\left\lfloor \frac{n'_{\ensuremath{\mathrm{R}}}}{d_{\ensuremath{\mathrm{R}}}} \right\rfloor, \ldots,\overset{d_{\ensuremath{\mathrm{R}}}\text{-th}}{\left\lfloor \frac{n'_{\ensuremath{\mathrm{R}}}}{d_{\ensuremath{\mathrm{R}}}} \right\rfloor}, \lambda^{(\ensuremath{\mathrm{R}})}_{d_{\ensuremath{\mathrm{R}}}+1}, \lambda^{(\ensuremath{\mathrm{R}})}_{d_{\ensuremath{\mathrm{R}}}+2}, \ldots , \lambda^{(\ensuremath{\mathrm{R}})}_{d}\right).
\end{split}
\end{equation}
In this way, we are able to transform any pair of diagrams into this standard form without increasing \(E^{\mathrm{W}}\).
We define a second type of transformation that acts on pairs of diagrams, \(\lambda^{(\ensuremath{\mathrm{L}})}, \lambda^{(\ensuremath{\mathrm{R}})}\) of the form described in Eq.~\eqref{eq:27}; we further assume that \(d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}>0\). Consider the transformation that takes a single box from the bottommost overlapping row of \(\lambda^{(\ensuremath{\mathrm{L}})}\), and attaches it to the non-overlapping part in the way that makes the resulting diagram the smallest w.r.t.\ dominance order. I.e., we transform \(\lambda^{(\ensuremath{\mathrm{L}})}\) to \(\lambda^{(\ensuremath{\mathrm{L}})'}\) where \(\lambda^{(\ensuremath{\mathrm{L}})'}_{d_{\ensuremath{\mathrm{L}}}+d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}}=\lambda^{(\ensuremath{\mathrm{L}})}_{d_{\ensuremath{\mathrm{L}}}+d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}}-1\), \(\lambda^{(\ensuremath{\mathrm{L}})'}_{n'_{\ensuremath{\mathrm{L}}}\bmod d_{\ensuremath{\mathrm{L}}}+1}=\lambda^{(\ensuremath{\mathrm{L}})}_{n'_{\ensuremath{\mathrm{L}}}\bmod d_{\ensuremath{\mathrm{L}}}+1}+1\), and all other rows stay unchanged, see Figure~\ref{fig:wernersolution2}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.56\textwidth]{wernersolution2.pdf}
\caption{\label{fig:wernersolution2} A transformation of a pair of diagrams of the standard form described by Eq.~\eqref{eq:27}, that moves a box from the bottom row of \(\lambda^{(\ensuremath{\mathrm{L}})}\) into the non-overlapping part. In the diagram \(\lambda^{(\ensuremath{\mathrm{L}})}\), \(d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}-1\) overlapping rows are between the original and new positions of the box, while in \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\), these overlapping rows are not necessarily between the two positions.}
\end{figure}
When our transformation moves the box downwards in \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\), the contribution of every term in \(E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})'},\lambda^{(\ensuremath{\mathrm{R}})})-E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})})\) is non-positive; therefore, we need only be concerned with the cases in which the box moves upwards. Without loss of generality, we can assume that \(n'_{\ensuremath{\mathrm{L}}}/d_{\ensuremath{\mathrm{L}}}\le n'_{\ensuremath{\mathrm{R}}}/d_{\ensuremath{\mathrm{R}}}\), thus, we move the box into the top row of the bottommost rectangular section of \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\). In the contrary case, we simply apply the transformation to \(\lambda^{(\ensuremath{\mathrm{R}})}\) instead of \(\lambda^{(\ensuremath{\mathrm{L}})}\). This means that the only arrangements in which the box moves upwards are those where it is taken from one of the rows that are shuffled below all the non-overlapping rows. Let us assume that we take the box from the \(x\)-th row below the bottommost non-overlapping row, i.e., it moves \(x+d_{\ensuremath{\mathrm{L}}}-n'_{\ensuremath{\mathrm{L}}}\bmod d_{\ensuremath{\mathrm{L}}}-1\) rows upward in \(\text{sort}({\{\lambda^{(\ensuremath{\mathrm{L}})}_{j}+\lambda^{(\ensuremath{\mathrm{R}})r}_{j}\}}_{j=1}^{d})\), and \(d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}+d_{\ensuremath{\mathrm{L}}}-n'_{\ensuremath{\mathrm{L}}}\bmod d_{\ensuremath{\mathrm{L}}}-1\) upwards in \(\lambda^{(\ensuremath{\mathrm{L}})}\). This way, the change in \(E^{\mathrm{W}}\) is,
\begin{equation}\label{eq:28}
E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})'},\lambda^{(\ensuremath{\mathrm{R}})})-E^{\mathrm{W}}(\lambda^{(\ensuremath{\mathrm{L}})},\lambda^{(\ensuremath{\mathrm{R}})})=-\lambda^{(\ensuremath{\mathrm{R}})}_{d_{\ensuremath{\mathrm{R}}}+1}+x-d_{\ensuremath{\mathrm{L}}\ensuremath{\mathrm{R}}}\le0.
\end{equation}
Starting from any pair of diagrams, by first rearranging the non-overlapping rows into the standard form in Eq.~\eqref{eq:27}, then moving the boxes from the overlapping rows into the non-overlapping ones with the method just described, we eventually reach one of the diagrams in Eq.~\eqref{eq:25} without increasing the value of \(E^{\mathrm{W}}\).
\section{Introduction \label{introduction}}
The B[e] stars form a heterogeneous group composed of objects at different evolutionary stages, but with many similar observational signatures characterising the so-called ``B[e] phenomenon'': simultaneous presence of hydrogen emission lines, low-excitation forbidden and permitted metallic lines in emission, and a significant infrared (IR) excess mainly caused by hot circumstellar dust. Another common property of B[e] stars is the presence of a non-spherical circumstellar environment (hereafter CSE; e.g. Zickgraf 2003).
Lamers et al. (1998) defined five B[e] sub-classes, one of which contains unclassified stars. Miroshnichenko (2007) proposed an additional B[e] sub-class (the FS CMa stars) to explain at least part of the unclassified B[e]-type stars as binaries at a phase of ongoing or recently ended rapid mass transfer and dust formation.
One of the B[e] sub-classes is composed of luminous ($\log (L_\star/L_{\sun}) \ga 4$) post-main sequence objects: the B[e] supergiant stars (hereafter sgB[e]). Previous spectroscopic and polarimetric observations of sgB[e] (e.g. Zickgraf et al. 1985; Magalh\~{a}es 1992) show that the wind of these massive and luminous stars is composed of two distinct components: (1) a wind of low density and high velocity and (2) a wind of high density and low velocity. Zickgraf et al. (1985) proposed a picture where the sgB[e] winds consist of a hot and fast radiation-driven polar wind and a slow, much cooler and denser (by a factor of $10^2$ or $10^3$) equatorial wind. This disc-like structure provides a natural explanation for the existence of dust around those objects, because the presence of dust requires regions of sufficiently high density and low kinetic temperatures. One possible explanation for this two-component CSE is that rapid rotation of the central star leads to the formation of an equatorial disc because of the combination of rotation-induced bi-stability and rotation-induced wind compression (Lamers \& Pauldrach 1991; Bjorkman 1998; Pelupessy et al. 2000). Other mechanisms, such as binarity, are evoked to explain the disc-like CSE, but in any case, rapid rotation seems to play a key role in the origin of these discs (e.g. Meynet \& Maeder 2006). Owing to their physical characteristics (fast rotation, disc-like CSE, high luminosity, evolutionary status), it has also been suggested (Vink et al. 2009) that sgB[e] might share evolutionary links with rapidly rotating O-stars and long-duration gamma-ray bursts (GRBs).
Because of the large distances of sgB[e] ($\ga 1$~kpc for the closest ones), the geometry and physical structure (e.g. density and temperature distribution) of their CSE could be directly probed only quite recently, thanks to modern high angular resolution (HAR) techniques. For example, Domiciano de Souza et al. (2008) used ESO's VLT/VISIR instrument to directly measure the typical size of the dusty CSE of the sgB[e] MWC\,300 from diffraction-limited mid-IR images. Among HAR techniques, optical/IR long baseline interferometry (OLBI) provides the highest resolving power, allowing the study of sgB[e] CSEs at angular resolutions $\sim1-10$ milliarcseconds (mas). For example, Millour et al. (2009) combined adaptive optics (VLT/NACO) and OLBI (VLTI/AMBER and VLTI/MIDI) to detect a companion around the sgB[e] HD~87643 and to perform an extensive study of the dusty CSE of the binary system.
In the examples above as well as in most works based on HAR data of sgB[e], two different strategies are commonly adopted to interpret the observations: (1) geometrical analytical modelling and (2) radiative transfer modelling (e.g. using the Monte Carlo approach). The geometrical analytical models have the advantage to be very fast to calculate, allowing a full model-fitting (for example a $\chi^2$ minimisation) and error estimate of the model parameters (mostly geometrical parameters of the CSE). However, these simple models do not give access to physical parameters of the targets such as temperature and density distributions, optical depths, etc. On the other hand, most radiative transfer models present a consistent description of the physical conditions of the CSE. However, because these models are quite complex, they demand a lot of computing time, which prevents one from exploring a large domain of the parameter space and also from obtaining a good estimate of the uncertainties on the fitted parameters. In this work we adopt a third approach for the data modelling, which tries to keep the advantages of the other approaches, without the drawbacks. To this aim we use our \textit{fast ray-tracing algorithm for circumstellar structures} (FRACS), which is based on a parametrised CSE combined to a simplified radiative transfer (no scattering). A complete description of this algorithm is given in Niccolini, Bendjoya \& Domiciano de Souza (2010; hereafter paper I).
In the present paper we apply FRACS to study the CSE of the Galactic sgB[e] CPD-57\degr\,2874 (also named \object{Hen 3-394}, \object{WRAY 15-535}) based on mid-IR spectro-interferometric observations performed with ESO's VLTI/MIDI beam-combiner instrument. Previous near- and mid-IR interferometric observations of CPD-57\degr\,2874 directly revealed an elongated CSE that is compatible with a disc-like structure formed by gas and dust (Domiciano de Souza et al. 2007; hereafter DS07). However, because only a limited number (four) of baselines was available and since the authors adopted simple analytical models, only geometrical parameters could be derived from this first analysis of CPD-57\degr\,2874. As shown below, the use of FRACS allowed us to confirm the previous results and, most importantly, to derive physical parameters for this Galactic sgB[e].
In Sect.~\ref{observations} we give the log of the VLTI/MIDI observations and describe the data reduction procedure. In Sect.~\ref{feros} we provide a new distance estimate of CPD-57\degr\,2874 obtained from spectroscopic observations with FEROS. A short reminder of the ray-tracing code FRACS is presented in Sect.~\ref{fracs}, followed by the results obtained from a model-fitting analysis of the VLTI/MIDI observations (Sect.~\ref{data_analysis}). A discussion of the results and the conclusions of this work are presented in Sects.~\ref{discussion} and \ref{conclusions}, respectively.
\section{VLTI/MIDI observations \label{observations}}
The interferometric observations of CPD-57\degr\,2874 were performed with MIDI, the mid-infrared 2-telescope beam-combiner instrument of ESO's VLTI (Leinert et al. 2004). All four 8.2~m unit telescopes (UTs) were used. The N-band spectrum as well as spectrally dispersed fringes have been recorded between $\lambda\simeq7.5\,\mu\mathrm{m}$ and $\lambda\simeq13.5\,\mu\mathrm{m}$ with a spectral resolution of $R \simeq 30$ using a prism. In total, $n_\mathrm{B}=10$ data sets have been obtained with projected baselines ($B_\mathrm{proj}$) ranging from $\simeq40$\,m to $\simeq130$\,m, and baseline position angles (PA) between $\simeq8\degr$ and $\simeq105\degr$ (from North to East). A summary of the VLTI/MIDI observations of CPD-57\degr\,2874 is given in Table~\ref{tab_midiobs}, and the corresponding uv-plane coverage is shown in Fig.~\ref{fig:Bproj_PA}.
\begin{table}[th]
\begin{minipage}[th]{\columnwidth}
\caption{Summary of VLTI/MIDI observations of CPD-57\degr\,2874: data set index, date, Coordinated Universal Time (UTC) of observation, baseline configuration, projected baseline length and position angle.}
\label{tab_midiobs}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{lcccrr}
\hline
\# & date & $t_\mathrm{obs}$ & UT config. &\multicolumn{1}{c} {$B_\mathrm{proj}$} & \multicolumn{1}{c} {PA} \\
& & (UTC) & & \multicolumn{1}{c} {(m)} & \multicolumn{1}{c} {(\degr)} \\
\hline
1 & 2004-11-01 & 08:51:55 & UT2-UT4 & 85.1 & 37.8 \\
2\footnote{Data previously used by DS07.} & 2004-12-29 & 05:52:12 & UT2-UT3 & 45.2 & 18.6 \\
3$^a$ & 2004-12-29 & 07:26:06 & UT2-UT3 & 43.9 & 35.1 \\
4$^a$ & 2004-12-31 & 06:04:03 & UT3-UT4 & 54.8 & 79.6 \\
5$^a$ & 2004-12-31 & 08:02:48 & UT3-UT4 & 60.9 & 104.8 \\
6 & 2006-11-09 & 07:15:36 & UT1-UT4 & 129.7 & 8.6 \\
7 & 2006-12-13 & 08:35:55 & UT1-UT3 & 93.8 & 28.6 \\
8 & 2006-12-31 & 08:13:05 & UT1-UT4 & 125.8 & 63.5 \\
9 & 2006-12-31 & 08:59:06 & UT1-UT4 & 121.8 & 72.5 \\
10 & 2007-01-05 & 08:35:09 & UT1-UT3 & 88.1 & 42.8 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{figure}[th]
\centering
\sidecaption
\includegraphics*[width=5.0cm,draft=false]{CPD-57_2874_Bproj_PA.eps}
\caption{uv-plane coverage: projected baselines (length and position
angle) for the VLTI/MIDI observations of CPD-57\degr\,2874 (further details are given in Table~\ref{tab_midiobs}).}
\label{fig:Bproj_PA}
\end{figure}
The MIDI data were reduced with the MIA+EWS data reduction package, which includes two different sub-packages: the MIA package developed at the Max-Planck-Institut f\"{u}r Astronomie, and
the EWS package developed at the Leiden Observatory\footnote{The MIA+EWS software package is available at http://www.mpia-hd.mpg.de/MIDISOFT/ and
http://www.strw.leidenuniv.nl/\textasciitilde nevec/MIDI/index.html.}.
While MIA is based on the power spectrum analysis, which measures the total power
of observed fringes (Leinert et al. 2004), EWS coherently adds the fringes after correction for optical path differences (instrumental as well as atmospheric delays) in each scan (Jaffe et al. 2004). The data reduction results obtained with the MIA and EWS packages agree well within the uncertainties.
The instrumental transfer function at each spectral channel was obtained from the observations of calibrator stars with known uniform-disc diameters ($\diameter_\mathrm{UD}$). The calibrators used in the data reduction and the adopted angular diameters and uncertainties are listed in Table~\ref{tab:calibrators}. The calibrated visibilities were calculated from the ratio of the targets' raw visibilities and the average transfer function derived from the calibrator measurements of the corresponding night (Fig.~\ref{fig:midi_vis_cpd}). The error of the calibrated MIDI visibilities is of the order of $\simeq 5\%-10\%$ and includes the raw visibility error as well as the error of the transfer function. The uncertainties on the calibrator angular diameter are negligible compared to the standard deviation of the transfer function. Usually, $\simeq3-5$ calibrator measurements per night were available (Table~\ref{tab:calibrators}). In the few cases where only one suitable calibrator observation was available we assumed a typical transfer function error of 5\% to estimate the errors on the final calibrated visibilities.
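For illustration, this calibration step can be sketched in a few lines of Python, assuming a uniform-disc model for the calibrators (function and variable names are ours):
\begin{verbatim}
# Calibrated visibility = raw science visibility / mean transfer function,
# with the transfer function estimated from uniform-disc (UD) calibrators.
import numpy as np
from scipy.special import j1

MAS = np.pi / (180.0 * 3600.0 * 1000.0)        # mas -> rad

def v_ud(diam_mas, B, lam):
    """UD visibility for a diameter (mas), baseline B (m), wavelength (m)."""
    x = np.pi * diam_mas * MAS * B / lam        # assumes B > 0
    return np.abs(2.0 * j1(x) / x)

def calibrate(v_raw_sci, v_raw_cals, diams_mas, B, lam):
    tf = [v / v_ud(d, B, lam) for v, d in zip(v_raw_cals, diams_mas)]
    return v_raw_sci / np.mean(tf, axis=0)
\end{verbatim}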
VLTI/MIDI also provides spectral fluxes of CPD-57\degr\,2874 in the N-band. On average, all fluxes are compatible within $\simeq10\%$ with those previously presented by DS07 (MIDI and ISO-SWS spectra). We here considered the VLTI/MIDI spectrum used by DS07 with an uncertainty of 20\% (Fig.~\ref{fig:midi_flux_cpd}). This larger uncertainty ensures a complete agreement between the observed MIDI and ISO fluxes at all wavelengths. We note that the mid-IR spectrum of CPD-57\degr\,2874 does not show any clear evidence of an important silicate feature around $10\,\,\mu\mathrm{m}$.
\begin{table}[t]
\caption{Observation log and angular diameters of calibrators (values from DS07) used to derive the calibrated N-band visibilities of CPD-57\degr\,2874.}
\label{tab:calibrators}
\centering
\begin{tabular}{ccccc}
\hline
date & $t_\mathrm{obs}$ & UT config. & Calibrator & $\diameter_\mathrm{UD}$ \\
& (UTC) & & & (mas) \\
2004-11-01 & 09:28:16 & UT2-UT4 & HD94510 & $2.16\pm0.11$ \\
2004-12-29 & 04:12:26 & UT2-UT3 & HD37160 & $2.08\pm0.20$ \\
2004-12-29 & 05:29:32 & UT2-UT3 & HD37160 & $2.08\pm0.20$ \\
2004-12-29 & 06:13:08 & UT2-UT3 & HD50778 & $3.95\pm0.22$ \\
2004-12-29 & 07:47:21 & UT2-UT3 & HD94510 & $2.16\pm0.11$ \\
2004-12-31 & 02:15:59 & UT3-UT4 & HD50778 & $3.95\pm0.22$ \\
2004-12-31 & 03:04:33 & UT3-UT4 & HD50778 & $3.95\pm0.22$ \\
2004-12-31 & 06:31:19 & UT3-UT4 & HD94510 & $2.16\pm0.11$ \\
2004-12-31 & 07:19:17 & UT3-UT4 & HD107446& $4.54\pm0.23$ \\
2004-12-31 & 07:41:22 & UT3-UT4 & HD94510 & $2.16\pm0.11$ \\
2006-11-09 & 07:45:21 & UT1-UT4 & HD94510 & $2.16\pm0.11$ \\
2006-12-13 & 02:04:34 & UT1-UT3 & HD23249 & $2.33\pm0.01$ \\
2006-12-13 & 08:10:27 & UT1-UT3 & HD94510 & $2.16\pm0.11$ \\
2006-12-13 & 08:54:41 & UT1-UT3 & HD94510 & $2.16\pm0.11$ \\
2006-12-31 & 06:58:10 & UT1-UT4 & HD94510 & $2.16\pm0.11$ \\
2006-12-31 & 07:44:55 & UT1-UT4 & HD94510 & $2.16\pm0.11$ \\
2006-12-31 & 08:37:22 & UT1-UT4 & HD94510 & $2.16\pm0.11$ \\
2007-01-05 & 08:08:04 & UT1-UT3 & HD94510 & $2.16\pm0.11$ \\
\hline
\end{tabular}
\end{table}
\section{FEROS observations and distance estimate \label{feros}}
In addition to our mid-IR interferometric observations, we obtained high-resolution optical spectra of CPD-57\degr\,2874. The spectra were recorded with the high-resolution Fiber-fed Extended Range Optical Spectrograph (FEROS), attached to the 2.2-m telescope at ESO in La Silla (Chile). FEROS is a bench-mounted Echelle spectrograph with fibers, covering a sky area of $\simeq2\arcsec\times2\arcsec$ and a wavelength range from 3600\,\AA \ to 9200\,\AA. Its spectral resolution is $R \simeq 55\,000$ (around 6000 \AA). We have adopted its complete automatic online reduction, which includes the heliocentric correction. The FEROS spectra were obtained on 2008 December 21. We recorded two exposures of 1000 seconds with S/N of $\sim60$ in the 5500\,\AA \ region.
These observations were used to estimate the distance of CPD-57\degr\,2874. A previous distance estimate of $d=2.5$~kpc has been proposed by McGregor et al. (1988), assuming that this star belongs to the Carina OB association.
From our FEROS high-resolution spectra it is possible to estimate the distance of CPD-57\degr\,2874, based on the statistical relation cited by Allen (1973). This relation uses the equivalent widths of interstellar \ion{Na}{I} lines. In our data, each \ion{Na}{I} line is composed of two absorption components. However, owing to the lack of data from different epochs, it is impossible to see any temporal changes, which would allow us to derive a possible circumstellar contamination. We have therefore assumed that both \ion{Na}{I} components are of interstellar origin. The measured equivalent widths are given in Table \ref{tab:lines}. Our estimated distance for CPD-57\degr\,2874 is $d=1.7$~kpc with an uncertainty of 0.7~kpc. This large error is firstly due to a possible contamination from the circumstellar emission component and the saturation of the absorption one, and secondly to a systematic error caused by the statistical relation used. Within the error bars, our distance estimate is roughly compatible with the result of McGregor et al. (1988). We have considered both distances in our analysis: 1.7 and 2.5~kpc.
\begin{table}[t!]
\caption{Equivalent widths (EW) of the \ion{Na}{I} absorption lines ($\lambda$$\lambda$ 5890\,\AA, 5896\,\AA), obtained from our FEROS data. The relative uncertainty of these measurements is about 20\%.}
\label{tab:lines}
\begin{center}
\begin{tabular}{ccccc}
\hline
Line & \multicolumn{2}{c}{5890\,\AA} & \multicolumn{2}{c}{5896\,\AA} \\
& abs. comp. 1 & abs. comp. 2 & abs. comp. 1 & abs. comp. 2 \\
\hline
EW (\AA) & 0.11 & 0.80 & 0.05 & 0.70 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Description of the model FRACS \label{fracs}}
Here we present a short description of our numerical model FRACS and the parametrisation adopted to describe a sgB[e]. A full description of FRACS is given in paper I.
FRACS is based on the ray-tracing technique using quadtree meshes for the grid and the full symmetries of the problem at hand to significantly decrease the computing time necessary to obtain monochromatic images (calculated with $300\times300$ pixels) within seconds. Complex visibilities and fluxes can be directly derived from these monochromatic images. FRACS neglects scattering, an approximation well suited to interpret spectro-interferometric observations in the thermal IR. Indeed, compared to absorption of light from dust, scattering can be neglected in the IR and beyond (paper I; Lamers \& Cassinelli 1999).
To analyse the VLTI/MIDI data of CPD-57\degr\,2874 we adopted the same parametrised description of a sgB[e] (central star and dusty CSE) as given in paper I. Below we summarise the main equations of this description in axis-symmetric spherical coordinates, and define the free parameters used in the model-fitting.
We assume the specific intensity from the central regions of the star to be a power-law with spectral index $\alpha$ and level $I^\mathrm{s}_{\lambda_0}$ at a fiducial wavelength $\lambda_0=10\,\mu\mathrm{m}$:
\begin{eqnarray}
\label{eq:csource}
I^{\mathrm{s}}_\lambda = I^\mathrm{s}_{\lambda_0}\,\left(\frac{\lambda_0}{\lambda}\right)^\alpha \, .
\end{eqnarray}
This emission from the central region includes a contribution from the stellar photosphere and from the continuum radiation (free-free and free-bound) of the circumstellar ionised gas. The spectral index $\alpha$ is sensitive to the nature of the central source. Panagia \& Felli (1975) and Felli \& Panagia (1981) give theoretical values of the spectral index for spherical envelopes: $\alpha\simeq4$ for a blackbody and $\alpha\simeq2.6$ for a fully ionised gas (free-free emission) with an electron density proportional to $r^{-2}$. Their estimates are valid within the Rayleigh-Jeans domain of the spectrum, which applies to our case when we consider the hot central parts of a sgB[e] in the mid-IR.
A radius $R_\mathrm{s}=54R_{\sun}$ was adopted for the central region. This value is used simply as a scaling factor and to convert $I^\mathrm{s}_{\lambda_0}$ to the observed $10\,\mu\mathrm{m}$ flux from the central region:
\begin{eqnarray}
\label{eq:fsource}
F^\mathrm{s}_{\lambda_0}=\pi\left(\frac{R_\mathrm{s}}{d}\right)^2I^\mathrm{s}_{\lambda_0} \, .
\end{eqnarray}
We note that at distances of a few kpc the central regions of a sgB[e] are not resolved by VLTI/MIDI and can thus be considered as point sources.
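For illustration, Eqs.~\ref{eq:csource} and \ref{eq:fsource} translate directly into a few lines of Python. The sketch below is not part of FRACS itself; the adopted value of $I^\mathrm{s}_{\lambda_0}$ merely anticipates the best-fit result listed later in Table~\ref{tab:fit_parameters}, and the printed flux reproduces the tabulated $F^\mathrm{s}_{\lambda_0}$ for $d=1.7$~kpc.
\begin{verbatim}
import numpy as np

R_SUN = 6.957e8    # solar radius [m]
KPC   = 3.0857e19  # kiloparsec [m]

def intensity_central(lam, I_s0, alpha, lam0=10.0):
    """Power-law specific intensity of the central source."""
    return I_s0 * (lam0 / lam)**alpha

def flux_central(I_s0, d, R_s=54.0 * R_SUN):
    """Observed flux from the central region at lambda_0."""
    return np.pi * (R_s / d)**2 * I_s0

I_s0 = 2.2e5  # W m^-2 um^-1 sr^-1 (best-fit value for d = 1.7 kpc)
print(flux_central(I_s0, 1.7 * KPC))  # ~3.5e-13 W m^-2 um^-1
\end{verbatim}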
For the gas number density we adopt the bi-modal distribution used by Carciofi, Miroshnichenko, \& Bjorkman (2010) to study another B[e] star. Similar density distribution descriptions were adopted for Be stars (e.g. Stee et al. 1995). The adopted distribution is motivated by the two-wind scenario proposed by Zickgraf et al. (1985) and assumes a fast polar wind, a slow equatorial outflow, and a latitude-dependent mass loss rate. The number density of dust grains is therefore given by
\begin{equation}
\label{eq:density}
n(r,\theta)=n_\mathrm{in}\,\left(\frac{R_\mathrm{in}}{r}\right)^2\,\left( \frac{1+A_2}{1+A_1}\right) \,
\frac{1+A_1\,\left(\sin{\theta}\right)^m}{1+A_2\,\left(\sin{\theta}\right)^m} \, ,
\end{equation}
where $(r,\theta)$ are the radial coordinate and co-latitude, and $n_\mathrm{in}$ is the dust grain number density at $\theta=90\degr$ and at $r=R_\mathrm{in}$, the inner dust radius, i.e., the radius beyond which dust can survive. Dust is confined between $R_\mathrm{in}$ and the outer dust radius $R_\mathrm{out}$. The value of $R_\mathrm{out}$ cannot be determined from the VLTI/MIDI data and has been fixed to a high value of $750$~AU (the exact value does not affect our results).
The parameter $A_1$ controls the ratio between the equatorial and polar mass loss rates. More precisely, $(1+A_1)$ specifies the ratio between the mass loss rate per unit solid angle in the equator and the pole. In the bi-modal scenario, the poles are assumed to be much less dense than the equator, therefore $A_1$ must have a large value ($\ga10$). Our models indicate that for a wide range of values this parameter does not have a strong influence on the results because it can always be compensated by $n_\mathrm{in}$ (see also discussion in paper I). Therefore, we have arbitrarily fixed $(1+A_1)$ to 50.
The parameter $A_2$ indicates how much faster the polar wind is compared to the slow equatorial wind. $(1+A_2)$ is the equatorial-to-polar terminal velocity ratio, i.e., $v_\infty(90\degr)/v_\infty(0\degr)$. This parameter is also quite uncertain and in principle can assume values ranging from $\sim1$ to $\sim100$. We kept $A_2$ as a free parameter, although it is not well constrained by the observations, as shown in the next sections.
Finally, parameter $m$ controls how fast the mass loss (and consequently the density) drops from the equator to the pole. Defining the disc opening angle $\Delta\theta_\mathrm{d}$ as the latitudinal range within which the mass loss rate is higher than half its equatorial value, we have
\begin{equation}
\label{eq:thetadust}
\Delta\theta_\mathrm{d}=2
\arccos{\left(\frac{A_1-1}{2\,A_1}\right)^{\frac{1}{m}}}\simeq
2\arccos{\left(\frac{1}{2}\right)^{\frac{1}{m}}} \, .
\end{equation}
High $m$ values correspond to thinner regions of high density around the equator. These dense, slowly flowing disc-like regions around the equatorial plane of sgB[e] stars provide favourable conditions for dust to form and survive. Different approaches exist to define regions of dust formation (e.g. Carciofi, Miroshnichenko, \& Bjorkman 2010). Here we adopt the relatively simple assumption where dust is allowed to exist only within the disc opening angle, i.e., at co-latitudes between $90\degr-0.5\Delta\theta_\mathrm{d}$ and $90\degr+0.5\Delta\theta_\mathrm{d}$.
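To make the adopted geometry concrete, the density law of Eq.~\ref{eq:density} and the opening angle of Eq.~\ref{eq:thetadust} can be evaluated with the short Python sketch below (illustrative only; $1+A_1=50$ as fixed above, while the values of $A_2$ and $m$ anticipate the best fit of Table~\ref{tab:fit_parameters}). For $m=332$ the sketch recovers $\Delta\theta_\mathrm{d}\simeq7.5\degr$.
\begin{verbatim}
import numpy as np

def grain_density(r, theta, n_in, R_in, A1=49.0, A2=-0.98, m=332.0):
    """Bi-modal dust grain number density; theta is the co-latitude
    in radians, and A1 = 49 encodes the fixed ratio 1 + A1 = 50."""
    s = np.sin(theta)**m
    return (n_in * (R_in / r)**2
            * (1.0 + A2) / (1.0 + A1)
            * (1.0 + A1 * s) / (1.0 + A2 * s))

def opening_angle(m, A1=49.0):
    """Disc opening angle in degrees."""
    return np.degrees(2.0 * np.arccos(((A1 - 1.0) / (2.0 * A1))**(1.0 / m)))

print(opening_angle(332.0))  # ~7.5 deg
\end{verbatim}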
The dust grain opacity was calculated with Mie theory (Mie 1908) for silicate dust and for a dust size distribution following the commonly adopted MRN (Mathis, Rumpl \& Nordsieck 1977) power-law $\propto a^{-3.5}$, where $a$ is the dust grain radius. The Mie absorption cross sections are computed from the optical indices of astronomical silicate (Draine \& Lee 1984; see also paper I). One possibility to reproduce the absence of a silicate feature in the N-band spectrum of CPD-57\degr\,2874 is to have relatively large grain sizes. We thus used grain radii ranging from $a=0.5$ to 50 $\,\mu\mathrm{m}$, which reproduce the observed spectrum relatively well (Fig.~\ref{fig:midi_flux_cpd}). {We checked that ignoring scattering remains a valid assumption for this dust distribution with large grains. By neglecting the dust albedo the visibilities and fluxes are affected by only a few percent ($\la3.5\%$) within the N-band (further details in paper I).}
The temperature structure of the dusty CSE is given by
\begin{equation}
\label{eq:temperature}
T(r)=T_\mathrm{in}\,\left(\frac{R_\mathrm{in}}{r}\right)^\gamma \, ,
\end{equation}
where $T_\mathrm{in}$ is the dust temperature at the disc inner radius $R_\mathrm{in}$, i.e., the dust sublimation temperature. To be consistent with our choice of dust composition we require that $T_\mathrm{in}\leq1500$~K. The coefficient $\gamma$ is expected to assume values $\la1$.
Finally, because OLBI is sensitive to the projection of the object's intensity distribution onto the sky, there are two angles related to this projection:
\begin{itemize}
\item the inclination of the disc plane towards the observer $i$ ($0\degr$ for pole-on view and $90\degr$ for equator-on view).
\item the position angle (from North to East) of the maximum elongation of the sky-projected disc $\mathrm{PA}_\mathrm{d}$. This angle is defined for $i \ne 0\degr$.
\end{itemize}
Thus, the 10 free parameters ($n_\mathrm{free}$) of the model are: $I^\mathrm{s}_{\lambda_0}$, $\alpha$, $T_\mathrm{in}$, $\gamma$, $R_\mathrm{in}$, $i$, $\mathrm{PA}_\mathrm{d}$, $A_2$, $n_\mathrm{in}$, and $m$.
\section{Model-fitting with FRACS \label{data_analysis}}
Here we use FRACS with the parametrised sgB[e] description defined in the last section in order to interpret the VLTI/MIDI observations of CPD-57\degr\,2874 through a model-fitting procedure.
To ensure spectrally independent observations for the model-fitting we decided to consider one data point every $\simeq0.5\,\mu\mathrm{m}$ between $8\,\mu\mathrm{m}$ and $13\,\mu\mathrm{m}$. This step approximately corresponds to twice the spectral resolution width used ($\Delta\lambda=\lambda/R$). Additionally, to avoid poor visibility calibration owing to the telluric ozone signature around $9.6\,\mu\mathrm{m}$, we have not included observations in this spectral region in our analysis. Finally, the same spectral sampling ($n_\lambda=10$ wavelength points) was adopted for the VLTI/MIDI visibilities and spectrum. This choice also provides faster calculations because it is not necessary to compute model images at too many wavelengths.
We have performed a $\chi^2$ minimisation simultaneously on the VLTI/MIDI visibilities and fluxes using a Levenberg-Marquardt (LM) algorithm (Markwardt 2008). In order to treat the visibilities and fluxes on the same level (similar weights) we have minimised a $\chi^2$ like quantity defined as (see further details in paper I):
\begin{eqnarray}
\label{eq:chi2}
\chi^2=\sum\limits_{j=1}^{n_\lambda}\!\sum\limits_{k=1}^{n_\mathrm{B}}\,
\left[
\left(\frac{V^\mathrm{obs}_k-V_k}{\sigma_{V,k} }\right)^2 +
\left(\frac{F^\mathrm{obs}_j-F_j}{\sigma_{F,j}}\right)^2\right] ,
\end{eqnarray}
where $V^\mathrm{obs}_k$ and $V_k$ are the observed and modelled visibility modulus for baseline index $k$, $F^\mathrm{obs}_j$ and $F_j$ are the observed and modelled mid-IR fluxes for wavelength index $j$. $\sigma_{V,k}$ and $\sigma_{F,j}$ are the estimated errors on the visibilities and fluxes.
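In practice this quantity is a plain double sum; a schematic Python version (assuming the observed and modelled quantities are available as NumPy arrays, and not to be confused with the actual fitting code) reads:
\begin{verbatim}
import numpy as np

def chi2_like(V_obs, V_mod, sig_V, F_obs, F_mod, sig_F):
    """Chi^2-like quantity defined in the text.
    V_* : arrays of shape (n_lambda, n_B), visibility moduli/errors
    F_* : arrays of shape (n_lambda,), mid-IR fluxes/errors
    The double sum counts each flux residual n_B times, giving
    visibilities and fluxes similar weights (see paper I)."""
    n_B = V_obs.shape[1]
    vis = (((V_obs - V_mod) / sig_V)**2).sum()
    flx = (((F_obs - F_mod) / sig_F)**2).sum()
    return vis + n_B * flx

def chi2_reduced(chi2, n_B, n_lambda, n_free=10):
    """Reduced chi^2 as defined below."""
    return chi2 / (2 * n_B * n_lambda - n_free)
\end{verbatim}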
The starting parameter values for the fit were determined from physical considerations of the CSE and from the previous results of DS07. Below we consider the reduced $\chi^2$ defined by $\chi_\mathrm{r}^2=\chi^2/(2n_\mathrm{B}n_\lambda-n_\mathrm{free})$, where $n_\mathrm{free}=10$. The LM algorithm stops when the relative decrease in $\chi_\mathrm{r}^2$ is less than $10^{-3}$. For the CPD-57\degr\,2874 data, the LM algorithm reaches the $\chi_\mathrm{r}^2$ minimum ($\chi_\mathrm{min,r}^2$) in a few hours ($\simeq2-3$~h) on a single CPU.
Figure~\ref{fig:fracs_img_cpd} shows the intensity map of the model corresponding to $\chi_\mathrm{min,r}^2$ (best-fit model) for our distance estimate of 1.7~kpc, which also corresponds to the lowest $\chi_\mathrm{min,r}^2$. The visibilities and fluxes for the best-fit model are shown, together with the observations, in Figs.~\ref{fig:midi_vis_cpd} and \ref{fig:midi_flux_cpd}. These plots show that the model reproduces most observations within their uncertainties for both adopted distances (1.7 and 2.5~kpc). In particular, the slightly curved shape of the visibilities is well reproduced by FRACS. The models indicate that this curved shape is probably caused by the combination of three effects: (1) the intensity maps have different relative contributions from the central source and from the dusty CSE at different wavelengths, (2) the optical properties of the adopted dust grains are wavelength-dependent even though there is no strong silicate feature seen in the spectrum, and (3) the angular resolution changes significantly across the observed wavelengths.
The model parameters at $\chi_\mathrm{min,r}^2$ and their uncertainties are listed in Table~\ref{tab:fit_parameters}. The derived parameters are almost independent of the adopted distance, except of course for those scaling with the distance. The uncertainties of the parameters have been estimated from $\chi_\mathrm{r}^2$ maps calculated with $21\times21$ points for each pair of free parameters (45 pairs) and centred on the $\chi_\mathrm{min,r}^2$ position. All 45 $\chi_\mathrm{r}^2$ maps are shown in Figs.~\ref{fig:chi2_maps_1} to \ref{fig:chi2_maps_3} for $d=1.7$~kpc. These maps show that the $\chi_\mathrm{r}^2$ space presents a well-defined $\chi_\mathrm{min,r}^2$, without multiple local minima in the parameter domain explored. Additionally, they provide visual and direct information on the behaviour of the model parameters in the vicinity of $\chi_\mathrm{min,r}^2$, revealing, for instance, potential correlations between certain parameters.
We have estimated the parameter uncertainties in a conservative way by searching for the maximum parameter extension in all $\chi_\mathrm{r}^2$ maps corresponding to $\chi_\mathrm{min,r}^2+\Delta\chi^2$, where $\Delta\chi^2=1$ (see contours in Figs.~\ref{fig:chi2_maps_1} to \ref{fig:chi2_maps_3}). This choice of $\Delta\chi^2$ sets a lower-limit confidence level of $\simeq60\%$ on the parameter uncertainties. This limit results from two extreme assumptions about the data:
\begin{itemize}
\item Data points per baseline are completely dependent (correlated): because the same set of stars is used to calibrate all visibilities of a given baseline, we can consider a limiting case where all these visibilities are correlated. This assumption implies that only 10 independent visibility observations are available (this corresponds to the number of baselines). The flux at each spectral channel can still be considered to be independent. This pessimistic assumption leads to the lower limit of $\simeq60\%$ to the formal confidence level for $\Delta\chi^2=1$, corresponding to only 20 independent observations (10 baselines and 10 fluxes).
\item All data points are completely independent (uncorrelated): an upper limit of $\simeq100\%$ of the formal confidence level is obtained if we assume that all data points are independent. In that case the uncertainties derived from $\Delta\chi^2=1$ are very conservative (overestimated).
\end{itemize}
Hence the parameter uncertainties given in Table~\ref{tab:fit_parameters} correspond to a confidence level of at least $60\%$, but most probably they are somewhat overestimated.
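For a single $\chi_\mathrm{r}^2$ map, the extraction of these conservative uncertainties can be sketched as follows (schematic Python; in practice the $21\times21$ maps of all 45 parameter pairs are scanned and the maximum extent per parameter is kept):
\begin{verbatim}
import numpy as np

def param_bounds(chi2r_map, x_vals, y_vals, delta=1.0):
    """Parameter extent inside the chi2_min + delta contour of a
    2-D reduced-chi^2 map (delta = 1 as adopted in the text)."""
    ix, iy = np.where(chi2r_map <= chi2r_map.min() + delta)
    return ((x_vals[ix].min(), x_vals[ix].max()),
            (y_vals[iy].min(), y_vals[iy].max()))
\end{verbatim}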
In the next section we present a physically motivated discussion of the derived model parameters of CPD-57\degr\,2874.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\hsize,draft=false]{CPD_1700pc_FRACS_image_Is_small.eps}
\caption{Intensity map of CPD-57\degr\,2874 at $10\,\mu\mathrm{m}$ for the best-fit FRACS model obtained for a distance $d=1.7$~kpc (see Table~\ref{tab:fit_parameters}). The image scale is in log of the specific intensity $I^{\mathrm{s}}_\lambda$. \label{fig:fracs_img_cpd}}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\hsize,draft=false]{plot_midi_vis_cpd.eps}
\caption{VLTI/MIDI visibilities of CPD-57\degr\,2874 (circles) and the best-fit FRACS visibilities obtained from a $\chi^2$ minimisation for a distance $d=1.7$~kpc (solid curve) and $d=2.5$~kpc (dashed curve). The model-fitting was performed simultaneously on the visibilities and spectral flux. The visibilities effectively used for the fit are shown as filled circles together with the corresponding visibility error bars. The good quality of this fit is reflected by a reduced $\chi^2_\mathrm{min}$ of $\simeq0.55$ for both distances (see details in Table~\ref{tab:fit_parameters}). \label{fig:midi_vis_cpd}}
\end{figure*}
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption{Best-fit model parameters and uncertainties derived for CPD-57\degr\,2874 from a $\chi^2$ minimisation. The uncertainties were estimated from the $\chi_\mathrm{r}^2$ maps.}
\label{tab:fit_parameters}
\centering
\renewcommand{\footnoterule}{}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c | c c | c c }
\hline
Adopted distance & \multicolumn{2}{c} {$d=1.7$~kpc} & \multicolumn{2}{| c} {$d=2.5$~kpc} \\
Reduced $\chi^2_\mathrm{min}$ & \multicolumn{2}{c} {$\chi_\mathrm{min,r}^2=0.54$} & \multicolumn{2}{| c} {$\chi_\mathrm{min,r}^2=0.56$} \\
\hline
Model parameters & value & error & value & error \\
\hline
$I^\mathrm{s}_{\lambda_0}$ ($10^{5}\,\mathrm{W}\,\mathrm{m}^{-2}\,\mu\mathrm{m}^{-1}\,\mathrm{sr}^{-1}$) &
$ 2.2$ & $_{-0.7}^{+0.7}$ & $ 4.2$ & $_{-1.4}^{+1.8}$ \\
$\alpha$ &
$ 2.4$ & $_{-1.2}^{+1.3}$ & $ 2.4 $ & $_{-1.4}^{+1.4}$ \\
$T_\mathrm{in}$ (K)\footnote{To be physically consistent with the adopted dust composition the upper limit for $T_\mathrm{in}$ is $1500$~K, even
though the $\chi_\mathrm{r}^2$ maps were allowed to explore higher temperature values.} &
$1498 $ & $_{-427}^{+1042}$ & $ 1500 $ & $_{-535}^{+1050} $ \\
$\gamma$ &
$ 1.02 $ & $_{-0.29}^{+0.71} $ & $ 0.86 $ & $_{-0.24}^{+0.43} $ \\
$R_\mathrm{in}$ (AU) &
$ 12.7$ & $_{-2.9}^{+3.6}$ & $ 14.4 $ & $_{-4.0}^{+5.1}$ \\
$i$ (\degr) &
$ 61.3 $ & $_{-18.2}^{+10.8}$ & $ 59.6 $ & $_{-21.2}^{+11.8} $ \\
$\mathrm{PA}_\mathrm{d}$ (\degr) &
$ 140.3 $ & $_{-14.0}^{+12.3} $ & $ 139.4 $ & $_{-15.1}^{+13.7} $ \\
$A_2\;$\footnote{Not well constrained.} &
$ -0.98$ & - & $ -0.98 $ & - \\
$n_\mathrm{in}$ (m$^{-3}$)$^{\;b}$ &
$ 0.30$ & - & $ 0.33 $ & - \\
$m^{\;b}$ &
$ 332$ & - & $ 377$ & - \\
\hline
Other derived parameters & value & error & value & error \\
\hline
$F^\mathrm{s}_{\lambda_0}$ ($10^{-13}\,\mathrm{W}\,\mathrm{m}^{-2}\,\,\mu\mathrm{m}^{-1}$)\,\footnote{Observed mid-IR flux from the central region at $\lambda_0=10\,\mu\mathrm{m}$ (Eq.~\ref{eq:fsource}).} &
$ 3.5$ & $_{-1.1}^{+1.1}$ & $ 6.8$ & $_{-2.3}^{+2.8}$ \\
$R_\mathrm{in}/d$ (mas) &
$7.5$ & $_{-1.7}^{+2.1} $ & $5.8$ & $_{-1.6}^{+2.0} $ \\
$\Delta\theta_\mathrm{d}$ (\degr) &
$7.5$ & - & $7.0$ & - \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\hsize,draft=false]{plot_midi_flux_cpd.eps}
\caption{VLTI/MIDI flux of CPD-57\degr\,2874 (thick solid grey curve) and the $\pm20\%$ adopted uncertainty (dots). The thin solid and dashed curves are the best-fit model fluxes for assumed distances of $d=1.7$~kpc and $d=2.5$~kpc, respectively. The wavelengths used for the fit are those from Fig.~\ref{fig:midi_vis_cpd}. \label{fig:midi_flux_cpd}}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_T0.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_Rin.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_theta.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_pa.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_A2.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_rho0.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Phi0_m.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_gamma1_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_theta_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_pa_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_A2_ExpPhi.fits.eps}
\caption{Reduced $\chi^2$ maps from VLTI/MIDI observations of CPD-57\degr\,2874 for all 45 combinations of free parameters of our model (FRACS). The maps are centred on the $\chi_\mathrm{min,r}^2(=0.54)$ position and correspond to the distance of 1.7~kpc. Contours are drawn for $\chi_\mathrm{min,r}^2+\Delta\chi^2$, with $\Delta\chi^2=0.3,\,1,\,3$. The parameters are ordered as in Table~\ref{tab:fit_parameters}. The map scale is given in logarithmic units. Details on the model-fitting procedure are given in Sect.~\ref{data_analysis}. \label{fig:chi2_maps_1}}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_rho0_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_m_ExpPhi.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_Rin.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_theta.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_pa.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_A2.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_rho0.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_T0_m.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_theta_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_pa_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_A2_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_rho0_gamma1.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_m_gamma1.fits.eps}
\caption{Continuation of Fig.~\ref{fig:chi2_maps_1}. \label{fig:chi2_maps_2}}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_theta.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_pa.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_A2.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_rho0.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_Rin_m.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_theta_pa.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_theta_A2.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_rho0_theta.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_theta_m.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_pa_A2.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_rho0_pa.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_pa_m.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_rho0_A2.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_A2_m.fits.eps}
\includegraphics[width=0.33\hsize,draft=false]{CPD_1700pc_rho0_m.fits.eps}
\caption{Continuation of Fig.~\ref{fig:chi2_maps_2}. \label{fig:chi2_maps_3}}
\end{figure*}
\section{Discussion \label{discussion}}
\subsection{Geometrical parameters ($\mathrm{PA}_\mathrm{d}$, $R_\mathrm{in}$, and $i$)}
Let us first compare the derived geometrical parameters ($\mathrm{PA}_\mathrm{d}$, $R_\mathrm{in}$, and $i$) with those previously obtained by DS07 from elliptical Gaussian models fitted on a sub-set of the VLTI/MIDI data used here.
The geometrical parameter $\mathrm{PA}_\mathrm{d}$ can be directly compared with the major-axis position angle of the ellipse previously determined by DS07 ($\simeq143\degr-145$\degr). As expected, the two estimates of $\mathrm{PA}_\mathrm{d}$ are identical within their error bars.
Because the bulk of the thermal IR emission comes from the internal regions of the disc, one can expect the inner dust angular radius ($R_\mathrm{in}/d$) to be comparable to (or slightly smaller than) the major-axis half width at half maximum (HWHM) of an elliptical Gaussian. Indeed, the $R_\mathrm{in}/d$ derived here agrees with the major-axis HWHM ($=0.5$FWHM) given by DS07: $4.5 < \mathrm{HWHM~(mas)} < 8.0$.
Domiciano de Souza et al. (2007) estimated a CSE viewing angle $i\sim30\degr-60\degr$ from the minor- to major-axis ratio of the fitted elliptical Gaussian model. This estimate agrees fairly well with the more precise determination of this parameter given here.
The comparison of these parameters shows, on the one hand, that the geometrical parameters obtained with FRACS agree with those obtained from a simpler approach using analytical models. On the other hand, it clearly shows that, at the cost of somewhat longer but still comparable computing times, FRACS gives us access to physical parameters of CPD-57\degr\,2874 that cannot be extracted from simple geometrical analytical models.
\subsection{Continuum emission from the central source ($I^\mathrm{s}_{\lambda_0}$ and $\alpha$)}
In contrast to what may be initially expected, we show below that the central source (star and continuum emission components such as free-free and free-bound) contributes almost half of the total mid-IR radiation of CPD-57\degr\,2874. The total $10\,\mu\mathrm{m}$ flux of the best model for $d=1.7$~kpc is $F_\mathrm{tot}=7.9\times10^{-13}\,\mathrm{W}\,\mathrm{m}^{-2}\,\mu\mathrm{m}^{-1}$ (see Fig.~\ref{fig:midi_flux_cpd}). The contribution of the dust CSE alone to the $10\,\mu\mathrm{m}$ flux computed with FRACS is $F_\mathrm{d}=4.4\times10^{-13}\,\mathrm{W}\,\mathrm{m}^{-2}\,\mu\mathrm{m}^{-1}$. It therefore follows that the $10\,\mu\mathrm{m}$ flux from the CSE amounts to $\simeq56\%$ of $F_\mathrm{tot}$ and that from the central source to $\simeq44\%$ of $F_\mathrm{tot}$. Similar results are obtained for $d=2.5$~kpc.
We note that in the particular case of CPD-57\degr\,2874 where $i\simeq60\degr$ and where the dust is confined in a relatively narrow disc (opening angle $\sim7\degr$), these relative flux contributions can be directly obtained from $F^\mathrm{s}_{\lambda_0}$ (derived from $I^\mathrm{s}_{\lambda_0}$; see Table~\ref{tab:fit_parameters}). This is valid if there is no absorption of the central regions by the CSE.
Although the uncertainties on the spectral index $\alpha$ are relatively high ($\simeq50\%$), the derived value ($=2.4$) suggests an important contribution of free-free continuum radiation from the central regions (Felli \& Panagia 1981).
\subsection{Temperature structure of the dusty CSE ($T_\mathrm{in}$ and $\gamma$)}
The dust temperature at the inner radius $T_\mathrm{in}$ is found to be $\simeq1500$~K (the imposed upper limit for the fit), which is consistent with the definition of $T_\mathrm{in}$ itself and with the chosen silicate dust composition. The large upper limit uncertainties in $T_\mathrm{in}$ and the fact that the best-fit $T_\mathrm{in}$ is 1500~K for $d=2.5$~kpc indicate that a $T_\mathrm{in}$ slightly higher than $1500$~K could still be compatible with the observations. A higher $T_\mathrm{in}$ value is consistent with, for instance, different dust compositions. However, we prefer to keep our choice of dust composition and to have a relatively large upper limit uncertainty in $T_\mathrm{in}$, since the present observations do not provide strong constraints on the exact dust composition.
The derived value for the coefficient of the temperature profile $\gamma$ indicates an almost linear decrease of the disc temperature as a function of $r$. The steepness of this temperature profile lies between those expected for a non-irradiated, adiabatically cooling disc ($T \propto r^{-4/3}$) and a reprocessing disc ($T \propto r^{-3/4}$) (cf. Porter 2003). This implies that a non-negligible part of the reprocessed radiation from the inner parts of the disc escapes without being re-absorbed, so that the disc cools down faster than highly optically thick discs, without of course reaching the limit of purely adiabatic cooling.
\subsection{Parameters related to the density law ($n_\mathrm{in}$, $A_2$, and $m$)}
Our results suggest that the observed mid-IR visibilities and fluxes cannot strongly constrain each individual parameter related to the density law: $n_\mathrm{in}$, $A_2$, and $m$. The $\chi^2$ maps show a significant correlation between these three parameters, indicating that they are degenerate for the available VLTI/MIDI observations. Even though their uncertainties are significant, the values of these parameters at $\chi^2_\mathrm{min}$ suggest
\begin{itemize}
\item a low inner density $n_\mathrm{in}$ corresponding to relatively low CSE optical depth in the mid-IR ($\la0.2$ along the line of sight around $10\,\mu\mathrm{m}$).
\item a polar-to-equatorial terminal velocity ratio $v_\infty(0\degr)/v_\infty(90\degr)=1/(1+A_2)\simeq50$, compatible with the values found in the literature (e.g. Zickgraf 2003).
\item a high value for $m$, translating into a quite narrow opening angle ($\leq10\degr$) for the dust disc.
\end{itemize}
Because $n_\mathrm{in}$, $A_2$, and $m$ are not well constrained, we fitted the observations by fixing these parameters to their values in Table~\ref{tab:fit_parameters} in order to investigate their influence on the remaining parameters. We have also fixed $T_\mathrm{in}$ to $1500$~K. The fit was performed for $d=1.7$~kpc, starting from values slightly different from those in Table~\ref{tab:fit_parameters}. The $\chi^2$ and the values obtained for the free parameters ($I^\mathrm{s}_{\lambda_0}$, $\alpha$, $\gamma$, $R_\mathrm{in}$, $i$, $\mathrm{PA}_\mathrm{d}$) are essentially the same as in Table~\ref{tab:fit_parameters} (differences are only a small fraction of the parameter uncertainties). We have also checked that the uncertainties on the other parameters are not affected by the fact that $n_\mathrm{in}$, $A_2$, and $m$ are not well constrained.
\begin{table}[!t]
\begin{minipage}[t]{\linewidth}
\caption{Best-fit model parameters derived for CPD-57\degr\,2874 from a $\chi^2$ minimisation using a model with fewer free parameters than the initial one (cf. Sect.~\ref{simple_model}). Uncertainties were estimated from the Levenberg-Marquardt (LM) algorithm and can be considered as lower limits to the errors on the derived parameters (see discussion in the text).}
\label{tab:fit_parameters2}
\centering
\renewcommand{\footnoterule}{}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c | c c | c c }
\hline
Adopted distance & \multicolumn{2}{c} {$d=1.7$~kpc} & \multicolumn{2}{| c} {$d=2.5$~kpc} \\
Reduced $\chi^2_\mathrm{min}$ & \multicolumn{2}{c} {$\chi_\mathrm{min,r}^2=0.54$} & \multicolumn{2}{| c} {$\chi_\mathrm{min,r}^2=0.56$} \\
\hline
Model parameters & value & error & value & error \\
& & LM & & LM \\
\hline
$I^\mathrm{s}_{\lambda_0}$ ($10^{5}\,\mathrm{W}\,\mathrm{m}^{-2}\,\mu\mathrm{m}^{-1}\,\mathrm{sr}^{-1}$) &
$ 2.1$ & $_{-0.1}^{+0.1}$ & $ 4.2$ & $_{-0.1}^{+0.1}$ \\
$\alpha$ &
$ 2.4$ & $_{-0.2}^{+0.2}$ & $ 2.4 $ & $_{-0.2}^{+0.2}$ \\
$\gamma$ &
$ 0.92 $ & $_{-0.07}^{+0.07} $ & $ 0.85 $ & $_{-0.05}^{+0.05} $ \\
$R_\mathrm{in}$ (AU) &
$ 11.0$ & $_{-2.0}^{+2.0}$ & $ 13.9 $ & $_{-2.4}^{+2.4}$ \\
$i$ (\degr) &
$ 60.5 $ & $_{-1.5}^{+1.5}$ & $ 59.3 $ & $_{-1.5}^{+1.5}$ \\
$\mathrm{PA}_\mathrm{d}$ (\degr) &
$ 139.8$ & $_{-1.0}^{+1.0} $ & $ 139.3 $ & $_{-1.0}^{+1.0}$ \\
$n_\mathrm{in}$ (m$^{-3}$) &
$ 0.09$ & $_{-0.06}^{+0.06} $ & $ 0.11 $ & $_{-0.07}^{+0.07} $\\
$\Delta\theta_\mathrm{d}$ (\degr) &
$7.5$ & $_{-4.4}^{+4.4} $ & $5.9$ & $_{-4.4}^{+4.4} $\\
\hline
\end{tabular}
\end{minipage}
\end{table}
\subsection{Data analysis from a model with fewer free parameters \label{simple_model}}
Thanks to the data analysis performed here we found that some parameters of the sgB[e] model adopted for CPD-57\degr\,2874 cannot be well constrained from the available VLTI/MIDI data. Of course this is not necessarily a general conclusion because it depends on the nature of the studied target and on the spectro-interferometric data available.
The results from our analysis of CPD-57\degr\,2874 indicate that it is justified to consider a simplified version of the model described in Sect.~\ref{fracs}. As shown in the previous section, the parameters related to the density law are the least constrained by the data. Let us thus consider an alternative density law where the number density of dust grains is given by
\begin{equation}
\label{eq:density2}
n(r,\theta)=\left\{
\begin{array}{l@{\, ; \;}l}
n_\mathrm{in}\,\left(\frac{R_\mathrm{in}}{r}\right)^2 & 90\degr-0.5\Delta\theta_\mathrm{d} \leq \theta \leq 90\degr+0.5\Delta\theta_\mathrm{d} \\
0 & \theta < 90\degr-0.5\Delta\theta_\mathrm{d} \,\mathrm{and}\, \theta > 90\degr+0.5\Delta\theta_\mathrm{d} \\
\end{array} \right.
\end{equation}
The parameters $A_1$, $A_2$, and $m$ are not present in this simpler density prescription. Only $n_\mathrm{in}$ and $\Delta\theta_\mathrm{d}$ are necessary to define the density structure. Based on our previous results we have also fixed $T_\mathrm{in}$ to $1500$~K. The number of free parameters is thus reduced from 10 to 8, namely, $I^\mathrm{s}_{\lambda_0}$, $\alpha$, $\gamma$, $R_\mathrm{in}$, $i$, $\mathrm{PA}_\mathrm{d}$, $n_\mathrm{in}$, and $\Delta\theta_\mathrm{d}$.
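For completeness, a minimal Python sketch of the simplified density law of Eq.~\ref{eq:density2} (illustrative only; angles in radians) is:
\begin{verbatim}
import numpy as np

def grain_density_simple(r, theta, n_in, R_in, dtheta_d):
    """r^-2 density fall-off confined to co-latitudes within the
    disc opening angle dtheta_d; zero dust density elsewhere."""
    in_disc = np.abs(theta - 0.5 * np.pi) <= 0.5 * dtheta_d
    return np.where(in_disc, n_in * (R_in / r)**2, 0.0)
\end{verbatim}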
Following the previous procedure we have performed a $\chi^2$ minimisation on the visibilities and fluxes using a LM algorithm. The results are shown in Table~\ref{tab:fit_parameters2}. Note that the best-fit parameters completely agree (well within the uncertainties) with the previous values obtained with the initial model (Table~\ref{tab:fit_parameters}).
Only $n_\mathrm{in}$ shows an important difference compared to the previously tested model. Contrary to the previous density law (Eq.~\ref{eq:density}), the new one (Eq.~\ref{eq:density2}) assumes that the density is constant along a given $r$ inside the dust disc. In order to obtain fluxes and optical depths similar to before, $n_\mathrm{in}$ has to be somewhat smaller than the previous value.
Table~\ref{tab:fit_parameters2} also provides the parameter uncertainties estimated with the LM algorithm, which are smaller than those estimated from the $\chi_\mathrm{r}^2$ maps (given in Table~\ref{tab:fit_parameters}). These smaller errors appear because all data points are assumed to be independent and no covariance matrix is used in the error estimations. The estimated errors from LM in Table~\ref{tab:fit_parameters2} can then be considered as lower limits, while the errors from the $\chi_\mathrm{r}^2$ maps can be considered as upper limits to the parameter uncertainties (see discussion in Sect.~\ref{data_analysis}).
\section{Conclusions \label{conclusions}}
The dusty CSE of the Galactic sgB[e] CPD-57\degr\,2874 was spatially resolved thanks to mid-IR spectro-interferometric observations performed with the VLTI/MIDI instrument. Several physical parameters and corresponding uncertainties of this star were derived from a $\chi^2$ minimisation and from the analysis of $\chi^2$ maps. The physical quantities derived include the inner dust radius, relative flux contribution of the central source and of the dusty CSE, dust temperature profile, and disc inclination (refer to Table~\ref{tab:fit_parameters} and Sect.~\ref{discussion} for details).
To our knowledge, this is the first direct determination of physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and using a model-fitting approach via $\chi^2$ minimisation. This was possible thanks to FRACS, which adopts a parametrised description of the central object and of the dusty CSE combined with a simplified radiative transfer (no scattering). Its main advantage is computation speed ($<10$~s per monochromatic image with $300\times300$ pixels). Because it is fast, FRACS allows us (1) to explore a large parameter space domain for a true global $\chi^2$ minimisation and (2) to estimate more realistically the uncertainties on the best-fit parameters. We recall that, contrary to a model such as FRACS, simple geometrical models do not provide simple and direct access to the physical parameters and uncertainties of the dusty CSE.
{Future complementary observations could be included to measure new CSE parameters and/or to reduce the uncertainties and the correlations on some parameters that were not strongly constrained by the VLTI/MIDI observations alone. Consistently with the domain of validity of FRACS, these complementary data should be obtained at wavelengths above the near-IR, where the dust starts to contribute a significant fraction of the stellar flux (UV to visible observations cannot be consistently modelled by FRACS). For example, the closure-phase information from the MATISSE beam combiner (second-generation instrument on the VLTI; Lopez et al. 2009, Wolf et al. 2009) could constrain the disc inclination and opening angle, parameters that can influence the symmetry of the mid-IR intensity distribution. Also, near-future interferometric observations in the millimetre with ALMA will be more sensitive than MIDI and MATISSE to the thermal emission from the colder regions of the dusty CSE. These observations will therefore probably better constrain the temperature profile of the dust and also provide direct information on the structure and actual size of the CSE, also allowing for mass estimates. Moreover, this information on the colder dust is very important for the study of the evolutionary history of sgB[e] stars (e.g. mass and angular momentum losses, chemical enrichment, and interaction with the close interstellar medium).}
\begin{acknowledgements}
This research used the SIMBAD and VIZIER databases at the CDS, Strasbourg (France), and NASA's ADS bibliographic services. M.B.F. acknowledges financial support from the Programme National de Physique Stellaire (PNPS-France) and the Centre National de la Recherche Scientifique (CNRS-France). M.B.F. also acknowledges Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq-Brazil) for the post-doctoral grant. We thank the CNRS for financial support through the collaboration program PICS France-Brazil. We also thank the referee for his useful and constructive comments, which helped us to improve the quality of this work.
\end{acknowledgements}
\section{Introduction}
Nanoparticle characterization in dispersion proves to be a challenging task, in particular for complex and heterogeneous particle systems~\cite{iso:16}. Effects such as particle agglomeration and aggregation can lead to highly polydisperse or multi-modal systems, thus calling for robust, accurate and versatile characterization methods~\cite{anderson:13}. Existing technologies, such as nanoparticle tracking analysis~\cite{hole:13} or electron microscopy, provide possibilities for single particle analysis, however, with the bottleneck of low particle throughput and offline measurements.
The recently introduced optofluidic force induction (\textsc{of2}i) technique addresses these problems using the principle of optical tweezers in combination with a continuous flow, in order to perform single particle analysis of polydisperse samples in real time~\cite{simic:22}. The physics underlying this scheme is similar to optical tweezer experiments, where a strongly focused laser beam is used to optically trap particles in three dimensions. The basic principle was pioneered by Arthur Ashkin in 1970, and its development was recognized with the Nobel Prize in Physics in 2018~\cite{ashkin:70}. Optical tweezers allow for precise control of orientation, position, and arrangement of the particles under investigation~\cite{lee:04,butaite:19,donato:16}. Besides holding particles in place, a weakly focused laser beam can also achieve two-dimensional optical trapping, where the particles can move along the optical axis of the exciting beam. Within the context of nanoparticle characterization, this can be employed for optical chromatography~\cite{imasaka:95}.
At the heart of optical tweezers simulations lies the calculation of the optical forces~\cite{jones:15,gennerich:17}. These forces arise from the light-matter interaction of the exciting laser beam with a particle and the resulting photon momentum transfer, ultimately leading to a light scattering problem. While it is well established how such scattering problems can be solved for the usual plane-wave excitations within Mie theory~\cite{bohren:83}, more attention is required when dealing with higher-order laser modes carrying orbital angular momentum~\cite{allen:92,franke:08,shen:19}, such as the Laguerre-Gaussian beams used in \textsc{of2}i. Again, the light scattering theory for such excitations has been developed elsewhere~\cite{kiselev:14,gutierrez-cuevas:18}, but must be put together with the other ingredients of a full simulation approach with sufficient care. Here, in addition to optical forces, viscous drag and thermal fluctuations contribute considerably to the dynamics of a particle in a liquid medium~\cite{bui:17}.
In this paper we develop and discuss a four-step model for the simulation of \textsc{of2}i, which accounts for the incoming electromagnetic fields of a Laguerre-Gauss beam, solves Maxwell's equations for such excitations and spherical particles, computes from the solutions of Maxwell's equations the optical scattering spectra and optical forces, and uses Newton's equations of motion to simulate the particle trajectories. A number of representative and prototypical setups are investigated to estimate the importance of the various ingredients entering our model.
The outline of the paper is as follows. In Sec.~\ref{sec:theory} we present the theory and derivation of the \textsc{of2}i trajectory model. The resulting particle trajectories are presented in Sec.~\ref{sec:results}, and we provide detailed discussions of the impact of particle refractive indices, sphere sizes, and Brownian motion. Finally, in Sec.~\ref{sec:summary} we summarize our results and give an outlook on future work. Some of the theoretical details are given in the Appendices.
\section{Theory}\label{sec:theory}
The basic principle of \textsc{of2}i is sketched in Fig.~\ref{fig:setup}. The nanoparticles to be analyzed are immersed in a solution and are pumped through a microfluidic flow cell. Additionally, a weakly focused laser beam propagates in the flow direction. The purpose of this laser is three-fold. First, the optical forces in the transverse directions $x$,$y$ (see Fig.~\ref{fig:setup}) push the nanoparticles to the intensity maxima of the laser field, such that particles propagating sufficiently close to the maxima become trapped in the transverse directions. Second, the optical forces in the laser propagation direction $z$ push the particles and lead to velocity changes depending on size and material properties. Third, light is scattered off the particles and can be monitored outside the flow cell. By analyzing the velocity changes of the individual particles being transported through the focus region, one obtains detailed information about their properties. The light scattering intensities and emission patterns provide additional information, as will be discussed in more detail below.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{setup}
\caption{Schematics of optofluidic force induction (\textsc{of2}i). (a)~Nanoparticles to be analyzed are transported through a microfluidic channel alongside a weakly focused laser beam with an optical vortex (orbital angular momentum $m=2$). The dashed box indicates the region where the field distribution is shown in Fig.~\ref{fig:optforce01}. (b)~Field intensity distribution for a nanosphere with a diameter of 2~$\mu$m located at the intensity maximum, in the region indicated by the solid box in panel (a).}
\label{fig:setup}
\end{figure}
An important ingredient of \textsc{of2}i is the use of a vortex laser beam with an orbital angular momentum (\textsc{oam})~\cite{allen:92,franke:08,bliokh:15,shen:19}. Throughout this work we consider a weakly focused Laguerre-Gaussian laser beam with a polarization along $x$, with the electric field~\cite{song:20} (see also Appendix~\ref{sec:LG})
\begin{equation}\label{eq:oam}
\bm E(r,\phi,z)\approx \mathscr{E}_m(r,z) e^{im\phi}\,\hat{\bm x}\,,
\end{equation}
where $m$ is the so-called topological charge associated with the \textsc{oam}, and $\mathscr{E}_m(r,z)$ is the field profile in the radial and propagation directions. The intensity profile of such a beam is depicted in Fig.~\ref{fig:setup} for $m=2$. Because of the topological charge, it has a ring-like distribution in the transverse directions with zero intensity in the center, and the trapped nanoparticles move along spiral-shaped trajectories through the focus region. This has the advantage that nanoparticles can bypass each other more easily and collisions are strongly suppressed in comparison to laser beams with an intensity maximum on the optical axis.
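To illustrate this ring-like profile, the textbook paraxial intensity of a Laguerre-Gaussian mode with radial index $p=0$ and topological charge $m$ can be evaluated with a few lines of Python (a schematic sketch only; the exact fields and normalization used in this work are those of Appendix~\ref{sec:LG}, and the beam parameters below are placeholder values):
\begin{verbatim}
import numpy as np

def lg_intensity(r, z, m=2, w0=2.0e-6, lam=532e-9, n_b=1.33):
    """Paraxial Laguerre-Gauss (p = 0) intensity in arbitrary units.
    All lengths in metres; w0, lam, n_b are placeholder values."""
    zR = np.pi * w0**2 * n_b / lam        # Rayleigh range
    w  = w0 * np.sqrt(1.0 + (z / zR)**2)  # beam radius w(z)
    u  = np.sqrt(2.0) * r / w
    return (w0 / w)**2 * u**(2 * abs(m)) * np.exp(-u**2)

# the intensity ring is located at r_max = w(z) * sqrt(|m| / 2)
r = np.linspace(0.0, 5e-6, 1001)
print(r[np.argmax(lg_intensity(r, 0.0))])  # ~2e-6 m for m = 2
\end{verbatim}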
In Ref.~\cite{simic:22} we have experimentally demonstrated the working principle of \textsc{of2}i for an ensemble of standard polystyrene nanoparticles with well-known size distributions, and have developed a theoretical model for the analysis of the experiments. In the remainder of this section, we give a detailed presentation of the various ingredients entering this model. We start by presenting the theory in its most general form, and then specialize on the implementations using either Mie theory or a fully numerical simulation approach.
\subsection{Four-step model for OF2i}\label{sec:fourstep}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{optforce01new}
\caption{Field distribution in the focus region of the laser, see also the dashed box in Fig.~\ref{fig:setup}(a). The light is deflected by the nanoparticle; through the actio-reactio principle an optical force (solid lines) is exerted on the particle, which leads to trapping in the transverse $x$,$y$ directions and a velocity change in the $z$ direction. The positions of panels (a--e) are reported in the panel on the left. We show results for nanospheres with diameters of 500 and 1000 nm, respectively; the refractive index is $n_b=1.33$ for the embedding medium (water) and $n=1.59$ for the nanosphere (polystyrene). Note that in panels (e) the intensity is low and the fields are hardly visible.}
\label{fig:optforce01}
\end{figure}
The theoretical description of \textsc{of2}i consists of an electromagnetic part and a particle trajectory part. We first provide a brief summary of the theoretical ingredients and then elaborate on the details. In the electromagnetic part, we account for the optical response of the nanoparticles and compute the optical forces and scattering fields, see also Fig.~\ref{fig:optforce01}. We start with the incoming fields of the Laguerre-Gauss laser beam, $\bm E_{\rm inc}$, $\bm H_{\rm inc}$, which would be solutions of Maxwell's equations in the absence of the nanoparticle. In the presence of the nanoparticle we additionally have to consider the scattered fields $\bm E_{\rm sca}$, $\bm H_{\rm sca}$, which are chosen such that the boundary conditions of Maxwell's equations are fulfilled at the particle boundary. The sum of incoming and scattered fields then provides us with the total fields, which are the proper solutions of Maxwell's equations. From the deflection of the incoming fields we can compute the optical force $\bm F_{\rm opt}$, as shown in Fig.~\ref{fig:optforce01} and discussed in more detail below. In the particle trajectory part, we consider Newton's equation of motion for the nanoparticle,
\begin{equation}\label{eq:newton}
m\ddot{\bm r}=\bm F_{\rm opt}(\bm r)+\bm F_{\rm drag}+\bm F_{\rm stoch}\,,
\end{equation}
where $m$ is the mass of the particle, which might include the added mass due to the fluid~\cite[Sec.~4.15]{newman:17}, $\bm r$ is the particle position, $\bm F_{\rm drag}$ the drag force of the particle moving through the fluid, and $\bm F_{\rm stoch}$ accounts for the stochastic fluid forces that are needed according to the fluctuation-dissipation theorem to counterbalance the drag forces~\cite{kubo:85}. By successively computing the optical forces and updating the particle position according to Eq.~\eqref{eq:newton}, we obtain the nanoparticle trajectories. Altogether, the theoretical model for \textsc{of2}i can be broken up into the following four steps.
\begin{enumerate}
\item Provide an expression for the incoming electromagnetic fields of the Laguerre-Gauss laser beam.
\item Solve Maxwell's equations in the presence of the nanoparticle, using either an analytical or a numerical approach. This step provides us with the scattered electromagnetic fields.
\item Use the total fields, that is, the sum of the incoming and scattered fields, to compute the optical force acting on the nanoparticle at a given position.
\item Use Newton's equation of motion including optical and microfluidic forces to obtain the particle trajectory.
\end{enumerate}
\noindent In this work we will establish the methodology for this four-step model and discuss results of representative simulation setups. In the future we plan to extend this model by tracing the scattered electromagnetic fields through the imaging system, which will allow for a most direct comparison with the experimental results. For completeness, we here list the additional steps that will be needed to simulate the imaging system.
\begin{enumerate}
\setcounter{enumi}{4}
\item Propagate scattered electromagnetic fields through glass boundaries of microfluidic flow cell.
\item Simulate imaging of scattered electromagnetic fields, using for instance the approach of Richards and Wolf~\cite{richards:59,novotny:06,hohenester:20}.
\end{enumerate}
\noindent We start by discussing the electromagnetic part of our simulation approach. The power scattered by the nanoparticle is computed from the flow of scattered energy through the nanoparticle boundary~\cite{jackson:99}
\begin{equation}\label{eq:psca}
P_{\rm sca}=\frac 12\oint_{\partial V}\mbox{Re}\left(\bm E_{\rm sca}^{\phantom*}\times
\bm H_{\rm sca}^*\right)\cdot d\bm a\,,
\end{equation}
where $\partial V$ is the particle boundary with the infinitesimal boundary element $d\bm a$. In deriving this expression we have assumed the usual time harmonic dependence $e^{-i\omega t}$ for the electromagnetic fields and have averaged over an oscillation cycle. Eq.~\eqref{eq:psca} gives an estimate of how bright the nanoparticle appears in an imaging system, although a detailed analysis should additionally include the emission pattern of the scattered fields and the aforementioned deflection of these fields through lenses.
Similarly, the transfer of momentum from the electromagnetic fields to the nanoparticle, that is, the optical force, can be computed from the net flux of momentum carried by the electromagnetic fields through the nanoparticle boundary and by utilizing momentum conservation in the composite system formed by the nanoparticle and the electromagnetic fields. That is, the imbalance of the electromagnetic momentum flux through the nanoparticle boundary provides us with the momentum transferred from the fields to the nanoparticle. For time-harmonic electromagnetic fields and by averaging over an oscillation cycle, we obtain under the assumption of quasi-stationarity, where the nanoparticle motion is negligible on the time scale of the field oscillations, the expression~\cite{marago:13,jones:15,gennerich:17,hohenester:20}
\begin{equation}\label{eq:fopt}
\bm F_{\rm opt}=\frac 12\oint_{\partial V}\mbox{Re}\left[
\overset\leftrightarrow{\theta}-\frac 12\openone\mbox{tr}\big(\overset\leftrightarrow{\theta}\big)
\right]\cdot d\bm a\,.
\end{equation}
The term in brackets is Maxwell's stress tensor accounting for the momentum density flow of the electromagnetic fields, with~\cite{jackson:99}
\begin{equation}\label{eq:stress}
\theta_{ij}=\varepsilon E_i^{\phantom*}E_j^*+\mu H_i^{\phantom*}H_j^*\,,
\end{equation}
where $\varepsilon$ and $\mu$ are the permittivity and permeability of the embedding background medium, respectively. Eqs.~\eqref{eq:psca} and \eqref{eq:fopt} are the central expressions for the electromagnetic part of our theoretical modeling, and can be evaluated once the electromagnetic fields are at hand. Note that the expression for the optical force can be easily generalized to obtain optical torques acting on nanoparticles, which is of importance for non-spherical particle geometries~\cite{jones:15,gennerich:17,hohenester:20}.
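Numerically, Eq.~\eqref{eq:fopt} amounts to a surface quadrature of the stress tensor. The following Python sketch illustrates this for a spherical boundary; it is schematic in the sense that \texttt{E\_func} and \texttt{H\_func} stand for user-supplied total fields. As a sanity check, a freely propagating plane wave gives a vanishing net force.
\begin{verbatim}
import numpy as np

def optical_force(E_func, H_func, R, eps, mu, n_th=64, n_ph=128):
    """Time-averaged optical force by quadrature of Maxwell's stress
    tensor over a sphere of radius R.  E_func(x, y, z), H_func(x, y, z)
    must return complex total-field vectors of shape (..., 3)."""
    th = (np.arange(n_th) + 0.5) * np.pi / n_th
    ph = np.arange(n_ph) * 2.0 * np.pi / n_ph
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    nv = np.stack([np.sin(TH) * np.cos(PH),
                   np.sin(TH) * np.sin(PH),
                   np.cos(TH)], axis=-1)          # outward normal
    x, y, z = (R * nv[..., i] for i in range(3))
    E, H = E_func(x, y, z), H_func(x, y, z)
    # stress tensor contracted with the normal, theta . n
    tn = (eps * E * np.sum(np.conj(E) * nv, axis=-1)[..., None]
          + mu * H * np.sum(np.conj(H) * nv, axis=-1)[..., None])
    tr = eps * np.sum(np.abs(E)**2, -1) + mu * np.sum(np.abs(H)**2, -1)
    integrand = np.real(tn - 0.5 * tr[..., None] * nv)
    dA = R**2 * np.sin(TH) * (np.pi / n_th) * (2.0 * np.pi / n_ph)
    return 0.5 * np.sum(integrand * dA[..., None], axis=(0, 1))

# sanity check: a plane wave alone exerts no net force
k, Z0 = 2 * np.pi / 532e-9, 376.73
Epw = lambda x, y, z: np.stack(
    [np.exp(1j * k * z), np.zeros_like(z), np.zeros_like(z)], axis=-1)
Hpw = lambda x, y, z: np.stack(
    [np.zeros_like(z), np.exp(1j * k * z) / Z0, np.zeros_like(z)], axis=-1)
print(optical_force(Epw, Hpw, 1e-6, 8.854e-12, 4e-7 * np.pi))  # ~0
\end{verbatim}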
For the trajectory part, the force on a small sphere moving with velocity $\bm v$ through a viscous fluid is given by the usual Stokes drag, valid for creeping flow with a Reynolds number much smaller than one~\cite{note:stokes}
\begin{equation}
\bm F_{\rm drag}=-6\pi\eta R_{\rm hyd}\big(\bm v-\bm v_{\rm fluid}\big)\,,
\end{equation}
where $\bm v_{\rm fluid}$ is the velocity of the fluid and $\eta$ the dynamic viscosity (we reserve the symbol $\mu$ for the permeability). In this work we set for simplicity $R_{\rm hyd}$ to the radius of the sphere, but in general this hydrodynamic radius might differ from the radius entering the optical calculations~\cite{wyatt:14}. We will address this point in future work.
For sufficiently large spheres, say for diameters above 10 nm, the momentum relaxation time is so short that we can approximately set $\dot{\bm v}\approx 0$~\cite{neuman:08}. Also the stochastic forces do not play a decisive role for larger spheres, as will be discussed in Sec.~\ref{sec:stochatsic}. The nanosphere's velocity $\bm v$ is then obtained from the condition that the optical force is balanced by the drag force, and we get
\begin{equation}\label{eq:vsteady}
\bm v(\bm r)=\bm v_{\rm fluid}+\frac{{\bm F}_{\rm opt}(\bm r)}{6\pi\eta R_{\rm hyd}}\,.
\end{equation}
We emphasize that our model contains no free parameters, and all laser, fluid, and nanoparticle parameters can, in principle, be inferred from experiment.
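With Eq.~\eqref{eq:vsteady} at hand, the fourth step of the model reduces to integrating $\dot{\bm r}=\bm v(\bm r)$. A minimal forward-Euler sketch is given below; the optical force \texttt{F\_opt} is a placeholder to be supplied by steps 1--3, and the toy force used here only mimics transverse trapping towards a ring together with a constant push in $z$.
\begin{verbatim}
import numpy as np

def trajectory(r0, F_opt, v_fluid, eta, R_hyd, dt=1e-5, n_steps=20000):
    """Overdamped trajectory from the steady-state velocity:
    F_opt(r) -> optical force [N], v_fluid [m/s], eta [Pa s], R_hyd [m]."""
    r = np.array(r0, dtype=float)
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        v = v_fluid + F_opt(r) / (6.0 * np.pi * eta * R_hyd)
        r = r + v * dt
        traj[i] = r
    return traj

# toy force: harmonic restoring towards a ring of radius 2 um,
# plus a small constant force along z (illustrative numbers only)
kappa, r_ring = 1e-7, 2.0e-6
def F_toy(r):
    rho = max(np.hypot(r[0], r[1]), 1e-12)
    f = -kappa * (rho - r_ring) / rho
    return np.array([f * r[0], f * r[1], 1e-13])

traj = trajectory([1e-6, 0.0, 0.0], F_toy, np.array([0.0, 0.0, 1e-3]),
                  eta=8.9e-4, R_hyd=250e-9)
\end{verbatim}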
\subsection{Mie theory}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{scheme}
\caption{Schematics of optical simulation approach. The incoming fields of the vortex laser are expanded in vector spherical harmonics (\textsc{vsh}), and are used together with the Mie coefficients to compute the scattered electromagnetic fields. Once the incoming and scattered fields are at hand, we can compute optical response properties such as the scattered light or the optical forces. In the right panel we show the $z$-component of the force density, that is, the integrand of Eq.~\eqref{eq:fopt}, on the sphere boundary.}
\label{fig:scheme}
\end{figure}
Mie theory provides an efficient and versatile method for solving Maxwell's equations for spherical nanoparticles~\cite{bohren:83}, as schematically depicted in Fig.~\ref{fig:scheme}. The basic idea is to expand the electromagnetic fields in a complete basis with spherical symmetry. The transverse fields can be expanded using~\cite{jackson:99,hohenester:20}
\begin{equation}\label{eq:basis}
z_\ell(kr)\bm X_{\ell,m}(\theta,\phi)\,,\quad \nabla\times z_\ell(kr)\bm X_{\ell,m}(\theta,\phi)\,,
\end{equation}
where $z_\ell(kr)$ are spherical Bessel or Hankel functions, $k$ is the wavenumber of the medium, and $\bm X_{\ell,m}$ are the vector spherical harmonics. The angular degree and order are denoted with $\ell$ and $m$, respectively. The basis of Eq.~\eqref{eq:basis} has the advantage that field matching at the nanosphere boundary can be done easily and separately for each pair of $\ell$, $m$. Unfortunately, Mie theory is often complicated by the fact that the definitions of the various functions are not unique and different choices have been adopted in the literature, such that it is often difficult to compare results. We here largely follow the definitions given in~\cite{kiselev:14,jackson:99,hohenester:20}. For the incoming fields we choose spherical Bessel functions, which become plane waves at large distances $kr\gg 1$. The incoming electromagnetic fields can then be expanded via
\begin{eqnarray}\label{eq:mieinc}
\bm E_{\rm inc}&=&\sum_{\ell,m}\left[b_{\ell,m}^{\rm inc}j_\ell\bm X_{\ell,m}+
\frac ik a_{\ell,m}^{\rm inc}\nabla\times j_\ell\bm X_{\ell,m}\right]Z \nonumber\\
\bm H_{\rm inc}&=&\sum_{\ell,m}\left[a_{\ell,m}^{\rm inc}j_\ell\bm X_{\ell,m}-
\frac ik b_{\ell,m}^{\rm inc}\nabla\times j_\ell\bm X_{\ell,m}\right]\,,\quad
\end{eqnarray}
where $Z$ is the impedance and $a_{\ell,m}^{\rm inc}$, $b_{\ell,m}^{\rm inc}$ are the coefficients to be determined for specific incoming fields. Similarly, for the scattered fields outside the nanoparticle we choose spherical Hankel functions, which become outgoing spherical waves at large distances,
\begin{eqnarray}\label{eq:miesca}
\bm E_{\rm sca}&=&-\sum_{\ell,m}\left[b_{\ell,m}^{\rm sca}h_\ell^{(1)}\bm X_{\ell,m}+
\frac ik a_{\ell,m}^{\rm sca}\nabla\times h_\ell^{(1)}\bm X_{\ell,m}\right]Z \nonumber\\
\bm H_{\rm sca}&=&-\sum_{\ell,m}\left[a_{\ell,m}^{\rm sca}h_\ell^{(1)}\bm X_{\ell,m}-
\frac ik b_{\ell,m}^{\rm sca}\nabla\times h_\ell^{(1)}\bm X_{\ell,m}\right]\,.\qquad
\end{eqnarray}
These scattered fields are uniquely determined upon knowledge of the coefficients $a_{\ell,m}^{\rm sca}$, $b_{\ell,m}^{\rm sca}$. Additionally, we need the scattered electromagnetic fields inside the nanosphere, which are of the same form as Eq.~\eqref{eq:miesca}, however with the spherical Hankel functions replaced by spherical Bessel functions, which remain finite at the origin, and with different coefficients $c_{\ell,m}^{\rm sca}$, $d_{\ell,m}^{\rm sca}$. Below we will discuss how the scattering coefficients can be obtained through field matching at the sphere boundary.
For the incoming fields we consider a weakly focused Laguerre-Gauss laser beam and employ the paraxial approximation~\cite{song:20}, which is well justified for our case of weak focusing. The explicit expressions are given in Appendix~\ref{sec:LG}. In~\cite{kiselev:14} the coefficients $a_{\ell,m}^{\rm inc}$, $b_{\ell,m}^{\rm inc}$ were computed by matching the incoming fields and the Mie expansion of Eq.~\eqref{eq:mieinc} in the far-field limit. We here proceed somewhat differently and compute the coefficients using the field values on the sphere boundary~\cite[Eq.~(E.5)]{hohenester:20}
\begin{eqnarray}\label{eq:mieinc2}
a_{\ell,m}^{\rm inc}j_\ell(kR) &=&-\frac{Z^{-1}k}{\sqrt{\ell(\ell+1)}}
\oint Y_{\ell,m}^*\Big[\bm r\cdot \bm E_{\rm inc}(\bm r+\bm r_0)\Big]\,d\Omega\nonumber\\
b_{\ell,m}^{\rm inc}j_\ell(kR) &=&\phantom-\frac{k}{\sqrt{\ell(\ell+1)}}
\oint Y_{\ell,m}^*\Big[\bm r\cdot \bm H_{\rm inc}(\bm r+\bm r_0)\Big]\,d\Omega\,,\nonumber\\
\end{eqnarray}
where the integrals extend over the unit sphere, and $\bm r$ is a position determined by the unit sphere angles and located on the sphere with radius $R$. In Mie theory, the coefficients have to be computed for a reference frame where the sphere center is in the origin. As the incoming electromagnetic fields are defined in a reference frame where the focus is the origin, we have to translate $\bm r$ by the center position $\bm r_0$ of the nanosphere. The computation of the integrals can be considerably accelerated by using an equidistant grid for the azimuthal coordinate and noting that the resulting integral can be computed using the fast Fourier transform~\cite{press:02}. The remaining integral over the polar angle is computed by means of a Legendre-Gauss quadrature. The implementation of Eq.~\eqref{eq:mieinc2} can be easily tested for an incoming plane wave through comparison with the resulting analytic expressions~\cite[Eq.~(10.53)]{jackson:99}.
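The quadrature just described can be illustrated with a short Python sketch. It is schematic: \texttt{f} stands for the scalar products $\bm r\cdot\bm E_{\rm inc}$ or $\bm r\cdot\bm H_{\rm inc}$ sampled on the sphere, and the remaining bookkeeping of Eq.~\eqref{eq:mieinc2} is omitted; only the projection $\oint Y_{\ell,m}^*\,f\,d\Omega$ is shown.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import sph_harm

def project_sphere(f, ell, m, n_th=64, n_ph=128):
    """Projection onto Y_{ell,m}: FFT over an equidistant azimuthal
    grid, Legendre-Gauss quadrature over the polar angle."""
    x, w = leggauss(n_th)                 # nodes/weights in cos(theta)
    th = np.arccos(x)
    ph = np.arange(n_ph) * 2.0 * np.pi / n_ph
    TH, PH = np.meshgrid(th, ph, indexing='ij')
    vals = f(TH, PH)
    # azimuthal integral of f e^{-im phi} from the FFT
    g = np.fft.fft(vals, axis=1)[:, m % n_ph] * (2.0 * np.pi / n_ph)
    # polar integral against Y_{ell,m}^* evaluated at phi = 0
    return np.sum(w * np.conj(sph_harm(m, ell, 0.0, th)) * g)

# sanity check (orthonormality): projecting Y_{2,1} onto itself gives 1
print(project_sphere(lambda TH, PH: sph_harm(1, 2, PH, TH), 2, 1))
\end{verbatim}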
The computation of the scattered fields is particularly simple within Mie theory because each pair of angular degrees and orders $\ell$, $m$ can be handled separately. Field matching is accomplished through the so-called Mie coefficients~\cite{bohren:83,hohenester:20}
\begin{eqnarray}\label{eq:miecoeffs}
a_\ell&=&\frac{Z_2\psi_\ell(x_1)\psi_\ell'(x_2)-Z_1\psi_\ell'(x_1)\psi_\ell(x_2)}
{Z_2\psi_\ell(x_1)\xi_\ell'(x_2)-Z_1\psi_\ell'(x_1)\xi_\ell(x_2)}\nonumber\\
b_\ell&=&\frac{Z_2\psi_\ell'(x_1)\psi_\ell(x_2)-Z_1\psi_\ell(x_1)\psi_\ell'(x_2)}
{Z_2\psi_\ell'(x_1)\xi_\ell(x_2)-Z_1\psi_\ell(x_1)\xi_\ell'(x_2)} \,,
\end{eqnarray}
with $k_1$, $k_2$ being the wavenumbers of the medium inside and outside the nanosphere, respectively, and $Z_1$, $Z_2$ the corresponding impedances. We have introduced the abbreviations $x_{1,2}=k_{1,2}R$ and the Riccati-Bessel functions $\psi_\ell(x)=xj_\ell(x)$, $\xi_\ell(x)=xh_\ell^{(1)}(x)$, where a prime indicates the derivative with respect to $x$. With the Mie coefficients, the scattered and incoming fields can be related through
\begin{equation}\label{eq:abinc}
a_{\ell,m}^{\rm sca}=a_\ell\,a_{\ell,m}^{\rm inc}\,,\quad
b_{\ell,m}^{\rm sca}=b_\ell\,b_{\ell,m}^{\rm inc}\,.
\end{equation}
Thus, the entire solution of Maxwell's equations for spherical particles is embodied in the Mie coefficients of Eq.~\eqref{eq:miecoeffs}, where the matching of fields at the particle boundary has been explicitly worked out. Mie theory can also be used to compute the optical forces from the incoming and scattering coefficients only. We here follow the approach of \cite{gutierrez-cuevas:18} where analytic expressions are derived. Appendix~\ref{sec:mie} gives the explicit formulas used in this work.
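For reference, the Mie coefficients of Eq.~\eqref{eq:miecoeffs} can be transcribed directly into Python using SciPy's spherical Bessel routines; the following sketch (function names are ours) is meant as a starting point rather than the optimized implementation used in this work:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_coefficients(ell, k1, k2, Z1, Z2, R):
    # Riccati-Bessel functions psi(x) = x j(x), xi(x) = x h1(x), and derivatives
    def psi(x):   return x * spherical_jn(ell, x)
    def psi_d(x): return spherical_jn(ell, x) + x * spherical_jn(ell, x, derivative=True)
    def h1(x):    return spherical_jn(ell, x) + 1j * spherical_yn(ell, x)
    def h1_d(x):  return (spherical_jn(ell, x, derivative=True)
                          + 1j * spherical_yn(ell, x, derivative=True))
    def xi(x):    return x * h1(x)
    def xi_d(x):  return h1(x) + x * h1_d(x)
    x1, x2 = k1 * R, k2 * R   # size parameters inside (1) and outside (2)
    a = (Z2 * psi(x1) * psi_d(x2) - Z1 * psi_d(x1) * psi(x2)) / \
        (Z2 * psi(x1) * xi_d(x2) - Z1 * psi_d(x1) * xi(x2))
    b = (Z2 * psi_d(x1) * psi(x2) - Z1 * psi(x1) * psi_d(x2)) / \
        (Z2 * psi_d(x1) * xi(x2) - Z1 * psi(x1) * xi_d(x2))
    return a, b
\end{verbatim}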
\subsection{Boundary element method}
We additionally performed simulations using a fully numerical Maxwell solver. In this work these simulations are mainly used for testing purposes to check the proper implementation of our Mie theory. However, in future work such an approach might be useful for the investigation of non-spherical or coupled particles. We employ our home-made \textsc{nanobem} solver~\cite{hohenester.cpc:22} which is based on a boundary element method (\textsc{bem}) approach that can be easily adapted to the nanospheres under study. Details of the approach and typical runtime examples are discussed at some length in~\cite{hohenester.cpc:22}. In the present work we use the \texttt{optforce} function of the \texttt{galerkin.solution} class in order to directly compute the optical forces. Results of our \textsc{bem} simulations will be presented in the next section.
\section{Results}\label{sec:results}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{optforce02new}
\caption{Optical force $F_x$, $F_z$ in the focus plane $z=0$ and for a nanosphere with a diameter of 500 nm. We compare results for different computation schemes. \texttt{Mie}($\ell_{\rm max}$) reports results for Mie theory with the cutoff number $\ell_{\rm max}$ for the angular degree and for the incoming fields computed within the paraxial approximation given in Appendix~\ref{sec:LG}. \texttt{farfield} gives the results for the approach of~\cite{kiselev:14} where the fields are matched in the far field; for details see text. \texttt{BEM} reports results derived with our \textsc{nanobem} Maxwell solver based on a boundary element method approach. For the sphere discretization we use 796 boundary elements; for details see~\cite{hohenester.cpc:22}. The region shaded in gray reports the intensity of the Laguerre-Gauss laser beam in arbitrary units. As apparent from the figure, the optical force in the propagation direction $F_z(x)$ directly follows the intensity profile.}
\label{fig:optforce02}
\end{figure}
Using the methodology developed in the previous section, we performed simulations with the same parameters as previously used in~\cite{simic:22}. We consider a Laguerre-Gaussian beam with a topological charge of $m = 2$, a beam waist of $w_0=4.78$ $\mu$m for the fundamental Gaussian beam, a wavelength of $\lambda=532$ nm, and a power of 1.65 W. For details of the incoming laser fields see Appendix~\ref{sec:LG}. The fluid velocity is set to $v_{\rm fluid}=0.3$ mm/s and we use material parameters representative of water, namely a dynamic viscosity of $\eta=9.544 \times 10^{-4}$~Pa\,s and a refractive index of $n_b = 1.33$. The refractive index of the nanospheres is set to $n=1.59$, a value representative of polystyrene, unless noted otherwise.
Figure~\ref{fig:optforce02} reports results for the optical force in the focus region. The force $F_z$ in the longitudinal direction is largest at the intensity maxima of the vortex beam, see Fig.~\ref{fig:setup}. There the sphere is pushed in the positive $z$ direction, leading to the velocity enhancements to be discussed below. The force $F_x$ in the transverse direction leads to trapping along $x$, and vanishes at the trapping positions $\pm w_0$, where the intensity and $F_z$ are largest. Additionally, there is an unstable equilibrium position at $x=0$ where no force is present because of the ring-like intensity profile of the vortex beam. In the figure we compare different computation schemes, namely Mie theory with different cutoff numbers for the angular degree, the determination of the incoming Mie coefficients using either Eq.~\eqref{eq:mieinc2} or the scheme presented in~\cite{kiselev:14}, and a fully numerical approach based on the boundary element method. All schemes give indistinguishable results, thus demonstrating the accuracy and robustness of our approach.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{optforce03new}
\caption{Absolute values of the incoming and scattering Mie coefficients, and of the force $F_z$, as a function of angular degree $\ell$. We consider a sphere with 1000 nm diameter at the trapping position in the focus plane. For the incoming Mie coefficients we plot $\sum_m\left(\left|a_{\ell,m}^{\rm inc}\right|+\left|b_{\ell,m}^{\rm inc}\right|\right)$, with a similar expression for the scattering coefficients. The contributions are scaled such that the sum of the scattering coefficients gives one. For the optical force, we report the increments $|F_z(\ell)-F_z(\ell-1)|$ for different degrees $\ell$. The force contributions are scaled such that the sum gives one.}
\label{fig:optforce03}
\end{figure}
Figure~\ref{fig:optforce03} shows, as a function of the angular degree $\ell$, the absolute values of the incoming and scattered Mie coefficients for a nanosphere with 1000 nm diameter, which is trapped in the focus plane. With increasing $\ell$ the incoming coefficients increase, whereas the Mie coefficients of Eq.~\eqref{eq:miecoeffs} decrease (not shown). The scattering coefficients of Eq.~\eqref{eq:abinc} are the product of the incoming and Mie coefficients; they have a maximum at $\ell=6$ for the diameter under investigation, and then drop rapidly. A similar behavior is observed for the optical force $F_z$; the explicit expressions are given in Appendix~\ref{sec:mie}. In what follows, we choose a conservative cutoff number $\ell_{\rm max}=30$ for the angular degree, which provides a good compromise between fast simulations and highly accurate results.
\begin{figure*}
\centerline{\includegraphics[width=1.85\columnwidth]{trajectory01}}
\caption{Trajectories and velocities for nanospheres with different diameters and for laser beams (a--c) with and (a*--c*) without an optical vortex. In each panel, we report selected trajectories in (1) the $xz$ and (2) the $xy$ plane, and (3) the nanoparticle velocities as a function of propagation length $z$. The colors of the line segments scale with the total scattering power of the spheres, given in arbitrary units with the colorbar reported in panel (4). Trapped particles scatter more light and can be observed more easily.}
\label{fig:trajectory01}
\end{figure*}
Figure~\ref{fig:trajectory01} shows results for the nanosphere trajectories as obtained with the four-step model introduced in Sec.~\ref{sec:fourstep}. We compare laser excitations (a--c) with and (a*--c*) without an optical vortex, as well as sphere diameters of (a) 250, (b) 500, and (c) 1000 nm. Let us start by analyzing the sub-figures of the various panels in slightly more detail. In (1,2) we show selected trajectories. Initially, the spheres are located at positions $(x,0,z_0)$ sufficiently far away from the focus ($z=0$), in a region where the optical forces are weak and can be neglected. The nanoparticles are then transported through the fluid into regions of larger field strength, where some of them become trapped in the transverse directions. The velocity changes of the trapped nanospheres in the laser propagation direction $z$ are shown in (3). The color of the trajectories and velocities corresponds to the scattering power of Eq.~\eqref{eq:psca}, see (4) for the color code in arbitrary units. It is apparent that trapped particles scatter more light and appear significantly brighter in an imaging system. In the focus region, the scattered power of the trapped spheres with a diameter of 250 nm is at least three orders of magnitude larger than that of the untrapped ones, and at least five orders of magnitude for the larger spheres. Additionally, only the trapped particles experience noticeable velocity changes. The red dots in (1) indicate those particles which are trapped in the focus plane $z=0$. As can be seen, some spheres become trapped after the focus plane.
When comparing the results for different sphere diameters in panels (a--c) of Fig.~\ref{fig:trajectory01}, we observe that with increasing diameter (i) more particles become trapped and (ii) the trapped particles experience larger velocity enhancements. This can be attributed to the larger optical forces for larger nanoparticles. We also find that (iii) the trajectories of all trapped particles are practically indistinguishable, and (iv) the deflection of the particles out of the $xz$-plane increases with increasing diameters [see panels (2)]. This is due to the orbital angular momentum transferred from the vortex laser to the nanospheres. Finally, (v) also the scattering power increases with increasing diameter. All these observations are supported by the experimental findings reported in~\cite{simic:22}, and suggest a dynamics where the nanospheres first become trapped in the transverse directions, and then propagate along the intensity maxima of the focused laser in the presence of almost identical optical and fluidic forces through the focus region. Note that in typical experiments the nanoparticles initially do not propagate in a single plane but are randomly distributed; correspondingly, they are also randomly distributed in the focus region around the circular intensity maximum distribution of the vortex beam. This leads to the aforementioned suppression of collisions and blockage in comparison to laser excitations with an intensity maximum on the optical axis.
To make this point more explicit, in panels (a*--c*) we report results for a Laguerre-Gauss excitation with zero topological charge, $m=0$, that is, for an excitation without \textsc{oam}. The trajectories are similar to the previous ones, with the exception of the larger velocity enhancements attributed to the higher field strengths of the focused laser without a vortex. Additionally, we observe (2) that all particle trajectories are bound to the $xz$-plane because of the missing \textsc{oam}. Owing to the laser intensity distribution that has a maximum at the $z$-axis for $m=0$, all trajectories are located on the $z$-axis around the focus region, thus leading to particle collisions and blockage.
In what follows, we investigate the ability of \textsc{of2}i to infer the size and material composition of the nanospheres from the observed velocity changes. We here only discuss the impact of these parameters and leave the inverse problem, namely the determination of size, material, and possibly geometry, to future work.
\subsection{Refractive index of nanospheres}
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{velfocus}}
\caption{Maximal velocity in the focus region for nanospheres with different diameters and refractive indices (see inset). For larger refractive indices the velocity increases non-monotonically because of Mie resonances supported by the spheres.}\label{fig:velfocus}
\end{figure}
Figure~\ref{fig:velfocus} shows the maximal velocity in the focus region for dielectric nanospheres with different diameters and refractive indices (see inset). In all simulations we use water with a refractive index of $n_b=1.33$ for the embedding medium. For the smallest refractive indices of the nanospheres, say for $n\le 1.6$, the maximal velocity increases monotonically with increasing diameter, at least for the sphere sizes under investigation. In this regime it is thus possible to directly correlate the observed velocity enhancement with the particle diameter, as we have previously done in~\cite{simic:22}. Things change somewhat for larger nanospheres, where the optical response is governed by Mie resonances supported by the spherical nanoparticles. Correspondingly, beyond a certain cutoff diameter the maximal velocity no longer simply increases with increasing diameter, but exhibits a more complicated resonance behavior.
For nanoparticles with larger refractive indices and/or larger particles in the micrometre range, it thus might be useful in general to analyze more carefully the light scattered off the nanoparticles. In Fig.~\ref{fig:emission} we show the emission pattern of nanospheres with different diameters and refractive indices. With increasing diameter the emission pattern sharpens into the forward direction (note that in the plots we use a logarithmic scale); however, at the same time the emission into other directions becomes strongly structured and provides detailed information about the nanosphere properties. Using Fraunhofer diffraction and Mie scattering approaches, the characterization of particle sizes upon knowledge of the refractive indices of the nanoparticle and the embedding medium is a well-established technique~\cite{boer:87}. A more refined modeling of imaging within \textsc{of2}i (steps 5 and 6) would be needed to address the question of whether the viable nanoparticle parameters can be uniquely extracted using this additional information.
\subsection{Active volume}
When inferring the particle number distribution from \textsc{of2}i measurements, we have to account for the fact that larger particles become trapped more easily than smaller ones, owing to the increase of optical forces with increasing particle size. See for instance the red dots in panels (1) of Fig.~\ref{fig:trajectory01} for those particles which are trapped in the focus plane. Recall that in our simulations we start with an initial position $(x,0,z_0)$ for the particles, where the propagation distance $z_0$ is located in a region where the optical forces are negligible (we use $z_0=-1$~mm). Subsequently, the particles are transported by the fluid into regions of larger field intensities, where they become trapped and experience the velocity changes previously discussed.
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{emission}}
\caption{Normalized emission pattern for nanospheres with diameters of (a) 250, (b) 500, (c) 1000, and (d) 2000 nm. We use a logarithmic scale in the radial direction and refractive indices of 1.4, 1.6, 1.8, 2.0, with the same color code as in Fig.~\ref{fig:velfocus}. All plots are scaled to the respective maxima of the emission patterns. In all cases the nanospheres are located in the focus plane at the trapping position around the intensity maxima of the vortex laser.}\label{fig:emission}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=0.9\columnwidth]{xcut}}
\caption{Velocity in focus region for different transverse starting positions $x$ and diameters, as well as for different refractive indices. In all simulations the particles start at $(x,0,z_0)$ in a region where the optical forces are negligible. Particles become either trapped or not (gray region), where all trapped particles are transported with the same velocity through the focus region. With increasing sphere diameter more particles become trapped, owing to the increase of optical forces for the larger particles.}\label{fig:xcut}
\end{figure}
In Figure~\ref{fig:xcut} we show the velocities in the focus plane as a function of transverse starting position $x$ and sphere diameter, and for different refractive indices. We observe that particles become either trapped or not, and for a given diameter and refractive index all trapped particles are transported with the same velocity through the focus plane. This observation agrees with the velocity curves shown in panel (3) of Fig.~\ref{fig:trajectory01}. When measuring particle size distributions one has to account for the different cutoff parameters for trapping $x_{\rm cut}(R,n)$, which depend on particle radius $R$ and refractive index $n$. For starting positions $x\le x_{\rm cut}$ particles are trapped in the focus plane; for $x> x_{\rm cut}$ the optical forces are too weak for trapping. As previously discussed in~\cite{simic:22}, one can define an active volume
\begin{equation}
V_{\rm active}(R,n)=\Big[\pi x_{\rm cut}^2(R,n)\Big]v_{\rm fluid}t_{\rm meas}\,,
\end{equation}
where the term in brackets is the cross section in the transverse direction, and $v_{\rm fluid}t_{\rm meas}$ is the size of the sampling volume along the propagation direction in the measurement time $t_{\rm meas}$. The active volume corrects for the fact that larger particles are trapped more easily and are observed more frequently in comparison to smaller particles. For $N_{\rm meas}$ velocity counts within $t_{\rm meas}$, the particle density is then proportional to $\nicefrac{N_{\rm meas}}{V_{\rm active}}$.
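Expressed as code, the density estimate amounts to a one-liner (a sketch with our own naming, up to the overall proportionality constant left open above):
\begin{verbatim}
import numpy as np

def density_estimate(N_meas, x_cut, v_fluid, t_meas):
    # active volume of the trapping region sampled during the measurement
    V_active = np.pi * x_cut**2 * v_fluid * t_meas
    return N_meas / V_active    # proportional to the particle density
\end{verbatim}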
\subsection{Stochastic forces}\label{sec:stochatsic}
\begin{figure}[t]
\centerline{\includegraphics[width=1.05\columnwidth]{brownian}}
\caption{Velocities (left) and trajectories (right) as a function of propagation distance, with (thick lines) and without (thin lines) consideration of Brownian motion and for different sphere diameters (see inset). We use different colors for the different starting positions of the spheres. For the Brownian motion the velocity $v=\nicefrac{\Delta z}{\Delta t}$ is defined as the propagation distance $\Delta z$ travelled by a particle in the time interval $\Delta t=0.01$ s, divided by that interval. $r=\sqrt{x^2+y^2}$ is the transverse distance. For the smallest diameter shown in panel (a) the stochastic forces are comparable in strength to the optical forces. For the larger diameters shown in panels (b--d) only the positions where the particles become trapped are somewhat altered by the Brownian motion. Once they are trapped, they essentially follow the trajectories previously discussed for simulations without stochastic forces.}\label{fig:brownian}
\end{figure}
We finally comment on the influence of stochastic forces and Brownian motion, which are known to have an important impact for optical tweezers and related experiments. The necessity for considering such forces was first noticed in the groundbreaking paper of Albert Einstein on Brownian motion~\cite{einstein:05}. In our implementation of a stochastic force term we closely follow Ref.~\cite{bui:17}. We first compute the drift velocity $\bm v$ using Eq.~\eqref{eq:vsteady} and then update the position according to~\cite[Eq.~(18)]{bui:17}
\begin{equation}
\bm r(t+\Delta t)\approx \bm r(t)+\bm v\Delta t+\left(\frac{k_BT\Delta t}{3\pi\eta R}\right)^{\frac 12}\bm W\,,
\end{equation}
where $\Delta t$ is the computational timestep, $k_B$ is Boltzmann's constant, $T$ is the temperature, $R$ is the sphere radius, and $\bm W=(W_x,W_y,W_z)$ is a vector of normally distributed random numbers with a variance equal to one, as obtained for instance by the \textsc{matlab} function \texttt{randn}. The time step $\Delta t$ has to be chosen sufficiently small such that the optical forces at $\bm r(t)$ and $\bm r(t+\Delta t)$ do not differ significantly. In all our simulations we used a value of $\Delta t=1$ ms and a temperature of $T=293$ K.
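A minimal Python sketch of this update step, assuming SI units throughout and with variable names of our own choosing, reads:
\begin{verbatim}
import numpy as np

def brownian_step(r, v, dt, R, eta, T, rng=None):
    # single position update r(t) -> r(t + dt) including the stochastic kick
    rng = rng or np.random.default_rng()
    kB = 1.380649e-23   # Boltzmann constant in J/K
    sigma = np.sqrt(kB * T * dt / (3.0 * np.pi * eta * R))
    return np.asarray(r) + np.asarray(v) * dt + sigma * rng.standard_normal(3)
\end{verbatim}
For the parameters used here and a sphere with 500 nm diameter one would call, e.g., \texttt{brownian\_step(r, v, 1e-3, 250e-9, 9.544e-4, 293)}.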
Figure~\ref{fig:brownian} shows results for simulations including stochastic forces. Let us first concentrate on the results for spheres with a sufficiently large diameter, say panels (b--d). In contrast to simulations without stochastic forces (thin lines), the velocity curves exhibit fluctuations that decrease with increasing diameter, and the motion in the transverse direction is altered in regions of weak optical forces. Once particles are trapped, they follow along the intensity maxima of the laser along trajectories that are very similar to the ones we have previously discussed. Since in \textsc{of2}i experiments predominantly the trapped particles are observed, stochastic forces typically have no crucial impact on the observed particle trajectories. Things are somewhat different for the smallest spheres, where the stochastic forces are comparable in strength to the optical forces, and trapping can only be observed close to the focus region. Such behavior is not found in experiment, where spheres with a diameter of 200 nm are clearly trapped. We attribute this disagreement to our simplified choice of the hydrodynamic radius in Eq.~\eqref{eq:vsteady}, and will analyze this point in more detail elsewhere.
\section{Summary and Outlook}\label{sec:summary}
To summarize, we have presented a four-step model for the theoretical description of \textsc{of2}i, which accounts for the nanoparticle propagation in a microfluidic channel in the presence of laser excitation. The approach is currently based on Mie theory but can be extended with moderate computational overhead to full Maxwell solvers, using for instance the boundary element method, in order to simulate non-spherical or coupled particles. We have investigated the influence of particle size, refractive index, and Brownian motion on the observed trajectories and velocity enhancements. Quite generally, our results support the unique measurement capabilities of \textsc{of2}i for single-particle tracking with high throughput.
\textsc{of2}i measurement results provide additional information such as the emission pattern, which might be used in future work to extract further properties of the particles to be analyzed. With this additional information we might overcome the difficulties regarding Mie resonances, in particular for particles with larger refractive indices, which currently lead to a problematic non-monotonic relation between sphere diameter and velocity enhancement. It will also be interesting to see how our conclusions become modified for non-spherical particles or particles with no sharp interfaces.
From the experimental side, we plan to investigate shorter Rayleigh ranges where smaller particles can be trapped more easily, as well as different polarization states of the incoming laser. For small particles the issue regarding geometric and hydrodynamic radius should be addressed with greater care. We also expect that for absorbing particles heating effects and the resulting photophoretic forces must be taken into account. This leaves us with a relatively large to-do list for the future. However, the four-step model introduced in this work provides us with a solid and versatile machinery for future investigations.
\section*{Acknowledgements}
This work was supported in part by the Austrian Research Promotion Agency (FFG) through project AoDiSys 891714, the European Commission (EC) through the projects NanoPAT (H2020-NMBP-TO-IND-2018-2020, Grant Agreement number: 862583) and MOZART (HORIZON-CL4-2021-RESILIENCE-01, Grant Agreement Number: 101058450). We thank the whole
nano-medicine workgroup at the Gottfried Schatz Research Center for their cooperation and most helpful discussions.
\begin{appendix}
\section{Fields of Laguerre-Gauss beam}\label{sec:LG}
The electromagnetic fields for a Laguerre-Gauss laser beam within the paraxial approximation are taken from~\cite{song:20} and are repeated in this Appendix for the sake of completeness. Let $m$ be the topological charge of the vortex beam and $w_0$ the beam waist. The radial index is set to $n=0$ throughout. The wavenumber of the embedding medium is $k$. We introduce the Rayleigh range
\begin{equation}
z_R=\frac 12 kw_0^2
\end{equation}
and the $z$-dependent waist
\begin{equation}
w(z)=w_0\sqrt{1+\zeta^2}\,,
\end{equation}
where $\zeta=\frac z{z_R}$. We next define~\cite[Eq.~(3)]{song:20}
\begin{equation}
u_0=\frac 1{1+i\zeta}\exp\left[-\left(\frac r{w_0}\right)^2\frac 1{1+i\zeta}\right]
\end{equation}
together with
\begin{equation}
u_m=\left(\frac{\sqrt 2 r}{w(z)}\right)^m\exp\left[im\left(\phi-\tan^{-1}\zeta\right)\right]\,.
\end{equation}
The electric field is then given through~\cite[Eqs.~(35,37)]{song:20}
\begin{eqnarray}
E_x &=& Au_0u_me^{ikz}\\
E_z &=& \left(\frac{m(x+iy)}{kr^2}-\frac {ix}{iz-z_R}-\frac{4ix}{kw^2}\right)Au_0u_me^{ikz}\,.\nonumber
\end{eqnarray}
Here $A$ is the amplitude of the laser beam. Similarly, the magnetic field reads~\cite[Eqs.~(39,49)]{song:20}
\begin{eqnarray}
ZH_y &=& Au_0u_me^{ikz}\\
ZH_z &=& \left(\frac{m(iy-x)}{kr^2}-\frac {iy}{iz-z_R}-\frac{4iy}{kw^2}\right)Au_0u_me^{ikz}\,.\nonumber
\end{eqnarray}
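For completeness, a direct Python transcription of these paraxial fields may read as follows (function names are our own; for $m\geq1$ the expression should be evaluated off-axis, since the $r\rightarrow0$ limit of the $E_z$ term is removable but not handled explicitly here):
\begin{verbatim}
import numpy as np

def lg_fields(x, y, z, m, w0, k, A=1.0):
    zR = 0.5 * k * w0**2                    # Rayleigh range
    zeta = z / zR
    w = w0 * np.sqrt(1 + zeta**2)           # z-dependent waist
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    u0 = np.exp(-(r / w0)**2 / (1 + 1j * zeta)) / (1 + 1j * zeta)
    um = (np.sqrt(2) * r / w)**m * np.exp(1j * m * (phi - np.arctan(zeta)))
    base = A * u0 * um * np.exp(1j * k * z)
    Ex = base
    Ez = (m * (x + 1j * y) / (k * r**2)
          - 1j * x / (1j * z - zR) - 4j * x / (k * w**2)) * base
    return Ex, Ez              # Z*Hy equals Ex, Z*Hz follows analogously
\end{verbatim}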
\section{Optical forces within Mie theory}\label{sec:mie}
In this Appendix we give the expressions for the optical forces in terms of Mie coefficients~\cite{gutierrez-cuevas:18}. A few modifications arise due to the different notations adopted in this work. We first introduce the abbreviations
\begin{eqnarray*}
\Lambda^{(1)} &=& \frac 1{\ell+1}\sqrt{\frac{(\ell+m+2)(\ell+m+1)\ell(\ell+2)}
{(2\ell+1)(2\ell+3)}} \\
\Lambda^{(2)} &=& \frac 1{\ell+1}\sqrt{\frac{(\ell-m+2)(\ell-m+1)\ell(\ell+2)}
{(2\ell+1)(2\ell+3)}} \\
\Lambda^{(3)} &=& - \frac{\sqrt{(\ell+m+1)(\ell-m)}}{\ell(\ell+1)}
\end{eqnarray*}
as well as
\begin{eqnarray*}
\Lambda_z^{(1)} &=& \frac 1{\ell+1}\sqrt{\frac{(\ell-m+1)(\ell+m+1)\ell(\ell+2)}
{(2\ell+1)(2\ell+3)}}\\
\Lambda_z^{(2)} &=& \frac m{\ell(\ell+1)}\,.
\end{eqnarray*}
The expressions given in~\cite[Eq.~(29a)]{gutierrez-cuevas:18} can then be written in the compact form
\begin{eqnarray*}
f &=& \Lambda^{(1)}\left[2a_{\ell,m}^{\rm sca}a_{\ell+1,m+1}^{{\rm sca}\,*}+
a_{\ell,m}^{\rm inc}a_{\ell+1,m+1}^{{\rm sca}\,*}+
a_{\ell,m}^{\rm sca}a_{\ell+1,m+1}^{{\rm inc}\,*}\right] \\
&+& \Lambda^{(1)}\left[2b_{\ell,m}^{\rm sca}b_{\ell+1,m+1}^{{\rm sca}\,*}+
b_{\ell,m}^{\rm inc}b_{\ell+1,m+1}^{{\rm sca}\,*}+
b_{\ell,m}^{\rm sca}b_{\ell+1,m+1}^{{\rm inc}\,*}\right] \\
&+& \Lambda^{(2)}\left[2a_{\ell+1,m-1}^{\rm sca}a_{\ell,m}^{{\rm sca}\,*}+
a_{\ell+1,m-1}^{\rm inc}a_{\ell,m}^{{\rm sca}\,*}+
a_{\ell+1,m-1}^{\rm sca}a_{\ell,m}^{{\rm inc}\,*}\right] \\
&+& \Lambda^{(2)}\left[2b_{\ell+1,m-1}^{\rm sca}b_{\ell,m}^{{\rm sca}\,*}+
b_{\ell+1,m-1}^{\rm inc}b_{\ell,m}^{{\rm sca}\,*}+
b_{\ell+1,m-1}^{\rm sca}b_{\ell,m}^{{\rm inc}\,*}\right] \\
&+& \Lambda^{(3)}\left[2a_{\ell,m}^{\rm sca}b_{\ell,m+1}^{{\rm sca}\,*}+
a_{\ell,m}^{\rm inc}b_{\ell,m+1}^{{\rm sca}\,*}+
a_{\ell,m}^{\rm sca}b_{\ell,m+1}^{{\rm inc}\,*}\right] \\
&-& \Lambda^{(3)}\left[2b_{\ell,m}^{\rm sca}a_{\ell,m+1}^{{\rm sca}\,*}+
b_{\ell,m}^{\rm inc}a_{\ell,m+1}^{{\rm sca}\,*}+
b_{\ell,m}^{\rm sca}a_{\ell,m+1}^{{\rm inc}\,*}\right] \,.
\end{eqnarray*}
Similarly, we obtain~\cite[Eq.~(29b)]{gutierrez-cuevas:18}
\begin{eqnarray*}
f_z &=& \Lambda_z^{(1)}\left[2a_{\ell+1,m}^{\rm sca}a_{\ell,m}^{{\rm sca}\,*}+
a_{\ell+1,m}^{\rm inc}a_{\ell,m}^{{\rm sca}\,*}+
a_{\ell+1,m}^{\rm sca}a_{\ell,m}^{{\rm inc}\,*}\right] \\
&+& \Lambda_z^{(1)}\left[2b_{\ell+1,m}^{\rm sca}b_{\ell,m}^{{\rm sca}\,*}+
b_{\ell+1,m}^{\rm inc}b_{\ell,m}^{{\rm sca}\,*}+
b_{\ell+1,m}^{\rm sca}b_{\ell,m}^{{\rm inc}\,*}\right] \\
&+& \Lambda_z^{(2)}\left[2b_{\ell,m}^{\rm sca}a_{\ell,m}^{{\rm sca}\,*}+
b_{\ell,m}^{\rm inc}a_{\ell,m}^{{\rm sca}\,*}+
b_{\ell,m}^{\rm sca}a_{\ell,m}^{{\rm inc}\,*}\right] \,.
\end{eqnarray*}
With these expressions, the optical force becomes
\begin{equation}
\bm F_{\rm opt}=-\frac{\varepsilon_0}{2k^2}\left(
\frac 12 \mbox{Im}[f]\,\hat{\bm x}-\frac 12\mbox{Re}[f]\,\hat{\bm y}+\mbox{Im}[f_z]\,\hat{\bm z}
\right)\,.
\end{equation}
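As a cross-check when implementing these formulas, a direct Python transcription of the prefactors and of the final force assembly may be useful (function names are ours; \texttt{f} and \texttt{fz} denote the sums over all $\ell$, $m$ of the bilinear expressions above, and the prefactors should only be evaluated within the physical range $|m|\leq\ell$):
\begin{verbatim}
import numpy as np

def prefactors(ell, m):
    # Lambda^{(1,2,3)} and Lambda_z^{(1,2)} as defined above
    L1 = np.sqrt((ell+m+2)*(ell+m+1)*ell*(ell+2)
                 / ((2*ell+1)*(2*ell+3))) / (ell+1)
    L2 = np.sqrt((ell-m+2)*(ell-m+1)*ell*(ell+2)
                 / ((2*ell+1)*(2*ell+3))) / (ell+1)
    L3 = -np.sqrt((ell+m+1)*(ell-m)) / (ell*(ell+1))
    Lz1 = np.sqrt((ell-m+1)*(ell+m+1)*ell*(ell+2)
                  / ((2*ell+1)*(2*ell+3))) / (ell+1)
    Lz2 = m / (ell*(ell+1))
    return L1, L2, L3, Lz1, Lz2

def optical_force(f, fz, k, eps0=8.8541878128e-12):
    # F_opt = -(eps0 / 2 k^2) * (Im[f]/2, -Re[f]/2, Im[fz])
    return -(eps0 / (2*k**2)) * np.array(
        [0.5*np.imag(f), -0.5*np.real(f), np.imag(fz)])
\end{verbatim}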
In setting up our Mie code with the above formulas we found it particularly useful to additionally perform \textsc{bem} simulations for excitations with a single $\ell$, $m$ term, and to compare the \textsc{bem} and Mie results. With this comparison it is then relatively easy to check the proper implementation of the various contributions governing $\bm F_{\rm opt}$.
\end{appendix}
\section{Introduction}
Let $(M,g)$ be a complete, boundaryless\footnote{We assume that $M$ has no boundary for the sake of simplicity; the method presented here can be adapted to more general manifolds with boundary provided that $S$ is compactly contained in the interior of $M$.
If this is not the case, such as in the classical heat content setting as in \cite{Vandenberg1994},
it should be possible to obtain similar results by modifying the geometrical optics construction used.
}, oriented Riemannian manifold with Laplace--Bel\-tra\-mi operator $\Delta$,
and volume $\,\mathrm{d}V$. On a codimension-$1$ submanifold of $M$, we write $\,\mathrm{d}A$ for the induced surface (hyper)-area form.
The \emph{heat semigroup} $T_t \coloneqq \exp(t\Delta)$ acting on $L^2(M,\,\mathrm{d}V)$ is well-defined ($\Delta$ is essentially self-adjoint on $C^\infty_c(M)$ \cite{Chernoff1973}) and its behaviour as $t \rightarrow 0^+$ has been extensively investigated in the literature.
Specifically, for a set $S \subset M$, the \emph{heat content} of the form $\Omega_{S,f}(t) \coloneqq \int_S T_t(f \mathds 1_S)\,\mathrm{d}V$, $f \in C^\infty(M)$, has recently
received much attention; see, for instance, \cite{Miranda2007,Vandenberg2015,Vandenberg2018} and the references therein.
Let us briefly recall some known results.
On $\mathbb R^n$, sets $S$ of \emph{finite perimeter} $P(S)$ are characterized by \cite[Thm.~3.3 ]{Miranda2007}
\begin{equation}\label{eq:res1}
\lim\limits_{t \rightarrow 0^+ }\sqrt{\frac{\pi}{t}}\Big(\Omega_{S,\mathds 1_M}(0) - \Omega_{S,\mathds 1_M}(t)\Big) = P(S)\,.
\end{equation}
Extensions of this idea to abstract metric spaces are given in \cite{Marola2016}.
In the setting of compact manifolds $M$ (or $M = \mathbb R^n$) and $S$ a full-dimensional submanifold with smooth boundary $\partial S$, the authors of \cite{Vandenberg2015} show that
\begin{equation}\label{eq:res2}
\Omega_{S,f}(t) = \sum_{j=0}^\infty \beta_j t^\frac{j}{2},\quad t\rightarrow 0^+,
\end{equation}
where the coefficients $\beta_j$ depend on $S$, $f$ and the geometry of $M$. The setting of \cite{Vandenberg2015} is more general; amongst other things, it includes $f$ which may have singularities. Some of the coefficients obtained in \cite[Corollary~1.7]{Vandenberg2015} are
\begin{equation*}
\beta_0 = \int_S f \,\mathrm{d}V\,,\qquad \beta_1 = -\frac{1}{\sqrt \pi}\int_{\partial S} f \,\mathrm{d}A
\,,\qquad\beta_2 = \frac{1}{2}\int_{S} \Delta f \,\mathrm{d}V\,.
\end{equation*}
Extensions to some non-compact manifolds $M$ and certain non-compact $S$ are in \cite{Vandenberg2018}.
Both \cref{eq:res1,eq:res2} are proven with significant technical effort, yielding strong results. For example,
in \cite{Miranda2007}, explicit knowledge of the fundamental solution of the heat equation is used
to obtain \cref{eq:res1} for $C^{1,1}$-smooth $\partial S$, after which geometric measure theory is used. Similarly,
\cite{Vandenberg2015} requires pseudo-differential calculus and invariance theory.
Our aim is to show that slightly weaker
results can be obtained with considerably less technical effort.
In contrast to \cite{Miranda2007}, we treat only compact $S$ with smooth boundary,
and do not allow $f$ to have singularities like \cite{Vandenberg2015} does.
On the other hand, we put no further restrictions than completeness on $M$.
The proof presented here is simple, comparatively short, and provides an alternative differential geometric/functional analytic point of view to questions regarding heat content.
Moreover, this approach is readily extended to some other PDEs including the semi-group generated by $\Delta^m$.
Observe that $T_t = k(\sqrt{-t\Delta})$ with $k(x) = \exp(-x^2)$. We allow $k$ to be an arbitrary even Schwartz function, with $\Omega_{S,f}(t) = \int_{S} k(\sqrt{-t\Delta})(f \mathds 1_S)\,\mathrm{d}V$ and will prove:
\begin{theorem}\label{thm:thm1}
Let $M$ be a complete Riemannian manifold with Laplace-Beltrami operator $\Delta$, Riemannian volume $\,\mathrm{d}V$ and induced (hyper) area form $\,\mathrm{d}A$.
Let $S \subset M$ be a compact full-dimensional submanifold with smooth boundary. For $f \in C^\infty(M)$ and $N \in \mathbb N$,
\begin{equation*}
\Omega_{S,f}(t) = \sum_{j=0}^N \beta_j t^{\frac{j}{2}} + o(t^\frac{N}{2})\,,\quad t \rightarrow 0^+\,,
\end{equation*}
for constants $(\beta_j)_{j=0}^N$ described further in the next theorem.
\end{theorem}
With the $j$-th derivative $k^{(j)}$ (for $j \in \mathbb N_0$), let $r_{j} \coloneqq (-1)^{j/2} k^{(j)}(0)$ for $j$ even and $r_{j} \coloneqq (-1)^{(j-1)/2} \int_0^\infty \frac{2k^{(j)}(s)}{-\pi s}\, \mathrm d s$ for $j$ odd. Let $\varphi$ locally be the signed distance function (see also \cite[section 3.2.2]{Petersen2016}) to $\partial S$ with $S = \varphi^{-1}([0,\infty))$, and denote by $\nabla$ and $\cdot$ the gradient and (metric) inner product respectively. The vector field $\nu \coloneqq -\nabla \varphi$ is the outer unit normal at $\partial S$.
\begin{theorem}\label{thm:thm2}
The coefficients of \cref{thm:thm1} satisfy $\beta_0 = r_0 \int_S f \,\mathrm{d}V$ and $\beta_1 = -\frac12 r_1 \int_{\partial S} f \,\mathrm{d}A$. For even $j \in \mathbb N_{\geq 2}$,
\begin{equation*}
\beta_{j} = \frac{r_j}{j!}
\int_S\frac12 \Delta^{j/2} f\,\mathrm{d}V\,.
\end{equation*}
Moreover, given the Lie-derivative $\mathcal L_\nu$ with respect to $\nu$,
\begin{align*}
\beta_3 = \frac{r_3}{2\cdot3!} \int_{\partial S}\mathcal L_\nu (-\mathcal L_\nu + \frac12\Delta\varphi) f - \frac12 \Delta f + \frac12 (-\mathcal L_\nu + \frac12\Delta\varphi)^2 f\,\mathrm{d}A\,,
\end{align*}
similar expressions can be found for larger odd values of $j$ (see \cref{sec:higher_coeffs}).
\end{theorem}
The properties of the signed distance function $\varphi$ may be used to express terms appearing in \Cref{thm:thm2} using other quantities.
For example, its Hessian $\nabla^2 \varphi$ is the second fundamental form on the tangent space of $\partial S$ \cite[ch.~3]{Gray2004},
and thus $\frac12\Delta\varphi$ is the mean curvature.
Our approach to prove \cref{thm:thm1,thm:thm2} is to combine three well-known facts:
\begin{enumerate}[(A)]
\item The short-time behaviour of the heat flow is related to the short-time behaviour of the wave equation (cf.~\cite{Cheeger1982}).
\item The short-time behaviour of the wave equation with discontinuous initial data is related to the short-time behaviour of the eikonal equation (cf.~`geometrical optics' and the progressing wave expansion \cite{Taylor2011}).
\item The short-time behaviour of the wave and eikonal equations with initial data $f \mathds 1_S$ is directly related to the geometry of $M$ near $\partial S$.
\end{enumerate}
Though points (A)-(C) are well known in the literature, they have (to the best of our knowledge) not been applied
to the study of heat content so far.
A significant portion of
(C) will rest on an application of the Reynolds transport theorem. Here,
denote by $\Phi^s$ the time-$s$ flow of the vector field $\nu = - \nabla \varphi$.
For small $s$, the (half) tubular neighborhood
\begin{equation}\label{eq:nghb}
S^{-s} \coloneqq \{x \in M \setminus S : \mathrm{dist}(x,\partial S) \leq s\}
\end{equation}
satisfies $S \cup S^{-s} = \Phi^s(S)$.
For $a \in C^\infty((-\varepsilon, \varepsilon) \times M)$, by \cite[Ch.~V, Prop.~5.2]{Lang1995},
\begin{align}\nonumber
\frac{\mathrm d}{\mathrm ds}\left.\int_{S^{-s}} a(s,\cdot)\,\mathrm{d}V\right\vert_{s=0}
&=\frac{\mathrm d}{\mathrm ds}\left(\left.\int_{S^{-s} \cup S} a(s,\cdot)\,\mathrm{d}V- \int_{S} a(s,\cdot)\,\mathrm{d}V\right)\right\vert_{s=0} \\
&=\int_{S} \mathcal L_{\tilde \nu} [ a(0,\cdot) \,\mathrm{d}V]= \int_{\partial S} a(0,\cdot)\,\mathrm{d}A\,.\label{eq:mainformula}
\end{align}
The last equation is a consequence of Cartan's magic formula and Stokes' theorem,
where we use that $\,\mathrm{d}V(\nu, \cdot) = \,\mathrm{d}A(\cdot)$ on $\partial S$.
\section{Proof for $\beta_0,\beta_1$}
By Fourier theory (for non-Gaussian $k$, the formulae must be adapted),
\begin{equation*}\label{eq:ft}
k(t) = \exp(- t^2) = \int_0^\infty \hat k(s) \cos(ts) \,\mathrm{d}s \quad
\mathrm{with}\quad \hat k(s) \coloneqq \frac{1}{\sqrt{\pi}}\exp\left(\frac{-s^2}{4 }\right)\,.
\end{equation*}
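This is the standard Gaussian Fourier-transform evaluation; indeed, by symmetry and completing the square,
\begin{equation*}
\int_0^\infty \hat k(s) \cos(ts)\,\mathrm ds = \frac{1}{2\sqrt\pi}\int_{-\infty}^{\infty} e^{-s^2/4}e^{its}\,\mathrm ds = e^{-t^2}\,.
\end{equation*}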
On the operator level, this yields the well-known formula \cite[section~6.2]{Taylor2011}
\begin{equation}\label{eq:transmutation}
T_t = \exp(t\Delta) = \int_0^\infty \hat k(s) \cos( s \sqrt{-t\Delta}) \,\mathrm ds\,.
\end{equation}
The operator $W_s \coloneqq \cos(s\sqrt{-\Delta})$ is the time-$s$ solution operator for the wave equation with zero initial velocity,
in particular $u(s,x) \coloneqq (W^s f \mathds 1_S)(x)$ (weakly) satisfies $(\partial_t^2 - \Delta)u = 0$.
Let $\Bra{\cdot,\cdot}$ denote the $L^2(M,\,\mathrm{d}V)$ inner product. Using \cref{eq:transmutation},
\begin{equation*}\label{eq:transmutation2}
\Bra{T_t f \mathds 1_S, \mathds 1_S} = \int_0^\infty \hat k(s)\Bra{W_{s\sqrt t}f\mathds 1_S, \mathds 1_{S}}\, \mathrm{d} s\,.
\end{equation*}
Similar reasoning has been used to great effect in
\cite{Cheeger1982} to derive heat-kernel bounds by making use of the \emph{finite propagation speed} of the wave equation.
As in \cite{Cheeger1982}, finite propagation speed yields for $s \geq 0$ that
$\Bra{W_{s} f \mathds 1_S, \mathds 1_{M \setminus S}} = \Bra{W_{s} f \mathds 1_{S^{s}}, \mathds 1_{S^{-s}}}$, where $S^{s} \coloneqq (M \setminus S)^{-s}$ is defined like \cref{eq:nghb}. Even if $\mathds 1_{M \setminus S} \notin L^2(M,\,\mathrm{d}V)$, we have just seen that the inner product $\Bra{W_s f \mathds 1_S, \mathds 1_{M \setminus S}}$ is nevertheless well-defined.
In \cite{Cheeger1982}, it is further observed that $\norm{W_s} \leq 1$. Using the Cauchy-Schwarz inequality and assuming $f = \mathds 1_M$, \cref{eq:mainformula} yields
\begin{equation}\label{eq:csi}
h(s) \coloneqq \Bra{W_{s} f \mathds 1_{S^{s}}, \mathds 1_{S^{-s}}} \leq
\norm{\mathds 1_{S^s}}_2 \norm{\mathds 1_{S^{-s}}}_2
\leq s \int_{\partial S} \,\mathrm{d}A + o(s), \quad s \rightarrow 0^+.
\end{equation}
In addition, $|\Bra{W_s f \mathds 1_S, \mathds 1_{S}}| \leq \norm{f\mathds 1_S}_2\norm{\mathds 1_S}_2$ for all $s\geq 0$, in particular as $s \rightarrow \infty$.
We conclude, after some calculations (cf.~\cref{lemma:heatkernellemma} below), that
\begin{align}\nonumber
\Bra{T_t \mathds 1_S, \mathds 1_{S}} &= \int_0^\infty\hat k(s) \left( \Bra{W_{s\sqrt t } \mathds 1_S, \mathds 1_M} - \Bra{W_{s \sqrt t} \mathds 1_S, \mathds 1_{M\setminus S}}\right)\,\mathrm{d} s\\
\label{eq:crude}
&= \Bra{\mathds 1_S, \mathds 1_M} - \int_0^\infty \hat k(s) h(s\sqrt t)\, \,\mathrm d s\\
& \geq \int_S \,\mathrm{d}V - 2\sqrt{\frac{t}{\pi}} \int_{\partial S} \,\mathrm{d}A + o(\sqrt t), \qquad t \rightarrow 0^+.\nonumber
\end{align}
This is weaker than the desired estimate, and restricts to $f = \mathds 1_M$. The problem is that the estimates in \cref{eq:csi} are too crude.
To improve them, we instead approximate the solution $u$ to the wave equation with geometrical optics, using the ``progressing wave'' construction described in \cite[section 6.6]{Taylor2011}, some details of which we recall here.
The basic idea is that $u$ is in general discontinuous, with an outward- and an inward-moving discontinuity
given by the zero level-set of functions $\varphi^+$ and $\varphi^-$ respectively.
The functions $\varphi^\pm$ satisfy the eikonal equation
$\partial_t \varphi^\pm = \pm|\nabla \varphi^\pm|$ with initial value $\varphi^\pm(0,\cdot) = \varphi(\cdot)$.
Equivalently, using the (nonlinear) operator $Ew \coloneqq (\partial_t w )^2 - |\nabla w|^2$, the
functions $\varphi^\pm$ satisfy $E(\varphi^\pm)=0$.
Our analysis is greatly simplified by choosing the initial $\varphi$ to (locally) be the signed distance function to $\partial S$.
The eikonal equation is then $\partial_t \varphi^\pm = \pm|\nabla \varphi| = \pm|-\nu|= \pm1$, i.e.~$\varphi^\pm(t,x) = \varphi(x) \pm t$.
The progressing wave construction further makes use of two (locally existing and smooth) solutions $a^\pm_0$ to the first-order
transport equations $\pm\partial_t a^\pm_0(t,\cdot) + \nu \cdot \nabla a^\pm_0(t,x) = \frac12 a^\pm_0\Delta\varphi^\pm$.
Observe that with the Heaviside function
$\theta \colon \mathbb R \rightarrow \mathbb R$, and
$\Box \coloneqq \partial_t^2 - \Delta$,
the expression $\Box(a_0^\pm \theta(\varphi^\pm))$ is given by
\begin{align}
(\theta''(\varphi^\pm)E\varphi^\pm + \Box\varphi^\pm\theta'(\varphi^\pm))a_0^\pm +
2\left(\partial_t a_0^\pm\partial_t \varphi^\pm - \nabla a_0^\pm \cdot \nabla \varphi^\pm\right)\theta'(\varphi^\pm) + \Box a_0^\pm \theta(\varphi^\pm).\nonumber
\end{align}
The functions $\varphi^\pm$ and $a_0^\pm$ have been chosen so the above simplifies to
\begin{align}
\label{eq:progressing_wave_general}
\Box(a_0^\pm\theta(\varphi^\pm))&= 2\left(\pm \partial_t a_0^\pm + \nabla a_0^\pm \cdot \nu -\frac12\Delta \varphi a^\pm_0 \right)\theta'(\varphi^\pm) + \Box a_0^\pm \theta(\varphi^\pm)\nonumber \\ &= \Box a_0^\pm \theta(\varphi^\pm) \,.
\end{align}
Thus $\Box(a_0^\pm \theta(\varphi^\pm))$ is as smooth as $\theta$ is. We use
\begin{equation*}\label{eq:geometricoptics}
\tilde u(t,x) \coloneqq a^+_0(t,x)\theta(\varphi^+(t,x)) + a^-_0(t,x)\theta(\varphi^-(t,x))
\end{equation*}
as an approximation to the discontinuity of the solution $u$ to the wave-equation.
To maintain consistency with the initial values of $u$,
the initial values of the approximation $\tilde u$ are chosen to coincide with those of $u$ at $t=0$,
this is achieved by setting $a_0^\pm(0,\cdot) = \frac 12 f$ so that (at least formally) $\partial_t \tilde u(0,\cdot) = 0$ and also $\tilde u(0,\cdot) = \mathds 1_S f$.
The function $\tilde u$ approximates the discontinuous solution $u$ of the wave-equation well enough that the function $(s,x) \mapsto u(s,x) - \tilde u(s,x)$ is
continuous on $[-T,T] \times M$, see \cite[section 6.6, eq.~6.35]{Taylor2011}. By construction, $\tilde u(0,\cdot) = u(0,\cdot)$. Hence
$|u(s,x) - \tilde u (s,x)| = o(1)$ as $s \rightarrow 0^+$,
which implies
\begin{equation}\label{eq:part1}
|\Bra{u(s,\cdot), \mathds 1_{S^{-s}}} - \Bra{\tilde u(s,\cdot),\mathds 1_{S^{-s}}}| = o(s)\,\quad s \rightarrow 0^+\,.
\end{equation}
As $\nabla \varphi = -\nu$, for sufficiently small $t$ the sets $\{x \in M : \varphi^+(t,x) = 0\}$ (resp. $\{x : \varphi^-(t,x) = 0\}$) are level sets of $\varphi$ on the outside (resp. inside) of $S$ (see also \cite[section 6.6]{Taylor2011}).
By construction, $\theta(\varphi^-)$ vanishes outside of $S$ for $t > 0$.
Consequently, using \cref{eq:mainformula}, we see that as $s \rightarrow 0^+$,
\begin{align}
\Bra{\tilde u (s,\cdot),\mathds 1_{S^{-s}}} &= \int_{S^{-s}} a^+_0(s,x) \mathds 1_{ \{\varphi^+(s,\cdot) \geq 0 \}} + a^-_0(s,x) \mathds 1_{\{\varphi^-(s,x) \geq 0 \}} \,\mathrm{d}V(x)\nonumber\\
&= s\int_{\partial S} a_0^+(0,x) \,\mathrm{d}A(x) + o(s)
= \frac{s}{2}\,\int_{\partial S} f \,\mathrm{d}A + o(s).\label{eq:part2}
\end{align}
Combining \cref{eq:part1,eq:part2},
\begin{equation*}\label{eq:fromprevious}
h(s) = \Bra{W_s f \mathds 1_S, \mathds 1_{S^{-s}}} = \Bra{u(s,\cdot),\mathds 1_{S^{-s}}} = \frac{s}{2} \int_{\partial S} f \,\mathrm{d}A + o(s),\quad s \rightarrow 0^+.
\end{equation*}
Calculations along the lines of \cref{lemma:heatkernellemma,eq:crude} yield
\begin{equation}\nonumber
\Bra{T_t f \mathds 1_S, \mathds 1_S} =\int_S f \,\mathrm{d}V - \sqrt{\frac{t}{\pi}} \int_{\partial S}f \,\mathrm{d}A + o(\sqrt t),\qquad t \rightarrow 0^+,
\end{equation}
as claimed.
\begin{lemma}\label{lemma:heatkernellemma}
Let $j \in \mathbb N$ and $\gamma: \mathbb R_{\geq 0} \rightarrow \mathbb R$. Let $\gamma(s) = s^j + o(s^j)$ for $s \rightarrow 0$ and $\gamma(s) = O(1)$ for $s \rightarrow \infty$. Then for $t \rightarrow 0^+$,
\begin{equation}\label{eq:generalft}
\int_0^\infty \gamma(s \sqrt t)\hat k(s) \,\mathrm ds = t^\frac{j}{2}\begin{cases}
(-1)^{\frac{j}{2}}\,k^{(j)}(0) & \textrm{$j$ even } \\
(-1)^{\frac{j-1}{2}}\int_0^\infty \frac{2\, k^{(j)}(s)}{-\pi s}\, \mathrm d s & \textrm{$j$ odd}
\end{cases} \quad + o\left(t^\frac{j}{2}\right)\,.
\end{equation}
With $k(s) = \exp(-s^2)$ and $h(s) = c_0 + c_1s + c_2 s^2 + o(s^2)$, this implies
\begin{equation}\label{eq:specialcase}
\int_0^\infty h(s\sqrt t) \hat k(s) \,\mathrm d s = c_0 + \frac{2c_1}{\sqrt \pi} \sqrt t + 2 c_2 t + o(t)\,.
\end{equation}
\end{lemma}
\begin{proof}
For even $j$, we obtain \cref{eq:generalft} by the Fourier-transform formula for $j$-th derivatives. If $j$ is odd,
we also need to multiply by the sign-function in frequency space, and then use that the inverse Fourier-transform (unnormalized) of the sign function is given by the principal value $\mathrm{p.v.}\left(\frac{2i}{x}\right)$ \cite[section 4]{Taylor2011}, see also \cite[Chapter 7]{Rudin1991}.
\Cref{eq:generalft} holds more generally,
e.g.~if $k$ is an even Schwartz function. \Cref{eq:specialcase} may also be verified directly without \cref{eq:generalft}.
\end{proof}
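For orientation, the constants in \cref{eq:specialcase} may also be read off from the elementary Gaussian moments
\begin{equation*}
\int_0^\infty \hat k(s)\,\mathrm ds = 1\,,\qquad \int_0^\infty s\, \hat k(s)\,\mathrm ds = \frac{2}{\sqrt\pi}\,,\qquad \int_0^\infty s^2\, \hat k(s)\,\mathrm ds = 2\,.
\end{equation*}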
\section{Proof for $\beta_2,\beta_3,\cdots$}\label{sec:higher_coeffs}
We now turn to calculating $\beta_j$ for $j \geq 2$.
We use the $N$-th order progressing wave construction with sufficiently large $N \gg j$.
For the sake of simplicity, we write $O(t^\infty)$ for quantities that can be made $O(t^p)$ for any $p\in\mathbb N$ by choosing sufficiently large $N$.
As in the previous section, the construction is from \cite[section 6.6]{Taylor2011}.
With $\theta_0 \coloneqq \theta$, and $\theta_i(t) \coloneqq \int_{-\infty}^t \theta_{i-1}(s)\mathrm ds$ we write
\begin{align*}
\tilde u^\pm(t,x) \coloneqq \sum_{i=0}^N a^\pm_i(t,x) \theta_i(\varphi^\pm(t,x))\,.
\end{align*}
Here the functions $a_0^\pm$ are defined as before; and for $i \geq 1$ the $i$-th order transport equations $\pm \partial_t a^\pm_i = -\nu \cdot \nabla a^\pm_i + \frac 12 a_i^\pm \Delta \varphi^\pm - \frac 12\Box a_{i-1}^\pm$ define $a^\pm_i$ together
with initial data $a^\pm_i(0,\cdot) = \mp\frac12(\partial_t a^+_{i-1}(0,\cdot) + \partial_t a^-_{i-1}(0,\cdot))$. As in
\cref{eq:progressing_wave_general}, one may verify that $\Box\tilde u^\pm = \Box a_{N}^\pm\, \theta_N(\varphi^\pm)$.
Writing $\tilde u = \tilde u^+ + \tilde u^-$ and
\begin{equation}\nonumber
u(t,x) = \tilde u^+(t,x) + \tilde u^-(t,x) + R_N(t,x)\,,
\end{equation}
the remainder satisfies $R_N \in C^{(N,1)}([-T,T] \times M)$ and $R_N(t,\cdot)$ vanishes at $t=0$, see \cite[section 6.6, eq.~6.35]{Taylor2011}. Moreover, $R_N$ is supported on $\{(x,t) : \dist(x,S) \leq |t|\}$; all of this implies that, as $t \rightarrow 0^+$,
\begin{align}\label{eq:approximationworks}
h(t) = \int_{M \setminus S} u(t,x)\,\mathrm{d}V(x) = \int_{M \setminus S} \tilde u^+(t,x)\,\mathrm{d}V(x) + O(t^\infty)\,
\end{align}
and moreover $h \in C^\infty([0,T])$.
The structure of $R_N$ implies that $\Box \tilde u^+(t,x) = O(t^\infty)$ on $M \setminus S$,
provided that this expression is interpreted in a sufficiently weak sense.
Formally, therefore
\begin{align}
\partial_t^2\int_{M\setminus S} \tilde u^+(\cdot,t)\,\mathrm{d}V &= \int_{M \setminus S} \Delta \tilde u^+(\cdot,t) \,\mathrm{d}V + O(t^\infty) \nonumber \\
&= - \int_{\partial S} \nabla \tilde u^+(\cdot,t) \cdot \nu \,\mathrm{d}A + O(t^\infty)\,,\label{eq:afterdivergence}
\end{align}
where the last step is the divergence theorem. One may verify \cref{eq:afterdivergence} rigorously by either doing the above steps in the sense
of distributions, or by a (somewhat tedious) manual computation.
Combining this with \cref{eq:approximationworks},
\begin{align}\label{eq:hdd}
h''(t) = -\int_{\partial S} \nabla \tilde u^+(\cdot,t) \cdot \nu \,\mathrm{d}A + O(t^\infty)\,.
\end{align}
The quantity $h^{(j)}(0)$ may thus be seen to depend on $\tilde u^+(0,\cdot)$ at $\partial S$, which in turn depends on the $a_i^\pm$ at $t=0$.
Defining $\mathbf S_i \coloneqq a_i^+ + a_i^-$ and $\mathbf D_i \coloneqq a_i^+ - a_i^-$ for $i=0,1,\dots$,
let $L$ be the (spatial) differential operator defined for $w \in C^\infty(M)$ by $Lw \coloneqq \frac12 (\Delta \varphi) w - \nu \cdot \nabla w$.
For $i \in \mathbb N_0$, the transport equations imply
\begin{alignat}{5}\label{eq:recurrence_relations_a}
\partial_t \mathbf S_0 &= L\mathbf D_0\,, &\quad\quad \partial_t \mathbf D_0 &= L \mathbf S_0\,, \quad\quad\\
\partial_t \mathbf S_{i+1} &= L\mathbf D_{i+1} - \frac12\Box \mathbf D_{i}\,, &\quad\quad \partial_t \mathbf D_{i+1} &= L\mathbf S_{i+1} - \frac12\Box \mathbf S_{i} \quad \mathrm{for} \quad i \geq 0\,,\label{eq:recurrence_relations_b}
\end{alignat}
with initial values satisfying
\begin{alignat}{4}\label{eq:recurrence_relations_iv_a}
a_{0}^+(0,\cdot) &=\ \frac12 \mathbf S_0(0,\cdot) = \frac12 f(\cdot)\,,\quad\quad &\mathbf D_0(0,\cdot) &= 0\,,\\
a_{i+1}^+(0,\cdot) &= \frac12 \mathbf D_{i+1}(0,\cdot) = -\frac12 \partial_t \mathbf S_{i}(0,\cdot)\,,\quad\quad &\mathbf S_{i+1}(0,\cdot) &= 0\label{eq:recurrence_relations_iv_b}\,.
\end{alignat}
\begin{lemma}\label{lemma:ailemma}
For $i,n \in \mathbb N_0$ it holds that $\partial_t^{2n}\mathbf D_i(0,\cdot) = 0$ (note that as a consequence, also $a_{i+1}^\pm(0,\cdot)$, $L\mathbf D_i(0,\cdot)$, and $\Box^n \mathbf D_i(0,\cdot)$ are zero).
\end{lemma}
\begin{proof}
We will proceed by induction over $i$ and use the identities \crefrange{eq:recurrence_relations_a}{eq:recurrence_relations_iv_b}. For $i=0$, $\mathbf D_0(0,\cdot) = 0$ is trivially satisfied. Moreover, $\partial_t^{2n} \mathbf D_0 = L^{2n}\mathbf D_0$, which is zero at $t=0$. For $i=1$, observe that $a_1^+(0,\cdot) = -\frac12 \partial_t \mathbf S_0(0,\cdot) = -\frac12 L\mathbf D_0(0,\cdot) = 0$, and thus $\mathbf D_1(0,\cdot) = 0$. Likewise, $\partial_t^2 \mathbf D_1 = \partial_t(L\mathbf S_1 - \frac12\Box \mathbf S_0) = L(L\mathbf D_1 - \frac 12\Box \mathbf D_0) - \frac12\Box L\mathbf D_0$. As the operator $L$ commutes with $\partial_t^2$, this expression vanishes at $t=0$. Induction over $n$ proves the remainder of the statement for $i=1$.
For the general case, we assume the induction hypothesis for $i$ and $i+1$ and start by noting that $\mathbf D_{i+2}(0,\cdot) = 2a_{i+2}^+(0,\cdot) = -\partial_t \mathbf S_{i+1}(0,\cdot) = -\left(L \mathbf D_{i+1}(0,\cdot) - \frac12\Box \mathbf D_{i}(0,\cdot) \right) = 0$. Moreover, $\partial_t^2 \mathbf D_{i+2} = \partial_t ( L \mathbf S_{i+2} - \frac12 \Box \mathbf S_{i+1}) = L(L\mathbf D_{i+2} - \frac12 \Box \mathbf D_{i+1}) - \frac12 \Box \left(L \mathbf D_{i+1} - \frac12 \Box \mathbf D_i\right)$, which again vanishes at $t=0$; the case $n > 1$ may again be proven by induction over $n$.
\end{proof}
\begin{corollary}\label{lemma:coefficientslemma} For even $j \in \mathbb N_{\geq 2}$, the $j$-th derivative of $h$ satisfies
\begin{equation}\nonumber
h^{(j)}(0) = -\frac 12 \int_S \Delta^{j/2} f\,\mathrm{d}V\,.
\end{equation}
\end{corollary}
\begin{proof}
\Cref{lemma:ailemma} shows that for $i \geq 1$, $a^+_i(0,x) = 0$. Together with \cref{eq:hdd}, thus $h''(0) = -\int_{\partial S} \nabla a^+_0(0,\cdot) \cdot \nu \,\mathrm{d}A = -\frac12\int_{\partial S} \nabla f \cdot \nu\,\mathrm{d}A$, which equals $-\frac12\int_S \Delta f\,\mathrm{d}V$ by the divergence theorem. This is the case $j=2$.
More generally,
for $j = 2k$ with $k \in \mathbb N_{\geq 2}$, we use that (for $x \in \partial S$), $\tilde u^+$ satisfies
$\partial_t^2 \tilde u^+(t,x) = \Delta \tilde u^+(t,x) + O(t^\infty)$. \Cref{eq:hdd} ensures that as $t \rightarrow 0^+$,
\begin{align*}
h^{(2k)}(t) = -\int_{\partial S} \nabla (\Delta^{k-1} \tilde u^+(t,\cdot)) \cdot \nu \,\mathrm{d}A + O(t^\infty)\,.
\end{align*}
As for the case $k=1$, it follows that $h^{(2k)}(0) = -\int_{\partial S} \nabla(\Delta^{k-1} a^+_0) \cdot \nu\,\mathrm{d}A$; the divergence theorem then yields the claim.
\end{proof}
The odd coefficients are trickier; we only compute the case $j=3$. We start with the observation that for $x\in\partial S$, $\varphi^+(t,x) = t$ and therefore
\begin{align*}\label{eq:vanishinghigher}
\tilde u^+(t,x) &= \sum_{i=0}^N \frac{1}{i!} t^i a^+_i(t,x)\quad \mathrm{ for }\ \ t \geq 0,\ \ x \in \partial S\,.
\end{align*}
Recall that the Lie derivative acts on functions $w \in C^\infty(M)$ by $\mathcal L_\nu w = \nabla w \cdot \nu$. Thus
$\mathcal L_\nu \theta_{i+1}(\varphi^+(t,x)) = -\theta_{i}(\varphi^+(t,x))$, so for $x \in \partial S$,
\begin{align*}
\mathcal L_\nu \tilde u^+(t,x) &= \sum_{i=0}^{N-1} \frac{t^i}{i!}(\mathcal L_\nu a_i^+(t,x) - a_{i+1}^+(t,x)) + O(t^\infty)\,.
\end{align*}
Therefore $\partial_t \mathcal L_\nu \tilde u^+(0,x) = \partial_t(\mathcal L_\nu a_0^+ - a_1^+)(0,x) + (\mathcal L_\nu a_1^+(0,x) - a_2^+(0,x))$,
but the second term is zero as $a_1^+$ and $a_2^+$ vanish at $t=0$ by \cref{lemma:ailemma}. Substituting the transport equations and removing further zero terms leaves
$\partial_t \mathcal L_\nu \tilde u^+(0,x) = \mathcal L_\nu La_0^+(0,x) + \frac12 \Box a_0^+(0,x) = \frac12\left(\mathcal L_\nu L f(x) - \frac12 \Delta f(x) + \frac12 L^2 f(x) \right)$. Thus (recall that $L = - \mathcal L_\nu + \frac12\Delta \varphi$) directly from \cref{eq:hdd},
\begin{align*}
h^{(3)}(0) &= -\frac12\int_{\partial S}\mathcal L_\nu L f(x) - \frac12 \Delta f(x) + \frac12 L^2 f(x)\,\mathrm{d}A(x)\,.
\end{align*}
The formula
\begin{equation}\label{eq:basicidea}
\Omega_{S,f}(t) = \int_0^\infty \hat k(s)\left(\int_S f \,\mathrm{d}V - h( s \sqrt t ) \right) \,\mathrm{d} s
\end{equation}
established in the previous section, together with \cref{lemma:heatkernellemma}, yields the asymptotic behaviour of $\Omega_{S,f}(t)$ by
taking the Taylor-expansion of $h$ using \cref{lemma:coefficientslemma}. This gives the remainder of the claims of \cref{thm:thm2}.
\section{Discussion}
The above is not specific to the heat equation. Taking $k(x) = \exp(-x^{2m})$, $m \in \mathbb N$, we may, for example, study the one-parameter operator family $\exp(-t^m \Delta^m)$. The wave equation estimates needed are the same. For $m \geq 2$, a brief calculation yields the explicit $t \rightarrow 0^+$ asymptotics
\begin{equation*}
\Bra{\exp(t^m\Delta^m)f \mathds 1_S, \mathds 1_S} = \int_S f\,\mathrm{d}V - \left(\pi^{-1} \Gamma\left(\frac{2m-1}{2m}\right) \int_{\partial S} f\,\mathrm{d}A\right)\sqrt t + o(t).
\end{equation*}
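The coefficient of $\sqrt t$ is consistent with \cref{thm:thm2}: for $k(x)=\exp(-x^{2m})$ one finds, substituting $u = s^{2m}$,
\begin{equation*}
r_1 = \int_0^\infty \frac{2k'(s)}{-\pi s}\,\mathrm ds = \frac{4m}{\pi}\int_0^\infty s^{2m-2}e^{-s^{2m}}\,\mathrm ds = \frac{2}{\pi}\,\Gamma\left(\frac{2m-1}{2m}\right)\,,
\end{equation*}
so that $\beta_1 = -\frac12 r_1 \int_{\partial S} f\,\mathrm{d}A$ reproduces the boundary term above.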
We conclude with the observation that the generalization of this paper to \emph{weighted} Riemannian manifolds (cf.~\cite{Grigoryan2009}) is straightforward.
\section{Acknowledgements}
The author was supported by the Priority Programme
SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.
The author thanks the reviewer for simplifying a significant part of the argument,
and thanks Oliver Junge and Daniel Karrasch for helping to improve the manuscript.
\section{Supplemental Material}
\subsection{Bogoliubov-de Gennes equation}
We consider a near-quasi-1D dipolar condensate modeled by the order parameter $\phi$, a solution of the nonlocal GPE%
\begin{equation}
i\partial_t\phi=\left(-\frac{\partial_x^2}{2}+U+g_{\rm dd}\rho\right)\phi-3g_{\rm dd}\phi G*\rho,\label{sm1}
\end{equation}
where $\rho=|\phi|^2$ is the condensate density. Here, the symbol $G*\rho$ indicates the convolution $G*\rho(x)=\int_{-\infty}^{\infty} dx'G(x-x')\rho(x')$, where the interaction kernel $G$ is defined in terms of its Fourier transform by $G(x)=(1/2\pi)\int dk \tilde{G}(\ell_{\bot}k)\exp(ikx)$, and $\tilde{G}(\ell_{\bot}k)$ is the limit $\mathcal{N} \rightarrow\infty$, $\Delta q\rightarrow 0$ of
\begin{equation}
\tilde{G}(\eta)= \frac{\eta^2}{\sum_{j=0}^{\mathcal{N}}j\Delta q^2e^{-j^2\Delta q^2/2}}\sum_{j=0}^{\mathcal{N}}\frac{j\Delta q^2e^{-j^2\Delta q^2/2}}{j^2\Delta q^2+\eta^2}.\label{sm2}
\end{equation}
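As a hedged numerical sketch (ours, in Python with SciPy; not part of the derivation), the finite sum in Eq.~\eqref{sm2} can be compared against its $\mathcal{N}\rightarrow\infty$, $\Delta q\rightarrow 0$ limit, $\tilde{G}(\eta)=\eta^2\int_0^\infty dq\, q\, e^{-q^2/2}/(q^2+\eta^2)$, evaluated by quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def G_discrete(eta, N=10, dq=1/3.4):
    # Finite-sum kernel of Eq. (sm2) with the quoted parameters.
    if eta == 0.0:
        return 0.0  # the eta^2 prefactor gives G(0) = 0
    j = np.arange(N + 1)
    w = j * dq**2 * np.exp(-(j * dq)**2 / 2)
    return eta**2 * np.sum(w / ((j * dq)**2 + eta**2)) / np.sum(w)

def G_limit(eta):
    # Continuum limit: eta^2 * int_0^inf q e^{-q^2/2} / (q^2 + eta^2) dq.
    val, _ = quad(lambda q: q * np.exp(-q**2 / 2) / (q**2 + eta**2),
                  0.0, np.inf)
    return eta**2 * val

for eta in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(eta, G_discrete(eta), G_limit(eta))
\end{verbatim}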
The advantage of writing the interaction kernel as in Eq.~\eqref{sm2} comes from the fact that for $\mathcal{N}\sim10$ and $\Delta q\sim1/3.4$ one already has an excellent approximation to the exact kernel. Also, $\ell_{\bot}$ is the characteristic radial size of the condensate. Equation \eqref{sm1} can also be expressed using the Madelung representation $\phi=\sqrt{\rho}\exp(i\theta)$ in terms of the system density and phase $\theta$ as
\begin{align}
\partial_t\rho&=-\partial_x(\rho v),\label{sm3}\\
-\partial_t\theta&=-\frac{\partial_x^2\sqrt{\rho}}{2\sqrt{\rho}}+\frac{v^2}{2}+U+g_{\rm dd}\rho-3g_{\rm dd}G*\rho,\label{sm4}
\end{align}
where $v=\partial_x \theta$. Equation \eqref{sm3} is just the continuity equation, and Eq.~\eqref{sm4} is a nonlocal, dipolar generalization of the Euler equation. Furthermore, the GPE follows from the Lagrangian
\begin{equation}
L=\int dx\left[\frac{i}{2}\phi^*\partial_t\phi-\frac{i}{2}(\partial_t\phi^*)\phi-\frac{1}{2}|\partial_x\phi|^2-\left(U+\frac{g_{\rm dd}}{2}|\phi|^2\right)|\phi|^2+\frac{3g_{\rm dd}}{2}|\phi|^2G*|\phi|^2\right].
\end{equation}
Given a solution $\phi$ of Eq.~\eqref{sm1}, we want to study small perturbations of the form $\phi\rightarrow\phi+\delta\phi$, that correspond to $L\rightarrow L+\delta L$, where
\begin{align}
\delta L=\int dx\bigg[&\frac{i}{2}\delta\phi^*\partial_t\delta\phi-\frac{i}{2}(\partial_t\delta\phi^*)\delta\phi-\frac{1}{2}|\partial_x\delta\phi|^2-\left(U+2g_{\rm dd}\rho-3g_{\rm dd}G*\rho\right)|\delta\phi|^2-\frac{g_{\rm dd}}{2}(\phi^2\delta\phi^{*2}+\phi^{*2}\delta\phi^{2})\nonumber\\
&+\frac{3g_{\rm dd}}{2}(\phi^*\delta\phi+\phi\delta\phi^*)G*(\phi^*\delta\phi+\phi\delta\phi^*)\bigg],
\end{align}
in view of the GPE. Thus, by starting from the solution $\phi=\sqrt{\rho}\exp(-i\mu t+ivx)$ to the GPE, and defining our field variable $\psi$ by $\delta\phi=\exp(-i\mu t+ivx)\psi$, we obtain the Lagrangian $L_\psi$ for the field $\psi$
\begin{align}
L_{\psi}=\int dx\bigg\{&\frac{i}{2}\psi^*\partial_t\psi-\frac{i}{2}(\partial_t\psi^*)\psi-\frac{1}{2}|\partial_x\psi|^2-\left(g_{\rm dd}\rho+\frac{\partial_x^2\sqrt{\rho}}{2\sqrt{\rho}}\right)|\psi|^2-\frac{g_{\rm dd}}{2}\rho(\psi^{*2}+\psi^{2})\nonumber\\
&+\frac{iv}{2}\left[\psi^*\partial_x\psi-(\partial_x\psi^*)\psi\right]+\frac{3g_{\rm dd}}{2}\sqrt{\rho}(\psi+\psi^*)G*\sqrt{\rho}(\psi+\psi^*)\bigg\},\label{lagrangian}
\end{align}
and the Euler--Lagrange equation
\begin{equation}
i\partial_t\psi=-\frac{\partial_x^2}{2}\psi-iv\partial_x\psi+\left[\frac{\partial_x^2\sqrt{\rho}}{2\sqrt{\rho}}-\frac{i}{2}(\partial_xv)\right]\psi+\rho g_{\rm dd}(\psi+\psi^*)-3g_{\rm dd}\sqrt{\rho}G*\sqrt{\rho}(\psi+\psi^*),\label{BdG}
\end{equation}
known as the Bogoliubov-de Gennes equation. In order to find the solutions of Eq.~\eqref{BdG}, it is convenient to work with the Nambu spinor defined by
\begin{equation}
\Psi=\frac{1}{\sqrt{\rho}}
\left(\begin{array}{c}
\psi\\
\psi^*
\end{array}\right),
\end{equation}
which is seen to satisfy the reflection property $\sigma_1\Psi^*=\Psi$, and the BdG equation in the form
\begin{equation}
i\partial_t\sigma_3\Psi=-\frac{1}{2\rho}\partial_x\left(\rho\partial_x\Psi\right)-iv\sigma_3\partial_x\Psi+\rho g_{\rm dd}\sigma_4 \Psi-3g_{\rm dd}\sigma_4G*\rho\Psi.\label{BdGtosolve}
\end{equation}
Here, $\sigma_i$, $i=1,2,3$ are the Pauli matrices, and $\sigma_4=1+\sigma_1$. Solutions of the equation above are such that $\Psi$, $\rho\partial_x \Psi$ are everywhere continuous functions, properties used in our model as boundary conditions at the event horizon. Equation \eqref{BdGtosolve} implies that the quantity
\begin{equation}
\langle\Psi,\Psi'\rangle=\int dx\rho\Psi^\dagger\sigma_3\Psi',
\end{equation}
is conserved in time, which we use as a scalar product on the space of solutions to \eqref{BdGtosolve}.
\subsection{Field modes {and the scattering problem}}
Solutions to Eq.~\eqref{BdGtosolve} can be found in the form $\Psi(t,x)=\exp(-i\omega t)\Psi_{\omega}(x)$, $\omega>0$, where
\begin{equation}
\omega\sigma_3\Psi_{\omega}=-\frac{1}{2\rho}\partial_x\left(\rho\partial_x\Psi_{\omega}\right)-i\frac{\rho_{\rm u}}{\rho}\mathfrak{m}_{\rm u}\sigma_3\partial_x\Psi_{\omega}+\frac{\rho}{\rho_{\rm u}} \sigma_4 \Psi_{\omega}-\frac{3}{\rho_{\rm u}}\sigma_4G*\rho\Psi_{\omega}.\label{BdGtosolve2}
\end{equation}
For contact condensates, the solutions to Eq.~\eqref{BdGtosolve2} for $x\neq0$ are superpositions of plane waves. When (nonlocal) dipolar interactions are present, however, that is not the case, as shown in \cite{Ribeiro2022}. Still, our representation for the dipolar kernel \eqref{approxG} is such that the solutions to the problem resemble superpositions of plane waves, and can be found as follows. For $|x|\gg 1,\ell_{\bot}$ {(we remind the reader that we scale lengths in units of $\xi_{\rm u}$)}, because $G(x)\rightarrow0$ when $x\rightarrow\infty$, any solution of Eq.~\eqref{BdGtosolve2} becomes a combination of $\exp(ikx)\Phi_k$, for constant $\Phi_k$, such that
\begin{equation}
\omega\sigma_3\Phi_k=\frac{k^2}{2}\Phi_k+k\frac{\rho_{\rm u}}{\rho}\mathfrak{m}_{\rm u}\sigma_3\Phi_k+\frac{\rho}{\rho_{\rm u}} \sigma_4 [1-3\tilde{G}(\beta k)]\Phi_k.
\end{equation}
Hence non-trivial solutions for $\Phi_k$ exist only if the following dispersion relation holds,
\begin{equation}
\left(\omega-k\frac{\rho_{\rm u}}{\rho}\mathfrak{m}_{\rm u}\right)^2=k^2\left\{\frac{\rho}{\rho_{\rm u}}[1-3\tilde{G}(\beta k)]+\frac{k^2}{4}\right\},
\end{equation}
fixing the possible $\Phi_k$ solutions for each $\omega$. The group velocity $V(k)=d\omega/d k$ thus indicates whether the plane wave propagates rightwards or leftwards. For a dipolar condensate, the dispersion relation admits in general more wave vector solutions for each $\omega$ in comparison to local BH analogues \cite{Ribeiro2022}. We showed in \cite{Ribeiro2022} that each plane wave propagating towards (the event horizon at) $x=0$ combined with transmitted, reflected and evanescent waves gives rise to a quasiparticle mode
\begin{align}
\Psi^{(\alpha)}_{\omega}(x)=\left\{
\begin{array}{c}
\sum_{p}S_{p}^{(\alpha)}e^{ipx}\Phi_{p},\ x>0,\\
\sum_{k}S_{k}^{(\alpha)}e^{ikx}\Phi_{k},\ x<0,
\end{array}\right.\label{supp1}
\end{align}
where we denoted by $p$ the wave vector solutions for $x>0$, and the index $\alpha$ labels the distinct
{ingoing (propagating towards the horizon)} quasiparticles for a given $\omega$, {i.e., $\alpha\in\{k_{\rm in1},k_{\rm in2},k_{\rm in3}, k_{\rm r},p_{\rm in},p_{\rm H}\}$}. Also,
\begin{align}
\Phi_{k}=\left|\frac{k^2}{4\pi \rho V(k)(\omega-\mathfrak{m}_{\rm u}k\rho_{\rm u}/\rho)(\omega-\mathfrak{m}_{\rm u}k\rho_{\rm u}/\rho-k^2/2)^2}\right|^{1/2}\left(\begin{array}{c}
(1-3\tilde{G})\rho/\rho_{\rm u}\\
\omega-\mathfrak{m}_{\rm u}k\rho_{\rm u}/\rho-k^2/2-(1-3\tilde{G})\rho/\rho_{\rm u}
\end{array}\right).\label{suppnorm}
\end{align}
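To illustrate how these wave vector solutions can be obtained in practice, here is a minimal numerical sketch (continuing the Python snippet above; the parameter values are illustrative, not those of the main text) that brackets the real roots of the dispersion relation by sign changes and refines them by bisection:
\begin{verbatim}
def disp_residual(k, omega, rho, rho_u=1.0, m_u=0.5, beta=1.0):
    # (omega - k*m_u*rho_u/rho)^2
    #   - k^2 * ((rho/rho_u)*(1 - 3*G(beta*k)) + k^2/4);
    # abs() is safe because the kernel is even in its argument.
    lhs = (omega - k * m_u * rho_u / rho)**2
    rhs = k**2 * ((rho / rho_u) * (1 - 3 * G_discrete(abs(beta * k)))
                  + k**2 / 4)
    return lhs - rhs

def real_wave_vectors(omega, rho, kmax=10.0, n=4000):
    ks = np.linspace(-kmax, kmax, n)
    res = np.array([disp_residual(k, omega, rho) for k in ks])
    roots = []
    for i in np.where(res[:-1] * res[1:] < 0)[0]:  # sign change -> root
        a, b = ks[i], ks[i + 1]
        for _ in range(60):  # plain bisection
            m = 0.5 * (a + b)
            if disp_residual(a, omega, rho) * disp_residual(m, omega, rho) <= 0:
                b = m
            else:
                a = m
        roots.append(0.5 * (a + b))
    return roots

print(real_wave_vectors(omega=0.3, rho=1.0))
\end{verbatim}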
Furthermore, {we set $S^{(k_{\rm in1})}_{k_{\rm in1}}=S^{(k_{\rm in2})}_{k_{\rm in2}}=S^{(k_{\rm in3})}_{k_{\rm in3}}=S^{(k_{\rm r})}_{k_{\rm r}}=S^{(p_{\rm in})}_{p_{\rm in}}=S^{(p_{\rm H})}_{p_{\rm H}}=1$ to have ``unit'' signals approaching the horizon, and the sums in Eq.~\eqref{supp1} include, in addition to the incoming channel $\alpha$, all the outgoing propagating channels, and evanescent waves solutions of the dispersion relation. By counting the number of $S^{(\alpha)}_{k}$, $S^{(\alpha)}_{p}$, we find $4+2\mathcal{N}$ unknowns for each $\mathcal{N}$ \cite{Ribeiro2022}.}
{The various coefficients $S^{(\alpha)}_k$, $S^{(\alpha)}_p$ are fixed by the $2\mathcal{N}$ conditions
\begin{equation}
\sum_{k}\frac{S^{(\alpha)}_k}{k-ij\Delta q/\beta}\sigma_4\Phi_{k}=\sum_{p}\frac{S^{(\alpha)}_p}{p-ij\Delta q/\beta}\sigma_4\Phi_{p},
\end{equation}
for $-\mathcal{N}\leq j\leq\mathcal{N}$, $j\neq0$, plus the $4$ boundary conditions: $\Psi^{(\alpha)}_{\omega}$, $\rho\partial_x \Psi^{(\alpha)}_{\omega}$ continuous at $x=0$ \cite{Ribeiro2022}, such that the solutions to the BdG equation are found.
}
{For each quasiparticle $\alpha$, finding all the (scattering) coefficients $S^{(\alpha)}_k$, $S^{(\alpha)}_p$ for real $k$, $p$ is known as the scattering problem, which amounts to determining how the interface at $x=0$ scatters plane waves sent towards it. An important concept in this analysis is that of unitarity. The latter is stated in terms of constraints satisfied by the various scattering coefficients as follows. Equation \eqref{BdGtosolve2} implies that
\begin{align}
\rho(\omega'-\omega)\Psi^{(\alpha)\dagger}_{\omega}\sigma_3\Psi^{(\alpha')}_{\omega'}=&\frac{\partial_x}{2}
\rho\left\{\Psi^{(\alpha)\dagger}_{\omega}\partial_x\Psi^{(\alpha')}_{\omega'}-[\partial_x\Psi^{(\alpha)\dagger}_{\omega}]\Psi^{(\alpha')}_{\omega'}+2i\mathfrak{m}_{\rm u}\frac{\rho_{\rm u}}{\rho}\Psi^{(\alpha)\dagger}_{\omega}\sigma_3\Psi^{(\alpha')}_{\omega'}\right\}\nonumber\\
&+\frac{3\rho}{\rho_{\rm u}}\left[\Psi^{(\alpha)\dagger}_{\omega}\sigma_4G*\rho\Psi^{(\alpha')}_{\omega'}-(G*\rho\Psi^{(\alpha)\dagger}_{\omega})\sigma_4\Psi^{(\alpha')}_{\omega'}\right]:=I^{\alpha,\alpha'}_{\omega,\omega'},\label{cons1}
\end{align}
and the orthogonality is translated as $\int dx I^{\alpha,\alpha'}_{\omega,\omega'}=0$ for all $\omega$, $\omega'$. By performing this integral and making the substitution $\omega'\rightarrow\omega$, we obtain the aforementioned constraint
\begin{equation}
\sum_{k\ {\rm prop}}S^{(\alpha')}_{k}S^{(\alpha)*}_{k}\mbox{sgn}\left[V(k)(\omega-\mathfrak{m}_{\rm u}k)\right]-\sum_{p\ {\rm prop}}S^{(\alpha')}_{p}S^{(\alpha)*}_{p}\mbox{sgn}\left[V(p)(\omega-\mathfrak{m}_{\rm u}p\rho_{\rm u}/\rho_{\rm d})\right]=0,\label{smatrix}
\end{equation}
for all $\alpha$, $\alpha'$.
}
{The} normalization of Eq.~\eqref{suppnorm} and Eq.~\eqref{smatrix} ensure that $\langle\Psi_\omega^{(\alpha)},\Psi_{\omega'}^{(\alpha')}\rangle=\pm\delta_{\alpha,\alpha'}\delta(\omega-\omega')$, where $+$ and $-$ signs
stand for positive and negative norm
modes, respectively.
We let $\Gamma^{(+)}$ (resp.~$\Gamma^{(-)}$) be the index set for positive (resp.~negative) norm quasiparticle modes, and the field modes become $\Phi^{(\alpha)}_\omega(t,x)=\exp(-i\omega t)\Psi^{(\alpha)}_{\omega}(x)$. {After a lengthy manipulation, we find that $\Gamma^{(+)}=\{k_{\rm in1},k_{\rm in2},k_{\rm in3},p_{\rm in}\}$, and $\Gamma^{(-)}=\{k_{\rm r},p_{\rm H}\}$.} Furthermore, if {the indices $\alpha,\omega$ are such that $\Phi^{(\alpha)}_\omega(t,x)$ is not a solution to the BdG equation, we define $\Phi^{(\alpha)}_\omega(t,x)=0$. With this notation, the quantum field expansion then reads}
\begin{align}
\hat{\Phi}=&\int_0^\infty\mathrm{d}\omega\Bigg[ \sum_{\alpha\in\Gamma^{(+)}}(\hat{a}^{(\alpha)}_{\omega}\Phi^{(\alpha)}_\omega+\hat{a}^{(\alpha)\dagger}_{\omega}\sigma_1\Phi^{(\alpha)*}_\omega)+\sum_{\alpha\in\Gamma^{(-)}}(\hat{a}^{(\alpha)\dagger}_{\omega}\Phi^{(\alpha)}_\omega+\hat{a}^{(\alpha)}_{\omega}\sigma_1\Phi^{(\alpha)*}_\omega)\Bigg],
\end{align}
and if we write $\Phi^{(\alpha)}_{\omega}= \exp(-i\omega t)\left(\begin{matrix}f^{(\alpha)}_{\omega}\\ h^{(\alpha)}_{\omega}\end{matrix}\right)$,
we finally obtain
\begin{align}
\psi=&\sqrt{\rho}\int_0^\infty\mathrm{d}\omega\Bigg[ \sum_{\alpha\in\Gamma^{(+)}}\left(\hat{a}^{(\alpha)}_{\omega}e^{-i\omega t/\xi^2_{\rm u}}f^{(\alpha)}_\omega+\hat{a}^{(\alpha)\dagger}_{\omega}e^{i\omega t/\xi^2_{\rm u}}h^{(\alpha)*}_\omega\right)+\sum_{\alpha\in\Gamma^{(-)}}\left(\hat{a}^{(\alpha)\dagger}_{\omega}e^{-i\omega t/\xi^2_{\rm u}}f^{(\alpha)}_\omega+\hat{a}^{(\alpha)}_{\omega}e^{i\omega t/\xi^2_{\rm u}}h^{(\alpha)*}_\omega\right)\Bigg].
\end{align}
\subsection{Local versus global energy conservation}
From the Lagrangian \eqref{lagrangian} we calculate the canonically conjugate momentum $\pi=\delta L_{\psi}/\delta (\partial_t\psi)=i\psi^*/2$, and the Hamiltonian $H=\int dx(\pi\partial_t\psi+\pi^*\partial_t\psi^*)-L_{\psi}=\int dx \mathcal{H}$, where
\begin{align}
\mathcal{H}=\frac{1}{2}|\partial_x\psi|^2+\left(g_{\rm dd}\rho+\frac{\partial_x^2\sqrt{\rho}}{2\sqrt{\rho}}\right)|\psi|^2+\frac{g_{\rm dd}}{2}\rho(\psi^{*2}+\psi^{2})-\frac{iv}{2}\left[\psi^*\partial_x\psi-(\partial_x\psi^*)\psi\right]-\frac{3g_{\rm dd}}{2}\sqrt{\rho}(\psi+\psi^*)G*\sqrt{\rho}(\psi+\psi^*).\label{hamiltoniandensity}
\end{align}
From the Hamiltonian density \eqref{hamiltoniandensity} we obtain
\begin{equation}
\partial_t\mathcal{H}=-\partial_x S+\frac{3g_{\rm dd}}{2}\sqrt{\rho}[\partial_t(\psi+\psi^*)]G*\sqrt{\rho}(\psi+\psi^*)-\frac{3g_{\rm dd}}{2}\sqrt{\rho}(\psi+\psi^*)G*\sqrt{\rho}\partial_t(\psi+\psi^*),
\end{equation}
where $S=(-1/2)\{(\partial_t\psi^*)(\partial_x+iv)\psi+[(\partial_x-iv)\psi^*]\partial_t\psi\}$. We thus observe that unless $g_{\rm dd}=0$ or $\beta=0$ the system energy is not locally conserved in general. Nevertheless, the system total energy in its ground state is still conserved.
{Indeed,} the system energy in its ground state is given by $H=\int \mathrm{d} x\langle\hat{\mathcal{H}}\rangle$, and the Hamiltonian operator $\hat{\mathcal{H}}$ is obtained from Eq.~\eqref{hamiltoniandensity} by making $\psi\rightarrow \hat{\psi}$ followed by normal ordering. Now, because of stationarity we have $\partial_t H=0$, which follows from $\partial_t \langle\hat{\mathcal{H}}\rangle=0$. The condition $\partial_t H=0$ then has a clear physical meaning: If the system radiates, the power emitted at $x\rightarrow-\infty$ equals the power absorbed at $x\rightarrow\infty$. Our goal is to calculate the radiation power $S_{-\infty}$ at $x\rightarrow-\infty$, and thus $\partial_t H=-(S_{\infty}-S_{-\infty})=0$. We find that
\begin{align}
\partial_t \langle\hat{\mathcal{H}}\rangle=
i\int_{0}^{\infty}\mathrm{d}\omega \omega\bigg\{&\bigg(\sum_{\alpha\in\Gamma^{(+)}}-\sum_{\alpha\in\Gamma^{(-)}}\bigg)\bigg[\frac{\partial_x}{2}\rho\bigg(h^{(\alpha)*}_{\omega}\partial_xh^{(\alpha)}_{\omega}-h^{(\alpha)}_{\omega}\partial_xh^{(\alpha)*}_{\omega}-2i\mathfrak{m}_{\rm u}\frac{\rho_{\rm u}}{\rho}|h^{(\alpha)}_{\omega}|^2\bigg)\nonumber\\
&+\frac{3\rho}{\rho_{\rm u}}h^{(\alpha)*}_{\omega}G*\rho(f^{(\alpha)}_{\omega}+h^{(\alpha)}_{\omega})+\frac{3\rho}{\rho_{\rm u}}h^{(\alpha)}_{\omega}G*\rho(f^{(\alpha)*}_{\omega}+h^{(\alpha)*}_{\omega})\bigg]\bigg\}
+
i\int_{0}^{\infty}\mathrm{d}\omega \omega
\sum_{\alpha\in\Gamma^{(-)}}I^{\alpha,\alpha}_{\omega,\omega}.\label{eqenergy}
\end{align}
From Eq.~\eqref{eqenergy}, because $\partial_t \langle\hat{\mathcal{H}}\rangle=0$ and $I^{\alpha,\alpha}_{\omega,\omega}=0$, we conclude that the first term inside the curly brackets, when integrated over $\omega$, gives 0 for all $x$, and thus the net contribution of this term to $S_{-\infty}$ is zero. Accordingly, the second term, which is zero
irrespective of the integration in $\omega$, gives rise to the radiation power.
Indeed, the identity $I^{\alpha,\alpha}_{\omega,\omega}=0$ contains the gradient of a constant function encoding the radiated power, which can be determined as follows. Note that
\begin{equation}
\partial_t H=\partial_t\int_{-\infty}^{\infty}\mathrm{d} x\langle\hat{\mathcal{H}}\rangle
=i\int_{0}^{\infty}\mathrm{d}\omega \omega\sum_{\alpha\in\Gamma^{(-)}}\int_{-\infty}^{\infty}\mathrm{d} x I^{\alpha,\alpha}_{\omega,\omega},
\end{equation}
and $\int_{-\infty}^{\infty}\mathrm{d} x I^{\alpha,\alpha}_{\omega,\omega}$ can be calculated with the aid of Eq.~\eqref{cons1} taking $x'\rightarrow\infty$ in
\begin{align}
i\int_{-x'}^{x'}\mathrm{d} xI^{\alpha,\alpha}_{\omega,\omega'}=&\rho_{\rm u}\sum_{k,k'}S^{(\alpha)*}_{k}S^{(\alpha)}_{k'}\Phi^\dagger_{k}\left[\frac{k'+k^*}{2}+\mathfrak{m}_{\rm u}\sigma_3-3\sigma_4\frac{\tilde{G}(\beta k')-\tilde{G}(\beta k^*)}{k'-k^*}\right]\Phi_{k'}e^{-ix'(k'-k^*)}\nonumber\\
&-\rho_{\rm d}\sum_{p,p'}S^{(\alpha)*}_{p}S^{(\alpha)}_{p'}\Phi^\dagger_{p}\left[\frac{p'+p^*}{2}+\mathfrak{m}_{\rm u}\frac{\rho_{\rm u}}{\rho_{\rm d}}\sigma_3-3\frac{\rho_{\rm d}}{\rho_{\rm u}}\sigma_4\frac{\tilde{G}(\beta p')-\tilde{G}(\beta p^*)}{p'-p^*}\right]\Phi_{p'}e^{ix'(p'-p^*)},
\end{align}
where we used the orthogonality condition $\int_{-\infty}^{\infty}\mathrm{d} x I^{\alpha,\alpha}_{\omega,\omega'}=0$ and the sums in primed wave vectors refer to $\omega'$, whereas unprimed wave vectors correspond to $\omega$. Thus, by taking $\omega'\rightarrow \omega$ and $x'\rightarrow \infty$ we obtain
\begin{align}
\partial_tH=
-\frac{1}{2\pi}
\int_{0}^{\infty}\mathrm{d}\omega \omega\sum_{\alpha\in\Gamma^{(-)}}\bigg\{\sum_{p\ {\rm prop}}|S^{(\alpha)}_{p}|^2\mbox{sgn}\left[V(p)(\omega-\mathfrak{m}_{\rm u}p\rho_{\rm u}/\rho_{\rm d})\right]-\sum_{k\ {\rm prop}}|S^{(\alpha)}_{k}|^2\mbox{sgn}\left[V(k)(\omega-\mathfrak{m}_{\rm u}k)\right]\bigg\},
\end{align}
and {the outgoing flux at upstream infinity becomes}
\begin{align}
S_{-\infty}
=\frac{1}{2\pi}
\int_{0}^{\infty}\mathrm{d}\omega \omega\sum_{\alpha\in\Gamma^{(-)}}\sum_{k\ {\rm prop}}|S^{(\alpha)}_{k}|^2\mbox{sgn}\left[V(k)(\omega-\mathfrak{m}_{\rm u}k)\right].
\end{align}
{Note, in particular, that the unitarity condition \eqref{smatrix} for $\alpha'=\alpha$ implies that $S_{\infty}$=$S_{-\infty}$, i.e., global energy conservation.}
\end{widetext}
\end{document}
\section{Complexity}
\label{section:complexity}
As a generalization of the well-known ATSP, the TDTSP\xspace is $\ensuremath{\mathcal{NP}}\xspace$-hard
itself. What is more, there exists no $\alpha$-approximation for any
$\alpha \geq 1$ for the general ATSP~\cite{Bibel}. On the other hand,
approximation algorithms are known for the metric variant of the
ATSP. Unfortunately, such algorithms do not exist in the case of the
TDTSP\xspace:
\begin{thm}
\label{thm:inapx}
There is no $\alpha$-approximation algorithm
for any $\alpha > 1$ for the TDTSP\xspace
unless $\P = \ensuremath{\mathcal{NP}}\xspace$. This is the case even if the time-dependent
triangle inequality is satisfied.
\end{thm}
\begin{proof}
Suppose there exists an $\alpha$-approximation algorithm $A$ for the TDTSP\xspace
for a fixed value of $\alpha \geq 1$. We show that algorithm
$A$ could be used to solve the \emph{Hamiltonian cycle} problem on
an undirected graph $G=(V, E)$. To this end, let $D=(V, A)$ be the
bidirected complete graph with costs
%
\begin{equation*}
c_{uv} \coloneqq
\begin{cases}
1, & \text{ if } \{u, v\} \in E \\
2, & \text{ otherwise}.
\end{cases}
\end{equation*}
%
Note that $G$ is Hamiltonian iff $D$ contains a tour with costs of
at most $n$. Consider the time-expansion of $D$ given by
$\theta^{\max} \coloneqq \alpha n$ and the following time-dependent
cost functions (satisfying the time-dependent triangle inequality):
%
\begin{equation*}
c_{uv}(\theta) \coloneqq
\begin{cases}
c_{uv}, & \text{ if } \theta \leq n \\
\alpha n + 1, & \text{ otherwise}.
\end{cases}
\end{equation*}
%
We apply $A$ to the instance $(D, c, \theta^{\max})$. If the
resulting tour $T$ has $\ensuremath \theta^{\arr}(T) \leq n$, it must correspond to a
Hamiltonian cycle in $G$. Otherwise we know that $\ensuremath \theta^{\arr}(T) >
\alpha n$, since $T$ must contain at least one arc $(u, v)$
such that $T$ arrives at $u$ at a time $\geq n$. Since $A$ is
an $\alpha$-approximation, the optimal tour $T_{\opt}$ has
$\ensuremath \theta^{\arr}(T_{\opt}) > n$ and $G$ is not Hamiltonian.
\end{proof}
\begin{rem}[Dynamic Programming]
It is well-known that the (asymmetric) TSP can be solved by using
a dynamic programming approach: Let $C(S, v)$ be the
smallest cost of an $(s,v)$-path consisting of the vertices $S \subseteq V$
with $s, v \in S$. Then $C(S, v)$ satisfies the following
relations:
%
\begin{equation}
\begin{aligned}
C(\{s, v\}, v) &= c_{sv} && \forall v \in V, v \neq s \\
C(S, v) &= \min_{\substack{u \in S\\ u \neq s, v}}
C(S \setminus \{v\}, u) + c_{uv} && \forall S \subseteq V, v \in S. \\
\end{aligned}
\end{equation}
%
The cost of an optimal tour is then given by
$\min_{v \neq s} C(V, v) + c_{vs}$ and can be computed
in $\O(2^{n}\cdot n^{2})$.
If a given TDTSP\xspace instance satisfies the FIFO property, these
relations can be generalized to incorporate time-dependent costs:
%
\begin{equation}
\begin{aligned}
C(\{s, v\}, v) &= c_{sv}(0) && \forall v \in V, v \neq s \\
C(S, v) &= \min_{\substack{u \in S\\ u \neq s, v}}
C(S \setminus \{v\}, u) + c_{uv}(C(S \setminus \{v\}, u))
&& \forall S \subseteq V, v \in S. \\
\end{aligned}
\end{equation}
%
Note that the complexity is the same as in the case of the ATSP.
This is due to the fact that the FIFO property ensures that only the
shortest path for fixed $S, v$ needs to be considered for subsequent
computations. Without the FIFO property it becomes necessary to
consider an $(s, v)$-path for each $\theta \in ^{\top}(v)$ during the
computations.
\end{rem}
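To make the recursion concrete, the following Python sketch implements it verbatim (a hypothetical helper, assuming a callback \texttt{cost(u, v, theta)} and the FIFO property, so that only the earliest arrival per pair $(S, v)$ must be stored):
\begin{verbatim}
from itertools import combinations

def tdtsp_dp(n, cost, s=0):
    # C[(S, v)]: earliest arrival at v over (s, v)-paths visiting exactly S.
    C = {(frozenset({s, v}), v): cost(s, v, 0) for v in range(n) if v != s}
    for size in range(3, n + 1):
        for tup in combinations(range(n), size):
            if s not in tup:
                continue
            S = frozenset(tup)
            for v in S - {s}:
                C[(S, v)] = min(C[(S - {v}, u)]
                                + cost(u, v, C[(S - {v}, u)])
                                for u in S - {s, v})
    full = frozenset(range(n))
    return min(C[(full, v)] + cost(v, s, C[(full, v)])
               for v in range(n) if v != s)
\end{verbatim}
The running time is $\O(2^{n}\cdot n^{2})$ cost evaluations, as stated above.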
\subsection{Approximation for Special Cases}
While the TDTSP\xspace problem is relatively hard by itself, some results regarding
approximations can be preserved in the case where the time-dependent cost
functions
are of low variance.
\begin{thm}
Let $\lambda \geq 1$ such that for all $u, v \in V$, $\theta ,\theta' \in
\{0, \ldots, T\}$
it holds that
%
\begin{equation}
c_{uv} (\theta) \leq \lambda c_{uv}(\theta').
\end{equation}
Then, any $\alpha$-approximation of the TSP yields a $(\alpha
\lambda)$-approximation of
the TDTSP\xspace.
\end{thm}
\begin{proof}
Let $c : A \to \mathds{N}$ be defined as
%
\begin{equation}
c_{uv} \coloneqq \min_{\theta \in \{0, \ldots, T\}} c_{uv}(\theta).
\end{equation}
%
This implies that $c_{uv} \leq c_{uv}(\theta) \leq \lambda c_{uv}$ for all
$\theta$.
Let $T_{\opt}$, $T_{h}$ be the optimal and the $\alpha$-approximate tour with
respect to the costs $c$, and let $T_{\opt, t}$ be the optimal tour with respect to the time-dependent costs. We have that
%
\begin{equation}
\begin{aligned}
\ensuremath \theta^{\arr}(T_h) \leq \lambda \cdot c(T_h) & \leq (\alpha \lambda) \cdot
c(T_{\opt}) \\
& \leq (\alpha \lambda) \cdot c(T_{\opt, t}) \\
& \leq (\alpha \lambda) \cdot \ensuremath \theta^{\arr}(T_{\opt, t}). \\
\end{aligned}
\end{equation}
\end{proof}
Note that since the general ATSP is inapproximable, further assumptions,
such as a metric lower bound $c_{uv}$, are still necessary to obtain an
approximation.
\subsection{One-trees}
Relaxations play an important role in integer programming in general
and the TSP in particular. They provide lower bounds which can
be used to obtain quality guarantees for solutions. The prevalent
relaxation of combinatorial problems formulated as integer programs is
given by their LP relaxations. In several cases however, it is
possible to derive purely combinatorial relaxations. In the case of
the \emph{symmetric} traveling salesman problem, a popular
combinatorial relaxation is given by \emph{one-trees}. A one-tree with
respect to a graph $G = (V, E)$ is given by a spanning tree of $G$
together with an edge adjacent to a distinguished source $1 \in V$.
Since every tour is a one-tree, the one-tree of minimum cost provides
a lower bound on the cost of a tour.
The computation of a one-tree in the static case involves the computation
of a minimum spanning tree (MST). This computation can be
performed
efficiently using Prim's algorithm. It is therefore natural to ask whether
this approach can be generalized to the time-dependent case.
Let $T=(V, F)$ be a spanning tree of the graph $G$. We direct the
edges in $T$ away from $1$. For each vertex $v \in V$, there exists a
unique $(1,v)$-path $P_v$ in $T$. Hence, there is a unique
arrival time $\theta^{\arr}(u)$ induced by $T$ for each $u \in V$. The total
cost of the edges in $T$ is then given by
\begin{equation}
c(T) \coloneqq \sum_{(u, v) \in F} c_{u, v}(\ensuremath \theta^{\arr}(u)).
\end{equation}
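For concreteness, evaluating $c(T)$ amounts to a single pass that propagates arrival times away from the root; a minimal Python sketch (hypothetical data layout, with the same \texttt{cost(u, v, theta)} callback as in the sketch above):
\begin{verbatim}
def tree_cost(children, cost, root=1):
    # children[u]: children of u in T (edges directed away from the root);
    # arrival times follow the unique root-to-vertex paths.
    total, stack = 0, [(root, 0)]
    while stack:
        u, theta = stack.pop()
        for v in children.get(u, []):
            c = cost(u, v, theta)
            total += c
            stack.append((v, theta + c))
    return total
\end{verbatim}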
A \emph{time-dependent minimum spanning tree (TDMST)} minimizes
$c(T)$. Unfortunately, the computation of a TDMST is hard:
\begin{thm}
\label{thm:tdmst_hardness}
There is no $\alpha$-approximation algorithm for any $\alpha > 1$ for
the TDMST problem unless $\P = \ensuremath{\mathcal{NP}}\xspace$.
\end{thm}
\begin{proof}
Consider an instance of the \textsc{3Sat} problem.
Let $X \coloneqq \{x_1, \ldots, x_n\}$ be a set of $n$~variables and
$\mathcal{Z} \coloneqq \{Z_1, \ldots, Z_m\}$ be a set of $m$ clauses, where
each clause contains at most three literals.
We construct a suitable instance of the TDMST problem
using a number of components. First we define a component $A_i$ for each literal
$x_i \in X$. The component is shown in Figure~\ref{pic:mst_proof_variable_component},
the edges are annotated with their (static) travel times.
Any spanning tree will arrive at $x_i$ either at time $1$ or at
time $2$, depending on whether or not the resulting path leads past $s_i$.
%
\begin{figure}
\begin{center}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{TreeGadgetVariable.tikz}
\caption{A component which queries whether a variable is set}
\label{pic:mst_proof_variable_component}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{TreeGadgetClause.tikz}
\caption{A component which determines whether a clause is satisfied}
\label{pic:mst_proof_clause_component}
\end{subfigure}
\caption{Gadgets used in the proof of Theorem~\ref{thm:tdmst_hardness}}
\end{center}
\end{figure}
%
Next we define a component $B_j$ for each clause $Z_j \in \mathcal{Z}$.
Let $x_k$, $x_l$, $x_m$ be the literals which appear in $Z_j$. The edges
in the component have the following travel times:
%
\begin{enumerate}
\item
The edges between the vertices $w_{j, k}$, $w_{j, l}$ and $w_{j, m}$ have
a constant travel time of 1.
\item
The edge connecting $v_{j, k}$ and $w_{j, k}$ has a travel time of $1$
for times at most two, and $M \geq 1$ otherwise. The same holds
true for the two other respective edges.
\item
The travel time of the edge connecting $x_k$ and $v_{j, k}$ depends
on whether $x_k$ or $\overline{x}_k$ appears in the clause $Z_j$.
In the former case the travel time is always 1, whereas in the latter
it is given by
%
\begin{equation}
c_{x_k v_{j, k}}(\theta) \coloneqq
\begin{cases}
0, & \text{ if } \theta \geq 2 \\
2, & \text{ otherwise}\\
\end{cases}
\end{equation}
\end{enumerate}
%
The instance including the components is depicted in
Figure~\ref{pic:mst_proof_complete}. Consider a satisfying truth
assignment. For every literal $x_i$ set to \emph{true} we choose the
path $0$, $s_i$, $x_i$, $t_i$ in component $A_i$. If the literal is
set to \emph{false} we choose the path $0$, $s_i$, $t_i$, $x_i$.
Thus, the arrival time at $x_i$ is $1$ if $x_i$ is set to
\emph{true} and $2$ otherwise. For each clause $Z_j$ we add the
edges between the vertices corresponding to its literals and their
respective $v_j$ counterparts. Since the clause is satisfied, the
arrival time at at least one $v_j$ is $2$. Thus, the remaining part
of $B_j$ can be spanned using three additional edges of cost $1$
each. The resulting tree has costs of at most $2n + 6m$.
Conversely, consider a tree with costs less than $M$. We first make
the observation that any $x_i$ is connected by a path leading past
$A_i$ to $0$. Otherwise, the path from $0$ to $x_i$ would lead past
the component $B_j$ corresponding to some clause $Z_j$ containing
$x_i$ or $\overline{x}_i$. In this case however, it would not be
possible to reach vertex $w_{j, i}$ before time $2$ and the cost of
the tree would increase beyond $M$. Thus, any such tree corresponds
to an assignment of variables. Every component $B_j$ is connected by
an edge with cost of $1$. Therefore the assignment is also
satisfying. Assume there was an $\alpha$-approximation for the
TDMST problem. We let $M \coloneqq \alpha (2n+6m) + 1$ and run the
approximation. If the resulting tree has costs less than $M$, the
\textsc{3Sat} instance is satisfiable. Otherwise, the optimal TDMST
has costs at least $M / \alpha > 2n + 6m$, i.e.,\xspace the instance is not
satisfiable.
%
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{TreeGadgetComplete.tikz}
\caption{The TDMST construction used to prove Theorem~\ref{thm:tdmst_hardness}}
\label{pic:mst_proof_complete}
\end{figure}
\end{proof}
\section{Computational experiments}
\label{section:computational}
\subsection{Instances}
In order to test different formulations and techniques we generated
several problem instances, each given by a directed complete graph and
cost functions associated with its arcs. We embedded the vertices of
the graph into $\{0, \ldots ,100\}^{2}$ and introduced (symmetric)
costs $c_a$ using rounded-down Euclidean distances between the points
of the embedding.
We then augmented the static costs to time-dependent functions $c_a(\theta)$.
We first added $M \in \mathds{N}$ time steps $\theta_{1} < \theta_{2} < \ldots < \theta_{M}$
within the range $\{0, \ldots, \theta^{\max}\}$ and used these time
steps to construct a piecewise linear function
$f_a : \mathds{N} \to \mathds{Z}$:
\begin{enumerate}
\item
We let $f_a(0) \coloneqq 0$ and fixed the slope at zero to $+1$.
\item
The slope alternates between $+1$ and $-1$ with break points
at $\theta_i$ for $i = 1,\ldots, M$.
\end{enumerate}
We let $\lambda > 1$ and define the cost function
$c_a : \{0, \ldots, \theta^{\max}\} \to \mathds{N}$ as
\begin{equation}
c_a(\theta) \coloneqq c_a + \max(\min(f_a(\theta), \lambda c_a), 0).
\end{equation}
The parameter $\lambda$ bounds the attainable ratio $c_a(\theta) / c_a$; by
construction, $c_a \leq c_a(\theta) \leq (1 + \lambda)\, c_a$. We generally let
$\lambda = 3$ (see Figure~\ref{pic:func} for an example) and distribute
$M = 100$ break points over an interval of 1000 points in time.
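A minimal Python sketch of this generation scheme (function and parameter names are ours):
\begin{verbatim}
import random

def time_dependent_cost(c_a, theta_max=1000, M=100, lam=3, seed=0):
    # Piecewise-linear f_a with f_a(0) = 0 and initial slope +1; the slope
    # alternates between +1 and -1 at M random break points.  Slopes of
    # absolute value 1 keep theta + c_a(theta) nondecreasing (FIFO).
    rng = random.Random(seed)
    breaks = set(rng.sample(range(1, theta_max), M))
    costs, f, slope = [], 0, +1
    for theta in range(theta_max + 1):
        if theta in breaks:
            slope = -slope
        costs.append(c_a + max(min(f, lam * c_a), 0))
        f += slope
    return costs  # costs[theta] = c_a(theta)
\end{verbatim}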
\begin{center}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{axis}[xmin=0,ymin=0, ymax=40,xmax=100,grid=major, grid style={thin,black!5}, unit vector ratio=1 1]
\addplot[mark=none, black] table [x=x, y=y, col sep=comma] {Data/func.plot};
\end{axis}
\end{tikzpicture}
\caption{A sample plot of the travel time function for costs of 10, a time
horizon of $\theta^{\max} = 100$ and $M = 10$ break points.
The cost is constrained by a factor of $\lambda = 3$}
\label{pic:func}
\end{center}
\end{figure}
\end{center}
Note that the functions defined above satisfy the FIFO property. By using
shortest-path distances we ensure that the time-dependent triangle inequality
is satisfied as well.
\subsection{Formulations}
We implemented the different formulations based on the
SCIP\cite{SCIP} IP solver\footnote{SCIP version 4.0.0 with SoPlex 3.0.0 as an
LP-solver}. We ran all experiments on an Intel Core i7 CPU clocked
at \SI{3.2}{\giga\hertz} on a system with \SI{8}{\giga\byte} of
RAM. We started with 50 relatively small instances containing 20 vertices
each. We first computed optimal tours with respect to the static
costs and used the resulting travel times with respect to the
time-dependent cost to derive smaller time horizons to decrease the
size of the corresponding time-expanded graphs. Despite their
relatively small size, the time-expansions were quite large, each
containing between \num{80000} and \num{170000} arcs.
We started by comparing the different combinations of formulations
and pricing approaches. We were for the most part not able to
determine the optimal solutions within the prescribed time
limit of \SI{3600}{\second}. There are, however, significant
differences between the
formulations (see Table~\ref{table:formulations} for details).
\begin{itemize}
\item
The different pricing approaches differ significantly performance-wise.
Specifically, the arc-based pricing approach fails to solve even a single
root LP.
\item
The path-based formulation \eqref{eq:path_based} generally yields
smaller LPs than the full arc-based formulation. However, similarly
to the arc-based pricing approach, many root LPs are not solved within
the time limit.
\item
The path-based pricing approaches reduce the average size of the solved LPs
by about 90 \%. As a result, almost all of the root LPs are solved successfully.
\item
Using a dual stabilization approach does not result in smaller gaps
compared to the simple path-based pricing.
\item
The most successful approach is based on pricing 2-cycle free paths.
While the average LP size does not change much, the remaining gap
after the exhaustion of the computation time is decreased furthest
when pricing 2-cycle free paths. This is due to the fact that,
as mentioned above, the improved dual bound more than makes
up for the increased computational time required to
avoid 2-cycles during the pricing of new paths.
\end{itemize}
\begin{center}
\input{Tables/Formulations}
\end{center}
\subsection{Valid inequalities and primal heuristics}
We proceed to study the effect of adding valid inequalities in order
to increase dual bounds. To this end we restrict ourselves to the
arc-based formulation with 2-cycle free path pricing, which
performed best in the experiments conducted this far. In order
to evaluate the effectiveness of different classes of valid
inequalities we again consider the remaining gap after \SI{3600}{\second}
of computations. The remaining gap is a good measure of the overall
effectiveness of the different classes of inequalities, since it
strikes a balance between the increase of the dual bound and
the required separation time. The latter can be substantial, in
particular if the separation involves the solution of $\ensuremath{\mathcal{NP}}\xspace$-hard problems.
We make the following observations (see the details in
Table~\ref{table:inequalities}):
\begin{itemize}
\item
The separation of cycle inequalities actually increases the gap compared
to the formulation without any separation. It is therefore inefficient
to consider these inequalities at all.
\item
There is no significant decrease with respect to the remaining gap
when separating unitary AFC, odd path-free, and odd CAT inequalities.
Apparently, the separation time for these classes of inequalities does
not merit the increased dual bounds.
\item
By far the most efficient classes of inequalities are (lifted) subtour
elimination constraints. Few inequalities suffice to significantly
decrease the remaining gap.
\item
Adding primal heuristics decreases the remaining gap by a considerable margin.
It is particularly efficient to construct tours based on the combined
flow $x_{uv}^{*}$ of the current LP relaxation. Apparently the LP is able
to accurately determine the underlying arcs which are contained in
tours with small travel times. In contrast the built-in heuristics
seem to be unable to take advantage of this fact.
\item
The propagation of upper and lower bounds yields an additional improvement
on the running times. The combination of the speedup techniques makes it
possible to solve more than ten percent of the instances to optimality.
\end{itemize}
\begin{center}
\input{Tables/Full}
\end{center}
\subsection{Combinations of inequalities}
It is clear from the previous experiments that the addition of SECs / LSECs is the most
effective approach to improve dual bounds. Together with primal heuristics
and objective value propagation it is possible to reduce gaps to the point
of being able to solve a significant part of all instances. We go on to study
the effect of combining SECs / LSECs with other classes of inequalities. To this
end we first separate SECs / LSECs to strengthen the LP-relaxation before applying
separation procedures for different classes of inequalities while employing
both primal heuristics and objective value propagation. The results are depicted
in Table~\ref{table:combined}. Unfortunately, separating additional
inequalities from the strengthened relaxation has no significant effect on either
the number of instances solved to optimality or the remaining gap for unsolved instances.
\begin{center}
\input{Tables/Combined}
\end{center}
\subsection{Learning to branch}
While tailoring the solver led to significant improvements in the
running times, it is still not possible to solve
real-world instances to optimality in a reasonable amount of time.
A key problem regarding Branch-and-Bound schemes (and by extension,
Branch-Price-and-Cut schemes)
is the selection of the branching candidate in the presence of several fractional variables.
To this end, multiple branching rules have been proposed in the literature,
among these the \emph{strong branching} rule.
Strong branching chooses the ``best'' possible fractional variable with
respect to a certain score, based on LP-relaxations related to the current
Branch-and-Bound node (see \cite{StrongBranching} for details).
Strong branching usually yields much smaller Branch-and-Bound
trees. However, the computational costs to evaluate the score of variables is
rather high.
As a result, strong branching is usually only
employed at the root node of the Branch-and-Bound tree.
Khalil et al.~\cite{Khalil2016} suggested employing machine learning techniques
to learn a branching rule yielding a similar size
of the Branch-and-Bound tree, while avoiding the computational overhead.
The authors used an SVM-based approach to learn a branching rule based
on several generic MIP features, such as fractionality, pseudocost,
and various variable statistics. Labels were assigned based on
strong branching scores. While the results were promising,
the authors were not able to beat the CPLEX-default branching
rule with respect to running time or Branch-and-Bound tree size.
Still, they suggest that the selection and weighing of different
features may be advantageous in order to obtain improved instance-specific
branching models. Furthermore, there has been rapid development regarding
rank learning techniques, mainly driven by web search engine
development (see \cite{LearningToRank} for a summary). As a
result, the SVM-based ranking approach~\cite{SVMRank} has been
superseded by different approaches. Specifically,
the lambdaMART~\cite{Burges2010} algorithm, a boosted tree version of LambdaRank
seems to perform significantly better on web search
related test data.\footnote{A lambdaMART implementation is readily available as part of the
Quickrank~\cite{Quickrank} \texttt{C++} library.}
In this paper, we use the lambdaMART algorithm to learn a ranking of branching
candidates depending on some of the features from~\cite{Khalil2016} and some
features specific to the TDTSP\xspace.
We generated 20~to~30 training instances.
For each of these, we collected training data from several branching nodes,
yielding slightly more than $5000$~data samples to learn the ranking function.
Labels were assigned based on strong branching scores.
The following features were collected for each arc $a=(u,v) \in A$
depending on the value of the current LP-value $Z_{\text{LP}}$ and the cost
$Z^{*}$ of the best feasible solution available:
\begin{itemize}
\item
cost relative to the current LP value: $c_{a} / Z_{\text{LP}}$
\item
cost relative to the best feasible solution: $c_{a} / Z^{*}$
\item
cost relative to the current gap: $c_{a} / (Z^{*} - Z_{\text{LP}})$
\item distance to one and zero: $x_{uv}$, $1 - x_{uv}$
\item variable slack: $\min(x_{uv}, 1 - x_{uv})$
\item
number of arcs $(u, v, \theta)$ in $A^{^{\top}}$ divided by $|A^{^{\top}}|$
\item
number of arcs $(u, v, \theta)$ in $A^{^{\top}}$ which
have been priced into the current LP
divided by $|A^{^{\top}}|$
\item
pseudocost of arc $x_{uv}$
\item
the (four) sizes relative to $|V|$ of the connected components
containing $u$ or $v$ of the subgraph containing only the arcs
branched to one or zero
\end{itemize}
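For illustration, the per-arc feature vector can be assembled as follows (a hypothetical sketch; the argument names are ours, and in practice the quantities are gathered inside the solver callbacks):
\begin{verbatim}
def branching_features(c_a, z_lp, z_best, x_uv, pseudocost,
                       frac_expanded, frac_priced, comp_sizes):
    # comp_sizes: the four component sizes relative to |V|.
    return [c_a / z_lp,             # cost relative to LP value
            c_a / z_best,           # cost relative to best solution
            c_a / (z_best - z_lp),  # cost relative to current gap
            x_uv, 1.0 - x_uv,       # distance to one and zero
            min(x_uv, 1.0 - x_uv),  # variable slack
            frac_expanded,          # arcs (u, v, theta) relative to |A^T|
            frac_priced,            # priced arcs relative to |A^T|
            pseudocost] + list(comp_sizes)
\end{verbatim}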
We trained two different ranking functions: one based on small
instances, each containing $|V| = 10$ vertices, and one based on
large instances, each containing $|V| = 20$ vertices.
We compared the resulting branching rules with
the one built into SCIP. To this end, we generated
20~small, respectively large, random instances
different from the training instances and
compared the branching rules with respect
to running time and remaining gap.
The results can be found in Table~\ref{table:learning}.
Unfortunately, these first tests were not successful, since the
running times got worse.
\begin{center}
\input{Tables/Learning}
\end{center}
\section{Conclusion}
\label{section:conclusion}
In this paper we have discussed several theoretical and empirical properties
of the TDTSP\xspace. Since the TDTSP\xspace is a generalization of the ATSP, many of
the complexity-specific theoretical results, such as $\ensuremath{\mathcal{NP}}\xspace$-hardness and
inapproximability, carry over to the TDTSP\xspace.
Unfortunately, several positive results regarding the ATSP are not retained in the TDTSP\xspace.
Specifically, the TDTSP\xspace remains inapproximable even if a generalized
triangle inequality is satisfied. Furthermore, even simple
relaxations, such as time-dependent trees cannot be used to
determine combinatorial lower bounds on the TDTSP\xspace.
From a practitioner's point of view, the increase in problem size poses significant
problems when trying to solve even moderate-sized instances of the TDTSP\xspace.
The authors of \cite{TimeDependentTSPPoly} conclude that there are challenging
instances of the STDTSP with fewer than one hundred vertices. While the results
date back some years, the increase in computational complexity is apparent
even in the case of the STDTSP.
To be able to tackle the TDTSP\xspace, a sophisticated pricing routine is absolutely
necessary. The path-based structure of the formulation is helpful in devising
a pricing routine which employs the technique of Lagrangean relaxations.
The connection between TDTSP\xspace and ATSP yields a variety of feasible classes
of inequalities which help to significantly improve dual bounds. Unfortunately,
the generalizations of STDTSP-type inequalities do not perform equally well in comparison.
Objective value propagation and primal heuristics decrease the gap even further. The
primal heuristics profit from the connection to the ATSP, significantly outperforming
the heuristics built into the solver itself.
Similar to the results in~\cite{Khalil2016}, the learned branching rule
does not surpass the conventional methods. Still, better features,
a reinforcement learning approach, or learning parts of a solution directly
might change the picture.
\section{Formulations}
\label{section:formulations}
\subsection{Time-expanded graphs}
In the following, we will consider formulations based on time-expanded graphs.
In order to introduce time-expanded graphs we first define a set of
reachable points in time. We let $^{\top} : V \to 2^{\Theta}$,
\begin{equation}
\begin{aligned}
^{\top}(v) \coloneqq \{ \theta \in \Theta \mid \: &
\exists (a_1, \ldots, a_k), a_1 = (s,v_1), a_k = (u_k, v), \\
& \ensuremath \theta^{\arr}(a_1, \ldots, a_k) = \theta \}
\end{aligned}
\end{equation}
The time-expanded graph $D^{^{\top}} = (V^{^{\top}}, A^{^{\top}})$ has vertices
$V^{^{\top}} \coloneqq \{ v_{\theta} \mid v \in V,\, \theta \in ^{\top}(v) \}$
and arcs
\begin{equation}
A^{^{\top}} \coloneqq \{ (u_{\theta}, v_{\theta'}) \mid
u_{\theta}, v_{\theta'} \in V^{^{\top}},
\theta' = \theta + c_{uv}(\theta) \}.
\end{equation}
We will denote an arc $(u_{\theta}, v_{\theta'})$ by $(u,v, \theta)$.
We will from now on assume that $c_{uv}(\theta) > 0$ for all
$(u, v) \in A$, $\theta \in ^{\top}(v)$. This directly implies
that $D^{^{\top}}$ is acyclic.
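Concretely, $D^{^{\top}}$ can be constructed by a forward search from $s_0$; a minimal Python sketch (hypothetical interface, assuming a callback \texttt{cost(u, v, theta)} with strictly positive values):
\begin{verbatim}
from collections import deque

def time_expand(n, cost, theta_max, s=0):
    # Vertices are pairs (v, theta); arcs connect (u, theta) to
    # (v, theta + cost(u, v, theta)) whenever the arrival stays <= theta_max.
    reachable = {(s, 0)}
    arcs = []
    queue = deque([(s, 0)])
    while queue:
        u, theta = queue.popleft()
        for v in range(n):
            if v == u:
                continue
            arrival = theta + cost(u, v, theta)
            if arrival <= theta_max:
                arcs.append(((u, theta), (v, arrival)))
                if (v, arrival) not in reachable:
                    reachable.add((v, arrival))
                    queue.append((v, arrival))
    return reachable, arcs
\end{verbatim}
Since all costs are positive, every arc strictly increases the time component, which makes the resulting graph acyclic by construction.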
\begin{exmp}
Figure~\ref{pic:example} shows a directed graph with travel times for each arc
and its time expansion. Any tour on $D$ can be embedded into $D^{^{\top}}$
as a $(s_0,s_{\theta})$-path.
%
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[t]{0.4\textwidth}
\includegraphics[width=\textwidth]{ExampleGraph.tikz}
\caption{The directed graph $D$}
\label{subfig:example_graph}
\end{subfigure}
~
\begin{subfigure}[t]{0.4\textwidth}
\includegraphics[width=0.66\textwidth]{ExpandedExample.tikz}
\caption{The time-expansion $D^{^{\top}}$ of $D$ with time horizon $\theta^{\max} = 6$}
\label{subfig:expanded_example}
\end{subfigure}
\caption{A directed graph $D$ and its time-expansion $D^{^{\top}}$.}
\label{pic:example}
\end{center}
\end{figure}
\end{exmp}
\subsection{An Arc-based formulation}
We consider an arc-based formulation based on the graph $D^{^{\top}}$
consisting of binary variables $x_{uv,\theta}$ for each arc in
$D^{^{\top}}$. The resulting formulation is inspired by
a three-index-formulation for STDTSP~\cite{Picard1978}. The formulation
consists of a flow through the time-expanded graph $D^{^{\top}}$, which has
to cover each vertex exactly once.
\begin{equation}
\label{eq:arc_based}
\begin{aligned}
\min\ & \sum_{(u,v,\theta) \in A^{^{\top}}} c_{uv,\theta} \cdot x_{uv, \theta} && \\
&\sum_{\theta \in ^{\top}(v)} \sum_{(v, w, \theta) \in \delta^{+}(v_{\theta})} x_{vw, \theta} =1 &&\text{for all } v \in V \\
& \sum_{(v,w, \theta) \in \delta^{+}(v_\theta)} x_{vw,\theta} - \sum_{(u, v, \theta') \in \delta^{-}(v_{\theta})} x_{uv,\theta'}
= 0
&& \text{for all } v \ne s, \theta \in ^{\top}(v) \\
&x_{vw,\theta} \in \set{0,1} && \text{for all } v \neq w, \theta \in ^{\top}(v).
\end{aligned}
\end{equation}
\begin{rem}
\label{rem:flow}
Any solution of the IP or its LP-relaxation can be decomposed into
a set of paths leading from vertex $s_0$ to $s_{\theta}$ for
$\theta > 0$. Thus, an equivalent cost function is given by
$\sum_{\theta \in ^{\top}(s)} \sum_{(v,s,\theta') \in \delta^{-}(s_\theta)} \theta \cdot x_{vs,\theta}$.
\end{rem}
\paragraph{Relation to the static ATSP}
In the following, we will consider the relationship between the TDTSP\xspace and the
static ATSP problem. To this end, we let $x : A \to \mathds{R}_{\geq 0}$ be the combined
flow traversing an arc $(u, v) \in A$, i.e.
\begin{equation}
x_{uv} \coloneqq \sum_{\theta \in ^{\top}(u)} x_{uv, \theta}
\end{equation}
where $(x_{uv,\theta})_{(u, v, \theta) \in A^{^{\top}}}$ is a feasible
solution of \eqref{eq:arc_based}. Observe that the covering
constraints and the flow conservation yield the well-known 2-matching
equations $x(\delta^{+}(v)) = x(\delta^{-}(v)) = 1$ for all $v \in V$.
Similarly, integrality together with the condition $x_{uv} \leq 1$
follows from the integrality of the original solution. However, a
correct static ATSP formulation still requires subtour
elimination constraints (SECs) of the form
\begin{equation}
x(\delta^{+}(S)) \geq 1 \quad \forall S \subset V,\ S \neq \emptyset, V.
\end{equation}
Since $D^{^{\top}}$ is acyclic, any solution of \eqref{eq:arc_based} is
guaranteed to satisfy the additional SECs\footnote{Equivalently,
flow augmentation techniques such as \cite{Gouveia1995} used to strengthen
ATSP formulations are redundant for solutions of the TDTSP\xspace.}.
Still, SECs are not necessarily satisfied by fractional
solutions. Thus, formulation \eqref{eq:arc_based} can be strengthened
by separating SECs with respect to the underlying static ATSP.
We can produce fractional solutions to the static ATSP problem by
computing the combined flow after having successfully separated all SECs.
Consequently, we can use any ATSP separator to derive valid inequalities for the
ATSP which we can then formulate in terms of the variables corresponding
to $D^{^{\top}}$ in order to strengthen our formulation.
Note that while any feasible TDTSP\xspace solution is feasible for the underlying
ATSP, generic ATSP solutions do not necessarily produce feasible solutions
of the TDTSP\xspace.
Specifically, no tour $T=(a_1,
\ldots, a_n)$ with $\ensuremath \theta^{\arr}(T) > \theta^{\max}$ can be embedded into
$D^{^{\top}}$. The complete description of the TDTSP\xspace in terms of combined
variables can be obtained by adding \emph{forbidden path} constraints
of the form
\begin{equation}
\sum_{a \in P} x_a \leq k - 1 \quad
\forall \: P = (a_1, \ldots, a_k) : \ensuremath \theta^{\arr}(P) > \theta^{\max}.
\end{equation}
As a result, facet-defining ATSP inequalities, while valid, are
not necessarily facet-defining for the TDTSP\xspace.
\begin{lem}[Dimensionality]
Let $n^{^{\top}} \coloneqq |V^{^{\top}}|$, $m^{^{\top}} \coloneqq |A^{^{\top}}|$ be the
number of vertices and arcs, respectively, of $D^{^{\top}}$, and $V^{s}
\coloneqq \{ s_{\theta} \in V^{^{\top}} \}$ the $n^{s} \coloneqq |V^{s}|$
many vertices corresponding to $s$. If $c$ satisfies the
time-dependent triangle inequality
\eqref{eq:time_dependent_triangle}, then
%
\begin{equation}
\dim(P) \leq m^{^{\top}} - (n^{^{\top}} - n^{s}) - (n - 1)
\end{equation}
\end{lem}
\begin{proof}
The dimension of $P$ is trivially bounded by the difference between the
number of variables $m^{^{\top}}$ and the rank of the system of
equations \eqref{eq:arc_based}. Since $D^{^{\top}}$ is acyclic, the
system of equations ensuring flow conservation on the
vertices in $V^{^{\top}} \setminus V^{s}$ has full rank of $n^{^{\top}} - n^{s}$.
Consider a vertex $v \neq s$ contained in a tour
$T=(v_1 = s, \ldots, v_{i - 1}, v_{i} = v, v_{i + 1}, \ldots, v_n)$
given as a sequence of vertices in $D$. Assume that
equation $x(\delta^{+}(v)) = 1$ is struck from system \eqref{eq:arc_based}.
Since $c$ satisfies the time-dependent triangle inequality,
the sequence $T'=(v_1 = s, \ldots, v_{i - 1}, v_{i + 1}, \ldots, v_n)$
is a feasible solution to the reduced system of equations, yet
it is not a feasible TDTSP\xspace solution. Therefore, none of
the $n - 1$ equations corresponding to vertices other than
$s $ can be struck without increasing
dimensionality. Thus, the combined system of equations has
the required rank.
\end{proof}
\subsection{Pricing}
The approach of time-expansion can be used to solve a variety of
time-dependent problems~\cite{QuickestFlows,Railroad}.
Unfortunately, the time-expansion
of a problem quickly increases the size of the resulting formulations.
Specifically, in the case of the TDTSP\xspace, even moderately sized instances
of fewer than one hundred vertices can result in millions of arcs in $A^{^{\top}}$,
making it difficult to solve even the LP-relaxation of the TDTSP\xspace.
To alleviate the problem, an obvious approach is to use
column-generation (see \cite{CGSummary} for a summary on the topic).
Nevertheless, there are a number of different variants of CG, especially
regarding the pricing strategy. It is not immediately clear which one is the
best for the TDTSP\xspace. We will present the tested approaches in the following.
Let $(\lambda_v)_{v \in V}$, $(\mu_{v_{\theta}})_{v \neq s,
\theta \in ^{\top}(v)}$ be
the dual variables of the respective constraints in~\eqref{eq:arc_based}.
The reduced cost of an arc $(v, w, \theta)$ is then given by
\begin{equation}
\overline{c}_{vw,\theta} \coloneqq c_{vw}(\theta) -
\left( \lambda_{v} + \mu_{v_{\theta}} - \mu_{w_{\theta + c_{vw}(\theta)}} \right).
\end{equation}
To obtain a feasible solution to populate the initial LP, we compute
a heuristic tour, which we then add as a whole.
\paragraph{Lagrangean pricing} While the pricing approach
significantly reduces the formulation size and facilitates the
solution of much larger instances, it can be improved
considerably. Consider a single arc $(v, w, \theta)$ with negative reduced
costs: The arc can only obtain a positive value in the subsequent
LP-solution if it is part of a $(s_0,s_{\theta})$-path. It is therefore
advisable to generate entire paths at once rather than single arcs.
The pricing problem then becomes a shortest path problem in $D^{^{\top}}$.
Even though the reduced costs are negative, the pricing problem can
be solved using breadth-first search since $D^{^{\top}}$ is acyclic.
We also employ a technique known as
\emph{Lagrangean pricing}~\cite{LagrangeanPricing}.
The technique is based on the observation that the pricing problem
is a Lagrange relaxation of the full LP, which implies that
the difference between the current LP value and the value of the
full LP is bounded by the minimum reduced cost of, in this case,
an $(s_0,s_{\theta})$-path. The pricing loop is aborted as
soon as the cost rises above a value of $-\epsilon$. This approach
helps to deal with the degeneracy often present in formulations
of combinatorial optimization problems by avoiding the pricing of
variables which have negative reduced costs but do not attain
a nonzero value in the optimal basis of the LP relaxation.
It is also the case that paths obtained from the pricing procedure
occasionally correspond to tours in $D$ and therefore
to feasible TDTSP\xspace solutions.
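Since increasing time is a topological order of the acyclic graph $D^{^{\top}}$, the pricing problem reduces to a single pass over the arcs; a minimal sketch (hypothetical interface, with \texttt{arcs} as produced by the expansion sketch above and a \texttt{reduced\_cost} callback):
\begin{verbatim}
import math
from collections import defaultdict

def price_paths(arcs, source, reduced_cost):
    # Relax arcs sorted by the time component of their tails; negative
    # reduced costs are unproblematic because the order is topological.
    dist = defaultdict(lambda: math.inf)
    pred = {}
    dist[source] = 0.0
    for u, v in sorted(arcs, key=lambda arc: arc[0][1]):
        d = dist[u] + reduced_cost(u, v)
        if d < dist[v]:
            dist[v], pred[v] = d, u
    return dist, pred
\end{verbatim}
Paths of negative reduced cost ending in some $s_\theta$, $\theta > 0$, are then recovered from \texttt{pred} and added to the LP.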
\paragraph{Pricing cycle-free paths}
\label{paragraph:cycle_free}
Unfortunately, many paths which are generated throughout the pricing
do not share much resemblance with tours in the underlying graph
$D$: On the one hand, certain paths contain only a few vertices and
lead almost immediately back to $s_{\theta}$. We will address this
problem using the propagation of lower bounds. On the other hand,
paths frequently contain cycles with respect to $D$, i.e.,\xspace they
contain two different versions $v_{\theta}$, $v_{\theta'}$ of the
same vertex $v \neq s$. It is of course possible to generate
inequalities in order to cut off a fractional solution $\tilde{x}$
containing a cycle in its path decomposition (see
Subsection~\ref{subsection:inequalities}). However, ideally we
would like not to have paths containing cycles in the LP in the
first place. Obviously, the problem of finding an acyclic path of
negative reduced cost is equivalent to finding the optimal solution
to the TDTSP\xspace problem. It is however possible to find $k$-cycle free
paths, i.e.,\xspace paths not containing a cycle with at most $k$ arcs using
Dijkstra-like labeling schemes~\cite{AcyclicShortestPath}.
Specifically, avoiding 2-cycles merely increases computation time by
a factor of two, while significantly improving the resulting lower
bounds. It is also possible to avoid $k$-cycles for arbitrary $k$;
however, the proposed algorithm takes $\O((k!)^2)$ time, which
quickly makes the approach intractable for increasing values of $k$.
\subsection{Valid inequalities}
\label{subsection:inequalities}
In order to strengthen the formulation, a number of additional inequalities
can be included in the formulation. We give a brief summary of valid inequalities,
some of which are well-known ATSP inequalities, whereas others are either adaptations
of STDTSP inequalities or newly derived ones.
\paragraph{ATSP inequalities}
\label{paragraph:dkp}
Apart from the subtour elimination constraints, the probably best-known family of
facet-defining inequalities for the ATSP goes by the name of
$D_{k}^{+}$-inequalities \cite{LineareCharatkerisierung,TSPVariations}.
$D_{k}^{+}$-inequalities are defined on a complete directed graph $D=(V, A)$ with $n$ vertices.
To simplify notation, for sets $S,T \subseteq V$ we let
$[S:T] \coloneqq \{ (u, v) \in A \mid u \in S, v \in T\}$.
The $D_{k}^{+}$-inequality for a sequence $(v_1, \ldots, v_k)$ of $2 \leq k < n$
distinct vertices is given by
\begin{equation}
\begin{split}
\sum_{j=1}^{k -1} x_{v_{j},v_{j+1}} + x_{v_k,v_1}
+ 2 x([\{v_1\} : \{v_3, \ldots, v_k\}]) & \\
+ \sum_{j = 4}^{k} x([\{v_j\} : \{v_3, \ldots, v_{j - 1}\}]) \leq k - 1.
\end{split}
\end{equation}
The separation of $D_{k}^{+}$-inequalities involves the enumeration of
possible sequences in a branch and bound-like fashion. Nonetheless,
the separation works well in practice, since many of the possible
sequences can be pruned. Note that in the special case $k = 2$
the inequality becomes $x_{uv} + x_{vu} \leq 1$.
\paragraph{Incompatibilities}
\label{paragraph:odd_cat}
Since any feasible solution to an integer program is a stable set with
respect to its incompatibility graph $\ensuremath{\mathcal{I}}$, cliques and odd cycles in $\ensuremath{\mathcal{I}}$ are
the basis for many strong inequalities for arbitrary integer programs, a
fact which is often used in MIP solvers.
While the incompatibility graph of the symmetric TSP problem is empty, the
ATSP problem already has a significant amount of incompatibilities. Specifically,
the arcs $(u, v) \neq (u',v')$ are incompatible if $v = v'$ or $u = u'$ or
both $u = v'$ and $u' = v$. Clique inequalities are implied by the constraints
$x(\delta^{+}(v)) = x(\delta^{-}(v)) = 1$ and $x_{uv} + x_{vu} \leq 1$.
However, it is possible to derive inequalities from odd cycles in $\ensuremath{\mathcal{I}}$.
These \emph{odd closed alternating trails}~\cite{NewFacets} (odd
CATs for short) can be separated heuristically by computing shortest
paths in an auxiliary bipartite graph. Note that with respect to the
incompatibility graph of the TDTSP\xspace, the cuts correspond to
odd cycles of cliques rather than odd cycles of vertices. As a result the
obtained cuts are stronger than ordinary odd cycle cuts and easier
to separate due to the small size of the incompatibility graph of the
underlying ATSP.
\paragraph{Odd path-free inequalities}
\label{paragraph:odd_path_free}
Consider a set $S \subseteq V \setminus \{s\}$ of vertices
of the original graph. Let
$V^{^{\top}}(S) \coloneqq \{u_{\theta} \in V^{^{\top}} \mid u \in S \}$
be the corresponding vertices in $D^{^{\top}}$ and $A^{^{\top}}(S)$
the induced arc set:
\begin{equation}
A^{^{\top}}(S) \coloneqq \{ (u, v, \theta) \in A^{^{\top}} \mid u, v \in S \}.
\end{equation}
The intersection of any tour with the set $A^{^{\top}}(S)$ can contain at most $|S| - 1$ arcs;
the corresponding inequality is equivalent to a subtour elimination constraint.
If $|S| = 2k + 1$ is odd, another inequality
can be obtained by considering certain subsets of $A^{^{\top}}(S)$. Specifically,
$A' \subseteq A^{^{\top}}(S)$ is called \emph{path-free}, if it does not
contain a path consisting of at least three different vertices in $S$. The intersection
of a path-free set $A'$ with any tour can contain at most $k$ arcs,
yielding the following \emph{odd path-free} inequality (see Figure~\ref{pic:path_free}):
\begin{equation}
\sum_{(u, v, \theta) \in A'} x_{uv, \theta} \leq k.
\end{equation}
In order to separate an odd path-free inequality we first have to
find some \emph{promising} subset $S$, i.e.,\xspace a set of odd size forming
a clique of sufficient weight. For such a set $S$ the separation problem
is equivalent to finding a stable set of maximum weight in
the (undirected) line graph of $(V^{^{\top}}(S), A^{^{\top}}(S))$.
Since both problems are NP-hard themselves, we make several
restrictions in order to decrease the computational costs.
First, note that larger values of $k$ result in larger
line graphs making the computation of stable sets much
more challenging. We therefore choose to restrict ourselves
to the case of $k = 1$ (in which odd path-free sets
correspond to cliques in the incompatibility graph $\ensuremath{\mathcal{I}}$).
This restriction also enables us to find
promising sets in polynomial time by enumerating all
3-sets of vertices in $D$. In order to avoid separating
very similar inequalities we consider only
the largest promising 3-set containing
each vertex $v \in V \setminus \{s\}$. The separation
for each set is performed using an integer program.
\begin{figure}[ht]
\centering
\includegraphics[width=0.25\textwidth]{PathFree.tikz}
\caption{A path-free set of arcs on three vertices}
\label{pic:path_free}
\end{figure}
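As an illustration, a brute-force version of this separation for $k = 1$ is sketched below (in practice the integer program described above is used); the arc encoding as (tail, head) pairs of expanded vertices and all function names are ours.
\begin{verbatim}
from itertools import combinations

def forms_path(a, b):
    """Arcs of D^T restricted to S, encoded as ((u, th), (v, th')).
    True if one arc can directly follow the other and the outer
    vertices differ, i.e. the pair covers three distinct vertices."""
    for x, y in ((a, b), (b, a)):
        (u, _), head = x
        tail, (w, _) = y
        if head == tail and u != w:
            return True
    return False

def separate_odd_path_free(support, eps=1e-6):
    """support: {arc: fractional value} over A^T(S) with |S| = 3 (k = 1).
    Returns a maximum-weight path-free subset and a violation flag."""
    arcs = [a for a, val in support.items() if val > eps]
    best, best_w = [], 0.0
    for r in range(1, len(arcs) + 1):
        for sub in combinations(arcs, r):
            if any(forms_path(a, b) for a, b in combinations(sub, 2)):
                continue
            w = sum(support[a] for a in sub)
            if w > best_w:
                best, best_w = list(sub), w
    return best, best_w > 1.0 + eps  # violated if weight exceeds k = 1
\end{verbatim}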
\paragraph{Lifted subtour elimination inequalities}
\label{paragraph:lsec}
As discussed above, subtour elimination constraints
can be used to cut off fractional TDTSP\xspace solutions.
SECs can be separated in polynomial time by solving a series
of flow problems on the underlying graph, ultimately yielding
a set $S \subseteq V$, $S \neq \emptyset, V$ minimizing
$x(\delta^{+}(S))$. In the following we will assume w.l.o.g.\ that
$s \in S$. SECs can be strengthened by imposing an upper limit on
the time of the initial departure from the set $S$.
After all, any tour has to leave $S$ sufficiently early
to be able to reach the vertices in $V \setminus S$ and return to
$s$. More formally, let $\hat\theta$ be such that
\begin{equation}
\hat\theta \geq
\max \{ \theta \mid \text{There exists a tour $T$ leaving $S$ for the first time at $\theta$} \}.
\end{equation}
Then, the following \emph{lifted} subtour elimination constraint
(LSEC) is valid for all tours:
\begin{equation}
\sum_{\substack{ (u, v, \theta) \in A^{^{\top}} \\ u \in S, v \notin S, \\ \theta \leq \hat\theta }} x_{uv, \theta} \geq 1
\end{equation}
Clearly, $\hat\theta$ is maximized for a tour which first serves $S$,
then $V \setminus S$ and returns to $s$ immediately afterwards. Thus,
maximizing $\hat\theta$ would involve the
solution of a series of TDTSP\xspace problems on $V \setminus S$,
an approach which is clearly intractable in practice. We propose to
compute a larger value of $\hat\theta$ given by $\theta^{\max} - \check\theta$
with a lower bound $\check\theta$ on the length of the shortest tour on $V \setminus S$.
We derive $\check\theta$ by considering an ATSP on $V \setminus S$ with costs given
by suitably chosen lower bounds on the travel times $c_{uv, \theta}$. The ATSP itself can
be bounded from below by computing an arborescence of minimum weight.
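A minimal sketch of this bounding chain, assuming \texttt{networkx} is available and using our own data layout, could read as follows; any tour on $V \setminus S$ contains a spanning arborescence, so the arborescence weight is a valid value for $\check\theta$.
\begin{verbatim}
import networkx as nx

def lifted_departure_bound(V, S, c, theta_max):
    """c[(u, v)] is the list of travel times c_uv(theta),
    theta = 0, ..., theta_max. Returns a valid value for theta-hat."""
    rest = [v for v in V if v not in S]
    G = nx.DiGraph()
    for u in rest:
        for v in rest:
            if u != v:
                # lower bound on the (time-dependent) travel time
                G.add_edge(u, v, weight=min(c[u, v]))
    arb = nx.minimum_spanning_arborescence(G)
    check_theta = sum(d["weight"] for _, _, d in arb.edges(data=True))
    return theta_max - check_theta
\end{verbatim}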
\paragraph{Cycle inequalities}
\label{paragraph:cycle}
While the formulation (\ref{eq:arc_based}) does not contain any subtours, it is
possible that a path in the LP-relaxation visits a vertex at several
different points in time. Specifically, a path $P$ in $D^{^{\top}}$ can
contain a subsequence $((u, v, \theta), (v, u, \theta'))$,
forming a cycle of length 2 in
$D$ (where $v \neq s$ and $\theta' = \theta + c_{uv}(\theta)$).
We know that if a tour visits $v$ at $\theta'$,
it has to go on using an arc $(v, w, \theta')$ such
that $w \neq u$. Hence the following inequality is valid:
\begin{equation}
x_{uv,\theta} \leq \sum_{w \neq u,v} x_{vw,\theta'}
\end{equation}
More generally, consider a path which contains the sequence
$(u_1, v_1, \theta_1), \ldots, (u_k, v_k, \theta_k)$ such that
$v_1 = v_k$. In this case it makes sense to add the following
inequality:
\begin{equation}
x_{u_1v_1,\theta_1} \leq \sum_{j=1}^{k-1} \sum_{v \notin \{u_1, v_1, \ldots, v_j \}} x_{v_j v,\theta_{j+1}}
\end{equation}
In order to separate these $r$-cycle inequalities it is convenient
to consider a path-decomposition of the flow through the network
and to eliminate $r$-cycles from the individual paths.
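A sketch of the elimination step for a single path of the decomposition is given below (names are ours; \texttt{V} is the vertex set of $D$).
\begin{verbatim}
def find_cycle_inequality(path, V, s):
    """path: arcs (u, v, theta) of one path in the decomposition, in
    order. Returns (lhs_arc, rhs_arcs) for the first r-cycle found,
    else None; the cut reads x[lhs_arc] <= sum of x over rhs_arcs."""
    seen = {}
    for k, (_, v, _) in enumerate(path):
        if v == s:
            continue
        if v in seen:
            j0 = seen[v]            # index of the arc (u_1, v_1, theta_1)
            u1, v1, th1 = path[j0]
            rhs, forbidden = [], {u1, v1}
            for j in range(j0, k):  # plays the role of j = 1, ..., k-1
                vj, thj1 = path[j][1], path[j + 1][2]
                forbidden.add(vj)
                rhs += [(vj, w, thj1) for w in V if w not in forbidden]
            return (u1, v1, th1), rhs
        seen[v] = k
    return None
\end{verbatim}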
\paragraph{Unitary AFCs}
\label{paragraph:unitary_afc}
Unitary AFCs (admissible flow constraints) were introduced in
\cite{TimeDependentTSPPoly} for the STDTSP. In the context of the
TDTSP\xspace they can be explained as follows: Consider an arc $(u, v,
\theta)$ with $u, v \neq s$ carrying a nonzero flow. The flow enters
some set of vertices which has to be left again in order to reach the
source $s$. Specifically, let $X \subseteq V^{^{\top}}$ such that
\begin{enumerate}
\item
$X$ contains $v_{\theta + c_{uv}(\theta)}$.
\item
Every vertex $(v', \theta') \in X$ is reachable
from $(v, \theta + c_{uv}(\theta))$
using only arcs in the graph induced by $X$.
\item
The set $X$ contains no copies of the vertices $u$ and $s$, and no copy of $v$ other than $v_{\theta + c_{uv}(\theta)}$.
\end{enumerate}
In this case we can add the following inequality:
\begin{equation}
x_{uv, \theta} \leq \sum_{\substack{(u', v', \theta') \in \delta^{+}(X) \\ v' \neq u, v}} x_{u'v',\theta'}
\end{equation}
In order to separate these types of inequalities we consider for
a fixed arc $(u, v, \theta)$ all vertices which are
reachable from $(v, \theta + c_{uv}(\theta))$. We then solve a series
of min-cut problems with capacities according to the fractional solution.
If we find a cut with a value of less than one we add it to the LP.
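The following rough sketch illustrates one such min-cut computation with \texttt{networkx}; the node encoding, the treatment of the excluded vertex copies, and all names are our own reading of the construction, not taken from the paper.
\begin{verbatim}
import networkx as nx

def separate_afc(arc, x, c, s, eps=1e-6):
    """arc = (u, v, theta); x: {(a, b, th): fractional value};
    c[(a, b)][th] is the travel time. Returns a candidate set X if
    the corresponding AFC is violated, else None."""
    u, v, theta = arc
    src = (v, theta + c[u, v][theta])          # entry vertex of X
    G = nx.DiGraph()
    for (a, b, th), val in x.items():
        if val <= eps:
            continue
        tail, head = (a, th), (b, th + c[a, b][th])
        if tail[0] in (u, v, s) and tail != src:
            continue   # such copies can never belong to X
        if head[0] in (u, v):
            continue   # arcs into copies of u, v do not appear in the rhs
        if head[0] == s:
            head = "sink"                      # merge all copies of s
        G.add_edge(tail, head, capacity=val)
    if src not in G or "sink" not in G:
        return None
    cut_value, (X, _) = nx.minimum_cut(G, src, "sink")
    return X - {"sink"} if cut_value < x.get(arc, 0.0) - eps else None
\end{verbatim}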
\subsection{Speedup techniques}
The addition of cutting planes already significantly strengthens
formulation \eqref{eq:arc_based}. There are however several other
techniques which can be used to speed up the computation of the
optimal tour in a branch-cut-and-price framework:
\paragraph{Propagation}
\label{paragraph:propagation}
At any given step in the solution process we have a (local) dual bound
$\underline{\theta}$ given by the value of the
LP-relaxation (of the current node in the branch-and-bound tree) and
a primal bound $\overline{\theta}$ given by the currently best known
integral solution. Clearly, any arc $(v, w, \theta)$ with
$\theta > \overline{\theta}$ can be fixed to
zero, as can any arc $(v, s, \theta)$ with
\begin{equation}
\theta + c_{vs}(\theta) < \underline{\theta}.
\end{equation}
As these bounds become more accurate, more and more arcs can
be discarded. The relaxation can often be strengthened significantly
by employing this technique, since the LP-relaxation frequently
contains paths which send an amount of flow from $s_0$ back
to $s_{\theta}$ via very few vertices,
leading back into $s$ at a time lower than $\underline{\theta}$.
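A sketch of the resulting fixing step (our names; the bounds are the ones defined above):
\begin{verbatim}
def fixable_arcs(arcs, c, s, lower, upper):
    """lower/upper: dual and primal bounds; c[(v, w)][theta] is the
    travel time table. Returns the arcs that can be fixed to zero."""
    fixed = []
    for (v, w, theta) in arcs:
        if theta > upper:                                # departs too late
            fixed.append((v, w, theta))
        elif w == s and theta + c[v, s][theta] < lower:  # returns too early
            fixed.append((v, w, theta))
    return fixed
\end{verbatim}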
\paragraph{Compound branching}
Traditionally, a branch-and-bound approach would branch on individual
variables $x_{uv, \theta}$, leading to a highly unbalanced
branch-and-bound tree. We instead propose to branch on the combined
flow $(x_{uv})_{(u,v) \in A}$. We incorporate the incompatibilities with respect to
the underlying ATSP in order to increase the dual bounds in child
nodes. Specifically, whenever an arc $(u, w)$ is fixed to one during
the branching, we fix every incompatible arc $(u', v')$ to zero. We
incorporate the branching rule into the pricing loop by ignoring arcs
fixed to zero and incorporating the dual costs of arcs which have been
fixed to one.
\paragraph{Primal heuristics}
\label{paragraph:primal_heuristics}
In order to obtain improved primal solutions we use a simple
heuristic based on the current LP-solution
$(x^{*}_{uv, \theta})_{(u,v, \theta) \in A^{^{\top}}}$.
Specifically, we construct a path $P$ traversing
$D^{^{\top}}$ starting at $s_0$ by
appending arcs to vertices whose counterparts in $D$ are still
unexplored by $P$ until the path forms a tour in $D$. During the
construction of the tour we disregard arcs fixed to zero by the
compound branching rule introduced above. If there are multiple arcs
to choose from, we compute scores using the following metrics:
\begin{enumerate}
\item
We score arc $(u, v, \theta)$ by the inverse of
its travel time $c_{uv}(\theta)$.
\item
We evaluate $(u,v, \theta)$ according to the value of $x^{*}_{uv, \theta}$
using travel times to break ties.
\item
We measure $(u, v, \theta)$ by the combined value $x^{*}_{uv}$, using
a similar tie-breaking rule.
\end{enumerate}
Note that the iterative construction of paths in $D^{^{\top}}$ is computationally
inexpensive. Thus, to increase the chance of finding an improved tour,
we randomize the selection of arcs based on probabilities proportional
to the different score functions and perform several runs using
different random seeds.
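A sketch of the score-proportional randomized choice among candidate arcs (one of the three metrics above produces the (arc, score) pairs; names are ours):
\begin{verbatim}
import random

def pick_arc(candidates, rng=random):
    """candidates: nonempty list of (arc, score), nonnegative scores."""
    arcs = [a for a, _ in candidates]
    weights = [s for _, s in candidates]
    if sum(weights) <= 0:
        return rng.choice(arcs)
    return rng.choices(arcs, weights=weights, k=1)[0]
\end{verbatim}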
\subsection{A path-based formulation}
Recall that any feasible solution of the TDTSP\xspace problem
\eqref{eq:arc_based} can be decomposed into paths from $s_0$ to
$s_{\theta}$ for some $\theta \in \Theta$. With respect to $D$ these
paths correspond to cycles containing vertex $s$. We let $\mathcal{P}$ be
the set of paths, let $\alpha_{v,P} \coloneqq |\{ (v, w, \theta) \in P \}|$ count the departures of $P$ from $v$,
and reformulate the problem in terms of individual paths:
\begin{equation}
\label{eq:path_based}
\begin{aligned}
\min\ &\sum_{P \in \mathcal{P}} c_P x_P \\
&\sum_{P \in \mathcal{P}} \alpha_{v,P} \cdot x_P = 1
&&\text{for all } v \in V \\
&x_P \in \{0, 1\} && \text{for all } P \in \mathcal{P} \\
\end{aligned}
\end{equation}
Note that any solution of this IP consists of a single variable
$x_P$ set to $1$ and all others set to $0$, in which case $P$ must
correspond to a tour. Any fractional solution consists of at most
$n$ different paths which need not be tours in $D$.
The resulting system is small in terms of the number
of constraints at the expense of the number of variables. Thus,
a pricing approach is absolutely necessary in this case. Since arc-based
and path-based solutions are equivalent, all previously discussed techniques
can be easily adapted to the path-based formulation.
\section{Introduction}
\label{section:introduction}
The traveling salesman problem (TSP) is among the best studied
combinatorial optimization problems (see \cite{TSPBook,TSPVariations}
for summaries). Considerable effort has been put into polyhedral
analysis of the problem, development of primal heuristics, and
implementation of branch-and-bound based code. Several generalizations
of the problem have been considered as well, such as the TSP with
time windows \cite{ATSPTimeWindow,SolvingATSPTimeWindow}, or
the class of vehicle routing problems (VRPs) \cite{VRPBook}.
The classical asymmetric TSP is based on the assumption that the
travel time $c_{ij}$ for an arc $(i, j)$ is constant throughout the
traversal of the graph by the optimum tour. While this assumption is
justified when it comes to travel times based on geometric distances,
travel times tend to vary over time in real-world instances (such as
road networks).
Some effort has been made in order to generalize the TSP with respect
to time-dependent travel times. The authors
of~\cite{TimeDependentTSP,TDTSPTardiness} consider the problem of
minimizing the travel time of a tour where the travel time of an arc
$(i, j)$ depends on the position of $i$ in the tour. Thus, the travel
time of $(i, j)$ is a function $c_{ij}(k)$ ($k = 1, \ldots, n$). This
simplified time-dependent TSP (which we denote by STDTSP) has since
attracted some attention, specifically, the authors
of~\cite{TimeDependentTSPPoly} conduct a polyhedral study and perform
computational experiments. The STDTSP is solved on a graph which
consists of $n$ layers of vertices; a tour then corresponds to a
path containing exactly one representative of each vertex.
Note that the STDTSP is closely related to identical machine
scheduling, in particular $P || \sum w_j T_j$, which can
be solved in a similar fashion~\cite{ParallelMachines}.
We further generalize the concept of time-dependent travel times
to the case where the travel time of an arc $(u, v)$ is a
function $c_{uv} : \{0, \ldots, \theta^{\max}\} \to \mathds{N}$. As a result,
the corresponding instances tend to be much larger than in the
case of position-dependent travel times and particular care has
to be taken in order to provide exact solutions within a reasonable
time. It is however possible to generalize many results from the
STDTSP to the real-time-dependent TSP (TDTSP\xspace)
(see, e.g.,\xspace \refsec{section:complexity}).
\section{Preliminaries}
\label{section:preliminaries}
An instance $(D,c,\theta^{\max})$ of the TDTSP\xspace consists of a complete
directed graph $D=(V, A)$ with $n$ vertices ($V=\{1, \ldots, n\}$), a
time-horizon $\theta^{\max}$, and time-dependent travel times
$c_{a}: \Theta \to \mathds{N}$ for $a \in A$, where $\Theta \coloneqq \{0, \ldots,
\theta^{\max}\}$ is
a set of points in time. The vertex $s\coloneqq 1$ is defined as the source
vertex.
For each sequence of arcs $(a_1, \ldots, a_k)$ with $a_j = (u_j, v_j)$ and
$v_j = u_{j + 1}$ for $j = 1, \ldots, k - 1$, we can recursively define an arrival time
\begin{equation}
\ensuremath \theta^{\arr}(a_1, \ldots, a_k) \coloneqq
\begin{cases}
c_{u_1, v_1}(0), & \text{ if } k = 1, \\
\ensuremath \theta^{\arr}(a_1, \ldots, a_{k - 1}) + c_{u_k, v_k}(\ensuremath \theta^{\arr}(a_1, \ldots, a_{k -
1})), & \text{ else}.\\
\end{cases}
\end{equation}
The asymmetric TDTSP\xspace asks for a tour $T = (a_1, \ldots, a_n)$ which
minimizes the arrival time $\ensuremath \theta^{\arr}(a_1, \ldots, a_n)$.
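For illustration, the recursion unrolls into a simple loop (our data layout: \texttt{c[(u, v)]} is the list of travel times $c_{uv}(\theta)$ for $\theta = 0, \ldots, \theta^{\max}$):
\begin{verbatim}
def arrival_time(arcs, c):
    """arcs: (u_1, v_1), ..., (u_k, v_k) with v_j = u_{j+1}."""
    theta = 0
    for (u, v) in arcs:
        theta += c[u, v][theta]   # theta^arr after traversing (u, v)
    return theta
\end{verbatim}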
We will consider several special cases of travel time functions which
play an important role in time-dependent versions of combinatorial problems:
\begin{enumerate}
\item
Several well-known results (e.g.,\xspace \cite{DoubleTree,Christofides})
state that the symmetric version of
the TSP can be approximated in case of \emph{metric} cost coefficients,
i.e.,\xspace cost coefficients satisfying the triangle inequality.
The definition of the triangle inequality can be easily generalized
to the time-dependent case. Formally, a set of travel time functions
satisfies the \emph{time-dependent triangle inequality}
iff for each $u, v, w \in V$, $\theta \in \Theta$ with
$\theta + c_{uv}(\theta) \leq \theta^{\max}$, it holds that
%
\begin{equation}
\label{eq:time_dependent_triangle}
\theta + c_{uw}(\theta) \leq
\theta + c_{uv}(\theta) + c_{vw}(\theta + c_{uv}(\theta)).
\end{equation}
\item
Another property of time-dependent cost functions goes by the name
of FIFO (first-in-first-out). A function $f : \mathds{N} \to \mathds{N}$ satisfies the
FIFO-property iff
%
\begin{equation}
\theta + f(\theta) \leq \theta' + f(\theta') \quad \forall\,
\theta, \theta' \in \mathds{N},\; \theta \leq \theta'.
\end{equation}
%
The FIFO property implies that it is never advisable to wait at a
certain vertex to decrease the arrival time at a destination.
If the FIFO property is satisfied for each time-dependent cost function,
then shortest paths with respect to time-dependent costs can be computed
efficiently using a variant of Dijkstra's
algorithm~\cite{TimeDependentDijkstra,TimeDependentRoutePlanning};
a minimal sketch is given after this list.
\end{enumerate}
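As referenced above, a minimal sketch of the FIFO-based Dijkstra variant (our data layout as before; we assume the time horizon is never exceeded):
\begin{verbatim}
import heapq

def td_dijkstra(V, c, source, t0=0):
    """Earliest arrival times from source, departing at time t0;
    under FIFO, settling the smallest label is permanent."""
    dist = {v: float("inf") for v in V}
    dist[source] = t0
    heap = [(t0, source)]
    while heap:
        theta, u = heapq.heappop(heap)
        if theta > dist[u]:
            continue                       # stale heap entry
        for v in V:
            if v == u:
                continue
            arr = theta + c[u, v][theta]   # FIFO: waiting cannot improve
            if arr < dist[v]:
                dist[v] = arr
                heapq.heappush(heap, (arr, v))
    return dist
\end{verbatim}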
\section{Introduction}
The importance of permanent magnets to many modern technologies has led to increased interest in developing magnets that contain lower amounts of supply-critical materials \cite{Gutfleisch2011}.
Progress in processing, characterization and simulation of rare earth permanent magnets has helped continually improve their performance.
Reversal depends on the microstructure, with the grain boundary phase and surface defects being of particular importance.
Numerical micromagnetics is successfully being used to understand the reversal mechanisms that determine important extrinsic properties such as the coercive field $H_{\mathrm{c}}$ \cite{bance2014influence}.
Understanding the temperature dependence of coercivity is of importance in the design of permanent magnets for applications at high temperatures,
e.g. in the motors of electric and hybrid vehicles where the operating temperature is typically around $T=450\mathrm{\,K}$.
In leading order, the temperature dependence of the intrinsic magnetic properties causes the reduction of the coercive field with temperature, $T$ \cite{Skomski2013}.
This is expressed by the well known relation \cite{Kronmueller1988}
\begin{equation}
H_{\mathrm c}(T) = \alpha \frac{2K(T)}{\mu_0 M_{\mathrm s}(T)} - N_{\mathrm{eff}} M_{\mathrm s}(T)
\label{eq:kroni}
\end{equation}
which relates the coercive field, $H_{\mathrm{c}}$, to the anisotropy constant, $K(T)$, and the magnetization,
$M_{\mathrm s}(T)$. In (\ref{eq:kroni}) $\mu_0$ is the permeability of vacuum. While equation (\ref{eq:kroni}) is widely used to classify permanent magnets based on the microstructural parameters $\alpha$ and $N_{\mathrm{eff}}$, it also expresses the main contribution to the temperature dependence of $H_{\mathrm c}(T)$. In particular equation (\ref{eq:kroni}) relates the coercive field to the nucleation field, ${2K(T)}/({\mu_0 M_{\mathrm s}(T)})$, of a small magnetic sphere \cite{Brown1959}.
The role of thermal fluctuations becomes evident through viscosity experiments. Under the action of a constant applied field the magnetization decays with time \cite{Wohlfarth1984}. The change of magnetization within the time $t$ is given by
\begin{equation}
\label{eq:viscosity}
\Delta M(t) = - S \ln(t).
\end{equation}
The viscosity, $S$, is attributed to irreversible changes of the magnetization across the
energy barrier. The logarithmic dependence results from the distribution of energy barriers in the magnet.
Under the assumption that coercivity is related to the expansion of an already reversed nucleus, the
coercive field of a permanent magnet can be written as \cite{Givord1988,barthem2002analysis}
\begin{equation}
\label{eq:givord}
H_{\mathrm c} = \alpha' \frac{\gamma}{\mu_0 M_{\mathrm s}v^{1/3}} - N_{\mathrm{eff}} M_{\mathrm s}
- \frac{25 k_{\mathrm B}T}{\mu_0 M_{\mathrm s}v}.
\end{equation}
Here $k_{\mathrm B}$ is the Boltzmann constant and $\alpha'$ replaces $\alpha$. Similarly to equation (\ref{eq:kroni}), the intrinsic parameters and the derived quantities depend on temperature. In order to improve the readability, we have dropped the $(T)$ behind the symbols. $\gamma = \gamma(T)$ is the energy per unit area of a Bloch wall, $\gamma = 4\sqrt{AK}$, with the exchange constant $A = A(T)$. The activation volume, $v = v(T)$, may be associated with the volume of the initial nucleus. The last term in equation (\ref{eq:givord}) is proportional to the fluctuation field, which drives the system over an energy barrier of $\Delta E = 25 k_{\mathrm B}T$ within the characteristic measurement time of the coercive field. Applying the Arrhenius-N\'{e}el law, the relaxation time over an energy barrier $\Delta E$ is
\begin{equation}
\tau =\frac{1}{f_{0}}\mathrm{exp}\Bigg(\frac{\Delta E}{k_{\mathrm{B}}T}\Bigg)
\label{equation:arrenhius}
\end{equation}
where $f_{0}$ is the attempt frequency, which limits the probability for reversal.
The first term of equation (\ref{eq:givord}) can be rewritten as $\alpha''{2K(T)}/{(\mu_0 M_{\mathrm s})}$ when the
activation volume is assumed to be proportional to the Bloch wall width $\delta_{\mathrm B} = \sqrt{A/K}$.
The parameters $\alpha$, $\alpha'$, and $N_{\mathrm{eff}}$ can be derived by fitting the measured temperature dependent coercive field to equations (\ref{eq:kroni}) and (\ref{eq:givord}) \cite{kou1994coercivity}.
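Since equation (\ref{eq:kroni}) is linear in $\alpha$ and $N_{\mathrm{eff}}$, this fit reduces to linear least squares; a minimal sketch (SI units, \texttt{numpy} assumed, array names ours):
\begin{verbatim}
import numpy as np

mu0 = 4e-7 * np.pi

def fit_kronmueller(K, Ms, Hc):
    """K, Ms, Hc: arrays of measured values over the temperatures."""
    A = np.column_stack((2.0 * K / (mu0 * Ms), -Ms))
    (alpha, Neff), *_ = np.linalg.lstsq(A, Hc, rcond=None)
    return alpha, Neff
\end{verbatim}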
In particulate and thin film recording the particle or grain size is small. The total magnetic volume, $V$, is low. In zero field the energy barrier for magnetization reversal is given by the smaller of the two values $\Delta E_{0} = KV$ or $\Delta E_{0} = 4F\sqrt{AK}$, where $F$ is the minimum cross section of a columnar grain. At an opposing field $H$ the energy barrier \cite{sharrock1990time} decays with the field according to
\begin{equation}
\Delta E = \Delta E_{0}\big(1-H/H_{\mathrm{0}}\big)^{n}.
\label{equation:sharrock}
\end{equation}
Here $\Delta E_{0}$ is the energy barrier in zero field and $H_0$ is the field where the barrier is zero.
The exponent $n$ has been discussed in detail in the literature, covering the dependence of $n$ on factors including external field strength and field angle \cite{harrell_orientation_2001, suess_reliability_2007}.
A value close to $n=2$ is usually used in situations corresponding to coherent reversal, while $n=1.5$ corresponds to nucleation and expansion.
When using micromagnetics, as in this article, the reversal mode is known directly from the calculations, so it is not necessary to know the value of $n$ to determine the type of reversal mechanism.
Equations (\ref{equation:arrenhius}) and (\ref{equation:sharrock}) lead to a time and temperature dependent
coercive field \cite{sharrock1990time}
\begin{equation}
H_{\mathrm c}(t,T) = H_0\left(1-\left(\frac{k_{\mathrm B}T}{\Delta E_{0}}\ln(f_0t) \right)^{1/n} \right).
\label{equation:sharrock2}
\end{equation}
Equation (\ref{equation:sharrock2}) gives the field that causes switching of half of the particles or grains within time $t$.
Because of the much larger grain size and the higher magneto-crystalline anisotropy of modern permanent magnets as compared to magnetic recording materials, it was widely believed that thermal fluctuations play only a minor role during magnetization reversal in permanent magnets. In permanent magnets the magnetization reversal is initiated within a small volume: either the reversed nucleus or the volume associated with a domain wall depinning process. Similarly to magnetization reversal in small particles, thermal activation helps to initiate reversal within this characteristic volume. Advances in computational methods and computing power have made it possible to compute the effects of temperature in permanent magnets, taking into account both the temperature dependent intrinsic properties and the thermal fluctuations over finite energy barriers.
Using methods from chemical physics \cite{henkelman2000climbing}, we compute the energy barrier as a function of the field and thus can estimate the influence of thermal fluctuations on the coercive field. Details of the computations will be presented in section \ref{sec:method} of this paper. We will demonstrate the influence of thermal fluctuations on coercivity for Pr$_2$Fe$_{14}$B magnets in section \ref{sec:cuberesults}, comparing the angle dependence of coercivity at 4.2 K, 175 K and 300 K.
Traditional Nd\(_{2}\)Fe\(_{14}\)B permanent magnets are doped with dysprosium to improve their performance.
The higher uniaxial anisotropy $K$ and lower magnetization $M_{\mathrm{s}}$ of the dysprosium increases the anisotropy field $H_{\mathrm A}=2K/(\mu_0M_{\mathrm{s}})$, which results in a higher $H_{c}$ but, of course, a lower overall magnetization.
Importantly, the market price of dysprosium and other heavy rare earth elements peaked drastically in 2010,
prompting a frantic search for new permanent magnets using cheaper materials. In order to produce magnets with high energy product, using fewer rare earth elements, a number of routes are currently being followed.
In addition to the grain boundary phase which separates the grains magnetically, modern magnets use the concept of magnetic surface hardening \cite{Ghandehari1987,Nakamura2005} for improved coercivity. The local anisotropy field near the surfaces of each grain is increased by partially substituting Nd with Dy in Nd\(_{2}\)Fe\(_{14}\)B based magnets. In section \ref{sec:dodecresults} we compute the coercivity
of Nd$_2$Fe$_{14}$B grains with a thin (Dy,Nd)$_2$Fe$_{14}$B shell.
\section{Method}
\label{sec:method}
In this work we follow a computational micromagnetics approach to treat the temperature dependence of the coercive field. The classical nucleation field theory \cite{Brown1959} starts from the uniform magnetic state and determines the critical field at which this state becomes unstable. Under the action of an opposing field the system is in a metastable minimum. An energy barrier separates this local minimum (magnetization and field antiparallel) from the global minimum (magnetization and field parallel). With increasing opposing field the energy barrier decreases. At the nucleation field the local minimum vanishes; the system is at a saddle point and may reverse towards a global minimum.
In non-ellipsoidal particles the demagnetizing field is non-uniform. In turn the remanent magnetic state and magnetic states under an opposing external field are inhomogeneous. However, a similar stability criterion as for the ellipsoid may be applied to define the nucleation field \cite{schabes1988magnetization,schabes1991micromagnetic,schmidts1994algorithm}. Standard numerical micromagnetic methods implicitly apply this criterion for the calculation of the switching field. The temperature dependence of the coercive field can be computed when the temperature dependent intrinsic magnetic properties, $M_{\mathrm s}(T)$, $K(T)$, and $A(T)$ are used as input for the computations. Similarly to an experiment these computed temperature dependent values for the coercive field can be fitted to equation (\ref{eq:kroni}) \cite{sepehri2014micromagnetic}.
In addition to the effect of the temperature dependent intrinsic magnetic properties on the coercivity, the influence of the thermal fluctuations on the temperature dependence of the coercive field can be addressed using numerical micromagnetics. By means of methods widely applied in chemical physics for the computation of reaction rates \cite{henkelman2000climbing}, it is possible to compute the height of the energy barrier separating the local minimum associated with the magnetic state before the reversal from the minimum that corresponds to the reversed state \cite{dittrich2002path}. Reversal occurs when the opposing field reduces the energy barrier to a height that can be overcome by thermal energy \cite{dittrich2005thermally}. This field is the coercive field and is a function of temperature. This method has been applied to compute the finite temperature switching field of permalloy elements and the thickness dependence of the coercive field in granular recording media \cite{suess2011calculation}.
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{fig1}
\caption{Schematic representation of the method for calculating thermally activated coercivity using numerical micromagnetics: (a) equilibrium states are calculated during a hysteresis simulation, using LLG micromagnetics or an energy minimization method; (b) energy barriers along a path in configuration space are calculated using the elastic band method or string method. $p$ is the distance along this path; (c) the thermally activated coercivity is estimated as the field required to reduce the energy barrier height to $25\,\mathrm{ k_{B}}T$. }
\label{fig:fig1}
\end{figure*}
The simulation process is illustrated in Fig. \ref{fig:fig1}. We first compute the demagnetization curve of the magnet using a standard micromagnetic solver (Fig. \ref{fig:fig1}a). We can either compute minimum energy states \cite{exl2014} or solve the Landau-Lifshitz Gilbert (LLG) equation \cite{suess2002time} for different applied fields. The coercive field obtained for the simulation of the demagnetization curve corresponds to $H_0$ in equation (\ref{equation:sharrock2}). It is the field at which the energy barrier separating the states before and after irreversible switching is zero. Then we want to compute the energy barrier between a state with $|H_i| < |H_0|$, which we denote by $\mathbf M_{\mathrm {initial}}$, and the reversed state, which is called $\mathbf M_{\mathrm {final}}$. The transient magnetic states from the computation of the demagnetization curve serve as the initial path for the computation of the minimum energy path connecting $\mathbf M_{\mathrm {initial}}$ and $\mathbf M_{\mathrm {final}}$ (Fig. \ref{fig:fig1}b). A path is optimal if, for any point along the path, the gradient of the energy is parallel to the path. In other words: the component of the energy gradient normal to the path is zero. This path is called the minimum energy path, which means that the energy is stationary for any degree of freedom perpendicular to the path. Let us denote the magnetic state of the point with the maximum energy in the path by $\mathbf M^*$. This is the saddle point. The difference between the energy of the saddle point and the initial state is the energy barrier:
\begin{equation}
\label{eq:barrier}
\Delta E(H_i) = E\left(\mathbf M^*\right) - E\left(\mathbf M_{\mathrm {initial}}(H_i)\right).
\end{equation}
We apply the climbing image nudged elastic band method \cite{henkelman2000climbing,dittrich2002path} or the modified string method \cite{E2007simplified} to compute the minimum energy path. Both methods take a series of magnetization configurations as input and minimize the energy path, formed from a series of connected nodes, in the multi-dimensional configuration space according to the local gradient at each node. They differ only in their algorithms. The nudged elastic band method employs a spring force between adjacent nodes in order that they do not become too separated. The string method renormalizes the distance between adjacent nodes after each iteration. We repeat the computation of the energy barrier (\ref{eq:barrier}) for different applied fields and fit the results to equation (\ref{equation:sharrock}). The critical field value at which the energy
barrier becomes 25~$k_{\mathrm B}T$ is the temperature dependent
coercive field, $H_{\mathrm c}(T)$, see Fig. \ref{fig:fig1}c and Fig. 4 in the paper by Sharrock \cite{sharrock1990time}.
The value of 25~$k_{\mathrm B}T$ is the energy barrier that follows from (\ref{equation:arrenhius}) for a typical measurement time of 1 second \cite{givord1987magnetic}. Hereby an attempt frequency of $f_0 = 10^{11} \mathrm{Hz}$ was assumed. The attempt frequency may depend on the nature of the domain nucleation or depinning process. Thus for a more accurate numerical treatment of $H_{\mathrm c}(T)$ a method for the computation of the attempt frequency such as forward flux sampling \cite{vogler2013simulating} may be applied.
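Note that $\ln(f_0 t) = \ln(10^{11} \cdot 1) \approx 25.3$, which is the origin of the $25\,k_{\mathrm B}T$ criterion. A minimal sketch of the fitting step (\texttt{scipy} assumed; names ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

kB = 1.380649e-23  # J/K

def thermal_coercive_field(H, dE, H0, T):
    """H, dE: arrays of applied fields and computed barrier heights
    (SI units); H0 from the zero-fluctuation hysteresis computation."""
    model = lambda h, dE0, n: dE0 * (1.0 - h / H0) ** n
    (dE0, n), _ = curve_fit(model, H, dE, p0=(dE[0], 1.5))
    # field at which the barrier equals 25 k_B T
    return H0 * (1.0 - (25.0 * kB * T / dE0) ** (1.0 / n))
\end{verbatim}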
\begin{table}[t]
\caption{Material properties of the phases used in the simulations. }
\label{table:materials}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Name & $T$(K) & $K$(MJ/m\textsuperscript{3}) & $\mu_0M_{s}$(T) & $A$(pJ/m)\\
\hline
\PrFeB & 4.2 & 23.5\cite{Hirosawa1986} & 1.85\cite{Sagawa1985} & 11.3 \\
\PrFeB & 175& 12.39\cite{Hirosawa1986} & 1.78\cite{Sagawa1985} & 10.6 \\
\PrFeB & 300 & 5.40\cite{Hirosawa1986} & 1.56\cite{Sagawa1985} & 8.12 \\
\Dy & 300 & 5.17\cite{Sagawa1987} & 1.151\cite{Sagawa1987} & 8.7\cite{Hawton1943}\\
\Dy & 450 & 2.70\cite{Sagawa1987} & 0.990\cite{Sagawa1987} & 6.44\cite{Hawton1943}\\
\Nd & 300 & 4.30\cite{Hock1988} & 1.613\cite{Hock1988} & 7.7\cite{Durst1986}\\
\Nd & 450 & 2.09\cite{Hock1988} & 1.285\cite{Hock1988} & 4.89\cite{Durst1986}\\
\hline
\end{tabular}
\end{table}
We apply a finite element method for the computation of the demagnetization curve and the energy barriers.
Rave and co-workers~\cite{rave1998corners} suggest that the mesh size should be smaller than the theoretical exchange length, which is defined analytically as $L=\sqrt{A/(\mu_0M_{\mathrm s}^2)}$ for a ferromagnet.
To satisfy this requirement while restraining the finite element mesh to a reasonable number of elements we use an adaptive mesh,
where the fine mesh is constrained to the regions of domain wall nucleation. The intrinsic material constants used for the simulations are given in Table \ref{table:materials}.
The solver uses a hybrid finite element / boundary element method to calculate the external demagnetizing field and, unless otherwise stated, neighbouring phases in the models are fully exchange coupled according to the local material parameters.
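As a quick check of the required resolution, the exchange length for the parameters of Table \ref{table:materials} can be evaluated directly (SI units; the script is ours):
\begin{verbatim}
import numpy as np

mu0 = 4e-7 * np.pi

def exchange_length(A, mu0_Ms):
    """A in J/m; the magnetization is passed as mu0*Ms in tesla."""
    Ms = mu0_Ms / mu0                   # A/m
    return np.sqrt(A / (mu0 * Ms**2))   # metres

print(exchange_length(7.7e-12, 1.613))  # Nd2Fe14B at 300 K: ~1.9 nm
\end{verbatim}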
\section{Results \& Discussion}
\label{sec:results}
\subsection{Surface defects in PrFeB grains}
\label{sec:cuberesults}
A single \PrFeB grain is modelled as a cube with 100 nm edge length and a soft surface defect with 0.8 nm thickness and uniaxial anisotropy constant $K=0$.
Confining the defect to one corner allows a reduction in model size and thus simulation cost, since the nucleation region must be finely discretized, without changing the resulting critical fields.
The initial magnetization is in the $+z$ direction, parallel to the c-axis.
An opposing field is applied with an angle $\theta _{H}$ from the $-z$ direction in the $z-x$ plane.
\begin{figure}[t]
\includegraphics[width=1.0\columnwidth]{fig2}
\caption{Schematic of magnetization reversal. Sketches of the magnetic state in the metastable minimum before reversal (1), at the saddle point (2), and after passing the saddle point (3).}
\label{fig:reversal}
\end{figure}
Fig. \ref{fig:reversal} shows the thermally activated magnetization reversal process schematically.
Reversal begins with the rotation of the magnetization in the soft defect.
As the field increases, the magnetization within the nucleus rotates towards the applied field direction. The Zeeman energy decreases and makes the presence of a domain wall energetically favourable. A domain wall like state forms between the nucleus and rest of the magnet. Increasing the field further reduces the energy barrier to zero, at which point the nucleus expands and the domain wall rapidly passes through the remaining grain volume.
The energy barriers corresponding to a range of applied field strengths are calculated by minimizing this path using the nudged elastic band method. Fitting the field-dependent barrier height $\Delta E(H_i)$ to equation (\ref{equation:sharrock}) the temperature dependent coercivity is estimated. Fig. \ref{fig:pr} shows the angle dependent coercive field computed for different temperatures. The dashed lines are computed with an LLG solver, which correspond to the temperature dependent coercivity, $H_0(T)$, taking into account the temperature dependent intrinsic parameters but neglecting possible fluctuations over finite barriers. The solid lines give the temperature dependent coercivity, $H_{\mathrm c}(T)$, including both the temperature dependent intrinsic material properties and thermal hopping over finite energy barriers. With increasing temperature the difference between $H_0(T)$ and $H_{\mathrm c}(T)$ increases. For a field angle of zero the relative change of the coercivity by thermal fluctuations
\begin{equation}
\Delta H_{\mathrm {fl}}(T) = \frac{H_0(T)-H_{\mathrm c}(T)}{H_0(T)}
\end{equation}
is 0.01, 0.11, and 0.18 for a temperature of 4.2~K, 175~K, and 300~K, respectively.
It is interesting to note that with increasing temperature the minimum in the coercive field as a function of the field angle becomes less pronounced.
\begin{figure}[t]
\includegraphics[width=0.8\columnwidth]{fig3}
\caption{Coercive field as a function of applied field angle for a cubic \PrFeB grain calculated without thermal activation (dashed lines) and including thermal activation (solid lines).
The grain includes an anisotropy-reduced surface defect layer of 0.8 nm thickness and has an edge length of 100 nm. The inset images show the grain model used with the surface defect of 0.8 nm thickness and reduced uniaxial anisotropy. }
\label{fig:pr}
\end{figure}
\subsection{Grain boundary diffused magnetic grains}
\label{sec:dodecresults}
We simulate a Dy grain boundary diffused magnetic grain with a \Nd core, a hard 4 nm \Dy shell and a 2 nm soft surface defect.
The soft defect has the properties of \Dy except the uniaxial magnetocrystalline anisotropy constant is reduced to $K=0$.
The outer diameter of the dodecahedral grain is constant at 50 nm.
Intrinsic material properties are given in Table \ref{table:materials}. The thermally activated reversal process at $T=450$~K is illustrated in Figure \ref{fig:dy}.
Reversal begins by rotation of the magnetic moments within the soft defect before nucleation of a reversal domain at the corner.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{fig4}
\caption{Simulation results for calculation of thermally-activated coercive field in a single grain of a Dy grain boundary diffused core-shell permanent magnet at 450~K.
(a) Schematic representation of the model (not to scale).
(b) The minimum energy path during reversal under an applied field of 2.8 T, calculated using the string method.
The saddle point of highest energy corresponds with the so-called activation volume. The energy barrier $\Delta E$ is the difference in the total magnetic Gibbs free energy between the initial configuration and the saddle configuration.
The inset images show the nucleation of the reversal domain, which occurs at the outer surface, within the soft defect layer. Black arrows indicate the local magnetization direction, which initially opposes the applied field $H$.
(c) $\Delta E$ is calculated for a range of applied field strengths, allowing an estimation of the field required to reduce the energy barrier to a height of $25\,\mathrm{ k_{B}}T$.}
\label{fig:dy}
\end{figure}
The insets in Figure \ref{fig:dy}b show the grain and the formation of the reversed nucleus in the soft outer defect using a three-dimensional contour plot.
The solid line plot gives the total energy of the system along the minimum energy path calculated using the string method with an applied field of 2.8 T.
At the saddle point, the point with maximum energy along the minimum energy path, a small nucleus is formed at the corner of the dodecahedron.
Figure \ref{fig:dy}c gives the energy barrier as function of the applied field, $\Delta E(H)$, along with the corresponding fit to equation (\ref{equation:sharrock}), where $n=1.37$.
The critical field where the barrier crosses the line $\Delta E(H) = 25 k_{\mathrm B}T$ is the coercive field computed taking into account thermal fluctuations.
Without thermal fluctuations the barrier would have to vanish ($\Delta E(H) = 0$) for magnetization reversal to occur.
At a temperature of $T=450$ K thermal fluctuations reduce the coercive field from its static value $\mu_0 H_{\mathrm 0} = 3.6$~T to $\mu_0 H_{\mathrm c} = 2.78$~T.
This gives a relative change of the coercivity by thermal fluctuations of
$\Delta H_{\mathrm {fl}}(450 {\mathrm K}) = 0.23$.
\begin{table}[t]
\caption{Effect of a \Dy shell on the coercive field of a \Nd grain. The first column gives the thickness of a defect layer with $K=0$. The second column gives the thickness of the \Dy shell. $H_0$ is the coercive field taking into account the temperature dependence of the intrinsic materials parameters only. $H_{\mathrm c}$ is the temperature dependent coercive field including thermal fluctuations. The last column gives the relative change in the coercive field owing to thermal fluctuations.}
\label{table:dy}
\begin{tabular}{c c c c c c}
\hline\hline
defect(nm) & shell (nm) & $T$(K) & $\mu_0 H_0$(T) & $\mu_0 H_{\mathrm c}$(T) & $\Delta H_{\mathrm {fl}}$ \\
\hline
0 & 0 & 300 & 5.89 & 4.97 & 0.16 \\
0 & 0 & 450 & 3.58 & 2.62 & 0.27 \\
2 & 0 & 300 & 3.84 & 3.23 & 0.16 \\
2 & 0 & 450 & 2.44 & 1.80 & 0.26 \\
2 & 4 & 300 & 5.81 & 4.97 & 0.14 \\
2 & 4 & 450 & 3.60 & 2.78 & 0.23 \\
\hline
\end{tabular}
\end{table}
In order to understand the influence of the \Dy shell on the thermal stability of the coercive field, we calculate $H_0(T)$ and $H_{\mathrm c}(T)$ for different configurations: (i) a perfect \Nd grain, (ii) a \Nd grain with a 2 nm thick soft magnetic surface defect ($K=0$), and (iii) the above discussed \Nd core / \Dy shell grain. Table \ref{table:dy} summarizes the results.
At $T=450$~K the perfect \Nd grain shows a coercivity of $\mu_0H_{\mathrm c} = 2.62$~T.
Adding a surface defect the coercivity reduces to $\mu_0H_{\mathrm c} = 1.8$~T.
The \Dy recovers coercivity and compensates the loss in $H_{\mathrm c}$ caused by the defect: The coercivity of the core/shell grain with a surface defect is $\mu_0H_{\mathrm c} = 2.78$~T.
This value is higher than the coercivity of the perfect \Nd without a defect.
While there is no guarantee that such a diffusion shell fabricated experimentally will be continuous and of constant thickness, the key message is that a perfect diffusion shell of only 4 nm suffices to reach the target coercivity. Simulation thus allows us to predict how thick the diffusion layer needs to be to reach the required coercivity.
It is important to validate our simulations by comparing to experimental results.
Sepehri-Amin et al. \cite{Sepehri-Amin2013} found experimentally a coercivity of 1.5 T for ultrafine-grained anisotropic Nd–Fe–B magnets, where the grains are platelet-like in shape.
The reduction in coercivity due to grain shape was recently investigated by Bance et al. \cite{bance_grain-size_2014}, where it was shown that, for a 50 nm grain diameter, an aligned cubic grain has a reduction factor of 0.85 in coercivity with respect to a dodecahedral grain shape.
Additionally, grain easy axis misalignment reduces coercivity in real magnets. Sepehri-Amin et al. suggest a grain misalignment of around 15 degrees in their sample.
Using a Stoner-Wohlfarth model this gives an approximate further reduction in coercivity of 0.61 \cite{Stoner1948, bance2014influence}.
Combining the two reduction factors from shape and misalignment to adjust our calculated value for $H_{\mathrm c}$ of the dodecahedral grain with a defect but no hard shell at 300 K we go from 3.23 T to 1.68 T, which agrees well with the experimental value.
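The two reduction factors can be reproduced directly; the angular factor is the Stoner-Wohlfarth switching field at $\psi = 15^{\circ}$ (the script and rounding are ours):
\begin{verbatim}
import numpy as np

def sw_factor(psi_deg):
    """Stoner-Wohlfarth astroid: reduced switching field at angle psi."""
    p = np.radians(psi_deg)
    return (np.cos(p)**(2/3) + np.sin(p)**(2/3))**(-1.5)

print(sw_factor(15.0))                # ~0.61
print(3.23 * 0.85 * sw_factor(15.0))  # ~1.69 T, matching the value above
\end{verbatim}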
\section{Conclusions}
We presented a micromagnetic scheme for the computation of the temperature dependence of coercivity in permanent magnets. In addition to the change of the coercive field through the change of the temperature dependent anisotropy field, thermal fluctuations cause a further reduction of $H_{\mathrm c}$. This relative change of the coercivity owing to thermal fluctuations is around 15 percent at room temperature and 25 percent at 450~K.
The temperature dependence of the coercive field was calculated for Dy diffused permanent magnets. The results show that a \Dy shell of 4 nm only is sufficient to compensate the loss in coercivity caused by soft magnetic defects.
\section*{Acknowledgement}
This paper is based on results obtained from the future pioneering program ``Development of magnetic material technology for high-efficiency motors''
commissioned by the New Energy and Industrial Technology Development Organization (NEDO). The authors would like to acknowledge funding support from the Replacement and Original Magnet Engineering Options (ROMEO) Seventh Framework Program (FP7).
\section{Introduction\label{sect:introduction}}
In this paper we present an abstract framework for asymptotic analysis of
convergence based on the notions of eventual families of sets that we define.
A family $\mathcal{F}$ of subsets of a set $X$ is called here an
\textquotedblleft eventual family\textquotedblright\ if $S\in\mathcal{F}$ and
$S^{\prime}\supseteq S\,$\ implies $S^{\prime}\in\mathcal{F},$ i.e., if it is
upper hereditary with respect to inclusion. If $S\in\mathcal{F}$ and
$S^{\prime}\subseteq S\,$\ implies $S^{\prime}\in\mathcal{F},$ i.e., if it is
lower hereditary with respect to inclusion, then we call it a
\textquotedblleft co-eventual family\textquotedblright\textbf{. }We define
accumulation points of eventual families in a Hausdorff topological space and
define the \textquotedblleft image family\textquotedblright\ $\mathcal{G}$ of
an eventual family $\mathcal{F}$ under a given mapping $f,$ called
\textquotedblleft the push of $\mathcal{F}$ by $f$\textquotedblright\ via
$\mathcal{G}=\mathrm{Push}(f,\mathcal{F}):=\{S\subseteq Y\,\mid f^{-1}(S)\in\mathcal{F}\}.$
Focusing on eventual families in the set $\mathbb{N}$ of the integers enables
us to talk about sequences of points, particularly, points that are generated
by repeated application of an operator $T:X\rightarrow X.$ We then define the
notion of an \textquotedblleft\textbf{$\mathcal{E}$}-limit of a sequence
$(A_{n})_{n\in\mathbb{N}}$ of subsets\ of a set $X$\textquotedblright\ as the
set of all $x\in X$ such that the set of $n$ with $x\in A_{n}$ belongs to
\textbf{$\mathcal{E},$} i.e., $\mathbf{\mathcal{E}}$-$\lim_{n\rightarrow
\infty}A_{n}:=\{x\in X\mid\{n\,|\,x\in A_{n}\}\in\mathbf{\mathcal{E}}\}$ where
$\mathcal{E}$ is an eventual family in $\mathbb{N}.$ The relationship of this
notion with the classical notion of limit of a sequence of sets is studied.
In the sequel we expand our work to the notion of a \textquotedblleft
multiset\textquotedblright\ which is a modification of the concept of a set
that allows for multiple instances of its elements. The number of instances
given for each element is called the multiplicity of that element in the
multiset. With multisets in hand we define and develop \textquotedblleft
multifamilies\textquotedblright\ which are either \textquotedblleft
increasing\textquotedblright\ or \textquotedblleft
decreasing\textquotedblright, connecting with the earlier notions via the
statement that a family of subsets of $X$ is an eventual (resp.\ co-eventual)
family if the multifamily that defines it is increasing (resp.\ decreasing).
The abstract structure created here is motivated by, and feeds back to, our
look at the convergence analysis of an iterative process for asymptotically
finding a common fixed point of a family of operators. This particular case
serves as an example of the possible use of our theory. The work presented
here adds a new angle to the theory of set convergence, see, e.g., the books
by Rockafellar and R.J.-B. Wets \cite[Chapter 4]{rock-book} and by Burachik
and Iusem \cite{burachik-book}.
\section{Eventual Families and Their Use in Limiting Processes\label{sect:Ev}}
\subsection{Eventual Families\label{subsec:eve-fams}}
We introduce the following notion of eventual families of subsets.
\begin{definition}
\label{def:event-co-event}Let $X$ be a set and let $\mathcal{F}$ be a family
of subsets of $X.$ The family $\mathcal{F}$ is called an \textquotedblleft
\textbf{eventual family\textquotedblright} if it is \textit{upper hereditary
with respect to inclusion}, i.e.,\ if
\begin{equation}
S\in\mathcal{F},\,S^{\prime}\supseteq S\,\Rightarrow S^{\prime}\in\mathcal{F}.
\end{equation}
The family $\mathcal{F}$ is called a \textquotedblleft\textbf{co-eventual
family\textquotedblright\ }if it is \textit{lower hereditary with respect to
inclusion}, i.e.,\ if
\begin{equation}
S\in\mathcal{F},\,S^{\prime}\subseteq S\Rightarrow S^{\prime}\in\mathcal{F}.
\end{equation}
\end{definition}
We mention in passing that Borg \cite{borg-hereditary} uses the term
\textquotedblleft hereditary family\textquotedblright, in his work in the area
of combinatorics, for exactly what we call here \textquotedblleft co-eventual
family\textquotedblright. Several simple observations regarding such families
can be made.
\begin{proposition}
\label{lem:simple-observ}(i) A family $\mathcal{F}$ of subsets of $X$ is
co-eventual iff its complement, i.e., the family of subsets of $X$ which are
not in $\mathcal{F}$, is eventual.
(ii) The empty family and the family of all subsets of $X$ are each both
eventual and co-eventual, and they are the only families with this property.
\end{proposition}
\begin{proof}
(i) This follows from the definitions. (ii) That the empty family and the
family of all subsets of $X$ are each both eventual and co-eventual is
trivially true. We show that if $\mathcal{F}$ is eventual and co-eventual and
is nonempty then it must contain all subsets of $X.$ Let $S\in\mathcal{F}$ and
distinguish between two cases. If $S=\emptyset$ then\thinspace\thinspace
$\mathcal{F}$ must contain all subsets of $X$ because $\mathcal{F}$ is
eventual. If $S\neq\emptyset$ let $x\in S$, then, since $\mathcal{F}$ is
co-eventual it must contain the singleton $\{x\}$. Consequently, the set
$\{x,y\},$ for any $y,$ is also in $\mathcal{F}$ and so $\{y\}\in\mathcal{F}$,
thus, all subsets of $X$ are contained in $\mathcal{F}$. Alternatively, if we
look at $S\in\mathcal{F}$, then for any subset $S^{\prime}$ of $X$,
$\mathcal{F}$ contains $S\cup S^{\prime}$ since $\mathcal{F}$ is eventual.
Then since $\mathcal{F}$ is co-eventual, it must contain $S^{\prime}$, leading
to the conclusion that it contains all subsets.
\end{proof}
\begin{remark}
\label{rem:filter}An eventual family $\mathcal{F}$ need not contain the
intersection of two of its members. If it does so for every two of its members
then it is a \textit{filter}.
\end{remark}
Similar to the notion used in \cite{danzig-folkman-shapiro} and
\cite{lent-censor} in the finite-dimensional space setting, we make here the
next definition.
\begin{definition}
\label{def:star-set}Given a family $\mathcal{F}$ of subsets of a set $X$, the
\textquotedblleft\textbf{star set associated with} $\mathcal{F}$\textquotedblright,
denoted by\thinspace\thinspace$\mathrm{Star}(\mathcal{F}),$
is the subset of $X$ that consists of \textit{all }$x\in X$
\textit{such that the singletons }$\{x\}\in\mathcal{F}$, namely
\begin{equation}
\mathrm{Star}(\mathcal{F}):=\{x\in X\mid\{x\}\in\mathcal{F}\}.
\end{equation}
\end{definition}
\subsection{Accumulation Points as Limits of Eventual
Families\label{subsect:LimEv}}
Suppose now that $X$ is a \textit{Hausdorff} topological space.
\begin{definition}
\label{def:limit-pt-and-set}Let $\mathcal{F}$ be an eventual family of subsets
of $X$. A point $x\in X$ is called an \textquotedblleft\textbf{accumulation
(or limit) point} of $\mathcal{F}$\textquotedblright\ if every (open)
neighborhood \footnote{Since, by definition, a neighborhood always contains an
\textit{open} neighborhood, considering all neighborhoods or just the open
ones does not make a difference here.} of $x$ belongs to $\mathcal{F}$. The
set of all accumulation points of $\mathcal{F}$ is called the
\textquotedblleft\textbf{limit set} of $\mathcal{F}$\textquotedblright.
\end{definition}
\begin{proposition}
\label{prop:lim-set-closed}The limit set of an eventual family $\mathcal{F}$
is always closed.
\end{proposition}
\begin{proof}
We show that the complement of the limit set, i.e., the set of all
non-accumulation points, is open. The point $y$ is a non-accumulation point
iff it has an open neighborhood which does not belong to $\mathcal{F}$,
i.e.,\ when it is a member of some open set not in $\mathcal{F}$. Hence the
complement of the limit set is the union of all open sets not in
$\mathcal{F}$, and by definition, in a topological space, the union of any family of open
sets is open.
\end{proof}
We turn our attention now to sequences in $X$, i.e.,\ maps $\mathbb{N}\rightarrow X,$
where $\mathbb{N}$ denotes the positive integers.
\begin{definition}
\label{def:push}Given are a family $\mathcal{F}$ of subsets of $X$\ and a
mapping between sets $f:X\rightarrow Y$. The family $\mathcal{G}$ of subsets
of $Y$ whose inverse image sets $f^{-1}(S)$ belong to $\mathcal{F}$ will be
denoted by $\mathcal{G=}\mathrm{Push}(f,\mathcal{F)}$ and called the
\textquotedblleft\textbf{push} of $\mathcal{F}$ by $f$\textquotedblright$,$
namely
\begin{equation}
\mathcal{G}=\mathrm{Push}(f,\mathcal{F}):=\{S\subseteq Y\,\mid f^{-1}(S)\in\mathcal{F}\}.
\end{equation}
\end{definition}
Combining Definitions \ref{def:limit-pt-and-set} and \ref{def:push} the
following remark is obtained.
\begin{remark}
\label{claim:push}Let $\mathcal{E}$ be an eventual family of subsets
\textit{of }$\mathbb{N}$ and let $f:\mathbb{N}\rightarrow X$ be defined by
some given sequence $(x_{n})_{n\in\mathbb{N}}$ in $X$. The accumulation points
and the limit set of $(x_{n})_{n\in\mathbb{N}}$ with respect to $\mathcal{E}$
are those defined with respect to the push of $\mathcal{E}$ by $f$.
\end{remark}
The next examples emerge by using two different eventual families in
$\mathbb{N}$. The same `machinery' yields both `cases' via changing the
eventual family $\mathcal{E}$ in $\mathbb{N}$.
\begin{Exmps}
\label{exmps}
\begin{enumerate}
\item Let $\mathcal{E}$ be the family of complements of finite sets in
$\mathbb{N}$. Then accumulation points (i.e., limits with respect to
$\mathcal{E}$) are the usual limits, and if there is a limit point then it is
unique. This is the case, as one clearly sees, in a Hausdorff space $X$
whenever $\mathcal{E}$ is a filter, as here $\mathcal{E}$ clearly is.
\item Let $\mathcal{E}$ be the family of infinite subsets in $\mathbb{N}$.
Then being an accumulation point means \textit{being some accumulation point
of the sequence} in the usual sense, which in general, need not be unique.
Indeed, here $\mathcal{E}$ is not a filter.
\end{enumerate}
\end{Exmps}
\subsection{Operators and Seeking Fixed Points\label{subsect:OpFxPt}}
Continuing to consider a Hausdorff topological space $X$, call any continuous
self-mapping $T:X\rightarrow X$ \textquotedblleft\textbf{an operator}\textquotedblright.
\begin{definition}
\label{def:seq-follow-T}Let $X$ be a Hausdorff topological space,
$T:X\rightarrow X$ an operator, $(x_{n})_{n\in\mathbb{N}}$ a sequence in $X,$
and $\mathcal{E}$ an eventual family of subsets of $\mathbb{N}$. We say that
\textquotedblleft\textbf{the sequence }$(x_{n})_{n\in\mathbb{N}}$\textbf{
follows }$T$\textbf{ with respect to $\mathcal{E}$\textquotedblright} if, for
every $S\in\mathcal{E}$, there are integers $p,q$ in $S$ so that
$x_{p}=T(x_{q}).$
\end{definition}
\begin{theorem}
\label{tm} In a Hausdorff topological space $X$, if a sequence
$(x_{n})_{n\in\mathbb{N}}$ follows a continuous operator $T$ with respect to some
eventual family $\mathcal{E}$ in $\mathbb{N}$, and if $y$ is an accumulation
point of the sequence with respect to $\mathcal{E}$ then $y$ is a fixed point
of $T$.
\end{theorem}
\begin{proof}
Assume to the contrary that\ $T(y)\neq y$. Then, since the space is Hausdorff,
$T(y)$ and $y$ have disjoint open neighborhoods $U_{y}$ and $U_{T(y)}$.
Continuity of $T$ guarantees that there is an open neighborhood $V_{y}$ of $y$
so that $T(V_{y})\subset U_{T(y)}$. Hence
\begin{equation}
U_{y}\cap T(V_{y})=\emptyset, \label{eq:disjoint}
\end{equation}
meaning that $T(z)\neq z$ for $z\in U_{y}\cap V_{y}$. But $U_{y}\cap V_{y}$ is
also an open neighborhood of $y$, and $y$ is an accumulation point of the
sequence with respect to $\mathcal{E}$, hence, the set
\begin{equation}
S:=\{n\in\mathbb{N}\mid\,x_{n}\in U_{y}\cap V_{y}\}
\end{equation}
is in $\mathcal{E}$. Since the sequence follows $T$ with respect to
$\mathcal{E}$, there must be $p$ and $q$ in $S$ so that $x_{p}=T(x_{q})$. This
point must belong to both $U_{y}$ and $T(V_{y})$, which contradicts
(\ref{eq:disjoint}).
\end{proof}
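The following is a minimal numerical illustration of Theorem \ref{tm} (not
part of the formal development); the operator $T=\cos$ on $X=\mathbb{R}$ and
the iteration below are illustrative assumptions.
\begin{verbatim}
import math

# The iterates x_{n+1} = T(x_n) follow T with respect to the cofinite
# eventual family (any cofinite S contains some q and p = q+1 with
# x_p = T(x_q)), so by the theorem their accumulation point must be a
# fixed point of T.
T = math.cos
x = 1.0
for n in range(100):
    x = T(x)
print(x, abs(T(x) - x))  # ~0.739085..., residual ~0
\end{verbatim}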
\subsection{Finitely-Insensitive Eventual Families in
$\mathbb{N}$\label{subsec:FinIns}}
When considering eventual families in $\mathbb{N}$ it is often desirable to
assume that they are \textit{finitely-insensitive}, as we define next. All our
examples have this property.
\begin{definition}
\label{def:finite-intensive}A family $\mathcal{E}$ of subsets of $\mathbb{N}$
is called a \textquotedblleft\textbf{finitely-insensitive
family\textquotedblright} if, for any $S\in\mathcal{E}$, finitely changing
$S$ (which means here adding and/or deleting a finite number of its members)
results in a set $S^{\prime}\in\mathcal{E}$.
\end{definition}
\subsection{Limits of Sequences of Sets\label{subsect:LimSeqSt}}
In \cite{lent-censor}, \cite{danzig-folkman-shapiro} and \cite{salinetti-wets}
the notions of \textit{upper limit} and \textit{lower limit} of a sequence of
subsets $(A_{n})_{n\in\mathbb{N}}$ of some $X$ are considered, in the
framework of the Euclidean space, a locally compact metric space, or a normed
linear space of finite dimension, respectively. When these upper limit
$\limsup_{n\rightarrow\infty}A_{n}$ and lower limit $\liminf_{n\rightarrow
\infty}A_{n}$ coincide one says that the sequence of sets has their common
value as a \textit{limit}, denoted by $\lim_{n\rightarrow\infty}A_{n}.$ Thus,
a function defined on sets, or taking values in sets, may be said to be
\textit{continuous} when it respects limits of sequences.
Here we define the notion of an \textquotedblleft\textbf{$\mathcal{E}$}-limit
of a sequence $(A_{n})_{n\in\mathbb{N}}$ of subsets of a set
$X$\textquotedblright\ and state its relationship with the classical notion of
limit mentioned above.
\begin{definition}
\label{def:E-limit}Let $X$ be a set, let $(A_{n})_{n\in\mathbb{N}}$ be a
sequence of subsets of $X,$ let $\mathcal{E}$ be an eventual family in
$\mathbb{N}$ and assume that $\mathcal{E}$ is finitely-insensitive. The
\textquotedblleft\textbf{$\mathcal{E}$}-limit of the sequence
$(A_{n})_{n\in\mathbb{N}}$\textquotedblright, denoted by
$\mathbf{\mathcal{E}}$-$\lim_{n\rightarrow\infty}A_{n}$, is the set of all $x\in X$ such that the
set of $n$ with $x\in A_{n}$ belongs to $\mathcal{E}$, namely
\begin{equation}
\mathbf{\mathcal{E}}\text{-}\lim_{n\rightarrow\infty}A_{n}:=\{x\in
X\mid\{n\,|\,x\in A_{n}\}\in\mathbf{\mathcal{E}}\}.
\end{equation}
\end{definition}
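For instance, take $X=\mathbb{N}$ and let $A_{n}$ be the set of even numbers
when $n$ is even and the set of odd numbers when $n$ is odd. For
$\mathcal{E}$ the eventual family of all \textit{infinite} subsets of
$\mathbb{N}$ the $\mathcal{E}$-limit is all of $\mathbb{N}$, while for
$\mathcal{E}$ the family of all cofinite subsets it is empty, in accordance
with $\limsup_{n\rightarrow\infty}A_{n}=\mathbb{N}$ and
$\liminf_{n\rightarrow\infty}A_{n}=\emptyset$ (so the classical limit does
not exist here).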
Strict logic tells us that the $\mathcal{E}$-limit is well-defined also for an
empty $\mathcal{E}$ or if $\mathcal{E}$ contains all subsets. Indeed, if
$\mathcal{E}=\emptyset$ then $\mathbf{\mathcal{E}}$-$\lim_{n\rightarrow\infty
}A_{n}=\emptyset,$ and if $\mathcal{E}$ is the family of all subsets then
$\mathbf{\mathcal{E}}$-$\lim_{n\rightarrow\infty}A_{n}=X$.
\begin{theorem}
\label{thm:limit-E-limit}Let $X$ be a set, let $(A_{n})_{n\in\mathbb{N}}$ be a
sequence of subsets of $X,$ and let $\mathcal{E}$ be an eventual family in
$\mathbb{N}$. If $\mathcal{E}$ is a finitely-insensitive family which is not
trivial, i.e.,\ is neither empty nor contains all subsets, and if the
(classical) $\lim_{n\rightarrow\infty}A_{n}$ exists, then
\begin{equation}
\mathbf{\mathcal{E}}\text{-}\lim_{n\rightarrow\infty}A_{n}=\lim_{n\rightarrow
\infty}A_{n}.
\end{equation}
\end{theorem}
\begin{proof}
Note that, for a given sequence of sets $(A_{n})_{n\in\mathbb{N}}$, the
`larger' the eventual family $\mathcal{E}$ is, the `larger' is its
$\mathcal{E}$-limit.
Denote by $\mathcal{G}$ the family of all \textit{infinite} subsets of
$\mathbb{N}$ and by $\mathcal{H}$ the family of all subsets of $\mathbb{N}$
with \textit{finite complement}\footnote{The families $\mathcal{G}$
and $\mathcal{H}$ were denoted by $\mathcal{N}_{\infty}^{\#}$ and
$\mathcal{N}_{\infty}$, respectively, in \cite[page 108]{rock-book}.}. Then
clearly (cf.\ Examples \ref{exmps}) the upper limit (resp.\ lower limit) of
$A_{n}$ is obtained as $\mathcal{E}\text{-}\lim_{n\rightarrow\infty}A_{n}$ for
$\mathcal{E}:=\mathcal{G}$\thinspace(resp.\ $\mathcal{E}:=\mathcal{H}$).
Now, the family $\mathcal{G}$ is the largest finitely-insensitive eventual
family which is not the family of all subsets. This is so because if
$\mathcal{G}$ contained a finite set then, by finite insensitivity, it would
have to contain the empty set and, being eventual, all subsets.
And the family $\mathcal{H}$ is the smallest nonempty finitely-insensitive
eventual family. This is so because if such a family has a member $S$ then,
being eventual, it must contain the whole $\mathbb{N}$, hence, by finite
insensitivity, all subsets with finite complement.
Consequently, for a sequence $(A_{n})_{n\in\mathbb{N}}$ for which
$\lim_{n\rightarrow\infty}A_{n}$ exists, the two extreme cases
$\mathcal{E}=\mathcal{H}$ and $\mathcal{E}=\mathcal{G}$ yield the same
$\mathcal{E}$-limit; since $\mathcal{H}\subseteq\mathcal{E}\subseteq
\mathcal{G}$ for any nontrivial finitely-insensitive eventual family
$\mathcal{E}$, and the $\mathcal{E}$-limit is monotone in $\mathcal{E}$, that
common value is also the $\mathcal{E}$-limit for every such $\mathcal{E}$.
\end{proof}
\subsection{Topological vs.\ Purely Set-Theoretical\label{subsec:topo-vs-set}}
Note that in contrast to Subsections \ref{subsect:LimEv} and
\ref{subsect:OpFxPt}, the notions in Subsection \ref{subsect:LimSeqSt} are
purely set-theoretic and do not involve any topology in $X$. Yet, one can
distill the topological aspect via the next definition.
\begin{definition}
\label{def:distill}Let $X$ be a Hausdorff topological space and let
$\mathcal{F}$ be an eventual family in $X$. The \textquotedblleft
\textbf{closure} of an eventual family $\mathcal{F}$ in $X$\textquotedblright,
denoted by $\mathrm{cl}\mathcal{F}$, consists of \textit{all subsets
}$S\subseteq X$ \textit{such that all the open subsets }$U\subseteq X$
\textit{which contain }$S$ \textit{belong to $\mathcal{F}$.}
\end{definition}
Clearly, $\mathcal{F}$ is always a subfamily of $\mathrm{cl}\mathcal{F}$, and
the set of limit points of an eventual family $\mathcal{F}$, in a Hausdorff
topological space $X,$ is just $\mathrm{Star}(\mathrm{cl}\mathcal{F}),$ given
in Definition \ref{def:star-set}.
\section{Multisets and Multifamilies\label{sect:mult-fam}}
A \textbf{multiset} (sometimes termed \textbf{bag}, or \textbf{mset}) is a
modification of the concept of a set that allows for multiple instances for
each of its elements. The number of instances given for each element is called
the multiplicity of that element in the multiset. The multiplicities of
elements are any number in $\{0,1,\ldots,\infty\}$, see the corner-stone
review of Blizard \cite{blizard-multiset-1989}.
\begin{definition}
(i) A \textbf{multiset} $M$ in a set $X$ is represented by a function
$\varphi_{M}:X\rightarrow\{0,1,\ldots,\infty\}$ such that for any $x\in X,$
$\varphi_{M}(x)$ is the multiplicity of $x$ in $M$. We refer to this function
as the \textquotedblleft\textbf{representing function of the
multiset}\textquotedblright. If $\varphi_{M}(x)=0$ then the multiplicity $0$
means `not belonging to the set'. A subset $S\subseteq X$ is a multiset
represented by $\iota_{S}$, the \textquotedblleft\textit{indicator
function}\textquotedblright\ of $S$, i.e.,
\begin{equation}
\iota_{S}(x):=\left\{
\begin{array}
[c]{cc}
1, & \mathrm{if}\text{ \ }x\in S,\\
0, & \mathrm{if}\text{ \ }x\notin S.
\end{array}
\right.
\end{equation}
(ii) A \textbf{multifamily} $\mathcal{M}$ on a set $X$ is a multiset in the
powerset $2^{X}$ of $X$ (i.e., all the subsets of $X$). Its representing
function, denoted by $\varphi_{\mathcal{M}}:2^{X}\rightarrow\{0,1,\ldots
,\infty\}$, is such that for any $S\subseteq X,$ $\varphi_{\mathcal{M}}(S)$ is
the multiplicity of $S$ in $\mathcal{M}$. A family $\mathcal{F}$ of subsets of
$X$ is a multifamily on $X$ represented by $\iota_{\mathcal{F}}$, the
\textquotedblleft\textit{indicator function}\textquotedblright\ of
$\mathcal{F}$, i.e.,
\begin{equation}
\iota_{\mathcal{F}}(S):=\left\{
\begin{array}
[c]{cc}
1, & \mathrm{if}\text{ \ }S\in\mathcal{F},\\
0, & \mathrm{if}\text{ \ }S\notin\mathcal{F}.
\end{array}
\right.
\end{equation}
(iii) A multifamily $\mathcal{M}$ on a set $X$ with a representing function
$\varphi_{\mathcal{M}}$ is called \textbf{increasing} if
\begin{equation}
S,S^{\prime}\subseteq X,\,S\subseteq S^{\prime}\Rightarrow\varphi
_{\mathcal{M}}(S)\leq\varphi_{\mathcal{M}}(S^{\prime}),
\end{equation}
and called \textbf{decreasing} if
\begin{equation}
S,S^{\prime}\subseteq X,\,S\subseteq S^{\prime}\Rightarrow\varphi
_{\mathcal{M}}(S)\geq\varphi_{\mathcal{M}}(S^{\prime}).
\end{equation}
\end{definition}
Clearly, a \textit{family} of subsets of $X$ is an \textit{eventual
(resp.\ co-eventual) family if and only if the multifamily that represents it
is increasing (resp.\ decreasing)}. The next example shows why these notions
may be useful.
\begin{example}
\label{ex:Gap} Considering the set $\mathbb{N}$, for a finite or infinite
subset $S\subseteq\mathbb{N}$ write $S$ as
\begin{equation}
S=\{n_{1}^{S},n_{2}^{S},\ldots\},
\end{equation}
where $n_{\ell}^{S}\in\mathbb{N}$ for all $\ell$, and the sequence $(n_{\ell
}^{S})_{\ell=1}^{L}$ (where $L$ is either finite or $\infty$) is strictly
increasing, i.e., $n_{1}^{S}<n_{2}^{S}<\ldots$. We consider the \textbf{gaps}
between consecutive elements in $S$ as the sequence of differences
\begin{equation}
n_{2}^{S}-n_{1}^{S}-1,n_{3}^{S}-n_{2}^{S}-1,\ldots,
\end{equation}
where, if $S$ is finite, we add $\infty$ at the end. Defining
\begin{equation}
\mathrm{Gap}(S):=\limsup_{k}(n_{k+1}^{S}-n_{k}^{S}-1)
\end{equation}
makes $\mathrm{Gap}$ a multifamily on $\mathbb{N}$, taking values in
$\{0,1,\ldots,\infty\}$ and, in particular, taking the value $\infty$ for
(among others) any finite $S$.
\end{example}
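For instance, for the even numbers $S=\{2,4,6,\ldots\}$ every difference
$n_{k+1}^{S}-n_{k}^{S}-1$ equals $1$, so $\mathrm{Gap}(S)=1$, whereas for the
squares $S=\{1,4,9,16,\ldots\}$ the differences $2k$ grow without bound, so
$\mathrm{Gap}(S)=\infty$.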
Note that if $\mathrm{Gap}(S)$ is finite then there must be an infinite number
of differences $(n_{k+1}^{S}-n_{k}^{S}-1)$ equal to $\mathrm{Gap}(S)$, while
only finitely many differences exceed it: by the definition of $\limsup$, and
because we are dealing with integer-valued items, a finite $\limsup$ must
actually be attained an infinite number of times.
Observe further that the larger the set $S$ is, the smaller (or equal) is
$\mathrm{Gap}(S)$. Thus, $\mathrm{Gap}$ is a decreasing multifamily.
Define the complement-multifamily for some multifamily $\mathcal{G}$ on the
subsets of a set $X$ by
\begin{equation}
\mathcal{G}^{c}(S):=\mathcal{G}(S^{c}),\quad\forall S\subseteq X,
\label{eq:script-G-complement}
\end{equation}
where $S^{c}$ is the complement of $S$ in $X$.
We will focus on $\mathrm{coGap}:=\mathrm{Gap}^{c}$. For any $S\subseteq
\mathbb{N}$, the integers lying strictly between consecutive elements of
$S^{c}$ are precisely the maximal runs of consecutive integers contained in
$S$; denote by $c_{S}$ the $\limsup$ of the lengths of these runs, so that
$c_{S}=\mathrm{Gap}(S^{c})$. If $S$ contains arbitrarily long such `intervals'
of consecutive elements then $c_{S}=\infty$. With this in mind,
$\mathrm{coGap}=\mathrm{Gap}^{c}$ is an increasing multifamily, given by
$S\mapsto c_{S}$.
\subsection{Extensions to Multifamilies\label{subsect:transferring}}
We now extend some of the notions of Subsection \ref{subsec:eve-fams} to multifamilies.
\begin{definition}
\label{def:star-set copy(1)}Given a multifamily $\mathcal{M}$ on the subsets
of a set $X$ whose representing function is $\varphi_{\mathcal{M}}$, the
\textquotedblleft\textbf{star set associated with}
$\mathcal{M}$\textquotedblright, denoted by\thinspace\thinspace
$\mathrm{Star}(\mathcal{M})$, is the multiset $M$ on $X$ whose representing
function $\varphi_{M}$ is related to $\varphi_{\mathcal{M}}$ in the following
manner:
\begin{equation}
\mathrm{Star}(\mathcal{M}):=M,\text{ such that }\varphi_{M}(x)=\varphi
_{\mathcal{M}}(\{x\}).
\end{equation}
\end{definition}
\begin{definition}
\label{def:push copy(1)}Given a multifamily $\mathcal{M}$ on the subsets of
$X$ whose representing function is $\varphi_{\mathcal{M}}$ and a mapping
between sets $f:X\rightarrow Y$. The multifamily $\mathcal{G}$ on the subsets
of $Y$, denoted by $\mathcal{G}=\mathrm{Push}(f,\mathcal{M})$, with
representing function $\varphi_{\mathcal{G}}$, will be called the
\textquotedblleft\textbf{push} of $\mathcal{M}$ by $f$\textquotedblright\ if
its representing function is related to the representing function of
$\mathcal{M}$ in the following manner:
\begin{equation}
\mathcal{G}=\mathrm{Push}(f,\mathcal{M})\text{ such that }\varphi
_{\mathcal{G}}(S)=\varphi_{\mathcal{M}}(f^{-1}(S)).
\end{equation}
\end{definition}
\begin{definition}
\label{def:finite-intensive copy(1)}A multifamily $\mathcal{M}$ on the subsets
of $\mathbb{N}$ whose representing function is $\varphi_{\mathcal{M}}$ is
called a \textquotedblleft\textbf{finitely-insensitive
multifamily\textquotedblright} if finitely changing any $S\subseteq\mathbb{N}$,
i.e.,\ adding and/or deleting a finite number of its members, does not change
its multiplicity, i.e., results in a set $S^{\prime}$ such that
$\varphi_{\mathcal{M}}(S)=\varphi_{\mathcal{M}}(S^{\prime})$.
\end{definition}
\begin{definition}
\label{def:distill copy(1)}Let $X$ be a Hausdorff topological space and let
$\mathcal{M}$ be an \textit{increasing} multifamily whose representing
function is $\varphi_{\mathcal{M}}$. The \textquotedblleft\textbf{closure of
an increasing multifamily }$\mathcal{M}$\textbf{ in }$X$\textquotedblright,
denoted by $\mathrm{cl}\mathcal{M}$, is defined to be the (increasing)
multifamily such that for any $S\subseteq X$ it holds that
\begin{equation}
\varphi_{\mathrm{cl}\mathcal{M}}(S)=\min\{\varphi_{\mathcal{M}}(U)\mid
\text{all \textit{open subsets} }U\subseteq X\text{ such that }S\subseteq
U\}.
\end{equation}
\end{definition}
\begin{definition}
\label{def:it:lim}Let $X$ be a Hausdorff topological space and let
$\mathcal{M}$ be an \textit{increasing} multifamily whose representing
function is $\varphi_{\mathcal{M}}$. The multiset $M:=\mathrm{Star}
(\mathrm{cl}\mathcal{M})$ will be called the \textquotedblleft
\textbf{multiset-limit} of $\mathcal{M}$\textquotedblright\ and denoted by
$\lim\mathcal{M}$. Its representing function is, for any $x\in X$,
\begin{equation}
\varphi_{M}(x)=\min\{\varphi_{\mathcal{M}}(U)\mid\text{all\ open
subsets}\mathit{\ }U\subseteq X\text{ such that }x\in U\}.
\end{equation}
\end{definition}
Given a multifamily $\mathcal{M}$ on the subsets of $\mathbb{N}$ whose
representing function is $\varphi_{\mathcal{M}}$, the `limiting notions' with
respect to $\mathcal{M}$ for a sequence $(x_{n})_{n\in\mathbb{N}}$ are defined
as those with respect to $\mathrm{Push}(f,\mathcal{M})$, the push of
$\mathcal{M}$ to $X$ by the function $f:\mathbb{N}\rightarrow X$ which
represents the sequence $(x_{n})_{n\in\mathbb{N}}$. In particular, for an
increasing multifamily $\mathcal{M}$ on the subsets of $\mathbb{N}$ whose
representing function is $\varphi_{\mathcal{M}}$, the multiset limit of
$\mathrm{Push}(f,\mathcal{M})$ will be called the \textquotedblleft
\textbf{multiset-limit} of $(x_{n})$\textquotedblright, denoted by
$\lim_{\mathcal{M}}x_{n}.$
Denoting the representing function of this multiset $\mathcal{G}$ on $X$ by
$\varphi_{\mathcal{G}},$ we can describe it as follows. Given a point $x\in
X,$ consider the following subsets of $\mathbb{N}$
\begin{equation}
S(U):=\{n\in\mathbb{N}\mid x_{n}\in U\},\text{ for open neighborhoods }U\text{
of }x.
\end{equation}
Then
\begin{equation}
\varphi_{\mathcal{G}}(x)=\min\{\varphi_{\mathcal{M}}(S(U))\mid\text{all\ open
subsets}\mathit{\ }U\subseteq X\text{ such that }x\in U\}.
\end{equation}
\begin{remark}
\label{rerere} Note, that for a set $S$ not to belong to $\mathrm{coGap}$,
i.e.,\ to have $\mathrm{coGap}(S)=0,$ just means that $S$ is finite - as a
`family, ignoring multiplicities' and $\mathrm{coGap}$ is just the family of
\textit{infinite} sets of natural\ numbers.
Thus, when we turn to the \textit{limit} of a sequence $(x_{n})_{n\in
\mathbb{N}}$ in a Hausdorff Space $X$ (a notion which is obviously dependent
on the topology. In a Banach or Hilbert space we will have strong and weak
limits etc.); and we take the $\mathrm{coGap}$-limit (it will be a multiset on
$X$, to which for some $x$ in $X$ to belong (at least) $n$ times, one must
have, for every neighborhood $U$ of $x$, that the $x_{n}$ stay in $U$ for some
$n$ consecutive places as far as we go); then the $\mathrm{coGap}$-limit of
$(x_{n})_{n\in\mathbb{N}}$, `forgetting the multiplicities' is just the set of
accumulation points of $(x_{n})_{n\in\mathbb{N}}$ (which is, recalling the
examples in Section \ref{sect:Ev}, just its $\mathcal{G}$-limit for
$\mathcal{G}$ the eventual family of the infinite subsets of $\mathbb{N}$).
Note that, in general, if the sequence has a limit $x^{\ast}$ (in the good old
sense) then its $\mathrm{coGap}$-limit `includes $x^{\ast}$ infinitely many
times and does not include any other point'. This sort of indicates to what
extent the $\mathrm{coGap}$-limit may be viewed as `more relaxed' than the
usual limit.
The inverse implication does not always hold (it holds however in a compact
space) as the following counterexample shows. In $\mathbb{R}$ (the reals),
define a sequence b
\begin{equation}
x_{2n}:=n\text{ \ and \ }x_{2n-1}:=-1
\end{equation}
then its $\mathrm{coGap}$-limit contains $-1$ (with multiplicity one, since
the odd places are never consecutive) and contains no other point, but $-1$
is not a limit of the sequence.
\end{remark}
\section{Convergence of Algorithms for Solving the Common Fixed-Point Problem}
Given a finite family of self-mapping operators $\left\{ T_{i}\right\}
_{i=1}^{m}$ acting on the Hilbert space $H$ with $\operatorname*{Fix}T_{i}
\neq\emptyset,$ $i=1,2,\ldots,m,$ where $\operatorname*{Fix}T_{i}:=\{x\in
H\mid T_{i}(x)=x\}$ is the fixed point set of $T_{i},$ the \textquotedblleft
\textbf{common fixed point problem}\textquotedblright\ (CFPP) is to find a
point
\begin{equation}
x^{\ast}\in\cap_{i=1}^{m}\operatorname*{Fix}T_{i}. \label{Common fixed pp}
\end{equation}
This problem serves as a framework for handling many important aspects of
solving systems of nonlinear equations, feasibility-seeking of systems of
constraint sets and optimization problems, see, e.g., the excellent books by
Berinde \cite{Berinde-book} and by Cegielski \cite{Cegielski-book} and
references therein. In particular, iterative algorithms for the CFPP form an
ever growing part of the field. There are many algorithms around for solving
CFPPs, see, e.g., Zaslavski's book \cite{zaslavski-book}. To be specific, we
use the \textquotedblleft Almost Cyclic Sequential Algorithm (ACSA) for the
common fixed-point problem\textquotedblright, which is Algorithm 5 in Censor
and Segal \cite{censor-segal-2009}, which is, in turn, a special case of an
algorithm in the paper by Combettes \cite[Algorithm 6.1]{Combettes01}. The
abstract study of limits of eventual families developed here can serve as a
unifying convergence analysis of many iterative processes. It grew out of our
look at the almost cyclic sequential algorithm and, therefore, we describe
this algorithm and its relation with the present work next.
\subsection{The Almost Cyclic Sequential Algorithm (ACSA)}
Let $\left\langle x,y\right\rangle $ and $\left\Vert x\right\Vert $ be the
Euclidean inner product and norm, respectively, in the $J$-dimensional
Euclidean space $R^{J}$. Given $x,y\in R^{J}$ we denote the half-space
\begin{equation}
H(x,y):=\left\{ u\in R^{J}\mid\left\langle u-y,x-y\right\rangle
\leq0\right\} .
\end{equation}
\begin{definition}
\label{Def of directed ops}An operator $T:R^{J}\rightarrow R^{J}$ is called
\textquotedblleft\textbf{a cutter\textquotedblright} if
\begin{equation}
\operatorname*{Fix}T\subseteq H(x,T(x)),\text{ for all }x\in R^{J},
\end{equation}
or, equivalently
\begin{equation}
\text{if }z\in\operatorname*{Fix}T\text{ then }\left\langle T\left( x\right)
-x,T\left( x\right) -z\right\rangle \leq0,\text{ for all }x\in R^{J}.
\label{def directed2}
\end{equation}
\end{definition}
The class of cutters was called $\Im$-class by Bauschke and Combettes
\cite{BC01} who first defined this notion and showed (see \cite[Proposition
2.4]{BC01}) (i) that the set of all fixed points of a cutter $T$ with nonempty
$\operatorname*{Fix}T$ is closed and convex because
\begin{equation}
\operatorname*{Fix}T=\cap_{x\in R^{J}}H\left( x,T\left( x\right) \right) ,
\end{equation}
and (ii) that the following holds:
\begin{equation}
\text{If }T\in\Im\text{ then }Id+\lambda(T-Id)\in\Im,\text{ for all }
\lambda\in\lbrack0,1], \label{BCresult}
\end{equation}
where $Id$ is the identity operator. This class of operators includes, among
others, the resolvents of maximal monotone operators, the firmly
nonexpansive operators, namely, operators $N:R^{J}\rightarrow R^{J}$ that
fulfill
\begin{equation}
\left\Vert N(x)-N(y)\right\Vert ^{2}\leq\left\langle
N(x)-N(y),x-y\right\rangle ,\text{ for all }x,y\in R^{J},
\end{equation}
the orthogonal projections and the subgradient projectors. Note that every
cutter belongs to the class of operators $\mathcal{F}^{0},$ defined by Crombez
\cite[p. 161]{Crombez05}. The term \textquotedblleft cutter\textquotedblright
\ was proposed in \cite{ceg-cen-2011}, see \cite[pp. 53--54]{Cegielski-book}
for other terms that are used for these operators.
The following definition of a demiclosed operator, which originated in Browder
\cite{Browder} (see, e.g., \cite{Combettes01}), will be required.
\begin{definition}
An operator $T:R^{J}\rightarrow R^{J}$ is said to be \textquotedblleft
\textbf{demiclosed} \textbf{at }$y\in R^{J}$\textquotedblright\ if for every
$\overline{x}\in R^{J}$ and every sequence $(x_{n})_{n\in\mathbb{N}}$ in
$R^{J},$ such that, $\lim_{n\rightarrow\infty}x_{n}=\overline{x}$ and
$\lim_{n\rightarrow\infty}T(x_{n})=y,$ we have $T(\overline{x})=y.$
\end{definition}
For instance, the orthogonal projection onto a closed convex set is everywhere
a demiclosed operator, due to its continuity.
\begin{remark}
\cite{Combettes01} If $T:R^{J}\rightarrow R^{J}$ is nonexpansive, then $T-Id$
is demiclosed on $R^{J}.$
\end{remark}
In sequential algorithms for solving the common fixed point problem the order
by which the operators are chosen for the iterations is given by a
\textquotedblleft\textbf{control sequence\textquotedblright}\textit{ }of
indices\textit{\ }$(i(n))_{n\in\mathbb{N}},$ see, e.g., \cite[Definition
5.1.1]{Censor book}.
\begin{definition}
(i) \textbf{Cyclic control.} A control sequence is \textquotedblleft
\textbf{cyclic\textquotedblright} if $i(n)=n\operatorname*{mod}m+1,$ where $m$
is the number of operators in the common fixed point problem.
(ii) \textbf{Almost cyclic control. }$(i(n))_{n\in\mathbb{N}}$ is
\textquotedblleft\textbf{almost cyclic on }$\{1,2,\ldots,m\}
\textquotedblright\ if $1\leq i(n)\leq m$ for all $n\geq0,$ and there exists
an integer $c\geq m$ (called the \textquotedblleft\textbf{almost cyclicality
constant\textquotedblright}), such that, for all $n\geq0$, $\{1,2,\ldots
,m\}\subseteq\{i(n+1),i(n+2),\ldots,i(n+c)\}.$
\end{definition}
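For instance, with $m=2$ the control $1,2,1,1,2,1,1,2,1,\ldots$ (repeating
the block $1,2,1$) is almost cyclic with almost cyclicality constant $c=3$,
although it is not cyclic.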
Consider a finite family $T_{i}:R^{J}\rightarrow R^{J},$ $i=1,2,\ldots,m,$ of
cutters with $\cap_{i=1}^{m}\operatorname*{Fix}T_{i}\neq\emptyset$. The
following algorithm for finding a common fixed point of such a family is a
special case of \cite[Algorithm 6.1]{Combettes01}.
\begin{algorithm}
\textbf{Almost Cyclic Sequential Algorithm (ACSA) for solving common fixed
point problems }\cite[Algorithm 5]{censor-segal-2009}
\label{alg CFP of AV ops}$\left. {}\right. $
\textbf{Initialization:} $x_{0}\in R^{J}$ is an arbitrary starting point.
\textbf{Iterative Step: }Given\textbf{\ }$x_{n},$ compute $x_{n+1}$ by
\begin{equation}
x_{n+1}=x_{n}+\lambda_{n}(T_{i(n)}\left( x_{n}\right) -x_{n}).
\label{eq. Iter of AVoperators}
\end{equation}
\textbf{Control: }$(i(n))_{n\in\mathbb{N}}$ is almost cyclic on $\{1,2,\ldots
,m\}$.
\textbf{Relaxation parameters: }$(\lambda_{n})_{n\in\mathbb{N}}$ are confined
to the interval $\left[ 0,2\right] $.
\end{algorithm}
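The following is a minimal numerical sketch of the ACSA (not part of its
formal statement); the two half-spaces, the unit relaxation parameters, and
the plain cyclic control below are illustrative assumptions, with the cutters
taken to be orthogonal projections onto half-spaces.
\begin{verbatim}
import numpy as np

def proj_halfspace(x, a, b):
    # Orthogonal projection onto {u : <a,u> <= b}; such projections are cutters.
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

def acsa(cutters, x0, control, lam=1.0, iters=200):
    # Iterative step: x_{n+1} = x_n + lam*(T_{i(n)}(x_n) - x_n).
    x = np.asarray(x0, dtype=float)
    for n in range(iters):
        T = cutters[control(n)]
        x = x + lam * (T(x) - x)
    return x

# Two half-spaces in R^2 with nonempty intersection.
data = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
cutters = [lambda x, a=a, b=b: proj_halfspace(x, a, b) for a, b in data]
control = lambda n: n % len(cutters)  # cyclic control (here c = m = 2)
print(acsa(cutters, [3.0, 4.0], control))  # -> [1. 1.], a common fixed point
\end{verbatim}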
The convergence theorem of Algorithm \ref{alg CFP of AV ops} is as follows.
\begin{theorem}
\label{Theor. GenKM}Let $\left\{ T_{i}\right\} _{i=1}^{m}$ be a finite
family of cutters $T_{i}:R^{J}\rightarrow R^{J}$, which satisfies
(i) $\Omega:=\cap_{i=1}^{m}\operatorname*{Fix}T_{i}$ is nonempty, and
(ii) $T_{i}-Id$ are demiclosed at $0,$ for every $i\in\{1,2,\ldots,m\}.$
Then any sequence $(x_{n})_{n\in\mathbb{N}},$ generated by Algorithm
\ref{alg CFP of AV ops}, converges to a point in $\Omega.$
\end{theorem}
\begin{proof}
This follows as a special case of \cite[Theorem 6.6 (i)]{Combettes01}.
\end{proof}
\subsection{An Abstract Approach to the Convergence of the ACSA}
Given a sequence $(x_{n})_{n\in\mathbb{N}}$ in a Hausdorff topological space
$X$, push the multifamily $\mathrm{coGap}$ on the subsets of $\mathbb{N}$ to a
multifamily $\mathcal{M}$ on the subsets of $X$, and then consider its
multiset-limit $L$ (see Definitions \ref{def:distill copy(1)} and
\ref{def:it:lim} above), namely the multiset $\mathrm{Star}(\mathrm{cl}
\mathcal{M})$ whose representing function value at $x\in X$ is the minimum of
the value of $\mathrm{coGap}$ on the sets $\{n\in\mathbb{N}\,\mid\,x_{n}\in
U\}$ for (open) neighborhoods $U$ of $x$.
Then, by what was said in Subsection \ref{subsect:OpFxPt}, Example
\ref{ex:Gap} and Theorem \ref{tm}, we reach the following conclusion.
\begin{conclusion}
For an operator (i.e.,\ a continuous mapping) $T:X\rightarrow X$, if
$(x_{n})_{n\in\mathbb{N}}$ follows $T$ with respect to the eventual family
which is the level family, for some $c$,
\begin{equation}
\mathrm{coGap}_{c}:=\{S\subset\mathbb{N}\,\mid\mathrm{coGap}(S)\geq c\},
\end{equation}
then the level set $\{x\in X\mid\,L(x)\geq c\},$ where $L$ is the
multiset-limit of the multifamily $\mathcal{M}$ on the subsets of $X$
mentioned above, will consist of fixed points of $T$.
\end{conclusion}
This is the case with respect to each of the operators of the CFPP, for any
sequence generated by the ACSA. Thus, any sequence of iterations of the ACSA
follows each of the operators of the CFPP with respect to the eventual family
$\mathcal{E}_{c}$ in $\mathbb{N}$ consisting of \textit{all subsets of
}$\mathbb{N}$ \textit{that, after any number }$\mathit{N}$\textit{, contain
some `interval' of $c$ consecutive numbers} for some fixed number $c$.
This means that the eventual family $\mathcal{E}_{c},$ mentioned in Subsection
\ref{subsect:OpFxPt} as relevant to the sequence of iterations in the ACSA,
will be just \textit{the `level family' }$\{S\subset\mathbb{N}
\mid\mathrm{coGap}(S)\geq c\}$, and clearly any such level family of an
\textit{increasing} multifamily is automatically an eventual family.\bigskip
\textbf{Declarations:}
Funding: \textbf{The work of Yair Censor is supported by the Israel Science
Foundation and the Natural Science Foundation China, ISF-NSFC joint research
program Grant No. 2874/19.}
Conflicts of interest/Competing interests: \textbf{The authors have no
conflicts of interest to declare that are relevant to the content of this
article.}

Availability of data and material: \textbf{Not applicable.}

Code availability: \textbf{Not applicable.}

Authors' contributions: \textbf{Not applicable.}
\section{Introduction and notation.}
The well known problem launched decades ago by
Dorothy Maharam \cite{maharam} of whether a
Boolean algebra admits a strictly positive, additive
set function defined thereon -- the so-called
Maharam problem -- has motivated a long lasting
stream of mathematical research, see \cite{jech}
for a comprehensive review. In much of this literature
the focus has been on complete (or $\sigma$-complete)
Boolean algebras and countably additive set functions.
One of the first papers on this topic was that of Kelley
\cite{kelley} and it was also one of the few treating
the case of finitely additive set functions. His approach
in terms of intersection numbers is still one of the few
results characterizing the situation of a finitely additive,
strictly positive set function. Another one was obtained
much later by Jech et al \cite{balcar_jech_pazak}.
In this paper we present a very simple proof of this
important result based on the minimax theorem.
Another simple proof was obtained in recent years
by Aversa and Bhaskara Rao \cite{aversa_rao} using
linear programming (see other references quoted therein).
We also develop a number of implications that justify
interest for the method proposed here.
In this paper terms such as measure or probability
will always refer to finitely additive set functions.
\section
{Kelley's Theorem.}
Let $\A$ be an algebra of subsets of some nonempty
set $\Omega$ and $\A_+=\A\setminus\{\emp\}$.
$\Prob(\A)$ designates the family of (finitely additive)
probabilities defined on $\A$ and, for each $m\in\Prob(\A)$,
let $m(f)$ indicate the integral of $f$ with respect to
$m$ -- if well defined. $m\in\Prob(\A)$ is strictly positive
if $m(A)>0$ for all $A\in\A_+$.
For given $\Bor\subset\A_+$ write the set of finite
sequences from $\Bor$ as $\Seq[0]\Bor$. With each
$\beta\in\Seq[0]\Bor$
\footnote{
The elements of $\beta\in\Seq[0]\Bor$ need of
course not be distinct.
}
we can associate the following function on $\Omega$:
\begin{equation}
s(\beta)
=
\frac{1}{\abs\beta}\sum_{B\in\beta}\set B
\end{equation}
where $\abs\beta$ denotes the length of the sequence
$\beta$. Kelley \cite{kelley} defined the intersection number
of $\Bor$ as
\begin{equation}
\label{I}
I(\Bor)
=
\inf_{\beta\in\Seq[0]\Bor}
\sup_\omega
s(\beta)(\omega)
\end{equation}
Clearly, $0\le I(\Bor)\le1$; if $\Bor$ contains an infinite,
disjoint collection of sets then necessarily $I(\Bor)=0$, since listing $k$
pairwise disjoint members of $\Bor$ once each gives
$\sup_\omega s(\beta)(\omega)=1/k$.
If $\Bor\subset\A_+$, we introduce the family $\Sim(\Bor)$
of convex combinations of indicators of sets in $\Bor$.
Clearly,
$\{s(\beta):\beta\in\Seq[0]\Bor\}\subset\Sim(\Bor)$.
The closure of a set $A$ of real valued functions on
$\Omega$ with respect to the topology of uniform
distance will be denoted by $\cl[u]A$.
\begin{theorem}[Kelley, 59]
\label{th kelley}
An algebra $\A$ of sets admits a strictly positive, finitely
additive probability measure if and only if $\A$ may be
written in the form
\begin{equation}
\label{kelley}
\A
=
\{\emp\}
\cup
\bigcup_n\Bor_n
\qtext{with}
I(\Bor_n)>0
\qtext{for}
n=1,2,\ldots.
\end{equation}
\end{theorem}
\begin{proof}
Necessity is obvious -- if $m\in\Prob(\A)$ is strictly
positive, take $\Bor_n=\{B\in\A:m(B)>1/n\}$. As for
sufficiency, since each $f\in\Sim(\Bor)$ with values
in $\Q$ belongs to
$\{s(\beta):\beta\in\Seq[0]\Bor\}$,
then
$
\Sim(\Bor)
=
\cl[u]{\big\{s(\beta):\beta\in\Seq[0]\Bor\big\}}
$.
Moreover, each $f\in\Sim(\Bor)$ has finite range and
every $A\in\A_+$ admits some $m\in\Prob(\A)$ with
$m(A)=1$. Then, we deduce from \cite[Corollary 3.3]{sion}
\begin{align}
\label{sion}
I(\Bor)
&=
\inf_{\beta\in\Seq[0]\Bor}\sup_\omega s(\beta)(\omega)
=
\inf_{f\in\Sim(\Bor)}\sup_\omega f(\omega)
=
\inf_{f\in\Sim(\Bor)}\sup_{m\in\Prob(\A)}m(f)
=
\sup_{m\in\Prob(\A)}\inf_{f\in\Sim(\Bor)}m(f).
\end{align}
Under \eqref{kelley} each $n\in\N$ admits
$m_n\in\Prob(\A)$ satisfying
$\inf_{B\in\Bor_n}m_n(B)
>
I(\Bor_n)/2$. Then, $\sum_n2^{-n}m_n\in\Prob(\A)$ is
strictly positive.
\end{proof}
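On a finite $\Omega$ the minimax identity \eqref{sion} becomes a finite
linear program, $I(\Bor)=\max_{m}\min_{B\in\Bor}m(B)$ over probability
vectors $m$. The following sketch (the four-point $\Omega$ and the family
$\Bor$ are illustrative assumptions) computes it numerically.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

k = 4                                   # |Omega|
bor = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]  # the family Bor

# Variables (m_0,...,m_{k-1}, t); maximize t subject to m(B) >= t for all
# B in Bor and m a probability vector.
c = np.zeros(k + 1); c[-1] = -1.0       # linprog minimizes, so use -t
A_ub = np.zeros((len(bor), k + 1))
for i, B in enumerate(bor):             # rows encode t - m(B) <= 0
    for w in B:
        A_ub[i, w] = -1.0
    A_ub[i, -1] = 1.0
A_eq = np.ones((1, k + 1)); A_eq[0, -1] = 0.0   # sum_j m_j = 1
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(bor)), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * k + [(None, None)])
print(res.x[:-1], -res.fun)             # an optimal m and I(Bor) = 0.5 here
\end{verbatim}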
Since each $\Bor$ with $I(\Bor)>0$ can contain at
most finitely many, pairwise disjoint sets, it follows
from \eqref{kelley} that a family of pairwise disjoint
sets in $\A_+$ must be countable. This is the well
known CC ({\it countable chain}) necessary condition
formulated by Maharam and long conjectured to be
sufficient until Gaifman's counterexample \cite{gaifman}
of a Boolean algebra possessing the CC property but
lacking a strictly positive measure.
Aversa and Bhaskara Rao \cite{aversa_rao} make use
of the Tychonoff Theorem to prove Kelley's Theorem. This
is also important in our proof, although indirectly, via
Sion's lemma.
\section{Some Related Results}
The relative advantage of our proof, apart from
simplicity, is the great ease of generalization.
Denote by $\La(\A)$ the vector space spanned by the
indicators of sets in $\A$ and for each $\rho:\La(\A)\to\R_+$,
let
\begin{equation}
\label{N rho}
\Neg(\rho)
=
\{A\in\A:\rho(\set A)=0\}.
\end{equation}
A set function $m\in ba(\A)_+$ such that
$\Neg(m)\subset\Neg(\rho)$
is said to be strictly $\rho$-positive. If
$\rho(\set A)\ge m(A)$ for all $A\in\A$ then $m$ is
said to be $\rho$-dominated.
\begin{theorem}
\label{th kelley pi}
Let $\pi$ be a monotone, sublinear functional on $\La(\A)$.
There exists a $\pi$-dominated and strictly $\pi$-positive
$m\in ba(\A)_+$ if and only if $\A$ may be written
in the form
\begin{equation}
\label{kelley pi}
\A
=
\Neg(\pi)
\cup\bigcup_n\Bor_n
\qtext{with}
I_\pi(\Bor_n)
\equiv
\inf_{\beta\in\Seq[0]{\Bor_n}}\pi\big(s(\beta)\big)
>
0
\qquad
n=1,2,\ldots
\end{equation}
Moreover, the set function $m$ may be chosen to be
a probability if and only if $\pi(1)\ge1\ge-\pi(-1)$.
\end{theorem}
\begin{proof}
The proof of Theorem \ref{th kelley} remains true
after replacing $I$ with $I_\pi$ provided we can
show that
\begin{equation}
\label{attain}
\pi(f)
=
\sup_{m\in ba(\A,\pi)_+}\int fdm
\qquad
f\in\La(\A)
\end{equation}
and that the set
\begin{equation}
ba(\A,\pi)_+
=
\Big\{m\in ba(\A)_+:
\pi(h)\ge\int hdm\text{ for all }h\in\La(\A)\Big\}
\end{equation}
is convex and weak$^*$ compact. Both claims are,
however, obvious: the former follows from the Hahn--Banach
Theorem and the representation of linear
functionals on $\La(\A)$ (see \cite[Chapter 3]{rao}
and ultimately \cite{horn_tarski}); the latter from the
Tychonoff Theorem. If $m(\Omega)=1$ then
necessarily $\pi(1)\ge1\ge-\pi(-1)$; conversely, if
$\pi(1)\ge1\ge-\pi(-1)$ then, by well known arguments,
the functional on $\La(\A)$ defined by letting
$\hat\pi(f)
=
\inf_{a\in\R}\pi(a+f)-a$ is monotone, sublinear and
additive with respect to constants so that
$\hat\pi(1)=1=-\hat\pi(-1)$. Clearly, $\pi\ge\hat\pi$.
If $\hat m\in ba(\A)_+$ is $\hat\pi$-dominated it is
then a probability.
\end{proof}
As in Theorem \ref{th kelley}, the decomposition
\eqref{kelley pi}, although necessary and sufficient,
is not very handy to use. An easier condition is obtained
by imposing a constraint on the degree of non linearity
of $\pi$.
\begin{lemma}
\label{lemma kelley pi}
Let $\pi$ be a monotone, sublinear functional on
$\La(\A)$ satisfying the property
\begin{equation}
\label{m}
\mathfrak m(\pi)
\equiv
\sup
\frac{\sum_{i=1}^Na_i\pi(f_i)-\pi\big(\sum_{i=1}^Na_if_i\big)}
{\pi\big(\sum_{i=1}^Na_if_i\big)}
<
\infty
\end{equation}
the supremum being over all convex combinations of
elements of $\La(\A)_+$ such that
$\pi\big(\sum_{i=1}^Na_if_i\big)
>
0$.
Then there exists a strictly $\pi$-positive $m\in ba(\A)_+$
which is $\pi$-dominated.
\end{lemma}
\begin{proof}
Let $\{A_1,\ldots,A_N\}\in\Seq[0]{\Bor_n}$ with
$\Bor_n
=
\{A\in\A:\pi(\set A)>1/n\}$. Then,
$
\pi\Big(\frac1N\sum_{i=1}^N\set{A_i}\Big)
\ge
\frac1N\pi(\set{A_1})
>0
$
and, by the assumption,
\begin{align*}
\pi\Big(\frac1N\sum_{i=1}^N\set{A_i}\Big)
\ge
\frac{1}{1+\mathfrak m(\pi)}\frac1N\sum_{i=1}^N\pi(\set{A_i})
\ge
\frac{1/n}{1+\mathfrak m(\pi)}.
\end{align*}
Thus, the decomposition \eqref{kelley pi} holds.
\end{proof}
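For instance, every linear $\pi$ has $\mathfrak m(\pi)=0$, whereas
$\pi(f)=\sup_{\omega}f(\omega)$ violates \eqref{m} as soon as $\A$ contains
arbitrarily large finite collections of pairwise disjoint sets: averaging the
indicators of $N$ disjoint sets in $\A_+$ produces a ratio of $N-1$ in
\eqref{m}.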
Considering the role played in \eqref{kelley pi} by the
collection $\Neg(\pi)$, one may invert the perspective
adopted in Theorem \ref{th kelley pi} and raise the
question whether a pre-assigned family of sets
$\Neg\subset\A$ coincides with the collection of
null sets of some $m\in\Prob(\A)$, i.e. with the set
$\Neg(m)=\{A\in\A:m(A)=0\}$. Define
$\Prob(\A,\Neg)
=
\{m\in\Prob(\A):\Neg\subset\Neg(m)\}$.
We can modify definition
\eqref{I} into the following:
\begin{equation}
\label{IN}
I_\Neg(\Bor)
=
\inf_{\beta\in\Seq[0]\Bor}
\inf_{N\in\Neg}
\sup_{\omega\in N^c}s(\beta)(\omega)
\qquad
\Bor\subset\A_+.
\end{equation}
The proof of the following Corollary may be given in
terms of quotient algebras, as clearly remarked by Gaifman
\cite[p. 61]{gaifman}, but ours is much simpler
\footnote{
A different proof of the following Corollary appears in
\cite[Corollary 5]{Projection_2019}.
}.
An ideal of sets is of course a collection closed with
respect to union and to subsets.
\begin{corollary}
\label{cor kelley}
Let $\Neg\subset\A$. Then, $\Neg=\Neg(m)$ for some
$m\in\Prob(\A)$ if and only if $\Neg$ is a proper ideal
(of sets) and if $\A$ admits the representation
\begin{equation}
\label{kelley N}
\A
=
\Neg\cup\bigcup_n\Bor_n
\qtext{with}
I_\Neg(\Bor_n)>0
\qtext{for}
n=1,2,\ldots.
\end{equation}
\end{corollary}
\begin{proof}
The functional defined on $\La(\A)$ by letting
$\pi_\Neg(f)
=
\inf_{N\in\Neg}\sup_{\omega\in N^c}f(\omega)$
is monotone and positively homogeneous by
definition and subadditive because $\Neg$ is
an ideal. Moreover, given that $\Neg$ is proper,
$\pi_\Neg(1)\ge1\ge-\pi_\Neg(-1)$. The claim
follows from Theorem \ref{th kelley pi}.
\end{proof}
If $\pi$ is as in Theorem \ref{th kelley pi} and
$\pi(1)>0$, then $\Neg(\pi)$ is a proper ideal
and Corollary \ref{cor kelley} may be used to
determine the existence of a strictly $\pi$
positive probability, not necessarily $\pi$-dominated.
Fix $\Meas\subset\Prob(\A)$. By choosing
$\Neg=\bigcap_{m\in\Meas}\Neg(m)$,
Corollary \ref{cor kelley} provides an answer to the
question of whether a given subfamily of $\Prob(\A)$
is weakly dominated. The notion of weak domination
appears in \cite[p. 159]{rao} under the name of weak
absolute continuity. The corresponding question of
whether a given set $\Meas$ is dominated -- i.e. each
$m\in\Meas$ is absolutely continuous with respect
to a fixed $m_0$ -- has recently been characterized in
\cite{JMAA_2019}.
On examining the proof of Theorem \ref{th kelley pi},
the only properties of $\Prob(\A,\Neg)$ that are used
are convexity, weak$^*$ compactness and \eqref{attain}
which translates into
\begin{equation}
\label{norming}
\sup_{m\in\Prob(\A,\Neg)}m(f)
=
\inf_{N\in\Neg}\sup_{\omega\in N^c}f(\omega).
\end{equation}
In case $\Neg$ is the ideal of null sets of a given family
$\Meas\subset\Prob(\A)$ these same properties are
also true of the set
\begin{equation}
\Meas^*
=
\cco[*]
{\{m_A:m\in\Meas,\ A\in\A,\ m(A)>0\}}
\end{equation}
where $m_A\in\Prob(\A)$ is defined as the restriction
\footnote{
That is $m_A(B)=m(A\cap B)/m(A)$ for each $B\in\A$.
}
of $m$ to $A$
and $\cco[*]{}$ denotes the weak$^*$-closed convex
hull. Thus the probability that weakly dominates $\Meas$,
if it exists, can be taken to be an element of $\Meas^*$.
This remark delivers a version of a well known result
of Halmos and Savage \cite[Lemma 7]{halmos_savage}:
\begin{corollary}[Halmos and Savage, 49]
\label{cor hs}
If $\Meas\subset\Prob(\A)$ is weakly dominated then it
admits a weakly dominating subset which is countable.
\end{corollary}
\section{A.s. Rankings}
Following the intuitions of de Finetti \cite{definetti},
probability should be deduced endogenously from
some decision problem. In this final section we
investigate whether an {\it a priori} given partial
order $\ge_*$ defined
\footnote{
The symbol $\ge$ will be reserved for
pointwise order.
}
for all real-valued functions defined on $\Omega$
admits the representation as a probabilistic ranking
such as
\begin{equation}
\label{rep}
f\ge_*g
\qqtext{if and only if}
f\ge g
\quad
m
\text{ almost surely}
\end{equation}
for some reference probability $m$. Given our interest
in finite additivity, justified by the preceding results, the
exact meaning of the expression {\it almost surely}
requires some care. We shall use the expression
$f\ge g$, $m$-a.s. as short for the condition
\begin{equation}
\label{a.s.}
\inf_{t>0}m(f-g<-t)=0.
\end{equation}
When the partial order $\ge_*$ satisfies \eqref{rep}
in the above defined sense, we shall say that $\ge_*$
admits a probabilistic representation, or, if $m$ is known,
that $\ge_*$ is represented by $m$. We observe that
if $\ge_*$ indeed admits a probabilistic representation
then it will surely satisfy, among other properties, the
following ones:
\begin{enumerate}[(i).]
\item\label{1>0}
$0\not>_*1$;
\item\label{bounded}
$f\ge_*0$ and $a>0$ imply $f\wedge a\ge_*0$;
\item\label{deterministic}
$f\ge0$ implies $f\ge_*0$;
\item\label{convex}
if $f\ge_*g$ then
$bf+h\ge_*bg+h$
for all $b,h:\Omega\to\R$
with $b$ positive and bounded;
\item\label{robust}
if $f+\varepsilon\ge_*0$ for all $\varepsilon>0$ then
$f\ge_*0$.
\end{enumerate}
If $\ge_*$ is a given partial order, denote by $[f]_*$
the corresponding equivalence class of $f$
and write $\Neg_*=\{A\subset\Omega:0\ge_*\set A\}$.
\begin{theorem}
\label{th kelley order}
A partial order $\ge_*$ defined on $\R^\Omega$
has a probabilistic representation if and only if
(a)
it satisfies \iref{1>0}--\iref{robust}
and
(b)
the following decomposition holds:
\begin{equation}
\label{kelley *}
2^\Omega
=
\Neg_*\cup\bigcup_{n\in\N}\Bor_n
\qtext{where}
I_*(\Bor_n)
\equiv
\inf_{f\in\Seq[0]{\Bor_n}}\inf_{g\in[f]_*}\sup_{\omega}g(\omega)>0
\quad
n\in\N.
\end{equation}
\end{theorem}
\begin{proof}
Assume \tiref a. By \iref{deterministic} and \iref{convex}
if $f\ge_*0$ and $t>0$ we have
$
0\ge_*
-f\sset{f<-t}
\ge_*
t\sset{f<-t}
$
and so $\{f<-t\}\in\Neg_*$. Then, $g\in[f]_*$ and $\eta>0$
imply
$N_{\eta,g}\equiv\{\abs{f-g}>\eta\}
\in
\Neg_*$
while
$N\in\Neg_*$ implies $f\set{N^c}\in[f]_*$. Choose $h\in[f]_*$
and, since $\Neg_*$ is $\cup$-closed,
$N_{\eta,h}
\subset
N
\in
\Neg_*$.
We get
\begin{align}
\label{eq}
\inf_{g\in[f]_*}\sup_{\omega}g(\omega)
\le
\sup_{\omega\in N^c}f(\omega)
\le
\eta
+
\sup_{\omega}h(\omega).
\end{align}
In other words, \tiref a implies that
$I_*
=
I_{\Neg_*}
$
(see \eqref{IN}) and, by \iref{1>0} and \iref{convex}, that
$\Neg_*$ is a proper ideal. Therefore, under \tiref a and
\tiref b Corollary \ref{cor kelley} guarantees that
$\Neg_*=\Neg(m)$ for some $m\in\Prob(2^\Omega)$:
$m$ represents $\ge_*$. In fact, $f\ge_*0$ implies
$\sup_{t>0}m(f<-t)=0$ while $\{f<-t\}\in\Neg_*$ leads
first to $(f\vee-c)\sset{f<-t}\ge_*0$ for all $c>0$ (by
\iref{convex}), then to $f\sset{f<-t}\ge_*0$ (by
\iref{bounded}), hence to
$
f+\varepsilon
\ge_*
(f+\varepsilon)\sset{f\ge-\varepsilon}
\ge_*
0
$
for all $\varepsilon>0$ and, eventually, to $f\ge_*0$,
(by \iref{robust}).
On the other hand, if $\ge_*$ is the ranking
induced by some $m\in\Prob(2^\Omega)$,
then $\Neg_*=\Neg(m)$, \tiref a holds and
therefore $I_*=I_{\Neg(m)}$. Let
$\Bor_n=\{A\subset\Omega:m(A)>1/n\}$ and
$\beta\in\Seq[0]{\Bor_n}$. Then
\begin{align*}
1/n
\le
m(s(\beta))
=
\inf_{N\in\Neg_*}m(\sset{N^c}s(\beta))
\le
\inf_{N\in\Neg_*}\sup_{\omega\in N^c}s(\beta)(\omega)
\end{align*}
and, taking the infimum over $\beta\in\Seq[0]{\Bor_n}$, we get
$1/n\le I_{\Neg_*}(\Bor_n)=I_*(\Bor_n)$,
so that \tiref b holds as well.
\end{proof}
This last result has a subjective probability interpretation:
a decision maker following a choice criterion that satisfies
the above conditions \tiref a and \tiref b may be said to
take his decisions on a probabilistic basis. This means that
in principle a probability may be deduced from his behaviour.
We highlight that condition \tiref a would be enough to
imply that $\Neg_*$ is a proper ideal -- and therefore
that $\Neg_*\subset\Neg(m)$ for some $m\in\Prob(2^\Omega)$
-- but this would not be enough to guarantee that such $m$
is unique (and thus inferable from his decisions).
Uniqueness indeed requires that \eqref{kelley *} is
satisfied, although this appears as a rather difficult
condition to establish in practical problems.
\bibliographystyle{acm}
\section{Introduction}
\label{sec:intro}
Deep neural networks (DNNs) have recently demonstrated impressive predictive performance due to their ability to learn complex, non-linear, relationships between variables. However, the inability to effectively visualize these relationships has led DNNs to be characterized as black boxes. Consequently, their use has been limited in fields such as medicine (e.g. medical image classification \citep{litjens2017survey}), policy-making (e.g. classification aiding public policy makers \citep{brennan2013emergence}), and science (e.g. interpreting the contribution of a stimulus to a biological measurement \citep{angermueller2016deep}). Moreover, the use of black-box models like DNNs in industrial settings has come under increasing scrutiny as they struggle with issues such as fairness \citep{dwork2012fairness} and regulatory pressure \citep{goodman2016european}.
To ameliorate these problems, we introduce the use of hierarchical interpretations to explain DNN predictions. Our proposed method, agglomerative contextual decomposition (ACD)\footnote{Code and scripts for running ACD and experiments available at \url{https://github.com/csinva/acd}}
, is a general technique that can be applied to a wide range of DNN architectures and data types. Given a prediction from a trained DNN, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive (see \fref{fig:intro}).
The development of ACD consists of two novel contributions. First, importance scores for groups of features are obtained by generalizing contextual decomposition (CD), a previous method for obtaining importance scores for LSTMs \citep{murdoch2018beyond}. This work extends CD to arbitrary DNN architectures, including convolutional neural networks (CNNs). Second, most importantly, we introduce the idea of hierarchical saliency, where a group-level importance measure, in this case CD, is used as a joining metric in an agglomerative clustering procedure. While we focus on DNNs and use CD as our importance measure, this concept is general, and could be readily applied to any model with a suitable measure for computing importances of groups of variables.
We demonstrate the utility of ACD on both long short term memory networks (LSTMs) \citep{hochreiter1997long} trained on the Stanford Sentiment Treebank (SST) \citep{socher2013recursive} and CNNs trained on MNIST \citep{lecun1998mnist} and ImageNet \citep{russakovsky2015imagenet}. Through human experiments, we show that ACD produces intuitive visualizations that enable users to better reason about and trust DNNs. In particular, given two DNN models, we show that users can use the output of ACD to select the model with higher predictive accuracy, and that overall they rank ACD as more trustworthy than prior interpretation methods. In addition, we demonstrate that ACD's hierarchy is robust to adversarial perturbations \citep{szegedy2013intriguing} in CNNs.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{intro.pdf}
\caption{ACD illustrated through the toy example of predicting the phrase ``not very good'' as negative. Given the network and prediction, ACD constructs a hierarchy of meaningful phrases and provides importance scores for each identified phrase. In this example, ACD identifies that ``very'' modifies ``good'' to become the very positive phrase ``very good'', which is subsequently negated by ``not'' to produce the negative phrase ``not very good''. Best viewed in color.}
\label{fig:intro}
\end{figure}
\section{Background}
\label{sec:background}
Interpreting DNNs is a growing field \citep{murdoch2019interpretable} spanning a range of techniques including feature visualization \citep{olah2017feature,yosinski2015understanding}, analyzing learned weights \citep{tsang2017detecting} and others \citep{frosst2017distilling, andreas2016neural, zhang2017interpreting}. Our work focuses on local interpretations, where the task is to interpret individual predictions made by a DNN.
\paragraph{Local interpretation} Most prior work has focused on assigning importance to individual features, such as pixels in an image or words in a document. There are several methods that give feature-level importance for different architectures. They can be categorized as gradient-based \citep{springenberg2014striving, sundararajan2016gradients, selvaraju2016grad, baehrens2010explain}, decomposition-based \citep{murdoch2017automatic, shrikumar2016not, bach2015pixel}
and others
\citep{dabkowski2017real, fong2017interpretable, ribeiro2016should,zintgraf2017visualizing}, with many similarities among the methods \citep{ancona2018towards, lundberg2017unified}.
By contrast, there are relatively few methods that can extract the interactions between features that a DNN has learned. In the case of LSTMs, \citet{murdoch2018beyond} demonstrated the limitations of prior work on interpretation using word-level scores, and introduced contextual decomposition (CD), an algorithm for producing phrase-level importance scores from LSTMs. Another simple baseline is occlusion, where a group of features is set to some reference value, such as zero, and the importance of the group is defined to be the resulting decrease in the prediction value \citep{zeiler2014visualizing, li2016understanding}. Given an importance score for groups of features, no existing work addresses how to search through the many possible groups of variables in order to find a small set to show to users. To address this problem, this work introduces hierarchical interpretations as a principled way to search for and display important groups.
\paragraph{Hierarchical importance} Results from psychology and philosophy suggest that people prefer explanations that are simple but informative \citep{harman1965inference, read1993explanatory} and include the appropriate amount of detail \citep{keil2006explanation}. However, there is no existing work that is both powerful enough to capture interactions between features, and simple enough to not require a user to manually search through the large number of available feature groups. To remedy this, we propose a hierarchical clustering procedure to identify and visualize, out of the considerable number of feature groups, which ones contain meaningful interactions and should be displayed to the end user. In doing so, ACD aims to be informative enough to capture meaningful feature interactions while displaying a sufficiently small subset of all feature groups to maintain simplicity.
\section{Method}
\label{sec:method}
This section introduces ACD through two contributions: \sref{subsec:scores} proposes a generalization of CD from LSTMs to arbitrary DNNs, and \sref{subsec:agglom} explains the main contribution: how to combine these CD scores with hierarchical clustering to produce ACD.
\subsection{Contextual Decomposition (CD) importance scores for general DNNs}
\label{subsec:scores}
In order to generalize CD to a wider range of DNNs, we first reformulate the original CD algorithm into a more generic setting than originally presented. For a given DNN $f(x)$, we can represent its output as a SoftMax operation applied to logits $g(x)$. These logits, in turn, are the composition of $L$ layers $g_i$, such as convolutional operations or ReLU non-linearities.
\begin{align}
f(x) = \text{SoftMax}(g(x)) = \text{SoftMax}(g_L(g_{L-1}(...(g_2(g_1(x))))))
\end{align}
Given a group of features $\{x_j\}_{j \in S}$, our generalized CD algorithm, $g^{CD}(x)$, decomposes the logits $g(x)$ into a sum of two terms, $\beta(x)$ and $\gamma(x)$. $\beta(x)$ is the importance measure of the feature group $\{x_j\}_{j \in S}$, and $\gamma(x)$ captures contributions to $g(x)$ not included in $\beta(x)$.
\begin{align}
g^{CD}(x) & = (\beta(x), \gamma(x)) \\
\beta(x) + \gamma(x) & = g(x)
\end{align}
To compute the CD decomposition for $g(x)$, we define layer-wise CD decompositions $g^{CD}_i(x) = (\beta_i, \gamma_i)$ for each layer $g_i(x)$. Here, $\beta_i$ corresponds to the importance measure of $\{x_j\}_{j \in S}$ to layer $i$, and $\gamma_i$ corresponds to the contribution of the rest of the input to layer $i$. To maintain the decomposition we require $\beta_i + \gamma_i = g_i(x)$ for each $i$. We then compute CD scores for the full network by composing these decompositions.
\begin{align}
g^{CD}(x) = g^{CD}_L(g_{L-1}^{CD}(...(g_2^{CD}(g_1^{CD}(x)))))
\end{align}
Previous work \citep{murdoch2018beyond} introduced decompositions $g^{CD}_i$ for layers used in LSTMs. The generalized CD described here extends CD to other widely used DNNs, by introducing layer-wise CD decompositions for convolutional, max-pooling, ReLU non-linearity and dropout layers. Doing so generalizes CD scores from LSTMs to a wide range of neural architectures, including CNNs with residual and recurrent architectures.
At first, these decompositions were chosen through an extension of the CD rules detailed in \citet{murdoch2018beyond}, yielding a similar algorithm to that developed concurrently by \citet{godin2018explaining}. However, we found that this algorithm did not perform well on deeper, ImageNet CNNs. We subsequently modified our CD algorithm by partitioning the biases in the convolutional layers between $\gamma_i$ and $\beta_i$ in Equation \ref{eq:conv_cd}, and modifying the decomposition used for ReLUs in Equation \ref{eq:relu_cd}. We show the effects of these two changes in Supplement~\ref{sec:godin_comparison}, and give additional intuition in Supplement~\ref{sec:cd_score_comparison}.
When $g_i$ is a convolutional or fully connected layer, the layer operation consists of a weight matrix $W$ and a bias $b$. The weight matrix can be multiplied with $\beta_{i-1}$ and $\gamma_{i-1}$ individually, but the bias must be partitioned between the two. We partition the bias proportionally based on the absolute value of the layer activations. For the convolutional layer, this equation yields only one activation of the output; it must be repeated for each activation.
\begin{align}
\label{eq:conv_cd}
\beta_i &= W\beta_{i-1} + \frac{|W\beta_{i-1}|}{|W\beta_{i-1}| + |W\gamma_{i-1}|} \cdot b \\
\gamma_i &= W\gamma_{i-1} + \frac{|W\gamma_{i-1}|}{|W\beta_{i-1}| + |W\gamma_{i-1}|} \cdot b
\end{align}
When $g_i$ is a max-pooling layer, we identify the indices, or channels, selected by max-pool when run by $g_i(x)$, denoted $max\_idxs$ below, and use the decompositions for the corresponding channels.
\begin{align}
max\_idxs &= \underset{idxs}{\text{argmax}} \: \left[ \text{maxpool}(\beta_{i-1} + \gamma_{i-1}; idxs) \right] \\
\beta_i &= \beta_{i-1}[max\_idxs] \\
\gamma_i &= \gamma_{i-1}[max\_idxs]
\end{align}
Finally, for the ReLU, we update our importance score $\beta_i$ by computing the activation of $\beta_{i-1}$ alone and then update $\gamma_i$ by subtracting this from the total activation.
\begin{align}
\label{eq:relu_cd}
\beta_{i} &= \text{ReLU}(\beta_{i-1}) \\
\gamma_{i} &= \text{ReLU}(\beta_{i-1} + \gamma_{i-1}) - \text{ReLU}(\beta_{i-1})
\end{align}
For a dropout layer, we simply apply dropout to $\beta_{i-1}$ and $\gamma_{i-1}$ individually, i.e., multiply each by the same scalar. Computationally, a CD call is comparable to a forward pass through the network $f$.
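To make the layer-wise rules concrete, the following is a minimal sketch of the generalized CD decomposition for a toy network built only from fully connected and ReLU layers; the architecture, input, and feature mask are illustrative assumptions, and the full method additionally covers convolutional, max-pooling, and dropout layers as described above.
\begin{verbatim}
import torch
import torch.nn as nn

def cd_linear(beta, gamma, layer, eps=1e-20):
    # Propagate (beta, gamma) through Wx + b, partitioning the bias
    # proportionally to the absolute activations (the bias-partition rule).
    wb, wg = beta @ layer.weight.T, gamma @ layer.weight.T
    prop = wb.abs() / (wb.abs() + wg.abs() + eps)
    return wb + prop * layer.bias, wg + (1 - prop) * layer.bias

def cd_relu(beta, gamma):
    # ReLU rule: beta_i = ReLU(beta); gamma_i is the remaining activation.
    b = torch.relu(beta)
    return b, torch.relu(beta + gamma) - b

def cd(x, mask, layers):
    # mask selects the feature group S; beta + gamma equals the usual
    # forward activations at every layer.
    beta, gamma = x * mask, x * (1 - mask)
    for layer in layers:
        if isinstance(layer, nn.Linear):
            beta, gamma = cd_linear(beta, gamma, layer)
        elif isinstance(layer, nn.ReLU):
            beta, gamma = cd_relu(beta, gamma)
    return beta, gamma

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(1, 4)
mask = torch.tensor([[1.0, 1.0, 0.0, 0.0]])   # group = first two features
beta, gamma = cd(x, mask, list(net))
print(beta)                                   # CD importance of the group
print((beta + gamma - net(x)).abs().max())    # ~0: the decomposition holds
\end{verbatim}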
\subsection{Agglomerative Contextual Decomposition (ACD)}
\label{subsec:agglom}
Given the generalized CD scores introduced above, we now introduce the clustering procedure used to produce ACD interpretations. At a high-level, our method is equivalent to agglomerative hierarchical clustering, where the CD interaction is used as the joining metric to determine which clusters to join at each step. This procedure builds the hierarchy by starting with individual features and iteratively combining them based on the interaction scores provided by CD. The displayed ACD interpretation is the hierarchy, along with the CD importance score at each node.
More precisely, Algorithm~\ref{algo:agglom} describes the exact steps in the clustering procedure. After initializing by computing the CD scores of each feature individually, the algorithm iteratively selects all groups of features within $k$\% of the highest-scoring group (where $k$ is a hyperparameter, fixed at 95 for images and 90 for text) and adds them to the hierarchy.
Each time a new group is added to the hierarchy, a corresponding set of candidate groups is generated by adding individual contiguous features to the original group. For text, the candidate groups correspond to adding one adjacent word onto the current phrase, and for images adding any adjacent pixel onto the current image patch. Candidate groups are ranked according to the CD interaction score, which is the difference between the score of the candidate and original groups.
ACD terminates after an application-specific criterion is met. For sentiment classification, we stop once all words are selected. For images, we stop after some predefined number of iterations and then merge the remaining groups one by one using the same selection criteria described above.
\begin{algorithm}[hbtp]
\caption{Agglomeration algorithm.}
\label{algo:agglom}
\small
\textbf{ACD}(Example x, model, hyperparameter k, function CD(x, blob; model))
\begin{algorithmic}
\CommentInline{initialize}
\State tree = Tree() \Comment{tree to output}
\State scoresQueue = PriorityQueue() \Comment{scores, sorted by importance}
\For{feature in x}
\State scoresQueue.push(feature, priority=CD(x, feature; model))
\EndFor
\State
\CommentInline{iteratively build up tree}
\While{scoresQueue is not empty}
\State selectedGroups = scoresQueue.popTopKPercentile(k) \Comment{pop off top k elements}
\State tree.add(selectedGroups) \Comment{Add top k elements to the tree}
\State
\CommentInline{generate new groups of features based on current groups and add them to the queue}
\For{selectedGroup in selectedGroups}
\State candidateGroups = getCandidateGroups(selectedGroup)
\For{candidateGroup in candidateGroups}
\State scoresQueue.add(candidateGroup, priority=CD(x, candidateGroup;model)-CD(x,selectedGroup; model))
\EndFor
\EndFor
\EndWhile
\State \textbf{return} tree
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{algo:agglom} is not specific to DNNs; it requires only a method to obtain importance scores for groups of input features. Here, we use CD scores to arrive at the ACD algorithm, which makes the method specific to DNNs, but given a feature group scoring function, Algorithm~\ref{algo:agglom} can yield interpretations for any predictive model. CD is a natural score to use for DNNs as it aggregates saliency at different scales and converges to the final prediction once all the units have been selected.
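To make this generality concrete, the following is a simplified Python sketch of Algorithm~\ref{algo:agglom} written against a generic scoring interface; \texttt{score} and \texttt{get\_candidates} are placeholders for CD and for the candidate-generation rules described above, and duplicate tracking stands in for the application-specific termination criteria:
\begin{verbatim}
import heapq

def acd(features, score, get_candidates, k=0.95):
    # Initialize the queue with individual features; heapq is a
    # min-heap, so priorities are negated scores.
    heap = [(-score((f,)), (f,)) for f in features]
    heapq.heapify(heap)
    seen = set(g for _, g in heap)
    tree = []
    while heap:
        top = -heap[0][0]
        selected = []
        # Pop every group whose score is within k of the top score.
        while heap and -heap[0][0] >= k * top:
            selected.append(heapq.heappop(heap)[1])
        tree.extend(selected)
        # Rank candidate extensions by their CD interaction score.
        for group in selected:
            for cand in get_candidates(group):
                if cand in seen:
                    continue
                seen.add(cand)
                interaction = score(cand) - score(group)
                heapq.heappush(heap, (-interaction, cand))
    return tree
\end{verbatim}
Termination is guaranteed here as long as \texttt{get\_candidates} returns an empty list for maximal groups.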
\section{Results}
\label{sec:results}
We now present empirical validation of ACD on both LSTMs trained on SST and CNNs trained on MNIST and ImageNet. First, we introduce the reader to our visualization in \sref{subsec:qualitative} and show how it can (anecdotally) be used to understand models in settings such as diagnosing incorrect predictions, identifying dataset bias, and identifying representative phrases of differing lengths. We then provide quantitative evidence of the benefits of ACD in \sref{subsec:quantitative} through human experiments and by demonstrating the stability of ACD to adversarial perturbations.
\subsection{Experimental details}
\label{sec:exp_details}
We first describe the process for training the models from which we produce interpretations. As the objective of this paper is to interpret the predictions of models, rather than to increase their predictive accuracy, we use standard best practices to train our models. All models are implemented in PyTorch. For SST, we train a standard binary classification LSTM model\footnote{model and training code from https://github.com/clairett/pytorch-sentiment-classification}, which achieves 86.2\% accuracy. On MNIST, we use the standard PyTorch example\footnote{model and training code from https://github.com/pytorch/examples/tree/master/mnist}, which attains 97.7\% accuracy. On ImageNet, we use a pre-trained VGG-16 DNN architecture \cite{simonyan2014very}, which attains top-1 accuracy of 42.8\%. When using ACD on ImageNet, for computational reasons, we start the agglomeration process with 14-by-14 superpixels instead of individual pixels. We also smooth the computed image patches by adding pixels that are surrounded by the patch. The weakened models for the human experiments are constructed from the original models by randomly permuting a small percentage of their weights. For SST/MNIST/ImageNet, 25/25/0.8\% of the weights are randomized, reducing test accuracy from 85.8/97.7/42.8\% to 79.8/79.6/32.3\%.
\subsection{Qualitative experiments}
\label{subsec:qualitative}
Before providing quantitative evidence of the benefits of ACD, we first introduce the visualization and demonstrate its utility in interpreting a predictive model's behavior. To qualitatively evaluate ACD, in Supplement~\ref{sec:more_examples} we show the results of several more examples selected using the same criterion as in our human experiments described below.
\subsubsection{Understanding predictive models using ACD}
\label{subsec:anecdotes}
In the following examples, we demonstrate the use of ACD to diagnose incorrect predictions in SST and identify dataset bias in ImageNet. These examples are only a few of the potential uses of ACD.
\paragraph{Text example - diagnosing incorrect predictions} In the first example, we show the result of running ACD for our SST LSTM model in Figure \ref{fig:text_ex}. We can use this ACD visualization to quickly diagnose why the LSTM made an incorrect prediction. In particular, note that the ACD summary of the LSTM correctly identifies two longer phrases and their corresponding sentiment: \textit{a great ensemble cast} (positive) and \textit{n't lift this heartfelt enterprise out of the ordinary} (negative). It is only when these two phrases are joined that the LSTM inaccurately predicts a positive sentiment. This suggests that the LSTM has erroneously learned a positive interaction between these two phrases. Prior methods would not be capable of detecting this type of useful information.
\begin{figure}[h]
\centering
\includegraphics[width=1.2 \textwidth]{text_ex.png}
\caption{ACD interpretation of an LSTM predicting sentiment. Blue is positive sentiment, white is neutral, red is negative. The bottom row displays CD scores for individual words in the sentence. Higher rows display important phrases identified by ACD, along with their CD scores, converging to the model's (incorrect) prediction in the top row. (Best viewed in color)}
\label{fig:text_ex}
\end{figure}
\paragraph{Vision example - identifying dataset bias}
\label{paragraph:vision}
\fref{fig:viz_ex} shows an example using ACD for an ImageNet VGG model. Using ACD, we can see that to predict ``puck'', the CNN is not just focusing on the puck in the image, but also on the hockey player's skates. Moreover, by comparing the fifth and sixth plots in the third row, we can see that the network is only able to distinguish between the class ``puck'' and the other top classes when the orange skate and green puck patches merge into a single orange patch. This suggests that the CNN has learned that skates are a strong corroborating feature for pucks. While intuitively reasonable in the context of ImageNet, this may not be desirable behavior if the model were used in other domains.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{viz_ex.png}
\caption{ACD interpretation for a VGG network prediction, described in \sref{paragraph:vision}.
ACD shows that the CNN is focusing on skates to predict the class ``puck'', indicating that the model has captured dataset bias. The top row shows the original image, logits for the five top-predicted classes, and the CD superpixel-level scores for those classes. The second row shows separate image patches ACD has identified as being independently predictive of the class ``puck''. Starting from the left, each image shows a successive iteration in the agglomeration procedure. The third row shows the CD scores for each of these patches, where patch colors in the second row correspond to line colors in the third row. ACD successfully finds important regions for the target class (such as the puck), and this importance increases as more pixels are selected. Best viewed in color.}
\label{fig:viz_ex}
\end{figure}
\subsubsection{Identifying top-scoring phrases}
When feasible, a common means of scrutinizing what a model has learned is to inspect its most important features and interactions. In Table \ref{table:main_top_phrases}, we use ACD to show the top-scoring phrases of different lengths for our LSTM trained on SST. These phrases were extracted by running ACD separately on each sample in SST's validation set. The score of each phrase was then computed by averaging the scores it received across its occurrences in ACD hierarchies. The extracted phrases are clearly reflective of the corresponding sentiment, providing additional evidence that ACD is able to capture meaningful positive and negative phrases.
Additional phrases are given in Supplement~\ref{sec:top_phrases}.
\begin{center}
\begin{table}
\begin{center}
\begin{tabular}{lp{6.8cm}p{6cm}}
\hline
\textbf{Length} & \textbf{Positive} & \textbf{Negative} \\
\hline
1 & pleasurable, sexy, glorious & nowhere, grotesque, sleep \\
\hline
3 & amazing accomplishment., great fun. & bleak and desperate, conspicuously lacks. \\
\hline
5 & a pretty amazing accomplishment. & ultimately a pointless endeavour. \\
\hline
8 & presents it with an unforgettable visual panache. & my reaction in a word: disappointment. \\
\hline
\end{tabular}
\caption{Top-scoring phrases of different lengths extracted by ACD on SST's validation set. The positive/negative phrases identified by ACD are all indeed positive/negative.}
\label{table:main_top_phrases}
\end{center}
\end{table}
\end{center}
\subsection{Quantitative experiments}
\label{subsec:quantitative}
Having introduced our visualization and provided qualitative evidence of its uses, we now provide quantitative evidence of the benefits of ACD.
\subsubsection{Human experiments}
\label{subsec:human}
We now demonstrate through human experiments that ACD allows users to better trust and reason about the accuracy of DNNs. Human subjects consist of eleven graduate students at the authors' institution, each of whom has taken a class in machine learning. Each subject was asked to fill out a survey with two types of questions: whether, using ACD, they could identify the more accurate of two models, and whether they trusted a model's output. In both cases, similar questions were asked on three datasets (SST, MNIST and ImageNet), and ACD was compared against three baselines: CD \citep{murdoch2018beyond}, Integrated Gradients (IG) \citep{sundararajan2016gradients}, and occlusion \citep{li2016understanding, zeiler2014visualizing}. The exact survey prompts are provided in Supplement~\ref{sec:amt}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{human_eval.png}
\caption{Results for human studies. \textbf{A.} Binary accuracy for whether a subject correctly selected the more accurate model using different interpretation techniques. \textbf{B.} Average rank (from 1 to 4) of how much different interpretation techniques helped a subject to trust a model; higher ranks are better.}
\label{fig:human_results}
\end{figure}
\paragraph{Identifying an accurate model}
The objective of this section was to determine if subjects could use a small number of interpretations produced by ACD in order to identify the more accurate of two models. For each question in this section, two example predictions were chosen. For each of these two predictions, subjects were given interpretations from two different models (four total), and asked to identify which of the two models had a higher predictive accuracy. Each subject was asked to make this comparison using three different sets of examples for each combination of dataset and interpretation method, for 36 total comparisons. To remove variance due to examples, the same three sets of examples were used across all four interpretation methods.
The predictions shown were chosen to maximize disagreement between models, with SST also being restricted to sentences between five and twenty words, for ease of visualization. To prevent subjects from simply picking the model that predicts more accurately for the given example, for each question a user is shown two examples: one where only the first model predicts correctly and one where only the second model predicts correctly. The two models considered were the accurate models of the previous section and a weakened version of that same model (details given in \sref{sec:exp_details}).
Fig.~\ref{fig:human_results}A shows the results of the survey. For SST, humans were better able to identify the more accurate model using ACD compared to other baselines, with only ACD and CD outperforming random selection (50\%). Based on a one-sided two-sample t-test, the gaps between ACD and IG/occlusion are significant, but not the gap between ACD and CD. In the simple setting of MNIST, ACD performs similarly to other methods. When applied to ImageNet, a more complex dataset, ACD substantially outperforms prior, non-hierarchical methods, and is the only method to outperform random chance, although the gaps between ACD and other methods are only statistically suggestive (p-values fall between 0.07 and 0.15).
\paragraph{Evaluating trust in a model} In this section, the goal is to gauge whether ACD helps a subject to better trust a model's predictions, relative to prior techniques. For each question, subjects were shown interpretations of the same prediction using four different interpretation methods, and were asked to rank the interpretations from one to four based on how much they instilled trust in the model. Subjects were asked to do this ranking for three different examples in each dataset, for nine total rankings. The interpretations were produced from the more accurate model of the previous section, and the examples were chosen using the same criteria as in the previous section, except that they were restricted to examples correctly predicted by the more accurate model.
Fig.~\ref{fig:human_results}B shows the average ranking received by each method/dataset pair. ACD substantially outperforms other baselines, particularly for ImageNet, achieving an average rank of 3.5 out of 4, where higher ranks are better. As in the prior question, we found that the hierarchy only provided benefits in the more complicated ImageNet setting, with results on MNIST inconclusive. For both SST and ImageNet, the difference in mean ranks between ACD and all other methods is statistically significant (p-value less than 0.005) based on a permutation test, while on MNIST only the difference between ACD and occlusion is significant.
\subsubsection{ACD hierarchy is robust to adversarial perturbations}
\label{subsec:adversarial_experiments}
While there has been a considerable amount of work on adversarial attacks, little effort has been devoted to qualitatively understanding this phenomenon. In this section, we provide evidence that, on MNIST, the hierarchical clustering produced by ACD is largely robust to adversarial perturbations. This suggests that ACD's hierarchy captures fundamental features of an image, and is largely immune to the spurious noise favored by adversarial examples.
To measure the robustness of ACD's hierarchy, we first qualitatively compare the interpretations produced by ACD on both an unaltered image and an adversarially perturbed version of that image. Empirically, we found that the extracted hierarchies are often very similar; see Supplement~\ref{sec:acd_adv}. To generalize these observations, we introduce a metric to quantify the similarity between two ACD hierarchies. This metric allows us to make quantitative, dataset-level statements about the stability of ACD feature hierarchies with respect to adversarial inputs. Given an ACD hierarchy, we compute a ranking of the input image's pixels according to the order in which they were added to the hierarchy. To measure the similarity between the ACD hierarchies for original and adversarial images, we compute the correlation between their corresponding rankings. As ACD hierarchies are class-specific, we average the correlations for the original and adversarially altered predictions.
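A minimal sketch of this similarity metric, assuming each hierarchy yields an array giving, for every pixel, the iteration at which it was added (and using rank correlation, one natural choice for correlating rankings):
\begin{verbatim}
from scipy.stats import spearmanr

def hierarchy_correlation(order_orig, order_adv):
    # order_orig, order_adv: arrays assigning each pixel the iteration
    # at which ACD added it to the hierarchy.
    return spearmanr(order_orig.ravel(), order_adv.ravel()).correlation
\end{verbatim}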
We display the correlations for five different attacks (computed using the Foolbox package \cite{rauber2017foolbox}, examples shown in Supplement~\ref{sec:adv_examples}),
each averaged over 100 randomly chosen predictions, in Table \ref{table:adv_results}. As ACD is the first local interpretation technique to compute a hierarchy, there is little prior work available for comparison. As a baseline, we use our agglomeration algorithm with occlusion in place of CD. The resulting correlations are substantially lower, indicating that the features detected by ACD are more stable to adversarial attacks than those of comparable methods. These results further support the claim that ACD's hierarchy captures fundamental features of an image while remaining largely immune to the spurious noise favored by adversarial examples.
\begin{center}
\begin{table}
\begin{center}
\begin{tabularx}{0.82\textwidth}{@{} l c c @{}}
\hline
\textbf{Attack Type} & \textbf{ACD} & \textbf{Agglomerative Occlusion} \\
\hline
Saliency \citep{papernot2016limitations} & 0.762 & 0.259 \\
\hline
Gradient attack & 0.662 & 0.196 \\
\hline
FGSM \citep{goodfellow2014explaining} & 0.590 & 0.131 \\
\hline
Boundary \citep{brendel2017decision} & 0.684 & 0.155 \\
\hline
DeepFool \citep{moosavi2016deepfool} & 0.694 & 0.202 \\
\hline
\end{tabularx}
\caption{Correlation between pixel ranks for different adversarial attacks. ACD achieves consistently high correlation across different attack types, indicating that ACD hierarchies are largely robust to adversarial attacks. Using occlusion in place of CD produces substantially less stable hierarchies.}
\label{table:adv_results}
\end{center}
\end{table}
\end{center}
\section{Conclusion}
\label{sec:discussion}
In this work, we introduce agglomerative contextual decomposition (ACD), a novel hierarchical interpretation algorithm. ACD is the first method to use a hierarchy to interpret individual neural network predictions. Doing so enables ACD to automatically detect and display non-linear contributions to individual DNN predictions, something prior interpretation methods are unable to do. The benefits of capturing the non-linearities inherent in DNNs are demonstrated through human experiments and examples of diagnosing incorrect predictions and dataset bias. We also demonstrate that ACD's hierarchy is robust to adversarial perturbations in CNNs, implying that it captures fundamental aspects of the input and ignores spurious noise.
\section{CD score comparisons}
\label{sec:cd_score_comparison}
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.8\textwidth]{cd_propagation_full.png}
\caption{Intuition for CD run on a corner-shaped blob compared to \textit{build-up} and \textit{occlusion}. CD decomposes a DNN's feedforward pass into a part from the blob of interest (top row) and everything else (second row). Left column shows original image with overlaid blob. Other columns show DNN activations summed over the filter dimension. Top and third rows are on same color scale. Second and bottom rows are on same color scale.}
\label{fig:cd_propagation_full}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.6\textwidth]{compare_scores.png}
\caption{Comparing unit-level CD scores for the correct class to scores from baseline methods. In each case, the model correctly predicts the label, shown on the y axis. Blue is positive, white is neutral, and red is negative. Best viewed in color.}
\label{fig:score_comparison}
\end{figure}
\fref{fig:cd_propagation_full} gives intuition for CD on the VGG-16 ImageNet model described in \sref{sec:results}. CD keeps track of the contributions of the blob and non-blob throughout the network. This is intuitively similar to the occlusion and build-up methods, shown in the bottom two rows. The build-up method sets everything but the patch of interest to a reference value (often zero). These rows compare the CD decomposition to perturbing the input as in the occlusion and build-up methods: they are similar in early layers, but differences become apparent in later layers.
\fref{fig:score_comparison} shows the 7x7 superpixel-level scores for four images, comparing different methods for obtaining importance scores. CD scores better identify information relevant to predicting the correct class.
\FloatBarrier
\section{Top scoring ACD phrases}
\label{sec:top_phrases}
Here we provide an extended version of Table \ref{table:main_top_phrases} (Table \ref{table:top_phrases}), containing the top 5 phrases of each length for positive/negative polarities. These were extracted using ACD from an LSTM trained on SST.
\begin{center}
\begin{table}
\begin{center}
\begin{tabular}{lp{6cm}p{6cm}}
\hline
\textbf{Length} & \textbf{Positive} & \textbf{Negative} \\
\hline
1 & 'pleasurable', 'sexy', 'glorious', 'delight', 'unforgettable' & 'nowhere', 'grotesque', 'sleep', 'mundane', 'cliché' \\
\hline
3 & 'amazing accomplishment .', 'great fun .', 'good fun .', 'language sexy .', 'are magnificent .' & 'very bad .', ': disappointment .', 'quite bad .', 'conspicuously lacks .', 'bleak and desperate' \\
\hline
5 & 'a pretty amazing accomplishment .', 'clearly , great fun .', 'richness of its performances .', 'a delightful coming-of-age story .', 'an unforgettable visual panache .' & 'ultimately a pointless endeavor .', 'this is so bad .', 'emotion closer to pity .', 'fat waste of time .', 'sketch gone horribly wrong .' \\
\hline
8 & 'presents it with an unforgettable visual panache .', 'film is packed with information and impressions .', 'entertains by providing good , lively company .' & 'my reaction in a word : disappointment .', "'s slow -- very , very slow .", 'a dull , ridiculous attempt at heart-tugging .' \\
\hline
12 & 'in delicious colors , and the costumes and sets are grand .', 'part stevens glides through on some solid performances and witty dialogue .', 'mamet enthusiast and for anyone who appreciates intelligent , stylish moviemaking .' & "actors provide scant reason to care in this crude '70s throwback .", 'more often just feels generic , derivative and done to death .', 'its storyline with glitches casual fans could correct in their sleep .' \\
\hline
15 & 'serry shows a remarkable gift for storytelling with this moving , effective little film .', ', lathan and diggs are charming and have chemistry both as friends and lovers .' & 'level that one enjoys a bad slasher flick , primarily because it is dull .', 'technicality that strains credulity and leaves the viewer haunted by the waste of potential .' \\
\hline
\end{tabular}
\caption{Top-scoring phrases of different lengths extracted by ACD on SST's validation set. The positive/negative phrases identified by ACD are all indeed positive/negative.}
\label{table:top_phrases}
\end{center}
\end{table}
\end{center}
\section{ACD Examples}
\label{sec:more_examples}
We provide additional, automatically selected visualizations produced by ACD. These examples were chosen using the same criteria as the human experiments described in \sref{subsec:human}. All examples are best viewed in color.
\includepdf[fitpaper=true,pages=-]{supp_examples.pdf}
\section{Human experiments experimental setup}
\label{sec:amt}
The order of questions is randomized for each subject. Below are the instructions and questions given to the user (for brevity, the actual visualizations are omitted, but they are similar to the visualizations shown in Supplement~\ref{sec:more_examples}).
\fbox{\fbox{\parbox{5.5in}{\centering
This survey aims to compare different interpretation techniques. In what follows, blue is positive, white is neutral, and red is negative.}}}
\subsection{Sentiment classification}
\subsubsection{Choosing the better model}
In this section, the task is to compare two models that classify movie reviews as either positive (good movie) or negative (bad movie). One model has better predictive accuracy than the other.
In what follows, you will see visualizations of what both models have learned. These visualizations use different methods of identifying contributions to the final prediction of either individual words or groups of them. For each model, we show visualizations of two different examples.
In these visualizations, the color shows what the model thinks for individual words / groups of words. Blue is positive sentiment (e.g., ``great'', ``fantastic'') and red is negative sentiment (e.g., ``terrible'', ``miserable'').
\textbf{Using these visualizations, please write A or B to select which model you think has higher predictive accuracy.}
\subsubsection{Gauging trust}
Now, we show results only from the good model. Your task is to compare different visualizations. For the following predictions, please select which visualization method leads you to trust the model the most.
\textbf{Put a number next to each of the following letters ranking them in the order of how much they make you trust the model (1-4, 1 is the most trustworthy).}
\subsection{MNIST}
\subsubsection{Choosing the better model}
Now we will perform a similar challenge for vision. Your task is to compare two models that classify images into classes, in this case digits from 0-9. One model has higher predictive accuracy than the other.
In what follows, you will see visualizations of what both models have learned. These visualizations use different methods of identifying contributions to the final prediction of either individual pixels or groups of them. Using these visualizations, please select the model you think has higher accuracy.
For each prediction, the top row contains the raw image followed by five heat maps, and the title shows the predicted class. Each heatmap corresponds to a different class, with blue pixels indicating a pixel is a positive signal for that class, and red pixels indicating a negative signal. \textbf{The first heatmap title shows the predicted class of the network - this is wrong half the time. In some cases, each visualization has an extra row, which shows groups of pixels}, at multiple levels of granularity, that contribute to the predicted class.
\textbf{Using these visualizations, please select which model you think has higher predictive accuracy, A or B.}
\subsubsection{Gauging trust}
Now, we show results only from the good model. Your task is to compare different visualizations. For the following predictions, please select which visualization method leads you to trust the model the most.
\textbf{Put a number next to each of the following letters ranking them in the order of how much they make you trust the model (1-4, 1 is the most trustworthy).}
\subsection{ImageNet}
\subsubsection{Choosing the more accurate model}
Now we will perform a similar challenge for vision. Your task is to compare two models that classify images into classes (ex. balloon, bee, pomegranate). One model is better than the other in terms of predictive accuracy.
In what follows, you will see visualizations of what both models have learned. These visualizations use different methods of identifying contributions to the final prediction of either individual pixels or groups of them.
For each prediction, the top row contains the raw image followed by five heat maps, and the title shows the predicted class. Each heatmap corresponds to a different class, with blue pixels indicating a pixel is a positive signal for that class, and red pixels indicating a negative signal. \textbf{The first heatmap title shows the predicted class of the network - this is wrong half the time. In some cases, each visualization has an extra row, which shows groups of pixels}, at multiple levels of granularity, that contribute to the predicted class.
\textbf{Using these visualizations, please select which model you think has higher predictive accuracy, A or B.}
\subsubsection{Gauging trust}
Now, we show results only from the more accurate model. Your task is to compare different visualizations. For the following predictions, please select which visualization method leads you to trust the model's decision the most.
\textbf{Put a number next to each of the following letters ranking them in the order of how much they make you trust the model (1-4, 1 is the most trustworthy).}
\section{ACD on adversarial examples}
\label{sec:acd_adv}
The hierarchies constructed by ACD to explain a prediction of 0 are substantially similar for both the original image and an adversarially perturbed image predicted to be a 6.
\begin{figure}[H]
\centering
\textbf{Original image}
\includegraphics[width=1.0\textwidth]{zero_orig.png}
\end{figure}
\begin{figure}[H]
\centering
\textbf{Adversarial image}
\includegraphics[width=1.0\textwidth]{zero_adv.png}
\caption{Example of ACD run on an image of class 0 before and after an adversarial perturbation (a DeepFool attack). Best viewed in color.}
\end{figure}
\section{Adversarial attack examples}
\label{sec:adv_examples}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{attack_examples.png}
\caption{Examples of attacks for one image. Original image (left column) is correctly predicted as class 0. After each adversarial perturbation (middle column), the predicted class for the adversarial image (right column) is now altered.}
\label{fig:attack_examples}
\end{figure}
\section{Generalizing CD to CNNs}
\label{sec:godin_comparison}
\fref{fig:godin_comparison} qualitatively shows the change in behavior resulting from two modifications made to the naive extension of CD to CNNs, which was independently developed by \citet{godin2018explaining}. During development of our general CD, two changes were made. First, we partitioned the bias between $\gamma_i$ and $\beta_i$, as described in Equation \ref{eq:conv_cd}. As can be seen in the second column, this qualitatively reduces the noise in the heat maps. Next, we replaced the ReLU Shapley decomposition with the decomposition given in Equation \ref{eq:relu_cd}. As the third column shows, this effectively prevents the CD scores from becoming unrealistically large in areas that should not be influencing the model's decision. When these two approaches are combined in the fourth column, they provide qualitatively sensible heatmaps with reasonably valued CD scores. When applied to the smaller models used on SST and MNIST, these changes do not have large effects on the interpretations.
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.9\textwidth]{godin_fig.png}
\caption{Comparing unit-level CD scores to CD scores from the naive extension of CD to CNNs, independently developed by \cite{godin2018explaining}. Labels under the bottom row signify the minimum and maximum scores from each column. Altering the bias partition and ReLU decomposition qualitatively improves scores (e.g. see scores in bottom row corresponding to the location of the crane), and avoids extremely large magnitudes (see values under left two columns). Blue is positive, white is neutral, and red is negative. In each case, scores are for the correct class, which the model predicts correctly (shown on the y axis).}
\label{fig:godin_comparison}
\end{figure}
\section{Introduction}
\label{introduction}
In the last few years, we have seen
remarkable progress in the ab-initio simulation
of realistic electronic systems based on first-principles quantum
mechanics.
Despite the power of density functional theory (DFT), with standard
local density approximation (LDA) and generalized gradient approximation (GGA) functionals,
much effort has been devoted to schemes
that are able to describe more accurately the electronic correlations.
This is because several materials -- such as high temperature superconductors --
are indeed strongly correlated. Furthermore, long-range
dispersive forces
may be extremely important even in simple and fundamental materials, like water,
and are notoriously difficult to describe with standard DFT.
A promising many-body approach, alternative to DFT,
is the so called quantum Monte Carlo (QMC) method,
allowing one to include the electronic correlations by means of a highly-accurate
many-body wave function (WF), sampled by a statistical method.
All the basic ingredients of the electronic
correlation are described explicitly within this framework. This is
particularly appealing because its computational cost scales
rather well with the number of electrons $N$, with a modest power, e.g., $N^3$ or $N^4$.
Due to this important property, QMC is very promising for large scale calculations especially
when compared with standard post-Hartree-Fock methods. In fact, these methods are
also capable of describing the electronic correlations rather well. However, they typically require a larger
computational cost: from polynomial, in the range between $N^4$ and
$N^7$, to exponential complexity in full configuration interaction (FCI) schemes.
Despite the clear advantage of QMC for the accurate electronic
simulation of materials containing a large number of atoms, its application
has been mainly limited to total energy calculations.
In particular, no general method for calculating ionic forces that remains efficient even for a large number of atoms $M$ has been available so far.
Indeed, many-body forces usually lead to cumbersome expressions,
whose evaluation can be done at present only by means of complicated and computationally expensive algorithms. For this reason, such calculations have been implemented so far only for particularly simple cases.
For instance, it was recently possible \cite{attacc} to simulate $ \sim 100$
hydrogen atoms in the low temperature, high pressure phase, with the
calculation of the ionic forces based on
efficient strategies to work with finite variance expressions \cite{caffarell,caffforce}.
Unfortunately, this route cannot be followed in general because it works very efficiently
only for light atoms.
On the other hand, an accurate algorithm for the calculation
of forces
that is in principle
efficient also with heavy atoms, and with the use of pseudopotentials,
was introduced a long time ago \cite{warp}.
This method is based on the so-called
{\em space warp coordinate transformation} (SWCT), allowing a zero-variance expression -- i.e., maximum efficiency in QMC --
even for isolated atoms.
We believe that, without a highly effective variance reduction scheme in the calculation
of forces, such as the SWCT, there is no hope of extending the applicability of QMC to structural optimization of complex and correlated
materials, or to perform ab-initio molecular dynamics simulation at finite
temperature. However, even when using this promising approach, or others
based on the zero-variance principle \cite{caffforce},
standard implementations, e.g.,
based on finite-difference approximations of the derivatives appearing in the
expressions for the ionic forces, are still computationally very expensive -- to the
point of being unfeasible for a large number of atoms.
In this work, we propose a simple strategy for the
efficient calculation of the ionic forces
-- and generally of any arbitrarily complicated derivative of the
QMC total energy --
by using {\em adjoint algorithmic differentiation} (AAD).
We will show that this method will allow us to achieve two very
important targets:
\begin{enumerate}
\item the numerical implementation of all the energy derivatives will be possible in a straightforward way without any reference to complicated expressions, as for instance the terms
shown in Ref.~\onlinecite{needs}, when pseudopotentials are used to
remove the effect of core electrons;
\item more importantly, the calculation of {\em an arbitrarily large}
number of energy derivatives will be possible with a computational effort
comparable with the one to compute the energy alone. In other words, once
the calculation of the energy is well optimized, by means of AAD, also the
one for the energy derivatives will be almost optimal.
\end{enumerate}
The latter property is rather remarkable, and suggests that most of
the advanced ab-initio tools so far available only within the rather restrictive approximations
of the DFT functional, such as structural optimization and molecular
dynamics at finite temperature, can be extended to highly accurate many-body
approaches based on QMC, with a computational effort that remains
affordable even for a large number of atoms.
Though in most of the examples presented here we have used the standard variational QMC
method, the technique we propose in this paper directly
applies also to more accurate QMC projection
schemes, such as (lattice regularized) diffusion Monte Carlo.
However, one has to take into account that in this case there are unsolved
technical problems -- e.g., infinite variance in the most
accurate estimators -- that
do not depend on the technique we propose, and are
outside the main scope of this paper.
In order to avoid confusion, we anticipate that we apply the AAD
technique only to the specific calculation of the wave function and
the local energy (see later for their definitions)
required for the VMC or DMC (LRDMC) evaluation of the average energy. In
principle AAD can be applied to the whole algorithm, but we have
not exploited this possibility, which may be interesting for future applications.
\section{QMC wave function, VMC and LRDMC }
In this Section we begin by describing the WF that we have used in our QMC calculations.
In the following we will denote with $\mathbf{ r} $ a generic three dimensional
electronic position, whereas $x=\{ \mathbf{r}_1, \mathbf{r}_2, \cdots ,
\mathbf{r}_N \} $ stands for a configuration of
all electron positions and spins; the $N_\uparrow$ spin-up
electrons are at positions $\mathbf{r}_i$ with $1\le i \le N_\uparrow$, and the
$N_\downarrow$ spin-down electrons at positions $\mathbf{r}_i$ with $N_\uparrow +1 \le i \le N$.
The usual trial WF used in the QMC calculation is the product of an
antisymmetric part, and a Jastrow factor. The antisymmetric part is a single Slater determinant, while the Jastrow factor is a bosonic
many-body function which accounts for the dynamic correlations
in the system. Our Slater determinant is obtained with
$N/2$ doubly occupied molecular orbitals $\psi_j(\mathbf {r} )$, expanded over $L$ atomic Gaussian
orbitals $\phi_j( \mathbf{ r})$, centered at atomic positions $\mathbf{R}_j$,
as \cite{marchi}:
\begin{equation} \label{defpsi}
\psi_i( \mathbf{ r} ) = \sum \limits_{j=1}^L \chi_{ij} \phi_j( \mathbf{ r})
\end{equation}
where the coefficients $\chi_{ij}$, as well as the non-linear coefficients,
appearing in the exponents of the Gaussians,
can be fully optimized by energy minimization as described later.
The molecular orbitals are initialized from a
self consistent DFT-LDA calculation, in the same atomic basis.
The Jastrow factor takes into account the electronic correlation between two
electrons, and is conventionally split into a homogeneous interaction $J_2$, depending on the relative
distance between two electrons, and two non-homogeneous contributions $J_3$ and $J_4$,
depending on the positions of two electrons and one atom, and two electrons and two atoms, respectively.
It also contains an inhomogeneous term $\tilde J_2$, describing the electron-ion interaction. This
is important to compensate for the
change in the one particle density induced by $J_2$, $J_3$ and $J_4$, as well as to satisfy the electron-ion cusp conditions.
The homogeneous and inhomogeneous two-body terms $J_2$ and $ \tilde J_2$ are defined by the following equations:
\begin{eqnarray}
\label{j1}
\tilde J_2=\exp{\big[\sum_{ia}-(2Z_a)^{3/4}u(Z_a^{1/4}r_{ia})+\sum_{ial}
g_l^a \chi_{al}^J(\mathbf{r}_i)\big]},
\end{eqnarray}
and
\begin{eqnarray}
\label{j2}
J_2=\exp{[\sum_{i<j}^{}u(r_{ij})]},
\end{eqnarray}
where $i,j$ are indices running over the electrons, and $l$ runs over different single particle orbitals $\chi_{al}^J$
centered on the atomic center $a$; $r_{ia}$ and $r_{ij}$ denote
the electron-ion and the
electron-electron distances, respectively. The corresponding cusp conditions are fixed by
the function $u(r)=F[1-\exp(-r/F)]/2$ (see e.g., Ref.~\cite{rocca}), whereas
$g_l^a$ and $F$ are optimizable variational parameters.
The three- and four-body Jastrow terms $J_{3}$ and $J_{4}$ are given by:
\begin{equation}
\label{jastrow}
J_{3}(x) J_{4}( x)=
\exp\left(\sum \limits_{i<j} f(\mathbf{ r}_i ,\mathbf{ r}_j)\right),
\end{equation}
with $f (\mathbf{ r}_i ,\mathbf{ r}_j)$ being a two-electron coordinate
function that can be expanded into the same single-particle basis
used for $\tilde J_{2}$:
\begin{eqnarray}\label{3bjsp}
f(\mathbf{r}_i,\mathbf{r}_j)=
\sum_{ablm}^{} g_{lm}^{ab}\,\chi_{al}^{J}(\mathbf{r}_i)
\chi_{bm}^{J}(\mathbf{r}_j),
\end{eqnarray}
with $g_{lm}^{ab}$ optimizable parameters. Three-body (electron-ion-electron) correlations are described by the diagonal matrix elements
$g^{aa}$, whereas four-body correlations (electron-ion-electron-ion) are described by the
matrix elements with $a\ne b$.
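As a small illustration, the expansion in Eq.~(\ref{3bjsp}) is a bilinear form in the single-particle Jastrow basis, so it can be evaluated as a matrix sandwich (here \texttt{chi} is a hypothetical helper returning the vector of basis values $\chi^J(\mathbf{r})$ over all atoms and orbitals, and \texttt{G} collects the parameters $g_{lm}^{ab}$):
\begin{verbatim}
def pair_jastrow(r_i, r_j, G, chi):
    # f(r_i, r_j) = chi(r_i)^T G chi(r_j); diagonal atomic blocks of G
    # give three-body terms, off-diagonal blocks give four-body terms.
    return chi(r_i) @ G @ chi(r_j)
\end{verbatim}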
The complete expression of the Jastrow factor $J( x) = J_2(x) \tilde J_2 (x) J_3(x) J_4(x) $
that we adopt in this work allows us to take into account weak and long-range electron-electron interactions,
and it is also extremely effective for suppressing higher energy configurations occurring when electrons are too close.
In order to minimize the energy expectation value corresponding to this WF
we have applied the well-established energy minimization
schemes \cite{casulamol,rocca,umrigar}, that
we have recently adapted for an efficient optimization of the molecular
orbitals of the Slater determinant in presence of the Jastrow factor described above \cite{marchi}.
In variational Monte Carlo (VMC), the energy expectation value of a given
correlated WF, depending on a set of variational parameters
$c_i, i=1,\cdots p$, can be computed with a standard statistical method.
The energy depends in turn on the atomic positions ${\mathbf{R}_a}$,
$a=1, \cdots M$, so that we indicate formally:
\begin{equation}
E(\{c_i\},\{\mathbf{R}_a\})= { \langle \Psi_{\{c_i\},\{\mathbf{R}_a \} }
| \hat H_{\{\mathbf{R}_a \}} | \Psi_{\{c_i\},\{\mathbf{R}_a \}} \rangle
\over
\langle \Psi_{\{c_i\},\{\mathbf{R}_a \}}
| \Psi_{\{c_i\},\{\mathbf{R}_a \} } \rangle }~.
\end{equation}
In VMC the above energy expectation value is computed statistically by sampling the probability $ \Pi (x) \propto \langle x | \Psi \rangle^2 $.
Analogous and more accurate techniques are also possible within QMC.
In this work the lattice regularized diffusion Monte Carlo (LRDMC) will be also used.
The latter method is a projection
technique, filtering out the ground state component of a given
variational WF by applying the propagator $ e^{ - H \tau } $
to $ \Psi$ for large imaginary time $\tau $. This propagation is
subject to the
restriction of modifying only the amplitudes of the WF without
affecting its phases.
In this way one can avoid the so called ``fermion sign problem''
instability in QMC, with a highly accurate technique,
providing a rigorous upper bound of the total energy even in presence of
pseudopotentials \cite{lrdmc}.
\section{Space warp coordinate transformation and its differential form}
The main purpose of this Section is to describe an efficient method
to compute
the forces $\mathbf{F}$ acting on each of the $M$ nuclear positions
$\{\mathbf{R}_1, \ldots , \mathbf{R}_M\}$, namely,
\begin{equation}
\label{force}
\mathbf{F}(\mathbf{R}_a)=-\mathbf{\nabla}_{\mathbf{R}_a}
E(\{c_i\}, \{\mathbf{R}_a\}),
\end{equation}
with a reasonable statistical accuracy.
Following Ref.~\onlinecite{casulamol}, we introduce
a finite-difference operator $ { \Delta/\Delta\mathbf{ R}_a } $ for the evaluation of the
force acting on a given nuclear position $\mathbf{R}_a$,
\begin{equation} \label{forcefinite}
{ \Delta \over {\Delta \mathbf{R}_a} }E =
{ E(\mathbf{R}_a + \Delta\mathbf{ R}_a ) - E(\mathbf{R}_a )
\over {\Delta \mathbf{ R}_a} }
\end{equation}
so that
\begin{equation}
\mathbf{F} (\mathbf{R}_a)=- { \Delta \over \Delta \mathbf{ R}_a }E + O({\Delta R})
\end{equation}
where $\Delta \mathbf{R}_a$ is a three dimensional vector.
Its length $\Delta R $ can be chosen as small as $10^{-6}$ atomic
units, yielding negligible finite-difference errors for the evaluation of the exact energy derivative.
In order to evaluate the energy differences in Eq.~(\ref{forcefinite})
it is very convenient to apply
the space warp coordinate transformation (SWCT).
This transformation
was introduced a long time ago in Ref.~\onlinecite{warp},
for an efficient
calculation of the ionic forces within VMC.
According to this transformation, as soon as the ions are displaced,
also the electronic coordinates
$\bf r$ will be translated in order to mimic
the displacement of the charge around the nucleus
$\mathbf{R}_a$, namely $x \to \bar x$, with
\begin{equation} \label{spacewarp}
\overline{ \mathbf{r}}_i=\mathbf{r}_i+
\Delta\mathbf{ R}_a ~ \omega_a(\mathbf{r}_i),
\end{equation}
\begin{equation}
\omega_a(\mathbf{r})=\frac{F(| \mathbf{r}-\mathbf{R}_a|)}
{\sum_{b=1}^{M} F(| \mathbf{r}-\mathbf{R}_b|)}~,
\end{equation}
and $F(r)$ is a function which must decay rapidly; here we used
$F(r)={1}/{r^4}$ as suggested in Ref.~\onlinecite{filippi}.
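A minimal numerical sketch of the transformation (with the choice $F(r)=1/r^4$ hard-coded, and no special handling of the coincident-point limit) reads:
\begin{verbatim}
import numpy as np

def omega(r, R, a):
    # r: (3,) electron position; R: (M, 3) nuclear positions.
    d = np.linalg.norm(R - r, axis=1)   # distances |r - R_b|
    w = 1.0 / d**4                      # F(r) = 1/r^4
    return w[a] / w.sum()

def warp(r, R, a, dRa):
    # Warped electron position r + dRa * omega_a(r).
    return r + dRa * omega(r, R, a)
\end{verbatim}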
The expectation value of the energy depends on
$\Delta\mathbf{ R}_a$, because both the Hamiltonian and the WF depend
on the nuclear positions. Applying the SWCT to
the integral involved in the calculation, the expectation value reads
\begin{equation} \label{forcewarp}
E( \mathbf{R_a} +\Delta\mathbf{R}_a)=
\frac{\int d\mathbf{r}^{3N} \tilde J_{\Delta\mathbf{R}_a}(x)
\Psi_{\Delta\mathbf{R}_a}^2 (\bar x (x))
E^{\Delta\mathbf{R}_a}_L(\bar x (x))}
{\int d\mathbf{r}^{3N} \tilde J_{\Delta\mathbf{ R}_a}(x)
\Psi^2_{\Delta\mathbf{ R}_a}(\bar x (x))},
\end{equation}
where $\tilde J$ is the Jacobian of the transformation and,
\begin{equation} \label{elocal}
E_L^{\Delta\mathbf{ R}_a}=
{ \langle \Psi_{\Delta\mathbf{R}_a} | \hat H |x \rangle \over \langle \Psi_{\Delta\mathbf{ R}_a} |x \rangle }
\end{equation}
is
the so called {\em local energy} defined
by the wave function
$\Psi_{\Delta\mathbf{ R}_a}$
on a real space electronic configuration $x$.
In the following, we define
$E_L=E_L^{\Delta\mathbf{ R}}$ for ${\Delta\mathbf{ R}_a}=0$.
The importance of the SWCT in reducing the
statistical error in the evaluation of the force is easily understood
for the case of an isolated
atom $a$. In this case, the force acting on the atom is obviously
zero, but only after the SWCT,
with $\omega_a=1$, does the integrand
in Eq.~(\ref{forcewarp}) become independent of $\Delta\mathbf{R}_a$,
providing an estimator of the force with zero variance.
Starting from Eq.~(\ref{forcewarp}),
it is straightforward to derive explicitly a
differential expression for the force estimator,
which is related to the gradient of the previous quantity with respect
to $\Delta\mathbf{ R}_a$ in the limit of vanishing displacement,
\begin{eqnarray}
\label{vmcforce}
\mathbf{F}_a
& = & - \big \langle
\frac{d }{d \mathbf{R}_a} E_L \big \rangle
\\ \nonumber
& + & 2 \Big (
\big \langle E_L \big \rangle \big \langle
\frac{d}{d \mathbf{ R}_a}
\log (J^{1/2} \Psi ) \big \rangle -
\big \langle E_L
\frac{d }{ d \mathbf{ R}_a}
\log (J^{1/2} \Psi ) \big \rangle
\Big ),
\end{eqnarray}
where the brackets indicate a Monte Carlo like average over the
square modulus of the trial WF, namely over
the probability $\Pi (x)$
introduced in the previous Section.
In the calculation of the total derivatives $ { d / d\mathbf{R}_a} $, we have to take into account
that the electron coordinates are also implicitly differentiated, according to the SWCT. Then, all the terms above
can be written in a closed expression once the partial derivatives of the local energy and of the
WF logarithm are known, namely,
\begin{eqnarray}
{ d \over d \mathbf{ R}_a } E_L &=& {\partial \over \partial \mathbf{ R}_a } E_L + \sum \limits_{i=1}^N
\omega_a (\mathbf{ r}_i) { \partial \over \partial \mathbf{ r}_i } E_L~, \label{locdiff} \\
\frac{d}{d \mathbf{ R}_a} \log (J^{1/2} \Psi ) &=& {\partial \over \partial \mathbf{ R}_a } \log ( \Psi) + \sum \limits_{i=1}^N \left[
\omega_a ( \mathbf{ r}_i) { \partial \over \partial \mathbf{ r}_i } \log \Psi +
{ 1 \over 2 } { \partial \over \partial \mathbf{ r}_i } \omega_a ( \mathbf{ r}_i) \right]~, \label{puldiff}
\end{eqnarray}
where the term ${ 1 \over 2 } { \partial \over \partial \mathbf{ r}_i } \omega_a ( \mathbf{ r}_i)$ in the square brackets gives the contribution of the
Jacobian.
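Given Monte Carlo samples of the local energy and of the two total derivatives in Eqs.~(\ref{locdiff},\ref{puldiff}), the force estimator of Eq.~(\ref{vmcforce}) reduces to simple averages; a minimal sketch (the array shapes are our assumptions) reads:
\begin{verbatim}
import numpy as np

def vmc_force(E_L, dE_L, dlogpsi):
    # E_L:     (S,)   local energies over S samples of Pi(x)
    # dE_L:    (S, 3) total derivatives of E_L w.r.t. R_a
    # dlogpsi: (S, 3) total derivatives of log(J^{1/2} Psi)
    mean = lambda A: A.mean(axis=0)
    return -mean(dE_L) + 2.0 * (
        mean(E_L) * mean(dlogpsi) - mean(E_L[:, None] * dlogpsi)
    )
\end{verbatim}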
Based on the expressions above we can evaluate the forces in Eq.~(\ref{vmcforce})
in three different ways, listed below in increasing order of efficiency:
\begin{enumerate}
\item Hellmann-Feynman (HFM). By neglecting all the dependence of $\Psi$ on the
atomic position, and without the SWCT:
$$\mathbf{F}_a =- \langle \partial_{\mathbf{R}_a} \hat H \rangle~. $$
This is the least accurate expression, because without the so called Pulay
terms derived in Eq.~(\ref{puldiff}) the force is consistent with the energy derivative
only when the WF is the exact ground state.
This is also the least efficient way to compute the forces as indicated in
Tab.~\ref{tabeff}.
\item No-SWCT. This is obtained by using the terms (\ref{locdiff},\ref{puldiff}) in
Eq.~(\ref{vmcforce}) with $\omega_a ( \mathbf{r} ) =0$.
This expression is much more accurate and efficient than the previous one,
as it fulfills the so called ``zero-variance principle'':
if the WF $\Psi$ coincides with the exact ground state for arbitrary
atomic position $\mathbf{R}_a$, it is easy
to realize that the estimator of the forces
does not have statistical fluctuations, as the local energy and its derivative
are just constant, and independent of the real space configuration $x$.
\item Differential SWCT. The properties above are clearly fulfilled
also in this case.
Moreover,
the SWCT does not change the mean value of the forces, but
affects only their statistical fluctuations.
In fact, as mentioned before, the SWCT allows us to obtain
a zero-variance property
for the forces acting on isolated atoms. As a result, this transformation is
extremely important for computing forces between atoms at large distance,
as in this limit they can be considered isolated.
\end{enumerate}
The advantage of using the SWCT for a water dimer molecule
is illustrated in Tab.~\ref{tabeff}. It is clear that without the SWCT,
or even without the differentiation of the local
energy with respect to the atomic positions (no-SWCT case), the evaluation of forces with a reasonable statistical error is simply not possible.
As discussed in the following Section, all partial derivatives involved
in the expressions above, namely the $6N +6M$ components,
$ { \partial \over \partial \mathbf{ r}_i } E_L,
{ \partial \over \partial \mathbf{ r}_i } \log \Psi, {\partial \over \partial \mathbf{ R}_a } E_L, {\partial \over \partial \mathbf{ R}_a } \log \Psi $,
can be evaluated very efficiently with algorithmic
differentiation. This is true also when the WF and the expression for the local
energy are extremely cumbersome, e.g., when using pseudopotentials.
As a result, the quantities in Eqs.~(\ref{locdiff},\ref{puldiff}) can be evaluated
using just a minor computational effort with roughly $ \propto N M $ operations.
In particular, one of the most involved contributions to the local energy is the one
corresponding to the bare
kinetic energy $ \hat K= -{ 1\over 2} \sum \limits_{i=1}^N
\Delta_i $. The Hamiltonian $\hat H$ in Eq.~(\ref{elocal}) always contains this
term, even in presence of pseudopotentials.
In the following, we will discuss how to differentiate the contribution
$K = { \langle \Psi | \hat K | x \rangle / \langle \Psi | x \rangle } $
to the local energy for the particularly simple but instructive case of
a Slater determinant WF with no Jastrow factor.
\section{Adjoint Algorithmic Differentiation}
\label{adsec}
Algorithmic differentiation (AD) \cite{griewank} is a set of programming
techniques for the efficient calculation of the derivatives
of functions implemented as computer programs.
The main idea underlying these techniques is the fact that any such function
-- no matter how complicated -- can be interpreted as the composition of
more elementary functions each of which is in turn a composition of
basic arithmetic and intrinsic operations that are easy to differentiate.
As a result, it is possible to calculate the derivatives of the
outputs of a program with respect to its inputs by
applying mechanically the rules of differentiation --
and in particular the {\em chain rule} -- to the composition of
its constituent functions.
What makes AD particularly attractive, when compared to standard
(finite-difference) methods for the calculation of derivatives is
its computational efficiency. In fact, AD exploits the information
on the calculations performed by the computer code, and the
dependencies between its various parts, in order to optimize the
calculation. In particular, when one requires the derivatives
of a small number of outputs with respect to a large number of inputs,
the calculation can be highly optimized
by applying the chain rule through the instructions of the program
in opposite order with respect to the one of evaluation of the
original instructions. This gives rise to the so called adjoint
(mode of) algorithmic differentiation (AAD).
Although AD has been an active branch of computer science for
several decades, its impact in other research fields has
been surprisingly limited until very recently \cite{autodiff}.
Only over the past two years has its tremendous
effectiveness in speeding up the calculation of sensitivities,
e.g., in Monte Carlo simulations, first been exploited
in computational finance applications \cite{caprio1}. In particular, the potential
of AD has been largely left untapped
in the field of computational physics where,
as we demonstrate in the following,
it could move significantly the boundary of what can be studied
numerically with the computer power presently available.
Ref.~\onlinecite{griewank} contains a detailed discussion
of the computational cost of AAD (see also Ref.~\onlinecite{caprio2}). Here, we will only recall the main
ideas underlying this technique to clarify how it can be beneficial in
the implementation of the calculation of the forces in QMC. To this end,
we consider a particular computer implemented function $ X \rightarrow Y $
\begin{equation}\label{function}
Y = \texttt{FUNCTION}(X)
\end{equation}
mapping a vector $X$ in $\mathbb{R}^n$ onto a vector $Y$ in ${\mathbb R}^m$
through a sequence of intermediate steps
\[
X\ \rightarrow\ U\ \rightarrow\ V\ \rightarrow\ Y.
\]
Here, each step can be a distinct high-level function, or even an individual instruction in a computer code.
A general code is usually implemented by several steps of this type, and,
more importantly, the output of a particular instance can be used
as an input not only for the next one
but generally for all instances occurring later in the algorithm.
Generally speaking, an algorithm can be viewed as a sequential
tree or graph with connectivity larger than one, where each node may have more than one child. However,
to keep things as simple as possible, in this ``warm up'' example we do not consider
these more complex cases. The generalization to a realistic computational graph
is however straightforward \cite{caprio2}.
The adjoint mode of AD results from propagating the derivatives of the final result
with respect to all the intermediate variables -- the so called {\em adjoints} --
until the derivatives with respect to the independent variables are formed.
Using the standard AD notation, the adjoint $ \bar V $
of any input variable $V$
of an instance $ V \rightarrow Y$ is defined as the derivative
of a given
linear combination of the output $ \sum \limits_j \bar Y_j Y_j$
with respect to the input $V$, namely:
\begin{equation} \label{adjointdef}
\bar V_k = \sum_{j=1}^m \bar Y_j \frac{\partial Y_j}{\partial V_k} ~,
\end{equation}
where $\bar Y$ is a given input vector in ${\mathbb R}^m$.
In particular, for each of the intermediate variables, using the chain rule, we get,
\[
\bar Y_j \frac{\partial Y_j}{\partial X_i} = \bar Y_j
\frac{\partial Y_j}{\partial V_k} \frac{\partial V_k}{\partial U_l}
\frac{\partial U_l}{\partial X_i}
\]
where repeated indices indicate implicit summations.
It is easy to realize that in this simple case we can use the definition
in Eq.~(\ref{adjointdef}) to evaluate $ \bar X$, namely:
\[
\bar Y_j \frac{\partial Y_j}{\partial X_i} = \bar V_k
\frac{\partial V_k}{\partial U_l}
\frac{\partial U_l}{\partial X_i} = \bar U_l \frac{\partial U_l}{\partial X_i} = \bar X_i~.
\]
In other words, once all adjoint instances have been defined, the bar input
of each adjoint instance can be obtained from the output of the previous
adjoint instance according to a diagram that follows very straightforwardly
the original algorithm in reversed sequential order:
\begin{equation}
\bar Y \rightarrow \bar V \rightarrow \bar U \rightarrow \bar X~.
\end{equation}
In this way we obtain $\bar X$, i.e., the linear combination
of the columns of the Jacobian of the function $X \to Y$,
with weights given by the input $ \bar Y$ (e.g., $1,0,\ldots,0$), namely,
\begin{equation}\label{adjoint}
\bar X_i = \sum_{j=1}^m \bar Y_j \frac{\partial Y_j}{\partial X_i} ~,
\end{equation}
with $i=1,\ldots, n$.
In the adjoint mode, the cost does not increase with the number of inputs, but it is linear in the number
of (linear combinations of the) columns of the Jacobian that need to be evaluated independently. In particular, if the full Jacobian is
required, one needs to repeat the adjoint calculation $m$ times, setting the vector $\bar Y$ equal to each of the
elements of the canonical basis in $\mathbb{R}^m$.
Furthermore, since the partial derivatives depend on the values of the intermediate
variables, one generally first has to perform the original calculation, storing the values of
all of the intermediate variables such as $U$ and $V$, before performing the adjoint mode
sensitivity calculation.
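To make these steps concrete, we include a minimal Python sketch (purely
illustrative, and not part of our implementation: the toy functions and
dimensions are arbitrary choices of ours) that evaluates a chain
$X \to U \to V \to Y$ while storing the intermediate variables, and then
propagates the adjoints in reverse order, recovering the linear
combination (\ref{adjoint}) selected by $\bar Y$ in a single backward pass:
\begin{verbatim}
# Minimal reverse-mode sketch for the chain X -> U -> V -> Y.
# All function choices below are toy examples, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
M = rng.standard_normal((m, n))   # fixed matrix defining the last step

def function(x):
    """Forward sweep: evaluate Y, storing the intermediates U and V."""
    u = np.sin(x)                 # U = U(X)
    v = u ** 2                    # V = V(U)
    y = M @ v                     # Y = Y(V)
    return y, (x, u, v)

def function_b(x, y_bar):
    """Backward sweep: propagate Ybar -> Vbar -> Ubar -> Xbar."""
    _, (x, u, v) = function(x)    # intermediates are needed below
    v_bar = M.T @ y_bar           # Vbar_k = sum_j Ybar_j dY_j/dV_k
    u_bar = 2.0 * u * v_bar       # Ubar_l = Vbar_k dV_k/dU_l (diagonal here)
    x_bar = np.cos(x) * u_bar     # Xbar_i = Ubar_l dU_l/dX_i (diagonal here)
    return x_bar

x = rng.standard_normal(n)
y_bar = np.array([1.0, 0.0])      # weights of the linear combination
print(function_b(x, y_bar))       # all n derivatives in one backward pass
\end{verbatim}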
One particularly important theoretical result is that, given a computer code
performing some high-level function (\ref{function}), the execution time of its adjoint counterpart
\begin{equation}\label{adjointfunction}
\bar X = \texttt{FUNCTION}\_{\texttt B}(X, \bar Y)
\end{equation}
(with suffix $\_{\texttt B}$ for ``backward'') calculating
the linear combination (\ref{adjoint}) is bounded by approximately 4 times the cost of
execution of the original one.
Thus, one can obtain the sensitivity of a single output, or of a linear combination of outputs,
to an unlimited number of inputs for little more work than the original computation.
The propagation of the adjoints, being mechanical in nature, can be automated. Indeed,
several AD tools \cite{autodiff} are available that, given a
function of the form (\ref{function}), generate the adjoint function (\ref{adjointfunction}).
While the application of such automatic AD tools to
large inhomogeneous simulation software is challenging, the
principles of AD can be used as a programming paradigm
for any algorithm.
This is especially useful for the most common situations where
simulation codes use a variety of libraries written in different
languages, possibly linked dynamically.
However, automatic tools are of great utility to generate the adjoint of self-contained
functions and subroutines thus effectively reducing the development time of adjoint implementations.
A detailed tutorial on the programming techniques that are useful for adjoint implementations is
beyond the scope of this paper. However, when hand-coding the adjoint counterpart of a
set of instructions in a general algorithm it is often enough to keep in mind just a few
practical recipes (illustrated in the code sketch after this list), for instance:
\begin{enumerate}
\item[i)] As previously mentioned, each intermediate differentiable variable $U$ can be used not only
by the subsequent instance but also by several others occurring later in the
program.
As a result, the adjoint of $U$ has
in general several contributions, one for each instruction of the original function in which $U$ appeared on the
right-hand side of the equal sign (assignment operator). Hence, by exploiting the linearity of differential operators,
it is in general easier to program according to a syntactic paradigm in which adjoints are always updated
so that the adjoint of an instruction of the form
$$
V = V(U)
$$
reads
$$
\bar U_i = \bar U_i + \frac{\partial V_k(U)}{\partial U_i} \bar V_k~.
$$
Clearly, this implies that the
adjoints have to be appropriately initialized as discussed in the following
paragraphs.
In particular, to cope with input variables that are
changed by the algorithm (see the next point), it is generally best to initialize the adjoint of a given variable to zero at
the instruction in which it picks up its first contribution (i.e., right before the adjoint statement corresponding to the last instruction of the original code
in which the variable appeared to the right of the assignment operator).
For instance, the adjoint of the following sequence of instructions
\begin{eqnarray*}
y &=& F(x)\\
z &=& H(x,y)\\
x &=& G(z)
\end{eqnarray*}
can be written as:
\begin{eqnarray*}
\bar z &=& 0 \\
\bar z &=& \bar z +\frac{\partial G(z)}{\partial z} \bar x \\
\bar y &=& 0 \\
\bar x &=& 0 \\
\bar x &=& \bar x + \frac{\partial H(x,y)}{\partial x} \bar z \\
\bar y &=& \bar y +\frac{\partial H(x,y)}{\partial y} \bar z \\
\bar x &=& \bar x + \frac{\partial F(x)}{\partial x} \bar y~.
\end{eqnarray*}
Note in particular that the symbol $x$ represents the input and the output
of the algorithm. As a result,
$\bar x$ also represents both the input and the output of the adjoint algorithm,
and it is crucial to reinitialize $\bar x$ to zero
before it picks up its first contribution from an adjoint statement, i.e., the one associated with the instruction $z=H(x,y)$.
The algorithm can be more easily understood by replacing the last statement
with an assignment to an independent
output variable $u=G(z)$, and following the straightforward
derivation of the adjoint algorithm that takes $\bar u$ as input and produces
$\bar x$ as output. One can easily verify that the resulting algorithm
coincides with the one above, namely the same input provides the same output.
\item[ii)] In some situations the input $U$ of a function $V=V(U)$ is modified by the function itself. This situation is easily
analyzed by introducing an auxiliary variable $U^\prime$ representing the value of the input after the function evaluation. As a result,
the original function can be thought of as having the form $(V,U^\prime) = (V(U), U^\prime(U))$, where $V(U)$ and $U^\prime(U)$ do not mutate their inputs, in combination with the assignment $U=U^\prime$, overwriting the
original input $U$. The adjoint of this pair of instructions clearly reads
\begin{eqnarray*}
\bar U^\prime_i &=& 0\\
\bar U^\prime_i &=& \bar U^\prime_i + \bar U_i~,
\end{eqnarray*}
where we have used the fact that the auxiliary variable $U^\prime$ is not used elsewhere (so $\bar U^\prime_i$ does not have any previous contribution), and
\begin{eqnarray*}
\bar U_i &=& 0\\
\bar U_i &=& \bar U_i+ \frac{\partial V_k(U)}{\partial U_i} \bar V_k + \frac{\partial U^\prime_l(U)}{\partial U_i} \bar U^\prime_l~,
\end{eqnarray*}
where, again, we have used the fact that the original input $U$ is likewise not used after the instruction $V=V(U)$, as it gets overwritten.
One can therefore eliminate altogether the adjoint of the auxiliary variable $\bar U^\prime$ and simply write
\begin{equation*}
\bar U_i = \frac{\partial V_k(U)}{\partial U_i} \bar V_k + \frac{\partial U^\prime_l(U)}{\partial U_i} \bar U_l~.
\end{equation*}
Very common examples of this situation are given by increments of the form
$$
U_i = a\, U_i + b
$$
with $a$ and $b$ constant with respect to $U$. According to the recipe above, the adjoint counterpart of this instruction simply reads
$$
\bar U_i = a\, \bar U_i~.
$$
These situations are common in iterative loops where a number of variables are typically updated at each iteration.
\end{enumerate}
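The following Python sketch illustrates both recipes; the concrete choices
of $F$, $H$ and $G$ are arbitrary stand-ins of our own and are not taken
from any actual simulation code:
\begin{verbatim}
# Illustrative sketch of recipes i) and ii).
import numpy as np

def forward(x):
    # Recipe i) example: y = F(x); z = H(x, y); x = G(z).
    y = np.tanh(x)            # F(x)
    z = x * y                 # H(x, y)
    x_out = z ** 2            # G(z), overwriting x in the original code
    return x_out, (x, y, z)   # intermediates stored for the backward sweep

def forward_b(x, x_out_bar):
    _, (x0, y, z) = forward(x)
    z_bar = 0.0                        # initialized right before its
    z_bar += 2.0 * z * x_out_bar       # first contribution: adjoint of G
    y_bar = 0.0
    x_bar = 0.0                        # re-initialized: x was overwritten
    x_bar += y * z_bar                 # dH/dx contribution
    y_bar += x0 * z_bar                # dH/dy contribution
    x_bar += (1.0 - y ** 2) * y_bar    # adjoint of y = F(x)
    return x_bar

# Recipe ii): the increment U = a*U + b has the adjoint Ubar = a*Ubar.
def increment_b(u_bar, a):
    return a * u_bar

print(forward_b(0.3, 1.0))  # equals d/dx of (x*tanh(x))**2 at x = 0.3
\end{verbatim}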
In order to better illustrate these ideas, we now consider the calculation of the kinetic energy
and of its adjoint counterpart, where for simplicity we treat only one spin
component (maximally polarized case), as the calculation of both spin-up and
spin-down contributions can be obtained just by summing them.
Given the position of the electrons $x$ and of the ions $R$,
the calculation of the kinetic energy can be performed according to the following steps, as derived in App.~\ref{laplacian}:
\begin{enumerate}
\item Calculate $A_{i,j} = \psi_i(\mathbf{r}_j)$ and $B_{i,j} = \Delta_j \psi_i(\mathbf{r}_j)$
according to the definition of the molecular orbitals in Eq.~(\ref{defpsi}).
\item Calculate $A^{-1}_{i,j}$ by matrix inversion.
\item Calculate the kinetic energy as
\begin{equation} \label{lapeq}
K = -{1 \over 2}
\sum_{i,j} A^{-1}_{i,j} B_{j,i}.
\end{equation}
\end{enumerate}
The corresponding adjoint algorithm can be constructed by associating to each of the steps
above its adjoint counterpart according to the correspondence given by
Eqs.~(\ref{function}), (\ref{adjoint}),
and ~(\ref{adjointfunction}). As a result, as also illustrated schematically in Fig.~\ref{aad_diagram},
the adjoint algorithm for the derivatives of the kinetic energy with respect to the positions of
the electrons and ions, consists of steps 1-3 above, and their adjoint counterparts
executed in reverse order, namely:
\begin{enumerate}
\item[$\bar{3}$.] Set $\bar K = 1$, and evaluate the adjoint of the function $(A^{-1}_{i,j}, B_{j,i}) \to K$ defined in step 3.
This is a function of the form $(A_{i,j}^{-1}, B_{i,j},\bar K) \to (\bar A_{i,j}^{-1}, \bar B_{i,j})$ with
$\bar A^{-1}_{i,j} = -\frac{1}{2} \bar K B_{j,i} $ and $\bar B_{i,j} =-\frac{1}{2} \bar K A^{-1}_{j,i}$.
\item[$\bar{2}$.] Evaluate the adjoint of the function $A_{i,j} \to A^{-1}_{i,j}$ (step 2), namely,
$(A_{i,j}, \bar A^{-1}_{i,j}) \to \bar A_{i,j}$ with $\bar A = -(A^{-1})^T \bar A^{-1} (A^{-1})^T$ (see App. \ref{mathinv}).
\item[$\bar{1}$.] Evaluate the adjoint of the function $(x,R) \to (A_{i,j}, B_{i,j})$ (step 1), namely,
$(x, R, \bar A_{i,j}, \bar B_{i,j}) \to (\bar x, \bar R)$ with
\begin{eqnarray*}
\bar x_j &=& { \partial K \over \partial {\mathbf{ r}_j} } =\sum_{i} \bar A_{i,j} \partial_{\mathbf{ r}_j} \psi_i(\mathbf{r}_j) + \bar B_{i,j} \partial_{\mathbf{ r}_j} \Delta_j \psi_i(\mathbf{r}_j)~, \\
\bar R_a &=& { \partial K \over \partial {\mathbf{ R}_a} } = \sum_{i,j} \left[ \bar A_{i,j} \partial_{\mathbf{ R}_a} \psi_i(\mathbf{r}_j) + \bar B_{i,j} \partial_{\mathbf{R}_a} \Delta_j \psi_i(\mathbf{r}_j) \right] \\
&=& \sum_{i,j,k} \chi_{i,k} \delta_{\mathbf{ R}_k,\mathbf{ R}_a}
\left[ \bar A_{i,j} \partial_{\mathbf{ R}_k} \phi_k(\mathbf{r}_j) + \bar B_{i,j} \partial_{\mathbf{R}_k} \Delta_j \phi_k(\mathbf{r}_j) \right]~,
\end{eqnarray*}
where in the latter equality we have expanded the orbitals
in terms of atomic orbitals, by means of Eq.(\ref{defpsi}).
\end{enumerate}
Notice that in the last expression it is the presence of the Kronecker delta that allows the computation of all the derivatives with respect to the
atomic positions $ {\mathbf R}_a$ in $ \simeq 2 N^2 L$ operations, namely the
same number of operations used in the forward step.
Indeed, by summing only {\em once}
over the three indices $i,j,k$ in the above expression, all the force components
acting on all the atoms are obtained.
In AAD this is not accidental, and the structure of the algorithm is automatically optimized for computing several derivatives at the cheapest computational
cost.
By applying the chain rule it is immediate to see that $\bar x$ and $\bar R$ computed according to the
steps above are the derivatives of the kinetic energy with respect to the position of the electrons and
the ions, respectively. It is also easy to realize that -- as expected according to general results on the computational
complexity of adjoint algorithms \cite{griewank}
quoted above -- the number of operations involved in each adjoint step is a small constant times the number of operations of the original step, namely (considering only multiplications) $2 N^2 L$ vs $N^2 L $, $2 N^3$ vs $N^3$, and $2N^2$ vs
$N^2$ for steps $\bar{1}$ vs 1, $\bar{2}$ vs 2, and $\bar{3}$ vs 3, respectively.
As also anticipated, the propagation of the adjoints (steps $\bar{3}-\bar{1}$) can be performed only after the calculation of
the kinetic energy has been completed (steps $1-3$) and some of the intermediate results (e.g., the matrices $A$,
$B$, and $A^{-1}$) have been computed and stored. This is the reason why, in general, the adjoint of a given function
generally contains a {\em forward sweep}, reproducing the steps of the original function, plus a {\em backward sweep},
propagating the adjoints. This construction can clearly be applied recursively for each of the steps involved in the
calculation.
It is worth noting that each adjoint step, taken in isolation, contains in turn a forward sweep, recovering the
information computed in the original step that is necessary for the propagation of the adjoints.
However, this can be clearly avoided by storing such information at the time it is first computed in the original step.
Strictly speaking, this is necessary to ensure that the computational cost of the overall algorithm remains
within the expected bounds. However, there is clearly a tradeoff between the time necessary to store and retrieve
this information and
the time to recalculate it from scratch, so that in practice it is often enough to store in the main forward sweep only the
results of relatively expensive computations. In the example above for instance, significant savings can be obtained by storing
the inverse of the matrix $A$ at the output of step 2 and passing it as an input of Step $\bar 2$ (see Fig.~\ref{aad_diagram}).
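As a concrete illustration of steps 2 and 3 and of their adjoint
counterparts, the following NumPy sketch (again illustrative: step 1 is
represented by random stand-in matrices rather than actual orbital
evaluations) computes $K$, propagates $\bar K = 1$ back through the
contraction and the matrix inversion, and checks one entry of $\bar A$
against a finite difference:
\begin{verbatim}
# Hedged sketch of steps 2-3 and their adjoints; A and B are random
# stand-ins for the orbital matrices of step 1.
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.standard_normal((N, N)) + N * np.eye(N)  # psi_i(r_j) stand-in
B = rng.standard_normal((N, N))                  # Laplacian stand-in

# Forward sweep (steps 2 and 3).
A_inv = np.linalg.inv(A)                         # step 2
K = -0.5 * np.sum(A_inv * B.T)                   # step 3: -1/2 sum A^-1_ij B_ji

# Backward sweep (steps 3bar and 2bar); A_inv is stored and reused.
K_bar = 1.0
A_inv_bar = -0.5 * K_bar * B.T                   # step 3bar
B_bar = -0.5 * K_bar * A_inv.T                   # step 3bar
A_bar = -A_inv.T @ A_inv_bar @ A_inv.T           # step 2bar (inverse adjoint)

# Finite-difference check of one entry of A_bar.
eps = 1e-6
A_pert = A.copy()
A_pert[0, 1] += eps
K_pert = -0.5 * np.sum(np.linalg.inv(A_pert) * B.T)
print(A_bar[0, 1], (K_pert - K) / eps)           # should agree closely
\end{verbatim}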
The main complication in the algorithm above is the implementation of the adjoint of the Laplacian of the WF
in step $\bar{1}$. However, the calculation of the Laplacian is a good example of an instance that can be represented by
a self-contained, albeit complex, computer function, for which several automatic differentiation tools are available.
In particular, in order to complete step $\bar{1}$, it is enough to define the
adjoint functions of the calculation of the Laplacian
$ \vec r \to \Delta \phi_j (\vec r) $,
for a set of explicit functions $\{\phi_j \} $
(e.g., Gaussians).
The adjoints are then computed by means of the corresponding
gradient of the Laplacian, namely $ \bar \psi \to \vec{ \bar r} $
where $ \vec{ \bar r} = \vec {\bar r} + \bar\psi \nabla_{\vec r } \Delta \phi_j ( \vec r)$.
For this application we have used TAPENADE,
developed at INRIA by Hasco\"et and collaborators \cite{tapenade}.
\section{Results}
After implementing the adjoint counterpart of the two main instances
corresponding to the evaluation of the log WF and the local energy,
we have computed
the exact energy derivatives and compared with the straightforward finite-difference evaluation, finding perfect agreement within numerical accuracy.
However, the finite-difference method presents a well-known bottleneck:
in order to evaluate the $3M$ energy derivatives, one has to evaluate the local
energy and the log WF at least $3M$ additional times.
Since the computation of such quantities is the most demanding part in
QMC, with a computational effort
scaling as $N^3$, we end up with a very inefficient algorithm for a large
number of atoms.
As shown in Fig.~\ref{adcpu}, this slowing down can be completely removed
by using AAD, as the cost to compute
all the force components in a system containing several water molecules
remains approximately $4$ times larger than the cost to compute only the
total energy.
This factor $4$ is a very small cost, if we consider that the
main adjoint instance has to be evaluated twice, once for the local energy
and once for the WF logarithm, and that, on the other hand,
VMC is the
fastest method in QMC. For instance, we can evaluate forces within LRDMC
with only a small overhead,
as the cost to generate a new independent configuration
within LRDMC is about 10 times larger than in VMC, and therefore, for this
more accurate method, the
cost to compute all force components will be essentially negligible.
An analogous consideration holds during an energy optimization.
We have to consider that in this case AAD can be used to compute not only
the force components, but also all the energy derivatives with respect to
all variational parameters $ \{ c_i \} $ of the WF,
essentially at the same computational cost, even when the number
$p$ of variational parameters is extremely large.
Though we have not implemented AAD for this general task, we expect a further
speed up (and simplification) of the code, once AAD is fully implemented
for all possible energy derivatives. We believe this will become common
practice for future quantum Monte Carlo packages.
At present, in order to have consistent forces within VMC,
all variational parameters have to be
optimized \cite{rappe},
and for this purpose we have used the standard way to compute
energy derivatives.
We have applied the efficient evaluation of the forces to the structural
optimization of the water monomer.
We have used energy-consistent pseudopotentials \cite{filippipseudo} only for the oxygen atom.
In the calculation we have adopted a very large basis set to avoid basis-set
superposition errors.
The molecular orbitals are expanded in a primitive basis containing
24s22p10d6f1g on the oxygen and 6s5p1d on the hydrogen atom.
The exponents of the Gaussians are optimized by minimizing the
energy of a self-consistent DFT calculation within the LDA approximation \cite{azadi}.
The accuracy in the total DFT energy is well below $1$~mHa for the water dimer,
implying that we are essentially working with an almost complete basis set.
For the Jastrow factor we have also used a quite large basis, to achieve
similar accuracy in the total energy, within a VMC calculation on a WF
obtained by
optimizing the Jastrow over the LDA Slater determinant.
The final optimized basis for the Jastrow
contains a contracted basis 6s5p2d/3s3p1d on
the oxygen and an uncontracted 1s1p basis on the hydrogen atom.
In the following we describe the first application of this method
for optimizing the structure of simple water compounds.
The variational parameters of the
WF -- molecular orbitals and Jastrow factor -- are
optimized, by energy minimization, with the method
described in Ref.~\onlinecite{marchi}.
At each step of optimization, we compute the ionic forces by AAD,
and we employ a standard steepest descent
move of the ions $ \mathbf{ R}_a \to \mathbf{ R}^\prime_a$:
\begin{equation}
\mathbf{ R}^\prime_a= \mathbf{ R}_a + \Delta \tau \mathbf{ F}_a
\end{equation}
where $\Delta \tau = 1/2$~a.u.
After several hundred iterations both the variational parameters and
the atomic positions fluctuate around average values, and we use the last
few hundred iterations to evaluate the error bars and the mean value
of the atomic positions, as illustrated in Fig.~\ref{dimer}.
In Tab.~\ref{watermon} we show the optimized structure of the water monomer.
As is clearly evident, our final atomic positions are almost indistinguishable from the experimental ones. Generally speaking, our calculation appears more
accurate than simple mean-field
DFT methods, and comparable with state-of-the-art
quantum chemistry techniques, such as CCSD(T).
The accuracy of the VMC method has also been confirmed recently in another context \cite{valsson}.
In the dimer structure the situation is slightly different.
As shown in Tab.~\ref{waterdim}, the oxygen-oxygen distance is in quite
good agreement with experiments, whereas the $OHO$ angle is overestimated by a
few degrees. Probably in this case quantum corrections
should affect the
hydrogen position between the two oxygens, because the
dimer bond is very weak.
Indeed we have also checked that, with the more
accurate LRDMC calculation, the equilibrium structure obtained by the
VMC method remains stable, as all the force components are well below
$10^{-3}$~a.u. On the other hand, LRDMC increases
the binding of the dimer by about $1$~kcal/mol, showing that, from the energetic point of view, the LRDMC calculation may be
important, as also confirmed in previous studies \cite{marchi,needswat}.
All the above calculations can be done with a relatively small computational
effort (a few hours on a 32-processor parallel computer), and therefore the
same type of calculation, with the same level of accuracy, can be extended
to much larger systems containing many atoms
with modern supercomputers.
Stimulated by the above success we have tested the finite-temperature
molecular dynamics simulation introduced some time ago \cite{attacc}, using
4 water molecules in a cubic box with $4.93$\AA ~side length, mimicking
the density
of liquid water at ambient conditions. Since we are interested in
static equilibrium properties, we have used for the oxygen the same mass as for hydrogen.
Though the system is very small, we have been able to perform several thousand
steps. For each step, all variational parameters are optimized using
a given number $n$ of stochastic reconfiguration (SR) optimizations \cite{rocca}.
For the first $18000$
steps we used $n=1$ and a time integration step $\Delta t$
for the MD ranging from $20$~a.u.
to $40$~a.u.
Several iterations were possible because in QMC we can decide to work
with a relatively small number of samples to accumulate statistics for the
energy derivatives and the forces. Under these conditions the forces are rather
noisy, but the molecular dynamics with noise correction \cite{attacc}
allows us to obtain sensible
results, at the price of an overdamped dynamics.
However, as shown in Fig.~\ref{energy}, it is difficult to remain
on the Born-Oppenheimer energy surface: at selected times, we
have fully optimized the wave function using a further 200 SR iterations, and
found a 6~mHa difference between the on-the-fly energy and the optimized energy.
In order to overcome this bias in the dynamics, in the final part of the
MD simulation, we have used $n=10$ (and $\Delta t=40 a.u.$)
and found that we remain sufficiently
close to the Born-Oppenheimer energy surface.
This very preliminary application is clearly limited by
the small number of water molecules considered in the simulation,
and therefore does not allow us to
determine the equilibrium
properties of liquid water. Nevertheless, we believe that this result is
rather encouraging because it shows that all
the possible sources of errors in the MD driven by QMC forces can be
controlled in a rather straightforward way.
\section{Conclusions}
In this work we have shown that the calculation of all the force components
in an electronic system containing several atoms can be done very
efficiently using adjoint algorithmic differentiation (AAD).
In particular it is possible to employ the very efficient space warp coordinate transformation (SWCT) in differential form in
a straightforward and simple way, even when pseudopotentials and/or complicated
many-body wave functions are used.
More importantly, we have shown that, using AAD, one can compute all these force components, and in principle
all the energy derivatives with respect to any variational parameter contained
in the many-body wave function, in about four times the cost
to compute the expectation value of the energy.
So far, for a large number of atoms,
the use of quantum Monte Carlo methods has been generally limited to total energy calculations. We believe that our work opens the way for new and more
accurate tools for ab-initio electronic simulation based on quantum Monte
Carlo.
In particular we have shown that it is possible to perform an ab-initio
molecular dynamics simulation for several picoseconds, in a system containing
four water molecules. Since the cost of a variational Monte Carlo
calculation with fixed statistical accuracy in the energy per atom (total
energy) increases with the number of atoms as $M^2$ ($M^4$), the simulation
of about $32$ water molecules should be possible with less than $10^5$
($10^7$) CPU hours, a figure that is nowadays possible (at the limits of
present possibilities) with modern massively parallel supercomputers.
It is not known at present if
it is sufficient to target a fixed statistical error in the energy per atom
in order to obtain well-converged thermodynamic extensive
quantities. Otherwise a computationally more expensive calculation
with a statistical error on the total energy of the order of $kT$ is necessary, as in the penalty method \cite{ceperleypenalty}.
In the example we have presented, we have also seen that the
accuracy of variational Monte Carlo in determining the equilibrium structure of the water
monomer and the water dimer is rather remarkable and comparable to
post-Hartree-Fock methods, which require many more computational resources for a
large number of atoms.
Therefore we believe that, in view of the efficiency in the evaluation
of forces obtained by AAD, realistic and very accurate ab-initio simulations
based on quantum Monte Carlo will be within reach in the near future.
\acknowledgements
This work was partially supported by COFIN2007 and CNR.
\section{Introduction}
Fixed points and fixed point operations have been used in just about all areas of
computer science. There has been a tremendous amount of work on the existence,
construction and logic of fixed point operations.
It has been shown that most fixed point operations, including
the least (or greatest) fixed point operation on monotonic functions
over complete lattices, satisfy the same equational properties.
These equational properties are captured by the notion of iteration
theories, or iteration categories, cf. \cite{BEbook}
or the recent survey \cite{EsMFCS2015}.
For an account of fixed point approaches to logic programming containing
original references we refer to \cite{Fittingsurvey}. These approaches,
and in particular
the stable and well-founded fixed point semantics of logic
programs with negation, based on the notion of
bilattices, have led to the development of an
elegant abstract `approximation fixed point theory', cf.
\cite{Deneckeretalsurvey,DeneckeretalULT,Vennekensetal}.
In this paper, we study the equational properties of the well-founded fixed
point operation as defined in \cite{Deneckeretalsurvey,DeneckeretalULT,Vennekensetal}
with the aim of relating well-founded fixed points to iteration categories.
We extend the well-founded fixed point operation to a parametric operation
giving rise to an external fixed point (or dagger) operation \cite{BEbook,BEccc} over
the cartesian category of approximation function pairs between complete
bilattices. We offer
an initial analysis of the equational properties of the well-founded
fixed point operation. Our main results show that several identities of iteration theories
hold for the well-founded fixed point operation, but some others fail.
\section{Complete lattices and bilattices}
Recall that a \emph{complete lattice} \cite{Daveyetal} is a partially ordered set $L = (L,\leq)$ such that
each $X \subseteq L$ has a supremum $\bigvee X$ and hence also an infimum
$\bigwedge X$. In particular, each complete lattice has a least
and a greatest element, respectively denoted either $\bot$ and $\top$,
or $0$ and $1$.
We say that a function $f: L \to L$ over a complete lattice $L$
is monotonic (anti-monotonic, resp.) if for all $x,y \in L$, if $x \leq y$
then $f(x) \leq f(y)$ ($f(x) \geq f(y)$, resp.).
A \emph{complete bilattice}\footnote{Sometimes bilattices are equipped with a negation operation
and the bilattices as defined here are called pre-bilattices.}
\cite{Fittingsurvey,Fittingnice,Ginsberg} $(B,\leq_p,\leq_t)$ is equipped with two partial orders,
$\leq_p$ and $\leq_t$, both giving rise to a complete lattice.
We will denote the $\leq_p$-least and greatest elements of a complete bilattice
by $\bot$ and $\top$, and the $\leq_t$-least and greatest elements by $0$ and $1$,
respectively.
An example of a complete bilattice, depicted in Figure~\ref{figure:four}, is $\four$, which has 4 elements, $\bot,\top,0,1$.
The nontrivial order relations are given by $\bot \leq_p 0,1 \leq_p \top$ and $0 \leq_t \bot,\top \leq_t 1$.
\input{four.tex}
Two closely related constructions of a complete bilattice from a complete lattice
are described in \cite{Deneckeretalsurvey} and \cite{Fittingnice}; see \cite{Ginsberg}
for the origins of the constructions.
Here we recall one of them. Suppose that $L = (L,\leq)$ is a complete lattice
with extremal (i.e., least and greatest) elements $0$ and $1$.
Then define the partial orders $\leq_p$ and $\leq_t$ on $L \x L$
as follows:
\begin{eqnarray*}
(x,x') \leq_p (y,y') &\Leftrightarrow& x \leq y\ \wedge \ x' \geq y'\\
(x,x') \leq_t (y,y') &\Leftrightarrow& x \leq y \ \wedge\ x' \leq y'.
\end{eqnarray*}
Then $L \x L$ is a complete bilattice with $\leq_p$-extremal elements
$\bot = (0,1)$ and $\top = (1,0)$, and $\leq_t$-extremal elements $0 = (0,0)$ and $1 = (1,1)$.
Note that when $L$ is the $2$-element lattice $\mathbf{2} = \{0 \leq 1\}$, then
$L \x L$ is isomorphic to $\four$.
In this paper, we will mainly be concerned with the
ordering $\leq_p$.
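The two orders are straightforward to encode; the following small Python
sketch (purely illustrative, with elements of $L$ represented as integers)
checks the nontrivial relations of $\four$ on $\mathbf{2} \x \mathbf{2}$:
\begin{verbatim}
# Illustrative sketch of the two bilattice orders on pairs over a lattice.
def leq_p(a, b):   # (x, x') <=_p (y, y')  iff  x <= y and x' >= y'
    return a[0] <= b[0] and a[1] >= b[1]

def leq_t(a, b):   # (x, x') <=_t (y, y')  iff  x <= y and x' <= y'
    return a[0] <= b[0] and a[1] <= b[1]

# On 2 x 2 this reproduces FOUR: bot=(0,1), top=(1,0), zero=(0,0), one=(1,1).
bot, top, zero, one = (0, 1), (1, 0), (0, 0), (1, 1)
assert leq_p(bot, zero) and leq_p(zero, top)
assert leq_t(zero, bot) and leq_t(bot, one)
\end{verbatim}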
In any category, we usually denote the composition of morphisms $f: A \to B$
and $g: B \to C$ by $g \circ f$ and the
identity morphisms by $\id_A$.
We let $\Set$ denote the category of sets and functions
and we denote by $\CL$ the category of complete lattices and monotonic functions.
Both $\Set$ and $\CL$ have all products and hence are
\emph{cartesian categories}. The
usual direct product, equipped with the pointwise order in $\CL$,
serves as categorical product. In $\CL$, a terminal object is a $1$-element lattice
$T$. In both categories,
for any sequence $A_1,\ldots,A_n$ of objects, the categorical projection morphisms
$\pi^{A_1 \x \cdots \x A_n}_i: A_1 \x\cdots \x A_n \to A_i$, $ i \in [n] = \{1,\ldots ,n\}$, are
the usual projection functions.
Products give rise to a \emph{tupling} operation. Suppose that $f_i: C \to A_i$, $i \in [n]$ in $\Set$ or $\CL$,
or in any cartesian category. Then there is a unique
$f: C \to A_1 \x \cdots \x A_n$ with $\pi^{A_1 \x \cdots \x A_n}_i \circ f = f_i$
for all $i \in [n]$. We denote this unique morphism $f$ by $\langle f_1,\ldots,f_n \rangle$
and call it the (target) tupling of the $f_i$ (or pairing, when $n = 2$).
And when $f: C \to A$ and $g: D \to B$, then we define $f \x g$
as the unique morphism $h: C \x D \to A \x B$ with $\pi^{A \x B}_1 \circ h =
f \circ \pi^{C \x D}_1$ and $\pi^{A \x B}_2 \circ h = g \circ \pi^{C \x D}_2$.
When $m,n \geq 0$, $\rho$ is a function $[m] \to [n]$ and
$A_1,\ldots,A_n$ is a sequence of objects in a cartesian category,
we associate with $\rho$ (and $A_1,\ldots,A_n$) the morphism
$$\rho^{A_1,\ldots,A_n} = \langle \pi^{A_1 \x \cdots \x A_n}_{\rho(1)}, \ldots, \pi^{A_1\x \cdots \x A_n}_{\rho(m)} \rangle$$
from $A_1 \x \cdots \x A_n$ to $A_{\rho(1)} \x \cdots \x A_{\rho(m)}$
(Note that in $\Set$ and $\CL$, $\rho^{A_1,\ldots,A_n}$ maps $(x_1,\ldots,x_n) \in A_1 \x \cdots \x A_n$
to $(x_{\rho(1)},\ldots,x_{\rho(m)}) \in A_{\rho(1)} \x \cdots \x A_{\rho(m)}$.)
With a slight abuse of notation, we usually let $\rho$ denote this morphism
as well. Morphisms of this form are sometimes called \emph{base morphisms}.
When $m = n$ and $\rho$ is a bijection, then the associated morphism
$ A_1\x \cdots \x A_n \to A_{\rho(1)}\x \cdots \x A_{\rho(n)}$
is an isomorphism. Its inverse is the morphism associated
with the inverse $\rho^{-1}$ of the function $\rho$.
For each object $A$,
the base morphism associated
with the unique function $[m] \to [1]$ is the
\emph{diagonal morphism} $\Delta^A_m = \langle \id_A,\ldots,\id_A \rangle : A \to A^m$, usually denoted just $\Delta_m$.
\section{Iteration categories}
The category $\CL$ is equipped with an (external) \emph{fixed point} or
\emph{dagger} operation \cite{BEbook,BEccc}
mapping a monotonic function $f: A \x B \to A$ to the monotonic function $f^\dagger : B \to A$
such that for all $y \in B$, $f^\dagger(y)$ is the least solution of the fixed
point equation $x = f(x,y)$. We will sometimes denote $f^\dagger(y)$ by $\mu x. f(x,y)$.
It provides the unique least solution to the parametric fixed point equation
\begin{eqnarray}
\label{eq-fp}
x &=& f(x,y).
\end{eqnarray}
When $B$ is the terminal object $T$, $f$ can be viewed as a function
$A \to A$ and $f^\dagger$ can be identified with an element of $A$.
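When every ascending chain of $A$ is finite, $f^\dagger$ can be computed by
Kleene iteration from the least element. The following small Python sketch
(our own illustration, encoding the two-element lattice as $\{0,1\}$) makes
the dagger operation concrete:
\begin{verbatim}
# Illustrative sketch: the dagger operation via Kleene iteration.
# Assumes f is monotonic in x and ascending chains are finite.
def dagger(f, bottom):
    """Return the function y |-> least solution of x = f(x, y)."""
    def f_dagger(y):
        x = bottom
        while f(x, y) != x:
            x = f(x, y)
        return x
    return f_dagger

# On the two-element lattice 2 = {0, 1}: mu x.(x or y) = y.
f = lambda x, y: x | y
print([dagger(f, 0)(y) for y in (0, 1)])   # -> [0, 1]
\end{verbatim}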
The least fixed point operation $^\dagger$ over $\CL$ satisfies several nontrivial identities
captured by the notion of \emph{iteration theories} or \emph{iteration categories}
\cite{BEbook,EsMFCS2015}.
For later use, we collect here some of these identities.
{\sc Fixed point identity}
\begin{eqnarray*}
f^\dagger &=& f \circ \langle f^\dagger, \id_B\rangle,
\end{eqnarray*}
where $f: A \x B \to A$.
The fixed point identity expresses that $f^\dagger(y)$ is a solution of the fixed point equation
(\ref{eq-fp}).
{\sc Parameter identity}
\begin{eqnarray*}
(f \circ (\id_A \x g))^\dag &=& f^\dag \circ g,
\end{eqnarray*}
for all $f: A \x B \rightarrow A$ and $g: C \rightarrow B$.
In functional notation, the parameter identity expresses that
if $h(x,z) = f(x,g(z))$, then for the least solution $h^\dagger(z)$ of the
equation $x = h(x,z)$ it holds that $h^\dagger(z) = f^\dagger(g(z))$, where $f^\dagger(y)$ is the least
solution of $x = f(x,y)$.
{\sc Permutation identity}
\begin{eqnarray*}
(\rho \circ f \circ (\rho^{-1} \x \id_B))^\dagger &=& \rho \circ f^\dagger,
\end{eqnarray*}
for all $f : A_1\x \cdots \x A_n \x B \rightarrow A_1\x \cdots \x A_n$
and permutation $\rho: [n]\to [n]$.
This can be explained alternatively as follows. Consider the (systems of) fixed point
equations
\begin{eqnarray}
\label{eq-fp2}
x &=& f(x,z)
\end{eqnarray}
and
\begin{eqnarray}
\label{eq-fp3}
y &=& \rho(f(\rho^{-1}(y),z)),
\end{eqnarray}
where $x$ ranges over $A_1 \x \cdots \x A_n$, $y$ ranges over $A_{\rho(1)}\x \cdots \x A_{\rho(n)}$
and $z \in B$. Here, $\rho$ also denotes the bijective function $A_1\x \cdots \x A_n \to A_{\rho(1)}\x \cdots \x A_{\rho(n)}$
as explained above, and $\rho^{-1}$ also denotes the inverse of this function.
Then the permutation identity expresses that the least solution of (\ref{eq-fp3}) is $\rho(f^\dagger(z))$,
where $f^\dagger(z)$ is the least solution of (\ref{eq-fp2}).
{\sc Composition identity}
\begin{eqnarray*}
(f \circ \langle g, \pi^{A \x C}_2\rangle)^\dagger
&=& f \circ \langle (g \circ \langle f, \pi^{B \x C}_2\rangle)^\dagger, \id_C\rangle,
\end{eqnarray*}
where $f: B \x C \to A$ and $g: A \x C \to B$.
The composition identity relates the fixed point equations
\begin{eqnarray}
\label{eq-eqcomp1}
x &=& f(g(x,z),z)
\end{eqnarray}
and
\begin{eqnarray}
\label{eq-eqcomp2}
y &=& g(f(y,z),z).
\end{eqnarray}
It asserts that the least solution of (\ref{eq-eqcomp1}) can be obtained
by applying $f$ to the least solution of (\ref{eq-eqcomp2}) and the parameter.
{\sc Double dagger identity}
\begin{eqnarray*}
f^{\dagger\dagger} &=& (f \circ (\langle \id_A,\id_A\rangle \x \id_B))^\dagger,
\end{eqnarray*}
for all $f: A \x A \x B \to A$.
This identity means that the least solution of the equation
\begin{eqnarray*}
x &=& f(x,x,z)
\end{eqnarray*}
is the same as the least solution of
\begin{eqnarray*}
y &=& f^\dagger(y,z),
\end{eqnarray*}
where $f^\dagger(y,z)$ is the least solution of $x = f(x,y,z)$.
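On the two-element lattice, the double dagger identity can be checked
exhaustively with the \texttt{dagger} helper sketched above (the monotonic
$f$ below is an arbitrary example of our own choosing):
\begin{verbatim}
# Exhaustive check of the double dagger identity on the lattice 2.
f = lambda x1, x2, z: (x1 & x2) | z
inner = lambda x, p: f(x, p[0], p[1])        # recursion in first argument
f_dag = dagger(inner, 0)                     # f^dagger(y, z)
lhs = dagger(lambda y, z: f_dag((y, z)), 0)  # (f^dagger)^dagger(z)
rhs = dagger(lambda x, z: f(x, x, z), 0)     # diagonalized fixed point
for z in (0, 1):
    assert lhs(z) == rhs(z)
\end{verbatim}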
{\sc Pairing identity}
\begin{eqnarray*}
\langle f,g\rangle^\dagger &=&
\langle f^\dagger \circ \langle h^\dagger, \id_C\rangle, h^\dagger \rangle,
\end{eqnarray*}
for all $f: A \x B \x C \to A$ and $g: A \x B \x C \to B$, where
$h = g \circ \langle f^\dagger, \id_{B \x C}\rangle : B \x C \to B$.
This identity was independently found in \cite{Bekic} and \cite{DeBakkerScott}.
As is well-known, it asserts that a system
\begin{eqnarray*}
x &=& f(x,y,z)\\
y &=& g(x,y,z)
\end{eqnarray*}
can be solved by Gaussian elimination by solving the first equation and substituting
the solution into the second equation to obtain
\begin{eqnarray*}
x &=& f^\dagger(y,z)\\
y &=& g(f^\dagger(y,z),y,z) = h(y,z),
\end{eqnarray*}
and then by solving the second equation and substituting the solution into the first
to obtain the final result
\begin{eqnarray*}
x &=& f^\dagger(h^\dagger(z),z)\\
y &=& h^\dagger(z).
\end{eqnarray*}
In conjunction with the fixed point and parameter identities, the following
is a special case of the pairing identity:
\begin{eqnarray}
\label{eq-pairingspec}
\langle f , g \circ (\pi^{A \x B}_2 \x \id_C)\rangle^\dagger &=& \langle f^\dagger \circ \langle g^\dagger, \id_C\rangle,
g^\dagger\rangle,
\end{eqnarray}
where $f: A \x B \x C \to A$ and $g: B \x C \to B$. In the category $\CL$, it asserts that the least solution
of the system of equations
\begin{eqnarray*}
x &=& f(x,y,z)\\
y &=& g(y,z)
\end{eqnarray*}
is $x = f^\dagger(g^\dagger(z),z)$ and $y = g^\dagger(z)$.
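On the two-element lattice, (\ref{eq-pairingspec}) can likewise be checked
exhaustively with the \texttt{dagger} helper sketched above; the monotonic
functions $f$ and $g$ below are arbitrary examples of our own choosing:
\begin{verbatim}
# Exhaustive check of the special pairing identity on the lattice 2.
def pair_lfp(f, g, z):    # least solution of x = f(x,y,z), y = g(y,z)
    x, y = 0, 0
    while (f(x, y, z), g(y, z)) != (x, y):
        x, y = f(x, y, z), g(y, z)
    return x, y

f = lambda x, y, z: x | (y & z)
g = lambda y, z: y | z
g_dag = dagger(g, 0)                         # mu y. g(y, z)
for z in (0, 1):
    y = g_dag(z)
    x = dagger(lambda u, p: f(u, p[0], p[1]), 0)((y, z))
    assert pair_lfp(f, g, z) == (x, y)
\end{verbatim}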
{\sc Group identities}
Suppose that $G$ is a finite group whose underlying set is $[n]$.
Let $i\cdot j$ denote the multiplication
of $i,j\in [n]$. The identity associated with $G$ is:
$$ \langle f \circ (\rho_1 \x \id_B), \ldots, f \circ (\rho_n \x \id_B) \rangle^\dagger
= \Delta_n \circ (f \circ (\Delta_n \x \id_B))^\dagger$$
where $f: A^n \x B \to A$ and for each $i$, $\rho_i$ denotes the
function $[n] \to [n]$ given by $j \mapsto i\cdot j$
(as well as the associated morphism $\rho_i^{A,\ldots,A} = \langle \pi^{A^n}_{i \cdot 1 },\ldots,\pi^{A^n}_{i\cdot n}\rangle
: A^n \to A^n$), and $\Delta_n = \Delta^A_n$ is the diagonal morphism $A \to A^n$
defined above.
This identity can be explained in the following way. Consider the system
of equations
\begin{eqnarray}
\notag
x_1 &=& f(x_{1\cdot 1},\ldots,x_{1\cdot n},y)\\
&\vdots &\label{eq-grp1}\\
\notag
x_n &=& f(x_{n\cdot 1},\ldots,x_{n \cdot n},y)
\end{eqnarray}
and the single equation
\begin{eqnarray}
\label{eq-grp2}
x &=& f(x,\ldots,x,y).
\end{eqnarray}
Then the group identity associated with $G$ asserts that (\ref{eq-grp1}) is equivalent to (\ref{eq-grp2})
in the sense that each component of the least solution of (\ref{eq-grp1})
agrees with the least solution of (\ref{eq-grp2}).
Each finite group $G$ (equipped with the natural self action)
can be seen as a finite automaton, and in a similar fashion,
one may associate an identity with every finite automaton \cite{Esgroup}.
These are essentially the commutative identities of \cite{Esikaxioms}.
\begin{deff}
An iteration category is a cartesian category equipped with a dagger operation
satisfying either the parameter, fixed point, pairing, permutation and
group (or commutative) identities, or the parameter, composition,
double dagger and group (or commutative) identities.
\end{deff}
The following completeness result is from \cite{Esikaxioms,BEbook}.
\begin{thm}
An identity involving the cartesian category operations and dagger holds in
$\CL$ with the least fixed point operation as dagger iff it holds in
all iteration categories.
\end{thm}
\begin{remark}
{\rm
Iteration categories, or iteration theories, were introduced independently
in \cite{BEW1} and \cite{Esikaxioms}\footnote{In \cite{Esikaxioms},
iteration theories were called `generalized iterative theories'.}.
The axiomatization in \cite{Esikaxioms} used the commutative identities.
It was proved in \cite{Esgroup} that the commutative identities can be
simplified to the group identities. Moreover, it was shown
that the identities associated
with the members of a subclass $\mathcal{G}$ of the finite groups suffice instead
of all group identities iff every finite group is isomorphic to a quotient of a subgroup
of a group in $\mathcal{G}$, see \cite{Esgroup,Espower}. Nevertheless some further
simplifications of the axioms are still possible, see \cite{EsAC,EsMSCS2015}.
}
\end{remark}
We mention one more property that is not an identity, but a quasi-identity.
It is stronger than the group identities, yet most of the standard models satisfy it.
(Actually the commutative identities were introduced in \cite{Esikaxioms}
in order to replace this quasi-identity by weaker identities, since
when it comes to equational theories, the best way to present them is by
providing equational bases.)
{\sc Weak functorial implication}
This axiom asserts that for all $f: A ^n \x B \to A^n$ and $g: A \x B \to A$,
if
$f \circ (\Delta_n \x \id_B) = \Delta_n \circ g$, then
$$f^\dagger = \Delta_n \circ g^\dagger.$$
In $\CL$, this means that if $f = \langle f_1,\ldots,f_n \rangle : A^n \x B \to A^n$
and $g : A \x B \to A$ are such that $f_i(x,\ldots,x,y) = g(x,y)$
for all $i \in [n]$, then the system of equations
\begin{eqnarray*}
x_1 &=& f_1(x_1,\ldots,x_n,y)\\
&\vdots & \\
x_n &=& f_n(x_1,\ldots,x_n,y)
\end{eqnarray*}
is equivalent to the single equation
\begin{eqnarray*}
x &=& g(x,y).
\end{eqnarray*}
It is clear that if the weak functorial implication holds, then so do the
group (or commutative) identities.
\begin{remark}
{\rm
Sometimes we will apply the least fixed point operation to functions
$f : A \x B \to A$, where $A,B$ are complete lattices,
which are monotonic in the first argument but anti-monotonic
in the second. Such a function may be viewed as a monotonic
function $A \x B^d \to A$, where $B^d$ is the dual
of $B$. Hence, in this case, $f^\dagger$ is a monotonic
function $B^d \to A$, or --as we will consider it-- an anti-monotonic function
$B \to A$. More generally, we will also consider functions
that are monotonic in some arguments and anti-monotonic in
others, but always take the least fixed point w.r.t. an
argument in which the function is monotonic.
}
\end{remark}
\section{The category $\bCL$}
The objects of $\bCL$ are complete lattices. Suppose that $A,B$ are complete lattices.
A morphism from $A$ to $B$ in $\bCL$, denoted $f: A \circarrow B$, is a $\leq_p$-monotonic function $f: A \x A \to B \x B$, where $A \x A$ and $B \x B$ are the complete bilattices determined by
$A$ and $B$. Thus, $f = \langle f_1,f_2\rangle$ such that $f_1 : A \x A \to B$ is monotonic
in its first argument and anti-monotonic in the
second argument, and $f_2: A \x A \to B$ is anti-monotonic in its first argument and monotonic in its second argument.
(Such functions $f$ are called approximations in \cite{Vennekensetal}.)
Composition is ordinary function composition and for each complete lattice $A$,
the identity morphism $\bid_A : A \circarrow A$ is the identity function
$\id_{A \x A} = \id_A \x \id_A = \langle \pi^{A \x A}_1,\pi^{A \x A}_2\rangle: A \x A \to A \x A$.
The category $\bCL$ has finite products. (Actually, it has all products.)
Indeed, a terminal object of $\bCL$
is any $1$-element lattice. Suppose that $A_1,\ldots,A_n$ are complete lattices.
Then consider the direct product $A_1 \x \cdots \x A_n$ as an object of $\bCL$
together with the following morphisms $\bpi^{A_1 \x \cdots \x A_n}_i : A_1 \x \cdots \x A_n \circarrow A_i$,
$ i \in [n]$. For each $i$, $\bpi^{A_1 \x \cdots \x A_n}_i$ is the function
$$A_1 \x \cdots \x A_n \x A_1 \x \cdots \x A_n \to A_i\x A_i$$
defined by
\begin{eqnarray*}
\bpi^{A_1\x \cdots \x A_n}_i(x_1,\ldots,x_n, x'_1,\ldots,x'_n) &=& (x_i,x'_i),
\end{eqnarray*}
so that in $\Set$, $\bpi^{A_1\x \cdots \x A_n}_i$ can be written as
\begin{eqnarray*}
\langle \pi^{A_1\x \cdots \x A_n \x A_1 \x \cdots \x A_n}_i,
\pi^{A_1\x \cdots \x A_n \x A_1 \x \cdots \x A_n}_{n+i}\rangle
&=& \pi^{A_1\x \cdots \x A_n}_i \x \pi^{A_1\x \cdots \x A_n}_i.
\end{eqnarray*}
It is easy to see that the morphisms $\bpi^{A_1\x \cdots \x A_n}_i$, $i \in [n]$,
determine a product diagram in $\bCL$. To this end, let $f^i = \langle f^i_1, f^i_2 \rangle
: C \circarrow A_i$ in $\bCL$, for all $i\in [n]$, so that each
$f^i$ is a $\leq_p$-monotonic function $C \x C \to A_i \x A_i$.
Then let $h = \langle h_1,h_2\rangle$, where
$h_1 = \langle f^1_1,\ldots,f^n_1\rangle$ and $h_2 = \langle f^1_2,\ldots,f^n_2 \rangle$
in the category $\CL$. Thus, $h_1$ and $h_2$ are functions $C \x C \to A_1 \x \cdots \x A_n$.
We prove that $h$ is the
target tupling of $f^1,\ldots,f^n$ in $\bCL$. First, since each $f^i_1$
is monotonic in its first argument and anti-monotonic in the second argument,
the same holds for $h_1$. In the same way, $h_2$ is anti-monotonic
in the first argument and monotonic in the second.
Thus, $h$ is $\leq_p$-monotonic.
Next, writing just
$\bpi_i$ for $\bpi^{A_1 \x \cdots \x A_n}_i$ and $\pi_i$ for
$\pi^{A_1 \x \cdots \x A_n}_i$,
where $i \in [n]$, we have
\begin{eqnarray*}
\bpi_i \circ h
&=&
\bpi_i \circ \langle h_1,h_2 \rangle\\
&=&
(\pi_i \x \pi_i) \circ \langle \langle f^1_1,\ldots,f^n_1 \rangle , \langle f^1_2,\ldots,f^n_2 \rangle \rangle \\
&=&
\langle \pi_i \circ \langle f^1_1,\ldots,f^n_1 \rangle , \pi_i \circ \langle f^1_2,\ldots,f^n_2 \rangle \rangle \\
&=&
\langle f^i_1,f^i_2\rangle \\
&=&
f^i.
\end{eqnarray*}
It is also clear that $h$ is the unique morphism
$C \circarrow A_1 \x \cdots \x A_n$ in $\bCL$ with this property.
\begin{prop}
$\bCL$ is a cartesian category in which the product of any objects $A_1,\ldots,A_n$
agrees with their product in $\CL$.
\end{prop}
By the above argument, the tupling of any sequence of morphisms
$f^i = \langle f^i_1,f^i_2 \rangle: C \circarrow A_i$ in $\bCL$ is $h = \langle h_1,h_2\rangle$,
where $h_1$ is the tupling of the $f^i_1$ and $h_2$ is the tupling of the
$f^i_2$ in $\Set$. We will denote it by $\blangle f^1,\ldots,f^n\brangle: C \circarrow A_1 \x \cdots \x A_n$.
For further use, we note the following.
Suppose that $\rho: [m]\to [n]$ and $A_1,\ldots,A_n$ are complete lattices.
Then the associated morphism $\brho^{A_1,\ldots,A_n} : A_1\x \cdots \x A_n
\circarrow A_{\rho(1)} \x \cdots \x A_{\rho(m)}$
in $\bCL$ is the function
$$A_1\x \cdots \x A_n \x A_1\x \cdots \x A_n \to A_{\rho(1)} \x \cdots \x A_{\rho(m)} \x A_{\rho(1)} \x \cdots \x A_{\rho(m)}$$
given by
$$(x_1,\ldots,x_n,x'_1,\ldots,x'_n) \mapsto (x_{\rho(1)},\ldots,x_{\rho(m)}, x'_{\rho(1)},\ldots,x'_{\rho(m)}).$$
Thus,
$$\brho^{A_1,\ldots,A_n} = \rho^{A_1,\ldots,A_n} \x \rho^{A_1,\ldots,A_n},$$
where $\rho^{A_1,\ldots,A_n}$ is the morphism associated with $\rho$
and $A_1,\ldots,A_n$ in $\Set$ (or $\CL$). This is in accordance with
$\bid_A = \id_A \x \id_A$.
Suppose that $f: C \circarrow A$ and $g: D \circarrow B$ in $\bCL$, so that $f$ is
a function $C \x C \to A \x A$ and $g$ is a function $D \x D \to B \x B$. Then
$f \x g : C \x D \circarrow A \x B$ in the category $\bCL$ is the function
$$(\id_A \x \langle \pi^{B \x A}_2, \pi^{B \x A}_1 \rangle \x \id_B) \circ
h \circ (\id_C \x \langle \pi^{D \x C} _2, \pi^{D \x C}_1 \rangle \x \id_D): C \x D \x C \x D \to A \x B \x A \x B,$$
where $h$ is $f \x g : C\x C \x D \x D \to A \x A \x B \x B$ in $\Set$. Hence,
$h = \langle h_1,h_2\rangle$ with
\begin{eqnarray*}
h_1(x,y,x',y') &=& (f_1(x,x'), g_1(y,y'))\\
h_2(x,y,x',y') &=& (f_2(x,x'), g_2(y,y')).
\end{eqnarray*}
\subsection{Some subcategories}
Motivated by \cite{Deneckeretalsurvey,DeneckeretalULT,Vennekensetal}, we define several subcategories of $\bCL$.
Suppose that $A,B$ are complete lattices. Following \cite{Deneckeretalsurvey}, we call
an ordered pair $(x,x') \in A \x A$ \emph{consistent} if $x \leq x'$. Moreover, we call $f: A \circarrow B$
in $\bCL$ consistent if it maps consistent pairs to consistent pairs.
It is clear that if $f : A \circarrow B$ and $g: B \circarrow C$ in $\bCL$ are consistent,
then so is $g \circ f: A \circarrow C$, moreover, $\bid_A$ is always consistent.
Also, for any sequence $A_1,\ldots,A_n$ of complete
lattices, the projections $\bpi^{A_1 \x \cdots \x A_n}_i: A_1 \x \cdots \x A_n \circarrow A_i$, $i \in [n]$
are consistent. And when $f_i: C \circarrow A_i$, for all $i \in [n]$, then
$\blangle f_1,\ldots,f_n \brangle : C \circarrow A_1 \x \cdots \x A_n$ is consistent
iff each $f_i$ is. Hence, the consistent morphisms in $\bCL$ determine a cartesian
subcategory of $\bCL$ with the same product diagrams. Let $\CCL$ denote this subcategory.
We define two subcategories of $\CCL$. The first one, $\ACL$, is the subcategory determined by those
morphisms $f = \langle f_1,f_2\rangle : A \circarrow B$ in $\bCL$
such that $f_1(x,x) \leq f_2(x,x)$ for all $x \in A$.
The second, $\AsCL$, is the subcategory determined by those $f :A \circarrow B$
with $f_1(x,x) = f_2(x,x)$. These are again cartesian subcategories
with the same product diagrams.
As noted in \cite{Deneckeretalsurvey}, most applications of approximation
fixed point theory use \emph{symmetric} functions.
We introduce the subcategory of $\bCL$
having complete lattices as objects but only symmetric $\leq_p$-preserving functions as
morphisms.
Suppose that $f: A \circarrow B$ in $\bCL$, say $f = \langle f_1,f_2\rangle$.
We call $f$ symmetric if $f_2(x,x') = f_1(x',x)$, i.e., when
\begin{eqnarray*}
f_2 &=& f_1 \circ \langle \pi^{A\x A}_2,\pi^{A \x A}_1\rangle: A \x A \to B.
\end{eqnarray*}
We will express this condition in a concise way as $f_2 = f_1^\op$.
It is easy to prove that if $f: A \circarrow B$ and $g: B \circarrow C$
are symmetric, then so is $g \circ f$. Moreover, $\bid_A$ is always
symmetric. Thus, symmetric morphisms determine a subcategory of $\bCL$,
denoted $\sCL$. In fact, $\sCL$ is a subcategory of
$\AsCL$, since when $f = \langle f_1,f_2\rangle : A \circarrow B$ is symmetric,
then necessarily $f_1(x,x) = f_2(x,x)$ for all $x \in A$.
Moreover, it is again a cartesian subcategory with the same products.
Since the first component of a symmetric morphism uniquely determines the
second component, $\sCL$ can be represented as the category whose objects are
complete lattices having as morphisms $A \circarrow B$ (where $A$ and $B$ are complete lattices)
those functions $f: A \x A \to B$ which are monotonic in the first and anti-monotonic in the
second argument. Composition, denoted $\bullet$, is then defined as follows. Given $f: A \circarrow B$
and $g: B \circarrow C$, $g \bullet f : A \circarrow C$ is the function
$$g \circ \langle f,f^\op\rangle : A \x A \to C,$$ so that $(g \bullet f)(x,x') = g(f(x,x'),f(x',x))$.
The identity morphism $A \circarrow A$ is the projection $\pi^{A \x A}_1$.
\section{Fixed points}
In this section, we recall from \cite{Deneckeretalsurvey} the construction
of stable and well-founded fixed points. More precisely, only symmetric functions
were considered in \cite{Deneckeretalsurvey}, but it was remarked that
the construction also works for non-symmetric functions.
Suppose that $f = \langle f_1, f_2\rangle : A \circarrow A$ in $\bCL$,
so that $f$ is a $\leq_p$-monotonic function $A \x A \to A \x A$. Then $f_1: A \x A \to A$
is monotonic in its first argument and anti-monotonic in its second argument,
and $f_2: A \x A \to A$ is monotonic in its second argument and anti-monotonic
in its first argument. Define the functions $s_1,s_2: A \to A$ by
\begin{eqnarray*}
s_1(x') &=& \mu x. f_1(x,x')\\
s_2(x) &=& \mu x'. f_2(x,x')
\end{eqnarray*}
and let $S(f) : A \x A \to A \x A$ be the function $S(f)(x,x') = (s_1(x'),s_2(x))$.
Since $s_1$ and $s_2$ are anti-monotonic, $S(f)$ is a morphism $A \circarrow A$ in
$\bCL$. We call $S(f)$ the \emph{stable function} for $f$.
It is known that every fixed point of $S(f)$ is a fixed point
of $f$, called a \emph{stable fixed point} of $f$. We let $f^\triangle$
denote the set of all stable fixed points of $f$. Since $S(f)$
is $\leq_p$-monotonic, there is a $\leq_p$-least stable fixed point
$f^\ddag$, called the \emph{well-founded fixed point} of $f$.
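As a concrete illustration (our own sketch, not part of the constructions
cited above), on a finite lattice the well-founded fixed point can be
computed by iterating the stable function $S(f)$ from the $\leq_p$-least
pair, with the inner least fixed points obtained by Kleene iteration; the
example below encodes the two-element lattice $\mathbf{2}$ as $\{0,1\}$:
\begin{verbatim}
# Illustrative sketch: well-founded fixed points on the lattice 2 = {0, 1}.
def lfp(g):                 # least fixed point of a monotone g: 2 -> 2
    x = 0
    while g(x) != x:
        x = g(x)
    return x

def well_founded(f1, f2):
    """Iterate S(f)(x, x') = (mu u. f1(u, x'), mu u. f2(x, u))
    from the <=_p-least pair (0, 1) until it stabilizes."""
    x, xp = 0, 1
    while True:
        nx = lfp(lambda u: f1(u, xp))
        nxp = lfp(lambda u: f2(x, u))
        if (nx, nxp) == (x, xp):
            return (x, xp)
        x, xp = nx, nxp

# Example: f(x, x') = (not x', not x) has well-founded fixed point (0, 1).
print(well_founded(lambda x, xp: 1 - xp, lambda x, xp: 1 - x))
\end{verbatim}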
The above construction can be slightly extended.
Suppose that $f = \langle f_1,f_2\rangle : A \x B \circarrow A$ in $\bCL$,
so that $f$ is a function $A \x B \x A \x B \to A \x A$.
Then $f_1 : A \x B \x A \x B \to A$ is monotonic in its first and second arguments
and anti-monotonic in the third and fourth arguments,
while $f_2: A \x B \x A \x B \to A$ is monotonic in the third and fourth arguments
and anti-monotonic in the first and second arguments.
Now let $s_1,s_2 : A \x B \x B \to A$ be defined by
\begin{eqnarray*}
s_1(x',y,y') &=& \mu x. f_1(x,y,x',y')\\
s_2(x,y,y') &=& \mu x'. f_2(x,y,x',y').
\end{eqnarray*}
We have that $s_1$ is monotonic in its second argument and
anti-monotonic in the first and third arguments, and
$s_2$ is monotonic in the third argument and anti-monotonic
in the first and second arguments.
Define $S(f): A\x A \x B \x B \to A \x A$ by
\begin{eqnarray*}
S(f)(x,x',y,y') &=& (s_1(x',y,y'), s_2(x,y,y')).
\end{eqnarray*}
Then $S(f)$, as a function $(A \x A) \x (B \x B) \to A \x A$, is
$\leq_p$-monotonic in both of its arguments. We call $S(f)$
the stable function for $f$. (Note that $S(f)$ can be considered as a morphism $L \x L' \to L$
of the category $\CL$, where $L$ and $L'$ are the complete bilattices $A \x A$
and $B \x B$ considered as complete lattices ordered by the relation $\leq_p$.)
For each $y,y'\in B$, let $f^\triangle(y,y')$
denote the set of solutions of the fixed point equation $(x,x') = S(f)(x,x',y,y')$.
Hence, $f^\triangle$ is a function from $B \x B$ to the power set of $A \x A$,
that we call the stable fixed point function. In particular, for each $y,y'\in B$ there is a $\leq_p$-least
element of $f^\triangle(y,y')$. We denote it by $f^\ddag(y,y')$. Since $S(f)$ is
$\leq_p$-monotonic, so is $f^\ddag : B \x B \to A \x A$. Hence $f^\ddag: B \circarrow A$
in $\bCL$.
We have thus defined a dagger operation $^\ddag$ on $\bCL$,
called the (parametric) \emph{well-founded fixed point operation}.
In the next two sections, we investigate the
equational properties of this operation.
\begin{remark}
\label{rem-pointwise}
{\rm
The parametric well-founded fixed point operation $^\ddag$ is just the pointwise extension
of the operation defined on morphisms $A \circarrow A$. Indeed,
when $f: A \x B \circarrow A$ and $(y,y') \in B\x B$,
then let $g: A \circarrow A$ be given by
$g(x,x') = f(x,y,x',y')$. Then
$f^\ddag (y,y') = g^\ddag$ and $f^\triangle(y,y') = g^\triangle$.
}
\end{remark}
\begin{remark}
\label{rem-symm}
{\rm
Suppose that $f : \two \circarrow \two$ is given by $f(x,x') = (\neg x', \neg x)$.
Then $f$ is symmetric but $f^\ddag$ is not, since $f^\ddag = (0,1)$. Hence $\sCL$ is not closed
w.r.t. the parametric well-founded fixed point operation. Let $g : \two \x \two \circarrow \two$
be given by $g(x,y,x',y') = (\neg x', \neg x)$. Then $g$ is a morphism in
$\ACL$. However, $g^\ddag(y,y') = (0,1)$ for
all $y,y' \in \two$, so that $g^\ddag$ is not a morphism in $\ACL$.
Hence, $\ACL$ is also not closed under the parametric well-founded fixed point operation.
}
\end{remark}
\begin{remark}
\label{rem-consistent}
{\rm
We provide an example showing that when $f: A \x B \circarrow A$ in $\bCL$ is consistent, $f^\ddag$ may not be consistent. Indeed, let $A = \two$ and $B = T$ (terminal object), and let $f: A \circarrow A$ be given by
$f(x,x') = (1,\neg x \vee x')$. Then $f$ is consistent, since
$f(0,0) = f(0,1) = f(1,1) = (1,1)$, but $f^\ddag = (1,0)$, so that $f^\ddag$ is not consistent. Since $f$ is in fact in $\AsCL$, this example also shows that
neither $\ACL$ nor $\AsCL$ is closed with respect to the well-founded
fixed point operation.
Note that the above $f$ is not symmetric. In fact, if $f: A \circarrow A$ is
symmetric, then $f^\ddag : T \circarrow A$ is consistent. This follows from
Remark~\ref{rem-pointwise} and Theorem 23 in \cite{Deneckeretalsurvey}.
}
\end{remark}
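Reusing the \texttt{well\_founded} helper sketched above, the
counterexample of Remark~\ref{rem-consistent} can be verified directly
(again purely illustrative):
\begin{verbatim}
# f(x, x') = (1, (not x) or x') on 2: f is consistent, f^ddag is not.
f1 = lambda x, xp: 1
f2 = lambda x, xp: (1 - x) | xp
print(well_founded(f1, f2))   # -> (1, 0), and 1 <= 0 fails
\end{verbatim}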
\section{Some valid identities}
In this section we establish the parameter, fixed point, permutation and
group identities and the special case (\ref{eq-pairingspec})
of the pairing identity for the parametric well-founded fixed
point operation over $\bCL$. In fact, we prove that the weak functorial
implication holds.
\begin{prop}
The parameter identity holds:
\begin{eqnarray*}
(f \circ (\bid_A \x g))^\ddag &=& f^\ddag \circ g,
\end{eqnarray*}
for all $f: A \x B \circarrow A$ and $g: C \circarrow B$.
\end{prop}
{\sl Proof.} Let $h = f \circ (\bid_A \x g) : A\x C \circarrow A$. Then $S(h) : A \x A \x C \x C \to A \x A$
is given by
\begin{eqnarray*}
S(h)(x,x',z,z') &=& (\mu x. f_1 (x,g_1(z,z'), x', g_2(z,z')), \mu x'. f_2(x, g_1(z,z'), x', g_2(z,z')))\\
&=& S(f)((\id_{A \x A} \x g)(x,x',z,z')),
\end{eqnarray*}
where $f = \langle f_1,f_2\rangle$ and $g = \langle g_1,g_2\rangle$.
Thus, $S(h) = S(f)\circ (\id_{A \x A} \x g)$ in $\Set$ (or $\CL$)
and therefore $h^\triangle = f^\triangle \circ g$. Moreover,
$h^\ddag = f^\ddag \circ g$, since the parameter identity holds for the
least fixed point operation over $\CL$. \eop
\begin{prop}
The fixed point identity holds:
\begin{eqnarray*}
f\circ \blangle f^\ddag,\id_B \brangle &=& f^\ddag,
\end{eqnarray*}
for all $f : A \x B \circarrow A$.
\end{prop}
{\sl Proof.}
By Remark~\ref{rem-pointwise}, it is sufficient to prove our claim only in
the case when $f: A \circarrow A$, i.e., $f$ is a
$\leq_p$-monotonic function $ A \x A \to A \x A$.
But it is known that if $f: A \circarrow A$, then each stable fixed point of $f$
is a ($\leq_t$-minimal) fixed point, so $f \circ f^\ddag = f^\ddag$.
(We also have $f \circ f^\triangle = f^\triangle$.) \eop
\begin{prop}
The permutation identity holds:
\begin{eqnarray*}
(\brho \circ f \circ (\brho^{-1} \x \bid_B))^\ddag &=& \brho \circ f^\ddag,
\end{eqnarray*}
for all $f : A_1\x \cdots \x A_n \x B \circarrow A_1\x \cdots \x A_n$
and permutation $\rho: [n]\to [n]$.
\end{prop}
{\sl Proof.}
We prove this only when $B$ is the terminal object, so that
$f$ can be viewed as a morphism $f = \langle f_1,f_2 \rangle
: A_1\x \cdots \x A_n \circarrow A_1\x \cdots \x A_n$,
where $f_1,f_2$ are appropriate functions
$$A_1\x \cdots \x A_n \x A_1\x \cdots \x A_n \to A_1\x \cdots \x A_n.$$
Let $g = \brho \circ f \circ \brho^{-1}$ in $\bCL$, so that $g = \langle g_1,g_2\rangle$
where $g_1,g_2$ are functions
$$A_{\rho(1)}\x \cdots \x A_{\rho(n)} \x A_{\rho(1)} \x \cdots \x A_{\rho(n)} \to
A_{\rho(1)}\x \cdots \x A_{\rho(n)}.$$
First we show that
\begin{eqnarray}
\label{eq-perm1}
S(g) &=& \brho \circ S(f) \circ \brho^{-1}
\end{eqnarray}
in $\bCL$, i.e.,
\begin{eqnarray*}
S(g) &=& (\rho \x \rho) \circ S(f) \circ (\rho^{-1} \x \rho^{-1})
\end{eqnarray*}
in $\Set$ (or $\CL$).
Below we will denote by $x,x'$ $n$-tuples
in $A_1\x \cdots \x A_n$. Similarly, let $y,y'$ denote $n$-tuples in
$A_{\rho(1)}\x \cdots \x A_{\rho(n)}$.
Note that if $x = (x_1,\ldots,x_n) \in A_1\x \cdots \x A_n$,
then $\rho(x) = (x_{\rho(1)},\ldots,x_{\rho(n)})$ in $A_{\rho(1)}\x \cdots \x A_{\rho(n)}$.
And if $y = (y_1,\ldots,y_n) \in A_{\rho(1)}\x \cdots \x A_{\rho(n)}$,
then $\rho^{-1}(y) = (y_{\rho^{-1}(1)},\ldots,y_{\rho^{-1}(n)})$ in $A_1\x \cdots \x A_n$.
Let
\begin{eqnarray*}
s_1(x') &=& \mu x. f_1(x,x')\\
s_2(x) &=& \mu x'. f_2(x,x').
\end{eqnarray*}
Then $S(f)(x,x') = (s_1(x'), s_2(x))$.
Similarly, let
\begin{eqnarray*}
t_1(y')
&=& \mu y. \rho(f_1(\rho^{-1}(y), \rho^{-1}(y')))\\
t_2(y)
&=& \mu y'. \rho(f_2(\rho^{-1}(y), \rho^{-1}(y'))).
\end{eqnarray*}
Then $S(g)(y,y') = (t_1(y'), t_2(y))$. Since the permutation and parameter identities
hold for the least fixed point operation over $\CL$, we obtain that
\begin{eqnarray*}
t_1(y') &=& \rho(s_1(\rho^{-1}(y')))\\
t_2(y) &=& \rho(s_2(\rho^{-1}(y))),
\end{eqnarray*}
proving (\ref{eq-perm1}). Now from (\ref{eq-perm1}), since the permutation identity holds
for the least fixed point operation over $\CL$, it follows that $g^\ddag = \brho \circ f^\ddag$ in $\bCL$.
Moreover, it follows that the stable fixed points of $g$ are of the form $(\rho(x), \rho(x'))$,
where $(x,x')$ is a stable fixed point of $f$. (A suggestive notation: $g^\triangle = \rho \circ f^\triangle$.) \eop
We now establish a special case of the pairing identity. It will be shown later that
the general form of the identity does not hold.
\begin{prop}
\label{prop-pairingspec}
The identity (\ref{eq-pairingspec}) holds:
\begin{eqnarray*}
\blangle f , g \circ (\bpi^{A \x B}_2 \x \bid_C)\brangle^\ddag &=& \blangle f^\ddag \circ \blangle g^\ddag, \bid_C\brangle,
g^\ddag\brangle,
\end{eqnarray*}
where $f: A \x B \x C \circarrow A$ and $g: B \x C \circarrow B$.
\end{prop}
{\sl Proof.} It suffices to consider the case when there is no parameter. So let
$f = \langle f_1, f_2\rangle: A \x B \circarrow A$ and $g = \langle g_1,g_2\rangle : B \circarrow B$,
so that $f_1, f_2: A \x B \x A \x B \to A$ and $g_1,g_2: B \x B \to B$.
Let $ h = \blangle f , g \circ \bpi^{A \x B}_2 \brangle :
A \x B \circarrow A \x B$ in $\bCL$.
Then $h^\ddag$ can be constructed as
follows. First consider
\begin{eqnarray*}
&&\mu (x,y). (f_1(x,y,x',y'), g_1(y,y'))\quad {\rm and}\\
&&\mu (x',y'). (f_2(x,y,x',y'), g_2(y,y')).
\end{eqnarray*}
Since (\ref{eq-pairingspec}) and the parameter identity hold for the least fixed point operation over $\CL$,
we know that these functions can respectively be written as
\begin{eqnarray*}
&&(\mu x. f_1(x,\mu y. g_1(y,y'), x',y'), \mu y. g_1(y,y'))\quad {\rm and}\\
&&(\mu x'. f_2(x, y, x', \mu y'.g_2(y,y')), \mu y'. g_2(y,y')).
\end{eqnarray*}
Now $h^\ddag$ can be obtained
by solving the system of equations
\begin{eqnarray*}
(x,x') &=& (\mu x. f_1(x,\mu y. g_1(y,y'), x',y'), \mu x'. f_2(x, y, x', \mu y'. g_2(y,y'))) = S(f)((x,x'), S(g)(y,y'))\\
(y,y') &=& (\mu y. g_1(y,y'), \mu y'. g_2(y,y')) = S(g)(y,y')
\end{eqnarray*}
for its least solution w.r.t. $\leq_p$. Moreover, it follows that
$h^\triangle$ consists of all $((x,y),(x',y'))$ such that $(y,y')$ is a
stable fixed point of $g$ and $(x,x')$ is in $f^\triangle(y,y')$.
In particular, since the least fixed point operation over $\CL$
satisfies (\ref{eq-pairingspec}), it holds that
$h^\ddag =
\blangle f^\ddag \circ g^\ddag, g^\ddag \brangle$ as claimed. \eop
\begin{remark}
{\rm
The identity (\ref{eq-pairingspec}) has already been established in
Theorem 3.11 of \cite{Vennekensetal}, see also the
Splitting Set Theorem of \cite{LifschitzTurner}.
}
\end{remark}
\begin{prop}
The weak functorial dagger implication holds: for all $f: A ^n \x B \circarrow A^n$ and $g: A \x B \circarrow A$ in
$\bCL$: if $f \circ (\bDelta_n \x \bid_B) = \bDelta_n \circ g$, then $f^\ddag = \bDelta_n \circ g^\ddag$.
\end{prop}
{\sl Proof.} We spell out the proof only in the case when $B$ is a terminal object.
So let $f : A^n \circarrow A^n$ and $g: A \circarrow A$
in $\bCL$, say $f = \langle f_1,f_2\rangle$ and $g = \langle g_1,g_2\rangle$, where
$f_i: A^n\x A^n \to A^n$ and $g_i: A\x A \to A$ are appropriate functions for $i = 1,2$.
The assumption $f \circ \bDelta_n = \bDelta_n \circ g$ can be rephrased
as $$f_i \circ ( \Delta_n \x \Delta_n) = \Delta_n \circ g_i,\quad i= 1,2,$$
i.e.,
\begin{eqnarray*}
f_1(x,\ldots, x,x', \ldots,x') &=& (g_1(x,x'),\ldots,g_1(x,x'))\\
f_2(x,\ldots, x,x', \ldots,x') &=& (g_2(x,x'),\ldots,g_2(x,x'))
\end{eqnarray*}
for all $x,x' \in A$.
Since the weak functorial dagger implication and the parameter identity
hold for the least fixed point operation over $\CL$, it follows that
\begin{eqnarray*}
h_1(x',\ldots,x') &=& (k_1(x'),\ldots,k_1(x'))\\
h_2(x,\ldots,x) &=& (k_2(x),\ldots,k_2(x))
\end{eqnarray*}
where $h_1(x_1',\ldots,x_n')$ and $h_2(x_1,\ldots,x_n)$
are respectively the least solutions of
\begin{eqnarray*}
(x_1,\ldots,x_n) &=& f_1(x_1,\ldots,x_n,x'_1,\ldots,x'_n)\quad {\rm and}\\
(x_1',\ldots,x_n') &=& f_2(x_1,\ldots,x_n,x'_1,\ldots,x'_n)
\end{eqnarray*}
and $k_1(x')$ and $k_2(x)$ denote the least solutions of
\begin{eqnarray*}
x & =& g_1(x,x')\quad{\rm and}\\
x' &=& g_2(x,x'),
\end{eqnarray*}
so that $S(f)(x_1,\ldots,x_n,x_1',\ldots,x_n') = (h_1(x_1',\ldots,x_n'), h_2(x_1,\ldots,x_n))$,
moreover,
$S(g)(x,x') = (k_1(x'), k_2(x))$.
Consider now the equations
\begin{eqnarray*}
(x_1,\ldots,x_n,x'_1,\ldots,x'_n) &=& (h_1(x_1',\ldots,x_n'), h_2(x_1,\ldots,x_n))
\end{eqnarray*}
and
\begin{eqnarray*}
(x,x') &=& (k_1(x'),k_2(x)).
\end{eqnarray*}
Since the weak functorial dagger implication and the parameter identity
hold for the least fixed point operation over $\CL$, the
$\leq_p$-least solution of the first equation can be obtained as the
$2n$-tuple whose first $n$ components are equal to the first component
of the $\leq_p$-least solution of the second equation, and whose second $n$
components are equal to the second component of the $\leq_p$-least solution of the second equation.
This means that $f^\ddag = (\Delta_n \x \Delta_n) \circ g^\ddag$ in $\Set$, i.e.,
$f^\ddag = \bDelta_n \circ g^\ddag$ in $\bCL$. (It also holds that if $(x,x')$
is a stable fixed point of $g$, then $(x,\ldots,x,x',\ldots,x')$ is a stable fixed
point of $f$.) \eop
\begin{cor}
The identities associated with finite groups hold for the
parametrized well-founded fixed point operator over $\bCL$.
\end{cor}
In fact, each identity associated with a finite automaton holds.
\section{Some identities that fail}
\begin{prop}
The composition identity fails even in the following simple case:
\begin{eqnarray*}
f \circ (f \circ f)^\ddag &=& ( f \circ f)^\ddag,
\end{eqnarray*}
where $f : A \circarrow A$.
\end{prop}
{\sl Proof.}
Let $f : \two \circarrow \two$ be given by $f(x,x') = (\neg x', \neg x)$ (see also Remark~\ref{rem-symm}).
Then $f \circ f$ is the identity function on $\two \x \two$,
hence $(f \circ f)^\ddag = (0,0)$. On the other hand,
$f \circ (f \circ f)^\ddag = (1,1)$. \eop
\begin{prop}
The squaring identity $(f \circ f)^\ddag = f^\ddag$ fails, where $f: A \circarrow A$.
\end{prop}
{\sl Proof.} Let $f$ be as in the previous proof. Then $(f \circ f)^\ddag = (0,0)$
as shown above. But $f^\ddag = (0,1)$. \eop
Since the fixed point, parameter and permutation identities hold
but the composition identity fails, the pairing identity also must fail,
see \cite{BEbook}.
We can give a direct proof.
\begin{prop}
The pairing identity
\begin{eqnarray*}
\blangle f,g\brangle^\ddag &=& \blangle f^\ddag \circ \blangle h^\ddag, \bid_C\brangle, h^\ddag\brangle,
\end{eqnarray*}
where $h = g \circ \blangle f^\ddag, \bid_{B \x C}\brangle$, fails for general $f: A \x B \x C \circarrow A$
and $g: A \x B \x C \circarrow B$.
\end{prop}
{\sl Proof.}
Let $f,g : \two \x \two \circarrow \two$ in $\bCL$,
so that $f$ and $g$ are appropriate functions
$\two \x \two \x \two \x \two \to \two \x \two$,
\begin{eqnarray*}
f(x,y,x',y') &=& (\neg y', \neg y)\\
g(x,y,x',y') &=& (\neg x', \neg x ).
\end{eqnarray*}
Then
\begin{eqnarray*}
\blangle f,g \brangle(x,y,x',y') &=& (\neg y',\neg x',\neg y,\neg x)
\end{eqnarray*}
and thus $\blangle f,g \brangle^\ddag = (0,0,1,1)$.
On the other hand, $f^\ddag(y,y') = (\neg y', \neg y)$,
hence $h = g \circ \blangle f^\ddag, \bid_\two \brangle$
is the identity function on $\two \x \two$ and $h^\ddag = (0,0)$
and $f^\ddag \circ h^\ddag = (1,1)$. It follows that
$\blangle f^\ddag \circ h^\ddag , h^\ddag\brangle = (1,0,1,0)$.
\eop
Each of the above examples involved symmetric morphisms. We now
refute the double dagger identity, but we use a non-symmetric
morphism.
\begin{prop}
The double dagger identity fails in $\bCL$.
\end{prop}
{\sl Proof.} Let $g: \two \x \two \circarrow \two$ be given by
$g(x,y,x',y') = (\neg y', \neg x)$, and let
$h = g \circ \blangle \bid_\two,\bid_\two \brangle : \two \circarrow \two$,
so that $h(x,x') = (\neg x',\neg x)$. We already know that
$h^\ddag = (0,1)$. But $g^\ddag(y,y') = (\neg y',y')$ and
$g^{\ddag\ddag} = (1,0)$. \eop
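\begin{remark}
{\rm
For completeness, the last step of the preceding proof can be verified directly. Since $g^\ddag(y,y') = (\neg y', y')$, we have
$S(g^\ddag)(y,y') = (\mu y.\, \neg y',\ \mu y'.\, y') = (\neg y', 0)$,
because the least fixed point of the identity map on $\two$ is $0$. The unique solution of $(y,y') = (\neg y', 0)$ is $(1,0)$, hence $g^{\ddag\ddag} = (1,0) \neq (0,1) = h^\ddag$.
}
\end{remark}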
\section{Conclusion}
We extended the well-founded fixed point operation of
\cite{Deneckeretalsurvey,Vennekensetal} to a parametric operation and studied
its equational properties. We found that several of the
identities of iteration theories hold for the parametric
well-founded fixed point operation, but some others fail.
Two interesting questions for further investigation arise.
The first one concerns the \emph{algorithmic description} of the valid
identities of the well-founded fixed point operation.
Does there exist an algorithm to decide whether an identity
(in the language of cartesian categories equipped with a
dagger operation) holds for the well-founded fixed point
operation? The second one concerns the \emph{axiomatic description} of
the valid identities of the well-founded fixed point
operation. These questions are relevant in connection with modular
logic programming, cf. \cite{Ferrarisetal,Janhunenetal,LifschitzTurner}.
An alternative semantics of logic programs with negation based on
an infinite domain of truth values was proposed
in \cite{RondogiannisWadge}. The infinite valued approach has been
further developed in the abstract setting of `stratified complete lattices' in
\cite{Charalambidisetal,EsikRondogiannis2,EsikRondogiannis1,EsikWOLLIC,EsikTbiLLC}.
In particular, it has been proved in \cite{EsikWOLLIC} that the
stratified least fixed point operation arising in this approach
does satisfy all identities of iteration theories.
{\bf Acknowledgments} The authors would like to thank Panos Rondogiannis for pointing
out some of the references. The second author would like to thank the Institute of
Informatics Gaspard Monge of Universit\'e Paris Est for its hospitality.
\begin{thebibliography}{nn}
\bibitem{Bekic}
H. Beki\'c: Definable operations in general algebras, and the theory of automata and flowcharts.
Technical report, IBM Vienna, 1969. Reprinted in:
Programming Languages and Their Definition --
Hans Beki\'c (1936--1982), LNCS 177, pp. 30--55, Springer, 1984.
\bibitem{BEW1}
S.L. Bloom, C.C. Elgot and J.B. Wright:
Solutions of the iteration equation and extensions of the scalar iteration operation,
{\em SIAM J. Comput.}, 9(1980), 25--45.
\bibitem{BEbook}
S.L. Bloom and Z. \'Esik: Iteration theories. Springer, 1993.
\bibitem{BEccc}
S.L. Bloom and Z. \'Esik: Fixed-point operations on ccc's. Part I.
{\em Theor. Comput. Sci.} 155(1996), 1--38.
\bibitem{Charalambidisetal}
A. Charalambidis, Z. \'Esik and P. Rondogiannis:
Minimum model semantics for extensional higher-order logic programming with negation,
{\em Theor. Prac. Log. Prog.}, 14(2014), 725--737.
\bibitem{Daveyetal}
B.A. Davey and H.A. Priestley: Introduction to lattices and order, 2nd Edition,
Cambridge University Press, 2002.
\bibitem{DeBakkerScott}
J.W. De Bakker and D. Scott: A theory of programs. Technical Report, IBM Vienna, 1969.
\bibitem{Deneckeretalsurvey}
M. Denecker, V.M. Marek and M. Truszczy{\'n}ski:
Approximations, stable operators, well-founded fixpoints and applications in nonmonotonic reasoning,
in: {\em Logic-Based Artificial Intelligence},
Springer International Series in Engineering and Comput. Sci., Vol. 597, Chapter 6, pp. 127--144, 2000.
\bibitem{DeneckeretalULT}
M. Denecker, V.M. Marek and M. Truszczy{\'n}ski:
Ultimate approximation and its applications in nonmonotonic knowledge representation systems,
{\em Inform. Comput.}, 192(2004), 82--121.
\bibitem{Esikaxioms}
Z. \'Esik: Identities in iterative and rational algebraic theories.
{\em Comput. Linguist. Comput. Lang.}, 14(1980), 183--207.
\bibitem{Esgroup}
Z. \'Esik: Group axioms for iteration. {\em Inform. Comput.}, 148(1999), 131--180.
\bibitem{EsAC}
Z. \'Esik:
Axiomatizing iteration categories, {\em Acta Cybern.}, 14(1999), 65--82.
\bibitem{Espower}
Z. \'Esik:
The power of the group-identities for iteration,
{\em Int. J. Algebra Comp.}, 10(2000), 349--374.
\bibitem{EsikWOLLIC}
Z. \'Esik:
Equational properties of stratified least fixed points (Extended abstract),
in: {\em WoLLIC 2015}, LNCS 9160, pp. 174--188, 2015.
\bibitem{EsMSCS2015}
Z. \'Esik:
Equational axioms associated with finite automata for fixed point operations in cartesian categories,
\emph{Math. Struct. Comput. Sci.}, to appear.
\bibitem{EsikTbiLLC}
Z. \'Esik: A representation theorem for stratified complete lattices, CoRR abs/1503.05124, 2015.
\bibitem{EsMFCS2015}
Z. \'Esik:
Equational properties of fixed point operations in cartesian categories: An overview.
In: {\em MFCS (1)}, Springer, LNCS 9234, pp. 18--37, 2015.
\bibitem{EsikRondogiannis2}
Z. \'Esik and P. Rondogiannis:
Theorems on pre-fixed points of non-monotonic functions with applications in logic programming and formal grammars,
{\em WoLLIC 2014}, LNCS 8652, pp. 166--180, 2014.
\bibitem{EsikRondogiannis1}
Z. \'Esik and P. Rondogiannis:
A fixed point theorem for non-monotonic functions, {\em Theor. Comput. Sci.}, 574(2015), 18--38.
\bibitem{Ferrarisetal}
P. Ferraris, J. Lee, V. Lifschitz and R. Palla:
Symmetric splitting in the general theory of stable models,
in: Proc. {\em IJCAI 2009}, pp. 797--803, IJCAI Organization, 2009.
\bibitem{Fittingsurvey}
M. Fitting:
Fixed point semantics for logic programming, a survey,
{\em Theoret. Comput. Sci.}, 278(2002), 25--51.
\bibitem{Fittingnice}
M. Fitting: Bilattices are nice things, in: \emph{Self-Reference}, Center for the Study of Language
and Information, pp. 53--77, 2006.
\bibitem{Ginsberg}
M. Ginsberg: Multivalued logics: a uniform approach to reasoning in AI.
{\em Comput. Intelligence}, 4(1988), 256--316.
\bibitem{Janhunenetal}
T. Janhunen, E. Oikarinen, H. Tompits and S. Woltran:
Modularity aspects of disjunctive stable models,
{\em J. Artificial Intell. Research}, 35(2009), 813--857.
\bibitem{LifschitzTurner}
V. Lifschitz and H. Turner: Splitting a logic program,
in: Proc. \emph{Logic Programming 1994}, pp. 23--37, MIT Press, 1994.
\bibitem{RondogiannisWadge}
P. Rondogiannis and W.W. Wadge:
Minimum model semantics for logic programs with negation-as-failure,
{\em ACM Trans. Comput. Log.}, 6(2005), 441--467.
\bibitem{Vennekensetal}
J. Vennekens, D. Gilis and M. Denecker:
Splitting an operator:
Algebraic modularity results for logics with fixpoint
semantics,
{\em ACM Trans. Comput. Log.}, 5(2009), 1--32.
\end{thebibliography}
\end{document}
\section{Introduction and general remarks}
The famous Rado graph $R$ (see \cite{diestel},\cite{rado}) has the property of being isomorphic to $R-W$, where $W\subset V(R)$ is \emph{any} finite set of vertices. A similar property of a countable tree $T$ with infinitely many leaves would be $T \cong T-W$ for any finite set of \emph{leaves} $W$. Trivially, this property is satisfied by an infinite star, but what happens if we restrict ourselves to locally finite trees? If such a tree exists, it must be fairly ``large''. In this article we construct such a tree $T$ of maximal degree $3$, which is best possible.
The problem is motivated by the work of Bonato and Tardif~\cite{bonato} and the author~\cite{tyomkyn} on mutual embeddings of infinite trees. Call two trees \emph{twins} or \emph{mutually embeddable} if each of them contains an isomorphic copy of the other as a subtree. Define the \emph{twin number} of a tree $T$, in notation $m(T)$, to be the number of isomorphism classes of trees twinned with $T$. Bonato and Tardif~\cite{bonato} asked what values the twin number can take, i.e. how many pairwise non-isomorphic trees can be mutually embeddable. Their conjecture was that the twin number of a tree is always either $1$ or infinite. In~\cite{tyomkyn} the author considered this problem for the class of locally finite trees. While it is easy to construct locally finite trees of twin number $1$, we could not find a locally finite tree $T$ of $m(T)=1$ which has a non-surjective self-embedding, other than the one-way infinite path. It can be shown that such a tree must be isomorphic to $T-x$ for infinitely many of its leaves $x$, so the natural question to ask is whether such a tree exists. The aim of this paper is to give an affirmative answer, and in fact the statement we prove is much stronger. Our construction might be helpful for solving the original problem and, we believe, it should be interesting in its own right.
The additional restriction of having infinitely many leaves is not substantial, as the only tree with finitely many leaves and the above property is a ray, i.e.\ a one-way infinite path. More dramatically, a ray is the only tree $T$ with finitely many leaves satisfying $T \cong T-x$ for \emph{some} leaf $x$. Indeed, the removal of a leaf from a tree with finitely many leaves either decreases the number of leaves by $1$ or does not affect it. In the latter case the vertex next to the removed leaf must have had degree $2$. Therefore, if $T$ has a vertex of degree at least $3$ and we start removing leaves of $T$ one by one in any order, we will at some point obtain a tree with fewer leaves than originally. In particular, repeated application of the isomorphism between $T$ and $T-x$ would result in a tree with fewer leaves than $T$ but isomorphic to $T$, a contradiction.
We now describe the construction of the desired tree $T$. First of all, observe that it suffices to ensure $T \cong T-x$ for any single leaf $x$, for then the desired property follows by iteration. We build up $T$ inductively as a union of an ascending chain of trees $T_0 \subset T_1 \subset T_2 \subset \dotsb$ with $\Delta(T_n)= 3$ for all $n$. To be precise, we set $T = (V,E)$ where $V = \bigcup_{i=0}^{\infty}V(T_i)$, noting that the sequence $\{V(T_i)\}_{i=0}^{\infty}$ is ascending as well, and let $e \in E$ if and only if $e \in E(T_i)$ for some $i$. It is immediate that $T$ is another tree of maximal degree $3$.
Let the \emph{core} $c(T)$ of a tree $T$ be the graph spanned by all vertices $v$ such that $T-v$ has at least two infinite components. It is not hard to see that the core must be connected, i.e.\ $c(T)$ is a subtree of $T$. Note also, that the core is invariant under any isomorphism $\phi\colon T \rightarrow T-x$, i.e.\ $\phi$ restricted to $c(T)$ defines an automorphism of $c(T)$, which we also denote by $\phi$. Consequently, the inverse map $\phi^{-1}\colon T-x \rightarrow T$ gives rise to the inverse automorphism $\phi^{-1}\colon c(T) \rightarrow c(T)$.
Let $S$ and $T$ be two trees with vertices $v\in S$ and $w\in T$. The \emph{sum} of two rooted trees $(S,v)+(T,w)$ is constructed by identifying $v$ with $w$ and ``gluing together'' $S$ and $T$ accordingly. Similarly, if $W\subset V(T)$, we can define $(T,W)+(S,v)$ by attaching a copy of $S$ to each $w\in W$ identifying $v$ with $w$.
Let $x_1,x_2, \dotsc, x_n$ be leaves of a tree $T$ such that $T \cong T-x_i$ for each $i$ and let $\phi_i$ be the corresponding isomorphism between $T$ and $T-x_i$. Denote by $\Phi$ the set $\{\phi_1, \phi_2, \dotsc, \phi_n\}$. For $Y \subset c(T)$, define the \emph{closure} $\Phi(Y)$ of $Y$ under $\Phi$ as the set of all vertices $w \in c(T)$ that can be mapped to some $y \in Y$ by finitely many applications of $\phi_1, \dotsc, \phi_n$ and $\phi_1^{-1}, \dotsc, \phi_n^{-1}$, that is, $\Phi(Y)$ is the image of $Y$ under $\GenBy{\Phi}$, the free group generated by $\Phi$. Furthermore, given a rooted tree $(S,s)$ we define $*(T,Y,S,s;\Phi)$, the \emph{convolution} of $(T,Y)$ and $(S,s)$ via $\Phi$, to be the tree $(T,\Phi(Y)) + (S,s)$. As $Y \subset \Phi(Y)$, the convolution $*(T,Y,S,s;\Phi)$ extends the sum $(T,Y) + (S,s)$. More importantly, $*(T,Y,S,s;\Phi)-x_i \cong *(T,Y,S,s;\Phi)$ for all $i$, in such a way that the underlying isomorphism extends the one between $T$ and $T-x_i$; conversely, the convolution $*(T,Y,S,s;\Phi)$ is the minimal extension of $(T,Y) + (S,s)$ preserving the above isomorphisms.
The construction of the desired tree $T$ comprises two major parts. First we construct the sequence $\{T_n\}$. Then we show that its union $T$ has the needed property.
\section{Constructing the sequence}
Our aim is to construct an ascending sequence of trees $T_0\subset T_1\subset T_2\subset \dotsb$ along with a sequence of vertices $x_0,x_1,x_2,\dotsc$ such that for all $n$ the following conditions are satisfied:
\begin{enumerate}
\item $\Delta(T_n)=3$.\label{itm:1}
\item $x_i$ is a leaf of $T_n$ for all $i \le n+1$.\label{itm:2}
\item $T_n \cong T_n - x_i$ for all $i \le n$. Moreover, for $i\le m<n$ the isomorphism between $T_n$ and $T_n-x_i$ extends the isomorphism between $T_m$ and $T_m-x_i$.\label{itm:3}
\item Writing $\phi^{n}_i$ for the isomorphism between $T_n$ and $T_n -x_i$, as in \ref{itm:3}, and $\Phi$ for the set $\{\phi^n_0,\dots,\phi^n_n\}$, there exists a vertex $y \in c(T_n)$ such that each vertex $y' \in \Phi(y)$ has degree $2$ in $T_n$.\label{itm:4}
\item Among all leaves in $T_n$ apart from $x_0,x_1,\dotsc, x_n$ the leaf $x_{n+1}$ has the minimal distance from $x_0$.\label{itm:5}
\end{enumerate}
Let $T_0$ be a doubly infinite ray, labeled by the integers $\dotsc, -1,0,1,2, \dotsc$ with a leaf attached to each \emph{even} non-negative vertex. If we let $x_0$ and $x_1$ be the leaves attached to the vertices labeled by $0$ and $2$, then a trivial check confirms properties~\ref{itm:1}--\ref{itm:5} for $T_0$.
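Explicitly, one possible isomorphism $T_0 \to T_0 - x_0$ is induced by the shift $n \mapsto n+2$ of the ray, carrying each attached leaf along with its neighbour: it maps the leaves attached at $0,2,4,\dotsc$ onto the leaves attached at $2,4,6,\dotsc$, which are precisely the leaves of $T_0 - x_0$. For property \ref{itm:4} one may take $y$ to be the vertex labeled $-1$: its closure under the shift and its inverse consists of the odd-labeled vertices only, and all of these have degree $2$ in $T_0$.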
Suppose that we have constructed $T_n$ along with $x_{n+1}$. We wish to extend $T_n$ to $T_{n+1}$. Note that, by property \ref{itm:3} and the fact that $T_n$ is not a ray (which is implied by property \ref{itm:1}), $T_n$ must have infinitely many leaves, as explained in the first section. Thus, once we have constructed $T_{n+1}$ such that properties \ref{itm:1}--\ref{itm:4} are satisfied, we can choose $x_{n+2}$ according to \ref{itm:5}.
We want to construct $T_{n+1}$ as a union of another ascending sequence of trees $U_0 \subset U_1 \subset U_2 \subset \dotsb$ where $U_0 = T_n$ and for all $i$ the tree $U_i$ should have the following properties:
\begin{enumerate}[(i)]
\item $\Delta(U_i)=3$.\label{itm:i}
\item $x_0, x_1, \dots, x_{n+1}$ are leaves of $U_i$.\label{itm:ii}
\item If $i$ is even, then $U_i \cong U_i - x_j$ for $j = 0, 1, \dots n$. If $i$ is odd, then $U_i \cong U_i - x_{n+1}$. In both cases the corresponding isomorphism between $U_i$ and $U_i - x_j$ extends the one between $U_{i-2}$ and $U_{i-2}-x_j$.\label{itm:iii}
\end{enumerate}
At the same time we construct a sequence of vertex sets $W_i \subset U_{i-1}$, whose meaning will become clear later on.
Choose a vertex $y$ as in \ref{itm:4}. Attach a new path $P$ of length $2$ to $y$, whose other end we denote by $y'$. Attach a new doubly infinite path $R$ to $y'$. Let $R_1$ and $R_2$ denote the two rays, into which $y'$ divides $R$. Now to each vertex on $R_1$ except $y'$ attach a copy of $(P,y)+(T_n,y)$ in the same way as it is attached to $y'$. To each vertex on $R_2$ attach a copy of $(P,y)+(T_n-x_{n+1},y)$. Call the resulting tree $U_1$ and let $W_1 = \{y\}$ and $S = U_1-(U_0-y)$. In other words, $U_1 = (U_0,y)+(S,y)$.
To obtain $U_{2i}$ from $U_{2i-1}$, set $U_{2i} = *(U_{2i-2},W_{2i-1},S,y;\Phi_{2i-2})$ where $\Phi_{2i-2}$ is the set of isomorphisms removing $x_0, \dotsc, x_n$ from $U_{2i-2}$ (note that $U_{2i-1} = (U_{2i-2},W_{2i-1})+(S,y)$ so $U_{2i}$ extends $U_{2i-1}$). Define $W_{2i}$ to be $\Phi_{2i-2}(W_{2i-1})\setminus W_{2i-1}$, in other words $W_{2i}$ is the set of those vertices in $U_{2i-1}$ to which we attached a new copy of $(S,y)$.
Similarly, to obtain $U_{2i+1}$ from $U_{2i}$, set $U_{2i+1}= *(U_{2i-1},W_{2i},S,y;\Phi_{2i-1})$ where $\Phi_{2i-1}$ consists of the single isomorphism between $U_{2i-1}$ and $U_{2i-1}-x_{n+1}$. Again, $U_{2i} = (U_{2i-1},W_{2i})+(S,y)$, and hence $U_{2i+1}$ extends $U_{2i}$. Define $W_{2i+1}$ to be $\Phi_{2i-1}(W_{2i})\setminus W_{2i}$.
Note that $W_1 \subset c(U_0)$ and therefore, by the invariance of the core, $W_2 \subset \Phi_0(W_1) \subset c(U_0) \subset c(U_1)$. Hence, $W_3 \subset \Phi_1(W_2) \subset c(U_1)\subset c(U_2)$. It follows by induction that $W_i \subset c(U_{i-1})$ for all $i$. This shows that a leaf of $U_i$ remains a leaf in $U_{i+1}$. In particular, $x_0, x_1, \dotsc, x_{n+1}$ are leaves of all $U_i$ and (\ref{itm:ii}) holds.
Another important observation is the fact that, for even $i$, the set $W_i$ lies on the same side of $P$ as $y$, whereas for odd $i \geq 3$, $W_i$ lies on the same side as $y'$.
We also note that property~(\ref{itm:iii}) is immediate from the construction and the corresponding property of the convolution.
We now must prove (\ref{itm:i}). For $U_1$ this follows by construction. Property~\ref{itm:4} of $T_n$ gives us $\Delta(U_2) = 3$. To prove the general statement it suffices to show that $d_{U_i}(w) = 2$ for all $w \in W_{i+1}$. Indeed, since we know that $d_S(y) = 1$, this will imply that at each step we identify vertices of degree~$2$ with vertices of degree~$1$, i.e.\ no vertex of degree~$4$ or greater is generated. If $w \in W_{i+1}$, then $w = \phi(w')$ for some $w' \in W_{i}$ and $\phi \in \GenBy{\Phi}$ (group generated by $\Phi$). Therefore, by the induction hypothesis, $d_{U_i}(w)=d_{U_i}(w')=2$ unless $w \in \Phi_{i-1}(v)$ for some $v$ adjacent to one of $x_0, \dots, x_n$ if $i$ is odd and adjacent to $x_{n+1}$ if $i$ is even.
Note, however, that $W_3 \not \subset U_0$, which means that for odd $i$ we have $W_i \not \subset U_0$, so $w$ or $w'$ cannot be mapped to a vertex adjacent to $x_0,\dotsc,x_n$. And if $i$ is even, the above obstruction can only occur in the case $i=2$, however, by construction of $U_1$ we know that this cannot happen. Therefore, property~(\ref{itm:i}) holds.
We need to show that $T_{n+1}$, defined as the union of $U_0 \subset U_1 \subset \dotsb$, satisfies \ref{itm:1}--\ref{itm:4}. Property~\ref{itm:1} is immediate from~(\ref{itm:i}). Property~\ref{itm:2} follows from (\ref{itm:ii}) and the fact that we can always choose $x_{n+2}$ according to \ref{itm:5}. Property \ref{itm:3} is a consequence of (\ref{itm:iii}).
So the only property left to be checked is \ref{itm:4}.
Let $z$ be the center of $P$, i.e.\ the vertex between $y$ and $y'$. Note that, by construction of $T_{n+1}$, vertices in $\Phi(z)$ cannot have degree~$1$ in $T_{n+1}$. If some $z' \in \Phi(z)$ has degree~$3$ then $z'$ has degree~$3$ in some $U_i$. But this implies that $z$ has degree~$3$ in some $U_j$, which contradicts the fact that $z \notin W_j$ for any $j$.
\section{Taking the union}
We now define $T$ to be the union of the ascending sequence~$\{T_n\}$. We already mentioned that $\Delta(T)=3$. Note also that, by property~\ref{itm:3} of the sequence $\{T_n\}$, $x_i$ is a leaf of $T$ for all $i$ and property~\ref{itm:5} implies that $T$ has no other leaves. Finally, by property~\ref{itm:3}, $T$ is isomorphic to $T-x_i$ for all $i$. Indeed, since the isomorphisms $\phi_i^n\colon T_n \rightarrow T_n-x_i$ are ``nested'', we can combine them and define $\phi_i\colon T \rightarrow T-x_i$ by $\phi_i(y)=\phi_i^n(y)$, where $n$ is the smallest index satisfying $y \in T_n$. Hence, we have shown that $T$ is a tree with the desired properties.
\section{Further remarks}
The above construction seems to have a very high ``degree of freedom'', i.e.\ altering the construction one can obtain many pairwise nonisomorphic trees with the above property. It would be interesting to find out what additional properties a locally finite tree $T$ with $T \cong T-x$ for each leaf $x$ can have. In particular, we do not know whether such a tree $T$ must or can be isomorphic to $T-X$ for some \emph{infinite} set of leaves $X$. This is closely related to the problem of twin numbers and the solution could shed more light on ``paradoxical'' properties of graphs.
\section{Introduction}
The classical embedding in Sobolev spaces $H^{S}(\mathbb{R}^d)\subset \dot H^{r}(\mathbb{R}^d)$ for $0\leq r\leq S$ follows from
the interpolation inequality in homogeneous Sobolev spaces
\begin{equation}\label{GagliNiren}
\|D^r \varphi\|_{L^p(\mathbb{R}^d)}\leq C(r,S,p, d) \, \|\varphi\|_{L^2(\mathbb{R}^d)}^{1-\theta} \, \|D^{S} \varphi\|_{L^2(\mathbb{R}^d)}^{\theta},
\end{equation}
where $\varphi\in H^{S}(\mathbb{R}^d)$ and $D^s \varphi$ is defined by
$$
(\widehat {D^s \varphi}) (\xi)
= |\xi|^s \widehat{\varphi} (\xi).
$$
The inequality \eqref{GagliNiren} holds (see \cite{BM01}, Corollary 1.5 in \cite{HMOW11}, \cite{BM19}, or Theorem 2.44 in \cite{BCD}) provided that
\begin{itemize}
\item $\frac{1}{p}=\frac{1}{2}+ \frac{r-\theta S}{d},$
\item $\frac{r}{S}\leq \theta\leq 1$,
\item $0<r\leq S$, $p>1$.
\end{itemize}
We notice that in the endpoint case $p=2$, corresponding to $\theta=\frac{r}{S}$, we have
\begin{equation}\label{GagliNirenq4}
\|D^r \varphi\|_{L^2(\mathbb{R}^d)}\leq C(r,S,2, d)\|\varphi\|_{L^2(\mathbb{R}^d)}^{1-\frac{r}{S}} \, \|D^{S} \varphi\|_{L^2(\mathbb{R}^d)}^{\frac{r}{S}},
\qquad \forall \varphi\in H^{S}(\mathbb{R}^d),
\end{equation}
and hence the embedding $H^{S}\subset \dot H^{r}$ for $0\leq r\leq S$ is just a consequence of \eqref{GagliNirenq4}.
If we look at the endpoint cases $\theta=\frac{r}{S}$ and $\theta=1$ in \eqref{GagliNiren}, we obtain that the range of exponents $p$, without any symmetry or positivity assumption, fulfills
\begin{equation}\label{eq:rangep}
\begin{aligned}
& p\in[2,\frac{2d}{d-2(S-r)}] & \text{if } S-r<\frac{d}{2},\\
& p\in[2,\infty) & \text{if } S-r\geq \frac{d}{2}.
\end{aligned}
\end{equation}
We remark that the lower endpoint does not depend on dimension $d$.
Moreover, looking at \eqref{GagliNirenq4}, it is easy to prove that the best constant in \eqref{GagliNirenq4} is $C(r,S,2, d)=1$. Indeed, H\"older's inequality in frequency applied to the l.h.s. of \eqref{GagliNirenq4} gives $C(r,S,2, d)\leq 1$; on the other hand, setting $A_n=\left\{\xi \in \mathbb{R}^d \text{ s.t. }1-\frac{1}{n}<|\xi|<1+\frac{1}{n}\right\}$, it suffices to consider
a sequence $\varphi_n$ such that $\hat \varphi_n(\xi)=\mathbbm{1}_{A_n}(\xi)$ to prove that $C(r,S,2, d)=1$.
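In detail, the H\"older step reads
$$\|D^r \varphi\|_{L^2(\mathbb{R}^d)}^2=\int_{\mathbb{R}^d}|\xi|^{2r}|\hat \varphi(\xi)|^2d\xi=\int_{\mathbb{R}^d}\left(|\hat \varphi(\xi)|^2\right)^{1-\frac{r}{S}}\left(|\xi|^{2S}|\hat \varphi(\xi)|^2\right)^{\frac{r}{S}}d\xi\leq \|\varphi\|_{L^2(\mathbb{R}^d)}^{2(1-\frac{r}{S})}\|D^{S}\varphi\|_{L^2(\mathbb{R}^d)}^{\frac{2r}{S}},$$
by H\"older's inequality with exponents $\frac{S}{S-r}$ and $\frac{S}{r}$ together with Plancherel's theorem, while on the sequence $\varphi_n$ above all the integrals concentrate on $A_n$, where $|\xi|\to 1$, so that the ratio between the two sides of \eqref{GagliNirenq4} tends to $1$.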
In the sequel we consider $ r,S,d$ as fixed quantities and we aim to study the range of $p$ such that \eqref{GagliNiren} holds in case we restrict to \emph{radially symmetric} functions $\varphi$ in $H^{S}(\mathbb{R}^d)$ such that $D^r \varphi$ is not only radially symmetric but also either \emph{positive} or \emph{negative}.
We introduce the notation for $0<r<s$
\begin{equation}
\dot H^{s}_{rad}(\mathbb{R}^d):=\{ \varphi \in \dot H^s(\mathbb{R}^d), \ \ \varphi=\varphi(|x|) \},
\end{equation}
\begin{equation}
H^{s}_{rad}(\mathbb{R}^d):=\{ \varphi \in H^s(\mathbb{R}^d), \ \ \varphi=\varphi(|x|) \},
\end{equation}
\begin{equation}
H^{s,r}_{rad, +}(\mathbb{R}^d):=\{ \varphi \in H^s_{rad}(\mathbb{R}^d), \ \ \ D^r \varphi \geq 0 \},
\end{equation}
\begin{equation}
H^{s,r}_{rad, -}(\mathbb{R}^d):=\{ \varphi \in H^s_{rad}(\mathbb{R}^d), \ \ \ D^r \varphi \leq 0 \}.
\end{equation}
By the relation $ (\widehat {-\Delta \varphi}) (\xi)
= 4\pi^2 |\xi|^2 \widehat{\varphi} (\xi)= 4\pi^2(\widehat {D^2 \varphi}) (\xi)$ we shall emphasize that $H^{s,2}_{rad, +}(\mathbb{R}^d)$ corresponds to the set of \emph{superharmonic} radially symmetric functions belonging to $H^s(\mathbb{R}^d)$ while
$H^{s,2}_{rad, -}(\mathbb{R}^d)$ corresponds to the set of \emph{subharmonic} radially symmetric functions belonging to $H^s(\mathbb{R}^d)$. In the sequel, when $r\neq 2$, we will call the functions belonging to $H^{s,r}_{rad, +}(\mathbb{R}^d)$ \emph{fractional superharmonic} radially symmetric functions in $H^s(\mathbb{R}^d)$, and the functions belonging to $H^{s,r}_{rad, -}(\mathbb{R}^d)$ \emph{fractional subharmonic} radially symmetric functions in $H^s(\mathbb{R}^d)$.
The main questions we are interested in are the following:
Question A: Can we find appropriate values of $(r,S)$ such that $p$ can be chosen below $2$ in \eqref{GagliNiren} for fractional superharmonic (resp. subharmonic) functions belonging to $H^{S,r}_{rad, +}(\mathbb{R}^d)$?
\vspace{0.4cm}
Question B: If the answer of question A is positive, then can we expect a compact embedding of type
\begin{equation}\label{eq.cce1}
H^{S,r}_{rad, +}(\mathbb{R}^d) \subset \subset \dot H^r(\mathbb{R}^d)?
\end{equation}
In the sequel we will consider the case $\varphi \in H^{S,r}_{rad, +}(\mathbb{R}^d)$ but all the results are still valid if we consider $\varphi \in H^{S,r}_{rad, -}(\mathbb{R}^d)$.
The first result of the paper gives a positive answer to Question A.
\begin{thm}\label{thm:super}
Let $d\geq 2$ and $\frac 12 <r<\min(\frac{d}{2}, S-\frac 12)$, then
\begin{equation}\label{GagliNiren5}
\begin{aligned}
&\|D^r \varphi\|_{L^p(\mathbb{R}^d)}\leq C_{rad,+}(r,S,p, d) \, \|\varphi\|_{L^2(\mathbb{R}^d)}^{1-\theta} \, \|D^{S} \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}, \\
& \forall \varphi \ \in H^{S,r}_{rad,+}(\mathbb{R}^d) \,,
\end{aligned}
\end{equation}
with
\begin{align}
& p\in(p_0,\frac{2d}{d-2(S-r)}] & & \text{if } S-r<\frac{d}{2},\\
& p\in(p_0,\infty) & & \text{if } S-r\geq \frac{d}{2},
\end{align}
with $\theta$ fixed by the scaling equation
$$\frac{1}{p}=\frac{1}{2}+ \frac{r-\theta S}{d},$$ and $p_0<2$ is given by
$$p_{0}=\frac{d-2r+2(S-r)(d-1)}{-((S -r)-\frac 12)(d-2r)+2(S-r)(d-1)}.$$
\end{thm}
\begin{rem}
Theorem \ref{thm:super} holds also for $\varphi \ \in H^{S,r}_{rad,-}(\mathbb{R}^d)$. The crucial condition is that $D^r \varphi$ does not change sign.
\end{rem}
The constant $C_{rad,+}(r,S,p, d)$ in \eqref{GagliNiren5} is defined as the best constant in the case of functions belonging to $ H^{S,r}_{rad,+}(\mathbb{R}^d)$.
The fact that $p_0<2$ in the above Theorem implies $D^r \varphi \in L^{p}$ with $p \in (p_0,2)$ and this allows us to obtain also a positive answer to Question B.
\begin{thm}\label{thm:main2}
Let $d\geq 2$ and $\frac 12 <r_0<\min(\frac{d}{2}, S-\frac 12)$, then the embedding
$$H^{S,r_0}_{rad,+}(\mathbb{R}^d)\subset \subset \dot H^{r}_{rad}(\mathbb{R}^d)
$$
is compact for any $0<r<S.$
\end{thm}
\begin{rem}
Theorem \ref{thm:main2} holds also in $H^{S,r_0}_{rad,-}(\mathbb{R}^d)$. Clearly the main difficulty in Theorem \ref{thm:main2} is to prove that the embedding $H^{S,r_0}_{rad,+}(\mathbb{R}^d)\subset \subset \dot H^{r_0}_{rad}(\mathbb{R}^d)$ is compact; the compactness for $r\neq r_0$
will follow by interpolation.
\end{rem}
As a second byproduct we have also the following result concerning the existence of maximizers for the interpolation inequality \eqref{GagliNiren5} in case $p=2$.
\begin{thm}\label{thm:main3}
Let $d\geq 2$ and $\frac 12 <r<\min(\frac{d}{2}, S-\frac 12)$ then
$$\|D^r \varphi\|_{L^2(\mathbb{R}^d)}\leq C_{rad,+}(r,S,2, d)\|\varphi\|_{L^2(\mathbb{R}^d)}^{1-\frac{r}{S}} \, \|D^{S} \varphi\|_{L^2(\mathbb{R}^d)}^{\frac{r}{S}}, $$
$$ \forall \varphi \ \in H^{S,r}_{rad,+}(\mathbb{R}^d),$$
and the best constant $C_{rad,+}(r,S,2, d)$ is attained and $C_{rad,+}(r,S,2, d)<1.$
\end{thm}
The strategy to prove Theorem \ref{thm:super}, and as a byproduct the compactness result given in Theorem \ref{thm:main2}, is to rewrite \eqref{GagliNiren} involving $L^2$ norms of Riesz potentials when $0<r<d$. By defining $u=D^r \varphi$ we obtain
\begin{equation}\label{GagliNiren2}
\| u\|_{L^p(\mathbb{R}^d)}\leq C(\alpha,s,p, d) \, \|\frac{1}{|x|^{\alpha}}\star u\|_{L^2(\mathbb{R}^d)}^{1-\theta} \, \|D^s u\|_{L^2(\mathbb{R}^d)}^{\theta}
\end{equation}
where $\alpha=d-r$, $s=S-r$. With respect to the new variables $\alpha, s$ we get without any symmetry or positivity assumption
\begin{equation}\label{eq:rangepm}
\begin{aligned}
& p\in[2, \frac{2d}{d-2s}] & & \text{if } s<\frac{d}{2},\\
& p\in[2, \infty) & & \text{if } s \geq \frac{d}{2}.
\end{aligned}
\end{equation}
If one considers functions fulfilling $D^r \varphi=u\geq 0$, inequality \eqref{GagliNiren2} is hence equivalent to the following inequality
\begin{equation}\label{GagliNiren3}
\| u\|_{L^p(\mathbb{R}^d)}\leq C(\alpha,s,p, d) \, \|\frac{1}{|x|^{\alpha}}\star |u| \|_{L^2(\mathbb{R}^d)}^{1-\theta} \, \|D^s u\|_{L^2(\mathbb{R}^d)}^{\theta}
\end{equation}
considering $|u|$ instead of $u$ in the Riesz potential. The strategy is hence to prove that \emph{the radial symmetry} increases the range of $p$ for which
\eqref{GagliNiren3} holds, and therefore, as a byproduct, the range of $p$ for which
\eqref{GagliNiren2} holds when $D^r \varphi=u$ is \emph{positive and radially symmetric} (resp. \emph{negative}). In particular we will show that the lower endpoint is allowed to be below $p=2$. A reasonable idea to prove that the lower endpoint exponent in \eqref{GagliNiren3} decreases with radial symmetry is to look at a suitable pointwise decay in the spirit of the Strauss lemma \cite{S77} (see also \cite{SS00,SS12} for Besov and Lizorkin-Triebel classes). In our context, where two terms are present, the Sobolev norm and the Riesz potential involving $|u|$, we have been inspired by \cite{MVS}, where the case $s=1$ in \eqref{GagliNiren3} has been studied (see also \cite{BGO} and \cite{BGMMV}). For our purposes, the fact that $s$ is in general not an integer makes, however, the strategy completely different from the one in \cite{MVS}, and we need to estimate the decay of the high and low frequency parts of the function separately. To this aim we compute the high frequency part using the explicit formula for the Fourier transform of radially symmetric functions involving Bessel functions, in the spirit of \cite{CO}, while we use a weighted $L^1$ norm to compute the decay of the low frequency part. The importance of a pointwise decay for the low frequency part involving weighted $L^p$ norms goes back to \cite{D}, and we need to adapt it to our case in order to involve the Riesz potential.
Here is the step where \emph{positivity} is crucial. Indeed, if one is interested in showing a scaling invariant weighted inequality such as
\begin{equation}\label{eq:scalinv}
\int_{\mathbb{R}^d} \frac{|u(x)|}{|x|^{\gamma}}dx \leq C \| \frac{1}{|x|^{\alpha}}\star |u| \| _{L^2(\mathbb{R}^d)}
\end{equation}
a scaling argument forces the exponent $\gamma$ to verify the relation $\gamma=\alpha-\frac{d}{2}$. Unfortunately \eqref{eq:scalinv} cannot hold in the whole Euclidean space, by a general argument that goes back to \cite{MVS} and \cite{R}. However, a scaling invariant inequality like \eqref{eq:scalinv} restricted to balls and to complements of balls is enough for our purposes. Eventually, using all these tools, we are able to compute a pointwise decay that allows the lower endpoint for \eqref{GagliNiren3} to be below the threshold $p=2$.
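Let us justify the scaling constraint mentioned above: replacing $u(x)$ by $u(\lambda x)$, $\lambda>0$, a change of variables gives
$$\int_{\mathbb{R}^d} \frac{|u(\lambda x)|}{|x|^{\gamma}}dx=\lambda^{\gamma-d}\int_{\mathbb{R}^d} \frac{|u(y)|}{|y|^{\gamma}}dy, \qquad \Big\|\frac{1}{|x|^{\alpha}}\star |u(\lambda\,\cdot)|\Big\|_{L^2(\mathbb{R}^d)}=\lambda^{\alpha-\frac{3}{2}d}\Big\|\frac{1}{|x|^{\alpha}}\star |u|\Big\|_{L^2(\mathbb{R}^d)},$$
so the two sides of \eqref{eq:scalinv} scale in the same way exactly when $\gamma-d=\alpha-\frac{3}{2}d$, that is $\gamma=\alpha-\frac{d}{2}$.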
Once the pointwise decay has been computed, we will follow the argument in \cite{BGO} to estimate the lower endpoint for fractional superharmonic (resp. subharmonic) radially symmetric functions.
Concerning the compactness, we prove that if $\varphi_n$ is a bounded sequence in $H^{S,r}_{rad, +}$ then, up to a subsequence, $\varphi_n \to \varphi$ in $\dot{H}^r$ for $r>0$. Our strategy is to prove the smallness of $\|D^r(\varphi_n - \varphi)\|_{L^2(B_\rho)}$ and $\|D^r(\varphi_n - \varphi)\|_{L^2(B_\rho^c)}$ for a suitable choice of the ball $B_\rho.$ For the first term we use a Rellich-Kondrachov argument combined with commutator estimates, while for the exterior domain we use the crucial fact that $D^r(\varphi_n - \varphi)$ is in $L^p(|x|>\rho)$ for some $p \in (1,2).$
Looking at the case $r=0$, by Rellich-Kondrachov we have $\|\varphi_n - \varphi\|_{L^2(B_\rho)} =o(1)$; however, we cannot obtain the smallness on the complement $B_\rho^c$ of the ball, so the requirement $r>0$ seems to be optimal.
It is interesting to look at the lower endpoint exponent $p_0$ given in Theorem \ref{thm:super} in the case of radially symmetric superharmonic (or subharmonic) functions, namely when $r=2$. In this case the condition $\frac 12 <r<\min(\frac{d}{2}, S-\frac 12)$ forces us to consider $d\geq 5$ and $S>\frac{5}{2}$. As an example we show in Figure \ref{G1} the graph of the function $p_0(S)$, which is now a function of $S$ only, in the lowest dimensional case $d=5$; it is a branch of a hyperbola with asymptote $p_\infty=\lim_{S\rightarrow \infty} p_0(S)=8/7.$ It is interesting to note how higher regularity improves the lower endpoint $p_0(S)$.
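For the record, the formula in Figure \ref{G1} follows by direct substitution of $r=2$, $d=5$ into the expression for $p_0$ given in Theorem \ref{thm:super}:
$$p_0(S)=\frac{d-2r+2(S-r)(d-1)}{-((S-r)-\frac 12)(d-2r)+2(S-r)(d-1)}=\frac{1+8(S-2)}{-(S-\frac{5}{2})+8(S-2)}=\frac{8S-15}{7S-\frac{27}{2}}=\frac{16S-30}{14S-27},$$
which indeed tends to $\frac{8}{7}$ as $S\to\infty$.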
As a final comment we notice that for $d\geq 2$, if $D^2\varphi\geq 0$ then $D^{\frac{3}{4}}\varphi=D^{-{\frac{5}{4}}}\left(D^2 \varphi \right)\geq 0$; hence, taking $r_0=3/4$ and using the positivity of the Riesz kernel of $D^{-{\frac{5}{4}}},$ we can apply Theorem \ref{thm:main2} and obtain the following corollary.
\begin{cor}
Let $\varphi_n$ be a sequence of radially symmetric superharmonic functions uniformly bounded in $H^2(\mathbb{R}^d)$, $d\geq 2$. Then for any $0<r<2$, up to subsequence
$\varphi_n \to \varphi$ in $\dot H^r(\mathbb{R}^d)$.
\end{cor}
\begin{figure}
\includegraphics[width=10cm]{gama4}\\
\caption{The graph of the function $p_0(S) = (16 S-30)/(14 S-27)$ in the case of superharmonic or subharmonic functions. Here $r=2,d=5$ and $S>5/2.$}\label{G1}
\end{figure}
\section{Interpolation inequalities for radial functions involving Riesz potentials}
Let $d\geq 2$, $0<\alpha<d$ and $s>\frac 12$; we define
$$X=X_{s,\alpha,d}=\left\{u \in \dot H^s_{rad }(\mathbb{R}^d), \ \ \left\|\frac{1}{|x|^{\alpha}}\star |u|\right\|_{L^2}<+\infty \ \right\}.$$
The aim of this section is to prove the following
\begin{thm}\label{thm:main}
Let $u\in X$ with $d\geq 2$, $s>\frac 12$, $\frac{d}{2}<\alpha<d-\frac 12$, then $u \in L^p(\mathbb{R}^d)$ with
\begin{align*}
& p\in(p_{rad}, \frac{2d}{d-2s}] & & \text{if } s<\frac{d}{2},\\
& p\in(p_{rad}, \infty) & & \text{if } s \geq \frac{d}{2}.
\end{align*}
where $p_{rad}<2$ with
$$p_{rad}=\frac{2(\alpha-\frac{d}{2})+2s(d-1)}{-(2s-1)(\alpha-\frac{d}{2})+2s(d-1)}.$$
\noindent Moreover, we have the scaling invariant inequality for $u\in X$
$$\| u\|_{L^p(\mathbb{R}^d)}\leq C(\alpha, s, p, d) \, \|\frac{1}{|x|^{\alpha}}\star |u|\|_{L^2(\mathbb{R}^d)}^{1-\theta} \, \|D^s u \|_{L^2(\mathbb{R}^d)}^{\theta},$$
with $p\in(p_{rad},\frac{2d}{d-2s}]$ if $s<\frac{d}{2}$ and $p\in(p_{rad},\infty)$ if $s\geq \frac{d}{2}$. Here $\theta$ is fixed by the scaling invariance
$$\frac{d}{p}=(1-\theta)((d-\alpha)+\frac{d}{2})+\theta(-s+\frac{d}{2}).$$
\end{thm}
In order to show Theorem \ref{thm:main} we need to prove some preliminary results.
\begin{prop}\label{prop:scalinv}Let $d\geq 1$, $q>1$, $\frac{d}{q}<\alpha<d$, $\delta>0$, then there exists $C>0$ such that
\begin{equation}\label{eq:dalbass1}
\int_{B_R(0)^c} \frac{|u(x)|}{|x|^{\alpha-\frac{d}{q}+\delta}}dx \leq \frac{C}{R^{\delta}} ||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^q(\mathbb{R}^d)}
\end{equation}
\begin{equation}\label{eq:dalbass2}
\int_{B_R(0)} \frac{|u(x)|}{|x|^{\alpha-\frac{d}{q}-\delta}}dx \leq C R^{\delta} ||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^q(\mathbb{R}^d)}.
\end{equation}
\end{prop}
The proposition for $q=2$ has been proved in \cite{MVS}; we follow the same argument for $q>1$. In order to prove Proposition \ref{prop:scalinv} two crucial lemmas are necessary.
\begin{lem}\label{lem:important}
Let $d\geq 1$, $q\geq 1$, $0<\alpha<d$, then there exists $C>0$ such that for any $a\in \mathbb{R}^d$
$$\int_0^{\infty} \left(\fint_{B_{\rho}(a)} |u(y)|dy\right)^q\rho^{(d-\alpha) q+d-1}d\rho\leq C||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^q(\mathbb{R}^d)}^q.$$
\end{lem}
\begin{proof}
Let us take $x\in \mathcal{A_{\rho}}=B_{\rho}(a) \setminus B_{\frac{\rho}{2}}(a)$, then
$$\frac{1}{|x|^{\alpha}}\star |u|(x)=\int_{\mathbb{R}^d}\frac{|u(y)|}{|x-y|^{\alpha}}dy\geq $$
$$ \geq \int_{B_{\rho}(a) }\frac{|u(y)|}{|x-y|^{\alpha}}dy \geq C \rho^{d-\alpha}\fint_{B_{\rho}(a)} |u(y)|dy.$$
Thus we obtain for $x\in \mathcal{A_{\rho}}$
$$\left(\frac{1}{|x|^{\alpha}}\star |u|(x)\right)^q\geq C \rho^{(d-\alpha) q}\left(\fint_{B_{\rho}(a)} |u(y)|dy\right)^q$$
and hence
$$\int_{\mathcal{A_{\rho}}}\left(\frac{1}{|x|^{\alpha}}\star |u|(x)\right)^q dx \geq C \rho^{(d-\alpha) q+d}\left(\fint_{B_{\rho}(a)} |u(y)|dy\right)^q. $$
By integration we conclude that
$$\int_{0}^{\infty} \rho^{(d-\alpha) q+d-1}\left(\fint_{B_{\rho}(a)} |u(y)|dy\right)^q d\rho \leq $$ $$ \leq C \int_{0}^{\infty} \left(\int_{\mathcal{A_{\rho}}}\left(\frac{1}{|x|^{\alpha}}\star |u|(x)\right)^q dx \right) \frac{d \rho}{\rho}=C ||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^q(\mathbb{R}^d)}^q,$$
where the last equality (with a different constant $C$) follows from Fubini's theorem, since $\int_0^{\infty} \mathbbm{1}_{\mathcal{A}_{\rho}}(x)\,\frac{d\rho}{\rho}=\int_{|x-a|}^{2|x-a|}\frac{d\rho}{\rho}=\ln 2$ for every $x\neq a$.
\end{proof}
Let us call $W(\rho)=\int_{\rho}^{\infty}w(s)ds$ where $w:(0, \infty)\rightarrow \mathbb{R}$ is a measurable function
such that
\begin{equation}
\int_0^{\infty} |w(\rho)|^{\frac{q}{q-1}}\rho^{\frac{\alpha q+1-d}{q-1}}d\rho<+\infty.
\end{equation}
\begin{lem}\label{lem:important2}
Let $d\geq 1$, $q> 1$, $0<\alpha<d$, then
$$|\int_{\mathbb{R}^d} |u(x)| W(|x|)dx |\lesssim $$ $$ \left(\int_0^{\infty} |w(\rho)|^{\frac{q}{q-1}}\rho^{\frac{\alpha q+1-d}{q-1}}d\rho\right)^{\frac{q-1}{q}}\left(\int_0^{\infty} \left(\fint_{B_{\rho}(0)} |u(y)|dy\right)^q\rho^{(d-\alpha) q+d-1}d\rho\right)^{\frac{1}{q}},$$
and hence
\begin{equation}\label{eq:dalbass}
|\int_{\mathbb{R}^d} |u(x)| W(|x|)dx| \leq C ||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^q(\mathbb{R}^d)}.
\end{equation}
\end{lem}
\begin{proof}
We have, thanks to Fubini's theorem,
$$ \int_{\mathbb{R}^d} |u(x)| W(|x|)dx=\int_{\mathbb{R}^d} |u(x)|\left(\int_{|x|}^{\infty}w(\rho)d\rho\right) dx=$$ $$=C \int_0^{\infty}w(\rho)\rho^d \left(\fint_{B_{\rho}(0)} |u(y)|dy\right) d\rho$$
such that by H\"older's inequality we obtain
$$|\int_{\mathbb{R}^d} |u(x)| W(|x|)dx| =C|\int_0^{\infty}w(\rho)\rho^{d-\beta} \left(\fint_{B_{\rho}(0)} |u(y)|dy\right) \rho^{\beta}d\rho | \lesssim $$
$$ \left(\int_0^{\infty} |w(\rho)|^{\frac{q}{q-1}}\rho^{\frac{\alpha q+1-d}{q-1}}d\rho\right)^{\frac{q-1}{q}}\left(\int_0^{\infty} \left(\fint_{B_{\rho}(0)} |u(y)|dy\right)^q\rho^{(d-\alpha) q+d-1}d\rho\right)^{\frac{1}{q}},$$
choosing $\beta$ such that $\beta q=(d-\alpha) q+d-1$; note that with this choice $(d-\beta)\frac{q}{q-1}=\frac{\alpha q+1-d}{q-1}$, which is the exponent appearing in the first factor.
Estimate \eqref{eq:dalbass} then follows from Lemma \ref{lem:important}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:scalinv}]
If we choose
\begin{equation*}
w(\rho)=\left\{
\begin{array}{ll}
0, & \hbox{if $0<\rho<R$;} \\
\frac{1}{\rho^{\alpha-\frac{d}{q}+1+\delta}}, & \hbox{if $\rho>R$.}
\end{array}
\right.
\end{equation*}
thanks to Lemma \ref{lem:important2} we get \eqref{eq:dalbass1}. In order to get \eqref{eq:dalbass2} it is enough to choose
\begin{equation*}
w(\rho)=\left\{
\begin{array}{ll}
0, & \hbox{if $\rho>R$;} \\
\frac{1}{\rho^{\alpha-\frac{d}{q}+1-\delta}}, & \hbox{if $0<\rho<R$.}
\end{array}
\right.
\end{equation*}
\end{proof}
\begin{lem}\label{lem:decayy2}
Let $d\geq 1$, $\frac{d}{2}<\alpha<d$ and $||D^s u||_{L^2(\mathbb{R}^d)}=||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}=1$, then for any $\delta>0$ such that $0<\delta<d-\alpha$,
$$\int_{\mathbb{R}^d} \frac{|u(x)|}{|x|^{\alpha-\frac{d}{2}+\delta}}dx \leq C(\alpha, s,\delta, d).$$
\end{lem}
\begin{proof}
Let $0<\epsilon<\frac{d}{2}$ be a number to be fixed later. We have
$$\int_{B(0,1)}\frac{|u(x)|}{|x|^{\alpha-\frac{d}{2}+\delta}}dx=\int_{B(0,1)}\frac{|u(x)|}{|x|^{\alpha-\frac{d}{2}+\delta-\epsilon}}\frac{1}{|x|^{\epsilon}}dx\leq $$ $$ \leq c_{d,\epsilon} \left(\int_{B(0,1)}\frac{|u(x)|^2}{|x|^{2(\alpha-\frac{d}{2}+\delta-\epsilon)}}\right)^{\frac 12},$$
where $c_{d,\epsilon}=\left(\int_{B(0,1)}\frac{1}{|x|^{2\epsilon}}dx\right)^{\frac{1}{2}}$. Now choose $\epsilon=\alpha-\frac{d}{2}+\delta$. Notice that $\epsilon<\frac{d}{2}$, since $\delta<d-\alpha$, so that
$$\int_{B(0,1)}\frac{|u(x)|}{|x|^{\alpha-\frac{d}{2}+\delta}}dx\leq c_{d,\epsilon} \left(\int_{B(0,1)}|u(x)|^2 dx\right)^{\frac 12}.$$
which implies
$$\int_{B(0,1)}\frac{|u(x)|}{|x|^{\alpha-\frac{d}{2}+\delta}}dx\lesssim 1.$$
On the other hand by Proposition \ref{prop:scalinv}, when $\frac{d}{2}<\alpha<d$
$$\int_{B(0,1)^c} \frac{|u(x)|}{|x|^{\alpha-\frac{d}{2}+\delta}}dx \leq C ||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}$$
and hence we obtain the claim.\end{proof}
The next Proposition, concerning pointwise decay for radial functions in $X$, follows the strategy of Theorem 3.1 in \cite{D}. We will decompose the function into high and low frequency parts, estimating the high frequency part by means of the Sobolev norm, while controlling the low frequency part by means of the Riesz potential norm.
\begin{prop}\label{prop:tutto}
Let $d \geq 2,$ $u$ be a radial function in $X$ with $s>\frac 12$, $\frac{d}{2}<\alpha<d$, and
\begin{equation}\label{eq.conJ1}
||D^s u||_{L^2(\mathbb{R}^d)}=||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}=1.
\end{equation}
Then for any $\sigma$ satisfying
\begin{equation}\label{eq.siim1}
\frac{2s\left(\frac{d}{2}-1\right) + \left(\frac{d}{2} \right) }{2s+1} < \sigma < \frac{2s(d-1)- (2s-1) \left(\alpha - \frac{d}{2} \right) }{2s+1}
\end{equation}
we have
$$|u(x)|\leq C(\alpha, s,\sigma, d)|x|^{-\sigma}.$$
\end{prop}
\begin{rem}
It is easy to see that the above Proposition is equivalent to the following statement.
Let $u$ be a radial function in $X$ with $s>\frac 12$, $\frac{d}{2}<\alpha<d$, and
\begin{equation}\label{eq.conJm1}
||D^s u||_{L^2(\mathbb{R}^d)}=||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}=1
\end{equation}
then for any $\delta>0$ such that $0<\delta<d-\alpha$,
$$|u(x)|\leq C(\alpha, s,\delta, d)|x|^{-\sigma}$$
with
\begin{equation}\label{e.sig1}
\sigma=\frac{-(2s-1)(\alpha-\frac{d}{2}+\delta)+2s(d-1)}{2s+1}.
\end{equation}
\end{rem}
\begin{proof}
For any $R>1$ we can take a function $\psi_R(x)=R^{-d}\psi(x/R)$ such that $\widehat{\psi}(\xi)$ is a radial nonnegative function with support in $|\xi| \leq 2$ and $\widehat{\psi}(\xi)=1$ for $|\xi|\leq 1$; then we decompose $u$ into low and high frequency parts as follows
$$u(x)=\psi_R \star u(x)+ h(x)$$
where $\hat h(\xi)=(1-\hat \psi(R|\xi|))\hat u(\xi)$.
For the high frequency part
we will use
Fourier representation for radial functions in $\mathbb{R}^d$ (identifying the function with its profile)
\begin{equation}\label{eq:rap}
h(x)=(2\pi)^{\frac{d}{2}}|x|^{-\frac{d-2}{2}}\int_0^{\infty} J_{\frac{d-2}{2}}(|x|\rho) (1-\hat{\psi}(R\rho))\hat u (\rho)\rho^{\frac{d}{2}}d\rho
\end{equation}
where $J_{\frac{d-2}{2}}$ is the Bessel function of order $\frac{d-2}{2}.$
Applying the results in \cite{CO} and \cite{D}, we find
\begin{equation}\label{claim B}
|h(x)|\leq c R^{s-\frac 12}|x|^{-\frac 12(d-1)}||u||_{ \dot H^s(\mathbb{R}^d)}, \ s > \frac{1}{2}.
\end{equation}
Indeed, using the uniform bound
$$ |J_{\frac{d-2}{2}}(\rho)| \lesssim (1+\rho)^{-1/2}, $$
we get
$$ |h(x)|\lesssim |x|^{-\frac{d-2}{2}}\int_0^{\infty} |J_{\frac{d-2}{2}}(|x|\rho)| \,|1-\hat{\psi}(R\rho)|\, |\hat u (\rho)|\rho^{\frac{d}{2}}d\rho \lesssim $$
$$ |x|^{-\frac{d-2}{2}}\left(\int_{1/R}^{\infty} |J_{\frac{d-2}{2}}(|x|\rho)|^2 \frac{ d \rho}{\rho^{2s-1}} \right)^{1/2} \left(\int_0^{\infty} |\hat u (\rho)|^2 \rho^{2s + d-1}d\rho\right)^{1/2} \lesssim$$
$$ |x|^{-\frac{d-2}{2}}R ^{s-1}\left(\int_{1}^{\infty} (1+|x|\rho/R)^{-1} \frac{ d \rho}{\rho^{2s-1}} \right)^{1/2} \|u\|_{\dot{H}^s(\mathbb{R}^d)} \lesssim $$ $$ \lesssim R^{s-1/2} |x|^{-\frac{d-1}{2}} \|u\|_{\dot{H}^s(\mathbb{R}^d)} $$
and this gives \eqref{claim B}.
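In the last step we used $(1+|x|\rho/R)^{-1}\leq \frac{R}{|x|\rho}$ together with $s>\frac{1}{2}$, so that
$$\int_{1}^{\infty} (1+|x|\rho/R)^{-1} \frac{ d \rho}{\rho^{2s-1}}\leq \frac{R}{|x|}\int_{1}^{\infty}\frac{d\rho}{\rho^{2s}}=\frac{1}{2s-1}\,\frac{R}{|x|}.$$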
For the low frequency term
$\psi_R \star u(x)$, since $\psi \in \mathcal{S}\left(\mathbb{R}^{d}\right)$, we can take any $\gamma>d-1$ (as required by Lemma \ref{l.dN1} below) so that there exists $C>0$ such that
$$
|\psi(x)| \leq C\left(1+|x|^{2}\right)^{-\gamma / 2}.
$$
We shall need the following estimate, which can be found also in \cite{DL} and \cite{D}. For the sake of completeness we give an alternative proof of the Lemma in the Appendix.
\begin{lem}\label{l.dN1} If
$ b \in (-d+1, 0), \gamma > d-1,$ then for any radially symmetric function $f(|y|)$ we have
\begin{equation}\label{eq.dN1}
\left| \int_{\mathbb{R}^d} \frac{f(|y|) dy}{(1+|x-y|^2)^{\gamma/2}} \right| \lesssim \frac{1}{|x|^{d-1+b}} \left\| |y|^b f\right\|_{L^1(\mathbb{R}^d)}.
\end{equation}
\end{lem}
Then we estimate $\psi_R \star u(x)$ as follows,
$$
\begin{aligned}
|\psi_R \star u(x)| & \leq (\left|\psi_{R}\right| *|u|)(x) \leq C \int_{\mathbb{R}^{d}} \frac{1}{R^{d}} \frac{|u(y)|}{\left(1+\left|\frac{x-y}{R}\right|^{2}\right)^{\gamma / 2}} d y \\
& \leq C \int_{\mathbb{R}^{d}} \frac{|u(R z)|}{\left(1+\left|\frac{x}{R}-z\right|^{2}\right)^{\gamma / 2}} d z \quad(y=R z) .
\end{aligned}
$$
To this end we plan to apply Lemma \ref{l.dN1} assuming $b=-(\alpha-d/2+\delta)$. To check the assumptions of the Lemma we use the inequalities
$$ \alpha - \frac{d}{2}+\delta < \frac{d}{2} \leq d-1$$ for $d \geq 2.$ Applying Lemma \ref{l.dN1} we deduce
$$
\begin{aligned}
&|\psi_R \star u(x)|\\
& \leq C\left|\frac{x}{R}\right|^{-(d-1+b)} \int_{\mathbb{R}^{d}}|u(R z)||z|^{b} d z \\
& \leq C R^{(d-1+b)}|x|^{-(d-1+b)}\int_{\mathbb{R}^{d}}|u(y)|\left|\frac{y}{R}\right|^{b} \frac{d y}{R^{d}} \\
& \leq C R^{-1 }|x|^{-(d-1+b)}\||y|^{b}u\|_{L^{1}(\mathbb{R}^d)}.
\end{aligned}
$$
Therefore, collecting our estimates and using the condition \eqref{eq.conJ1}, we find
$$
\begin{aligned}
|u(x)| & \leq C\left[|x|^{-(d-1) / 2} R^{s-1 / 2}+|x|^{-(d-1+b) } R^{-1 }\||y|^b u\|_{L^{1}(\mathbb{R}^d)}\right].
\end{aligned}
$$
We use Lemma \ref{lem:decayy2} and we get
$$
\begin{aligned}
|u(x)| & \leq C\left[|x|^{-(d-1) / 2} R^{s-1 / 2}+|x|^{-(d-1+b) } R^{-1 }\right].
\end{aligned}
$$
Minimizing in $R$ or equivalently choosing $R>0$ so that
$$|x|^{-(d-1) / 2} R^{s-1 / 2} = |x|^{-(d-1+b) } R^{-1 },$$
i.e.
$$ R^{s+1/2} = |x|^{-b - (d-1)/2},$$
we find
\begin{equation*}\label{denap1}
|u(x)|\leq C(d,s,\alpha,\delta)|x|^{-\sigma},
\end{equation*}
where $\sigma$ is defined in \eqref{e.sig1}.
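Explicitly, recalling $b=-(\alpha-\frac{d}{2}+\delta)$, this choice of $R$ gives
$$ |x|^{-\frac{d-1}{2}} R^{s-\frac{1}{2}} = |x|^{-\frac{d-1}{2}+\frac{(2s-1)\left(\alpha-\frac{d}{2}+\delta-\frac{d-1}{2}\right)}{2s+1}} = |x|^{-\frac{2s(d-1)-(2s-1)\left(\alpha-\frac{d}{2}+\delta\right)}{2s+1}} = |x|^{-\sigma}. $$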
This completes the proof.
\end{proof}
With all these preliminary results we are now ready to prove Theorem \ref{thm:main}.
\begin{proof}
Let $u\in X$ with $||D^s u||_{L^2(\mathbb{R}^d)}=||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}=1$. Then, by Proposition \ref{prop:tutto},
$$|u(x)|\leq C(d,s,\alpha,\delta)|x|^{-\sigma}$$
with
$$\sigma=\frac{-(2s-1)(\alpha-\frac{d}{2}+\delta)+2s(d-1)}{2s+1}.$$
We aim to show that
$p_{rad}<2$, where $p=2$ is the lower endpoint for \eqref{GagliNiren2}. Therefore it suffices to show that $\int_{|x|>1}|u|^pdx<+\infty$
provided that $u \in X$ and $p_{rad}<p$ (indeed $\int_{|x|\leq 1}|u|^pdx<+\infty$ for all $0<p<2$ by interpolation). \\
We have, thanks to Proposition \ref{prop:tutto} and Lemma \ref{lem:decayy2},
\begin{equation}
\int_{|x|>1}|u||u|^{p-1}dx\lesssim \int_{|x|>1}\frac{|u|}{|x|^{\sigma(p-1)}}dx\lesssim 1
\end{equation}
provided that $\sigma(p-1)>\alpha-\frac{d}{2}$. Recalling that $\sigma$ is defined in \eqref{e.sig1} and letting $\delta \rightarrow 0$, this condition is equivalent to
$$p>\frac{\sigma + \alpha-\frac{d}{2}}{\sigma}=\frac{2(\alpha-\frac{d}{2})+2s(d-1)}{-(2s-1)(\alpha-\frac{d}{2})+2s(d-1)}:=p_{rad}.$$
An elementary computation shows that $p_{rad}<2$ provided that $\frac{d}{2}<\alpha<{d-\frac12}$.
Now consider an arbitrary $v\in X$ and set $u(x)=\lambda v(\mu x)$, where the parameters $\lambda, \mu>0$ are chosen such that
$||D^s u||_{L^2(\mathbb{R}^d)}=||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}=1$. By scaling we have
$$1=||D^s u||_{L^2(\mathbb{R}^d)}=\lambda \mu^{s-\frac{d}{2}}||D^s v||_{L^2(\mathbb{R}^d)}$$
$$1=||\frac{1}{|x|^{\alpha}}\star |u| ||_{L^2(\mathbb{R}^d)}=\lambda \mu^{\alpha-\frac{3}{2}d}||\frac{1}{|x|^{\alpha}}\star |v| ||_{L^2(\mathbb{R}^d)}$$
and hence we obtain the relations
$$\mu=\left(\frac{||D^s v||_{L^2(\mathbb{R}^d)}}{||\frac{1}{|x|^{\alpha}}\star |v| ||_{L^2(\mathbb{R}^d)}}\right)^{\frac{1}{\alpha-s-d}}, \ \ \lambda=\frac{||\frac{1}{|x|^{\alpha}}\star |v| ||_{L^2(\mathbb{R}^d)}^{\frac{s-\frac{d}{2}}{\alpha-d-s}}}{||D^s v||_{L^2(\mathbb{R}^d)}^{\frac{\alpha-\frac{3}{2}d}{\alpha-d-s}}}.$$
By the previous estimates we have
$$||u||_{L^p(\mathbb{R}^d)}=\lambda \mu^{-\frac{d}{p}}||v||_{L^p(\mathbb{R}^d)}\lesssim 1$$
which implies
$$||v||_{L^p(\mathbb{R}^d)}\lesssim \lambda^{-1}\mu ^{\frac{d}{p}} = ||D^s v||_{L^2(\mathbb{R}^d)}^{\theta} \|\frac{1}{|x|^{\alpha}}\star |v|\|_{L^2(\mathbb{R}^d)}^{1-\theta},$$
where
$$\theta = \frac{3dp-2\alpha p-2d}{2p(d+s-\alpha)}, \ \ 1-\theta = \frac{(2s-d)p+2d}{2p(d+s-\alpha)}.$$
It is easy to see that
$\theta$ is fixed by the scaling invariance
$$\frac{d}{p}=(1-\theta)((d-\alpha)+\frac{d}{2})+\theta(-s+\frac{d}{2}).$$
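Indeed, solving the scaling identity for $\theta$ yields
$$ \frac{d}{p}=\frac{3d}{2}-\alpha+\theta\,(\alpha-s-d)
\quad\Longrightarrow\quad
\theta=\frac{\frac{3d}{2}-\alpha-\frac{d}{p}}{d+s-\alpha}=\frac{3dp-2\alpha p-2d}{2p(d+s-\alpha)}. $$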
\end{proof}
\section{Proof of Theorem \ref{thm:super}}
Our goal is to represent $\varphi$ in the form $\varphi = \frac{1}{|x|^{\alpha}}\star u = c D^{-r} u, $ with $\alpha = d-r, $
$ c = \frac{\pi^{d / 2} \Gamma((d-\alpha) / 2)}{ \Gamma(\alpha / 2)}$
and apply Theorem \ref{thm:main}.
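Here we use the standard Fourier description of the Riesz potential: with the constant $c$ above, $\widehat{|x|^{-\alpha}}(\xi)=c\,|\xi|^{\alpha-d}$, so that
$$ \widehat{\Big(\tfrac{1}{|x|^{\alpha}}\star u\Big)}(\xi)=c\,|\xi|^{-(d-\alpha)}\,\widehat{u}(\xi)=c\,|\xi|^{-r}\,\widehat{u}(\xi), \qquad r=d-\alpha, $$
which is precisely the identity $\frac{1}{|x|^{\alpha}}\star u=c\,D^{-r}u$.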
Therefore, we choose (up to a constant) $u = D^r \varphi.$
Then the estimate of Theorem \ref{thm:main} gives
$$ \|D^r \varphi\|_{L^p(\mathbb{R}^d)} = \| u\|_{L^p(\mathbb{R}^d)}\lesssim \, \|\frac{1}{|x|^{\alpha}}\star |u|\|_{L^2(\mathbb{R}^d)}^{1-\theta} \, \|D^s u \|_{L^2(\mathbb{R}^d)}^{\theta} =$$
$$ = \left\| D^{-r} |D^{r} \varphi| \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}.$$
By the assumption
\begin{equation}\label{eq.pos1}
D^{r} \varphi( x) \geq 0
\end{equation}
for almost every $x \in \mathbb{R}^d$, we deduce
$$ \left\| D^{-r} |D^{r} \varphi| \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}= \left\| D^{-r} D^{r} \varphi \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}=\left\| \varphi \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}$$ and we obtain \eqref{GagliNiren5}. The lower endpoint $p_0$ is hence nothing but $p_{rad}$ of Theorem \ref{thm:main} after substituting $\alpha$ with $d-r$ and $s$ with $S-r$.
The condition $\frac 12 <r<\min(\frac{d}{2}, S-\frac 12)$ is equivalent to the conditions $\frac{d}{2}<\alpha<d-\frac 12$, $s>\frac 12$ of Theorem \ref{thm:main}. All these estimates
remain valid if we consider $D^{r} \varphi( x) \leq 0$, i.e.\ if $\varphi \in H^{s,r}_{rad, -}(\mathbb{R}^d)$. Indeed, if $\varphi \in H^{s,r}_{rad, -}(\mathbb{R}^d)$, then
$$ \left\| D^{-r} |D^{r} \varphi| \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}= $$ $$ = \left\| -D^{-r} D^{r} \varphi \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}=\left\| \varphi \right\|_{L^2(\mathbb{R}^d)}^{1-\theta} \|D^S \varphi\|_{L^2(\mathbb{R}^d)}^{\theta}.$$
\section{Proof of Theorem \ref{thm:main2}}
We prove that under the assumption of Theorem \ref{thm:main2}, the embedding
$$H^{S,r_0}_{rad,+}(\mathbb{R}^d)\subset \subset \dot H^{r_0}_{rad}(\mathbb{R}^d)
$$
is compact. As a byproduct the embedding
\begin{equation}\label{eq:intcomp}
H^{S,r_0}_{rad,+}(\mathbb{R}^d)\subset \subset \dot H^{r}_{rad}(\mathbb{R}^d)
\end{equation}
is compact for any $0<r<S$. The embedding \eqref{eq:intcomp} follows by noticing that if $\varphi_n$ converges weakly to some $\varphi$ in $H^{S}_{rad}(\mathbb{R}^d)$, then $\varphi_n$ converges weakly to the same $\varphi$ in $H^{r_0}_{rad}(\mathbb{R}^d)$. Now if we prove that (up to a subsequence)
\begin{equation}\label{eq.cco1}
\|D^{r_0} (\varphi_n-\varphi)\|_{L^2}=o(1)
\end{equation}
as $n \to \infty,$ then
by the following interpolation inequalities
$$\|D^r (\varphi_n-\varphi)\|_{L^2}\lesssim\|D^{r_0} (\varphi_n-\varphi)\|_{L^2}^{1-\frac{r-r_0}{S-r_0}} \, \|D^{S} (\varphi_n-\varphi)\|_{L^2}^{\frac{r-r_0}{S-r_0}}=o(1)$$
if $0<r_0<r<S$ and
$$\|D^r (\varphi_n-\varphi)\|_{L^2}\lesssim\| (\varphi_n-\varphi)\|_{L^2}^{1-\frac{r}{r_0}} \, \|D^{r_0} (\varphi_n-\varphi)\|_{L^2}^{\frac{r}{r_0}}=o(1)$$
if $0<r<r_0$, we get \eqref{eq:intcomp}.
To prove \eqref{eq.cco1} we recall that $(\varphi_n)_{n\in\mathbb N}$ is a bounded sequence in $H^{S,r_0}_{rad,+}(\mathbb{R}^d)$ and we can assume that $\varphi_n$ converges weakly to some $\varphi$ in $H^{S}(\mathbb{R}^d)$. To simplify the notation we will use $r$ instead of $r_0$ in the proof of \eqref{eq.cco1}. We choose a bump function $\theta\in C_0^\infty(\mathbb{R}^d)$, such that $\theta = 1$ on $B_1$ and $\theta = 0$ in $\mathbb{R}^d \setminus B_{2 }$ and for any $ \rho > 1$ we define $\theta_\rho(x) = \theta(x/\rho).$ Clearly the multiplication by $\theta_{\rho} \in \mathcal S(\mathbb{R}^d)$ is a continuous mapping $H^{S} (\mathbb{R}^d)\rightarrow H^{S} (\mathbb{R}^d)$.
Now setting $v_n=\theta_\rho \varphi_n$ and $v=\theta_\rho \varphi$ we aim to show that
\begin{equation}\label{eq.smth1}
\lim_{n \to \infty} \|D^{r}( v_n-v)\| _{L^{2}(\mathbb{R}^d)}^2 = \lim_{n \to \infty} \|D^{r}(\theta_{\rho}( \varphi_n-\varphi))\| _{L^{2}(\mathbb{R}^d)}^2 = 0
\end{equation}
for any $r \in [0, S).$
Indeed, by Plancherel's identity we have
\begin{equation*}
\|D^{r}( v_n-v)\| _{L^{2}(\mathbb{R}^d)}^2=\underbrace{\int_{| \xi|\leq R} |\xi|^{2r}| \widehat v_n(\xi)-\widehat v(\xi)|^2 d\xi}_{=I}+\underbrace{\int_{ |\xi| > R}|\xi|^{2r}| \widehat v_n(\xi)-\widehat v(\xi)|^2 d\xi}_{=II}.
\end{equation*}
Clearly $$II\leq \frac{1}{R^{2( S-r)}}\int_{ |\xi| > R}|\xi|^{2 S}| \widehat v_n(\xi)-\widehat v(\xi)|^2 d\xi$$
and then, since $(v_n)$ is bounded in $H^{S}(\mathbb{R}^d)$, we can choose $R>0$ such that $II\leq \frac{\epsilon}{2}$ uniformly in $n$.
\\
Since $e^{-2\pi i x\cdot \xi}\in L^2_x(B_{2 \rho})$, by weak convergence in $L^2(B_{2 \rho})$ we have $\widehat v_n(\xi) \rightarrow \widehat v (\xi)$ almost everywhere. Notice that $||\widehat v_n ||_{L^{\infty}}\leq || v_n|| _{L^1(B_{2 \rho})}\leq
\mu(B_{2 \rho})^{\frac 12} || v_n||_{L^2(B_{2 \rho})}\leq \mu(B_{2 \rho})^{\frac 12}||v_n||_{H^{ S}(\mathbb{R}^d)}$ and hence $| \widehat v_n(\xi)-\widehat v(\xi)|^2$ is estimated by a uniform constant so that by Lebesgue's dominated convergence theorem
$$I=\int_{| \xi|\leq R} |\xi|^{2r}| \widehat v_n(\xi)-\widehat v(\xi)|^2 d\xi<\frac{\epsilon}{2},$$
for $n$ sufficiently large. This proves \eqref{eq.smth1}.
Our next step is to
show that for a given $\varepsilon >0$ one can find $\rho_0=\rho_0(\varepsilon)$ and $n_0=n_0(\varepsilon)$ sufficiently large so that
\begin{equation}\label{eq.mestr01}
\|\theta_{\rho} D^{r}( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)}^2 \leq \frac{\varepsilon}{2}
\end{equation}
for $ n\geq n_0, \rho\geq \rho_0$ and any $ r \in [0,S).$
We consider first the case $0\leq r \leq 2$, $r<S.$ The cases $r=0$ and $r=2$ are trivial, so we assume $0 < r < \min (2,S).$
We shall use the following statement (see Corollary 1.1 in \cite{FGO}).
\begin{prop}
\label{Corollary:1.1}
Let $p,p_1,p_2$ satisfy $1 < p, p_1, p_2 < \infty$ and $1/p = 1/p_1 + 1/p_2$.
Let $r,r_1,r_2$ satisfy $0 \leq r_1, r_2 \leq 1$, and $r = r_1 + r_2$.
Then the following bilinear estimate
\[
\|\underbrace{ D^r(fg) - f D^r g}_{= [D^r,f]g} - g D^r f \|_{L^{p}}
\leq C \|D^{r_1} f\|_{L^{p_1}} \| D^{r_2} g\|_{L^{p_2}}
\]
holds for all $f,g \in \mathcal S$.
\end{prop}
By a density argument the statement holds for $f,g \in H^S(\mathbb{R}^d).$
We choose
$f = \theta_\rho,$ $g=\varphi_n-\varphi$ and $r_1=r_2 = r/2$; therefore we aim to use \eqref{eq.smth1} and prove that
\begin{equation}\label{eq.tre1}
\begin{aligned}
& \|[\theta_\rho, D^{r}]( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)} \leq \\ & \leq O(\rho^{-r}) \|\varphi_n-\varphi\|_{L^{2}(\mathbb{R}^d)} + O(\rho^{-r/4})\| \varphi_n-\varphi\| _{H^{r}(\mathbb{R}^d)}.
\end{aligned}
\end{equation}
Indeed, from Proposition \ref{Corollary:1.1} we have
$$ \|[\theta_\rho, D^{r}]( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)} \lesssim \underbrace{\|D^{r} \theta_\rho\| _{L^{\infty}(\mathbb{R}^d)}\| \varphi_n-\varphi\| _{L^{2}(\mathbb{R}^d)}}_{= O(\rho^{-r})}+ $$
$$ + \| D^{r/2} \theta_\rho\| _{L^{p_1}(\mathbb{R}^d)} \| D^{r/2}( \varphi_n-\varphi))\| _{L^{p_2}(\mathbb{R}^d)}.$$
It is easy to check the estimate
$$ \| D^{r/2} \theta_\rho\| _{L^{p_1}(\mathbb{R}^d)} = O(\rho^{-r/4}),$$
as $\rho \to \infty,$ and this is obviously fulfilled if
$\frac{d}{p_1} < \frac{r}{4}$. To control
$ \| D^{r/2}( \varphi_n-\varphi))\| _{L^{p_2}(\mathbb{R}^d)} $
we use Sobolev inequality
$$ \| D^{r/2}( \varphi_n-\varphi))\| _{L^{p_2}(\mathbb{R}^d)} \lesssim \| \varphi_n-\varphi\| _{H^r(\mathbb{R}^d)} $$
so we need
$$ \frac{1}{p_2} > \frac{1}{2}- \frac{r-r/2}{d}.$$
Summing up we have the following restrictions for $1/p_1, 1/p_2$
\begin{equation}\label{eq.sim1}
\begin{aligned}
&\frac{1}{p_1}+\frac{1}{p_2} = \frac{1}{2}\\
&\frac{1}{p_1} < \frac{r}{4d}, \ \frac{1}{p_2} > \frac{1}{2}- \frac{r-r/2}{d}.
\end{aligned}
\end{equation}
Choosing
$ p_2= 2+ \kappa$, $p_1= 2(2+\kappa)/\kappa$ with $\kappa >0$ sufficiently small, we see that the system \eqref{eq.sim1} admits solutions.
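Indeed, with this choice
$$ \frac{1}{p_1}+\frac{1}{p_2}=\frac{\kappa}{2(2+\kappa)}+\frac{1}{2+\kappa}=\frac{\kappa+2}{2(2+\kappa)}=\frac{1}{2}, $$
while $\frac{1}{p_1}\rightarrow 0<\frac{r}{4d}$ and $\frac{1}{p_2}\rightarrow\frac{1}{2}>\frac{1}{2}-\frac{r}{2d}$ as $\kappa\rightarrow 0^+$.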
Now notice that
\begin{equation}\label{eq.quellocheserv} \|\theta_{\rho} D^{r}( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)}\leq \|D^{r}( \theta_{\rho}(\varphi_n-\varphi))\| _{L^{2}(\mathbb{R}^d)}+ \|[\theta_\rho, D^{r}]( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)}
\end{equation}
and we conclude that \eqref{eq.mestr01} is true for $0 \leq r<\min(2, S)$ thanks to \eqref{eq.smth1} and \eqref{eq.tre1}.
Now we consider the case
$2 \leq r<S.$ We have
$ D^{r} = D^{r_1}(-\Delta)^\ell,$ where $\ell \geq 1$ is an integer and $ 0 < r_1 < 2.$ Then the commutator relation
$$ [A,BC]= [A,B]C+B[A,C]$$ implies
$$ [\theta_\rho, D^{r}] = [\theta_\rho, D^{r_1} ](-\Delta)^\ell+ D^{r_1}[\theta_\rho,(-\Delta)^\ell ].$$
In fact, applied to $\varphi_n-\varphi$ this gives the relation
$$ [\theta_\rho, D^{r}] ( \varphi_n-\varphi) = [\theta_\rho, D^{r_1} ] ( (-\Delta)^\ell(\varphi_n-\varphi)) + D^{r_1}[\theta_\rho,(-\Delta)^\ell ]( \varphi_n-\varphi)$$
and we use \eqref{eq.tre1} so that
$$\|[\theta_\rho, D^{r_1}](-\Delta)^\ell( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)} \leq $$ $$ \leq O(\rho^{-r_1}) \|(-\Delta)^\ell (\varphi_n-\varphi)\|_{L^{2}(\mathbb{R}^d)} + $$ $$ + O(\rho^{-r_1/4})\| D^{r_1+2\ell}( \varphi_n-\varphi)\| _{L^{2}(\mathbb{R}^d)} = o(1)$$
for $\rho \to \infty.$
The term $$ D^{r_1}[\theta_\rho,(-\Delta)^\ell ]( \varphi_n-\varphi)$$ can be evaluated pointwise via the classical Leibniz rule and then via the fractional Leibniz rule as follows
$$ \|D^{r_1}[\theta_\rho,(-\Delta)^\ell ]( \varphi_n-\varphi)\|_{L^{2}(\mathbb{R}^d)} \lesssim $$ $$ \lesssim \sum_{1 \leq |\alpha|, |\alpha| +|\beta|=2\ell} \|D^{r_1}(\partial_x^\alpha \theta_\rho) \partial_x^\beta ( \varphi_n-\varphi)\|_{L^{2}(\mathbb{R}^d)} \lesssim O(\rho^{-1}) \|\varphi_n-\varphi\|_{H^r(\mathbb{R}^d)}.$$
Summing up, we conclude that \eqref{eq.mestr01} holds in case $r \in [0,S).$
To conclude that the embedding is compact it remains to show that also $\|D^{r}( \varphi_n-\varphi)\| _{L^{2}(B_{\rho}^c)}^2\leq \epsilon$.
For this purpose we first use the pointwise decay in terms of the homogeneous Sobolev norm, see \cite{CO}. Given $r$, there exists $0<\delta<\frac{d-1}{2}$ with $r+\frac{1}{2}+\delta<S$ such that
\begin{equation}\label{eq:decaycc}
|D^{r}( \varphi_n-\varphi)(x)|\leq \frac{C}{|x|^{\gamma}}||\varphi_n-\varphi||_{\dot H^{r+\frac{1}{2}+\delta}(\mathbb{R}^d)}\lesssim \frac{C}{|x|^{\gamma}}
\end{equation}
with $\gamma=\frac{d-1}{2}-\delta$.
Secondly, we use that $p_0<2$, i.e.\ that $p=2$ is not an endpoint. By Theorem \ref{thm:super} there exists $\delta_0>0$ sufficiently small such that $D^{r} \varphi_n$ is uniformly bounded in $L^{2-\delta_0}(\mathbb{R}^d)$, and hence the same holds for $D^{r} \varphi$ and $D^{r}( \varphi_n-\varphi)$. As a consequence we have
$$\|D^{r}( \varphi_n-\varphi)\| _{L^{2}(B_{\rho}^c)}^2=\int_{B_{\rho}^c} |D^{r}( \varphi_n-\varphi)|^{\delta_0}|D^{r}( \varphi_n-\varphi)|^{2-\delta_0}dx\leq $$ $$ \leq \left(\frac{C}{\rho^{\gamma}}\right)^{\delta_0}\|D^{r}( \varphi_n-\varphi)\| _{L^{2-\delta_0}(B_{\rho}^c)}^{2-\delta_0}$$
with
$$\|D^{r}( \varphi_n-\varphi)\| _{L^{2-\delta_0}(B_{\rho}^c)}\leq \|D^{r}( \varphi_n-\varphi)\| _{L^{2-\delta_0}(\mathbb{R}^d)}=O(1).$$
This proves that $\|D^{r}( \varphi_n-\varphi)\| _{L^{2}(B_{\rho}^c)}^2\lesssim \epsilon$ and hence that the embedding is compact.
\section{Proof of Theorem \ref{thm:main3}}
For easier reference we state the following.
\begin{lem}[\emph{pqr} Lemma \cite{FLL}]\label{Lieb}
Let $1\leq p<q<r\leq\infty$ and let $\alpha, \beta, \gamma>0$. Then there are constants $\eta,c>0$ such that for any measurable function $f\in L^p(X)\cap L^r(X)$, $X$ a measure space, with
\begin{equation*}
\|f\|_{L^p}^p\leq \alpha, \quad
\|f\|_{L^q}^q\geq \beta, \quad
\|f\|_{L^r}^r\leq \gamma, \quad
\end{equation*}
one has (with $|\cdot|$ denoting the underlying measure on $X$)
\begin{equation}\label{statement}
\left|\{ x \in X :\ |f(x)|>\eta \}\right| \geq c \,.
\end{equation}
\end{lem}
\begin{lem}[Compactness up to translations in $\dot H^{s}$ \cite{BFV}]\label{LiebIntro}
Let $s>0$, $1<p<\infty$ and $u_n\in \dot H^s(\mathbb{R}^d)\cap L^p(\mathbb{R}^d)$ be a sequence with
\begin{equation}\label{hyp1}
\sup_n \left( \|u_n\|_{\dot H^{s}(\mathbb{R}^d)} +\|u_n\|_{L^p(\mathbb{R}^d)} \right) <\infty
\end{equation}
and, for some $\eta>0$, (with $|\cdot |$ denoting Lebesgue measure)
\begin{equation}\label{hyp2}
\inf_n \left|\{ |u_n|>\eta \}\right|>0 \,.
\end{equation}
Then there is a sequence $(x_n)\subset\mathbb{R}^d$ such that a subsequence of $u_n(\cdot+ x_n)$ has a weak limit $u\not\equiv 0$ in $\dot H^s(\mathbb{R}^d)\cap L^p(\mathbb{R}^d)$.
\end{lem}
The strategy to prove Theorem \ref{thm:main3} follows the one developed in \cite{BFV}. First we aim to show that the maximum of
$$W(\varphi)=\frac{\|D^r \varphi\|_{L^2(\mathbb{R}^d)}}{ \|\varphi\|_{L^2(\mathbb{R}^d)}^{1-\frac{r}{S}} \, \|D^{S} \varphi\|_{L^2(\mathbb{R}^d)}^{\frac{r}{S}}} \ \ \ \varphi \in H^{S,r}_{rad,+}(\mathbb{R}^d),$$
is achieved in $H^{S,r}_{rad,+}(\mathbb{R}^d)$. Let us consider a maximizing sequence $\varphi_n$. Since $W$ is invariant under
homogeneity $\varphi(x) \mapsto \lambda \varphi(x)$ and scaling
$\varphi \mapsto \varphi(\lambda x)$ for any $\lambda>0$, we can choose a maximizing sequence $\varphi_n$ such that
\begin{equation}\label{0.0}
\|D^r \varphi_n\|_{L^2(\mathbb{R}^d)}= C_{rad,+}(r,S,2, d)+o(1)
\end{equation}
and
\begin{equation}\label{0.1}
\|\varphi_n\|_{L^2(\mathbb{R}^d)}=\|D^{S} \varphi_n\|_{L^2(\mathbb{R}^d)}=1 \,.
\end{equation}
The key observation is that, since we are looking at a non-endpoint case (i.e. $p_0<2$), there exists $\epsilon>0$ such that from inequality \eqref{GagliNiren5} we infer that
\begin{equation}\label{inequ}
\sup_n \max\left\{\|D^r \varphi_n\|_{L^{2-\epsilon}(\mathbb{R}^d)},
\|D^r \varphi_n\|_{L^{2+\epsilon}(\mathbb{R}^d)}\right\}<\infty \,.
\end{equation}
The $pqr$-lemma (Lemma \ref{Lieb}) now implies that
\begin{equation}\label{superlevel}
\inf_n \left|\{ |D^r \varphi_n|>\eta \}\right| >0.
\end{equation}
Next, we apply the compactness modulo translations lemma (Lemma \ref{LiebIntro}) to the sequence $(D^r \varphi_n)$. This sequence is bounded in $\dot H^{S-r}$ by \eqref{0.1}, and \eqref{hyp1} and \eqref{hyp2} are satisfied by \eqref{0.0} and \eqref{superlevel}. Thus, possibly after passing to a subsequence, we have $D^r \varphi_n \rightharpoonup \psi\not\equiv 0$ in $ H^{S-r}(\mathbb{R}^d)$. Since the embedding is compact, we deduce that $\varphi_n \rightarrow \psi\not\equiv 0$ in $ \dot H^{r}(\mathbb{R}^d)$ and hence $\psi$ is a maximizer for $W$.\\
Now we conclude by showing that $C_{rad,+}(r,S,2, d)<1$.\\
Indeed, if the best constant were $C_{rad,+}(r,S,2, d)=1$, the maximizer $\psi$ would achieve equality in H\"older's inequality, which means
\begin{equation}\label{bellisssimo}
\begin{aligned}
& \int_{\mathbb{R}^d} |\xi|^{2r}|\widehat \psi|^2d\xi=\int_{\mathbb{R}^d} |\widehat \psi|^{2-\frac{2r}{S}} |\xi|^{2r}|\widehat \psi|^{\frac{2r}{S}}d\xi= \\ & =\left(\int_{\mathbb{R}^d} |\widehat \psi|^2d\xi\right)^{1-\frac{r}{S}}\left(\int_{\mathbb{R}^d} |\xi|^{2S}|\widehat \psi|^2d\xi\right)^{\frac{r}{S}},
\end{aligned}
\end{equation}
where we used the conjugate exponents $\frac{S}{S-r}$ and $\frac{S}{r}$. Now we recall that if $f\in L^p(\mathbb{R}^d)$ and $g\in L^q(\mathbb{R}^d)$, with $p$ and $q$ conjugate exponents, achieve equality
in H\"older's inequality, then $|f|^p$ and $|g|^q$ must be linearly dependent, i.e.\ $|f|^p=\mu |g|^q$ almost everywhere for a suitable $\mu$.
Therefore, taking $f=|\widehat \psi|^{2-\frac{2r}{S}}$ and $g= |\xi|^{2r}|\widehat \psi|^{\frac{2r}{S}}$, the maximizer $\psi$ would satisfy $ |\widehat \psi|^{2}=\mu |\xi|^{2S}|\widehat \psi|^2 $ for a suitable $\mu$, which leads to the contradiction $\widehat \psi=0.$
\section{Appendix}
The statement of Lemma \ref{l.dN1} can be found in \cite{DL}. However, since in the original paper the proof of Lemma \ref{l.dN1} is not easily readable, being part of a more general statement, we give an alternative short proof.
\begin{proof}[Proof of Lemma \ref{l.dN1}]
We divide the integration domain into two subdomains:
$$ \Omega= \{|x| < |y|/2 \} \cup \{|x| > 2|y|\} $$
and its complementary set $ \Omega^c.$ In $\Omega$ we use
$$ |x-y| \geq \frac{\max(|x|,|y|)}{2} $$ and via
$$ (1+ |x-y| ^2)^{(d-1)/2} \gtrsim (1+ (\max(|x|,|y|)) ^2)^{(d-1)/2} \geq $$ $$ \geq \max(|x|,|y|)^{(d-1)}\gtrsim |x|^{(d-1+b)} |y|^{-b} $$
with $d-1+b>0, -b>0$ we deduce
$$ \frac{1}{(1+|x-y|^2)^{\gamma/2}} = \frac{1}{(1+|x-y|^2)^{(d-1)/2}} \frac{1}{(1+|x-y|^2)^{(\gamma-d+1)/2}}$$ $$ \lesssim \frac{1}{|x|^{d-1+b}} |y|^b \frac{1}{(1+|x-y|^2)^{(\gamma-d+1)/2}} \leq \frac{1}{|x|^{d-1+b}} |y|^b .$$
These estimates imply
\begin{equation}\label{eq.dN2}
\left| \int_{\Omega} \frac{f(y) dy}{(1+|x-y|^2)^{\gamma/2}} \right| \lesssim \frac{1}{|x|^{d-1+b}} \left\| |y|^b f\right\|_{L^1(\mathbb{R}^d)}.
\end{equation}
For the complementary domain $\Omega^c$ we use spherical coordinates $x=r\theta, y =\rho \omega,$ where $r=|x|, \rho=|y|.$ We have to estimate
$$ \int_{\Omega^c} \frac{f(y) dy}{(1+|x-y|^2)^{\gamma/2}} = \int_{r/2}^{2r} K(r,\rho) f(\rho) \rho^{d-1} d\rho, $$
where
\begin{equation}\label{eq.ke2}
K(r,\rho) = K_{\theta,\gamma}(r,\rho) = \int_{\mathbb{S}^{d-1}} (1+|r\theta-\rho\omega|^2)^{-\gamma/2} d \omega.
\end{equation}
To get the desired estimate
\begin{equation}\label{eq.dN3}
\left| \int_{\Omega^c} \frac{f(y) dy}{(1+|x-y|^2)^{\gamma/2}} \right| \lesssim \frac{1}{|x|^{d-1+b}} \left\| |y|^b f\right\|_{L^1(\mathbb{R}^d)}
\end{equation}
it is sufficient to check the pointwise estimate
\begin{equation}\label{eq.kes1}
K(r,\rho) \lesssim r^{-(d-1+b)} \rho^{b} \sim r^{-(d-1)} \ \ \mbox{for $ r/2 \leq \rho \leq 2r$.}
\end{equation}
To deduce this pointwise estimate of the kernel $K$ we note first that $K$ does not depend on $\theta$ so we can take
$\theta=e_d=(0,\cdots,0,1)$ and $\omega = ( \omega^\prime \sin \varphi, \cos \varphi),$ $\omega^\prime \in \mathbb{S}^{d-2}$ and get
$$K(r,\rho) = c\int_0^\pi \frac{\sin ^{d-2} \varphi d \varphi}{(1+r^2+\rho^2-2r\rho\cos \varphi)^{\gamma/2}}.$$
Using the relation
$$ (1+r^2+\rho^2-2r\rho\cos \varphi) = 1+(r-\rho)^2 + 4r\rho \sin^2 (\varphi/2),$$
which follows from $1-\cos\varphi=2\sin^2(\varphi/2)$, we obtain
$$ (1+r^2+\rho^2-2r\rho\cos \varphi) \gtrsim r\rho \sim r^2 $$
when $\rho \sim r$ and $\varphi$ is not close to $0,$ say $\varphi \in (\pi/4, \pi).$
Then we get
$$ \int_{\pi/4}^\pi \frac{\sin ^{d-2} \varphi d \varphi}{(1+r^2+\rho^2-2r\rho\cos \varphi)^{\gamma/2}} \lesssim \int_{\pi/4}^\pi r^{-\gamma} d\varphi \lesssim r^{-\gamma} \leq r^{-d+1} .$$
For $\varphi$ close to $0$, say $\varphi \leq \pi/4$ we use
$$ \frac{\sin ^{d-2} \varphi }{(1+r^2+\rho^2-2r\rho\cos \varphi)^{\gamma/2}} \lesssim \frac{\varphi^{d-2}}{(1+r\rho \varphi^2)^{\gamma/2}}.$$
In this way, making the change of variable $r\varphi=\eta$ we get
$$ \int_0^{\pi/4} \frac{\varphi^{d-2} d\varphi}{(1+r\rho \varphi^2)^{\gamma/2}} \lesssim \int_0^{\infty} \frac{\varphi^{d-2} d\varphi}{(1+r^2 \varphi^2)^{\gamma/2}} \leq $$ $$ \leq r^{-d+1} \int_0^{\infty} \frac{\eta^{d-2} d\eta}{(1+ \eta^2)^{\gamma/2}} \lesssim r^{-d+1} $$
in view of $\rho \sim r$ and $\gamma > d-1.$
Combining the above estimates of the integrals over $(0,\pi/4)$ and $(\pi/4,\pi)$, we arrive at \eqref{eq.kes1}.
This completes the proof of the Lemma.
\end{proof}
\section{Introduction}
\label{sec:introduction}
Racetrack memories are an emerging form of nonvolatile memory with extremely high density that have the potential to overcome the fundamental constraints of traditional memory devices~\cite{Racetrack2008, RacetrackPIEEE}. They consist of magnetic nanowires that store numerous bits through magnetic polarity; their value is accessed by shifting the bits stored in each wire to heads at fixed locations. Unfortunately, the shift operation is highly unreliable, thereby leading to position errors in the form of deletions and sticky insertions~\cite{HiFi}. Codes that address these errors have a fundamental relationship to \emph{constrained periodicity} due to the reading structure involving multiple heads simultaneously reading at fixed offsets~\cite{chee2018coding, CheeReconstruction}.
This paper aims to develop efficient codes that constrain periodicity in all windows of encoded messages. Specifically, we consider both the \emph{$\ell$-window $p$-period avoiding} (\emph{PA}) \emph{constraint}, where no window of length $\ell$ may possess period $p$, and the \emph{$\ell$-window $p$-least-period avoiding} (\emph{LPA}) \emph{constraint}, where no window of length $\ell$ may possess any period $p' < p$. These constraints were first considered by Chee~\textit{et al.}~\cite{chee2018coding}, where a lower bound on the cardinality proved the \emph{existence} of a binary LPA code with $\ell = \lceil \log(n) \rceil + p$ using a \emph{single} redundancy symbol. Sima and Bruck~\cite{MultipleHeadRacetrack} later proposed an $O(n^2 p \log n)$ time algorithm for the constraint $\ell = \lceil \log(n) \rceil + 3p - 2$ using $p + 1$ redundancy symbols; yet, there remains a significant gap in the redundancy between this explicit construction and the lower bound provided by Chee~\textit{et al.}~\cite{chee2018coding}. Conversely, in this paper, we propose an $O(n)$ average-time construction of LPA codes using a single redundancy symbol for $\ell$ being the minimal integer satisfying $\ell = \lceil \log (n-\ell+2) \rceil + p + 1$. Further, we prove that LPA codes that use a single redundancy symbol exist only for values of $\ell$ that satisfy $\ell \geq \log(n-2\ell+p) + p - 3.5$.
The proposed approach is based on \emph{iteratively repairing} invalid windows until a legal message is encountered. Specifically, as long as there exists a window with invalid periodicity, we remove the window and append an alternate encoding of the window (of identical length). While intuitively this algorithm should not converge due to the lack of monotonic progression (e.g., appended symbols may create additional periodic windows) and the existence of cycles (e.g., repairing an invalid window may lead to the original message), we show that subtle properties of the algorithm guarantee convergence. Further, we prove that only $O(1)$ windows are repaired on average, leading to $O(n)$ average time encoding and decoding.
This paper is organized as follows. Section~\ref{sec:background} begins by providing background on periodicity, the previously-proposed codes, and the run-length-limited (RLL) constraint. Section~\ref{sec:codes} presents the proposed construction, and Section~\ref{sec:generalizations} explores several generalizations. Section~\ref{sec:combinatorical} provides a cardinality analysis, and finally Section~\ref{sec:conclusion} concludes this paper. \short{Some proofs are omitted and are available in the extended version~\cite{Extended}.}
\section{Definitions, Preliminaries, and Related Works}
\label{sec:background}
We begin with several definitions and various results in the theory of periodicity, continue with the previous works for periodicity-constrained codes~\cite{chee2018coding, MultipleHeadRacetrack}, and conclude with background on the run-length-limited (RLL) constraint~\cite{MutuallyUncorrelated}.
\subsection{Notations}
\label{sec:background:notation}
For all $i$, we denote $[i] = \set{k\in\mathbb{N}}{1 \leq k \leq i}$. Let $\Sigma_q$ be a finite alphabet of size $q$ and let $\Sigma_q^n$ be the set of all vectors of length $n$ over $\Sigma_q$; without loss of generality, $0,1 \in \Sigma_q$. For a vector $\v{s} = (s_1, \ldots , s_n)$ and $i, j \in [n], i \leq j$, we denote by $\v{s}_i^j$ the window $(s_i, \ldots , s_j)$ of $\v{s}$. A \emph{zero run} of length $r$ of a vector $\v{s} \in \Sigma_q^n$ is a window $\v{s}^{i+r-1}_i$, $i \in [n-r+ 1]$ such that $s_i = \cdots = s_{i+r-1} = 0$. The notation $\v{s}\v{t}$ denotes the concatenation of the two vectors
$\v{s}$ and $\v{t}$, and $\v{s}^k$ denotes the concatenation of $\v{s}$ with itself $k$ times. Unless stated otherwise, $\log$ refers to $\log_q$, where $q$ is the size of the alphabet.
\subsection{Periodicity Definitions}
\label{sec:background:periodicityDefinitions}
We begin this subsection with a definition for the periodicity of a vector, continue by defining a periodicity-avoiding (PA) vector which avoids a specific period in all windows, and then extend this to a least-periodicity-avoiding (LPA) vector which avoids all periods up to a specific value in all windows.
\begin{definition} For $\v{s} \in \Sigma_q^n$, $p \in [n-1]$ is called a \emph{period} of $\v{s}$ if for all $i \in [n-p], s_{i} = s_{i+p}$.
\label{def:period}
\end{definition}
\begin{definition}[PA] For $\v{s} \in \Sigma_q^n$, $\v{s}$ is an \emph{$\ell$-window $p$-period avoiding vector} if every window of length $\ell$ does not possess period $p$. Let $B_q(n, \ell, p)$ be the set of such vectors, and let $b_q(n, \ell, p) = \abs{B_q(n, \ell, p)}$.
\label{def:PA}
\end{definition}
\begin{definition}[LPA] For $\v{s} \in \Sigma_q^n$, $\v{s}$ is an \emph{$\ell$-window $p$-least-period avoiding vector} if $\v{s}$ is an $\ell$-window $p'$-period avoiding (PA) vector for all $p' < p$. Equivalently, every window of length $\ell$ in $\v{s}$ does \emph{not} contain any period $p' < p$. Let $A_q(n, \ell, p)$ be the set of all such vectors $\v{s}$, and let $a_q(n, \ell, p) = \abs{A_q(n, \ell, p)}$. Notice that $A_q(n, \ell, p) = \bigcap_{p'=\lfloor (p+1)/2 \rfloor}^{p-1} B_q(n, \ell, p')$ as multiples of periods are periods. A code $\mathcal{C}$ is called an \emph{$(\ell,p)$-LPA code} if $\mathcal{C}\subseteq A_q(n, \ell, p)$. If the values of $\ell$ and $p$ are clear from the context, it is simply referred to as an \emph{LPA code}.
\label{def:LPA}
\end{definition}
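For concreteness, the following short Python sketch (ours, for illustration only) tests membership in $A_q(n, \ell, p)$ by brute force, directly following the two definitions above:
\begin{verbatim}
# Brute-force membership test for A_q(n, ell, p); O(n * ell * p) time.
def has_period(w, pp):
    # pp is a period of w if w[i] == w[i + pp] for all valid i.
    return all(w[i] == w[i + pp] for i in range(len(w) - pp))

def is_lpa(s, ell, p):
    # s is ell-window p-least-period avoiding if no length-ell
    # window of s possesses any period pp < p.
    return not any(has_period(s[i:i + ell], pp)
                   for i in range(len(s) - ell + 1)
                   for pp in range(1, p))
\end{verbatim}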
\noindent This paper tackles the following three problems:
\begin{problem} Design $\ell$-window $p$-period LPA codes with efficient encoding/decoding that minimize the value of $\ell$ while requiring only a single redundancy symbol.
\label{prob:LPA}
\end{problem}
\begin{problem} Design $\ell$-window $p$-period LPA codes with efficient encoding/decoding that minimize the number of redundancy symbols for a given small value of $\ell$.
\label{prob:LPAred}
\end{problem}
\begin{problem} Study the values of $a_q(n, \ell, p)$ and $b_q(n, \ell, p)$.
\label{prob:card}
\end{problem}
\subsection{Theory of Periodicity}
\label{sec:background:periodicityTheorems}
Periodicity has been widely explored as a theoretical concept; we highlight key theorems utilized in Sections~\ref{sec:generalizations} and~\ref{sec:combinatorical}.
\begin{theorem}[Fine and Wilf~\cite{FineWilf, rozenberg2012handbook}]
Let $\v{s} \in \Sigma_q^n$ with periods $p_s$ and $p_t$ where $n \geq p_s + p_t - \gcd(p_s, p_t)$. Then $\gcd(p_s, p_t)$ is also a period of $\v{s}$.
\label{cor:gcd}
\end{theorem}
Theorem~\ref{cor:gcd} provides conditions for the uniqueness of a period in a message: if there are two periods $p_s,p_t < \lfloor n / 2 \rfloor + 2$, then $p_s$ and $p_t$ are both multiples of a smaller period ($\gcd(p_s, p_t)$). Therefore, by extending a message with a symbol that contradicts the minimal period, we find:
\begin{corollary}
Let $\v{s} \in \Sigma_q^n$. Then there exists $a \in \Sigma_q$ such that $\v{s}a \in \Sigma_q^{n+1}$ contains no periods less than $\lfloor n/2 \rfloor + 2$.\footnote{Notice that $n \geq 2p-4$ implies no periods less than $p$.}
\label{cor:primitive}
\end{corollary}
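For instance, in the binary case, take $\v{s} = 0101$ (so $n=4$ and $\lfloor n/2 \rfloor + 2 = 4$): appending $a=1$ yields $\v{s}a = 01011$, which possesses no period $p' < 4$.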
\subsection{Related Works on Constrained Periodicity}
\label{sec:background:related}
Problem~\ref{prob:LPA} was first considered by Chee~\textit{et al.}~\cite{chee2018coding}, which presented a lower bound on $a_q(n, \ell, p)$ to prove that an LPA code with a single redundancy symbol and $\ell = \lceil \log(n) \rceil + p$ exists; unfortunately, an explicit construction was not derived. Sima and Bruck~\cite{MultipleHeadRacetrack} later proposed an explicit construction with $O(n^2p\log n)$ time complexity for $\ell = \lceil \log(n)\rceil + 3p - 2$ using $p+1$ redundancy symbols; yet, the redundancy is significantly higher than Chee~\textit{et al.}~\cite{chee2018coding}.
Section~\ref{sec:combinatorical} highlights the main results from Chee~\textit{et al.}~\cite{chee2018coding} regarding LPA codes, including the lower bound on $a_q(n,\ell,p)$ and a relationship between the PA constraint and the run-length-limited (RLL)~\cite{MutuallyUncorrelated} constraint.
\subsection{Run-Length-Limited (RLL) Constraint}
\label{sec:background:RLL}
The \emph{run-length-limited} (\emph{RLL}) \emph{constraint} restricts the length of runs of consecutive zeros within encoded messages~\cite{MutuallyUncorrelated, marcus2001introduction}. Similar to~\cite{MutuallyUncorrelated}, we consider the \emph{$(0,k)$-RLL constraint}, which imposes the length of every run of zeros to be smaller than $k$ (i.e., forbids zero runs of length $k$), and for simplicity refer to this constraint as the \emph{$k$-RLL constraint}. Below is the definition of the constraint and the state-of-the-art construction for a single redundancy symbol.
\begin{definition}[RLL] A vector $\v{s} \in \Sigma_q^n$ satisfies the \emph{$k$-RLL constraint} if there are no zero runs of length $k$. Let $R_q(n, k)$ be the set of such vectors, and let $r_q(n, k) = \abs{R_q(n, k)}$. A code satisfying the $k$-RLL constraint is called a \emph{$k$-RLL code}.
\label{def:RLL}
\end{definition}
\begin{construction}[\hspace{-0.001ex}\cite{MutuallyUncorrelated}]
For all $n$ and $k = \lceil \log(n)\rceil + 1$, there exists an explicit construction of $k$-RLL codes with a single redundancy symbol and encoding/decoding with $O(n)$ time.
\label{const:RLL}
\end{construction}
\section{Single-Symbol LPA Construction}
\label{sec:codes}
This section tackles Problem~\ref{prob:LPA} through an approach of iteratively repairing invalid windows in the vectors, resulting in the following construction for a single redundancy symbol.
\begin{construction}
There exists an explicit construction of $(\ell,p)$-LPA codes for $\ell$ being the minimal value satisfying $\ell = \lceil \log(n-\ell+2)\rceil + p + 1$, a single redundancy symbol, and $O(n)$ average-time encoding and decoding complexity.
\label{const:core}
\end{construction}
The main idea is for the encoder to iteratively \emph{repair} invalid windows until no such windows exist, and then reverse these steps in the decoder. While this is relatively simple, the difficulty remains in proving its convergence due to the lack of monotonic progression: repairing a certain window may cause other previously-valid windows (both to the left and the right) to become invalid. Surprisingly, through a reduction to an acyclic graph walk, we nonetheless show that subtle properties of the repair routine inherently guarantee convergence.
This section continues by detailing the proposed encoder and decoder algorithms, proving their convergence through a reduction to an acyclic graph walk, and attaining $O(n)$ average time complexity. For the remainder of this section, $\ell$ is the minimal integer that satisfies $\ell = \lceil \log(n - \ell + 2)\rceil + p + 1$.
\subsection{Proposed Encoder and Decoder}
\label{sec:codes:encoderDecoder}
The encoder, which is explicitly described in Algorithms~\ref{alg:encoder} and~\ref{alg:routine}, iteratively removes invalid windows while appending a representation of the steps performed to the message. Inspired by Construction~\ref{const:RLL}, the redundancy symbol encodes whether any steps were taken: the symbol is initialized to one at the start, and becomes zero if a repair step is taken. The representation of a single step encodes the kernel of the periodic window removed (the first $p'$ symbols in a window with periodicity $p'$), the periodicity ($p'$), and the index of the window. Both the kernel and $p'$ are encoded within the same $p$ symbols by appending a one padded with zeros to the kernel. Notice that the \emph{message size is unchanged} as $\ell$ was chosen to satisfy $\ell = \lceil \log(n-\ell+2)\rceil + p + 1$. Overall, we proceed with such repair steps until there exists no invalid window.
The decoder reverses the steps of the encoder, as inspired by the decoder from Construction~\ref{const:RLL}. The redundancy symbol is utilized to determine whether the last symbols of the message encode a step that was performed by the encoder. If so, then the decoder removes the step representation, reconstructs the invalid window by extending the given kernel according to the given period, and inserts it at the given index.
Example~\ref{exam:mono} exemplifies the encoder for the binary case.
\begin{algorithm}[t]
\centering
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $\v{x} \in \Sigma_q^n$.
\ENSURE $\v{y} \in \Sigma_q^{n+1}$ such that $\v{y} \in A_q(n+1, \ell, p)$.
\STATE $\v{y} \gets \v{x} 1$
\WHILE{$\v{y} \notin A_q(n+1, \ell, p)$}
\STATE $\v{y} \gets Repair(\v{y})$.
\ENDWHILE
\RETURN $\mathbf{y}$.
\end{algorithmic}
\caption{LPA Encoder}
\label{alg:encoder}
\end{algorithm}
\begin{algorithm}[t]
\centering
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $\v{y} \in \Sigma_q^{n+1}$ such that $\v{y} \notin A_q(n+1, \ell, p)$.
\ENSURE $\v{y} \in \Sigma_q^{n+1}$ such that $y_{n+1} = 0$.
\STATE $i \gets $ index of first $\ell$-window in $\v{y}$ with period $p' < p$.
\STATE Append $\v{y}_i^{i+p'-1}10^{p-p'-1}$ to the end of $\v{y}$ (i.e., $\v{y}_i^{i+p'-1}$, then one, then $p-p'-1$ zeros).
\STATE Append the representation of $i$ (using $\lceil \log(n-\ell+2)\rceil$ symbols; zero-indexed) to $\v{y}$.
\STATE Append $0$ to $\v{y}$.
\STATE Remove the $\ell$-window at index $i$.
\RETURN $\v{y}$.
\end{algorithmic}
\caption{$Repair$}
\label{alg:routine}
\end{algorithm}
\begin{example}
Let $n=14$ and $p=4$ (thus $\ell = 8$) with
\begin{equation*}
\v{x} = 10001010101100.
\end{equation*}
Algorithms~\ref{alg:encoder} and~\ref{alg:routine} perform the following steps:
\begin{enumerate}
\item $\v{y} = \v{x}1 = 100010101011001$.
\item $\v{y} \gets Repair(\v{y})$.
\begin{enumerate}
\item The 8-window starting at $i=3$ ($01010101$) is invalid as it possesses period $p'=2 < p$.
\item $\v{y} = \v{y}0110^1 = 100010101011001\ 0110$.
\item $\v{y} = \v{y}011 \hspace{9.5pt} = 100010101011001\ 0110\ 011$.
\item $\v{y} = \v{y}0 \hspace{19.5pt} = 100010101011001\ 0110\ 011\ 0$.
\item Remove the 8-window at index $i=3$ from $\v{y}$.
\item Return $\v{y} = 100100101100110$.
\end{enumerate}
\item $\v{y} \gets Repair(\v{y})$.
\begin{enumerate}
\item The 8-window starting at $i=0$ ($10010010$) is invalid as it possesses period $p'=3 < p$.
\item $\v{y} = \v{y}10010^0 = 100100101100110\ 1001$.
\item $\v{y} = \v{y}000 \hspace{14.5pt} = 100100101100110\ 1001\ 000$.
\item $\v{y} = \v{y}0 \hspace{24.5pt} = 100100101100110\ 1001\ 000\ 0$.
\item Remove the 8-window at index $i=0$ from $\v{y}$.
\item Return $\v{y} = 110011010010000$.
\end{enumerate}
\item Return $\v{y} = 110011010010000 \in A_2(15, 8, 4)$.
\end{enumerate}
\label{exam:mono}
\end{example}
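To make the repair procedure concrete, the following self-contained Python sketch (ours; binary case $q=2$, with zero-based window indices as in Algorithm~\ref{alg:routine}) implements the encoder, the repair step, and the inverse steps used by the decoder; the final assertions reproduce Example~\ref{exam:mono}:
\begin{verbatim}
def first_invalid(y, ell, p):
    # Index and period of the first ell-window with some period pp < p.
    for i in range(len(y) - ell + 1):
        for pp in range(1, p):
            if all(y[i+j] == y[i+j+pp] for j in range(ell - pp)):
                return i, pp
    return None

def repair(y, ell, p, idx_bits):                 # Algorithm 2
    i, pp = first_invalid(y, ell, p)
    step = y[i:i+pp] + [1] + [0] * (p - pp - 1)  # kernel, one, zero padding
    step += [int(b) for b in format(i, '0%db' % idx_bits)]  # window index
    step += [0]                                  # marks that a step was taken
    return y[:i] + y[i+ell:] + step              # drop window, append step

def encode(x, ell, p, idx_bits):                 # Algorithm 1
    y = x + [1]                                  # redundancy symbol
    while first_invalid(y, ell, p) is not None:
        y = repair(y, ell, p, idx_bits)
    return y

def decode(y, ell, p, idx_bits):                 # reverses Algorithm 1
    while y[-1] == 0:                            # last symbol: step recorded?
        y = y[:-1]
        i = int(''.join(map(str, y[-idx_bits:])), 2)
        y = y[:-idx_bits]
        enc, y = y[-p:], y[:-p]
        pp = (p - 1) - enc[::-1].index(1)        # locate the marker one
        y = y[:i] + [enc[j % pp] for j in range(ell)] + y[i:]
    return y[:-1]                                # drop redundancy symbol

x = [int(c) for c in '10001010101100']           # Example: n=14, p=4, ell=8
y = encode(x, 8, 4, 3)                           # idx_bits = 3
assert ''.join(map(str, y)) == '110011010010000'
assert decode(y, 8, 4, 3) == x
\end{verbatim}
Here \texttt{idx\_bits} denotes $\lceil\log(n-\ell+2)\rceil$. Note that removing the window before appending the step representation (as done above) yields the same result as Algorithm~\ref{alg:routine}, since the appended symbols lie beyond the removed window.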
Notice that the first call to the $Repair$ function in Example~\ref{exam:mono} created the invalid window which was then addressed by the second call. That is, while $Repair$ may fix the current window, it may also create other invalid windows. Therefore, it is unclear whether the algorithm will \emph{ever} converge, considering that each repair may lead to additional invalid windows. Indeed, we find that there even exist states ($\v{y} \in \Sigma_q^{n+1}$) that \emph{if ever reached} will cause Algorithm~\ref{alg:encoder} to never converge. This scenario is demonstrated in the following example.
\begin{example}
Let $n=14$ and $p=4$ (thus $\ell = 8$), with
\begin{equation*}
\v{y} = 111111010101010.
\end{equation*}
The repair routine (Algorithm~\ref{alg:routine}) \emph{would} perform the following steps if $\v{y}$ is reached by Algorithm~\ref{alg:encoder} as an intermediate state:
\begin{enumerate}
\item The window starting at $i=5$ ($10101010$) is invalid as it possesses period $p'=2 < p$.
\item $\v{y} = \v{y}1010^1 = 111111010101010\ 1010$.
\item $\v{y} = \v{y}101 \hspace{9.5pt} = 111111010101010\ 1010\ 101$.
\item $\v{y} = \v{y}0 \hspace{19.5pt} = 111111010101010\ 1010\ 101\ 0$.
\item Remove window at index $i=5$ from $\v{y}$.
\item Return $\v{y} = 111111010101010$.
\end{enumerate}
That is, $Repair(\v{y}) = \v{y}$. Therefore, if Algorithm~\ref{alg:encoder} were to ever reach this $\v{y}$, then the encoder would never converge.
\label{exam:loop}
\end{example}
Nonetheless, as proven in Section~\ref{sec:codes:convergence}, the proposed encoder always converges as it inherently avoids such intermediate states (e.g., Example~\ref{exam:loop}) due to subtle properties of the $Repair$ function. Further, Section~\ref{sec:codes:time} demonstrates that the number of steps taken is only $q-1=O(1)$ on average; thus, the encoder and decoder time complexity is $O(n)$ on average.
\subsection{Convergence Proof}
\label{sec:codes:convergence}
This section proves the convergence of the proposed encoder and decoder through a reduction to an acyclic graph walk. We show that the encoder inherently avoids intermediate states that will lead to infinite loops (e.g., Example~\ref{exam:loop}) by exploiting two subtle properties of the $Repair$ function: the fact that it is injective, and the fact that $Repair(\v{y})$ always ends with zero. The intuition for the proof is as follows. Let $\v{y}$ be given such that $Repair(\v{y}) = \v{y}$ (as in Example~\ref{exam:loop}); we show that the encoder will never reach such a $\v{y}$ as an intermediate state. Since $Repair(\v{y}) = \v{y}$, $\v{y}$ ends with zero; thus, the encoder will never start the repair steps with $\v{y}$. Further, \emph{as $Repair$ is injective}, there exists no $\v{z}\neq \v{y}$ such that $Repair(\v{z}) = \v{y} = Repair(\v{y})$; thus, $\v{y}$ cannot be reached from a different intermediate state $\v{z}$. Therefore, the encoder will never reach any such $\v{y}$: it cannot start with such $\v{y}$, and it will never update the intermediate state to be such $\v{y}$.
We generalize the above intuition in Theorem~\ref{the:encoderDefined} to also address cyclic structures that consist of more than one intermediate state (e.g., $Repair(\v{y}_1) = \v{y}_2$ and $Repair(\v{y}_2) = \v{y}_1$).
\begin{lemma}
The $Repair$ function from Algorithm~\ref{alg:routine} is injective (that is, for all $\v{z} \neq \v{y}$, it holds that $Repair(\v{z}) \neq Repair(\v{y})$).
\full{
\begin{IEEEproof}
The inverse of $Repair$ on its image is given by decoding the kernel, $p'$ and $i$, and then reconstructing and inserting the window which was removed. As a unique window is defined by the kernel, $p'$, and $i$, then the inverse is well-defined. Therefore, the $Repair$ function is injective.
\end{IEEEproof}
}
\end{lemma}
\begin{theorem}
The encoder from Algorithm~\ref{alg:encoder} is well-defined.
\begin{IEEEproof}
Notice that if the encoder converges, then the output is in $A_q(n+1, \ell, p)$ by design, and thus a valid message is returned. The difficulty remains in proving the convergence.
We model the encoder as a walk on a directed graph $G = (V,E)$ with nodes representing message states and edges representing the $Repair$ function. We let $S$ represent the possible start nodes of the algorithm. That is,
\begin{equation*}
V = \Sigma_q^{n+1} \quad\quad\quad S = \set{\v{v}1}{\v{v} \in \Sigma_q^n} \subseteq V,
\end{equation*}
\begin{equation*}
E = \set{(\v{u}, \v{v})}{\v{u} \notin A_q(n+1,\ell,p),\; Repair(\v{u}) = \v{v}}.
\end{equation*}
Figure~\ref{fig:graph} illustrates an example of this graph. We observe that the in-degree of all nodes is at most one (as the $Repair$ function is injective), and that the in-degree of all nodes in $S$ is zero (as the output of $Repair$ always ends in 0). Assume by contradiction that there exists a cycle $C$ in $G$ that is reachable from a node in $S$. We find that no node in $C$ belongs to $S$ as all nodes in $S$ have an in-degree of zero. Therefore, as $C$ is reachable from a node in $S$, there exists an edge $\v{u} \rightarrow \v{v}$ such that $\v{u} \notin C$ and $\v{v} \in C$. Yet, this is a contradiction to the in-degree of all nodes being at most one (as there exists another edge to $\v{v}$ from a node in the cycle).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{graph.png}
\caption{Example graph which Algorithm~\ref{alg:encoder} traverses. The algorithm starts in $S$ and applies the \emph{Repair} function until a valid node is reached (in $A_q(n+1,\ell,p)$). While cycles exist, they are unreachable from $S$ (see Theorem~\ref{the:encoderDefined}).}
\label{fig:graph}
\end{figure}
Assume by contradiction that the encoder does not converge for some input $\v{x} \in \Sigma_q^{n}$. Let $\v{y}^{(1)}, \v{y}^{(2)},\ldots$ be the intermediate states of the encoder (the value of $\v{y}$ before each iteration of the while loop from Algorithm~\ref{alg:encoder}). Since $\v{y}^{(i)} \in \Sigma_q^{n+1}$ for all $i\in\mathbb{N}$ and $\abs{\Sigma_q^{n+1}} < \infty$, then there exist $i < j$ such that $\v{y}^{(i)} = \v{y}^{(j)}$. Therefore, by design of the encoder, we find that $\v{y}^{(i)} \rightarrow \v{y}^{(i+1)} \rightarrow \cdots \rightarrow \v{y}^{(j-1)} \rightarrow \v{y}^{(j)} = \v{y}^{(i)}$ is a cycle in $G$. We note that $\v{y}^{(i)}$ is reachable from a node in $S$ as $\v{y}^{(1)} \in S$ by properties of the encoder. Therefore, we found a cycle $C$ in $G$ that is reachable from a node in $S$. This is a contradiction.
\end{IEEEproof}
\label{the:encoderDefined}
\end{theorem}
\begin{theorem}
The decoder is well-defined and correct.
\begin{IEEEproof}
The proof is similar to the proof of Construction~\ref{const:RLL}.
\end{IEEEproof}
\end{theorem}
\subsection{Time Complexity}
\label{sec:codes:time}
This section extends the analysis of Section~\ref{sec:codes:convergence} to demonstrate that the average time complexity of the encoder and decoder is $O(n)$. We first show that the average number of steps is $O(1)$, and then propose an $O(n)$ algorithm for each step (i.e., the repair and inverse-repair functions).
\begin{lemma}
The average number of iterations of the while loop in Algorithm~\ref{alg:encoder} is at most $q-1=O(1)$.
\begin{IEEEproof}
As shown in Theorem~\ref{the:encoderDefined}, an execution of Algorithm~\ref{alg:encoder} is equivalent to a walk on $G$. We notice that the two paths generated by two distinct inputs are disjoint as nodes in $G$ possess an in-degree of at most one. As paths from distinct inputs are disjoint, then we find that the sum of the lengths of all possible paths is the size of the union of all paths from all possible inputs. Therefore, as all paths are contained in $V\setminus S$ (excluding start nodes), the sum is bounded by $\abs{V\setminus S}$. Let $t(\v{x})$ be the length of the path for input $\v{x} \in \Sigma_q^n$; we find,
\begin{equation*}
\sum_{\v{x} \in \Sigma_q^n} t(\v{x}) \leq \abs{V\setminus S} = q^{n+1} - q^n = (q-1) \cdot q^n.
\end{equation*}
Therefore, we find that the average path length is $q-1=O(1)$,
\begin{equation*}
\frac{1}{q^n}\sum_{\v{x} \in \Sigma_q^n} t(\v{x}) \leq \frac{(q-1) \cdot q^n}{q^n} = q-1 = O(1).
\end{equation*}
\end{IEEEproof}
\label{lemma:O1}
\end{lemma}
\begin{corollary}
The encoder possesses $O(n)$ average time for $\ell \geq 2p-2$ and $O(np)$ average time otherwise.
\full{
\begin{IEEEproof}
We extend Lemma~\ref{lemma:O1} to prove the overall average time complexity of the encoder by proposing efficient worst-case algorithms for the window search in $Repair$. We propose two algorithms, corresponding to the cases of $\ell \geq 2p-2$ and $\ell < 2p-2$, with $O(n)$ and $O(np)$ time (respectively).
For $\ell \geq 2p-2$, we utilize the algorithm proposed by Main~\cite{main1989detecting} which decomposes the string using the $s$-factorization and then searches for the first occurrences of maximal periodicity\footnote{Maximal periodicity refers to periodic runs where extending the runs would contradict the periodicity. The relation to periodic $\ell$-windows is as follows: any periodic $\ell$-window is part of a maximal-periodicity run (can be extended to the left and right as much as possible), and any such run of length at least $\ell$ contains a periodic $\ell$-window in the first $\ell$ symbols (in particular).} ending within each factor. We modify the algorithm as follows to match the LPA constraint in this paper: the routine for step (3.1) is modified to only search for periods up to $p$ ($j$ iterates from $1$ to $p-1$ instead of $1$ to $n$) and windows of size at least $\ell$ (the condition $\text{LS}(j) + \text{LP}(j+1) \geq j$ is replaced with $\text{LS}(j) + \text{LP}(j+1) \geq \ell-j$), and the algorithm returns the index of the first $\ell$-window with periodicity.
For $\ell < 2p-2$, we exploit the equivalence provided by Chee~\textit{et al.}~\cite{chee2018coding} between the PA and RLL constraints through the difference function. That is, for each $p' < p$, we compute the $p'$-difference function, $d_{p'}: \Sigma_q^n \rightarrow \Sigma_q^{n-p'}$ where $(d_{p'}(\v{x}))_i = x_i - x_{i+p'}$, on the entire message $\v{y}$, and then check if this difference satisfies the $(\ell-p')$-RLL constraint (linear-time pass). This provides an $O(np)$ worst-case algorithm for the repair routine.
\end{IEEEproof}
}
\end{corollary}
\begin{corollary}
The decoder possesses $O(n)$ average time.
\full{
\begin{IEEEproof}
We notice that the inverse repair routine can be performed with $O(n)$ worst-case time complexity as all that is required is to decode $p'$, the kernel, and $i$, and then insert the reconstructed window of length $\ell = O(n)$. Therefore, as the number of iterations of the encoder is $O(1)$ (and the decoder performs the same number of iterations as the corresponding execution from the encoder), then we find that the time complexity of the decoder is $O(n)$ on average.
\end{IEEEproof}
}
\end{corollary}
\section{Extensions of the LPA Encoder}
\label{sec:generalizations}
This section tackles Problem~\ref{prob:LPAred} by proposing generalizations of Construction~\ref{const:core} to support smaller window sizes ($\ell < \lceil \log(n-\ell+2)\rceil + p + 1$) while minimizing the number of redundancy symbols. We demonstrate a trade-off between three proposed constructions which are all based on partitioning the input message into independent segments.
\begin{construction}
For any given $n, \ell, p$,
there exists an explicit construction for $(\ell,p)$-LPA codes with $k$ redundancy symbols, for $k$ the smallest integer such that $\ell \geq 2 \cdot (\lceil \log(n/k - \ell/2 + 2) \rceil + p + 1)$, and $O(n)$ average-time encoding/decoding.
\full{
\begin{IEEEproof}
Let $\v{x} \in \Sigma_q^n$ be the input message and let $\v{x}^{(1)}, \ldots, \v{x}^{(k)}$ be the partition of the vector into $k$ non-overlapping segments (e.g., $\v{x}^{(1)} = \v{x}_1^{n/k} = \v{x}_1 \cdots \v{x}_{n/k}$).\footnote{We assume without loss of generality that $k$ divides $n$. Otherwise, the last partition is of smaller size -- still attaining the desired LPA properties.} Let $\v{y}^{(1)}, \ldots, \v{y}^{(k)}$ be the encoded vectors for $\v{x}^{(1)}, \ldots, \v{x}^{(k)}$ according to Construction~\ref{const:core} with $\ell/2$ (respectively), and define $\v{y} = \v{y}^{(1)} \cdots \;\v{y}^{(k)}$. We now show that $\v{y} \in A_q(n+k, \ell, p)$.
Assume by contradiction that $\v{y} \notin A_q(n+k,\ell,p)$; thus, there exists an invalid window in $\v{y}$ of length $\ell$. As the invalid window is continuous, then at least $\ell/2 = \lceil \log(n/k-\ell/2+2) \rceil + p + 1$ symbols belong to the same segment $\v{x}^{(j)}$, for some $1\leq j \leq k$. Therefore, as a sub-vector of a periodic window is also periodic, we have found an invalid window of size $\lceil \log(n/k-\ell/2+2) \rceil + p + 1$ within the segment $\v{x}^{(j)}$. This is a contradiction to $\v{y}^{(1)}, \ldots, \v{y}^{(k)} \in A_q(n/k+1, \ell/2, p)$ by the correctness of Construction~\ref{const:core}. Therefore, $\v{y} \in A_q(n+k, \ell, p)$.
The time complexity of the proposed algorithm follows directly from that of Construction~\ref{const:core}.
\end{IEEEproof}
}
\label{const:2l}
\end{construction}
\begin{construction}
For given $n, \ell, p$ such that $\ell \geq 3p-3$, there exists an explicit construction for $(\ell,p)$-LPA codes with $(p+3) \cdot (k-1)+1$ redundancy symbols, where $k$ is the smallest value that satisfies $\ell \geq \lceil \log(n/k-\ell+2) \rceil + p + 1$, and $O(n)$ average-time encoding/decoding.
\full{
\begin{IEEEproof}
Let $\v{x} \in \Sigma_q^n$ be the input message and let $\v{x}^{(1)}, \ldots, \v{x}^{(k)}$ be the partition of the message. Let $\v{y}^{(1)}, \ldots, \v{y}^{(k)}$ be the encoded vectors for $\v{x}^{(1)}, \ldots, \v{x}^{(k)}$ according to Construction~\ref{const:core} with $\ell$ (respectively). Define
\begin{equation*}
\v{y} = \v{y}^{(1)}u^{(1)}\v{z}w^{(1)}\v{y}^{(2)}u^{(2)}\v{z}w^{(2)}\v{y}^{(3)}\cdots w^{(k-1)}\v{y}^{(k)}
\end{equation*}
where $\v{z} = 10\cdots 0 \in \Sigma_q^{p}$ (vector that does not possess any period) and, for all $j$, $u^{(j)}$ ($w^{(j)}$) are symbols chosen by Corollary~\ref{cor:primitive} to eliminate any periods less than $p$ from the last $2p-4$ symbols of $\v{y}^{(j)}$ (first $2p-4$ symbols of $\v{y}^{(j+1)}$). We now show that $\v{y} \in A_q(n+(p+3) \cdot (k-1)+1, \ell, p)$.
Assume by contradiction that $\v{y} \notin A_q(n+(p+3) \cdot (k-1)+1, \ell, p)$; thus, there exists an invalid window in $\v{y}$ of length $\ell$ with period $p' < p$. We divide into the following cases:
\begin{itemize}
\item If the window is contained within one of $\v{y}^{(1)}, \ldots, \v{y}^{(k)}$. This is a contradiction to $\v{y}^{(1)}, \ldots, \v{y}^{(k)} \in A_q(n/k+1, \ell, p)$ by the correctness of Construction~\ref{const:core}.
\item Else, if the window contains $\v{z}$. This is a contradiction as $\v{z}$ does not possess any period less than $p$, and thus the $i$-th window also cannot possess any period less than $p$.
\item Else, we find that the window either contains the last $2p-4$ symbols of some $\v{y}^{(j)}$ with $u^{(j)}$, or $w^{(j)}$ with the first $2p-4$ symbols of some $\v{y}^{(j+1)}$ (as $\ell \geq 3p-3$, and the window does not contain $\v{z}$). This is a contradiction to the choice of $u^{(j)}, w^{(j)}$ using Corollary~\ref{cor:primitive}.
\end{itemize}
Therefore, $\v{y} \in A_q(n+(p+3) \cdot (k-1)+1, \ell, p)$.
The time complexity follows directly from that of Construction~\ref{const:core}.
\end{IEEEproof}
}
\label{const:p2}
\end{construction}
\begin{construction}
For given $n, \ell, p$ such that $\ell \geq 4p-7$, there exists an explicit construction for $(\ell,p)$-LPA codes with $3 \cdot k-2$ symbols of redundancy, where $k$ is the smallest value that satisfies $\ell = \lceil \log(n/k-\ell+2) \rceil + p + 1$, and $O(n)$ average-time encoding/decoding.
\full{
\begin{IEEEproof}
Let $\v{x} \in \Sigma_q^n$ be the input message and let $\v{x}^{(1)}, \ldots, \v{x}^{(k)}$ be the partition of the message. Let $\v{y}^{(1)}, \ldots, \v{y}^{(k)}$ be the encoded vectors for $\v{x}^{(1)}, \ldots, \v{x}^{(k)}$ according to Construction~\ref{const:core} with $\ell$ (respectively). Define
\begin{equation*}
\v{y} = \v{y}^{(1)}u^1w^{1}\v{y}^{(2)}u^2w^{2}\v{y}^{(3)}\cdots w^{k-1}\v{y}^{(k)}
\end{equation*}
where, for all $j$, $u^j$ ($w^j$) are chosen by Corollary~\ref{cor:primitive} to eliminate any period less than $p$ from the last $2p-4$ symbols of $\v{y}^{(j)}$ (first $2p-4$ symbols of $\v{y}^{(j+1)}$). The proof that $\v{y} \in A_q(n+3\cdot k-2, \ell, p)$ is similar to that of Construction~\ref{const:p2}, where only the first and third cases are possible (as $\ell \geq 4p-7$). The time complexity follows directly from that of Construction~\ref{const:core}.
\end{IEEEproof}
}
\label{const:3}
\end{construction}
Overall, for given $n, \ell, p$, we seek the construction with minimal redundancy. We first note that Construction~\ref{const:3} is preferable over Construction~\ref{const:p2} in all cases where $\ell \geq 4p-7$. Further, we find that Construction~\ref{const:p2} requires less redundancy than Construction~\ref{const:2l} when $\ell \geq 3p-3$ and
\begin{equation*}
q^{\ell/2 - p - 1} + \frac{\ell}{2} - 2 \leq \frac{q^{\ell -p - 1} + \ell - 2}{p+3}.
\end{equation*}
Similarly, Construction~\ref{const:3} requires less redundancy than Construction~\ref{const:2l} when $\ell \geq 4p-7$ and
\begin{equation*}
q^{\ell/2 - p - 1} + \frac{\ell}{2} - 2 \leq \frac{q^{\ell -p - 1} + \ell - 2}{3}.
\end{equation*}
\section{Cardinality Analysis}
\label{sec:combinatorical}
This section provides a cardinality analysis for the PA and LPA constraints, extending the analysis provided in Chee~\textit{et al.}~\cite{chee2018coding}. We begin by proposing the first upper bound for $a_q(n,\ell,p)$ and demonstrating that $\ell=\log(n-2\ell+p)+p-3.5$ is a lower bound for single-symbol redundancy; this shows that Construction~\ref{const:core} is near the optimal parameters. We continue by proposing several interesting exact formulas for the remaining cases which are not covered by the bounds.
\subsection{Lower and Upper Bounds on $a_q(n, \ell, p)$}
\label{sec:combinatorical:bounds}
We summarize the results from Chee~\textit{et al.}~\cite{chee2018coding} in Theorems~\ref{the:cheeLower}~and~\ref{the:bRLL}, and then provide additional bounds that we propose based on results from the RLL constraint.
\begin{theorem}[Chee~\textit{et al.}~\cite{chee2018coding}] For all $n, \ell, p$, and for all $q$,
\begin{equation*}
a_q(n, \ell, p) \geq q^n \cdot \left(1 - \frac{n}{(q-1)\cdot q^{\ell-p+1}}\right).
\end{equation*}
\label{the:cheeLower}
\end{theorem}
\vspace{-15pt}
In particular, for $\ell = \lceil \log(n) \rceil + p$, we find $a_q(n, \ell, p) \geq q^{n-1}$ and thus a code with single-symbol redundancy exists.
\begin{theorem}[Chee~\textit{et al.}~\cite{chee2018coding}] For all $n, \ell, p$ and for all $q$,
\begin{equation*}
b_q(n, \ell, p) = q^p \cdot r_q (n-p, \ell-p).
\end{equation*}
\vspace{-15pt}
\label{the:bRLL}
\end{theorem}
\noindent We extend this result to the LPA constraint as follows.
\begin{lemma} For all $n, \ell, p$ and for all $q$,\footnote{Equality holds for $p \in \{2,3\}$ due to the result from Definition~\ref{def:LPA}.}
\begin{equation*}
a_q(n, \ell, p) \leq q^{p-1} \cdot r_q (n-p+1, \ell-p+1).
\end{equation*}
\begin{IEEEproof}
The proof follows from Theorem~\ref{the:bRLL} and from the fact that $A_q(n,\ell,p) \subseteq B_q(n, \ell, p-1)$.
\end{IEEEproof}
\label{the:aRLL}
\end{lemma}
Therefore, by utilizing the bound on $k$-RLL codes in Theorem~\ref{the:RLL}, we obtain in Corollary~\ref{cor:aRLLSub} an upper bound on $a_q(n,\ell,p)$.
\begin{theorem}[\hspace{-0.001ex}\cite{MutuallyUncorrelated}]
For all $n, k$ where $n\geq 2k$, and for all $q$,
\begin{equation*}
r_q(n,k) \leq q^{n-c \cdot \frac{n-2k}{q^k}}, \;\text{for}\; c = \frac{\log e(q-1)^2}{2q^2}.
\end{equation*}
\vspace{-15pt}
\label{the:RLL}
\end{theorem}
\begin{corollary} For all $n, \ell, p$, $n \geq 2\ell-p+1$, and for all $q$,
\begin{equation*}
a_q(n, \ell, p) \leq q^{n-c \cdot \frac{n-2\ell+p-1}{q^{\ell-p+1}}}, \;\text{for}\; c = \frac{\log e(q-1)^2}{2q^2}.
\end{equation*}
\vspace{-15pt}
\label{cor:aRLLSub}
\end{corollary}
The following corollary bounds the optimal window size for codes using a single redundancy symbol.
\begin{corollary}
For all $n, \ell, p$ where $n \geq 2\ell - p + 1$, and for all $q$, if there exists an $(\ell,p)$-LPA code with a single redundancy symbol, then $\ell \geq \log(n-2\ell+p) + p - 3.5$.
\full{
\begin{IEEEproof}
As there exists an $(\ell,p)$-LPA code with a single redundancy symbol, we conclude that
\begin{equation*}
a_q(n + 1, \ell, p) \geq q^n.
\end{equation*}
We substitute the result from Corollary~\ref{cor:aRLLSub} to conclude that
\begin{equation*}
q^{(n+1)- \frac{\log e(q-1)^2}{2q^2} \cdot \frac{(n+1)-2\ell+p-1}{q^{\ell-p+1}}} \geq q^n.
\end{equation*}
Hence,
\begin{align*}
& n + 1 - \frac{\log e(q-1)^2}{2q^2} \cdot \frac{(n+1)-2\ell+p-1}{q^{\ell-p+1}} \geq n \\
\Longleftrightarrow\;\; & \frac{\log e(q-1)^2}{2q^2} \cdot \frac{(n+1)-2\ell+p-1}{q^{\ell-p+1}} \leq 1 \\
\Longleftrightarrow\;\; & \log\left(\frac{\log e(q-1)^2}{2q^2} \cdot \frac{(n+1)-2\ell+p-1}{q^{\ell-p+1}}\right) \leq 0 \\
\Longleftrightarrow\;\; & \ell \geq \log(n-2\ell+p) + p - 3 + \log\left(\frac{\log e(q-1)^2}{2}\right).
\end{align*}
We notice that $\log\left(({\log e(q-1)^2})/{2}\right) \geq -1/2$ for $q \geq 2$, and thus $\ell \geq \log(n-2\ell+p) + p - 3.5$.
\end{IEEEproof}
}
\label{cor:coreLPA}
\end{corollary}
Therefore, we find that Construction~\ref{const:core} is near the lower bound of the optimal construction. In particular, if $n \geq 3\ell-2p+2$, then its window size differs from the lower bound by at most 5.5.
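As a quick numerical check of Corollary~\ref{cor:coreLPA}, the following sketch (helper name ours) computes the smallest window size consistent with the lower bound:
\begin{verbatim}
import math

def min_window(n, p):
    # smallest ell with ell >= log2(n - 2*ell + p) + p - 3.5,
    # within the regime n >= 2*ell - p + 1 of the corollary
    ell = p
    while n - 2 * ell + p >= 1:
        if ell >= math.log2(n - 2 * ell + p) + p - 3.5:
            return ell
        ell += 1
    return None

# e.g., min_window(10**6, 5) may be compared with the window size
# ceil(log2(n)) + p used after Theorem the:cheeLower
\end{verbatim}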
\subsection{Exact Formulas}
\label{sec:combinatorical:exact}
In this subsection, we provide interesting exact formulas for the special cases of $n=\ell$ and $n \leq 2\ell-2p+4$. We begin with $b_q(n,n,p)$ in the following simple property.
\begin{lemma}
For all $n, p$, and for all $q$,
\begin{equation*}
b_q(n, n, p) = q^n - q^p.
\end{equation*}
\vspace{-15pt}
\full{
\begin{IEEEproof}
We show $|{\overline{B_q(n,n,p)}}| = q^p$, from which the desired expression follows. We notice that all vectors in $\overline{B_q(n,n,p)}$ are determined exclusively by their first $p$ symbols (as each such vector has period $p$), and that any choice of $p$ symbols for the beginning of a vector can be extended to length $n$. Therefore, there are exactly $q^p$ vectors in $\overline{B_q(n,n,p)}$.
\end{IEEEproof}
}
\label{the:bNN2}
\end{lemma}
We now address the more challenging case of $a_q(n, n, p)$.
\begin{theorem}
For all $n, p$ such that $n \geq 2p-4$, and for all $q$,
\begin{equation*}
a_q(n, n, p) = q^n - \frac{q}{q-1} \cdot \sum_{d=1}^{p-1} \mu(d) \cdot \left(q^{\left\lfloor \frac{p-1}{d}\right\rfloor} - 1\right),
\end{equation*}
where $\mu$ is the M\"{o}bius function.
\full{
\begin{IEEEproof}
Recall from Definition~\ref{def:LPA} that $A_q(n,n,p) = \bigcap_{p'=1}^{p-1} B_q(n,n,p')$; thus, we find by inclusion-exclusion,
\begin{small}
\begin{multline*}
\abs{\overline{A_q(n,n,p)}} = \sum_{k=1}^{p-1} (-1)^{k+1} \cdot \left(\sum_{\substack{S \subseteq [p-1]\\ \abs{S} = k}}\abs{\bigcap_{j\in S}{\overline{B_q(n,n,j)}}}\right).
\end{multline*}
\end{small}
By Theorem~\ref{cor:gcd}, we note that $\bigcap_{j\in S}{\overline{B_q(n,n,j)}} \subseteq \overline{B_q(n,n,\gcd (S))}$. Further, $\overline{B_q(n,n,\gcd (S))} \subseteq \bigcap_{j\in S}{\overline{B_q(n,n,j)}}$ follows trivially, as multiples of a period are also periods. Therefore,
\begin{multline*}
\abs{\overline{A_q(n,n,p)}} = \sum_{k=1}^{p-1} (-1)^{k+1} \cdot \left(\sum_{\substack{S \subseteq [p-1]\\ \abs{S} = k}}\abs{{\overline{B_q(n,n,\gcd (S))}}}\right).
\end{multline*}
As the inner summand depends only on $\gcd(S)$, we may group the subsets by $g = \gcd(S)$ to obtain the equivalent summation,
\begin{multline*}
\abs{\overline{A_q(n,n,p)}} = \sum_{g=1}^{p-1} \abs{\overline{B_q(n,n,g)}} \\ \cdot \left[\sum_{k=1}^{p-1}(-1)^{k+1}
\abs{\set{S \subseteq [p-1]}{\substack{\abs{S}=k,\\ \gcd(S) = g}}}\right].
\end{multline*}
Through properties of $\gcd$ and the results of Nathanson~\cite{nathanson2007affine},
\begin{multline*}
\abs{\overline{A_q(n,n,p)}} = \sum_{g=1}^{p-1} \abs{\overline{B_q(n,n,g)}} \\ \cdot \left[\sum_{d=1}^{\left\lfloor \frac{p-1}{g} \right\rfloor} \mu(d) \sum_{k=1}^{\left\lfloor\left\lfloor \frac{p-1}{g} \right\rfloor/d\right\rfloor}(-1)^{k+1} \binom{\left\lfloor \left\lfloor \frac{p-1}{g} \right\rfloor/d \right\rfloor}{k}\right].
\end{multline*}
We note that the inner-most summation is equal to $1$; thus,
\begin{equation*}
\abs{\overline{A_q(n,n,p)}} = \sum_{g=1}^{p-1} \abs{\overline{B_q(n,n,g)}} \cdot \left[\sum_{d=1}^{\left\lfloor \frac{p-1}{g} \right\rfloor} \mu(d) \right].
\end{equation*}
Substituting the result from Lemma~\ref{the:bNN2} and rearranging the summation leads to the desired result,
\begin{equation*}
\abs{\overline{A_q(n,n,p)}} = \frac{q}{q-1}\cdot \sum_{d=1}^{p-1} \mu(d) \cdot (q^{\lfloor(p-1)/d\rfloor} - 1).
\end{equation*}
\end{IEEEproof}
}
\label{the:aNN}
\end{theorem}
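As a sanity check, the formula of Theorem~\ref{the:aNN} can be compared against a brute-force count for small parameters; the following Python sketch (helper names ours) does so.
\begin{verbatim}
from itertools import product

def mobius(d):
    m, res, f = d, 1, 2
    while f * f <= m:
        if m % f == 0:
            m //= f
            if m % f == 0:
                return 0          # squared prime factor
            res = -res
        f += 1
    return -res if m > 1 else res

def has_period(w, p):
    return all(w[i] == w[i + p] for i in range(len(w) - p))

def a_brute(q, n, p):
    # count vectors of length n avoiding every period p' < p
    return sum(not any(has_period(w, pp) for pp in range(1, p))
               for w in product(range(q), repeat=n))

def a_formula(q, n, p):
    s = sum(mobius(d) * (q ** ((p - 1) // d) - 1) for d in range(1, p))
    return q ** n - q * (s // (q - 1))   # s is divisible by q - 1

assert a_brute(2, 4, 3) == a_formula(2, 4, 3) == 12
\end{verbatim}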
This result can be extended for more cases when $n > \ell$.
\begin{theorem} For all $n, \ell, p$ such that $n \leq 2\ell - 2p+4$, and for all $q$,
\begin{equation*}
\abs{\overline{A_q(n,\ell,p)}} = \abs{\overline{A_q(\ell, \ell, p)}} \cdot q^{n-\ell} \cdot (1 + (n-\ell) \cdot (1- q^{-1})).
\end{equation*}
\full{
\begin{IEEEproof}
Let $n, \ell, p$ be given, and let $i = n - \ell$. We first propose a decomposition of $A_q(\ell, \ell, p)$, and then utilize it to determine the cardinality of $A_q(n, \ell, p)$. We decompose $A_q(\ell,\ell,p)$ according to the length of the shortest suffix of the vector that avoids all periodicity up to $p$:
\begin{equation*}
A_q(\ell,\ell,p) = \bigcup_{j=\ell-i}^{\ell} S_j,
\end{equation*}
where $S_{\ell-i}$ denotes the set of all vectors in $A_q(\ell,\ell,p)$ whose last $\ell-i$ symbols belong to $A_q(\ell-i, \ell-i, p)$, and $S_j$ for $j > \ell - i$ is the set of all vectors whose last $j-1$ symbols belong to $\overline{A_q(j-1, j-1, p)}$, yet whose last $j$ symbols belong to $A_q(j, j, p)$. The union is disjoint since, if the last $j$ symbols belong to $A_q(j, j, p)$, then the last $j'\geq j$ symbols also belong to $A_q(j', j', p)$. Notice that $\abs{S_{\ell-i}} = a_q(\ell-i, \ell-i, p)\cdot q^i$, as any vector in $A_q(\ell-i, \ell-i, p)$ can be extended with any $i$ symbols, and $\abs{S_j} = \abs{\overline{A_q(j-1, j-1, p)}} \cdot (q-1) \cdot q^{\ell-j} = \abs{\overline{A_q(\ell, \ell, p)}} \cdot (q-1) \cdot q^{\ell-j}$ for $j > \ell-i$, by Corollary~\ref{cor:primitive} (as only a single symbol continues the periodicity) and as $\abs{\overline{A_q(k, k, p)}}$ is independent of $k$ when $k \geq 2p-4$ (see Theorem~\ref{the:aNN}).
We now compute $a_q(n, \ell, p)$ by applying this decomposition to the first $\ell$ symbols of vectors in $A_q(n, \ell, p)$. Specifically, we distinguish two cases: (1) when the first $\ell$ symbols belong to $S_{\ell-i}$, the remaining $i$ symbols may be chosen freely; (2) when the first $\ell$ symbols belong to $S_{j}$ for $j > \ell-i$, the next $\ell-j+1$ symbols are chosen so as to break the period (which is determined by the last $2p-4$ symbols of the first $\ell$ symbols), and the remaining symbols are chosen freely. That is, we find
\begin{equation*}
a_q(n, \ell, p) = \abs{S_{\ell-i}} \cdot q^{i} + \sum_{j=\ell-i+1}^{\ell} \abs{S_j} \cdot (q^{\ell-j+1}-1) \cdot q^{j-(\ell-i+1)}
\end{equation*}
\vspace{-10pt}
\begin{multline*}
= a_q(\ell-i, \ell-i, p)\cdot q^{2i} \; + \\ \abs{\overline{A_q(\ell, \ell, p)}}\cdot q^{i-1}\cdot \underbrace{\sum_{j=\ell-i+1}^{\ell} (q-1) \cdot (q^{\ell-j+1}-1)}_{=q(q^i-1) - iq + i}.
\end{multline*}
The desired result follows from the above expression.
\end{IEEEproof}
}
\label{the:aNN+1}
\end{theorem}
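This identity can likewise be verified by brute force for small parameters; the following sketch (helper names ours) checks the instance $q=2$, $n=5$, $\ell=4$, $p=3$.
\begin{verbatim}
from itertools import product

def has_period(w, p):
    return all(w[i] == w[i + p] for i in range(len(w) - p))

def comp(q, n, ell, p):
    # size of the complement of A_q(n, ell, p)
    return sum(any(has_period(w[s:s + ell], pp)
                   for s in range(n - ell + 1)
                   for pp in range(1, p))
               for w in product(range(q), repeat=n))

q, n, ell, p = 2, 5, 4, 3        # satisfies n <= 2*ell - 2*p + 4
lhs = comp(q, n, ell, p)
rhs = comp(q, ell, ell, p) * q**(n - ell) * (1 + (n - ell) * (1 - 1/q))
assert lhs == rhs == 12
\end{verbatim}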
\short{\vspace{-5pt}}
\section{Conclusion}
\label{sec:conclusion}
In this work, we study codes that constrain periodicity within windows of the encoded messages. We propose a construction with a single symbol of redundancy based on iteratively repairing invalid windows until a valid message is encountered. Even though the algorithm does not possess monotonic progression, we prove convergence with linear average time complexity through a reduction to an acyclic graph walk. We continue by generalizing the proposed construction to offer a trade-off between the window size and the number of additional redundancy symbols. Lastly, we study the cardinality of the constraints, both to prove that the proposed construction is nearly optimal and to present novel exact formulas. Overall, we establish foundational constructions for constrained periodicity that may prove fundamental for many different applications, such as racetrack memories.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec_intro}
Anderson emphasized in his famous paper ``More is different"~\cite{Anderson:1972pca}, that
``the ability to reduce everything to simple fundamental laws does not imply the ability to
start from those laws and reconstruct the Universe". This means that a quantum many-body theory for
``reconstructing the Universe" is as critical an ingredient as the fundamental laws
themselves. The quark-gluon plasma (QGP) created in heavy-ion collisions provides a unique
opportunity to study the quantum many-body theory of hot QCD. The complexity of this
system renders exact/controlled first-principles calculations for several quantities
of interest challenging. However, one might expect
that the QGP, as a quantum many-body system, can be characterized by ``universal" properties
that do not depend on precise details of the underlying interaction;
if so, simplified QCD-inspired models should be a useful tool to gain insights into the
structure of the QGP.
The $T$-matrix approach employed in this work is an example for this philosophy.
While the partonic input Hamiltonian is adopted from a relatively simple Cornell-type potential,
with medium effects quantitatively constrained by three sets of lattice-QCD data
(equation of state (EoS), heavy-quark (HQ) free energy, and Euclidean quarkonium correlators),
the many-body theory is rather elaborate, with selfconsistent one- and two-body Green's functions,
and a resummed skeleton diagram series for the Luttinger-Ward-Baym
functional~\cite{Liu:2016ysz,Liu:2017qah} retaining their full off-shell properties.
This allows us to investigate how basic properties
of the underlying force manifest themselves in a wide range of QGP properties~\cite{Liu:2016ysz,Liu:2017qah,Liu:2018syc},
in particular microscopically emerging spectral functions and transport coefficients
which are not part of the fit and thus a prediction of the approach.
In the following, we briefly outline the theoretical setup of the $T$-matrix approach
(Sec.~\ref{sec_tm}), discuss the main results and physical insights (Sec.\ref{sec_res}),
and conclude (Sec.~\ref{sec_con}).
\section{Theoretical formalism for \textit{T}-matrix approach}
\label{sec_tm}
The ``fundamental" degrees of freedom and their interactions are characterized by an in-medium
effective Hamiltonian~\cite{Liu:2016ysz,Liu:2017qah},
\begin{align}
&H=\sum\varepsilon(\textbf{p})\psi^\dagger(\textbf{p})\psi (\textbf{p})+
\frac{1}{2}\psi^\dagger(\frac{\textbf{P}}{2}-\textbf{p})\psi^\dagger(\frac{\textbf{P}}{2}+\textbf{p})
V \psi(\frac{\textbf{P}}{2}+\textbf{p}')\psi(\frac{\textbf{P}}{2}-\textbf{p}') \ ,
\label{eq_Hqgp}
\end{align}
with parton energies $\varepsilon(\textbf{p})=\sqrt{M^2+\textbf{p}^{2} }$ whose masses are
parameters that are constrained by the lQCD EoS. The two-body potential is composed of
color Coulomb ($ V_{\mathcal{C}} $) and confining ($ V_{\mathcal{S}} $) terms,
\begin{align}
V(\textbf{p},\textbf{p}')=\mathcal{F}^\mathcal{C}R^\mathcal{C}V_\mathcal{C}(\textbf{p}-\textbf{p}')
+\mathcal{F}^\mathcal{S}R^\mathcal{S}V_\mathcal{S}(\textbf{p}-\textbf{p}') \ ,
\label{eq_vp}
\end{align}
with color Casimir factors, $\mathcal{F}^{\mathcal{C}(\mathcal{S})}$, and relativistic corrections,
$ R^{\mathcal{C}(\mathcal{S})}$.
Its static limit in coordinate space, $ \tilde{V}(r) $, is directly related to the HQ free energy
as computed in lQCD,
\begin{align}
&F_{Q\bar{Q}}(r,\beta)=\frac{-1}{\beta}\ln \bigg[\int_{-\infty}^{\infty}
dE\,e^{-\beta E} \frac{-1}{\pi}\text{Im}[\frac{1}{E+i\epsilon-\tilde{V}(r)
-\Sigma_{Q\bar Q}(E+i\epsilon,r)}]\bigg] \ ,
\label{eq_FreeEfinal}
\end{align}
where $ \Sigma_{Q\bar Q}(E+i\epsilon,r) $ is the two-body selfenergy
whose $ r $ dependence encodes interference effects~\cite{Liu:2017qah}.
With this Hamiltonian, a selfconsistent Brueckner $T$-matrix approach is carried out.
The ladder resummation is given by
\begin{align}
&T(E,\textbf{p},\textbf{p}')=V(\textbf{p},\textbf{p}')+
\int_{-\infty}^{\infty}\frac{d^3\textbf{k}}{(2\pi)^3}V(\textbf{p},\textbf{k})
G^{0}_{(2)}(E,\textbf{k})T(E,\textbf{k},\textbf{p}')
\label{eq_T}
\end{align}
with the in-medium 1- and 2-body propagators in Matsubara representation,
$ G=([G^{0}]^{-1}-\Sigma)^{-1}$ and
$G^{0}_{(2)}=-\beta^{-1}\sum_{\omega_n} G(iE_n-\omega_n)G(-i\omega_n)$, respectively. The
single-parton selfenergy,
\begin{equation}
\Sigma=[G^0]^{-1}-G^{-1}=\int d\tilde{p}~\,T(G) G\equiv-\beta^{-1}\sum_{\nu_n}
\int \frac{d^{3}\textbf{p}}{(2\pi)^3}T(i\omega_{n}+i\nu_{n})G(i\nu_n) \ ,
\label{eq_selfE}
\end{equation}
is a nonlinear functional of $ G $, characterizing the selfconsistency and satisfying
the conservation laws~\cite{Baym:1961zz,Baym:1962sx}; it can also be derived by a functional
derivative of the Luttinger-Ward functional (LWF)~\cite{PhysRev.118.1417}, $\Phi$,
as $ \delta\Phi/\delta G=\Sigma=\int d\tilde{p}TG $. For the latter we resum the ladder diagrams
to infinite order using a newly implemented matrix-log method to handle the extra $ 1/\nu $ factor,
as
\begin{align}
\Phi=\frac{1}{2}\sum\text{Tr} &\bigg\{G\bigg[V+\frac{1}{2}V G^{0}_{(2)}V+\ldots
+\frac{1}{\nu}VG^{0}_{(2)}V G^{0}_{(2)}\ldots V+\ldots\bigg]G\bigg\}
= -\frac{1}{2}\ln[1-VG^{0}_{(2)}] \,.
\label{eq_phi2}
\end{align}
The grand-potential ($ \Omega=-P $) with selfconsistent propagators thus takes the
form~\cite{PhysRev.118.1417,Baym:1962sx}
\begin{equation}
\Omega = \mp\sum\text{Tr}\{\ln(-G^{-1})+[(G^0)^{-1}-G^{-1}] G\}\pm\Phi \ ,
\label{eq_Omega}
\end{equation}
essentially corresponding to a single-particle and an interaction ($\Phi$) contribution.
Our fit of this part of the formalism to the lQCD EoS essentially constrains the effective
light-quark and gluon masses of the Hamiltonian.
On the other hand, lQCD data for the HQ free energy largely constrain the in-medium
potential.
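For orientation, the ladder resummation of Eq.~(\ref{eq_T}) can be solved numerically by discretizing the momentum integral, which turns $T=V+VG^{0}_{(2)}T$ into a linear system. The following Python sketch does this for a schematic $s$-wave toy kernel with a fixed (non-selfconsistent) two-body propagator; the potential, mass, width and units are placeholders, not the kernel of Eq.~(\ref{eq_vp}). In the actual approach, Eqs.~(\ref{eq_T}) and (\ref{eq_selfE}) are iterated to selfconsistency rather than solved once.
\begin{verbatim}
import numpy as np

# radial momentum grid (Gauss-Legendre), toy units
Npts = 64
x, wts = np.polynomial.legendre.leggauss(Npts)
k = 0.5 * (x + 1) * 4.0           # nodes mapped to [0, 4]
wts = 0.5 * wts * 4.0

def V(p, pp):
    # schematic attractive s-wave kernel (stand-in only)
    return -1.0 * np.exp(-(p - pp) ** 2)

def G2(E, k, M=1.5, Gamma=0.3):
    # schematic two-body propagator with a constant width
    return 1.0 / (E - 2.0 * np.sqrt(M**2 + k**2) + 1j * Gamma)

E = 3.2
Vmat = V(k[:, None], k[None, :])
meas = wts * k**2 / (2 * np.pi**2)              # s-wave measure
kern = Vmat * (G2(E, k) * meas)[None, :]
T = np.linalg.solve(np.eye(Npts) - kern, Vmat)  # T = V + V G2 T
\end{verbatim}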
Since our approach is evaluated in real-time, spectral functions and scattering amplitudes can
be extracted from the parton propagators and $T$-matrices, yielding direct insights into QGP
structure. They can be further used to compute transport coefficients within the same framework.
The shear viscosity is calculated by a Kubo formula at dressed one-loop level~\cite{Liu:2016ysz},
\begin{equation}
\eta=-\sum_i\pi d_i \int
\frac{d^3\textbf{p}d\omega}{(2\pi)^3} \frac{p_x^2p_y^2}{\varepsilon^2_i(p)}
\rho^2_i(\omega,p)\frac{d n_i(\omega)}{d\omega} \ ,
\end{equation}
where $ d_i $, $ n_i (\omega) $, and $ \rho_{i} $ are the parton degeneracies, thermal
distributions, and spectral functions, respectively.
The charm-quark friction coefficient and corresponding spatial diffusion
coefficient are obtained by an off-shell generalization~\cite{Liu:2016ysz,Liu:2018syc}
of previous $ T $-matrix calculations~\cite{Riek:2010fk,Prino:2016cni},
\begin{align}
A_c(p)=&\left\langle (1-\frac{\textbf{p}\cdot\textbf{p}'}{p^2})\rho_i\rho_i\rho_c\right\rangle \ ,
\quad D_s = T/(A_c(0)M_c) \ .
\end{align}
\section{Numerical Results and Insights}
\label{sec_res}
\begin{figure}[b]
\centering
\includegraphics[width=0.245\columnwidth]{fv.eps}
\includegraphics[width=0.245\columnwidth]{tm.eps}
\includegraphics[width=0.233\columnwidth]{rho.eps}
\includegraphics[width=0.253\columnwidth]{eos.eps}
\vspace{-0.3cm}
\caption{From left to right: potential and HQ free energy, imaginary part of color-singlet
$T$-matrix, light-quark spectral function and pressure with LWF contribution, for the SCS (red lines)
and WCS (blue lines).}
\label{fig_micro}
\end{figure}
\begin{figure} [t]
\centering
\includegraphics[width=0.35\columnwidth]{etads.eps}\hspace{0.5cm}
\includegraphics[width=0.35\columnwidth]{ratio.eps}
\vspace{-0.3cm}
\caption{Charm-quark diffusion coefficient and specific shear viscosity (left)
and their ratio (right) in the SCS (red lines) and WCS (blue lines)~\cite{Liu:2016ysz,Liu:2017lhv}.}
\label{fig_trans}
\end{figure}
It turns out that the selfconsistent fits to the EoS and quarkonium correlators and free energies
support two types of solutions: a strongly-coupled (SCS) and weakly-coupled (WCS)
scenario~\cite{Liu:2017qah,Liu:2018syc}. However, the underlying micro-physics for the two
solutions is very different, as illustrated in Fig.~\ref{fig_micro}.
The SCS features a long-range remnant of the confining potential that is much larger than the free
energy (left panel); this leads to strong resonances in the $ T $-matrix (second-from-left panel)
which in turn induce large parton collision rates which melt the quasiparticle peaks in their
spectral functions (second-from-right panel), signaling a transition in the degrees of freedom.
This transition can also be seen in the pressure, where the LWF ($\Phi$), encoding the
bound-state contribution, accounts for more than 50\% close to $T_c$.
On the other hand, in the WCS the potential is close to the free energy; this leads to relatively
weak resonance correlations near $T_c$ and small parton collision rates so that their spectral
functions retain well-defined quasiparticle peaks. For the pressure, the LWF contribution remains
small at all temperatures, with no indication for a transition in the degrees of freedom.
How can we distinguish these solutions? It turns out that the transport parameters of the two
scenarios are quite different, cf.~Fig.~\ref{fig_trans}. For the SCS, the specific shear
viscosity is about twice the conjectured lower bound~\cite{Kovtun:2004de}, but another factor
of $\sim$2 larger in the WCS at low $T$. The difference in the HQ diffusion coefficient is more
pronounced: it is about twice the thermal wavelength in the SCS, but up to another factor of 5
larger in the WCS. From a phenomenological point of view~\cite{Prino:2016cni,Rapp:2018qla,Liu:2018syc},
this clearly favors the SCS. Of particular interest is the ratio of $D_s(2\pi T)$ over
$4\pi\eta/s$~\cite{Rapp:2009my}, which is expected to be near 1 in the strongly coupled
limit~\cite{Gubser:2006bz}, but 5/2 in a perturbative system~\cite{Danielewicz:1984ww}.
The latter is indeed realized in our WCS at all temperatures, while the former is realized
in the SCS at low temperatures, with the ratio increasing toward higher temperatures.
This corroborates the classification of the two scenarios as ``strongly" and ``weakly" coupled.
\section{Conclusion and Discussion}
\label{sec_con}
We have utilized a thermodynamic $T$-matrix approach to understand and connect various properties
of the QGP in the nonperturbative regime. We have employed a QCD-inspired effective Hamiltonian and
constrained its parameters using lattice-QCD data on the equation of state, heavy-quarkonium
correlators and free energies. Carrying out a full quantum calculation in fitting these quantities,
the resulting spectral and transport properties of two solution types -- SCS and WCS -- exhibited
a ``sufficient and necessary" correlation link: ``large color (string) potential" $\Leftrightarrow $
``strong two-body resonances" $\Leftrightarrow $ ``broad (non-quasiparticle) spectral functions"
$ \Leftrightarrow $ ``small viscosity/spatial diffusion coefficient". Considering the phenomenological
constraints for the transport coefficients from hydrodynamic~\cite{Bernhard:2016tnd,Niemi:2018ijm}
and open heavy-flavor data in heavy-ion collisions~\cite{Prino:2016cni,Rapp:2018qla,Liu:2018syc},
this chain implies that the microscopic structure of the QGP should be close to the one predicted
by the SCS. The basic feature of the underlying force is a long-range remnant of the confining
potential, which generates a strong-coupling behavior for the long-wavelength excitations of the
system, {\it i.e.}, its low-momentum spectral functions and transport properties, while recovering
weakly-coupled behavior at short distance. Ultimately, these are manifestations of the nontrivial
vacuum structure of QCD and its running coupling that persist into the QGP.
Our conclusions here differ from those of the Bayesian analysis of lQCD Wilson line/loop
data~\cite{Burnier:2014ssa} whose results for the extracted potential and imaginary parts
are close to the WCS in the $T$-matrix approach; further efforts are required to disentangle
this apparent discrepancy.
\section*{Acknowledgments}
This work was supported by the U.S.~National Science Foundation (NSF) under
grant no.~PHY-1614484.
\bibliographystyle{apsrev4-1}
\subsection*{Polarization Process}
Before describing our proposal, we briefly discuss the physical
setting. The outline of the implementation is founded on standard
liquid NMR physics, although as the calculations will indicate,
development of a useful NMR quantum computer will require more.
We consider a collection of macromolecules, each containing $n$ atoms
with nuclear spin $1/2$ and nuclear magnetic moment $\mu$, suspended in a
liquid medium at temperature $T$, so that the relaxation (coherence)
time between the particles and the surrounding liquid is on the order
of seconds or thousands of seconds.
The liquid is subjected to a magnetic field $B_0$. Upon reaching
thermal equilibrium, the difference between the fraction of particles
oriented in the direction of the field, and those oriented in the
opposite direction, is \[ \epsilon = {\mu B_0 \over kT} \]
where $k$ is Boltzmann's constant, approximately $10^{-16}$ in CGS
units.
A typical magnetic field $B_0$ is approximately
$10^5$ Gauss.
A nuclear magnetic moment such as that of the proton is approximately
$10^{-23}$ in CGS units.
At room temperature ($T = 300$~K), with an especially strong
magnet, we can therefore obtain $\epsilon \approx 3 \times 10^{-5}$.
As will be explained later, the number of qubits upon which a
quantum computation can be performed is approximately $\epsilon^2 n$, where
$n$ is the number of spin $1/2$ particles in the macromolecule we
employ as our quantum computer. For $\epsilon$ in the range obtained above,
and in order to carry out a quantum computation on a useful number of
qubits (e.g.\ $10^2$), this would require an impractically large
macromolecule of size about $10^{11}$.
Hence it is imperative to create a stronger initial polarization
$\epsilon$. An obvious parameter to consider is temperature. Reducing the
temperature to $10^{-1}$~K gives $\epsilon \approx 10^{-1}$, therefore
quantum computations on $10^2$ qubits become possible using a molecule
of size about $10^4$. However, it is difficult to obtain
long coherence times at these low temperatures.
Perhaps a more promising avenue is the use of optical pumping
techniques for boosting the value of $\epsilon$. Until recently this
technique has been confined to atomic gases, particularly xenon
\cite{BCGHHN,Pines}; values of $\epsilon$ exceeding $1/2$ have been attained.
There are plans at IBM to explore these techniques for
molecules that may be suitable for quantum computation. With a value
of $\epsilon$ in this range, the size of the molecules needed for a quantum
computation on $10^2$ qubits would be under $10^3$.
In the remainder of the paper we will simply assume that some $\epsilon$
has been provided by the polarizing process, and from that starting
point we will show how to initialize the computer so that it can carry
out any desired computation.
\subsection*{Abstract Setting}
We start by describing an abstract computational model that describes
an NMR quantum computer. An ``NMR quantum computer'' is described by
four parameters: $n,\ell, k$, and $\epsilon$. $n$ is the number of
qubits in the computer (it is the number of spins available for
computation in each molecule of the NMR sample).
Initially the $n$ qubits are in a thermal mixture which deviates
slightly from a uniform distribution. $\epsilon$ is the bias induced, at
the start, by the external polarizing process. Namely, if any given
bit of the computer is measured, the probability that $\ket{0}$ is
observed is ${1+\epsilon \over 2}$.
We assume that the statistical correlation between any two bits on a
molecule falls off exponentially with the distance between those
bits. $\ell$ is the ``correlation distance'', the distance such that
the correlation falls below some prescribed threshold such as $1/10$.
We will use the term $\epsilon$-biased distribution to refer to such a thermal
mixture.
If there were no correlations, the distribution on the bits would be
binomial; in the more realistic case which we consider, we will be
able to obtain all the same essential results as if the distribution
was binomial. Only the analysis will be a little more difficult, and
the numbers a little worse, than for the binomial distribution.
Why is it sufficient to specify the distribution that results when
we measure the $n$ qubits in the computational basis? To properly
describe the bulk sample in thermal equilibrium,
we would have to specify
the density matrix associated with the bulk sample. Different
mixtures of pure states with the same density matrix are indistinguishable
by any measurement (so long as that measurement is applied to the
whole ensemble, not to individual members of the ensemble), and
therefore by any quantum computation followed
by a measurement in the computational basis. However, we will further
restrict the quantum computation that we will allow during the
state initialization process. The state initialization will be
carried out by a computation that can only permute the computational
basis states (i.e.\ by essentially a classical computation).
Under these restrictions, it is sufficient to
specify only the probability distribution
that results when we measure the initial state of the sample in
the computation basis. This is because different mixtures of pure
states with different density matrices, but with the same resulting
probability distribution, yield the same result under a basis state
permutation followed by a measurement in the computational basis. Since at the
end of our initialization process, we plan to obtain $O(n)$ qubits
in the all $\ket{0}$ state, any further (general) quantum computation
that is restricted to these qubits yields the same results that it
would if started on a $\ket{\bar{0}}$ state.
In addition to the operation of initializing the thermal mixture to an
$\epsilon$-biased distribution, there are four primitive computational
operations that an NMR quantum computer supports:
a) Cyclically shift the $n$ bits clockwise or counterclockwise one
position.
b) Apply an arbitrary two bit operation to the first two bits.
c) Measure the first bit (in some fixed basis).
d) (For a quantum cellular automaton) For some fixed value of $k$
(depending upon the structure of molecule chosen for the NMR
experiment), apply an arbitrary 2-bit operation to all pairs of bits
with indices $lk$ and $lk +1$.
Notes: 1. Operation (a) does not require that the macromolecule have a
cyclic topology. Our operative assumption is a linear topology. The
implementation of the cyclic shift operation is given in the
``Architecture'' section, below. 2. As stated at the outset, these
operations are a model of an NMR quantum computer.
It must be understood that there is considerable flexibility in the
design of the model, and that for the sake of specificity, we have
made some arbitrary choices; proper choices must eventually be
made on the basis of experimental considerations.
In fact, there can be substantial reward for enriching the above
operations. The machine architecture given by operations (a)-(c)
corresponds to that of a $1$-tape Turing Machine. (We will speak of
the site where we can execute arbitrary operations on the pair of bits,
as the ``tape head''.) Later in the paper, after describing designs
which yield operations (a)-(c), we will also briefly describe how a
slight variation of the design can in fact yield the equivalent of a
$2$-tape Turing Machine. (Still on a linear molecule.) With such a
machine, the run time of our algorithm can be significantly improved.
\subsection*{Overall Scheme}
An ideal NMR quantum computer would have its $n$ qubit register
initialized to $\ket{0^n}$. The main goal of this paper is to
describe an efficient simulation of an ideal NMR quantum computer
using an NMR quantum computer. Notice that if the bias $\epsilon$,
in the initial state of the NMR quantum computer, were $0$
then the density matrix of the mixture (of the $n$ qubit computers)
would remain unchanged by any sequence of computational steps.
Therefore an NMR quantum computer with parameter $\epsilon =0$ is
incapable of supporting any computation. Our goal is to use the small
but constant bias $\epsilon >0$ to isolate $m = \Theta(n)$ qubits such that the
reduced density matrix of these $m$ qubits is very close to the
density matrix corresponding to the pure state $\ket{ 0^m}$.
What we need in order to achieve this goal is quite simple: we wish to
carry out a permutation of the computation basis states $x \in \{0,1\}^n$
such that states with low Hamming weight should be reencoded with a long
prefix of $0$'s. A similar task has been addressed previously by a
quantum computation \cite{CD}. However, in that method, the
necessary permutations are accomplished with the aid of a quantum
computer which already has at its disposal a clean workspace, i.e.\ a
sequence of qubits in a known initial state (of size about
$n^{1/2}$). Obtaining such a clean workspace, in an NMR
computer, is precisely the problem which needs to be addressed in
order to make NMR quantum computing possible in the first place.
In other words, what complicates the construction of these
permutations, for us, is that
we cannot assume that we have any clean bits at all (i.e.\ bits whose
distribution is almost entirely supported on $\ket{0}$ or $\ket{1}$)
to store intermediate
results of our computation, since all the available qubits are
in the thermal state. Consequently, and because of the restricted set of
primitive operations allowed on an NMR quantum computer (necessary
because of the physical limitations), we are initially hampered in
the kinds of logical operations we can implement in our computer.
What we provide is an ``end-to-end'' procedure: we start
with only a string of qubits in a thermal mixture, and we end with a
string of qubits that with high probability are all in the $\ket{0}$
state.
\begin{theorem} \label{thm1}
Assume that the thermal mixture is in an $\epsilon$-biased
distribution. Then there is a constant $c$ such that, using primitives
(a) and (b), we can convert the given mixture to one in which $1-o(1)$ of
the probability is supported on strings which begin with a run of
$c \epsilon^2 n$ $0$'s.
\end{theorem}
The process which we will describe uses $O(n^2)$ steps.
We will show how to obtain a value of approximately $1/20$ for $c$,
i.e., a loss factor of about $20\,\epsilon^{-2}$.
A slightly more complicated implementation of our method (esp.\ by
using larger blocks in the latter stages of phase 1 and the earlier
stages of phase 2, see below) can improve this constant further.
\noindent{\bf Proof}
We begin by permuting the bits; if we wish to minimize our reliance
on any assumptions concerning the dependencies among spins in the
original mixture, then the permutation of $\{1,...,n\}$ is chosen at
random, uniformly, by the experimenter. If (as is more likely, and as
was assumed in the previous section) we can assume only local
correlations then it is enough to ``shuffle'' the bits in any
predetermined manner that guarantees that all bits that start out
close to each other (within distance $n^{1/3}$) end up far apart (at
least distance $n^{1/3}$). If we can really assume a binomial
distribution on strings, then this initial permutation is unnecessary.
Under weaker assumptions, the permutation is necessary in order for
the probability bounds of the analysis to be valid.
There are a variety of ways to carry out the permutation; using
operations (a) and (b) it
can be accomplished without difficulty using (to within a constant factor)
the optimum number of transpositions. Typically, and in the worst
case, this number will be on the order of $n^2$.
We will analyze weak (i.e.\ locally correlated) distributions as follows.
The initialization algorithm has the property that it partitions the
$n$ bits into blocks of size $n^{1/3}$, and each processed bit output by
the algorithm depends only on one of the blocks. Now, since the $n$
bits were randomly permuted, with high probability no two bits in
any block started out at distance less than $n^{1/3}$.
This implies (even under very weak assumptions on the manner in which
local correlations decay) that the distribution on each block is very closely
approximated by the binomial distribution. (Under the assumption that
local correlations decay exponentially in distance, the distribution
in the block will have exponentially small distance to the binomial
distribution, in the $L_1$ norm.)
After the initial permutation, we carry out the preparation of the
initial segment of bits. This process will proceed in
three phases.
\begin{enumerate}
\item Boosting to constant bias:
In this phase we extract, from $n$ bits with bias $\epsilon$,
$\Theta(\epsilon^2 n)$ bits which have large constant (i.e.\
independent of $n$) bias. This process is efficient (in terms of how
many bits of output are produced) up to a constant factor.
\item Obtaining polynomially small $\delta=(1-\epsilon)/2$ by increasing
block sizes.
\item Boosting to obtain a nearly perfect block of bits:
In the final phase, while keeping the block size beneath $n^{1/2}$, we
reduce $\delta$ beneath $n^{-10}$. The union bound then implies that
a computation can then begin, working on the assumption that all bits
are $0$'s, and incur only a polynomially small ($n^{-9}$) probability
of error due to possible bad initialization.
\end{enumerate}
\subsection*{Phase 1: Amplification to constant bias}
In phases 1-3 we partition the $n$ bits
into blocks of size $n^{1/3}$. All computations of phases 1-3 are
conducted internally within these blocks, until after phase 3 the
clean bits are finally collected together in one location for use in a
subsequent computation. In this way we ensure that we can use
near-independence of the bits within each block. If the original
probability distribution was binomial (rather than having local
correlations), there is no need for this device.
{\it {\bf Theorem \ref{thm1}, phase 1: }
Starting with $n$ $\epsilon$-biased bits,
and using operations (a),(b), we can with
probability $1-o(1)$ obtain $\Omega(\epsilon^2 n)$ bits with bias at least
$0.856$.
}
We will go through several rounds of amplification; as soon as $\epsilon$
exceeding $0.856$ is achieved, we stop using this process and switch
to phase 2.
The amplification scheme is very simple. Partition the bits into pairs.
If the bits in a pair are different discard both. Else discard one.
The expected bias towards $0$ among the surviving bits is $2 \epsilon
\over 1 + \epsilon^2$.
Also, the expected number of bits that survive is $n {1 + \epsilon^2
\over 4}$. Since the bits are nearly independent (they would be
completely independent if the original distribution was binomial), a
large deviation bound now implies that with probability
at least $1 - e^{-n/3}$, the number of bits surviving is at least
${1 \over 4} n - n^{2/3}$.
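The scheme is easy to simulate classically. A minimal sketch (ours), drawing independent $\epsilon$-biased bits, can be checked against the recurrence $\epsilon_{i+1}=2\epsilon_i/(1+\epsilon_i^2)$ and the survival rate $(1+\epsilon^2)/4$:
\begin{verbatim}
import random

def amplify_round(bits):
    # keep one bit of each equal pair; discard unequal pairs
    return [a for a, b in zip(bits[::2], bits[1::2]) if a == b]

n, eps = 10**6, 0.03
bits = [0 if random.random() < (1 + eps) / 2 else 1 for _ in range(n)]
while bits:
    bias = 1 - 2 * sum(bits) / len(bits)    # estimate of epsilon
    if bias > 0.856:
        break
    bits = amplify_round(bits)
\end{verbatim}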
As we go through several ($k$) rounds, the probability that we wind up
with less than $n 4^{-k} (1-n^{-1/3})^k$ bits is at most $ke^{-n/3}$.
This is negligible.
A somewhat more delicate question is: can we wind up
with bits of constant ($0.856$) bias while bounding $4^{-k}$ from
below by $\Omega(\epsilon^{2})$? A positive answer comes from the
following analysis.
From the
formula $\epsilon_{i+1} = { 2 \epsilon_i \over 1 + \epsilon_i^2}$ we
obtain two things. First, \[\epsilon_i = \epsilon_0 2^i / \prod_{j=0}^{i-1}
(1+\epsilon_j^2).\] So we can rephrase our goal: we wish to upper bound
$\prod_{j=0}^{i-1} (1+\epsilon_j^2)$ (where
$\epsilon_i=\hat{\epsilon}$). In an ideal process in which $\epsilon$
doubled in each round, we would need $k=\lg
(\hat{\epsilon}/\epsilon_0)$; in the true process we need to increase
$k$ over this ideal quantity by $\lg \prod_{j=0}^{i-1}
(1+\epsilon_j^2)$. In other words, the multiplicative effect on $4^k$
(over the optimal factor), is at most $(\prod_{j=0}^{i-1}
(1+\epsilon_j^2))^2$.
Second,
\[ \epsilon_i = {1 - \sqrt{1 - \epsilon_{i+1}^2} \over
\epsilon_{i+1}}.\]
The remainder of this analysis is broken into two parts: the rounds
until $\epsilon>1/100$, and the remaining rounds until
$\epsilon>0.856$.
For the first part we use the inequality
\[ x \leq 0.02 \mbox{ implies }
\sqrt{1-x} \geq 1- {1 \over 2} x - {1 \over 4} x^2 \]
to show that
\[ \epsilon_i \leq {1 \over 2} \epsilon_{i+1} (1 + {1 \over 2} \epsilon_{i+1}^2).\]
In particular note that this implies \[ \epsilon_i \leq
0.5004 \epsilon_{i+1} \]
so long as $\epsilon_i$ is beneath our threshold for using this
analysis.
Now, $\prod (1+\epsilon_j^2) \leq e^{\sum \epsilon_j^2}$.
Consequently $\prod (1+\epsilon_j^2) \leq e^{{0.02}^2 {1 \over
1 - 0.5004}}$
and so the multiplicative effect on $4^k$ in these rounds
(the factor for how many bits we are losing) is bounded by
$e^{{0.02}^2 {2 \over 1 - 0.5004}} < 1.0017$.
In the remaining sequence of rounds we have $0.01 < \epsilon_i \leq 0.856$.
We obtain an upper bound on $(\prod_{j=0}^{i-1} (1+\epsilon_j^2))^2$
by explicitly calculating it beginning with the term corresponding to
$0.856$ and working down, until and including the first term that is
less than $0.01$ (which is the seventh iterate, equal to approximately
$0.009985$). This product is less than $6.7$. \\
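The backward iteration and the resulting product can be reproduced in a few lines (a sketch, using the inverse map $\epsilon_i = (1-\sqrt{1-\epsilon_{i+1}^2})/\epsilon_{i+1}$):
\begin{verbatim}
eps, prod = 0.856, 1.0
while eps >= 0.01:
    prod *= 1 + eps**2
    eps = (1 - (1 - eps**2) ** 0.5) / eps
prod *= 1 + eps**2           # include the first term below 0.01
print(eps, prod**2)          # eps ~ 0.00998, prod**2 ~ 6.69 < 6.7
\end{verbatim}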
\noindent {\bf Implementation: }
We have to be somewhat careful to implement the amplification scheme
using the computational primitives described above.
We can think of the machine given by primitives (a),(b), as a Turing
machine, whose ``tape head'' is at the site at which arbitrary unitary
operations can be implemented on a pair of adjacent bits.
We will want to speak of the tape head carrying with it a small
``register'' of several bits: this is easily implemented, by
interspersing rotations of the tape with transpositions at the site of
the ``tape head''. We will use a two-bit register labelled $y_1,y_2$.
We will perform the amplification in stages.
Start with arbitrary
bits in the two-bit register.
For $m$ ranging from $1$ up to $N/2$
(where $N$ is the current number of bits left in the process ---
initially
$O(n^{1/3})$), carry out the
two-bit operation ``are they equal?'', namely $\ket{01} \rightarrow
\ket{11}$, $\ket{11} \rightarrow \ket{01}$, on the pair of bits, which
we will call $x_{m,1}, x_{m,2}$.
Now for $m$ ranging from $1$ up to $N/2$, do the following.
Exchange $x_{m,1}$ with $y_1$, and $x_{m,2}$ with $y_2$.
Now move the tape head back to the first pair, $x_{1,1},x_{1,2}$. For
$i$ from $1$ to $m-1$,
do the following: if $y_1=0$, exchange $y_2$ with $x_{i,2}$. Finally,
move the tape head to pair $m$,
and exchange $x_{m,1}$ with $y_1$, and $x_{m,2}$ with $y_2$.
After $m$ reaches $N/2$, and before the next iteration, exchange each
pair of bits $x_{j,1}, x_{2j,2}$ for $1 \leq j \leq N {1+\epsilon^2 \over
4} (1-o(1))$. This brings all the ``good'' bits to the initial segment
of length $N {1+\epsilon^2 \over
4} (1-o(1))$. This will be the value of $N$ in the next stage. (The
$1-o(1)$ term, derived from a law of large numbers,
is chosen so that with high probability all bits in the
segment are in fact ``good'' bits.)
The total number of steps in all stages of all rounds is quadratic in
the block size, hence $O(n^{2/3})$.
At the end of the process, the $\Theta(n^{1/3} \epsilon^2)$ good bits lie
in a segment at the start of the block.\\
Why is it necessary to switch to
phase 2 once the bias of the bits is high? Because
once the bits have high bias, the bit that is discarded in a phase 1
computation itself has substantial bias. Consequently
the method is wasteful; if we continued with phase 1 to the end, the
ratio of clean bits obtained to the number we started with, would
tend to $0$ in $n$ (rather than being the fixed quantity
$\Omega(\epsilon^2)$, independent of $n$). In phases 2
and 3 we use blocks that, instead of being of the fixed size $2$,
increase together with the bias. Only one or a constant number of bits
are discarded from each block of the computation, and it becomes
possible to discard a small fraction of the bits, while still
amplifying those that remain.
\subsection*{Phase 2: obtaining polynomially small $\delta$.}
{\it {\bf Theorem \ref{thm1}, phase 2: }
Starting with $n$ bits of bias at least $0.856$,
and using operations (a),(b), we can obtain $\Omega(n)$ bits
with $\delta<n^{-0.3}$.
}
This phase will require $O(\log \log n)$ rounds, each using time
$O(n^{2/3})$.
We begin with $n_0$ bits, of which most are $0$'s, but a constant
fraction, $b_0$, are $1$'s.
We partition the bits randomly into bins,
each of $k_0$ bits
$x_1,...,x_{k_0}$. In each bin, we parity bits $x_2$ through $x_{k_0}$ into
bit $x_1$. If $x_1$ equals $1$, we do not pass bits $x_2,...x_{k_0}$
along to the next round; if $x_1$ equals $0$, we do. This is repeated
for several rounds with varying $k$. The bins are
rerandomized in each round. (All the randomness, again, is provided
externally by the experimenter. The computation itself is
deterministic. In particular, all tape movements are oblivious.) \\
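One round is easy to simulate classically (a sketch; the shuffle stands in for the experimenter-provided rerandomization, and the printed values can be compared with the bounds below):
\begin{verbatim}
import random

def phase2_round(bits, k):
    random.shuffle(bits)            # rerandomize the bins
    out = []
    for i in range(0, len(bits) - k + 1, k):
        block = bits[i:i + k]
        if sum(block) % 2 == 0:     # parity of the bin lands in x1
            out.extend(block[1:])   # pass x2..xk to the next round
    return out

bits = [1 if random.random() < 0.072 else 0 for _ in range(3 * 10**5)]
nxt = phase2_round(bits, k=3)
print(len(nxt) / len(bits), sum(nxt) / len(nxt))  # ~0.54 and ~0.012
\end{verbatim}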
\noindent{\bf Analysis: }
Let $\delta_0 = b_0/n_0$. The probability that a given bin contains
exactly one $1$ is (for large $n_0/k_0$) very close to
\[k_0 \delta_0 (1-\delta_0)^{k_0-1}.\]
(This is what it would be exactly, for independent
sampling with probability $\delta_0$).
Moreover for large $n_0/k_0$, there is a law of large numbers saying
the total number of bins containing one $1$, call it $u$, is with high
probability very close to its expected value,
\[b_0 (1-\delta_0)^{k_0-1}. \]
{\bf (a)} The total number of bits passed along to the next round,
$n_1$, is lower bounded by only considering bits from blocks which were
entirely $0$'s; this bound (again using a law of large numbers to make
a high-probability statement) is
\[n_1 \geq {n_0 \over k_0} (1-\delta_0)^{k_0} (k_0-1).\]
{\bf (b)} The total number of $1$'s passed along to the next round is
at most $b_0 - u$ which, w.h.p., is close to its expectation, so
we write
\[ b_1 \leq b_0(1-(1-\delta_0)^{k_0-1}).\]
Now we need to make a good choice of $k$ as a function of $\delta$.
Note that $\delta=0.072$ corresponds to $\epsilon=0.856$.
Our choice is as follows: for $0.0188 < \delta \leq 0.072$, select
$k=3$. For $0.0027 < \delta \leq 0.0188$, select $k=7$. For $0.000158
< \delta \leq 0.0027$, select $k=21$. For $\delta \leq 0.000158$,
select $k=\delta^{-0.4}$. Note that in this region $k \geq 33$.
In the first of these regions we
are guaranteed $n_1/n_0 \geq 0.532$; in the second we
are guaranteed $n_1/n_0 \geq 0.75$; and in the third we
are guaranteed $n_1/n_0 \geq 0.899$. Each of these regions is
encountered at most once in the process.
In the fourth region, we have $n_1/n_0 \geq {k - 1 \over k}
(1-\delta)^k \geq e^{-1.1 \delta^{0.4}}$. We also have
$\delta_1 \leq \delta {1 - (1-\delta)^{k-1} k \over (1-\delta)^k (k-1)}
\leq 1.1 \delta (1 - (1-\delta)^{k-1}) \leq 1.2 \delta^2 (k-1) \leq 1.2
\delta^{1.6}$.
Consequently, over the entire fourth region, $\prod (n_1/n_0) \geq
e^{-1.1 \sum \delta^{0.4}} \geq 0.96$.
The above iterations halt once we reach a large enough block size,
$n^\alpha$ for
$0.2 < \alpha \leq 0.32$. At that point we implement another few
iterations using blocks of size $n^{1/3}$ (we can simply use the
entire block of bits that is allowed to interact), bringing $\delta$
down close to the stationary point of the iteration $\delta_1 \leq
\delta_0^2 k$, i.e.\ $\delta=n^{-1/3}$; let us say we halt when
$\delta \leq n^{-0.3}$.
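Iterating the analytic bounds (a) and (b) with this choice of $k$ reproduces the claimed progression; the following sketch tracks only the bookkeeping, not the bit-level computation.
\begin{verbatim}
def choose_k(delta):
    if delta > 0.0188:
        return 3
    if delta > 0.0027:
        return 7
    if delta > 0.000158:
        return 21
    return max(33, round(delta ** -0.4))

def next_round(n, delta):
    k = choose_k(delta)
    n1 = (n / k) * (1 - delta) ** k * (k - 1)       # bound (a)
    b1 = n * delta * (1 - (1 - delta) ** (k - 1))   # bound (b)
    return n1, b1 / n1

n, delta = 1e9, 0.072
while delta > n ** -0.3:
    n, delta = next_round(n, delta)
\end{verbatim}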
\subsection*{Phase 3: obtaining $\delta < n^{-10}$.}
{\it {\bf Theorem \ref{thm1}, phase 3: }
Starting with $n$ bits of bias at least $1-n^{-0.3}$,
and using
operations (a),(b), we can obtain $(1-o(1))n$ bits of bias $1-n^{-10}$.
}
This phase will require a constant number of rounds, each using time
$O(n^{2/3})$ in each $n^{1/3}$-size block, hence $O(n^{4/3})$ time
overall.
Fix blocks of size $k=n^{1/6}$. Now instead of paritying into just
one bit, parity into the first $2$ bits, i.e.\ compute modulo $4$ the
number of $1$'s in the block. We now implement the logic gate $(x,y,z)
\rightarrow (x,y,(x \vee y) \oplus z)$ with $x$, $y$ and $z$
representing the first three bits of the block. Now, if after this
gate, the third bit is a $1$, we pass the remaining $n^{1/6}-3$
bits of the block on to the next round. Now that the decision has been
encoded in one bit (namely the third bit of the block), this procedure
can be implemented in a manner similar to that described concerning
phase 2 (the ``decision bit'' is carried in the tape head and controls
whether or not a permutation is implemented).
We will only pass $1$'s through to the next round if there are at
least $4$ of them in the entire
block, or any in the first $3$ bits. The recurrence for $\delta$ is
therefore approximately
\[ \delta_1 \leq \delta_0 (3n^{-1/6} + 3 \delta_0 + {n^{1/6} \choose 3}
\delta_0^{3}). \]
Beginning with the value $\delta \leq n^{-0.3}$ provided by the
previous phase, only a constant number of iterations are required to
reduce $\delta$ beneath $n^{-10}$. The total number of bits is reduced
only by a $1-o(1)$ factor.
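A sketch of the recurrence (bookkeeping only) shows the rapid convergence:
\begin{verbatim}
from math import comb

def phase3_step(delta, n):
    k = round(n ** (1 / 6))
    return delta * (3 / k + 3 * delta + comb(k, 3) * delta ** 3)

n = 10**12
delta, rounds = n ** -0.3, 0
while delta > n ** -10.0:
    delta = phase3_step(delta, n)
    rounds += 1
# each round shrinks delta by roughly the factor 3 * n**(-1/6)
\end{verbatim}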
\subsection*{Termination}
At this point, in time proportional
to $n^2$, we gather together the remaining bits from all the
$n^{1/3}$-size blocks, ready for a subsequent computation. The probability
that any of these bits are not $0$'s is at most $n^{-9}$.
\subsection*{Efficiency: bit yield}
Collecting together the loss factors from phases 1, 2 and 3, we find
that
\[{ n_{\mbox{initial}} \over n_{\mbox{final}}} \leq 1.0017 \times
6.7 \times {1 \over 0.532} \times {1 \over 0.75} \times {1 \over 0.899}
\times {1 \over 0.96} \times (1+o(1)) \times \epsilon^{-2} \leq 20 \epsilon^{-2} .\]
This factor can be improved by using more complex computations.
The chief place to obtain gains is in the latter stages of
phase 1 and the earlier stages of phase 2; in both cases the way to
improve efficiency is to use larger block sizes, and a more
complicated permutation within each block, in order to extract a
fraction of bits from the block that tends to the optimal fraction,
$({\epsilon_{i} \over \epsilon_{i+1}})^2$.
\subsection*{$\Omega(\epsilon^2 n)$ clean bits is optimal}
It was noted above that if $\epsilon=0$, we cannot prepare any bits
at all that are biased toward $\ket{0}$. If $\epsilon > 0$, how
many such bits can we hope to prepare? If we ask that with high
probability $k$ bits are all $0$'s, then the central limit theorem
places a
limit on $k$ of $n(1-H_2({1+\epsilon \over 2}))$ which, for small
$\epsilon$, is approximately $n \epsilon^2$.
To prepare just one good bit, therefore, we must use about
$\epsilon^{-2}$ bits with bias $\epsilon$.
\section*{Architecture}
We now discuss how the computational primitives (a),(b), and some
extensions, can be implemented on polymers with certain kinds of
periodic structures.
\subsection*{Turing machine: }
Normally one imagines a Turing machine having a ``head'' which
implements computations locally, i.e.\ involving the state of the ``tape''
in the vicinity of the head. We implement this abstraction (but
without any moving parts) in the following way. (It must be understood
that there is considerable flexibility in the design, and
that for the sake of specificity, we are making some arbitrary choices;
the proper choices must eventually be made on the basis of experimental
considerations.)
The tape will not of course be infinite, but a ring of $n$ qubits.
These will be realized in the nuclear spins of a linear polymer. The
polymer will consist of $n/3$ repetitions of the sequence $ABC$, thus
$ABCABCABC\ldots$; the atoms $A,B,C$ have spin $1/2$
nuclei. In addition, at one point in the chain,
another atom, $D$, is adjacent to the chain, near a neighboring pair
of $C$ and $A$ atoms; it induces a chemical shift in some of the
energy levels at these two neighboring atoms.
(Note: it is not actually necessary for $A$,$B$ and $C$ to be
different types of nuclei; they could all be of one kind, if the
periodic structure resides in adjacent atoms that induce suitable
chemical shifts in the energy levels.)
Five resonant frequencies will be such that we can
implement the following five operations:
\begin{enumerate}
\item Frequency 1: transposition of the qubits in all adjacent $AB$
pairs.
\item Frequency 2: transposition of the qubits in all adjacent $BC$
pairs.
\item Frequency 3: transposition of the qubits in all adjacent $CA$
pairs.
\item Frequencies 4,5: these resonate only with energy levels shifted
by the presence of atom $D$. Hence they induce a unitary operator
only on the pair of qubits at the $C$ and $A$ atoms immediately
adjacent to atom $D$. We assume that the combinations of frequencies
4 and 5 generate the group of all transformations in that
$4$-dimensional Hilbert space.
\end{enumerate}
Arbitrary ``oblivious'' quantum computations can be performed on this
machine. By an ``oblivious'' computation we mean one in which the
sequence of movements of the tape head is a function is the same in
all the superposed ``copies'' of the machine, in the quantum
computation.
A cyclic shift of the tape by one position is implemented by the
following sequence of transpositions: $(A,B)$, $(C,A)$, and then
$(B, C)$.
(Each such transposition can be
implemented by three CNOT gates: for example $(A,B)$ can be
implemented by the sequence $[A \rightarrow B], [B
\rightarrow A], [A \rightarrow B]$.)
A succession of such triples of transpositions will bring any desired
pair of adjoining qubits next to the tape head.
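On computational basis states the three-CNOT identity is easy to verify classically (a sketch; $[X \rightarrow Y]$ denotes a CNOT with control $X$ and target $Y$):
\begin{verbatim}
def cnot(ctrl, targ):
    return ctrl, ctrl ^ targ

def swap_via_cnots(a, b):
    a, b = cnot(a, b)      # [A -> B]
    b, a = cnot(b, a)      # [B -> A]
    a, b = cnot(a, b)      # [A -> B]
    return a, b

assert all(swap_via_cnots(a, b) == (b, a)
           for a in (0, 1) for b in (0, 1))
\end{verbatim}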
\subsection*{Cellular automaton with distinguished site: }
Lloyd\cite{Ll} has proposed implementing a quantum cellular automaton.
We propose an architecture similar to what we have described above,
but now we use five kinds of atoms: three $(A,B,C)$ have spin $1/2$
nuclei and two $(D,E)$ induce chemical shifts in resonant
frequencies of nearby atoms of the first three types. We assume that
$k | n$. The ring consists of repetitions of the pattern $A B C$;
after every $k$ atoms of type $A,C$, one atom of type $D$ adjoins
the chain and induces local chemical shifts. At one site an $E$ atom
adjoins the chain and induces chemical shifts, which are different
from those induced by $D$.
One step of the computation is implemented by a pulse at a frequency
that involves a $D$ atom and the two adjacent spin $1/2$ atoms;
rotations of the tape are implemented as above, small rotations
allow information to be sent between adjacent ``cells'' of the
cellular automaton, while global rotations bring the tape contents
past the $E$ site, where individual operations may be implemented.
\subsection*{Two-tape Turing machine: }
To implement a two-tape Turing machine we need to enable the head to
move independently on each of the tapes. Equivalently, in our
implementation, we need to have two cycles of bits, which can
independently be cyclically shifted past the head.
Let the molecule consist of $n$ repetitions of the sequence
$ABCD$. (As above, these are spin $1/2$ nuclei and each adjacent
type of pair can be addressed with distinctive frequencies.)
The $A$ and $C$ nuclei will carry one tape, the $B$ and $D$
nuclei the other. (Note that the nuclei of any given type carry a
contiguous segment of half a tape, not every other bit.)
The sequence of transpositions $(AB)(BC)(AB)(CD)(AD)(CD)$
rotates the $AC$-tape by one position, while leaving the $BD$-tape
fixed.
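This can be checked by a small classical simulation of the site-level permutation on a ring (a sketch): every $B$ and $D$ qubit is fixed, while each $A$ qubit advances, and each $C$ qubit recedes, by one unit cell, so the $AC$ contents move while the $BD$ contents stay put.
\begin{verbatim}
N = 4                                  # unit cells; ring sites A B C D ...
sites = [t + str(i) for i in range(N) for t in "ABCD"]

def transpose(s, X, Y):
    # swap qubits on every adjacent (X, Y) site pair of the ring
    out, L = s[:], len(s)
    for j in range(L):
        a, b = "ABCD"[j % 4], "ABCD"[((j + 1) % L) % 4]
        if {a, b} == {X, Y}:
            out[j], out[(j + 1) % L] = s[(j + 1) % L], s[j]
    return out

s = sites
for X, Y in ["AB", "BC", "AB", "CD", "AD", "CD"]:
    s = transpose(s, X, Y)

L = len(sites)
assert all(s[j] == sites[j] for j in range(L) if j % 4 in (1, 3))
assert all(s[(j + 4) % L] == sites[j] for j in range(L) if j % 4 == 0)
assert all(s[(j - 4) % L] == sites[j] for j in range(L) if j % 4 == 2)
\end{verbatim}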
The most time-consuming stages of our procedure are the initial
permutation of the bits and the final collecting of the clean bits,
each requiring time $O(n^2)$. In fact, these are the only stages which
require more than time $O(n^{4/3})$.
The terminal permutation is very simple; the initial permutation can
be very simple, as well, so long as we make the ``local correlations''
assumption on our initial $\epsilon$-biased distribution, in which case we
can use the permutation which sends bit $r n^{1/3} + s$ (for $0 \leq s
< n^{1/3}$) to position $(r+s)n^{1/3}+s$. In this case, the initial
permutation can be performed in time $O(n^{4/3})$, and the final
permutation in linear time, on the $2$-tape
architecture. Consequently, the entire procedure can be implemented in
time $O(n^{4/3})$. If we further augment our device by combining the
features of a $2$-tape machine with those of a cellular automaton,
with $k=n^{1/3}$, then the initial permutation can be performed in
linear time, and in phase 3 and the latter part of phase 2 we
can gain time by working in parallel within each $n^{1/3}$-size
block. The overall runtime reduces to linear.
Thus there is substantial benefit in implementing slightly
stronger primitives than the minimal list of operations (a)-(c).
\subsection*{Acknowledgments}
Thanks to Isaac Chuang and Richard Singerman for helpful discussions.
\section{Introduction}
\label{sec:introduction}
Interest in molecular electronics~\cite{Xiang2016} is stimulated by rapid technological advances that allow for isolation and manipulation of individual molecules to realize single-molecule junctions~\cite{Perrin2015Feb,Huang2015Feb} ---nanoscopic devices with tunable optical, mechanical and magnetic properties~\cite{Tao2006}.
One particularly prospective candidate for information storing and processing devices are molecules that exhibit large effective spin and magnetic anisotropy. The combination of these two quantities gives rise to magnetic bistability, which is a key prerequisite for a system to serve as a memory element~\cite{Bartolome_book}. Accordingly, the control of the magnetic anisotropy of molecules deposited in a junction is imperative for achieving functional devices.
So far, only a few schemes for modifying such magnetic anisotropy \emph{in situ} have been demonstrated experimentally in specific molecules. For instance, by means of electrical gating, dissimilar magnetic properties of different molecular charge states were utilized~\cite{Zyazin2010}, or, by mechanical straining of the junction, the ligand field in a molecule based on a single magnetic ion was locally altered~\cite{Parks2010}.
In addition, theoretical analysis predicts that also application of effective spintronic fields should be a feasible approach~\cite{Misiorny2013Dec}.
In this paper, we explore another possible way of engineering magnetic anisotropy in large-spin molecules which harnesses the coupling between spin and molecular vibrations without the application of external fields to the molecule.
Individual molecules inserted in junctions vibrate with discrete frequencies, and these quantized vibrations (so-called \emph{vibrons}) can couple to other molecular degrees of freedom, such as, charge and spin.
For example, the interaction between electronic charge and vibrations can lead to excitation of transitions between different molecular vibrational states, when an electron tunnels through a molecule. This effect has been experimentally observed in single-molecule junctions based on carbon derivatives, specifically carbon nanotubes and fullerenes~\cite{LeRoy2004,Pradhan2005,Pasupathy2005,Sapmaz2006,Leturcq2009,Benyamini2014}, and also in other single molecules~\cite{Stipe1998,Yu2002,Leon2008,Osorio2010,Franke2012}.
Moreover, if this \emph{charge-vibron} coupling is strong, it drastically impacts the transport properties of individual molecules, and at low bias-voltage it may even block transport of electrons ---an effect known as Franck Condon blockade~\cite{Koch2005,Koch2006Nov}. Recently, this effect has been experimentally and theoretically studied also in the context of magnetic molecules~\cite{Burzur2014,McCaskey2015}.
On the other hand, the primary interest in the coupling between \emph{vibrations} and \emph{spins} stems from its prominent role in the spin relaxation processes, which have been extensively studied for various systems, \emph{e.g.}\xspace, atomic spins in crystal solids~\cite{Orbach1961Dec,Stoneham1965} and other molecular systems~\cite{Villain1994,Leuenberger1999,Garanin1997,Chudnovsky2005,Park2008,Kokado2010Spin}.
However, only recently, the effect of \emph{spin-vibron} coupling on the properties of individual molecules captured in junctions has caught some attention~\cite{May2011,Ruiz2012}.
It has been suggested for sensing~\cite{Ohm2012,Palyi2012May} and cooling~\cite{Stadler2014,Stadler2015} applications in carbon nanotubes, and experimentally demonstrated to arise between a single molecular spin and a carbon nanotube~\cite{Ganzhorn2013}.
Here, we address the general question of how the interplay of the charge- and spin-vibron coupling in a single magnetic molecule affects its magnetic properties. While in this paper we deal with a general model that could be relevant for a large class of molecules, we would like to point out that the influence of (static) deformations on the magnetic anisotropy has recently been experimentally demonstrated in Co-based molecules~\cite{Parks2010}.
For the purpose of this paper, we consider a model device consisting of a spin-anisotropic molecule embedded in a molecular junction, where vibrations of the molecule couple to both, the charge of tunneling electrons and the resulting spin of the molecule.
To analyze the effect of vibrations on magnetic properties of the molecule, we derive an \textit{effective} giant-spin Hamiltonian exhibiting relevant corrections to the magnetic anisotropy constants due to the charge- and spin-vibron coupling. We show that such corrections significantly impact the spectral properties of the molecule, which, in turn, can have a profound effect on transport characteristics of the device.
In particular, we here analyze signatures in the differential conductance emerging from the modulation of the magnetic anisotropy of the molecule due to the interplay of charge- and spin-vibron couplings.
In order to calculate transport properties of the weakly coupled molecule, we use a master equation approach deriving from a real-time diagrammatic technique. An additional \textit{technical} achievement of this paper is the careful analysis of the regimes where coherent superpositions of molecular states do not affect the transport properties. We thereby validate the simpler master equation approach, where such superpositions are disregarded, for the situations studied here.
This paper is organized as follows: the model of a vibrating magnetic molecule captured in a three-terminal molecular junction is introduced in Sec.~\ref{sec:theory}, whereas the effective spin Hamiltonian including corrections to magnetic anisotropy constants due to the charge- and spin-vibron couplings is derived in Sec.~\ref{sec:states_and_effHams}.
Next, in Sec.~\ref{sec:spectrum} we discuss how these couplings affect spectral properties of the molecule.
Key transport characteristics of this system
are presented in Sec.~\ref{sec:transport}.
Finally, a summary of the main findings and conclusions are given in Sec.~\ref{sec:conclusions}. Appendix~\ref{app:coherences} contains an analysis of the role of coherent superpositions between molecular states for transport calculations.
\section{Model of a vibrating magnetic molecule in a magnetic junction}
\label{sec:theory}
In this section, we formulate the model for a magnetic molecule embedded in a junction, as depicted in Fig.~\ref{fig1}(a). The key features of such a model are captured by the general Hamiltonian
\begin{equation}\label{eq:H_tot}
\skew{3}{\hat}{\mathcal{H}}
=
\skew{3}{\hat}{\mathcal{H}}_\text{mol}
+
\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}
+
\skew{3}{\hat}{\mathcal{H}}_\text{jun}
.
\end{equation}
Importantly, the characteristics of a molecule are typically strongly impacted by its vibrational degrees of freedom. Nevertheless, when introducing the model, for conceptual clarity we formally split the part of the Hamiltonian corresponding to the molecule, \mbox{$\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}$}, into two parts:
(i) $\skew{3}{\hat}{\mathcal{H}}_\text{mol}$ describing the charge and spin properties of a static molecule (see Sec.~\ref{sec:model_mol}), and
(ii) $\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}$ including the effects associated with molecular vibrations (see Sec.~\ref{sec:model_vib}).
Finally, the last term of Eq.~(\ref{eq:H_tot}), $\skew{3}{\hat}{\mathcal{H}}_\text{jun}$, accounts for the bare magnetic junction as well as for tunneling of electrons between electrodes of the junction and the molecule (see Sec.~\ref{sec:model_tun}).
\begin{figure}[t!]
\includegraphics[width=0.85\columnwidth]{Fig1.pdf}
\caption{%
(a) Schematic illustration of a single magnetic molecule (represented as an effective spin~$\veco{S}_n$) embedded between two ---possibly magnetic--- electrodes, with collinear (parallel or \mbox{antiparallel}) configuration of their spin moments.
%
A gate electrode is used to tune the energy spectrum of the charged molecule.
%
(b)~Effect of magnetic anisotropy~on the spectrum of a model molecule with spins~\mbox{$S_0=1/2$} and \mbox{$S_1=1$}, given by spin states \mbox{$\ket{\psi_0}\in\big\{\ket{\pm 1/2}\big\}$} for the neutral state and~\mbox{$\ket{\psi_1}\in\big\{\ket{0},\ket{\pm1}\big\}$} for the charged state with uniaxial anisotropy only (\mbox{$E=0$}). In the presence of transverse anisotropy (\mbox{$E\neq0$}), we get
%
$
\ket{\psi_1}
\in
\big\{
\mbox{$
\ket{\chi^0_1}
\equiv
\ket{0}$},
\mbox{$\ket{\chi_1^\pm}$}
\equiv
\big(\mbox{$\ket{1}$}\pm\mbox{$\ket{-1}$}\big)/\sqrt{2}
\big\}
$.
%
For further explanation see Sec.~\ref{sec:model_mol}.
}
\label{fig1}
\end{figure}
\subsection{Magnetic molecule}
\label{sec:model_mol}
We consider a class of magnetic molecules whose static properties are determined by their charge and spin states. The associated energy is described by the Hamiltonian
\begin{equation}\label{eq:H_mol}
\skew{3}{\hat}{\mathcal{H}}_\text{mol}
=
\skew{3}{\hat}{\mathcal{H}}_\text{ch}
+
\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}
.
\end{equation}
The first term of the Hamiltonian above arises due to the capacitive coupling of the molecule to a gate voltage~$V_\text{g}$, which shifts the entire spectrum of the molecule by an energy~\mbox{$\propto eV_\text{g}$} depending on its charge. Specifically, we assume that only two charge states~$n$ of the molecule are energetically accessible: the \emph{neutral} state~(\mbox{$n=N$}) and the \emph{charged} state~(\mbox{$n=N+1$}). For notational brevity we henceforth set~$N$ to 0.
In principle, the occupation of many different molecular orbitals can lead to these two charge states; the occupation number operator of the molecule therefore reads as
\mbox{$
\skew{1}{\hat}{n}
\equiv
\sum_{l,\sigma}
\skew{2}{\hat}{d}_{l\sigma}^\dagger\skew{2}{\hat}{d}_{l\sigma}^{}
,
$}
with~$\skew{2}{\hat}{d}_{l\sigma}^\dagger\,(\skew{2}{\hat}{d}_{l\sigma}^{})$ standing for the operator creating (annihilating) a spin-$\sigma$ electron in the $l$th molecular
orbital.\footnote{%
Note that the operator~$\skew{1}{\hat}{n}$ is formally defined as the total electron number minus~$N$; that is, it counts only the number of excess electrons with respect to the neutral charge state.}
Consequently, the effect of capacitive coupling of the molecule to a gate electrode is simply given by~\mbox{$\skew{3}{\hat}{\mathcal{H}}_\text{ch}=\mathcal{E}(V_\text{g})\skew{1}{\hat}{n}$}, with a gate-voltage dependent energy~$\mathcal{E}$.
From the magnetic point of view, in each charge state~$n$ the molecule can be regarded as an effective ground-state \emph{molecular} spin~$\veco{S}_n$, whose intrinsic magnetic behavior is characterized by the giant-spin Hamiltonian~\cite{Kahn_book,Gatteschi_book},
\begin{equation}\label{eq:H_spin}
\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}
=
\sum_{n=0,1}
\!
\Big\{
\!
-D_n
\big(\skew{3}{\hat}{S}_n^z\big)^{\!2}
+
E_n
\Big[
\big(\skew{3}{\hat}{S}_n^x\big)^{\!2}
-
\big(\skew{3}{\hat}{S}_n^y\big)^{\!2}
\Big]
\Big\}
.
\end{equation}
In the equation above, the first term represents the \emph{uniaxial} component of the magnetic anisotropy, while the \emph{transverse} component is described by the second term. The relevant anisotropy constants are given by~$D_n$ and~$E_n$.
This magnetic anisotropy can, for instance, stem from a static deformation of the molecule due to the deposition into the junction.
In order to gain insight about the magnetic behavior of the static model molecule, it is instructive to analyze the eigenstates of the Hamiltonian as given in Eq.~(\ref{eq:H_mol}),
\mbox{$
\ket{\psi_n}
$,}
with
\mbox{$
\skew{3}{\hat}{\mathcal{H}}_\text{mol}\ket{\psi_n}
=
\mathcal{E}_{\psi_n}\ket{\psi_n}
$.}
In the situation when a molecule exhibits exclusively a uniaxial component of magnetic anisotropy (\mbox{$D_n\neq0$} and~\mbox{$E_n=0$}), the basis of eigenstates of the molecule is simply formed by the states~\mbox{$\big\{\ket{\psi_n}\equiv\ket{S_n,M_n}\big\}$} representing projections of the spin~$\veco{S}_n$ on the $z$-axis, that is,
\mbox{$
\skew{3}{\hat}{S}_n^z\ket{S_n,M_n}
=
M_n\ket{S_n,M_n}
$}.
Note that a~magnetic molecule in a given charge state can in general exhibit a~few spin multiplets (with different total spin~$S_n$). These spin multiplets are typically very well separated in energy, so that only states belonging to the ground spin multiplet are energetically accessible in the parameter regime under consideration. Therefore, in the following we often use a simplified notation replacing \mbox{$\ket{S_n,M_n}\rightarrow\ket{M_n}$}.
Now, if also the transverse component is present (\mbox{$D_n\neq0$} and~\mbox{$E_n\neq0$}), one finds that the eigenstates~$\big\{\ket{\psi_n}\big\}$ correspond to linear combinations of the spin projections along the~$z$-axis, given by
\mbox{$
\ket{\psi_n}
=
\sum_{M_n}
\mathcal{C}_{M_n}^{\psi_n}
\ket{M_n}
$},
where $\mathcal{C}_{M_n}^{\psi_n}$ are the expansion coefficients.
To~illustrate the effect of magnetic anisotropy on the energy spectrum of a molecule in a given vibrational state, in~Fig.~\ref{fig1}(b) we show the energy spectrum for a hypothetical molecule with~\mbox{$S_0=1/2$} and~\mbox{$S_1=1$}, additionally assuming that \mbox{$D_0=E_0=0$}, \mbox{$D_1\equiv D$} and~\mbox{$E_1\equiv E$}.
One can see that for uniaxial anisotropy, the eigenstates are conveniently labeled with~$M_n$, and states with equal $|M_n|$ are degenerate. However, for non-vanishing transverse anisotropy (\mbox{$E\neq0$}) the degeneracy of charged states,
$
\ket{\psi_1}
\in
\big\{
\mbox{$
\ket{\chi^0_1}
\equiv
\ket{0}$},
\ket{\chi_1^\pm}
\equiv
\big(\ket{1}\pm\ket{-1}\big)/\sqrt{2}
\big\}
$,
is lifted.
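This spectrum is straightforward to verify numerically. The short Python sketch below (our own illustration; the ratio $E/D$ is chosen merely for definiteness) diagonalizes the charged-state spin Hamiltonian for \mbox{$S_1=1$} and recovers the eigenstates $\ket{\chi_1^0}$ and $\ket{\chi_1^\pm}$ with energies $0$ and \mbox{$-D\pm E$}:
\begin{verbatim}
import numpy as np

# Spin-1 operators in the basis {|1>, |0>, |-1>}
s2 = 1.0 / np.sqrt(2.0)
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0])

D, E = 1.0, 0.15                     # anisotropy constants (units of D)
H = -D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy)
print(np.linalg.eigvalsh(H))         # [-D - E, -D + E, 0]
\end{verbatim}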
\subsection{Impact of molecular vibrations}
\label{sec:model_vib}
Importantly, a molecule embedded in a junction generally supports different vibrational modes. These vibrational modes are approximated as independent harmonic oscillators~\cite{Mahan_book} with angular frequencies~$\omega_q$,
\begin{equation}\label{eq:H_vib}
\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}
=
\sum\limits_{q=1}^{Q}
\hbar
\omega_q
\skew{1}{\hat}{b}_q^\dagger\skew{1}{\hat}{b}_q^{}
+
\skew{3}{\hat}{\mathcal{H}}_{\text{ch-vib}}
+
\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}} ,
\end{equation}
and they can in principle couple \emph{both} to the charge ($\skew{3}{\hat}{\mathcal{H}}_{\text{ch-vib}}$) and spin ($\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}$) degrees of freedom of the molecule.
The operator~$\skew{1}{\hat}{b}_q^\dagger$\,($\skew{1}{\hat}{b}_q^{}$) denotes the creation (annihilation) operator for the $q$th quantized vibrational mode, commonly referred to as a \emph{vibron}. We here assume the total number of vibrational modes to be $Q$.
In the absence of the coupling terms, $\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}$ and $\skew{3}{\hat}{\mathcal{H}}_{\text{ch-vib}}$, the vibronic contribution, $\ket{\vartheta}$, to the molecular eigenstates is given by
\mbox{$
\ketv{\vartheta}
\equiv
\ketv{n_\text{v}^1,\ldots,n_\text{v}^Q}
$}
with eigenenergies
\mbox{$
\mathcal{E}_\vartheta
=
\sum_{q=1}^Q
\hbar\omega_qn_\text{v}^q
$,}
where~$n_\text{v}^q$ is the occupation number of the $q$th vibrational mode.
The coupling of these vibrations to the electronic charge has been extensively studied~\cite{Mahan_book,Mitra2004,Koch2005,Koch2006Nov}, and is captured by the Hamiltonian
\begin{equation}\label{eq:H_ch-vib}
\skew{3}{\hat}{\mathcal{H}}_{\text{ch-vib}}
=
\sum\limits_{q=1}^{Q}
\lq
\hbar\omega_q
\big(\skew{1}{\hat}{b}_q^\dagger+\skew{1}{\hat}{b}_q^{}\big)
\skew{1}{\hat}{n}
,
\end{equation}
with the dimensionless coupling strength~$\lq$.
However, in a molecule in which deformations (for example, due to the embedding into the junction) influence its magnetic anisotropy~\cite{May2011,Ruiz2012}, small oscillations around the equilibrium position are expected to lead to interactions between molecular vibrations and the spin as well~\cite{Kokado2010Spin,Ruiz2012}. This is represented by the third term of the Hamiltonian~(\ref{eq:H_vib}),
\begin{equation}\label{eq:H_spin-vib}
\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}
=
\sum\limits_{n=0,1}
\sum\limits_{q=1}^{Q}
\hbar \omega_q
\,
\skew{2.5}{\hat}{\mathcal{S}}_{\!nq}
\big(\skew{1}{\hat}{b}_q^\dagger + \skew{1}{\hat}{b}_q^{}\big)
.
\end{equation}
Here, the operator~$\skew{2.5}{\hat}{\mathcal{S}}_{\!nq}$ reads as
\begin{equation}\label{eq:opSV_def}
\skew{2.5}{\hat}{\mathcal{S}}_{\!nq}
=
\Lambda^\text{u}_{nq}
\big(\skew{3}{\hat}{S}_n^z\big)^{\!2}
+
\Lambda^\text{t}_{nq}
\Big[
\big(\skew{3}{\hat}{S}_n^x\big)^2
-
\big(\skew{3}{\hat}{S}_n^y\big)^2
\Big]
,
\end{equation}
and~the dimensionless parameters~$\Lambda^\text{u}_{nq}$ and~$\Lambda^\text{t}_{nq}$ stand for the coupling strength of vibrations to the uniaxial and transverse components of the molecular spin, respectively.
In the following discussion,~$\omega_q$, $\lq$, $\Lambda^\text{u}_{nq}$ and~$\Lambda^\text{t}_{nq}$, as well as $D_n$ and $E_n$, are treated as tunable, continuous parameters. One way to tune the strength of the magnetic anisotropy of a molecule is \emph{via}\xspace stretching in a break-junction setup~\cite{May2011,Ruiz2012}. Note that in such a case also the vibration frequency and the strength of the coupling to the charge are tunable \emph{via}\xspace the junction properties~\cite{Xiang2016,Kim2011,Bruot2011}.
Finally, it should be mentioned that in general the operator~$\skew{2.5}{\hat}{\mathcal{S}}_{\!nq}$ can take a more complex form, determined by the symmetry properties of the molecular spin and vibrations depending on how the molecule is embedded in the junction. In other words, it is conditioned by how the coupling to the electrodes of the junction and the molecular vibrations affect the ligand field, generating thus additional contributions to the magnetic anisotropy of the molecule~\cite{Garanin1997,Ganzhorn2013}.
\subsection{Tunnel coupling to electrodes}
\label{sec:model_tun}
The embedding of the molecule into an electronic junction enables electron tunneling processes between junction and molecule, which thereby change the charge- and spin-state of the molecule.
Within the model under consideration, the electrodes of the tunnel junction are represented as two reservoirs of non-interacting electrons as described by the first term of the Hamiltonian
\begin{equation}\label{eq:H_tun}
\skew{3}{\hat}{\mathcal{H}}_\text{jun}
=
\sum_{rk\sigma}
\varepsilon_{k\sigma}^r
\skew{1}{\hat}{a}_{k\sigma}^{r\dagger}
\skew{1}{\hat}{a}_{k\sigma}^{r\phantom{\dagger}}
+
\sum_{rlk\sigma}\left(
t_l^r
\skew{2}{\hat}{d}_{l\sigma}^\dagger
\skew{1}{\hat}{a}_{k\sigma}^{r\phantom{\dagger}}
+
\text{H.c.}\right)
.
\end{equation}
The operator~$\skew{1}{\hat}{a}_{k\sigma}^{r\dagger}$\,($\skew{1}{\hat}{a}_{k\sigma}^{r\phantom{\dagger}}$) is responsible for creation (an\-ni\-hilation) of an electron with energy~$\varepsilon_{k\sigma}^r$ in~drain (\mbox{$r=\text{D}$}) and source (\mbox{$r=\text{S}$}) electrodes, with~$k$ and~$\sigma$ denoting the orbital and spin quantum numbers, respectively.
Furthermore, the electronic occupation of the electrodes is governed by Fermi functions, \mbox{$f_r(\epsilon)\!=\!\big\{1+\exp[(\epsilon-\mu_r)/(k_\text{B} T)]\big\}^{-1}$}, with temperature~$T$ and a possible bias (transport) voltage~$V_\text{b}$ given by the difference of electrochemical potentials of the electrodes, \mbox{$V_\text{b}=(\mu_\text{S}-\mu_\text{D})/e$}.
Next, electron tunneling processes between electrodes and the molecule are included in the second term of Eq.~(\ref{eq:H_tun}), where~$t_l^r$ is the (spin-independent) tunneling matrix element between the~$l$th molecular orbital and the $r$th electrode.
A very convenient basis for studying transport of electrons is the basis of molecular states~\mbox{$\big\{\ket{\psi_n}\!\otimes\!\ketv{\vartheta}\big\}$}. The tunneling Hamiltonian [that is, the second term of Eq.~(\ref{eq:H_tun})] expanded in this basis takes the form~\cite{Misiorny2015}
\begin{equation}\label{eq:H_tun_expand}
\skew{3}{\hat}{\mathcal{H}}_\text{tun}
=
\sum_{rk\sigma}
\sum_{\psi_0\psi_1\vartheta}
\!\!
\mathbb{T}_r
\mathcal{T}^\sigma_{\psi_1 \psi_0}
\ket{\psi_1}\bra{\psi_0}
\otimes
{\ket{\vartheta}\bra{\vartheta}}
\,
\skew{1}{\hat}{a}_{k\sigma}^{r\phantom{\dagger}}
+
\text{H.c.}
\end{equation}
In the equation above, we split the tunneling amplitude into two factors: one quantifying the orbital overlap of the molecular states ($\mathbb{T}_r$), and the other imposing spin selection rules for transitions between molecular states ($\mathcal{T}^\sigma_{\psi_1 \psi_0}$).
The former is given by
\mbox{$
\mathbb{T}_r
=
\sum_l
t^r_l
\bra{S_1}| \skew{2}{\hat}{d}_{l}^\dagger| \ket{S_0}
$},
with \mbox{$\bra{S_1}|\skew{2}{\hat}{d}_{l}^\dagger|\ket{S_0}$} denoting the so-called \emph{reduced matrix element}, which occurs here due to application of the Wigner-Eckart theorem~\cite{Sakurai_book}.
The explicit form of the latter is
\begin{equation}\label{eq:tunT_def}
\mathcal{T}^\sigma_{\psi_1 \psi_0}
=
\sum_{M_0 M_1}
\!\!
\big(\mathcal{C}_{M_1}^{\psi_1}\big)^{\!\ast}
\mathcal{C}_{M_0}^{\psi_0}
\,
\big\langle S_0,M_0;\tfrac{1}{2},\sigma\big|S_1, M_1\big\rangle
\end{equation}
with \mbox{$\big\langle S_0,M_0;\tfrac{1}{2},\sigma\big|S_1, M_1\big\rangle$} standing for the Clebsch-Gordan coefficient.
Moreover, $\mathbb{T}_r$ is treated here as a free parameter. It enters the spin-de\-pend\-ent broadening~$\Gamma_\sigma^r$ of molecular levels, \mbox{$\Gamma_\sigma^r=2\pi\nu_\sigma^r|\mathbb{T}_r|^2$}, which arises as a result of tunneling of electrons between a molecule and the $r$th electrode. The coefficient~$\nu_\sigma^r$ stands for the spin-resolved density of states (DOS) in the $r$th electrode in a flat-band approximation [namely, the DOS is assumed to be energy-independent, $\nu_\sigma^r(\varepsilon)\approx\nu_\sigma^r$].
In the following, we allow the electrodes to be spin-polarized.
Note that only a collinear relative orientation of the spin moments in the electrodes \mbox{---that} is, the \emph{parallel} and \emph{antiparallel} magnetic configuration, as shown in Fig.~\ref{fig1}(a)--- is considered, and we take these spin moments also to be collinear with the principal ($z$)~axis of the molecule.
To quantify the~magnetic properties of the electrodes we introduce the spin-polarization coefficient~$P_r$ defined in terms of the DOS of spin-majority (\mbox{-mi}\-nor\-i\-ty) electrons, \mbox{$\nu_{+(-)}^r$}, as
\mbox{$
P_r
=
(\nu_+^r-\nu_-^r)
/
(\nu_+^r+\nu_-^r)
$}.
For equal spin-polarizations of the two electrodes (\mbox{$P_\text{S}=P_\text{D}\equiv P$}) and for symmetric tunnel-coupling (\mbox{$\mathbb{T}_\text{S}=\mathbb{T}_\text{D}\equiv\mathbb{T}$}), assumed henceforth, we can parametrize~$\Gamma_\sigma^r$ in terms of the spin-polarization coefficient~$P$ and the total broadening~\mbox{$\Gamma\equiv\Gamma^r=\Gamma_\uparrow^r+\Gamma_\downarrow^r$} as follows:
\mbox{$\Gamma_{\uparrow(\downarrow)}^\text{S}=\Gamma_{\uparrow(\downarrow)}^\text{D}=(\Gamma/2)(1\pm P)$} for the parallel magnetic configuration, and \mbox{$\Gamma_{\uparrow(\downarrow)}^\text{S}=\Gamma_{\downarrow(\uparrow)}^\text{D}=(\Gamma/2)(1\pm P)$} for the antiparallel one.
\section{Effective Hamiltonians}
\label{sec:states_and_effHams}
Due to the coupling between vibrations and the molecule's charge and spin degrees of freedom, see Eqs.~(\ref{eq:H_ch-vib})-(\ref{eq:H_spin-vib}), the molecular states~\mbox{$\big\{\ket{\psi_n}\!\otimes\!\ketv{\vartheta}\big\}$} are not eigenstates of the Hamiltonian $\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}$ any longer.
The purpose of this section is to eliminate the charge-vibron and spin-vibron couplings from the Hamiltonian~$\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}$ by application of appropriate canonical transformations,
\begin{equation}\label{eq:Canom_Transform}
\big(
\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}
\big)^{\!\prime}
=
\text{e}^{\hat{\mathcal{A}}_\text{s}}
\text{e}^{\hat{\mathcal{A}}_\text{c}}
\big(
\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}
\big)
\text{e}^{-\hat{\mathcal{A}}_\text{c}}
\text{e}^{-\hat{\mathcal{A}}_\text{s}}
.
\end{equation}
The aim of this transformation is that the new effective Hamiltonian~\mbox{$\big(\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}\big)^{\!\prime}$} \mbox{---with} renormalized \mbox{parameters---} becomes diagonal in the basis~\mbox{$\big\{\ket{\psi_n}\otimes\ketv{\vartheta}\big\}$}. Particularly, the transformation kernels~$\hat{\mathcal{A}}_\text{c}$ and~$\hat{\mathcal{A}}_\text{s}$ allow for elimination of the charge-vibron~($\skew{3}{\hat}{\mathcal{H}}_{\text{ch-vib}}$) and spin-vibron~($\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}$) interactions, respectively.
\subsection{Charge-vibron coupling in the absence of spin-vibron coupling}
\label{sec:chargevibron}
The former kernel, first introduced by Lang and Firsov~\cite{Lang1963}, is known to have the form
\mbox{$
\hat{\mathcal{A}}_\text{c}
=
\sum_{q=1}^Q
\lq
\big(\skew{1}{\hat}{b}_q^\dagger-\skew{1}{\hat}{b}_q^{\phantom{\dagger}}\big)\skew{1}{\hat}{n}
$,}
and it has proven very useful for studying the Franck-Condon phenomena in transport through single-molecule devices~\cite{Mitra2004,Koch2005,Koch2006Nov,McCaskey2015}. The Lang-Firsov transformation decouples the charge and vibronic operators, leading at the same time to an energy shift of the charged state,
\mbox{$
\mathcal{E}(V_\text{g})
\mapsto
\mathcal{E}(V_\text{g})
-
\sum_{q=1}^Q
\hbar\omega_q\lq^2
$.}
Importantly, at the same time also the tunneling Hamiltonian~(\ref{eq:H_tun_expand}) gets modified
\begin{multline}\label{eq:H_tun_expand_LF}
\text{e}^{\hat{\mathcal{A}}_\text{c}}
\skew{3}{\hat}{\mathcal{H}}_\text{tun}
\text{e}^{-\hat{\mathcal{A}}_\text{c}}
=
\mathbb{T}
\sum_{rk\sigma}
\sum_{\psi_0\psi_1}
\sum_{\vartheta\vartheta'}
\mathcal{T}^\sigma_{\psi_1 \psi_0}
\mathcal{J}_{\vartheta'\vartheta}^{}
\\
\ket{\psi_1}\bra{\psi_0}
\otimes
{\ket{\vartheta'}\bra{\vartheta}}
\,
\skew{1}{\hat}{a}_{k\sigma}^{r\phantom{\dagger}}
+
\text{H.c.}
\end{multline}
Note that in this transformed tunneling Hamiltonian the number of vibrational excitations is not conserved anymore.
The new coefficient~$\mathcal{J}_{\vartheta'\vartheta}$ is the so-called Franck-Condon matrix element~\cite{Koch2006Nov,Seldenthuis2008,Cuevas-Scheer_book},
\begin{equation}\label{eq:FCcoef_def}
\mathcal{J}_{\vartheta'\vartheta}
=
\brav{\vartheta'}
\exp\Big[
\sum_{q=1}^Q
\lq\big(\skew{1}{\hat}{b}_q^\dagger-\skew{1}{\hat}{b}_q^{}\big)
\Big]
\ketv{\vartheta}
.
\end{equation}
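For a single mode, these matrix elements can be obtained numerically by exponentiating the (truncated) displacement generator. A minimal Python sketch (our own illustration; the Fock-space cutoff is an assumption) reproduces, for instance, \mbox{$\mathcal{J}_{00}=\text{e}^{-\lambda^2/2}$}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam, Ncut = 1.5, 40
b = np.diag(np.sqrt(np.arange(1.0, Ncut)), k=1)   # annihilation operator
J = expm(lam * (b.T - b))          # J[m, n] = <m| exp[lam (b+ - b)] |n>
print(J[0, 0], np.exp(-lam**2 / 2))               # J_00 = exp(-lam^2/2)
print(J[1, 0], lam * np.exp(-lam**2 / 2))         # J_10 = lam exp(-lam^2/2)
\end{verbatim}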
\subsection{Spin-vibron coupling}
\label{sec:spinvibron}
In the presence of spin-vibron interaction, Eq.~(\ref{eq:H_spin-vib}), the Lang-Firsov transformation generates an additional term in the molecular Hamiltonian,
\begin{equation}
\hspace*{-4pt}
\text{e}^{\hat{\mathcal{A}}_\text{c}}
\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}
\text{e}^{-\hat{\mathcal{A}}_\text{c}}
\!
=
\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}
-
2
\!
\sum\limits_{q=1}^{Q}
\lq
\hbar \omega_q
\skew{2.5}{\hat}{\mathcal{S}}_{\!1q}
\skew{1}{\hat}{n}
.
\!
\end{equation}
Notably, this term does not couple spin and vibrational degrees of freedom of the molecule, but rather represents a correction to the magnetic anisotropy of the molecule in the charged state.
As a next step, we derive the kernel~$\hat{\mathcal{A}}_\text{s}$ of the canonical transformation~(\ref{eq:Canom_Transform}), which can remove the spin-vibron interaction leading to an effective molecular Hamiltonian with renormalized magnetic-anisotropy parameters. The following discussion is divided into two parts: first, we consider molecules with uniaxial anisotropy only (that is, with \mbox{$E_n=0$} and \mbox{$\Lambda^\text{t}_{nq}=0$}), and second, we cover the more general case of molecules exhibiting both uniaxial and transverse anisotropy.
\subsubsection{Molecules with purely uniaxial magnetic anisotropy}
\label{sec:EffH_uni}
For this first case, we set \mbox{$E_n=0$} in Eq.~(\ref{eq:H_spin}) and \mbox{$\Lambda^\text{t}_{nq}=0$} in Eq.~(\ref{eq:opSV_def}).
In order to derive the transformation kernel~$\hat{\mathcal{A}}_\text{s}$, we apply the procedure described in Ref.~\cite{Wagner_book}, projecting the spin-vibron interaction term~$\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}$, Eq.~(\ref{eq:H_spin-vib}), on the states \mbox{$\big\{\ket{M_n}\otimes\ketv{\vartheta}\big\}$}, which are the eigenstates of the~Hamiltonian
\mbox{$
\skew{3}{\hat}{\mathcal{H}}_0
=
\text{e}^{\hat{\mathcal{A}}_\text{c}}
\big(
\skew{3}{\hat}{\mathcal{H}}_\text{mol} +\sum
\hbar
\omega_q
\skew{1}{\hat}{b}_q^\dagger\skew{1}{\hat}{b}_q^{}
+ \skew{3}{\hat}{\mathcal{H}}_{\text{ch-vib}}
\big)
\text{e}^{-\hat{\mathcal{A}}_\text{c}}
$.}
With this we find
\begin{equation}\label{eq:opAs_1_def}
\hat{\mathcal{A}}_\text{s}
=
\sum_{n=0,1}
\sum\limits_{q=1}^Q
\Lambda^\text{u}_{nq}
\big(\skew{3}{\hat}{S}_n^z\big)^2
\big(\skew{1}{\hat}{b}_q^\dagger - \skew{1}{\hat}{b}_q^{}\big)
.
\end{equation}
This expression agrees with that used by Ruiz-Tijerina~\emph{et al.}\xspace~\cite{Ruiz2012}, who studied the effect of magnetic anisotropy dynamically induced by mechanical stretching of a molecule on transport in the Kondo regime.
Next, inserting the operator~(\ref{eq:opAs_1_def}) into Eq.~(\ref{eq:Canom_Transform}), we obtain the effective (renormalized) Hamiltonian of the molecule with vibrations
$ \skew{3}{\hat}{\mathcal{H}}_\text{ch}^\prime
+
\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}^\prime
+
\sum
\hbar
\omega_q
\skew{1}{\hat}{b}_q^\dagger\skew{1}{\hat}{b}_q^{}
.
$
Here, the charge part of the molecular Hamiltonian is given by
\begin{equation}\label{eq:H_ch_1}
\skew{3}{\hat}{\mathcal{H}}_\text{ch}^\prime
=
\Big[
\mathcal{E}(V_\text{g})
-
\sum_{q=1}^Q
\hbar\omega_q\lq^2
\Big]\skew{1}{\hat}{n}
,
\end{equation}
with the energy shift caused by the charge-vibron interaction, as mentioned above.
Importantly, the spin-vibron coupling is eliminated at the expense of modifying the \emph{magnetic} properties of the molecule, and the spin term~$\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}^\prime$ is written as
\begin{equation}\label{eq:H_spin_1}
\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}^\prime
=
-
\sum_{n=0,1}
\!
\Big[
\big(D_n+\delta D_n^{(2)}\big)
\big(\skew{3}{\hat}{S}_n^z\big)^{\!2}
+
\delta D_n^{(4)}
\big(\skew{3}{\hat}{S}_n^z\big)^{\!4}
\Big]
.
\end{equation}
The anisotropy is affected in two ways: First, the uniaxial anisotropy constant~$D_n$ in Eq.~(\ref{eq:H_spin}) is renormalized as \mbox{$D_n\mapsto D_n+\delta D_n^{(2)}$}, with
\begin{equation}\label{eq:Dnt_def}
\delta D_n^{(2)}
=
2\delta_{n1}\sum_{q=1}^Q
\lq
\Lambda_{1q}^\text{u}
\hbar\omega_q
.
\end{equation}
Second, a new component representing a fourth-order-in-spin contribution to the uniaxial magnetic anisotropy \big[\mbox{$\propto(\skew{3}{\hat}{S}_n^z)^4$}\big] appears in Eq.~(\ref{eq:H_spin_1}), with the aniso\-tropy constant~$\delta D_n^{(4)}$ taking the form
\begin{equation}\label{eq:Dnf_def}
\delta D_n^{(4)}
=
\sum_{q=1}^Q
\big(\Lambda^\text{u}_{nq}\big)^{\!2}
\hbar\omega_q
.
\end{equation}
The result of Eqs.~(\ref{eq:H_ch_1})-(\ref{eq:Dnf_def}) is an effective molecular Hamiltonian, which is diagonal in the basis of product states~\mbox{$\big\{\ket{M_n}\otimes\ketv{\vartheta}\big\}$}. Note that the transformation with the operator $\hat{\mathcal{A}}_\text{s}$ does not further affect the tunneling Hamiltonian given in Eq.~(\ref{eq:H_tun_expand_LF}).%
\footnote{%
Note that this comes as a consequence of the present approximation that the effective molecular spin in Eq.~(\ref{eq:H_spin}) arises as a result of stabilization of a large atomic spin in the presence of the crystal/ligand field. However, in the case when the effective spin can be derived from a microscopic model of interacting electrons in different molecular orbitals, one generally expects that the transformation with the operator~$\hat{\mathcal{A}}_\text{s}$ can lead to occurrence of new effective tunneling matrix elements that depend on the magnetic states of the molecule, as shown in Ref.~\cite{Ruiz2012}.%
}
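The corrections of Eqs.~(\ref{eq:Dnt_def})-(\ref{eq:Dnf_def}) can be checked against exact diagonalization: for purely uniaxial coupling, $\big(\skew{3}{\hat}{S}_n^z\big)^2$ is conserved, and each spin projection simply sees a displaced harmonic oscillator. A minimal Python sketch (our own illustration with arbitrary parameter values) for the charged state with \mbox{$S_1=1$} and a single mode:
\begin{verbatim}
import numpy as np

hw, D, lam, Lu, Ncut = 4.0, 1.0, 1.5, 0.05, 60    # energies in units of D
b = np.diag(np.sqrt(np.arange(1.0, Ncut)), k=1)
num, x, I = b.T @ b, b.T + b, np.eye(Ncut)

def ground(M2):   # lowest energy in the sector with fixed (S^z)^2 = M2
    HM = -D * M2 * I + hw * (num + (lam + Lu * M2) * x)
    return np.linalg.eigvalsh(HM)[0]

gap = ground(1) - ground(0)
pred = -(D + 2 * lam * Lu * hw + Lu**2 * hw)      # -(D + dD2 + dD4)
print(gap, pred)                                  # coincide (up to truncation)
\end{verbatim}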
\newpage
\subsubsection{Molecules with uniaxial\\ and transverse magnetic anisotropy}
\label{sec:EffH_uni_and_trans}
The situation becomes more complicated for a molecule with an additional non-vanishing transverse component of magnetic anisotropy~(\mbox{$E_n\neq0$}).
In general, there exists no generic canonical transformation that would allow for \emph{exact} elimination of the spin-vibron coupling.
The reason is that Hamiltonians~$\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}$ and~$\skew{3}{\hat}{\mathcal{H}}_0$ do not share the same basis of eigenstates, that is,~\mbox{$\big[\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}},\skew{3}{\hat}{\mathcal{H}}_0\big]\neq0$}, and, hence, the full molecular Hamiltonian~\mbox{$\skew{3}{\hat}{\mathcal{H}}_\text{mol}+\skew{3}{\hat}{\mathcal{H}}_{\text{vib}}$} [see Eq.~(\ref{eq:H_mol}) and Eq.~(\ref{eq:H_vib})] cannot be diagonal with respect to both~$ \hat{\mathcal{H}}_0 $ and~$ \skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}} $ simultaneously.
Nonetheless, there are two particular cases for which commutation of~$\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}}$ and~$\skew{3}{\hat}{\mathcal{H}}_0$ can be restored so that they can be diagonalized in the basis~\mbox{$\big\{\ket{\psi_n}\otimes\ketv{\vartheta}\big\}$}:
the first one resorts to a specific constraint of parameters (namely, if
\mbox{$
D_n\Lambda^\text{t}_{nq}
=
-{E_n}\Lambda^\text{u}_{nq}
$}),
while the second one exploits the fact that ---independently of the anisotropy parameters--- ~\mbox{$\big[\skew{3}{\hat}{\mathcal{H}}_{\text{spin-vib}},\skew{3}{\hat}{\mathcal{H}}_0\big]=0$} for a molecular spin~\mbox{$S_n\leqslant1$}. The key advantage in the latter case is that, though not applicable to large-spin molecules (\emph{i.e.}\xspace, with \mbox{$S_n>1$}), this solution does not involve any additional restrictions regarding the properties of the molecule.
In either of these cases, the same method as in Sec.~\ref{sec:EffH_uni} can be used and we obtain
\begin{equation}\label{eq:opAs_2_def}
\hat{\mathcal{A}}_\text{s}
=
\sum_{n=0,1}
\sum\limits_{q=1}^Q
\skew{2.5}{\hat}{\mathcal{S}}_{\!nq}
\big(\skew{1}{\hat}{b}_q^\dagger - \skew{1}{\hat}{b}_q^{}\big)
.
\end{equation}
The effective giant-spin Hamiltonian now reads as
\begin{align}\label{eq:H_spin_2}
\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}^\prime
=
\sum_{n=0,1}
\!
\Big[
&
\!
-\big(D_n+\delta D_n^{(2)}\big)
\big(\skew{3}{\hat}{S}_n^z\big)^{\!2}
-\delta D_n^{(4)}
\big(\skew{3}{\hat}{S}_n^z\big)^{\!4}
\nonumber\\[-7pt]
&\!
+
\big(E_n+\delta E_n^{(2)}\big)
\Big[
\big(\skew{3}{\hat}{S}_n^x\big)^{\!2}
-
\big(\skew{3}{\hat}{S}_n^y\big)^{\!2}
\Big]
\nonumber\\
&\!
+
\delta E_n^{(4)}
\Big[
\big(\skew{3}{\hat}{S}_n^x\big)^{\!2}
-
\big(\skew{3}{\hat}{S}_n^y\big)^{\!2}
\Big]^2
\nonumber\\
&\!
+
\delta C_n^{(4)}
\Big\{
\big(\skew{3}{\hat}{S}_n^z\big)^{\!2}
,
\big(\skew{3}{\hat}{S}_n^x\big)^{\!2}
-
\big(\skew{3}{\hat}{S}_n^y\big)^{\!2}
\Big\}
\Big]
,
\end{align}
where~$\{\bullet,\bullet\}$ in the last line denotes the anticommutator. The corrections~$\delta D_n^{(2)}$ and~$\delta D_n^{(4)}$ are given by Eq.~(\ref{eq:Dnt_def}) and Eq.~(\ref{eq:Dnf_def}), respectively, while the remaining corrections are found to be
\begin{gather}\label{eq:Ent_def}
\delta E_n^{(2)}
=
-
2\delta_{n1}\sum_{q=1}^Q
\lq
\Lambda_{1q}^\text{t}
\hbar\omega_q
,
\\\label{eq:Enf_def}
\delta E_n^{(4)}
=
-
\sum_{q=1}^Q
\big(\Lambda^\text{t}_{nq}\big)^{\!2}
\hbar\omega_q
,
\\\label{eq:Cnf_def}
\delta C_n^{(4)}
=
-
\sum_{q=1}^Q
\Lambda^\text{u}_{nq}\Lambda^\text{t}_{nq}
\hbar\omega_q
.
\end{gather}
This means that, in addition to the renormalization of the \emph{strength} of the uniaxial and transverse anisotropy, in general an additional \emph{type} of anisotropy is introduced by the combined uniaxial and transverse spin-vibron coupling.
Consequently, the coupling of vibrations to the charge and spin of a molecule modifies its energy spectrum in various ways.
In the remainder of this paper, we consider these effects for different example molecules and study both the explicit impact on the energy spectra, Sec.~\ref{sec:spectrum}, as well as the resulting features expected to appear in the tunneling current through these molecules when embedded into a junction, Sec.~\ref{sec:transport}.
\section{Impact on spectral properties}
\label{sec:spectrum}
The first, obvious impact of vibrations on the spectrum of a molecule manifests as a repetition of the magnetic spectrum of the static molecule at energies corresponding to multiples of the energies~$\hbar \omega_q$ of the vibrational modes~\mbox{$q=1,\dots,Q$}.
This indeed plays a role in transport properties, as will be studied in detail in Sec.~\ref{sec:transport}, where transitions between states with different vibronic occupations occur.
In the present section, we concentrate on the \textit{nontrivial} impact of vibrations \mbox{---resulting} from the \emph{coupling} between vibrations and the charge and spin of the {molecule---} on the magnetic component of the molecular spectrum.
Since this part of the spectrum becomes modified identically in all vibrational states, below we simply focus on the vibrational ground state (with \mbox{$n_\text{v}^q=0$} for all~$q$).
\subsection{Interplay of magnetic anisotropy and vibrations}
\label{sec:E_D_spectrum}
In this subsection, employing the example molecule introduced in Sec.~\ref{sec:model_mol} with the ``static'' energy spectrum shown in Fig.~\ref{fig1}(b), we will illustrate how vibrations affect the magnetic spectrum of a molecule. To begin with, recall that in the neutral state this model molecule is characterized by a spin~\mbox{$S_0=1/2$}, corresponding to a spin doublet, \mbox{$\ket{\chi_0^\pm}\equiv\ket{\pm1/2}$}.
From Eqs.~(\ref{eq:H_ch_1})-(\ref{eq:Dnf_def}) and Eqs.~(\ref{eq:H_spin_2})-(\ref{eq:Cnf_def}), one finds that the spin-vibron interaction only results in an energy shift \mbox{$\Delta_0=-\delta D_n^{(4)}/16$}.
The situation is different in the charged state, characterized by a spin~\mbox{$S_1=1$}, in which the magnetic state of a molecule is the spin triplet:
\mbox{
$
\ket{\chi_1^0}
=
\ket{0}
$%
}
and
$
\ket{\chi_1^\pm}
=
\big(
\mbox{$\ket{1}$}
\pm
\mbox{$\ket{-1}$}
\big)
/
\sqrt{2}
.
$
In such a case,~we can simplify the effective spin Hamiltonian~$\skew{3}{\hat}{\mathcal{H}}_{\text{spin}}^\prime$, Eq.~(\ref{eq:H_spin_2}), to
\begin{equation}\label{eq:H_spin_eff}
\skew{3}{\hat}{\mathcal{H}}_{\text{spin},n=1}^\prime
=
-D_\text{eff}
\big(\skew{3}{\hat}{S}_1^z\big)^{\!2}
+
E_\text{eff}
\Big[
\big(\skew{3}{\hat}{S}_1^x\big)^{\!2}
-
\big(\skew{3}{\hat}{S}_1^y\big)^{\!2}
\Big]
,
\end{equation}
where $D_\text{eff}=D+\Delta D$ and $E_\text{eff}=E+\Delta E$ with
\begin{gather}\label{eq:DeltaD_def}
\Delta D
=
\delta D_1^{(2)}
+
\delta D_1^{(4)}
-
\delta E_1^{(4)}
,
\\\label{eq:DeltaE_def}
\Delta E
=
\delta E_1^{(2)}
+
2\delta C_1^{(4)}
.
\end{gather}
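These combined corrections can likewise be verified by exact diagonalization, since for \mbox{$S_1=1$} the operators $\big(\skew{3}{\hat}{S}_1^z\big)^2$ and $\big(\skew{3}{\hat}{S}_1^x\big)^2-\big(\skew{3}{\hat}{S}_1^y\big)^2$ commute. A minimal Python sketch (our own illustration; parameter values are arbitrary) compares the two lowest levels of the full charged-state Hamiltonian, after subtracting the polaronic shift $-\hbar\omega\lambda^2$, with \mbox{$-D_\text{eff}-E_\text{eff}$} and \mbox{$-D_\text{eff}+E_\text{eff}$}:
\begin{verbatim}
import numpy as np

hw, D, E = 4.0, 1.0, 0.15                    # energies in units of D
lam, Lu, Lt, Ncut = 1.5, 0.05, 0.0075, 60
b = np.diag(np.sqrt(np.arange(1.0, Ncut)), k=1)
x, num, I = b.T + b, b.T @ b, np.eye(Ncut)
Sz2 = np.diag([1.0, 0.0, 1.0])               # (S^z)^2, basis {|1>,|0>,|-1>}
Sxy = np.array([[0.0, 0, 1], [0, 0, 0], [1, 0, 0]])   # (S^x)^2 - (S^y)^2
H = (np.kron(-D * Sz2 + E * Sxy, I)
     + np.kron(np.eye(3), hw * (num + lam * x))
     + hw * np.kron(Lu * Sz2 + Lt * Sxy, x))
lo = np.linalg.eigvalsh(H)[:2] + hw * lam**2          # drop polaronic shift
Deff = D + 2*lam*Lu*hw + (Lu**2 + Lt**2) * hw         # D + Delta D
Eeff = E - 2*Lt*hw*(lam + Lu)                         # E + Delta E
print(lo, [-Deff - Eeff, -Deff + Eeff])
\end{verbatim}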
We recall that due to the capacitive coupling of the molecule to a gate electrode, the relative position of the neutral doublet and the charged triplet can be continuously adjusted by application of the gate voltage~$V_\text{g}$. For instance, it allows for compensating the shift~$\Delta_0$. This shift will therefore be omitted from now on.
To further discuss the impact on the spectrum, we assume for simplicity that only one vibrational mode of energy~$\hbar\omega$ is involved in the coupling (we hence omit the vibrational mode index `$q$'). In this example, we also take the anisotropy constants~$D$ and~$E$, as well as all coupling parameters to be positive; the case of \mbox{$D<0$} is analyzed in Sec.~\ref{sec:barrier_flip}.
The corrections to the magnetic anisotropy, Eqs.~(\ref{eq:DeltaD_def})-(\ref{eq:DeltaE_def}), take then the explicit form,
\begin{gather}\label{eq:dD_magnitude}
\frac{\Delta D}{\lambda\Lambda^\text{u}_1\hbar\omega}
=
2
+
\frac{\Lambda^\text{u}_1}{\lambda}
\big(1+\zeta^2\big)
,
\\\label{eq:dE_magnitude}
\frac{\Delta E}{\lambda\Lambda^\text{u}_1\hbar\omega}
=
-
2\zeta
\bigg[
1
+
\frac{\Lambda^\text{u}_1}{\lambda}
\bigg]
,
\end{gather}
where we introduce the coefficient~\mbox{$\zeta=\Lambda^\text{t}_1/\Lambda^\text{u}_1$}.
Let us make an estimate of the relevance of these corrections with respect to the original anisotropy parameters~$D$ and~$E$. Both corrections depend linearly on the charge-vibron coupling strength~$\lambda$ and the energy of the vibrational mode~$\hbar\omega$.
In general, one expects that the charge-vibron interaction dominates over the spin-vibron coupling, that is, \mbox{$\Lambda^\text{u}_1/\lambda\ll1$}. In this case, we can approximate \mbox{$\Delta D\approx2\lambda\Lambda^\text{u}_1\hbar\omega$} and \mbox{$\Delta E\approx-2\lambda\Lambda^\text{t}_1\hbar\omega$}.
Since the energy of the vibrational mode~$\hbar\omega$ can be significantly larger than the magnetic anisotropy~$D$, \mbox{$\hbar\omega\gg D$}~\cite{Burzur2014,McCaskey2015}, we conclude that even if the charge- and spin-vibron couplings are not particularly strong (\mbox{$\lambda\lesssim1$} and \mbox{$\Lambda^\text{u}_1/\lambda\ll1$}), the shift~$\Delta D$ can still achieve appreciable values compared to $D$ (and equivalently for~$\Delta E$ and $E$).
\begin{figure}[t!]
\includegraphics[width=0.99\columnwidth]{Fig2.pdf}
\caption{%
Effect of the charge- and spin-vibron coupling for~\mbox{$\zeta\equiv\Lambda^\text{t}_1/\Lambda^\text{u}_1<1$}, fixed $\lambda$ and a single vibrational mode of energy~$\hbar\omega$ illustrated for continuously changing values of the spin-vibron coupling~$\Lambda^\text{u}_1$.
%
At the \emph{critical} spin-vibron coupling~$\Lambda^\text{u}_{1,\text{crit}}$ the effective transverse magnetic anisotropy becomes suppressed, that is, the states~$\ket{\chi_1^+}$ and $\ket{\chi_1^-}$ are degenerate, see Eq.~(\ref{eq:Lu_crit}).
%
For \mbox{$\Lambda^\text{u}_1<\Lambda^\text{u}_{1,\text{crit}}$}, $D$ is effectively increased while $E$ is effectively reduced, and for \mbox{$\Lambda^\text{u}_1>\Lambda^\text{u}_{1,\text{crit}}$}, the energies of the two states are inverted.
Further details can be found in Sec.~\ref{sec:E_D_spectrum}.
}
\label{fig2}
\end{figure}
In Fig.~\ref{fig2}, we schematically show how the spin-vibron coupling affects the energy of the spin states,
\mbox{$
\mathcal{E}_{\chi_1^\pm}
=
-D_\text{eff}\pm E_\text{eff}
$}, taking \mbox{$\mathcal{E}_{\chi_1^0}=0$} as reference energy.
Specifically, we tune the uniaxial component of the spin-vibron coupling~$\Lambda^\text{u}_1$ here, while for simplicity fixing the vibration energy $\hbar\omega$, the charge-vibron coupling strength~$\lambda$, as well as the relation between~$\Lambda^\text{u}_1$ and~$\Lambda^\text{t}_1$ given by~$\zeta$, focusing on a value \mbox{$\zeta<1$}. Nonetheless, we recall that due to the deformation of a molecule, all parameters~$\omega$, $\lambda$, $\Lambda^\text{u}_1$ and~$\Lambda^\text{t}_1$ can in principle change.
First of all, it can be seen that the states~$\ket{\chi_1^-}$ and~$\ket{\chi_1^+}$ respond differently to changing~$\Lambda^\text{u}_1$.
Since~$\Delta D$ is positive [see Eq.~(\ref{eq:dD_magnitude})], whereas~$\Delta E$ is negative [see Eq.~(\ref{eq:dE_magnitude})], their impact on the two states is also not equally strong: While for~$\ket{\chi_1^+}$ the effect of these two corrections is additive, \mbox{$-\Delta D-|\Delta E|$}, the effect on~$\ket{\chi_1^-}$ is reduced, namely, it is
\mbox{$-\Delta D+|\Delta E|$}.\footnote{%
In particular, if $\zeta$ was increased such that~$\zeta\approx1$, one would find $\Delta D\approx|\Delta E|$ and the effect of the spin-vibron coupling on $\ket{\chi_1^-}$ would be completely suppressed.
}
A further result of this dissimilar behavior of~$\ket{\chi_1^+}$ and~$\ket{\chi_1^-}$ is that their energies can, in general, even be inverted with increasing~$\Lambda^\text{u}_1$. The crossover between these two situations happens at a critical value~$\Lambda^\text{u}_{1,\text{crit}}$, namely at
\begin{equation}\label{eq:Lu_crit}
\Lambda^\text{u}_{1,\text{crit}}
=
\sqrt{
\bigg(\!\frac{\lambda}{2}\!\bigg)^{\!\!2}
+
\frac{E}{2\zeta\hbar\omega}
}
-
\frac{\lambda}{2}
,
\end{equation}
where the degeneracy of the states~$\ket{\chi_1^+}$ and~$\ket{\chi_1^-}$ is restored (\mbox{$\mathcal{E}_{\chi_1^+}=\mathcal{E}_{\chi_1^-}$}). Gaining control over the spin-vibron coupling is therefore extremely advantageous, because it would enable enhancing the overall anisotropy (important for information storage) and at the same time it could reduce, or even fully cancel, the energy splitting between the lower lying states.
The value of $\zeta$ determines the slope of the energy of the state $\ket{\chi_1^-}$ as a function of the spin-vibron coupling (shown in Fig.~\ref{fig2} for a negative slope at $\zeta<1$). Thus, if a molecule is characterized by~\mbox{$\zeta>1$} (that is, when vibrations couple stronger to the transverse component of the molecular spin) and by vibrational modes of sufficiently large energies, it is actually possible that ---due to a large \textit{positive} slope--- the energy of~$\ket{\chi_1^-}$ can become larger than that of~$\ket{\chi_1^0}$.
More generally speaking, the value of $\zeta$ influences the energy at which the states $\ket{\chi_1^-}$ and $\ket{\chi_1^+}$ cross as well as the critical spin-vibron coupling at which the crossing occurs [see also Eq.~(\ref{eq:Lu_crit}) for the dependence of the critical coupling on $\zeta$]. For this reason, in Sec.~\ref{sec:Effect_of_zeta}, we will also discuss how the value of $\zeta$ affects the transport characteristics of the system.
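The location of this degeneracy point is easy to confirm numerically. A minimal Python sketch (our own illustration, using parameter values of the kind employed in Sec.~\ref{sec:spectroscopy}) evaluates \mbox{$E_\text{eff}=E+\Delta E$} from Eq.~(\ref{eq:dE_magnitude}) at the critical coupling of Eq.~(\ref{eq:Lu_crit}):
\begin{verbatim}
import numpy as np

D, E, hw = 1.0, 0.15, 4.0        # energies in units of D
lam, zeta = 1.5, 0.15

def E_eff(Lu):                   # E + Delta E, cf. Eq. (dE_magnitude)
    return E - 2 * zeta * lam * Lu * hw * (1 + Lu / lam)

Lu_crit = np.sqrt((lam / 2)**2 + E / (2 * zeta * hw)) - lam / 2
print(Lu_crit, E_eff(Lu_crit))   # E_eff vanishes at the critical coupling
\end{verbatim}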
\subsection{Magnetic spectrum reversal}
\label{sec:barrier_flip}
In general, the sign of corrections to the magnetic anisotropy due to spin-vibron coupling depends on whether the relevant coupling parameters~$\Lambda^\text{u}_{nq}$ and~$\Lambda^\text{t}_{nq}$ have the same or opposite signs with respect to the bare anisotropy parameters~$D_n$ and~$E_n$, see Eqs.~(\ref{eq:Dnt_def})-(\ref{eq:Dnf_def}) and Eqs.~(\ref{eq:Ent_def})-(\ref{eq:Cnf_def}) in Sec.~\ref{sec:states_and_effHams}.
In the previous subsection, we have fixed all these parameters to be \emph{positive}. In consequence, we have concluded that while the correction~$\Delta D$ to the uniaxial component of magnetic anisotropy is expected to be positive [see Eq.~(\ref{eq:dD_magnitude})], the correction~$\Delta E$ to the transverse component~$E_\text{eff}$ is negative [see Eq.~(\ref{eq:dE_magnitude})]. The latter can result in quenching the transverse anisotropy for some particular values of the spin-vibron couplings.
One should, however, notice that molecules can also be characterized by one or both \emph{negative} bare anisotropy parameters. Interestingly, in such a case we predict that the coupling of charge and spin of a molecule to its vibrations can lead to a substantial qualitative change of the magnetic spectrum. This effect may play a key role especially for a large-spin molecule (that is, with~\mbox{$S_0,S_1>1$} and~\mbox{$|S_1-S_0|=1/2$}) and in the absence of transverse magnetic anisotropy (\mbox{$E_0=E_1=0$}), where it can be observed in transport measurements as the onset of a pronounced spin blockade, as we will show in Sec.~\ref{sec:transport_blockade}.
\begin{figure}[t]
\includegraphics[scale=1]{Fig3.pdf}
\caption{
%
Energy spectrum of an exemplary molecule with \mbox{$S_0=3/2$} and \mbox{$S_1=2$} with \mbox{$D\equiv D_0=D_1<0$} and \mbox{$E_n=0$}.
%
(a)~No spin-vibron coupling (\mbox{$\Lambda^\text{u}=0$}). (b) Modification of the molecular spectrum due to \mbox{$\Lambda^\text{u}\neq0$}, with \mbox{$\Lambda^\text{u}\equiv\Lambda^\text{u}_0=\Lambda^\text{u}_1$} and \mbox{$D_\text{eff}\approx D+2\lambda\Lambda^\text{u}\hbar\omega$}.
%
Note that in both cases some compensating gate voltage is assumed to be applied, so that the ground spin states for the neutral (\mbox{$n=0$}) and charged (\mbox{$n=1$}) molecule are degenerate.
}
\label{fig3}
\end{figure}
To illustrate this point, let us consider the simplest model of a molecule for which such a situation arises: a molecule with \mbox{$S_0=3/2$} and \mbox{$S_1=2$} that exhibits only uniaxial magnetic anisotropy with~\mbox{$D\equiv D_0=D_1<0$}, and as previously, the contribution of only one vibrational mode is taken into account. The key feature of the energy spectrum of such a model molecule is that for both charge states the ground spin state(s), in each vibrational state, is formed by the state(s) characterized by the smallest projection of the spin along the $z$-axis, namely, \mbox{$\ket{0}$} and \mbox{$\ket{\pm1/2}$}.
The corresponding energy spectrum in the absence of spin-vibron coupling~(\mbox{$\Lambda^\text{u}\equiv\Lambda^\text{u}_0=\Lambda^\text{u}_1=0$}) is schematically depicted in Fig.~\ref{fig3}(a).
The situation changes as soon as~\mbox{$\Lambda^\text{u}\neq0$}. In the limit where the charge-vibron coupling dominates (\mbox{$\Lambda^\text{u}_n/\lambda\ll1$}) and for~\mbox{$\Lambda^\text{t}_n=0$}, from Eqs.~(\ref{eq:Dnt_def})-(\ref{eq:Dnf_def}) one expects a positive correction~\mbox{$\approx2\lambda\Lambda^\text{u}\hbar\omega$} to the otherwise negative uniaxial magnetic anisotropy constant~$D$ only in the charged state.
Then, for \mbox{$\Lambda^\text{u}>|D|/(2\lambda\hbar\omega)$} one finds a reversal of the magnetic spectrum in the charged state, meaning that the states with the largest projection of the spin along the $z$-axis (\mbox{$\ket{\pm S_1}$}) again become lowest in energy, as one can see in Fig.~\ref{fig3}(b). Note at the same time that the magnetic spectrum in the neutral state remains approximately unaffected by coupling to molecular vibrations.
Importantly, the flip of the magnetic spectrum in only one charge state [as shown in Fig.~\ref{fig3}(b)] has a profound consequence for transport measurements, as transitions between the ground spin states of different charge states are no longer permitted by spin selection rules, see
Eq.~(\ref{eq:tunT_def}).\footnote{%
This point also justifies our deliberate choice of a model molecule which does not possess the transverse component of magnetic anisotropy. If the molecule exhibited transverse magnetic anisotropy, the ground spin state would consist of a superposition of pure $S_z$-projections, and thus the transitions in question would still be allowed, though with lower weights.}
This aspect will be further addressed in Sec.~\ref{sec:transport_blockade}.
On the other hand, at \mbox{$\Lambda^\text{u}=|D|/(2\lambda\hbar\omega)$} all the spin states in the charged state become degenerate, so that the molecule effectively behaves as if it was spin-isotropic. Actually, the spin-isotropic behavior should be observed already when \mbox{$k_\text{B} T,\Gamma\gtrsim (2S_1-1)|D_\text{eff}|$} with \mbox{$D_\text{eff}\approx D+2\lambda\Lambda^\text{u}\hbar\omega$}.
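The reversal condition itself can be illustrated with the sector formula used in Sec.~\ref{sec:EffH_uni}: for purely uniaxial coupling, the charged-state energy of the projection~$M$ is, up to a constant, \mbox{$-(D+2\lambda\Lambda^\text{u}\hbar\omega)M^2-(\Lambda^\text{u})^2\hbar\omega M^4$}. A minimal Python sketch (our own illustration; parameter values are arbitrary) shows the ground projection switching from \mbox{$M=0$} to \mbox{$|M|=2$}:
\begin{verbatim}
hw, D, lam = 4.0, -1.0, 1.5       # D < 0, energies in units of |D|
for Lu in (0.0, 0.1):             # below / above |D|/(2*lam*hw) ~ 0.083
    En = {M: -(D + 2*lam*Lu*hw) * M**2 - Lu**2 * hw * M**4
          for M in range(3)}
    print(Lu, min(En, key=En.get))    # ground |M|: 0 -> 2
\end{verbatim}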
\section{Transport characteristics}
\label{sec:transport}
As discussed in the previous section, the spin-vibron coupling can significantly influence the magnetic anisotropy of a molecule.
In this section, we demonstrate how these effects manifest in the tunneling current through such a molecule in a transport setup as depicted in Fig.~\ref{fig1}(a). We focus on the two example molecules, for which we discussed the modified spectral properties in the previous section.
\subsection{Kinetic equations}
\label{sec:transport_theory}
In order to calculate the charge current through the molecule in the junction, we use a master equation approach derived from a real-time diagrammatic technique~\cite{Scholler1994,Koenig1996}. We start from the density matrix of the whole system and trace out the reservoir degrees of freedom. We are then left with the dynamics of the reduced density matrix with the elements~\mbox{$
\mathcal{P}_{\xi'}^{\xi}
\equiv
\bra{\xi} \skew{0.5}{\hat}{\varrho}^\text{red} \ket{\xi'}
$}. Here the states~\mbox{$\ket{\xi}\in\{\ket{\psi_n}\otimes\ketv{\vartheta}\}$} denote the eigenstates of the vibrating molecule, when decoupled from the electronic reservoirs.
We are interested in transport in the stationary state and in a situation where the molecule is weakly coupled to the electrodes, \mbox{$\Gamma\ll k_\text{B} T$}. For this reason, we restrict our calculations to the sequential tunneling limit, where only first-order contributions in $\Gamma/(k_\text{B} T)$ are taken into account in the tunneling dynamics. Then, for the exemplary molecules discussed in Sec.~\ref{sec:spectrum}, the dynamics of the diagonal elements of the reduced density matrix, \mbox{$\mathcal{P}_\xi^\xi\equiv\mathcal{P}_\xi$}, is governed by the master equation
\begin{equation}\label{eq:MasterEq}
\dfrac{\text{d}\mathcal{P}_{\xi}}{\text{d} t}
=
0
=
\sum_{\xi\neq\xi^\prime}
\left(W_{\xi\xi^\prime}
\mathcal{P}_{\xi^\prime}-W_{\xi^\prime\xi}
\mathcal{P}_{\xi}\right)
.
\end{equation}
The kernel $W_{\xi\xi^\prime}=\sum_{r=\text{S,D}}W_{\xi\xi^\prime}^{r}$ takes into account transition rates between molecular states due to (vibron-dependent) electron tunneling between the molecule and the source (\mbox{$r=\text{S}$}) or the drain (\mbox{$r=\text{D}$}). The elements of this kernel can be found employing Fermi's golden rule.
For instance, the transition from a neutral state~\mbox{$\ket{\xi_0}=\ket{\psi_0}\otimes\ketv{\vartheta}$} to a charged one~\mbox{$\ket{\xi_1}=\ket{\psi_1}\otimes\ketv{\vartheta'}$} induced by tunneling of a single electron with spin~$\sigma$ from the $r$th electrode to the molecule occurs with the rate
\begin{equation}\label{eq:W_def}
W_{\xi_1\xi_0}^{r\sigma}
=
\frac{\Gamma_\sigma^r}{\hbar}
\big|\mathcal{T}^\sigma_{\psi_1\psi_0}\big|^2
\,
\big|\mathcal{J}_{\vartheta'\vartheta}\big|^2
f_r(\mathcal{E}_{\xi_1}-\mathcal{E}_{\xi_0})
,
\end{equation}
with the coefficients $\mathcal{T}^\sigma_{\psi_1\psi_0}$ and $\mathcal{J}_{\vartheta'\vartheta}$ given by Eq.~(\ref{eq:tunT_def}) and Eq.~(\ref{eq:FCcoef_def}), respectively.
It is important to emphasize that, while diagonal and off-diagonal elements of the reduced density matrix are decoupled in the example cases studied here, this is by no means a generally valid statement.
In Appendix~\ref{app:coherences}, we show in detail how this decoupling occurs here, starting from a full generalized kinetic equation that involves both the diagonal (\emph{occupation probabilities}) and the off-diagonal (\emph{coherences}) elements of the reduced density matrix of the molecule~$\skew{0.5}{\hat}{\varrho}^\text{red}$~\cite{Braun2004,Weymann2005,Sothmann2010}.
We write the tunneling current through the device as the average of the currents through the tunnel barriers connecting the molecule to the drain~($I_\text{D}$) and the source~($I_\text{S}$),
\begin{equation}\label{eq:current}
I
\equiv
\frac{I_\text{D}-I_\text{S}}{2}
=
\frac{e}{2} \sum_{\xi,\xi'}\left(n_\xi-n_{\xi'}\right)\left(W_{\xi\xi^\prime}^{\text{D}}-W_{\xi\xi^\prime}^{\text{S}}\right)\mathcal{P}_{\xi^\prime}
,
\end{equation}
with the occupation probabilities~$\mathcal{P}_{\xi^\prime}$ obtained from Eq.~\eqref{eq:MasterEq}. The variables $n_\xi$ take the value 0 or 1, depending on whether the molecule in state $\xi$ is neutral or charged, respectively.
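For a modest number of molecular states, the stationary occupations follow from a small linear-algebra problem. A minimal Python sketch (our own illustration with a generic toy rate matrix, not the physical rates of Eq.~(\ref{eq:W_def})) solves Eq.~(\ref{eq:MasterEq}) together with the normalization \mbox{$\sum_\xi\mathcal{P}_\xi=1$}:
\begin{verbatim}
import numpy as np

def stationary(W):
    # W[i, j]: rate for the transition j -> i; diagonal entries are zero
    N = W.shape[0]
    L = W - np.diag(W.sum(axis=0))        # gain minus loss terms
    A = np.vstack([L, np.ones(N)])        # append the normalization row
    rhs = np.zeros(N + 1); rhs[-1] = 1.0
    P, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return P

W = np.array([[0.0, 0.2, 0.1],
              [0.5, 0.0, 0.3],
              [0.1, 0.4, 0.0]])           # toy rates (arbitrary units)
print(stationary(W))                      # nonnegative, sums to one
\end{verbatim}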
In what follows, we first give a general overview of features arising in transport spectroscopy due to the interplay of magnetic anisotropy and vibrations. Next, we present a specific case where transport characteristics of the device change radically if spin-vibron coupling is induced in the system.
In our discussion, we employ the two examples introduced in detail in Sec.~\ref{sec:spectrum}.
\subsection{Effect of the interplay of magnetic anisotropy and vibrations on transport characteristics}
\label{sec:spectroscopy}
We will now investigate the impact of the spectral features of the model molecule discussed in Sec.~\ref{sec:E_D_spectrum} on the tunneling current through it. We therefore come back to the simple molecule with spin values~\mbox{$S_0=1/2$} and~\mbox{$S_1=1$}, whose spin-eigenstates in the neutral state are given by \mbox{$\ket{\chi_0^\pm}\equiv\ket{\pm1/2}$}, while in the charged state by
\mbox{$
\ket{\chi_1^0}
=
\ket{0}
$}
and
\mbox{$
\ket{\chi_1^\pm}
=
\big(
\ket{1}
\pm
\ket{-1}
\big)
/
\sqrt{2}
.
$}
Its effective energy spectrum (now including vibrational states) is schematically shown in Fig.~\ref{fig4}(a).
Moreover, the following numerical results are obtained for realistic values of the relevant parameters, that is, within the range of experimentally observed values; see, \emph{e.g.}\xspace, Ref.~\cite{Burzur2014}.
Specifically, we assume that the coefficients characterizing intrinsic (static) magnetic anisotropy are~\mbox{$D=500$}~$\mu$eV and \mbox{$E/D=0.15$}, whereas the energy of a molecular vibrational mode is~\mbox{$\hbar\omega/D=4$}.
We also note that, except in Sec.~\ref{sec:magnetic_electrodes}, we consider here nonmagnetic electrodes (\mbox{$P=0$}).
\begin{figure*}[t!]
\includegraphics[width=1\textwidth]{Fig4.pdf}
\caption{
Effect of the charge- and spin-vibration couplings on transport characteristics of a tunnel junction containing a single molecule.
%
\emph{Left} (\emph{right}) box represents the case without (with) the spin-vibron coupling being included.
%
(a,f) Schematic depiction of effective energy spectra for a molecule studied in Sec.~\ref{sec:spectroscopy}, where two consecutive vibronic states~$\ketv{n_\text{v}}$ (for \mbox{$n_\text{v}=0,1$}) are shown.
%
(b,d) Differential conductance~$\text{d} I/\text{d} V_\text{b}$ as a function of gate~$V_\text{g}$ and bias~$V_\text{b}$ voltages for \mbox{$\lambda=1.5$} and nonmagnetic electrodes (\mbox{$P=0$}): (b) \mbox{$\Lambda^\text{u}_1=\Lambda^\text{t}_1=0$}, and (d) \mbox{$\Lambda^\text{u}_1=0.05$} with \mbox{$\zeta=0.15$}.
%
Here, \mbox{$G_0\equiv2e^2/h$} stands for the conductance quantum.
%
(c) and~(e) Cross-sections of the density plots in (b) and~(d), respectively, taken at \mbox{$eV_\text{g}/D=-0.5$} [that is, along the finely dashed lines in (b,d)], with the corresponding spectra given in~(a) and~(f).\textsuperscript{\ref{fn:Vg_comment}}
%
Vertical thin dotted-dashed lines in (c,e), indicating the position of resonances in~(c), serve merely as a guide for the eye.
%
Parameters assumed in calculations: \mbox{$\Gamma/D=0.01$}, \mbox{$k_\text{B} T/D=0.02$}, \mbox{$E/D=0.15$} and~\mbox{$\hbar\omega/D=4$} with \mbox{$D=500$}~$\mu$eV.
}
\label{fig4}
\end{figure*}
\subsubsection{No spin-vibron coupling}
To begin with, let us first consider the case where the molecule exhibits only the \textit{intrinsic} component of magnetic anisotropy, meaning that only charge-vibron (\mbox{$\lambda\neq0$}) but no spin-vibron coupling (\mbox{$\Lambda^\text{u}_1=\Lambda^\text{t}_1=0$}) is present. The corresponding spectrum together with the resulting differential conductance~\mbox{$\text{d} I/\text{d}V_\text{b}$} is shown in the left box of Fig.~\ref{fig4}.%
\footnote{%
For the sake of simplicity and in order to enable easy comparison between the case without and with the spin-vibron coupling being present, we assume that some compensating gate voltage~$V_\text{g}^\prime$ is always applied. As a result, at~\mbox{$V_\text{g}=0$} the neutral doublet is degenerate with the charged ground state, see Fig.~\ref{fig4}(a,f).
\label{fn:Vg_comment}
}
One can generally see that the spectroscopic features at low bias-voltage (\mbox{$eV_\text{b}<2\hbar\omega$})%
\footnote{%
The factor `2' stems from the fact that the bias voltage~$V_\text{b}$ is applied symmetrically to the electrodes, that is, \mbox{$\mu_{\text{S}(\text{D})}=\mu_0\pm eV_\text{b}/2$}.
}
become duplicated whenever the bias voltage~$eV_\text{b}$ exceeds twice the energy $n_\text{v}\hbar\omega$ (for~\mbox{$n_\text{v}=1,2,3\ldots$}) of the excited molecular vibrational state~$\ketv{n_\text{v}}$.
The analysis of the position of resonances allows for extraction of the magnetic-anisotropy parameters~$D$ and~$E$, Eq.~(\ref{eq:H_spin}). For this purpose, in Fig.~\ref{fig4}(c) we plot a representative cross-section from Fig.~\ref{fig4}(b) [see Fig.~\ref{fig4}(a) for the corresponding energy spectrum], and discuss the origin of resonances labeled~\mbox{\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace-\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{197}}}\xspace}. These resonances essentially emerge due to transitions between different spin states, which follow the selection rules imposed by the Clebsch-Gordan coefficients in Eq.~(\ref{eq:tunT_def}).
Specifically, the resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace is related to the ground-to-ground-state transitions \mbox{$\ket{\chi_0^\pm}\rightarrow\ket{\chi_1^-}$} \mbox{---note} that it is accompanied by a resonance mirrored with respect to \mbox{$V_\text{g}=0$} representing transition in the opposite direction, \mbox{$\ket{\chi_1^-}\rightarrow\ket{\chi_0^\pm}$}.
On the other hand, resonances~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace correspond to the ground-to-excited-state transitions \mbox{$\ket{\chi_0^\pm}\rightarrow\ket{\chi_1^+}$} and \mbox{$\ket{\chi_0^\pm}\rightarrow\ket{\chi_1^0}$}, respectively. Consequently, from the relative position of resonances~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace,~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace one can deduce~$D$ and~$E$, as can be seen in Fig.~\ref{fig4}(a).
Furthermore, resonances~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace can be observed only when a molecule becomes reduced (that is, it accepts one extra electron). Since the neutral state involves only a doublet state, no analogous resonances appear for the reverse process (oxidation).
All the resonances discussed so far stem from transitions between molecular spin states belonging to the ground molecular vibrational state, that is, for~\mbox{$n_\text{v}=0$}. However, when also transitions between different vibrational states are energetically permitted, the excited-to-excited-state transitions become visible for the oxidation process. Resonances representing such transitions are, for instance, those labeled as~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{195}}}\xspace~(for \mbox{$\ket{\chi_1^0}\otimes\ketv{0}\rightarrow\ket{\chi_0^\pm}\otimes\ketv{1}$}) and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{196}}}\xspace~(for \mbox{$\ket{\chi_1^+}\otimes\ketv{0}\rightarrow\ket{\chi_0^\pm}\otimes\ketv{1}$}). The characteristic property of these resonances, which can be seen in Fig.~\ref{fig4}(b), is that they do not continue to resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace. Instead, they terminate at resonances associated with single-electron-tunneling-in transitions that lead to occupation of relevant excited states, namely, resonances~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{195}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{196}}}\xspace terminate at~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace, respectively.
Finally, the last pronounced resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{197}}}\xspace in Fig.~\ref{fig4}(c) arises owing to transitions between ground spin states of two neighboring vibrational states, that is, \mbox{$\ket{\chi_1^-}\otimes\ketv{n_\text{v}}\rightarrow\ket{\chi_0^\pm}\otimes\ketv{n_\text{v}^\prime}$} with \mbox{$n_\text{v}^\prime-n_\text{v}=1$}. Since the dominating contribution comes from the transition between the ground~(\mbox{$n_\text{v}=0$}) and first excited (\mbox{$n_\text{v}^\prime=1$}) vibrational states, resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{197}}}\xspace in Fig.~\ref{fig4}(b) reaches resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace. Note that from the position of~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{197}}}\xspace one can easily determine the energy of the vibrational mode, see Fig.~\ref{fig4}(a).
The physical origin of resonances visible at larger bias voltage (\mbox{$eV_\text{b}\geqslant2\hbar\omega$}) can be understood using the same arguments as above. The only difference is now that transitions take place between states with different numbers of molecular vibrational excitations.
Moreover, the intensity of equivalent resonances (that is, associated with the same type of spin transitions but occurring between different vibrational states) is attenuated.
This effect is governed by the Franck-Condon factors, Eq.~(\ref{eq:FCcoef_def}), which basically put a weight on transition rates determined by the nuclear wave function overlap between the various vibrational states of the molecules~\cite{Koch2006Nov,Seldenthuis2008}.
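For the displaced harmonic oscillators considered here, these overlaps admit a standard closed form in terms of associated Laguerre polynomials; the sketch below is only an illustration of that textbook expression (conventions may differ), with Eq.~(\ref{eq:FCcoef_def}) remaining the authoritative definition:
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def fc_overlap(m, n, lam):
    """|<m|n'>| between two equal-frequency oscillators displaced by
    the dimensionless shift lam; written for m >= n (swap otherwise)."""
    if m < n:
        m, n = n, m
    pref = np.exp(-lam**2 / 2.0) * lam**(m - n)
    pref *= np.sqrt(factorial(n) / factorial(m))
    return abs(pref * eval_genlaguerre(n, m - n, lam**2))
\end{verbatim}
For example, the ground-state overlap \texttt{fc\_overlap(0, 0, 1.5)} equals $e^{-\lambda^{2}/2}\approx0.32$, showing directly how transitions involving higher vibrational states become attenuated.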
\subsubsection{Spectroscopic signatures of spin-vibron coupling}
\label{sec:spectroscopy_spin-vib}
The situation changes if also the spin-vibron coupling becomes active, which is illustrated in the right box of Fig.~\ref{fig4}, with the density plot of the differential conductance~\mbox{$\text{d} I/\text{d}V_\text{b}$} given in panel~(d) and a relevant cross-section for \mbox{$eV_\text{g}/D=-0.5$} shown in panel~(e).
The position of resonances~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace associated with the value of the uniaxial and transverse component of magnetic anisotropy, respectively, is shifted; compare in Fig.~\ref{fig4} panel~(c) for~\mbox{$\Lambda^\text{u}_1=0$} with panel~(e) for~\mbox{$\Lambda^\text{u}_1\neq0$}. In particular, resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace moves towards larger bias voltages (\mbox{$D_\text{eff}>D$}), while for resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace the opposite behavior is observed (\mbox{$E_\text{eff}<E$}), see the pertinent energy spectrum schematically shown in Fig.~\ref{fig4}(f).
Physically, it corresponds to increasing the energy barrier for spin reversal (determined by~$D_\text{eff}$), while reducing the effect of under-barrier transitions (introduced by~$E_\text{eff}$).
Moreover, we also note that resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{195}}}\xspace from Fig.~\ref{fig4}(c) is absent in Fig.~\ref{fig4}(e). The underlying transition does not arise in the present situation, because the energy of the state~\mbox{$\ket{\chi_0^\pm}\otimes\ketv{1}$} is smaller than that for \mbox{$\ket{\chi_1^0}\otimes\ketv{0}$}, compare panels~(a) and~(f) in Fig.~\ref{fig4}.
In experiment, measuring the shifts~\mbox{$\Delta D=D_\text{eff}-D$} and~\mbox{$\Delta E=E_\text{eff}-E$} would allow for estimating the spin-vibron coupling strengths~$\Lambda^\text{u}_1$ and~$\Lambda^\text{t}_1$ by means of Eqs.~(\ref{eq:DeltaD_def})-(\ref{eq:DeltaE_def}).
Moreover, if one could control and increase further the strength of the spin-vibron coupling, it should in principle be possible to diminish the gap between states~$\ket{\chi_1^\pm}$ beyond the detection limit set here predominantly by temperature~$T$.
One of the promising ways to achieve this goal may be to tune the coupling \emph{via}\xspace stretching of the molecule embedded in a mechanically controllable break junction. Realistic changes of the coupling strength obtained with this method are expected to be of the order of a few percent~\cite{Adamczewska_cooment}. It is also for this reason that we chose to show the example in the right box of Fig.~\ref{fig4} and to not consider the case where~$E_\text{eff}$ can get fully suppressed (up to $\Gamma$ and below) \emph{via}\xspace the spin-vibron coupling.
Nevertheless, for some specific molecules it may still be possible to completely switch off the transverse component of magnetic anisotropy in this way.
\subsubsection{Asymmetry effect of spin-vibron coupling}
\label{sec:Effect_of_zeta}
\begin{figure}[t]
\includegraphics[scale=1]{Fig5.pdf}
\caption{
Influence of the asymmetry between the transverse and the uniaxial component of the spin-vibron coupling (quantified by~\mbox{$\zeta=\Lambda^\text{t}_1/\Lambda^\text{u}_1$}) on the differential conductance shown for indicated values of~$\zeta$.
%
For clarity, curves for~\mbox{$\zeta>0.15$} are shifted vertically, with the bottom curve for~\mbox{$\zeta=0.15$} being identical to that presented in Fig.~\ref{fig4}(e).
%
Note that, as previously, some compensating gate voltage is applied to fix the position of the (left-most) resonance corresponding to the ground-to-ground-state transitions, and thus, to enable easy comparison of different curves.
%
Other parameters are taken the same as in the right box of Fig.~\ref{fig4}.
}
\label{fig5}
\end{figure}
In the previous subsection, we made the assumption that the ratio of the transverse to the uniaxial component of the spin-vibron coupling, \mbox{$\zeta=\Lambda^\text{t}_1/\Lambda^\text{u}_1$}, is approximately \mbox{$\zeta\approx E/D=0.15$}. However, in real systems this condition does not necessarily have to be satisfied. For this reason, here we discuss how the asymmetry between the different components of the spin-vibron coupling (quantified by~$\zeta$) becomes visible in transport spectroscopy.
First of all, recall from Sec.~\ref{sec:E_D_spectrum} that while the correction~$\Delta D$ to the uniaxial magnetic anisotropy [see Eq.~(\ref{eq:dD_magnitude})] only weakly depends on~$\zeta$, in the case of the correction~$\Delta E$ to the transverse magnetic anisotropy [see Eq.~(\ref{eq:dE_magnitude})] this dependence is linear. As a result, the value of~$\zeta$ should more significantly affect transport features associated with the energy scale~$2E_\text{eff}$ rather than the ones associated with~$D_\text{eff}$. In particular, the position of resonances \mbox{\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace{\,}--\,\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace} and \protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{196}}}\xspace in Fig.~\ref{fig4} discussed in the former subsection are thereby
modified.\footnote{%
Experimentally, it might be difficult to discern the swap between resonances \protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace and \protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace, discussed in the following, and it might therefore seem as if only resonance \protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace got affected.}
In Fig.~\ref{fig5} we analyze how the differential conductance plotted in Fig.~\ref{fig4}(e) [shown here for reference as the green curve for~\mbox{$\zeta=0.15$}] depends on the value of~$\zeta$ ---note that the coupling parameter~$\Lambda^\text{u}_1$ is fixed in the present considerations (\mbox{$\Lambda^\text{u}_1=0.05$}).
As discussed in Sec.~\ref{sec:E_D_spectrum}, the relation between~$\Lambda^\text{u}_1$ and~$\Lambda^\text{u}_{1,\text{crit}}$ [see Eq.~(\ref{eq:Lu_crit})] determines the ground spin state of a charged molecule, namely: $\ket{\chi_1^-}$ if \mbox{$\Lambda^\text{u}_1<\Lambda^\text{u}_{1,\text{crit}}$}, and $\ket{\chi_1^+}$ if \mbox{$\Lambda^\text{u}_1>\Lambda^\text{u}_{1,\text{crit}}$}, which has been graphically depicted in Fig.~\ref{fig2}.
Importantly, when increasing~$\zeta$ the critical value~$\Lambda^\text{u}_{1,\text{crit}}$ is effectively diminished.
Therefore, one finds that at fixed $\Lambda^\text{u}_1$, $\ket{\chi_1^-}$ is the ground state for \mbox{$\zeta\lesssim\zeta^\ast$}, while $\ket{\chi_1^+}$ is the ground state for \mbox{$\zeta\gtrsim\zeta^\ast$}, with
\begin{equation}
\zeta^\ast
=
\frac{
E
}{
2\lambda\Lambda^\text{u}_1
\big(1+\Lambda^\text{u}_1/\lambda\big)
\hbar\omega
}
.
\end{equation}
For the parameters used in Fig.~\ref{fig5}, one finds \mbox{$\zeta^\ast\approx0.24$}.
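This value is easily reproduced from the expression above; a one-line check in units of $D$ (with \mbox{$E/D=0.15$}, \mbox{$\hbar\omega/D=4$}, \mbox{$\lambda=1.5$} and \mbox{$\Lambda^\text{u}_1=0.05$}):
\begin{verbatim}
E, hw, lam, Lu = 0.15, 4.0, 1.5, 0.05   # in units of D
zeta_star = E / (2 * lam * Lu * (1 + Lu / lam) * hw)
print(round(zeta_star, 2))              # -> 0.24
\end{verbatim}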
In consequence, one expects that: (i) \mbox{$0<E_\text{eff}<E$} for \mbox{$\zeta\lesssim\zeta^\ast$}, and in particular, \mbox{$E_\text{eff}\approx E$} for negligibly small~$\zeta$; (ii) \mbox{$E_\text{eff}<0$} for \mbox{$\zeta\gtrsim\zeta^\ast$}, and additionally if \mbox{$\zeta>2\zeta^\ast$} one finds \mbox{$|E_\text{eff}|>E$}.
These distinctive regimes translate into specific shifts of characteristic resonances in the differential conductance, see Fig.~\ref{fig5}.
To illustrate this point, as an example, we have schematically indicated there with thin lines the evolution of resonances marked as \protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace (dashed line) and \protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace (dotted-dashed line), corresponding to transitions \mbox{$\ket{\chi_0^\pm}\rightarrow\ket{\chi_1^-}$} and \mbox{$\ket{\chi_0^\pm}\rightarrow\ket{\chi_1^+}$}, respectively.
For large~$\zeta$ (that is, for \mbox{$\zeta\gtrsim2\zeta^\ast$}) the two resonances are well separated, which would allow for a more accurate readout of excitation energies.
\subsubsection{Potential of magnetic electrodes}
\label{sec:magnetic_electrodes}
\begin{figure}[t]
\includegraphics[scale=1]{Fig6.pdf}
\caption{
%
Selective effect of two different collinear magnetic configurations of the device [that is, for \emph{parallel} (solid lines) and \emph{antiparallel} (dashed lines) relative orientation of the spin moments in the electrodes (for \mbox{$P=0.5$})] on differential conductance~\mbox{$\text{d} I/\text{d}V_\text{b}$}.
%
Note that solid lines in panels~(a) and~(b) are identical to those in panels~(c) and~(e) of~Fig.~\ref{fig4}, respectively, obtained for nonmagnetic electrodes (\emph{i.e.}\xspace, for \mbox{$P=0$}).
%
All remaining parameters as in Fig.~\ref{fig4}.
}
\label{fig6}
\end{figure}
Finally, we note that the advantage of using a magnetic junction is that one can selectively enhance or suppress resonances.
So far, we have concentrated exclusively on transport characteristics of the device in the case of \emph{nonmagnetic} electrodes, see Fig.~\ref{fig4} and Fig.~\ref{fig5}. Noteworthily, when using \emph{magnetic} electrodes, one can adjust the intensity of certain resonances by switching the device from the parallel into the antiparallel magnetic configuration.
In Fig.~\ref{fig6} we compare cross-sections of the differential conductance at a fixed gate voltage obtained by changing the relative orientation of spin moments of the source and the drain from parallel (solid lines) to antiparallel (dashed lines).
Importantly, note that the solid lines for the parallel magnetic configuration are in fact identical to those calculated in Figs.~\ref{fig4}(c,e) for nonmagnetic electrodes.
It can be seen that while a majority of resonances is only weakly affected by the change of the magnetic configuration, two resonances visibly react to it: resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{194}}}\xspace becomes more pronounced and the intensity of resonance~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace gets diminished. In the latter case, by reducing the disproportion between the heights of resonances~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{192}}}\xspace and~\protect\raisebox{-1.25pt}{\protect\scalebox{1.3}{\ding{193}}}\xspace, one expects to better resolve the merging of the two resonances when for example $\zeta$ or $\Lambda^\text{u}_1$ are changed as discussed in the previous section.
The mechanism underlying this effect stems from the spin asymmetry of the tunnel coupling of a molecule to the drain and source electrodes, given at the end of Sec.~\ref{sec:theory}. It basically leads to unequal occupation probabilities of the neutral-doublet states~$\ket{\chi_0^-}$ and~$\ket{\chi_0^+}$, which affect, in turn, the current flowing through the molecule, Eq.~(\ref{eq:current}).
\subsection{Vibrationally induced spin blockade in transport}
\label{sec:transport_blockade}
Finally, we show that the reversal of the magnetic spectrum in a large-spin molecule due to the coupling of spin and charge to molecular vibrations manifests non-trivially in transport spectroscopy. As already announced in Sec.~\ref{sec:barrier_flip}, it can lead to the occurrence of a spin blockade in transport, which we investigate in the present section.
For this purpose, we employ the minimal model of a molecule capable of supporting such an effect, characterized by spins \mbox{$S_0=3/2$} and \mbox{$S_1=2$}, which exhibits only a (negative) uniaxial component of magnetic anisotropy, here assumed to be~\mbox{$D\equiv D_0=D_1=-125$}~$\mu$eV. The relevant magnetic spectrum of such a molecule is schematically shown in Fig.~\ref{fig3}.
For conceptual simplicity, we again include only one vibrational mode with energy~\mbox{$\hbar\omega=2$ meV}, and take the coupling parameters \mbox{$\lambda=1.5$} and \mbox{$\Lambda^\text{u}\equiv\Lambda^\text{u}_0=\Lambda^\text{u}_1=0.05$}, while consistently neglecting the transverse component of the coupling, that is, \mbox{$\Lambda^\text{t}_0=\Lambda^\text{t}_1=0$}. For other parameters see the caption of Fig.~\ref{fig7}.
\begin{figure}[t]
\includegraphics[scale=1]{Fig7.pdf}
\caption{
The effect of spin blockade in transport induced by the reversal of the magnetic spectrum due to the spin-vibron coupling.
%
Differential conductance~$\text{d} I/\text{d}V_\text{b}$ of a device based on a model molecule with~\mbox{$S_0=3/2$} and \mbox{$S_1=2$}, whose energy spectra are schematically shown in Fig.~\ref{fig3}, is plotted as a function of the gate~$V_\text{g}$ and bias~$V_\text{b}$ voltage for:
%
(a) \mbox{$\Lambda^\text{u}_0=\Lambda^\text{u}_1=0$}, and (b) \mbox{$\Lambda^\text{u}_0=\Lambda^\text{u}_1=0.05$}.
%
Note that the energy unit \mbox{$\Delta\mathcal{E}=4|D|$} corresponds to the difference between energies of the spin states \mbox{$\ket{0}$} and \mbox{$\ket{\pm 2}$} of the charged molecule without spin-vibron coupling, see also the right side of Fig.~\ref{fig3}(a).
%
NDC stands here for `negative differential conductance'.
%
The other parameters are \mbox{$\Gamma/\Delta\mathcal{E}=0.01$}, \mbox{$P=0$}, \mbox{$\lambda=1.5$}, \mbox{$k_\text{B} T/\Delta\mathcal{E}=0.02$}, \mbox{$E=0$}, \mbox{$\hbar\omega/\Delta\mathcal{E}=4$} with \mbox{$\Delta\mathcal{E}=4|D|=500$}~$\mu$eV.
}
\label{fig7}
\end{figure}
We show the differential conductance of this model system in Fig.~\ref{fig7} for both cases without [panel~(a)] and with [panel~(b)] the spin of the molecule being coupled to its vibrations.
In the former situation [panel (a)], one can see that the behavior of the differential conductance as a function of bias and gate voltages qualitatively resembles that for the molecule analyzed in Fig.~\ref{fig4}(b), but with more transitions since the molecule is characterized by a larger spin. The observed resonances can be attributed to specific transitions between different charge states~\mbox{$\ket{M_0}\otimes\ketv{n_\text{v}}$} and~\mbox{$\ket{M_1}\otimes\ketv{n_\text{v}^\prime}$} [see Fig.~\ref{fig3}(a)] that satisfy the spin selection rule~\mbox{$|M_1-M_0|=1/2$}.
The only new features are some (blue) spots of negative differential conductance (NDC, marked by arrows), which signify a reduction of transport. The NDC arises when the molecule gets trapped in the excited doublet state for~\mbox{$n_\text{v}=0$} (\emph{i.e.}\xspace, the state $\ket{\pm 3/2}$), before the transition to the highest-in-energy doublet state for~\mbox{$n_\text{v}=1$} (\emph{i.e.}\xspace, the state $\ket{\pm 2}$) becomes energetically permitted by application of a bias voltage. This NDC is possible since the energy required for the transitions \mbox{$\ket{\pm 1/2}\rightarrow\ket{\pm 1}$} and \mbox{$\ket{\pm 1}\rightarrow\ket{\pm 3/2}$} is the same, while the excitation energy for \mbox{$\ket{\pm 3/2}\rightarrow\ket{\pm 2}$} is two times larger. See also the spectra in Fig.~\ref{fig3}(a) for clarification.
Also in the presence of spin-vibron coupling [see Fig.~\ref{fig7}(b)], extended regions of NDC are visible. However, what is more striking is that at low bias voltage, \mbox{$eV_\text{b}\lesssim2\Delta\mathcal{E}$}, transport is \textit{fully suppressed}. The reason for this is that for the present, purposefully chosen set of parameters, one finds from Eqs.~(\ref{eq:Dnt_def})-(\ref{eq:Dnf_def}) that while the uniaxial magnetic anisotropy constant for the neutral state remains approximately the same, in the charged state the new effective anisotropy constant \mbox{$D_\text{eff}\approx D+2\lambda\Lambda^\text{u}\hbar\omega$} is positive. As a result, an energy barrier for spin reversal in the charged state forms, as illustrated in Fig.~\ref{fig3}(b).
Most noticeably, the reversal of the magnetic spectrum entails that only transitions between ground and excited spin states (of the neutral and the charged molecule, respectively) are allowed by spin selection rules.
\section{Summary and conclusions}
\label{sec:conclusions}
The main purpose of this paper was to investigate the effect of the coupling of molecular vibrations to the charge and spin of a molecule on magnetic properties of such a molecule.
By deriving the effective giant-spin Hamiltonian, Eq.~(\ref{eq:H_spin_2}), we have found that these vibronic couplings result in modifications of the magnetic anisotropy parameters of the molecule, along both the uniaxial [see Eqs.~(\ref{eq:Dnt_def})-(\ref{eq:Dnf_def})] and transverse [see Eqs.~(\ref{eq:Ent_def})-(\ref{eq:Cnf_def})] directions, by inducing additional magnetic anisotropy components.
Depending on the intrinsic magnetic anisotropy of the molecule, its vibrational energy and the coupling strength to its spin, this interaction can lead to diverse effects ranging from enhancing to quenching or even inverting different components of the magnetic anisotropy.
In order to illustrate how the effect of spin-vibron coupling manifests in transport spectroscopy, we have considered a device consisting of a single magnetic molecule inserted in a capacitively gated three-terminal junction. We have perturbatively calculated stationary transport in first order of the tunnel-coupling using a real-time diagrammatic technique.
In our calculations, we have paid particular attention to justify the conditions under which coherent superpositions between molecular states (represented by the off-diagonal components of the reduced density matrix of a molecule) play no role for transport.
Our results show that the modulations of the magnetic anisotropy can lead to distinct effects in the differential conductance. In particular, in certain molecular regimes even a blockade of transport can occur.
We expect that the effects under discussion, stemming from the spin-vibron coupling, should be observable especially in molecules based on individual metallic/magnetic ions, such as, Co-based complexes~\cite{Parks2010} or metal complexes derived from phthalocyanine (based on single ions of Cu, Mn, Fe, Ni)~\cite{Mugarza_Nat.Commun.2/2011,Urdampilleta_Nat.Mater.10/2011,Rakhmilevitch2014}. In such molecules, their magnetic core is particularly sensitive to changes of the crystal field of surrounding ligands associated with molecular vibrations. For instance, such a mechanism has been proposed~\cite{Ruiz2012} to explain the experiment by Parks~\emph{et al.}\xspace~\cite{Parks2010}.
In general, junctions containing a single magnetic molecule owe their interest to envisioned applications of such systems as information storing and processing devices. In this context, the analysis conducted in this paper provides an insight on how to harness molecular vibrations to control the magnetic anisotropy. We show that it constitutes a possible mechanism to enhance a magnetic bistability of such molecules, which is a necessary requirement for a binary memory element. For instance, by mechanically stretching the junction or by deforming the molecule using other means, the energy of the vibrational modes, as well as, the coupling strength to the molecular spin can be tuned to increase the energy barrier for spin reversal while reducing the effect of magnetization tunneling under the barrier. Consequently, our results indicate a way to improve the robustness of spintronics devices based on single magnetic molecules.
\acknowledgments
We thank Ma\l{}gorzata Ademczewska-Wawrzyniak for fruitful discussions.
Financial support from the Knut and Alice Wallenberg Foundation (J.S. and M.M.) and the Swedish VR (J.S.) is acknowledged.
M. M. also acknowledges financial support from the Polish Ministry of Science and Higher Education through an Iuventus Plus project (IP2014 030973) in the years 2015--2017 and a young scientist fellowship (0066/E-336/9/2014).
\section{Introduction}
Vectors are the generic input-data format of algorithms, and in general each component of a vector is stored sequentially in classical memory. Thus, two questions arise for a general quantum algorithm: first, which state is suitable to represent all the information of a vector for further quantum computation; and second, how to load all the information of a vector into quantum registers (or a quantum state) without losing information. Loading a data set such as a vector into the classical registers of a CPU from classical memory is called a \textbf{classical loading scheme (CLS)}. Similarly to a CLS, designing a unitary operation to load all the information of a vector into the quantum registers of a quantum CPU from classical memory is called a \textbf{quantum loading scheme (QLS)}. A CLS or QLS assembles the classical memory and the CPU into a whole computer. A QLS makes the quantum CPU compatible with classical memory.
An $N$-dimensional vector is denoted as $\vec{a}=\{a_{0},a_{1},...,a_{N-1}\}$, where the components $a_{0},a_{1},...,a_{N-1}$ are real numbers. It has been shown that the entangled state $\frac{1}{\sqrt{N}}({\sum\limits_{i=0}^{N-1}{{\left\vert {i}\right\rangle }}_{{register1}}{{\left\vert a_{{i}}\right\rangle }}_{{register2}}})$ is suitable for representing a vector without losing any of its information \cite{Pang PostDocReport,Pang QVQ2,Pang QDCT,Pang QVQ1}. Let the initial state ${\left\vert {\phi _{0}}\right\rangle }$ be ${\left\vert {\phi _{0}}\right\rangle }={\left\vert {0}\right\rangle _{{{q_{1}q_{2}...q_{n}}}}\left\vert {0}\right\rangle }_{{{p_{1}p_{2}...p_{m}}}}{\left\vert {ancilla_{1}}\right\rangle }$, where the $m+n$ qubits ${{q_{1},...,q_{n},p_{1},...,p_{m}}}$ are collected as a whole object (dividing them is prohibited), and where the ancillary state ${\left\vert {ancilla_{1}}\right\rangle }$ is \emph{known}. A QLS can be described as the design of a unitary operation $U_{(0,1,...,N-1)}$ such that
\begin{equation}
{\left\vert {\phi }\right\rangle =U_{(0,1,...,N-1)}{\left\vert {\phi _{0}}\right\rangle }=}\frac{1}{\sqrt{N}}({\sum\limits_{i=0}^{N-1}{{\left\vert {i}\right\rangle }}_{{{q_{1}q_{2}...q_{n}}}}{{\left\vert a_{{i}}\right\rangle }}_{{{p_{1}p_{2}...p_{m}}}}}){\left\vert {ancilla_{2}}\right\rangle }\mathrm{,} \label{eqTarget}
\end{equation}%
where $N=2^{n}$ and the ancillary state ${\left\vert {ancilla_{2}}\right\rangle }$ is \emph{known} (all ancillary states are \emph{known} in this paper).
Nielsen and Chuang pointed out that a quantum computer should, in principle, have a loading scheme to load classical database records into quantum registers from a classical database \cite[section 6.5]{Nielsen, QCZhao}. However, there has been no detailed work on QLS up to now. In fact, the research on QLS is motivated by the quantum algorithm of image compression \cite{Pang PostDocReport,Pang QVQ2,Pang QDCT,Pang QVQ1,Lattorre}. In this paper, we present a QLS based on path interference, which has been widely used in quantum information processing, e.g., in non-unitary computation \cite{Kwiat,LongGuiLu}. Unitary computation using path interference is demonstrated in this paper, and the output of the unitary computation can be measured with a success probability of 100\% in theory. The time complexity of our QLS is $O(\log_{2}N)$, which exhibits a speed-up over a CLS with time complexity $O(N)$.
\section{The Design of QLS}
\subsection{Loading 2D Vector into Quantum Registers from Classical Memory}
The design of the unitary operation $U_{(0,1)}$ that loads the 2D vector $\vec{a}=\{a_{0},a_{1}\}$ is described conceptually as follows (see Fig.\ref{figU01}):
\textbf{Step 1} The switch $S_{1}$ applies a rotation to the initial ancilla state, transforming $\left\vert {Off_{0}}\right\rangle $ into%
\begin{equation*}
{\left\vert {Off_{0}}\right\rangle \overset{S_{1}}{\rightarrow }\frac{{{%
\left\vert {Off_{1}}\right\rangle }+{\left\vert {On_{1}}\right\rangle }}}{%
\sqrt{2}}}
\end{equation*}%
and generates the following state $\left\vert {\phi _{1}}\right\rangle $
\begin{equation}
{\left\vert {\phi _{1}}\right\rangle }{=}\frac{1}{\sqrt{2}}{\left\vert {0}%
\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert {0}\right\rangle }_{{_{%
{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {On_{1}}\right\rangle +}\frac{1}{%
\sqrt{2}}{\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{%
\left\vert {0}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {%
Off_{1}}\right\rangle } \label{eqFai1}
\end{equation}
\textbf{Step 2 }Perform the unitary operations $I_{0}$ and $A_{0}$ along the `$On_{1}$' path, and the unitary operations $I_{1}$ and $A_{1}$ along the `$Off_{1}$' path.
\begin{equation*}
\begin{tabular}{lll}
$\left\{
\begin{tabular}{c}
${\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}\overset{I_{0}}{%
\rightarrow }{\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}$ \\
${{\left\vert {0}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}}\overset{%
A_{0}}{{\rightarrow }}{{\left\vert a_{{0}}\right\rangle }_{{_{{{{%
p_{1}p_{2}...p_{m}}}}}}}}$%
\end{tabular}%
\right. $ & , & $\left\{
\begin{tabular}{c}
${\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}\overset{I_{1}}{%
\rightarrow }{\left\vert {1}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}$ \\
${{\left\vert {0}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}}\overset{%
A_{1}}{{\rightarrow }}{{\left\vert a_{{1}}\right\rangle }_{{_{{{{%
p_{1}p_{2}...p_{m}}}}}}}}$%
\end{tabular}%
\right. $%
\end{tabular}%
\end{equation*}
We assume that the outputs of the two paths are simultaneous; then the state ${\left\vert {\phi _{2}}\right\rangle }$ is generated as
\begin{equation*}
\left\{
\begin{tabular}{c}
$\frac{1}{\sqrt{2}}{\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{%
\left\vert {0}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {%
On_{1}}\right\rangle }\overset{A_{0}I_{0}}{{\rightarrow }}\frac{1}{\sqrt{2}}{%
\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{0}%
}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {On_{1}}%
\right\rangle }$ \\
$\frac{1}{\sqrt{2}}{\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{%
\left\vert {0}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {%
Off_{1}}\right\rangle }\overset{A_{1}I_{1}}{{\rightarrow }}\frac{1}{\sqrt{2}}%
{\left\vert {1}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{1}%
}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {Off_{1}}%
\right\rangle }$%
\end{tabular}%
\right.
\end{equation*}
\begin{equation}
\Rightarrow {\left\vert {\phi _{2}}\right\rangle =}\frac{1}{\sqrt{2}}{%
\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{0}%
}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}{\left\vert {On_{1}}%
\right\rangle +}\frac{1}{\sqrt{2}}{\left\vert {1}\right\rangle }_{{{{%
q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{1}}\right\rangle }_{{_{{{{%
p_{1}p_{2}...p_{m}}}}}}}{\left\vert {Off_{1}}\right\rangle } \label{eqFai2}
\end{equation}
The functions of $I_{0}$ and $I_{1}$ are to generate the subscripts $0$ and $1$ respectively, and the functions of $A_{0}$ and $A_{1}$ are to generate the numbers $a_{0}$ and $a_{1}$ respectively. Because the values $a_{0}$ and $a_{1}$ are both known numbers, flipping part of the $m+n$ qubits ${{q_{1},...,q_{n},p_{1},...,p_{m}}}$ generates the states ${{\left\vert a_{{0}}\right\rangle }_{{{p_{1}p_{2}...p_{m}}}}}$ or ${{\left\vert a_{{1}}\right\rangle }_{{{p_{1}p_{2}...p_{m}}}}}$. Thus, the unitary operations $I_{0}$, $I_{1}$, $A_{0}$ and $A_{1}$ are easy to design.
\textbf{Step 3 }The switch $S_{2}$ applies a rotation to the ancilla state as
\begin{equation*}
\left\{
\begin{tabular}{c}
${\left\vert {Off_{1}}\right\rangle }\overset{S_{2}}{\rightarrow }{\frac{{{%
\left\vert {Off_{2}}\right\rangle }-{\left\vert {On_{2}}\right\rangle }}}{%
\sqrt{2}}}$ \\
${\left\vert {On_{1}}\right\rangle }\overset{S_{2}}{\rightarrow }{\frac{{{%
\left\vert {Off_{2}}\right\rangle }}+{{\left\vert {On_{2}}\right\rangle }}}{%
\sqrt{2}}}$%
\end{tabular}%
\right.
\end{equation*}%
and generates the following state ${\left\vert {\phi _{3}}\right\rangle }$
\begin{eqnarray}
{\left\vert {\phi _{3}}\right\rangle } &=&\frac{1}{2}({\left\vert {0}%
\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{0}}\right\rangle }%
_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}+{\left\vert {1}\right\rangle }_{{{{%
q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{1}}\right\rangle }_{{_{{{{%
p_{1}p_{2}...p_{m}}}}}}}){\left\vert {Off_{2}}\right\rangle } \notag \\
&&{+}\frac{1}{2}({\left\vert {0}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{%
\left\vert a_{{0}}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}-{%
\left\vert {1}\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{1}%
}\right\rangle }_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}){\left\vert {On_{2}}%
\right\rangle } \label{eqFai3}
\end{eqnarray}
\textbf{Step 4 }Apply the phase transformation $B$ along the `$On_{2}$' path.
\begin{equation}
B={\left\vert {0}\right\rangle \left\vert a_{{0}}\right\rangle }\langle a_{{0%
}}|\langle 0|-{\left\vert {1}\right\rangle \left\vert a_{{1}}\right\rangle }%
\langle a_{{1}}|\langle 1| \label{eqB}
\end{equation}
This is a very fast operation and generates the state ${\left\vert {\phi _{4}}\right\rangle }$
\begin{equation}
{\left\vert {\phi _{4}}\right\rangle }=\frac{1}{\sqrt{2}}({\left\vert {0}%
\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{0}}\right\rangle }%
_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}+{\left\vert {1}\right\rangle }_{{{{%
q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{1}}\right\rangle }_{{_{{{{%
p_{1}p_{2}...p_{m}}}}}}})(\frac{{{\left\vert {Off_{2}}\right\rangle }}+{{%
\left\vert {On_{2}}\right\rangle }}}{\sqrt{2}}) \label{eqFai4}
\end{equation}
\textbf{Step 5 }The switch $S_{3}$ applies a rotation to the ancilla state as
\begin{equation*}
\left\{
\begin{tabular}{c}
${\left\vert {Off_{2}}\right\rangle }\overset{S_{3}}{\rightarrow }{\frac{{{%
\left\vert {Off_{3}}\right\rangle }+{\left\vert {On_{3}}\right\rangle }}}{%
\sqrt{2}}}$ \\
${\left\vert {On_{2}}\right\rangle }\overset{S_{3}}{\rightarrow }{\frac{{{%
\left\vert {Off_{3}}\right\rangle }}-{{\left\vert {On_{3}}\right\rangle }}}{%
\sqrt{2}}}$%
\end{tabular}%
\right.
\end{equation*}%
and generates the final state ${\left\vert {\phi }\right\rangle }$
\begin{equation}
{\left\vert {\phi }\right\rangle }=\frac{1}{\sqrt{2}}({\left\vert {0}%
\right\rangle }_{{{{q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{0}}\right\rangle }%
_{{_{{{{p_{1}p_{2}...p_{m}}}}}}}+{\left\vert {1}\right\rangle }_{{{{%
q_{1}q_{2}...q_{n}}}}}{\left\vert a_{{1}}\right\rangle }_{{_{{{{%
p_{1}p_{2}...p_{m}}}}}}}){{\left\vert {Off_{3}}\right\rangle }}
\label{eqFai}
\end{equation}
Fig.\ref{figU01} and Eq.(\ref{eqU01}) illustrate the processing of operation
$U_{(0,1)}$.
\begin{equation}
\begin{tabular}{c}
$|0\rangle |0\rangle |Off_{0}\rangle \overset{S_{1}}{\rightarrow }%
\left\langle
\begin{tabular}{c}
$\frac{1}{\sqrt{2}}|0\rangle |0\rangle |On_{1}\rangle \overset{A_{0}I_{0}}{%
\rightarrow }\frac{1}{\sqrt{2}}|0\rangle |a_{0}\rangle |On_{1}\rangle $ \\
$\frac{1}{\sqrt{2}}|0\rangle |0\rangle |Off_{1}\rangle \overset{A_{1}I_{1}}{%
\rightarrow }\frac{1}{\sqrt{2}}|1\rangle |a_{1}\rangle |Off_{1}\rangle $%
\end{tabular}%
\right\rangle \overset{S_{2}}{\rightarrow }$ \\
$\left\langle
\begin{tabular}{c}
$\frac{1}{2}(|0\rangle |a_{0}\rangle +|1\rangle |a_{1}\rangle
)|Off_{2}\rangle $ \\
$\frac{1}{2}(|0\rangle |a_{0}\rangle -|1\rangle |a_{1}\rangle )|On2\rangle
\overset{B}{\rightarrow }\frac{1}{2}(|0\rangle |a_{0}\rangle +|1\rangle
|a_{1}\rangle )|On_{2}\rangle $%
\end{tabular}%
\right\rangle \overset{S_{3}}{\rightarrow }{\left\vert {\phi }\right\rangle }
$%
\end{tabular}
\label{eqU01}
\end{equation}%
Here, as well as in the following discussions, all register subscripts are omitted.
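The five steps above can be verified numerically; the following minimal sketch (our own illustration) encodes the index register, a two-qubit value register and the path qubit ($Off=0$, $On=1$) as a $2\times4\times2$ amplitude array:
\begin{verbatim}
import numpy as np

a0, a1 = 2, 3                 # known components, stored as basis states
psi = np.zeros((2, 4, 2))     # axes: index q, value p, path c
psi[0, 0, 0] = 1.0            # initial state |0>|0>|Off_0>

S13 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # switches S1 and S3
S2  = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)   # switch S2

psi = np.einsum('qpc,xc->qpx', psi, S13)   # Step 1: switch S1
out = np.zeros_like(psi)                   # Step 2: path-conditional ops
out[0, a0, 1] = psi[0, 0, 1]               #   On path:  I0 and A0
out[1, a1, 0] = psi[0, 0, 0]               #   Off path: I1 and A1
psi = out
psi = np.einsum('qpc,xc->qpx', psi, S2)    # Step 3: switch S2
psi[1, a1, 1] *= -1                        # Step 4: phase B on the On path
psi = np.einsum('qpc,xc->qpx', psi, S13)   # Step 5: switch S3

target = np.zeros_like(psi)
target[0, a0, 0] = target[1, a1, 0] = 1 / np.sqrt(2)
assert np.allclose(psi, target)   # (|0>|a0> + |1>|a1>)|Off_3>/sqrt(2)
\end{verbatim}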
\begin{figure}[h]
\epsfig{file=fig1_U01.eps,width=9cm,}
\caption{\textbf{The Illustration of the processing of unitary operation }$%
U_{(0,1)}$\textbf{\ that transforms state from }$\left\vert {0}\right\rangle
\left\vert {0}\right\rangle \left\vert {Off}_{{0}}\right\rangle $\textbf{\
into }$\frac{1}{\protect\sqrt{2}}(|0\rangle |a_{0}\rangle +|1\rangle
|a_{1}\rangle )\left\vert {Off}_{{3}}\right\rangle $. M: Mirror}
\label{figU01}
\end{figure}
\subsection{Loading a 4D or Multi-Dimensional Vector into Quantum Registers}
The design of the unitary operation $U_{(0,1,2,3)}$ is described conceptually as follows (see Fig.\ref{figU0123}):
\begin{description}
\item[Step 1] Construct the unitary operations $S_{4}$, $S_{5}$, $S_{6}$ and $B^{\prime}$ as follows:
\end{description}
$\left\{
\begin{tabular}{c}
${\left\vert {Off}_{i}\right\rangle }\overset{S_{i}}{\rightarrow }{\frac{{{\left\vert {Off_{i+1}}\right\rangle }+{\left\vert {On_{i+1}}\right\rangle }}}{\sqrt{2}}}$ \\
${\left\vert {On}_{i}\right\rangle }\overset{S_{i}}{\rightarrow }{\frac{{{\left\vert {Off_{i+1}}\right\rangle }}-{{\left\vert {On_{i+1}}\right\rangle }}}{\sqrt{2}}}$%
\end{tabular}%
\right. $, $\left\{
\begin{tabular}{c}
${\left\vert {Off}_{5}\right\rangle }\overset{S_{5}}{\rightarrow }{\frac{{{\left\vert {Off_{6}}\right\rangle }-{\left\vert {On_{6}}\right\rangle }}}{\sqrt{2}}}$ \\
${\left\vert {On}_{5}\right\rangle }\overset{S_{5}}{\rightarrow }{\frac{{{\left\vert {Off_{6}}\right\rangle }}+{{\left\vert {On_{6}}\right\rangle }}}{\sqrt{2}}}$%
\end{tabular}%
\right. $, $\ B^{\prime }=|\alpha \rangle \langle \alpha |-|\beta \rangle \langle \beta |$,

where $i=4,6$, $|\alpha \rangle =\frac{1}{\sqrt{2}}(|0\rangle |a_{0}\rangle +|1\rangle |a_{1}\rangle )$ and $|\beta \rangle =\frac{1}{\sqrt{2}}(|2\rangle |a_{2}\rangle +|3\rangle |a_{3}\rangle )$.
\begin{description}
\item[Step 2] Assemble the unitary operations $S_{4}$, $S_{5}$, $S_{6}$, $B^{\prime}$, $U_{(0,1)}$ and $U_{(2,3)}$ according to Fig.\ref{figU0123} to form the unitary operation $U_{(0,1,2,3)}$.
\end{description}
Eq.(\ref{eqU0123}) illustrates the action of the operation $U_{(0,1,2,3)}$.
\begin{equation}
\begin{tabular}{c}
$|0\rangle |0\rangle |Off_{4}\rangle \overset{S_{4}}{\rightarrow }%
\left\langle
\begin{tabular}{c}
$\frac{1}{\sqrt{2}}|0\rangle |0\rangle |On_{5}\rangle \overset{U_{(0,1)}}{%
\rightarrow }\frac{1}{2}(|0\rangle |a_{0}\rangle +|1\rangle |a_{1}\rangle
)|On_{5}\rangle $ \\
$\frac{1}{\sqrt{2}}|0\rangle |0\rangle |Off_{5}\rangle \overset{U_{(2,3)}}{%
\rightarrow }\frac{1}{2}(|2\rangle |a_{2}\rangle +|3\rangle |a_{3}\rangle
)|Off_{5}\rangle $%
\end{tabular}%
\right\rangle \overset{S_{5}}{\rightarrow }$ \\
$\left\langle
\begin{tabular}{c}
$\frac{1}{2}[\frac{1}{\sqrt{2}}(|0\rangle |a_{0}\rangle +|1\rangle
|a_{1}\rangle )+\frac{1}{\sqrt{2}}(|2\rangle |a_{2}\rangle +|3\rangle
|a_{3}\rangle )]|Off_{6}\rangle $ \\
\\
$\frac{1}{2}[\frac{1}{\sqrt{2}}(|0\rangle |a_{0}\rangle +|1\rangle
|a_{1}\rangle )-\frac{1}{\sqrt{2}}(|2\rangle |a_{2}\rangle +|3\rangle
|a_{3}\rangle )]|On_{6}\rangle \overset{B^{\prime }}{\rightarrow }$ \\
$\frac{1}{2}[\frac{1}{\sqrt{2}}(|0\rangle |a_{0}\rangle +|1\rangle
|a_{1}\rangle )+\frac{1}{\sqrt{2}}(|2\rangle |a_{2}\rangle +|3\rangle
|a_{3}\rangle )]|On_{6}\rangle $%
\end{tabular}%
\right\rangle \overset{S_{6}}{\rightarrow }{\left\vert {\phi }\right\rangle }
$%
\end{tabular}
\label{eqU0123}
\end{equation}
\begin{figure}[h]
\epsfig{file=fig2_U0123.eps,width=8cm,}
\caption{\textbf{The illustration of unitary operation }$U_{(0,1,2,3)}$%
\textbf{\ that transforms state from }$\left\vert {0}\right\rangle
\left\vert {0}\right\rangle \left\vert {Off}_{{4}}\right\rangle $\textbf{\
into }$(\protect\underset{i=0}{\protect\overset{3}{\sum }}\frac{1}{2}%
\left\vert {i}\right\rangle \left\vert a_{{i}}\right\rangle )\left\vert {Off}%
_{{7}}\right\rangle $.}
\label{figU0123}
\end{figure}
If the unitary operations $U_{(0,1)}$ and $U_{(2,3)}$ embedded in Fig.\ref{figU0123} are replaced by $U_{(0,1,2,3)}$ and $U_{(4,5,6,7)}$ respectively, then $U_{(0,1,...,7)}$ is constructed. Similarly to Fig.\ref{figU0123}, we can apply the same method to construct the unitary operation $U_{(0,1,...,2^{n}-1)}$. If $N\neq 2^{n}$, we can add extra zero components to create a $2^{n}$-dimensional vector.
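For reference, the target entangled state of Eq.(\ref{eqTarget}) for a general $2^{n}$-dimensional vector can be written down directly, which is useful for checking any implementation of $U_{(0,1,...,2^{n}-1)}$ (a sketch; the ancilla is omitted):
\begin{verbatim}
import numpy as np

def target_state(a, m):
    """(1/sqrt(N)) * sum_i |i>|a_i> over an n-qubit index register
    and an m-qubit value register, with 0 <= a_i < 2**m."""
    N, M = len(a), 2**m
    psi = np.zeros(N * M)
    for i, ai in enumerate(a):
        psi[i * M + ai] = 1.0 / np.sqrt(N)
    return psi

print(target_state([2, 3], 2))   # the N = 2 example of Eq. (eqFai)
\end{verbatim}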
\subsection{Loading Vector into State $\frac{1}{\protect\sqrt{N}}({%
\sum\limits_{i=0}^{N-1}{{{\left\vert {i}\right\rangle \left\vert
0\right\rangle )}}}}$ to Form Entangled State $\frac{1}{\protect\sqrt{N}}({%
\sum\limits_{i=0}^{N-1}{{{\left\vert {i}\right\rangle \left\vert a_{{i}%
}\right\rangle )}}}}$}
Grover's algorithm \cite{Nielsen} can find the index $i_{0}$ of a special database record $record_{i_{0}}$ from the index superposition state $\frac{1}{\sqrt{N}}({\sum\limits_{i=0}^{N-1}{{\left\vert {i}\right\rangle }}})$ in $O(\sqrt{N})$ steps, and the record $record_{i_{0}}$ is the genuine answer that we want. However, the corresponding record $record_{i_{0}}$ cannot be measured out unless the one-to-one mapping between the index $i$ and the corresponding record $record_{i}$ is bound in the entangled state $\frac{1}{\sqrt{N}}({\sum\limits_{i=0}^{N-1}{{\left\vert {i}\right\rangle }{{\left\vert record_{i}\right\rangle }}}})$. That is, we need a unitary operation $U_{L}$ such that
\begin{equation}
\frac{1}{\sqrt{N}}({\sum\limits_{i=0}^{N-1}{{{\left\vert {i}\right\rangle
\left\vert 0\right\rangle )\left\vert {ancilla_{4}}\right\rangle }}}}\overset%
{U_{L}}{{\rightarrow }}\frac{1}{\sqrt{N}}({\sum\limits_{i=0}^{N-1}{{{%
\left\vert {i}\right\rangle \left\vert a_{{i}}\right\rangle )\left\vert {%
ancilla_{3}}\right\rangle }}}} \label{eqUL}
\end{equation}
Refs.~\cite{Pang QVQ1, Pang QDCT} generalize Grover's algorithm to the general search case with complex computation, and $U_{L}$ is required in this general search case.
$U_{L}$ can be designed using the same method as shown in Fig.\ref{figU01} and Fig.\ref{figU0123}. Fig.\ref{figUL} shows the design of the inverse unitary operation $(U_{L})^{\dagger }$ for the case $N=2$. $U_{L}$ has time complexity $O(\log_{2}N)$ (unit time: phase transformation and flipping the qubits of registers).
\begin{figure}[h]
\caption{\textbf{The Illustration of Unitary Operation }$(U_{L})^{\dagger }$%
: $\frac{1}{\protect\sqrt{2}}({\sum\limits_{i=0}^{1}{{{\left\vert {i}%
\right\rangle \left\vert a_{{i}}\right\rangle )\left\vert {Off_{3}}%
\right\rangle }}}}\rightarrow \frac{1}{\protect\sqrt{2}}({%
\sum\limits_{i=0}^{1}{{{\left\vert {i}\right\rangle \left\vert
0\right\rangle )\left\vert {Off}\right\rangle }}}}$. Operation $U_{L}$ can
be designed using the same method shown in Fig.\protect\ref{figU01} and Fig.%
\protect\ref{figU0123}. $S_{0}$: ${\left\vert {Off_{0}}\right\rangle }%
\rightarrow \frac{1}{\protect\sqrt{2}}({{\left\vert {Off}\right\rangle }+{%
\left\vert {On}\right\rangle }})${, }${\left\vert {On_{0}}\right\rangle }%
\rightarrow \frac{1}{\protect\sqrt{2}}({{\left\vert {Off}\right\rangle }}-{{%
\left\vert {On}\right\rangle }})$. Phase transformation $D={\left\vert i_{{1}%
}\right\rangle \left\vert 0\right\rangle }\langle 0|\langle i_{{1}}|-{%
\left\vert i_{{0}}\right\rangle \left\vert 0\right\rangle }\langle 0|\langle
i_{{0}}|$, where $i_{{0}}=0$, $i_{{1}}=1$.}
\label{figUL}\epsfig{file=fig3_UL.eps,width=8cm,}
\end{figure}
It has been demonstrated that giant molecules, such as the fullerene C$_{60}$, exhibit quantum interference \cite{Zeilinger}. Thus, many degrees of freedom of a giant molecule can be regarded as qubits to realize the QLS presented in this paper. In addition, one application of a QLS is to load the data of a very large image into quantum registers all at once for further image compression \cite{Pang PostDocReport,Pang QVQ2,Pang QDCT,Pang QVQ1}, whereas a classical computer can load only one datum into its registers at a time.
\section{Conclusion}
Designing a simple and fast unitary operation to load a classical data set, such as a vector, into quantum registers from classical memory is called a quantum loading scheme (QLS). A QLS makes the quantum CPU compatible with classical memory, and it assembles the classical memory and the quantum CPU into a whole. A QLS is the basis of further quantum computation. A QLS with time complexity $O(\log_{2}N)$ (unit time: phase transformation and flipping the qubits of registers) is presented in this paper, while a classical loading scheme (CLS) has time complexity $O(N)$ (unit: addition) because all computation instructions have to be executed one by one. Path interference is applied to design the QLS in this paper, so that the complexity of designing the quantum algorithm is decomposed into the design of many simple unitary operations. In addition, this paper demonstrates that using path interference to design unitary operations and parallel quantum computation is possible.
\begin{acknowledgments}
The author thanks Dr. Z.-W. Zhou, of the Key Laboratory of Quantum Information, University of Science and Technology of China, for pointing out two errors in the author's original idea: first, that the result was generated with probability 50\% for the 2D vector; second, the defect of Fig.\ref{figU0123} that the output was a direct product state. Dr. Z.-W. Zhou has done his best to help the author for nearly three years. The author thanks his teacher, Prof. G.-C. Guo; the author was brought up in Guo's laboratory. The author thanks Prof. V. N. Gorbachev of the St.-Petersburg State University of Aerospace Instrumentation for useful discussions. The author thanks Mr. N. Kiesel of the Max-Planck-Institut f\"{u}r Quantenoptik, Germany, for checking part of the derivation in Section 2 of this paper. The author thanks Prof. G.-L. Long of Tsinghua University, China, for useful discussions and for heuristic help obtained from his eprint quant-ph/0512120. The author thanks Associate Prof. Shiuan-Huei Lin of National Chiao Tung University, Taiwan, China, and Prof. Hideaki Matsueda of Kochi University, Japan, for their encouragement. The author thanks Prof. J. Zhang and Prof. B.-P. Hou of Sichuan Normal University, China, for their help. The author thanks Prof. Z.-F. Han, Dr. Y.-S. Zhang, Dr. Y.-F. Huang, Dr. Y.-J. Han, Mr. J.-M. Cai, Mr. M.-Y. Ye, and Mr. M. Gong for their help and suggestions. One of the reviewers made many significant suggestions to improve the readability of this paper, for which the author is grateful.
\end{acknowledgments}
\section{Introduction}
\label{sect:introduction}
It is widely accepted that most -- if not all -- nearby galaxies with a bulge component host a supermassive black hole (SMBH) in their centre \citep[e.g.][]{ferrarese+merrit2000,kormendy+richstone1995,ferrarese+ford2005}. A considerable amount of observational evidence supports a connection between SMBH and host galaxy growth, the main one being a strong correlation between the mass of the SMBH and the properties of the bulge of the host galaxy \citep{gebhardt2000}; nevertheless, the details of the coevolution of the SMBH and the host galaxy remain the subject of an ongoing debate.
Despite the ubiquity of SMBHs at the centre of galaxies, only a small fraction of these are active galactic nuclei (AGN) in the local universe \citep[e.g.][]{kewley2006,ho+filippenko1997}. The questions of what triggers activity in a galactic centre and if this ignition mechanism is related to the host galaxy properties, arise naturally. The lack of activity in the nucleus can be related to a lack of accretion material or the absence of a fuelling mechanism. The amount of gas needed to fuel an AGN over a normal duty cycle is a large fraction of the total gas contained in the galaxy \citep{combes2001}, thus a fuelling mechanism must be able to remove most of the angular momentum of a large amount of gas so this can be transferred from the kpc-scale into the sub-pc central region to feed the AGN.
Gravitational mechanisms such as galaxy interactions can drive gas inwards. Major mergers are thought to be responsible for the high accretion rate observed in luminous quasars, while minor mergers can produce Seyfert-level luminosities. Alternatively, secular mechanisms can also remove angular momentum and drive gas towards the nucleus. Observations point to secular processes being the most common triggering mechanisms for medium- to low-luminosity AGNs \citep[e.g.][]{hopkins2014,treister2012,fan2016,goulding2017b}.
Non-axisymmetric potentials, such as spiral structures and bars can produce radial inflows to the central region, which can be observed as line-of-sight velocity distortions \citep{lin+shu1964,lindblad1964}. Gas transport via bars is efficient \citep[e.g.][]{mundel+shone1999} from large scale down to the inner kpc, where gas can get stalled in rings at the inner Lindblad resonance region \citep{combes+gerin1985}. From this scale to the centre other mechanisms can be invoked to transport gas to the AGN, such as inner spiral structures and bars within bars \citep{shlosman1989}. \\
The incidence of bars is similar in both active and non-active galaxies, and thus a strong correlation between bar presence and activity is yet to be found \citep[e.g.][]{knapen2000,cisternas2013,galloway2015,cheung2015,goulding2017a}.
However a difference between active and inactive galaxies has been observed by \citet{simoeslopes2007} and \citet{martini+pogge1999}. Using a sample of active and control galaxy pairs, they observed nuclear dust spirals and structures in 100\% of the active early-type galaxies, but in only 25\% of the control sample. This dust excess has been confirmed by \citet{martini+dicken2013}, and is thought to trace the feeding channels to the AGN \citep{ferrarese+ford2005,kormendy+ho2013,storchibergmann2007}.
Similarly, in a study of ionised gas dynamics in a matched sample of active and inactive galaxies, \citet{dumas2007}, identified increased kinematics disturbance as a function of accretion rate in the inner 1 kpc of AGN, where activity and dynamical timescales become comparable. \\
The non-axisymmetric gravitational potential created by a bar can produce important kinematic effects on the gas which have been studied by hydrodynamical \citep[e.g][]{lindblad+lindblad1996, kim2012, athanassoula1992} and/or N-body simulations \citep[e.g.][]{sellwood1981,emsellem2001a}.
A different method to gain insight into these kinematic effects is to quantify the line-of-sight deviations from pure rotation, which can be achieved from linear perturbation theory \citep{lin+shu1964}. The non-axisymmetric distortions to the planar flow can be decomposed into their harmonic components. \citet{franx1994} and \citet{schoenmakers1997} pioneered an approach to interpret these harmonic coefficients based on epicycle theory, which is thus only valid for small departures from the circular orbital speed. Since then, this harmonic decomposition analysis has been carried out by several authors, e.g., \citet{wong2004,emsellem2001b}. \\
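To make this concrete: along a ring of fixed radius in the galaxy plane, the line-of-sight velocity can be expanded as $v_{\rm los}(\psi)=c_{0}+\sum_{m}[c_{m}\cos m\psi+s_{m}\sin m\psi]$, and the coefficients follow from a linear least-squares fit; a minimal sketch (illustrative only):
\begin{verbatim}
import numpy as np

def harmonic_coeffs(psi, v_los, m_max=3):
    """Fit c0 + sum_m [c_m cos(m psi) + s_m sin(m psi)] to velocities
    sampled at azimuthal angles psi along one ring of the field."""
    cols = [np.ones_like(psi)]
    for m in range(1, m_max + 1):
        cols += [np.cos(m * psi), np.sin(m * psi)]
    A = np.column_stack(cols)
    coeffs = np.linalg.lstsq(A, v_los, rcond=None)[0]
    return coeffs   # [c0, c1, s1, c2, s2, ...]
\end{verbatim}
For a purely circular flow only $c_{0}$ and $c_{1}$ are non-zero, so power in the higher-order terms flags non-axisymmetric (e.g. bar-driven) streaming motions. \\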
One aspect of the SMBH-host galaxy connection is the interaction between the gas of the host galaxy and the energy generated by the AGN, which produces a feedback process that has been theorised as an important component in galaxy evolution as it can help to regulate the growth of the galaxy, preventing it from becoming too massive \citep{dimatteo2005, wagner+bicknell2011,fabian2012}. \\
This interaction can occur, broadly, in two main modes: radiative (or quasar) mode and kinetic (or radio) mode \citep{croton2006,fabian2012}. The former is driven by a wind caused by the accretion of material onto the SMBH, producing wide-angle sub-relativistic outflows. The latter is driven by relativistic radio jets.
Both these winds and jets can have important consequences for galaxy evolution, as they can heat and ionise cold gas when colliding with it, preventing the gas from collapsing under self-gravity and thus halting the accretion onto the SMBH and quenching star formation \citep[e.g.][]{hardcastle2013,best2005}. The jets can also directly expel gas from the galaxy, removing the raw material for further star formation \citep{nesvadba2006}. However, some simulations \citep[e.g.][]{antonuccio-delogu2010,silk+nusser2010} reveal that jet activity is able to trigger star formation by producing high-density, low-temperature cavities, which are embedded in a cocoon around the jet \citep[e.g.][]{best1998,jarvis2001}; a scenario known as positive radio-mode feedback.
The alignment of the outflowing gas with the jet suggests that the outflows are driven by the transfer of energy and momentum from the radio jet to the ISM, as shown by hydrodynamical simulations \citep{wagner2012}. Jet-driven outflows have been observed in neutral and molecular gas \citep[e.g.][]{morganti2005,dasyra+combes2012} and in the ionised gas \citep[e.g.][]{holt2011,riffel2006,couto2017}. Kinematic features consistent with gas inflows and outflows have been found in the ionised and molecular gas of the central region of nearby galaxies \citep[e.g.][and references therein]{lena2016,schnorrmuller2016,schnorrmuller2017,garciaburillo2005}. \\
In this work we analyse the bar- and jet-induced perturbations on the molecular and ionised gas in NGC 3393, using new data from ALMA (CO J:2-1) and GEMINI-GMOS/IFU (optical spectra with information from stars and gas).
NGC 3393 is a nearby, bright \citep[$m_{b}=13.1$ according to][]{devaucouleurs1991}, spiral (Sa) galaxy, at an estimated redshift of 0.012442 (optical) or 0.012509 (from the 21 cm line), which corresponds to a luminosity distance of $52$ Mpc and a scale of $0.25$ kpc/arcsec, assuming $H_{0} = 73$ km s$^{-1}$ Mpc$^{-1}$. The galaxy covers over one arcmin on the sky, is observed nearly face-on, and has been classified optically as a Seyfert 2 \citep{veroncetty2003}. It is interacting weakly with a nearby companion 60 kpc away \citep{schmitt2001}.
From HI single-dish observations, the maximum rotation velocity corrected for inclination is $158 \pm 7$ km/s, and the central velocity dispersion is $197 \pm 28$ km/s \citep{leda-paturel2003}.
NIR images show a stellar bar at position angle (PA) $\sim 159\degree - 165\degree$, with maximum ellipticity $e_{max} = 0.2$ and semi-major axis (SMA) $13\arcsec$. A faint nuclear bar has also been posited at PA $\sim 145\degree - 150\degree$, with $e_{max}=0.46$ and SMA $2\arcsec$ \citep{alonso-herrero1998,jungwiert1997}.
\citet{lasker2016} modelled the light distribution of the galaxy using HST imaging and found two prominent rings: the first is actually a partial ring formed by two asymmetric tightly-wound spiral arms in an outer disk of radius $40 \arcsec$; the second is an inner ring that appears elongated, with a radius of $13\arcsec$, i.e. coincident with the outer bar identified by \citet{alonso-herrero1998}. These authors derive a PA of $140\degree$ for the $2\arcsec$ SMA inner bar.
A high-resolution HST [OIII] emission-line image of the NLR \citep{Schmitt2003} shows an S-shaped morphology with arms that have an opening angle of $90\degree$, and an extension of $5.6\arcsec$ (1410 pc along the ionisation axis PA) $\times$ $3\arcsec$ (740 pc), with the ionisation axis oriented in PA $65\degree$. Their derived PA is only slightly different from the value of $55\degree$ quoted by \citet{schmitt1996} and \citet{cooke2000}.
The sense of curvature of this S-shape is the same as the large-scale spiral arms. The [OIII] emission extends up to $r \sim 15\arcsec$ (3750 pc) along PA $44 \degree$.
This S-shaped structure of high-excitation gas surrounds a three-component radio structure, as observed with the VLA at 1.5, 4 and 8.4 GHz \citep{koss2015,cooke2000}. The central radio source is unresolved and has a flatter spectrum than the lobes. Chandra data were also obtained by \citet{bianchi2006} and \citet{levenson2006}, who found soft X-ray emission with strong morphological correlations to the extended [OIII] emission.\\
The kinematics of the NLR have been studied by \citet{cooke2000} using Fabry-P\'erot [NII] data. They found a skew between the velocity fields of the inner region and that of the outer arms, and fitted a rotation curve which indicates that the major axis of the galaxy runs from NE to SW along PA $68 \degree$, with the NE gas receding and the SW gas approaching. Assuming trailing arms, \citet{cooke2000} conclude that the galaxy is rotating counter-clockwise.
Using Chandra X-ray observations, \citet{fabbiano2011} reported the presence of two X-ray sources which they suggested were obscured AGNs, separated by $ \sim 130$ pc, with lower mass limits of $\sim 8 \times 10^{5}$ M$_{\odot}$ for the NE source and $\sim 10^{6}$ M$_{\odot}$ for the SW source.
More recent observations and analysis by \citet{koss2015} found the same morphological correlations between the [OIII], X-ray and radio emission, but they conclude that the double SMBH detection is most likely spurious, resulting from the low number of X-ray counts ($<160$) at $6-7$ keV and data smoothing with a few counts per pixel on scales much smaller than the PSF. \\
NGC 3393 is a Compton-thick galaxy \citep{koss2015} and has polarised broad H$\alpha$ and H$\beta$ emission lines \citep{kay2002,ramosalmeida2016}. A water-maser-emitting disk has been observed in the nuclear region using VLBI observations \citep{kondratko2008}: this water maser disk is observed edge-on, with a major axis at PA $\sim -34\degree$, i.e. perpendicular to the NLR axis. The kinematics of the water masers in the disk are consistent with Keplerian rotation, with an enclosed mass of $(3.1 \pm 0.2) \times 10^7 M_{\odot}$. \\
This paper is organised as follows. In Section \ref{sect:observations} we describe the observations and data reductions. In Section \ref{sect:results} we present the methods used for the analysis and the subsequent results. In Section \ref{sect:discussion} we present a discussion of the results and in Section \ref{sect:conclusions} we present our conclusions.
\section{Observations and Data Reduction}
\label{sect:observations}
\subsection{GEMINI-GMOS/IFU}
The observations were obtained with the Integral Field Unit of the Gemini Multi-Object Spectrograph (GMOS-IFU; \citealt{allington-smith2002,hook2004}) at the Gemini South Telescope on June 20, 2015 (project GS-2015A-Q-12). The observations were made in ``single-slit'' mode, using the IFU-R mask and the B600+G5323 grating, with four exposures of 720 s each, adding small spatial ($0\farcs5$) and spectral (50 \AA) offsets between exposures. The spectral coverage of the observations was $\lambda 4092-\lambda 7336$ \AA, at a spectral resolution of $R = 1688$, covering the emission lines H$\beta$, [OIII]$\lambda4959,5007$, [OI]$\lambda6300$, H$\alpha$, [NII]$\lambda6548,6583$, and [SII]$\lambda6717,6731$, in addition to several stellar absorption lines. The standard star used for flux calibration is LTT 7987, which was observed on May 30, 2015.
The field of view of the observations was $3\farcs8 \times 4\farcs9$, which corresponds to a size of $0.96$ kpc $\times$ $1.24$ kpc in the galaxy, sampled at $0\farcs08$. \\
Seeing during the observations was $0 \farcs 62$ as measured from the FWHM of the spatial profile of the stars in the acquisition image; at the galaxy this corresponds to $155$ pc.
The data processing was performed using tasks from the GEMINI.GMOS package for IRAF, following \citet{lena2014}. This process includes bias subtraction, flat fielding, sky subtraction, wavelength calibration, flux calibration, correction for differential atmospheric dispersion, and, finally, the building of the data cubes with a sampling of $0\farcs2 \times 0\farcs2$. The four individual data cubes were combined to avoid the detector gaps, yielding the final data cube used throughout this paper. \\
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{fig1_v2_2.png}
\caption{Left: HST F606W image for the galaxy, overlaid with the GMOS FOV. Orientation is N to the top and E to the left; Middle: GMOS continuum image, made by collapsing channels of the data cube that did not include strong emission lines. Orientation is shown with the compass in the top left corner; Right: Example spectra for the three points marked on the continuum map, showing the most prominent emission lines [OIII], [NII], H$_{\alpha}$, and [SII]. Units for both colour bars are erg/cm$^{2}$/s/\AA.}
\label{hst_and_gmoscont}
\end{figure*}
\FloatBarrier
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{orientation_fig.png}
\caption{ The moment 1 (velocity) map of the [NII] line from our GEMINI-GMOS/IFU observations is shown in colour following the colour bar at the top of the panel (scale in km/s). Blue contours indicate the moment 0 map of [NII], pink contours correspond to the VLA 8.4 GHz continuum map, and black contours show the moment 0 map from ALMA CO J:2-1 observations.
}
\label{gmos_orientation}
\end{figure}
\FloatBarrier
\subsection{ALMA: CO J:2-1}
NGC 3393 was observed on May 3, 2016 as part of Project 2015.1.00086.S (P.I. Nagar).
Four basebands (spws) were used, each set to an effective bandwidth of 1.875 GHz.
Spw1 was centred on the CO J:2-1 line ($\nu_{rest} = 230.538$ GHz), with a channel
width of $2.5$ km/s.
Spw2 was set to 'TDM' mode, for highest sensitivity, and used to cover the continuum centred on
$\nu_{rest} = 232.538$ GHz with 40.8 km/s channels.
Spw3 was centred on the CS J:5-4 line ($\nu_{rest} = 244.935$ GHz), with a channel
width of 5.1 km/s.
Spw4 was set to 'TDM' mode, for highest sensitivity, and used to cover the continuum centred on
$\nu_{rest} = 246.936$ GHz with 31.3 km/s channels.
Forty-one antennas were used and the total integration time on NGC 3393 was $\sim$26 min. Six-minute
scans on NGC 3393 were interleaved with one-minute scans on the nearby 'phase-calibrator'
J1037-2934. The latter is a well-studied compact quasar at redshift 0.312, with a position
accurate to better than 1 mas. No flux calibrator was observed within this 'scheduling block'.
Data were calibrated and imaged using CASA version 4.7, and mostly followed the calibration
script provided by the ALMA observatory (the CASA calibration pipeline was not available at the time
of the release of this dataset). Since a flux calibrator was not observed for this project,
flux calibration was performed by setting the flux of the phase calibrator J1037-2934 to
602 mJy at 235.7 GHz (a value provided in the ALMA observatory's calibration script).
The ALMA calibrator database shows that this source had a measured flux of
630 mJy when observed 12 days later at the same frequency. The continuum was imaged from line-free
channels in all four spws.
This continuum image, at an effective frequency of 239 GHz, was made using 'Briggs weighting' with
robust=2, i.e. 'natural' weighting.
The synthesised beam of this image has a major (minor) axis of 0\farcs71 (0\farcs61) in
PA 85\arcdeg\ and the r.m.s. noise is 0.023 mJy/beam.
The continuum-subtracted $uv$-data were then used to image the CO J:2-1 line.
The final CO J:2-1 data we use and show in this work come from two datacubes:
(a) a higher spatial and spectral resolution cube, made using 'Briggs weighting' with robust=0.2,
and using the intrinsic spectral resolution. This datacube has a synthesised beam with
major (minor) axis of
0\farcs58 (0\farcs5) with a beam PA of $-$72\arcdeg\ and an r.m.s. noise of
0.7 mJy/beam per 2.5 km/s channel;
(b) a lower spatial and spectral resolution (but higher signal to noise) cube, made using
'natural' weighting, and 4-channel spectral averaging. This datacube has
a synthesised beam with major (minor) axis of
0\farcs73 (0\farcs62) with a beam PA of 86\arcdeg\ and an r.m.s. noise of
0.45 mJy/beam per 10 km/s channel. \\
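For reproducibility, the flux-scaling and line-imaging steps above can be summarised in a short script. The following is a minimal sketch in the syntax of the modern \texttt{casatasks} package (the actual reduction used the CASA 4.7 observatory script); the measurement-set names, cell size, image size and clean threshold are illustrative placeholders rather than the exact values used.
\begin{verbatim}
# Sketch of the flux-scale and CO J:2-1 imaging steps (casatasks syntax).
# File names and imaging control parameters below are placeholders.
from casatasks import setjy, tclean

# Flux scale: set J1037-2934 to 602 mJy at 235.7 GHz (ALMA script value).
setjy(vis='ngc3393.ms', field='J1037-2934', standard='manual',
      fluxdensity=[0.602, 0, 0, 0], spix=0.0, reffreq='235.7GHz')

# Cube (a): continuum-subtracted CO J:2-1 at the intrinsic 2.5 km/s
# channel width, Briggs weighting with robust=0.2.
tclean(vis='ngc3393_contsub.ms', imagename='ngc3393_co21_r02',
       specmode='cube', restfreq='230.538GHz', width='2.5km/s',
       weighting='briggs', robust=0.2,
       cell='0.1arcsec', imsize=512, niter=1000, threshold='2.1mJy')
\end{verbatim}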
\section{Results}
\label{sect:results}
\subsection{Stellar kinematics}
\label{subsect:stellar_kinematics}
The stellar kinematics were obtained from the absorption lines in the GEMINI-GMOS/IFU datacube. To model the stellar kinematics we used the penalised pixel-fitting (pPXF v5.2.1) routine developed by \citet{cappellari2004} and upgraded in \citet{cappellari2017}, in which the line-of-sight velocity distribution (LOSVD) is recovered by fitting an optimised template to the galaxy spectrum. We used the INDO-US spectral template library \citep{valdes2004}.
To reach the S/N needed to measure the stellar kinematics reliably, we spatially binned the data cube to a target S/N of 50 using the Voronoi binning method described in \citet{cappellari2003}. The spectra show no absorption lines in the region
near the northern radio lobe: this region was thus masked before running pPXF. \\
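Schematically, the binning and extraction can be reproduced with the public Python implementations of these routines (\texttt{vorbin} and \texttt{ppxf}); the sketch below assumes pre-loaded arrays of spaxel coordinates, signal, noise, log-rebinned spectra and templates, which are placeholders rather than our actual data products.
\begin{verbatim}
# Schematic Voronoi binning + pPXF extraction; x, y, signal, noise,
# spectra, templates, noise_spec and velscale are placeholder arrays.
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning
from ppxf.ppxf import ppxf

bin_num, *_ = voronoi_2d_binning(x, y, signal, noise,
                                 target_sn=50, plot=False)

v_star = np.zeros(bin_num.max() + 1)
sigma_star = np.zeros_like(v_star)
for b in np.unique(bin_num):
    galaxy = spectra[:, bin_num == b].sum(axis=1)   # co-add the bin
    galaxy /= np.median(galaxy)
    start = [0.0, 150.0]            # initial (V, sigma) guess in km/s
    pp = ppxf(templates, galaxy, noise_spec, velscale, start,
              moments=2, degree=4)  # fit V and sigma only
    v_star[b], sigma_star[b] = pp.sol
\end{verbatim}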
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{stars_velmap.png}
\caption{ Left: Stellar velocity map from pPXF; regions where no absorption-line information could be recovered are masked. Right: best-fit Bertola model to the stellar velocity field; the centre and inclination ($i=25 \degree$) were kept fixed. The free parameters fitted were $A=207$ km/s, $c=0.82$, $p=1$, and $\psi_0 = 40 \degree$.
}
\label{stellar_kinematics}
\end{figure}
The pPXF-derived stellar kinematic velocity map is shown in Fig. \ref{stellar_kinematics}. Throughout this paper we adopt an inclination of $25\degree$ based on the axis ratio $a/b = 1.1$ \citep[v1.10,][]{devaucouleurs1991}. We model the stellar velocity field obtained from pPXF using a spherical potential with pure circular rotation, assuming that the kinematical centre is cospatial with the peak in the continuum emission. The observed radial velocity from this potential is given by \citep{bertola1991}:
$$ V = V_{sys} + \frac{AR\cos(\psi - \psi_{0})\sin\theta\cos^{p}\theta }{ \{R^{2}[\sin^{2}(\psi - \psi_{0}) + \cos^{2}\theta\cos^{2}(\psi - \psi_{0})] + c^{2}\cos^{2}\theta \}^{p/2} } \; , $$
\noindent where $V_{sys}$ is the systemic velocity, $R$ is the radius, $\theta$ is the disk inclination, $\psi_{0}$ is the position angle of the line of nodes, $A$ is the amplitude of the rotation curve, $c$ is the concentration parameter regulating the compactness of the region with a strong velocity gradient, and $p$ regulates the slope of the 'flat' portion of the velocity curve.
We perform a least-squares minimization using the IDL routine MPFIT2DFUN \citep{mpfit-markwardt2009} to obtain the best-fitting parameters. The resulting model is shown in Fig. \ref{stellar_kinematics}. The centre and inclination ($25\degree$) were kept fixed, and the best-fitted values of the free parameters were: $A=207$ km/s, $c=0.82$, $p=1$, and $\psi_0 = 40 \degree$.
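For illustration, the model velocity field above can be evaluated directly; the following is a minimal Python sketch (not the IDL code actually used) of the \citet{bertola1991} formula with the best-fitted stellar parameters, using a simplified sky-angle convention ($\psi$ measured from the $+x$ axis).
\begin{verbatim}
import numpy as np

def bertola_velocity(x, y, vsys, A, c, p, psi0_deg, inc_deg):
    """Bertola et al. (1991) line-of-sight velocity field.
    (x, y): sky offsets from the nucleus (arcsec); c in the same units."""
    theta = np.radians(inc_deg)          # disk inclination
    psi0 = np.radians(psi0_deg)          # line of nodes
    psi = np.arctan2(y, x)               # simplified angle convention
    R = np.hypot(x, y)
    num = A * R * np.cos(psi - psi0) * np.sin(theta) * np.cos(theta)**p
    den = (R**2 * (np.sin(psi - psi0)**2
                   + np.cos(theta)**2 * np.cos(psi - psi0)**2)
           + c**2 * np.cos(theta)**2)**(p / 2.0)
    return vsys + num / den

# Best-fit stellar parameters from this section:
x, y = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
vmod = bertola_velocity(x, y, vsys=0.0, A=207.0, c=0.82, p=1.0,
                        psi0_deg=40.0, inc_deg=25.0)
\end{verbatim}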
\subsection{Ionised gas}
\label{sect:ionised_gas}
To model the ionised gas emission in the GMOS-IFU data we use custom IDL routines. We begin our analysis by generating moment images for the most prominent spectral emission lines, [OIII]$\lambda 5007$ and the [NII]$\lambda6549,\lambda6585$ doublet. These moment images are created by collapsing the spectral axis of the data cube.
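As a concrete illustration of this axis collapse, the following Python sketch (our implementation is in IDL) computes the moment 0, 1 and 2 maps of a continuum-subtracted cube, demonstrated on a synthetic Gaussian line.
\begin{verbatim}
import numpy as np

def moment_maps(cube, vel, flux_floor=0.0):
    """Moment 0/1/2 maps from a continuum-subtracted cube.
    cube: (nchan, ny, nx) line fluxes; vel: (nchan,) velocities in km/s."""
    dv = np.abs(np.gradient(vel))[:, None, None]
    f = np.clip(cube, flux_floor, None)
    mom0 = (f * dv).sum(axis=0)                              # integrated flux
    mom1 = (f * vel[:, None, None] * dv).sum(axis=0) / mom0  # mean velocity
    mom2 = np.sqrt((f * (vel[:, None, None] - mom1)**2 * dv).sum(axis=0)
                   / mom0)                                   # dispersion
    return mom0, mom1, mom2

# Synthetic Gaussian line at +60 km/s with sigma = 90 km/s:
vel = np.linspace(-500.0, 500.0, 101)
cube = (np.exp(-0.5 * ((vel[:, None, None] - 60.0) / 90.0)**2)
        * np.ones((101, 4, 4)))
m0, m1, m2 = moment_maps(cube, vel)   # m1 ~ 60 km/s, m2 ~ 90 km/s
\end{verbatim}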
The moment zero, i.e., integrated flux maps (Fig. \ref{onecomp_gh_gmos}), show an S-shaped morphology of the ionised gas, in which two arm-like features emerge from the centre as a straight line along PA $55 \degree$ and then curve, to the NW in the NE arm and to the SE in the SW arm. The brightest emission is observed within an opening angle of $90 \degree$ from the nucleus.
The black contours in the [NII] moment zero map (Fig. \ref{onecomp_gh_gmos}) correspond to the 8.4 GHz VLA map, and indicate that the gas in the S-shaped arms seems to surround both NE and SW radio lobes.
This morphology, and the interaction between the radio jet and the ionised gas, was previously observed and analysed by \citet{cooke2000} and \citet{maksym2017}. \\
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{fig3_2.png}
\caption{Moment maps for the [NII]$\lambda6585$ (top) and [OIII]$\lambda5007$ (bottom) emission lines. First column: integrated flux; second: velocity map; third: velocity dispersion. The fourth column shows the structure map (top) and the h3 moment (bottom) from a one-component Gauss-Hermite fit to the [OIII] emission line; this moment represents the asymmetric deviations from a Gaussian profile.
Black contours superposed on the moment 0 map of the [NII] line correspond to the VLA 8.4 GHz continuum map. Black contours in the moment 1, 2 and 3 maps correspond to the moment 0 of the respective line. Black circles in the moment 0 map of the [OIII] line (bottom left corner) show the aperture positions along an S-shaped slit used to extract the position-velocity diagram shown in Fig. \ref{gmos_pvdiagram}; the coloured apertures mark specific regions that can be identified in the pv-diagram.
The galaxy major axis (PA $40 \degree$) is marked as a black line, delimiting the near and far side. The compass shown on top left corner shows the orientation of our GEMINI-GMOS/IFU data. The green cross marks the position of the stellar continuum peak, which we assume to trace the position of the nucleus. The green diamond shows the position of the SW secondary BH reported by \citet{fabbiano2011}.
Colour bar units are erg/cm$^{2}$/s/\AA\ for the moment 0 maps and the structure map, km/s for moments 1 and 2; the h3 moment is unitless.
}
\label{onecomp_gh_gmos}
\end{figure*}
Moment one, i.e., velocity maps are shown in the second column of Fig. \ref{onecomp_gh_gmos}, where a gradient can be observed from NE to SW. However, given the complex kinematics of the NLR, a precise determination of its PA is not possible using this moment image. Two high-velocity features are found to the NE and SW of the nucleus. The moment one map for the [OIII] emission line shows that a redshifted component covers a large fraction of the FOV. For the [NII] line we observe that the NE region shows larger redshifts with increasing distance from the nucleus. However, with the exception of the blueshifted blob observed S of the nucleus, the SW region is not as blueshifted as expected from an inclined disk in pure rotation. \\
The moment two (velocity dispersion) maps, shown in the third column of Fig. \ref{onecomp_gh_gmos}, present a large dispersion in the central region, extending from the centre in a section along the minor axis (referred to as the equatorial region hereafter); a second high-dispersion area is seen in the inner part of the NE arm. The dispersion is higher for the [OIII] emission line. \\
The moment 3 (h3) map (bottom-right corner in Fig. \ref{onecomp_gh_gmos}), obtained from a one-component Gauss-Hermite fit, describes asymmetric deviations from a Gaussian profile. It presents some skewness in the equatorial region, where negative values are indicative of a blueshifted wing or component. A large area in the SE region presents positive skewness, which indicates a strong redshifted wing or component.
Similar distributions were observed for all moment maps of all strong emission lines fitted with a single Gaussian using PROFIT ([SII], H$\alpha$, H$\beta$, [OI]; not shown). \\
To better examine the kinematics we defined a curved 'slit' which closely follows the S-shaped arms seen in the emission lines: the aperture positions of this slit are shown in Fig. \ref{onecomp_gh_gmos} (bottom left panel), and the position-velocity (pv) diagrams of the H$\alpha$ and [NII] lines, extracted from these apertures, are shown in Fig. \ref{gmos_pvdiagram}. The pure rotation model fitted to the stellar kinematics is shown as the solid black line. While we do see some gas at velocities close to this model, there are large deviations from the model which indicate the presence of multiple kinematic components. To guide the eye, specific apertures along the slit are coloured (Fig. \ref{onecomp_gh_gmos}), and the pv-diagram is marked with a correspondingly coloured line at each of these apertures. The yellow and red apertures are near the SW radio lobe, while the magenta and green apertures are near the NE radio lobe. The apertures near the yellow vertical line reveal gas that appears to follow the pure rotation model, plus a redshifted wing with velocities reaching 500 km/s; no equivalent blueshifted wing is observed. In the apertures near the red vertical line we see little to no gas following the rotation model: instead we observe strongly blueshifted emission with velocities near $-400$ km/s. Apertures near the magenta line show a small fraction of gas following the rotation model, and a dominant component of gas redshifted by up to $\sim 300$ km/s. Apertures near the green line show a highly broadened profile: while the median velocity is roughly close to the rotation model, we see large ($\pm 300$ km/s) redshifted and blueshifted velocities. In this region an additional extreme redshifted component is observed, most clearly seen in the [NII]$\lambda$6585 line: this weak redshifted wing reaches velocities of 1000 km/s.
Similar characteristics are observed in the pv-diagram of the [OIII] emission line (not shown).
\begin{figure}
\centering
\subfloat{
\includegraphics[width=.5\textwidth]{pvdiagram_Halpha.png}
\label{gmos_pvdiagram_ha}
}
\caption{Position-velocity diagram of the continuum-subtracted GEMINI-GMOS/IFU data cube, centred on the H$\alpha$ emission line, extracted along the S-shaped 'slit' shown in the bottom left panel of Fig. \ref{onecomp_gh_gmos}. The solid black line shows the expectations of the Bertola rotation model derived from the stellar kinematics. For reference, vertical yellow, red, magenta and green lines show the positions of the specific apertures that are marked with the same colours in Fig. \ref{onecomp_gh_gmos} (bottom left corner). The dashed black line shows the zero velocity of the emission line. Colour bar units are erg/cm$^{2}$/s/\AA.
}
\label{gmos_pvdiagram}
\end{figure}
The kinematics of NGC 3393 were classified as turbulent by \citet{fischer2013}, based on HST spectroscopy, since they could not be satisfactorily fitted with a biconical outflow model.
Given the complex kinematics of the NLR, the large velocity dispersion, the Gaussian skewness observed in the equatorial region, the multiple kinematical components observed in the pv-diagram, and a visual inspection of the spectra in the mentioned areas (Fig. \ref{multicomp_spectra}), the need for a multiple-component Gaussian fit is clear. \\
A visual inspection of the emission-line profiles for the [OIII], H$\alpha$ and [NII] lines shows a large variety of profile shapes and widths in different regions of the FOV. To quantify these differences we use the measurements described in \citet{whittle1985}, where velocity widths are measured at fixed fractions of the cumulative line flux; the integral nature of these measurements makes them relatively insensitive to the detailed shape of the profiles. For every spaxel we calculated: (a) \textbf{W80}, a line-width parameter which measures the velocity width that encloses 80 per cent of the total flux; this parameter does not discard the information carried by broad wings. (b) \textbf{A}, an asymmetry parameter as defined in \citet{liu2013}: a symmetric profile has A$=0$, and the presence of redshifted (blueshifted) wings gives a positive (negative) value. (c) \textbf{K}, a shape parameter related to the line kurtosis: for a Gaussian profile K$=1$, while profiles with broad wings have K $>1$ and stubby profiles have K $<1$. The values obtained for these parameters for the [OIII] emission line are shown in Fig. \ref{lineprof_par}. We use these results to identify the spaxels in the FOV for which multiple-component fits are required and feasible. We considered spaxels with W80 $> 350$ km/s, A $> 0.1$ or A $<-0.1$, and K $>1.1$ or K $< 0.8$, and created a mask by weighting the criteria on W80, A and K by 80, 20 and 20 per cent, respectively. Furthermore, regions with low S/N were masked out, as their profiles are not as reliable. This mask contains the spaxels that require two-Gaussian fits. Visual inspection of the maps shows considerably larger values of W80 in some areas; for these we consider the possibility of a three-Gaussian-component fit, and thus we made a second mask for regions where W80 $>$ 500 km/s.
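These non-parametric measurements are simple to evaluate from the cumulative flux distribution. The Python sketch below is a minimal illustration, assuming one common set of conventions for A and K (the exact definitions follow \citealt{liu2013}); it is not the IDL code actually used.
\begin{verbatim}
import numpy as np

def profile_params(vel, flux):
    """W80, asymmetry A and shape K for one emission-line profile.
    Percentile velocities come from the cumulative flux distribution
    (Whittle 1985); A and K as sketched here give A = 0 and K = 1 for
    a Gaussian, matching the sense used in the text."""
    cum = np.cumsum(flux)
    cum /= cum[-1]
    v05, v10, v50, v90, v95 = np.interp(
        [0.05, 0.10, 0.50, 0.90, 0.95], cum, vel)
    w80 = v90 - v10
    w90 = v95 - v05
    a = ((v90 - v50) - (v50 - v10)) / w80   # >0 for a redshifted wing
    half = np.where(flux >= 0.5 * flux.max())[0]
    fwhm = vel[half[-1]] - vel[half[0]]     # crude FWHM estimate
    k = w90 / (1.397 * fwhm)                # = 1 for a Gaussian
    return w80, a, k

# Sanity check on a Gaussian (sigma = 100 km/s): W80 ~ 256 km/s.
v = np.linspace(-1000.0, 1000.0, 2001)
f = np.exp(-0.5 * (v / 100.0)**2)
print(profile_params(v, f))
\end{verbatim}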
For the multi-component analysis we use a custom IDL routine based on PROFIT \citep{riffel2010}, fitting multiple Gaussian profiles to the emission lines of the GMOS-IFU spectra. We performed two- and three-component Gaussian fits to the emission lines, using a narrow component ($\sigma < 115$ km/s), a broad redshifted component ($115 < \sigma < 230$ km/s), and a third, broad blueshifted component ($115 < \sigma < 230$ km/s), based on the masks described above. Note that the 'broad' nomenclature used here should not be confused with the broad component typical of a Type 1 AGN.\\
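A minimal Python analogue of such a constrained fit (our analysis used the custom IDL routine) is sketched below; the bounds mirror the $\sigma$ ranges above, while the amplitude and velocity guesses are illustrative choices of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(v, a1, v1, s1, a2, v2, s2, a3, v3, s3):
    """Narrow + broad redshifted + broad blueshifted Gaussians."""
    g = lambda a, mu, sig: a * np.exp(-0.5 * ((v - mu) / sig)**2)
    return g(a1, v1, s1) + g(a2, v2, s2) + g(a3, v3, s3)

def fit_spaxel(v, flux, v_rot):
    """Constrained three-component fit: the narrow component starts at
    the rotation-model velocity with sigma < 115 km/s; the broad
    components have 115 < sigma < 230 km/s and are forced red/blueward
    of systemic. Guesses and bound widths are our choices."""
    p0 = [flux.max(), v_rot, 80.0, 0.3 * flux.max(), 150.0, 170.0,
          0.3 * flux.max(), -150.0, 170.0]
    lo = [0, v_rot - 100, 10, 0, 0, 115, 0, -1000, 115]
    hi = [np.inf, v_rot + 100, 115, np.inf, 1000, 230, np.inf, 0, 230]
    popt, _ = curve_fit(three_gauss, v, flux, p0=p0, bounds=(lo, hi))
    return popt

# Usage on a synthetic profile similar to the 'O4' spectra:
v = np.linspace(-800.0, 800.0, 400)
flux = three_gauss(v, 1.0, 20.0, 90.0, 0.5, 250.0, 170.0,
                   0.7, -300.0, 180.0)
popt = fit_spaxel(v, flux, v_rot=20.0)
\end{verbatim}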
The result of the multiple-component fit is shown in Fig. \ref{gmos_multicomp_mom1}. To fit the narrow component we use, as a first guess for each spaxel, the velocity predicted by the pure rotation model fitted to the stellar kinematics. This component broadly traces gas rotating in a disk, with a kinematic major axis in PA $40 \degree$ (in agreement with the $37 \pm 3\degree$ obtained by \citealt{cooke2000}). From the moment zero maps it can be observed that this component is weaker for the [OIII] line, which appears to be dominated by the broad redshifted and blueshifted components.
The broad redshifted components trace more complex kinematics which seem to be closely related to the radio jets. We have marked four areas of particularly interesting kinematics (labelled 'O1', 'O2', 'O3' and 'O4' in Fig. \ref{gmos_multicomp_mom1}); these areas coincide with the coloured vertical lines used in the pv-diagram. The 'O1' area, as seen in the pv-diagram, has a broad profile (top left panel of Fig. \ref{multicomp_spectra}) that is slightly asymmetric; however, no clear multiple peaks are observed, and the separation between model-following and disturbed gas is not clear. A very broad, weak redshifted wing is also observed in this area. The 'O2' and 'O3' regions appear to be outflow dominated, with strong high-velocity redshifted emission, and seem closely related to the NE and SW radio lobes, respectively.
The 'O4' area shows the widest profiles observed in the FOV (bottom right panel in Fig. \ref{multicomp_spectra}), with three clear peaks: the dominant component is the blueshifted emission, followed by a component that seems to follow rotation in the disk, and a weaker redshifted wing. This profile extends along a region perpendicular to the radio jet axis.
\begin{figure*}
\centering
\includegraphics[width=0.79\textwidth]{lineprof_par.png}
\caption{Profile measurements for the emission-line profiles, as described in \citet{liu2013,whittle1985}. From left to right: W80 (velocity width that encloses 80 per cent of the total flux), A (profile asymmetry parameter), K (profile shape parameter) and V$_{med}$ (median velocity of the integrated flux profile). For W80 and V$_{med}$ the colour bar is in km/s, while A and K are unitless parameters. Grey contours correspond to the [NII] moment 0 map. Labels 'O1', 'O2', 'O3' and 'O4' correspond to areas of interest, explained in Sect. \ref{sect:ionised_gas}.
}
\label{lineprof_par}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.72\textwidth]{fig6.png}
\caption{Moment 0 and 1 maps from our multiple-component Gaussian fits to the [OIII] and [NII] emission lines.
First and second columns show the moment 0 and moment 1 maps for the [OIII] line. Third and fourth columns show the moment 0 and 1 maps for the [NII] line. The first, second and third rows show the narrow, broad redshifted and broad blueshifted components, respectively. Crosses marked 'O1', 'O2', 'O3', and 'O4' show the areas of interest defined in the text. Black contours in the moment 1 maps show the moment 0 map of the corresponding emission line, obtained from the axis collapse, as shown in Fig. \ref{onecomp_gh_gmos}. Pink contours correspond to the VLA 8.4 GHz map. Units for the moment 0 maps are erg/cm$^{2}$/s/\AA, and km/s for the moment 1 maps. }
\label{gmos_multicomp_mom1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig5.png}
\caption{ Examples of multiple-component Gaussian fits to the [OIII] emission line in the 'O1', 'O2', 'O3', and 'O4' areas shown in Fig. \ref{gmos_multicomp_mom1}. The narrow, broad redshifted, and broad blueshifted components are shown in yellow, red, and blue, respectively and their sum is shown by the dashed green line. Dotted vertical line shows systemic velocity.
}
\label{multicomp_spectra}
\end{figure}
\subsection{Molecular gas : CO J:2-1}
\label{subsect:molecular_gas}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{Hst_overlays_2.png}
\caption{The structure map (obtained using an F606W HST image) of NGC 3393 is shown in colour, with overlays of the ALMA CO J:2-1 moment 0 map (cyan contours) and the GMOS [NII] moment 0 map (black contours). The grey line marks the PA of the large-scale bar, and the magenta line corresponds to the PA and estimated extension of the nuclear bar. Colour bar units are erg/cm$^{2}$/s/\AA.
}
\label{overlay_hst}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig9.png}
\caption{Moment maps for ALMA CO J:2-1 data. Top: integrated flux (moment 0) map following the colour bar (units of Jy/beam). Middle: velocity map (moment 1) after subtraction of a CO systemic velocity of $3746$ km/s; the grey circle separates the inner region and outer region referred to in the text. Bottom: velocity dispersion (moment 2) map. Moment 1 and 2 colour bar units are km/s. In all panels N is up and E is to the left, and the black line marks the adopted major axis PA of $40 \degree$.
}
\label{alma_moments}
\end{figure}
To illustrate the distribution of the molecular gas relative to the other components of the galaxy, we show in Fig. \ref{overlay_hst} both the CO J:2-1 (ALMA; cyan) and [NII] (GEMINI-GMOS/IFU; black) integrated flux maps overlaid on an HST F606W image to which we applied an unsharp-mask filter with the goal of emphasising structures such as dusty regions (we refer to this filtered image as the 'structure map' from now on). The ionised and molecular gas barely overlap, due to the limited FOV of our GMOS-IFU data; this can be seen better in Fig. \ref{gmos_orientation}. There is little to no CO J:2-1 emission in the central region. This implies either a true lack of molecular gas or a critical-density effect, that is, the gas density may be high enough that the CO J:2-1 transition is collisionally, rather than radiatively, de-excited. The outer distribution of CO J:2-1 seems to follow the inner ring-like structure, where the structure map shows the presence of dust.
The observed molecular gas morphology in the nuclear region could be a result of the interaction between the molecular gas and the radio jet: the latter can produce entrainment of the gas along the jet PA, and push gas away from the centre perpendicular to this PA. Alternatively, the molecular gas density is high enough that the CO J:2-1 transition is 'dark', and a dense molecular gas tracer is required.
The CO J:2-1 moment 0 map is shown in Fig. \ref{alma_moments}: the distribution of the molecular gas is fragmented and does not cover a large fraction of the field of view. The SE component close to the centre is the region with the brightest CO J:2-1 emission. The outer region shows large-scale spiral arms that broadly resemble a ring. There is some emission present in the region between the SE component and the outer arms. \\
Figure \ref{alma_moments} presents the CO J:2-1 moment 1 map. We classified the kinematics into two regions: an inner region inside a circle of $5 \arcsec$ (marked in grey in Fig. \ref{alma_moments}) and an outer region outside this circle. We can see a velocity gradient from NE to SW in the outer distribution of gas, as expected for gas rotating in the disk. However, the kinematics of the inner region do not follow the outer region's rotation: in the SE inner component there seems to be a velocity gradient in PA $-45 \degree$, and the NW inner component shows mainly blueshifted velocities.
We model the kinematics in the outer region with a pure circular rotation model obtained from an exponential disk potential, defined by:
$$ \Phi(R,z) = -2\pi G \Sigma_{0} r_{d}^{2} \int_{0}^{\infty} \frac{J_{0}(kR)e^{-k|z|} }{[ 1+(kr_{d})^{2} ]^{3/2}} dk \; ,$$
\noindent where $(R,z)$ are cylindrical coordinates, $G$ is the gravitational constant, $\Sigma_{0}$ is the central surface brightness, $r_{d}$ is the disk scale length, and $J_{0}$ is the zeroth order cylindrical Bessel function. For this model we assume an infinitesimally thin disk ($z \rightarrow 0$). The rotation velocity from this potential is given by
$$V^{2}_{ROT} (R) = R \frac{\partial \Phi}{\partial R} \; . $$
\noindent We perform a least-squares minimization (using the MPFIT2DFUN routine in IDL) to obtain the parameters that best fit the observed CO J:2-1 velocity field (Fig. \ref{alma_model_expdisk}). The PA cannot be well constrained due to the scarcity of data along the major axis; we thus fix the major axis PA to $40 \degree$ (see Sect. 5), after verifying that the velocity profiles of the CO J:2-1 along, and near, the minor axis are most consistent with this major axis PA (e.g., Fig. \ref{rotcurve_alma_major_minor_axis}), and also fix the inclination to $i = 25 \degree$. Based on the rotation curve along the minor axis, we use a $-26$ km/s offset from the systemic velocity \citep[$V_{sys} = 3746$ km/s from HIPASS;][]{hipass-meyer2004} to better fit the CO J:2-1 data.
For the free parameters $r_{d}$ and $M_{d}$, the scale length and mass of the disk respectively, the best-fitted values obtained are $r_{d} = 1$ kpc and $M_{d} = 2.7 \times 10^{10}$ M$_{\odot}$.
For comparison, the values obtained from the HST light-distribution modelling of \citet{lasker2016} are $r_{d} = 1.38$ kpc and $M_{d} = 2.04 \times 10^{10}$ M$_{\odot}$. The resulting exponential disk model and the residual after subtracting the model from the CO J:2-1 velocity map are shown in Fig. \ref{alma_model_expdisk}. The rotation curves of the CO J:2-1 data and the exponential disk model are shown in Fig. \ref{rotcurve_alma_major_minor_axis}. \\
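The rotation curve implied by this potential has the familiar closed form for a razor-thin exponential disk (Freeman 1970), which the following Python sketch evaluates with the best-fitted values above; note this is the intrinsic (deprojected) curve, whereas the fit itself was performed on the projected velocity field.
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def vrot_expdisk(R, Md, rd):
    """Circular speed of a razor-thin exponential disk (Freeman 1970):
    v^2 = 4 pi G Sigma0 rd y^2 [I0(y)K0(y) - I1(y)K1(y)], y = R/(2 rd).
    R and rd in kpc, Md in Msun; returns km/s."""
    sigma0 = Md / (2.0 * np.pi * rd**2)
    y = R / (2.0 * rd)
    v2 = (4.0 * np.pi * G * sigma0 * rd * y**2
          * (i0(y) * k0(y) - i1(y) * k1(y)))
    return np.sqrt(v2)

# Best-fit values from the text: Md = 2.7e10 Msun, rd = 1 kpc.
R = np.linspace(0.05, 8.0, 100)
v = vrot_expdisk(R, 2.7e10, 1.0)   # curve peaks near R ~ 2.2 rd
\end{verbatim}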
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig10.png}
\caption{Left: Our pure rotation model derived from fitting the outer (outside the grey circle in Fig. \ref{alma_moments}) CO velocity field with a model based on an exponential disk potential (see text). Right: residual (observed - model) velocity field of the outer disk.}
\label{alma_model_expdisk}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{alma_expdiskmodel_fit_rotcurves.png}
\caption{ CO J:2-1 rotation curves in the outer (outside the grey circle in Fig. \ref{alma_moments}) region along the major axis, minor axis (red triangles) and PA $10 \degree$ (blue triangles). Plus symbols show the best-fitted pure rotation exponential disk model along the same PAs.
}
\label{rotcurve_alma_major_minor_axis}
\end{figure}
The velocity dispersion (moment 2) map is shown in the bottom panel of Fig. \ref{alma_moments}: the highest values are observed in the inner region, with the maximum found in the SE component. Example spectra of this region can be found in Fig. \ref{secomp_spectra}.
The pv-diagram taken along a slit at PA $-50 \degree$ (Fig. \ref{pvd_alma_secomp}), centred on the SE feature, shows a clear velocity gradient along the SE feature. A second pv-diagram, extracted along the minor axis and centred on the galaxy centre, using a natural-weighted, 4-channel-averaged image ($10$ km/s), is shown in Fig. \ref{pvd_alma_secomp}. The clear gradient of the SE component seems to follow a PA close to the minor axis, and to the PA of the nuclear bar (Fig. \ref{overlay_hst}). The unusual kinematics of this inner region are addressed in Sect. \ref{subsect:nuclear_bar}, where we argue that they are a possible result of the interaction between the large-scale bar and the nuclear bar.
A map of the 230 GHz continuum (adjacent to the CO J:2-1 emission line; Fig. \ref{secomp_spectra}) shows three components whose positions closely correspond to the nucleus and radio lobes observed in the VLA 8.4 GHz data \citep[see Tables 2 and 3 of][]{koss2015}. The 230~GHz total fluxes are 0.2~mJy for the nuclear source, 0.31 mJy for the SW source, and 0.08 mJy for the NE source. Given the positional match of the three components observed in the millimetre continuum with the radio lobes observed in the VLA 4.9-8.4 GHz emission, and that the observed 230 GHz fluxes are in close agreement with the extrapolation of the 8.4 GHz fluxes using the 4.9 to 8.4 GHz spectral indices, we conclude that these are the same components as those observed and analysed in detail by \citet{koss2015}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{combined_fig12.png}
\caption{Top: observed CO J:2-1 velocity field of the inner region, shown in colour following the colour bar (units of km/s). Black contours show the ALMA 230 GHz continuum map.
Bottom: example CO J:2-1 spectra of three distinct positions (1 to 3) in the inner SE feature, as identified in the top panel, plus, for comparison, the spectrum of a fourth position (4) located outside the inner region. }
\label{secomp_spectra}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig7.png}
\caption{Left: $[SII] \lambda 6716 / \lambda 6731$ line ratio map. Right: [OIII]/H$\beta$ line ratio map. Black contours correspond to the VLA 8.4 GHz map. The major axis of the galaxy (PA $= 40\degree$) is shown with the black line.}
\label{lineratios_sii_oiiihb}
\end{figure}
\section{Discussion}
\label{sect:discussion}
The main nuclear radio features of the galaxy are the nuclear component, with a flat spectrum, and two hotspots, NE ($\sim$2\arcsec) and SW ($\sim$1\arcsec) of the nucleus, with steeper spectra, indicative of a two-sided jet. The larger flux of the SW lobe is attributed to Doppler boosting by \citet{koss2015}, assuming that the NE component is receding and the SW component is approaching. There are no other previous studies that discuss the jet position in the galaxy in greater detail. \\
Based on the kinematics presented here, the edge-on maser disk with extension $\sim 1$ pc in a PA perpendicular to both the jet axis and the major axis of the galaxy, and the relatively low inclination of the galaxy disk (as compared to the maser disk), we interpret the jets as being launched into the disk of the galaxy. The SW lobe may represent the point where the radio jet leaves the disk of the galaxy. This scenario implies a maximal interaction between the radio jet and the dense gas in the galaxy disk. \\
The S-shaped morphology and the possible origin of these arms have been discussed in detail in \citet{cooke2000} and \citet{maksym2017}, where radial jet motion, entrainment of patterns in the ISM and accretion disk precession have been suggested as possibilities.
Maps of gas excitation (as traced by the [OIII]/H$\beta$ ratio) and density (low values of the $[SII] \lambda 6716 / \lambda 6731$ ratio denote higher densities) are shown in Fig. \ref{lineratios_sii_oiiihb}. The region of higher gas excitation appears to be biconical, centred on the nucleus, in PA $\sim$50$\degree$, and with an opening angle of 45\arcdeg. This conical morphology is most obvious to the SW. Similar morphologies can be observed in the line-ratio maps of [OIII]/H$\alpha$ and [SII]/H$\alpha$, where the ratio is lower inside the bicone and higher outside, which indicates possible shocks and lower photoionisation outside the cones. However, given that the profile of the H$\alpha$ line is blended with the [NII] doublet in the areas where the kinematics are more complex, these maps are not as reliable as that of [OIII]/H$\beta$. Despite this, the general biconical shape, with high excitation inside the cones, is maintained, although the specific values of the line ratios can change depending on the Gaussian fit or spectral window used to obtain the line fluxes.
Similar results, with Seyfert-like emission inside the biconical region that encloses the S-shaped arms and LINER-like emission outside the S-shaped arms, have been found by \citet{maksym2017}.
It is thus likely that an ionisation cone is present in the galaxy along PA $\sim 50 \degree$, with an inclination similar to that of the galaxy. However, the gas illuminated in this ionisation cone has kinematics which are most likely dominated by radio jet interactions with the gas in the FOV of our GMOS/IFU data. As shown in \citet{cooke2000}, the high-excitation gas region extends to $\sim 20 \arcsec$, and thus a larger FOV will help to constrain the presence and characteristics of this ionisation cone.
\subsection{Outflows}
The masking and multi-component Gaussian fits to the [OIII] emission line discussed in Sect. \ref{sect:results} show four areas of interest that we have labelled 'O1', 'O2', 'O3' and 'O4' (Fig. \ref{gmos_multicomp_mom1}).
The 'O1' region is $1\farcs8$ NE of the nucleus, near the NE radio lobe. The spectrum of this region shows a wide profile ($\sim 500$ km/s) which we have fitted with the broad redshifted component. However, both the broad redshifted component and the narrow component, which follows the expectations of rotation in the galaxy disk, have similar velocities, and thus we do not associate this region with a flow of gas. The $[SII] \lambda 6716 / \lambda 6731$ line-ratio map shows an area of larger electron density near the 'O1' region (Fig. \ref{lineratios_sii_oiiihb}), which can indicate that shocks are being produced by the interaction between the radio lobe and the ionised gas. \\
The 'O2' region, near the NE radio lobe, shows a wide profile that seems to have multiple components (Fig. \ref{multicomp_spectra}) and is fitted by a narrow component and a broad redshifted component; the latter is redshifted by $\sim 180$ km/s from the systemic velocity. We consider this to be an outflow produced by the radio jet in the plane of the disk.
The 'O3' region, near the SW radio lobe, shows a profile with a clear broad redshifted wing. We fitted this profile with a narrow component that seems to follow the rotation of the disk, and a broad component redshifted by $\sim 200$ km/s from the systemic velocity. We consider this redshifted emission to be an outflow produced by the radio jet in the plane of the disk. \\
The 'O4' area shows the widest profiles ($\sim 700$ km/s) observed in our FOV and clearly shows three distinct components, which we fitted with three Gaussian profiles: narrow, broad redshifted and broad blueshifted. This area extends between the nuclear and SW radio components, and along the equatorial region (that is, perpendicular to the radio jet and to the nuclear maser disk rotation axis). This region shows a lower electron density (Fig. \ref{lineratios_sii_oiiihb}); high-velocity blueshifted emission is observed along the whole area, while redshifted emission is observed in a more concentrated subsection closer to the SW radio lobe. We interpret this to be an equatorial (w.r.t. the central engine) outflow.\\
In the following sections we will discuss the NE ('O2') and SW ('O3') jet-driven outflows, and the equatorial outflow ('O4') in more detail.
\subsubsection{Jet driven outflow}
\label{subsect:jetoutflow}
To estimate the ionised gas outflow rate we need the mass of the outflowing gas and the velocity of the outflow, following \citet{lena2015}. The gas mass is given by
\begin{equation}
M_{g} = n_{e} m_{p} V f
\label{eq1_mass}
\end{equation}
where $n_{e}$ is the electron density, $m_{p}$ is the proton mass, $V$ is the volume considered and $f$ is the filling factor, which can be estimated from
\begin{equation}
L_{H_{\alpha}} \sim f n_{e}^{2} j_{H_{\alpha}}(T) V
\label{eq2_fillingfactor}
\end{equation}
where $L_{H_{\alpha}}$ is the H$\alpha$ luminosity emitted by the volume $V$, and $j_{H_{\alpha}} = 3.534 \times 10^{-25}$ erg cm$^{3}$ s$^{-1}$ \citep{Osterbrock1989}. Combining these two equations we obtain:
\begin{equation}
M_{g} = \frac{m_{p}L_{H_{\alpha}}}{n_{e}j_{H_{\alpha}}(T)}
\label{eq3_finalmass}
\end{equation}
To estimate the outflow rate we use an aperture of $0\farcs6$ for each component. For the 'O2' outflow, the mean $[SII] \lambda 6716 / \lambda 6731$ line ratio of the broad redshifted Gaussian component is $0.92$, which corresponds to an electron density of $740$ cm$^{-3}$. The H$\alpha$ luminosity of the same Gaussian component in this aperture is $3.4 \times 10^{40}$ erg s$^{-1}$, and the outflowing gas mass, assuming a luminosity distance to the galaxy of $52$ Mpc, is $11 \times 10^{4} \; M_{\odot}$. The mean deprojected velocity in the aperture is $180$ km/s. This gives an outflow rate for 'O2' of $\dot M = 0.07$ M$_{\odot}/$yr.
For 'O3', the electron density is $980$ cm$^{-3}$, the H$\alpha$ luminosity is $3.6 \times 10^{40}$ erg s$^{-1}$, and the mass is $9 \times 10^{4}$ M$_{\odot}$. The mean deprojected velocity is $210$ km/s. Thus, the outflow rate for 'O3' is $\dot M = 0.06$ M$_{\odot}/$yr.
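These numbers follow directly from Eqn. \ref{eq3_finalmass} plus a crossing-time argument; the short Python sketch below reproduces the 'O2' values, where the adopted $\sim$300 pc crossing length (the projected aperture extent at 0.25 kpc/arcsec) is our assumption for illustration.
\begin{verbatim}
MP = 1.6726e-24     # proton mass [g]
MSUN = 1.989e33     # solar mass [g]
J_HA = 3.534e-25    # H-alpha emission coefficient [erg cm^3 s^-1]
PC = 3.086e18       # parsec [cm]
YR = 3.156e7        # year [s]

def outflow(L_ha, n_e, v_kms, length_pc):
    """Ionised gas mass (Eq. 3) and a simple crossing-time outflow rate.
    The crossing length is our assumption, not a quantity from the fit."""
    m_g = MP * L_ha / (n_e * J_HA) / MSUN             # [Msun]
    t_cross = (length_pc * PC) / (v_kms * 1e5) / YR   # [yr]
    return m_g, m_g / t_cross

# 'O2': n_e = 740 cm^-3, L_Ha = 3.4e40 erg/s, v = 180 km/s.
# A ~300 pc crossing length recovers M_g ~ 1.1e5 Msun and
# Mdot ~ 0.07 Msun/yr, as quoted in the text.
print(outflow(3.4e40, 740.0, 180.0, 300.0))
\end{verbatim}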
\subsubsection{Equatorial outflow}
\label{subsect:eqoutflow}
The [OIII] multi-component Gaussian fits show the presence of a strong broad blueshifted component along the equatorial region, in a strip $\sim 1\arcsec$ wide along PA $147 \degree$. The velocity map of this component is shown in Fig. \ref{gmos_multicomp_mom1} (marked as 'O4'), where high blueshifted velocities can be observed in the equatorial region, perpendicular to both the galaxy disk PA and the radio jet axis. The presence of a weak redshifted component can be inferred from a redshifted wing, visible over a compact fraction of the equatorial region (Fig. \ref{gmos_multicomp_mom1}); it is not present over the entire equatorial region as the blueshifted component is.
This distribution, and the presence of a blueshifted component in the region perpendicular to the radio jet axis, indicates the presence of an equatorial outflow along PA $147 \degree$, in good agreement with the PA of the water maser disk, which is nearly perpendicular to the radio jet axis and extends for $\sim 1$ pc \citep{kondratko2008}. In this scenario the redshifted component lies behind the galaxy disk and thus appears weaker due to obscuration in the disk. As noted by \citet{cooke2000}, a dust lane is observed in the central $0\farcs5$ around the continuum peak (HST F547M filter) along PA $\sim 115\degree$; this dust lane is also seen by \citet{pogge1997} in the HST F606W image. The interaction of a strong equatorial outflow from the accretion disk with the surrounding gas can push the ionised gas outwards.
Although the presence of outflows along the radio jet is more common \citep[e.g.][]{das2007,barbosa2009,storchi-bergmann2010}, equatorial outflows have been included in theoretical models of accretion disk winds \citep{li+ostriker2013} and in outflowing torus models \citep{honig2013,elitzur2012}, and have been observed in NGC 5929 \citep{riffel2014,riffel2015} and NGC 1386 \citep{lena2015c}, where a distinct component involving rotation and/or outflow extends to 2-3\arcsec ($\sim$ 200 pc) on either side of the nucleus, an extension similar to that found in NGC 3393.
In this scenario, and given that the galaxy is almost face-on, we assume that the blueshifted gas is in front of, and possibly leaving, the disk, and that the redshifted gas is behind it.
The mean $[SII] \lambda 6716 / \lambda 6731$ line ratio of the blueshifted component in the equatorial region is $0.93$, which corresponds to an electron density of $720$ cm$^{-3}$.
The H$\alpha$ luminosity of the region is $5 \times 10^{40}$ erg s$^{-1}$. From Eqn. \ref{eq3_finalmass} we obtain a mass of $M_{g} = 2 \times 10^{5} \; M_{\odot}$. The mean observed velocity is $-420$ km/s, and we consider this to be the true velocity of the gas, i.e. this outflowing gas is not in the plane of the disk but leaving it, approaching along the line of sight. \\
The estimated equatorial outflow rate, under these assumptions, is $\dot M = 0.24$ M$_{\odot}/$yr.
An alternative method to derive the equatorial outflow rate is to assume a hollow-cylinder geometry expanding from the centre, with a height of $0\farcs5$. In this case the estimated outflow rate is $7$ M$_{\odot}$/yr, assuming a filling factor $f = 0.01$ \citep[following][]{riffel2015}. The difference between these two outflow rate estimates suggests the need for a filling factor closer to $\sim$0.001, or for a significantly smaller height. \\
\subsection{Bar perturbations}
\label{subsect:bar_descp}
To understand the role of the bar-induced perturbations on the molecular gas kinematics, we have applied the harmonic decomposition formalism described in \citet{sfdz1997} and \citet{wong2004}. It is important to remark that this formalism is based on linear epicycle theory, and is thus only valid for weak bars, as the departure from circular orbits must be small compared to the circular velocity. The line-of-sight velocity towards a given point in a galaxy velocity field can be decomposed in a Fourier series as:
$$ V_{LOS} (R,\psi) = c_{0} + \sum _{j=1}^{n} [ c_{j} \cos(j\psi) + s_{j} \sin(j\psi) ]\sin i \; , $$
\noindent where $(R,\psi)$ are polar coordinates, $i$ is the inclination of the disk, $c_{0}$ corresponds to the systemic velocity ($V_{sys}$), and $j$ is the harmonic number. The coefficients $c_{j}$ and $s_{j}$ are functions of the circular velocity ($V_{c}$), the bar pattern speed ($\Omega_{p}$), the ellipticity of the potential ($\epsilon$) and the bar viewing angle ($\theta_{obs}$), which corresponds to the bar PA measured from the minor axis of the galaxy disk \citep[see e.g. Fig. 2 of][for a definition of this angle]{wong2004}. A bar creates a bisymmetric gravitational potential with a predominant $m=2$ Fourier component, and thus we only consider the harmonics $j=m-1$ and $j=m+1$ \citep{schoenmakers1997}.
For the circular velocity ($V_{c}$) we used the value obtained from the best fit exponential disk potential (Fig. \ref{alma_model_expdisk} and Sect. \ref{subsect:molecular_gas}).
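Schematically, a truncated ($j = 1, 3$) harmonic model of the line-of-sight velocity can be evaluated as in the Python sketch below; in the full model the coefficients are radius-dependent functions of $\Omega_{p}$, $\epsilon$ and $\theta_{obs}$, whereas here they are passed as plain numbers for illustration.
\begin{verbatim}
import numpy as np

def vlos_m2(psi, c1, s1, c3, s3, inc_deg, vsys=0.0):
    """Line-of-sight velocity for an m = 2 (bar) distortion, keeping
    only the j = m-1 = 1 and j = m+1 = 3 harmonics. c1, s1, c3, s3
    are given numbers here; in the full model they vary with radius."""
    sini = np.sin(np.radians(inc_deg))
    return vsys + (c1 * np.cos(psi) + s1 * np.sin(psi)
                   + c3 * np.cos(3 * psi) + s3 * np.sin(3 * psi)) * sini

# Pure circular rotation corresponds to c1 = Vc, all other terms zero:
psi = np.linspace(0.0, 2.0 * np.pi, 360)
v_circ = vlos_m2(psi, c1=180.0, s1=0.0, c3=0.0, s3=0.0, inc_deg=25.0)
v_bar = vlos_m2(psi, c1=180.0, s1=20.0, c3=10.0, s3=-15.0, inc_deg=25.0)
\end{verbatim}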
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{combined_fig13_2.png}
\caption{Left: Velocity fields resulting from the bar perturbation model described in Sect. \ref{subsect:bar_descp}, when varying $\Omega_{p}$ (x-axis, values from 45 to 85 km/s/kpc, with a 10 km/s/kpc step) and $\lambda$ (y-axis, values of 0.05, 0.1, 0.2, 0.3, and 0.4). All models use the intrinsic rotation curve derived from the best-fit exponential disk model with parameters as explained in Sect. \ref{subsect:molecular_gas}, and the bar parameters used were those of the large-scale bar: $PA_{bar} = 160\degree$ and $\epsilon=0.15$. All panels follow the colour bar shown at the top (units of km/s). Top right: velocity field of the best-fit bar perturbation model (see text) for the large-scale (outside the grey circle in Fig. \ref{alma_moments}) CO velocity field, following the colour bar above the panel (units of km/s). The values of the exponential disk parameters, disk PA and inclination were fixed to the values outlined in Fig. \ref{alma_model_expdisk}, and the bar PA ($PA_{bar} = 160\degree$) and ellipticity ($\epsilon = 0.15$) were set to the values of the large-scale bar. The best-fit values for $\lambda$ and $\Omega_{p}$ are 0.1 and 54 km/s/kpc, respectively. Overlaid black contours show the integrated intensity (moment 0) of CO J:2-1. Bottom right: residual (observed - model) velocity map for the bar perturbation model, following the colour bar above the panel (units of km/s): only the large-scale velocity residuals are shown.
}
\label{bar_models}
\end{figure*}
A constant damping term ($\lambda$), simulating a radial frictional force, is introduced \citep[e.g.][]{lindblad+lindblad1994,wada1994} to avoid sudden changes in orbit axes and thus multiple orbits at a given point. Usual values for this parameter are in the range $0.05 < \lambda < 0.4$.
For the case of NGC 3393, $\Omega_{p}$ and $\lambda$ are the parameters with the largest uncertainties. We thus built a library of models with different parameter combinations, a section of which is shown in Fig. \ref{bar_models}, where the two parameters are varied over $ 0.01 < \lambda < 0.2$ and $ 25 < \Omega_{p} < 85$ km/s/kpc. The primary effect of changing $\Omega_{p}$ is the variation of the resonance radii, but it also affects the magnitude of the twists in the isovelocity contours. The effect of increasing $\lambda$ is to smooth the sudden twists seen near the resonances. \\
We attempted to fit the full CO velocity field with this perturbation-theory model, with $\Omega_{p}$ and $\lambda$ as free parameters, and the perturbation terms set by the large-scale bar PA and ellipticity. However, we could not find any suitable set of parameters (even when the bar PA was varied) which could match both the outer CO velocities and the highly perturbed velocities of the inner SE CO component. We are thus forced to conclude that a single $m=2$ (i.e. bar) mode is unable to explain the complex molecular kinematics seen in NGC 3393. The remaining option is thus to separately fit the perturbations in the large-scale molecular kinematics (driven by the large-scale bar) and in the inner molecular kinematics (driven by the nuclear bar), which we do in the following subsections.
\subsubsection{Large-scale bar}
\label{subsect:large_scale_bar}
To obtain the (large-scale) bar perturbation model which best fits the outer CO kinematics, we fit the observed CO velocity field (outside the grey circle in Fig. \ref{alma_moments}) to the predictions of our bar-perturbation models, using the same least-squares minimization routine as above. We fix the exponential disk model parameters to those obtained above. We also fix the ellipticity of the potential ($\epsilon = 0.15$) and the bar PA ($PA_{bar} = 160 \degree$) to the values obtained for the large-scale bar by \citet{jungwiert1997}. The free parameters are thus the damping term ($\lambda$) and the bar pattern speed ($\Omega_{p}$).
The resulting best-fit model and its velocity residuals are shown in Fig. \ref{bar_models}. The r.m.s. of this residual map is lower than that obtained when only the pure rotational model of the exponential disk is used, though the difference is not large. To further test the suitability of the best-fit model we compare the model with data extracted along the minor axis in the pv-diagram shown in Fig. \ref{pvd_alma_secomp}. In the outer region, both model and data are close to zero, as expected along the minor axis; however, the model does not fit the data in the inner region, where the large velocity gradient of the SE feature is observed. This exercise shows that a bar perturbation model can maintain velocities similar to the exponential disk model in the outer region while having a different PA and velocity distribution inside the resonance radius.\\
The resonances in the best-fitted model correspond to the ILR (at r = 3.7\arcsec), the CR (r = 13.5\arcsec), and the OLR (r = 20\arcsec); the CR radius is in good agreement with the length of the large-scale bar (SMA $\sim$ 13\arcsec according to \citealt{alonso-herrero1998,lasker2016}). The ILR encloses the nuclear region, including both the SE and NW features, and the OLR encloses the outer distribution of molecular gas. These two resonances could help explain the difference in PA of the ALMA and GMOS data compared to the large-scale kinematic position angle \citep[PA = 68$\degree$, according to][]{cooke2000}. It is also interesting to note that the kinematics enclosed within the ILR resemble the observed stellar velocity map, especially the S-shaped zero-velocity line (Fig. \ref{stellar_kinematics}). \\
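The resonance radii quoted here follow from the usual conditions $\Omega = \Omega_{p}$ (CR) and $\Omega \mp \kappa/2 = \Omega_{p}$ (ILR/OLR); a generic Python sketch of this calculation, shown with a toy rotation curve rather than our fitted model, is given below.
\begin{verbatim}
import numpy as np

def resonance_radii(R, vc, omega_p):
    """ILR, CR and OLR radii for pattern speed omega_p [km/s/kpc], given
    a rotation curve vc(R) [km/s] sampled on radii R [kpc]. Uses the
    epicyclic frequency kappa^2 = R d(Omega^2)/dR + 4 Omega^2; np.nan
    is returned when a resonance does not exist for this curve."""
    omega = vc / R
    kappa = np.sqrt(R * np.gradient(omega**2, R) + 4.0 * omega**2)

    def crossing(f):               # first radius where f changes sign
        s = np.where(np.diff(np.sign(f)))[0]
        return R[s[0]] if s.size else np.nan

    ilr = crossing(omega - kappa / 2.0 - omega_p)
    cr = crossing(omega - omega_p)
    olr = crossing(omega + kappa / 2.0 - omega_p)
    return ilr, cr, olr

# Toy rotation curve (not our fitted model), Omega_p = 30 km/s/kpc:
R = np.linspace(0.1, 15.0, 600)
vc = 180.0 * R / np.sqrt(R**2 + 0.5**2)
print(resonance_radii(R, vc, omega_p=30.0))
\end{verbatim}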
The primary limitation in the analysis above is the sparse filling factor of CO velocities over the FOV. Deeper integrations with ALMA are thus crucially required. Alternatively, deep observations of the ionised gas kinematics over the full galaxy (using e.g., VLT/MUSE) are required. Indeed, the latter have been recently obtained by another team.
\subsubsection{Nuclear bar}
\label{subsect:nuclear_bar}
The presence of an additional nuclear bar has been suggested by NIR imaging \citep{alonso-herrero1998} and by light distribution modelling from HST imaging \citep{lasker2016}. The extension (SMA) of this nuclear bar is $\sim 2\arcsec$, and it is offset from the large-scale bar by $10\degree - 20 \degree$. \\
Considering that both the PA and extension of the inner features of our CO J:2-1 data agree with those of the nuclear bar, we build bar perturbation models for the nuclear bar (Fig. \ref{chosen_innerbar_model}), assuming that the nuclear bar is decoupled from the large-scale bar and thus has a larger pattern speed. The disk parameters (exponential disk parameters, disk PA and inclination) were kept fixed to the values used above. Typical ranges were assumed for the free parameters: $0.15 < \lambda < 0.4$, $ 0.1 < \epsilon < 0.8$, $ 10 < \Omega_{p} < 200 $ km/s/kpc. The best-fit values obtained were $\lambda = 0.1$, $\Omega_{p} = 73$ km/s/kpc, $\epsilon = 0.35$, and $\theta_{obs} = -85\degree$. The latter value implies that the large-scale and nuclear bars are almost aligned and that the nuclear bar is completely embedded in the large-scale bar, which is consistent with the results of \citet{alonso-herrero1998}, where the PA difference between both bars is in the range $10\degree-20 \degree$.
The resulting model is overlaid on the pv-diagram along the minor axis (Fig. \ref{pvd_alma_secomp}), where it can be seen that it follows the same gradient as the inner SE feature and the Keplerian-like fall-off, while it is also close to the gradient of the inner NW feature. \\
Considering that we can reproduce a gradient similar to that observed in the inner SE feature, it is possible that an interaction between the large-scale bar and the nuclear bar exists. If these components are kinematically coupled, they can share a resonance: usually the ILR of the large-scale bar coincides with the CR of the nuclear bar. If this is the case for NGC 3393, it is possible that the observed molecular gas is near the ILR of the large-scale bar, where it can accumulate into rings. However, the presence of an inner bar could, in principle, allow the gas to overcome this limit and continue to flow to the central regions \citep{shlosman1989}.
This simple toy model indicates that it is feasible for an interaction between the large-scale and nuclear bars to produce a feature similar to the one observed in the inner region of our ALMA data. However, the kinematical coupling between both bars, and the consequent complex modelling of its effects, is beyond the scope of this paper. \\
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig14_2.png}
\caption{Top: pv-diagram centred on the SE component in the CO maps, extracted along a PA of $-50 \degree$. Bottom: pv-diagram extracted from a natural weighted, 4-channel-averaged cube with 10 km/s spectral resolution, with a slit centred on the galaxy nucleus and extracted along the minor axis. The black line shows the prediction of the large-scale bar perturbation model described in Sect. \ref{subsect:large_scale_bar} (see Fig. \ref{bar_models}) and the blue line shows the prediction of the nuclear bar model. Colour bar units are Jy/beam.
}
\label{pvd_alma_secomp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{inner_bar_model.png}
\caption{Velocity field of the best-fit bar perturbation model for the nuclear bar, shown in colour following the colour bar above the panel (units of km/s). Here the fit was made only to the CO velocity field of the inner region (inside the grey circle in Fig. \ref{alma_moments}). The values of the exponential disk parameters, disk PA, and inclination were fixed to the values outlined in Fig. \ref{alma_model_expdisk}. The best-fit values obtained were $\theta_{obs} = -85\degree$, $\epsilon = 0.35$, $\lambda = 0.1$ and $\Omega_{p} = 73$ km/s/kpc.}
\label{chosen_innerbar_model}
\end{figure}
\subsection{SMBH}
Evidence for a secondary SMBH has been presented by \citet{fabbiano2011}. This BH is located $0\farcs 6$ SW of the nucleus (Fig. \ref{onecomp_gh_gmos}). As can be seen in the moment 1 maps of the ionised emission lines, there is no clear kinematical component connected to the position of this posited secondary BH. However, the kinematics of the ionised gas are deeply perturbed by the radio jet, and thus any kinematical signature of a secondary SMBH can easily be lost. \\
An alternative explanation for the unusual gradient observed in the SE component of the molecular gas might be linked to this posited secondary SMBH. A pv-diagram along a PA of $50 \degree$ centred on the feature (Fig. \ref{pvd_alma_secomp}) shows a mostly smooth gradient that goes from $\sim - 100$ km/s to $\sim 100$ km/s, which could indicate a nuclear disk related to the secondary SMBH. However, the kinematic and morphological centre of the SE feature does not correspond to the posited position of the secondary BH. Alternatively, the inner CO emission (i.e. both the SE and NW inner components) could be centred around the secondary SMBH; although we cannot rule out this case, it would require a very special geometry. Simpler explanations, such as the nuclear bar perturbations, are thus favoured.
The posited secondary SMBH is located between the nuclear and the SW source, and while there is no clear detection of another source at that position, the presence of another continuum source here cannot be discarded unequivocally. Thus, while we find no evidence supporting the existence of this secondary SMBH, it is important to note that both the observational and interpretive constraints are not strong enough to disprove its presence.
If we assume that the equatorial outflow is axisymmetric (though we have only detected the blueshifted gas in this outflow), then the mass outflow rate for the equatorial outflow can reach $\dot M \sim 0.6$ M$_{\odot}/$yr. The outflow rates presented here are lower limits, as we only consider the ionised gas. \\
To contrast with the outflow rates estimated here, we now estimate the inflow rate necessary to feed the SMBH. The bolometric luminosity estimated from the 2-10 keV luminosity is $2.4 \times 10^{44}-7.6 \times 10^{44}$ erg/s \citep{kondratko2008}. Assuming a standard accretion efficiency of $10\%$, the required mass accretion rate is $\dot M = 0.04-0.1$ M$_{\odot}/$yr, a factor $\sim 8$ lower than our estimated outflow rate, which is not unusual in nearby galaxies \citep{barbosa2009,muller-sanchez2011}. \\
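For reference, this follows from the standard radiative-efficiency relation (with the $\eta = 0.1$ efficiency assumed above):
$$\dot M_{\rm acc} = \frac{L_{\rm bol}}{\eta c^{2}} \approx 0.04 \left(\frac{L_{\rm bol}}{2.4\times 10^{44}\ \mathrm{erg\,s^{-1}}}\right)\ \mathrm{M_{\odot}\,yr^{-1}}.$$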
The difference between the inflow and outflow rates may indicate that the outflowing gas does not originate from close to the central engine, but is instead circumnuclear ISM gas being pushed away by the radio jet. While we do not find direct evidence of inflows in the ionised gas, if the SE component of the observed CO J:2-1 molecular gas were inflowing with a velocity of 10 km/s, the necessary inflow rate could be achieved.
The total CO mass of the SE feature is M(H$_{2}$) = 5.4 $\times 10^{7}$ M$_{\odot}$, assuming $\alpha_{CO} = 4.6 $ M$_{\odot}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$. If all the molecular mass enclosed in the SE feature were inflowing with a velocity of 10 km/s, the potential accretion rate produced would be 0.32 M$_{\odot}$/yr. While we do not find direct evidence that the molecular gas is inflowing, a velocity of 10 km/s would fall below the detection limit of our analysis, and the interaction between the large-scale and nuclear bars could make this inflow possible.
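This estimate corresponds to a back-of-the-envelope relation of the form
$$\dot M_{\rm in} \simeq \frac{M(\mathrm{H_{2}})\, v_{\rm in}}{R},$$
where $R$ is the radial extent over which the gas flows (the adopted geometry is an assumption here); with $M(\mathrm{H_{2}}) = 5.4\times 10^{7}$ M$_{\odot}$ and $v_{\rm in} = 10$ km/s, the quoted 0.32 M$_{\odot}$/yr corresponds to $R \approx 1.7$ kpc.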
\section{Conclusions}
\label{sect:conclusions}
We have analysed the kinematics of the stars, ionised gas, and molecular gas in the central kpcs of the Seyfert 2 galaxy NGC 3393 using optical integral field observations from GEMINI-GMOS/IFU and ALMA. From our detailed analysis of these data we conclude that:
- NGC 3393 presents a complex multi-phase ISM, in which strong interactions between the nuclear radio jet and the ionised gas produce complex kinematics. We have found that it is necessary to fit the observed emission line profiles with multiple Gaussian components. We identify three components in the ionised gas, which we refer to as the narrow, broad redshifted, and broad blueshifted components.
- The narrow ionised gas component has a low velocity dispersion ($\sigma < 115$ km/s) and, more or less, follows pure rotation in the galaxy disk; nevertheless, this component is perturbed in the regions near the radio lobes.
- The broad redshifted component ($ 115 < \sigma < 230 $ km/s) can be observed in regions near the radio lobes. We identify two outflows in this component, named 'O2' and 'O3'. 'O2' appears to be associated with the NE radio lobe, while 'O3' is near the SW radio lobe. The 'O2' outflow is redshifted on the far side of the galaxy, which may indicate gas being pushed away by the radio jet. As the SW radio lobe appears to be the approaching component of the radio jet, it is possible that this jet is starting to leave the galaxy in the region of the 'O3' outflow; in that case the redshifted gas observed can be gas in the disk being pushed away from the line of sight by the radio jet, or, if the jet remains in the disk, the outflow is produced by gas being pushed by the jet inside the disk plane.
- The broad blueshifted component ($ 115 < \sigma < 230 $ km/s) presents large blueshifted velocities distributed along the equatorial region, perpendicular to the radio jet axis. A weaker redshifted wing is also visible in the same region. We interpret this component as being part of an equatorial outflow, expanding perpendicular to the radio jet axis, whose emission is attenuated by dust in the plane of the galaxy.
- From the measured velocities, $H_{\alpha}$ fluxes, and electron densities of the outflowing components, we estimate a total outflow rate of $\dot M \sim 0.13$ M$_{\odot}$/yr for the jet-driven outflows, and $\dot M \sim 0.24$ M$_{\odot}$/yr for the equatorial outflow.
If we consider a symmetric component for the equatorial outflow, the total outflow rate can reach $\dot M \sim 0.6$ M$_{\odot}$/yr for the ionised gas. This outflow rate is $\sim 8$ times larger than the accretion rate necessary to fuel the AGN. While we found no direct evidence for gas inflows, we note that the necessary inflow rate can be provided if the SE component of the CO J:2-1 emission is inflowing at a velocity of $\sim 10$ km/s, a velocity close to the detection limit of our observations and analysis.
- We were forced to analyse the kinematics of the CO J:2-1 emission separately in two regions: an inner region within $5''$ of the nucleus, and the region outside this circle. This was necessary since we could not find a global model which could fit both regions together. We do not detect CO J:2-1 emission at the position of the SMBH or at the position of the posited secondary SMBH.
- The outer region of the CO J:2-1 emission seems to trace the rotation in the outer disk, and can be fitted with an exponential disk rotation model, though obvious residuals remain. To understand the role of the large-scale bar in the kinematics observed in the CO J:2-1 emission, we applied the harmonic decomposition formalism to the CO velocity field. Specifically, we fitted a bar-perturbation model to the outer region of our CO J:2-1 velocity field. We found, over a range of different $\Omega_{p}$ and $\lambda$, the presence of a resonance between the inner and outer regions, and a resonance that encloses the outer region of the CO emission. These resonances could explain the difference in PA found in the ALMA and GEMINI-GMOS/IFU data compared to the large-scale kinematics observed by \citet{cooke2000}, and the observed distribution of CO J:2-1 emission.
This model, however, does not fit the observed CO kinematics of the inner region.
- The inner region of the CO J:2-1 emission presents highly disturbed kinematics, with an off-nuclear velocity gradient centred on the SE component. We found that this gradient cannot be explained by the large-scale bar model, nor by the presence of the posited secondary SMBH or any disk related to it. We fitted a second bar perturbation model based on the parameters of the nuclear bar and found a good fit to the inner region kinematics. This toy model indicates that the kinematics observed in the inner region of the CO J:2-1 emission can be attributed solely, or at least dominantly, to perturbation by the nuclear bar, together with interactions between the large-scale and nuclear bars.
\section*{Acknowledgements}
This work was supported by CONICYT PhD fellowship 2015-21151141.
NN acknowledges Fondecyt 1171506, Conicyt ALMA 3114007, and BASAL PFB-06/2006.
This paper makes use of the following ALMA data: ADS/JAO.ALMA\# 2015.1.00086.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'ia e Innovaci\'on Productiva (Argentina), and Minist\'erio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil).
\bibliographystyle{mnras}
\section{Introduction}
A graph $H$ is an immersion of a graph $G$ if it can be obtained from $G$ by removing vertices or edges, and splitting off adjacent pairs of edges.
The class of all graphs was proved to be well-quasi-ordered under
the immersion relation by Robertson and Seymour in the
last paper of their Graph Minors series~\cite{RobertsonS10}. Admittedly, this work was
mostly dedicated to minors rather than immersions and has been the source of
many theorems regarding the structure of graphs excluding some graph $H$ as a minor. Moreover, the minor relation has been extensively studied over
the past two decades and many structural results have been proven for minors, with interesting algorithmic consequences (see, for example,~\cite{RobertsonS-XVI,RobertsonS-GMXIII,RobertsonST94,DemaineFHT05sube,Kostochka84,Thomason01}).
However, structural results for immersions started appearing only recently.
In 2011, DeVos et al. proved that if the minimum degree of a graph $G$ is at least $200t$, then $G$ contains the complete graph on $t$ vertices as an immersion~\cite{2011arXiv1101.2630D}.
In~\cite{FerraraGTW08}, Ferrara et al.
provided a lower bound (depending on the graph $H$) on the minimum degree of a
graph $G$ that ensures that $H$ is contained in $G$ as an immersion.
Furthermore, Wollan recently proved a structural theorem for graphs excluding complete graphs as immersions
as well as a sufficient condition such that any graph which satisfies the condition admits a wall as an immersion~\cite{abs-1302-3867}. The result in~\cite{abs-1302-3867} can be seen as an immersion counterpart of the grid exclusion theorem~\cite{RobertsonST94}, stated for walls instead of grids and using an alternative graph parameter instead of treewidth.
In terms of graph colorings, Abu-Khzam and Langston in~\cite{Abu-KhzamL03} provided evidence supporting the immersion ordering analog of Hadwiger's Conjecture, that is, the conjecture stating that if the chromatic number of a graph $G$ is at least $t$, then $G$ contains the complete graph on $t$ vertices as an immersion, and proved it for $t\leq 4$. For $t=5,6,7$, see~\cite{Lescure1988325,1213.05137}.
For algorithmic results on immersions, see~\cite{KawarabayashiK12,BelmonteHKPT12,GiannopoulouSZ12,GroheKMW11}.
In this paper, we prove structural results for the immersion relation on graphs embeddable on a fixed surface.
In particular, we show that if $G$ is a graph that is embeddable on a surface of Euler genus $\gamma$ and
$H$ is a connected graph, then one of the following is true:
either $G$ has bounded treewidth (by a function that depends only on $\gamma$ and $H$), or its edge connectivity is bounded by the maximum degree of $H$, or it contains $H$ as a (strong) immersion.
Furthermore, we refine our results
to obtain a counterpart of the grid exclusion theorem for
immersions. In particular,
we prove (Theorem~\ref{thm:exclgrdimrs}) that there exists a function $f:\Bbb{N}\rightarrow\Bbb{N}$ such that if $G$ is a $4$-edge-connected graph embedded on a surface of Euler genus $\gamma$ and the treewidth of $G$ is at least $f(\gamma)\cdot k$, then $G$ contains the $(k\times k)$-grid as an immersion.
Notice that the edge connectivity requirement is necessary here, as big treewidth alone is not enough to ensure the existence of a graph with a vertex of degree 4 as an immersion. For example, although a wall of height at least $h$ has treewidth at least $h$, it does not contain the complete graph on $t$ vertices as an immersion, for any $t\geq 5$, since all of its vertices have degree at most 3.
Finally, our results imply that when restricted to graphs of sufficiently big treewidth embeddable on a fixed surface, large edge connectivity forces the existence of a large clique as an immersion.
Our result reveals several aspects of the behavior of the immersion relation on surface embeddable
graphs. The proofs exploit variants of the grid exclusion theorem for surfaces
proved in~\cite{FominGT11} and~\cite{GiannopoulouT11} and the results of Biedl and Kaufmann~\cite{BiedlK97} on optimal orthogonal drawings of
graphs.
The paper is organized as follows. In Section~\ref{prels} we give some basic definitions and preliminaries. In Section~\ref{prelsom}
we prove a series of combinatorial results. Based on the results of Section~\ref{prelsom}, we prove the main theorem and derive its corollaries
in Section~\ref{maint}.
\newpage
\section{Preliminaries}
\label{prels}
For every positive integer $n$, let $[n]$ denote the set $\{1,2,\dots,n\}$.
A {\em graph} $G$ is a pair $(V,E)$ where $V$ is a finite set, called the {\em vertex set}
and denoted by $V(G)$, and $E$ is a set of 2-subsets of $V$, called the {\em edge set}
and denoted by $E(G)$. If we allow $E$ to be a multiset then $G$ is called a multigraph.
Let $G$ be a graph. For a vertex $v$, we denote by $N_G(v)$ its \emph{(open) neighborhood},
that is, the set of vertices which are adjacent to $v$, and by $E_{G}(v)$ the set of edges containing $v$.
Notice that if $G$ is a multigraph then $|N_{G}(v)|\leq |E_{G}(v)|$.
The degree of a vertex $v$ is $\deg_{G}(v)=|E_{G}(v)|$.
We denote by $\Delta(G)$ the maximum degree over all vertices of $G$.
If $U\subseteq V(G)$ (respectively $u\in V(G)$ or $E\subseteq E(G)$ or $e\in E(G)$) then
$G-U$ (respectively $G-u$ or $G-E$ or $G-e$) is the graph obtained from $G$
by the removal of vertices of $U$ (respectively of vertex $u$ or edges of
$E$ or of the edge $e$). We say that a graph $H$ is a subgraph of a graph $G$, denoted by $H\subseteq G$,
if $H$ can be obtained from $G$ after deleting edges and vertices.
We say that a graph $H$ is an {\em immersion} of a graph $G$ (or $H$ is {\em immersed} in $G$),
$H\leq_{\text{im}} G$, if there is an injective mapping $f: V(H) \to V(G)$ such that, for every edge $\{u,v\}$ of $H$,
there is a path from $f(u)$ to $f(v)$ in $G$ and for any two distinct edges of $H$ the corresponding
paths in $G$ are {\em edge-disjoint}, that is, they do not share common edges.
The function $f$ is called a {\em model of $H$ in $G$}.
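As a simple example, $K_{3}$ is immersed in every cycle $C_{n}$, $n\geq 3$: mapping the three vertices of $K_{3}$ to any three distinct vertices of $C_{n}$, the three arcs between consecutive images are pairwise edge-disjoint paths realising the three edges of $K_{3}$; note that, for $n>3$, $K_{3}$ is not a subgraph of $C_{n}$.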
Let $P$ be a path and $v,u\in V(P)$. We denote by $P[v,u]$ the subpath of $P$ with endvertices $v$ and $u$.
Given two paths $P_{1}$ and $P_{2}$ which share a common endpoint $v$, we say that they are {\em well-arranged} if their common vertices appear in the same order in both paths.
A {\em tree decomposition} of a graph $G$ is a pair $(T, B)$, where $T$ is a tree and $B$ is a function
that maps every vertex $v\in V(T)$ to a subset $B_{v}$ of $V(G)$ such that:
\begin{enumerate}[(i)]
\item $\bigcup_{v\in V(T)}B_{v}=V(G)$,
\item for every edge $e$ of $G$ there exists a vertex $t$ in $T$ such that $e \subseteq B_t$, and
\item for every $v\in V(G)$, if $r,s \in V(T)$ and $v\in B_{r} \cap B_{s}$, then for every vertex $t$ on the
unique path between $r$ and $s$ in $T$, $v\in B_t$.
\end{enumerate}
The width of a tree decomposition $(T,B)$ is width$(T,B) := \max\{|B_{v}|-1\mid v\in V(T)\}$ and the treewidth
of a graph $G$ is the minimum of width$(T,B)$ over all tree decompositions $(T,B)$ of $G$.
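For example, every tree on at least two vertices has treewidth 1, every cycle has treewidth 2, and the $(k\times k)$-grid $\Gamma_{k,k}$ has treewidth exactly $k$.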
\paragraph{\bf Surfaces.}
A \emph{surface} $\Sigma$ is a compact 2-manifold without boundary
(we always consider connected surfaces).
Whenever we refer to a {\em
$\Sigma$-embedded graph} $G$ we consider a 2-cell embedding of
$G$ in $\Sigma$. To simplify notations, we do not distinguish
between a vertex of $G$ and the point of $\Sigma$ used in the
drawing to represent the vertex or between an edge and the line
representing it. We also consider a graph $G$ embedded in
$\Sigma$ as the union of the points corresponding to its vertices
and edges. That way, a subgraph $H$ of $G$ can be seen as a graph
$H$, where $H\subseteq G$ in $\Sigma$.
Recall that $\Delta \subseteq \Sigma$ is
an open (respectively closed) disc if it is homeomorphic to
$\{(x,y):x^2 +y^2< 1\}$ (respectively $\{(x,y):x^2 +y^2\leq 1\}$).
The {\em Euler genus} of a non-orientable surface $\Sigma$
is equal to the non-orientable genus
$\tilde{g}(\Sigma)$ (or the crosscap number).
The {\em Euler genus} of an orientable surface
$\Sigma$ is $2{g}(\Sigma)$, where ${g}(\Sigma)$ is the orientable genus
of $\Sigma$. We refer to the book of Mohar and Thomassen \cite{MoharT01} for
more details on graph embeddings.
The {\em Euler genus} of a graph $G$ (denoted by ${\mathbf{eg}}(G)$) is the minimum integer $\gamma$ such
that $G$ can be embedded on a surface of Euler genus $\gamma$.
\paragraph{\bf Walls.} Let $k$ and $r$ be positive integers where $k,r\geq 2$. The
\emph{$(k\times r)$-grid} $\Gamma_{k,r}$ is the Cartesian product of two paths of
lengths $k-1$ and $r-1$ respectively.
A \emph{wall of height $k$}, $k\geq 1$, is the graph obtained from a
$((k+1)\times (2\cdot k+2))$-grid with vertices $(x,y)$,
$x\in\{1,\dots,2\cdot k+2\}$, $y\in\{1,\dots,k+1\}$, after the removal of the
``vertical'' edges $\{(x,y),(x,y+1)\}$ for odd $x+y$, and then the removal of
all vertices of degree 1.
We denote such a wall by $W_{k}$.
The {\em corners} of the wall $W_{k}$ are the vertices $c_{1}=(1,1)$, $c_{2}=(2\cdot k+1,1)$, $c_{3}=(2\cdot k + 1 + (k+1\mod 2),k+1)$ and $c_{4}=(1+(k+1\mod 2),k+1)$. (The square vertices in Figure~\ref{fig:layerw}.)
A {\em subdivided wall $W$ of height $k$} is a wall obtained from $W_{k}$ after replacing some of its
edges by paths without common internal vertices. We call the resulting graph $W$ a {\em subdivision}
of $W_{k}$ and the new vertices {\em subdivision vertices}. The non-subdivision vertices are called {\em original}.
The {\em perimeter} $P$ of a subdivided wall (grid) is the cycle defined by its boundary.
Let $W$ be a subdivided wall in a graph $G$ and $K'$ be the connected component of $G\setminus P$
that contains $W\setminus P$. The {\em compass} $K$ of $W$ in $G$ is the graph $G[V(K')\cup V(P)]$.
Observe that $W$ is a subgraph of $K$ and $K$ is connected.
The {\em layers} of a subdivided wall $W$ of height $k$ are recursively defined as follows.
The first layer of $W$, denoted by $L_{1}$, is its perimeter. For $i=2,\cdots,\lceil \frac{k}{2}\rceil$, the $i$-th layer of $W$, denoted by $L_{i}$, is the
$(i-1)$-th layer of the subwall $W'$ obtained from $W$ after removing from $W$ its perimeter and (recursively) all occurring
vertices of degree 1 (see Figure~\ref{fig:layerw}).
\begin{figure}[h]
\begin{center}
\scalebox{0.5}{\input{1.pdf_t}}
\end{center}
\caption{The first (magenta-dashed) and second (red-dotted) layers of a wall of height 5}
\label{fig:layerw}
\end{figure}
We denote by $A_{i}$ the annulus defined by the cycles $L_{i}$ and $L_{i+1}$, that is, by $i$-th and $(i+1)$-th layer, $i\in [\lceil \frac{k}{2}\rceil-1]$.
Given an annulus $A$ defined by two cycles $C_{1}$ and $C_{2}$, we denote by $A^{^{o}}$ the interior of $A$, that is, $A\setminus (C_{1}\cup C_{2})$.
A subdivided wall of height $k$ is called {\em tight} if
\begin{enumerate}
\item the closed disk defined by the innermost ($\lceil \frac{k}{2}\rceil$-th) layer of $W$ is edge-maximal (for reasons of uniformity we will
denote this disk by $A_{\lceil \frac{k}{2}\rceil}$),
\item for every $i\in [\lceil \frac{k}{2}\rceil-1]$ the annulus $A_{i}$ is edge-maximal under the condition that $A_{i+1}$ is edge-maximal.
\end{enumerate}
If $W$ is a subdivided wall of height $k$, we call {\em brick} of $W$ any facial cycle whose non-subdivided
counterpart in $W_{h}$ has length 6. We say that two bricks are {\em neighbors} if their intersection contains an edge.
Let $W_{k}$ be a wall. We denote by $P^{(h)}_{j}$ the shortest path connecting vertices $(1,j)$ and $(2\cdot k+2,j)$ and call these paths the {\em horizontal paths of $W_{k}$}.
Note that these paths are vertex-disjoint.
We call the paths $P^{(h)}_{k+1}$ and
$P^{(h)}_{1}$ the {\em southern path of $W_{k}$} and {\em northern path of $W_{k}$} respectively.
Similarly, we denote by $P^{(v)}_{i}$ the shortest path connecting vertices $(i,1)$ and $(i,k+1)$ with the assumption that, for
$i<2\cdot k+2$, $P^{(v)}_{i}$ contains only vertices $(x,y)$ with $x=i,i+1$.
Notice that there exists a unique subfamily ${\cal P}_{v}$ of $\{P^{(v)}_{i}\mid i<2\cdot k+2\}$ of $k+1$ vertical paths with one endpoint in the southern path of $W_{k}$ and one in the northern path of $W_{k}$.
We call these paths the {\em vertical paths of $W_{k}$} and denote them by $P^{[v]}_{i}$,
$i\in [k+1]$, where $P^{(v)}_{1}=P^{[v]}_{1}$ and $P^{(v)}_{2\cdot k+1}=P^{[v]}_{k+1}$. (See Figure~\ref{fig:vrtpths}.)
\begin{figure}[h]
\begin{center}
\scalebox{0.5}{\input{vrtpths.pdf_t}}
\caption{The vertical paths of a wall of height 5}
\label{fig:vrtpths}
\end{center}
\end{figure}
The paths $P^{[v]}_{1}$ and $P^{[v]}_{k+1}$ are called the {\em western path of $W_{k}$} and the
{\em eastern path of $W_{k}$} respectively. Note that the perimeter of the wall can alternatively be
defined as the cycle $P^{(h)}_{1}\cup P^{(h)}_{k+1}\cup P^{[v]}_{1}\cup P^{[v]}_{k+1}$.
Notice now that each vertex $u \in V(W_{k})\setminus V(P)$ is contained in exactly one vertical path, denoted by $P^{(v)}_{u}$, and in exactly one horizontal path, denoted by $P^{(h)}_{u}$, of $W_{k}$. If $W$ is a subdivision of $W_{k}$, we will use the same notation for
the paths obtained by the subdivisions of the corresponding paths of $W_{k}$, with the further assumption that $u$ is an original vertex of $W$. \\
Given a wall $W$ and a layer $L$ of $W$ different from the perimeter of $W$, let $W'$ be the subwall of $W$ with perimeter $L$; $W'$ is also called the {\em subwall of $W$ defined by $L$}.
We call the following vertices the {\em important vertices} of $L$:
the original vertices of $W$ that belong to $L$ and have degree 2 in the underlying non-subdivided wall of $W'$ but are not the corners of $W'$ (where we assume that $W'$ shares the original vertices of $W$).
(See Figure~\ref{fig:imptvrt}.)
\begin{figure}[h]
\begin{center}
\scalebox{0.5}{\input{impvrt.pdf_t}}
\caption{The important vertices of the second layer of a wall of height 5}
\label{fig:imptvrt}
\end{center}
\end{figure}
\begin{observation}\label{obs:impvrt}
A layer $L$ of a wall $W$ that is different from its perimeter and defines a subwall $W'$ of $W$ of height $k$ contains exactly $4k-2$ important vertices.
\end{observation}
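As a sanity check of how this count is used later: in the proof of Lemma~\ref{vertxdispths} the observation is applied to a layer defining a subwall of height $2k^{2}+1$, giving exactly $4(2k^{2}+1)-2=8k^{2}+2$ important vertices.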
From Lemma 6 in~\cite{FominGT11} and Lemma 3 in~\cite{GiannopoulouT11} we obtain the following.
\begin{lemma}\label{lem:twbndwll}
Let $G$ be a graph embedded in a surface of Euler genus $\gamma$.
If ${\mathbf{tw}}(G)\geq 48\cdot (\gamma+1)^{\frac{3}{2}}\cdot (k+5)$, then $G$ contains
as a subgraph a subdivided wall of height $k$, whose compass in $G$ is
embedded in a closed disk $\Delta$.
\end{lemma}
\paragraph{Confluent paths.} Let $G$ be a graph embedded in some surface $\Sigma$
and let $x\in V(G)$. We define a {\em disk around $x$}
as any open disk $\Delta_{x}$ with the property that each point in $\Delta_{x}\cap G$ is either $x$ or belongs to the edges incident to $x$.
Let $P_{1}$ and $P_{2}$ be two edge-disjoint
paths in $G$.
We say that $P_{1}$ and $P_{2}$ are {\em confluent} if
for every $x\in V(P_{1})\cap V(P_{2})$, that is not an endpoint
of $P_{1}$ or $P_{2}$, and for every disk $\Delta_{x}$ around $x$,
one of the connected components of
the set $\Delta_{x}\setminus P_{1}$ does not contain any point of $P_{2}$.
We also say that a collection of paths is {\em confluent} if the paths in it are pairwise confluent.
Moreover, given two edge-disjoint paths $P_{1}$ and $P_{2}$ in $G$ we say that a vertex $x\in V(P_{1})\cap V(P_{2})$ that is not an endpoint of $P_{1}$ or $P_{2}$ is an {\em overlapping vertex of $P_{1}$ and $P_{2}$} if there exists a $\Delta_{x}$ around $x$ such that both connected components of $\Delta_{x}\setminus P_{1}$ contain points of $P_{2}$. (See, Figure~\ref{fig:cnflexmpl}.) For a family of paths ${\cal P}$, a vertex $v$ of a path $P\in {\cal P}$ is called an {\em overlapping vertex of ${\cal P}$} if there exists a path $P'\in {\cal P}$ such that $v$ is an overlapping vertex of $P$ and $P'$.
\begin{figure}[h]
\begin{center}
\scalebox{0.79}{\input{cnflexmpl.pdf_t}}
\caption{The vertex $x$ is an overlapping vertex of the two paths on the left (dashed and dotted), while it is not an overlapping vertex of the paths on the right.}
\label{fig:cnflexmpl}
\end{center}
\end{figure}
\paragraph{Orthogonal drawings.} An {\em orthogonal drawing} of a graph $G$ in a grid $\Gamma$ is a mapping which maps vertices
$v\in V(G)$ to subgrids $\Gamma(v)$ (called {\em boxes}) such that for every $u_{1},u_{2}\in V(G)$
with $u_{1}\neq u_{2}$, $\Gamma(u_{1})\cap \Gamma(u_{2})=\emptyset$, and edges
$\{u_{1},u_{2}\}\in E(G)$ to $(u_{1}',u_{2}')$-paths whose internal vertices belong to
$\Gamma - \bigcup_{v\in V(G)}\Gamma(v)$, their endpoints $u_{i}'$ (called {\em joining vertices of $\Gamma(u_{i})$}) belong to the perimeter of
$\Gamma(u_{i})$, $i\in [2]$, and for every two distinct edges $e_{i}\in E(G)$, $i\in [2]$, the corresponding
paths are edge-disjoint.
We need the following result.
\begin{lemma}[\cite{BiedlK97}] \label{lem:grdembtamtol}
If $G$ is a simple graph with $n$ vertices and $m$ edges, then it admits an orthogonal drawing in an
$(\frac{m+n}{2}\times \frac{m+n}{2})$-grid. Furthermore, the box size of each vertex
$v$ is $\frac{\deg(v)+1}{2}\times \frac{\deg(v)+1}{2}$.
\end{lemma}
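As an illustration of these bounds, for $G=K_{4}$ (four vertices, six edges) the lemma yields an orthogonal drawing in a $(5\times 5)$-grid in which every vertex occupies a $(2\times 2)$-box.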
\section{Preliminary Combinatorial Lemmata}
\label{prelsom}
Before proving the main result of this section we first state the following lemma which we will need later on.
\begin{lemma}[\hspace{-.1mm}\cite{kurim}]
\label{tllds}
Let $r$ be a positive integer. If $G$ is a graph embedded in a surface $\Sigma$, $v,v_{1},v_{2},\dots,v_{r}\in V(G)$, and ${\cal P}$
is a collection of $r$ edge-disjoint paths from $v$ to $v_{1},v_{2},\dots, v_{r}$ in $G$, then $G$ contains a confluent collection ${\cal P}'$ of $r$ edge-disjoint
paths from $v$ to $v_{1},v_{2},\dots,v_{r}$ such that $E(\bigcup_{P\in{\cal P}'}P)\subseteq E(\bigcup_{P\in{\cal P}}P)$.
\end{lemma}
\paragraph{Detachment tree of ${\cal P}$ in $u$.}
Let $G$ be a graph embedded in a closed disk $\Delta$, $v,v_{1},v_{2},\dots,v_{k}$ be distinct vertices of $G$, and ${\cal P}=\{P_{i}\mid i\in [k]\}$ be a family of $k$ confluent edge-disjoint paths such that $P_{i}$ is a path from $v$ to $v_{i}$, $i \in[k]$. Let also $u\in V(G)\setminus \{v,v_{i}\mid i\in [k]\}$ be an internal vertex of at least two paths in ${\cal P}$.
Let ${\cal P}_{u}= \{P_{i_{1}}$, $P_{i_{2}}$, $\dots$, $P_{i_{r}}\}$ denote the family of paths in ${\cal P}$ that contain $u$ and $\Delta_{u}$ be a disk around $u$.
Given any edge $e$ with $u\in e$ we denote by $u_{e}$ its common point with the boundary of $\Delta_{u}$.
Moreover, we denote by $e_{i_{j}}^{1}$ and $e_{i_{j}}^{2}$ the edges of $P_{i_{j}}$ incident to $u$, $j\in [r]$.
We construct a tree $T_{u}$ in the following way and call it {\em detachment tree of ${\cal P}$ in $u$}.
Consider the outerplanar graph obtained from the boundary of $\Delta_{u}$ by adding the edges $\{u_{e_{i_{j}}^{1}},u_{e_{i_{j}}^{2}}\}$, $j\in [r]$.
We subdivide the edges $\{u_{e_{i_{j}}^{1}},u_{e_{i_{j}}^{2}}\}$, $j\in [r]$, resulting in a planar graph. For every bounded face $f$ of the graph, let $V(f)$ denote the set of vertices that belong to $f$. We add a vertex $v_{f}$ in its interior and we make it adjacent to the vertices of $(V(f)\cap \{u_{e}\mid u\in e\})\setminus\{u_{e_{i_{j}}^{1}},u_{e_{i_{j}}^{2}}\mid j\in [r] \}$. Finally, we remove the edges that lie in the boundary of $\Delta_{u}$. We call this tree $T_{u}$. Notice that for every $e$ with $u\in e$, the vertex $u_{e}$ is a leaf of $T_{u}$. (See Figure~\ref{fig:dtchtr}.)
We replace $u$ by $T_{u}$ in the following way. We first subdivide every edge $e$ of $G$ incident to $u$, and denote by $u_{e}$ the vertex added after the subdivision of the edge $e$. We denote by $G_{s}$ the resulting graph.
Consider now the graph $G^{r}=(G_{s}\setminus u)\cup T_{u}$ (where, without loss of generality, we assume that $V(G_{s}\setminus u)\cap V(T_{u})=\{u_{e}\mid u\in e\}$). The graph $G^{r}$ is called {\em the graph obtained from $G$ by replacing $u$ with $T_{u}$}.
\begin{figure}[h]
\begin{center}
\scalebox{.7}{\input{dtchtr2.pdf_t}}
\caption{Example of the construction of a detachment tree.}
\label{fig:dtchtr}
\end{center}
\end{figure}
\begin{observation}\label{obs:walinvrnc}
Let $k,h$ be positive integers and $G$ be a multigraph containing as a subgraph a subdivided wall $W$ of height $h$, whose compass $C$ is embedded in a closed disk $\Delta$. Furthermore, let $v$, $v_{i}$, $i\in [k]$, be vertices of $W$ such that there exists a confluent family ${\cal P}$ of $k$ edge-disjoint paths from $v$ to the vertices $v_{i}$, $i\in [k]$. Finally, let $u\in V(C)\setminus\{v,v_{i}\mid i\in [k]\}$ be a vertex belonging to more than one path of ${\cal P}$. Then the graph $G^{r}$ obtained from $G$ by replacing $u$ with $T_{u}$ contains as a subgraph a subdivided wall $W'$ of height $h$, whose compass is embedded in $\Delta$, and there exists a family ${\cal P}'$ of $k$ confluent edge-disjoint paths from $v$ to $v_{i}$, $i\in [k]$, in $W'$ whose paths avoid $u$.
\end{observation}
\begin{proof}
Notice first that it is enough to prove the observation for the case where $u\in V(W)$. Let $e_{1}$, $e_{2}$ (and possibly $e_{3}$) be the edges incident to $u$ that also belong to $W$. Notice now that the vertices $u_{e_{1}}$, $u_{e_{2}}$ (and $u_{e_{3}}$) are leaves of $T_{u}$. Thus, by a folklore result, there exists a vertex $u' \in V(T_{u})$ such that there exist 2 (or 3) internally vertex-disjoint paths from $u'$ to $u_{e_{1}}$ and $u_{e_{2}}$ (and possibly $u_{e_{3}}$).
\end{proof}
We now state the following auxiliary definitions.
Let $G$ be a multigraph that contains a subdivided wall $W$ of height $k$ whose compass is embedded in a closed disk.
Let $v\in A_{\lceil \frac{k}{2}\rceil}$, that is, let $v$ be a vertex contained in the closed disk defined by the innermost layer of $W$, and let $P$ be a path from $v$ to the perimeter of $W$.
We denote by $P^{j}$ the maximal subpath of $P$ that contains $v$ and is entirely contained in the wall defined by $L_{j}$. Moreover, we denote by $y_{P}^{j}$ its endpoint in $L_{j}$ and call it {\em outgoing vertex of $P$ in $L_{j}$}.
Notice that $x_{P}^{j}$ and $y_{P}^{j}$ are not necessarily distinct vertices.
\begin{lemma}\label{lem:vrtxord}
Let $\lambda$ and $k$ be positive integers. Let $G$ be a graph and $W$ be a tight subdivided wall of $G$ of height $k$, whose
compass is embedded in a closed disk $\Delta$. Let also $v$ be a vertex such that $v\in A_{\lceil \frac{k}{2}\rceil}$. If there exist $\lambda$ vertex-disjoint paths
$P_{i}$, $i\in [\lambda]$, from $v$ to vertices of the perimeter, then, for every $i\in [\lambda]$ and every $j$ for which the vertices $y_{P_{i}}^{j}$ and $x_{P_{i}}^{j-1}$ are defined, there is a brick $B$ of $W$ with $B\cap A^{^{o}}_{j-1}\neq \emptyset$ that contains both $y_{P_{i}}^{j}$ and $x_{P_{i}}^{j-1}$.
\end{lemma}
\begin{proof}
Assume the contrary. Then it is easy to see that we can construct an annulus $A'_{j}$ such that
$A_{j}\subsetneq A_{j}'$ and $|E(A_{j})|< |E(A_{j}')|$, a contradiction to the tightness of the wall. (See Figure~\ref{fig:anlexmpl}.)
\end{proof}
\begin{figure}[h]
\begin{center}
\scalebox{0.73}{\input{anlexmpl.pdf_t}}
\caption{We replace the dotted line of the wall by the dashed line.}
\label{fig:anlexmpl}
\end{center}
\end{figure}
\begin{lemma}\label{vertxdispths}
Let $k$ be a positive integer and $G$ be a multigraph that contains as a subgraph a subdivided wall $W$ of height at least $4\cdot k^{2} +1$,
whose compass $K$ is embedded in a closed disk $\Delta$. Let also $V$ be a set of $k$ vertices lying in the perimeter $P$ of $W$,
whose mutual distance in the underlying non-subdivided wall is at least 2. If there exist a vertex $v\in A_{2\cdot k^{2}+1}$ and $k$ internally vertex-disjoint paths from
$v$ to vertices of $P$, then there exist $k$ internally vertex-disjoint paths from $v$ to the vertices of $V$ in $K$.
\end{lemma}
\begin{proof}
Assume first, without loss of generality, that the wall $W$ is tight. Let then $P_{1},P_{2},\dots,P_{k}$
be the paths from $v$ to $P$ and let $[P_{1},P_{2},\dots,P_{k},P_{1}]$ be the clockwise cyclic ordering
according to which they appear in $W$.
Our objective is to reroute the paths $P_{i}$, $i\in [k]$, so that they end up at the vertices of $V$.
To do so, our first step is to identify a layer of the wall for which there exist two consecutive paths whose incoming vertices on the layer are ``sufficiently far apart''.
Let $j_{0}=k^{2}+1$. Consider the layer $L_{j_{0}}$ and
for every $i\in [k]$ let $T_{i}$ denote the path of $L_{j_{0}}$ starting from $x_{i}^{j_{0}}$ and ending in $x_{i+1}^{j_{0}}$ (considered clockwise),
that is, the path of $L_{j_{0}}$ starting from the incoming vertex of $P_{i}$ in $L_{j_{0}}$ and ending to the incoming vertex of $P_{i+1}$ in $L_{j_{0}}$,
where in the case $i=k$ we abuse notation and assume that $x_{k+1}^{j_{0}}=x_{1}^{j_{0}}$ (see Figure~\ref{fig:imppths}).
Let also $i_{0}\in [k]$ be the index such that the path $T_{i_{0}}$
contains the maximum number of important vertices amongst the $T_{i}$'s.
Without loss of generality we may assume that $i_{0}=\lceil\frac{k}{2}\rceil$. From Observation~\ref{obs:impvrt}, as $L_{j_{0}}$ defines a subwall of $W$ of height $2 k^{2}+1$, $L_{j_{0}}$ contains exactly $8k^{2}+2$ important vertices.
Thus, at least $7k$ important vertices are internally contained in $T_{i_{0}}$.
This concludes the first step of the proof.
\begin{figure}[h]
\begin{center}
\scalebox{0.8}{\input{imppths.pdf_t}}
\caption{The $T_{i}$'s, $i\in [k]$}
\label{fig:imppths}
\end{center}
\end{figure}
Let now $j_{1}=k+1$. In the next step, using the part of the wall that is contained in $A[L_{j_{0}},L_{j_{1}}]$, that is,
in the annulus between the $j_{0}$-th and the $j_{1}$-th layer of the wall,
we find $k$ internally vertex-disjoint paths from the incoming vertices of the paths in $L_{j_{0}}$ to $k$ consecutive important vertices of the
$(k+1)$-th layer of the wall. These are the paths that will allow us to reroute the original paths.
Continuing the proof, let $u_{1},u_{2},\dots,u_{k}$ be a set of successive important vertices appearing clockwise in $T_{i_{0}}$
such that the paths $T_{i_{0}}[x_{i_{0}}^{j_{0}},u_{1}]$ and $T_{i_{0}}[u_{k},x_{i_{0}+1}^{j_{0}}]$ internally contain at least $3k$ important vertices.
Notice that, without loss of generality, we may assume that the vertices $u_{i}$, $i\in [k]$, belong to the northern path of $W'$.
Recall here that each original vertex $w$ of $W'\subseteq W\setminus P$ is contained in exactly one vertical path $P^{(v)}_{w}$ of $W$.
For every $i\in [k]$ we assign the path $R_{i}$ to the vertex $u_{i}$ in the following way. Let $R_{i}$ be the maximal subpath of $P_{u_{i}}^{(v)}$ whose endpoints are $u_{i}$ and the important vertex of $L_{j_{1}}$ that also belongs to $P_{u_{i}}^{(v)}$, which from now on we will denote by $u_{i}^{f}$.
Note here that the paths $R_{i}$, $i\in [k]$, are vertex-disjoint
and do not contain any of the vertices belonging to the interior of the disk defined by $L_{j_{0}}$ in the compass of $W$
(See, for example, Figure~\ref{fig:pthsexmpl}).
\begin{figure}[h]
\begin{center}
\scalebox{0.45}{\input{pthsexmpl.pdf_t}}
\caption{The important vertices of $L_{j_{0}}$, the layers $L_{1}'$ and $L_{2}'$, and the paths $R_{i}$}
\label{fig:pthsexmpl}
\end{center}
\end{figure}
\noindent Notice now that $T_{i_{0}}$, and thus $L_{j_{0}}$, contains a path $F_{1}$ from $x_{i_{0}}^{j_{0}}$ to $u_{i_{0}}$ and a path $F_{2}$ from $u_{i_{0}+1}$
to $x_{i_{0}+1}^{j_{0}}$ that are vertex-disjoint and do not contain vertices of any path other than $P_{i_{0}}$ and $P_{i_{0}+1}$.
Consider now the $\lceil \frac{k-2}{2}\rceil$ consecutive layers of $W$ preceding $L_{j_{0}}$, that is, the layers $L_{j}'=L_{j_{0}-j}$, $j\in [\lceil \frac{k-2}{2}\rceil]$. For every $j\in [\lceil \frac{k-2}{2}\rceil]$ let $u_{i_{0}-j}^{j}$
be the first vertex at which the path $R_{i_{0} -j}$ meets $L_{j}'$, starting from $u_{i_{0} -j}$,
and $u_{i_{0}+1+j}^{j}$ be the first vertex at which the path $R_{i_{0}+1+j}$
meets $L_{j}'$, starting from $u_{i_{0}+1+j}$. (See, for example, the vertices inside the squares in Figure~\ref{fig:pthsexmpl}.)
We need to prove the following. \\
\noindent{\em Claim:}
For every $j\in [\lceil\frac{k-2}{2}\rceil]$, there exist two vertex-disjoint paths $F^{1}_{j}$ and $F^{2}_{j}$ between the pairs of vertices $(x_{i_{0}-j}^{j_{0}-j},u_{i_{0}-j}^{j})$ and $(x_{i_{0}+1+j}^{j_{0}-j},u_{i_{0}+1+j}^{j})$ that do not intersect the paths $\{R_{l}\mid i_{0}-j < l< i_{0}+1+j\}$.\\
\noindent{\em Proof of Claim:}
Indeed, this holds by inductively applying the combination of Lemma~\ref{lem:vrtxord} with the assertion that
for every $j\leq 2\cdot k^{2}+1$ and every $p,q$ with $1<p<q<k$, the outgoing vertices of $P_{p-1}$ and $P_{q+1}$ and the incoming vertices of $P_{p}$ and $P_{q}$ in the layer $L_{j}$, $y_{p-1}^{j}$, $y_{q+1}^{j}$, $x_{p}^{j}$, and $x_{q}^{j}$ respectively appear in $L_{j}$ respecting the clockwise order
$$[y_{p-1}^{j},x_{p}^{j},x_{q}^{j},y_{q+1}^{j}]$$
in the tight wall $W$. This completes the proof of the claim.\hfill$\diamond$\\
\noindent We now construct the following paths.
First, let $$Q_{i_{0}}=F_{1}\cup R_{i_{0}}$$ and $$Q_{i_{0}+1}=F_{2}\cup R_{i_{0}+1},$$ that is, $Q_{i_{0}}$ is the union of the paths $F_{1}$ and $R_{i_{0}}$, and $Q_{i_{0}+1}$ is the union of the paths $F_{2}$ and $R_{i_{0}+1}$.
Then, for every $j\in [\lceil \frac{k-2}{2}\rceil]$, let $$Q_{i_{0}-j}=P_{i_{0}-j}[x_{i_{0}-j}^{j_{0}},x_{i_{0}-j}^{j_{0}-j}]\cup F_{j}^{1}\cup R_{i_{0} -j}[u_{i_{0}-j}^{j},u_{i_{0}-j}^{f}],$$ that is, $Q_{i_{0}-j}$ is the union of the following paths: (a) the subpath of $P_{i_{0}-j}$ between its incoming vertex in the $j_{0}$-th layer and its incoming vertex in the $(j_{0}-j)$-th layer, (b) the path $F_{j}^{1}$ defined in the claim above, and (c) the subpath of $R_{i_{0}-j}$ between the vertices $u_{i_{0}-j}^{j}$ and $u_{i_{0}-j}^{f}$.
Finally, for every $j\in [\lceil \frac{k-2}{2}\rceil]$ ($j\in [\lceil \frac{k-2}{2}\rceil-1]$, if $k$ is odd) let $$Q_{i_{0}+1+j}=P_{i_{0}+1+j}[x_{i_{0}+1+j}^{j_{0}},x_{i_{0}+1+j}^{j_{0}-j}]\cup F_{j}^{2}\cup R_{i_{0} +1+j}[u_{i_{0}+1+j}^{j},u_{i_{0}+1+j}^{f}],$$
that is, $Q_{i_{0}+1+j}$ is the union of the following three paths: (a) the subpath of $P_{i_{0}+1+j}$ between its incoming vertex in the $j_{0}$-th layer and its incoming vertex in the $(j_{0}-j)$-th layer, (b) the path $F_{j}^{2}$ defined in the claim above, and (c) the subpath of $R_{i_{0}+1+j}$ between the vertices $u_{i_{0}+1+j}^{j}$ and $u_{i_{0}+1+j}^{f}$.
From the claim above and Lemma~\ref{lem:vrtxord} we get that the above paths are vertex-disjoint. This concludes the second step of the proof.
We claim now that we may reroute the paths $P_{i}$, $i\in [k]$, in such a way that they end up at the vertices $u_{i}^{f}$, $i\in [k]$. Indeed, let $P_{i}'=P_{i}[v,x_{i}^{j_{0}}]\cup Q_{i}$, $i\in [k]$. By construction, these paths are vertex-disjoint and end up at the vertices $u_{i}^{f}$, $i\in [k]$. (For a rough estimation of the position of the paths in the wall see Figure~\ref{fig:pthsexmpl2}.)
\begin{figure}[h]
\begin{center}
\scalebox{0.45}{\input{pthsexmpl2.pdf_t}}
\caption{Part of the rerouted paths}
\label{fig:pthsexmpl2}
\end{center}
\end{figure}
Concluding the proof,
as the mutual distance of the vertices of $V$ in the underlying non-subdivided wall is at least 2, it is easy to notice that in the annulus defined by $L_{j_{1}}$ and $L_{1}$ there exist $k$ vertex-disjoint paths from the vertices $u_{i}^{f}$, $i\in [k]$, to the vertices of $V$.
\end{proof}
We now prove the main result of this section.
\begin{lemma}\label{lem:fixhighrdeg}
Let $k$ be a positive integer and $G$ be a $k$-edge-connected multigraph embedded in a surface of
Eüler genus $\gamma$ that contains a subdivided wall $W$ of height at least $4\cdot k^{2}+1$ as a subgraph, whose
compass $C$ is embedded in a closed disk $\Delta$. Let also $S$ be a set of vertices in the
perimeter of $W$ whose mutual distance in the underlying non-subdivided wall is at least 2. If $|S|\leq k$ then
there exist a vertex $v$ in $W$ and $|S|$ edge-disjoint paths from $v$ to the vertices of $S$.
\end{lemma}
\begin{proof}
Let $v\in A_{2k^{2}+1}$ and $u\in L_{1}$ be vertices belonging to the closed disk defined by the layer $L_{2 k^{2}+1}$ and to the perimeter of the wall, respectively.
As $G$ is $k$-edge-connected there exist $k$ edge-disjoint paths $P_{1},P_{2},\dots,P_{k}$ connecting $v$ and $u$.
By Lemma~\ref{tllds}, we may assume that the paths are confluent. Let ${\cal P}'=\{P_{i}'\mid i\in [k]\}$ be the family of paths $P_{i}'=P_{i}[v,x_{i}^{1}]$, $i\in [k]$,
that is, let ${\cal P}'$ be the family of paths consisting of the subpaths of $P_{i}$, $i\in [k]$, between $v$ and the first vertex at which they meet the perimeter of $W$.
Let $V$ be the set of vertices in $V(C)\setminus (V(L_{1})\cup\{v\})$ that are contained in more than one path in ${\cal P}'$. We obtain the graph $\hat{G}$ by replacing every vertex $z\in V$ with the detachment tree
of ${\cal P}'$ in $z$. From Observation~\ref{obs:walinvrnc}, $\hat{G}$ contains a wall $\hat{W}$ of height $4k^{2}+1$ whose compass is embedded in $\Delta$. Notice also that, as no changes have occurred in the perimeter of $W$, $W$ and $\hat{W}$ share the same perimeter. Furthermore, $\hat{W}$ contains $k$ internally vertex-disjoint paths from $v$ to the perimeter of $\hat{W}$. Thus, from Lemma~\ref{vertxdispths}, $\hat{W}$ contains $k$ internally vertex-disjoint paths from $v$ to the vertices of $S$. It is now easy to see, by contracting each one of the trees $T_{z}$, $z\in V$, to a single vertex, that $W$ contains $k$ edge-disjoint paths from $v$ to $S$.
\end{proof}
\section{Main Theorem}
\label{maint}
By combining Lemmata~\ref{lem:fixhighrdeg},~\ref{lem:twbndwll} and~\ref{lem:grdembtamtol} we obtain the following.
\begin{theorem}\label{mainthm}
There exists a computable function $f:\mathbb{N}\rightarrow \mathbb{N}$ such that for every
multigraph $G$ of Euler genus $\gamma$ and every connected graph $H$ one of the following holds:
\begin{enumerate}
\item ${\mathbf{tw}}(G)\leq f(\gamma)\cdot \lambda \cdot k$, where $\lambda=\Delta(H)$ and $k={\mathbf{m}}(H)$,
\item $G$ is not $\lambda$-edge-connected,
\item $H\leq_{\text{im}} G$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let
$$f(\gamma,\lambda,k)=48\cdot \left(\gamma+1\right)^{\frac{3}{2}}\cdot \left(\frac{4\left(4\lambda+1\right)k}{2}+5\right),$$
\noindent and assume that ${\mathbf{tw}}(G)\geq f(\gamma,\lambda,k)$ and $G$ is $\lambda$-edge-connected.
From Lemma~\ref{lem:twbndwll}, we obtain that $G$ contains as a subgraph a subdivided wall $W$ of height $2(4\lambda +1)k$ whose compass is embedded in a closed disk.
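Indeed, $f(\gamma,\lambda,k)=48\cdot\left(\gamma+1\right)^{\frac{3}{2}}\cdot\left(2(4\lambda+1)k+5\right)$, so Lemma~\ref{lem:twbndwll} applies with the height parameter set to $2(4\lambda+1)k$.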
In what follows we will construct a model of $H$ in the wall.
From Lemma~\ref{lem:grdembtamtol}, $H$ admits an orthogonal drawing $\psi$ in an $$\left(\frac{{\mathbf{m}}(H)+{\mathbf{n}}(H)}{2}\times \frac{{\mathbf{m}}(H)+{\mathbf{n}}(H)}{2}\right)\text{-grid},$$ where the box size of each vertex $v\in V(H)$ is $\frac{\deg(v)+1}{2}\times\frac{\deg(v)+1}{2}$.
Notice now that $\psi$ can be scaled to an orthogonal drawing $\phi$ in the grid $\Gamma$ of size $$\left(\frac{2\left(4\lambda+1\right)\left({\mathbf{m}}(H)+{\mathbf{n}}(H)\right)}{2}+1\right)\times 2\left(\frac{2\left(4\lambda+1\right)\left({\mathbf{m}}(H)+{\mathbf{n}}(H)\right)+2}{2}+1\right),$$
where the box size of each vertex is $(4(\deg(v))^{2}+2)\times 2(4(\deg(v))^{2}+2)$, the joining vertices of each box have mutual distance at least 2 in the perimeter of the box and no joining vertex is a corner of the box.
Moreover, for every vertex $u\in {\mathbf{Im}}(\phi)\setminus \bigcup_{v\in V(H)}\Gamma(v)$ of degree $4$, that is, for every vertex in the image of $\phi$ that is the intersection of two paths, there is a box in the grid of size $(4\deg(u)^{2}+2)\times 2(4\deg(u)^{2}+2)$, denoted by $Q(u)$, containing only this vertex and vertices of the paths it belongs to.
We denote by $u^{i}$, $i\in [4]$, the vertices of ${\mathbf{Im}}(\phi)$ belonging to the boundary of $Q(u)$ and, for uniformity, also call them {\em joining vertices of $Q(u)$}.
Towards finding a model of $H$ in the wall, observe
that the grid $\Gamma$ contains as a subgraph a wall of height $\left(4\lambda+1\right)\left({\mathbf{m}}(H)+{\mathbf{n}}(H)\right)$
such that each one of the boxes, either $\Gamma(v)$, $v\in V(H)$, or $Q(v)$, where $v$ is the intersection of two paths in the image of $\phi$, contains a wall $W(v)$ of height $4\deg(v)^{2}+1$, and the joining vertices of $\Gamma(v)$ (respectively, the vertices $v^{i}$, $i\in [4]$) belong to the perimeter of this wall and have distance at least 2 in it. Consider now the mapping of $H$ to $W$ where the boxes $\Gamma(v)$ and $Q(v)$ are mapped into subwalls $W(v)$ of $W$ of height $4\deg(v)^{2}+1$, joined together by vertex-disjoint paths as given by the orthogonal drawing $\phi$.
From Lemma~\ref{lem:fixhighrdeg}, as every $W(v)$ has height $4\deg(v)^{2}+1$ and its compass is embedded in a closed disk, there exist a vertex $z_{v}\in V(W(v))$ and $\deg(v)$ edge-disjoint paths from $z_{v}$ to the joining vertices of $W(v)$.
It is now easy to see that $W$ contains a model of $H$.
\end{proof}
\noindent Notice now that in the case when $\Delta(H)=O(1)$ we get the following.
\begin{theorem}\label{bdthm}
There exists a computable function $f:\mathbb{N}\rightarrow \mathbb{N}$ such that for every
multigraph $G$ of Euler genus $\gamma$ and every connected graph $H$ one of the following holds:
\begin{enumerate}
\item ${\mathbf{tw}}(G)\leq f(\gamma) \cdot {\mathbf{n}}(H)$,
\item $G$ is not $\Delta(H)$-edge-connected,
\item $H\leq_{\text{im}} G$.
\end{enumerate}
\end{theorem}
\noindent The following two corollaries are immediate consequences of Theorems~\ref{mainthm} and~\ref{bdthm}.
\begin{corollary}
There exists a computable function $f:\mathbb{N}\rightarrow \mathbb{N}$ such that
for every multigraph $G$ of Euler genus $\gamma$ and every $k\in \mathbb{N}$ one of the following holds:
\begin{enumerate}
\item ${\mathbf{tw}}(G)\leq f(\gamma)\cdot k^{3}$,
\item $G$ is not $k$-edge-connected,
\item $K_{k+1} \leq_{\text{im}} G$.
\end{enumerate}
\end{corollary}
\begin{corollary}
There exists a computable function $f:\mathbb{N}\rightarrow \mathbb{N}$ such that
for every multigraph $G$ of Euler genus $\gamma$ and every $k\in \mathbb{N}$ one of the following holds:
\begin{enumerate}
\item ${\mathbf{tw}}(G)\leq f(\gamma)\cdot k^{2}$,
\item $G$ is not $4$-edge-connected,
\item the $(k\times k)$-grid is an immersion of $G$.
\end{enumerate}
\end{corollary}
\noindent However, when $H$ is the grid, a straightforward argument gives the following stronger result.
\begin{theorem}\label{thm:exclgrdimrs}
There exists a computable function $f:\mathbb{N}\rightarrow \mathbb{N}$ such that
for every multigraph $G$ that is embedded in a surface of Euler genus $\gamma$ and every $k\in \mathbb{N}$ one of the following holds:
\begin{enumerate}
\item ${\mathbf{tw}}(G)\leq f(\gamma)\cdot k$.
\item $G$ is not $4$-edge-connected.
\item the $(k\times k)$-grid is an immersion of $G$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let
$$f(\gamma,k)=48\cdot (\gamma+1)^{\frac{3}{2}}\cdot ((4^{3}+3)\cdot k+5).$$
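For instance, in the planar case ($\gamma=0$) the bound reads $f(0,k)=48\cdot\left(67k+5\right)=3216\,k+240$.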
\noindent Assume that $G$ is $4$-edge-connected and that ${\mathbf{tw}}(G)\geq f(\gamma,k)$.
As ${\mathbf{tw}}(G)\geq f(\gamma,k)$, from Lemma~\ref{lem:twbndwll} it follows that $G$ contains as a subgraph
a subdivided wall $W$ of height $(4^{3}+3)k$, whose compass in $G$ is embedded in a closed disk $\Delta$.
Consider the $k^{2}$ subwalls of $W$ of height $(4^{3}+1)$ that occur after removing from it the paths
$P^{(h)}_{(4^{3}+3)i}$ and $P^{[v]}_{(4^{3}+3)j}$, $i,j\in [k]$.
For every $i,j\in [k]$, we denote by $W_{(i,j)}$ the subwall that is contained inside the disk that is defined by the paths
$P^{(h)}_{(4^{3}+3)(i-1)}$, $P^{(h)}_{(4^{3}+3)i}$, $P^{[v]}_{(4^{3}+3)(j-1)}$, and $P^{[v]}_{(4^{3}+3)j}$.
In the cases where $i=1$ or $j=1$, we abuse notation and consider as $P^{(h)}_{(4^{3}+3)(i-1)}$ and $P^{[v]}_{(4^{3}+3)(j-1)}$ the paths $P^{(h)}_{1}$ and $P^{[v]}_{1}$, respectively.
From Lemma~\ref{lem:fixhighrdeg}, applied with $k=4$, and the hypothesis that $G$ is $4$-edge-connected, it follows that
in the compass of each one of the subwalls $\{W_{(i,j)}\mid i,j\in [k]\}$ we may find a vertex $v_{(i,j)}$ and four edge-disjoint paths from $v_{(i,j)}$
to the vertices $v_{(i,j)}^{n}$, $v_{(i,j)}^{s}$, $v_{(i,j)}^{w}$, and $v_{(i,j)}^{e}$, that lie in the
northern, southern, western, and eastern path of the wall, respectively.
Finally, we consider the function $g((i,j))=v_{(i,j)}$ that maps the vertex $(i,j)$ of the $(k\times k)$-grid to the vertex $v_{(i,j)}$ of the wall $W_{(i,j)}$.
It is now easy to see that $g$ is an immersion model of the $(k\times k)$-grid in the compass of the wall $W$, and the theorem follows as $f$ is linear in $k$.
\end{proof}
\section{Conclusions}
In this paper, we proved sufficient conditions for the containment of any connected graph $H$ as an immersion in graphs of bounded genus.
We remark that our proofs also hold if we instead consider the strong immersion
relation, where we additionally ask that the paths of the model $f$ of $H$ in $G$ that
correspond to the edges of $H$ are internally disjoint from $f(V(H))$.
In our results, it appears that both the big treewidth and
the edge connectivity requirements are necessary in order to enforce the appearance of
a graph as an immersion. A natural open problem is to investigate the existence
of counterparts of our results for the topological minor relation.
Certainly, here edge connectivity should be replaced by vertex connectivity. However, all we can report at this point
is that conditions stronger than
just asking for sufficiently big treewidth are required for such an extension.
\section{Introduction}
We consider the problem of graph estimation in Gaussian graphical models. The graph of a random vector $(X_{1},\ldots,X_{p})$ represents the conditional dependences between the variables $X_{1},\ldots,X_{p}$. More precisely, assume that $(X_{1},\ldots,X_{p})$ is a centered Gaussian vector with covariance matrix $\Sigma$. Then, the law ${\mathbb{P}}_{\Sigma}$ of $(X_{1},\ldots,X_{p})$ is a graphical model according to a graph $G$ if, for any nodes $a$ and $b$ that are not neighbours in $G$, the variables $X_{a}$ and $X_{b}$ are independent conditionally on the remaining variables. There exists a unique graph $G_{\Sigma}$ which is minimal for the inclusion and such that ${\mathbb{P}}_{\Sigma}$ is a graphical model according to $G_{\Sigma}$. Our aim is to estimate this graph from an $n$-sample of ${\mathbb{P}}_{\Sigma}$. We will pay special attention to the case where $n<p$. In what follows, we shall always assume that $\Sigma$ is non-singular.
The problem of graph estimation in Gaussian graphical models when the sample size $n$ is smaller (or much smaller) than the number $p$ of variables is motivated by applications in post-genomics and is currently an active field of research in statistics. Biotechnological
developments in proteomics or transcriptomics make it possible to produce a
huge amount of data. One of the challenges
for the statistician is to infer from these data the regulation network of a
family of genes (or proteins). The task is difficult due to the very high-dimensional nature of the data and
the small sample size. For example, microarrays measure the expression
levels of a few thousand genes while the sample size $n$
is no more than a few tens.
Gaussian graphical modeling has been proposed as a tool to handle this issue, see e.g. the papers of
Kishino and Waddell~\cite{KW}, Dobra {\it et al}~\cite{Detal}, Wu and
Ye~\cite{WY}. The gene expression levels are modeled
by a Gaussian law ${\mathbb{P}}_{\Sigma}$ and the regulation network of the genes
is then depicted by the graph $G_{\Sigma}$ of the conditional dependences.
Many estimation procedures have been proposed recently to perform graph estimation in Gaussian graphical models when $n<p$. A first class of procedures is based on multiple testing on empirical partial covariances. Actually, if $G_{\Sigma}$ denotes the (minimal) graph of the law ${\mathbb{P}}_{\Sigma}$, there is an edge in $G_{\Sigma}$ between $a$ and $b$, if and only if the conditional covariance of $X_{a}$ and $X_{b}$ given all the other variables is non-zero. When $n<p$, the empirical version of the latter conditional covariance cannot be computed, so several papers suggest using instead the empirical conditional covariance of $X_{a}$ and $X_{b}$ given $\ac{X_{s},\ s\in S}$ for some subsets $S$ of $\ac{1,\ldots,p}\setminus\ac{a,b}$ with cardinality less than $n-2$. A multiple testing procedure is then applied to detect whether the conditional covariance $\mbox{cov}(X_{a},X_{b} | X_{s},\ s\in S)$ is non-zero. Wille and B\"uhlmann \cite{WB06} restrict to the sets $S$ of cardinality at most one, Castelo and Roverato consider the sets $S$ with cardinality at most $q$ (for some fixed $q$) and Spirtes {\it et al.} \cite{SGS00} (see also Kalisch and B\"uhlmann \cite{KB08}) propose a procedure which avoids an exhaustive search over all $S$.
A second class of procedures relies on the fact that the entries $\Omega_{a,b}$ of the inverse covariance matrix $\Omega=\Sigma^{-1}$ are non-zero if and only if there is an edge between $a$ and $b$ in $G_{\Sigma}$. Several papers then suggest performing a sparse estimation of $\Omega$ in order to estimate the graph $G_{\Sigma}$, see Huang {\it et al.}~\cite{HLPL}, Yuan and Lin~\cite{YL}, Banerjee {\it et al.}~\cite{BGA}, Friedman {\it et al.}~\cite{FHT}, and Fan \emph{et al.}~\cite{Fan08}. They propose to maximise the log-likelihood of $\Omega$ under $l^1$ constraints to enforce sparsity and they design optimisation algorithms to perform this maximisation.
Finally, a third class of procedures uses the fact that the coefficients $\theta_{a,b}$ of the regression of $X_{a}$ on $\ac{X_{b},\ b\neq a}$ are non-zero if and only if there is an edge between $a$ and $b$ in $G_{\Sigma}$. Meinshausen and B\"uhlmann~\cite{MB06} or Rocha {\it et al.} \cite{RZY09} perform regressions with $l^1$ constraints, whereas Giraud \cite{giraud08} (see also Verzelen \cite{Verzelen08}) proposes an exhaustive search to obtain a sparse estimate of the matrix $\theta$ and then detect the graph $G_{\Sigma}$.
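
Let us recall, for completeness, the classical identity behind this last equivalence (a standard fact for centered Gaussian vectors with non-singular covariance): the regression coefficients and the entries of $\Omega=\Sigma^{-1}$ are linked by
$$\theta_{a,b}=-\frac{\Omega_{a,b}}{\Omega_{a,a}}\quad\textrm{and}\quad \mathrm{var}\pa{X_{a}\,\big|\,X_{b},\ b\neq a}=\frac{1}{\Omega_{a,a}}\,,$$
so that $\theta_{a,b}\neq 0$ if and only if $\Omega_{a,b}\neq 0$, that is, if and only if there is an edge between $a$ and $b$ in $G_{\Sigma}$.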
In this paper, we propose a new estimation scheme which combines the good properties of these different procedures. Actually, the procedures based on the empirical covariance or on $l^1$ regularisation share some nice computational properties and they can handle several hundred variables $X_{1},\ldots, X_{p}$. Nevertheless, the theoretical results assessing their statistical accuracy are either of asymptotic nature or rely on strong assumptions on the covariance \cite{fan_covariance,rothman08}. Moreover, their performance heavily depends on one (or several) tuning parameters, which are usually not dimensionless and whose optimal values are unknown. On the other hand, the exhaustive search of~\cite{giraud08} has a good statistical accuracy and strong theoretical results have been established, but its computational complexity is huge and it cannot be performed when the number $p$ of variables is larger than a few tens. Our strategy here is to build a data-driven family of candidate graphs with several of the fast procedures mentioned above and then to apply the selection criterion presented in~\cite{giraud08} to select one graph among them. The resulting estimation procedure can handle several hundred variables $X_{1},\ldots,X_{p}$ and presents good statistical properties.
Actually,
the procedure is shown to be consistent in a high-dimensional setting and
its risk is controlled by a non-asymptotic oracle-like
inequality. The assumptions needed to establish these results are
weaker than those
commonly used in the literature.
In addition, numerical experiments show good behavior on simulated examples.
The procedure is implemented in the $R$-package {\it GGMselect} available at
\url{http://w3.jouy.inra.fr/unites/miaj/public/logiciels/GGMselect/}
\medskip
The remainder of the paper is organized as follows. We describe the estimation procedure in the next section and state some theoretical results on its statistical accuracy in Section~3. In Section~4, we carry out some numerical experiments and Section~5 is devoted to the proofs.
\medskip
\noindent\textit{Notations.} To estimate the graph $G_{\Sigma}$, we will start from a $n$-sample $X^{(1)},\ldots,X^{(n)}$ of the law ${\mathbb{P}}_{\Sigma}$. We denote by $\mathbf{X}$ the $n\times p$ matrix whose rows are given by the vectors $X^{(i)}$, namely $\mathbf{X}_{i,a}=X^{(i)}_{a}$ for $i=1,\dots,n$ and $a=1,\ldots,p$. We write $\mathbf{X}_{a}$ for the $a^{\textrm{th}}$ column of $\mathbf{X}$.
We also set $\Gamma=\ac{1,\ldots,p}$ and for any graph $G$ with nodes indexed by $\Gamma$, we write $d_{a}(G)$ for the degree of the node $a$ in the graph $G$ (which is the number of edges incident to $a$) and $\degr(G)=\max_{a\in \Gamma}d_{a}(G)$ for the degree of $G$. Moreover, the notation $a\stackrel{G}{\sim}b$ means that the nodes $a$ and $b$ are neighbours in the graph $G$.
Finally, we write $\Theta$ for the set of $p\times p$ matrices with 0 on the diagonal, $\|\cdot\|_{q\times p}$ for the Frobenius norm on $q\times p$ matrices
$$\|A\|^2_{q\times p}=\textrm{Tr}(A^TA)=\sum_{i=1}^q\sum_{j=1}^pA_{i,j}^2\ ,$$
$\|\cdot\|_{n}$ for the Euclidean norm on ${\mathbb{R}}^n$ divided by $\sqrt n$, and for any $\beta\in{\mathbb{R}}^p$ we define supp$(\beta)$ as the set of labels $a\in\Gamma$ such that $\beta_{a}\neq 0$.
\section{Estimation procedure}\label{procedure}
GGMselect is a two-stage estimation procedure which first builds a data-driven family $\widehat \mathcal{G}$ of candidate graphs and then applies a selection procedure to pick one graph among these. We present the selection procedure in the next paragraph and then describe different possible choices for the family of candidate graphs $\widehat \mathcal{G}$.
\subsection{\label{SelProc.st}Selection procedure}
We assume here that we have at hand a family $\widehat \mathcal{G}$ of candidate graphs, which all have a degree smaller than $n-2$.
To select a graph $\widehat G$ among the family $\widehat\mathcal{G}$, we use the selection criterion introduced in~\cite{giraud08}. We briefly present this criterion here and refer to~\cite{giraud08} for further details.
We write $\theta$ for the $p\times p$ matrix such that
$${\mathbb{E}}_{\Sigma}\cro{X_{a}\big| X_{b},\ b\neq a}=\sum_{b\neq a}\theta_{a,b}X_{b}\quad\textrm{and}\quad\theta_{a,a}=0\quad \textrm{for all } a \in \ac{1,\ldots,p}.$$
The matrix $\theta$ minimizes $\|\Sigma^{1/2}(I-\theta')\|_{p\times p}$ over the set $\Theta$ of $p\times p$ matrices $\theta'$ with 0 on the diagonal. Since $\mathbf{X}^T\mathbf{X}/n$ is an empirical version of $\Sigma$, an empirical version of $\|\Sigma^{1/2}(I-\theta)\|_{p\times p}$ is $\|\mathbf{X}(I-\theta)\|_{n\times p}$ divided by $\sqrt{n}$. Therefore, for any graph $G$ in $\widehat\mathcal{G}$, we associate an estimator $\widehat \theta_{G}$ of $\theta$ by setting
\begin{equation} \label{thetahat}
\widehat\theta_{G}=\textrm{argmin}\ac{\|\mathbf{X}(I-\theta')\|_{n\times p}:\theta'\in\Theta_{G}},
\end{equation}
where $\Theta_{G}$ is the set of $p\times p$ matrices $\theta'$ such that $\theta'_{a,b}$ can be non-zero only if there is an edge between $a$ and $b$ in $G$.
Finally, we select a graph $\widehat G$ in $\widehat \mathcal{G}$ by taking any minimizer over $\widehat \mathcal{G}$ of the criterion
\begin{equation}\label{definition_critere}
\mathrm{Crit}(G)= \sum_{a=1}^p\left[\|{\bf X}_a-{\bf X}[\widehat{\theta}_{G}]_{a} \|_n^2\left(1+\frac{\mathrm{pen}[d_{a}(G)]}{n-d_{a}(G)}\right)\right]\ ,
\end{equation}
where $d_{a}(G)$ is the degree of the node $a$ in the graph $G$ and the penalty function $\mathrm{pen}: \mathbb{N}\rightarrow \mathbb{R}^+$ is of the form of the penalties introduced in Baraud \emph{et al.} \cite{BGH09}.
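
Since the columns of $\mathbf{X}(I-\theta')$ decouple across nodes, the minimization~(\ref{thetahat}) reduces to $p$ node-wise least-squares regressions of $\mathbf{X}_{a}$ on the columns indexed by the neighbours of $a$ in $G$. The following Python sketch illustrates the evaluation of the criterion~(\ref{definition_critere}) for one candidate graph; it is only an illustration of the formulas above (not the {\tt GGMselect} implementation) and takes the penalty function as an argument.
\begin{verbatim}
import numpy as np

def crit(X, adj, pen):
    # X   : (n, p) data matrix
    # adj : (p, p) boolean adjacency matrix of the candidate graph G
    # pen : function d -> pen(d), the penalty defined below
    n, p = X.shape
    total = 0.0
    for a in range(p):
        nb = np.flatnonzero(adj[a])          # neighbours of node a in G
        d_a = nb.size
        if d_a == 0:
            rss = np.sum(X[:, a] ** 2)
        else:
            # least-squares fit of X_a on its neighbours = [theta_hat_G]_a
            coef, *_ = np.linalg.lstsq(X[:, nb], X[:, a], rcond=None)
            rss = np.sum((X[:, a] - X[:, nb] @ coef) ** 2)
        # ||.||_n^2 is the squared Euclidean norm divided by n
        total += (rss / n) * (1.0 + pen(d_a) / (n - d_a))
    return total
\end{verbatim}
One then selects $\widehat G$ by evaluating this criterion on each graph of $\widehat\mathcal{G}$ and keeping a minimizer.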
To compute this penalty, we define for any integers $d$ and $N$ the $\text{DKhi}$ function by
\begin{eqnarray*}
\text{DKhi}(d,N,x)=\mathbb{P}\left(F_{d+2,N}\geq \frac{x}{d+2}\right)- \frac{x}{d}\,\mathbb{P}\left(F_{d,N+2}\geq \frac{N+2}{Nd}x\right) , \, x>0\,,
\end{eqnarray*}
where $F_{d,N}$ denotes a Fisher random variable with $d$ and $N$ degrees of freedom. The function $x\mapsto \text{DKhi}(d,N,x)$ is decreasing and we write $\text{EDKhi}[d,N,x]$ for its inverse, see \cite{BGH09} Sect. 6.1 for more details. Then, we fix some constant $K>1$ and set
\begin{equation}\label{definition_penalite}
\mathrm{pen} (d) = K\,\frac{n-d}{n-d-1}\,\text{EDKhi}\left[d+1,n-d-1,\left(\binom{p-1}{d}(d+1)^2\right)^{-1}\right].
\end{equation}
When $d$ remains small compared to $n$, the penalty function increases approximately linearly with $d$. Actually, when $ d\leq \gamma \, n/ \pa{2\pa{1.1+\sqrt{\log p}}^2}$ for some $\gamma<1,$
we approximately have for large values of $p$ and $n$
$$\mathrm{pen}(d)\lesssim K \pa{1+e^{\gamma}\sqrt{2\log p}}^2(d+1),$$
see Proposition~4 in Baraud {\it et al.}~\cite{BGH09} for an exact bound.
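
Numerically, since $x\mapsto \text{DKhi}(d,N,x)$ is decreasing, $\text{EDKhi}$ can be obtained with any one-dimensional root finder. The following Python sketch computes the penalty~(\ref{definition_penalite}); the reliance on {\tt scipy} is an assumption of this illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import f as fisher_f
from scipy.optimize import brentq
from scipy.special import comb

def dkhi(d, N, x):
    # DKhi(d, N, x) as defined above
    return (fisher_f.sf(x / (d + 2), d + 2, N)
            - (x / d) * fisher_f.sf((N + 2) * x / (N * d), d, N + 2))

def edkhi(d, N, q):
    # inverse of the decreasing map x -> DKhi(d, N, x)
    hi = 1.0
    while dkhi(d, N, hi) > q:
        hi *= 2.0
    return brentq(lambda x: dkhi(d, N, x) - q, 1e-12, hi)

def pen(d, n, p, K=2.5):
    # the penalty pen(d) defined above
    q = 1.0 / (comb(p - 1, d) * (d + 1) ** 2)
    return K * (n - d) / (n - d - 1) * edkhi(d + 1, n - d - 1, q)
\end{verbatim}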
The selection procedure depends on a dimensionless tuning parameter $K$.
A larger value of $K$ yields a more conservative procedure. In theory (and in practice) $K$ has to be larger than one. We propose to set $K$ between $2$ and $3$ depending on the desired false discovery rate of edges.
\subsection{\label{FamGr.st}Family $\widehat\mathcal{G}$ of candidate graphs}
The computational complexity of the minimization of the criterion~(\ref{definition_critere}) over the family $\widehat \mathcal{G}$ is linear with respect to its size. In particular, minimizing~(\ref{definition_critere}) over all the graphs with degree smaller than some integer $D$, as proposed in~\cite{giraud08}, is intractable when $p$ is larger than a few tens. To overcome this issue, we propose to build a much smaller (data-driven) family $\widehat\mathcal{G}$ of candidate graphs, with the help of various fast algorithms dedicated to graph estimation.
We present below four ways to build a family of candidate graphs, chosen on the basis of theoretical results and simulation studies. They will be denoted
by $\widehat{\mathcal{G}}_{\mathrm{QE}}$, $\widehat{\mathcal{G}}_{\mathrm{C01}}$, $\widehat{\mathcal{G}}_{\mathrm{LA}}$, and $\widehat{\mathcal{G}}_{\mathrm{EW}}$.
Although the procedure applies to any family $\widehat\mathcal{G}$, we advise choosing in practice one of these four families or their union.
In the following, we describe these four families, provide algorithms to compute them efficiently, and discuss their computational complexity and their size. Each family depends on an integer $D$, smaller than $n-2$, which corresponds to the maximal degree of the graphs in this family.
\subsubsection*{\bf Quasi-exhaustive family $\widehat\mathcal{G}_{\mathrm{QE}}$}
\textit{Description.} Roughly, the idea is to break down the minimization of the criterion~(\ref{definition_critere}) over all the graphs of degree at most $D$ into $p$ independent problems. For each node $a\in\Gamma$, we estimate the neighborhood of $a$ by
$$\widehat{\mathrm{ne}}(a)=\textrm{argmin}\ac{\|\mathbf{X}_{a}-\textrm{Proj}_{V_{S}}(\mathbf{X}_{a})\|_n^2\left(1+\frac{\mathrm{pen}(|S|)}{n-|S|}\right): \ S\subset{\Gamma\setminus\{a\}} \textrm{ and } |S|\leq D},$$
where $\mathrm{pen}$ is the penalty function (\ref{definition_penalite}) and $\textrm{Proj}_{V_{S}}$ denotes the orthogonal projection from ${\mathbb{R}}^n$ onto $V_{S}=\ac{\mathbf{X}\beta: \beta\in{\mathbb{R}}^p\textrm{ and supp}(\beta)=S}$.
We know from Verzelen \cite{Verzelen08} that $\widehat{\mathrm{ne}}(a)$ is a good estimator of the true neighborhood of $a$, from a non-asymptotic point of view.
We then build two nested graphs $\widehat{G}_{K,\text{and}}$ and $\widehat{G}_{K,\text{or}}$ as in Meinshausen and B\"uhlmann \cite{MB06}
\begin{eqnarray*}
a\stackrel{\widehat{G}_{K,\text{and}}}{\sim}b& \Longleftrightarrow &a\in \widehat{\mathrm{ne}}(b)\textrm{ \underline{and} } b\in \widehat{\mathrm{ne}}(a)\,, \\
a\stackrel{\widehat{G}_{K,\text{or}}}{\sim}b& \Longleftrightarrow & a\in \widehat{\mathrm{ne}}(b)\textrm{ \underline{or} } b\in \widehat{\mathrm{ne}}(a) \,,
\end{eqnarray*}
and define the family $\widehat{\mathcal{G}}_{\mathrm{QE}}$ as the collection of all the graphs that lie between $\widehat{G}_{K,\text{and}}$ and $\widehat{G}_{K,\text{or}}$
\begin{eqnarray*}
\widehat{\mathcal{G}}_{\mathrm{QE}} = \left\{G,\ \widehat{G}_{K,\text{and}} \subset G \subset \widehat{G}_{K,\text{or}}\text{ and }\degr(G)\leq D\right\}.
\end{eqnarray*}
It is likely that the graph $\widehat G_{\textrm{exhaustive}}$ which minimizes~(\ref{definition_critere}) over all the graphs of degree at most $D$ belongs to the family $\widehat{\mathcal{G}}_{\mathrm{QE}}$.
In such a case, the minimizer $\widehat{G}_{\mathrm{QE}}$ of the criterion~(\ref{definition_critere}) over $\widehat\mathcal{G}_{\mathrm{QE}}$ coincides with the estimator $\widehat G_{\textrm{exhaustive}}$ of~\cite{giraud08}.
\smallskip
\fbox{
\begin{minipage}{0.9\textwidth}
{\bf QE Algorithm}
\begin{enumerate}
\item Compute $\widehat{\mathrm{ne}}(a)$ for all $a\in\Gamma$.
\item Compute the graphs $\widehat{G}_{K,\text{and}}$ and $\widehat{G}_{K,\text{or}}$.
\item Work out the family $\widehat{\mathcal{G}}_{\mathrm{QE}}$.
\end{enumerate}
\end{minipage}
}
\smallskip
\noindent\textit{Complexity.} The complexity of the computation of the collections $\widehat{\mathrm{ne}}(a)$ is much smaller than the complexity of the computation of $\widehat{G}_{\textrm{exhaustive}}$. Nevertheless, it still remains of order $np^{D+1}D^3$ and the size of the family $\widehat{\mathcal{G}}_{\mathrm{QE}}$ can be of order $2^{pD/2}$ in the worst cases. However, for sparse graphs $G_{\Sigma}$, the graphs $\widehat{G}_{K,\text{and}}$ and $\widehat{G}_{K,\text{or}}$ are quite similar in practice, which makes the size of $\widehat{\mathcal{G}}_{\mathrm{QE}}$ much smaller. The procedure then remains tractable
for $p$ and $D$ reasonably small. Computational times for some examples are given in Section~\ref{section_comparaison_temporel}.
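
For illustration, step 1 of the QE algorithm can be written as the following Python sketch, a direct transcription of the definition of $\widehat{\mathrm{ne}}(a)$ that is tractable only for small $p$ and $D$, in line with the complexity discussed above.
\begin{verbatim}
import numpy as np
from itertools import combinations

def ne_hat(X, a, D, pen):
    # penalized exhaustive search for the neighbourhood of node a
    n, p = X.shape
    y = X[:, a]
    others = [b for b in range(p) if b != a]
    best, best_S = np.inf, ()
    for size in range(D + 1):
        for S in combinations(others, size):
            if size == 0:
                rss = np.sum(y ** 2)     # projection onto V_emptyset = {0}
            else:
                Xs = X[:, list(S)]
                coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                rss = np.sum((y - Xs @ coef) ** 2)
            score = (rss / n) * (1.0 + pen(size) / (n - size))
            if score < best:
                best, best_S = score, S
    return set(best_S)
\end{verbatim}
The graphs $\widehat{G}_{K,\text{and}}$ and $\widehat{G}_{K,\text{or}}$ are then obtained by intersecting or uniting the relations $b\in \widehat{\mathrm{ne}}(a)$, as in steps 2 and 3.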
\subsubsection*{\bf C01 family $\widehat{\mathcal{G}}_{\mathrm{C01}}$}
\noindent\textit{Description.} The family $\widehat{\mathcal{G}}_{\mathrm{C01}}$ derives from the estimation procedure proposed in Wille and B\"uhlmann \cite{WB06} and is based on the 0-1 \emph{conditional independence graph} $G_{01}$. This graph is defined as follows.
For each pair of nodes $(a,b)$, we write $R_{a,b|\emptyset}$ for the correlation between the variables $X_a$ and $X_b$ and $R_{a,b|c}$ for the correlation of $X_a$ and $X_b$ conditionally on $X_c$. Then, there is an edge between $a$ and $b$ in $G_{01}$, if and only if
$R_{a,b|\emptyset}\neq 0$ and $R_{a,b|c}\neq 0$ for all $c\in \Gamma\setminus\{a,b\}$, viz
\begin{eqnarray}
a\stackrel{{G}_{01}}{\sim}b &\Longleftrightarrow& \min\left\{|R_{a,b|c}|,\ c\in \{\emptyset\}\cup\Gamma\setminus\{a,b\} \right\}>0\,.
\end{eqnarray}
Although the 0-1 conditional independence graph $G_{01}$ does not usually coincide with the graph $G_{\Sigma}$, there is a close connection between both graphs in some cases (see \cite{WB06}). Wille and B\"uhlmann \cite{WB06} then propose to estimate the graph $G_{01}$ in order to get an approximation of $G_{\Sigma}$. The following construction of the family $\widehat{\mathcal{G}}_{\mathrm{C01}}$ derives from their estimation procedure. We write $P(a,b|c)$ for the $p$-value of the likelihood ratio test of the hypothesis ``$R_{a,b|c}=0$'' and set
$$P_{\max}(a,b)=\max\left\{P(a,b|c),\ c\in\{\emptyset\}\cup\Gamma\setminus\{a,b\}\right\}\,.$$
For any $\alpha>0$, the graph $\widehat{G}_{01,\alpha}$ is defined by
$$a\stackrel{\widehat{G}_{01,\alpha}}{\sim}b\ \ \Longleftrightarrow \ \ P_{\max}(a,b)\leq \alpha$$
and the family $\widehat{\mathcal{G}}_{\mathrm{C01}}$ is the family of nested graphs
$$ \widehat{\mathcal{G}}_{\mathrm{C01}}= \left\{\widehat{G}_{01,\alpha},\ \alpha>0 \text{ and } \degr(\widehat{G}_{01,\alpha})\leq D \right\}.$$
\fbox{
\begin{minipage}{0.9\textwidth}
{\bf C01 Algorithm}
\begin{enumerate}
\item Compute the $p(p-1)/2$ values $P_{\text{max}}(a,b)$.
\item Order them.
\item Extract from these values the nested graphs $\ac{\widehat G_{01,\alpha}:\alpha>0}$.
\item Stop when the degree becomes larger than $D$.
\end{enumerate}
\end{minipage}
}
\smallskip
\noindent\textit{Complexity.} The algorithm is very fast since its complexity is of order $np^3$. The size of the family $\widehat{\mathcal{G}}_{\mathrm{C01}}$ is smaller than $pD$.
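
As an illustration, the quantities $P_{\max}(a,b)$ can be computed as in the following Python sketch; for simplicity it uses a Fisher $z$-test of ``$R_{a,b|c}=0$'' in place of the likelihood ratio test, an asymptotically equivalent substitution made for this sketch only.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def pmax_matrix(X):
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)

    def pval(r, k):                  # k = size of the conditioning set
        z = np.arctanh(np.clip(r, -0.999999, 0.999999))
        return 2.0 * norm.sf(abs(z) * np.sqrt(n - k - 3))

    P = np.zeros((p, p))
    for a in range(p):
        for b in range(a + 1, p):
            pmax = pval(R[a, b], 0)  # conditioning set = emptyset
            for c in range(p):
                if c == a or c == b:
                    continue
                # first-order partial correlation R_{a,b|c}
                r = ((R[a, b] - R[a, c] * R[b, c])
                     / np.sqrt((1 - R[a, c] ** 2) * (1 - R[b, c] ** 2)))
                pmax = max(pmax, pval(r, 1))
            P[a, b] = P[b, a] = pmax
    return P
\end{verbatim}
Sorting the entries of this matrix then yields the nested graphs $\widehat{G}_{01,\alpha}$ of the C01 algorithm.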
\subsubsection*{\bf Lasso-And family $\widehat{\mathcal{G}}_{\mathrm{LA}}$}
\noindent\textit{Description}. The Lasso-And family $\widehat{\mathcal{G}}_{\mathrm{LA}}$ derives from the estimation procedure proposed by Meinshausen and B\"uhlmann \cite{MB06} and is based on the
LARS-lasso algorithm of Hastie \emph{et al.} \cite{lars}.
For any $\lambda>0$, we define the $p\times p$ matrix $\widehat{\theta}^{\lambda}$ by
\begin{eqnarray}\label{lasso_general}
\widehat{\theta}^{\lambda}=\arg\!\min\left\{\|{\mathbf{X}}-{\mathbf{X}}\theta'\|_{n\times p}^2+\lambda\|\theta'\|_1: \ \theta'\in\Theta \right\},
\end{eqnarray}
where $\Theta$ is the set of $p\times p$ matrices with 0 on the diagonal and $\|\theta'\|_1=\sum_{a\neq b}|\theta'_{a,b}|$. Then, we define the graph $\widehat{G}^{\lambda}_{\text{and}}$ by setting an edge between $a$ and $b$ if both $\widehat{\theta}_{a,b}^{\lambda}$ \underline{and} $\widehat{\theta}_{b,a}^{\lambda}$ are non-zero, namely
$$a\stackrel{\widehat{G}^{\lambda}_{\text{and}}}{\sim}b\ \ \Longleftrightarrow \ \ \widehat{\theta}_{a,b}^{\lambda}\neq 0 \textrm{ \underline{and} } \widehat{\theta}_{b,a}^{\lambda}\neq0\,.$$
This graph $\widehat{G}^{\lambda}_{\text{and}}$ is exactly the estimator~(7) introduced by Meinshausen and B\"uhlmann \cite{MB06}. We note that the size of $\widehat{G}^{\lambda}_{\text{and}}$ has a tendency to increase when the tuning parameter $\lambda$ decreases. Hence, we define the family $\widehat{\mathcal{G}}_{\mathrm{LA}}$ as the set of graphs $\widehat{G}^{\lambda}_{\text{and}}$ with $\lambda$ large enough to ensure that $\degr(\widehat{G}^{\lambda}_{\text{and}})\leq D$, viz
\begin{eqnarray*}
\widehat{\mathcal{G}}_{\mathrm{LA}} = \left\{\widehat{G}^{\lambda}_{\text{and}}\ , \lambda > \widehat{\lambda}_{\text{and},D}\right\},
&\text{ where}& \widehat{\lambda}_{\text{and},D}= \sup\left\{\lambda,\ \degr(\widehat{G}^{\lambda}_{\text{and}})>D\right\}.
\end{eqnarray*}
From a computational point of view, the family $\widehat{\mathcal{G}}_{\mathrm{LA}}$ can be efficiently computed with the LARS-lasso algorithm.
The optimization problem~(\ref{lasso_general}) is broken into the $p$ independent minimization problems
\begin{equation}\label{lasso_particulier}
\widehat{\theta}_a^{\lambda}=\arg\!\min\left\{\|{\mathbf{X}}_a-{\mathbf{X}}v\|^2+\lambda\|v\|_1: \ v\in{\mathbb{R}}^p\textrm{ and } v_{a}=0\right\}, \ \text{for any $a\in\Gamma$,}
\end{equation}
with $\|v\|_1=\sum_{b=1}^p|v_{b}|$. When $\lambda$ decreases, the support of $\widehat \theta_{a}^{\lambda}$ is piecewise constant and the LARS-lasso algorithm provides the sequences $(\lambda_a^l)_{l\geq 1}$ of the values of $\lambda$ where the support of $\widehat\theta^\lambda_{a}$ changes, as well as the sequence of the supports $\pa{\textrm{supp}(\widehat\theta^{\lambda_a^l}_{a})}_{l\geq 1}$.
We work out the family
$\widehat{\mathcal{G}}_{\mathrm{LA}}$ by gathering these $p$ sequences as described below.
\smallskip
\fbox{
\begin{minipage}{0.9\textwidth}
{\bf LA Algorithm}
\begin{enumerate}
\item Compute with LARS-lasso the $\pa{\lambda_a^l,\textrm{supp}(\widehat\theta^{\lambda_a^l})}_{l\geq 1}$ for all $a\in\Gamma$.
\item Order the sequence $\ac{\lambda_a^l: a\in\Gamma,\ l\geq 1}$.
\item Compute $\widehat{G}^{\lambda^l_{a}}_{\text{and}}$ for all $\lambda^l_{a}>\widehat{\lambda}_{\text{and},D}$.
\end{enumerate}
\end{minipage}
}
\smallskip
\noindent\textit{Complexity.} The complexity of the LARS-lasso algorithm is unknown in general. Nevertheless, according to Hastie \emph{et al.} \cite{lars} the algorithm requires $O(np(n\wedge p))$ operations in most cases. Hence, the whole complexity of the LA algorithm is generally of the order $p^2n(n\wedge p)$. Finally, the size of the family $\widehat\mathcal{G}_{\mathrm{LA}}$ cannot be bounded uniformly, but it remains smaller than $pD$ in practice.
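
Step 1 of the LA algorithm can be sketched in Python with the {\tt lars\_path} routine of {\tt scikit-learn}, used here as a stand-in for the LARS-lasso implementation of \cite{lars}; note that the regularization parameter returned by the library is normalized by $n$, so the correspondence with $\lambda$ in~(\ref{lasso_general}) holds up to this convention.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import lars_path

def lasso_breakpoints(X):
    # for each node a: breakpoints lambda_a^l and supports of theta_hat_a
    n, p = X.shape
    out = {}
    for a in range(p):
        others = [b for b in range(p) if b != a]
        alphas, _, coefs = lars_path(X[:, others], X[:, a], method="lasso")
        out[a] = [(alphas[l],
                   {others[j] for j in np.flatnonzero(coefs[:, l])})
                  for l in range(len(alphas))]
    return out

def g_and(supports):
    # supports[a] = supp(theta_hat_a^lambda) at a common value of lambda
    p = len(supports)
    adj = np.zeros((p, p), dtype=bool)
    for a in range(p):
        for b in supports[a]:
            if a in supports[b]:     # edge iff both supports agree
                adj[a, b] = adj[b, a] = True
    return adj
\end{verbatim}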
\subsubsection*{\bf Adaptive lasso family $\widehat\mathcal{G}_{\mathrm{EW}}$}
\textit{Description.} The family $\widehat\mathcal{G}_{\mathrm{EW}}$ is a modified version of $\widehat\mathcal{G}_{\mathrm{LA}}$ inspired by the adaptive lasso of
Zou~\cite{zou_adaptive}. The major difference between $\widehat\mathcal{G}_{\mathrm{EW}}$ and $\widehat\mathcal{G}_{\mathrm{LA}}$ lies in the replacement of the $l^1$ norm $\|\theta'\|_{1}$ in~(\ref{lasso_general}) by $\|\theta'/\widehat\theta^{\mathrm{init}}\|_{1}$, where $\widehat\theta^{\mathrm{init}}$ is a preliminary estimator of $\theta$ and $\theta'/\widehat\theta^{\mathrm{init}}$ stands for the matrix with entries $(\theta'/\widehat\theta^{\mathrm{init}})_{a,b}=\theta'_{a,b}/\widehat\theta^{\mathrm{init}}_{a,b}$. Zou suggests taking a ridge estimator for $\widehat\theta^{\mathrm{init}}$. Here, we propose to use instead the Exponential Weights estimator $\widehat\theta^{EW}$ of Dalalyan and Tsybakov~\cite{DT08,DT09}. The choice of this estimator appears more natural to us since it is designed for the sparse setting and enjoys nice theoretical properties. Moreover, we have observed on some simulated examples that the adaptive lasso with the Exponential Weights initial estimator performs much better than the adaptive lasso with the ridge initial estimator.
To build the family $\widehat\mathcal{G}_{\mathrm{EW}}$ we thus start by computing the
Exponential Weight estimator $\widehat\theta^{EW}$. For each $a\in\Gamma$, we set $H_{a}=\ac{v\in{\mathbb{R}}^p: v_{a}=0}$ and
\begin{equation}\label{EW}
\widehat\theta^{EW}_{a}=\int_{H_{a}}v\, e^{-\beta\|\mathbf{X}_{a}-\mathbf{X} v\|^2_{n}}\, \prod_{j}\pa{1+(v_{j}/\tau)^2}^{-\alpha}\, {dv \over \mathcal{Z}_{a}}\,,
\end{equation}
with
$\mathcal{Z}_{a}=\int_{H_{a}}e^{-\beta\|\mathbf{X}_{a}-\mathbf{X} v\|^2_{n}}\, \prod_{j}\pa{1+(v_{j}/\tau)^2}^{-\alpha}\, dv$ and $\alpha,\beta,\tau>0$.
We note that $\widehat\theta^{EW}_{a}$ with $\beta=n/(2\sigma_{a}^2)$ and $\sigma_{a}^2=\mathrm{var}(X_{a}\, |\,X_{-a})$ is simply the Bayesian estimator of $\theta_{a}$ with prior distribution $d\pi(v)\propto \prod_{j}\pa{1+(v_{j}/\tau)^2}^{-\alpha}\, dv$ on $H_{a}$.
In the Gaussian setting,
Dalalyan and Tsybakov~\cite{DT08} give a sharp and assumption-free sparse inequality for $\widehat\theta^{EW}_{a}$ with $\beta\leq n/(4\sigma_{a}^2)$, see Corollary~4 in \cite{DT08}.
The construction of $\widehat\mathcal{G}_{\mathrm{EW}}$ is now similar to the construction of $\widehat\mathcal{G}_{\mathrm{LA}}$.
For any $\lambda>0$ we set
\begin{eqnarray}\label{adaptive_lasso_general}
\widehat{\theta}^{EW,\lambda}=\arg\!\min\left\{\|{\mathbf{X}}-{\mathbf{X}}\theta'\|_{n\times p}^2+\lambda
\|\theta'/ \widehat{\theta}^{EW}\|_1:\theta'\in\Theta\right\},
\end{eqnarray}
and we define the graph $\widehat{G}^{\mathrm{EW},\lambda}_{\text{or}}$
by setting an edge between $a$ and $b$ if either
$\widehat{\theta}_{b,a}^{EW,\lambda}$ \underline{or} $\widehat{\theta}_{a,b}^{EW,\lambda}$ is non-zero:
$$a\stackrel{\widehat{G}^{\mathrm{EW},\lambda}_{\text{or}}}{\sim}b\ \ \Longleftrightarrow \ \ \widehat{\theta}_{a,b}^{\mathrm{EW},\lambda}\neq 0 \ \textrm{ \underline{or} } \ \widehat{\theta}_{b,a}^{\mathrm{EW},\lambda}\neq0\,.$$
Finally, the family $\widehat{\mathcal{G}}_{\mathrm{EW}}$ is given by
\begin{eqnarray*}
\widehat{\mathcal{G}}_{\mathrm{EW}} = \left\{\widehat{G}^{\mathrm{EW},\lambda}_{\text{or}},\ \lambda > \widehat{\lambda}^{\mathrm{EW}}_{\text{or},D}\right\},
&\text{where}& \widehat{\lambda}^{\mathrm{EW}}_{\text{or},D}= \sup\left\{\lambda,\ \degr(\widehat{G}^{\mathrm{EW},\lambda}_{\text{or}})>D\right\}.
\end{eqnarray*}
The Exponential Weight estimator $\widehat\theta^{EW}$ can be computed with a Langevin Monte-Carlo algorithm. We refer to Dalalyan and Tsybakov~\cite{DT09} for the details. Once $\widehat\theta^{EW}$ is computed, the family $\widehat{\mathcal{G}}_{\mathrm{EW}}$ is obtained as before with the help of the LARS-lasso algorithm.
\smallskip
\fbox{
\begin{minipage}{0.9\textwidth}
{\bf EW Algorithm}
\begin{enumerate}
\item Compute $\widehat\theta^{EW}$ with a Langevin Monte-Carlo algorithm.
\item Compute with LARS-lasso the $\pa{\lambda_a^l,\textrm{supp}(\widehat\theta^{\lambda_a^l})}_{l\geq 1}$ for all $a\in\Gamma$.
\item Order the sequence $\ac{\lambda_a^l: a\in\Gamma,\ l\geq 1}$.
\item Compute $\widehat{G}^{\mathrm{EW},\lambda^l_{a}}_{\text{or}}$ for all $\lambda^l_{a}>\widehat{\lambda}^{\mathrm{EW}}_{\text{or},D}$.
\end{enumerate}
\end{minipage}
}
\smallskip
\noindent\textit{Complexity.}
The complexity of the first step depends on the choices of the tuning parameters. Some examples are given in Section \ref{section_comparaison_temporel}.
The complexity of steps 2, 3 and 4 is the same as for the $\mathrm{LA}$-algorithm, of the order $p^2n(n\wedge p)$ in practice.
Finally, as for $\widehat{\mathcal{G}}_{\mathrm{LA}}$, we do not know a general bound for the size of $\widehat{\mathcal{G}}_{\mathrm{EW}}$, but it remains smaller than $pD$ in practice.
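
For completeness, here is a Python sketch of step 1: a plain Euler discretization of the Langevin dynamics targeting the density appearing in~(\ref{EW}), with the parameter names of Section~4. It is an illustrative discretization, not the exact scheme of \cite{DT09}.
\begin{verbatim}
import numpy as np

def theta_ew_column(X, a, beta, tau, alpha=0.0, h=1e-3, T=200, seed=0):
    # Langevin Monte-Carlo for the a-th column of theta_hat^EW
    rng = np.random.default_rng(seed)
    n, p = X.shape
    v = np.zeros(p)
    avg = np.zeros(p)
    n_steps = int(T / h)             # 2e5 steps with the default values
    for _ in range(n_steps):
        resid = X[:, a] - X @ v
        # gradient of the log-density, using ||.||_n^2 = ||.||^2 / n
        grad = (2.0 * beta / n) * (X.T @ resid) \
               - 2.0 * alpha * v / (tau ** 2 + v ** 2)
        v = v + h * grad + np.sqrt(2.0 * h) * rng.standard_normal(p)
        v[a] = 0.0                   # stay on H_a = {v : v_a = 0}
        avg += v
    return avg / n_steps
\end{verbatim}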
\section{Theoretical results} \label{section_theorique}
We present in this section some theoretical results which assess the performance of our selection procedure. We present two kinds of results: a non-asymptotic oracle-like inequality concerning the estimation of $\theta$ and a consistency result for the estimation of $G_{\Sigma}$.
\subsection{A non-asymptotic oracle-like inequality}
The next theorem states an oracle-like inequality in the spirit of Theorem~1 in~\cite{giraud08}. We associate to the graph $\widehat G$ selected by the procedure of Section~\ref{procedure} the estimator $\tilde \theta=\widehat\theta_{\widehat G}$ of the matrix $\theta$, where $\widehat \theta_{G}$ is given by~(\ref{thetahat}) for any graph $G\in\widehat\mathcal{G}$. The quality of the estimation of $\theta$ is quantified by the $\mathrm{MSEP}$ of $\tilde \theta$ defined by
$$\mathrm{MSEP}(\tilde \theta)={\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}}.$$
We refer to the introduction of Giraud~\cite{giraud08} for a discussion on the relevance of the use of the $\mathrm{MSEP}$ of $\tilde \theta$ to assess the quality of the estimator $\widehat G$. In the sequel, $I$ stands for the identity matrix of size $p$.
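
In simulations, where $\theta$ and $\Sigma$ are known, the $\mathrm{MSEP}$ can be estimated by Monte Carlo, averaging the following quantity over independent draws of $\mathbf{X}$ (a small Python sketch, with $\Sigma^{1/2}$ any matrix square root of $\Sigma$).
\begin{verbatim}
import numpy as np

def sq_loss(Sigma_sqrt, theta_hat, theta):
    # ||Sigma^{1/2} (theta_hat - theta)||_{p x p}^2 for one realization
    return float(np.sum((Sigma_sqrt @ (theta_hat - theta)) ** 2))
\end{verbatim}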
Roughly speaking, the next theorem ensures that the estimator $\tilde \theta$ performs almost as well as the best estimator in the family $\ac{\widehat\theta_{G},\ G\in\widehat \mathcal{G}}$.
\newpage
\begin{thrm}\label{proposition_risque}
Assume that $n\geq 9$.
Let $\widehat{\mathcal{G}}$ be any (data-driven) family of graphs with maximal degree $D_{\widehat{\mathcal{G}}}= \max \{\degr(G),\ G\in \widehat{\mathcal{G}}\}$ fulfilling
\begin{eqnarray}\label{condition_degree}
1\leq D_{\widehat{\mathcal{G}}}\leq \gamma\ \frac{n }{2(1.1+ \sqrt{\log p} )^2}\ ,\hspace{0.5cm}\text{for some }\gamma<1\,.
\end{eqnarray}
Then, the $\mathrm{MSEP}$ of the estimator $\widetilde{\theta}$ is upper bounded by
\begin{equation}\label{borne_oracle}
\mathrm{MSEP}(\widetilde{\theta}) \leq L_{K,\gamma}\log(p)\pa{ {\mathbb{E}}\cro{\inf_{G\in\widehat \mathcal{G}}\pa{\mathrm{MSEP}(\widehat{\theta}_G)}}\vee {\mathrm{MSEP}(I)\over n}}
+R_n\,,
\end{equation}
where $L_{K,\gamma}$ is a positive constant depending on $K$ and $\gamma$ only and the residual term $R_{n}=R_n(\Sigma,\gamma)$ (made explicit in the proof) is of order $n^3\text{tr}(\Sigma) e^{-n(\sqrt{\gamma}-\gamma)^2/4}$.
\end{thrm}
If we forget the term $n^{-1}\mathrm{MSEP}(I)$ in (\ref{borne_oracle}), Theorem~\ref{proposition_risque} states that under Condition~(\ref{condition_degree}) the MSEP of $\tilde \theta$ nearly achieves, up to a $\log (p)$ factor, the average minimal MSEP of the family of estimators $\{\widehat\theta_{G},\ G\in\widehat\mathcal{G}\}$. Hence, $\widetilde{\theta}$ performs almost as well as the oracle up to a $\log p$ factor. This logarithmic factor is proved to be unavoidable from a minimax point of view (see \cite{Verzelen08} Sect. 4.2).
Let us now compare the additional term $n^{-1}\mathrm{MSEP}(I)$ appearing in (\ref{borne_oracle}) with the risk $\mathrm{MSEP}(\widehat{\theta}_G)$. This additional term is equal to $n^{-1}\sum_a{\sigma_a^2}$, where $\sigma_{a}^2$ stands for the conditional variance of $X_{a}$ given the remaining variables.
Hence, this quantity is usually smaller than
the risk $\mathrm{MSEP}(\widehat{\theta}_G)$ which is the sum of a bias term and a variance term of order $n^{-1}\sum_a d_{a}(G){\sigma_a^2}$. Nevertheless, when the true graph $G_{\Sigma}$ is empty and $\widehat{\mathcal{G}}$ contains the empty graph, the additional term $n^{-1}\mathrm{MSEP}(I)$ is dominant and the estimator $\tilde \theta$ is not optimal. Such a drawback is actually unavoidable in model selection when the target is too close to zero (see Birg\'e and Massart \cite{BM01} Sect.2.3.3 for a discussion).
Finally, we can compare the $\mathrm{MSEP}$ of $\tilde \theta$ to the $\mathrm{MSEP}$ of $\widehat\theta_{G_{\Sigma}}$ when $G_{\Sigma}$ belongs to $\widehat \mathcal{G}$ with large probability. Roughly speaking, the MSEP of $\tilde \theta$ is in this case smaller (up to a $\log p$ factor) than the MSEP of $\widehat{\theta}_{G_{\Sigma}}$. This means that $\tilde \theta$ performs almost as well as if we knew the true graph $G_{\Sigma}$ in advance.
\begin{cor}\label{corollaire_risque}
Under the assumption of the above theorem, if the minimal graph $G_{\Sigma}$ belongs to the family $\widehat{\mathcal{G}}$ with large probability
\begin{equation}\label{condition_collection}
\mathbb{P}\left(G_{\Sigma}\in\widehat{\mathcal{G}}\right)\geq 1 - L(\alpha)\exp(-\beta n^\delta), \hspace{1cm}\text{for some }\alpha,\beta,\delta>0\
\end{equation}
then, the $\mathrm{MSEP}$ of the estimator $\widetilde{\theta}$ is upper bounded by
\begin{equation}\label{borne_oracle2}
\mathrm{MSEP}(\widetilde{\theta}) \leq L_{K,\gamma}\log(p)\pa{ \mathrm{MSEP}(\widehat{\theta}_{G_{\Sigma}})\vee {\mathrm{MSEP}(I)\over n}}
+R_n\,,
\end{equation}
where the residual term $R_{n}=R_n(\Sigma,\gamma,\alpha,\beta,\delta)$ is of order $n^3\text{tr}(\Sigma) [e^{-n(\sqrt{\gamma}-\gamma)^2/4}+ \sqrt{L(\alpha)}e^{-\frac{\beta}{2}n^\delta}]$.
\end{cor}
\subsection{Consistency of the selection procedure}
The next theorem states, under mild assumptions, a consistency result
for our selection procedure in a high-dimensional setting. In the spirit of the results of Meinshausen and B\"uhlmann~\cite{MB06}, we consider the case where the number of variables $p$ increases with the sample size $n$.\\
We make the following assumptions:
\begin{eqnarray*}
\text{{\bf (H.1)}}& & p_n\geq n\ . \\
\text{{\bf (H.2)}}& &\degr(G_{\Sigma_n})\leq \frac{n^s}{\log p_n}\wedge \frac{n}{\log^2 p_n} \text{ for some }s<1\ .\\
\text{{\bf (H.3)}}& &\min_{a\neq b,\ b\in\mathrm{ne}_{G_{\Sigma_n}}(a)}\theta_{a,b}^2\min_{a\neq b}\frac{\mbox{Var}(X_a|X_{-a})}{\mbox{Var}(X_b|X_{-b})}\geq n^{s'-1}\text{ for some }s'>s\ .
\end{eqnarray*}
\begin{thrm}\label{proposition_consistance}
Assume that the family $\widehat{\mathcal{G}}$ of candidate graphs contains the true graph with probability going to 1 and {\bf (H.1)}, {\bf (H.2)}, {\bf (H.3)} are fulfilled. Then, the estimation procedure GGMselect with $K > \left[3\vee {2.5\over (1-s)}\right]$ and
\begin{eqnarray*}
D_{\widehat{\mathcal{G}}}= \max \{\degr(G),\ G\in \widehat{\mathcal{G}}\}&\leq& \frac{n}{\log^2 p_n}
\end{eqnarray*}
is consistent. More precisely, there exist a universal constant $L$ and an integer $n_0=n_0\left[K, s, s'\right]$, depending neither on the true graph $G_{\Sigma_n}$ nor on the covariance $\Sigma_n$, such that
\begin{equation*}\mathbb{P} \left[ \widehat{G} = G_{\Sigma_n}\right]\geq 1- Lp_n^{-1/2}- \mathbb{P}\left[G_{\Sigma_n}\notin\widehat{\mathcal{G}}\right],\quad \textrm{ for any }n\geq n_{0}\,.
\end{equation*}
\end{thrm}
Let us discuss the assumptions of the theorem and their similarity with some of the hypotheses made in Meinshausen and B\"uhlmann \cite{MB06}. Assumption~{\bf (H.2)} is met if $p_{n}$ grows polynomially with respect to $n$ and the degree of the true graph does not grow faster than $n^{\kappa}$ with $\kappa<s$ (which corresponds to Assumptions 1 and 2 in~\cite{MB06}). We mention that {\bf (H.2)} is not satisfied when $p_{n}$ grows exponentially with $n$ unless $G_{\Sigma_n}$ is empty. It is actually impossible to consistently estimate a non-empty graph if $p_n$ is of order $\exp(n)$, see Wainwright \cite{wainwright07}.
Assumption {\bf (H.3)} ensures that the conditional variances as well as the non-zero terms $\theta_{a,b}$ are large enough so that the edges can be detected.
To compare with~\cite{MB06}, Assumption {\bf (H.3)} is met as soon as Assumptions 2 and 5 in~\cite{MB06} are satisfied. In addition, we underline that we make
no assumption on the $l_1$-norm of the prediction coefficients or on the signs of $\theta_{a,b}$ (Assumptions 4 and 6 in \cite{MB06}).
Finally, we do not claim that the condition $K>\left[2.5/(1-s)\vee 3\right]$ is minimal to obtain consistency. It seems from simulation experiments that smaller choices of $K$ also provide good estimates.
\section{Numerical study}\label{section_simulations}
It is essential to investigate the performance of statistical
procedures on data. Since we do not know the actual underlying
graph of conditional dependences on real data sets, we opt for a
numerical study with simulated data. Our aims in this study are
to evaluate the feasibility of the GGMselect procedure and to
compare its performance with that of
recent graph-selection procedures.
\paragraph{Simulating the data}
The matrix $\mathbf{X}$ is composed of $n$ i.i.d. rows with Gaussian
$\mathcal{N}_{p}(0,\Omega^{-1})$ distribution where the inverse covariance
matrix $\Omega$ is constructed according to the following procedure.
We set $\Omega=BB^T+D$, where $B$ is a
random sparse lower triangular matrix and $D$ is a diagonal matrix
with random entries of order $10^{-3}$. The latter matrix $D$ prevents
$\Omega$ from having too small eigenvalues. To generate $B$ we split
$\ac{1,\ldots,p}$ into three consecutive sets $I_{1}$, $I_{2}$, $I_{3}$
of approximately equal size, and choose two real numbers
$\eta_\mathrm{int}$ and
$\eta_\mathrm{ext}$ between 0 and 1. For any $a, b$ such that $1\leq
a<b \leq p$, we set $B_{a,b}=0$ with probability
$1-\eta_\mathrm{int}$ if $a$ and $b$ are in the same set, and we set $B_{a,b}=0$ with probability $1-\eta_\mathrm{ext}$ if $a$ and $b$
belong to two different sets. Then, the lower diagonal values that
have not been set to 0 are drawn according to a uniform law on
$\cro{-1,1}$ and the diagonal values are drawn according to a uniform
law on $\cro{0,\varepsilon}$. Finally, we rescale $\Omega$ in order to
have 1 on the diagonal of $\Sigma=\Omega^{-1}$. This matrix $\Sigma$ defines
a graph $G = G_{\Sigma}$ and a matrix
$\theta$ defined as in Section~\ref{SelProc.st}. The sparsity of the
graph is measured via a sparsity index denoted $I_{s}$, defined as the
average number of edges per node in the graph.
In our simulation study we set
$\eta=\eta_\mathrm{int}=5\eta_\mathrm{ext}$, and $\varepsilon=0.1$.
We evaluate the value of $\eta$ corresponding to a desired value of the sparsity index $I_{s}$ by simulation.
Choosing $I_{s}$ small, we get sparse graphs whose edge distribution
is not uniform, see Figure~\ref{dessGr.fg}.
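
A Python sketch of this construction follows; the entries of $D$, stated above to be of order $10^{-3}$, are drawn here uniformly in $[0.5\cdot 10^{-3}, 1.5\cdot 10^{-3}]$, an arbitrary choice of this sketch.
\begin{verbatim}
import numpy as np

def simulate_sigma(p, eta_int, eta_ext, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # three consecutive blocks of approximately equal size
    sizes = [p - 2 * (p // 3), p // 3, p // 3]
    block = np.repeat([0, 1, 2], sizes)
    B = np.zeros((p, p))
    for a in range(p):
        for b in range(a):                   # strictly lower triangle
            eta = eta_int if block[a] == block[b] else eta_ext
            if rng.random() < eta:
                B[a, b] = rng.uniform(-1.0, 1.0)
        B[a, a] = rng.uniform(0.0, eps)      # diagonal drawn in [0, eps]
    D = np.diag(rng.uniform(0.5e-3, 1.5e-3, size=p))
    Omega = B @ B.T + D
    Sigma = np.linalg.inv(Omega)
    s = np.sqrt(np.diag(Sigma))              # rescale: unit diagonal
    return Sigma / np.outer(s, s)
\end{verbatim}
A sample is then obtained as {\tt rng.multivariate\_normal(np.zeros(p), Sigma, size=n)}.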
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=12.5cm,width=6cm,angle=270]{dessGr.ps}}
\end{center}
\caption{\label{dessGr.fg} One simulated graph $G$ with $p=30$ and
$I_{s}=3$. The degree $\deg(G)$ of the graph equals 8.}
\end{figure}
\paragraph{GGMselect: choice of graph families}
Our procedure is applied for the families of graphs presented in
Section~\ref{FamGr.st}. The methods are respectively denoted {\tt QE},
{\tt C01}, {\tt LA} and {\tt EW}.
As already mentioned in Section~\ref{FamGr.st}, the size of
the family $\widehat{\mathcal{G}}_{\mathrm{QE}}$ may be very large, leading to memory
size problems in the computational process. In that case, as soon as a
memory size problem is encountered, the
search between $\widehat{G}_{K,\text{and}}$ and
$\widehat{G}_{K,\text{or}}$ is stopped and continued by a stepwise
procedure.
The family $\widehat{\mathcal{G}}_{\mathrm{EW}}$ is based on the calculation
of exponential weight estimators $\widehat{\theta}^{\mathrm{EW}}$. This
calculation depends on parameters, denoted
$\alpha, \beta, \sigma, \tau$~in~\cite{DT09}, that define the
aggregation procedure, and on parameters, denoted $h$ and
$T$~in~\cite{DT09}, used in the
Langevin Monte-Carlo algorithm. We chose these parameters as
follows. The matrix
$\mathbf{X}$ being scaled such that the norm of each column equals 1, we took
$\sigma=1/\sqrt{n}$, and we
set $\alpha=0$, $\beta=2/n$, $\tau=1/\sqrt{n(p-1)}$ and $h=10^{-3}$,
$T=200$. Using these parameter values we did not encounter
convergence problems in our simulation study.
Our procedure depends on two tuning parameters: $K$ occurring in the penalty
function (see Equation~\ref{definition_penalite}) and $D$ the maximum
degree of the graph. We choose $K=2.5$ in all simulation
experiments. In practice, we would like to choose $D$ as large as
possible, and this can be done for all methods except {\tt QE} where the complexity of the algorithm increases exponentially
with $D$.
All these methods are implemented in R-2.7.2 in
the package {\tt GGMselect} available on request.
\subsection{CPU times}\label{section_comparaison_temporel}
We assess the practical feasibility of the methods we propose
from the point of view of the memory size and computer time.
To this aim,
we simulate graphs with $p=30, 100, 200, 300$ nodes, sparsity $I_{s}=3$
and $n=50$.
The simulations were run on a Bi-Pro Xeon quad core 2.66 GHz with 24 GB of RAM.
The computing time being strongly dependent on the simulated graph, we
calculate the mean of computing times over $N_{G}=100$ simulated
graphs. For each of these graphs, one matrix $\mathbf{X}$ is simulated. The
results are given in Table~\ref{CompT.tb}. The maximum degree $D$ of the
estimated graph was set to 5, except for the {\tt QE} method where
$D=3$ and 5. The maximum allowed memory size is
exceeded for the {\tt QE} method when $D=5$ and $p=100, 200$, and when
$D=3$ for $p=300$. The {\tt LA} and {\tt C01} methods run very
fast. The computing time for the {\tt EW} method increases quickly
with $p$: in
this simulation study, it is roughly proportional to $\exp\left(\sqrt{p}/2\right)$, see Figure~\ref{dessCT.fg}. This order of magnitude is obviously
dependent on the choice of the parameters occurring in the Langevin
Monte-Carlo algorithm for calculating $\widehat\theta^{EW}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{l|rr|rr|rr|r|r}
$p$ & \multicolumn{2}{c|}{{\tt QE} $D=3$} &
\multicolumn{2}{c|}{{\tt QE} $D=5$} &
\multicolumn{2}{c|}{{\tt EW} $D=5$} &
{\tt LA} $D=5$ & {\tt C01} $D=5$ \\ \hline
30 & 16 & $[1.9, 1366]$ & 146 & $[125, 975]$ & 7.4 & $[6, 9.3]$ & 0.47 & 0.05 \\
100 & $1956$ & $[240, 5628]$ & \multicolumn{2}{c|}{$>$ams} & 112 &
$[103, 121]$ & 3.05 & 0.13 \\
200 & $4240 $ & $[4008, 5178]$& \multicolumn{2}{c|}{$>$ams} & 856 & $[813, 943]$
& 8.0 \,\ & 0.65 \\
300 & \multicolumn{2}{c|}{$>$ams} & \multicolumn{2}{c|}{$>$ams} &
$4305$ & $[4240, 4444]$ & 14.7\,\ \ & 2.12 \\
\end{tabular}
\end{center}
\caption{\label{CompT.tb} Means and ranges (in square brackets) of
computing times in seconds
calculated over $N_{G}=100$ simulated
graphs. For {\tt LA} and {\tt C01} there is no variability in the computing
times. {\rm $>$ams} means that the maximum allowed memory size was exceeded.}
\end{table}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm,width=4cm,angle=270]{dessCT.ps}
\end{center}
\caption{\label{dessCT.fg}Graphic of $\log^{2}(\mbox{CPU time})$
versus $p$ for the {\tt EW} method.}
\end{figure}
\subsection{Methods comparison}
We compare our methods with the following ones:
\begin{itemize}
\item the 0-1 conditional independence approach proposed by Wille and
B\"uhlman~\cite{WB06}, with the decision rule based on the adjusted
p-values following the Benjamini-Hochberg procedure taking
$\alpha=5\%$.
\item the lasso approach, with the two variants {\tt and} and {\tt or}
proposed by Meinshausen and B\"uhlmann~\cite{MB06}, taking $\alpha=5\%$.
\item the adaptive glasso method proposed by Fan {\it et
al.}~\cite{Fan08}. It works in two steps. First, the matrix
$\Omega$ is estimated using
the glasso method. Then the glasso procedure is run again using
weights in the penalty that depend on the previous estimate of
$\Omega$, see Equation~(2.5) in~\cite{Fan08}.
At each step the
regularization parameter is calculated by K-fold
cross-validation.
\end{itemize}
These methods will be denoted as {\tt WB}, {\tt MB.and}, {\tt MB.or} and
{\tt Aglasso}. They were implemented in R-2.7.2 using the package
{\tt lars} for the {\tt MB} methods and the package {\tt glasso} for the last
one.
\paragraph{Assessing the performances of the methods}
We assess the performance of the investigated methods on the
basis of $N_{G} \times N_{X}$ runs where $N_{G}$ is the number of
simulated graphs and $N_{X}$ the number of matrices $\mathbf{X}$ simulated
for each of these graphs. We compare
each simulated graph $G$ with the estimated graphs $\widehat{G}$ by
counting edges that are correctly identified as present or absent, and
those that are wrongly identified. We thus estimate the false
discovery rate (or FDR) defined as the expected
proportion of wrongly detected edges among edges detected as
present, and the
power, defined as the expected proportion of correctly detected edges
among edges present in the graph.
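
Concretely, from the adjacency matrices of $G$ and $\widehat{G}$, the empirical FDR and power of one run can be computed as in the following Python sketch; these quantities are then averaged over the $N_{G}\times N_{X}$ runs.
\begin{verbatim}
import numpy as np

def fdr_and_power(G_true, G_hat):
    # G_true, G_hat : (p, p) boolean adjacency matrices
    iu = np.triu_indices_from(G_true, k=1)   # count each edge once
    t, h = G_true[iu], G_hat[iu]
    detected = int(h.sum())
    true_pos = int(np.logical_and(t, h).sum())
    fdr = 0.0 if detected == 0 else (detected - true_pos) / detected
    power = true_pos / int(t.sum())          # assumes at least one true edge
    return fdr, power
\end{verbatim}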
The statistical procedures
designed to select graphs have one or several parameters that must be
tuned. The quality of the final estimation is then affected both by
the intrinsic ability of the procedure to select an accurate graph and
by the parameter tuning.
First, we investigate the first issue by varying the
values of the tuning parameters and plotting {\it power versus FDR
curves}. We choose $p=100$, $n=50$ and $I_{s}=3$. Then, taking the point
of view of a typical user, we compare the different procedures with
the tuning parameter recommended in the literature.
We investigate the effect of $n$ by choosing $n=30, 50, 100, 150$,
keeping $p=100$. We also evaluate the effect of graph sparsity taking
$I_{s} = 1, 2, 3, 4, 5$, $p=30$ to keep the computer time under
reasonable values, and $n=30$.
\subsubsection{{\it Power versus FDR
curves} when $p=100$}
The number of nodes $p$ and the number of observations $n$ being fixed
to $p=100$, $n=50$, for each of the $N_{G}=20$ simulated graphs, we
estimated the FDR, the power and the MSEP on the basis of
$N_{X}=20$ simulations. These calculations are done for different values of
the tuning parameter.
The means over the $N_{G}$ graphs are shown
in Figure~\ref{dess7.fg}. The standard errors of the
means over the $N_{G}$ graphs are smaller than 0.0057 for the FDR,
and 0.018 for the power.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=12.5cm,width=7cm,angle=270]{dess7.ps}}
\end{center}
\caption{\label{dess7.fg}Graphics of power versus FDR for the case
$p=100$, $n=50$ and $I_{s}=3$. The marks on the first graphic
correspond to different values of the tuning parameter. The curves
for small FDR values are magnified on the second graphic. The FDR
and power values corresponding to the tuning parameter recommended
in the literature are superimposed on the curves (dashed lines): $K=2.5$ for {\tt GGMselect}, $\alpha=5\%$ for {\tt WB} and {\tt MB} methods. For {\tt
Aglasso}, with $\lambda$ chosen by 5-fold cross-validation, the FDR equals
0.90 and the power equals 0.59 (not shown).}
\end{figure}
\paragraph{Choice of the family of candidate graphs in our procedure}
The {\tt QE} method presents good performance: the
FDR stays small and the power is high. Though it was performed with $D=3$,
while {\tt EW}, {\tt LA} and {\tt C01} were performed with $D=5$,
it performs best. The {\tt EW}
method is more powerful than {\tt LA} and {\tt C01} if one accepts an
FDR greater than 2.5\%.
\paragraph{Comparison with the other methods}
The procedures {\tt LA} and {\tt C01} behave similarly to the {\tt WB}
method. The {\tt MB.or} method presents higher values of the power
when the FDR is larger than 5\%. The {\tt MB.and} method keeps the
FDR down but lacks power. The {\tt Aglasso} method behaves in a completely
different way: its curve stays below the others as long as the FDR is
smaller than 20\%. When the regularization parameter is chosen by
5-fold cross-validation, the power equals $59\%$ at the price of a very
large FDR equal to 90\% (not shown). In the following we no longer consider
the adaptive glasso method, and focus on methods that have a good
control of the FDR.
\subsubsection{Effect of the number of observations $n$}
Keeping $p=100$ and $I_{s}=3$, the variations of the FDR and power
values versus the number of observations are shown in
Figure~\ref{dess4.fg}. The {\tt QE}
method is applied with $D=3$ while {\tt EW}, {\tt LA} and {\tt C01} are
applied with $D=5$. For all methods the power
increases with $n$ while the FDR decreases for {\tt EW} and increases
for {\tt MB.or}, {\tt LA} and {\tt C01}. {\tt QE} and {\tt EW} are the
most powerful. When $n$ is small, the {\tt QE}
method stays more powerful than {\tt EW} in spite of a smaller $D$.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=12.5cm,width=6cm,angle=270]{dess4.ps}}
\end{center}
\caption{\label{dess4.fg}FDR and power
estimated values as a function of $n$ for $p=100$ and $I_{s}=3$. The results
are calculated on the basis
of $N_{G}=20$ simulated graphs and $N_{X}=20$ runs of matrices $\mathbf{X}$ for each
simulated graph. Our procedures were carried out with $K=2.5$. The
value of $D$ was equal to 3 for the {\tt QE} method and 5 for the
others. For the procedures {\tt MB.or}, {\tt MB.and} and {\tt WB}
the tuning parameter $\alpha$ was taken equal to $5\%$.}
\end{figure}
\subsubsection{Effect of graph sparsity}
We have seen that when $p$ is large, the GGMselect procedures using the
graph families {\tt QE} and {\tt EW} are powerful and have a good
control of the FDR. Nevertheless, the simulated graphs were sparse,
$I_{s}=3$, and it may be worthwhile testing how the methods perform
when the graph sparsity varies. Because the performance depends
strongly on the simulated graph, the FDR and power are estimated on
the basis of a
large number of simulations: the number of simulated graphs $N_{G}$
equals 50 and the number of simulated matrices $\mathbf{X}$ for each graph,
$N_{X}$ equals 50. In order to keep reasonable computing times, we
choose $p=30$. The results are shown
in Figure~\ref{dess1.fg}. The standard errors of the
means over the $N_{G}$ graphs are smaller than 0.0055 for the FDR,
and 0.025 for the power.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=12.5cm,width=6cm,angle=270]{dess1.ps}}
\end{center}
\caption{\label{dess1.fg} Graphs of FDR and power
estimated values versus the graph sparsity $I_{s}$, for
$p=30$ and $n=30$. The results are
calculated on the basis
of $N_{G}=50$ simulated graphs and $N_{X}=50$ runs of matrices $\mathbf{X}$ for each
simulated graph. Our procedures were carried out with $K=2.5$ and
$D=5$. For the procedures {\tt MB.or}, {\tt MB.and} and {\tt WB} the
tuning parameter $\alpha$ was taken equal to $5\%$.}
\end{figure}
For all methods the power decreases when $I_{s}$ increases. The FDR
values are slightly increasing with $I_{s}$ for the {\tt EW} and {\tt
MB.or} methods. The superiority of {\tt QE} over the others is
clear. {\tt EW} is more powerful than the {\tt LA}, {\tt C01}, {\tt MB}
and {\tt WB} methods but its FDR is greater.
\subsubsection{GGMselect: mixing graph families}
Our procedure allows mixing several graph families. It may happen
that some graphs, or types of graphs, are known to be good candidates
for modelling the observed data set. In that case, they can be
considered in the procedure, and thus compete with
$\widehat{\mathcal{G}}_{\mathrm{EW}}$ or
$\widehat{\mathcal{G}}_{\mathrm{QE}}$. This can be done with the function {\tt selectMyFam} of the package {\tt GGMselect}.
Considering the results of our
simulation study, one may ask whether mixing $\widehat{\mathcal{G}}_{\mathrm{LA}}$
or $\widehat{\mathcal{G}}_{\mathrm{C01}}$ with $\widehat{\mathcal{G}}_{\mathrm{EW}}$
would give a better control of the FDR than {\tt EW} while keeping
high values of the power. To answer this question we carried out
simulation studies taking
$\widehat{\mathcal{G}}_{{\tt mix}} = \widehat{\mathcal{G}}_{\mathrm{C01}} \cup
\widehat{\mathcal{G}}_{\mathrm{LA}} \cup \widehat{\mathcal{G}}_{\mathrm{EW}}$
as the family of graphs. In all considered cases for $p$, $n$,
$I_{s}$, the FDR and power values based on $\widehat{\mathcal{G}}_{{\tt
mix}}$ are
similar to those based on $\widehat{\mathcal{G}}_{\mathrm{EW}}$. This result
can be explained by studying the behavior of the MSEP estimated by
averaging the quantities $\|\Sigma^{1/2}
(\widehat{\theta}_{\widehat{G}} - \theta)\|^{2}$ over
the $N_{G} \times N_{X}$ runs. The results are given in
Figure~\ref{dess11.fg}. One can see that the smallest values of the MSEP are
obtained for {\tt QE}, then {\tt EW}. Moreover, the MSEP decreases when the
power increases, while it does not show any particular tendency when
the FDR varies. Considering these tendencies together with the fact
that our procedure aims at
minimizing the MSEP, we can understand why we do not improve the
performance of {\tt EW} by considering $\widehat{\mathcal{G}}_{{\tt
mix}}$.
\begin{figure}[htbp]
\begin{center}
{\includegraphics[height=12.5cm,width=5cm,angle=270]{dess11.ps}}
\end{center}
\caption{\label{dess11.fg}Values of the MSEP for
the simulation results given in Figure~\ref{dess1.fg}. The first
graphic on the left presents the ratio of the MSEP over the MSEP
of the {\tt QE} method. The two others present the MSEP
versus the FDR and the power.}
\end{figure}
\subsection{Summary}
We recommend using the {\tt QE} method if the calculation of $\widehat{G}_{K,\text{and}}$ and
$\widehat{G}_{K,\text{or}}$ is possible. Next, working out the
family $\widehat{\mathcal{G}}_{\mathrm{QE}}$ can always be done using
suitable algorithms if necessary (for example, a stepwise
procedure). When $p$ is large, {\tt QE} can be used for small values of
$D$ ($D=3$ or even $D=2$). It may perform better than all the
others when $n$ is small. The procedure
based on $\widehat\mathcal{G}_{\mathrm{EW}}$ can be used for large $p$: the gain in
power over the {\tt LA}, {\tt C01}, {\tt MB} and {\tt WB} methods is
significant, but the FDR is slightly greater.
The {\tt LA} and {\tt C01} methods run very quickly, keep
the FDR under control and
are slightly more powerful than {\tt WB} and {\tt MB.and}.
\section{Proofs}\label{section_preuves}
\subsection{Proof of Theorem \ref{proposition_risque}}
We write $\mathcal{G}_{D}$ for the family of all the graphs with nodes in $\Gamma$ and degree less than $D$. We remind the reader that
for any graph $G\in\mathcal{G}_{D}$ we have written $\Theta_{G}$ for the space of $p\times p$ matrices $\theta$ such that $\theta_{a,b}$ can be non-zero only if there is an edge between $a$ and $b$ in $G$. We also set $\bar\Theta_{D_{\max}}=\cup_{G\in\mathcal{G}_{D_{\max}}}\Theta_{G}$.
We set $\lambda=(1-\sqrt{\gamma})^2$ and introduce the event
$${\mathbb{B}}=\ac{\lambda\|\Sigma^{1/2}A\|_{p \times p}\leq {1\over \sqrt n}\|\mathbf{X} A\|_{n\times p}\leq \lambda^{-1}\|\Sigma^{1/2}A\|_{p \times p},\ \textrm{for all }A\in\theta+\bar\Theta_{D_{\max}}}.$$
On this event we can control the $L^2$-loss of $\tilde \theta$ by the empirical loss since
\begin{equation}\label{borne1}
\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}{\mathbf 1}_{{\mathbb{B}}}\leq {\lambda^{-2}\over n}\|\mathbf{X}(\tilde \theta-\theta)\|^2_{n \times p}{\mathbf 1}_{{\mathbb{B}}}\,.
\end{equation}
Furthermore, according to Lemma 1 in \cite{giraud08}, we have ${\mathbb{P}}({\mathbb{B}}^c)\leq 2e^{-n(\sqrt{\gamma}-\gamma)^2/2}$ when Condition~(\ref{condition_degree}) is met. To bound the risk of the procedure, we treat separately the events ${\mathbb{B}}$ and ${\mathbb{B}}^c$.
\subsubsection{Bound on ${\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p \times p}{\mathbf 1}_{{\mathbb{B}}}}$}
We have $\mathbf{X}=\mathbf{X}\theta+\boldsymbol{\epsilon}$, where $\boldsymbol{\epsilon}$ is an $n\times p$ matrix distributed as follows: for each $a\in\Gamma$, the column $\boldsymbol{\epsilon}_{a}$ is independent of $\mathbf{X}_{-a}$ and is distributed according to the Gaussian law $\mathcal{N}(0,\sigma^2_{a}I_{n})$, with $\sigma^2_{a}=1/\Omega_{a,a}$. For any $G\in\mathcal{G}_{D}$, we write henceforth $\theta^{G}$ for the orthogonal projection of $\theta$ on $\Theta_{G}$ according to the Euclidean norm $\|\Sigma^{1/2}\cdot\|_{p\times p}$ on ${\mathbb{R}}^{p\times p}$.
Similarly, we write $\bar \theta^{G}$ for the orthogonal projection of $\theta$ on $\Theta_{G}$ according to the (random) Euclidean norm $\|\mathbf{X}\cdot\|_{n\times p}$ on ${\mathbb{R}}^{p\times p}$.
For any $G\in\mathcal{G}_{D}$, we write $d_{a}(G)$ for the degree of the node $a$ in $G$ and introduce the positive quantity
\begin{multline*}
R(G)=\sum_{a=1}^p\pa{1+{\mathrm{pen}(d_{a}(G))\over n-d_{a}(G)}}\pa{\|\mathbf{X}(\theta_{a}-\bar \theta_{a}^G)\|^2+2|<\mathbf{X}\theta_{a}-\mathbf{X}\bar\theta^G_{a},\boldsymbol{\epsilon}_{a}>|}\\
+\sum_{a=1}^p{\mathrm{pen}(d_{a}(G))\over n-d_{a}(G)}\|\boldsymbol{\epsilon}_{a}\|^2,
\end{multline*}
where $\|.\|$ and $<.,.>$ denote the canonical norm and scalar product on ${\mathbb{R}}^n$.
Following the same lines as in the beginning of the proof of Theorem~2 in Baraud {\it et al.}~\cite{BGH09}, we get for any $G^*$ in $\widehat \mathcal{G}$
\begin{equation}\label{borne2}
{K-1\over K}\|\mathbf{X}(\tilde \theta-\theta)\|_{n\times p}^2{\mathbf 1}_{{\mathbb{B}}}\leq R(G^*){\mathbf 1}_{{\mathbb{B}}}+\Delta(\widehat G){\mathbf 1}_{{\mathbb{B}}}
\end{equation}
with
$$\Delta(G)=\sum_{a=1}^p{\sigma^2_{a}}\pa{KU_{\mathrm{ne}_{G}(a)}-{\mathrm{pen}(d_{a}(G))\over n-d_{a}(G)}V_{\mathrm{ne}_{G}(a)}}_{+}$$
where $U_{\mathrm{ne}_{G}(a)}$ and $V_{\mathrm{ne}_{G}(a)}$ are two independent $\chi^2$ random variables with $d_{a}(G)+1$ and $n-d_{a}(G)-1$ degrees of freedom.
We note that under Condition (\ref{condition_degree}) there exists some constant $c(\gamma)$ depending on $\gamma$ only, such that
$$\mathrm{pen}(d)\leq c(\gamma) K (d+1)\log (p),\quad\textrm{for all } d\in\ac{0,\ldots,D_{\max}},$$
see Proposition~4 in Baraud {\it et al.}~\cite{BGH09}. In particular,
we have for any $G\in\mathcal{G}_{D}$
$${\mathrm{pen}(d_{a}(G)) \over n-d_{a}(G)}\leq{c(\gamma)K(D_{\max}+1)\log(p)\over n/2}\leq 4K\gamma c(\gamma) = L_{\gamma,K}.$$
Using this bound together with
$$|2<\mathbf{X}\theta_{a}-\mathbf{X}\bar\theta_{a}^G,\boldsymbol{\epsilon}_{a}>|\leq \|\mathbf{X}(\theta_{a}-\bar \theta_{a}^G)\|^2+\sigma_{a}^2\xi_{a,G}^2,$$
where for any $G\in\mathcal{G}_{D}$ and $a\in\ac{1,\ldots,p}$, the random variable
$$\xi_{a,G}=<\mathbf{X}(\theta_{a}-\bar \theta_{a}^G),\boldsymbol{\epsilon}_{a}>/(\sigma_{a}\|\mathbf{X}(\theta_{a}-\bar \theta_{a}^G) \|)$$ is standard Gaussian, we obtain
\begin{eqnarray*}
R(G)&\leq&(1+L_{\gamma,K})\sum_{a=1}^p\pa{2\|\mathbf{X}(\theta_{a}-\bar \theta_{a}^G)\|^2+\sigma_{a}^2\xi_{a,G}^2}+{\mathrm{pen}(d_{a}(G)) \over n-d_{a}(G)}\,\|\boldsymbol{\epsilon}_{a}\|^2\\
&\leq & 2(1+L_{\gamma,K})\|\mathbf{X}(\theta-\bar \theta^G)\|_{n\times p}^2+(4+L_{\gamma,K})\sum_{a=1}^p\mathrm{pen}(d_{a}(G))\sigma_{a}^2+r(\mathcal{G}_{D})
\end{eqnarray*}
where
$$r(\mathcal{G}_{D})=\sum_{a=1}^p\sigma_{a}^2\pa{(1+L_{\gamma,K})\sum_{G\in\mathcal{G}}\cro{\xi_{a,G}^2-\mathrm{pen}(d_{a}(G))}_{+}+L_{\gamma,K}\cro{\|\boldsymbol{\epsilon}_{a}\|^2/\sigma_{a}^2-3n/2}_{+}}.$$
Furthermore, we have $\|\mathbf{X}(\theta-\bar \theta^G)\|_{n\times p}\leq \|\mathbf{X}(\theta- \theta^G)\|_{n \times p}$ and on the event ${\mathbb{B}}$ we also have $\|\mathbf{X}(\theta- \theta^G)\|^2_{n\times p}\leq n\lambda^{-2}\|\Sigma^{1/2}(\theta- \theta^G)\|^2_{p \times p}$ so that on ${\mathbb{B}}$
$$R(G)\leq L'_{\gamma,K}\pa{n\lambda^{-2}\|\Sigma^{1/2}(\theta-\theta^G)\|^2_{p \times p}+\sum_{a=1}^p\mathrm{pen}(d_{a}(G))\sigma_{a}^2}+r(\mathcal{G}_{D}),$$
with $L'_{\gamma,K}=\max(2+2L_{\gamma,K},4+L_{\gamma,K})$.
Putting this bound together with~(\ref{borne1}) and~(\ref{borne2}), we obtain
\begin{eqnarray*}
\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}{\mathbf 1}_{{\mathbb{B}}} &\leq&
{K\over n\lambda^2(K-1)}\pa{\inf_{G^*\in\widehat \mathcal{G}}R(G^*)+\Delta(\widehat G)}{\mathbf 1}_{{\mathbb{B}}}\\
&\leq& L''_{\gamma,K}\inf_{G^*\in\widehat\mathcal{G}}\pa{\|\Sigma^{1/2}(\theta-\theta^{G^*})\|^2_{p\times p}+\sum_{a=1}^p\mathrm{pen}(d_{a}(G^*)){\sigma_{a}^2\over n}}\\
& &+L''_{\gamma,K}n^{-1}\pa{r(\mathcal{G}_{D})+\Delta(\widehat G)}.
\end{eqnarray*}
We note that
$$n^{-1}{\mathbb{E}}(r(\mathcal{G}_{D}))\leq \sum_{a=1}^p{\sigma_{a}^2\over n} (1+L_{\gamma,K})(3+\log(p))$$
and we get from the proof of Theorem 1 in \cite{giraud08} that
\begin{eqnarray*}
n^{-1}{\mathbb{E}}(\Delta(\widehat G))&\leq& n^{-1}{\mathbb{E}}\pa{\sup_{G\in\mathcal{G}_{D}}\Delta(G)}\\
&\leq& K\sum_{a=1}^p{\sigma_{a}^2\over n} (1+\log(p)).
\end{eqnarray*}
Since $\mathrm{pen}(d)\leq c(\gamma) K (d+1)\log (p)$, the latter bounds imply the existence of constants, still denoted $L_{\gamma,K}$ and $L'_{\gamma,K}$ with a slight abuse of notation, depending on $\gamma$ and $K$ only, such that
\begin{eqnarray*}
\lefteqn{{\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}{\mathbf 1}_{{\mathbb{B}}}}}\\
&\leq&L_{\gamma,K}\,{\mathbb{E}}\cro{\inf_{G^*\in\widehat\mathcal{G}}\pa{\|\Sigma^{1/2}(\theta-\theta^{G^*})\|^2_{p \times p}+\sum_{a=1}^p\big(\log (p)\vee\mathrm{pen}[d_{a}(G^*)]\big){\sigma_{a}^2\over n}}}\\
&\leq& L'_{\gamma,K}\,\log(p)\pa{ {\mathbb{E}}\cro{\inf_{G^*\in\widehat\mathcal{G}}\textrm{MSEP}(\widehat\theta_{G^*})}\vee\sum_{a=1}^p{\sigma_{a}^2\over n}}.
\end{eqnarray*}
Finally, we note that $\sum_{a=1}^p{\sigma_{a}^2/ n}=\mathrm{MSEP}(I)$.
\subsubsection{Bound on ${\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}{\mathbf 1}_{{\mathbb{B}}^c}}$}\label{section_risque_petit_evenement}
We now prove the bound
${\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}{\mathbf 1}_{{\mathbb{B}}^c}} \leq Ln^3\text{tr}(\Sigma)\sqrt{\mathbb{P}(\mathbb{B}^c)}$.
We have
$${\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p\times p}{\mathbf 1}_{{\mathbb{B}}^c}}=
\sum_{a=1}^p{\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta_{a}-\theta_{a})\|^2{\mathbf 1}_{{\mathbb{B}}^c}}$$
and we will upper bound each of the $p$ terms in this sum. Let $a$ be any node in $\Gamma$.
Given a graph $G$, the vector $[\widehat{\theta}_{G}]_a$ depends on $G$ only through the neighborhood $\mathrm{ne}_G(a)$ of $a$ in $G$.
Henceforth, we write $\widehat{\theta}_{\mathrm{ne}_{\widehat{G}}(a)}$ for $\tilde \theta_{a}$ in order to emphasize this dependency. By definition, $\widehat\theta_{\mathrm{ne}_{\widehat{G}}(a)}$ is the least-squares estimator of $\theta_a$ with support included in $\mathrm{ne}_{\widehat{G}}(a)$. Let us apply the same arguments as in the proof of Lemma 7.12 in \cite{Verzelen08}. By the Cauchy-Schwarz inequality, we have
\begin{equation}\label{majoration1}
{\mathbb{E}}\cro{\|\Sigma^{1/2}(\tilde \theta_{a}-\theta_a)\|^2\mathbf{1}_{\mathbb{B}^c}} \leq \sqrt{\mathbb{P}(\mathbb{B}^c)}\ \sqrt{\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}_{\widehat{G}}(a)}-\theta_a)\|^4}}.
\end{equation}
Let $\mathcal{N}_D(a)$ be the set made of all the subsets of $\Gamma\setminus\{a\}$ whose size is smaller than $\gamma n/[2(1.1+\sqrt{\log(p)})^2]$. By Condition (\ref{condition_degree}), it holds that
the estimated neighborhood $\mathrm{ne}_{\widehat{G}}(a)$ belongs to $\mathcal{N}_D(a)$, so H\"older's inequality gives
\begin{eqnarray*}
\lefteqn{\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}_{\widehat{G}}(a)}-\theta_a)\|^4}=\sum_{\mathrm{ne}(a) \in \mathcal{N}_D(a)}\mathbb{E}\cro{\mathbf{1}_{\mathrm{ne}_{\widehat{G}}(a)=\mathrm{ne}(a)}\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^4}}\\
&\leq& \sum_{\mathrm{ne}(a) \in \mathcal{N}_D(a)}\mathbb{P}\left[\mathrm{ne}_{\widehat{G}}(a)=\mathrm{ne}(a)\right]^{1/u}\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{4v}}^{1/v}\\
&\leq& \sum_{\mathrm{ne}(a) \in \mathcal{N}_D(a)}\mathbb{P}\left[\mathrm{ne}_{\widehat{G}}(a)=\mathrm{ne}(a)\right]^{1/u} \sup_{\mathrm{ne}(a) \in \mathcal{N}_D(a)}\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{4v}}^{1/v},
\end{eqnarray*}
where $v=\left\lfloor\frac{n}{8}\right\rfloor$, and $u=\frac{v}{v-1}$
(we remind the reader that $n$ is larger than $8$). In particular, we have the crude bound
\begin{multline*}
\sqrt{\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}_{\widehat{G}}(a)}-\theta_a)\|^4}}\\
\leq \left[\text{Card}(\mathcal{N}_D(a))\right]^{1/2v}\sup_{\mathrm{ne}(a) \in \mathcal{N}_D(a)}\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{4v}}^{1/2v},
\end{multline*}
since the sum is maximum when every $\mathbb{P}[\mathrm{ne}(a)=\mathrm{ne}_{\widehat{G}}(a)]$ equals $[\text{Card}(\mathcal{N}_D(a))]^{-1}$.
We first bound the term $\left[\text{Card}(\mathcal{N}_D(a))\right]^{1/2v}$. The size of the largest subset in $\mathcal{N}_D(a)$ is smaller than $n/(2\log (p))$, so the cardinality of $\mathcal{N}_D(a)$ is smaller than $p^{n/(2\log (p))}$. Since $n$ is larger than 8, we get
\begin{eqnarray*}
\left[\text{Card}(\mathcal{N}_D(a))\right]^{1/2v}\leq \exp\left[\frac{n}{4\lfloor n/8\rfloor}\right]\leq L\ ,
\end{eqnarray*}
which ensures the bound
\begin{equation}
\sqrt{\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}_{\widehat{G}}(a)}-\theta_a)\|^4}}\leq L\sup_{\mathrm{ne}(a) \in \mathcal{N}_D(a)}\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{4v}}^{1/2v}. \label{majoration_holder}
\end{equation}
To conclude, we need to upper bound this supremum. Given a subset $\mathrm{ne}(a)$ in $\mathcal{N}_D(a)$, we define $\theta_{\mathrm{ne}(a)}$
as the vector in ${\mathbb{R}}^p$
such that $\Sigma^{1/2}\theta_{\mathrm{ne}(a)}$ is the orthogonal projection of $\Sigma^{1/2}\theta_a$ onto the linear span $\ac{\Sigma^{1/2}\beta:\textrm{supp}(\beta)\subset\mathrm{ne}(a)}$.
The Pythagorean equality gives
\begin{eqnarray*}
\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{2}=\|\Sigma^{1/2}(\theta_{\mathrm{ne}(a)}-\theta_a)\|^{2}+\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_{\mathrm{ne}(a)})\|^{2}
\end{eqnarray*}
and we obtain from Minkowski's inequality that
\begin{multline*}
\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{4v}}^{1/(2v)} \\
\leq \|\Sigma^{1/2}(\theta_{\mathrm{ne}(a)}-\theta_a)\|^{2}+\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_{\mathrm{ne}(a)})\|^{4v}}^{1/(2v)}.
\end{multline*}
The first term is smaller than $\mbox{Var}(X_a)$. In order to bound the second term, we use the following lemma which rephrases Proposition~7.8 in~\cite{Verzelen08}.
\begin{lemma}\label{lemme_majoration_risque_lp}
For any neighborhood $\mathrm{ne}(a)$ and any $r>2$ such that $n-|\mathrm{ne}(a)|-2r+1>0$,
$$\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_{\mathrm{ne}(a)})\|^{2r}}^{1/r} \leq Lr|\mathrm{ne}(a)|n\mbox{Var}(X_a)\,.$$
\end{lemma}
Since $v$ is smaller than $n/8$ and since $|\mathrm{ne}(a)|$ is smaller than $n/2$, it follows that for any model $\mathrm{ne}(a)\in\mathcal{N}_D(a)$, $n-|\mathrm{ne}(a)|-4v+1$ is positive and
\begin{eqnarray*}
\mathbb{E}\cro{\|\Sigma^{1/2}(\widehat{\theta}_{\mathrm{ne}(a)}-\theta_a)\|^{4v}}^{1/(2v)} \leq \mbox{Var}(X_a)\left[1+ Ln^2v\right]
\leq Ln^3\Sigma_{a,a}\,.
\end{eqnarray*}
Gathering this last upper bound with (\ref{majoration1}) and (\ref{majoration_holder}), we get that
\begin{eqnarray*}
\mathbb{E}\cro{\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2_{p \times p}\mathbf{1}_{\mathbb{B}^c}}&\leq& Ln^3\text{tr}(\Sigma) \sqrt{\mathbb{P}(\mathbb{B}^c)}.
\end{eqnarray*}
\subsubsection{Conclusion}
Finally, putting together the bound on $ \mathbb{E}[\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2\mathbf{1}_{\mathbb{B}}]$, the bound on $ \mathbb{E}[\|\Sigma^{1/2}(\tilde \theta-\theta)\|^2\mathbf{1}_{\mathbb{B}^c}]$, and the bound ${\mathbb{P}}({\mathbb{B}}^c)\leq 2e^{-n(\sqrt{\gamma}-\gamma)^2/2}$, we obtain
$$\mathrm{MSEP}(\widetilde{\theta}) \leq L_{K,\gamma}\log(p)\pa{ {\mathbb{E}}\cro{\inf_{G\in\widehat \mathcal{G}}\pa{\mathrm{MSEP}(\widehat{\theta}_G)}}\vee {\mathrm{MSEP}(I)\over n}}
+R_n\,,$$
with $R_{n}\leq Ln^3\text{tr}(\Sigma)
e^{-n(\sqrt{\gamma}-\gamma)^2/4}$.
\subsection{Proof of Corollary \ref{corollaire_risque}}
The result is proved analogously except that we replace the event $\mathbb{B}$ by
$$\mathbb{B}'= \mathbb{B}\cap \left\{ G_{\Sigma}\in\widehat{\mathcal{G}}\right\}\ .$$
Hence, the residual term now satisfies
\begin{eqnarray*}
R_n &\leq &Ln^3\text{tr}(\Sigma) \sqrt{\mathbb{P}(\mathbb{B}'^c)}\\ & \leq &Ln^3\text{tr}(\Sigma) \left[e^{-n(\sqrt{\gamma}-\gamma)^2/4}+ \sqrt{L(\alpha)}e^{-\frac{\beta}{2}n^\delta}\right]\ .
\end{eqnarray*}
\subsection{Proof of Theorem \ref{proposition_consistance}}
In this proof, the notations $o(1)$ and $O(1)$ respectively refer to sequences that converge to $0$ or stay bounded when $n$ goes to infinity. These sequences may depend on $K$, $s$, $s'$ but \emph{do not} depend on $G_n$, on the covariance $\Sigma$, or on a particular subset $S\subset\Gamma$. The technical lemmas are postponed to Section \ref{section_lemmas}. In the sequel, we omit the dependency of $p$ and $\Sigma$ on $n$ for the sake of clarity.
First, observe that the result is trivial if $n/\log(p)^2<1$, because the assumptions imply that $G_{\Sigma}$ is the empty graph whereas the family $\widehat{\mathcal{G}}$ contains at most the empty graph. In the sequel, we assume that $n/\log(p)^2\geq 1$.\\
Let us set $D_\textrm{max}=n/\log(p)^2$. We shall prove that for some $L>0$,
\begin{eqnarray}\label{equation_consistance_preuve1}
\mathbb{P}\left(\mathrm{Crit}(G_{\Sigma}) = \inf_{G',\ \deg(G')\leq D_\textrm{max}} \mathrm{Crit}(G')\right) \geq 1- Lp^{-1/2}\ ,
\end{eqnarray}
for $n$ larger than $n_0(K,s,s')$. Since $\widehat{G}$ minimizes the criterion $\mathrm{Crit}(.)$ on the family $\widehat{\mathcal{G}}$, this will imply the result of the theorem.\\
In fact, we shall prove a slightly stronger result than (\ref{equation_consistance_preuve1}). Let $a$ be a node in $\Gamma$ and let $\mathrm{ne}(a)$ be a subset of $\Gamma\setminus\{a\}$. As defined in Section \ref{section_risque_petit_evenement}, $\widehat{\theta}_{\mathrm{ne}(a)}$ is the least-squares estimator of $\theta_a$ whose support is included in $\mathrm{ne}(a)$.
\begin{eqnarray*}
\widehat{\theta}_{\mathrm{ne}(a)} = \arg\inf_{\theta'_a,\ \mathrm{supp}(\theta'_a)\subset \mathrm{ne}(a)} \|{\bf X}_a-{\bf X}\theta'_a\|_n^2\ .
\end{eqnarray*}
If $G$ is a graph such that the neighborhood $\mathrm{ne}_G(a)$ equals $\mathrm{ne}(a)$, then $\widehat{\theta}_{\mathrm{ne}(a)}=[\widehat{\theta}_G]_a$. We then define the partial criterion $\mathrm{Crit}(a,\mathrm{ne}(a))$ by
\begin{eqnarray*}
\mathrm{Crit}(a,\mathrm{ne}(a))=\|{\bf X}_a- {\bf X}\widehat{\theta}_{\mathrm{ne}(a)}\|_n^2\left(1+\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right) \ .
\end{eqnarray*}
Observe that for any graph $G$, $\mathrm{Crit}(G)=\sum_{a=1}^p\mathrm{Crit}(a,\mathrm{ne}_{G}(a))$. We denote by $\widehat{\mathrm{ne}}(a)$ the set that minimizes the criterion $\mathrm{Crit}(a,.)$ among all subsets of size smaller than $D_\textrm{max}$.
\begin{eqnarray*}
\widehat{\mathrm{ne}}(a) = \arg \inf_{\mathrm{ne}(a)\in \mathcal{N}_{D_\textrm{max}}(a)} \mathrm{Crit}(a,\mathrm{ne}(a))\ .
\end{eqnarray*}
If for all nodes $a\in\Gamma$, the selected set $\widehat{\mathrm{ne}}(a)$ equals $\mathrm{ne}_{G_{\Sigma}}(a)$, then $G_{\Sigma}$ minimizes the criterion $\mathrm{Crit}(.)$ over all graphs of degree smaller than $D_\textrm{max}$. Consequently, the property (\ref{equation_consistance_preuve1}) is satisfied if for any node $a\in\Gamma$, it holds that
\begin{eqnarray}\label{equation_consistance_preuve2}
\mathbb{P}\left[\widehat{\mathrm{ne}}(a) = \mathrm{ne}_{G_{\Sigma}}(a)\right]\geq 1- 7p_{n}^{-3/2}\ ,
\end{eqnarray}
for $n$ larger than some $n_0[K,s,s']$.\\
Let us fix some node $a\in\Gamma$. We prove the lower bound (\ref{equation_consistance_preuve2}) in two steps:
\begin{enumerate}
\item With high probability, the estimated neighborhood $\widehat{\mathrm{ne}}(a)$ does not strictly contain the true one $\mathrm{ne}_{G_{\Sigma}}(a)$.
\begin{eqnarray}\label{equation_consistance_preuve3}
\mathbb{P}\left[\widehat{\mathrm{ne}}(a) \varsupsetneq \mathrm{ne}_{G_{\Sigma}}(a) \right]\leq p_{n}^{-3/2}\ ,
\end{eqnarray}
for $n$ larger than some $n_0[K,s,s']$.
\item With high probability, the estimated neighborhood $\widehat{\mathrm{ne}}(a)$ contains the true one $\mathrm{ne}_{G_{\Sigma}}(a)$.
\begin{eqnarray}\label{equation_consistance_preuve4}
\mathbb{P}\left[\widehat{\mathrm{ne}}(a) \nsupseteq \mathrm{ne}_{G_{\Sigma}}(a) \right]\leq 6p_{n}^{-3/2}\ ,
\end{eqnarray}
for $n$ larger than some $n_0[K,s,s']$.
\end{enumerate}
The remaining part of the proof is devoted to proving (\ref{equation_consistance_preuve3}) and (\ref{equation_consistance_preuve4}). \\
Let us recall some notation and introduce some more. The component $X_a$ decomposes as
\begin{eqnarray*}
X_a= X\theta_a+\epsilon_a\ ,
\end{eqnarray*}
where $\epsilon_a$ follows a centered normal distribution with variance $\Omega_{a,a}^{-1}=\mbox{Var}(X_a|X_{-a})$. The variables $\epsilon_a$ are independent of $X_{-a}$.
Given a set $S\subset \Gamma$, $\Pi_S$ stands for the orthogonal projection of $\mathbb{R}^n$ onto the space spanned by $({\bf X}_a)_{a\in S}$, whereas $\Pi_S^{\perp}$ denotes the projection onto the orthogonal complement of this space. The notation $\langle.,.\rangle_n$ refers to the empirical inner product associated with the norm $\|.\|_n$.
For any neighborhood $\mathrm{ne}(a)\subset\Gamma\setminus\{a\}$ such that $|\mathrm{ne}(a)|\leq D_\textrm{max}$, let us define $\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))$ by
\begin{eqnarray*}
\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))= \mathrm{Crit}(a,\mathrm{ne}(a)) - \mathrm{Crit}(a,\mathrm{ne}_{G_{\Sigma}}(a))\ .
\end{eqnarray*}
\subsubsection{Bound on $\mathbb{P}\left(\widehat{\mathrm{ne}}(a) \varsupsetneq \mathrm{ne}_{G_{\Sigma}}(a) \right)$}
We shall upper bound the probability that $\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))$ is negative for at least one of the neighborhoods $\mathrm{ne}(a)\in\mathcal{N}_{D_{\textrm{max}}}(a)$ such that $\mathrm{ne}(a)$ strictly contains $\mathrm{ne}_{G_{\Sigma}}(a)$. For such a set $\mathrm{ne}(a)$, $\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))$ decomposes as (see e.g. Lemma 7.1 in \cite{Verzelen08})
\begin{eqnarray*}
\lefteqn{ \Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a)) }\\ & = &\|\Pi_{\mathrm{ne}(a)}^{\perp} \boldsymbol{\epsilon}_a\|_n^2\left[1+\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right]- \|\Pi_{\mathrm{ne}_{G_{\Sigma}}(a)}^{\perp} \boldsymbol{\epsilon}_a\|_n^2\left[1+\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\right]\\
&= & -\|\Pi_{\mathrm{ne}_{G_{\Sigma}}(a)^{\perp}\cap \mathrm{ne}(a) }\boldsymbol{\epsilon}_a\|_n^2\left[1+\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\right]\\ & & \mbox{}+\|\Pi_{\mathrm{ne}(a)}^{\perp} \boldsymbol{\epsilon}_a\|_n^2\left[\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}-\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\right]\ .
\end{eqnarray*}
Hence, $\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))>0$ if
\begin{eqnarray}\label{minoration_importante_consistance}
\lefteqn{\frac{\|\Pi_{\mathrm{ne}_{G_{\Sigma}}(a)^{\perp}\cap \mathrm{ne}(a) }\boldsymbol{\epsilon}_a\|_n^2/(|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|)}{\|\Pi_{\mathrm{ne}(a)}^{\perp} \boldsymbol{\epsilon}_a\|_n^2/(n-|\mathrm{ne}(a)|)}} & &\nonumber \\ & & <\, \frac{\mathrm{pen}(|\mathrm{ne}(a)|)-\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\left[1+\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\right]^{-1}\ .
\end{eqnarray}
To conclude, it remains to prove that the bound (\ref{minoration_importante_consistance}) holds with high probability. Let us call $A_1$ the right expression of (\ref{minoration_importante_consistance}) and let us derive a lower bound of $A_1$. Afterwards, we shall upper bound with high probability the left expression of (\ref{minoration_importante_consistance}).~\\~\\
{\bf Lower bound of $A_1$.} We first upper bound the penalty function.
\begin{lemma}\label{lemme_penalite_general}
Let $d_1\geq d_2$ be two positive integers such that $d_1\leq e^{-3/2}(p-1)$. We have
\begin{eqnarray}\label{minoration_difference_penalite}
\mathrm{pen}(d_1)-\mathrm{pen}(d_2)\geq 2K(d_1-d_2) \log\left(\frac{p-d_1}{d_1}\right)\ .
\end{eqnarray}
\end{lemma}
A proof of this lemma is provided in Section \ref{section_lemmas}.
By Proposition 4 in \cite{BGH09}, the penalty term satisfies
$$\frac{\mathrm{pen}\left(|\mathrm{ne}_{G_{\Sigma}}(a)|\right)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\leq LK\frac{|\mathrm{ne}_{G_{\Sigma}}(a)|}{n}\log\left(\frac{p-1}{|\mathrm{ne}_{G_{\Sigma}}(a)|}\right)\ ,$$
where $L$ is some numerical constant. This last term converges towards $0$ as $n$ goes to infinity since $|\mathrm{ne}_{G_{\Sigma}}(a)|\leq (n^s/\log(p))\wedge(n/\log(p)^2)$ (Assumption 2). Gathering this upper bound with Lemma \ref{lemme_penalite_general}, we get
\begin{eqnarray}
A_1\geq
2K\frac{\log\left(\frac{p-|\mathrm{ne}(a)|}{|\mathrm{ne}(a)|}\right)}{1+\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}}\geq 2K\log\left(\frac{p}{|\mathrm{ne}(a)|}\right)\left(1-o(1)\right)\ . \label{mino1_consistance}
\end{eqnarray}
\medskip
{\bf Upper bound of the left part of (\ref{minoration_importante_consistance})}.
The random variable on the left-hand side of (\ref{minoration_importante_consistance}) follows a
Fisher distribution with $|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|$ and $n-|\mathrm{ne}(a)|$ degrees of freedom. To conclude, we only need to compare the quantiles of such a variable with the bound (\ref{mino1_consistance}).
Let $u\in (0,1)$ and let $F^{-1}_{D,N}(u)$ denote
the $1-u$ quantile of a Fisher random variable with $D$ and $N$ degrees of freedom. By Lemma 1 in \cite{Baraud03}, it holds that
\begin{eqnarray*}
DF^{-1}_{D,N}(u)&\leq &D+2 \sqrt{D\left(1+2\frac{D}{N}\right)\log\left(\frac{1}{u}\right)}\\
& &+\left(1+2\frac{D}{N}\right)\frac{N}{2}\left[\exp\left(\frac{4}{N}\log\left(\frac{1}{u}\right)\right)-1\right]\ .
\end{eqnarray*}
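As a purely numerical illustration (not part of the proof), the bound of \cite{Baraud03} can be compared with the exact Fisher quantiles; the following Python sketch does so with SciPy's \texttt{scipy.stats.f}, for a few arbitrary values of $D$, $N$ and $u$ (the function name is ours).
\begin{verbatim}
# Sanity check of the quantile bound above (illustration only).
import numpy as np
from scipy.stats import f

def quantile_bound(D, N, u):
    # Right-hand side of the bound on D * F^{-1}_{D,N}(u), divided by D.
    L = np.log(1.0 / u)
    c = 1.0 + 2.0 * D / N
    return (D + 2.0 * np.sqrt(D * c * L)
            + c * (N / 2.0) * (np.exp(4.0 * L / N) - 1.0)) / D

for (D, N, u) in [(1, 100, 1e-3), (5, 200, 1e-4), (10, 1000, 1e-6)]:
    q = f.isf(u, D, N)   # exact 1-u quantile of the Fisher(D, N) law
    assert q <= quantile_bound(D, N, u)
\end{verbatim}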
Let us set $u$ to
\begin{eqnarray*}
u= \left\{p^{3/2}e^{|\mathrm{ne}(a)\setminus \mathrm{ne}_{G_{\Sigma}}(a)|}\binom{p-|\mathrm{ne}_{G_{\Sigma}}(a)|-1}{|\mathrm{ne}(a)\setminus \mathrm{ne}_{G_{\Sigma}}(a)|}\right\}^{-1}\ .
\end{eqnarray*}
Since we consider the case $n/\log(p)^2 \geq 1$ and $p\geq n$, the term $4/(n-|\mathrm{ne}(a)|)\log(1/u)$ goes to $0$ with $n$ (uniformly w.r.t. $\mathrm{ne}(a)$).
\begin{eqnarray*}
A_2= F^{-1}_{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|,n-|\mathrm{ne}(a)|}(u) &\leq& 1+ 2 \sqrt{\frac{1}{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\left(1+o(1)\right)\log\left(\frac{1}{u}\right)}\\
& &+\frac{2}{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\left(1+o(1)\right)\log\left(\frac{1}{u}\right)\ .
\end{eqnarray*}
The term $\log(1/u)/|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|$ goes to infinity with $n$ (uniformly w.r.t. $\mathrm{ne}(a)$). Hence, we get
\begin{eqnarray}
A_2 &\leq& 1+\frac{2}{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\log\left(\frac{1}{u}\right)\left(1+ o(1)\right)\nonumber\ .
\end{eqnarray}
Applying the classical inequality $\log\binom{l}{k}\leq k\log(el/k)$, we obtain
\begin{eqnarray}
A_2 &\leq & \left[3\frac{\log(p)}{|\mathrm{ne}(a)\setminus \mathrm{ne}_{G_{\Sigma}}(a)|}+ 2\log\left(\frac{p}{|\mathrm{ne}(a) \setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\right)\right]\left(1+ o(1)\right)\nonumber\\
& \leq & 5\log\left(\frac{p}{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\right)\left(1+o(1)\right)\label{majo1consistance}\ .
\end{eqnarray}
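The classical inequality $\log\binom{l}{k}\leq k\log(el/k)$ invoked above can be checked directly on exact binomial coefficients; the short Python snippet below (a numerical illustration only, not part of the proof) does so for a few values of $l$.
\begin{verbatim}
# Check of log C(l,k) <= k log(e l / k) on exact binomial coefficients.
from math import comb, log, e

for l in (10, 100, 1000):
    for k in range(1, l + 1):
        assert log(comb(l, k)) <= k * log(e * l / k) + 1e-12
\end{verbatim}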
\medskip
{\bf Conclusion.}
Let us compare the lower bound (\ref{mino1_consistance}) of $A_1$ with the upper bound (\ref{majo1consistance}) of $A_2$.
\begin{itemize}
\item Let us first assume that $|\mathrm{ne}(a)|\leq 2|\mathrm{ne}_{G_{\Sigma}}(a)|$. Then, we have
$$A_1\geq 2K\log\left(\frac{p}{|\mathrm{ne}_{G_{\Sigma}}(a)|}\right)\left(1-o(1)\right)\geq 2K(1-s)\log(p)\left(1-o(1)\right)\ ,$$
since $|\mathrm{ne}_{G_{\Sigma}}(a)|\leq n^s/\log(p) \leq p^s$. In particular,
$$ A_2\leq 5\log\left(\frac{p}{|\mathrm{ne}(a)\setminus\mathrm{ne}_{G_{\Sigma}}(a)|}\right)\left(1+o(1)\right)< A_1 ,$$ for $n$ large enough since we assume that $2K(1-s)>5$.
\item If $|\mathrm{ne}(a)|> 2|\mathrm{ne}_{G_{\Sigma}}(a)|$, we also have
$$ A_2\leq 5\log\left(\frac{p}{|\mathrm{ne}(a)|}\right)\left(1+o(1)\right)< A_1\ ,$$
for $n$ large enough since we assume that $2K>5$.
\end{itemize}
It follows from Ineq. (\ref{minoration_importante_consistance}) and the definition of $A_1$ and $A_2$ that
\begin{eqnarray*}
\mathbb{P} \left[\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))<0 \right]\leq \left\{p^{3/2}e^{|\mathrm{ne}(a)\setminus \mathrm{ne}_{G_{\Sigma}}(a)|}\binom{p-|\mathrm{ne}_{G_{\Sigma}}(a)|-1}{|\mathrm{ne}(a)\setminus \mathrm{ne}_{G_{\Sigma}}(a)|}\right\}^{-1}\ ,
\end{eqnarray*}
for $n$ larger than some positive constant that may depend on $K$, $s$, but does \emph{not} depend on $\mathrm{ne}(a)$.
Applying this bound to any neighborhood $\mathrm{ne}(a)$ that strictly contains $\mathrm{ne}_{G_{\Sigma}}(a)$ yields Statement (\ref{equation_consistance_preuve3}):
\begin{eqnarray*}
\mathbb{P}\left[\widehat{\mathrm{ne}}(a) \varsupsetneq \mathrm{ne}_{G_{\Sigma}}(a) \right]\leq p^{-3/2}\ ,
\end{eqnarray*}
for $n$ large enough.
\subsubsection{Bound on $\mathbb{P}\left(\widehat{\mathrm{ne}}(a) \nsupseteq \mathrm{ne}_{G_{\Sigma}}(a) \right)$}
Again, we shall prove that $\Delta[\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a)]$ is positive for $\mathrm{ne}(a)\nsupseteq \mathrm{ne}_{G_\Sigma}(a)$ with overwhelming probability. We recall that $\theta_{\mathrm{ne}(a)}$ is the vector in $\mathbb{R}^{p}$ such that $\Sigma^{1/2}\theta_{\mathrm{ne}(a)}$ is the orthogonal projection of $\Sigma^{1/2}\theta_{a}$ onto the linear span $\left\{\Sigma^{1/2}\beta\ : \mathrm{supp}(\beta)\subset\mathrm{ne}(a)\right\}$. Moreover, $\|\Sigma^{1/2}(\theta_a-\theta_{\mathrm{ne}(a)})\|^2=\mbox{Var}(X_a|X_{\mathrm{ne}(a)})-\mbox{Var}(X_a|X_{-a})$ (see e.g. Lemma 7.1 in \cite{Verzelen08}).
Then, $\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))$ decomposes as
\begin{eqnarray*}
\lefteqn{\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a)) = \left\|\Pi_{\mathrm{ne}(a)}^{\perp} \left[\boldsymbol{\epsilon}_a+{\bf X }(\theta_a-\theta_{\mathrm{ne}(a)})\right]\right\|_n^2\left[1+\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right]} \\& & \mbox{}- \left\|\Pi_{\mathrm{ne}_{G_{\Sigma}}(a)}^{\perp} \boldsymbol{\epsilon}_a\right\|_n^2\left[1+\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\right]\ .\hspace{3cm}
\end{eqnarray*}
Let $\kappa=6/7$ and let us define $$E_{\mathrm{ne}(a)}= \kappa^{-1}\left\langle \frac{\Pi_{\mathrm{ne}(a)}^{\perp}{\bf X }(\theta_a-\theta_{\mathrm{ne}(a)})}{\|\Pi_{\mathrm{ne}(a)}^{\perp}{\bf X }(\theta_a-\theta_{\mathrm{ne}(a)})\|_n},\Pi_{\mathrm{ne}(a)}^{\perp} \boldsymbol{\epsilon}_a \right\rangle_n^2+ \|\Pi_{\mathrm{ne}(a)} \boldsymbol{\epsilon}_a\|_n^2\ .$$
We recall that $\langle.,.\rangle_n$ is the inner product associated with the norm $\|.\|_n$.
The quantity $\Delta(\mathrm{ne}(a),\mathrm{ne}_{G_{\Sigma}}(a))$ is positive if
\begin{eqnarray}
\lefteqn{(1-\kappa)\|\Pi_{\mathrm{ne}(a)}^{\perp} {\bf X }(\theta_a-\theta_{\mathrm{ne}(a)})\|_n^2 > E_{\mathrm{ne}(a)}\left[1+\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right]}\nonumber\\
& + & \|\boldsymbol{\epsilon}_a\|_n^2\left[\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}- \frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|} \right]\ .\label{minoration_importante_deuxieme_cas}
\end{eqnarray}
We respectively call $A_3$ and $A_4$ the right and the left terms of the inequality. To conclude, we need to control the deviations of these terms in order to prove that (\ref{minoration_importante_deuxieme_cas}) holds with high probability.\\
{\bf Upper Bound of $A_3$}.
On an event $\mathbb{A}$ of probability larger than $1- 2p^{-3/2}$, the random variable $\|\boldsymbol{\epsilon}_a\|_n^2$ satisfies (see Lemma 1 in \cite{Laurent00})
$$1- 2\sqrt{\frac{3\log(p)}{2n}}\leq \frac{\|\boldsymbol{\epsilon}_a\|_n^2}{\mbox{Var}(X_a|X_{-a})}\leq 1+ 2\sqrt{\frac{3\log (p)}{2n}}+3\frac{\log (p)}{n}\ .$$
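As a sanity check (not part of the proof), this two-sided bound, which corresponds to Lemma 1 in \cite{Laurent00} applied with $x=\frac{3}{2}\log(p)$, can be verified by Monte Carlo; the Python sketch below uses arbitrary values of $n$ and $p$ and only requires NumPy.
\begin{verbatim}
# Monte Carlo check of the chi-square concentration bound (illustration).
import numpy as np

rng = np.random.default_rng(0)
n, p, n_rep = 200, 50, 200_000
x = 1.5 * np.log(p)
z = rng.chisquare(n, size=n_rep) / n  # plays the role of ||eps_a||_n^2/Var
lower = 1.0 - 2.0 * np.sqrt(x / n)
upper = 1.0 + 2.0 * np.sqrt(x / n) + 2.0 * x / n
coverage = np.mean((z >= lower) & (z <= upper))
assert coverage >= 1.0 - 2.0 * p ** (-1.5)
\end{verbatim}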
Let us bound the other random variables involved in (\ref{minoration_importante_deuxieme_cas}). As explained in
the proof of Th.3.1 in \cite{Verzelen08}, the random variables $\|\Pi_{\mathrm{ne}(a)}^{\perp} {\bf X }(\theta-\theta_{\mathrm{ne}(a)})\|_n^2$ and $E_{\mathrm{ne}(a)}$ follow distributions of linear combinations of $\chi^2$ random variables.
We apply again Lemma 1 in \cite{Laurent00}. On an event $\mathbb{A}_{\mathrm{ne}(a)}$ of probability larger than $1- 2p^{-3/2}e^{-|\mathrm{ne}(a)|}\binom{p-1}{|\mathrm{ne}(a)|}^{-1}$, it holds that
\begin{eqnarray*}
\lefteqn{\frac{\|\Pi_{\mathrm{ne}(a)}^{\perp} {\bf X }(\theta_a-\theta_{\mathrm{ne}(a)})\|_n^2}{\mbox{Var}(X_a|X_{\mathrm{ne}(a)})-\mbox{Var}(X_a|X_{-a})} \geq 1 - \frac{|\mathrm{ne}(a)|}{n}}\hspace{4cm}\\ & & \mbox{}- 2\sqrt{\frac{\frac{3}{2}\log (p) + |\mathrm{ne}(a)|\left[2+\log\left(p-1\right)\right]}{n} }
\end{eqnarray*}
and
\begin{eqnarray*}
\lefteqn{\frac{E_{\mathrm{ne}(a)}}{\mbox{Var}(X_a|X_{-a})} \leq \frac{|\mathrm{ne}(a)|+\kappa^{-1}}{n}}\\ & & \mbox{}+ \frac{2}{n}\sqrt{(|\mathrm{ne}(a)|+\kappa^{-2})\left[|\mathrm{ne}(a)|\left(2+\log\left(\frac{p-1}{|\mathrm{ne}(a)|}\right)\right)+\frac{3}{2}\log(p)\right]}\\ & &\mbox{} +\frac{2\kappa^{-1}}{n}\left[|\mathrm{ne}(a)|\left(2+\log\left(\frac{p-1}{|\mathrm{ne}(a)|}\right)\right)+\frac{3}{2}\log(p)\right]\ .
\end{eqnarray*}
We derive that
\begin{eqnarray*}
\frac{E_{\mathrm{ne}(a)}}{\mbox{Var}(X_a|X_{-a})} &\leq & \frac{2\kappa^{-1}}{n}\left[|\mathrm{ne}(a)|\log\left(\frac{p-1}{|\mathrm{ne}(a)|}\right)+\frac{3}{2}\log(p)\right]\left(1+o(1)\right)\\ &+&\frac{\sqrt{6|\mathrm{ne}(a)|\log(p)}}{n}+\frac{\kappa^{-1}}{n}\ .
\end{eqnarray*}
\begin{itemize}
\item {\bf CASE 1}: {\bf $\mathrm{ne}(a)$ is non-empty}.
\begin{eqnarray*}
\frac{E_{\mathrm{ne}(a)}}{\mbox{Var}(X_a|X_{-a})} \leq \kappa^{-1}\frac{2|\mathrm{ne}(a)|\log\left(\frac{p-1}{|\mathrm{ne}(a)|}\right)+3\log(p)}{n}(1+o(1))\ .
\end{eqnarray*}
Let us upper bound the terms involving $\mathrm{pen}(|\mathrm{ne}(a)|)$ in (\ref{minoration_importante_deuxieme_cas}) on the event $\mathbb{A}\cap \mathbb{A}_{\mathrm{ne}(a)}$.
\begin{eqnarray*}
\lefteqn{\left\{E_{\mathrm{ne}(a)}\left[1+\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right]- \|\boldsymbol{\epsilon}_a\|_n^2\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right\}/\mbox{Var}(X_a|X_{-a})}\hspace{2cm}\\ & \leq & \frac{\kappa^{-1}}{n}\left(2|\mathrm{ne}(a)|\log\left(\frac{p-1}{|\mathrm{ne}(a)|}\right)+3\log(p)\right)(1+o(1))\\ & & \mbox{}- \frac{2K}{n}|\mathrm{ne}(a)|\log\left(\frac{p-1}{|\mathrm{ne}(a)|}\right)(1+o(1))\ .
\end{eqnarray*}
This last quantity is negative for $n$ large enough since $K\geq 3$. \\
\item {\bf CASE 2}: {\bf $\mathrm{ne}(a)$ is empty}. We get the upper bound
\begin{eqnarray*}
\lefteqn{E_{\mathrm{ne}(a)}\left[1+\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}\right]- \|\boldsymbol{\epsilon}_a\|_n^2\frac{\mathrm{pen}(|\mathrm{ne}(a)|)}{n-|\mathrm{ne}(a)|}}\hspace{3cm}\\ & \leq & \frac{\kappa^{-1}+ 3\log (p)}{n}\mbox{Var}(X_a|X_{-a})\\ &\leq& (3+\kappa^{-1})n^{s-1}\mbox{Var}(X_a|X_{-a})\ .
\end{eqnarray*}
Indeed, $\log(p)$ has to be smaller than $n^s$: otherwise, $\mathrm{ne}_{G_{\Sigma}}(a)$ would be empty and no $\mathrm{ne}(a)$ could satisfy $\mathrm{ne}_{G_{\Sigma}}(a)\nsubseteq \mathrm{ne}(a)$.\\
\end{itemize}
We conclude that on the event $\mathbb{A}\cap\mathbb{A}_{\mathrm{ne}(a)}$,
\begin{eqnarray*}
A_3\leq (3+\kappa^{-1})n^{s-1}\mbox{Var}(X_a|X_{-a})+ \|\boldsymbol{\epsilon}_a\|_n^2\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\ ,
\end{eqnarray*}
for $n$ large enough.
Let us upper bound the penalty term as done in the lower bound of $A_1$: $$\frac{\mathrm{pen}(|\mathrm{ne}_{G_{\Sigma}}(a)|)}{n-|\mathrm{ne}_{G_{\Sigma}}(a)|}\leq LK\frac{|\mathrm{ne}_{G_{\Sigma}}(a)|}{n}\log\left(\frac{p-1}{|\mathrm{ne}_{G_{\Sigma}}(a)|}\right)\ .$$
Since $|\mathrm{ne}_{G_{\Sigma}}(a)|$ is assumed to be smaller than $\frac{n^s}{\log(p)}$, the term $A_3$ is upper bounded as follows
\begin{eqnarray}\label{borne_c}
A_3\leq (K+1)n^{s-1}\mbox{Var}(X_a|X_{-a})O(1)\ ,
\end{eqnarray}
for $n$ large enough.\\
{\bf Lower Bound of $A_4$}.
Let us lower bound the left term $A_4$ in (\ref{minoration_importante_deuxieme_cas}) on the event $\mathbb{A}\cap\mathbb{A}_{\mathrm{ne}(a)}$.
\begin{eqnarray*}
A_4 &\geq & (1-o(1))(1-\kappa)\left[\mbox{Var}(X_a|X_{\mathrm{ne}(a)})-\mbox{Var}(X_a|X_{-a})\right]\\ & \geq & (1-o(1))(1-\kappa)\min_{b\in\Gamma\setminus\{a\}}\left(\theta_{a,b}\right)^2\min_{b,c\in\Gamma\setminus\{a\}}\frac{\mbox{Var}(X_b|X_{-b})}{\mbox{Var}(X_c|X_{-c})}\mbox{Var}(X_a|X_{-a})\\
&\geq & (1-\kappa)(1-o(1))n^{s'-1}\mbox{Var}(X_a|X_{-a})\ .
\end{eqnarray*}
Thanks to the last bound and (\ref{borne_c}) and since $s'$ is larger than $s$, $A_3< A_4$ on the event $\mathbb{A}\cap\mathbb{A}_{\mathrm{ne}(a)}$ and for $n$ large enough (not depending on $\mathrm{ne}(a)$). Hence, for $n$ large enough the inequality (\ref{minoration_importante_deuxieme_cas}) holds simultaneously for all neighborhoods $\mathrm{ne}(a)$ such that $\mathrm{ne}_{G_{\Sigma}}(a)\nsubseteq \mathrm{ne}(a)$ with probability larger than $1-2 p^{-3/2}- 2(e/(e-1))p^{-3/2}$. We conclude that
\begin{eqnarray*}
\mathbb{P}\left(\widehat{\mathrm{ne}}(a) \nsupseteq \mathrm{ne}_{G_{\Sigma}}(a) \right)\leq 6p^{-3/2}\ ,
\end{eqnarray*}
for $n$ large enough.
\subsection{Lemmas}\label{section_lemmas}
\begin{lemma}\label{lemme_minoration_penalite_seul}
For any positive integer $d\leq e^{-3/2}(p-1)$, $$\mathrm{EDKhi}\left[d+1,n-d-1,\left[\binom{p-1}{d}(d+1)^2\right]^{-1}\right] \geq d+1\ .$$
\end{lemma}
\begin{lemma}\label{lemme_penalite1}
For any positive number $x$ and any positive integers $d$ and $N$, $\mathrm{EDKhi}(d,N,x)$ is an increasing function with respect to $d$ and a decreasing function with respect to $N$.
\end{lemma}
\begin{lemma}\label{lemme_penalite2}
For any integer $d\geq 2$, the function $$\frac{\mathbb{E}\left[(X_d-x\frac{X_N}{N})_+\right]}{\mathbb{E}\left[(X_2-x)_+\right]}$$ is increasing with respect to $x$ as soon as $x\geq d$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemme_penalite_general}]
Let us write $L_1=\log\left(\binom{p-1}{d_1}\right)$ and $L_2=\log\left(\binom{p-1}{d_2}\right)$. Lemma \ref{lemme_penalite1} ensures that
\begin{eqnarray}\label{lemme_pena_minoration_2}
\mathrm{EDKhi}\left(d_1+1,n-d_1-1,e^{-L_1}\right)\geq \mathrm{EDKhi}\left(d_2+1,n-d_2-1,e^{-L_1}\right)\ .
\end{eqnarray}
Let $x_1\geq x_2$ be two positive numbers larger than some integer $d_2+1$. By Lemma \ref{lemme_penalite2}, it holds that
\begin{eqnarray*}
\frac{\mathrm{DKhi}(d_2+1,n-d_2-1,x_1)}{\mathrm{DKhi}(d_2+1,n-d_2-1,x_2)}\geq \frac{\mathbb{E}\left[(X_2-x_1)_+\right]}{\mathbb{E}\left[(X_2-x_2)_+\right]} = e^{-(x_1-x_2)/2}\ .
\end{eqnarray*}
By Lemma \ref{lemme_minoration_penalite_seul}, $\mathrm{EDKhi}(d_2+1,n-d_2-1,e^{-L_2})$ is larger than $d_2+1$. Setting $x_1=\mathrm{EDKhi}(d_2+1,n-d_2-1,e^{-L_1})$ and $x_2=\mathrm{EDKhi}(d_2+1,n-d_2-1,e^{-L_2})$, we obtain
\begin{eqnarray}\label{lemme_pena_minoration_1}
\mathrm{EDKhi}(d_2+1,n-d_2-1,e^{-L_1})- \mathrm{EDKhi}(d_2+1,n-d_2-1,e^{-L_2})\geq 2(L_1-L_2)\ ,
\end{eqnarray}
for $d_2\geq 1$. Gathering the bounds (\ref{lemme_pena_minoration_2}) and (\ref{lemme_pena_minoration_1}) with the definition (\ref{definition_penalite}) of the penalty enables us to conclude that
\begin{eqnarray}
\mathrm{pen}(d_1)-\mathrm{pen}(d_2)\geq 2K(d_1-d_2) \log\left(\frac{p-d_1}{d_1}\right)\ .
\end{eqnarray}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemme_minoration_penalite_seul}]
We write henceforth $X_{d}$ and $X'_{N}$ for two independent $\chi^2$ random variables with $d$ and $N$ degrees of freedom. Applying Jensen's inequality we obtain that for any $x>0$ and any $d\geq 2$
\begin{eqnarray*}
d\times \mathrm{DKhi}(d,N,x)&=&{\mathbb{E}}\cro{\pa{X_{d}-x\, {X'_{N}\over N}}_{+}}\\
&\geq& {\mathbb{E}}\cro{\pa{X_{d}-x}_{+}}\\
&\geq& {\mathbb{E}}\cro{\pa{X_{2}-x}_{+}}=2e^{-x/2}.
\end{eqnarray*}
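For completeness, the identity $\mathbb{E}\cro{(X_{2}-x)_{+}}=2e^{-x/2}$ used in the last line (and already used in the proof of Lemma \ref{lemme_penalite_general}) follows from a direct computation, since $X_2$ has density $u\mapsto e^{-u/2}/2$ on $(0,\infty)$:
$$\mathbb{E}\cro{(X_{2}-x)_{+}}=\int_{x}^{\infty}(u-x)\,\frac{e^{-u/2}}{2}\,du=\Big[-(u-x)e^{-u/2}\Big]_{x}^{\infty}+\int_{x}^{\infty}e^{-u/2}\,du=2e^{-x/2}\ .$$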
Setting $x=\mathrm{EDKhi}(d,N,e^{-L})$ with $L\geq0$, we obtain
$$\mathrm{EDKhi}(d,N,e^{-L})\geq 2L-2\log(d),\quad \textrm{for } d\geq 2.$$
It follows that
\begin{eqnarray*}
\mathrm{EDKhi}\left[d+1,n-d-1,\left[\binom{p-1}{d}(d+1)^2\right]^{-1}\right] & \geq &2\log\binom{p-1}{d}+2\log(d+1)\\ & \geq & 2d\log\left(\frac{p-1}{ed}\right)+2\log 2\ ,
\end{eqnarray*}
which is larger than $d+1$ if $d\leq e^{-3/2}(p-1)$, since the first term is then larger than $d$ and $2\log 2>1$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemme_penalite1}]
By definition (\ref{definition_penalite}) of the function $\mathrm{EDKhi}$, we only have to prove that $\mathrm{DKhi}(d,N,x)$ is increasing with respect to $d$ and decreasing with respect to $N$.\\
Conditioning on $X_N$ (resp. $X_d$), it suffices to prove the following two facts:
FACT 1: Let $d$ be a positive integer. For any positive number $x$,
\begin{eqnarray*}
d\mathbb{E}\left[(X_{d+1}-x)_+\right]\geq (d+1)\mathbb{E}\left[(X_{d}-x)_+\right]\ .
\end{eqnarray*}
FACT 2: Let $N$ be a positive integer. For any positive numbers $x$ and $x'$,
\begin{eqnarray*}
\mathbb{E}\left[\left(x'-x\frac{X_N}{N}\right)_+\right]\geq \mathbb{E}\left[\left(x'-x\frac{X_{N+1}}{N+1}\right)_+\right]\ .
\end{eqnarray*}
{\bf Proof of FACT 1}. Let $(Z_1,\ldots ,Z_{d+1})$ be $d+1$ independent $\chi^2$ random variables with $1$ degree of freedom. Let $Y=\sum_{i=1}^{d+1}Z_i$ and for any $i\in \{1,\ldots, d+1\}$, let $Y^{(i)}$ be the sum $Y^{(i)}= \sum_{j\neq i}Z_j$. The variable $Y$ follows a $\chi^2$ distribution with $d+1$ degrees of freedom, while the variables $Y^{(i)}$ follow $\chi^2$ distributions with $d$ degrees of freedom. It holds that
\begin{eqnarray}\label{ineq_fact1}
d\left(Y-x\right)_+\geq \sum_{i=1}^{d+1}\left(Y^{(i)}-x\right)_{+}\ .
\end{eqnarray}
Indeed, if all the variables $Y^{(i)}$ are larger than $x$, one observes that $d\left(Y-x\right)_+= d\sum_{i=1}^{d+1}Z_i-dx$ while the second term equals $d\sum_{i=1}^{d+1}Z_i-(d+1)x$. If some of the variables $Y^{(i)}$ are smaller than $x$, it is sufficient to note that the variables $Y^{(i)}$ are smaller than $Y$. We prove FACT 1 by integrating the inequality (\ref{ineq_fact1}).\\
{\bf Proof of FACT 2}.
It is sufficient to prove that for any positive number $x$,
\begin{eqnarray*}
\mathbb{E}\left[\left(x-\frac{X_N}{N}\right)_+\right]\geq \mathbb{E}\left[\left(x-\frac{X_{N+1}}{N+1}\right)_+\right]\ .
\end{eqnarray*}
Observe that $\mathbb{E}\left[\left(x-\frac{X_N}{N}\right)_+\right]= (x-1)+ \mathbb{E}\left[\left(\frac{X_N}{N}-x\right)_+\right]$. Hence, it remains to prove that
\begin{eqnarray}\label{ineq_fact2}
(N+1)\mathbb{E}\left[\left(X_N-Nx\right)_+\right]\geq N\mathbb{E}\left[\left(X_{N+1}-(N+1)x\right)_+\right]\ .
\end{eqnarray}
As in the proof of FACT 1, let $(Z_1,\ldots ,Z_{N+1})$ be $N+1$ independent $\chi^2$ random variables with $1$ degree of freedom. Let $Y=\sum_{i=1}^{N+1}Z_i$ and for any $i\in \{1,\ldots, N+1\}$, let $Y^{(i)}$ be the sum $Y^{(i)}= \sum_{j\neq i}Z_j$. It holds that
\begin{eqnarray}\label{ineq2_fact2}
\sum_{i=1}^{N+1}\left(Y^{(i)}-Nx\right)_{+}\geq N \left(Y-(N+1)x\right)_{+} \ .
\end{eqnarray}
This bound is trivial if $Y\leq (N+1)x$. If $Y$ is larger than $(N+1)x$, then the second term equals $\sum_{i=1}^{N+1}(Y^{(i)}- Nx)$, which is clearly smaller than the first term. Integrating the bound (\ref{ineq2_fact2}) enables us to prove (\ref{ineq_fact2}) and then FACT 2.
\end{proof}
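As a purely numerical illustration (not part of the proof), both facts can be checked in closed form with SciPy, using the standard identity $\mathbb{E}[X_d\mathbf{1}_{X_d> t}]=d\,\mathbb{P}(X_{d+2}> t)$ for truncated $\chi^2$ moments; the grids below are arbitrary.
\begin{verbatim}
# Closed-form check of FACT 1 and FACT 2 (illustration only).
import numpy as np
from scipy.stats import chi2

def pos_part_above(d, x):
    # E[(X_d - x)_+] for X_d ~ chi2(d)
    return d * chi2.sf(x, d + 2) - x * chi2.sf(x, d)

def pos_part_below(c, x, N):
    # E[(c - x X_N / N)_+]; the variable is positive iff X_N < N c / x
    t = N * c / x
    return c * chi2.cdf(t, N) - x * chi2.cdf(t, N + 2)

xs = np.linspace(0.1, 30.0, 50)
for d in (1, 2, 5, 10):      # FACT 1
    assert np.all(d * pos_part_above(d + 1, xs)
                  >= (d + 1) * pos_part_above(d, xs) - 1e-10)
for N in (1, 5, 50):         # FACT 2
    for c in (0.5, 1.0, 3.0):
        assert np.all(pos_part_below(c, xs, N)
                      >= pos_part_below(c, xs, N + 1) - 1e-10)
\end{verbatim}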
\begin{proof}[Proof of Lemma \ref{lemme_penalite2}]
We show that the derivative of the function $\mathbb{E}\left[(X_d-x\frac{X_N}{N})_+\right]/\mathbb{E}\left[(X_2-x)_+\right]$ is non-negative for any $x\geq d$. Thus, we have to prove the following inequality:
\begin{eqnarray*}
\frac{\mathbb{E}\left[\left(X_d- x \frac{X_{N}}{N}\right)_+\right]}{\mathbb{E}\left[\frac{X_N}{N}\mathbf{1}_{X_d\geq x\frac{X_{N}}{N}}\right]} \geq \frac{\mathbb{E}\left[\left(X_2- x \right)_+\right]}{\mathbb{P}(X_2\geq x)}=2\ .
\end{eqnarray*}
Hence, we aim at proving that the function
\begin{eqnarray*}
\Psi(x)= \mathbb{E}\left[\left(X_d- x \frac{X_{N}}{N}\right)_+\right]- 2 \mathbb{E}\left[\frac{X_N}{N}\mathbf{1}_{X_d\geq x\frac{X_{N}}{N}}\right]
\end{eqnarray*}
is positive. Observe that $\Psi(x)$ converges to $0$ when $x$ goes to infinity. Let us denote by $f_{X_d}$ and $f_{\frac{X_N}{N}}$ the densities of $X_d$ and $X_N/N$, respectively.
\begin{eqnarray*}
\Psi'(x)=\int_{t=0}^{\infty}t\left[2tf_{X_d}(xt)-\int_{u=xt}^{\infty}f_{X_d}(u)du\right]f_{\frac{X_N}{N}}(t)dt\ .
\end{eqnarray*}
Integrating the density of a $\chi^2$ distribution by parts, we get the lower bound
\begin{eqnarray*}
\int_{u=xt}^{\infty}f_{X_d}(u)du\geq \frac{(1/2)^{d/2}}{\Gamma(d/2)}2(xt)^{d/2-1}e^{-xt/2}\ .
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\Psi'(x) &\leq &\frac{(1/2)^{d/2-1}}{\Gamma(d/2)}\int_{t=0}^{\infty}t(xt)^{d/2-1}e^{-xt/2}(t-1)f_{\frac{X_N}{N}}(t)dt\\
& \leq& \frac{(1/2)^{(N+d)/2-1}}{\Gamma(d/2)\Gamma(N/2)}N^{N/2}x^{d/2-1}\int_{t=0}^{\infty}t^{d/2}(t-1)t^{N/2-1}e^{-(x+N)t/2}dt\\
&\leq& \frac{2N^{N/2}x^{d/2-1}}{\Gamma(d/2)\Gamma(N/2)(x+N)^{(d+N)/2}}\int_{t=0}^{\infty}t^{(d+N)/2-1}\left(\frac{2t}{x+N}-1\right)e^{-t}dt\\
&\leq &\frac{2N^{N/2}x^{d/2-1}}{\Gamma(d/2)\Gamma(N/2)(x+N)^{(d+N)/2}} \left[\frac{2\Gamma\left(\frac{d+N}{2}+1\right)}{x+N}- \Gamma\left(\frac{d+N}{2}\right)\right]\\
&\leq & \frac{2N^{N/2}x^{d/2-1}\Gamma\left(\frac{d+N}{2}\right)}{\Gamma(d/2)\Gamma(N/2)(x+N)^{(d+N)/2}}\left[\frac{d+N}{x+N}-1\right]\leq 0\ ,
\end{eqnarray*}
since $x\geq d$. Hence, $\Psi$ is decreasing to $0$ for $x$ larger than $d$ and it is therefore non-negative.
\end{proof}
\addcontentsline{toc}{section}{References}
\bibliographystyle{acmtrans-ims}
\section{Introduction}
The introduction of renormalization group (RG) theory in statistical physics \cite{xxx} has greatly deepened our understanding of phase transitions.
Our understanding of RG, however, is far from complete.
The actual implementation of the RG procedure remains a highly nontrivial task.
The critical manifold of a lattice model is defined as the set of coupling constants for which the long range physics of the system is described by a unique underlying scale-invariant field theory.
However, the same lattice model may admit different critical behaviors described by different field theories, upon changing the coupling constants.
This is the case, for instance, in the tricritical Ising model to be discussed later.
Thus, the critical manifold is always defined with respect to the field theory underlying the lattice model.
It could be defined in any space of coupling constants associated with a finite number of coupling terms, with co-dimension in that space equal to the number of relevant operators of the system.
General RG theory requires that the RG flow should go into a unique fixed-point Hamiltonian, if the starting point of the flow is on the critical manifold.
There are various ``natural'' RG procedures where different points on a critical manifold do not go to the same critical fixed-point, the most well-known example being the decimation rule in dimension higher than one \cite{cardy}.
By contrast, when an RG procedure satisfies this requirement, the attractive basin of the critical fixed-point is the entire critical manifold, and a computational scheme should exist, at least in principle, to identify the critical manifold.
Whether or not this approach can be successfully pursued would be a stringent test of the RG procedure under consideration.
Conversely, the knowledge of the critical manifold provides a straightforward way to check the validity of any RG procedure: one could simply simulate the RG flow starting from two different points in the critical manifold and verify that they eventually land on the same fixed-point.
This consideration alone should be enough motivation for developing a method to compute the critical manifold.
Another issue for which the knowledge of the critical manifold would be of interest is the study of the geometry of the coupling constant space, i.e. the parameter manifold of a classical or quantum many-body system.
Ways to define a Riemannian metric on the parameter manifold were proposed long ago for both classical \cite{riemannian_classical} and quantum systems \cite{riemannian_quantum}.
Recently, there have been developments in understanding the significance of the geometry of the parameter manifold for both classical and quantum systems \cite{widom_line, geometric_exponent, information_geometry, geometric_tensor,it_dg_quantum}.
One would expect that knowledge of the critical manifold would fit naturally into such developments. We do not pursue this issue further here but leave it to future research.
In this paper, we present a method to determine the tangent space and curvature of the critical manifold at the critical points of a system with Variational Monte Carlo Renormalization Group (VMCRG) \cite{vmcrg}.
We will show that, unlike the computation of the critical exponents with Monte Carlo Renormalization Group \cite{mcrg} or VMCRG, the determination of the critical manifold tangent space (CMTS) and curvature does not suffer from truncation error, no matter how few renormalized coupling terms are used.
We discuss first the case where there are no marginal operators along the RG flow, and then the case where there are.
The examples that we consider in this paper are all classical, but the method can be extended to quantum systems whenever a sign-free path integral representation of the quantum system is available.
\section{Monte Carlo Renormalization Group and the Critical Manifold}
\subsection{Coarse-graining and Renormalized Coupling Constants}
For notational simplicity, we use the terminology for classical magnetic spins on a lattice in the following discussion, although the formalism applies in general.
Consider a statistical mechanical system in $d$ spatial dimensions with spins $\bm\sigma$ and Hamiltonian $H^{(0)}(\bm\sigma)$,
\begin{equation}
\label{eq:hamiltonian}
H^{(0)}(\bm\sigma) = \sum_{\beta} K^{(0)}_\beta S_\beta(\bm\sigma)
\end{equation}
where $S_\beta(\bm\sigma)$ are the coupling terms of the system, such as nearest neighbor spin products, next nearest neighbor spin products, etc., and $\vec K^{(0)} = \{K^{(0)}_\beta\}$ are the corresponding coupling constants.
Here we call the original Hamiltonian before any RG transformation the zeroth level renormalized Hamiltonian, hence the notation $(0)$ in the superscript.
The critical manifold is then defined in the space of $K^{(0)}_\beta$ corresponding to a finite set of couplings $S_\beta(\bm\sigma)$.
In a real-space RG calculation, one defines coarse-grained spins $\bm\sigma'$ in the renormalized system with a conditional probability $T(\bm\sigma' |\bm\sigma)$ that effects a scale transformation with scale factor $b$.
$T(\bm\sigma'|\bm\sigma)$ is the probability of $\bm\sigma'$ given spin configuration $\bm\sigma$ in the original system.
The majority rule block spin in the Ising model proposed by Kadanoff \cite{block} is one example of the coarse-grained variables.
$T(\bm\sigma'|\bm\sigma)$ can be iterated $n$ times to define the $n$th level coarse-graining $T^{(n)}(\bm\mu|\bm\sigma)$ realizing a scale transformation with scale factor $b^n$:
\begin{equation}
T^{(n)}(\bm\mu|\bm\sigma) = \sum_{\bm\sigma^{(n-1)}}\cdots\sum_{\bm\sigma^{(1)}} T(\bm\mu|\bm\sigma^{(n-1)}) \cdots T(\bm\sigma^{(1)}|\bm\sigma)
\end{equation}
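As an illustration, a minimal implementation of the iterated $b=2$ majority rule for Ising spins $\sigma=\pm 1$ on an $L\times L$ square lattice could look as follows; this sketch is ours (the function names and array conventions are not fixed by the formalism), with ties broken by a random pick, as in the majority rule used later in this paper.
\begin{verbatim}
# Minimal sketch of iterated b=2 majority-rule blocking (illustration).
import numpy as np

rng = np.random.default_rng(0)

def block_once(sigma):
    # One majority-rule step with scale factor b = 2.
    L = sigma.shape[0]
    blocks = sigma.reshape(L // 2, 2, L // 2, 2).sum(axis=(1, 3))
    mu = np.sign(blocks)
    ties = blocks == 0
    mu[ties] = rng.choice([-1, 1], size=int(ties.sum()))  # random pick
    return mu

def block_n_times(sigma, n):
    # Realizes the scale transformation with factor b**n.
    for _ in range(n):
        sigma = block_once(sigma)
    return sigma
\end{verbatim}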
$T^{(n)}$ defines the $n$th level renormalized Hamiltonian $H^{(n)}(\bm\mu)$ up to a constant $g(\vec K^{(0)})$ independent of $\bm\mu$ \cite{nauenberg}:
\begin{equation}
\label{eq:rg}
\begin{split}
H^{(n)}(\bm\mu) &\equiv -\ln \sum_{\bm\sigma} T^{(n)}(\bm\mu| \bm\sigma) e^{-H^{(0)}(\bm\sigma)} + g(\vec K^{(0)})
\\
&= \sum_{\alpha} K^{(n)}_\alpha S_\alpha(\bm \mu) + g(\vec K^{(0)})
\end{split}
\end{equation}
where $\{K^{(n)}_\alpha\}$ are the $n$th level renormalized coupling constants associated with the coupling terms $S_\alpha(\bm\mu)$ defined for the $n$th level coarse-grained spins.
Modulo the constant coupling term, $T^{(n)}(\bm\mu|\bm\sigma)$ defines $H^{(n)}(\bm\mu)$ uniquely.
$H^{(n)}$ renormalized from different starting Hamiltonians $H^{(0)}$ will generally be different.
However, if no marginal operators appear in the RG transformation, the renormalized Hamiltonians from different initial points on the critical manifold will converge to the same critical fixed-point Hamiltonian, $H^*(\bm\mu)$, as $n$ goes to infinity.
To probe $H^*(\bm\mu)$ in a Monte Carlo (MC) simulation, one increases the iteration level $n$ and the system size $L$, until the renormalized Hamiltonian $H^{(n)}$ becomes invariant with $n$ to the desired accuracy and the $L$ dependence becomes negligible.
It is generally impossible to determine all of the coupling constants of $H^{(n)}(\bm\mu)$ because their number increases combinatorially with the lattice size.
In practice, one adopts some truncation scheme and approximates $H^{(n)}$ with a finite number of coupling terms $\{S_\alpha(\bm\mu)\}$ with coupling constants $K_\alpha^{(n)}$:
\begin{equation}
\label{eq:limit}
H^{(n)}(\bm\mu) \approx \sum_{\alpha} K^{(n)}_\alpha S_\alpha(\bm\mu)
\end{equation}
\subsection{Critical Manifold Tangent Space in the Absence of Marginal Operators}
To compute the CMTS, let us suppose that $K^{(0)}_\beta$ and $K^{(0)}_\beta + \delta K_\beta^{(0)}$ belong to the critical manifold and apply the RG procedure starting from these two points.
As the difference along the irrelevant directions becomes exponentially suppressed with increasing $n$, the corresponding two renormalized Hamiltonians will tend to the same Hamiltonian $H^{(n)}$ in the absence of RG marginal operators.
In particular, the truncated coupling constants, $K_{\alpha, \text{truncate}}^{(n)}$ and $K_{\alpha,\text{truncate}}^{(n)} + \delta K_{\alpha,\text{truncate}}^{(n)}$, renormalized respectively from $K_\beta^{(0)}$ and $K_\beta^{(0)} + \delta K_\beta^{(0)}$, will be equal within deviations exponentially small with $n$, because they are the truncation approximation for two Hamiltonians, $H^{(n)}$ and $H^{(n)} + \delta H^{(n)}$, whose difference is exponentially small in $n$.
Thus, the spanning set of the CMTS, $\{\delta K_\beta^{(0)}\}$, satisfies the following equation for sufficiently large $n$,
\begin{equation}
K^{(n)}_{\alpha,\text{truncate}} + \sum_{\beta} \frac{\partial K_{\alpha,\text{truncate}}^{(n)}}{\partial K^{(0)}_\beta} \delta K^{(0)}_\beta =
K^{(n)}_{\alpha,\text{truncate}}
\label{eq:cond1}
\end{equation}
for every $\alpha$.
That is, the CMTS $\{\delta K^{(0)}_\beta\}$ is the kernel of the $n$th level RG Jacobian:
\begin{equation}
\label{eq:A}
\mathcal A^{(n,0)}_{\alpha\beta} \equiv \frac{\partial K_{\alpha,\text{truncate}}^{(n)}}{\partial K^{(0)}_\beta}
\end{equation}
for any well-defined truncation scheme.
In the following, we will use $K_\alpha^{(n)}$ to denote the truncated coupling constants.
As shown in \cite{vmcrg}, VMCRG provides an efficient way to compute the renormalized constants and the RG Jacobian matrix with MC under a given truncation scheme.
It introduces a bias potential $V(\bm\mu)$ of the coarse-grained variables, expanded in a finite set of renormalized couplings $S_\alpha(\bm\mu)$ with variational parameters $J_\alpha$:
\begin{equation}
V_{\vec J}(\bm\mu) = \sum_\alpha J_\alpha S_\alpha(\bm\mu),
\end{equation}
and a variational function of $\vec J = \{J_\alpha\}$:
\begin{equation}
\Omega(\vec J) = \ln \sum_{\bm\mu} e^{-(H^{(n)}(\bm\mu) + V_{\vec J}(\bm\mu))} + \sum_{\bm\mu} V_{\vec J}(\bm\mu) p_t(\bm\mu)
\end{equation}
where $p_t(\bm\mu)$ is a preset target probability distribution, which will be taken as the uniform distribution in the following.
As proved in \cite{varyfes}, $\Omega$ is convex in each $J_\beta$, and, if one excludes the constant coupling term, has a unique minimizer, $\vec J_{\min}$, which can be found with a stochastic gradient descent algorithm using the gradient and the Hessian of $\Omega(\vec J)$ \cite{vmcrg}:
\begin{equation}
\label{eq:gradient}
\frac{\partial \Omega(\vec J)}{\partial J_\alpha} = - \braket{S_\alpha(\bm \mu)}_{V_{\vec J}} + \braket{S_\alpha(\bm \mu)}_{p_t}
\end{equation}
\begin{equation}
\label{eq:hessian}
\frac{\partial^2 \Omega(\vec J)}{\partial J_\alpha \partial J_\beta} = \braket{S_\alpha(\bm \mu) S_\beta(\bm \mu)}_{V_{\vec J}} - \braket{S_\alpha(\bm \mu)}_{V_{\vec J}}\braket{S_\beta(\bm \mu)}_{V_{\vec J}}
\end{equation}
Here $\braket{\cdot}_{V_{\vec J}}$ is the biased ensemble average under $V_{\vec J}$ and $\braket{\cdot}_{p_t}$ is the ensemble average under the target probability distribution $p_t$.
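As an illustration, the minimization can be organized as a damped Newton iteration built from Eq. \ref{eq:gradient} and Eq. \ref{eq:hessian}; in the sketch below the helper \texttt{sample\_couplings}, which returns Monte Carlo samples of the couplings $S_\alpha(\bm\mu)$ in the biased ensemble, is hypothetical and stands for whatever sampler is available. For the uniform target distribution, $\braket{S_\alpha}_{p_t}=0$ for all products of distinct spins.
\begin{verbatim}
# Schematic Newton-type minimization of Omega(J) (illustration only).
import numpy as np

def minimize_omega(sample_couplings, target_means, J0,
                   n_samples=10_000, n_iter=50, damping=0.5, tol=1e-4):
    J = np.array(J0, dtype=float)
    for _ in range(n_iter):
        S = sample_couplings(J, n_samples)      # (n_samples, n_alpha)
        grad = -S.mean(axis=0) + target_means   # gradient of Omega
        hess = np.cov(S, rowvar=False)          # Hessian of Omega
        J -= damping * np.linalg.solve(hess, grad)
        if np.linalg.norm(grad) < tol:
            break
    return J   # J_min; for a complete coupling set, K^(n) = -J_min
\end{verbatim}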
The minimizer $\vec J_\min$ then satisfies the minimizing condition: for every renormalized coupling $S_\gamma(\bm\mu)$,
\begin{equation}
\label{eq:min_condition}
\braket{S_\gamma(\bm\mu)}_{V_{\min}} = \braket{S_\gamma(\bm\mu)}_{p_t}
\end{equation}
If the set of the coupling terms $S_\alpha$ is complete, $V_{\min}(\bm\mu) = \sum_{\alpha} J_{\alpha,\min} S_\alpha(\bm\mu) = -H^{(n)}(\bm\mu)$, and we identify for each $\alpha$,
\begin{equation}
\label{eq:rc}
K^{(n)}_\alpha = -J_{\alpha,\min}
\end{equation}
Because the set of $S_\alpha(\bm\mu)$ is not complete, a truncation error in computing $K^{(n)}_\alpha$ is incurred.
However, because the minimizer of $\Omega$ is unique, the truncation scheme is well-defined.
Within VMCRG, $\mathcal A^{(n,0)}_{\alpha\beta}$ can be obtained by expanding Eq. \ref{eq:min_condition} to linear order in $\delta K^{(0)}_\beta$ and $\delta K^{(n)}_\alpha$.
The result \cite{vmcrg} is that, for a given $\beta$, every $S_\gamma(\bm\mu)$ must satisfy,
\begin{equation}
\label{eq:linear}
\sum_\alpha \bbraket{S_\gamma(\bm\mu)}{S_\alpha(\bm\mu)}_{V} \frac{\partial K_\alpha^{(n)}}{\partial K_\beta^{(0)}} = \bbraket{S_\gamma(\bm\mu)}{S_\beta(\bm\sigma)}_{V}
\end{equation}
where $\bbraket{X}{Y}_V \equiv \braket{XY}_V - \braket{X}_V\braket{Y}_V$ is the connected correlation function of the observables $X$ and $Y$ in the biased ensemble with the potential $V_\min(\bm\mu)$.
Thus, for any $\beta$, the Jacobian matrix element $\mathcal A^{(n, 0)}_{\alpha\beta} = \frac{\partial K_\alpha^{(n)}}{\partial K_\beta^{(0)}}$, viewed as a column vector indexed by $\alpha$, can be obtained from Eq. \ref{eq:linear} by matrix inversion.
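Schematically, once the couplings have been recorded along a single biased trajectory, the Jacobian and its numerical kernel can be extracted as follows; the sketch is ours and assumes the samples are supplied as plain arrays.
\begin{verbatim}
# Jacobian A^(n,0) from Eq. (linear) and its kernel (illustration only).
import numpy as np

def cmts_from_samples(S_mu, S_sigma, tol=1e-2):
    # S_mu:    (n_samples, n_alpha) couplings S_alpha(mu)
    # S_sigma: (n_samples, n_beta)  couplings S_beta(sigma)
    dmu = S_mu - S_mu.mean(axis=0)
    dsig = S_sigma - S_sigma.mean(axis=0)
    C_mumu = dmu.T @ dmu / len(S_mu)      # <<S_gamma; S_alpha>>_V
    C_musig = dmu.T @ dsig / len(S_mu)    # <<S_gamma; S_beta>>_V
    A = np.linalg.solve(C_mumu, C_musig)  # column beta: dK^(n)/dK^(0)_beta
    U, s, Vt = np.linalg.svd(A)
    s_full = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])
    return A, Vt[s_full < tol * s[0]]     # rows span the numerical kernel
\end{verbatim}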
We also note that the method described above works for any target distribution $p_t(\bm\mu)$ in VMCRG.
A different $p_t(\bm\mu)$ will result in a different bias potential $V_{\min}(\bm\mu)$ to be used in the sampling of the matrix $\mathcal A^{(n,0)}$.
We use the uniform distribution here because then $V_{\min}(\bm\mu)$ acts to eliminate the long-range correlation in a critical system and the resultant ensemble for the sampling of $\mathcal A^{(n,0)}$ benefits from a much faster MC relaxation \cite{vmcrg}.
However, one can impose any arbitrary bias potential of the coarse-grained variables, $V(\bm\mu)$, and adopt the corresponding biased distribution as the target distribution.
All the steps in the above derivation follow, and the CMTS can then be computed in the biased ensemble with the arbitrary $V(\bm\mu)$.
In particular, if one insists on using the original ensemble with no bias potential, one only needs to set the target distribution to be the original unbiased distribution, in which case $V_{\min}$ necessarily vanishes and $\mathcal A^{(n,0)}$ is sampled in the unbiased ensemble.
\subsection{Critical Manifold Tangent Space in the Presence of Marginal Operators}
When there are marginal operators in the RG transformation, two different points on the critical manifold will converge to different fixed-point Hamiltonians.
However, starting from any point on the critical manifold, at sufficiently large $n$, $H^{(n)}$ will be equal to $H^{(n+1)}$, and so will the truncated renormalized constants $K_\alpha^{(n)}$ be equal to $K_\alpha^{(n+1)}$.
Now suppose that both $K^{(0)}_\beta$ and $K^{(0)}_\beta + \delta K_\beta^{(0)}$ are on the critical manifold, respectively giving rise to the truncated renormalized constants $K^{(n)}_\alpha$ and $K^{(n)}_\alpha + \delta K^{(n)}_\alpha$.
Then, the spanning set of CMTS, $\{\delta K_\beta^{(0)}\}$, instead of Eq. \ref{eq:cond1}, satisfies the following condition,
\begin{equation}
K^{(n)}_\alpha + \sum_{\beta} \frac{\partial K_\alpha^{(n)}}{\partial K^{(0)}_\beta} \delta K^{(0)}_\beta =
K^{(n + 1)}_\alpha + \sum_{\beta} \frac{\partial K_\alpha^{(n + 1)}}{\partial K^{(0)}_\beta} \delta K^{(0)}_\beta
\end{equation}
for every $\alpha$.
But $K_\alpha^{(n)}$ and $K^{(n+1)}_\alpha$ are already equal up to an exponentially small difference, because they are renormalized from the same point on the critical manifold.
Thus, when marginal operators appear in the RG transformation, the CMTS is the kernel of the matrix,
\begin{equation}
\mathcal A^{(n+1,0)}_{\alpha\beta} - \mathcal A^{(n,0)}_{\alpha\beta}
\label{eq:marginal_A}
\end{equation}
\subsection{The Normal Vectors to Critical Manifold Tangent Space}
Because of the spin-flip symmetry, the renormalization of the even operators and that of the odd operators are decoupled in the examples we consider here, so the two sectors can be treated separately.
In the Ising models that we discuss later, the co-dimension of the critical manifold is one; the tangent space is thus a hyperplane, and the row vectors of $\mathcal A^{(n,0)}$ or $\mathcal A^{(n+1,0)} - \mathcal A^{(n,0)}$, for systems without or with marginal operators respectively, are orthogonal to this hyperplane.
This means that the row vectors of $\mathcal{A}^{(n, 0)}$ or $\mathcal A^{(n+1,0)} - \mathcal A^{(n,0)}$ are all normal vectors to the CMTS and are parallel to one another.
Thus, the $\mathcal P$ matrix defined as
\begin{equation}
\mathcal P_{\alpha\beta} = \frac{\mathcal A^{(n,0)}_{\alpha\beta}}{\mathcal A^{(n,0)}_{\alpha 1}} \text{ or } \frac{\mathcal A^{(n+1,0)}_{\alpha\beta} - \mathcal A^{(n, 0)}_{\alpha\beta}}{\mathcal A^{(n+1,0)}_{\alpha 1} - \mathcal A^{(n, 0)}_{\alpha 1}},
\end{equation}
that contains the normalized row vectors of $\mathcal A^{(n,0)}$ or $\mathcal A^{(n+1,0)} - \mathcal A^{(n,0)}$, should have identical rows.
In the tricritical Ising model that we also discuss, the critical manifold in the even subspace has co-dimension two \cite{tri_mcrg}.
In this case, we cannot expect all the rows of $\mathcal P_{\alpha\beta}$ to be equal.
Instead, the rows should form a two-dimensional vector space to which the CMTS is orthogonal.
This outcome can be checked, for example, by verifying that all the row vectors of $\mathcal P_{\alpha\beta}$ lie in the vector space spanned by its first two rows.
If such consistency checks are satisfied, it is a testament to the validity of RG theory, which predicts that a critical fixed-point Hamiltonian exists and that the co-dimension of the critical manifold has precisely the assumed value for the models considered in this paper.
In general, the CMTS computed from different renormalized couplings will have different statistical uncertainty because the sampling noise differs for different correlation functions in an MC simulation.
One should, thus, trust the result with the least uncertainty and use the values computed from other renormalized constants as a consistency check.
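As a minimal illustration (Python/NumPy; the helper and its interface are ours), this weighting and cross-checking can be automated by comparing each row of $\mathcal P$ against the statistically most precise one:
\begin{verbatim}
import numpy as np

def compare_rows(P, err):
    # P[alpha, beta]: normal-vector estimates from the
    # different renormalized couplings alpha; err: their
    # standard errors (same shape). Returns the index of
    # the least-noisy row and z-scores of the deviations.
    best = np.argmin((err ** 2).sum(axis=1))
    z = (P - P[best]) / np.sqrt(err ** 2 + err[best] ** 2)
    return best, z
\end{verbatim}
Rows agreeing with the reference row within a few standard errors ($|z| \lesssim 2$) support a co-dimension-one CMTS.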
\section{Numerical Results for CMTS}
\subsection{2D Isotropic Ising model}
Consider the isotropic Ising model on a 2D square lattice with Hamiltonian $H(\bm\sigma)$
\begin{equation}
H(\bm\sigma) = -K^{(0)}_{\text{nn}}\sum_{\braket{i,j}}\sigma_i \sigma_j - K^{(0)}_{\text{nnn}} \sum_{[i,j]}\sigma_i \sigma_j
\end{equation}
where $\braket{i,j}$ denotes the nearest neighbor pairs and $[i,j]$ the next nearest neighbor pairs.
$K^{(0)}_\text{nn}$ and $K^{(0)}_\text{nnn}$ are the corresponding coupling constants.
This model is analytically solvable when $K^{(0)}_\text{nnn} = 0$ and is critical at the Onsager point with $K^{(0)}_\text{nn} = 0.4407...$ \cite{onsager}.
Four critical points are first located with VMCRG in the coupling space of $\{K^{(0)}_\text{nn}, K^{(0)}_\text{nnn}\}$.
This task can be achieved by fixing $K^{(0)}_\text{nnn}$ and varying $K^{(0)}_\text{nn}$ while monitoring how the corresponding renormalized coupling constant $K^{(n)}_\text{nn}$ varies with $n$, the RG iteration index.
The largest value of the original coupling constant, $K^{(0)}_{\text{nn},1}$, for which $K^{(n)}_\text{nn}$ decreases with $n$, and the smallest value,
$K^{(0)}_{\text{nn},2}$, for which $K^{(n)}_\text{nn}$ increases with $n$, define the best estimate, within statistical errors, of the interval $[K^{(0)}_{\text{nn},1},K^{(0)}_{\text{nn},2}]$ of location of the critical coupling, $K^{(0)}_{\text{nn},c}$.
We notice that the calculated renormalized constants are truncated and we assume here that the truncated $K^{(n)}_\text{nn}$ increases or decreases monotonically with the exact $K^{(n)}_\text{nn}$.
This assumption is very natural and does not seem to be violated in the present study.
Alternatively, the same procedure can be performed by fixing $K_\text{nn}$ and varying $K_\text{nnn}$.
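A minimal sketch of this bracketing procedure (Python; \texttt{run\_vmcrg} is a hypothetical wrapper returning the truncated $K^{(k)}_\text{nn}$ for $k = 1,\dots,n$ at the given bare couplings) is:
\begin{verbatim}
def flow_increases(K_nn, K_nnn, n=4):
    K_ren = run_vmcrg(K_nn, K_nnn, n)  # hypothetical wrapper
    return K_ren[-1] > K_ren[0]        # grows along the flow?

def bracket_critical(K_lo, K_hi, K_nnn, iters=20):
    # standard bisection on the direction of the RG flow
    for _ in range(iters):
        K_mid = 0.5 * (K_lo + K_hi)
        if flow_increases(K_mid, K_nnn):
            K_hi = K_mid  # flow grows: K_mid above critical
        else:
            K_lo = K_mid  # flow decays: K_mid below critical
    return K_lo, K_hi     # interval containing K_nn_c
\end{verbatim}
In practice the resolution of the interval is limited by the statistical noise of the sampled $K^{(n)}_\text{nn}$, not by the number of bisection steps.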
In the following VMCRG calculations, we use $n = 4, L = 256$, and the $b=2$ majority rule with a random pick on tie.
We use three renormalized couplings: the nearest-neighbor product $K^{(n)}_\text{nn}$, the next-nearest-neighbor product $K^{(n)}_\text{nnn}$, and the smallest-plaquette product $K^{(n)}_\square$.
The model is known to have no marginal operators.
The four critical points shown in Table \ref{table:ising_pab} all belong to the same critical phase, as they all flow into the same truncated fixed-point renormalized Hamiltonian.
The CMTSs are determined at these critical points in a four-dimensional coupling space spanned by $K^{(0)}_\text{nn}$, $K^{(0)}_\text{nnn}$, $K^{(0)}_\square$, and the third-nearest-neighbor product, $K^{(0)}_\text{nnnn}$.
$\mathcal P_{\alpha\beta}$ is shown in Table \ref{table:ising_pab}.
In addition, we also show the CMTS at the Onsager point, which is analytically solvable \cite{isingcmts}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.8em}
\centering
\begin{tabular}{lllll}
\hline
\hline
$K^{(0)}_\text{nn}$ & $K^{(0)}_\text{nnn}$ & $\mathcal P_{\alpha 2}$ & $\mathcal P_{\alpha 3}$ & $\mathcal P_{\alpha 4}$\\
\hline
0.4407 &0& 1.4134(3) & 0.5135(3) & 1.7963(5) \\
& & 1.4146(7) & 0.5134(7) & 1.799(2) \\
& & 1.413(3) & 0.511(3) & 1.794(7) \\
Exact & & 1.4142 & 0.5139 & 1.8006 \\
\hline
0.37 &0.0509& 1.3717(4) & 0.5242(3) & 1.7664(8) \\
& &1.375(1) & 0.5243(7) & 1.773(2) \\
& &1.372(4) & 0.527(3) & 1.773(6) \\
\hline
0.228 &0.1612& 1.2529(7) & 0.5303(4) & 1.6545(8) \\
& &1.254(1) & 0.5318(8) & 1.659(2) \\
& &1.252(5) & 0.535(3) & 1.65(1) \\
\hline
0.5 & -0.0416& 1.4441(4) & 0.5019(5) & 1.816(1) \\
& &1.444(2) & 0.503(2) & 1.818(4) \\
& &1.441(7) & 0.499(6) & 1.80(1) \\
\hline
\hline
\end{tabular}
\caption{$\mathcal P_{\alpha\beta}$ for the isotropic Ising model.
$\alpha$ indexes rows corresponding to the three renormalized constants: $\text{nn}, \text{nnn},$ and $\square$.
The fourth row of the table at the Onsager point shows the exact values.
$\beta = 2,3,$ and $4$ respectively indexes the component of the normal vector to CMTS corresponding to coupling terms $\text{nnn}, \square,$ and $\text{nnnn}$.
$\beta = 1$ corresponds to the $\text{nn}$ coupling term and $\mathcal P_{\alpha 1}$ is always 1 by definition.
The simulations were performed on 16 cores independently, each of which ran $3\times 10^6$ Metropolis MC sweeps.
The standard errors are cited as the statistical uncertainty.
}
\label{table:ising_pab}
\end{table}
The CMTS can also be computed in the odd coupling subspace, as we show here for the Onsager point.
In this calculation, we take $n = 5$, $L = 256$, and again the $b=2$ majority rule for coarse-graining.
The CMTS in a space of four odd couplings, listed in the legend of Table \ref{table:pab_odd}, is calculated from the same four renormalized couplings.
The result is shown in Table \ref{table:pab_odd}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.8em}
\centering
\begin{tabular}{lllll}
\hline
\hline
$K^{(0)}_\text{nn}$ & $K^{(0)}_\text{nnn}$ & $\mathcal P_{\alpha 2}$ & $\mathcal P_{\alpha 3}$ & $\mathcal P_{\alpha 4}$\\
\hline
0.4407 &0& 3.31248(8) & 1.65629(4) & 1.49852(6) \\
& & 3.296(2) & 1.649(4) & 1.479(2) \\
& & 3.315(3) & 1.658(2) & 1.503(2) \\
& & 3.32(5) & 1.68(4) & 1.51(3) \\
\hline
\hline
\end{tabular}
\caption{$\mathcal P_{\alpha\beta}$ for the odd coupling space of the isotropic Ising model.
$\alpha$ indexes rows corresponding to the four renormalized odd spin products: (0, 0), (0, 0)-(0,1)-(1,0), (0, 0)-(1, 0)-(-1,0) and (0, 0)-(1,1)-(-1,-1), where the pair $(i,j)$ is the coordinate of an Ising spin.
The simulations were performed on 16 cores independently, each of which ran $3\times 10^6$ Metropolis MC sweeps.
The standard errors are cited as the statistical uncertainty.
}
\label{table:pab_odd}
\end{table}
\subsection{3D Isotropic Ising Model}
Consider now the same model on a 3D cubic lattice with $K_\text{nnn}^{(0)} = 0$, i.e., the 3D isotropic nearest-neighbor Ising model.
This model does not have an analytical solution, but is known to experience a continuous transition at $K_\text{nn}^{(0)} = 0.22165...$ \cite{3dising}.
To compute the CMTS at this nearest-neighbor critical point, we use $n = 3$, $L = 64$, and the $b=2$ majority rule with a random pick on ties.
The CMTS is computed in an eight-dimensional coupling space $\{K^{(0)}\}$, using the nearest-neighbor and the next-nearest-neighbor renormalized coupling constants, $K^{(n)}_\text{nn}$ and $K^{(n)}_\text{nnn}$, as shown in Table \ref{table:3D_ising}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.25em}
\centering
\begin{tabular}{llllllr}
\hline
\hline
$\mathcal P_{\alpha 2}$ & $\mathcal P_{\alpha 3}$ & $\mathcal P_{\alpha 4}$ & $\mathcal P_{\alpha 5}$ & $\mathcal P_{\alpha 6}$ & $\mathcal P_{\alpha 7}$ & $\mathcal P_{\alpha 8}$\\
\hline
2.642(8) & 1.540(8)& 6.61(3) & 2.46(1) & 0.788(3) & 6.92(4) & 1.99(1) \\
\hline
2.64(2) & 1.55(2) & 6.7(1) & 2.50(2) & 0.795(3) & 7.0(1) & 1.99(2) \\
\hline
\hline
\end{tabular}
\caption{$\mathcal P_{\alpha\beta}$ for the 3D isotropic Ising model.
The two rows in the table correspond to the two different $\alpha$ which respectively index the $\text{nn}$ and the $\text{nnn}$ renormalized constants.
$\beta$ runs from 1 to 8, corresponding to the following spin products, $S^{(0)}_\beta(\bm\sigma)$: (0, 0, 0)-(1, 0, 0), (0, 0, 0)-(1, 1, 0), (0, 0, 0)-(2, 0, 0), (0, 0, 0)-(2, 1, 0), (0, 0, 0)-(1, 0, 0)-(0, 1, 0)-(0, 0, 1), (0, 0, 0)-(1, 0, 0)-(0, 1, 0)-(1, 1, 0), (0, 0, 0)-(2, 1, 1), and (0, 0, 0)-(1, 1, 1), where the triplet $(i, j, k)$ is the coordinate of an Ising spin.
16 independent simulations were run, each of which took $3\times 10^5$ Metropolis MC sweeps.
The simulations were performed at the nearest-neighbor critical point with $K_\text{nn} = 0.22165$.
}
\label{table:3D_ising}
\end{table}
\subsection{2D Anisotropic Ising Model}
Consider then the anisotropic Ising model on a 2D square lattice with Hamiltonian $H(\bm\sigma)$
\begin{equation}
H(\bm\sigma) = -K^{(0)}_{\text{nn}_x} \sum_{\braket{i, j}_x} \sigma_i \sigma_j -K^{(0)}_{\text{nn}_y} \sum_{\braket{i,j}_y} \sigma_i \sigma_j
\end{equation}
where $\braket{i,j}_x$ and $\braket{i,j}_y$ respectively denote the nearest neighbor pairs along the horizontal and the vertical direction.
In the space of $\{K^{(0)}_{\text{nn}_x}, K^{(0)}_{\text{nn}_y}\}$, the model is exactly solvable and is critical along the line \cite{isingfermion}
\begin{equation}
\label{eq:ani}
\sinh(2K_{\text{nn}_x}^{(0)})\cdot \sinh(2K_{\text{nn}_y}^{(0)}) = 1.
\end{equation}
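The critical points used below can be recovered from Eq. \ref{eq:ani} with a standard root finder, as in the following sketch (Python with SciPy; the function name is ours), which solves the critical line at a fixed anisotropy ratio $r = K^{(0)}_{\text{nn}_y}/K^{(0)}_{\text{nn}_x}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def critical_Kx(r):
    # solve sinh(2 Kx) * sinh(2 r Kx) = 1 for Kx > 0
    f = lambda Kx: np.sinh(2 * Kx) * np.sinh(2 * r * Kx) - 1.0
    return brentq(f, 1e-6, 2.0)

print(critical_Kx(2))   # ~0.304689
print(critical_Kx(3))   # ~0.240606
\end{verbatim}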
With the $2\times 2$ majority rule, the system admits a marginal operator due to anisotropy in the RG transformation \cite{mcrg_aniising}.
We performed VMCRG calculations at two critical points of the system, with ${K^{(0)}_{\text{nn}_y}}/{K^{(0)}_{\text{nn}_x}} = 2$ and $3$, using four renormalized couplings: $K^{(n)}_{\text{nn}_x}, K^{(n)}_{\text{nn}_y}, K^{(n)}_{\text{nnn}}, K^{(n)}_\square$.
The CMTS is computed in the coupling space $\{K^{(0)}_{\text{nn}_x}, K^{(0)}_{\text{nn}_y}, K^{(0)}_\text{nnn}, K^{(0)}_\square, K^{(0)}_{\text{nnnn}_x}, K^{(0)}_{\text{nnnn}_y}\}$ using Eq. \ref{eq:marginal_A}, as shown by $\mathcal P_{\alpha\beta}$ in Table \ref{table:ani_pab}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.2em}
\centering
\begin{tabular}{llllll}
\hline
\hline
$K^{(0)}_{\text{nn}_x}$ & $\mathcal P_{\alpha 2}$ & $\mathcal P_{\alpha 3}$ & $\mathcal P_{\alpha 4}$ & $\mathcal P_{\alpha 5}$ & $\mathcal P_{\alpha 6}$\\
\hline
0.304689 & 0.653(8) & 2.387(10) & 0.814(8) & 1.749(8) & 1.21(1) \\
& 0.646(4) & 2.381(5) & 0.807(4) & 1.755(4) & 1.200(5) \\
& 0.647(8) & 2.38(1) & 0.808(12) & 1.747(14) & 1.20(1)\\
& 0.63(2) & 2.37(3) & 0.78(3) & 1.76(4) & 1.22(3)\\
Exact & 0.6478 \\
\hline
0.240606 & 0.507(4) & 2.241(5) & 0.692(7) & 1.74(1) & 0.957(7) \\
& 0.498(2) & 2.236(3) & 0.681(3) & 1.739(3) & 0.946(4) \\
& 0.499(8) & 2.24(1) & 0.68(1) & 1.736(14) & 0.940(14) \\
& 0.500(16)& 2.23(3) & 0.67(3) & 1.75(4) & 0.94(2) \\
Exact & 0.5 \\
\hline
\hline
\end{tabular}
\caption{$\mathcal P_{\alpha\beta}$ for the 2D anisotropic Ising model.
$\alpha$ indexes rows corresponding to the four renormalized constants: $\text{nn}_x, \text{nn}_y, \text{nnn},$ and $\square$.
$\beta = 2-6$ respectively indexes the component of the normal vector to CMTS corresponding to coupling terms $\text{nn}_y, \text{nnn}, \square, \text{nnnn}_x,$ and $\text{nnnn}_y$.
$\beta = 1$ corresponds to the $\text{nn}_x$ coupling term and $\mathcal P_{\alpha 1}$ is always 1 by definition.
}
\label{table:ani_pab}
\end{table}
\subsection{2D Tricritical Ising Model}
Finally, let us consider the 2D tricritical Ising model with the Hamiltonian
\begin{equation}
H(\bm\sigma) = -K^{(0)}_\text{nn} \sum_{\braket{i,j}}\sigma_i \sigma_j - K^{(0)}_\triangle \sum_{i} \sigma_i^2
\end{equation}
where $\sigma_i = 0, \pm 1$ and $\braket{i,j}$ denotes the nearest neighbor pairs.
In the coupling space of $K^{(0)}_\text{nn}$ and $K^{(0)}_\triangle$, the model admits a line of Ising-like continuous phase transitions, which terminates at a tricritical point.
At the tricritical point, the underlying conformal field theory (CFT) changes from the Ising CFT with central charge $\frac{1}{2}$ to one with central charge $\frac{7}{10}$ \cite{cft}.
Accompanying this phase transition is a change in the co-dimension of the even critical manifold, from 1 in the Ising case to 2 in the tricritical case \cite{tri_mcrg}.
We compute the CMTS at the tricritical point, which has been determined to occur at $K_\text{nn}^{(0)} = 1.642(8)$ and $K_\triangle^{(0)} = -3.227(1)$ both by MCRG \cite{tri_mcrg} and finite size scaling \cite{tri_fss}.
The coupling space we consider has six couplings, listed in Table \ref{table:tri_coup}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.2em}
\centering
\begin{tabular}{ll}
\hline
\hline
\hspace{5mm} & Coupling \\
\hline
1 & $\sigma_i^2$\\
2 & $\sigma_i\sigma_j$, $i$ and $j$ nearest neighbor\\
3 & $\sigma_i\sigma_j$, $i$ and $j$ next nearest neighbor\\
4 & $\sigma_i\sigma_j\sigma_k \sigma_l$, $i, j, k, l$ in the smallest plaquette\\
5 & $(\sigma_i\sigma_j)^2$, $i$ and $j$ nearest neighbor\\
6 & $(\sigma_i\sigma_j)^2$, $i$ and $j$ next nearest neighbor\\
\hline
\hline
\end{tabular}
\caption{The couplings used in the computation of CMTS for the 2D tricritical Ising model.
}
\label{table:tri_coup}
\end{table}
We use $n = 5, L = 256$ and the $b=2$ majority-rule.
The normal vectors to the CMTS are computed using the first five renormalized couplings, as the statistical uncertainty of the sixth renormalized coupling is too large.
The result is again represented by $\mathcal P_{\alpha\beta}$ and shown in Table \ref{table:tri_pab}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.2em}
\centering
\begin{tabular}{llllll}
\hline
\hline
$\alpha$\hspace{5mm} & $\mathcal P_{\alpha 2}$ & $\mathcal P_{\alpha 3}$ & $\mathcal P_{\alpha 4}$ & $\mathcal P_{\alpha 5}$ & $\mathcal P_{\alpha 6}$\\
\hline
1 & 2.085(2) & 2.100(5) & 0.928(1) & 2.079(1) & 2.073(2) \\
2 & 2.200(2) & 2.271(3) & 1.046(2) & 2.190(2) & 2.232(2) \\
3 & 2.171(1) & 2.2285(2) & 1.0160(5) & 2.163(1) & 2.193(1)\\
4 & 2.214(1) & 2.283(1) & 1.04(1) & 2.20(1) & 2.24(1)\\
5 & 2.038(4) & 2.03(1) & 0.873(2) & 2.03(1) & 2.00(1) \\
\hline
\hline
\end{tabular}
\caption{$\mathcal P_{\alpha\beta}$ for the 2D tricritical Ising model.
$\alpha$ indexes rows corresponding to the first five renormalized couplings listed in Table \ref{table:tri_coup}, which also gives the couplings for $\beta = 2-6$.
}
\label{table:tri_pab}
\end{table}
As can be seen, the rows of $\mathcal P$ are not equal within statistical uncertainty, indicating that the co-dimension is higher than one.
To verify that the co-dimension is two, one can check whether the row vectors for $\alpha = 3-5$ are in the vector space spanned by the first two row vectors.
Let $\vec u_n$ be the $n$th row vector of $\mathcal P$.
If the hypothesis of co-dimension two were correct, one could write:
\begin{equation}
\vec u_3 = a\vec u_1 + b\vec u_2
\label{eq:u3}
\end{equation}
and find $a$ and $b$ from the first two components of the vectors $\vec u_1, \vec u_2$, and $\vec u_3$.
We can then check whether the remaining components of $\vec u_3$ satisfy the linear relation in Eq. \ref{eq:u3} with the values of $a$ and $b$ so obtained.
A similar check can be carried out for the vectors $\vec u_4$ and $\vec u_5$.
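A minimal sketch of this check (Python/NumPy; rows of $\mathcal P$ are indexed from zero here, and the helper is ours) reads:
\begin{verbatim}
import numpy as np

def span_predictions(P):
    # Solve u_k = a*u_0 + b*u_1 from the first two
    # components, then return the predicted full rows
    # for comparison with the measured ones.
    M = P[:2, :2].T                  # 2x2 system matrix
    preds = []
    for k in range(2, P.shape[0]):
        a, b = np.linalg.solve(M, P[k, :2])
        preds.append(a * P[0] + b * P[1])
    return np.array(preds)
\end{verbatim}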
The vectors $\vec u_3, \vec u_4$, and $\vec u_5$ calculated in this way are reported in Table \ref{table:tri_fit}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.2em}
\centering
\begin{tabular}{llllll}
\hline
\hline
$\alpha$\hspace{5mm} & $\mathcal P_{\alpha 2} \hspace{5mm}$ & $\mathcal P_{\alpha 3}\hspace{5mm}$ & $\mathcal P_{\alpha 4}\hspace{5mm}$ & $\mathcal P_{\alpha 5}\hspace{5mm}$ & $\mathcal P_{\alpha 6}$\\
\hline
3 & 2.171 & 2.230 & 1.019 & 2.163 & 2.194\\
4 & 2.214 & 2.284 & 1.047 & 2.204 & 2.245\\
5 & 2.038 & 2.026 & 0.872 & 2.033 & 2.004\\
\hline
\hline
\end{tabular}
\caption{
$a\vec u_1 + b\vec u_2$ computed from Table \ref{table:tri_pab} for $\alpha = 3-5$ and $\beta = 2-6$.
}
\label{table:tri_fit}
\end{table}
As we can see, the $\mathcal P_{\alpha\beta}$ for $\alpha=3-5$ and $\beta=2-6$ in Table \ref{table:tri_fit} are equal within statistical uncertainty to the corresponding elements in Table \ref{table:tri_pab}, consistent with a co-dimension equal to two at the tricritical point.
\section{Curvature of the Critical Manifold}
Next, we compute the curvature of the critical manifold, using the isotropic Ising model as an example.
For a change $\{\delta K^{(0)}_\beta\}$ in the original coupling constants, we expand the corresponding change in the renormalized constants to quadratic order:
\begin{equation}
\label{eq:2nd_expansion}
\delta K^{(n)}_\alpha = \sum_{\beta} \mathcal A^{(n, 0)}_{\alpha\beta} \delta K^{(0)}_\beta + \frac{1}{2} \sum_{\beta\eta} \mathcal B^{(n,0)}_{\alpha \beta\eta} \delta K^{(0)}_\beta \delta K^{(0)}_\eta
\end{equation}
where $\mathcal A^{(n,0)}_{\alpha\beta}$ and $\mathcal B^{(n, 0)}_{\alpha\beta\eta}$ can be determined by substituting Eq. \ref{eq:2nd_expansion} in Eq. \ref{eq:min_condition} and enforcing equality to second order in $\delta K^{(0)}_\alpha$.
$\mathcal A^{(n, 0)}_{\alpha\beta}$ is already given in Eq. \ref{eq:linear}.
The result for $\mathcal B$ is that, for given $\beta$ and $\eta$ and for every $\gamma$, one requires
\begin{equation}
\label{eq:B}
\begin{split}
\sum_\alpha & \bbraket{S_\gamma(\bm\mu)}{S_\alpha(\bm\mu)}_V \mathcal B^{(n,0)}_{\alpha\beta\eta} = \bbraket{S_\gamma(\bm\mu)}{S_\beta(\bm\sigma)S_\eta(\bm\sigma)}_V \\
&+ \sum_{\alpha\nu} \mathcal A_{\alpha\beta}\mathcal A_{\nu\eta} \bbraket{S_\gamma(\bm\mu)}{S_\alpha(\bm\mu)S_\nu(\bm\mu) }_V \\
&- 2\sum_{\alpha} \mathcal A_{\alpha\eta} \bbraket{S_\gamma(\bm\mu)}{S_\beta(\bm\sigma) S_\alpha(\bm\mu)}_V
\end{split}
\end{equation}
where the connected correlation functions are again sampled in the biased ensemble $\braket{\cdot}_V$.
Note that $\mathcal B_{\alpha\beta\eta}$ given above is not symmetric in $\beta$ and $\eta$.
In order for it to be interpreted as a second-order derivative, it needs to be symmetrized:
\begin{equation}
\label{eq:second_d}
\frac{\partial^2 K^{(n)}_\alpha}{\partial K^{(0)}_\beta \partial K^{(0)}_\eta} = \frac{1}{2} \left(\mathcal B^{(n,0)}_{\alpha\beta\eta} + \mathcal B^{(n,0)}_{\alpha\eta\beta}\right)
\end{equation}
In the coupling space of any pair $\beta$ and $\eta$, $\{K^{(0)}_\beta, K^{(0)}_\eta\}$, the critical manifold of the 2D isotropic Ising model is a curve, and the curvature $\kappa_{\beta\eta}$ of the critical curve can be computed from the curvature formula \cite{implicitcurvature} of the implicit curve
\begin{equation}
K^{(n)}_\alpha(K^{(0)}_\beta, K^{(0)}_\eta) = \text{constant}
\end{equation}
with the second-order derivatives given in Eq. \ref{eq:second_d}.
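For concreteness, the standard curvature formula for a planar implicit curve $F(x,y) = \text{const}$ can be evaluated as in the sketch below (Python; the argument names are ours, with $F = K^{(n)}_\alpha$, $x = K^{(0)}_\beta$, and $y = K^{(0)}_\eta$):
\begin{verbatim}
def implicit_curvature(Fb, Fe, Fbb, Fbe, Fee):
    # first derivatives Fb, Fe from A^(n,0) (Eq. linear);
    # second derivatives from the symmetrized B (Eq. second_d)
    num = Fe**2 * Fbb - 2.0 * Fb * Fe * Fbe + Fb**2 * Fee
    return abs(num) / (Fb**2 + Fe**2) ** 1.5
\end{verbatim}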
Again, this curvature is determined separately by each renormalized constant $\alpha$.
The result is given in Table \ref{table:curvature}.
\begin{table}[htb!]
\setlength{\tabcolsep}{0.8em}
\centering
\begin{tabular}{lllll}
\hline
\hline
$K^{(0)}_\text{nn}$ & \diagbox{$\beta$}{$\eta$} & $\text{nnn}$ & $\square$ & \text{nnnn} \\
\hline
0.4407 & $\text{nn}$ & 0.143(8) & 0.27(2) & 0.21(2) \\
& $\text{nnn}$ & & 0.38(2) & 0.341(8) \\
& $\square$ & & & 0.20(2) \\
\multicolumn{2}{l}{Exact (\text{nn}, \text{nnn})} & 0.148 & & \\
\hline
0.37 & $\text{nn}$ & 0.18(1) & 0.23(1) & 0.30(3) \\
& $\text{nnn}$ & & 0.35(2) & 0.32(2) \\
& $\square$ & & & 0.18(3) \\
\hline
0.228 & $\text{nn}$ & 0.35(2) & 0.27(3) & 0.49(3) \\
& $\text{nnn}$ & & 0.35(4) & 0.29(2) \\
& $\square$ & & & 0.20(4) \\
\hline
\hline
\end{tabular}
\caption{$\kappa_{\beta\eta}$ at the same three critical points as in Table \ref{table:ising_pab}, calculated from $\partial^2 K^{(n)}_\text{nn}/\partial K^{(0)}_{\beta}\partial K^{(0)}_\eta$. The exact curvature for $\beta=\text{nn}$ and $\eta=\text{nnn}$ at the Onsager point is also shown \cite{isingcmts}.}
\label{table:curvature}
\end{table}
Here we only quote the result calculated from the nearest-neighbor renormalized constant $K_\alpha^{(n)}$, $\alpha = \text{nn}$.
The curvatures computed from the other renormalized constants have statistical uncertainties much larger than the ones in Table \ref{table:curvature}.
The difficulty in sampling the curvature, or generally any higher-order derivatives, compared to the tangent space, can be seen from Eq. \ref{eq:B}.
Note that on the left-hand side of Eq. \ref{eq:B}, the connected correlation function $\bbraket{S_\gamma}{S_\alpha}$ is of order $N$, where $N$ is the system size, but each of the terms on the right-hand side is of order $N^2$.
Thus, a delicate and exact cancellation of terms of order $N^2$ must happen between the terms on the right-hand side of Eq. \ref{eq:B} to give a final result only of order $N$.
The variance due to the terms on the right-hand side, however, will accumulate and give an uncertainty typical of $O(N^3)$ quantities, as each $S_\alpha$ is of order $N$.
(For the CMTS, the connected correlation functions of interest are also of order $N$, but the statistical uncertainties are those typical of $O(N^2)$ quantities, as seen in Eq. \ref{eq:linear}.)
In general, when an $m$-th order derivative of the critical manifold is computed, the connected correlation functions of interest will always be of order $N$, but the correlation functions that need to be sampled will be of order $N^{m+1}$, giving an exceedingly large variance.
Thus, although in principle arbitrarily high order information about the critical manifold is available by expanding Eq. \ref{eq:min_condition}, in practice only low-order knowledge on the critical manifold can be obtained with small statistical uncertainty from a simulation near a single critical point.
\section{Conclusion}
We have presented an MC procedure to obtain the local geometrical information on the critical manifold in the vicinity of a given critical point.
The procedure is in essence a projector Monte Carlo method that is based on the fact that the irrelevant operators in a system decay exponentially fast along an RG trajectory.
Because of such decay, the truncated RG Jacobian matrix, $\mathcal A^{(n,0)}$, acquires a structure that becomes sharper and sharper as $n$ increases, i.e., its kernel emerges with co-dimension equal to the number of relevant operators of the system.
This structure is quite robust.
On the one hand, it is immune to the truncation of the renormalized Hamiltonian.
On the other hand, it does not depend on the bias potential of the coarse-grained variables that is applied to the system.
From the perspective of connected correlation functions between the original spins $\bm\sigma$ and the coarse-grained spins $\bm\mu$, the aforementioned structure means the following.
Given any bias potential $V(\bm\mu)$ at any critical point, each local observable $S_\beta$ of $\bm\sigma$ can be viewed as a linear functional $\bbraket{\,\cdot\,}{S_\beta(\bm\sigma)}$ on the space of the local observables of $\bm\mu$:
\begin{equation}
\bbraket{\,\cdot\,}{S_\beta(\bm\sigma)}: S_\gamma(\bm\mu) \mapsto \bbraket{S_\gamma(\bm\mu)}{S_\beta(\bm\sigma)}_V
\end{equation}
The presence of the CMTS implies that many distinct linear functionals are linearly dependent.
In fact, by Eq. \ref{eq:linear}, for any $\{\delta K_\beta^{(0)}\}$ in the CMTS,
\begin{equation}
\label{eq:condition}
\sum_\beta \bbraket{\,\cdot\,}{S_\beta(\bm\sigma)} \delta K^{(0)}_\beta = 0
\end{equation}
This poses an infinite number of conditions which the coarse-graining procedure has to satisfy to generate a proper RG structure.
The majority-rule coarse-graining considered in our examples seems to do very well in satisfying these conditions.
But a question still remains.
Are the conditions satisfied exactly or just approximately but so closely that any violation is overshadowed by the statistical uncertainty?
In the latter case, which coarse-graining procedure, preferably with a finite number of parameters, can satisfy all the conditions in Eq. \ref{eq:condition}?
In the former case, what is the profound reason why all these conditions can be satisfied simultaneously?
\begin{acknowledgments}
All the codes used in this project were written in C\texttt{++}, and will be available upon request. We acknowledge support from DOE Award DE-SC0017865.
\end{acknowledgments}
\bibliographystyle{apsrev}
\section{Introduction}
In recent years, research on circuit quantum electrodynamics (cQED) has been extended to mesoscopic devices, e.g., quantum dots (QDs)
\cite{childress-lukin-pra2004,delbecq2011,Petersson2012,Schroer2012,BergenfeldtSET2012,frey-wallraff-prl2012,toida2013,basset2013,schiro2014,Liu2014},
tunnel junctions~\cite{gasse2013,soquet-simon-prb2013,Souquet2014,forgues2014,mendes-mora-njp2015,jin-schon-prb2015,qassemi-blais-prl2106,Parlavecchio2015,mora-portier-arxiv2105}
or dc-biased Josephson junctions~\cite{leppaekangas-fogelstrom-prl2013,armour2013,gramich2013,chen2014,dambach2015}. The high sensitivity of the cavity
field offers a powerful way to probe the electronic conductor in a non-invasive way~\cite{cottet2011,cottet2015,dmytruk-mora-simon-prb2015} thus acting as an excellent tool to investigate correlated electronic systems such as Majorana fermions~\cite{CottetMajos2013,QC32013,dmytruk-simon-prb2015} and Kondo physics
\cite{delbecq2011,Deng-leHur-guo-arxiv2015}. Furthermore, many interesting phenomena due to the interplay between electrons and photons have been
demonstrated in this hybrid device: quadrature squeezing \cite{gasse2013,forgues2014,mendes-mora-njp2015,qassemi-blais-prl2106,mora-portier-arxiv2105}, spin-photon
coupling \cite{Petersson2012,viennot-kontos-science2015}, coupling between distant QDs \cite{delbecq2013,DT32013,lambert2013,deng2015}, and photon lasing
\cite{jin-marthaler-schon-prb2011,gullans2015,lambert2015,liu2015}. These demonstrations open a wide avenue of possibilities in quantum information processing and quantum state engineering~\cite{souquet2016} with mesoscopic devices coupled to transmission lines. The photons emitted by a quantum conductor also carry information on the dynamics of electrons~\cite{aguado-kouwenhoven-prl2000,basset2010,xu-belzig-prl2014}. They have been used to characterize the photonic side of the dynamical Coulomb blockade effect (DCB)~\cite{hofheinz-esteve-prl2011,altimiras2014,leppaekangas-fogelstrom-njp2014}, in which photons are radiated by inelastic electron scattering.
Even with the rapid progress of cQED with mesoscopic devices, the physics of a quantum point contact (QPC) coupled to a superconducting microwave
resonator has been practically unexplored. The QPC is a coherent conductor formed by two metallic gates on the top of a two-dimensional electron gas.
A voltage applied to the gates creates a one-dimensional constriction connecting the two sides of the electron gas. The gate voltage controls both
the number of channels $n$ connecting the two metallic reservoirs and their transmission probabilities $T_{n}$. As microwave photons are created by
the scattering of electrons tunneling from one reservoir to the other, the QPC is an excellent device to investigate photon statistics with
controlled electron scattering. In the case of a coherent conductor QPC coupled to an open transmission line, it has been predicted that the
photons emitted by the QPC present sub- or super-Poissonian statistics, depending on the transmission probability and source-drain voltage~\cite{beenakker-schomerus-prl2001,beenakker-schomerus-prl2004,fulga2010,lebedev-blatter-prb2010,hassler-otten-prb2015}. However, for a microwave cavity exchanging photons
with the QPC, less is known about photon statistics. The cases of a tunnel junction~\cite{jin-schon-prb2015} and a quantum dot~\cite{BergenfeldtSET2012}
have been investigated with a quantum master equation.
In this article, we discuss a dc-biased coherent QPC coupled to a microwave resonator formed with a transmission line cavity (TLC).
A single mode is considered in the cavity, with frequency $f_0 = \omega_0 / 2 \pi$, such that our model equivalently describes a lumped LC circuit
coupled to the QPC. When the dc bias $V$ is smaller than the cavity frequency in voltage units, $\hbar \omega_0/e$, electrons coherently traversing
the QPC do not carry enough energy to emit photons to the cavity and the cavity remains in the vacuum state at zero temperature.
Despite this absence of photons, vacuum fluctuations in the cavity still affect and suppress electron tunneling with Franck-Condon factors~\cite{BergenfeldtSET2012}.
This suppression is alternatively captured by the equilibrium $P(E)$ theory~\cite{ingold-nazarov-book1992} of the DCB \cite{yeyati-urbina-prl2001,golubev-zaikin-prl2001}.
For $V> \hbar \omega_0/e$, the electrons scattered by the QPC emit and absorb photons to/from the cavity. We want to characterize the distribution of photons by studying the mean number of photons, the damping rate of the cavity and the second-order coherence $g^{(2)} (0)$ at vanishing time. For a weak electron-photon (e-p) coupling, the field radiated in an open transmission line exhibits~\cite{beenakker-schomerus-prl2001} a negative-binomial form for the photon distribution, similar to blackbody radiation, approaching a Poisson distribution with $g^{(2)} (0)=1$ when the number of bosonic modes is infinite. In our case of a single-mode cavity, we consistently find a thermal distribution with $g^{(2)} (0)=2$ as with other conductors~\cite{BergenfeldtSET2012,jin-schon-prb2015}, independently of the transmission coefficients of the QPC. Proceeding to the next-to-leading order at weak light-matter coupling, we obtain analytically a decrease in $g^{(2)} (0)$ controlled by the bias voltage $V$. This decrease is caused by a two-photon absorption process, whereas two-photon emission is energetically forbidden for $V < 2 \hbar \omega_0/e$. The balance is reestablished at large bias, where $g^{(2)} (0)=2$ is recovered.
Interestingly, our next-to-leading order calculation can also be understood as a backaction effect. In the presence of electron transport accompanied by photon emission, the cavity reaches a non-equilibrium stationary state characterized by a Bose-Einstein photon distribution. This cavity state generates an out-of-equilibrium DCB affecting transport which in turn modifies the photon distribution. Here we show that the conventional $P(E)$ theory extended to the stationary state~\cite{Souquet2014} recovers quantitatively the results of the straightforward perturbative calculation, revealing further the physics behind our analytical results.
We also discuss the connections between different theoretical approaches to describe the coupled QPC-cavity system. We first consider a capacitive model initially devised in Ref.~\onlinecite{dmytruk-mora-simon-prb2015} and included here in the Keldysh path integral framework~\cite{kamenev2011,torre2013}. By inspecting the leading electron-photon order, we show its equivalence with a Keldysh path integral method introduced by Kindermann and Nazarov~\cite{kindermann2003,snyman-nazarov-prb2008} in which electronic variables have already been integrated out. We also apply a rate equation approach, obtained by neglecting off-diagonal elements in the quantum master equation, and show that it coincides with results from the path integral methods in the rotating-wave approximation (RWA). Denoting by $\kappa$ the damping rate of the cavity, we focus below on the case $|eV - \hbar \omega_0| \gg \kappa$ where the RWA is applicable, leaving aside the regime $eV \simeq \hbar \omega_0$ where the antibunching inherited from electrons plays a crucial role in the photon distribution~\cite{hassler-otten-prb2015}.
The article is organized as follows. Section~\ref{qpc-tl} discusses different architectures coupling the QPC to the TLC. In Sec.~\ref{sec1} we introduce the capacitive model and derive an effective action for the photons at weak e-p coupling. We compute the number of photon, the cavity damping rate, the frequency pull and the second-order coherence $g^{(2)}(\tau)$. The alternative action form in which the electron degrees of freedom are exactly integrated is introduced in Sec.~\ref{sec2} and the previously derived effective action at weak e-p coupling is recovered. Further assuming small transmissions $T_{n} \ll 1$, we determine the first non-quadratic corrections to the effective action and provide results for the average number of photons, the cavity damping rate and $g^{(2)}(0)$. These results are exactly recovered in Sec.~\ref{sec3} using rate equations and in Sec.~\ref{sec4} by implementing a non-equilibrium $P(E)$ theory to describe the cavity DCB. Sec.~\ref{concl} concludes.
\section{QPC-TLC coupling \label{qpc-tl}}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig1.pdf}
\caption{(a) Schematic representation of the TLC coupled to the QPC. The QPC is defined by the gate voltage $V_{g}$ that controls the transmission probability
$T_{n}$ of each channel $n$. A voltage $V$ applied to the TLC voltage node controls the electronic transport in the QPC. (b) and (c) Circuit representations
of a galvanic and a capacitive coupling between the TLC and the QPC, respectively. \label{fig1}}
\end{center}
\end{figure}
Here we discuss how the QPC can be coupled to a TLC. Fig.~\ref{fig1}(a) depicts the TLC-QPC hybrid system. The TLC is characterized by its length
$d$ and frequency $\omega_{0}=1/\sqrt{C_{c}L_{c}}$, where $C_{c} = C_{0}d$ and $L_{c}=L_{0}d$ are the TLC capacitance and inductance, respectively.
The QPC can be coupled to the TLC either via a galvanic \cite{altimiras2014,Parlavecchio2015} or capacitive coupling \cite{Petersson2012,frey-wallraff-prl2012}.
In the galvanic coupling scheme, Fig.~\ref{fig1}(b), the TLC is terminated by the QPC and the dimensionless coupling constant is
\begin{equation}
\lambda_g = \sqrt{\frac{\pi Z_\text{c}}{R_\text{K}}}
\end{equation}
where $Z_\text{c}=\sqrt{L_c/C_c}$ is the cavity impedance and $R_{\text{K}}=h/e^2$ is the quantum resistance. A coupling strength of $\lambda_{g}^{2} \approx 0.3$
has been reached~\cite{altimiras2014} and even larger values are expected in a near future~\cite{samkharadze2015}.
In the capacitive coupling scheme, Fig.~\ref{fig1}(c), the QPC is inside the cavity and both left ($l$) and right ($r$) electronic
reservoirs are coupled to the central and ground lines via capacitances $C_{\alpha}$ and $C_{\alpha g}$, where $\alpha=l,r$. The reservoirs are also coupled
between themselves via capacitance $C_\text{qpc}$. In this case, the dimensionless coupling constant between the lead $\alpha$ and the cavity field is
\begin{align}
\lambda_{\alpha} = \frac{C_\alpha C_{\alpha^{\prime} s} + C_\text{qpc}C_s}{C_{ls}C_{rs}+C_\text{qpc}C_s}\frac{ef(x_0)}{\sqrt{2\hbar\omega_{0}C_{c}}}
\end{align}
where $\alpha^{\prime}=l,r$ with $\alpha\neq \alpha^{\prime}$, $e$ is the electron charge, $f(x_0)$ is the wavefunction of the cavity field at the coupling
position $x_0$, $C_{\alpha s}=C_\alpha+C_{\alpha g}$ and $C_s=C_{ls}+C_{rs}$. The coupling constant was obtained via circuit theory \cite{koch_girvin_pra2010,BergenfeldtSET2012}.
A detailed description of the circuit theory applied to a quantum dot coupled to the TLC is given in Ref.~[\onlinecite{BergenfeldtSET2012}].
In the next section, we show that the photonic properties are characterized by the difference between the e-p coupling of each lead with the TLC,
i.e., the relevant quantity defining the coupling is
\begin{equation} \label{eq-lamb1}
\lambda_c = \lambda_{l}- \lambda_{r}= \frac{C_l C_{rg}-C_r C_{lg}}{C_{ls}C_{rs}+C_\text{qpc}C_s}\frac{ef(x_0)}{\sqrt{2\hbar\omega_{0}C_{c}}}.
\end{equation}
This equation reveals that if $C_{l(g)}=C_{r(g)}$ the QPC is completely decoupled from the TLC. It also shows that the maximum coupling occurs when one
reservoir is coupled to the central line and the other one to the ground line. Thus, in this geometry, placing the QPC where the cavity field is maximum
and considering the typical values for the capacitances $C_l \approx C_{rg} \approx 10$ fF and $C_\text{qpc} \approx 0.1$ fF we have
that $\lambda_{c} \approx 0.98 \lambda_{g}$. In this geometry the capacitive coupling strength is of the same order as the galvanic one.
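One can check this estimate with a few lines (a sketch in Python; we assume $C_r = C_{lg} = 0$ for the geometry described above and $f(x_0)=1$ at a field maximum, in which case the prefactor $ef(x_0)/\sqrt{2\hbar\omega_{0}C_{c}}$ in Eq.~\eqref{eq-lamb1} reduces to $\lambda_g$):
\begin{verbatim}
Cl, Crg, Cqpc = 10.0, 10.0, 0.1   # capacitances in fF
Cr, Clg = 0.0, 0.0                # assumed geometry
Cls, Crs = Cl + Clg, Cr + Crg
Cs = Cls + Crs
ratio = (Cl * Crg - Cr * Clg) / (Cls * Crs + Cqpc * Cs)
print(ratio)   # ~0.980, i.e. lambda_c ~ 0.98 lambda_g
\end{verbatim}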
It is interesting to note that $\lambda_{g(c)}^{2}\hbar\omega_{0}=E_{c_{g(c)}}$, where $E_{c_{g(c)}}$ is the cavity charging energy. The different types
of coupling the QPC to the TLC produce different charging energies. For the galvanic coupling $E_{c_{g}}=e^{2}/2C_{c}$, while in the capacitive one
$E_{c_{c}}=e^{2}/2C_\text{eff}$, where $C_\text{eff}=C_{c}[C_{ls}C_{rs}+C_\text{qpc}C_s]^{2}/[(C_l C_{rg}-C_r C_{lg})f(x_0)]^2$.
From now on we use $\lambda$ to refer to the dimensionless e-p coupling and $E_{c}$ to the charging energy independently of the way the cavity is coupled
to the QPC.
\section{Model and Keldysh path-integral \label{sec1}}
\subsection{Formalism}
We consider a single-mode cavity field coupled to both sides of the constriction with different coupling constants $g_{l(r)}=\hbar\omega_{0} \lambda_{l(r)}$.
The hybrid system Hamiltonian is
\begin{equation} \label{Hamil}
\hat{H} = \hbar\omega_{0}\hat{a}^{\dagger}\hat{a} + \hat{H}_{\rm qpc} + (\hat{a}^{\dagger}+\hat{a})\hat{\eta}.
\end{equation}
The first term is the single-mode cavity field Hamiltonian, $\omega_{0}$ is the photon frequency and $\hat{a}$ ($\hat{a}^{\dagger}$) the photon annihilation
(creation) operator. The second term is the Hamiltonian describing the QPC electronic degrees of freedom; it contains the kinetic
energy of the electronic leads, the scattering term between them and electronic interactions. The last term describes the TLC-QPC interaction, where $\hat{\eta}=g_l\hat{n}_l+g_r\hat{n}_r$ and $\hat{n}_{\alpha}$ is the operator giving the total number of electrons in the lead $\alpha$.
A rigorous description of the e-p coupling via circuit theory gives rise to a capacitive electron-electron interaction \cite{BergenfeldtSET2012},
which we neglect in our description.
We use the Keldysh path-integral formalism to investigate the cavity field properties \cite{torre2013,kamenev2011}. The partition function
describing the cavity-QPC hybrid device is
\begin{align}
\mathcal{Z} &= \int \mathcal{D}[a,a^{*},c,c^{*}]e^{i (\mathcal{S}_{\text{cav}}+ \mathcal{S}_{\text{e-p}}+ \mathcal{S}_{\text{qpc}})/\hbar},
\end{align}
where $a$ ($c$) is the complex photonic (fermionic) field. $\mathcal{S}_{\text{cav}}$, $\mathcal{S}_{\text{e-p}}$ and $\mathcal{S}_{\text{qpc}}$
are, respectively, the cavity field, cavity-QPC interaction and QPC actions. In the frequency-domain the cavity action, in terms of the classical
and quantum components of the field $a$, is written as
\begin{equation} \label{Scav}
\mathcal{S}_{\text{cav}} = \int_{\omega} (a_{c}^{*},a_{q}^{*})_{\omega} \begin{pmatrix}
0 & [G_{0}^{\text{R}}]^{-1}(\omega) \\ [G_{0}^{\text{A}}]^{-1}(\omega) & [G_{0}^{\text{K}}]^{-1}(\omega) \end{pmatrix} \begin{pmatrix}
a_{c} \\ a_{q} \end{pmatrix}_{\omega},
\end{equation}
where we use the notation $\int_{\omega} = \int d\omega /2\pi$. $[G_{0}^{\text{R(A)}}]^{-1}(\omega) = \hbar\omega - \hbar\omega_{0} \pm i\hbar 0$
is the bare retarded (advanced) Green's function (GF), and the bare Keldysh GF $[G_{0}^{\text{K}}]^{-1}(\omega)$ is only a regularization, which
is neglected in the presence of the self-energies.
The cavity-QPC action is
\begin{equation} \label{epaction}
\mathcal{S}_{\text{e-p}}=-\int \frac{dt}{\sqrt{2}} \{[a_{c}(t)+a_{c}^{*}(t)]\eta_{q}(t)+[a_{q}(t)+a_{q}^{*}(t)]\eta_{c}(t)\}
\end{equation}
with $\eta_{c(q)}(t) = \eta_{+}(t) \pm \eta_{-}(t)$, $\eta_{\pm}(t)= \sum_{\alpha = l,r} g_{\alpha}n_{\alpha}^{\pm}(t)$, and $n_{\alpha}^{\pm}(t)$
is the density field in the forward ($+$) and backward ($-$) branches of the Keldysh time-ordered contour. Since we are interested in the cavity
field properties, the specific form of the QPC action is not needed and, hence, we keep it as general as possible. The QPC is only characterized
by its transmission probability $T_{n}$.
Integrating over the fermionic degrees of freedom yields an effective action ($\mathcal{S}_{\text{eff}}$) describing the photons. Assuming a small
cavity-QPC coupling constant we derive $\mathcal{S}_{\text{eff}}$ using the cumulant expansion~\cite{mendes-mora-njp2015}. To second-order in the e-p
coupling we obtain after averaging over electrons
\begin{equation}
\langle e^{i \mathcal{S}_{\text{e-p}}/\hbar}\rangle_{e} \simeq e^{i (\langle \mathcal{S}_{\text{e-p}}\rangle_{e}
+i \langle \delta \mathcal{S}_{\text{e-p}}^{2}\rangle_{e}/ 2\hbar)/\hbar},
\end{equation}
with $\langle \ldots \rangle_{\text{e}} = \int \mathcal{D}[c,c^{*}](\ldots) e^{i \mathcal{S}_{\text{qpc}}/\hbar}$, and $ \delta \mathcal{S}_{\text{e-p} }
= \mathcal{S}_{\text{e-p} } - \langle \mathcal{S}_{\text{e-p} } \rangle_{\text{e}}$. Thus, the photon effective action takes the form
\begin{equation}
\mathcal{S}_{\text{eff}} = \mathcal{S}_{\text{cav}} + \langle \mathcal{S}_{\text{e-p}}\rangle_{e} + \frac{i}{2\hbar}\langle \delta \mathcal{S}_{\text{e-p}}^{2}\rangle_{e}.
\end{equation}
The first term describes the uncoupled TLC, the second and third terms are the linear and quadratic contribution resulting from the e-p coupling.
The linear term is
\begin{align} \label{s-linear}
\langle \mathcal{S}_\text{e-p} \rangle_{\text{e}} &= -\sqrt{2}\int_{\omega} [a_{q}^{*}(\omega)+a_{q}(-\omega)][g_{r} \langle \hat{n}_{r}(\omega) \rangle + g_{l}\langle \hat{n}_{l}(\omega)\rangle]
\end{align}
with $\langle \hat{n}_{\alpha}(\omega) \rangle = \langle n_{\alpha}^{+}(\omega) \rangle_{e}= \langle n_{\alpha}^{-}(\omega) \rangle_{e}$. As expected it depends
only on the quantum fields, since a dependence on classical fields would violate the causality structure~\cite{kamenev2011}.
The quadratic action
\begin{align} \label{Sep2}
\langle \delta \mathcal{S}_{\text{e-p} }^{2} \rangle_{\text{e}} & = - \frac{2\hbar}{i}\int_{\omega} (a_{c}^{*},a_{q}^{*})_{\omega}
\begin{pmatrix} 0 & \Sigma^{\text{A}}(\omega) \\ \Sigma^{\text{R}}(\omega) & \Sigma^{\text{K}}(\omega) \end{pmatrix}
\begin{pmatrix} a_{c} \\ a_{q} \end{pmatrix}_{\omega} \nonumber \\
&- \mathcal{S}_{\text{a}}
\end{align}
introduces the advanced $\Sigma^\text{A}(\omega)$, retarded $\Sigma^\text{R}(\omega)$ and Keldysh $\Sigma^\text{K}(\omega)$ self-energies, while
\begin{equation*} \label{Sa}
\mathcal{S}_\text{a} = \frac{2\hbar}{i}\int_{\omega} [a_{c}(-\omega)a_{q}(\omega)\Sigma^{\text{R}}(\omega) + a_{q}(-\omega)a_{q}(\omega)\frac{\Sigma^{\text{K}}(\omega)}{2}+ \text{c.c}],
\end{equation*}
is the anomalous action. This term is responsible for producing quadrature squeezing when the quantum conductor is ac-biased~\cite{mendes-mora-njp2015}.
In the time-domain the retarded and Keldysh self-energies are expressed as
\begin{subequations} \label{retard}
\begin{align}
\Sigma^{\text{R}}(t_{2}-t_{1}) &= -\frac{i}{\hbar} \Theta(t_{2}-t_{1})\langle [\delta \hat{\eta}(t_{2}), \delta \hat{\eta}(t_{1})]\rangle \\
\Sigma^{\text{K}}(t_{1}-t_{2}) &= -\frac{i}{\hbar}\langle \{\delta \hat{\eta}(t_{1}), \delta \hat{\eta}(t_{2})\}\rangle.
\end{align}
\end{subequations}
with the notation $\delta \hat{A} (t) \equiv \hat{A} (t) - \langle \hat{A} \rangle$; they are thus expressed in terms of the density-density correlators
\begin{equation} \label{dens_corr}
\langle \delta\hat{\eta}(t_{1})\delta\hat{\eta}(t_{2}) \rangle = \sum_{\alpha,\beta =l,r} g_{\alpha} g_{\beta} \langle \delta \hat{n}_{\alpha}(t_{1})\delta \hat{n}_{\beta}(t_{2}) \rangle.
\end{equation}
Following Ref.~\onlinecite{dmytruk-mora-simon-prb2015}, it is convenient to rewrite these correlators using current operators
\begin{equation} \label{dens-curr}
\hat{n}_{\alpha}(t) = \frac{i}{e}\int_{\omega} \frac{\hat{I}_{\alpha}(\omega)}{\omega} e^{-i\omega t},
\end{equation}
where $\hat{I}_{\alpha}$ is the charge current towards the lead $\alpha = l/r$. For a time-independent bias applied to the QPC, we introduce
the noise power spectrum function $\langle \delta \hat{I}_{\alpha}(\omega_{1}) \delta \hat{I}_{\beta}(\omega_{2}) \rangle = 2\pi S_{\alpha \beta}(\omega_{1}) \delta(\omega_{1}+\omega_{2})$ in its non-symmetrized form. Using the above relations, the self-energies of Eqs.~\eqref{retard}
decompose in frequency space as
\begin{align*}
\Sigma^{\text{R}}(\omega) =\Delta(\omega)-i\Gamma(\omega)/2, \qquad \Sigma^{\text{A}}(\omega)=\Sigma^{\text{R} *}(\omega)
\end{align*}
with the real part
\begin{align} \label{real_part}
\Delta(\omega) &= -\sum_{\alpha,\beta} \frac{g_{\alpha}g_{\beta}}{e^2\hbar} \mathcal{P}\int_{\omega_{1}} \frac{S_{\alpha \beta}(\omega_{1})-
S_{\alpha \beta}(-\omega_{1})}{\omega_{1}^{2}(\omega_{1}+\omega)}
\end{align}
and the imaginary part
\begin{align} \label{imag_part}
\Gamma(\omega)&=\sum_{\alpha,\beta} \frac{g_{\alpha}g_{\beta}}{e^2\hbar} \frac{S_{\alpha \beta}(\omega)-S_{\alpha \beta}(-\omega)}{\omega^{2}},
\end{align}
while the Keldysh component is
\begin{align*}
\Sigma^\text{K}(\omega) = -i\sum_{\alpha,\beta} \frac{g_{\alpha}g_{\beta}}{e^2\hbar} \frac{S_{\alpha \beta}(\omega)+S_{\alpha \beta}(-\omega)}{\omega^{2}}.
\end{align*}
The photon self-energies are completely characterized by the non-symmetrized auto- ($\alpha=\beta$) and cross-correlation
($\alpha\neq \beta$) noise spectra. While the retarded self-energy is given by the difference between absorption noise ($\omega>0$)
and emission noise ($\omega<0$), the Keldysh component is proportional to the sum of them, {\it i.e.}, the symmetrized noise.
The advantage of introducing charge currents instead of densities in Eqs.~\eqref{retard} is that the resulting noise correlators can be computed by various techniques, in particular the conventional Landauer-B\"uttiker scattering formalism~\cite{blanter2000} in the case of a QPC. Moreover noise correlators are measurable quantities which can be accessed experimentally.
As the self-energies are proportional to the square of the e-p coupling constant, the poles of the GFs are weakly modified by them. Therefore, we
approximate $\Sigma^{i}(\omega) \approx \Sigma^{i}(\omega_{0})$, with $i=\text{R, A, and K}$. This approximation is equivalent to the rotating-wave
approximation (RWA) performed in the time-domain, which consists in averaging to zero all the fast-oscillating terms.
The anomalous action, $\mathcal{S}_\text{a}$, is neglected within the RWA, since its contribution oscillates with frequency $2\omega_{0}$.
The final expression for the effective action is obtained by shifting the classical field to absorb the linear action
$\langle \mathcal{S}_{\text{e-p}} \rangle_{\text{e}}$, namely
\begin{equation}
a_{c}(\omega) \rightarrow a_{c}(\omega) + \sqrt{2}\sum_{\alpha}g_{\alpha} \langle \hat{n}_{\alpha}(\omega)\rangle G_{\text{R}}(\omega),
\end{equation}
thereby producing a correction to the photon correlators. For the number of photons the correction is proportional to $(\lambda \sum_n T_n/R_\text{K})^2$,
which is higher-order in the e-p coupling and can be neglected, see below. The third and fourth orders in the e-p
coupling discussed in Sec.~\ref{sec2} are also not altered by this shift in the classical field, as discussed in Appendix~\ref{appA}.
\subsection{Results from scattering theory \label{stheory}}
The photon effective action, given by Eq.~\eqref{Scav} and the first term of Eq.~\eqref{Sep2}, is
\begin{align} \label{Seff}
\mathcal{S}_{\text{eff}} & = \int_{\omega} (a_{c}^{*},a_{q}^{*})_{\omega} \begin{pmatrix} 0 & G_{\text{A}}^{-1}(\omega) \\ G_{\text{R}}^{-1}(\omega)
& -\Sigma^{\text{K}}(\omega_{0}) \end{pmatrix} \begin{pmatrix} a_{c} \\ a_{q} \end{pmatrix}_{\omega}
\end{align}
where $G_{\text{R(A)}}^{-1}(\omega) = \hbar\omega - \hbar\omega_{0} - \Sigma^{\text{R(A)}}(\omega_{0})$ is the inverse of the retarded (advanced)
GF. The Keldysh GF is $G^{K}(\omega) = G_{\text{R}}(\omega)\Sigma^{\text{K}}(\omega_{0})G_{\text{A}}(\omega)$. So far the photon effective action
has been derived assuming only weak electron-photon coupling but arbitrary transmission or electron-electron interactions.
To further characterize the photon effective action, we assume a non-interacting QPC, i.e., in the absence of both electron-electron interaction
and e-p coupling, which implies energy-independent transmission probabilities \cite{blanter2000}. We compute the nonsymmetrized noise via the scattering
formalism~\cite{aguado-kouwenhoven-prl2000,blanter2000}. Charge conservation imposes
\begin{equation}
S_{rr}(\omega)= S_{ll}(\omega)=-S_{rl}(\omega)=-S_{lr}(\omega),
\end{equation}
with
\begin{align} \label{qpc-nsnoise}
S_{rr}(\omega) &= \frac{1}{R_{\text{K}}}\left[2\hbar\omega \Theta(\hbar\omega) \sum_{n} T_{n}^{2}+\sum_{n} T_{n}R_{n}\bar{S}(\omega)\right],
\end{align}
where we introduce the notation
\begin{equation}\label{eq-noise}
\bar{S}(\omega)=(\hbar\omega+eV)\Theta(\hbar\omega+eV)+(\hbar\omega-eV)\Theta(\hbar\omega-eV),
\end{equation}
$\Theta(\omega)$ is the Heaviside step function and $R_n=1-T_n$.
We are now in a position to compute the real and imaginary parts of the retarded self-energy. The real part
\begin{equation}
\Delta(\omega) = -\frac{(g_r-g_l)^{2}}{\hbar\pi}\sum_{n}T_{n} \mathcal{P}\int_{\omega_{1}}\frac{1}{\omega_{1}(\omega_{1}+\omega)}=0,
\end{equation}
is proportional to the difference between the e-p couplings and the transmission probability. However, the integral is zero for any value of
$\omega$, meaning that the cavity frequency is not shifted by the e-p coupling. This result extends the absence of a cavity frequency pull
obtained for a tunnel junction~\cite{mendes-mora-njp2015,dmytruk-mora-simon-prb2015} to leading order in the e-p coupling. Higher orders in
the coupling or an energy dependence of the transmission provide the non-linearities needed for a shift in the cavity resonant frequency.
The imaginary part of the self-energy is related to the exchange of photons between the cavity and the QPC. It determines the cavity damping
rate $\kappa$, i.e., the photon losses to the electronic environment. To leading order, $\kappa \simeq \kappa_0 = \Gamma(\omega_{0})/\hbar$,
with
\begin{equation} \label{kapp}
\kappa_0 =\frac{\lambda^{2}}{e^2}[S_{rr}(\omega_{0})-S_{rr}(-\omega_{0})] = \frac{\omega_{0}\lambda^{2}}{\pi}\sum_{n} T_{n},
\end{equation}
where $\lambda=(g_r -g_l)/\hbar\omega_{0}$. The cavity damping rate has a simple dependence on the transmission probability $T_{n}$,
and it increases with the number of electron channels. Considering a single channel QPC and $T_{1}=1$, one can access the dimensionless
e-p constant by measuring the cavity peak broadening \cite{Petersson2012,frey-wallraff-prl2012,toida2013,Deng-leHur-guo-arxiv2015}.
Finally, the Keldysh self-energy
\begin{align*}
\Sigma^{\text{K}}(\omega_{0}) = -i\frac{\lambda^{2}R_{\text{K}}}{2\pi}[S_{rr}(\omega_{0})+S_{rr}(-\omega_{0})],
\end{align*}
also depends on the QPC transmissions. We note that the different self-energies are non-zero only if $g_r$ differs from $g_l$, corresponding
to an inhomogeneous coupling between the leads and the cavity field.
\subsection{Cavity field properties}
We now present results for the cavity photons. The number of photons is related to the Keldysh GF via the formula
$2 \langle n \rangle +1=iG^\text{K}(t=0)$. From the Gaussian action~\eqref{Seff}, we easily find
\begin{align} \label{nphotons}
\langle n \rangle &= \frac{S_{rr}(-\omega_{0})}{S_{rr}(\omega_{0})-S_{rr}(-\omega_{0})}= F\frac{(eV-\hbar\omega_{0})}{2\hbar\omega_{0}}\Theta(eV-\hbar\omega_{0}),
\end{align}
where $F=\sum_{n}T_{n}(1-T_{n})/\sum_{n}T_{n}$ is the QPC Fano factor. The number of photons is given by the ratio of the emission noise~\cite{beenakker-schomerus-prl2004,LoosenLesovikJETP1997} to
the rate at which the cavity loses photons to the QPC. At zero temperature and $V \leq \hbar \omega_{0}/e$, the number of photons is zero.
In this case, electrons traversing the QPC must have an energy in a window of size $e V$ and are therefore not able to emit a photon
with energy $\hbar \omega_0$ to the cavity. Remarkably, the number of photons decreases as the transmission probabilities increase and
even vanishes in the limit of perfectly transmitting channels. The physical reason is that photon emission, similarly to quantum noise,
is related to charge discreteness in electron transport. At perfect transmission, there is a continuous flow of charges with no noise
and no photon emission. In this case, however, the damping rate of the cavity $\kappa_0$ remains finite, as the bath of electrons can
still absorb photons from the cavity. For weak transmissions $T_n \ll 1$, the result~\eqref{nphotons} coincides
with previous studies for a tunnel junction~\cite{jin-schon-prb2015,mendes-mora-njp2015} and metallic QDs~\cite{BergenfeldtSET2012}.
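As an illustration of Eqs.~\eqref{kapp} and \eqref{nphotons}, the following minimal sketch (Python/NumPy; the function name and example values are ours) evaluates $\kappa_0$ and $\langle n \rangle$ at zero temperature for a given set of transmissions:
\begin{verbatim}
import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19

def kappa0_and_n(T, lam, omega0, V):
    # leading-order damping rate (Eq. kapp) and mean
    # photon number (Eq. nphotons) at zero temperature
    T = np.asarray(T, dtype=float)
    kappa0 = omega0 * lam**2 * T.sum() / np.pi
    F = (T * (1.0 - T)).sum() / T.sum()   # Fano factor
    n = F * max(e * V - hbar * omega0, 0.0) \
        / (2.0 * hbar * omega0)
    return kappa0, n

# e.g. one channel at T = 0.5, lam = 0.1,
# omega0 = 2*pi x 6 GHz, and eV = 2 hbar*omega0:
w0 = 2.0 * np.pi * 6e9
print(kappa0_and_n([0.5], 0.1, w0, 2.0 * hbar * w0 / e))
# -> (kappa0 ~ 6.0e7 s^-1, n = 0.25)
\end{verbatim}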
Next we compute the photon second-order coherence $g^{(2)}(\tau)=\langle \hat{a}^{\dagger}(0)\hat{a}^{\dagger}(\tau)\hat{a}(\tau)\hat{a}(0) \rangle/\langle n \rangle^{2}$.
At zero time, $g^{(2)}(0)=\langle n (n-1) \rangle/\langle n \rangle^{2}$ indicates whether the cavity field presents super-Poissonian
($g^{(2)}(0) >1$) or sub-Poissonian ($g^{(2)}(0) <1$) photon statistics. Also, its time-evolution is a direct measurement of photon bunching
($g^{(2)}(\tau)< g^{(2)}(0)$) or antibunching ($g^{(2)}(\tau)> g^{(2)}(0)$). The quadratic action~\eqref{Seff} implies a Gaussian field
distribution, similar to a thermal state, for which the calculation of correlation functions is straightforward by using Wick's theorem.
$g^{(2)}(\tau)$ is thus related to the first-order coherence $g_1 (\tau) = \langle \hat{a}^{\dagger}(\tau) \hat{a}(0) \rangle/\langle n \rangle$
whose modulus follows a simple exponential decay with the cavity damping rate $\kappa_0$. The result
\begin{equation}
g^{(2)}(\tau)= 1 + | g_1 (\tau) |^2 = 1+e^{-\kappa_0 \tau},
\end{equation}
exhibits photon bunching and super-Poissonian statistics $g^{(2)}(0) = 2$. These results are in fact a direct consequence of the Gaussian
field distribution~\cite{BergenfeldtSET2012,jin-schon-prb2015}. They are valid at weak coupling $\lambda \ll 1$ where non-quadratic terms
in the effective action can be discarded. The case of a dc-biased Josephson junction shows that increasing the e-p interaction may lead to strongly
antibunched photons~\cite{dambach2015}. The next section is devoted to the study of non-quadratic terms in the action
to investigate in particular how they modify $g^{(2)} (0)$ as the coupling $\lambda$ increases.
\section{Beyond the thermal distribution \label{sec2}}
\subsection{Alternative gauge}
We showed above that the cavity field follows a thermal distribution at weak electron-photon coupling. Deviations from this statistical
description emerge by expanding the action~\eqref{epaction} beyond second order. Here, instead of expanding Eq.~\eqref{epaction} further,
we use an alternative gauge for which the action integrated over electronic variables has been derived exactly~\cite{kindermann2003,snyman-nazarov-prb2008}.
This model was recently used to study a quantum tunneling detector coupled to a coherent conductor \cite{tobiska-nazarov-2006} and the light
emitted by the tunneling of electrons from a STM tip to a metallic surface \cite{xu-belzig-prl2014}.
The corresponding action is $\mathcal{S}_{\text{cav}}+ \mathcal{S}_{\text{e-p}}$ where the unperturbed cavity action $\mathcal{S}_{\text{cav}}$
is still given by Eq.~\eqref{Scav}. The average over electrons produces the non-linear photon action
\begin{equation} \label{actqpc2}
\mathcal{S}_{\text{e-p}} = -\frac{i\hbar}{2}\sum_{n} \Tr \ln \left[1 + \frac{T_{n}}{4}\left(\{ \check{G}_l,\check{G}_r\}-2\right)\right],
\end{equation}
where the trace is over the Keldysh indices ($\pm$) and time. $\check{G}_{\alpha}$ is the Keldysh GF of the electrons in the reservoir $\alpha$,
and they are defined in terms of the equilibrium GF
\begin{equation} \label{eqKGF}
\check{G}_{\text{eq}}^{\alpha}(\epsilon) = \begin{pmatrix}
1-2f(\epsilon_\alpha) & 2f(\epsilon_\alpha) \\
2-2f(\epsilon_\alpha) & 2f(\epsilon_\alpha)-1
\end{pmatrix},
\end{equation}
where $f(\epsilon)$ is the Fermi function. $\check{G}_{r}(t,t^{\prime})= \check{G}_{\text{eq}}^{r}(t-t^{\prime})$ and
$\check{G}_{l}(t,t^{\prime}) = \check{U}^{\dagger}(t)\check{G}_{\text{eq}}^{l}(t-t^{\prime})\check{U}(t^{\prime})$ with
\begin{equation} \label{gauge-eq}
\check{U}(t) = \begin{pmatrix} e^{\lambda \varphi_{+}(t)} && 0 \\ 0 && e^{\lambda \varphi_{-}(t)}\end{pmatrix}
\end{equation}
where $\varphi_{\pm}(t)=a_{\pm}^{*}(t)-a_{\pm}(t)$ and $a_{\pm}$ is the complex photon field defined in the $\pm$ Keldysh contour. The matrix
$\check{U}(t)$ describes the photons. For details on the derivation of $\mathcal{S}_\text{e-p}$ see Ref. [\onlinecite{snyman-nazarov-prb2008}].
The link between this description and the model from Sec.~\ref{sec1} is best seen in the tunneling limit where scattering in the QPC is accounted
for by the tunnel Hamiltonian $\hat{H}_{T} = \hat{\mathcal{T}}+\hat{\mathcal{T}}^{\dagger}$, where $\hat{\mathcal{T}}$ describes the tunneling of
one electron from the left to the right reservoir and $\hat{\mathcal{T}}^{\dagger}$ the reversed process. Applying the gauge transformation via
the unitary operator $\hat{U}_0 = \exp[\hat{\eta} (\hat{a}-\hat{a}^\dagger)/\hbar\omega_{0}]$ cancels the electron-photon coupling term
$(\hat{a}^{\dagger}+\hat{a})\hat{\eta}$ in Eq.~\eqref{Hamil} while dressing the tunneling part $\bar{H}_{T}= \hat{U}_0^{\dagger}\hat{H}_{T} \hat{U}_0 = \hat{\mathcal{T}}e^{-\lambda(\hat{a}^{\dagger}-\hat{a})}+\hat{\mathcal{T}}^{\dagger}e^{\lambda(\hat{a}^{\dagger}-\hat{a})}$ with
$\lambda = (g_{r}-g_{l})/\hbar\omega_{0}$. This new form of the tunnel Hamiltonian implies that each tunneling event is accompanied by
the coherent excitation of the cavity with the displacement operators $e^{\pm \lambda(\hat{a}^{\dagger}-\hat{a})}$. The same prescription
was used in deriving~\cite{snyman-nazarov-prb2008} Eq.~\eqref{actqpc2}. For completeness, we also show in Appendix~\ref{appA} that the
expansion of Eq.~\eqref{actqpc2} to second order in $\lambda$ agrees with the results of the previous section with the same identification
$\lambda = (g_{r}-g_{l})/\hbar\omega_{0}$.
\subsection{Non-quadratic effects \label{nq-action}}
We assume for simplicity weak transmission probabilities $T_{n}\ll 1$ and expand the action~\eqref{actqpc2} as
\begin{equation} \label{actqpc2ex}
\mathcal{S}_{\text{e-p}} \simeq -\frac{i\hbar g_c}{8}\sum_{n} \Tr \left[ \left(\{ \check{G}_l,\check{G}_r\}-2\right)\right],
\end{equation}
where we introduce the dimensionless conductance $g_c =\sum_{n}T_{n}$. In this limit, the QPC description is equivalent to that of a tunnel junction.
Equation~\eqref{actqpc2ex} is further expanded to fourth order in $\lambda$. The effective action is
\begin{equation} \label{seff2}
\mathcal{S}_{\text{eff}} = \mathcal{S}_{\text{q}} + \mathcal{S}_{\text{nq}}^{(3)} + \mathcal{S}_{\text{nq}}^{(4)}.
\end{equation}
The second order $\mathcal{S}_{\text{q}}$ (from now on, the subscript q stands for the
quadratic approximation) has been derived in the previous section, see Eq.~\eqref{Seff}, and is rederived in Appendix~\ref{appA}
for the present model. Assuming weak transmission implies a Fano factor $F=1$ in Eq.~\eqref{nphotons} for the average number of
photons $\langle n \rangle_\text{q}$, or
\begin{equation}\label{nq}
\langle n \rangle_\text{q} = \frac{\lambda^2 S_{rr} (-\omega_0)}{e^2 \kappa_0} = \frac{(eV-\hbar\omega_0)}{2\hbar\omega_{0}}\Theta(eV-\hbar\omega_{0}).
\end{equation}
In this limit, the non-symmetrized noise power can be expressed as $S_{rr} (\omega) = (g_c/R_\text{K}) \bar{S}(\omega)$
where $\bar{S}(\omega)$ is given in Eq.~\eqref{eq-noise}.
The next terms in Eq.~\eqref{seff2} are derived in Appendix~\ref{appA}. The third-order expansion in the e-p coupling is
\begin{align} \label{eq-seff3}
&\mathcal{S}_\text{nq}^{(3)} =C_{0}\int dt_{1}dt_{2}d\epsilon_{l}d\epsilon_{r}
\sin[eV(t_{1}-t_{2})/\hbar] e^{-i\omega_{lr}(t_{1}-t_{2})} \nonumber \\
&\times f(\epsilon_{r})\bar{f}(\epsilon_{l})(2[\varphi_{+}(t_{2})-\varphi_{-}(t_{1})]^{3}-\mathsmaller{\sum}\limits_{\sigma}
[\varphi_{\sigma}(t_{2})-\varphi_{\sigma}(t_{1})]^{3})
\end{align}
where $C_0=-\lambda^{3}g_{c}/24\hbar\pi^2$, $\sigma=\pm$, $\bar{f}(\epsilon)=1-f(\epsilon)$ and $\omega_{lr}=(\epsilon_l-\epsilon_r)/\hbar$.
Analogously, the fourth order term takes the form
\begin{align} \label{eq-seff4}
&\mathcal{S}_\text{nq}^{(4)} =C\int dt_{1}dt_{2}d\epsilon_{l}d\epsilon_{r}\cos[eV(t_{1}-t_{2})/\hbar] e^{-i\omega_{lr}(t_{1}-t_{2})} \nonumber \\
&\times f(\epsilon_{r})\bar{f}(\epsilon_{l})(2[\varphi_{+}(t_{2})-\varphi_{-}(t_{1})]^{4}-\mathsmaller{\sum}\limits_{\sigma}
[\varphi_{\sigma}(t_{2})-\varphi_{\sigma}(t_{1})]^{4}),
\end{align}
where $C =-i\lambda^{4}g_{c}/96\hbar\pi^{2}$. As the non-quadratic terms are small for $\lambda \ll 1$, we compute their contributions
to the number of photons, retarded self-energy and $g^{(2)}(0)$ perturbatively. $\mathcal{S}_{\text{nq}}^{(3)}$
being odd in the number of bosons, it must be expanded at least to second order to contribute. The corresponding term is of order six
in $\lambda$ and is negligible compared to $\mathcal{S}_{\text{nq}}^{(4)}$. It is discarded in what follows.
The perturbation scheme employed here consists of expanding $e^{i\mathcal{S}_{\text{nq}}^{(4)}/\hbar} \approx 1 + i\mathcal{S}_{\text{nq}}^{(4)}/\hbar$ to first order.
Any photon expectation value is obtained by computing
\begin{equation} \label{eqaverage}
\langle \ldots \rangle = \langle \ldots \rangle_\text{q} + \frac{i}{\hbar} \langle \ldots \mathcal{S}_{\text{nq}}^{(4)} \rangle_\text{q},
\end{equation}
where the first term is the quadratic contribution and the second originates from $\mathcal{S}_{\text{nq}}^{(4)}$. The averages $\langle \ldots \rangle_{\text{q}} $ are taken with respect to the quadratic action $\mathcal{S}_{\text{q}}$ defined in Eq.~\eqref{Seff} at weak transmission.
In the Keldysh field theory formalism the number of photons is defined on the $\pm$ contour by $\langle n \rangle = \langle a_{-}^{*}a_{+} \rangle$.
Therefore, the non-quadratic contribution, $\langle n \rangle_{\text{nq}} = i\langle a_{-}^{*}a_{+}\mathcal{S}_{\text{nq}}^{(4)} \rangle/\hbar$,
is
\begin{align} \label{npng0}
\langle n \rangle_{\text{nq}} &= \frac{i}{\hbar} C \int dt_{1}dt_{2}d\epsilon_{l}d\epsilon_{r}\cos[eV(t_{1}-t_{2})/\hbar]e^{-i\omega_{lr}(t_{1}-t_{2})} \nonumber \\
&\times f(\epsilon_{r})\bar{f}(\epsilon_{l})(2\langle a_{-}^{*}a_{+}[\varphi_{+}(t_{2})-\varphi_{-}(t_{1})]^{4}\rangle_\text{q} \nonumber \\
&-\mathsmaller{\sum}\limits_{\sigma}\langle a_{-}^{*}a_{+}[\varphi_{\sigma}(t_{2})-\varphi_{\sigma}(t_{1})]^{4}\rangle_\text{q}).
\end{align}
This expectation value is evaluated using Wick's theorem. Details of this calculation are presented in Appendix~\ref{appB}. The number of
photons is
\begin{align} \label{npng}
\langle n \rangle &= \frac{\bar{S}(-\omega_{0})}{2\hbar\omega_{0}}-\frac{\lambda^{2}}{8(\hbar\omega_{0})^{3}}\left[\bar{S}^{2}(-\omega_{0})
\bar{S}(2\omega_{0})\right. \nonumber \\
&\left. -\bar{S}^{2}(\omega_{0})\bar{S}(-2\omega_{0}) \right].
\end{align}
The first term is $\langle n \rangle_\text{q}$ given by Eq.~\eqref{nq}. The second, smaller term originates from the non-quadratic part of the action and describes the departure from the thermal-like state. It can be given a physically more transparent form by using $\langle n \rangle_\text{q}=\bar{S}(-\omega_0)/2\hbar\omega_0$ and $\langle n \rangle_\text{q}+1=\bar{S}(\omega_0)/2\hbar\omega_0$, such that the second term of Eq.~\eqref{npng} reads
\begin{align}\label{decomp}
&\frac{\bar{S}^{2}(-\omega_{0})}{(2\hbar\omega_{0})^{2}}\bar{S}(2\omega_{0})-\frac{\bar{S}^{2}(\omega_{0})}{(2\hbar\omega_{0})^{2}}\bar{S}(-2\omega_{0})\nonumber \\
&=\langle n \rangle_\text{q}^2\bar{S}(2\omega_{0})-(\langle n \rangle_\text{q}+1)^2\bar{S}(-2\omega_{0}).
\end{align}
To interpret this decomposition, we have to recall that $\bar{S} (\pm \omega)$ is proportional to the absorption/emission noise for the QPC.
The first term in Eq.~\eqref{decomp} therefore corresponds to the absorption of a pair of photons from the cavity conditioned by their
presence $\propto \langle n \rangle_\text{q}^2$, while the second term describes photon-pair stimulated emission
$\propto (\langle n \rangle_\text{q}+1)^2$.
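To make this balance concrete, Eq.~\eqref{npng} can be evaluated numerically. Since Eq.~\eqref{eq-noise} is not reproduced here, the sketch below \emph{assumes} the zero-temperature tunnel-junction form $\bar{S}(\omega)=\sum_{s=\pm}(\hbar\omega+s\,eV)\,\Theta(\hbar\omega+s\,eV)$, chosen so that Eq.~\eqref{nq} is reproduced; it is a plausible reading of Eq.~\eqref{eq-noise}, not a quotation of it.
\begin{verbatim}
# Sketch: photon number, Eq. (npng), versus bias.  Units
# hbar = omega_0 = e = 1; the T=0 noise Sbar below is an assumed
# form (Eq. (eq-noise) is not shown here), chosen to match Eq. (nq).
import numpy as np

lam2 = 0.15                                  # lambda^2, as in Fig. 2

def Sbar(w, V):
    return np.clip(w + V, 0, None) + np.clip(w - V, 0, None)

V = np.linspace(0, 5, 1001)
n_q = Sbar(-1, V) / 2                        # quadratic result, Eq. (nq)
corr = lam2 / 8 * (Sbar(-1, V)**2 * Sbar(2, V)
                   - Sbar(1, V)**2 * Sbar(-2, V))
n = n_q - corr                               # Eq. (npng)

assert np.all(n <= n_q + 1e-12)              # absorption wins: <n> reduced
mask = V > 2                                 # Eq. (npV2), E_c = lam2 here
assert np.allclose(n[mask], (V[mask] - 1 - lam2) / 2)
\end{verbatim}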
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.48\textwidth]{fig2.pdf}
\caption{Number of photons (left) and cavity damping (right) as a function of the voltage $V$ for $\lambda^2=0.15$ (solid line) and their
quadratic contribution (dashed line).\label{fig2}}
\end{center}
\end{figure}
The balance between these two-photon processes depends crucially on the dc-bias. In the range $\hbar \omega_{0}/e < V < 2\hbar \omega_{0}/e$, only
two-photon absorption occurs since $\bar{S} (-2\omega_0) = 0$, meaning that the available energy from tunneling electrons is not sufficient to excite
pairs of photons with energy $2 \hbar \omega_0$. As a result, the number of cavity photons~\eqref{npng} is smaller than the quadratic prediction.
This is shown in Fig.~\ref{fig2}, where the dependence on the dc-bias is illustrated for a moderate e-p coupling $\lambda^2 = 0.15$.
For $V>2\hbar\omega_0/e$ the emission of pairs of photons to the cavity is allowed but two-photon absorption still dominates since
\begin{equation}
\bar{S}^{2}(-\omega_{0})\bar{S}(2\omega_{0})-\bar{S}^{2}(\omega_{0})\bar{S}(-2\omega_{0}) = 4(\hbar\omega_{0})^{3}.
\end{equation}
Using that $\lambda^{2}\hbar\omega_{0} = E_c$, as noticed in Sec.~\ref{qpc-tl}, we rewrite the mean number of cavity photons as
\begin{equation} \label{npV2}
\langle n \rangle = \frac{eV-\hbar\omega_{0}-E_{c}}{2\hbar \omega_{0}} \qquad \text{for} \qquad V>2\hbar\omega_0/e,
\end{equation}
in which the electromagnetic charging energy $E_{c}$ of the cavity is subtracted from the available energy $e V$ of electrons. This result is strongly
reminiscent of the DCB effect at large voltage, or even the genuine Coulomb blockade, in the $P(E)$
framework~\cite{ingold-nazarov-book1992}. We elaborate on this idea in Sec.~\ref{sec4} by establishing an out-of-equilibrium $P(E)$ approach for
this model and by showing that the reduction of $e V$ by $E_{c}$ is interpreted as a backaction effect of the state of the cavity.
The cavity damping rate is obtained from the retarded self-energy computed with the correction $\mathcal{S}_{\text{nq}}^{(4)}$. The result assumes the form
\begin{align}\label{damping1}
\kappa &= \kappa_0 \left[1- \frac{\lambda^{2}}{2\hbar\omega_0}(\bar{S}(\omega_0)+\bar{S}(-\omega_0))\right] + \frac{\lambda^4}{e^2} S_{rr}(0) \nonumber \\
&+\frac{\lambda^{4}}{2\hbar\omega_0 e^2}[S_{rr}(-\omega_0)\bar{S}(2\omega_0)-S_{rr}(\omega_0)\bar{S}(-2\omega_0)],
\end{align}
where the last term recovers the competition between two-photon emission and absorption. The net effect of this competition is to increase $\kappa$ as two-photon absorption dominates over emission. In addition, there is a Franck-Condon reduction of the leading damping rate $\kappa_0$. The second term $\propto S_{rr} (0)$ describes a process
in which a single photon is absorbed and reemitted with no energy cost for electrons. Since absorption is $\propto \langle n \rangle_\text{q}$ and emission $\propto \langle n \rangle_\text{q}+1$, the net effect is positive for the damping rate.
The bias voltage dependence of the damping rate is shown in Fig.~\ref{fig2} for $\lambda^2 = 0.15$. Interestingly, the different non-quadratic corrections compensate each other for $e V > 2 \hbar \omega_0$ and we obtain $\kappa = \kappa_0$ in this case. In the range $0 \leq V \leq 2 \hbar\omega_{0}/e$, $\kappa$ is smaller than $\kappa_0$ but increases linearly with the dc-bias. For $e V \leq \hbar \omega_0$, the cavity is in the vacuum state. The expression of the damping rate simplifies as
\begin{equation}
\kappa = \frac{\lambda^2}{e^2} \left[(1-\lambda^2)S_{rr}(\omega_0) + \lambda^{2}S_{rr}(0)\right],
\end{equation}
showing that the absorption of photons with energy $\hbar\omega_0$ is reduced by the Franck-Condon factor $(1-\lambda^2/2)^2$ affecting the electronic transmissions. The linear voltage dependence comes here from the zero-energy photon emission/absorption $\propto S_{rr}(0)$.
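The voltage dependence of Eq.~\eqref{damping1} can be checked with the same assumed zero-temperature noise as in the sketch above (for which $\bar{S}(\omega_0)-\bar{S}(-\omega_0)=2\hbar\omega_0$ at any bias, so $\kappa_0$ is voltage independent); a sketch in units of $\kappa_0$:
\begin{verbatim}
# Sketch: damping rate of Eq. (damping1) in units of kappa_0, with
# the same assumed T=0 noise as above (hbar = omega_0 = e = 1).
import numpy as np

lam2 = 0.15

def Sbar(w, V):
    return np.clip(w + V, 0, None) + np.clip(w - V, 0, None)

V = np.linspace(0, 5, 1001)
kappa = (1 - lam2 / 2 * (Sbar(1, V) + Sbar(-1, V))   # Franck-Condon
         + lam2 / 2 * Sbar(0, V)                     # absorb + reemit
         + lam2 / 4 * (Sbar(-1, V) * Sbar(2, V)      # two-photon balance
                       - Sbar(1, V) * Sbar(-2, V)))

assert np.allclose(kappa[V > 2], 1)       # corrections compensate
assert np.isclose(kappa[0], 1 - lam2)     # vacuum Franck-Condon value
\end{verbatim}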
We finally compute $g^{(2)}(0)=\langle a_{-}^{* 2}a_{+}^{2}\rangle/\langle n \rangle^{2}$. Using Eq.~\eqref{eqaverage} we write its numerator as
\begin{align}
\langle a_{-}^{* 2}a_{+}^{2}\rangle &= 2\langle n \rangle^{2}+\frac{i}{\hbar}\langle a_{-}^{* 2}a_{+}^{2}\mathcal{S}_{\text{nq}}^{(4)}\rangle_{\text{q,fc}}.
\end{align}
The first term comes from pairing each $a^*_-$ with an $a_+$ field, $\mathcal{S}_{\text{nq}}^{(4)}$ being contracted with one pair only.
The second term is averaged with the quadratic part of the action $\mathcal{S}_{\text{q}}$ and only the contraction pairing each field
$a^*_-$ or $a_+$ to a field from $\mathcal{S}_{\text{nq}}^{(4)}$ is kept (fully connected diagram), with the result
\begin{align}
\frac{i}{\hbar} & \langle a_{-}^{* 2}a_{+}^{2}\mathcal{S}_{\text{nq}}^{(4)}\rangle_{\text{q,fc}}=-\frac{\lambda^{2}}{32(\hbar\omega_{0})^{4}}[\bar{S}(\omega_{0})+\bar{S}(-\omega_{0})]\nonumber \\
&\times [\bar{S}^{2}(-\omega_{0})\bar{S}(2\omega_{0})-\bar{S}^{2}(\omega_{0})\bar{S}(-2\omega_{0})].
\end{align}
Combining this expression with Eq.~\eqref{npng} for the number of photons, we find
\begin{align} \label{g2-final}
g^{(2)}(0)=2-\frac{\lambda^{2}}{8(\hbar\omega_{0})^{2}\bar{S}^{2}(-\omega_{0})}[\bar{S}(\omega_{0})+\bar{S}(-\omega_{0})] \nonumber \\
\times [\bar{S}^{2}(-\omega_{0})\bar{S}(2\omega_{0})-\bar{S}^{2}(\omega_{0})\bar{S}(-2\omega_{0})].
\end{align}
The deviation from the quadratic prediction $g^{(2)}(0)=2$ involves again a balance between two-photon emission and absorption.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig3.pdf}
\caption{$g^{(2)}(0)$ as a function of the voltage $V$ for $\lambda^2=0$ (dash line) and $\lambda^2=0.15$ (solid line).\label{fig-g2}}
\end{center}
\end{figure}
For a low number of photons~\cite{kubala2015}, $g^{(2)}(0) = 2 P_2/P_1^2$, where $P_n$ is the probability to host $n$ photons in the cavity. Two-photon absorption reduces $P_2$ in comparison to $P_1^2$. Since it is more efficient than two-photon emission, $g^{(2)}(0)$ is found to be smaller than $2$ for all voltages. This is shown in Fig.~\ref{fig-g2}. For $e V < \hbar \omega_0$, no photons are present and $g^{(2)}$ vanishes. In the range $\hbar\omega_0 < e V \leq 2\hbar\omega_{0}$, two-photon emission is prohibited and Eq.~\eqref{g2-final}
simplifies to
\begin{align}
g^{(2)}(0)&=2-\lambda^2(2\langle n \rangle_\text{q} + 1) \nonumber \\
&=2-\frac{\lambda^2}{(2\hbar\omega_{0})^{2}}eV\bar{S}(2\omega_{0}),
\end{align}
and $g^{(2)}(0)$ decreases with voltage as the cavity population increases and two-photon absorption processes $\langle n \rangle_\text{q}^2$ are reinforced. For a voltage larger than $2 \hbar \omega_0/e$, two-photon emission sets in and Eq.~\eqref{g2-final} takes the form
\begin{equation}
g^{(2)}(0)=2-E_c\frac{eV}{(eV-\hbar\omega_{0})^{2}},
\end{equation}
increasing with the voltage. At large voltage $e V \gg \hbar \omega_0, E_c$, the quadratic result $g^{(2)}(0)=2$ is finally recovered.
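As a numerical cross-check (same assumed zero-temperature noise as in the sketches above), Eq.~\eqref{g2-final} interpolates between these two regimes:
\begin{verbatim}
# Sketch: g2(0) of Eq. (g2-final) versus bias, defined where <n>_q > 0.
# Same assumed T=0 noise; units hbar = omega_0 = e = 1, E_c = lam2.
import numpy as np

lam2 = 0.15

def Sbar(w, V):
    return np.clip(w + V, 0, None) + np.clip(w - V, 0, None)

V = np.linspace(1.01, 10, 1000)
bracket = Sbar(-1, V)**2 * Sbar(2, V) - Sbar(1, V)**2 * Sbar(-2, V)
g2 = 2 - lam2 / (8 * Sbar(-1, V)**2) * (Sbar(1, V) + Sbar(-1, V)) * bracket

mask = V > 2                 # large-bias form quoted above
assert np.allclose(g2[mask], 2 - lam2 * V[mask] / (V[mask] - 1)**2)
assert np.all(g2 < 2)        # two-photon absorption dominates
\end{verbatim}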
\section{Rate equations \label{sec3}}
The RWA used so far averages to zero terms that do not conserve energy. It removes most off-diagonal elements of
the density matrix. In this section, we apply a rate equation approach to the QPC-TLC system, corresponding to a quantum master equation
approach in which off-diagonal elements are disregarded, and indeed recover most results from the previous section. In this way the physical
picture of two-photon processes is further justified.
$P_n$ is the probability of having $n$ photons in the cavity. Its time evolution is governed by
\begin{align} \label{rateq0}
\dot{P}_n &= -(\Gamma_{n\rightarrow n+1}+\Gamma_{n\rightarrow n-1}+\Gamma_{n\rightarrow n+2}+\Gamma_{n\rightarrow n-2})P_n \nonumber \\
&+\Gamma_{n+1\rightarrow n}P_{n+1}+\Gamma_{n-1\rightarrow n}P_{n-1}+\Gamma_{n+2\rightarrow n}P_{n+2} \nonumber \\
&+\Gamma_{n-2\rightarrow n}P_{n-2},
\end{align}
where $\Gamma_{i\rightarrow j}$ denotes the rate from $|i\rangle$ to $|j\rangle$ photons. $\Gamma_{n\rightarrow n\pm1}$ corresponds to
single-photon and $\Gamma_{n\rightarrow n\pm2}$ to two-photon emission/absorption. These rates are calculated in Appendix~\ref{appC}
via the Fermi golden rule in the limit of weak transmissions $T_n \ll 1$. Fig.~\ref{fig-trans} illustrates the ladder of transition processes.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.48\textwidth]{fig4.pdf}
\caption{Schematic representation of all allowed emission/absorption processes. The cavity emission and absorption processes are defined
by the QPC absorption [$S_{rr}(\omega)$] and emission [$S_{rr}(-\omega)$] noise. \label{fig-trans}}
\end{center}
\end{figure}
The leading order in the e-p coupling (second order in $\lambda$) involves only single-photon exchange. Setting $\Gamma_{n\rightarrow n\pm2}=0$,
the steady state solution of Eq.~\eqref{rateq0} is the Bose-Einstein distribution
\begin{equation}\label{bose-einstein}
P_{n}^{0}=\left(1-\frac{S_{rr}(-\omega_0)}{S_{rr}(\omega_0)}\right)\left(\frac{S_{rr}(-\omega_0)}{S_{rr}(\omega_0)}\right)^n = \frac{\langle n \rangle_\text{q}^n }{(\langle n \rangle_\text{q}+1)^{n+1} },
\end{equation}
corresponding to a thermal Gaussian state. The mean number of photons $\sum_n n P_{n}^{0}$ recovers $\langle n \rangle_\text{q}$ given
in Eq.~\eqref{nq}.
Two-photon rates are of higher order in $\lambda$ and can be treated perturbatively with respect to the distribution $P_{n}^{0}$. Writing $P_{n}=P_{n}^{0}+p_{n}$, we look for the steady-state solution $\dot{P}_n = 0$ of Eq.~\eqref{rateq0}. Expanding to lowest non-vanishing
order in $\lambda$ gives
\begin{align} \label{eqrat1}
&-[(n+1)S_{rr}(-\omega_0)+n S_{rr}(\omega_0)]p_{n} +(n+1)S_{rr}(\omega_0)p_{n+1} \nonumber \\[2mm]
&+n S_{rr}(-\omega_0)p_{n-1}= \frac{\lambda^2}{4} \Big( -(n+1)(n+2) S_{rr}(2 \omega_0)P_{n+2}^{0} \nonumber \\[2mm]
&+[(n+1)(n+2) S_{rr}(-2 \omega_0)+n(n-1) S_{rr}(2 \omega_0)]P_{n}^{0} \nonumber \\[2mm]
&-n(n-1) S_{rr}(-2 \omega_0)P_{n-2}^{0} \Big).
\end{align}
The correction to the mean number of photons is determined by multiplying this expression by $n$ and summing over $n$. We finally obtain
that $\langle n \rangle = \sum_n n (P_{n}^{0}+p_{n})$, calculated perturbatively, coincides with Eq.~\eqref{npng} from Sec.~\ref{sec2}.
We proceed in two steps in order to compute $g^{(2)}(0)$. We multiply Eq.~\eqref{eqrat1} by $n^2$ and sum over $n$ to obtain
\begin{align}
\sum_n & n (n-1) p_{n} =-\frac{\lambda^{2}}{32(\hbar\omega_{0})^{4}}[9\bar{S}(-\omega_{0}) + \bar{S}(\omega_{0})] \nonumber \\
&\times [\bar{S}^{2}(-\omega_{0})\bar{S}(2\omega_{0})-\bar{S}^{2}(\omega_{0})\bar{S}(-2\omega_{0})]
\end{align}
corresponding to the two-photon correction to the average $\langle a^{\dagger 2} a^{2} \rangle$ while the leading order is simply
$\sum_n n (n-1) P_{n}^{0} = 2 (\sum_n n P_{n}^{0})^2 = 2 \langle n \rangle_\text{q}^2$. We then include the denominator $\langle n \rangle^2$
expanded in $\lambda$ and recover exactly Eq.~\eqref{g2-final} from Sec.~\ref{sec2}.
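For readers who prefer a direct numerical solution, the steady state of Eq.~\eqref{rateq0} can also be obtained without any expansion by diagonalizing a truncated rate matrix. The sketch below reads the transition rates off Eq.~\eqref{eqrat1} (an overall rate prefactor drops out of the steady state) and uses the same assumed zero-temperature noise as before; the perturbative Eqs.~\eqref{npng} and \eqref{g2-final} should then be recovered up to higher orders in $\lambda$.
\begin{verbatim}
# Sketch: numerical steady state of the rate equation (rateq0).
# Single- and two-photon rates are read off Eq. (eqrat1); units as
# before, assumed T=0 noise Sbar.
import numpy as np

def Sbar(w, V):
    return max(w + V, 0.0) + max(w - V, 0.0)

def steady_state(V, lam2, N=80):
    L = np.zeros((N, N))                 # generator: dP/dt = L @ P
    for n in range(N):
        rates = {n + 1: (n + 1) * Sbar(-1, V),
                 n - 1: n * Sbar(1, V),
                 n + 2: lam2 / 4 * (n + 1) * (n + 2) * Sbar(-2, V),
                 n - 2: lam2 / 4 * n * (n - 1) * Sbar(2, V)}
        for m, r in rates.items():
            if 0 <= m < N:
                L[m, n] += r
                L[n, n] -= r
    w, v = np.linalg.eig(L)              # steady state = null vector
    P = np.real(v[:, np.argmin(np.abs(w))])
    P /= P.sum()
    n = np.arange(N)
    n_mean = (n * P).sum()
    return n_mean, (n * (n - 1) * P).sum() / n_mean**2

# compare with Eqs. (npng), (g2-final): about 0.231 and 1.78 here
print(steady_state(V=1.5, lam2=0.15))
\end{verbatim}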
These results not only reinforce our physical interpretation of the formulas derived in Sec.~\ref{sec2} but also show that the cavity
properties in the weak coupling regime are well described by a diagonal density matrix.
\section{Dynamical backaction \label{sec4}}
The QPC-TLC hybrid system (with weak transmissions) can also be discussed within an enlightening approach which emphasizes backaction.
The cavity provides a readout of the noise power spectrum of the QPC or tunnel junction. In return, there is backaction from the cavity
with a DCB effect which reduces transport. The modified noise properties of electrons are imprinted
in the state of the cavity. This effect is captured by the fourth order in $\lambda$ calculation of the previous sections but it can be
made more explicit by extending the $P(E)$ theory to a non-equilibrium steady state situation.
We begin by assuming that the cavity is in the thermal state characterized by the photon distribution~\eqref{bose-einstein} and the mean
number of photons is $\langle n \rangle_\text{q}$, see Eq.~\eqref{nq}. For weak transmissions $T_n \ll 1$, we use the tunnel Hamiltonian $\bar{H}_{T}=\hat{\mathcal{T}}e^{-\lambda(\hat{a}^{\dagger}-\hat{a})}+\hat{\mathcal{T}}^{\dagger}e^{\lambda(\hat{a}^{\dagger}-\hat{a})}$,
where the cavity field $\hat{a}$ plays the role of the environment, and proceed with the $P(E)$ approach by computing the current noise
correlator. The tunneling limit allows for a factorization of the electron and environment variables~\cite{safi2014,altimiras2014} such
that the noise takes the form of a convolution
\begin{equation}
S_\text{DCB}(\omega) = \int_{-\infty}^{\infty}S_{rr} (\hbar\omega-E)P(E)dE.
\end{equation}
$S_{rr}$ ($S_\text{DCB}$) is the noise in the absence (presence) of the cavity. The $P(E)$ function,
\begin{equation}
P(E) = \frac{1}{2\pi\hbar}\int dt e^{iEt/\hbar}\langle \hat{X}^{\dagger}(t)\hat{X}(0)\rangle,
\end{equation}
where $\hat{X}(t)=\exp[-\lambda(\hat{a}^{\dagger}(t)-\hat{a}(t))]$ characterizes the environment. For $E>0$, it gives the probability for
the QPC/tunnel junction to emit a photon of energy $E$ into the environment during a tunneling event. The $E<0$ part describes photon absorption.
$P(E)$ can be evaluated exactly in a Gaussian state using Wick's theorem. We nevertheless expand in $\lambda$ for consistency with our weak
coupling scheme, and find
\begin{align}
P(E) &= (1-\lambda^2(\langle n \rangle_\text{q} + n_1))\delta(E) + \lambda^2 \langle n \rangle_\text{q} \delta(E+\hbar\omega_0) \nonumber \\
&+ \lambda^2 n_1 \delta(E-\hbar\omega_0)
\end{align}
where $n_1 = \langle n \rangle_\text{q} + 1$. It can be checked that $P(E)$ is normalized, $\int d E\, P(E) = 1$. This expression is valid at zero temperature. The presence of non-zero values for $E<0$ therefore indicates that the cavity is in an out-of-equilibrium state and is able to
provide energy to the quantum conductor. For $e V < \hbar \omega_0$, the cavity is empty and $\langle n \rangle_\text{q} = 0, n_1 = 1$;
the elastic peak $\delta (E)$ is renormalized by vacuum fluctuations~\cite{jin-schon-prb2015} and inelastic photon emission occurs for
$E = \hbar \omega_0$.
We consider the noise properties of the QPC. The absorption noise is
\begin{align} \label{Anoise}
S_{\text{DCB}}(\omega_0)&=(1-\lambda^2(\langle n \rangle_\text{q}+n_1))S_{rr}(\omega_0)+ \lambda^2 n_1 S_{rr}(0) \nonumber \\
&+ \lambda^2 \langle n \rangle_\text{q} S_{rr}(2\omega_0).
\end{align}
In addition to the renormalized single-photon absorption, the second term describes the correlated emission and reabsorption of a photon by the QPC, also present when the cavity is in the vacuum state, while the third term, pair absorption, requires an occupied cavity.
A similar analysis for the emission noise
\begin{align} \label{Enoise}
S_\text{DCB}(-\omega_0)&=(1-\lambda^2(\langle n \rangle_\text{q}+n_1))S_{rr}(-\omega_0) \nonumber \\
&+ \lambda^2 \langle n \rangle_\text{q} S_{rr}(0)+ \lambda^2 n_1 S_{rr}(-2\omega_0)
\end{align}
reveals a renormalized single-photon emission, a correlated photon absorption and reemission, and pair emission. The first and second
terms are non-zero only for $e V > \hbar \omega_0$, the third term for $e V > 2 \hbar \omega_0$.
To summarize, the cavity back-action provides new absorption and emission mechanisms. Their effect on the noise properties of a tunnel
junction has been studied in Ref.~\onlinecite{jin-schon-prb2015}. We focus here on the effect on the cavity state. To lowest order in
the e-p coupling, the damping rate and number of photons of the cavity
\begin{subequations}
\begin{align}
\kappa_\text{DCB} &= \frac{\lambda^2}{e^2} [S_{\text{DCB}}(\omega_0)-S_{\text{DCB}}(-\omega_0)] \\[2mm]
\langle n \rangle_\text{DCB} &= \frac{\lambda^2 S_{\text{DCB}}(-\omega_0)}{e^2 \kappa_\text{DCB}}.
\end{align}
\end{subequations}
can be computed using Eqs.~\eqref{Anoise} and \eqref{Enoise} and the relations $\langle n \rangle_\text{q}=\bar{S}(-\omega_0)/2\hbar\omega_0$
and $n_1=\bar{S}(\omega_0)/2\hbar\omega_0$. We obtain
\begin{align}
&\kappa_\text{DCB} = \kappa_0 \left[1- \frac{\lambda^{2}}{2\hbar\omega_0}(\bar{S}(\omega_0)+\bar{S}(-\omega_0))\right] + \frac{\lambda^4}{e^2} S_{rr}(0) \nonumber \\
&+\frac{\lambda^{4}}{2\hbar\omega_0 e^2}[S_{rr}(-\omega_0)\bar{S}(2\omega_0)-S_{rr}(\omega_0)\bar{S}(-2\omega_0)]\\
&\langle n \rangle_\text{DCB} = \frac{\bar{S}(-\omega_{0})}{2\hbar\omega_{0}}-\frac{\lambda^{2}}{8(\hbar\omega_{0})^{3}}\left[\bar{S}^{2}(-\omega_{0})
\bar{S}(2\omega_{0})\right. \nonumber \\
&\left. -\bar{S}^{2}(\omega_{0})\bar{S}(-2\omega_{0}) \right].
\end{align}
These expressions coincide exactly with those obtained in Sec.~\ref{nq-action}, see Eqs.~\eqref{damping1} and~\eqref{npng}.
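This equivalence is easy to verify numerically: with the assumed zero-temperature noise used in the sketches above, the three-peak $P(E)$ turns the convolution into three shifted noise terms, and the resulting damping reproduces Eq.~\eqref{damping1} identically. A sketch:
\begin{verbatim}
# Sketch: kappa from the out-of-equilibrium P(E) route versus the
# direct Eq. (damping1).  Units hbar = omega_0 = e = g_c/R_K = 1,
# same assumed T=0 noise Sbar as before.
def Sbar(w, V):
    return max(w + V, 0.0) + max(w - V, 0.0)

def kappa_dcb(V, lam2):
    nq = Sbar(-1, V) / 2                    # <n>_q
    n1 = nq + 1                             # = Sbar(1, V) / 2
    def S_dcb(w):                           # Eqs. (Anoise), (Enoise)
        return ((1 - lam2 * (nq + n1)) * Sbar(w, V)
                + lam2 * nq * Sbar(w + 1, V)   # photon taken from cavity
                + lam2 * n1 * Sbar(w - 1, V))  # photon given to cavity
    return lam2 * (S_dcb(1) - S_dcb(-1))

def kappa_direct(V, lam2):                  # Eq. (damping1)
    k0 = lam2 * (Sbar(1, V) - Sbar(-1, V))
    return (k0 * (1 - lam2 / 2 * (Sbar(1, V) + Sbar(-1, V)))
            + lam2**2 * Sbar(0, V)
            + lam2**2 / 2 * (Sbar(-1, V) * Sbar(2, V)
                             - Sbar(1, V) * Sbar(-2, V)))

for V in (0.5, 1.5, 3.0):
    assert abs(kappa_dcb(V, 0.15) - kappa_direct(V, 0.15)) < 1e-12
\end{verbatim}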
\section{Summary and conclusion \label{concl}}
We investigated the properties of a single-mode cavity field coupled to a quantum point contact or tunnel junction. We first employed a
Keldysh path integral framework with two formulations related by a unitary gauge transformation: a coupling between the quantum
voltage operator of the cavity and the lead densities, and an inductive coupling where each scattering event is accompanied by the
excitation of the cavity. Expanding for weak electron-photon coupling to a quadratic form, we found a Gaussian thermal field
distribution with $g^{(2)} (0)=2$, and the photon self-energies are fully characterized by the emission and absorption noise of the
quantum point contact. The damping rate is constant in this limit, independent of the bias voltage.
Proceeding with the next order in the electron-photon coupling, we identified two-photon processes: pair emission or absorption as
well as correlated photon emission and absorption. We recovered these effects using a rate equation approach and a $P(E)$ calculation
adapted to our non-equilibrium situation. We obtained a reduced number of photons relative to the thermal state prediction, a
suppressed $g^{(2)} (0) < 2$ for a not too large bias voltage, and a reduced cavity damping rate for $e V< 2 \hbar \omega_0$, which are
explained by noting the preeminence of two-photon absorption over emission.
It is tempting to speculate about the extension of these findings to higher orders in the electron-photon coupling $\lambda$. Increasing
powers of $\lambda$ will introduce emission and absorption processes involving three, four and higher number of photons, in which photon
absorption is always energetically more favourable than emission. This should modify the photon distribution by shifting weight from large
photon numbers to small ones, as the numerical results of Ref.~\onlinecite{BergenfeldtSET2012} also seem to indicate.
Interestingly, the balance between emission and absorption depends on the bias voltage such that high voltages are expected to bring the
system back to a Gaussian thermal distribution.
We emphasize a strong difference with dc-biased Josephson junctions where the physics of strong electron-photon coupling is essentially
described by generalized Franck-Condon factors and only processes conserving energy occur as there is no electronic dissipation. In tunnel
junctions, the non-conserved part of the energy can be dissipated in the leads, which allows for more processes. It remains an open
question whether it is possible to realize a non-classical state with $g^{(2)} (0) <1$ apart from the region $e V \simeq \hbar \omega_0$.
As a prospect for future investigation, we mention the extension of our approach by including fourth-order current correlations. They
appear at the order considered in this work when the QPC transmission probabilities are no longer small.
\section{Acknowledgments} We thank C. Altimiras, A. Clerk, P. Joyez, F. Portier and P. Simon for fruitful discussions. U.C.M. acknowledges the support from CNPq-Brazil
(Project No. 229659/2013-6).
\section{Data Augmentation by Backtranslation}\label{sec:aug}
Since our model is fast, we can train it with much more data. We therefore combine our model with a simple data augmentation technique to enrich the training data. The idea is to use two translation models, one from English to French (or any other language) and another from French back to English, to obtain paraphrases of texts. This approach helps automatically increase the amount of training data for essentially any language-based task, including the reading comprehension task that we are interested in. With more data, we expect to better regularize our models. The augmentation process is illustrated in Figure~\ref{data_augmentation} with French as a pivotal language.
In this work, we consider attention-based neural machine translation (NMT) models \cite{bahdanau2014neural,luong15attn}, which have demonstrated excellent translation quality \cite{wu2016google}, as the core models of our data augmentation pipeline. Specifically, we utilize the publicly available codebase\footnote{\url{https://github.com/tensorflow/nmt}} provided by \citet{luong17}, which replicates the Google's NMT (GNMT) systems \cite{wu2016google}.
We train 4-layer GNMT models on the public WMT data for both English-French\footnote{\url{http://www.statmt.org/wmt14/}} (36M sentence pairs) and English-German\footnote{\url{http://www.statmt.org/wmt16/}} (4.5M sentence pairs). All data have been tokenized and split into subword units as described in \cite{luong17}. All models share the same hyperparameters\footnote{\url{https://github.com/tensorflow/nmt/blob/master/nmt/standard_hparams/wmt16_gnmt_4_layer.json}} and are trained with different numbers of steps, 2M for English-French and 340K for English-German.
Our English-French systems achieve 36.7 BLEU on newstest2014 for translating into French and 35.9 BLEU for the reverse direction. For English-German and on newstest2014, we obtain 27.6 BLEU for translating into German and 29.9 BLEU for the reverse direction.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\columnwidth]{data_augmentation}
\caption{An illustration of the data augmentation process with French as a pivotal language. \textbf{k} is the beam width, which is the number of translations generated by the NMT system.}
\label{data_augmentation}
\end{figure}
Our paraphrase process works as follows, illustrated here with French as the pivotal language. First, we feed an input sequence into the beam decoder of an English-to-French model to obtain $k$ French translations. Each of the French translations is then passed through the beam decoder of a reversed translation model to obtain a total of $k^2$ paraphrases of the input sequence.
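A minimal sketch of this $k \times k$ pipeline is shown below; the decoder callables are hypothetical stand-ins for the beam decoders of the trained GNMT models.
\begin{verbatim}
# Sketch of the k x k backtranslation paraphraser.  `en_fr` and
# `fr_en` are hypothetical stand-ins for beam-search NMT decoders
# returning the k best hypotheses.
from typing import Callable, List

Decoder = Callable[[str, int], List[str]]

def paraphrase(sentence: str, en_fr: Decoder, fr_en: Decoder,
               k: int = 5) -> List[str]:
    """Return up to k*k paraphrases of `sentence` via the pivot."""
    out = []
    for french in en_fr(sentence, k):        # k French translations
        out.extend(fr_en(french, k))         # k back-translations each
    return out
\end{verbatim}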
\paragraph{Related Work.} While the concept of backtranslation has been introduced before, it is often used to improve either the same translation task \citep{sennrich16} or intrinsic paraphrase evaluations \citep{wieting17,paranet17}. Our approach is a novel application of backtranslation to enrich training data for downstream tasks, in this case, the question answering (QA) task.
It is worth noting that \citet{li17} use paraphrasing techniques to improve QA; however, they only paraphrase questions and do not focus on the data augmentation aspect as we do in this paper.
\paragraph{Handling SQuAD Documents and Answers.}
We now discuss our specific procedure for the SQuAD dataset, which is essential for best performance gains. Recall that each training example of SQuAD is a triple $(d, q, a)$ in which document $d$ is a multi-sentence paragraph that contains the answer $a$. When paraphrasing, we keep the question $q$ unchanged (to avoid accidentally changing its meaning) and generate new triples $(d', q, a')$ such that the new document $d'$ has the new answer $a'$ in it. The procedure happens in two steps: (i) {\it document paraphrasing} -- paraphrase $d$ into $d'$ and (ii) {\it answer extraction} -- extract $a'$ from $d'$ that closely matches $a$.
For the document paraphrasing step, we first split paragraphs into sentences and paraphrase them independently. We use $k=5$, so each sentence has 25 paraphrase choices. A new document $d'$ is formed by simply replacing each sentence in $d$ with a randomly-selected paraphrase. An obvious issue with this na\"{i}ve approach is that the original answer $a$ might no longer be present in $d'$.
The answer extraction addresses the aforementioned issue.
Let $s$ be the original sentence that contains the original answer $a$ and $s'$ be its paraphrase. We identify the newly-paraphrased answer with simple heuristics as follows. Character-level 2-gram scores are computed between each word in $s'$ and the start / end words of $a$ to find start and end positions of possible answers in $s'$.
Among all candidate paraphrased answers, the one with the highest character 2-gram score with respect to $a$ is selected as the new answer $a'$. Table~\ref{table:augmentation_answer} shows an example of the new answer found by this process.\footnote{We also define a minimum threshold for elimination. If there is no answer with a 2-gram score higher than the threshold, we remove the paraphrase $s'$ from our sampling process. If all paraphrases of a sentence are eliminated, no sampling will be performed for that sentence.}
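A sketch of this heuristic is given below; the exact scoring function (Jaccard overlap) and threshold value are our own illustrative choices, since the text above only specifies character-level 2-gram scores and a minimum threshold.
\begin{verbatim}
# Sketch of the answer-extraction heuristic: score candidate spans in
# the paraphrase s' by character 2-gram overlap with the start/end
# words of the original answer a.  Jaccard scoring and the threshold
# value are illustrative assumptions.
def char_2grams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def sim(w1, w2):
    a, b = char_2grams(w1), char_2grams(w2)
    return len(a & b) / max(1, len(a | b))

def extract_answer(s_tokens, a_tokens, threshold=0.4):
    start_w, end_w = a_tokens[0], a_tokens[-1]
    best, best_score = None, threshold
    for i in range(len(s_tokens)):
        for j in range(i, len(s_tokens)):
            score = sim(s_tokens[i], start_w) + sim(s_tokens[j], end_w)
            if score > best_score:
                best, best_score = (i, j), score
    return None if best is None else s_tokens[best[0]:best[1] + 1]
\end{verbatim}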
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{l|p{7.5cm}|p{3.5cm}}
\hline
&Sentence that contains an answer & Answer \\\hline
Original& All of the departments in the College of Science offer PhD programs, except for the Department of Pre-Professional Studies. & Department of Pre-Professional Studies\\\hline
Paraphrase& All departments in the College of Science offer PHD programs with the exception of the Department of Preparatory Studies. & Department of Preparatory Studies
\\\hline
\end{tabular}
\end{center}
\caption{Comparison between answers in original sentence and paraphrased sentence.}
\label{table:augmentation_answer}
\end{table}
The quality and diversity of paraphrases are essential to the data augmentation method, and both can still be improved. The quality can be improved by using better translation models. For example, we find that paraphrases significantly longer than our models' maximum training sequence length tend to be cut off in the middle. The diversity can be improved both by sampling during beam search decoding and by paraphrasing the questions and answers in the dataset as well. In addition, we can combine this method with other data augmentation methods, such as the type-swap method \citep{RaimanM17}, to acquire more diversity in paraphrases.
In our experiments, we observe that the proposed data augmentation can bring non-trivial improvement in terms of accuracy. We believe this technique is also applicable to other supervised natural language processing tasks, especially when the training data is insufficient.
\section{Conclusion}\label{sec:conclusion}
In this paper, we propose a fast and accurate end-to-end model, QANet, for machine reading comprehension. Our core innovation is to completely remove the recurrent networks in the encoder. The resulting model is fully feedforward, composed entirely of separable convolutions, attention, linear layers, and layer normalization, which is suitable for parallel computation. The resulting model is both fast and accurate: it surpasses the best published results on the SQuAD dataset while being up to 13/9 times faster than competitive recurrent models for a training/inference iteration. Additionally, we find that we are able to achieve significant gains by utilizing data augmentation consisting of translating context and passage pairs to and from another language as a way of paraphrasing the questions and contexts.
\section{Experiments}\label{sec:experiment}
In this section, we conduct experiments to study the performance of our model and the data augmentation technique. We will primarily benchmark our model on the SQuAD dataset~\citep{RajpurkarZLL16}, considered to be one of the most competitive datasets in Q\&A. We also conduct similar studies on TriviaQA~\citep{JoshiCWZ17}, another Q\&A dataset, to show that the effectiveness and efficiency of our model are general.
\subsection{Experiments on SQuAD}
\subsubsection{Dataset and Experimental Settings}
\paragraph{Dataset.} We consider the Stanford Question Answering Dataset (SQuAD)~\citep{RajpurkarZLL16} for machine reading comprehension.\footnote{SQuAD leaderboard: \url{https://rajpurkar.github.io/SQuAD-explorer/}}
SQuAD contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing. The typical length of the paragraphs is around 250 tokens while the question is around 10 tokens, although there are exceptionally long cases. Only the training and validation data are publicly available, while the test data is hidden, so one has to submit the code to a Codalab server and work with the authors of~\citep{RajpurkarZLL16} to retrieve the final test score. In our experiments, we report the test set result of our best single model.\footnote{On the leaderboard of SQuAD, there are many strong candidates in the ``ensemble" category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the ``single model" category and compare against other models in the same category.} For further analysis, we only report the performance on the validation set, as we do not want to probe the unseen test set by frequent submissions.
According to the observations from our experiments and previous works, such as~\citep{SeoKFH16,XiongZS16,WangYWCZ17,ChenFWB17}, the validation score is well correlated with the test score.
\paragraph{Data Preprocessing.}
We use the NLTK tokenizer to preprocess the data.\footnote{NLTK implementation: \url{http://www.nltk.org/}} The maximum context length is set to 400 and any paragraph longer than that is discarded. During training, we batch the examples by length and dynamically pad the short sentences with the special symbol \texttt{<PAD>}. The maximum answer length is set to 30.
We use the pretrained 300-D word vectors GLoVe~\citep{pennington2014glove}, and all the out-of-vocabulary words are replaced with \texttt{<UNK>}, whose embedding is updated during training. Each character embedding is randomly initialized as a 200-D vector, which is updated in training as well.
We generate two additional augmented datasets with the procedure of Section~\ref{sec:aug}, which contain 140K and 240K examples and are denoted ``data augmentation $\times$ 2'' and ``data augmentation $\times$ 3'' respectively, both including the original data.
\paragraph{Training details.}
We employ two types of standard regularization. First, we use L2 weight decay on all the trainable variables, with parameter $\lambda=3\times 10^{-7}$. We additionally use dropout on word and character embeddings and between layers, where the word and character dropout rates are 0.1 and 0.05 respectively, and the dropout rate between every two layers is 0.1. We also adopt the stochastic depth method (layer dropout)~\citep{HuangSLSW16} within each embedding or model encoder layer, where sublayer $l$ has survival probability
$p_l= 1 - {l\over L} (1-p_L)$ where $L$ is the last layer and $p_L=0.9$.
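A short sketch of this layer-dropout schedule follows; the test-time rescaling by $p_l$ is the convention of \citet{HuangSLSW16}, a choice of ours that is not specified above.
\begin{verbatim}
# Sketch of stochastic depth (layer dropout): sublayer l survives
# with probability p_l = 1 - (l/L) * (1 - p_L), p_L = 0.9.
import random

def survival_prob(l, L, p_L=0.9):
    return 1.0 - (l / L) * (1.0 - p_L)

def residual_with_layer_dropout(x, sublayer, l, L, train=True):
    p = survival_prob(l, L)
    if train:                       # skip the sublayer with prob 1-p
        return x if random.random() > p else x + sublayer(x)
    return x + p * sublayer(x)      # expectation matching at test time
\end{verbatim}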
The hidden size and the convolution filter number are all 128, the batch size is 32, training steps are 150K for original data, 250K for ``data augmentation $\times$ 2", and 340K for ``data augmentation $\times$ 3". The numbers of convolution layers in the embedding and modeling encoder are 4 and 2, kernel sizes are 7 and 5, and the block numbers for the encoders are 1 and 7, respectively.
We use the ADAM optimizer~\citep{KingmaB14} with $\beta_1=0.8, \beta_2=0.999,\epsilon=10^{-7}$. We use a learning rate warm-up scheme with an inverse exponential increase from 0.0 to 0.001 in the first 1000 steps, and then maintain a constant learning rate for the remainder of training.
Exponential moving average is applied on all trainable variables with a decay rate 0.9999.
Finally, we implement our model in Python using Tensorflow~\citep{AbadiABBCCCDDDG16} and carry out our experiments on an NVIDIA p100 GPU.\footnote{TensorFlow implementation: \url{https://www.tensorflow.org/}}
\subsubsection{Results}
\paragraph{Accuracy.}
The F1 and Exact Match (EM) scores are two evaluation metrics of accuracy for model performance. F1 measures the portion of overlapping tokens between the predicted answer and the groundtruth, while the exact match score is 1 if the prediction is exactly the same as the groundtruth and 0 otherwise. We show the results in comparison with other methods in Table~\ref{table:squad_all}. To make a fair and thorough comparison, we report both the published results in the latest papers/preprints and the updated but not yet documented results on the leaderboard; we deem the latter the unpublished results. As can be seen from the table, the accuracy (EM/F1) performance of our model is on par with the state-of-the-art models. In particular, our model trained on the original dataset outperforms all the documented results in the literature, in terms of both EM and F1 scores (see the second column of Table~\ref{table:squad_all}). When trained with the augmented data with a proper sampling scheme, our model obtains a significant gain of 1.5/1.1 in EM/F1. Finally, our result on the official test set is 76.2/84.6, which significantly outperforms the best documented result of 73.2/81.8.
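For reference, a self-contained sketch of the two metrics is shown below; the official evaluation script additionally lowercases answers and strips punctuation and articles, a normalization omitted here for brevity.
\begin{verbatim}
# Sketch of the SQuAD accuracy metrics: exact match (EM) and
# token-level F1.  Answer normalization of the official script
# (lowercasing, removing punctuation/articles) is omitted.
from collections import Counter

def exact_match(prediction, ground_truth):
    return float(prediction.strip() == ground_truth.strip())

def f1_score(prediction, ground_truth):
    pred, gold = prediction.split(), ground_truth.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
\end{verbatim}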
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
& Published\tablefootnote{The scores are collected from the latest version of the documented related work on Oct 27, 2017.} & LeaderBoard\tablefootnote{The scores are collected from the leaderboard on Oct 27, 2017.} \\\hline
Single Model & EM~/~F1 & EM~/~F1 \\\hline
LR Baseline~\citep{RajpurkarZLL16} & 40.4 / 51.0 & 40.4 / 51.0\\
Dynamic Chunk Reader~\citep{YuZHYXZ16} & 62.5 / 71.0 & 62.5 / 71.0 \\
Match-LSTM with Ans-Ptr~\citep{WangJ16a} & 64.7 / 73.7 & 64.7 / 73.7\\
Multi-Perspective Matching~\citep{WangMHF16} &65.5 / 75.1 & 70.4 / 78.8\\
Dynamic Coattention Networks~\citep{XiongZS16} & 66.2 / 75.9 & 66.2 / 75.9\\
FastQA~\citep{WeissenbornWS17} & 68.4 / 77.1 & 68.4 / 77.1 \\
BiDAF~\citep{SeoKFH16} & 68.0 / 77.3 & 68.0 / 77.3\\
SEDT~\citep{LiuHWYN17} & 68.1 / 77.5 & 68.5 / 78.0\\
RaSoR~\citep{LeeKP016} & 70.8 / 78.7 & 69.6 / 77.7 \\
FastQAExt~\citep{WeissenbornWS17}& 70.8 / 78.9 & 70.8 / 78.9\\
ReasoNet~\citep{ShenHGC17} & 69.1 / 78.9 &70.6 / 79.4\\
Document Reader~\citep{ChenFWB17}& 70.0 / 79.0 & 70.7 / 79.4\\
Ruminating Reader~\citep{GongB17}&70.6 / 79.5& 70.6 / 79.5\\
jNet~\citep{ZhangZCDWJ17}& 70.6 / 79.8 & 70.6 / 79.8\\
Conductor-net & N/A & 72.6 / 81.4\\
Interactive AoA Reader~\citep{CuiCWWLH17} & N/A & 73.6 / 81.9\\
Reg-RaSoR & N/A & 75.8 / 83.3 \\
DCN+ & N/A & 74.9 / 82.8 \\
AIR-FusionNet & N/A& 76.0 / 83.9\\
R-Net~\citep{WangYWCZ17} & 72.3 / 80.7 & 76.5 / 84.3\\
BiDAF + Self Attention + ELMo & N/A & \textbf{77.9 / 85.3}\\
Reinforced Mnemonic Reader~\citep{HuPQ17} & 73.2 / 81.8 & 73.2 / 81.8\\
\hline
Dev set: QANet & \textbf{73.6} / \textbf{82.7} & N/A\\
Dev set: QANet + data augmentation $\times$2 & \textbf{74.5} / \textbf{83.2} & N/A\\
Dev set: QANet + data augmentation $\times$3 & \textbf{75.1} / \textbf{83.8}& N/A\\\hline
Test set: QANet + data augmentation $\times$3 & \textbf{76.2} / \textbf{84.6 }& 76.2 / 84.6\\
\hline
\end{tabular}
\end{center}
\caption{The performance of different models on the SQuAD dataset.
}
\label{table:squad_all}
\end{table}
\paragraph{Speedup over RNNs.}
To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models~\citep{ChenFWB17}. All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table~\ref{table:squad_speedup}. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference.
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{l|c|cc|cc|cc}
\hline
&QANet & RNN-1-128 & Speedup & RNN-2-128 & Speedup & RNN-3-128 & Speedup \\\hline
Training& \textbf{3.2} & 1.1 & \textbf{2.9x} & 0.34 & \textbf{9.4x} & 0.24& \textbf{13.3x}\\
Inference& \textbf{8.1} & 2.2 & \textbf{3.7x} & 1.3 & \textbf{6.2x} & 0.92 & \textbf{8.8x}
\\\hline
\end{tabular}
\end{center}
\caption{Speed comparison between our model and RNN-based models on SQuAD dataset, all with batch size 32. RNN-$x$-$y$ indicates an RNN with $x$ layers each containing $y$ hidden units. Here, we use bidirectional LSTM as the RNN. The speed is measured by batches/second, so higher is faster.}
\label{table:squad_speedup}
\end{table}
\paragraph{Speedup over BiDAF model.}
In addition, we also use the same hardware (an NVIDIA p100 GPU) and compare the training time needed to reach the same performance between our model and the BiDAF model\footnote{The code is directly downloaded from \url{https://github.com/allenai/bi-att-flow}}~\citep{SeoKFH16}, a classic RNN-based model on SQuAD. We mostly adopt the default settings in the original code to get its best performance, where the batch sizes for training and inference are both 60. The only part we changed is the optimizer: Adam with learning rate 0.001 is used here, as Adadelta gave slightly worse performance. The result is shown in Table~\ref{table:squad_vs_bidaf}, which shows that our model is 4.3 and 7.0 times faster than BiDAF in training and inference speed, respectively. Besides, we only need one fifth of the training time to achieve BiDAF's best F1 score ($77.0$) on the dev set.
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
& Train time to get 77.0 F1 on Dev set& Train speed & Inference speed\\\hline
QANet & 3 hours& 102 samples/s & 259 samples/s \\
BiDAF& 15 hours & 24 samples/s & 37 samples/s\\\hline
Speedup & \textbf{5.0x} & \textbf{4.3x} & \textbf{7.0x}
\\\hline
\end{tabular}
\end{center}
\caption{Speed comparison between our model and BiDAF~\citep{SeoKFH16} on SQuAD dataset.}
\label{table:squad_vs_bidaf}
\end{table}
\subsubsection{Ablation Study and Analysis}
We conduct ablation studies on components of the proposed model, and investigate the effect of the augmented data. The validation scores on the development set are shown in Table~\ref{table:squad_ablation}. As can be seen from the table, the use of convolutions in the encoders is crucial: both F1 and EM drop drastically, by almost 3 percent, if they are removed. Self-attention in the encoders is also a necessary component, contributing a 1.4/1.3 gain in EM/F1 to the ultimate performance. We interpret these phenomena as follows: the convolutions capture the local structure of the context while the self-attention is able to model the global interactions between text.
Hence they are complementary to, but cannot replace, each other. The use of separable convolutions in lieu of traditional convolutions also makes a noticeable contribution to the performance, as can be seen from the slightly worse accuracy obtained when separable convolutions are replaced with normal convolutions.
\paragraph{The Effect of Data Augmentation.}
We additionally perform experiments to understand the value of the augmented data as its amount increases. As the last block of rows in the table shows, data augmentation proves helpful in further boosting performance. Making the training data twice as large by adding the En-Fr-En data only (ratio 1:1 between original and augmented data, as indicated by the row ``data augmentation $\times$ 2 (1:1:0)") yields a 0.5 percent increase in F1.
While adding more augmented data with French as a pivot does not provide a performance gain, injecting the same amount of additional En-De-En augmented data brings another 0.2 improvement in F1, as indicated in the entry ``data augmentation $\times$ 3 (1:1:1)". We may attribute this gain to the diversity of the new data, which is produced by the translator of the new language.
\paragraph{The Effect of Sampling Scheme.}
Although injecting more data beyond $\times$ 3 does not benefit the model, we observe that a good sampling ratio between the original and augmented data during training can further boost the model performance. In particular, when we increase the sampling weight of the augmented data from (1:1:1) to (1:2:1), the EM/F1 performance drops by 0.5/0.3. We conjecture that this is because the augmented data is noisy due to the back-translation, so it should not dominate the training data. We confirm this point by increasing the ratio of the original data from (1:2:1) to (2:2:1), where a 0.6/0.5 performance gain in EM/F1 is obtained. We then fix the portion of the augmented data and search over the sampling weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with a 1.5/1.1 gain over the base model in EM/F1. This is also the model we submitted for test set evaluation. A sketch of such a weighted sampler is shown below.
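The batching details in the following sketch are illustrative; only the (3:1:1) weighting comes from the experiments above.
\begin{verbatim}
# Sketch: sample each training example from (original, En-Fr-En,
# En-De-En) with the (3:1:1) weights found best above.
import random

def sample_batch(datasets, weights=(3, 1, 1), batch_size=32):
    """datasets: (original, en_fr_en, en_de_en) lists of examples."""
    batch = []
    for _ in range(batch_size):
        src = random.choices(datasets, weights=weights, k=1)[0]
        batch.append(random.choice(src))
    return batch
\end{verbatim}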
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
& EM~/~F1 & Difference to Base Model\\
& & EM~/~F1 \\
\hline
Base QANet & 73.6 / 82.7 \\\hline
~~~~~~~~~~~~~~~~~~~~~- convolution in encoders & 70.8 / 80.0 & -2.8 / -2.7 \\
~~~~~~~~~~~~~~~~~~~~~- self-attention in encoders & 72.2 / 81.4 & -1.4 / -1.3\\
~~~~~~~~~~~~~~~~~~~~~replace sep convolution with normal convolution & 72.9 / 82.0 & - 0.7 / -0.7\\\hline
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$2 (1:1:0) & 74.5 / 83.2 & +0.9 / +0.5 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (1:1:1) & 74.8 / 83.4 & +1.2 / +0.7 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (1:2:1) & 74.3 / 83.1 & +0.7 / +0.4 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (2:2:1) & 74.9 / 83.6 & +1.3 / +0.9 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (2:1:1) & 75.0 / 83.6 & +1.4 / +0.9 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (3:1:1) & \textbf{75.1 / 83.8} & \textbf{+1.5 / +1.1}\\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (4:1:1) & 75.0 / 83.6 & +1.4 / +0.9\\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (5:1:1) & 74.9 / 83.5 & +1.3 / +0.8\\
\hline
\end{tabular}
\end{center}
\caption{An ablation study of data augmentation and other aspects of our model. The reported results are obtained on the \emph{development set}. For rows containing entry ``data augmentation",
``$\times N$" means the data is enhanced to $N$ times as large as the original size, while
the ratio in the bracket indicates the sampling ratio among the original, English-French-English and English-German-English data during training.}
\label{table:squad_ablation}
\end{table}
\subsubsection{Robustness Study}
In the following, we conduct experiments on the adversarial SQuAD dataset~\citep{JiaL17} to study the robustness of the proposed model.
In this dataset, one or more sentences are appended to the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training.
We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates
sentences that are similar to the question, but
not contradictory to the correct answer, while AddOneSent adds
a random human-approved sentence that is not necessarily related to the context.
The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation.
The results are shown in Table~\ref{table:squad_adversarial}, where the F1 scores of other models are all extracted from \cite{JiaL17}.\footnote{Only F1 scores are reported in \cite{JiaL17}} Again, we only compare the performance of single models. From Table~\ref{table:squad_adversarial}, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. The injected noise in the training data might not only improve the generalization of the model but also make it robust to the adversarial sentences.
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
Single Model & AddSent & AddOneSent \\\hline
Logistic~\citep{RajpurkarZLL16} & 23.2 & 30.4\\
Match~\citep{WangJ16a} & 27.3 & 39.0\\
SEDT~\citep{LiuHWYN17} & 33.9 & 44.8\\
DCR~\citep{YuZHYXZ16} & 37.8 & 45.1 \\
BiDAF~\citep{SeoKFH16} & 34.3 & 45.7\\
jNet~\citep{ZhangZCDWJ17} & 37.9 & 47.0 \\
Ruminating~\citep{GongB17} & 37.4 & 47.7 \\
RaSOR~\citep{LeeKP016} & 39.5 & 49.5\\
MPCM~\citep{WangMHF16} & 40.3 & 50.0\\
ReasoNet~\citep{ShenHGC17} & 39.4 & 50.3\\
Mnemonic~\citep{HuPQ17} & \textbf{46.6} & \textbf{56.0}\\
\hline
QANet & 45.2 & 55.7 \\
\hline
\end{tabular}
\end{center}
\caption{The F1 scores on the adversarial SQuAD test set.
}
\label{table:squad_adversarial}
\end{table}
\subsection{Experiments on TriviaQA}
In this section, we test our model on another dataset TriviaQA~\citep{JoshiCWZ17},
which consists of 650K context-query-answer
triples. There are 95K distinct question-answer pairs, which are authored by
Trivia enthusiasts, with 6 evidence documents (context)
per question on average, which are either crawled from Wikipedia or Web search.
Compared to SQuAD, TriviaQA is more challenging in that: 1) its examples have much longer context (2895 tokens per context on average) and may contain several paragraphs, 2) it is much noisier than SQuAD due to the lack of human labeling, 3) it is possible that the context is not related to the answer at all, as it is crawled by key words.
In this paper, we focus on testing our model on the subset consisting of answers from Wikipedia. According to the previous work~\citep{JoshiCWZ17,HuPQ17,PanLZCCH17}, the same model would have similar performance on both Wikipedia and Web, but the latter is five time larger. To keep the training time manageable, we omit the experiment on Web data.
Due to the multi-paragraph nature of the context, researchers also find that simple hierarchical or multi-step reading tricks, such as first predicting which paragraph to read and then apply models like BiDAF to pinpoint the answer within that paragraph~\citep{ClarkG17}, can significantly boost the performance on TriviaQA. However, in this paper, we focus on comparing with the single-paragraph reading baselines only. We believe that our model can be plugged into other multi-paragraph reading methods to achieve the similar or better performance, but it is out of the scope of this paper.
The Wikipedia sub-dataset contains around 92K training and 11K development examples. The average context and question lengths are 495 and 15 respectively. In addition to the full development set, the authors of~\cite{JoshiCWZ17} also pick a verified subset that all the contexts inside can answer the associated questions. As the text could be long, we adopt the data processing similar to \cite{HuPQ17,JoshiCWZ17}. In particular, for training and validation, we randomly select a window of length 256 and 400 encapsulating the answer respectively. All the remaining setting are the same as SQuAD experiment, except that the training steps are set to 120K.
\paragraph{Accuracy.} The accuracy performance on the development set is shown in Table~\ref{table:triviaqa_all}. Again, we can see that our model outperforms the baselines in terms of F1 and EM on Full development set, and is on par with the state-of-the-art on the Verified dev set.
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
& Full & Verified \\\hline
Single Model & EM~/~F1 & EM~/~F1 \\\hline
Random~\citep{JoshiCWZ17} & 12.7 / 22.5 & 13.8 / 23.4\\
Classifier~\citep{JoshiCWZ17} & 23.4 / 27.7 & 23.6 / 27.9\\
BiDAF~\citep{SeoKFH16} & 40.3 / 45.7 & 46.5 /52.8 \\
MEMEN~\citep{PanLZCCH17} & 43.2/ 46.9 & 49.3 / 55.8\\
M-Reader~\citep{HuPQ17}$^*$ & 46.9/ 52.9$^*$ & 54.5/ 59.5$^*$\\
\hline
QANet & \textbf{51.1} / \textbf{56.6} & 53.3/ 59.2 \\
\hline
\end{tabular}
\end{center}
\caption{The development set performances of different \textit{single-paragraph} reading models on the Wikipedia domain of TriviaQA dataset. Note that $^*$ indicates the result on test set.
}
\label{table:triviaqa_all}
\end{table}
\paragraph{Speedup over RNNs.} In addition to accuracy, we also benchmark the speed of our model against the RNN counterparts. As Table~\ref{table:triviaqa_speedup} shows, not surprisingly, our model has 3 to 11 times speedup in training and 3 to 9 times acceleration in inference, similar to the finding in SQuAD dataset.
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{l|c|cc|cc|cc}
\hline
&QANet & RNN-1-128 & Speedup & RNN-2-128 & Speedup & RNN-3-128 & Speedup \\\hline
Training& \textbf{1.8} & 0.41 & \textbf{4.4x} & 0.20 & \textbf{9.0x} & 0.11& \textbf{16.4x}\\
Inference& \textbf{3.2} & 0.89 & \textbf{3.6x} & 0.47 & \textbf{6.8x} & 0.26 & \textbf{12.3x}
\\\hline
\end{tabular}
\end{center}
\caption{Speed comparison between the proposed model and RNN-based models on TriviaQA Wikipedia dataset, all with batch size 32. RNN-$x$-$y$ indicates an RNN with $x$ layers each containing $y$ hidden units. The RNNs used here are bidirectional LSTM. The processing speed is measured by batches/second, so higher is faster.}
\label{table:triviaqa_speedup}
\end{table}
\section{Introduction}\label{sec:intro}
There is growing interest in the tasks of machine reading comprehension and automated question answering. Over the past few years, significant progress has been made with end-to-end models showing promising results on many challenging datasets. The most successful models generally employ two key ingredients: (1) a recurrent model to process sequential inputs, and (2) an attention component to cope with long-term interactions. A successful combination of these two ingredients is the Bidirectional Attention Flow (BiDAF) model of~\cite{SeoKFH16}, which achieves strong results on the SQuAD dataset~\citep{RajpurkarZLL16}. A weakness of these models is that they are often slow for both training and inference due to their recurrent nature, especially for long texts.
The expensive training not only leads to a high turnaround time for experimentation and limits researchers from rapid iteration, but also prevents the models from being applied to larger datasets. Meanwhile, the slow inference prevents machine comprehension systems from being deployed in real-time applications.
In this paper, aiming to make machine comprehension fast, we propose to remove the recurrent nature of these models. We instead exclusively use convolutions and self-attention as the building blocks of encoders that separately encode the query and context. Then we learn the interactions between context and question by standard attention mechanisms~\citep{XiongZS16,SeoKFH16,bahdanau2014neural}.
The resulting representation is encoded again with our recurrency-free encoder before finally decoding to the probability of each position being the start or end of the answer span.
We call this architecture QANet, which is shown in Figure~\ref{model_diagram}.
The key motivation behind the design of our model is the following:
convolution captures the local structure of the text, while the self-attention learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subsequent modeling layers.
The feed-forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a simple comparison, our model can achieve the same accuracy (77.0 F1 score) as the BiDAF model~\citep{SeoKFH16} within 3 hours of training, which would otherwise take 15 hours. The speed-up also allows us to train the model for more iterations to achieve better results than competitive models. For instance, if we allow our model to train for 18 hours, it achieves an F1 score of 82.7 on the dev set, which is much better than the result of~\cite{SeoKFH16}, and is on par with the best published results.
As our model is fast, we can train it with much more data than other models. To further improve the model, we propose a complementary data augmentation technique to enhance the training data.
This technique paraphrases the examples by translating the original sentences from English to another language and then back to English, which not only enhances the number of training instances but also diversifies the phrasing.
On the SQuAD dataset, QANet trained with the augmented data achieves 84.6 F1 score on the test set,
which is significantly better than the best published result of 81.8 by~\cite{HuPQ17}.\footnote{After our first submission of the draft, there have been other unpublished results either on the leaderboard or on arXiv. For example, the current (as of Dec 19, 2017) best documented model, SAN~\cite{Liu17}, achieves an 84.4 F1 score, which is on par with our method.}
We also conduct ablation tests to justify the usefulness of each component of our model.
In summary, the contributions of this paper are as follows:
\begin{itemize}
\item We propose an efficient reading comprehension model built exclusively upon convolutions and self-attention. To the best of our knowledge, we are the first to do so. This combination maintains good accuracy, while achieving up to 13x speedup in training and up to 9x in inference, compared to the RNN counterparts. The speedup gain makes our model the most promising candidate for scaling up to larger datasets.
\item To improve our result on SQuAD, we propose a novel data augmentation technique to enrich the training data by paraphrasing.
It allows the model to achieve accuracy that is better than the state-of-the-art.
\end{itemize}
\section*{Acknowledgement}
Adams Wei Yu is supported by an NVIDIA PhD Fellowship and a CMU Presidential Fellowship. We would like to thank Samy Bengio, Lei Huang, Minjoon Seo, Noam Shazeer, Ashish Vaswani, Barret Zoph and the Google Brain Team for helpful discussions.
\section{The Model}\label{sec:model}
In this section, we first formulate the reading comprehension problem and then describe the proposed model QANet: it is a feedforward model that consists only of convolutions and self-attention, a combination that is empirically effective and is also a novel contribution of our work.
\subsection{Problem Formulation}
The reading comprehension task considered in this paper is defined as follows. Given a context paragraph with $n$ words $C=\{c_1, c_2,..., c_n\}$ and a query sentence with $m$ words $Q=\{q_1, q_2,..., q_m\}$, output a span $S=\{c_{i}, c_{i+1},..., c_{i+j}\}$ from the original paragraph $C$. In the following, we will use $x$ to denote both the original word and its embedded vector, for any $x\in C, Q$.
\subsection{Model Overview}
The high-level structure of our model is similar to that of most existing models, containing five major components: an embedding layer, an embedding encoder layer, a context-query attention layer, a model encoder layer and an output layer, as shown in Figure \ref{model_diagram}.
These are the standard building blocks for most, if not all, existing reading comprehension models. However, the major differences between our approach and other methods are as follows:
For both the embedding and modeling encoders, we exclusively use convolutional and self-attention mechanisms, discarding the RNNs used by most existing reading comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. Note that even though self-attention has already been used extensively in~\cite{VaswaniSPUJGKP17}, the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone, giving a 2.7 F1 gain in our experiments. The use of convolutions also allows us to take advantage of common regularization methods in ConvNets such as stochastic depth (layer dropout)~\citep{HuangSLSW16}, which gives an additional gain of 0.2 F1 in our experiments.
In detail, our model consists of the following five layers:
\paragraph{1. Input Embedding Layer.} We adopt the standard technique of obtaining the embedding of each word $w$ by concatenating its word embedding and character embedding. The word embedding is fixed during training and initialized from the $p_1=300$ dimensional pre-trained GloVe~\citep{pennington2014glove} word vectors. All the out-of-vocabulary words are mapped to an \texttt{<UNK>} token, whose embedding is trainable with random initialization. The character embedding is obtained as follows:
Each character is represented as a trainable vector of dimension $p_2=200$, so each word can be viewed as the concatenation of the embedding vectors of its characters. The length of each word is either truncated or padded to 16. We take the maximum value of each row of this matrix to obtain a fixed-size vector representation of each word. Finally, the output of a given word $x$ from this layer is the concatenation $[x_w; x_c]\in\mathbf{R}^{p_1+p_2}$, where $x_w$ and $x_c$ are the word embedding and the convolution output of the character embedding of $x$ respectively.
Following~\cite{SeoKFH16}, we also adopt a two-layer highway network~\citep{SrivastavaGS15} on top of this representation.
For simplicity, we also use $x$ to denote the output of this layer.
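For concreteness, a minimal NumPy sketch of this layer is given below; the single highway layer, the shapes, and the variable names are illustrative simplifications (our implementation uses a two-layer highway network and a convolution over the character embeddings), not our exact TensorFlow code.
\begin{verbatim}
import numpy as np

p1, p2 = 300, 200        # word / character embedding dimensions
max_chars = 16           # each word truncated or padded to 16 characters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def embed_word(word_vec, char_mat, W_t, b_t, W_h, b_h):
    """word_vec: (p1,) fixed GloVe vector; char_mat: (p2, max_chars)
    trainable character embeddings. Returns a (p1+p2,) vector."""
    x_c = char_mat.max(axis=1)              # max over the character axis
    x = np.concatenate([word_vec, x_c])     # [x_w; x_c]
    t = sigmoid(W_t @ x + b_t)              # highway transform gate
    h = np.tanh(W_h @ x + b_h)              # highway nonlinear transform
    return t * h + (1.0 - t) * x            # gated mix with identity path
\end{verbatim}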
\begin{figure}[t]
\centering \includegraphics[width=0.7\columnwidth]{adams.pdf}
\caption{An overview of the QANet architecture (left), which has several Encoder Blocks. We use the same Encoder Block (right) throughout the model, only varying the number of convolutional layers for each block. We use layernorm and residual connections between every layer in the Encoder Block. We also share the weights of the context and question encoder, and of the three output encoders. A positional encoding, consisting of $\sin$ and $\cos$ functions at varying wavelengths as defined in~\citep{VaswaniSPUJGKP17}, is added to the input at the beginning of each encoder layer.
Each sub-layer after the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block.}
\label{model_diagram}
\end{figure}
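As a reference, the sinusoidal positional encoding mentioned in the caption of Figure~\ref{model_diagram} can be sketched in a few lines of NumPy; the scaling constant $10000$ follows~\citep{VaswaniSPUJGKP17}, and an even dimension $d$ is assumed.
\begin{verbatim}
import numpy as np

def positional_encoding(length, d):
    """(length, d) sinusoids: even dims use sin, odd dims use cos."""
    pos = np.arange(length)[:, None]                  # (length, 1)
    i = np.arange(d // 2)[None, :]                    # (1, d/2)
    angles = pos / np.power(10000.0, 2.0 * i / d)     # wavelength grows with i
    pe = np.zeros((length, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
\end{verbatim}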
\paragraph{2. Embedding Encoder Layer.} The encoder layer is a stack of the following basic building block:
[convolution-layer $\times$ \# + self-attention-layer + feed-forward-layer],
as illustrated in the upper right of Figure \ref{model_diagram}. We use depthwise separable convolutions~\citep{Chollet16a,kaiser2017depthwise} rather than traditional ones, as we observe that they are memory-efficient and generalize better. The kernel size is 7, the number of filters is $d=128$ and the number of conv layers within a block is 4.
For the self-attention-layer, we adopt the multi-head attention mechanism defined in~\citep{VaswaniSPUJGKP17}, which, for each position in the input, called the query, computes a weighted sum of all positions, or keys, in the input based on the similarity between the query and key as measured by the dot product. The number of heads is 8 throughout all the layers.
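To illustrate the mechanism, the following NumPy sketch computes a single head of dot-product self-attention over an already-projected input; the learned query/key/value projections and the 8-way head split of the full multi-head variant~\citep{VaswaniSPUJGKP17} are omitted here.
\begin{verbatim}
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """X: (seq_len, d_k). Each position (query) attends to all
    positions (keys) weighted by dot-product similarity."""
    d_k = X.shape[-1]
    scores = X @ X.T / np.sqrt(d_k)     # pairwise similarities
    weights = softmax(scores, axis=-1)  # normalize over keys
    return weights @ X                  # weighted sum of values
\end{verbatim}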
Each of these basic operations (conv/self-attention/ffn) is placed inside a \textit{residual block}, shown lower-right in Figure \ref{model_diagram}. For an input $x$ and a given operation $f$, the output is $f(layernorm(x)) + x$, meaning there is a full identity path from the input to the output of each block, where layernorm indicates the layer normalization proposed in~\citep{BaKH16}. The total number of encoder blocks is 1. Note that the input of this layer is a vector of dimension $p_1+p_2=500$ for each individual word, which is immediately mapped to $d=128$ by a one-dimensional convolution. The output of this layer is also of dimension $d=128$.
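The residual wiring is simple enough to state directly; below is a minimal NumPy sketch in which \texttt{f} stands for any of the three operations, and the layernorm gain and bias parameters are omitted for brevity.
\begin{verbatim}
import numpy as np

def layernorm(x, eps=1e-6):
    """Normalize each position's feature vector to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def residual_block(x, f):
    """f(layernorm(x)) + x: full identity path from input to output."""
    return f(layernorm(x)) + x
\end{verbatim}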
\paragraph{3. Context-Query Attention Layer.} This module is standard in almost every previous reading comprehension model, such as \cite{WeissenbornWS17} and \cite{ChenFWB17}. We use $C$ and $Q$ to denote the encoded context and query. The context-to-query attention is constructed as follows: We first compute the similarities between each pair of context and query words, rendering a similarity matrix
$S\in\mathbf{R}^{n \times m}$. We then normalize each row of $S$ by applying the softmax function, getting a matrix $\overline{S}$. Then the context-to-query attention is computed as
$A = \overline{S} \cdot Q^T \in \mathbf{R}^{n\times d}$. The similarity function used here is the trilinear function~\citep{SeoKFH16}:
$$f(q, c) = W_0[q, c, q\odot c],$$
where $\odot$ is the element-wise multiplication and $W_0$ is a trainable variable.
Most high-performing models additionally use some form of query-to-context attention, such as BiDAF~\citep{SeoKFH16} and DCN~\citep{XiongZS16}.
Empirically, we find that the DCN attention provides a small benefit over simply applying context-to-query attention, so we adopt this strategy. More concretely, we compute the column-normalized matrix $\overline{\overline {S}}$ of $S$ by the softmax function, and the query-to-context attention is
$B=\overline{S}\cdot {\overline{\overline {S}}}^T \cdot C^T$.
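Putting the definitions together, a minimal NumPy sketch of this layer is given below; it assumes row-major matrices $C \in \mathbf{R}^{n\times d}$ and $Q \in \mathbf{R}^{m\times d}$ and a weight vector $W_0 \in \mathbf{R}^{3d}$, and exploits the fact that the trilinear similarity decomposes into three matrix products.
\begin{verbatim}
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def context_query_attention(C, Q, W0):
    """C: (n, d) context, Q: (m, d) query, W0: (3d,) trainable weights."""
    n, d = C.shape
    # Trilinear similarity S[i, j] = W0 . [q_j ; c_i ; q_j * c_i]
    S = (Q @ W0[:d])[None, :] + (C @ W0[d:2*d])[:, None] \
        + (C * W0[2*d:]) @ Q.T                  # (n, m)
    S_row = softmax(S, axis=1)   # row-normalized over query words
    S_col = softmax(S, axis=0)   # column-normalized over context words
    A = S_row @ Q                # context-to-query attention, (n, d)
    B = S_row @ S_col.T @ C      # query-to-context (DCN) attention, (n, d)
    return A, B
\end{verbatim}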
\paragraph{4. Model Encoder Layer.} Similar to~\cite{SeoKFH16}, the input of this layer at each position is $[c, a, c \odot a, c\odot b]$, where $a$ and $b$ are respectively a row of the attention matrices $A$ and $B$. The layer parameters are the same as those of the Embedding Encoder Layer, except that the number of convolution layers within a block is 2 and the total number of blocks is 7. We share weights between each of the 3 repetitions of the model encoder.
\paragraph{5. Output layer.} This layer is task-specific. Each example in SQuAD is labeled with a span in the context containing the answer. We adopt the strategy of \cite{SeoKFH16} to predict the probability of each position in the context being the start or end of an answer span.
More specifically, the probabilities of the starting and ending position are modeled as
$$p^1 = softmax(W_1[M_0;M_1]), ~~ p^2 = softmax(W_2[M_0;M_2]),$$
where $W_1$ and $W_2$ are two trainable variables and $M_0, M_1, M_2$ are respectively the outputs of the three model encoders, from bottom to top.
The score of a span is the product of its start position and end position probabilities.
Finally, the objective function is defined as the negative sum of the log probabilities
of the predicted distributions indexed by true start and end indices, averaged over all the training examples:
$$L(\theta)=-{1\over N}\sum_i^N \left[\log(p^1_{y_i^1}) + \log(p^2_{y_i^2})\right],$$
where $y_i^1$ and $y_i^2$ are respectively the groundtruth starting and ending position of example $i$, and $\theta$ contains all the trainable variables.
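For a single example, the output layer and the loss can be sketched as follows in NumPy; treating $W_1$ and $W_2$ as vectors of dimension $2d$ is a shape assumption consistent with the formulas above.
\begin{verbatim}
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def output_layer(M0, M1, M2, W1, W2):
    """M0, M1, M2: (n, d) model encoder outputs, bottom to top.
    Returns start/end distributions over the n context positions."""
    p_start = softmax(np.concatenate([M0, M1], axis=1) @ W1)  # (n,)
    p_end   = softmax(np.concatenate([M0, M2], axis=1) @ W2)  # (n,)
    return p_start, p_end

def span_loss(p_start, p_end, y_start, y_end):
    """Negative log-likelihood of the ground-truth start/end positions."""
    return -(np.log(p_start[y_start]) + np.log(p_end[y_end]))
\end{verbatim}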
The proposed model can be customized to other comprehension tasks, e.g. selecting from the candidate answers, by changing the output layers accordingly.
\textbf{Inference}. At inference time, the predicted span $(s, e)$ is chosen such that $p^1_sp^2_e$ is maximized and $s\le e$. Standard dynamic programming can obtain the result in linear time.
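The linear-time search only needs, for each end position, the best start position seen so far; a short Python sketch is given below (capping the answer length, as done in our preprocessing, is omitted).
\begin{verbatim}
def best_span(p_start, p_end):
    """Return (s, e) maximizing p_start[s] * p_end[e] with s <= e, in O(n)."""
    best_s, best, best_prob = 0, (0, 0), -1.0
    for e in range(len(p_end)):
        if p_start[e] > p_start[best_s]:
            best_s = e                       # best start position up to e
        prob = p_start[best_s] * p_end[e]
        if prob > best_prob:
            best_prob, best = prob, (best_s, e)
    return best
\end{verbatim}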
\section{Related Work}\label{sec:rw}
Machine reading comprehension and automated question answering have become an important topic in the NLP domain. Their popularity can be attributed to an increase in publicly available annotated datasets, such as SQuAD~\citep{RajpurkarZLL16}, TriviaQA~\citep{JoshiCWZ17}, CNN/Daily News~\citep{HermannKGEKSB15}, WikiReading~\citep{HewlettLJPFHKB16}, Children Book Test~\citep{HillBCW15}, etc. A great number of end-to-end neural network models have been proposed to tackle these challenges, including BiDAF~\citep{SeoKFH16}, r-net~\citep{WangYWCZ17}, DCN~\citep{XiongZS16}, ReasoNet~\citep{ShenHGC17}, Document Reader~\citep{ChenFWB17}, Interactive AoA Reader~\citep{CuiCWWLH17} and Reinforced Mnemonic Reader~\citep{HuPQ17}.
Recurrent Neural Networks (RNNs) have featured predominantly in Natural Language Processing in the past few years.
The sequential nature of the text coincides with the design philosophy of RNNs, and hence their popularity. In fact, all the reading comprehension models mentioned above are based on RNNs.
Despite being common, the sequential nature of RNNs prevents parallel computation, as tokens must be fed into the RNN in order. Another drawback of RNNs is the difficulty of modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Units~\citep{chung2014empirical} or Long Short Term Memory architectures~\citep{hochreiter1997long}. For simple tasks such as text classification, models using reinforcement learning techniques~\citep{YuLL17} have been proposed to skip irrelevant tokens, both to further address the long-dependency issue and to speed up the procedure. However, it is not clear if such methods can handle complicated tasks such as Q\&A.
The reading comprehension task considered in this paper always needs to deal with long text, as the context paragraphs may be hundreds of words long.
Recently, attempts have been made to replace the recurrent networks with full convolution or full attention architectures~\citep{Kim14,gehring2017convolutional,vaswani2017attention,ShenZLJPZ17}.
Those models have been shown to be not only faster than RNN architectures, but also effective in other tasks, such as text classification, machine translation and sentiment analysis.
To the best of our knowledge, our paper is the first work to achieve a reading comprehension model that is both \textit{fast} and \textit{accurate}, by discarding the recurrent networks in favor of feed-forward architectures. Our paper is also the first to mix self-attention and convolutions, which proves to be empirically effective and achieves a significant gain of 2.7 F1. Note that \cite{RaimanM17} recently proposed to accelerate reading comprehension by avoiding bi-directional attention and making computation conditional on the search beams. Nevertheless, their model is still based on RNNs and the accuracy is not competitive, with an EM of 68.4 and F1 of 76.2. \cite{WeissenbornWS17} also tried to build a fast Q\&A model by deleting the context-query attention module. However, it again relies on RNNs and is thus intrinsically slower than ours. The elimination of attention further sacrifices performance (EM 68.4 and F1 77.1).
Data augmentation has also been explored in natural language processing. For example, \cite{ZhangZL15} proposed to enhance the dataset by replacing words with their synonyms and showed its effectiveness in text classification. \cite{RaimanM17} suggested using type swaps to augment the SQuAD dataset, which essentially replaces the words in the original paragraph with others of the same type. While it was shown to improve accuracy, the augmented data has the same syntactic structure as the original data, so it is not sufficiently diverse. \cite{ZhouYWTBZ17} improved the diversity of the SQuAD data by generating more questions. However, as reported by~\cite{WangYWCZ17}, their method did not help improve the performance. The data augmentation technique proposed in this paper is based on paraphrasing the sentences by translating the original text back and forth. The major benefit is that it can bring more syntactic diversity to the enhanced data.
\section{Experiments}\label{sec:experiment}
In this section, we conduct experiments to study the performance of our model and the data augmentation technique. We will primarily benchmark our model on the SQuAD dataset~\citep{RajpurkarZLL16}, considered to be one of the most competitive datasets in Q\&A. We also conduct similar studies on TriviaQA~\citep{JoshiCWZ17}, another Q\&A dataset, to show that the effectiveness and efficiency of our model are general.
\subsection{Experiments on SQuAD}
\subsubsection{Dataset and Experimental Settings}
\paragraph{Dataset.} We consider the Stanford Question Answering Dataset (SQuAD)~\citep{RajpurkarZLL16} for machine reading comprehension.\footnote{SQuAD leaderboard: \url{https://rajpurkar.github.io/SQuAD-explorer/}}
SQuAD contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing. The typical length of a paragraph is around 250 tokens and that of a question around 10 tokens, although there are exceptionally long cases. Only the training and validation data are publicly available; the test data is hidden, so one has to submit the code to Codalab and work with the authors of~\citep{RajpurkarZLL16} to retrieve the final test score. In our experiments, we report the test set result of our best single model.\footnote{On the leaderboard of SQuAD, there are many strong candidates in the ``ensemble" category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the ``single model" category and compare against other models in the same category.} For further analysis, we only report the performance on the validation set, as we do not want to probe the unseen test set by frequent submissions.
According to the observations from our experiments and previous works, such as~\citep{SeoKFH16,XiongZS16,WangYWCZ17,ChenFWB17}, the validation score is well correlated with the test score.
\paragraph{Data Preprocessing.}
We use the NLTK tokenizer to preprocess the data.\footnote{NLTK implementation: \url{http://www.nltk.org/}} The maximum context length is set to 400 and any paragraph longer than that is discarded. During training, we batch the examples by length and dynamically pad the short sentences with the special symbol \texttt{<PAD>}. The maximum answer length is set to 30.
We use the pretrained 300-D word vectors GloVe~\citep{pennington2014glove}, and all the out-of-vocabulary words are replaced with \texttt{<UNK>}, whose embedding is updated during training. Each character embedding is randomly initialized as a 200-D vector, which is updated in training as well.
We generate two additional augmented datasets obtained from Section~\ref{sec:aug}, which contain 140K and 240K examples and are denoted as ``data augmentation $\times$ 2'' and ``data augmentation $\times$ 3'' respectively, including the original data.
\paragraph{Training details.}
We employ two types of standard regularization. First, we use L2 weight decay on all the trainable variables, with parameter $\lambda=3\times 10^{-7}$. We additionally use dropout on the word and character embeddings and between layers, where the word and character dropout rates are 0.1 and 0.05 respectively, and the dropout rate between every two layers is 0.1. We also adopt the stochastic depth method (layer dropout)~\citep{HuangSLSW16} within each embedding or model encoder layer, where sublayer $l$ has survival probability
$p_l= 1 - {l\over L} (1-p_L)$ where $L$ is the last layer and $p_L=0.9$.
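In code, this amounts to dropping each residual sublayer with a depth-dependent probability during training; the sketch below follows~\citep{HuangSLSW16}, including the expected-value rescaling at test time, which is an assumption about implementation details not stated elsewhere in the paper.
\begin{verbatim}
import numpy as np

def survival_prob(l, L, p_L=0.9):
    """Linearly decaying survival probability p_l = 1 - (l/L)(1 - p_L)."""
    return 1.0 - (l / L) * (1.0 - p_L)

def stochastic_residual(x, f, l, L, training, p_L=0.9):
    """Residual sublayer f with layer dropout (stochastic depth)."""
    p = survival_prob(l, L, p_L)
    if training:
        return f(x) + x if np.random.rand() < p else x  # sometimes skip f
    return p * f(x) + x                                 # expected value at test
\end{verbatim}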
The hidden size and the number of convolution filters are both 128, the batch size is 32, and the number of training steps is 150K for the original data, 250K for ``data augmentation $\times$ 2", and 340K for ``data augmentation $\times$ 3". The numbers of convolution layers in the embedding and modeling encoders are 4 and 2, the kernel sizes are 7 and 5, and the numbers of blocks for the encoders are 1 and 7, respectively.
We use the Adam optimizer~\citep{KingmaB14} with $\beta_1=0.8, \beta_2=0.999,\epsilon=10^{-7}$. We use a learning rate warm-up scheme with an inverse exponential increase from 0.0 to 0.001 in the first 1000 steps, and then maintain a constant learning rate for the remainder of training.
An exponential moving average is applied to all trainable variables with a decay rate of 0.9999.
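A short Python sketch of the schedule and the moving average follows; the logarithmic ramp is our reading of ``inverse exponential increase" and should be taken as an assumption rather than the exact formula.
\begin{verbatim}
import math

def learning_rate(step, base_lr=0.001, warmup=1000):
    """Ramp from 0 to base_lr over `warmup` steps, then stay constant."""
    if step < warmup:
        return min(base_lr,
                   base_lr / math.log(warmup - 1) * math.log(step + 1))
    return base_lr

def ema_update(shadow, params, decay=0.9999):
    """Exponential moving average of the trainable variables."""
    for k in params:
        shadow[k] = decay * shadow[k] + (1.0 - decay) * params[k]
    return shadow
\end{verbatim}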
Finally, we implement our model in Python using Tensorflow~\citep{AbadiABBCCCDDDG16} and carry out our experiments on an NVIDIA p100 GPU.\footnote{TensorFlow implementation: \url{https://www.tensorflow.org/}}
\subsubsection{Results}
\paragraph{Accuracy.}
The F1 and Exact Match (EM) scores are two evaluation metrics of accuracy for model performance. F1 measures the portion of overlapping tokens between the predicted answer and the groundtruth, while the exact match score is 1 if the prediction is exactly the same as the groundtruth and 0 otherwise. We show the results in comparison with other methods in Table~\ref{table:squad_all}. To make a fair and thorough comparison, we report both the published results in the latest papers/preprints and the updated but undocumented results on the leaderboard; we deem the latter to be unpublished results. As can be seen from the table, the accuracy (EM/F1) performance of our model is on par with the state-of-the-art models. In particular, our model trained on the original dataset outperforms all the documented results in the literature, in terms of both EM and F1 scores (see the second column of Table~\ref{table:squad_all}). When trained with the augmented data with a proper sampling scheme, our model obtains a significant gain of 1.5/1.1 on EM/F1. Finally, our result on the official test set is 76.2/84.6, which significantly outperforms the best documented result of 73.2/81.8.
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
& Published\tablefootnote{The scores are collected from the latest version of the documented related work on Oct 27, 2017.} & LeaderBoard\tablefootnote{The scores are collected from the leaderboard on Oct 27, 2017.} \\\hline
Single Model & EM~/~F1 & EM~/~F1 \\\hline
LR Baseline~\citep{RajpurkarZLL16} & 40.4 / 51.0 & 40.4 / 51.0\\
Dynamic Chunk Reader~\citep{YuZHYXZ16} & 62.5 / 71.0 & 62.5 / 71.0 \\
Match-LSTM with Ans-Ptr~\citep{WangJ16a} & 64.7 / 73.7 & 64.7 / 73.7\\
Multi-Perspective Matching~\citep{WangMHF16} &65.5 / 75.1 & 70.4 / 78.8\\
Dynamic Coattention Networks~\citep{XiongZS16} & 66.2 / 75.9 & 66.2 / 75.9\\
FastQA~\citep{WeissenbornWS17} & 68.4 / 77.1 & 68.4 / 77.1 \\
BiDAF~\citep{SeoKFH16} & 68.0 / 77.3 & 68.0 / 77.3\\
SEDT~\citep{LiuHWYN17} & 68.1 / 77.5 & 68.5 / 78.0\\
RaSoR~\citep{LeeKP016} & 70.8 / 78.7 & 69.6 / 77.7 \\
FastQAExt~\citep{WeissenbornWS17}& 70.8 / 78.9 & 70.8 / 78.9\\
ReasoNet~\citep{ShenHGC17} & 69.1 / 78.9 &70.6 / 79.4\\
Document Reader~\citep{ChenFWB17}& 70.0 / 79.0 & 70.7 / 79.4\\
Ruminating Reader~\citep{GongB17}&70.6 / 79.5& 70.6 / 79.5\\
jNet~\citep{ZhangZCDWJ17}& 70.6 / 79.8 & 70.6 / 79.8\\
Conductor-net & N/A & 72.6 / 81.4\\
Interactive AoA Reader~\citep{CuiCWWLH17} & N/A & 73.6 / 81.9\\
Reg-RaSoR & N/A & 75.8 / 83.3 \\
DCN+ & N/A & 74.9 / 82.8 \\
AIR-FusionNet & N/A& 76.0 / 83.9\\
R-Net~\citep{WangYWCZ17} & 72.3 / 80.7 & 76.5 / 84.3\\
BiDAF + Self Attention + ELMo & N/A & \textbf{77.9 / 85.3}\\
Reinforced Mnemonic Reader~\citep{HuPQ17} & 73.2 / 81.8 & 73.2 / 81.8\\
\hline
Dev set: QANet & \textbf{73.6} / \textbf{82.7} & N/A\\
Dev set: QANet + data augmentation $\times$2 & \textbf{74.5} / \textbf{83.2} & N/A\\
Dev set: QANet + data augmentation $\times$3 & \textbf{75.1} / \textbf{83.8}& N/A\\\hline
Test set: QANet + data augmentation $\times$3 & \textbf{76.2} / \textbf{84.6}& 76.2 / 84.6\\
\hline
\end{tabular}
\end{center}
\caption{The performances of different models on the SQuAD dataset.
}
\label{table:squad_all}
\end{table}
\paragraph{Speedup over RNNs.}
To measure the speedup of our model over RNN models, we also test the corresponding model architecture with each encoder block replaced by a stack of bidirectional LSTMs, as used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a stack of 1, 2, or 3 bidirectional LSTM layers, as such layer numbers fall into the usual range of reading comprehension models~\citep{ChenFWB17}. All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table~\ref{table:squad_speedup}. We can easily see that our model is significantly faster than all the RNN-based models, with speedups ranging from 3 to 13 times in training and 4 to 9 times in inference.
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{l|c|cc|cc|cc}
\hline
&QANet & RNN-1-128 & Speedup & RNN-2-128 & Speedup & RNN-3-128 & Speedup \\\hline
Training& \textbf{3.2} & 1.1 & \textbf{2.9x} & 0.34 & \textbf{9.4x} & 0.24& \textbf{13.3x}\\
Inference& \textbf{8.1} & 2.2 & \textbf{3.7x} & 1.3 & \textbf{6.2x} & 0.92 & \textbf{8.8x}
\\\hline
\end{tabular}
\end{center}
\caption{Speed comparison between our model and RNN-based models on the SQuAD dataset, all with batch size 32. RNN-$x$-$y$ indicates an RNN with $x$ layers each containing $y$ hidden units. Here, we use bidirectional LSTMs as the RNNs. The speed is measured in batches/second, so higher is faster.}
\label{table:squad_speedup}
\end{table}
\paragraph{Speedup over BiDAF model.}
In addition, we use the same hardware (an NVIDIA p100 GPU) and compare the training time needed to reach the same performance between our model and the BiDAF model\footnote{The code is directly downloaded from \url{https://github.com/allenai/bi-att-flow}.}~\citep{SeoKFH16}, a classic RNN-based model on SQuAD. We mostly adopt the default settings in the original code to get its best performance, where the batch sizes for training and inference are both 60. The only part we changed is the optimizer: Adam with learning rate 0.001 is used here, as Adadelta gave slightly worse performance. The results are shown in Table~\ref{table:squad_vs_bidaf}: our model is 4.3 and 7.0 times faster than BiDAF in training and inference speed, respectively. Moreover, we need only one fifth of the training time to achieve BiDAF's best F1 score ($77.0$) on the dev set.
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
& Train time to get 77.0 F1 on Dev set& Train speed & Inference speed\\\hline
QANet & 3 hours& 102 samples/s & 259 samples/s \\
BiDAF& 15 hours & 24 samples/s & 37 samples/s\\\hline
Speedup & \textbf{5.0x} & \textbf{4.3x} & \textbf{7.0x}
\\\hline
\end{tabular}
\end{center}
\caption{Speed comparison between our model and BiDAF~\citep{SeoKFH16} on the SQuAD dataset.}
\label{table:squad_vs_bidaf}
\end{table}
\subsubsection{Ablation Study and Analysis}
We conduct ablation studies on components of the proposed model and investigate the effect of augmented data. The validation scores on the development set are shown in Table~\ref{table:squad_ablation}. As can be seen from the table, the use of convolutions in the encoders is crucial: both F1 and EM drop drastically, by almost 3 percent, if they are removed. Self-attention in the encoders is also a necessary component, contributing a gain of 1.4/1.3 on EM/F1 to the ultimate performance. We interpret these phenomena as follows: the convolutions capture the local structure of the context, while the self-attention is able to model the global interactions between pieces of text.
Hence they are complementary to, but cannot replace, each other. The use of separable convolutions in lieu of traditional convolutions also contributes notably to the performance, as can be seen from the slightly worse accuracy caused by replacing separable convolutions with normal convolutions.
\paragraph{The Effect of Data Augmentation.}
We additionally perform experiments to understand the value of augmented data as its amount increases. As the last block of rows in the table shows, data augmentation proves helpful in further boosting performance. Making the training data twice as large by adding the En-Fr-En data only (a 1:1 ratio between the original training data and the augmented data, as indicated by the row ``data augmentation $\times$ 2 (1:1:0)") yields an increase of 0.5 percent in F1.
While adding more augmented data with French as the pivot does not provide a performance gain, injecting additional augmented data En-De-En of the same amount brings another 0.2 improvement in F1, as indicated in the entry ``data augmentation $\times$ 3 (1:1:1)". We attribute this gain to the diversity of the new data, which is produced by the translator of a new language.
\paragraph{The Effect of Sampling Scheme.}
Although injecting more data beyond $\times$ 3 does not benefit the model, we observe that a good sampling ratio between the original and augmented data during training can further boost model performance. In particular, when we increase the sampling weight of the augmented data from (1:1:1) to (1:2:1), the EM/F1 performance drops by 0.5/0.3. We conjecture that this is because the augmented data is noisy due to back-translation, so it should not dominate the training data. We confirm this point by increasing the ratio of the original data from (1:2:1) to (2:2:1), which yields a 0.6/0.5 performance gain on EM/F1. We then fix the portion of the augmented data and search over the sampling weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with a 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation.
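As a sketch, the (3:1:1) scheme can be implemented by weighted sampling over the three data pools; the pool and variable names below are illustrative, not from our actual pipeline.
\begin{verbatim}
import random

def sample_example(original, en_fr_en, en_de_en, weights=(3, 1, 1)):
    """Draw one training example from the three pools with probability
    proportional to the given sampling weights."""
    pool = random.choices([original, en_fr_en, en_de_en],
                          weights=weights, k=1)[0]
    return random.choice(pool)
\end{verbatim}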
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
& EM~/~F1 & Difference to Base Model\\
& & EM~/~F1 \\
\hline
Base QANet & 73.6 / 82.7 \\\hline
~~~~~~~~~~~~~~~~~~~~~- convolution in encoders & 70.8 / 80.0 & -2.8 / -2.7 \\
~~~~~~~~~~~~~~~~~~~~~- self-attention in encoders & 72.2 / 81.4 & -1.4 / -1.3\\
~~~~~~~~~~~~~~~~~~~~~replace sep convolution with normal convolution & 72.9 / 82.0 & -0.7 / -0.7\\\hline
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$2 (1:1:0) & 74.5 / 83.2 & +0.9 / +0.5 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (1:1:1) & 74.8 / 83.4 & +1.2 / +0.7 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (1:2:1) & 74.3 / 83.1 & +0.7 / +0.4 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (2:2:1) & 74.9 / 83.6 & +1.3 / +0.9 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (2:1:1) & 75.0 / 83.6 & +1.4 / +0.9 \\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (3:1:1) & \textbf{75.1 / 83.8} & \textbf{+1.5 / +1.1}\\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (4:1:1) & 75.0 / 83.6 & +1.4 / +0.9\\
~~~~~~~~~~~~~~~~~~~~~+ data augmentation $\times$3 (5:1:1) & 74.9 / 83.5 & +1.3 / +0.8\\
\hline
\end{tabular}
\end{center}
\caption{An ablation study of data augmentation and other aspects of our model. The reported results are obtained on the \emph{development set}. For rows containing the entry ``data augmentation",
``$\times N$" means the training data is enlarged to $N$ times its original size, while
the ratio in parentheses indicates the sampling ratio among the original, English-French-English and English-German-English data during training.}
\label{table:squad_ablation}
\end{table}
\subsubsection{Robustness Study}
In the following, we conduct experiments on the adversarial SQuAD dataset~\citep{JiaL17} to study the robustness of the proposed model.
In this dataset, one or more sentences are appended to the original contexts of the SQuAD test set to intentionally mislead trained models into producing wrong answers. However, the model is agnostic to those adversarial examples, as they are never seen during training.
We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences that are similar to the question but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context.
The model in use is exactly the one trained for the original SQuAD task (the one achieving 84.6 F1 on the test set); it is now submitted to the adversarial server for evaluation.
The results are shown in Table~\ref{table:squad_adversarial}, where the F1 scores of the other models are all extracted from \cite{JiaL17}.\footnote{Only F1 scores are reported in \cite{JiaL17}.} Again, we only compare the performance of single models. From Table~\ref{table:squad_adversarial}, we can see that our model is on par with the state-of-the-art model Mnemonic, and significantly better than the other models by a large margin. The robustness of our model is probably due to the augmented training data: the injected noise might not only improve the generalization of the model but also make it robust to adversarial sentences.
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
Single Model & AddSent & AddOneSent \\\hline
Logistic~\citep{RajpurkarZLL16} & 23.2 & 30.4\\
Match~\citep{WangJ16a} & 27.3 & 39.0\\
SEDT~\citep{LiuHWYN17} & 33.9 & 44.8\\
DCR~\citep{YuZHYXZ16} & 37.8 & 45.1 \\
BiDAF~\citep{SeoKFH16} & 34.3 & 45.7\\
jNet~\citep{ZhangZCDWJ17} & 37.9 & 47.0 \\
Ruminating~\citep{GongB17} & 37.4 & 47.7 \\
RaSOR~\citep{LeeKP016} & 39.5 & 49.5\\
MPCM~\citep{WangMHF16} & 40.3 & 50.0\\
ReasoNet~\citep{ShenHGC17} & 39.4 & 50.3\\
Mnemonic~\citep{HuPQ17} & \textbf{46.6} & \textbf{56.0}\\
\hline
QANet & 45.2 & 55.7 \\
\hline
\end{tabular}
\end{center}
\caption{The F1 scores on the adversarial SQuAD test set.
}
\label{table:squad_adversarial}
\end{table}
\subsection{Experiments on TriviaQA}
In this section, we test our model on another dataset, TriviaQA~\citep{JoshiCWZ17}, which consists of 650K context-query-answer triples. There are 95K distinct question-answer pairs, authored by trivia enthusiasts, with 6 evidence documents (contexts) per question on average, crawled from either Wikipedia or Web search.
Compared to SQuAD, TriviaQA is more challenging in that: 1) its examples have much longer contexts (2,895 tokens per context on average) that may contain several paragraphs; 2) it is much noisier than SQuAD due to the lack of human labeling; 3) the context may be unrelated to the answer at all, as it is crawled by keywords.
In this paper, we focus on testing our model on the subset consisting of answers from Wikipedia. According to previous work~\citep{JoshiCWZ17,HuPQ17,PanLZCCH17}, the same model would have similar performance on both Wikipedia and Web, but the latter is five times larger. To keep the training time manageable, we omit the experiments on Web data.
Due to the multi-paragraph nature of the contexts, researchers have also found that simple hierarchical or multi-step reading tricks, such as first predicting which paragraph to read and then applying models like BiDAF to pinpoint the answer within that paragraph~\citep{ClarkG17}, can significantly boost performance on TriviaQA. However, in this paper, we focus on comparing with single-paragraph reading baselines only. We believe that our model can be plugged into other multi-paragraph reading methods to achieve similar or better performance, but that is beyond the scope of this paper.
The Wikipedia sub-dataset contains around 92K training and 11K development examples. The average context and question lengths are 495 and 15 tokens, respectively. In addition to the full development set, the authors of~\cite{JoshiCWZ17} also pick a verified subset in which every context can answer the associated question. As the text can be long, we adopt data processing similar to \cite{HuPQ17,JoshiCWZ17}. In particular, for training and validation, we randomly select windows of length 256 and 400, respectively, that encapsulate the answer. All remaining settings are the same as in the SQuAD experiment, except that the number of training steps is set to 120K.
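A short Python sketch of the window selection follows (indices are token positions; we assume the answer span is shorter than the window, and the variable names are illustrative).
\begin{verbatim}
import random

def random_window(context_len, ans_start, ans_end, window=256):
    """Pick a window of `window` tokens containing [ans_start, ans_end]."""
    lo = max(0, ans_end - window + 1)    # leftmost start keeping the answer
    hi = min(ans_start, max(0, context_len - window))
    hi = max(hi, lo)                     # guard for short contexts
    start = random.randint(lo, hi)
    return start, min(start + window, context_len)
\end{verbatim}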
\paragraph{Accuracy.} The accuracy performance on the development set is shown in Table~\ref{table:triviaqa_all}. Again, we can see that our model outperforms the baselines in terms of F1 and EM on the Full development set, and is on par with the state-of-the-art on the Verified dev set.
\begin{table}[ht]
\small
\begin{center}
\begin{tabular}{lcc}
\hline
& Full & Verified \\\hline
Single Model & EM~/~F1 & EM~/~F1 \\\hline
Random~\citep{JoshiCWZ17} & 12.7 / 22.5 & 13.8 / 23.4\\
Classifier~\citep{JoshiCWZ17} & 23.4 / 27.7 & 23.6 / 27.9\\
BiDAF~\citep{SeoKFH16} & 40.3 / 45.7 & 46.5 / 52.8 \\
MEMEN~\citep{PanLZCCH17} & 43.2 / 46.9 & 49.3 / 55.8\\
M-Reader~\citep{HuPQ17}$^*$ & 46.9 / 52.9$^*$ & 54.5 / 59.5$^*$\\
\hline
QANet & \textbf{51.1} / \textbf{56.6} & 53.3 / 59.2 \\
\hline
\end{tabular}
\end{center}
\caption{The development set performance of different \textit{single-paragraph} reading models on the Wikipedia domain of the TriviaQA dataset. Note that $^*$ indicates results on the test set.
}
\label{table:triviaqa_all}
\end{table}
\paragraph{Speedup over RNNs.} In addition to accuracy, we also benchmark the speed of our model against RNN counterparts. As Table~\ref{table:triviaqa_speedup} shows, not surprisingly, our model achieves a 4.4x to 16.4x speedup in training and a 3.6x to 12.3x speedup in inference, similar to our findings on the SQuAD dataset.
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{l|c|cc|cc|cc}
\hline
&QANet & RNN-1-128 & Speedup & RNN-2-128 & Speedup & RNN-3-128 & Speedup \\\hline
Training& \textbf{1.8} & 0.41 & \textbf{4.4x} & 0.20 & \textbf{9.0x} & 0.11& \textbf{16.4x}\\
Inference& \textbf{3.2} & 0.89 & \textbf{3.6x} & 0.47 & \textbf{6.8x} & 0.26 & \textbf{12.3x}
\\\hline
\end{tabular}
\end{center}
\caption{Speed comparison between the proposed model and RNN-based models on the TriviaQA Wikipedia dataset, all with batch size 32. RNN-$x$-$y$ indicates an RNN with $x$ layers each containing $y$ hidden units. The RNNs used here are bidirectional LSTMs. The processing speed is measured in batches/second, so higher is faster.}
\label{table:triviaqa_speedup}
\end{table}
\section{Introduction}\label{sec:intro}
There is growing interest in the tasks of machine reading comprehension and automated question answering. Over the past few years, significant progress has been made with end-to-end models showing promising results on many challenging datasets. The most successful models generally employ two key ingredients: (1) a recurrent model to process sequential inputs, and (2) an attention component to cope with long term interactions. A successful combination of these two ingredients is the Bidirectional Attention Flow (BiDAF) model by~\cite{SeoKFH16}, which achieve strong results on the SQuAD dataset~\citep{RajpurkarZLL16}. A weakness of these models is that they are often slow for both training and inference due to their recurrent nature, especially for long texts.
The expensive training not only leads to high turnaround time for experimentation and limits researchers from rapid iteration but also prevents the models from being used for larger dataset. Meanwhile the slow inference prevents the machine comprehension systems from being deployed in real-time applications.
In this paper, aiming to make the machine comprehension fast, we propose to remove the recurrent nature of these models. We instead exclusively use convolutions and self-attentions as the building blocks of encoders that separately encodes the query and context. Then we learn the interactions between context and question by standard attentions~\citep{XiongZS16,SeoKFH16,bahdanau2014neural}.
The resulting representation is encoded again with our recurrency-free encoder before finally decoding to the probability of each position being the start or end of the answer span.
We call this architecture QANet, which is shown in Figure~\ref{model_diagram}.
The key motivation behind the design of our model is the following:
convolution captures the local structure of the text, while the self-attention learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subsequent modeling layers.
The feed-forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a simple comparison, our model can achieve the same accuracy (77.0 F1 score) as BiDAF model~\citep{SeoKFH16} within 3 hours training that otherwise should have taken 15 hours. The speed-up gain also allows us to train the model with more iterations to achieve better results than competitive models. For instance, if we allow our model to train for 18 hours, it achieves an F1 score of 82.7 on the dev set, which is much better than~\citep{SeoKFH16}, and is on par with best published results.
As our model is fast, we can train it with much more data than other models. To further improve the model, we propose a complementary data augmentation technique to enhance the training data.
This technique paraphrases the examples by translating the original sentences from English to another language and then back to English, which not only enhances the number of training instances but also diversifies the phrasing.
On the SQuAD dataset, QANet trained with the augmented data achieves 84.6 F1 score on the test set,
which is significantly better than the best published result of 81.8 by~\cite{HuPQ17}.\footnote{After our first submission of the draft, there are other unpublished results either on the leaderboard or arxiv. For example, the current (as of Dec 19, 2017) best documented model, SAN~\cite{Liu17}, achieves 84.4 F1 score which is on par with our method.}
We also conduct ablation test to justify the usefulness of each component of our model.
In summary, the contribution of this paper are as follows:
\begin{itemize}
\item We propose an efficient reading comprehension model that exclusively built upon convolutions and self-attentions. To the best of our knowledge, we are the first to do so. This combination maintains good accuracy, while achieving up to 13x speedup in training and 9x per training iteration, compared to the RNN counterparts. The speedup gain makes our model the most promising candidate for scaling up to larger datasets.
\item To improve our result on SQuAD, we propose a novel data augmentation technique to enrich the training data by paraphrasing.
It allows the model to achieve higher accuracy that is better than the state-of-the-art.
\end{itemize}
\section*{Acknowledgement}
Adams Wei Yu is supported by NVIDIA PhD Fellowship and CMU Presidential Fellowship. We would like to thank Samy Bengio, Lei Huang, Minjoon Seo, Noam Shazeer, Ashish Vaswani, Barret Zoph and the Google Brain Team for helpful discussions.
\section{The Model}\label{sec:model}
In this section, we first formulate the reading comprehension problem and then describe the proposed model QANet: it is a feedforward model that consists of only convolutions and self-attention, a combination that is empirically effective, and is also a novel contribution of our work.
\subsection{Problem Formulation}
The reading comprehension task considered in this paper, is defined as follows. Given a context paragraph with $n$ words $C=\{c_1, c_2,..., c_n\}$ and the query sentence with $m$ words $Q=\{q_1, q_2,..., q_m\}$, output a span $S=\{c_{i}, c_{i+1},..., c_{i+j}\}$ from the original paragraph $C$. In the following, we will use $x$ to denote both the original word and its embedded vector, for any $x\in C, Q$.
\subsection{Model Overview}
The high level structure of our model is similar to most existing models that contain five major components: an embedding layer, an embedding encoder layer, a context-query attention layer, a model encoder layer and an output layer, as shown in Figure \ref{model_diagram}.
These are the standard building blocks for most, if not all, existing reading comprehension models. However, the major differences between our approach and other methods are as follow:
For both the embedding and modeling encoders, we only use convolutional and self-attention mechanism, discarding RNNs, which are used by most of the existing reading comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. Note that even though self-attention has already been used extensively in~\cite{VaswaniSPUJGKP17}, the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. The use of convolutions also allows us to take advantage of common regularization methods in ConvNets such as stochastic depth (layer dropout)~\citep{HuangSLSW16}, which gives an additional gain of 0.2 F1 in our experiments.
In detail, our model consists of the following five layers:
\paragraph{1. Input Embedding Layer.} We adopt the standard techniques to obtain the embedding of each word $w$ by concatenating its word embedding and character embedding. The word embedding is fixed during training and initialized from the $p_1=300$ dimensional pre-trained GloVe ~\citep{pennington2014glove} word vectors, which are fixed during training. All the out-of-vocabulary words are mapped to an \texttt{<UNK>} token, whose embedding is trainable with random initialization. The character embedding is obtained as follows:
Each character is represented as a trainable vector of dimension $p_2=200$, meaning each word can be viewed as the concatenation of the embedding vectors for each of its characters. The length of each word is either truncated or padded to 16. We take maximum value of each row of this matrix to get a fixed-size vector representation of each word. Finally, the output of a given word $x$ from this layer is the concatenation $[x_w; x_c]\in\mathbf{R}^{p_1+p_2}$, where $x_w$ and $x_c$ are the word embedding and the convolution output of character embedding of $x$ respectively.
Following~\cite{SeoKFH16}, we also adopt a two-layer highway network~\citep{SrivastavaGS15} on top of this representation.
For simplicity, we also use $x$ to denote the output of this layer.
\begin{figure}[t]
\centering \includegraphics[width=0.7\columnwidth]{adams.pdf}
\caption{An overview of the QANet architecture (left) which has several Encoder Blocks. We use the same Encoder Block (right) throughout the model, only varying the number of convolutional layers for each block. We use layernorm and residual connection between every layer in the Encoder Block. We also share weights of the context and question encoder, and of the three output encoders. A positional encoding is added to the input at the beginning of each encoder layer consisting of $sin$ and $cos$ functions at varying wavelengths, as defined in ~\citep{VaswaniSPUJGKP17}.
Each sub-layer after the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block.}
\label{model_diagram}
\end{figure}
\paragraph{2. Embedding Encoder Layer.} The encoder layer is a stack of the following basic building block:
[convolution-layer $\times$ \# + self-attention-layer + feed-forward-layer],
as illustrated in the upper right of Figure \ref{model_diagram}. We use depthwise separable convolutions~\citep{Chollet16a,kaiser2017depthwise} rather than traditional ones, as we observe that they are memory efficient and generalize better. The kernel size is 7, the number of filters is $d=128$, and the number of conv layers within a block is 4.
For the self-attention-layer, we adopt the multi-head attention mechanism defined in~\citep{VaswaniSPUJGKP17}, which, for each position in the input (called the query), computes a weighted sum of all positions (the keys) in the input, based on the similarity between the query and each key as measured by the dot product. The number of heads is 8 throughout all the layers.
Each of these basic operations (conv/self-attention/ffn) is placed inside a \textit{residual block}, shown lower-right in Figure \ref{model_diagram}. For an input $x$ and a given operation $f$, the output is $f(layernorm(x)) + x$, meaning there is a full identity path from the input to the output of each block, where layernorm indicates the layer normalization proposed in~\citep{BaKH16}. The total number of encoder blocks is 1. Note that the input of this layer is a vector of dimension $p_1+p_2=500$ for each individual word, which is immediately mapped to $d=128$ by a one-dimensional convolution. The output of this layer is also of dimension $d=128$.
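To make the block structure concrete, here is a schematic NumPy sketch (random weights, not the trained model) of the residual wrapper $f(\mathrm{layernorm}(x)) + x$ applied to a depthwise separable convolution, i.e.~a per-channel depthwise convolution followed by a pointwise $1\times1$ mixing; self-attention and the feed-forward net are omitted for brevity.
\begin{verbatim}
import numpy as np

def layernorm(x, eps=1e-6):
    # normalize each position (column) across the d feature channels
    mu = x.mean(axis=0, keepdims=True)
    sd = x.std(axis=0, keepdims=True)
    return (x - mu) / (sd + eps)

def depthwise_separable_conv(x, depth_k, point_w):
    # x: (d, n); depth_k: (d, k) per-channel kernels; point_w: (d, d) 1x1 mix
    d, n = x.shape
    k = depth_k.shape[1]
    xp = np.pad(x, ((0, 0), (k // 2, k // 2)))       # "same" padding, odd k
    depth = np.stack([np.convolve(xp[c], depth_k[c], mode="valid")
                      for c in range(d)])
    return point_w @ depth                           # (d, n)

def residual(x, f):
    return f(layernorm(x)) + x                       # full identity path

d, n, k = 128, 40, 7
rng = np.random.default_rng(0)
x = rng.normal(size=(d, n))
dk = rng.normal(size=(d, k)) / k
pw = rng.normal(size=(d, d)) / np.sqrt(d)
y = residual(x, lambda z: depthwise_separable_conv(z, dk, pw))
assert y.shape == (d, n)
\end{verbatim}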
\paragraph{3. Context-Query Attention Layer.} This module is standard in almost every previous reading comprehension model, such as \cite{WeissenbornWS17} and \cite{ChenFWB17}. We use $C$ and $Q$ to denote the encoded context and query. The context-to-query attention is constructed as follows: we first compute the similarities between each pair of context and query words, rendering a similarity matrix
$S\in\mathbf{R}^{n \times m}$. We then normalize each row of $S$ by applying the softmax function, getting a matrix $\overline{S}$. Then the context-to-query attention is computed as
$A = \overline{S} \cdot Q^T \in \mathbf{R}^{n\times d}$. The similarity function used here is the trilinear function~\citep{SeoKFH16}:
$$f(q, c) = W_0[q, c, q\odot c],$$
where $\odot$ is the element-wise multiplication and $W_0$ is a trainable variable.
Most high performing models additionally use some form of query-to-context attention, such as BiDaF~\citep{SeoKFH16} and DCN~\citep{XiongZS16}.
Empirically, we find that the DCN attention can provide a little benefit over simply applying context-to-query attention, so we adopt this strategy. More concretely, we compute the column-normalized matrix $\overline{\overline {S}}$ of $S$ by the softmax function, and the query-to-context attention is
$B=\overline{S}\cdot {\overline{\overline {S}}}^T \cdot C^T$.
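The attention computations above fit in a few lines of NumPy; in this sketch (ours, with random placeholder weights) the rows of $C$ and $Q$ are word vectors, so the transposes in the formulas are absorbed into the layout.
\begin{verbatim}
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

n, m, d = 6, 4, 128
rng = np.random.default_rng(0)
C = rng.normal(size=(n, d))     # encoded context, one row per word
Q = rng.normal(size=(m, d))     # encoded query
W0 = rng.normal(size=3 * d)     # trilinear weights

# S[i, j] = f(q_j, c_i) = W0 . [q; c; q*c]
S = np.array([[W0 @ np.concatenate([q, c, q * c]) for q in Q] for c in C])
S_row = softmax(S, axis=1)      # row-normalized S-bar
S_col = softmax(S, axis=0)      # column-normalized S-double-bar
A = S_row @ Q                   # context-to-query attention, n x d
B = S_row @ S_col.T @ C         # DCN-style query-to-context, n x d
\end{verbatim}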
\paragraph{4. Model Encoder Layer.} Similar to~\cite{SeoKFH16}, the input of this layer at each position is $[c, a, c \odot a, c\odot b]$, where $a$ and $b$ are respectively a row of the attention matrices $A$ and $B$. The layer parameters are the same as those of the Embedding Encoder Layer, except that the number of convolution layers within a block is 2 and the total number of blocks is 7. We share weights between each of the 3 repetitions of the model encoder.
\paragraph{5. Output layer.} This layer is task-specific. Each example in SQuAD is labeled with a span in the context containing the answer. We adopt the strategy of \cite{SeoKFH16} to predict the probability of each position in the context being the start or end of an answer span.
More specifically, the probabilities of the starting and ending position are modeled as
$$p^1 = softmax(W_1[M_0;M_1]), ~~ p^2 = softmax(W_2[M_0;M_2]),$$
where $W_1$ and $W_2$ are two trainable variables and $M_0, M_1, M_2$ are respectively the outputs of the three model encoders, from bottom to top.
The score of a span is the product of its start position and end position probabilities.
Finally, the objective function is defined as the negative sum of the log probabilities
of the predicted distributions indexed by true start and end indices, averaged over all the training examples:
$$L(\theta)=-{1\over N}\sum_i^N \left[\log(p^1_{y_i^1}) + \log(p^2_{y_i^2})\right],$$
where $y_i^1$ and $y_i^2$ are respectively the groundtruth starting and ending position of example $i$, and $\theta$ contains all the trainable variables.
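In code, the objective is a standard negative log-likelihood over the true indices; a minimal sketch with random placeholder distributions:
\begin{verbatim}
import numpy as np

def span_loss(p1, p2, y1, y2):
    # p1, p2: (N, n) start/end distributions; y1, y2: (N,) true indices
    idx = np.arange(len(y1))
    return -np.mean(np.log(p1[idx, y1]) + np.log(p2[idx, y2]))

N, n = 3, 10
rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(n), size=N)   # rows sum to one
p2 = rng.dirichlet(np.ones(n), size=N)
print(span_loss(p1, p2, np.array([1, 4, 2]), np.array([5, 6, 2])))
\end{verbatim}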
The proposed model can be customized to other comprehension tasks, e.g. selecting from the candidate answers, by changing the output layers accordingly.
\textbf{Inference}. At inference time, the predicted span $(s, e)$ is chosen such that $p^1_sp^2_e$ is maximized and $s\le e$. Standard dynamic programming can obtain the result in linear time.
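A linear-time implementation keeps a running argmax of $p^1$ while sweeping the end index; a minimal sketch:
\begin{verbatim}
def best_span(p1, p2):
    # maximize p1[s] * p2[e] over s <= e in O(n)
    best, span, s_star = -1.0, (0, 0), 0
    for e in range(len(p2)):
        if p1[e] > p1[s_star]:
            s_star = e                        # best start index seen so far
        if p1[s_star] * p2[e] > best:
            best, span = p1[s_star] * p2[e], (s_star, e)
    return span
\end{verbatim}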
\section{Related Work}\label{sec:rw}
Machine reading comprehension and automated question answering have become an important topic in the NLP domain. Their popularity can be attributed to an increase in publicly available annotated datasets, such as SQuAD~\citep{RajpurkarZLL16}, TriviaQA~\citep{JoshiCWZ17}, CNN/Daily News~\citep{HermannKGEKSB15}, WikiReading~\citep{HewlettLJPFHKB16}, Children Book Test~\citep{HillBCW15}, etc. A great number of end-to-end neural network models have been proposed to tackle these challenges, including BiDAF~\citep{SeoKFH16}, r-net~\citep{WangYWCZ17}, DCN~\citep{XiongZS16}, ReasoNet~\citep{ShenHGC17}, Document Reader~\citep{ChenFWB17}, Interactive AoA Reader~\citep{CuiCWWLH17} and Reinforced Mnemonic Reader~\citep{HuPQ17}.
Recurrent Neural Networks (RNNs) have featured predominantly in Natural Language Processing in the past few years.
The sequential nature of the text coincides with the design philosophy of RNNs, and hence their popularity. In fact, all the reading comprehension models mentioned above are based on RNNs.
Despite being common, the sequential nature of RNNs prevents parallel computation, as tokens must be fed into the RNN in order. Another drawback of RNNs is the difficulty of modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Units~\citep{chung2014empirical} or Long Short Term Memory architectures~\citep{hochreiter1997long}. For simple tasks such as text classification, models based on reinforcement learning techniques~\citep{YuLL17} have been proposed to skip irrelevant tokens, both to further address the long-dependency issue and to speed up the procedure. However, it is not clear if such methods can handle complicated tasks such as Q\&A.
The reading comprehension task considered in this paper always needs to deal with long text, as the context paragraphs may be hundreds of words long.
Recently, attempts have been made to replace recurrent networks with full convolution or full attention architectures~\citep{Kim14,gehring2017convolutional,vaswani2017attention,ShenZLJPZ17}.
Those models have been shown to be not only faster than the RNN architectures, but also effective in other tasks, such as text classification, machine translation or sentiment analysis.
To the best of our knowledge, our paper is the first to build a reading comprehension model that is both \textit{fast} and \textit{accurate}, by discarding recurrent networks in favor of feed-forward architectures. Our paper is also the first to mix self-attention and convolutions, which proves to be empirically effective and achieves a significant gain of 2.7 F1. Note that \cite{RaimanM17} recently proposed to accelerate reading comprehension by avoiding bi-directional attention and making computation conditional on the search beams. Nevertheless, their model is still based on RNNs and its accuracy is not competitive, with an EM of 68.4 and F1 of 76.2. \cite{WeissenbornWS17} also tried to build a fast Q\&A model by deleting the context-query attention module. However, it again relies on RNNs and is thus intrinsically slower than ours; the elimination of attention further sacrifices performance (EM 68.4 and F1 77.1).
Data augmentation has also been explored in natural language processing. For example, \cite{ZhangZL15} proposed to enhance the dataset by replacing words with their synonyms and showed its effectiveness in text classification. \cite{RaimanM17} suggested using type swaps to augment the SQuAD dataset, which essentially replaces the words in the original paragraph with others of the same type. While this was shown to improve accuracy, the augmented data has the same syntactic structure as the original data, so it is not sufficiently diverse. \cite{ZhouYWTBZ17} improved the diversity of the SQuAD data by generating more questions. However, as reported by~\cite{WangYWCZ17}, their method did not help improve performance. The data augmentation technique proposed in this paper is based on paraphrasing the sentences by translating the original text back and forth. The major benefit is that it brings more syntactic diversity to the enhanced data.
\section{Other learned models} \label{app:model2and3}
In the Supplemental Material of \cite{jafferis2022traversable}, two additional learned Hamiltonians are studied numerically.
\emph{Model 2}---The first of these, which we refer to as Model 2, is given in Eq.~(S16) of \cite{jafferis2022traversable}:
\begin{equation} \label{eq:model2}
\begin{split}
H = & -0.35 \psi^1 \psi^2 \psi^3 \psi^6 +0.11 \psi^1 \psi^2 \psi^3 \psi^8 -0.17 \psi^1 \psi^2 \psi^4 \psi^7 \\
& -0.67 \psi^1 \psi^3 \psi^5 \psi^7+0.38 \psi^2 \psi^3 \psi^6 \psi^7 - 0.05 \psi^2 \psi^5 \psi^6 \psi^7.
\end{split}
\end{equation}
Model 2 is produced from the same machine-learning procedure as Model 1, i.e.~designed to match the teleportation signal of the $N=10$ SYK model.
The authors claim that Model 2 demonstrates perfect size winding and ``is consistent with other gravitational signatures''.
As noted in \cite{jafferis2022traversable}, Model 2 is not fully commuting.
Nevertheless, we observe that Model 2 becomes fully-commuting if: (i) the two \emph{smallest} terms in Eq.~\eqref{eq:model2} are removed, and (ii) one performs a basis rotation:
\begin{equation}
\begin{split}
\psi^1 & \rightarrow \cos(\theta) \psi^1 + \sin(\theta) \psi^7, \\
\psi^7 & \rightarrow \cos(\theta) \psi^7 - \sin(\theta) \psi^1,
\end{split}
\end{equation}
with $\theta = \tan^{-1}(-0.35/0.38)$.
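This collapse to a fully-commuting form is straightforward to verify numerically. The sketch below (our own check, not code from \cite{jafferis2022traversable}) represents the eight Majoranas by a Jordan--Wigner construction on four qubits, rebuilds the four large terms of Eq.~\eqref{eq:model2}, and confirms that in the rotated basis they reduce to three mutually commuting monomials.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Eight Majoranas on four qubits via Jordan-Wigner: {psi_i, psi_j} = 2 delta_ij.
psi = []
for k in range(4):
    string, pad = [Z] * k, [I2] * (4 - k - 1)
    psi.append(reduce(np.kron, string + [X] + pad))
    psi.append(reduce(np.kron, string + [Y] + pad))

def mono(idx):
    return reduce(np.matmul, [psi[i - 1] for i in idx])

# Model 2 with its two smallest terms dropped.
H = (-0.35 * mono((1, 2, 3, 6)) - 0.17 * mono((1, 2, 4, 7))
     - 0.67 * mono((1, 3, 5, 7)) + 0.38 * mono((2, 3, 6, 7)))

th = np.arctan(-0.35 / 0.38)
r = np.hypot(0.35, 0.38)
p1r = np.cos(th) * psi[0] + np.sin(th) * psi[6]    # rotated psi^1
p7r = np.cos(th) * psi[6] - np.sin(th) * psi[0]    # rotated psi^7

M1 = p7r @ mono((2, 3, 6))
M2 = p1r @ p7r @ mono((2, 4))
M3 = p1r @ p7r @ mono((3, 5))

assert np.allclose(H, -r * M1 - 0.17 * M2 - 0.67 * M3)
for A, B in [(M1, M2), (M1, M3), (M2, M3)]:
    assert np.allclose(A @ B, B @ A)               # fully commuting
\end{verbatim}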
At the timescale of teleportation ($t = 2.8$), the two smallest terms provide relatively small corrections to physical observables.
Thus, Model 2 can be considered weakly perturbed from a fully-commuting limit.
Consistent with this, we find that our main findings regarding Model 1 also apply to Model 2 (Fig.~\ref{fig:model2and3}a-d).
In particular, the individual two-point correlation functions exhibit strong revivals, the teleportation signal does not resemble the SYK model for untrained operators, and the size winding behavior resembles that of a random fully-commuting Hamiltonian (Fig.~\ref{fig:size-winding}c).
In addition, we note that the teleportation signal for the \emph{trained} operators in Model 2 displays a significant revival within the timescale on which it was trained (Fig.~\ref{fig:model2and3}b).
This contrasts with the $N=10$ SYK model and indicates that the training procedure was not fully successful; such disagreement is not shown or commented on in \cite{jafferis2022traversable}.
\emph{Model 3}---The second additional model, which we refer to as Model 3, is given in Eq.~(S17) of~\cite{jafferis2022traversable}:
\begin{equation} \label{eq:model3}
\begin{split}
H &= 0.60 \psi^1 \psi^3 \psi^4 \psi^5 +0.72 \psi^1 \psi^3 \psi^5 \psi^6 +0.49 \psi^1 \psi^5 \psi^6 \psi^9 \\
& +0.49 \psi^1 \psi^5 \psi^7 \psi^8+0.64 \psi^2 \psi^4 \psi^8 \psi^{10} - 0.75 \psi^2 \psi^5 \psi^7 \psi^8 \\
& +0.58 \psi^2 \psi^5 \psi^7 \psi^{10} - 0.53 \psi^2 \psi^7 \psi^8 \psi^{10}.
\end{split}
\end{equation}
Model 3 is produced via a different machine-learning procedure, which is designed to optimize the asymmetry in the teleportation signal between positive and negative couplings.
Unlike Models 1 and 2, Model 3 is not fully-commuting or near fully-commuting.
Referring to the average two-point correlator, the authors demonstrate that ``no periodicities are present despite the small number of terms in the Hamiltonian'' (Fig.~S26 of~\cite{jafferis2022traversable}).
In Fig.~\ref{fig:model2and3}d, we observe that the individual two-point correlators also exhibit thermalizing behavior at long time scales ($t \sim 30$).
This is consistent with Model 3 being non-commuting.
At earlier times, the correlators exhibit oscillations that are smaller than those of Model 1 and 2, but larger than fluctuations in the $N=10$ SYK model.
The teleportation signal for Model 3 exhibits a single-peak structure for nearly all operators, albeit with large variations in peak height (Fig.~\ref{fig:model2and3}e).
The authors note that Model 3 does not exhibit perfect size winding, but rather features a ``consistently large ratio [of phase alignment], suggesting slightly damped size winding''.
Indeed, we find that the phase alignment, $\bar r$, for Model 3 is comparable to that of the $N=10$ SYK model and lower than that of small-size fully-commuting models (Fig.~\ref{fig:size-winding}b).
This is consistent with our observation that perfect phase alignment at small system sizes is a generic feature of fully-commuting Hamiltonians and not of non-commuting Hamiltonians.
We note that only some operators in Model 3 (including the trained operators) exhibit a high degree of linearity $\chi$ (Fig.~\ref{fig:size-winding}c).
In this respect, Model 3 resembles the behavior of fully-commuting or nearly fully-commuting models (including Model 1 and 2) and not the SYK model.
\begin{figure}
\includegraphics[width=0.8\linewidth]{fig_4pt_ij.pdf}
\caption{%
\textbf{(a)} Four-point correlation functions, $F_{ij}(t)$, for Model 1, shown for all pairs of Majorana operators, $i<j\in [1,7]$. \textbf{(b)} The same correlation functions for a specific instance of the $N=10$ SYK model with $J=1.125$ and $i<j\in [1,10]$.
}
\label{fig:4pt}
\end{figure}
\section{Four-point correlators with $i \neq j$}
Scrambling is quantified in~\cite{jafferis2022traversable} via the behavior of the four-point correlation functions, $F_{\textrm{avg}}(t) = \sum_{i=1}^8 F_i(t)$, with $F_i (t) =- \textrm{Re} \left [ \big\langle \left [\psi^i(t), \psi^i(0) \right ]^2 \big\rangle_\beta \right ]$.
We note that such correlation functions, consisting of the same Majorana $\psi^i$ for the time-evolved and static operators, are not the most direct probe of scrambling dynamics, since their initial growth occurs on the same timescale as the decay of two-point correlation functions (i.e.~the thermalization time).
In the SYK model at large system sizes, the initial growth reaches a value of unity and is followed by a slower decay to value $1/2$ on the timescale of the scrambling time; such non-monotonic behavior is evident in the time traces of the $N=10$ SYK model shown in Fig.~\ref{fig:2pt}d.
A more typical probe of scrambling is the four-point correlator, $F_{ij} (t) =- \textrm{Re} \big[ \big\langle \left [\psi^i(t), \psi^j(0) \right ]^2 \big\rangle_\beta \big]$, for different operators, $i \neq j$.
In the SYK model at large system sizes, this correlator decays monotonically from unity to value $1/2$ on the timescale of the scrambling time.
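At these small system sizes, $F_{ij}(t)$ can be evaluated directly by exact diagonalization. A minimal sketch (ours; the Hamiltonian and Majorana matrices are supplied by the caller, e.g.~from a Jordan--Wigner construction as above, and $\langle\cdot\rangle_\beta$ is taken as the standard thermal average):
\begin{verbatim}
import numpy as np

def F_ij(H, psi_i, psi_j, t, beta):
    # F_ij(t) = -Re < [psi_i(t), psi_j(0)]^2 >_beta via exact diagonalization
    w, V = np.linalg.eigh(H)
    rho = (V * np.exp(-beta * w)) @ V.conj().T     # e^{-beta H}, then normalize
    rho /= np.trace(rho)
    U = (V * np.exp(1j * w * t)) @ V.conj().T      # e^{+i H t}
    psi_i_t = U @ psi_i @ U.conj().T               # Heisenberg-evolved psi_i
    comm = psi_i_t @ psi_j - psi_j @ psi_i_t
    return -np.real(np.trace(rho @ comm @ comm))
\end{verbatim}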
\begin{figure*}
\includegraphics[width=\textwidth]{fig-teleport_fixed.pdf}
\caption{%
\textbf{(a)} Mutual information of the teleportation protocol with fixed injection time as a function of the readout time (i.e.~$t_0 = 2.8$ and $t = t_1$). The mutual information for Model 1 and the trained operators, $\psi^1$ and $\psi^2$, is in reasonable agreement with that of multiple instances of the $N=10$ SYK model (grey). \textbf{(b)} The mutual information for Model 1 and all pairs of untrained operators, $\psi^i$ and $\psi^j$ with $i<j\in [2,7]$, exhibits variations and revivals as a function of time. \textbf{(c)} The mutual information for all pairs of operators in the $N=10$ SYK model exhibits a single consistent peak.}
\label{fig:teleport-fixed}
\end{figure*}
In Fig.~\ref{fig:4pt}, we plot the four-point correlation functions, $F_{ij} (t)$ with $i\neq j$, for both Model 1 and the $N=10$ SYK model.
Much like the four-point correlation functions with $i = j$ (i.e.~$F_i(t)$, see Fig.~\ref{fig:2pt}), we find that the four-point correlation functions in Model 1 exhibit strong oscillations in time for all $i \neq j$.
In fact, the oscillations for many pairs of operators have unit amplitude.
In contrast, in the $N=10$ SYK model, all correlation functions exhibit a smooth decay to value $1/2$.
\section{Teleportation at fixed injection time}\label{app:fixed_injection}
As previously discussed, two versions of the teleportation protocol are analyzed in \cite{jafferis2022traversable}: using symmetric injection / readout times and fixed injection time.
In Fig.~\ref{fig:teleport-fixed}, we present results for the latter protocol, for Model 1 and the $N=10$ SYK model.
For Model 1, when the protocol is performed with the pair of operators that were trained on, the mutual information displays a single peak as a function of time.
For other pairs of operators, the mutual information displays an initial peak, whose height varies significantly for different pairs of operators, followed by revivals at later times.
This contrasts with the SYK model, in which the mutual information displays a single consistent peak for all pairs of operators, with small and infrequent fluctuations at late times.
\section{Size-winding metrics} \label{app:sizewinding}
Here, we elaborate on the phase alignment, $\bar r$, and the linear slope metric, $\chi$, which are plotted in Fig.~\ref{fig:size-winding}b and Fig.~\ref{fig:size-winding}c.
\emph{Phase alignment}---We recall that in~\cite{jafferis2022traversable}, the phase alignment is quantified by plotting the ratio, $r_l = \left| \sum_{|P| = l} c_P^2 \right| / \sum_{|P| = l} |c_P|^2$, for different sizes $l$ (Figs.~S14 and~S19 of~\cite{jafferis2022traversable}).
The denominator of this quantity is the operator size distribution, $p(l) = \sum_{|P|=l} |c_P|^2$, which is normalized to one, $\sum_l p(l) = 1$.
To facilitate comparison between different operators and models, we consider the weighted average of $r_l$, $r = \sum_l p(l) \, r_l = \sum_l \left| \sum_{|P| = l} c_P^2 \right| = \sum_l \left| q(l) \right|$.
For a given Hamiltonian, $r$ is lower bounded by the two-point function, $W = \tr( \psi^i(t) \rho^{1/2} \psi^i(t) \rho^{1/2} ) = \sum_P c_P^2$.
We note that this two-point function is constant in time, and therefore the sum of the squared coefficients is also constant in time.
Taking into account this lower bound motivates us to rescale $r$ as $\bar r = \frac{r - W}{1 - W}$, which ranges from zero to one.
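In code, given the (complex) coefficients $c_P$ in the operator-size basis and the size $|P|$ of each string, the rescaled phase alignment is a few lines (a sketch of the definitions above):
\begin{verbatim}
import numpy as np

def phase_alignment(c, sizes):
    # c: complex coefficients c_P; sizes: integer operator sizes |P|
    q = np.array([np.sum(c[sizes == l] ** 2) for l in np.unique(sizes)])
    r = np.sum(np.abs(q))          # r = sum_l |q(l)|
    W = np.real(np.sum(c ** 2))    # two-point function (real), lower bound on r
    return (r - W) / (1 - W)       # rescaled r-bar, between zero and one
\end{verbatim}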
\emph{Linear slope}---We seek to quantify the degree to which the phases of $q(l)$ follow a linear slope with respect to the size $l$.
The fit of a line of slope $\mu$ can be quantified via $C(\mu) = \left| \sum_l q(l) e^{- i \mu l} \right|$.
When deviations from a linear slope are small, this reduces to unity minus a weighted sum of squared errors; when deviations are large, it takes into account the periodicity of the phases.
The best fit, $C^*$, is found by maximizing over $\mu$, $C^* = \max_\mu C(\mu)$.
We define the metric, $\chi$, to interpolate between zero and one as $C^*$ interpolates between its minimum and maximum values.
The maximum value of $C(\mu)$ is given by the weighted average, $r$, of the phase alignment ratio.
The minimum value is lower bounded by the two-point function, $W = C(\mu = 0)$.
In addition, at small system sizes it is relevant to consider a second lower bound, corresponding to fitting a line between the two coefficients, $q(l_1)$ and $q(l_2)$, with the largest magnitude.
This consideration is necessary to avoid concluding that functions $q(l)$ with support on only two values of $l$ have non-trivial size winding.
This fit produces a $C$ of value at least $M = | q(l_1) | + | q(l_2) | - (r-| q(l_1) | - | q(l_2) |) = 2 | q(l_1) | + 2| q(l_2) |-r$.
We thus define the metric,
\begin{equation}
\chi = \frac{C^* - L}{r-L},
\end{equation}
where $L$ is the larger of the two lower bounds, $L = \max(W,M)$.
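Since the sizes $l$ are integers, $C(\mu)$ is periodic and a grid search over $\mu\in[0,2\pi)$ suffices; a sketch of the full metric:
\begin{verbatim}
import numpy as np

def linearity(q_l, ls):
    # q_l: complex q(l) values; ls: the corresponding integer sizes l
    C = lambda mu: np.abs(np.sum(q_l * np.exp(-1j * mu * np.asarray(ls))))
    C_star = max(C(mu) for mu in np.linspace(0, 2 * np.pi, 4001))
    r = np.sum(np.abs(q_l))           # maximum attainable value of C
    W = C(0.0)                        # two-point-function lower bound
    a, b = np.sort(np.abs(q_l))[-2:]  # two largest |q(l)|
    M = 2 * a + 2 * b - r             # two-coefficient-fit lower bound
    L = max(W, M)
    return (C_star - L) / (r - L)
\end{verbatim}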
\section{Other fully-commuting models} \label{app:scaling}
Here we include details on the random fully-commuting models presented in Fig.~\ref{fig:size-winding}. For all random models in Fig.~\ref{fig:size-winding}, we take $\beta = 4$ and $t = 2.8$, identical to Model 1.
\emph{Majorana models}---In Model 1 with randomized coefficients, we draw each coefficient from a normal distribution with mean zero and standard deviation equal to the root-mean-square of the coefficients of Model 1.
In Model 1 with randomized terms and coefficients, we generate five random fully-commuting terms by successively drawing random four-Majorana terms (from $N=7$ total Majorana operators) and keeping each term only if it commutes with all terms already kept.
\emph{Ising models}---We consider random all-to-all Ising models with Hamiltonian, $H = \frac{1}{\sqrt{N}} \sum_{i < j} J_{ij} Z^i Z^j$.
The coefficients $J_{ij}$ are drawn from a normal distribution with mean zero and standard deviation $J = 0.17$.
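For reference, this ensemble can be generated as follows (a sketch; $Z^i$ is the Pauli-$Z$ operator on site $i$, and the normalization follows the Hamiltonian above):
\begin{verbatim}
import numpy as np
from functools import reduce

def random_ising(N, J=0.17, seed=0):
    # H = (1/sqrt(N)) sum_{i<j} J_ij Z^i Z^j, with J_ij ~ Normal(0, J^2)
    rng = np.random.default_rng(seed)
    Zd, I2 = np.diag([1.0, -1.0]), np.eye(2)
    def zz(i, j):
        return reduce(np.kron, [Zd if k in (i, j) else I2 for k in range(N)])
    H = np.zeros((2 ** N, 2 ** N))
    for i in range(N):
        for j in range(i + 1, N):
            H += rng.normal(0.0, J) * zz(i, j) / np.sqrt(N)
    return H
\end{verbatim}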
\emph{Finite-size scaling}---To explore whether the size winding behavior of fully-commuting models persists at larger system sizes, in Fig.~\ref{fig:scaling} we plot the phase alignment, $\bar r$, for random all-to-all Ising models as a function of the system size $N$.
We focus on Ising models to avoid subtleties with scaling random fully-commuting Majorana models to larger system sizes (namely, there is no canonical choice of which fully-commuting terms to include).
We scale the evolution time $t$ with the square root of the system size, $t = 2.8 \sqrt{N / 4}$, to ensure that operators grow to the same fraction of the system size for each $N$.
We find that the phase alignment, $\bar r$, exhibits a decreasing trend with the system size.
\begin{figure}
\includegraphics[width=0.85\columnwidth]{fig_ising_scaling.pdf}
\caption{Phase alignment, $\bar r$, of the random all-to-all Ising model as a function of the system size $N \in [4,8]$, with $J = 0.17$ and $\beta = 4$.
Three disorder realizations are shown at each system size, with small horizontal offsets for clarity.
}
\label{fig:scaling}
\end{figure}
\section{Missing parameters}
Several parameters are omitted from \cite{jafferis2022traversable} which are necessary for reproducing the numerical results shown.
For the ease of future studies, we list these parameters below:
\begin{itemize}
\item In plots showing mutual information (e.g.~Figs.~1a, 3a, 3e of \cite{jafferis2022traversable}), the mutual information is divided by $\log(2)$.
\item For the teleportation plots (e.g.~Figs.~1a, 3a, 3e of \cite{jafferis2022traversable}), the two-sided coupling is $V = \frac{1}{qN} \sum_i \psi_L^i \psi_R^i$ with $N=10$ and $q=4$, for both Model 1 and the SYK models. The teleportation protocol is performed using $\psi^1$ and $\psi^2$.
\item In Fig.~3f of \cite{jafferis2022traversable}, the instantaneous coupling plot uses $\mu = -12$, however with $V$ normalized using $N=8$. This is distinct from the other plots in Figs.~1-3, which normalize $V$ with $N=10$. The Trotterized coupling plot uses $\mu = -18$ and $N=8$, and is Trotterized into three time steps, $t = -1.6,0,1.6$.
\item As previously emphasized, the correlation functions in Fig.~3b of \cite{jafferis2022traversable} are averaged over all local Majorana operators (for Model 1, this consists of $8$ operators: the 7 operators in Eq.~(\ref{eq3}) and an additional uncoupled operator). Also, the authors plot the real part of two- and four-point correlation functions.
To summarize, the plots correspond to $G_\textrm{avg}(t) = \frac 1 {8} \sum_{i=1}^8 G_i(t)$ with $G_i(t) = \textrm{Re}[ \left<\psi^i(t) \psi^i(0) \right>_\beta ]$ and $F_\textrm{avg} (t) = \sum_{i=1}^8 F_i (t)$ with $F_i (t) =- \textrm{Re} [ \big\langle \left [\psi^i(t), \psi^i(0) \right ]^2 \big\rangle_\beta ]$.
\item We are not able to exactly replicate the SYK dynamics shown in Figs.~1 and 3b in \cite{jafferis2022traversable}. We find qualitatively good agreement using an ensemble of $N=10$ SYK models with $J = 1.25$ for Fig.~1 in \cite{jafferis2022traversable} and $J = 1.125$ for Fig.~3b in \cite{jafferis2022traversable}.
\end{itemize}
\end{document}
\section{Introduction}\label{intro}
Since the early years after \citet{ss1973} introduced thin disks, they have been known \citep{ss1976, piran1978} to be fraught with thermal instabilities. The instability occurs in the inner regions of the disk, where the pressure is dominated by radiation pressure and the opacity is mostly due to Thomson scattering, resulting in a much stronger temperature dependence in the heating of the disk compared to its cooling. \\
Decades later, the resolution of thermal instabilities remains one of the major outstanding problems in understanding thin and slim disks. One of the major uncertainties in the Shakura \& Sunyaev (SS) $\alpha$ disk model is the assumption
that the viscous stress is proportional to the total pressure. Early attempts to restore thermal (and viscous) stability, such as the works of \citet{sakimoto, stella, merloni}, explored the possibility that the viscous stress might instead be proportional only to the gas pressure. These disks were called $\beta$ disks, where $t_{r\phi}=\beta\, p_{gas}$. Recent numerical simulations \citep{jiang2013,bran1995,stone1996, mishra2016, 2016MNRAS.459.4397S, 2016MNRAS.462..960S} observe thermal instabilities, whose onset causes the disk to expand or collapse on the time scale of only a few orbits. These local simulations do not see evidence for such $\beta$ disks. \\
Radiative MHD simulations such as those in \citep{2016MNRAS.459.4397S, 2016MNRAS.462..960S} find stable radiatively efficient and strongly radiation pressure dominated disks, in the presence of strong magnetic fields. This has led to the claim that strong magnetic fields could stabilize disks against thermal instabilities \citep{2016MNRAS.459.4397S, 2016MNRAS.462..960S, begelman07, oda09}. If there is not enough magnetic flux, however, the instability once again sets in and a different means for stabilization must be sought. An iron opacity bump has also been suggested by \citet{jiang2016} as a means to postpone (but not avoid) thermal instabilities. In this paper, we present Accretion in Radiative Equipartition (AiRE) disks as an alternative solution to the thermal instability problem in thin and slim disks.
\section{Slim Disk Equations}
In slim (and thin) accretion disks, the main approximations are that the disk is axisymmetric and stationary, and that $h<r$. The background metric is assumed to be Kerr. \\
Let us define the following expressions involving the black hole spin:
\begin{align}
\Delta&=M^4 (r_*^2-2r_*+a_*^2),\\
A&=M^4 (r_*^4+r_*^2a_*^2+2r_*a_*^2),\nonumber\\
\mathcal{C}&=1-3r_*^{-1}+2a_*r_*^{-3/2},\nonumber\\
\mathcal{D}&=1-2r_*^{-1}+2a_*^2r_*^{-2},\nonumber\\
\mathcal{H}&=1-4a_*r_*^{-3/2}+3a_*^2r_*^{-2}\nonumber,
\end{align}
where $a_*=a/M$ and $r_*=r/M$. When quantities have been vertically integrated, a polytropic equation of state has been assumed: $p=\mathcal{K}\ \rho^{1+1/N}$ where $N=3$, and $\mathcal{K}=const$. In the notation that follows, $\Sigma, T_c,$ and $P$ are vertically integrated quantities (whereas $p$, for example, is not). Unless otherwise noted, we use geometrized units, $G=c=1$. $\dot{M}$ is the mass accretion rate and we use the convention that $\dot{M}_{\rm Edd}=16 L_{\rm Edd}/c^2$ \footnote{A common alternative definition of the Eddington mass accretion rate is $\dot{M}_{\rm Edd}=L_{\rm Edd}/c^2$, but here we follow the convention of \citet{sadowski11}.}.\\
Following the notation of \citet{sadowski11, abram11}, the relativistic equations describing slim disks are:
\begin{enumerate}
\item Conservation of (rest) Mass:
\begin{equation}
\dot{M}=-2\pi\Sigma\sqrt{\Delta}\frac{v}{\sqrt{1-v^2}},
\label{mc}
\end{equation}
where $v$ is the radial velocity and $\Sigma$ is the surface density.\\
\item Conservation of radial momentum:
\begin{equation}
\frac{v}{1-v^2}\frac{dv}{dr}=\frac{\mathcal{A}}{r}-\frac{1}{\Sigma}\frac{dP}{dr},
\label{rmc}
\end{equation}
where
\begin{equation}
\mathcal{A}=-\frac{MA}{r^3\Delta \Omega_k^+ \Omega_k^-}\frac{(\Omega-\Omega_k^+)(\Omega-\Omega_k^-)}{1-\tilde{\Omega}^2\tilde{r}^2},
\end{equation}
$\tilde{r}=A/(r^2\sqrt{\Delta})$, and $\Omega$ is the angular velocity with respect to the stationary observer, while $\tilde{\Omega}=\Omega-\frac{2Mar}{A}$ is the angular velocity with respect to the inertial observer. $\Omega_k^\pm=\pm \sqrt{M}/(r^{3/2}\pm a\sqrt{M})$ are the angular velocities of co-rotating and counter-rotating Keplerian (or circular geodesic) orbits. \\
\item Conservation of angular momentum:
\begin{equation}
\frac{\dot{M}}{2\pi} (\mathcal{L}-\mathcal{L}_{in})=\frac{\sqrt{A \Delta}\gamma}{r}\alpha P,
\label{cam}
\end{equation}
where $\mathcal{L}$ is the angular momentum per unit mass, $\mathcal{L}_{in}$ is an integration constant (approaching the angular momentum at the innermost stable circular orbit (ISCO), in the thin disk limit), and
\begin{equation}
\gamma=\sqrt{\frac{1}{1-v^2}+\frac{\mathcal{L}^2r^2}{A}},
\label{lorentz}
\end{equation}
is the Lorentz factor. We adopt the $\alpha$ viscosity prescription, where the $t_{r\phi}$ component (in the co-moving frame of the fluid) of the viscous stress tensor can be expressed as $t_{r\phi}=-\alpha\,p$ \citep{ss1973}. The relation between $\Omega$ and $\mathcal{L}$ is
\begin{equation}
\Omega=\frac{2 a M r}{ A}+\frac{ r^3 \mathcal{L} \sqrt{\Delta}}{A^{3/2} \gamma}.
\end{equation}
\item Vertical equilibrium:
\begin{equation}
h^2\Omega_\perp^2=9\frac{P}{\Sigma},
\label{ve}
\end{equation}
where $ \Omega_\perp=\sqrt{\frac{M}{r^3}\frac{\mathcal{H}}{\mathcal{C}}}$ is the vertical epicyclic frequency, and $h$ is the half disk thickness.
\item Conservation of energy:
\begin{equation}
\mathcal{Q}^{\rm adv}=\mathcal{Q}^{\rm vis}-\mathcal{Q}^{\rm rad}.
\label{en_cons}
\end{equation}
Here, the advective cooling is defined as:
\begin{equation}
\mathcal{Q}^{\rm adv}=\frac{1}{r}\frac{d}{dr}\left[r u^r(E+P)\right]-u^r\frac{dP}{dr}-\int^{h}_{-h} u^z\frac{\partial p}{\partial z}dz,
\end{equation}
where $E$ is the vertically integrated energy density $E=\int^h_{-h}(\frac{3}{2}p_{\rm gas}+3 p_{\rm rad})\ dz$. Performing the $z$ integral in ${Q}^{\rm adv}$, we get
\begin{equation}
\begin{split}
\mathcal{Q}^{\rm adv}= \frac{\dot{M}}{2\pi r^2}& \left(\frac{}{}\right.\eta_3\frac{P}{\Sigma}\frac{d\ln P}{d\ln r}-(1+\eta_3)\frac{P}{\Sigma}\frac{d\ln \Sigma}{d\ln r}+\\
& \eta_3 \frac{P}{\Sigma}\frac{d\ln \eta_3}{d\ln r}+\Omega_\perp^2\eta_4 \frac{d\ln \eta_4}{d\ln r} \left. \frac{}{}\right),
\end{split}
\end{equation}
where
\begin{align}
\eta_1&=\frac{128}{315}\,h,\\
\eta_2&=\frac{8}{9},\\
\eta_3&=\frac{1}{P} \left(\frac{1}{5/3-1}\frac{k}{\mu m_p}\frac{8}{9}\Sigma\, T_c+\frac{256}{315}a_{\rm rad}\,T_c^4\,h\right),\\
\eta_4&=\frac{1}{18}\, h^2,
\end{align}
and where $k$ is the Boltzmann constant, $m_p$ is the proton mass, and $\mu=0.62$ is the mean molecular weight for solar abundance.
The viscous heating $\mathcal{Q}^{\rm vis}$ is fixed by the $\alpha$-prescription:
\begin{equation}
\mathcal{Q}^{\rm vis}=-\alpha P \frac{A\gamma^2}{r^3}\frac{d\Omega}{dr},
\label{vis}
\end{equation}
while the radiative cooling $\mathcal{Q}^{\rm rad}$ is:
\begin{equation}
\mathcal{Q}^{\rm rad} = \frac{64\sigma T_c^4}{3 \Sigma\kappa},
\label{rad}
\end{equation}
where $T_c$ is the temperature at the equatorial plane, and the opacity coefficient $\kappa$ is given by Kramer's formula \citep{sadowski11}:
\begin{equation}
\kappa=\kappa_{es}+\kappa_{ff}=0.34+6.4\times 10^{22}\rho T^{-7/2}
\label{kap}
\end{equation}
in cgs units, assuming solar abundance, where $\rho\sim\frac{\Sigma}{2 h}$ and $T\sim T_c$.
The factor $\frac{64}{3}$ in front of the radiation term in (\ref{rad}) is somewhat arbitrary, as it depends on the details of the assumed vertical structure and radiative transfer, and thus its exact value should be taken with a grain of salt\footnote{We use the coefficient appearing in \citet{sadowski11}, although in other references, such as \cite{sadowski09} it differs by a factor of 2.}.\\
\item The vertically integrated equation of state:
\begin{equation}
P=\eta_2 \frac{k}{\mu m_p}\Sigma\, T_c+\frac{2}{3}\,\eta_1\, a_{\rm rad}\, T_c^4,
\label{pressure}
\end{equation}
where the first term is the gas pressure $P_{\rm gas}=\eta_2 \frac{k}{\mu m_p}\Sigma\, T_c$, and the second term is the radiation pressure $P_{\rm rad}=\frac{2}{3}\,\eta_1\, a_{\rm rad}\, T_c^4$.
\end{enumerate}
In the thin disk (SS) limit of the above equations, Keplerian orbits are assumed ($\Omega \rightarrow \Omega_k$), while the disk becomes radiatively efficient ($\mathcal{Q}^{\rm adv} \rightarrow 0$). The system then simplifies to an algebraic one where closed form solutions may be found in three regimes: 1) An outer region where $P_{\rm gas}$ and $\kappa_{ff}$ dominate, 2) a middle region where $P_{\rm gas}$ and $\kappa_{es}$ dominate, and 3) an inner region where $P_{\rm rad}$ and $\kappa_{es}$ dominate.
\section{Thermal Instability}
The condition for thermal instability can be written as \citep{pringle}:
\begin{equation}
\frac{\partial}{\partial h} (\mathcal{Q}^{\rm vis}-\mathcal{Q}^{\rm rad})\Bigr|_\Sigma >0.
\end{equation}
For stellar mass black holes, the slim disks in the previous section are thermally unstable above $\dot{M}/\dot{M}_{Edd}\sim 0.06$, in their radiation pressure dominated regimes. For supermassive black holes, this limit is lower (see Figure \ref{radii}).
The thermal instability sets in because in the innermost regions of the disk we have $p \sim p_{\rm rad}$ and $\kappa \sim$ const., leading to
\begin{equation}
\mathcal{Q}^{\rm vis}=-\frac{3}{2} \Omega ~t_{r\phi} h \propto p~h \propto h^2,
\end{equation}
where we used vertical equilibrium $p\propto\Omega^2\, \Sigma\, h$ and assumed that $\Sigma$ and $\alpha$ are constant, while
\begin{equation}
\mathcal{Q}^{\rm rad}=16 \frac{c~ p_{\rm rad}}{\kappa \Sigma}\propto h.
\end{equation}
Since $h\propto p \propto T_c^4$, a temperature increase would change the rate of heating much faster than that of cooling, making the disk thermally unstable.\\
There is also a relation between the shape of the $T_c(\Sigma)$ curve and the stability of the disk. The equilibrium states of the disk can be described by the $T_c(\Sigma)$ solutions, at a fixed radius, to $\mathcal{Q}^{adv}=0$. This $T_c(\Sigma)$ relation is sometimes referred to as an S-curve \citep{abrams, lasota} because of the shape it takes in the models and temperature ranges often considered. Points on this curve with a positive slope correspond to stable equilibria, while points on this curve with a negative slope correspond to unstable equilibria \citep{lasota}. In the former case small perturbations to the temperature bring the state back to equilibrium, while in the latter case these perturbations lead to runaway heating or cooling.\\
\begin{figure}[h]
\centering
\includegraphics[width=1.1\textwidth]{f1}
\caption{Radius at which $P_{\rm rad}=P_{\rm gas}$.}
\label{radii}
\end{figure}
\section{Accretion in Radiative Equipartition}
Figure \ref{radii} shows the radii below which radiation pressure $P_{\rm rad}$ starts to dominate over gas pressure $P_{\rm gas}$ for black holes of different masses and $\alpha$'s. We shall then assume that at this point in the accretion flow, and inwards, the onset of thermal instability creates an inhomogeneous disk structure, where photons can effectively escape faster than they would through diffusion in a smooth disk. In other words, we hypothesize that the effective optical depth (or opacity) of the disk drops due to the instability, or we have more efficient cooling. This could be e.g. due to rising photon bubbles, low density funnels, or other inhomogeneous structures that could invalidate \eqref{rad}.\\
We assume that this efficient cooling can cool the disk down to pressure equipartition. However, the cooling would not drive the disk to become gas pressure dominated again, as the thermal instability responsible for faster cooling ceases in this regime and the disk heats back up within a viscous time. Therefore, the state of marginal thermal instability, or pressure equipartition, is a stable fixed point (see Sec. \ref{cooling} below for details). Indeed, since disks at high Eddington ratios are observed in a steady state, we conjecture that the cooling is efficient enough to keep the disk at equipartition i.e. $P_{\rm gas} \approx P_{\rm rad}$\footnote{More generally we can assume $P_{\rm gas} \approx \zeta P_{\rm rad}$. For example, \citet{sadowski11} finds $\zeta = 2/3$ for the onset of thermal instability, but here we consider $\zeta=1$ for simplicity. }. \\
We shall call these flows Accretion in Radiative Equipartition (or AiRE) disks. This equipartition tames the temperature dependence of the disk in the inner region. Within the radii of Figure \ref{radii}, AiRE disks have different properties from slim and thin disks. In this model, the equipartition condition replaces the radiative energy loss condition \eqref{rad}, which is only valid for atmospheres in vertical equilibrium with planar symmetry. More generally, many MHD/fluid instabilities occur when there are different fluid components that dominate inertia and (isotropic or anisotropic) stress. Some examples are Magneto-Rotational Instability (MRI), convective instability, Rayleigh-Taylor instability, Parker instability, and potentially whatever sets the maximum mass of main-sequence stars. In all these, the instability saturates at equipartition, where stress and inertia are (somewhat) equally distributed in dominant components. For example, in gas-dominated disks, MRI saturates when $P_{\rm mag} = {\cal O}(0.1) \times P_{\rm gas}$ \citep{hawley}. \\
\subsection{Efficient Cooling and Pressure Equipartition}\label{cooling}
Let us now consider how efficient cooling can lead to pressure equipartition.
\begin{figure}[p]
\centering
\begin{multicols}{2}
\hbox{ \hspace{-.5 cm}\includegraphics[width=.89 \linewidth]{f2}}\par
\hbox{ \hspace{-1 cm} \includegraphics[width=1.2 \linewidth]{f3}}\par
\end{multicols}
\begin{multicols}{2}
\hbox{\hspace{-.5 cm}\includegraphics[width=.9 \linewidth]{f4}}\par
\hbox{ \hspace{-1 cm} \includegraphics[width=1.18 \linewidth]{f5}}\par
\end{multicols}
\caption{Thermal equilibria at different fixed radii: $r=15\,M, \, 20\,M,\, 30\,M, \,40\,M$ from top-left to bottom-right. The dashed curve $L$ represents solutions with conserved angular momentum. The intersection of this $L$ curve with the standard model ($n=0$) is marked with a circle and the intersection of this curve with the AiRE disk model ($n>0$) is marked with a star. $\dot{M}/\dot{M}_{Edd}=0.1$ and $\alpha=0.1$.}
\label{thermequib}
\end{figure}
Since the development of inhomogeneities that (could) lead to more efficient cooling is driven by thermal instability, we can write a phenomenological toy model for cooling rate as:
\begin{eqnarray}
\tilde{\mathcal{Q}}^{\rm rad}\equiv{\mathcal{Q}^{\rm rad}}\left(1+\text{Re}[\sqrt{2-5\beta}]\frac{t_{vis}}{t_{th}}\right)^{n} \nonumber\\ = \frac{64\sigma T_c^4}{3 \Sigma\kappa} \left(1+\text{Re}[\sqrt{2-5\beta}]\frac{t_{vis}}{t_{th}}\right)^{n}
\label{rad_toy}
\end{eqnarray}
where $\beta=\frac{P_{gas}}{P_{gas}+P_{rad}}$, $t_{th}$ and $t_{vis}$ are the thermal and viscous timescales respectively, and $n>0$ is a free parameter. $n=0$ corresponds to the standard disk model, for which $\tilde{\mathcal{Q}}^{\rm rad}\equiv{\mathcal{Q}^{\rm rad}}$ from Eq. (\ref{rad}). As the rate of thermal instability is $\lambda_{th} \sim \text{Re}[\sqrt{2-5\beta}]t^{-1}_{th}$, which operates over an accretion/viscous time $\sim t_{vis}$, we expect the cooling to be roughly enhanced by $(\lambda_{th}t_{vis})^n$ with $n= {\cal O}(1)$. Therefore, Eq. (\ref{rad_toy}) gives a reasonable toy model for the instability-driven cooling. \\
Figure \ref{thermequib} shows the thermal equilibria solutions to $\tilde{\mathcal{Q}}^{adv}\equiv\mathcal{Q}^{vis}-\tilde{\mathcal{Q}}^{rad}=0$ at different fixed radii ($r=15\,M, \, 20\,M,\, 30\,M, \,40\,M$ from top-left to bottom-right) for $n=0, \, 0.05,\, 0.1,\,$ and $1$. The dashed curve $L$ represents solutions with conserved angular momentum. The intersection of this $L$ curve with each equilibrium curve picks out an equilibrium point we are interested in. The intersection with the standard disk model is marked with a circle and the intersection with the AiRE disk model is marked with a star. At larger radii these intersections are closer to one another, but as we move inwards in the disk, the AiRE disk gives us different equilibria compared to the standard model. The slope at the circle is negative, corresponding to an unstable equilibrium, while the slope at the star is positive, corresponding to a stable equilibrium. This confirms our expectation that efficient cooling (\ref{rad_toy}), driven by thermal instability, leads to radiative pressure equipartition $P_{\rm gas} \approx P_{\rm rad}$, which is thermally stable. \\
We think that local shearing box simulations such as \citet{jiang2013} have not yet seen this pressure equipartition realized, either due to insufficient running time or more likely due to the limited box size. Most of the runs in these simulations show either expanding or collapsing vertical height. However, in a big enough box, both should be happening in different places, and this could lead to the inhomogeneities we have just described.\\
\citet{mishra2016} also see thermal instability in simulations of initially radiation-pressure dominated thin disks. They argue that a comparison of their evolution with the relevant thin-disk thermal equilibrium curve suggests that their disk may be headed for the thermally stable, gas-pressure-dominated branch, meaning that it is moving towards equipartition. This supports our conjecture. Their simulations, however, had to be terminated before equipartition could occur because they reached a point where they could no longer resolve the disk.
\subsection{AiRE Disk Equations}
Using $P_{\rm gas} \approx P_{\rm rad}$, which for AiRE disks effectively replaces Eq. (\ref{rad}) in the slim disk model, the equation of mass conservation \eqref{mc} and vertical equilibrium \eqref{ve}, $T_c$ and $v$ can be related:
\begin{equation}
\dot{M} \approx-2 \pi \sqrt{\Delta} v\frac{\sqrt{18 k/\mu m_p} ~ \xi~ {T_c}^{7/2}}{\Omega_\perp}
\end{equation}
where $\xi\equiv\frac{2}{3}\times \frac{128}{315}\times \frac{a_{\rm rad}}{\eta_2 k/\mu m_p}$.\\
Thus, with the assumption of equipartition of gas pressure with radiation pressure, the equations of the previous section can be combined into one ordinary differential equation, namely the equation of radial momentum conservation \eqref{rmc}. This equation can be rewritten in the form
\begin{equation}
\frac{d v}{d r}=\frac{\mathcal{N}(r, v)}{\mathcal{D}(r, v)}
\label{ode},
\end{equation}
where
\begin{equation}
\mathcal{N}(r, v)=\frac{\mathcal{A}}{r}
\end{equation}
and
\begin{equation}
\mathcal{D}(r, v)\approx v-1.3 \frac{k}{v\, \mu m_{p}} \left(-\frac{\eta_2 \dot{M} \Omega_\perp \sqrt{\frac{k}{\mu m_p}}}{a_{rad}v \sqrt{\Delta } }\right)^{2/7}.
\end{equation}
Figures \ref{toomres} and \ref{tcs} show the (dimensionless) Toomre parameter and central temperature for AiRE disks in comparison with SS disks, using the Keplerian approximation ($\Omega \rightarrow \Omega_k$) for different non-spinning black hole masses with $\dot{M}/\dot{M}_{Edd}=0.1$ and $\alpha=0.01$. Work is in progress to find the spectrum of the AiRE disks and compare it to the SS spectrum \citep{ykyna}. It would be interesting to see if the AiRE disk spectrum may resolve some of the discrepancies that come from using the SS spectrum, such as in the determination of spins of black holes \citep{abram11}.\\
The thin disk solution in the outer region is used to set the boundary condition far outside the disk. In the innermost region, the flow must continuously make a transition from subsonic to supersonic flow. This occurs at a radius called the sonic point. The denominator $\mathcal{D}$ in Equation \eqref{ode} vanishes at the sonic point, making the equation singular, unless the numerator $\mathcal{N}$ also vanishes at this point. Thus at the sonic point we must have the inner boundary condition:
\begin{equation}
\mathcal{N}\Bigr|_{r=r_{sonic}}=\mathcal{D}\Bigr|_{r=r_{sonic}}=0.
\label{nd0}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{f6}
\caption{Toomre parameter of AiRE disks and SS disks for different black hole masses. $\dot{M}/\dot{M}_{Edd}=0.1$, $a=0$ and $\alpha=0.01$.}
\label{toomres}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{f7}
\caption{Central Temperature of AiRE disks and SS disks for different black hole masses. $\dot{M}/\dot{M}_{Edd}=0.1$, $a=0$ and $\alpha=0.01$.}
\label{tcs}
\end{figure}
The value of the angular momentum constant $\mathcal{L}_{in}$ must be chosen carefully to achieve this. If we call the value of $\mathcal{L}_{in}$ which achieves this $\mathcal{L}_{in}^t$, then for smaller values of this constant $\mathcal{L}_{in}<\mathcal{L}_{in}^t$, $\mathcal{N}$ will vanish but not $\mathcal{D}$ (there will not be a transition from subsonic to supersonic flow), while for greater values of this constant $\mathcal{L}_{in}>\mathcal{L}_{in}^t$, $\mathcal{D}$ will vanish but not $\mathcal{N}$ (there will be a singularity).\\
The radial momentum equation \eqref{ode} may be solved by the shooting method, the relaxation method, or a combination of the two \citep{PressNR}. The topology of $v$ differs for solutions with $\mathcal{L}_{in}>\mathcal{L}_{in}^t$ and $\mathcal{L}_{in}<\mathcal{L}_{in}^t$, and this topology change is key to finding the solution via the shooting method. Figure \ref{vs} shows radial velocity profiles for different values of $\mathcal{L}_{in}$ for $M=10\,M_{\odot}$, $\dot{M}/\dot{M}_{Edd}=0.1$, $a=0$ and $\alpha=0.01$. Solutions with a minimum have $\mathcal{L}_{in}<\mathcal{L}_{in}^t$, whereas the solutions that diverge have $\mathcal{L}_{in}>\mathcal{L}_{in}^t$. The solution with the desired inner boundary condition \eqref{nd0} lies at the transition between these two branches of solutions.\\
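Schematically, the shooting step exploits this topology change: integrate Eq.~\eqref{ode} inward from the outer thin-disk boundary, classify the outcome, and bisect on $\mathcal{L}_{in}$. The sketch below is illustrative only; \texttt{rhs(r, v, L\_in)} is a placeholder for the right-hand side $\mathcal{N}/\mathcal{D}$ with the remaining disk variables eliminated, and the classification relies on solutions with $\mathcal{L}_{in}>\mathcal{L}_{in}^t$ failing at the vanishing denominator before reaching the inner radius.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def find_L_in(rhs, L_lo, L_hi, r_out, v_out, r_in, tol=1e-10):
    # Bisect on L_in between the two solution topologies of the ODE.
    def is_too_small(L_in):
        sol = solve_ivp(rhs, (r_out, r_in), [v_out], args=(L_in,),
                        max_step=1e-2)
        # smooth integration all the way in => subsonic branch, L_in < L_in^t;
        # blow-up at D -> 0 => L_in > L_in^t
        return sol.success and np.isfinite(sol.y).all()
    while L_hi - L_lo > tol:
        mid = 0.5 * (L_lo + L_hi)
        L_lo, L_hi = (mid, L_hi) if is_too_small(mid) else (L_lo, mid)
    return 0.5 * (L_lo + L_hi)
\end{verbatim}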
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{f8}
\caption{Radial velocity profiles for different values of $\mathcal{L}_{in}$. Solutions with a minimum have $\mathcal{L}_{in}<\mathcal{L}_{in}^t$. $M=10\,M_{\odot}$, $\dot{M}/\dot{M}_{Edd}=0.1$, $a=0$ and $\alpha=0.01$.}
\label{vs}
\end{figure}
An additional challenge in this problem is that the location of the sonic point is not known in advance. This can be handled by treating the sonic point as a \emph{free boundary} \citep{PressNR}, when using the relaxation method.
\section{The Nature of the Sonic Point}
\begin{figure}
\centering
\includegraphics[width=.67\textwidth]{f9}
\caption{Phase portrait for $\dot{M}/\dot{M}_{Edd}=0.12,\, M=10 M_{\odot},\, a=0,\, \alpha=0.2$. There are nodal sonic points at $r\simeq6.01$ and $r\simeq5.99$. The solution with Keplerian outer boundary conditions (in green) goes through the nodal point.}
\label{nod}
\vspace{.8cm}
\hspace{.3cm}\includegraphics[width=.68\textwidth]{f10}
\caption{Phase portrait for $\dot{M}/\dot{M}_{Edd}=0.1,\, M=10 M_{\odot},\, a=0,\, \alpha=0.01$. There is a stable saddle point at $r \simeq 5.85$ and an unphysical spiral point at $r \simeq 6.15$. The solution with Keplerian outer boundary conditions (in green) goes through the saddle point.}
\label{sadspir}
\end{figure}
Once we have found the solution with the correct inner boundary condition, we can study the nature of the sonic point by considering the Jacobian matrix
\begin{equation}
\mathcal{J}=\begin{pmatrix}
\frac{\partial\mathcal{D}}{\partial r} & \frac{\partial\mathcal{D}}{\partial v} \\
\frac{\partial\mathcal{N}}{\partial r} & \frac{\partial\mathcal{N}}{\partial v}
\end{pmatrix}.
\label{jac}
\end{equation}
The relative signs of the eigenvalues of this matrix, at the sonic point, characterize the sonic point. If the eigenvalues have the same sign, the sonic point is a nodal point (Figure \ref{nod}); if they have opposite signs, it is a saddle point (Figure \ref{sadspir}); and if they are complex, the sonic point is a spiral (unphysical) point (Figure \ref{sadspir}). Given that perturbations can only propagate downstream beyond the sonic point, we conjecture that saddle type sonic points correspond to stable disk configurations, while nodal points correspond to unstable ones. To see this, note that small perturbations inside nodal\footnote{There are two ways of going through a nodal point: in the fast direction (Figure \ref{nodfastpert}) and in the slow direction (Figure \ref{nodslowpert}). Our argument may only hold for passing through the nodal point in the slow direction, as the perturbations are smooth only for this direction.} sonic points grow as they get dragged deeper into the black hole, while small perturbations inside saddle sonic points shrink (see Figures \ref{sadpert}-\ref{nodslowpert}). Other authors have made similar arguments about the instability of nodal sonic points \citep[e.g.,][]{kato1,kato2}. \\
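In practice this classification is a short eigenvalue test on \eqref{jac}; a sketch:
\begin{verbatim}
import numpy as np

def classify_sonic_point(J, tol=1e-12):
    # J: 2x2 Jacobian of (D, N) with respect to (r, v) at the sonic point
    ev = np.linalg.eigvals(J)
    if np.max(np.abs(np.imag(ev))) > tol:
        return "spiral (unphysical)"
    return "saddle" if np.prod(np.real(ev)) < 0 else "nodal"
\end{verbatim}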
\begin{figure}[H]
\centering
\begin{multicols}{2}
\hbox{ \hspace{-.2 cm}\includegraphics[width=1. \linewidth]{f11}}\par
\caption{Evolution of perturbations to a steady state solution going through a saddle critical point. The steady state solution is the thick black line $ab$. As perturbations evolve $aBAb\rightarrow aB'A'b\rightarrow aB''A''b ...$, their amplitude shrinks.}
\label{sadpert}
\hbox{ \hspace{0 cm} \includegraphics[width=.92 \linewidth]{f12}}\par
\caption{Evolution of perturbations to a steady state solution going through a nodal critical point in the fast direction. The steady state solution is the thick black line $ab$. As perturbations evolve $aBAb\rightarrow aB'A'b\rightarrow aB''A''b ...$, their amplitude grows.}
\label{nodfastpert}
\end{multicols}
\begin{multicols}{2}
\centering\hbox{\hspace{.2 cm}\includegraphics[width=.94 \linewidth]{f13}}\par
\caption{Evolution of perturbations to a steady state solution going through a nodal critical point in the slow direction. The steady state solution is the thick black line $ab$. As perturbations evolve $aBAb\rightarrow aB'A'b\rightarrow aB''A''b ...$, their amplitude grows.}
\label{nodslowpert}
\end{multicols}
\end{figure}
For fixed values of $\dot{M}/\dot{M}_{Edd},~M$ and $a$, there is a transition from the saddle to the nodal type of sonic point, as we increase $\alpha$. Figure \ref{sadnod} shows some transition lines for different masses and spins of black holes. The transition occurs at
\begin{equation}
\begin{split}
\alpha_{\rm saddle} &\lesssim 0.77 \left(-\frac{v_r}{v_\phi}\right)^{1/3} \\
&\simeq 0.097 \left(1-a\right)^{2/15} \left(3+a\right)^{1/3} \left(\frac{\dot{M}/\dot{M}_{Edd}}{M/M_{\odot}}\right)^{1/24},
\end{split} \label{alpha_saddle}
\end{equation}
where $v_r=\frac{1}{\gamma} \frac{v}{\sqrt{1-v^2}}$ and $v_\phi=\frac{2 M a}{r\sqrt{\Delta}}+\frac{1}{\gamma}\frac{\mathcal{L} r}{\sqrt{A}}$. Based on our conjecture, we think that values of $\alpha$ greater than $\sim 0.77 \left(-\frac{v_r}{v_\phi}\right)^{1/3}$ correspond to nodal sonic points and are thus unstable, while AiRE disks with smaller $\alpha$'s have healthy saddle sonic points. A similar expression for this transition was found by \citet{afshordi} in pseudo-Newtonian isothermal disks, but with $\frac{h}{r}$ instead of $\frac{v_r}{v_\phi}$. Our expression is more general in that it holds for arbitrary spin in full general relativity.\\
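For convenience, the approximate transition value in Eq.~\eqref{alpha_saddle} is easy to evaluate; e.g.~a sketch:
\begin{verbatim}
def alpha_saddle(a, mdot_ratio, mass_in_msun):
    # approximate saddle-to-nodal transition of alpha, Eq. (alpha_saddle)
    return (0.097 * (1 - a) ** (2 / 15) * (3 + a) ** (1 / 3)
            * (mdot_ratio / mass_in_msun) ** (1 / 24))

# a non-spinning 10 M_sun black hole accreting at 10% Eddington:
print(alpha_saddle(0.0, 0.1, 10.0))   # ~0.12
\end{verbatim}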
\begin{figure}[h]
\centering
\includegraphics[width=.95\textwidth]{f14}
\caption{Saddle (stable) sonic point to nodal (unstable) sonic point transitions.}
\label{sadnod}
\end{figure}
Assuming that the value of $\alpha$ is fixed by the MHD physics in thin accretion disks, Eq. (\ref{alpha_saddle}) suggests a physical origin for the minimum Eddington ratio observed for the soft states of X-ray binaries \citep[e.g., see][]{2013ApJ...779...95K}, which is around 1-3\%. Combining these values with the current measurements of black hole masses and spins \citep[e.g.,][]{2014SSRv..183..295M} gives a range of $\alpha = 0.11-0.13$ for the viscosity parameter.
We should note that our proposed mechanism for the hard to soft transition would only be consistent with observations if $\alpha$ lies in this narrow range. Current variations of $\alpha$ in simulations are driven by their numerical and/or physical limitations (e.g., thermal instability). However, a better understanding of the physics involved could sharpen these predictions.
In other words, we have a strong prediction for how $\alpha$ should behave in more realistic accretion simulations, but a very weak prediction for the transition Eddington ratios (in the absence of a precise value of $\alpha$).
\section{Toomre Instability}
Self-gravity becomes important in disks outside of some radius, $r_{sg}$. Where self-gravity dominates, matter clumps together and no longer accretes onto the compact object. The more massive the compact object, the closer $r_{sg}$ gets to the ISCO. When $r_{sg}=r_{ISCO}$ there will no longer be any accretion of luminous matter. Since accretion is the main mechanism by which supermassive black holes grow, this can be used to set an upper mass limit for supermassive black holes, as suggested in \citet{king}.\\
The radius at which self-gravity takes over can be determined by the Toomre parameter \citep{toomre} $Q=\frac{c_s\,\Omega_r}{\pi\,\Sigma}$, where $\Omega_r=\sqrt{\frac{-\frac{3 a^2}{r^2}+8 a \sqrt{M} r^{-3/2}-\frac{6 M}{r}+1}{r^3 \left(2 a \sqrt{M} r^{-3/2}-\frac{3 M}{r}+1\right)}}$ is the radial epicyclic frequency for circular, equatorial Kerr geodesics \citep{Gammie2004}. The condition for stability is that $Q>1$. Figure \ref{toomres} shows how $Q$ decreases with mass and approaches $1$ for masses $\sim 10^{11} M_{\odot}$. \\
We find that for the disk to remain Toomre stable ($Q>1$ everywhere outside the ISCO), we must have
\begin{equation}
M/M_{\odot}\leq f(a,\alpha) \sqrt{\dot{M}_{Edd}/\dot{M}},
\label{qineq}
\end{equation}
where $f(0, 0.1)\approx 6.3\times 10^{21}$, $f(0.999, 0.1)\approx 1.1\times 10^{22}$, and $f(0, 0.01)\approx 6.3\times 10^{20}$. Restricting to sub-Eddington mass accretion rates $\dot{M}/\dot{M}_{Edd}\leq1$, together with the constraint from Toomre stability \eqref{qineq}, we get\\
\begin{equation}
\begin{split}
\frac{dM}{dt}\leq \min\left[\dot{M}_{Edd}, \frac{\dot{{M}}_{Edd}(M_{\odot}) f(a,\alpha)}{M}\right]\\
\approx \dot{{M}}_{Edd}(M_{\odot})\left(\frac{m^2}{f(a,\alpha)^2}+\frac{1}{m^2}\right)^{-1/2}.
\label{deq}
\end{split}
\end{equation}
Integrating \eqref{deq} we have
\begin{equation}
t(M_f)\gtrsim \int_{M_i}^{M_f} dm \frac{1}{\dot{{M}}_{Edd}(M_{\odot})}\sqrt{\frac{m^2}{f(a,\alpha)^2}+\frac{1}{m^2}},
\end{equation}
where masses and $t$ are in units of $M_{\odot}$.
Inverting this relation to get $M_f(t)$, assuming that the first massive black holes were born at $z\sim 30$ and converting $t$ to $z$ to get $M(z)$, we arrive at the upper bounds shown in Figure \ref{toomre}, for nonspinning and nearly maximally spinning supermassive black holes. In the range of redshift shown in Figure \ref{toomre}, the mass bounds are not sensitive to the starting mass of the seed black hole ($M_i$). Also included in Figure \ref{toomre} are the upper bounds (for typical values of parameters) given in \citet{king}, as well as a \emph{Swift} satellite observation of S5~0014+813 at $z=3.366$, whose mass was found to be $4\times 10^{10} M_{\odot}$ \citep{ghisellini1}. \\
\begin{figure}[h]
\centering
\includegraphics[width=1.\textwidth]{f15}
\caption{Upper bounds for the mass of supermassive black holes at different redshifts $z$, due to the gravitational Toomre instability.}
\label{toomre}
\end{figure}
\section{Conclusion and Future Prospects}
To summarize our main results, we have introduced AiRE disks as a solution to thermal instability in thin disks. The key feature of AiRE disks is that the radiation pressure is in equipartition with the gas pressure in the inner region. We have presented some features of these flows such as their central temperature and Toomre parameter profiles. We have derived upper limits for the mass of supermassive black holes due to the gravitational Toomre instability in AiRE disks. We have also found a transition from saddle to nodal type of the sonic points in AiRE disks and used nodal point instability to place a lower limit on the mass accretion rate as a function of viscosity parameter $\alpha$ and black hole spin. We conjecture that this transition might be responsible for the observed lower limits on the Eddington ratio of the soft state in X-ray binaries. \\
While we introduced AiRE disks to provide a thermally stable description of thin accretion flows, they may also significantly refine our understanding of other disk properties. With new observations from the advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) and the Event Horizon Telescope (EHT), we are at the advent of a new era of black hole physics. Disk theory may play a major role in explaining some of their future findings. \\
In upcoming work \citep{ykyna}, we study the spectrum of AiRE disks and its properties. Furthermore, the onset of Toomre instability in the inner regions of AiRE disks around active galactic nuclei can lead to formation and merger of binary black hole systems, such as the ones recently detected by LIGO \citep{ligo}, and lead to characteristic detectable electromagnetic signatures \citep[e.g.,][]{bartos}.\\
Another important future direction is the study of the AiRE disk regime in full MHD radiative transfer simulations, and whether enhanced cooling leading to pressure equipartition can indeed arise in a realistic setting.
\section*{Acknowledgments}
We would like to thank Shane Davis, Jeremy Goodman, Ramesh Narayan, Olek S{\c a}dowski, and Jim Stone for useful discussions. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.
\bibliographystyle{mnras}
\section{Introduction}
High energy physics has very few purely electroweak processes in which
both $W^\pm$ and $Z^0$ contribute. One such process is neutrino trident
production, which proceeds via the exchange of a weak boson
($W^\pm$, $Z^0$) and a photon. Feynman diagrams for two
possible interactions are shown in Fig.~\ref{fig:feynman}.
The reaction
\begin{equation}
\mbox{$\nu_\mu$}(\mbox{$\bar{\nu}_\mu$}) Z \ \rightarrow \ \mbox{$\nu_\mu$}(\mbox{$\bar{\nu}_\mu$}) \mu^+ \mu^- Z
\label{eq:trid}
\end{equation}
is of particular interest because it can proceed via
either charged ($W^\pm$) or neutral ($Z^0$) currents
(Fig.~\ref{fig:feynman}), which results in interference.
Theoretical calculations~\cite{theory} predict this interference
to cause a 40$\%$ {\em decrease} in the rate of reaction~\ref{eq:trid}
relative to what would be expected for a purely charged-current
interaction. The measurement of this interference is a test of the
Standard Model.
\begin{figure}
\center
\begin{minipage}[tbp]{7.0cm}
\psfig{figure=feynman1.ps,width=7.0cm}
\end{minipage}
\begin{minipage}[tbp]{7.0cm}
\psfig{figure=feynman3.ps,width=7.0cm}
\end{minipage}
\caption{Feynman diagrams for neutrino trident production.}
\label{fig:feynman}
\end{figure}
The final state in reaction~\ref{eq:trid} contains two muons
with no hadronic energy. In a massive target experiment
the muons make a clear signal by their penetration through the detector.
The muons should also have a very small opening angle and a small
invariant mass.
The dominant process producing two muons in the NuTeV
detector, neutrino charged-current DIS with a second
muon from decays of the final state hadrons, is
removed by requiring low energy deposition consistent
with two muons near the point of interaction.
Neutrino trident production has been investigated previously by several
experiments (CHARM, CHARM II and CCFR). The extremely small cross-section
($\sigma_{trident} \sim 10^{-5}\,\sigma_{CC}$)
makes it difficult to study. The cross-section~\cite{theory} is proportional
to
\begin{equation}
\sigma \propto \frac{2 Z^2 \alpha^2 G_F^2}{9 \pi^3} \frac{1}{R_{nucleus}}
E \log{E}.
\end{equation}
The CHARM experiment~\cite{charm}
failed to conclusively observe neutrino trident production.
CHARM II~\cite{charm2} measured a
cross-section consistent with the Standard Model.
Finally, CCFR~\cite{ccfr} ruled out a
pure $V - A$ description at the 99$\%$ confidence
level. Each of these experiments used targets with different
Z's: CHARM (marble), CHARM II (glass) and CCFR (iron).
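As a rough illustration of the $Z^2/R_{nucleus}$ scaling, the following Python sketch compares per-nucleus weights for representative target nuclei (Ca for marble, Si for glass, Fe for iron), taking $R_{nucleus} \propto A^{1/3}$; the quoted materials are composites, so a single $(Z, A)$ per experiment is only an indicative stand-in:
\begin{verbatim}
targets = {"CHARM (marble ~ Ca)":   (20, 40),
           "CHARM II (glass ~ Si)": (14, 28),
           "CCFR/NuTeV (iron)":     (26, 56)}

ref = None
for name, (Z, A) in targets.items():
    w = Z**2 / A**(1.0/3.0)     # sigma ~ Z^2 / R_nucleus per nucleus
    ref = ref if ref is not None else w
    print(f"{name}: relative weight {w/ref:.2f}")
\end{verbatim}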
\section{Experimental Technique}
Reaction~\ref{eq:trid} can be studied by the NuTeV Experiment.
The following section describes the experiment, the technique
and issues related to the result.
The NuTeV Experiment (FNAL E815) uses the Fermilab Lab E detector and is
very similar to that of its immediate predecessor, CCFR.~\cite{ccfr}
The primary components of the NuTeV detector are the
target/calorimeter and the toroid spectrometer.
The target/calorimeter is a 690-ton steel sampling calorimeter. It consists
of 42 segments, each containing 10 cm of iron, two
scintillation counters, and one drift chamber. The transverse dimensions
of each segment are 3 $\times$ 3 meters. The energy resolution of
the calorimeter is $\sigma / E_{had} = 0.89 / \sqrt{E_{had}}$.
The toroid spectrometer is located immediately downstream of the
calorimeter. It contains five sets of drift chambers in
an iron toroid magnet. Momentum
measurement is limited by multiple scattering to $\sigma_p / p = 0.11$.
The magnetic field is set to focus the primary $\mu^-$($\mu^+$) from
\mbox{$\nu_\mu$}(\mbox{$\bar{\nu}_\mu$}) charged-current (CC) interactions.
The primary upgrade of the NuTeV experiment is the sign-selected
quadrupole train (SSQT) beamline. This beamline provides sign-selection
of the secondary beam, which produces a nearly pure \mbox{$\nu_\mu$}(\mbox{$\bar{\nu}_\mu$})
beam. The experiment ran in both \mbox{$\nu_\mu$} \ and \mbox{$\bar{\nu}_\mu$} \
modes, allowing study
of \mbox{$\nu_\mu$} \ and \mbox{$\bar{\nu}_\mu$} \ trident production separately.
Selection criteria for this analysis are summarized in Table~\ref{tab:cuts}.
Events were selected with both muons momentum analyzed by
the toroid. Muons were required to have a minimum energy
($E_\mu$) and
have opposite charge. The energy associated with a shower at the
vertex ($E_{HAD}$) must be very small. The two muon
invariant mass ($M_{\mu\mu}$) is required to be small.
Finally, the fiducial volume is also limited to include regions
of the detector where the event could be well measured.
\begin{table}
\begin{center}
\caption{\label{tab:cuts} Selection criteria for neutrino tridents.}
\vspace{0.2cm}
\begin{tabular}{|c|c|} \hline
Variable & Criterion \\ \hline
$E_\mu(1)$ (at event vertex) & $> 9$ GeV \\ \hline
$E_\mu(1)$ (entering toroid) & $> 3$ GeV \\ \hline
$E_\mu(2)$ (at event vertex) & $> 9$ GeV \\ \hline
$E_\mu(2)$ (entering toroid) & $> 3$ GeV \\ \hline
charge(1) + charge(2) & $= 0 $ \\ \hline
\multicolumn{2}{|c|}{Both muon charges determined} \\ \hline
$E_{HAD}$ & $< 3$ GeV \\ \hline
$M_{\mu\mu}$ & $< 2.3$ GeV/c$^2$ \\ \hline
Vertex Position & $> 12$ cm. from \\
(Transverse) & calorimeter edge \\ \hline
Vertex Position & $> 20$ cm of steel \\
(Upstream) & from calorimeter edge \\ \hline
Vertex Position & $> 75$ cm of steel \\
(Downstream) & from calorimeter edge \\ \hline
\end{tabular}
\end{center}
\end{table}
A number of other physics processes can create a
signal similar to neutrino tridents. A list of the most important
sources is given in Table~\ref{tab:back}. The two sources with the
highest rate are charged-current charm
production and charged-current interactions with a $\pi / K$ decay
in the hadronic shower. However, each of these processes generally
creates a significant amount of hadronic energy. Monte Carlo studies
show that there is an expected background of $< 1$ event for both
of these combined.
Diffractive vector meson production can also produce a trident-like
signal. Light vector mesons ($\rho^0$, $\omega$, $\phi$, and $J/\Psi$)
can be produced via neutral currents and decay into two muons.
Charged-current production of $D_s^{+*}$ with a semi-leptonic decay ($D_s^+
\rightarrow \mu^+ + \nu_\mu + Y$) was presented at this
conference.~\cite{chorus} This may represent a significant background
to neutrino trident production.
Other small background sources include diffractive
$\pi^\pm$ production and $\tau\mu$ trident production. The
higher density target of CCFR/NuTeV limits the impact of diffractive
$\pi^\pm$ production compared to the lower target mass of CHARM II.
Monte Carlo studies will be performed to estimate
the contributions from each of the background sources.
\begin{table*}
\begin{center}
\caption{\label{tab:back} Possible sources of background to neutrino
trident production. }
\vspace{0.2cm}
\begin{tabular}{|c|c|c|} \hline
Source & Production & Decay \\ \hline
Charm Production & $\nu_{\mu} N \rightarrow \mu^- c X$ & $c
\rightarrow \mu^+ \nu_{\mu}$ \\ \hline
$\pi / K$ decay & $\nu_{\mu} N \rightarrow \mu^- (\pi / K) Y$ &
$\pi / K \rightarrow \mu^+ \nu_{\mu}$ \ or
\\
& & $\pi / K \rightarrow \mu^+ \nu_{\mu} \pi^0$ \\ \hline
Vector Meson Production & $\nu_{\mu} N \rightarrow \nu_{\mu} V^0 \ X$
& $V \rightarrow \mu^+ \mu^-$ \\
& ($V^0 = \{\rho^0, \omega, \phi, J/\psi\}$) & \\ \hline
$D_s^{+*}$ Production & $\nu_{\mu} N \rightarrow \mu^- D_s^{+*} X$ & $D_s^{+*} \rightarrow \gamma D_s^+ $ \\
& & $D_s^+ \rightarrow \mu^+ \nu_{\mu}$ \\ \hline
$\pi^\pm$ Production & $\nu_{\mu} N \rightarrow \mu^- \pi^+ X$ &
$\pi^+ \rightarrow \mu^+ \nu_{\mu}$ \\ \hline
$\tau\mu$ Trident Production & $\nu_{\mu} N \rightarrow \mu^- \tau^+
\nu_{\tau} N$ & $\tau \rightarrow \mu^+ \nu_{\mu} \overline{\nu}_{\tau}$
\\
\hline
\end{tabular}
\end{center}
\end{table*}
For the analysis presented here, the backgrounds are inferred from
the data rather than using Monte Carlo estimates.
The technique used is
similar to that of the previous experiments.~\cite{charm,charm2,ccfr}
It is based on the following procedure.
The $E_{HAD}$ cut is removed, and the $E_{HAD}$ distribution is
plotted on either side of one of the other criteria ($M_{\mu\mu} < 2.3$
GeV/c$^2$).
The distributions are normalized and the distribution for
$M_{\mu\mu} > 2.3$ GeV/c$^2$
is used to estimate the background for
$M_{\mu\mu} < 2.3$ GeV/c$^2$.
Figures~\ref{fig:lowm}-\ref{fig:bothm} illustrate the background estimate.
All three plots show the $E_{HAD}$ distribution. Figure~\ref{fig:lowm}
shows the distribution for events which pass the invariant mass cut
($M_{\mu\mu} < 2.3$ GeV/c$^2$). Figure~\ref{fig:highm} shows the
distribution for events which fail the cut ($M_{\mu\mu} > 2.3$ GeV/c$^2$).
A normalization is performed to match the areas of the distributions
for $3.0 < E_{HAD} < 30.0$ GeV.
The normalized distributions are plotted
together in Fig.~\ref{fig:bothm}. The errors on the background estimate
include the statistical errors from Fig.~\ref{fig:highm} and the error
on the normalization. The lowest three bins ($E_{HAD} < 3.0$ GeV) are the
bins of interest for the neutrino trident signal.
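The normalization procedure can be summarized by the following illustrative Python sketch (a simplification of the full analysis; bin edges and the error treatment are our assumptions):
\begin{verbatim}
import numpy as np

def background_estimate(ehad_lowm, ehad_highm, bin_edges,
                        norm_lo=3.0, norm_hi=30.0, sig_hi=3.0):
    # Normalize the high-mass E_HAD shape to the low-mass sample in
    # the sideband [norm_lo, norm_hi] GeV, then read off the
    # prediction in the signal bins E_HAD < sig_hi GeV.
    # bin_edges: explicit edges, e.g. np.arange(0.0, 31.0, 1.0).
    h_lo, edges = np.histogram(ehad_lowm, bins=bin_edges)
    h_hi, _ = np.histogram(ehad_highm, bins=bin_edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    side = (centers >= norm_lo) & (centers < norm_hi)
    scale = h_lo[side].sum() / h_hi[side].sum()
    sig = centers < sig_hi
    bkg = scale * h_hi[sig].sum()
    # Statistical error from the high-mass counts plus the
    # normalization, added in quadrature (a simplification).
    err = bkg * np.sqrt(1.0/h_hi[sig].sum() + 1.0/h_hi[side].sum())
    return bkg, err
\end{verbatim}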
\begin{figure}
\center
\centerline{\psfig{figure=lowm4.ps,width=7.0cm}}
\caption{$E_{HAD}$ distribution for events with $M_{\mu\mu} < 2.3$ GeV/c$^2$.}
\label{fig:lowm}
\end{figure}
\begin{figure}
\center
\centerline{\psfig{figure=highm4.ps,width=7.0cm}}
\caption{$E_{HAD}$ distribution for events with $M_{\mu\mu} > 2.3$ GeV/c$^2$.}
\label{fig:highm}
\end{figure}
\begin{figure}
\center
\centerline{\psfig{figure=bothm4.ps,width=7.0cm}}
\caption{Normalized $E_{HAD}$ distribution for events with $M_{\mu\mu} < 2.3$ GeV/c$^2$ (histogram) and $M_{\mu\mu} > 2.3$ GeV/c$^2$ (points).}
\label{fig:bothm}
\end{figure}
\section{Results}
\begin{table*}[tb]
\begin{center}
\caption{\label{tab:results}Preliminary results for neutrino trident
analysis.}
\vspace{0.2cm}
\begin{tabular}{|lcrrr|} \hline
& & Background & Standard & \\
& Data & Estimate & Model~~ & V-A~~~~ \\ \hline
$\nu$ mode & 12 & $7.6 \pm 2.5$ & $7.2 \pm 0.3$ & $12.26 \pm 0.5$ \\
$\bar{\nu}$ mode & 5 & $2.0 \pm 1.4$ & $3.6 \pm 0.2$ & $6.1 \pm 0.3$ \\
Combined & 17 & $9.8 \pm 2.9$ & $10.8 \pm 0.3$ & $18.3 \pm 0.6$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
A preliminary analysis of data from the 1997-98 run of NuTeV results
in 17 events which pass
the trident selection criteria (12 in \mbox{$\nu_\mu$}, 5 in \mbox{$\bar{\nu}_\mu$}).
Results are summarized
in Table~\ref{tab:results}. The background estimate does not account for
all of the signal events and this excess is attributed to neutrino trident
production.
Table~\ref{tab:results} also shows the expected number of trident events
for both the Standard Model and $V - A$. The neutrino trident excess is
consistent with the Standard Model and does not favor $V - A$. This is
in agreement with CHARM II~\cite{charm2} and CCFR.~\cite{ccfr}
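The arithmetic behind this comparison is simple enough to reproduce. The sketch below assumes Gaussian errors and a Poisson error on the observed counts, which is only an approximation for such small samples:
\begin{verbatim}
import math

data, bkg, bkg_err = 17, 9.8, 2.9        # combined sample (table above)
expected = {"Standard Model": (10.8, 0.3), "V-A": (18.3, 0.6)}

excess = data - bkg
err = math.sqrt(data + bkg_err**2)       # Poisson(data) + bkg error
print(f"excess = {excess:.1f} +/- {err:.1f}")
for name, (mu, dmu) in expected.items():
    pull = (excess - mu) / math.sqrt(err**2 + dmu**2)
    print(f"{name}: pull = {pull:+.1f} sigma")
\end{verbatim}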
\section{Conclusions}
The NuTeV experiment has a sample of low-$E_{HAD}$ dimuon events.
A preliminary analysis has been performed which observes
a signal consistent with neutrino trident production.
Statistics will be improved by increasing acceptance
and combining with data from CCFR. This will yield the
highest statistics neutrino trident analysis to date. Other
sources of low-$E_{HAD}$ dimuons will be studied.
\section*{References}
\section{Introduction}
Program preorders and equivalences are fundamental concepts in the
theory of programming languages since the very birth of the discipline.
Such notions are usually defined by means of relations
between program phrases aimed to order or identify programs according to
their observable \emph{behaviours}, the latter being usually defined
by means of a primitive notion of observation such as termination to
a given value. We refer to such
relations as \emph{behavioural relations}.
Well-known behavioural relations for higher-order functional languages
include the \emph{contextual preorder} and \emph{contextual equivalence}
\cite{Morris/PhDThesis}, \emph{applicative (bi)similarity} \cite{Abramsky/RTFP/1990},
and \emph{logical relations} \cite{Reynolds/Logical-relations/1983}.
Instead of asking when two programs $e$ and $e'$ are
behaviourally similar or equal, a more informative question
may be asked, namely how much (behaviourally)
different $e$ and $e'$ are.
That means that instead of looking at relations relating programs with
similar or equal behaviours we look at relations assigning programs a numerical
value representing their \emph{behavioural distance}, i.e. a numerical value
quantifying the observable differences between their behaviours.
The question of quantifying observable differences between programs turned
out to be particularly interesting (and challenging) for effectful higher-order
languages, where ordinary qualitative
(i.e. boolean-valued) equivalences and preorders are too strong.
This is witnessed by recent results on behavioural pseudometrics for
probabilistic $\lambda$-calculi
\cite{CrubilleDalLago/LICS/2015,CrubilleDalLago/ESOP/2017} as well as
results on semantics of higher-order languages for differential privacy
\cite{Pierce/DistanceMakesTypesGrowStronger/2010,GaboardiEtAl/POPL/2017}.
In the first case one soon realises that programs exhibiting a different
behaviour only with probability close to zero are fully discriminated
by ordinary behavioural relations, whereas in the second case relational
reasoning does not provide any information on how much behavioural differences
between inputs affect behavioural differences between outputs.
These problems can be naturally addressed by working with quantitative relations
capturing weakened notions of metric such as \emph{generalised metrics}
\cite{Lawvere/GeneralizedMetricSpaces/1973} and \emph{pseudometrics}
\cite{steen/CounterexamplesTopology/1995}. It is then natural to ask
whether and to what extent ordinary behavioural relations can be refined
into quantitative relations still preserving their nice properties.
Although easy to formulate, answering such a question is far from
trivial and requires major improvements in the current theory of
behavioural reasoning about programs.
This paper contributes to answering the above question, and it does so
by studying the quantitative refinement of Abramsky's
\emph{applicative similarity} and \emph{bisimilarity}
\cite{Abramsky/RTFP/1990} for
higher-order languages enriched with algebraic effects.
Applicative similarity (resp. bisimilarity) is a coinductively defined
preorder (resp. equivalence) relating programs that exhibit similar
(resp. equal) extensional behaviours.
Due to its coinductive nature and to its nice
properties, applicative (bi)similarity has been studied for a
variety of calculi, both pure and effectful.
Notable examples are extensions to nondeterministic
\cite{Lassen/PhDThesis} and probabilistic
\cite{DalLagoSangiorgiAlberti/POPL/2014,CrubilleDalLago/ESOP/2014}
$\lambda$-calculi, and its more recent extension
\cite{DalLagoGavazzoLevy/LICS/2017}
to $\lambda$-calculi with algebraic effects \emph{\`a la}
Plotkin and Power \cite{PlotkinPower/FOSSACS/01}.
In \cite{DalLagoGavazzoLevy/LICS/2017} an abstract notion of applicative
similarity is studied for an untyped $\lambda$-calculus enriched with
a signature of effect-triggering operation symbols. Operation symbols
are interpreted as algebraic operations with respect to a monad $T$
encapsulating the kind of effect such operations produce. Examples are
probabilistic choices with the (sub)distribution monad, and nondeterministic
choices with the powerset monad.
The main ingredient used to extend Abramsky's applicative similarity
is the concept of a \emph{relator} \cite{Barr/LMM/1970,Thijs/PhDThesis/1996}
for a monad $T$, i.e.
an abstraction meant to capture the possible ways a relation
on a set $X$ can be turned into a relation on $T X$.
This allows one to define an abstract notion of
\emph{effectful} applicative similarity parametric in a relator,
and to prove an abstract precongruence
theorem stating that the resulting notion of applicative similarity is a
compatible preorder.
The present work originated from the idea of generalising the theory
developed in \cite{DalLagoGavazzoLevy/LICS/2017} to relations taking values
over arbitrary quantitative domains (such as the real extended half-line
$[0,\infty]$ or the unit interval $[0,1]$).
Such generalisation requires three major improvements in the current
theory of effectful applicative (bi)similarity:
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item
The first improvement is to move from boolean-valued
relations to relations taking values on quantitative
domains such as
$[0,\infty]$ or $[0,1]$ in such a way that restricting
these domains to the two element set $\{0,1\}$ (or $\{\mathsf{false}, \mathsf{true}\}$)
makes the theory collapse to
the usual theory of applicative (bi)similarity.
For that we rely on Lawvere's analysis \cite{Lawvere/GeneralizedMetricSpaces/1973} of
generalised metric spaces and preordered sets as
enriched categories.
Accordingly, we replace boolean-valued relations with relations taking
values over quantales \cite{Rosenthal/Quantales/1990}
$(\mathsf{V}, \leq, \otimes, k)$, i.e. algebraic structures
(notably complete lattices equipped with a monoid structure) that play the role
of sets of abstract quantities. Examples of quantales include
the extended real half-line $([0,\infty], \geq, +, 0)$ ordered by
the ``greater or equal'' relation $\geq$ and with monoid structure given by
addition (and its restriction to the unit interval $[0,1]$), and
the extended real half-line $([0,\infty], \geq, \max, 0)$ with monoid
structure given by binary maximum (in place of addition),
as well as any complete Boolean and Heyting algebra.
This allows one to develop an algebra of quantale-valued relations,
$\mathsf{V}$-relations for short, which provides a general framework for studying
both behavioural relations and behavioural distances (for instance,
an equivalence $\mathsf{V}$-relation instantiates to an ordinary equivalence
relation on the boolean quantale $(\{\mathsf{false},\mathsf{true}\}, \leq, \wedge, \mathsf{true})$,
and to a pseudometric on the quantale $([0,\infty], \geq, +, 0)$).
\item
The second improvement is the generalisation of the notion of relator to
quantale-valued relators, i.e. relators acting on relations taking values
over quantales. Perhaps surprisingly, such
generalisation is at the heart of the field of \emph{monoidal topology}
\cite{Hoffman-Seal-Tholem/monoidal-topology/2014}, a subfield of
categorical topology aiming to unify ordered, metric, and
topological spaces in categorical terms. Central to the development of
monoidal topology is the notion of $\mathsf{V}$-relator or $\mathsf{V}$-lax
extension of a monad $T$ which, analogously to the notion
of relator, is a construction lifting $\mathsf{V}$-relations on a set $X$ to
$\mathsf{V}$-relations on $T X$.
Notable examples of $\mathsf{V}$-relators are obtained from the
Hausdorff distance (for the powerset monad)
and from the
Wasserstein-Kantorovich distance \cite{Villani/optimal-transport/2008}
(for the distribution monad); a small computational sketch of the
Hausdorff lifting is given right after this list.
\item
The third improvement (on which we will expand more in the next paragraph)
is the development of a \emph{compositional} theory of behavioural
$\mathsf{V}$-relations (and thus of behavioural distances).
As we are going to see, ensuring compositionality in a higher-order
setting is particularly challenging
due to the ability of higher-order programs to copy their
input several times, a feature that allows them to amplify
distances between their inputs \emph{ad libitum}.
\end{enumerate}
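As anticipated in point 2 above, here is a minimal Python sketch of the Hausdorff lifting for finite sets and $[0,\infty]$-valued distances, a concrete instance of a $\mathsf{V}$-relator for the powerset monad over the Lawvere quantale (function names are ours):
\begin{verbatim}
import math

def hausdorff(d, A, B):
    # Symmetric Hausdorff lifting of a [0, inf]-valued distance d on X
    # to finite subsets of X; dropping one of the two outer 'max'
    # arguments gives the asymmetric variant used for similarity.
    if not A or not B:
        return 0.0 if A == B else math.inf
    d_ab = max(min(d(a, b) for b in B) for a in A)
    d_ba = max(min(d(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

# e.g. hausdorff(lambda x, y: abs(x - y), {0.0, 1.0}, {0.0, 1.5})
# evaluates to 0.5
\end{verbatim}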
The result is an abstract theory of behavioural $\mathsf{V}$-relations
that makes it possible to define notions of quantale-valued applicative similarity and
bisimilarity parametric in a quantale-valued relator. The notions obtained
generalise the existing notions of real-valued applicative
(bi)similarity and can be instantiated to concrete calculi to
provide new notions of applicative (bi)similarity distance.
A remarkable
example is the case of probabilistic $\lambda$-calculi, where to the best of the
author's knowledge a (non-trivial) applicative distance
for a universal (i.e. Turing complete) probabilistic $\lambda$-calculus is still
lacking in the literature (but see Section \ref{section:related-works}).
The main theorem of this paper states that under suitable conditions on monads and
quantale-valued relators the abstract notion of quantale-valued applicative
similarity is a compatible---i.e. compositional---reflexive and transitive
$\mathsf{V}$-relation.
Under mild conditions such result extends to quantale-valued
applicative bisimilarity, which is thus proved to be a compatible,
reflexive, symmetric, and transitive $\mathsf{V}$-relation
(i.e. a compatible pseudometric).
In addition to the concrete results obtained for quantale-valued applicative
(bi)similarity, the contribution of the present work also lies in
introducing and combining several
notions and results developed in different fields (such as monoidal topology,
coalgebra, and programming language theory)
to build an abstract framework for studying
quantitative refinements of behavioural relations for higher-order languages
whose applications go beyond the present study of applicative (bi)similarity.
\paragraph{Compositionality, distance amplification, and linear types}
Once we have understood what is the behavioural distance
$\delta(e, e')$ (which, for the sake of this argument,
we assume to be a non-negative real number)
between two programs $e$ and $e'$,
it is natural to ask if and how much such distance is modified when $e$ and
$e'$ are used inside a bigger program---i.e. a context---$\mathcal{C}[-]$.
Indeed we would like to reason about the
distance $\delta(\mathcal{C}[e], \mathcal{C}[e'])$ \emph{compositionally},
i.e. in terms of the distance $\delta(e, e')$.
Compositionality is at the heart of relational reasoning about program
behaviours. Informally, compositionality states that observational
indistinguishability is preserved by language constructors;
formally, a relation is compositional if it is \emph{compatible} with
all language constructors, meaning that whenever two programs $e$ and
$e'$ are related, then so are the bigger programs $\mathcal{C}[e]$
and $\mathcal{C}[e']$.
Analogous to the idea that compatible relations are preserved
by language constructors, we are tempted to define as compatible those
distances that are not increased by language constructors.
That is, we would like to say that a behavioural distance $\delta$ is compatible
if the distance $\delta(\mathcal{C}[e], \mathcal{C}[e'])$ between
$\mathcal{C}[e]$ and $\mathcal{C}[e']$ is always bounded by the
distance $\delta(e, e')$, no matter how $\mathcal{C}[-]$ uses
$e$ and $e'$. However, we soon realise that such proposal
cannot work: not only
how $\mathcal{C}[-]$ uses $e$ and $e'$ matters, but also
\emph{how much} it uses them does.
This phenomenon, called \emph{distance amplification}
\cite{CrubilleDalLago/ESOP/2017}, can be easily observed
when dealing with probabilistic languages.
Consider the following example for a probabilistic
untyped $\lambda$-calculus \cite{DalLagoSangiorgiAlberti/POPL/2014}
taken from \cite{CrubilleDalLago/ESOP/2017}.
Let $I$ be the identity combinator
and $I \oplus \Omega$ be the program evaluating to $I$ with probability
$\frac{1}{2}$, and diverging with probability $\frac{1}{2}$.
Assuming we observe the probability of convergence of a program,
it speaks by itself that we would expect the behavioural distance
$\delta(I, I \oplus \Omega)$ between $I$ and $I \oplus \Omega$ to be
$\frac{1}{2}$. However, it is sufficient to consider a family
$\{\mathcal{C}_n[-]\}_{n \geq 0}$ of contexts that duplicate their input $n$-times\footnote{
For instance
$\{(\abs{\varone}{\underbrace{(\varone I) \hdots (\varone I)}_n})(\lambda y.[-])\}_{n\geq 0}$.}
to see that any such context amplifies the observable distance between
$I$ and $I \oplus \Omega$: as $n$ grows, the probability of convergence of
$\mathcal{C}_n[I \oplus \Omega]$ tends to zero, whereas the one of
$\mathcal{C}_n[I]$ always remains equal to one.
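A back-of-the-envelope computation makes the amplification explicit (a sketch, assuming each of the $n$ applications triggers an independent evaluation of the argument):
\begin{verbatim}
# Convergence probability of C_n[I + Omega]: each of the n
# applications evaluates the argument, which converges with
# probability 1/2, so the whole term converges with probability
# (1/2)**n, while C_n[I] converges with probability 1 for every n.
def p_conv(n, p=0.5):
    return p ** n

for n in (1, 2, 5, 10):
    print(n, p_conv(n))    # 0.5, 0.25, 0.03125, ~0.001
\end{verbatim}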
During its evaluation, every time the context $\mathcal{C}_n$ evaluates its inputs
the detected distance between the latter is somehow
accumulated to the distances previously observed,
thus exploiting the \emph{linear}---in opposition to classical---nature of
the act of measuring. Such linearity naturally reflects the monoidal closed
structure of categories of metric spaces, in contrast with
the cartesian closed structure characterising `classical' (i.e.
boolean-valued) observations.
The above example shows that if we want to reason compositionally
about behavioural distances, then we have to accept that contexts can amplify
distances, and thus we should take into account the number of
times a program accesses its input.
More concretely, our notion of compatibility
allows a context $\mathcal{C}[-]$ using its input $s$ times to
increase the distance $\delta(e, e')$ between $e$ and $e'$,
but of a factor
\emph{at most} $s$. That is, the distance
$\delta(\mathcal{C}[e], \mathcal{C}[e'])$ should be bounded by
$s \cdot \delta(e, e')$.
Our main result states that quantale-valued applicative (bi)similarity
is compatible in this sense. This result allows us to reason about
behavioural distances compositionally, so that we can e.g. conclude
that the distance between $I$ and $I \oplus \Omega$
is indeed $\frac{1}{2}$ (Example \ref{ex:probabilistic-applicative-similarity-distance}).
Reasoning about the number of times programs use (or test) their inputs
requires a shift from ordinary languages to
refined languages tracking information about the so-called
\emph{program sensitivity}
\cite{Pierce/DistanceMakesTypesGrowStronger/2010,GaboardiEtAl/POPL/2017}.
The sensitivity of a program is the `law' describing
how much behavioural differences in outputs are affected by
behavioural differences in inputs, and thus provides the
abstraction needed to handle distance amplification.
Our refined language is a generalisation of
the language $\mathsf{Fuzz}$\
\cite{Pierce/DistanceMakesTypesGrowStronger/2010,GaboardiEtAl/POPL/2017},
which we call $\mathsf{V}$-$\mathsf{Fuzz}$. $\mathsf{Fuzz}$\ is a PCF-like language
refining standard $\lambda$-calculi by means of a powerful linear type
system enriched with sensitivity-indexed
`bang types' that allow to track program sensitivity.
Despite being parametric with respect to an arbitrary quantale,
the main difference between $\mathsf{V}$-$\mathsf{Fuzz}$\ and $\mathsf{Fuzz}$\
is that the former is an effectful calculus parametric with
respect to a signature of (algebraic) operation symbols.
This allows to consider imperative, nondeterministic,
and probabilistic versions of $\mathsf{Fuzz}$, as well as combinations thereof.
\paragraph{Structure of the work}
After having recalled some necessary mathematical
preliminaries, we introduce $\mathsf{V}$-$\mathsf{Fuzz}$\ and its monadic operational
semantics (Section \ref{section:v-fuzz}). We then introduce
(Section \ref{section:v-relators-and-v-relation-lifting})
the machinery of $\mathsf{V}$-relators showing how it can be successfully
instantiated on several examples.
In Section \ref{section:behavioural-v-relations} we define
applicative $\Gamma$-similarity, a $\mathsf{V}$-relation generalising
effectful applicative similarity parametric with respect to a $\mathsf{V}$-relator
$\Gamma$, and prove it is a reflexive and
transitive $\mathsf{V}$-relation whose kernel induces an abstract notion of
applicative similarity.
Our main theorem states
that under suitable conditions on the $\mathsf{V}$-relator $\Gamma$,
applicative $\Gamma$-similarity is compatible.
Finally, in Section
\ref{section:from-applicative-v-similarity-to-applicative-v-bisimilarity}
we define the notion of applicative $\Gamma$-bisimilarity
and prove that under mild conditions
such notion is a compatible
equivalence $\mathsf{V}$-relation (viz. a compatible pseudometric).
\section{Preliminaries}\label{section:preliminaries}
In this section we recall some basic definitions and
results needed in the rest of the paper. Unfortunately, there is no
hope of being comprehensive, and thus we assume the reader to be
familiar with basic domain theory \cite{AbramskyJung/DomainTheory/1994}
(in particular we assume the notions of $\omega$-complete (pointed)
partial order, \ensuremath{\omega\text{-}\mathsf{cppo}}\ for
short, monotone, and continuous functions), basic order
theory \cite{DaveyPriestley/Book/1990}, and basic category theory
\cite{MacLane/Book/1971}.
In particular, for a monoidal category
$\langle\mathbb{C}, I, \otimes \rangle$ we assume the reader to be familiar with the notion
of \emph{strong Kleisli triple}
\cite{MacLane/Book/1971,Kock/StrongMonads/1972}
$\mathbb{\monad} = \langle T, \eta, \strongkleisli{-}\rangle$.
We use the notation $\strongkleisli{f}: Z \otimes T X \to T Y$ for the
strong Kleisli extension of $f: Z \otimes X \to T Y$ (and use the same
notation for the ordinary Kleisli lifting of $f: X \to T Y$, the latter
being essentially the subcase of $\strongkleisli{-}$ for $Z = I$) and reserve the
letter $\eta$ to denote the unit of $\mathbb{\monad}$.
Oftentimes, we refer to a (strong) Kleisli triple as a (strong) monad.
We denote by $\mathbb{C}_\mathbb{\monad}$ the Kleisli category of $\mathbb{\monad}$.
Finally, we recall that every monad on $\mathsf{Set}$, the category of sets and functions,
is strong (with respect to the cartesian structure).
We also try to follow the notation used in
the just mentioned references. As a small difference, we denote
by $g \cdot f$ the composition of $g$ with $f$ rather than by $g \circ f$.
\subsection{Monads and Algebraic Effects}
Following \cite{PlotkinPower/FOSSACS/01} we consider algebraic
operations as sources of side effects. Syntactically,
algebraic operations are given via a signature $\Sigma$ consisting of
a set of operation symbols (uninterpreted operations) together
with their arity (i.e. their number of operands). Semantically,
operation symbols are interpreted as algebraic operations on
strong monads on $\mathsf{Set}$. To any $n$-ary operation symbol $\mathbf{op} \in \Sigma$
and any set $X$ we associate a map $op_X: (T X)^n \to T X$
(so that we equip $T X$ with a $\Sigma$-algebra structure)
such that $\strongkleisli{f}$ is a parametrised $\Sigma$-algebra
(homo)morphism, for any $f:Z \times X \to T Y$. Concretely,
we require
$op_Y(\strongkleisli{f}(z,x_1), \hdots, \strongkleisli{f}(z,x_n)) =
\strongkleisli{f}(z,op_X(x_1, \hdots, x_n))$
to hold for all $z \in Z$ and $x_1, \hdots, x_n \in T X$.
We also use monads to give operational semantics to $\mathsf{V}$-$\mathsf{Fuzz}$\
\cite{DalLagoGavazzoLevy/LICS/2017}.
Intuitively, a program $e$ evaluates to a \emph{monadic value}
$\monadic{\valone} \in T \mathcal{V}$, where $\mathcal{V}$ denotes the set of values.
For instance, a nondeterministic program evaluates to a \emph{set} of values,
whereas a probabilistic program evaluates to a \emph{(sub)distribution} of
values.
Due to the presence of non-terminating programs the evaluation
of a term is defined as the limit of its ``finite evaluations'', and thus
we need monads to carry a suitable domain structure.
Recall that any category $\mathbb{C}$ is \ensuremath{\omega\text{-}\mathsf{cppo}}-enriched if the hom-set
$\mathbb{C}(X,Y)$ carries an
\ensuremath{\omega\text{-}\mathsf{cppo}}-structure, for all objects $X,Y$, and composition is continuous.
A (strong) monad $\mathbb{\monad}$ is \ensuremath{\omega\text{-}\mathsf{cppo}}-enriched
if $\mathbb{C}_\mathbb{\monad}$ is. In particular, in $\mathsf{Set}$ that means that
we have an \ensuremath{\omega\text{-}\mathsf{cppo}}\ $\langle T X, \sqsubseteq_X, \bot_X\rangle$ for any set $X$.
Moreover, \ensuremath{\omega\text{-}\mathsf{cppo}}-enrichment of $\mathbb{\monad}$ gives the following
equalities for $g, g_n: X \to T Y$ and $f, f_n: Y \to T Z$
arrows in $\mathbb{C}$:
\begin{align*}
\strongkleisli{(\bigsqcup_{n<\omega} f_n)} \cdot g
& = \bigsqcup_{n<\omega} \strongkleisli{f_n} \cdot g, \\
\strongkleisli{f} \cdot (\bigsqcup_{n<\omega} g_n)
& = \bigsqcup_{n<\omega} (\strongkleisli{f} \cdot g_n).
\end{align*}
Since $\mathsf{V}$-$\mathsf{Fuzz}$\ is a call-by-value language, we
also require the equality $\strongkleisli{f}(z, \bot_X) = \bot_Y$,
for $f: Z \otimes X \to T Y$.
Finally, we say that
$\mathbb{\monad}$ is $\Sigma$-\emph{continuous} if it satisfies the above conditions and
operations $op_X : (T X)^n \to T X$ are continuous, meaning
that for all $\omega$-chains $c_1, \hdots, c_n$ in $T X$ we have:
$$
op_X(\bigsqcup c_1, \hdots, \bigsqcup c_n) = \bigsqcup op_X(c_1, \hdots, c_n).
$$
The reader can consult \cite{PlotkinPower/FOSSACS/01,DalLagoGavazzoLevy/LICS/2017}
for more details.
\begin{example}\label{ex:monads}
The following are $\Sigma$-continuous monads:
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item The partiality monad $(-)_\bot$ mapping a set $X$ to
$X_\bot \triangleq X + \{\bot_X\}$. We give $X_\bot$ an
\ensuremath{\omega\text{-}\mathsf{cppo}}\ structure via $\sqsubseteq_X$ defined by
$\monadic{x} \sqsubseteq_X \monadic{y}$ if and only if $\monadic{x} = \bot_X$
or $\monadic{x} = \monadic{y}$. We equip the function space
$X \to Y_\bot$ with the pointwise order induced by $\sqsubseteq$.
\item The powerset monad mapping a set to its powerset.
The unit maps an element $x$ to
$\{x\}$, whereas $\strongkleisli{f}: Z \times \mathcal{P}(X) \to \mathcal{P}(Y)$
is defined by
$\strongkleisli{f}(z,\mathpzc{X}) \triangleq \bigcup_{x \in \mathpzc{X}} f(z,x)$,
for $f: Z \times X \to \mathcal{P}(Y)$, $\mathpzc{X} \subseteq X$,
and $z \in Z$.
We give $\mathcal{P}(X)$ an \ensuremath{\omega\text{-}\mathsf{cppo}}\ structure via
subset inclusion $\subseteq$ and order the function space
$X \to \mathcal{P}(Y)$ with the pointwise order induced
by $\subseteq$. Finally,
we consider the signature $\Sigma = \{\oplus\}$ consisting
of a single binary operation symbol for pure
nondeterministic choice and interpret it as set-theoretic union.
\item The discrete \emph{subdistribution} monad $\mathcal{D}_{\leq 1}$
mapping a set $X$ to $\mathcal{D}(X_\bot)$, where $\mathcal{D}$
denotes the discrete \emph{full} distribution monad.
The unit of $\mathcal{D}$ maps an element
$x$ to the Dirac distribution $\dirac{x}$ on it, whereas
the strong Kleisli extension
$\strongkleisli{f}: Z \times \mathcal{D} X \to \mathcal{D} Y$
of $f: Z \times X \to \mathcal{D} Y$ is defined
by $\strongkleisli{f}(z,\mu)(y) \triangleq \sum_{x \in X} \mu(x) \cdot f(z,x)(y)$.
On $\mathcal{D}(X_\bot)$, define the order $\sqsubseteq_X$ by
$\mu \sqsubseteq_X \nu$ if and only if $\forall x \in X.\ \mu(x) \leq \nu(x)$ holds.
The pair $(\mathcal{D}(X_\bot), \sqsubseteq_X)$ forms an \ensuremath{\omega\text{-}\mathsf{cppo}},
with bottom element given by the Dirac distribution on $\bot_X$
(which corresponds to the everywhere-zero subdistribution on $X$).
The \ensuremath{\omega\text{-}\mathsf{cppo}}\ structure lifts to
function spaces pointwise.
Finally, consider the signature
$\Sigma \triangleq \{\oplus_p \mid p \in \mathbb{Q},\ 0 < p < 1\}$
whose interpretation on the subdistribution monad is defined by
$(\mu \oplus_p \nu)(x) \triangleq p \cdot \mu(x) + (1-p) \cdot \nu(x)$.
Restricting to $p \triangleq \frac{1}{2}$ we obtain fair probabilistic
choice $\oplus$.
\item The partial global state monad $\mathcal{G}_\bot$ is obtained from
the partiality monad and the global state monad; it maps
a set $X$ to $(S \times X)_\bot^S$.
The global state monad $\mathcal{G}$ maps a set $X$ to
$(S \times X)^S$. Since ultimately a location
stores a bit we take $S \triangleq \{0,1\}^\mathcal{L}$, where
$\mathcal{L}$ is a
set of (public) location names.
We can give an \ensuremath{\omega\text{-}\mathsf{cppo}}\ structure to $\mathcal{G}_\bot X$ by extending the
order from point $1$ pointwise.
We consider the signature $\Sigma_{\mathcal{L}} \triangleq
\{\mathbf{get}_\ell, \mathbf{set}_{\ell := 0}, \mathbf{set}_{\ell := 1} \mid \ell \in \mathcal{L}\}$ and
interpret operations in $\Sigma_{\mathcal{L}}$ on
$\mathcal{G}$ as follows:
\begin{align*}
\mathbf{set}_{\ell := 0}(f)(b) & \triangleq f(b[\ell := 0]), \\
\mathbf{set}_{\ell := 1}(f)(b) & \triangleq f(b[\ell := 1]), \\
\mathbf{get}_\ell(f,g)(b) & \triangleq
\begin{cases}
f(b) &\text{if } b(\ell) = 0, \\
g(b) &\text{if } b(\ell) = 1,
\end{cases}
\end{align*}
where for $b \in S$,
$b[\ell := x](\ell) \triangleq x$ and $b[\ell := x](\ell') \triangleq b(\ell')$,
for $\ell' \neq \ell$.
\end{enumerate}
\end{example}
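To make point $3$ concrete, the following Python sketch implements the unit, the Kleisli extension, and the probabilistic choice operations $\oplus_p$ for finitely supported subdistributions (a minimal sketch; missing probability mass plays the role of $\bot$, i.e. divergence):
\begin{verbatim}
from collections import defaultdict

def unit(x):
    # Dirac (sub)distribution on x
    return {x: 1.0}

def bind(mu, f):
    # Kleisli extension: bind(mu, f)(y) = sum_x mu(x) * f(x)(y)
    out = defaultdict(float)
    for x, p in mu.items():
        for y, q in f(x).items():
            out[y] += p * q
    return dict(out)

def choice(mu, nu, p=0.5):
    # algebraic operation (+)_p : p * mu + (1 - p) * nu
    out = defaultdict(float)
    for x, q in mu.items():
        out[x] += p * q
    for x, q in nu.items():
        out[x] += (1.0 - p) * q
    return dict(out)
\end{verbatim}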
\subsection{Relations, Metrics, and Quantales}
We now recall basic notions on quantales \cite{Rosenthal/Quantales/1990} and
quantale-valued relations ($\mathsf{V}$-relations) along the lines of
\cite{Lawvere/GeneralizedMetricSpaces/1973}. The
reader is referred to the monograph
\cite{Hoffman-Seal-Tholem/monoidal-topology/2014} for
an introduction.
\begin{definition}\label{def:quantale}
A (unital) quantale $(\mathsf{V}, \leq, \otimes, k)$,
$\mathsf{V}$ for short,
consists of a monoid $(\mathsf{V}, \otimes, k)$ and a sup-lattice
$(\mathsf{V}, \leq)$ satisfying the following distributivity laws:
\begin{align*}
b \otimes \bigvee_{i\in I} a_i
&= \bigvee_{i \in I} (b \otimes a_i),
&
(\bigvee_{i \in I} a_i) \otimes b
&= \bigvee_{i \in I} (a_i \otimes b).
\end{align*}
The element $k$ is called unit, whereas
$\otimes$ is called multiplication of the quantale.
Given quantales $\mathsf{V}, \mathsf{W}$, a
\emph{quantale lax morphism} is a \emph{monotone} map
$h: \mathsf{V} \to \mathsf{W}$
satisfying the following inequalities:
\begin{align*}
\ell
&\leq h(k),
&
h(a) \otimes h(b)
&\leq
h(a \otimes b),
\end{align*}
where $\ell$ is the unit of $\mathsf{W}$.
\end{definition}
It is easy to see that
$\otimes$ is monotone in both arguments.
We denote top and bottom elements of a quantale by
$\rotatebox[origin=c]{180}{$\Bot$}$ and $\Bot$, respectively.
Moreover, we say that a quantale is commutative if
its underlying monoid is, and it is non-trivial if
$k \neq \Bot$.
Finally, we observe that for any $a \in \mathsf{V}$,
the map $a \otimes (-): \mathsf{V} \to \mathsf{V}$
has a right adjoint $a \multimapdot (-): \mathsf{V} \to \mathsf{V}$
which is uniquely determined by:
$$
a \otimes b \leq c \iff
b \leq a \multimapdot c.
$$
From now on we tacitly assume quantales to be commutative
and non-trivial.
\begin{example}\label{ex:quantales}
The following are examples of quantales:
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item The \emph{boolean quantale}
$(\mathsf{2}, \leq, \wedge, \mathsf{true})$ where $\mathsf{2} = \{\mathsf{true}, \mathsf{false}\}$
and $\mathsf{false} \leq \mathsf{true}$.
\item The extended real half-line $([0, \infty], \geq, +,0)$
ordered by
the ``greater or equal'' relation $\geq$ and
extended\footnote{We extend
ordinary addition as follows:
$x + \infty \triangleq \infty \triangleq \infty + x$.}
addition as monoid multiplication.
We refer to such quantale as the \emph{Lawvere quantale}.
Note that in the Lawvere quantale the bottom element is $\infty$,
the top element is $0$, whereas infimum and supremum are defined
as $\sup$ and $\inf$, respectively. Notice also that $\multimapdot$ is
truncated subtraction.
\item Replacing addition with maximum in the Lawvere
quantale we obtain the \emph{ultrametric Lawvere quantale}
$([0,\infty], \geq, \max, 0)$, which
has been used to study generalised ultrametric spaces
\cite{Rutten/ultrametricSpaces/1996} (note that
in the ultrametric Lawvere quantale monoid multiplication and binary
meet coincide).
\item Restricting the Lawvere quantale to the unit interval
we obtain the \emph{unit interval quantale} $([0,1], \geq, +, 0)$,
where $+$ stands for truncated addition.
\item A left continuous \emph{triangular norm} ($t$-norm for short)
is a binary operator
$*: [0,1] \times [0,1] \to [0,1]$ that induces a quantale
structure over the complete lattice $([0,1], \leq)$ in
such a way that the quantale is commutative.
Examples of $t$-norms are:
\begin{enumerate}
\item The \emph{product $t$-norm}: $x *_p y \triangleq x \cdot y$.
\item The \emph{\L{}ukasiewicz $t$-norm}: $x *_l y \triangleq \max\{x + y - 1, 0\}$.
\item The \emph{G\"{o}del $t$-norm}: $x *_g y \triangleq \min\{x,y\}$.
\end{enumerate}
\end{enumerate}
\end{example}
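A minimal computational rendering of the Lawvere quantale (point $2$) may help fix intuitions; note that the lattice order is the reversed numerical order, so suprema are numerical infima (class and method names are ours):
\begin{verbatim}
import math

class LawvereQuantale:
    # ([0, inf], >=, +, 0): the order is reversed w.r.t. the reals.
    unit = 0.0
    bottom = math.inf          # least element in the quantale order

    @staticmethod
    def leq(a, b):             # a <= b in the quantale iff a >= b in R
        return a >= b

    @staticmethod
    def tensor(a, b):          # extended addition
        return a + b

    @staticmethod
    def residual(a, c):        # a -o c is truncated subtraction:
        return max(c - a, 0.0) # a + b >= c  iff  b >= c - a
\end{verbatim}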
In all quantales of Example \ref{ex:quantales} the unit $k$
coincides with the top element (i.e. $k = \rotatebox[origin=c]{180}{$\Bot$}$).
Quantales with such property are called \emph{integral quantales}, and
are particularly well-behaved.
For instance, in an
integral quantale $a \otimes b$ is a lower bound
of $a$ and $b$ (and thus
$a \otimes \bot = \bot$, for any $a \in \mathsf{V}$).
From now on we tacitly assume quantales to be integral.
\paragraph{$\mathsf{V}$-relations}
The notion of $\mathsf{V}$-relation, for a quantale $\mathsf{V}$,
provides an abstraction of the notion of relation that subsumes
both the qualitative---boolean valued---and the
quantitative---real valued---notion of
relation, as well as the associated notions of equivalence and
(pseudo)metric. Moreover, sets and $\mathsf{V}$-relations
form a category which, thanks to the quantale structure of $\mathsf{V}$,
behaves essentially like $\mathbf{Rel}$, the category of sets and relations.
This allows one to develop an algebra of $\mathsf{V}$-relations along the same
lines as the usual algebra of relations.
Formally, for a quantale $\mathsf{V}$, a $\mathsf{V}$-relation
$\alpha: X \tobar Y$ between sets
$X$ and $Y$
is a function $\alpha: X \times Y \to \mathsf{V}$.
For any set $X$ we can define the identity
$\mathsf{V}$-relation $id_{X} : X \tobar X$
mapping diagonal elements $(x,x)$ to $k$, and all other
elements to $\Bot$. Moreover, for $\mathsf{V}$-relations
$\alpha: X \tobar Y$ and
$\beta: Y \tobar Z$,
we can define the composition
$\beta \cdot \alpha: X \tobar Z$
by the so-called `matrix multiplication formula':
$$
(\beta \cdot \alpha)(x,z) \triangleq
\bigvee_{y \in Y} \alpha(x,y) \otimes \beta(y,z).
$$
Composition of $\mathsf{V}$-relations is associative,
and $id$ is the unit of composition.
As a consequence, we have that sets and
$\mathsf{V}$-relations form a category, called $\Vrel{\quantale}$.
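Instantiated on the Lawvere quantale, where joins are numerical infima and $\otimes$ is addition, the formula above becomes min-plus (tropical) matrix multiplication. A Python sketch for relations over a finite middle set $Y$ (a minimal illustration, with names of ours):
\begin{verbatim}
import math

def compose(alpha, beta, Y):
    # (beta . alpha)(x, z) = inf_{y in Y} alpha(x, y) + beta(y, z);
    # the empty join is the bottom element, i.e. +infinity.
    return lambda x, z: min((alpha(x, y) + beta(y, z) for y in Y),
                            default=math.inf)
\end{verbatim}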
$\Vrel{\quantale}$ is a monoidal category with unit given by the one-element
set and tensor product given by cartesian product of sets with
$\alpha \otimes \beta : X \times Y
\tobar X' \times Y'$
defined pointwise, for $\alpha: X \tobar X'$ and $\beta: Y \tobar Y'$.
Moreover, for all sets $X, Y$, the hom-set
$\Vrel{\quantale}(X, Y)$ inherits a complete lattice structure from
$\mathsf{V}$ according to the pointwise order. Actually,
the whole quantale structure of $\mathsf{V}$ is inherited, in the
sense that $\Vrel{\quantale}$ is a \emph{quantaloid}
\cite{Hoffman-Seal-Tholem/monoidal-topology/2014}. In particular,
for all $\mathsf{V}$-relations
$\alpha: X \tobar Y$,
$\beta_i : Y \tobar Z$ ($i \in I)$,
and $\gamma: Z \tobar W$ we have
the following distributivity laws:
\begin{align*}
\gamma \cdot (\bigvee_{i \in I} \beta_i)
&= \bigvee_{i \in I} (\gamma \cdot \beta_i),
&
(\bigvee_{i \in I} \beta_i) \cdot \alpha
&= \bigvee_{i \in I} (\beta_i \cdot \alpha).
\end{align*}
There is a bijection
$\dual{-}: \Vrel{\quantale}(X, Y) \to \Vrel{\quantale}(Y, X)$ that
maps each $\mathsf{V}$-relation $\alpha$ to its dual $\dual{\alpha}$ defined by
$\dual{\alpha}(y,x) \triangleq \alpha(x,y)$.
It is straightforward to see that
$\dual{-}$ is monotone
(i.e. $\alpha \leq \beta$ implies $\dual{\alpha} \leq \dual{\beta}$),
idempotent (i.e. $\dual{(\dual{\alpha})} = \alpha$), and preserves the identity
relation (i.e. $\dual{id} = id$). Moreover, since $\mathsf{V}$
is commutative we also have the equality
$\dual{(\beta \cdot \alpha)} = \dual{\alpha} \cdot \dual{\beta}$.
Finally, we define the graph functor $\mathcal{G}$ from $\mathsf{Set}$ to $\Vrel{\quantale}$ acting
as the identity on sets and mapping each function $f$ to its graph (so that
$\mathcal{G}(f)(x,y)$ is equal to $k$ if $y = f(x)$, and $\Bot$ otherwise).
It is easy to see that since $\mathsf{V}$ is non-trivial $\mathcal{G}$ is
faithful. In light of this observation we will use the notation
$f : X \to Y$ in place of $\mathcal{G}(f) : X \tobar Y$
in $\Vrel{\quantale}$.
A direct application of the definition of composition gives the equality:
$$
(\dual{g} \cdot \alpha \cdot f)(x,w) = \alpha(f(x),g(w))
$$
for $f: X \to Y$, $\alpha: Y \tobar Z$, and
$g: W \to Z$. Moreover, it is useful to keep in mind the
following adjunction rules \cite{Hoffman-Seal-Tholem/monoidal-topology/2014}
(for $\alpha, \beta, \gamma$
$\mathsf{V}$-relations, and $f,g$ functions with appropriate source and
target):
\begin{align*}
g \cdot \alpha \leq \beta
&\iff \alpha \leq \dual{g} \cdot \beta, \\
\beta \cdot \dual{f} \leq \gamma
&\iff \beta \leq \gamma \cdot f.
\end{align*}
The above inequalities turn out to be useful when making
pointfree calculations with $\mathsf{V}$-relations.
In particular, we can use \emph{lax}
commutative diagrams of the form
\begin{center}
\(
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
X
\ar[r]^{f}
\ar[d]_{\alpha}|@{|} &
Z
\ar[d]^{\beta}|-*=0@{|}
\\
Y
\ar[r]_{g} &
W } }
\)
\end{center}
as diagrammatic representation for the inequation
$g \cdot \alpha \leq \beta \cdot f$. By adjunction rules,
the latter is equivalent to $\alpha \leq \dual{g} \cdot \beta \cdot f$,
which pointwisely gives the following
generalised non-expansiveness condition\footnote{Taking $f = g$ generalised
non-expansiveness expresses
monotonicity of $f$ in the boolean quantale, and non-expansiveness of
$f$ in the Lawvere quantale and its variants (recall that when we
instantiate $\mathsf{V}$ as e.g. the Lawvere quantale we have to
\emph{reverse} inequalities).}:
$
\forall (x,y) \in X \times Y.\
\alpha(x,y) \leq \beta(f(x), g(y)).
$
Among $\mathsf{V}$-relations we are interested in those
generalising equivalences and pseudometrics.
\begin{definition}
A $\mathsf{V}$-relation $\alpha: X \tobar X$
is \emph{reflexive} if $id_{X} \leq \alpha$,
\emph{transitive} if $\alpha \cdot \alpha \leq \alpha$, and
\emph{symmetric} if $\alpha \leq \dual{\alpha}$.
\end{definition}
Pointwisely, reflexivity, transitivity, and symmetry give the
following inequalities:
$$
k \leq \alpha(x,x), \quad
\alpha(x,y) \otimes \alpha(y,z) \leq \alpha(x,z), \quad
\alpha(x,y) \leq \alpha(y,x),
$$
for all $x,y,z \in X$.
We call a reflexive and transitive $\mathsf{V}$-relation a
$\mathsf{V}$\emph{-preorder} or \emph{generalised metric}
\cite{Lawvere/GeneralizedMetricSpaces/1973,BonsangueBreguelRutten/GeneralisedMetricSpaces/1998}, and a reflexive, symmetric, and
transitive $\mathsf{V}$-relation a $\mathsf{V}$\emph{-equivalence} or \emph{pseudometric}.
\begin{example}
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item We see that $\mathsf{2}$-$\mathbf{Rel}$ is the ordinary category $\mathbf{Rel}$ of
sets and relations. Moreover, instantiating reflexivity and transitivity
on the boolean quantale, we recover the usual notion of preorder. If
we additionally require symmetry, then we obtain the usual notion
of equivalence relation.
\item On the Lawvere quantale transitivity gives:
$$
\inf_y \alpha(x,y) + \alpha(y,z) \geq \alpha(x,z),
$$
which means $\alpha(x,z) \leq \alpha(x,y) + \alpha(y,z)$, for any
$y \in X$. That is, in the Lawvere quantale transitivity gives exactly
the triangle inequality. Similarly, reflexivity gives
$0 \geq \alpha(x,x)$, i.e. $\alpha(x,x) = 0$.
If additionally $\alpha$ is symmetric, then we recover the usual notion
of pseudometric \cite{steen/CounterexamplesTopology/1995}.
\item Analogously to point $2$, if we consider the ultrametric
Lawvere quantale, we recover the ultrametric variants of the above notions.
\end{enumerate}
\end{example}
\begin{digression}[$\mathsf{V}$-categories]\label{digression:v-categories}
Lawvere introduced generalised metric spaces in his seminal paper
\cite{Lawvere/GeneralizedMetricSpaces/1973} as pairs $(X, \alpha)$
consisting of a set $X$ and a generalised metric $\alpha: X \tobar X$
over the Lawvere quantale.
Generalising from the Lawvere quantale to an arbitrary quantale $\mathsf{V}$
we obtain the so-called $\mathsf{V}$-categories
\cite{Hoffman-Seal-Tholem/monoidal-topology/2014}.
In fact, a $\mathsf{V}$-category $(X, \alpha)$ is nothing but a category
enriched over $\mathsf{V}$ regarded as a
bicomplete monoidal category. The notion of $\mathsf{V}$-enriched
functor precisely instantiates as non-expansive map between
$\mathsf{V}$-categories, so that one can consider the category
$\Vcat{\quantale}$ of $\mathsf{V}$-categories and $\mathsf{V}$-functors. The
category $\Vcat{\quantale}$ has a rich structure. In particular, it is monoidal
closed category. Given $\mathsf{V}$-categories $(X, \alpha), (Y, \beta)$,
their exponential $(Y^X, [\alpha, \beta])$
is defined by
$$
[\alpha, \beta](f,g) \triangleq \bigwedge_{x \in X} \beta(f(x),g(x))
$$
(cf. with the usual, real-valued, sup-metric on function spaces),
whereas their tensor product $(X \times Y, \alpha \otimes \beta)$ is defined
pointwise.
Although in this work we will not work with $\mathsf{V}$-categories
(we will essentially work in $\Vrel{\quantale}$), it is sometimes useful to think in
terms of $\mathsf{V}$-categories for `semantical intuitions'.
\end{digression}
\paragraph{Operations}
For a signature $\Sigma$, we need to specify how operations in $\Sigma$
interact with $\mathsf{V}$-relations (e.g. how they modify distances),
and thus how they interact with quantales.
\begin{definition}\label{def:signature-quantale}
Let $\Sigma$ be a signature. A $\Sigma$-quantale is a
quantale $\mathsf{V}$
equipped with \emph{monotone} operations
$op_{\quantale}: \mathsf{V}^n \to \mathsf{V}$,
for each $n$-ary operation $\mathbf{op} \in \Sigma$,
satisfying the following inequalities:
\begin{align*}
k
&\leq op_{\quantale}(k, \hdots, k), \\
op_{\quantale}(a_1, \hdots, a_n) \otimes
op_{\quantale}(b_1, \hdots, b_n)
&\leq op_{\quantale}(a_1 \otimes b_1,
\hdots, a_n \otimes b_n).
\end{align*}
\end{definition}
\begin{example}\label{ex:quantale-operations}
Both in the Lawvere quantale and in the unit
interval quantale we can interpret
operations $\oplus_p$ from Example \ref{ex:monads}
as probabilistic choices: $x \oplus_p y \triangleq p \cdot x + (1-p) \cdot y$.
In general, for a quantale $\mathsf{V}$ we can
interpret $op_{\quantale}(a_1, \hdots, a_n)$
both as $a_1 \otimes \hdots \otimes a_n$ and
$a_1 \wedge \hdots \wedge a_n$.
\end{example}
\paragraph{Change of Base Functors}
We model sensitivity of a program
as a function
giving the `law' describing how distances between inputs
are modified by the program. The notion of \emph{change of base functor}
provides a mathematical abstraction to model the concept of sensitivity
with respect to an arbitrary quantale.
\begin{definition}\label{def:change-of-base-functor}
A change of base functor \cite{Hoffman-Seal-Tholem/monoidal-topology/2014},
CBF for short,
between quantales $\mathsf{V}, \mathsf{W}$
is a lax quantale morphism $h :\mathsf{V} \to \mathsf{W}$
(see Definition \ref{def:quantale}). If $\mathsf{V} = \mathsf{W}$
we speak of change of base \emph{endofunctors} (CBEs, for short),
and denote them by $s, r \hdots$. Clearly, every CBE $s$
is also a CBF.
\end{definition}
The action $h \circ \alpha$ of a CBF $h: \mathsf{V} \to \mathsf{W}$
on a $\mathsf{V}$-relation $\alpha: X \tobar Y$ is defined by
$h \circ \alpha(x,y) \triangleq h(\alpha(x,y))$
(to improve readability we omit brackets).
Note that since $\mathsf{V}$ is integral, CBFs preserve
the unit.
\begin{example}\label{ex:change-of-base-functor}
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item Extended\footnote{We extend real-valued multiplication by:
$0 \cdot \infty \triangleq 0 \triangleq \infty \cdot 0$,
$\infty \cdot x \triangleq \infty \triangleq x \cdot \infty$.
} real-valued multiplication $c \cdot -$, for $c \in [0,\infty]$,
is a CBE on the Lawvere quantale. Functions $c \cdot -$ act as CBEs
also on the unit interval quantale (where multiplication is meant to
be truncated).
\item Both in the Lawvere quantale and in the unit interval
quantale, polynomials $P$ such that $P(0) = 0$ are
CBEs.
\item Define CBEs
$n, \infty : \mathsf{V} \to \mathsf{V}$, for $n < \omega$ by
$0(a) \triangleq k$, $(n+1)(a) \triangleq
a \otimes n(a)$, and $\infty(a) \triangleq \Bot$ for $a \neq k$ (with $\infty(k) \triangleq k$).
Note that $1$ acts as the identity function.
\end{enumerate}
\end{example}
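On the Lawvere quantale the CBEs of point $3$ admit a direct computational reading, since there $n(a)$ is just $n \cdot a$ (a sketch under the definition above; function names are ours):
\begin{verbatim}
import math

def cbe_n(n):
    # n-fold tensor a + ... + a on the Lawvere quantale, i.e. n * a;
    # cbe_n(0) is the constant-unit map and cbe_n(1) the identity.
    return lambda a: n * a

def cbe_inf(a):
    # 'infinity': collapses every non-unit element to bottom (= +inf)
    # while preserving the unit k = 0.
    return 0.0 if a == 0.0 else math.inf
\end{verbatim}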
Finally, we observe that the action of CBFs on a $\mathsf{V}$-relation
obeys the following laws:
\begin{align*}
(h \cdot h')(\alpha)
&= h \circ (h' \circ \alpha),
\\
(h \circ \alpha) \cdot (h \circ \beta)
&\leq h \circ (\alpha \cdot \beta).
\end{align*}
\begin{digression}\label{digression:change-of-base-functor}
We saw that $\mathsf{V}$-categories generalise the notions of
metric space and ordered set, and that the notion of
$\mathsf{V}$-functor generalises the notions of monotone and
non-expansive function. However, when dealing with metric spaces
besides non-expansive functions, a prominent role is played by
\emph{Lipschitz continuous} functions. Given metric spaces
$(X, d_X)$ and $(Y, d_Y)$, a function $f: X \to Y$ is called
\emph{$c$-continuous}, for $c \in \mathbb{R}_{\geq 0}$, if the
inequality $c \cdot d_X(x,x') \geq d_Y(f(x),f(x'))$ holds, for
all $x,x' \in X$. Example \ref{ex:change-of-base-functor} shows
that multiplication $c \cdot -$ by a real number $c$ is a
change of base endofunctor on the Lawvere quantale, meaning that
using CBEs we can generalise the notion of Lipschitz continuity
to $\mathsf{V}$-categories. In fact, easy calculations show that
for any $\mathsf{V}$-category $(X, \alpha)$ and any CBE $s$
on $\mathsf{V}$, $(X, s \circ \alpha)$ is a $\mathsf{V}$-category.
In particular, we can define $s$-continuous functions from
$(X, \alpha)$ to $(Y, \beta)$ as $\mathsf{V}$-functors from
$(X, s \circ \alpha)$ to $(Y, \beta)$. That is,
we say that a function $f: X \to Y$ is $s$-continuous if
$s \circ \alpha(x,x') \leq \beta(f(x),f(x'))$ holds,
for all $x,x' \in X$.
\end{digression}
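The notion of $s$-continuity in the digression can also be checked
numerically. In the following sketch (an illustration under our own
naming, not a formal artefact), the map $f(x) = 2x$ on the reals with
the usual distance is $(2 \cdot -)$-continuous but not non-expansive.
\begin{verbatim}
# s-continuity over the Lawvere quantale: f is s-continuous when
# s(d_X(x, x')) >= d_Y(f(x), f(x')) numerically.  Here f(x) = 2x is
# (2 * -)-continuous, while non-expansiveness (s = identity) fails.
import random

d = lambda x, y: abs(x - y)
f = lambda x: 2 * x
s = lambda a: 2 * a                      # the CBE  2 * (-)

witness = None
for _ in range(1000):
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    assert s(d(x, y)) >= d(f(x), f(y)) - 1e-9   # f is s-continuous
    if d(x, y) < d(f(x), f(y)):
        witness = (x, y)                 # ...but not non-expansive
print("2-continuity verified; non-expansiveness fails at", witness)
\end{verbatim}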
We conclude this section with the following result on
the algebra of CBEs.
\begin{lemma}\label{lemma:algebra-change-of-base-functors}
Let $\mathsf{V}$ be a $\Sigma$-quantale. CBEs
are closed under the following operations (where $\mathbf{op} \in \Sigma$):
\begin{align*}
(s \otimes r)(a)
&\triangleq s(a) \otimes r(a), \\
(r \cdot s)(a)
&\triangleq r(s(a)), \\
(s \wedge r)(a) &\triangleq s(a) \wedge r(a), \\
op_{\quantale}(s_1, \hdots, s_n)(a)
&\triangleq op_{\quantale}(s_1(a), \hdots, s_n(a)).
\end{align*}
\end{lemma}
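For the Lawvere quantale, these closure operations admit a one-line
reading once CBEs are represented as plain functions; the following
sketch (ours, with hypothetical names) spells out the first three.
\begin{verbatim}
# Closure operations on CBEs over the Lawvere quantale: tensor,
# composition, and meet (numerical max under the reversed order).
def cbe_tensor(s, r): return lambda a: s(a) + r(a)      # (s (x) r)(a)
def cbe_comp(r, s):   return lambda a: r(s(a))          # (r . s)(a)
def cbe_meet(s, r):   return lambda a: max(s(a), r(a))  # (s /\ r)(a)

double, triple = (lambda a: 2 * a), (lambda a: 3 * a)
print(cbe_tensor(double, triple)(1.0))   # 5.0
print(cbe_comp(double, triple)(1.0))     # 6.0
print(cbe_meet(double, triple)(1.0))     # 3.0
\end{verbatim}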
\section{The $\mathsf{V}$-$\mathsf{Fuzz}$ Language}\label{section:v-fuzz}
As already observed in the introduction, when dealing with
behavioural $\mathsf{V}$-relations a crucial parameter in amplification phenomena
is program sensitivity. To deal with such parameter we
introduce $\mathsf{V}$-$\mathsf{Fuzz}$, a higher-order effectful language
generalising $\mathsf{Fuzz}$\ \cite{GaboardiEtAl/POPL/2017}. Like $\mathsf{Fuzz}$,
$\mathsf{V}$-$\mathsf{Fuzz}$\ is characterised by a powerful type system
inspired by \emph{bounded linear logic} \cite{DBLP:journals/tcs/GirardSS92}
giving syntactic information on program sensitivity.
\paragraph{Syntax}
$\mathsf{V}$-$\mathsf{Fuzz}$\ is a \emph{fine-grained call-by-value}
\cite{Levy/InfComp/2003} \emph{linear} $\lambda$-calculus with
finite sum and recursive types.
In particular, we make a formal distinction between values
and computations (which we simply refer to as terms), and use syntactic
primitives to return values ($\mathsf{return}$) and to sequentially compose
computations (via a $\mathbf{let}$-$\mathbf{in}$ constructor).
The syntax of $\mathsf{V}$-$\mathsf{Fuzz}$\ is parametrised over a
signature $\Sigma$ of operation symbols, a $\Sigma$-quantale $\mathsf{V}$,
and a family $\Pi$ of CBEs. From now on we
assume $\Sigma$, $\mathsf{V}$, and $\Pi$ to be fixed. Moreover,
we assume $\Pi$ to contain at least the CBEs $n, \infty$
in Example \ref{ex:change-of-base-functor} and
to be closed under operations in Lemma \ref{lemma:algebra-change-of-base-functors}.
Types, values, and terms of $\mathsf{V}$-$\mathsf{Fuzz}$\ are defined in Figure
\ref{fig:types-terms-values-v-fuzz}, where $t$ denotes a type variable,
$I$ is a \emph{finite} set (whose elements are denoted by
$\hat \imath, \hat \jmath, \hdots$), and $s$ is in $\Pi$.
\begin{figure}[htbp]
{\hrulefill}
\begin{align*}
\typeOne
&\;::=\; t
\mid \sumtype{i \in I}{\typeOne_i}
\mid \typeOne \multimap \typeOne
\mid \recType{t}{\typeOne}
\mid {!}_{s} \typeOne.
\\
\valone
&\;::=\; \varone
\mid \abs{\varone}{e}
\mid \inject{\hat \imath}{\valone}
\mid \fold{\valone}
\mid {!} \valone.
\\
e
&\;::=\; \mathsf{return}{\valone}
\mid \valone \valone
\mid \casesum{\valone}{e_i}
\mid \seq{e}{e}
\\
& \text{ }\mid \casebang{\valone}{e}
\mid \casefold{\valone}{e}
\mid \mathbf{op}(e, \hdots, e).
\end{align*}{\hrulefill}
\caption{Types, values, and terms of $\mathsf{V}$-$\mathsf{Fuzz}$.}
\label{fig:types-terms-values-v-fuzz}
\end{figure}
Free and bound variables in terms and values are defined as usual.
We work with equivalence classes of terms modulo renaming and tacitly
assume conventions on bindings.
Moreover, we denote by $\substval{\valtwo}{\valone}{\varone}$ and
$\substcomp{e}{\varone}{\valone}$ the \emph{value} and
\emph{term} obtained by capture-avoiding substitution
of the \emph{value} $\valone$ for $\varone$ in $\valtwo$ and $e$,
respectively (see \cite{DalLagoGavazzoLevy/LICS/2017} for details).
Similar conventions hold for types. In particular, we denote
by $\typeOne[\typeTwo/t]$ the result of capture-avoiding substitution
of type $\typeTwo$ for the type variable $t$ in $\typeOne$. Finally,
we write \textbf{0} for the empty sum type, \textbf{1} for
$\textbf{0} \multimap \textbf{0}$, and $\mathsf{nat}$ for $\mu t.\textbf{1} + t$.
We denote the numeral $n$ by $\numeral{n}$.
The $\mathsf{V}$-$\mathsf{Fuzz}$\ type system is essentially based on
judgments of the form
$
\varone_1 :_{s_1} \typeOne_1, \hdots,
\varone_n :_{s_n} \typeOne_n \vdash e: \typeOne,
$
where $s_1, \hdots, s_n$ are CBEs.
The informal meaning of such judgment is that on input $\varone_i$ ($i \leq n$),
the term $e$ has sensitivity $s_i$. That is,
$e$ amplifies the (behavioural) distance between two input
values $\valone_i, \valtwo_i$
by \emph{at most} a factor $s_i$; symbolically,
$s_i \circ \alpha(\valone_i, \valtwo_i) \leq
\alpha(\substcomp{e}{\varone_i}{\valone_i},
\substcomp{e}{\varone_i}{\valtwo_i})$.
An \emph{environment} $\env$ is a sequence
$\varone_1 :_{s_1} \typeOne_1, \hdots, \varone_n :_{s_n} \typeOne_n$
of distinct identifiers with associated closed types and CBEs
(we denote the empty environment by $\emptyset$).
We can lift operations on CBEs in
Lemma \ref{lemma:algebra-change-of-base-functors} to environments as follows:
\begin{align*}
r \cdot \env
&= \varone_1 :_{r \cdot s_1} \typeOne_1, \hdots, \varone_n :_{r \cdot s_n} \typeOne_n,
\\
\env
\otimes
\Delta
&= \varone_1 :_{s_1 \otimes r_1} \typeOne_1, \hdots,
\varone_n :_{s_n \otimes r_n} \typeOne_n,
\\
op_{\quantale}(\env^{1}, \hdots, \env^{m})
&= \varone_1 :_{op_{\quantale}(s^{1}_1, \hdots, s^{m}_1)} \typeOne_1, \hdots,
\varone_n :_{op_{\quantale}(s^{1}_n, \hdots, s^{m}_n)} \typeOne_n,
\end{align*}
for
$\env = \varone_1 :_{s_1} \typeOne_1, \hdots,
\varone_n :_{s_n} \typeOne_n$,
$\Delta = \varone_1 :_{r_1} \typeOne_1, \hdots,
\varone_n :_{r_n} \typeOne_n$, and
$\env^{i} = \varone_1 :_{s^{i}_1} \typeOne_1, \hdots,
\varone_n :_{s^{i}_n} \typeOne_n$.
Note that the above operations are defined for environments having
the same structure (i.e. differing only on CBEs).
This is not a real restriction since we can always
add the missing identifiers
$\vartwo :_{k} \typeOne$, where $k$ is the constant function
returning the unit of the quantale
(but see \cite{Pierce/DistanceMakesTypesGrowStronger/2010}).
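For scaling CBEs on the Lawvere quantale, the operations on
environments admit a very concrete reading, which the following sketch
illustrates (the representation and all names are ours): an environment
becomes a finite map from identifiers to sensitivities, $r \cdot \env$
multiplies them, and $\env \otimes \Delta$ adds them pointwise, padding
missing identifiers with the unit sensitivity.
\begin{verbatim}
# Environments as finite maps from identifiers to sensitivities in the
# Lawvere quantale (scaling CBEs represented by their constants):
# r . Gamma composes CBEs (multiplication), Gamma (x) Delta tensors
# them pointwise (addition), padding absentees with the unit 0.
def scale_env(r, gamma):
    return {x: r * s for x, s in gamma.items()}

def tensor_env(gamma, delta):
    keys = set(gamma) | set(delta)
    return {x: gamma.get(x, 0.0) + delta.get(x, 0.0) for x in keys}

gamma = {"x": 1.0, "y": 2.0}
delta = {"y": 0.5, "z": 3.0}
print(scale_env(2.0, gamma))    # {'x': 2.0, 'y': 4.0}
print(tensor_env(gamma, delta)) # x: 1.0, y: 2.5, z: 3.0 (order may vary)
\end{verbatim}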
The type system for $\mathsf{V}$-$\mathsf{Fuzz}$\ is defined in Figure
\ref{fig:typing-system}. The system is based on two kinds of judgment
(exploiting the fine-grained style of the calculus):
judgments of the form $\env \imp^{\mathsf{v}} \valone: \typeOne$ for values
and judgments of the form $\env \imp e: \typeOne$ for terms.
We denote by $\mathcal{V}_{\typeOne}$ and $\Lambda_{\typeOne}$
the sets of closed values and terms of type $\typeOne$,
respectively. Sometimes we also use the notation
$\Lambda_{\env \vdash \typeOne}$ for the set
$\{e \in \Lambda \mid \env \vdash e: \typeOne\}$
(and similarly for values).
\begin{figure}
{\hrulefill}
\vspace{0.2cm}
\(
\vdash
{\valseq{\env, \varone :_{s} \sigma}{\varone : \sigma}}
{s \leq 1}
\quad
\vdash
{op_{\quantale}(\Gamma_1, \hdots, \Gamma_n) \vdash
\mathbf{op}(e_1, \hdots, e_n):
\typeOne}
{\compseq{\Gamma_1}{e_1: \sigma}
&
\cdots
&
\compseq{\Gamma_n}{e_n: \sigma}
}
\)
\vspace{0.2cm}
\(
\vdash
{\valseq{\Gamma}{\abs{\varone}e: \sigma \multimap \tau}}
{\Gamma, \varone :_{1} \sigma \imp e: \tau}
\quad
\vdash
{\compseq{\Gamma \otimes \Delta}{\valone \valtwo: \tau}}
{
\valseq{\Gamma}{\valone: \sigma \multimap \tau}
&
\valseq{\Delta}{\valtwo: \sigma}
}
\)
\vspace{0.2cm}
\(
\vdash
{\env \imp^{\mathsf{v}} \inject{\hat \imath}{\valone}: \sumtype{i \in I}{\typeOne_i}}
{\env \imp^{\mathsf{v}} \valone: \typeOne_{\hat \imath}}
\quad
\vdash
{s \cdot \env \otimes \Delta \vdash
\casesum{\valone}{e_i}: \typeTwo}
{
\env \imp^{\mathsf{v}} \valone: \sumtype{i \in I}{\typeOne_i}
&
\Delta, \varone :_s \typeOne_i \vdash e_i: \typeTwo
&
(\forall i \in I)
}
\)
\vspace{0.2cm}
\(
\vdash
{\compseq{\Gamma}{\mathsf{return}{\valone}: \sigma}}
{\valseq{\Gamma}{\valone: \sigma}}
\quad
\vdash
{\compseq{(s \wedge 1) \cdot \Gamma \otimes \Delta}
{\seq{e}{f}: \typeTwo}}
{\compseq{\Gamma}{e: \sigma}
&
\compseq{\Delta, \varone :_{s} \sigma}{f: \typeTwo}
}
\)
\vspace{0.2cm}
\(
\vdash
{\valseq{s \cdot \Gamma}{{!} \valone: {!}_{s} \sigma}}
{\valseq{\Gamma}{\valone: \typeOne}}
\quad
\vdash
{\compseq{s \cdot \Gamma \otimes \Delta}
{\unbang{\valone}{e}: \tau}}
{
\valseq{\Gamma}{\valone:{!}_{r} \sigma}
&
\compseq{\Delta, \varone :_{s \cdot r} \typeOne}
{e: \tau}
}
\)
\vspace{0.2cm}
\(
\vdash
{\valseq{\Gamma}{\fold{\valone}: \recType{t}{\sigma}}}
{
\valseq{\Gamma}{\valone:
\substType{\sigma}{t}{\recType{t}{\sigma}}}
}
\quad
\vdash
{\compseq{s \cdot \Gamma \otimes \Delta}
{\pmfold{\valone}{e}: \tau}}
{
\valseq{\Gamma}{\valone: \recType{t}{\sigma}}
&
\compseq{\Delta, \varone :_{s} \substType{\sigma}{t}
{\recType{t}{\sigma}}}
{e: \tau}
}
\)
{\hrulefill}
\caption{Typing rules.}
\label{fig:typing-system}
\end{figure}
\begin{example}\label{ex:instances-of-v-fuzz}
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item Instantiating $\mathsf{V}$-$\mathsf{Fuzz}$\ with $\Sigma \triangleq \emptyset$,
the Lawvere quantale, and CBEs
$\Pi = \{c \cdot - \mid c \in [0,\infty]\}$ we obtain
the original $\mathsf{Fuzz}$\
\cite{Pierce/DistanceMakesTypesGrowStronger/2010}
(provided we add a basic type for real numbers). We can also add
nondeterminism via a binary nondeterminism choice
operation $\oplus$.
\item We define the language $P$-$\mathsf{Fuzz}$\ as the instantiation of $\mathsf{V}$-$\mathsf{Fuzz}$\
with a fair probabilistic choice operation $\oplus$,
the unit interval quantale $([0,1], \geq, +, 0)$, and CBEs
$\Pi = \{c \cdot - \mid c \in [0,\infty]\}$
(as usual we are actually referring to truncated multiplication).
We interpret $\oplus$ in $[0,1]$ as in Example
\ref{ex:quantale-operations}.
\item We can add global states to
$P$-$\mathsf{Fuzz}$\ enriching $P$-$\mathsf{Fuzz}$'s signature with operations
in $\Sigma_{\mathcal{L}}$ from Example \ref{ex:monads}.
\end{enumerate}
\end{example}
Typing rules for $\mathsf{V}$-$\mathsf{Fuzz}$\ are similar to those of
$\mathsf{Fuzz}$\ (e.g. in the variable rule we require $s \leq 1$, meaning
that the open value $\varone$ may access the variable $\varone$ at least once) with
the exception of the rule for sequencing where we
apply sensitivity $s \wedge 1$ to the environment
$\env$ even if the sensitivity of $\varone$ in $f$ is $s$.
Consider the following instance of the sequencing rule on the Lawvere quantale:
\[
\vdash[]
{x:_{\max(0,1) \cdot 1} \typeOne \imp \seqy{e}{f}: \typeTwo}
{x:_1 \typeOne \imp e: \typeOne
&
y:_0 \typeOne \imp f: \typeTwo}
\]
where $f$ is a closed term of type $\typeTwo$ and thus we can
assume it to have sensitivity $0$ on all variables.
According to our informal intuition, $e$ has sensitivity $1$
on input $\varone$, meaning that $(i)$ $e$ can possibly detect
(behavioural) differences between input values $\valone, \valtwo$,
and $(ii)$ $e$ cannot amplify their behavioural distance
by a factor bigger than $1$. Formally,
point $(ii)$ states that we have the inequality
$\alpha(\valone, \valtwo) \geq
\alpha(\substcomp{e}{\varone}{\valone},
\substcomp{e}{\varone}{\valtwo})$,
where $\alpha$ denotes a suitable behavioural $[0,1]$-relation.
On the contrary, $f$ is a closed term and thus has sensitivity $0$ on
any input,
meaning that it cannot detect any observable difference between
input values.
In particular, for all values $\valone, \valtwo$ we have
$\alpha(\substcomp{f}{\vartwo}{\valone},
\substcomp{f}{\vartwo}{\valtwo})
= \alpha(f, f) = 0$ (provided that $\alpha$ is reflexive).
Replacing $\max(0,1)$ with $0$ in
the above rule (i.e. $s \wedge 1$ with $s$ in the general case)
would allow us to infer the judgment
$\varone :_0 \typeOne \vdash \seqy{e}{f}: \typeTwo$,
and thus to conclude
$
\alpha(\seqy{\substcomp{e}{\varone}{\valone}}
{f},
\seqy{\substcomp{e}{\varone}{\valtwo}}
{f})
= 0.
$
The latter equality is unsound as evaluating
$\seqy{\substcomp{e}{\varone}{\valone}}
{f}$
(resp. $\seqy{\substcomp{e}{\varone}{\valtwo}}
{f}$) requires us to
\emph{first} evaluate $\substcomp{e}{\varone}{\valone}$
(resp. $\substcomp{e}{\varone}{\valtwo}$) thus making
observable differences between $\valone$ and $\valtwo$
detectable (see also Section \ref{section:behavioural-v-relations}
for a formal explanation).
\begin{example}\label{ex:v-fuzz-terms}
For every type $\typeOne$ we have
the term $I \triangleq \mathsf{return}{(\abs{\varone}{\mathsf{return} \varone})}$
of type $\typeOne \multimap \typeOne$ as well as the purely
divergent term
$\Omega \triangleq \omega {!} (\fold \omega)$ of type $\typeOne$, where
$\omega \in
\Lambda_{{!}_\infty(\recType{t}{{!}_\infty t \multimap \typeOne})
\multimap \typeOne}$ is defined by:
$
\omega \triangleq \abs{\varone}{\casebangy{\varone}
{\casefoldz{\vartwo}{\varthree {!} (\fold \varthree)}}}.
$
\end{example}
Before moving to the operational semantics of $\mathsf{V}$-$\mathsf{Fuzz}$,
we remark that the syntactic distinction between terms and values gives
the following equalities.
\begin{lemma}
The following equalities hold:
\begin{align*}
\mathcal{V}_{\typeOne \multimap \typeTwo} &=
\{\abs{\varone}{e} \mid \varone :_{1}
\typeOne \imp e: \typeTwo\}, \\
\mathcal{V}_{\sumtype{i \in I}{\typeOne_i}} &=
\bigcup_{\hat \imath \in I} \{\inject{\hat \imath}{\valone}
\mid \valone \in \mathcal{V}_{\typeOne_{\hat \imath}}\},
\\
\mathcal{V}_{{!}_{s} \typeOne} &=
\{{!} \valone \mid \valone \in \mathcal{V}_\typeOne\}.
\end{align*}
\end{lemma}
\paragraph{Operational Semantics}
We give $\mathsf{V}$-$\mathsf{Fuzz}$\ a monadic operational (more precisely, evaluation)
semantics in the style of \cite{DalLagoGavazzoLevy/LICS/2017}.
Let $\mathbb{\monad} = \langle T, \eta, \kleisli{-}\rangle$ be a
$\Sigma$-continuous monad.
Operational semantics is defined by means of an evaluation
function $\sem{-}^\typeOne$ indexed over closed types,
associating to any term in $\Lambda_\typeOne$
a monadic value in $T \mathcal{V}_\typeOne$. The evaluation
function $\sem{-}^\typeOne$ is itself defined by means of
the family of functions $\{\sem{-}^\typeOne_n\}_{n < \omega}$
defined in Figure \ref{fig:approximation-semantics}.
Indeed, $\sem{-}_n^\typeOne$ is a function from $\Lambda_\typeOne$ to
$T \mathcal{V}_\typeOne$.
\begin{figure}
{\hrulefill}
\begin{align*}
\sem{e}^\typeOne_0
&\triangleq \bot_{\mathcal{V}_\typeOne} \\
\sem{\mathsf{return}{\valone}}_{n+1}^\typeOne
&\triangleq \eta_{\mathcal{V}_\typeOne}(\valone) \\
\sem{(\abs{\varone}{e})\valone}_{n+1}^\typeOne
&\triangleq \sem{\substcomp{e}{\varone}{\valone}}_n^\typeOne \\
\sem{\casesum{\inject{\hat \imath}{\valone}}{e_i}}_{n+1}^\typeOne
&\triangleq \sem{\substcomp{e_{\hat \imath}}{\varone}{\valone}}_n^\typeOne \\
\sem{\pmfold{(\fold{\valone})}{e}}_{n+1}^\typeOne
&\triangleq \sem{\substcomp{e}{\varone}{\valone}}_{n}^\typeOne \\
\sem{\pmbang{{!} \valone}{e}}_{n+1}^\typeOne
&\triangleq \sem{\substcomp{e}{\varone}{\valone}}_{n}^\typeOne \\
\sem{\seq{e}{f}}_{n+1}^\typeOne
&\triangleq \kleisli{(\sem{\substcomp{f}{\varone}{-}}_n^{\typeTwo, \typeOne})}
\sem{e}_n^\typeTwo \\
\sem{\mathbf{op}(e_1, \hdots, e_k)}_{n+1}^\typeOne
&\triangleq op_{\mathcal{V}_\typeOne}(\sem{e_1}_n^\typeOne, \hdots,
\sem{e_k}_n^\typeOne)
\end{align*}
{\hrulefill}
\caption{Approximation evaluation semantics.}
\label{fig:approximation-semantics}
\end{figure}
Let us expand on the definition of $\sem{\seq{e}{f}}_{n+1}^\typeOne$.
Since $\seq{e}{f} \in \Lambda_\typeOne$,
there must be derivable judgments
$\emptyset \vdash e: \typeTwo$ and
$\varone:_s \typeTwo \vdash f: \typeOne$. As a consequence,
for any $\valone \in \mathcal{V}_\typeTwo$, we have
$\sem{\substcomp{f}{\varone}{\valone}}_n^\typeOne \in T\mathcal{V}_{\typeOne}$.
This induces a function
$\sem{\substcomp{f}{\varone}{-}}_n^{\typeTwo,\typeOne}$
from $\mathcal{V}_\typeTwo$ to $T \mathcal{V}_\typeOne$
whose Kleisli extension can be applied to
$\sem{e}_n^\typeTwo \in T \mathcal{V}_\typeTwo$.
Finally, it is easy to see that
$(\sem{e}_n)_{n < \omega}$ forms an $\omega$-chain
in $T \mathcal{V}_\typeOne$ (see Appendix \ref{appendix:proofs-v-fuzz}
for a proof of the following result).
\begin{restatable}{lemma}{evaluationSemanticsOmegaChain}
For any $e \in \Lambda_\typeOne$,
we have
$
\sem{e}_n^\typeOne \sqsubseteq_{\mathcal{V}_\typeOne} \sem{e}_{n+1}^\typeOne
$,
for any $n \geq 0$.
\end{restatable}
As a consequence, we can define
$\sem{-}^\typeOne: \Lambda_\typeOne \to T \mathcal{V}_\typeOne$
by
$$
\sem{e}^\typeOne \triangleq \bigsqcup\nolimits_{n<\omega} \sem{e}_n^\typeOne.
$$
In order to improve readability we oftentimes omit type superscripts
in $\sem{e}^\typeOne$.
We also notice that because $op$ is continuous and
$\mathbb{\monad}$ is \ensuremath{\omega\text{-}\mathsf{cppo}}-enriched, $\sem{-}^\typeOne$ is itself continuous.
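To convey the flavour of the indexed evaluation function, here is a
toy, self-contained sketch for a minimal probabilistic fragment, with
the subdistribution monad represented as a finite map from values to
probabilities (missing mass accounting for divergence); the syntax and
all names are ours, and the stand-in for $\Omega$ simply never
converges.
\begin{verbatim}
# Indexed evaluation in the style of the figure above, for a toy
# fragment: return, fair probabilistic choice, and a divergent term.
# sem(e, 0) is the bottom subdistribution; sem(e, n) grows with n.
def ret(v):       return ("ret", v)
def choice(e, f): return ("oplus", e, f)
OMEGA = ("omega",)                 # stand-in for the divergent term

def sem(e, n):
    if n == 0 or e[0] == "omega":
        return {}                  # bottom: no mass converges
    if e[0] == "ret":
        return {e[1]: 1.0}
    l, r = sem(e[1], n - 1), sem(e[2], n - 1)
    out = {}
    for d, w in ((l, 0.5), (r, 0.5)):
        for v, p in d.items():
            out[v] = out.get(v, 0.0) + w * p
    return out

e = choice(ret("v"), OMEGA)
for n in range(4):
    print(n, sem(e, n))   # {} for n <= 1, then {'v': 0.5} from n = 2 on
\end{verbatim}
Taking pointwise suprema of these approximations recovers $\sem{e}$,
matching the definition of $\sem{-}^\typeOne$ above.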
\begin{proposition}\label{prop:continuity-evaluation-function}
The following equations hold:
\begin{align*}
\sem{\mathsf{return}{\valone}}
&= \eta(\valone), \\
\sem{(\abs{\varone}{e})\valone}
&= \sem{\substcomp{e}{\varone}{\valone}}, \\
\sem{\casesum{\inject{\hat \imath}{\valone}}{e_i}}
&= \sem{\substcomp{e_{\hat \imath}}{\varone}{\valone}}, \\
\sem{\pmfold{(\fold{\valone})}{e}}
&= \sem{\substcomp{e}{\varone}{\valone}}, \\
\sem{\pmbang{{!} \valone}{e}}
&= \sem{\substcomp{e}{\varone}{\valone}}, \\
\sem{\seq{e}{f}}
&= \kleisli{\sem{\substcomp{f}{\varone}{-}}}(\sem{e}), \\
\sem{\mathbf{op}(e_1, \hdots, e_k)}
&= op_{\mathcal{V}_\typeOne}(\sem{e_1}, \hdots, \sem{e_k}).
\end{align*}
\end{proposition}
\section{$\mathsf{V}$-relators and $\mathsf{V}$-relation Lifting}
\label{section:v-relators-and-v-relation-lifting}
In \cite{DalLagoGavazzoLevy/LICS/2017} the abstract theory of
relators \cite{Barr/LMM/1970,Thijs/PhDThesis/1996} has been used to define
notions of applicative (bi)similarity for an untyped
$\lambda$-calculus enriched with algebraic operations.
Intuitively, a relator $\Gamma$ for a set endofunctor $T$
is an abstraction meant to capture
the possible ways a relation on a set $X$ can be turned (or lifted)
into a relation on $T X$.
Relators allow to abstractly express the idea that
bisimilar programs, when executed, exhibit the same
observable behaviour (i.e. they produce the same effects)
and evaluate to bisimilar values.
In particular, whenever two
programs $e$ and $e'$ are related by a
(bi)simulation $\mathcal{R}$, then the results
$\sem{e}$ and $\sem{e'}$ of their evaluation
must be related by $\Gamma \mathcal{R}$.
Since the latter relation ranges over
monadic values, it takes into account the visible effects of executing
$e$ and $e'$, such effects being encapsulated
by $T$.
The notion of $\mathsf{V}$-relator \cite{Hoffman-Seal-Tholem/monoidal-topology/2014}
is, in some sense, the `quantitative' generalisation
of the concept of a relator. Analogously to ordinary relators,
$\mathsf{V}$-relators for a set endofunctor $T$ are abstractions
meant to capture the possible ways a $\mathsf{V}$-relation on a set $X$
can be (nicely) turned into a $\mathsf{V}$-relation on $T X$, and thus
provide ways to lift a behavioural distance between programs to a
(behavioural) distance between monadic values.
On a formal level, we say that a $\mathsf{V}$-relator extends $T$ from
$\mathsf{Set}$ to $\Vrel{\quantale}$, laxly\footnote{Relators are also known as \emph{lax extensions}
\cite{Hoffman-Seal-Tholem/monoidal-topology/2014,Hoffman/Cottage-industry/2015}.}.
\begin{definition}\label{def:v-relator}
For a set endofunctor $T$, a \emph{$\mathsf{V}$-relator}
for $T$ is a mapping
$
(\alpha: X \tobar Y) \mapsto
(\Gamma \alpha: T X \tobar T Y)
$
satisfying conditions \eqref{vrel-1}-\eqref{vrel-4}.
We say that $\Gamma$ is \emph{conversive} if it additionally satisfies
condition \eqref{vrel-5}.
\begin{align*}
1_{T X}
&\leq \Gamma(1_{X}),
\tag{$\mathsf{V}$-rel 1} \label{vrel-1}
\\
\Gamma\beta \cdot \Gamma\alpha
& \leq \Gamma(\beta \cdot \alpha),
\tag{$\mathsf{V}$-rel 2} \label{vrel-2}
\\
T f \leq \Gamma f,
& \phantom{\leq} \dual{(T f)} \leq \Gamma\dual{f},
\tag{$\mathsf{V}$-rel 3} \label{vrel-3}
\\
\alpha \leq \beta
& \implies \Gamma\alpha \leq \Gamma\beta,
\tag{$\mathsf{V}$-rel 4} \label{vrel-4}
\\
\Gamma(\dual{\alpha})
&= \dual{(\Gamma \alpha)}.
\tag{$\mathsf{V}$-rel 5} \label{vrel-5}
\end{align*}
\end{definition}
Conditions (\ref{vrel-1}), (\ref{vrel-2}), and (\ref{vrel-4})
are rather standard. Condition (\ref{vrel-3}), which actually consists
of two conditions, states that $\mathsf{V}$-relators behave in
the expected way on functions.
It is immediate to see that when instantiated with $\mathsf{V} = \mathsf{2}$,
the above definition gives the usual notion of relator, with some
minor differences. In \cite{DalLagoGavazzoLevy/LICS/2017}
and \cite{Levy/FOSSACS/2011}
a kernel preservation condition is required in place of
(\ref{vrel-3}). This condition is also known as \emph{stability} in
\cite{Huge-Jacobs/Simulations-in-coalgebra/2004}.
Stability requires the equality
$$
\Gamma(\dual{g} \cdot \alpha \cdot f)
=
\dual{(T g)} \cdot \Gamma \alpha \cdot T f
$$
to hold. It is easy to see that a $\mathsf{V}$-relator always
satisfies stability. Notice also that stability gives the following implication:
$$
\alpha \leq \dual{g} \cdot \beta \cdot f
\implies
\Gamma \alpha \leq \dual{(T g)} \cdot \Gamma \beta \cdot T f,
$$
which can be diagrammatically expressed as:
\begin{center}
\(
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
X
\ar[r]^{f}
\ar[d]_{\alpha}|@{|} &
Z
\ar[d]^{\beta}|-*=0@{|}
\\
Y
\ar[r]_{g} &
W } }
\)
$\implies$
\(
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
TX
\ar[r]^{T f}
\ar[d]_{\Gamma\alpha}|@{|} &
TZ
\ar[d]^{\Gamma\beta}|-*=0@{|}
\\
TY
\ar[r]_{T g} &
T W } }
\).
\end{center}
Finally, we observe that any $\mathsf{V}$-relator $\Gamma$ for $T$ induces an endomap $T_\Gamma$ on $\Vrel{\quantale}$
that acts as $T$ on sets and as $\Gamma$ on $\mathsf{V}$-relations.
It is easy to check that the conditions in Definition \ref{def:v-relator}
make $T_\Gamma$ a \emph{lax endofunctor}.
Before giving examples of $\mathsf{V}$-relators it is useful to
observe that the collection of $\mathsf{V}$-relators is closed under
certain operations.
\begin{restatable}{proposition}{algebraVrelators}
\label{prop:algebra-of-v-relators}
Let $\functor, U$ be set endofunctors. Then:
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item
If $\vrelator$ and $\Delta$ are $\mathsf{V}$-relators for
$\functor$ and $U$, respectively, then
$\Delta \cdot \vrelator$ defined by
$
(\Delta \cdot \vrelator)\alpha \triangleq
\Delta \vrelator\alpha$
is a $\mathsf{V}$-relator for $U \functor$.
\item
If $\{\Gamma_i\}_{i \in I}$ is a family of $\mathsf{V}$-relators
for $\functor$, then $\bigwedge_{i \in I}\Gamma_i$ defined by
$
(\bigwedge_{i \in I} \Gamma_i)\alpha
\triangleq
\bigwedge_{i \in I} \Gamma_i\alpha
$
is a $\mathsf{V}$-relator for $\functor$.
\item
If $\Gamma$ is a $\mathsf{V}$-relator for $\functor$,
then $\dual{\Gamma}$ defined by $\dual{\Gamma}\alpha \triangleq
\dual{(\Gamma\dual{\alpha})}$ is a
$\mathsf{V}$-relator for $\functor$.
\item
For any $\mathsf{V}$-relator $\Gamma$,
$\Gamma \wedge \dual{\Gamma}$ is the greatest conversive
$\mathsf{V}$-relator smaller than $\Gamma$.
\end{enumerate}
\end{restatable}
\begin{proof}
See Appendix \ref{appendix:proofs-v-relators-and-v-relation-lifting}.
\end{proof}
\begin{example}\label{ex:v-relators}
Let us consider the monads in Example \ref{ex:monads} regarded as
functors.
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item For the partiality functor $(-)_\bot$
define the $\mathsf{V}$-relator $(-)_\bot$ by:
$$
\alpha_\bot(x,y) \triangleq \alpha(x,y),
\quad \alpha_\bot(\bot_X,\mathpzc{y}) \triangleq k,
\quad \alpha_\bot(x, \bot_Y) \triangleq \Bot,
$$
where $x \in X, y \in Y,
\mathpzc{y} \in Y_\bot$, and $\alpha: X \tobar Y$.
The $\mathsf{V}$-relation $\alpha_\bot$ generalises
the usual notion of \emph{simulation} for partial computations.
Similarly, $\alpha_{\bot\mbot} \triangleq
\alpha_\bot \wedge \dual{((\dual{\alpha})_\bot)}$
generalises the usual notion of \emph{bisimulation} for partial computations.
\item For the powerset functor $\mathcal{P}$
define the $\mathsf{V}$-relator $H$
(called Hausdorff lifting) and
its conversive counterpart
$H^s \triangleq H \wedge \dual{H}$ by
$
H \alpha(\mathpzc{X},\mathpzc{Y})
\triangleq
\bigwedge_{x \in \mathpzc{X}} \bigvee_{y \in \mathpzc{Y}} \alpha(x,y).$
If we instantiate $\mathsf{V}$ as the Lawvere
quantale, then $H^s$ gives the usual
Hausdorff lifting of distances on a set $X$ to distances on $\mathcal{P} X$,
whereas for $\mathsf{V} = \mathsf{2}$ we recover the usual notion of
\emph{(bi)simulation} for unlabelled transition systems.
\item For the full distribution functor $\mathcal{D}$
we define a $[0,1]$-relator (with respect to
the unit interval quantale) using the so-called
\emph{Wasserstein-Kantorovich lifting} \cite{Villani/optimal-transport/2008}.
For $\mu \in \mathcal{D}(X), \nu \in \mathcal{D}(Y)$,
the set $\Omega(\mu, \nu)$ of \emph{couplings} of $\mu$ and $\nu$
is the set of joint distributions $\omega \in \mathcal{D}(X \times Y)$
such that $\mu = \sum_{y\in Y} \omega(-, y)$ and
$\nu = \sum_{x \in X} \omega(x,-)$. For a $[0,1]$-relation
$\alpha: X \tobar Y$ define:
$$
W \alpha(\mu, \nu) \triangleq
\inf\nolimits_{\omega \in \Omega(\mu, \nu)}
\sum\nolimits_{x,y} \alpha(x,y) \cdot \omega(x,y).
$$
$W \alpha(\mu, \nu)$ attains its infimum and has
a dual characterisation.
\begin{restatable}{proposition}{dualityWassersteinLifting}
\label{prop:duality-wasserstein-lifting}
Let $\mu \in \mathcal{D}(X), \nu \in \mathcal{D}(Y)$
be countable distributions and
$\alpha: X \tobar Y$ be a $[0,1]$-relation.
Then:
\begin{align*}
W \alpha(\mu,\nu)
&=
\min \{ \sum\nolimits_{x,y} \alpha(x,y) \cdot \omega(x,y)
\mid\omega \in \Omega(\mu, \nu)\} \\
&=
\max \{\sum\nolimits_{x} a_x \cdot \mu(x) +
\sum\nolimits_y b_y \cdot \nu(y)
\\
&\phantom{=} \mid a_x + b_y \leq \alpha(x,y),
a_x, b_y \text{ bounded}\},
\end{align*}
where $a_x,b_y$ bounded means that
there exist $\bar a,\bar b \in \mathbb{R}$ such
that $\forall x.\ a_x \leq \bar a$, and
$\forall y.\ b_y \leq \bar b$.
\end{restatable}
The above proposition (see Appendix
\ref{appendix:proofs-behavioural-v-relations} for a proof)
is a direct consequence of the
Duality Theorem for \emph{countable} transportation problems
\cite{Kortanek/InfiniteTransportationProblems/1995}
(Theorems 2.1 and 2.2).
Using Proposition \ref{prop:duality-wasserstein-lifting} we can show that
$W$ indeed defines a $[0,1]$-relator
(but see Digression \ref{digression:building-v-relators}).
Finally, we can compose the Wasserstein lifting $W$ with the
$\mathsf{V}$-relator $(-)_\bot$
of point 1 obtaining the (non-conversive)
$[0,1]$-relator $W_\bot$ for the countable
subdistribution functor $\mathcal{D}_{\leq 1}$.
\end{enumerate}
\end{example}
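Two of these liftings can be computed directly on finite data. The
sketch below is ours (it assumes SciPy is available for the linear
program, and all names are hypothetical): it computes the Hausdorff
lifting over the Lawvere quantale, where the quantale meet and join are
the numerical supremum and infimum, and the Wasserstein lifting for
finite distributions by solving the transportation problem.
\begin{verbatim}
# Hausdorff lifting (quantale meet = numerical max, join = min) and
# Wasserstein lifting for finite distributions, the latter as an LP
# over couplings with the prescribed marginals.  Requires SciPy.
from scipy.optimize import linprog

def hausdorff(alpha, X, Y):
    return max(min(alpha(x, y) for y in Y) for x in X)

def wasserstein(alpha, mu, nu):
    xs, ys = list(mu), list(nu)
    c = [alpha(x, y) for x in xs for y in ys]
    A_eq, b_eq = [], []
    for i in range(len(xs)):                    # row marginals = mu
        A_eq.append([1.0 if k // len(ys) == i else 0.0
                     for k in range(len(c))]); b_eq.append(mu[xs[i]])
    for j in range(len(ys)):                    # column marginals = nu
        A_eq.append([1.0 if k % len(ys) == j else 0.0
                     for k in range(len(c))]); b_eq.append(nu[ys[j]])
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun

alpha = lambda x, y: abs(x - y)
print(hausdorff(alpha, [0.0, 1.0], [0.2]))                  # 0.8
print(wasserstein(alpha, {0.0: 0.5, 1.0: 0.5}, {0.5: 1.0})) # 0.5
\end{verbatim}
The two printed values agree with $H\alpha$ and $W\alpha$ as computed
by hand on these inputs (up to the LP solver's tolerance).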
\begin{digression}[Building $\mathsf{V}$-relators]
\label{digression:building-v-relators}
Most of the $\mathsf{V}$-relators in Example \ref{ex:v-relators} can be
obtained using a general abstract construction refining the so-called
\emph{Barr extension} of a functor \cite{Kurz/Tutorial-relation-lifting/2016}.
Recall that any relation $\mathcal{R}: X \tobar Y$
(i.e. a $\mathsf{2}$-relation $\mathcal{R}: X \times Y \to \mathsf{2}$) can be
equivalently presented as a subset of $X \times Y$ via its graph
$G_{\mathcal{R}}$. This allows us to express
$\mathcal{R}$ as $\pi_2 \cdot \dual{\pi_1}$ (in $\mathbf{Rel}$), where
$\pi_1: G_\mathcal{R} \to X$, $\pi_2: G_\mathcal{R} \to Y$ are the
usual projection functions.
\begin{definition}\label{def:barr-extension}
Let $T$ be an endofunctor on $\mathsf{Set}$ and $\mathcal{R}: X \tobar Y$
be a relation. The \emph{Barr extension} $\overline{T}$ of
$T$ to $\mathbf{Rel}$ is defined by:
$$
\overline{T} \mathcal{R} \triangleq T \pi_2 \cdot \dual{(T \pi_1)},
$$
where $\mathcal{R} = \pi_2 \cdot \dual{\pi_1}$.
Pointwise, $\overline{T}$ is defined by:
$$
\mathpzc{x}\ \overline{T} \mathcal{R}\ \mathpzc{y} \iff
\exists \mathpzc{w} \in T G_\mathcal{R}.\ (
T \pi_1 (\mathpzc{w}) = \mathpzc{x},\
T \pi_2 (\mathpzc{w}) = \mathpzc{y}),
$$
where $\mathpzc{x} \in T X$ and $\mathpzc{y} \in T Y$.
\end{definition}
In general, $\overline{T}$ is not a $\mathsf{2}$-relator, but it is
so if $T$ preserves weak pullback diagrams
\cite{Kurz/Tutorial-relation-lifting/2016}
(or, equivalently, if $T$ satisfies the Beck-Chevalley condition
\cite{Hoffman-Seal-Tholem/monoidal-topology/2014}).
This condition is satisfied by all functors we have considered so far
in our examples.
Definition \ref{def:barr-extension} crucially relies on the
double nature of a relation, which can be viewed both as an
arrow in $\mathbf{Rel}$ and as an object in $\mathsf{Set}$. This is no
longer the case for a $\mathsf{V}$-relation, and thus it is not
clear how to define the Barr extension of a functor $F$ from
$\mathsf{Set}$ to $\Vrel{\quantale}$.
However, the Barr extension of $F$ can be characterised in an
alternative way if we assume $F$ to preserve weak pullback diagrams
(although the reader can see
\cite{Manes/Taut-monads,Hoffman-topological-theories-as-closed-objects}
for more general conditions). Let
$\xi: F \mathsf{2} \to \mathsf{2}$ be the map defined by
$\xi(\mathpzc{x}) = \mathsf{true}$
if and only if $\mathpzc{x} \in F \{\mathsf{true}\}$, where $F \{\mathsf{true}\}$
is the image of the map $F \iota$ for the inclusion
$\iota: \{\mathsf{true}\} \to \mathsf{2}$. That is, $\xi(\mathpzc{x}) = \mathsf{true}$ if and
only if there exists an element $\mathpzc{y} \in F\{\mathsf{true}\}$ such
that $F \iota (\mathpzc{y}) = \mathpzc{x}$. Note that this makes sense
since $F$ preserves monomorphisms (recall that monomorphisms can be
described via weak pullbacks) and thus
$F \iota: F \{\mathsf{true}\} \to F \mathsf{2}$ is a monomorphism.
We can now characterise
$\overline{F} \mathcal{R}$ without mentioning the graph of
$\mathcal{R}$:
$$
\overline{F}\mathcal{R}(\mathpzc{x},\mathpzc{y}) = \mathsf{true}
\iff
\exists \mathpzc{w} \in F (X \times Y).\
\begin{cases}
F \pi_1 (\mathpzc{w}) &= \mathpzc{x}, \\
F \pi_2 (\mathpzc{w}) &= \mathpzc{y}, \\
\xi \cdot F \mathcal{R}(\mathpzc{w}) &= \mathsf{true}.
\end{cases}
$$
Since the existential quantification is nothing but the join of the
boolean quantale $\mathsf{2}$, the above characterisation
of $\overline{F}$ can be turned into a definition of an extension
of $F$ to $\Vrel{\quantale}$ parametric with respect to a map
$\xi: F \mathsf{V} \to \mathsf{V}$.
\begin{definition}\label{def:v-barr-extension}
For a set endofunctor $F$ and a map
$\xi: F \mathsf{V} \to \mathsf{V}$ define the
\emph{$\mathsf{V}$-Barr extension} $\overline{F}_\xi$
of $F$ to $\Vrel{\quantale}$ with respect to $\xi$ as follows:
$$
\overline{F}_\xi \alpha(\mathpzc{x}, \mathpzc{y})
\triangleq \bigvee_{\mathpzc{w} \in \Omega(\mathpzc{x}, \mathpzc{y})}
\xi \cdot F \alpha(\mathpzc{w}),
$$
for $\mathpzc{x} \in F X, \mathpzc{y} \in F Y$, where
the set $\Omega(\mathpzc{x}, \mathpzc{y})$ of generalised couplings of
$\mathpzc{x}, \mathpzc{y}$ is defined by:
$$
\Omega(\mathpzc{x}, \mathpzc{y}) \triangleq
\{\mathpzc{w} \in F(X \times Y) \mid
F \pi_1 (\mathpzc{w}) = \mathpzc{x},\
F \pi_2 (\mathpzc{w}) = \mathpzc{y}\}.
$$
\end{definition}
\begin{example}\label{ex:stucture-maps}
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item Taking
$\xi: \mathcal{P} \mathsf{V} \to \mathsf{V}$ defined by
$\xi(\mathpzc{X}) \triangleq \bigwedge \mathpzc{X}$ we
recover the Hausdorff lifting $H^s$.
\item Taking the expectation function
$\xi: \mathcal{D} [0,1] \to [0,1]$ defined by
$\xi(\mu) \triangleq \sum_x x \cdot \mu(x)$ we
recover the Wasserstein lifting $W$.
\end{enumerate}
\end{example}
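For finite data the $\mathsf{V}$-Barr extension can be computed by
brute force. The sketch below (our illustration; names are
hypothetical) instantiates Definition \ref{def:v-barr-extension} for
the finite powerset functor over the Lawvere quantale, enumerating the
generalised couplings explicitly; on small sets it recovers the
symmetric Hausdorff lifting $H^s$.
\begin{verbatim}
# V-Barr extension for the finite powerset functor over the Lawvere
# quantale: generalised couplings are subsets w of X x Y with full
# projections; xi is the quantale meet (numerical max) and the outer
# join is the numerical min.
from itertools import chain, combinations

def couplings(X, Y):
    pairs = [(x, y) for x in X for y in Y]
    for w in chain.from_iterable(combinations(pairs, r)
                                 for r in range(1, len(pairs) + 1)):
        if {x for x, _ in w} == set(X) and {y for _, y in w} == set(Y):
            yield w

def barr_ext(alpha, X, Y):
    return min(max(alpha(x, y) for x, y in w) for w in couplings(X, Y))

alpha = lambda x, y: abs(x - y)
print(barr_ext(alpha, [0.0, 1.0], [0.2]))   # 0.8 = H^s on these sets
\end{verbatim}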
Using the map $\xi: F \mathsf{V} \to \mathsf{V}$ we can define an extension
of $F$ to $\Vrel{\quantale}$. However, such an extension is in general not a
$\mathsf{V}$-relator.
Nonetheless, under mild conditions on $\xi$ and assuming $F$ to
preserve weak pullbacks, it is possible to show that
$\overline{F}_\xi$ is indeed a $\mathsf{V}$-relator. The following
proposition has been proved in \cite{Clementino-Tholen/From-lax-monad-extensions-to-topological-theoreis,Hoffman-topological-theories-as-closed-objects}
(a similar result for real-valued pseudometric spaces has been proved in
\cite{Bonchi/Behavioral-metrics-via-functor-lifting/FSTTCS/2014,Bonchi/Towards-trace-metrics-via-functor-fifting/CALCO/2015}, where
an additional extension still parametric over $\xi$ is also studied).
\begin{proposition}\label{prop:v-barr-extensions-are-v-relators}
Let $F$ be a functor preserving weak pullbacks and
$\xi: F \mathsf{V} \to \mathsf{V}$
be a map such that:
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item $\xi$ respects quantale multiplication:
$$
\xymatrix{
\ar @{} [dr] |\leq
F(\mathsf{V} \times \mathsf{V})
\ar[r]^{F \otimes}
\ar[d]_{\langle \xi \cdot F \pi_1, \xi \cdot F \pi_2\rangle} &
F \mathsf{V}
\ar[d]^{\xi\ .}
\\
\mathsf{V} \times \mathsf{V}
\ar[r]_{\otimes} &
\mathsf{V}
}
$$
\item $\xi$ respects the unit of the quantale:
$$
\xymatrix{
\ar @{} [dr] |\leq
F 1
\ar[r]^{F k}
\ar[d]_{!} &
F \mathsf{V}
\ar[d]^{\xi\ .}
\\
1
\ar[r]_{k} &
\mathsf{V}
}
$$
\item $\xi$ respects the order of the quantale. That is, the map
$\varphi \mapsto \xi \cdot F \varphi$, for $\varphi: X \to \mathsf{V}$,
is monotone.
\end{enumerate}
Then $\overline{F}_\xi$ is a conversive $\mathsf{V}$-relator.
\end{proposition}
It is straightforward to check that the expectation function in
Example \ref{ex:stucture-maps} satisfies the above three conditions.
By Proposition \ref{prop:v-barr-extensions-are-v-relators}
it follows that the Wasserstein lifting gives indeed a $[0,1]$-relator,
and thus so does its composition with the $[0,1]$-relator $(-)_\bot$.
The extension $\overline{F}_{\xi}$ gives a somewhat canonical
\emph{conversive} $\mathsf{V}$-relator and thus provides a way to build
canonical (applicative) $\mathsf{V}$-\emph{bi}simulations.
However, being intrinsically conversive,
$\overline{F}_{\xi}$ is not a good candidate to build $\mathsf{V}$-simulations.
For most of the examples considered we can get around the problem
considering $(\overline{F}_\xi)_\bot$ (as we do with e.g. $W_\bot$).
Nonetheless, it is desirable
to have a general notion of extension characterising
notions of $\mathsf{V}$-simulations. That has been done for ordinary relations
in e.g. \cite{Huge-Jacobs/Simulations-in-coalgebra/2004,Levy/FOSSACS/2011}
for functors $F$ inducing
a suitable order $\leq_{X}$ on $F X$ and considering the
relator $\overline{F}_{\leq} \triangleq
\leq_{-} \cdot \overline{F} \cdot \leq_{-}$.
Proving that $\overline{F}_{\leq}$ indeed gives a relator requires
$F$ to satisfy specific conditions. For instance,
in \cite{Levy/FOSSACS/2011} it is proved that if $F$
satisfies a suitable form of weak-pullback preservation (which takes into
account the order induced by $F$), then
$\overline{F}_{\leq}$ is indeed a relator.
This suggests considering functors $F$ inducing a suitable
$\mathsf{V}$-relation $\alpha_X$ on $F X$ and then trying to
study if, and under which conditions,
$\alpha_{-} \cdot \overline{F}_\xi \cdot \alpha_{-}$
is a $\mathsf{V}$-relator. This proposal has not been investigated
in the context of the present work but it definitely constitutes a topic for
future research.
\end{digression}
\paragraph{$\mathsf{V}$-relators for Strong Monads}
In the previous paragraph we saw that a $\mathsf{V}$-relator
extends a functor from $\mathsf{Set}$ to $\Vrel{\quantale}$ laxly.
Since we model effects through strong monads it seems more natural to
require $\mathsf{V}$-relators to
extend strong monads from $\mathsf{Set}$ to $\Vrel{\quantale}$ laxly.
The reason behind such requirement can be intuitively understood as follows.
Recall that by Proposition \ref{prop:continuity-evaluation-function}
we have (for readability we omit types)
$\sem{\seq{e}{f}} =
\kleisli{\sem{\substcomp{f}{\varone}{-}}} \sem{e}$.
This operation can be described using the so-called bind function
$$
\mathbin{\scriptstyle{\gg=}}: (X \to T Y) \times T X \to T Y,
$$
so that we have
$\sem{\seq{e}{f}} =
\sem{\substcomp{f}{\varone}{-}} \mathbin{\scriptstyle{\gg=}} \sem{e}$.
Now, let $f,g: X \to T Y$ be functions, $\alpha: X \tobar X, \beta: Y \tobar Y$
be $\mathsf{V}$-relations,
and $\Gamma$ be a $\mathsf{V}$-relator for $T$.
Considering the compound $\mathsf{V}$-relation
$[\alpha, \Gamma \beta] \otimes \Gamma \alpha$
(see Digression \ref{digression:v-categories}) and ignoring
issues about sensitivity, it is then natural to require
$\mathbin{\scriptstyle{\gg=}}$ to be non-expansive. That is, we require the inequality
$$
[\alpha, \Gamma \beta](f,g) \otimes
\Gamma \alpha(\mathpzc{x},\mathpzc{y}) \leq
\Gamma \beta(f \mathbin{\scriptstyle{\gg=}} \mathpzc{x},g \mathbin{\scriptstyle{\gg=}} \mathpzc{y})
$$
i.e.
$$
\bigwedge_{x \in X}\Gamma\beta(f(x),g(x)) \otimes
\Gamma \alpha(\mathpzc{x},\mathpzc{y}) \leq
\Gamma \beta(f \mathbin{\scriptstyle{\gg=}} \mathpzc{x},g \mathbin{\scriptstyle{\gg=}} \mathpzc{y}).
$$
Informally, we are requiring the behavioural distance
between sequential compositions of programs to be
bounded by the behavioural distances between their components
(this is of course too strong a requirement, but at this point
it should be clear to the reader that it is sufficient to
require $\mathbin{\scriptstyle{\gg=}}$ to be Lipschitz continuous rather
than non-expansive).
Since $\mathbin{\scriptstyle{\gg=}}$ is nothing but the strong Kleisli extension
$\strongkleisli{\mathsf{apply}}$ of the application function
$\mathsf{apply}: (X \to T Y) \times X \to T Y$
defined by $\mathsf{apply}(f,x) \triangleq f(x)$, what we need to do
is indeed to extend strong monads from $\mathsf{Set}$ to $\Vrel{\quantale}$ (laxly).
\begin{definition}\label{def:strong-v-relator}
Let $\mathbb{\monad} = \langle T, \eta, \strongkleisli{-} \rangle$
be a strong monad on $\mathsf{Set}$, and $\Gamma$ be a $\mathsf{V}$-relator
for $T$ (regarded as a functor).
We say that $\Gamma$ is an $L$-continuous\footnote{
Instantiating $\mathsf{V}$ as the Lawvere quantale,
we see that condition \eqref{s-Strong-Lax-Bind} is requiring
Lipschitz continuity of multiplication and strength of $\mathbb{\monad}$.
}
$\mathsf{V}$-relator for $\mathbb{\monad}$ if it satisfies
the following conditions for any CBE $s \leq 1$.
\begin{align}
\alpha \leq \dual{\eta_Y} \cdot \Gamma \alpha \cdot \eta_X,
\tag{Lax unit} \label{Lax-Unit}
\\
\gamma \otimes (s \circ \alpha)
\leq \dual{g} \cdot \Gamma \beta \cdot f
&\implies
\gamma \otimes (s \circ \Gamma\alpha)
\leq \dual{(\strongkleisli{g})} \cdot \Gamma \beta \cdot \strongkleisli{f},
\tag{$L$-Strong lax bind} \label{s-Strong-Lax-Bind}
\end{align}
\end{definition}
The condition $s \leq 1$ reflects the presence of
$s \wedge 1$ in the typing rule for sequencing.
Also notice that by taking $s \triangleq 1$, conditions \eqref{Lax-Unit}
and \eqref{s-Strong-Lax-Bind}
are equivalent to requiring unit, multiplication, and strength of $\mathbb{\monad}$
to be non-expansive.
\begin{example}\label{ex:wasserstein-lifintg-satisfies-strong-lax-bind}
It is easy to check that $\mathsf{V}$-relators for the
partiality and the powerset monads satisfy conditions in
Definition \ref{def:strong-v-relator}.
Using Proposition \ref{prop:duality-wasserstein-lifting}
it is possible to show that also the Wasserstein
lifting(s) $W$ and $W_\bot$ do,
although this is less trivial (see Appendix
\ref{appendix:proofs-behavioural-v-relations}).
\end{example}
Finally, if $\mathbb{\monad}$ is $\Sigma$-continuous we require
$\mathsf{V}$-relators for $\mathbb{\monad}$ to be compatible with the
$\Sigma$-continuous structure.
\begin{definition}
Let $\mathbb{\monad}$ be a $\Sigma$-continuous monad,
$\mathsf{V}$ be a $\Sigma$-quantale, and
$\Gamma$ be a $\mathsf{V}$-relator for $\mathbb{\monad}$.
We say that $\Gamma$ is \emph{$\Sigma$-compatible}
and \emph{inductive} if the following
inequalities hold:
\begin{align*}
op_{\quantale}(\Gamma\alpha(\mathpzc{u}_1, \mathpzc{y}_1), \hdots,
\Gamma\alpha(\mathpzc{u}_n, \mathpzc{y}_n))
&\leq
\Gamma\alpha(op_{X}
(\mathpzc{u}_1, \hdots, \mathpzc{u}_n),
op_{Y}(\mathpzc{y}_1, \hdots, \mathpzc{y}_n)),
\\
k
&\leq \Gamma \alpha(\bot_{X}, \mathpzc{y}),
\\
\bigwedge\nolimits_n \Gamma\alpha(\mathpzc{x}_n, \mathpzc{y})
&\leq \Gamma\alpha(\bigsqcup\nolimits_n \mathpzc{x}_n, \mathpzc{y}),
\end{align*}
for any
$\omega$-chain $(\mathpzc{x}_n)_{n < \omega}$ and
elements $\mathpzc{u}_1, \hdots, \mathpzc{u}_n$ in $T X$,
elements $\mathpzc{y},\mathpzc{y}_1, \hdots, \mathpzc{y}_n \in T Y$,
$n$-ary operation symbol $\mathbf{op} \in \Sigma$, and
$\mathsf{V}$-relation $\alpha: X \tobar Y$.
\end{definition}
In particular, if $\Gamma$ is inductive and
$a \leq \Gamma\alpha(\mathpzc{x}_n, \mathpzc{y})$ holds
for any $n < \omega$, then
$
a \leq \Gamma\alpha(\bigsqcup_{n< \omega} \mathpzc{x}_n, \mathpzc{y}).
$
\begin{example}\label{ex:inductive-v-relator}
Easy calculations show that $(-)_\bot$ and $H$
are inductive and $\Sigma$-compatible.
Using results from \cite{Villani/optimal-transport/2008}
and \cite{Wasserstein-metric-and-subordination} (Lemma 5.2)
it is possible to show that $W_\bot$ is
inductive, the relevant inequality being
$$
W_\bot\alpha(\sup_n \mu_n, \nu)
\leq \sup_n W_\bot\alpha(\mu_n, \nu).
$$
Proving $\Sigma$-compatibility of $W$ and $W_\bot$
amounts to proving
$$
\Gamma\alpha(\mu_1 \oplus_p \nu_1, \mu_2 \oplus_p \nu_2) \leq
\Gamma \alpha(\mu_1,\mu_2) \oplus_p
\Gamma \alpha(\nu_1, \nu_2),
$$
which is straightforward.
\end{example}
\paragraph{From $\mathsf{V}$-relators to $\mathsf{2}$-relators}
\label{paragraph:from-v-relators-to-two-relators}
Before applying the abstract theory of $\mathsf{V}$-relators to
$\mathsf{V}$-$\mathsf{Fuzz}$\ we show how a $\mathsf{V}$-relator induces a
canonical $\mathsf{2}$-relator (this will be useful in the next section).
Consider the maps:
\begin{center}
\begin{tabular}{cc}
$\varphi : \mathsf{V} \to \mathsf{2}$ &
$\psi : \mathsf{2} \to \mathsf{V}$
\\
$k \mapsto \mathsf{true},\ a \mapsto \mathsf{false}\ (a \neq k)$ &
$\mathsf{true} \mapsto k,\ \mathsf{false} \mapsto \Bot$
\end{tabular}
\end{center}
We immediately see that $\varphi$ and $\psi$ are
CBFs and that $\varphi$ is the right adjoint of $\psi$.
We associate to every $\mathsf{V}$-relation $\alpha$
its kernel $\mathsf{2}$-relation $\varphi \circ \alpha$ and
to any $\mathsf{2}$-relation $\mathcal{R}$ the
$\mathsf{V}$-relation $\psi \circ \mathcal{R}$.
Similarly, we can associate to each
$\mathsf{V}$-relator $\Gamma$ the $\mathsf{2}$-relator
$\Delta_\Gamma\mathcal{R} \triangleq \varphi \circ \Gamma(\psi \circ \mathcal{R}).$
Moreover, since
$\varphi$ is the right adjoint of $\psi$ we have the inequalities:
\begin{align*}
\psi \circ \Delta_\Gamma \mathcal{R} &\leq \Gamma(\psi \circ \mathcal{R}) \\
\Delta_\Gamma(\varphi \circ \alpha) & \leq \varphi \circ \Gamma\alpha.
\end{align*}
Finally, we say that $\Gamma$ is compatible with $\varphi$ if
$\Delta_\Gamma(\varphi \circ \alpha) = \varphi \circ \Gamma\alpha$
holds for any $\alpha: X \tobar Y$.
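Concretely, over the Lawvere quantale the maps $\varphi$ and $\psi$
and the kernel construction look as follows (a small sketch with our
own names):
\begin{verbatim}
# phi and psi between the Lawvere quantale and the Boolean quantale 2,
# and the kernel 2-relation of a [0,oo]-relation: phi asks whether a
# distance equals the unit k = 0, psi embeds booleans as {0, oo}.
INF = float("inf")
def phi(a): return a == 0.0              # V -> 2
def psi(b): return 0.0 if b else INF     # 2 -> V

alpha = lambda x, y: abs(x - y)          # a [0, oo]-relation on reals
kernel = lambda x, y: phi(alpha(x, y))   # its kernel 2-relation
print(kernel(3.0, 3.0), kernel(3.0, 4.0))   # True False
\end{verbatim}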
\begin{example}
\label{ex:simulation-relators-from-v-simulation-relators}
\begin{enumerate}[wide = 0pt, leftmargin = *]
\item For the $\mathsf{V}$-relator $(-)_\bot$ and $\mathcal{R}: X \tobar Y$
we have
$\Delta_\bot\mathcal{R}(\mathpzc{x}, \mathpzc{y}) = \mathsf{true}$ if and
only if $\mathpzc{x} \in X, \mathpzc{y} \in Y$ and
$\mathcal{R}(\mathpzc{x},\mathpzc{y}) = \mathsf{true}$, or $\mathpzc{x} = \bot$.
That is, $\Delta_\bot$ gives the usual simulation relator
for `effect-free' $\lambda$-calculi.
An easy calculation shows that
$\Delta_\bot(\varphi \circ \alpha) = \varphi \circ \alpha_\bot$.
Replacing $(-)_\bot$ with $(-)_{\bot\mbot}$ we recover the bisimulation
relator for `effect-free' $\lambda$-calculi.
\item For the $\mathsf{V}$-relator $H$ and $\mathcal{R}: X \tobar Y$
we have:
$$\Delta_H\mathcal{R}(\mathpzc{X}, \mathpzc{Y}) = \mathsf{true}
\iff \forall x \in \mathpzc{X}.\ \exists y \in \mathpzc{Y}.\
\mathcal{R}(x,y) = \mathsf{true}.
$$
Therefore, $\Delta_H$ gives the usual notion of simulation
for nondeterministic systems.
Proving compatibility with $\varphi$, i.e.
$\Delta_H(\varphi \circ \alpha) = \varphi \circ H\alpha$,
is straightforward.
A similar argument holds for $H^s$.
\item Consider the Wasserstein lifting $W$ and observe that
we have
$
\Delta_{W}\mathcal{R}(\mu, \nu) = \mathsf{true}$ if and only if
the following holds:
$$
\exists \omega \in \Omega(\mu,\nu).\
\forall x,y.\ \omega(x,y) > 0 \implies \mathcal{R}(x,y) = \mathsf{true}.
$$
We have thus recovered the usual notion of probabilistic relation lifting
via couplings \cite{Kurz/Tutorial-relation-lifting/2016}.
Moreover, if $\varphi \circ W\alpha(\mu,\nu) = \mathsf{true}$,
then $W\alpha(\mu,\nu) = 0$, meaning that there exists a coupling
$\omega \in \Omega(\mu, \nu)$ such that
$\sum_{x,y} \omega(x,y) \cdot \alpha(x,y) = 0$. In particular,
if $\omega(x,y) > 0$, then $\alpha(x,y) = 0$ i.e.
$(\varphi \circ \alpha)(x,y) = \mathsf{true}$. That is, $W$ is
compatible with $\varphi$. From point $1$ it follows that
$W_\bot$ is compatible with
$\varphi$ as well.
\end{enumerate}
\end{example}
We conclude this section with the following auxiliary lemma
(whose proof is given in Appendix \ref{appendix:proofs-behavioural-v-relations}),
which will be useful to prove that the kernels of applicative distances are
suitable applicative (bi)simulations.
\begin{restatable}{lemma}{kernelLemma}
\label{lemma:kernel-lemma}
Let $\Gamma$ be a $\mathsf{V}$-relator compatible with
$\varphi$. Then the following hold:
\begin{align*}
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
X
\ar[r]^-{f}
\ar[d]_{\alpha}|-*=0@{|}
&
T Z
\ar[d]^{\Gamma \beta}|-*=0@{|}
\\
Y
\ar[r]_-{g}
&
T W
} }
&\implies
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
X
\ar[r]^-{f}
\ar[d]_{\varphi \circ \alpha}|-*=0@{|}
&
T Z
\ar[d]^{\Delta_{\Gamma} (\varphi \circ \beta)}|-*=0@{|}
\\
Y
\ar[r]_-{g}
&
T W
} },
\\
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
X
\ar[r]^-{f}
\ar[d]_{\mathcal{R}}|-*=0@{|}
&
T Z
\ar[d]^{\Delta_{\Gamma} \mathcal{S}}|-*=0@{|}
\\
Y
\ar[r]_-{g}
&
T W
} }
&\implies
\vcenter{
\xymatrix{
\ar @{} [dr] |\leq
X
\ar[r]^-{f}
\ar[d]_{\psi \circ \mathcal{R}}|-*=0@{|}
&
T Z
\ar[d]^{\Gamma (\psi \circ \mathcal{S})}|-*=0@{|}
\\
Y
\ar[r]_-{g}
&
T W
} }.
\end{align*}
\end{restatable}
\section{Behavioural $\mathsf{V}$-relations}
\label{section:behavioural-v-relations}
In this section we extend the relational theory developed in
e.g. \cite{Lassen/PhDThesis,Gordon/FOSSACS/01} for higher-order
functional languages to $\mathsf{V}$-relations for $\mathsf{V}$-$\mathsf{Fuzz}$.
Following \cite{Pitts/ATBC/2011} we refer to such relations as
\emph{$\lambda$-term $\mathsf{V}$-relations}.
Among such $\mathsf{V}$-relations we define
applicative $\Gamma$-similarity,
the generalisation of Abramsky's applicative similarity to
both algebraic effects and $\mathsf{V}$-relations, and
prove that under suitable conditions it is a compatible
generalised metric. We postpone the study of applicative
$\Gamma$-bisimilarity to Section
\ref{section:from-applicative-v-similarity-to-applicative-v-bisimilarity}.
As usual we assume a signature $\Sigma$,
a $\Sigma$-quantale $\mathsf{V}$,
a collection of CBEs $\Pi$ (according to
Section \ref{section:v-fuzz}), and a $\Sigma$-continuous
(strong) monad $\mathbb{\monad}$ to be fixed. We also assume
$\mathsf{V}$-relators to satisfy all requirements given in Section
\ref{section:v-relators-and-v-relation-lifting}.
\begin{definition}
A \emph{closed} $\lambda$-term $\mathsf{V}$-relation
$\alpha = (\toterm{\alpha}, \toval{\alpha})$
associates to each closed type $\typeOne$ binary $\mathsf{V}$-relations
$\toval{\alpha}_\typeOne, \toterm{\alpha}_\typeOne$ on the closed values
and terms inhabiting it, respectively.
\end{definition}
Since the syntactic shape of expressions
determines whether we are dealing with terms or
values, oftentimes we will write
$\alpha_\typeOne(e, f)$ (resp.
$\alpha_\typeOne(\valone, \valtwo)$) in place of
$\toterm{\alpha}_\typeOne(e, f)$ (resp.
$\toval{\alpha}_\typeOne(\valone, \valtwo)$).
In order to be able to work with open
terms we introduce the notion of
\emph{open $\lambda$-term $\mathsf{V}$-relation}.
\begin{definition}
An \emph{open} $\lambda$-term $\mathsf{V}$-relation
$\alpha$
associates to each (term) sequent
$\env \imp \typeOne$ a $\mathsf{V}$-relation
$\env \imp \alpha(-, -): \typeOne$ on
terms inhabiting it, and to each value sequent
$\env \imp^{\mathsf{v}} \typeOne$ a $\mathsf{V}$-relation
$\env \imp^{\mathsf{v}} \alpha(-, -): \typeOne$ on
values inhabiting it. We require open $\lambda$-term
$\mathsf{V}$-relations to be closed under weakening,
i.e. for any environment $\Delta$ we require:
\begin{align*}
(\env \imp \alpha(e, f): \typeOne)
&\leq
(\env \otimes \Delta \imp \alpha(e, f): \typeOne),
\\
(\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne)
&\leq
(\env \otimes \Delta \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne).
\end{align*}
\end{definition}
As for closed $\lambda$-term $\mathsf{V}$-relations, we
will often write $\env \vdash \alpha(\valone, \valtwo): \typeOne$ in
place of $\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne$ and
simply refer to open $\lambda$-term $\mathsf{V}$-relations as
$\lambda$-term $\mathsf{V}$-relations (whenever relevant we will
explicitly mention whether we are dealing with open or closed
$\lambda$-term $\mathsf{V}$-relations).
\begin{example}
Both the discrete and the indiscrete $\mathsf{V}$-relations are
open $\lambda$-term $\mathsf{V}$-relations.
The discrete $\lambda$-term $\mathsf{V}$-relation is defined by:
$$
\env \vdash \mathsf{disc}(e, e): \typeOne
\triangleq k,
\qquad
\env \vdash \mathsf{disc}(e, f): \typeOne
\triangleq \Bot \quad (e \not\equiv f),
$$
(and similarly for values),
whereas the indiscrete $\lambda$-term $\mathsf{V}$-relation is
defined by
$$\env \vdash \mathsf{indisc}(e, f): \typeOne \triangleq k$$
(and similarly for values).
\end{example}
We notice that the collection
of open $\lambda$-term $\mathsf{V}$-relations carries
a complete lattice structure (with respect to the pointwise order),
meaning that we can define $\lambda$-term $\mathsf{V}$-relations both
inductively and coinductively.
We can always extend a closed $\lambda$-term $\mathsf{V}$-relation
$\alpha = (\toterm{\alpha}, \toval{\alpha})$ to an open
one.
\begin{definition}
Let $\env \triangleq \varone_1 :_{s_1} \typeOne_1, \hdots,
\varone_n :_{s_n} \typeOne_n$ be an environment. For values
$\vec{\valone} \triangleq \valone_1, \hdots, \valone_n$ we write
$\vec{\valone} : \env$
if for any $i \leq n$,
$\emptyset \imp^{\mathsf{v}} \valone_i : \typeOne_i$ holds.
Given a closed $\lambda$-term $\mathsf{V}$-relation
$\alpha = (\toterm{\alpha}, \toval{\alpha})$
we define its open extension
$\open{\alpha}$ as follows\footnote{The superscript is the
letter `o' (for open), and should not be confused with
$\circ$ which we use for the map $\dual{-}$ sending a
$\mathsf{V}$-relation to its dual.}:
\begin{align*}
\env \vdash \open{\alpha}(e, f): \typeTwo
&\triangleq
\bigwedge\nolimits_{\vec{\valone}: \env} \toterm{\alpha}_\typeTwo
(\substcomp{e}{\vec{\varone}}{\vec{\valone}},
\substcomp{f}{\vec{\varone}}{\vec{\valone}})
\\
\env
\imp^{\mathsf{v}} \open{\alpha}(\valone, \valtwo): \typeTwo
& \triangleq
\bigwedge\nolimits_{\vec{u}:\env} \toval{\alpha}_\typeTwo
(\substval{\valone}{\vec{u}}{\vec{\varone}},
\substval{\valtwo}{\vec{u}}{\vec{\varone}}).
\end{align*}
\end{definition}
We now define \emph{applicative $\Gamma$-similarity}.
\begin{definition}\label{def:v-applicative-simulation}
Let $\Gamma$ be a $\mathsf{V}$-relator and
$\alpha = (\toterm{\alpha}, \toval{\alpha})$ be a closed
$\lambda$-term $\mathsf{V}$-relation.
Define the closed $\lambda$-term $\mathsf{V}$-relation
$[\alpha] = (\toterm{[\alpha]}, \toval{[\alpha]})$
as follows:
\begin{align*}
\toterm{[\alpha]}_{\typeOne}(e, f)
&\triangleq
\Gamma\toval{\alpha}_{\typeOne}
(\sem{e}, \sem{f}),
\\
\toval{[\alpha]}_{\typeOne \multimap \typeTwo}
(\valone, \valtwo)
&\triangleq
\bigwedge\nolimits_{u \in \mathcal{V}_\typeOne}
\toterm{\alpha}_{\typeTwo}
(\valone u, \valtwo u),
\\
\toval{[\alpha]}_{\sumtype{i \in I}{\typeOne_i}}
(\inject{\hat \imath}{\valone}, \inject{\hat \imath}{\valtwo})
&\triangleq \toval{\alpha}_{\typeOne_{\hat \imath}}(\valone, \valtwo),
\\
\toval{[\alpha]}_{\sumtype{i \in I}{\typeOne_i}}
(\inject{\hat \imath}{\valone}, \inject{\hat \jmath}{\valtwo})
&\triangleq \Bot,
\\
[\alpha]_{\recType{t}{\typeOne}}
(\fold{\valone}, \fold{\valtwo})
&\triangleq
\alpha_{\substType{\typeOne}{t}{\recType{t}{\typeOne}}}
(\valone, \valtwo),
\\
[\alpha]_{{!}_{s} \typeOne}
({!} \valone, {!} \valtwo)
&\triangleq
(s \circ \alpha_{\typeOne})(\valone, \valtwo).
\end{align*}
(notice that the definition of $\toval{[\alpha]}$
is by case analysis on $\emptyset \imp^{\mathsf{v}} \valone, \valtwo: \typeOne$).
A $\lambda$-term $\mathsf{V}$-relation $\alpha$ is an
applicative $\Gamma$-simulation if $\alpha \leq [\alpha]$.
\end{definition}
The clause for $\typeOne \multimap \typeTwo$ generalises the usual applicative
clause, whereas the clause for ${!}_s \typeOne$ `scales'
$\toval{\alpha}_\typeOne$ by $s$.
It is easy to see that the above definition induces a map
$\alpha \mapsto [\alpha]$ on the complete lattice of
closed $\lambda$-term $\mathsf{V}$-relations.
Moreover, such map is monotone since both $\Gamma$ and
CBEs are.
\begin{definition}\label{def:v-applicative-similarity}
Define applicative $\Gamma$-similarity $\delta$ as the
greatest fixed point of $\alpha \mapsto [\alpha]$. That is,
$\delta$ is the greatest (closed) $\lambda$-term $\mathsf{V}$-relation
satisfying the equation $\alpha = [\alpha]$
(such greatest solution exists by the Knaster-Tarski Theorem).
\end{definition}
Applicative $\Gamma$-similarity comes with an associated
coinduction principle:
for any closed $\lambda$-term $\mathsf{V}$-relation $\alpha$,
if $\alpha \leq [\alpha]$, then $\alpha \leq \delta$.
\begin{example}\label{ex:probabilistic-applicative-similarity-distance}
Instantiating Definition \ref{def:v-applicative-similarity} with the
Wasserstein lifting $W_\bot$
we obtain the quantitative analogue of
\emph{probabilistic applicative similarity}
\cite{DalLagoSangiorgiAlberti/POPL/2014} for $P$-$\mathsf{Fuzz}$.
In particular, for two terms $e, f \in \Lambda_\typeOne$,
$\delta(e, f)$ is (for readability we omit subscripts):
\begin{equation*}
\begin{split}
& \min_{\omega \in \Omega{(\sem{e}, \sem{f})}}
\sum_{\valone, \valtwo \in \mathcal{V}}
\omega(\valone,\valtwo) \cdot \toval{\delta}(\valone,\valtwo)
+ \sum_{\valone \in \mathcal{V}}
\omega(\valone, \bot) \cdot \toval{\delta}_{\bot}(\valone, \bot)
\\
&+
\sum_{\valtwo \in \mathcal{V}}
\omega(\bot, \valtwo) \cdot \toval{\delta}_{\bot}(\bot,\valtwo)
+
\omega(\bot, \bot) \cdot \toval{\delta}_{\bot}(\bot,\bot).
\end{split}
\end{equation*}
The above formula can be simplified observing that we have
$\toval{\delta}_{\bot}(\bot,\bot) = 0$,
$\toval{\delta}_{\bot}(\valone, \bot) = 1$, and
$\toval{\delta}_{\bot}(\bot,\valtwo) = 0$ by the very
definition of $\delta_\bot$.
We immediately notice
that $\delta$ is adequate in the following
sense: for all terms $e, f \in \Lambda_\typeOne$
we have the inequality
$$
\sum \sem{e} - \sum \sem{f}
\leq
\toterm{\delta}(e, f),
$$
where $\sum \sem{e}$ is the
probability of convergence of $e$, i.e.
$\sum_{\valone \in \mathcal{V}} \sem{e}(\valone)$,
and subtraction is actually truncated subtraction.
Let us now consider terms $I,\Omega \in \Lambda_{\typeOne \multimap \typeOne}$
of Example \ref{ex:v-fuzz-terms}.
We claim that
$\toterm{\delta}(I,I \oplus \Omega) = \frac{1}{2}$. By adequacy
we immediately see that
$\frac{1}{2} \leq \toterm{\delta}(I,I \oplus \Omega)$.
We prove $\toterm{\delta}(I,I \oplus \Omega) \leq \frac{1}{2}$.
Let $\valone \triangleq \abs{\varone}{\mathsf{return}{\varone}}$ and consider
the coupling $\omega$ defined by:
$$\omega(\valone,\valone)
= \frac{1}{2},
\quad \omega(\valone, \bot) =
\frac{1}{2}$$
and zero for the rest.
Indeed $\omega$ is a coupling of $\sem{I}$ and $\sem{I \oplus \Omega}$.
Moreover, by the very definition of $\delta$ and $W_\bot$
we have:
$$
\toterm{\delta}(I, I \oplus \Omega) \leq
\omega(\valone,\valone)
\cdot \toval{\delta}(\valone,\valone) +
\omega(\valone, \bot).
$$
The right-hand side of the above inequality gives exactly $\frac{1}{2}$,
provided that
$\toval{\delta}(\valone,\valone) = 0$.
This indeed holds in full generality.
\end{example}
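The coupling argument in the example reduces to elementary arithmetic,
as the following sketch makes explicit (values and names are ours,
mirroring the text):
\begin{verbatim}
# The coupling omega(v, v) = omega(v, bot) = 1/2 between sem(I) = {v: 1}
# and sem(I (+) Omega) = {v: 1/2} (half the mass diverging) evaluates,
# under delta(v, v) = 0 and delta(v, bot) = 1, to exactly 1/2.
delta_vv, delta_vbot = 0.0, 1.0
omega = {("v", "v"): 0.5, ("v", "bot"): 0.5}
bound = (omega[("v", "v")] * delta_vv
         + omega[("v", "bot")] * delta_vbot)
print(bound)   # 0.5, matching delta(I, I (+) Omega) = 1/2
\end{verbatim}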
\begin{restatable}{proposition}{applicativeVSimilarityReflexivityTransitivity}
\label{prop:applicative-v-similarity-is-reflexive-and-transitive}
Applicative $\Gamma$-similarity $\delta$ is a reflexive and
transitive $\lambda$-term $\mathsf{V}$-relation.
\end{restatable}
\begin{proof}[Proof sketch.]
The proof is by coinduction, showing that both the identity
$\lambda$-term $\mathsf{V}$-relation and $\delta \cdot \delta$
are applicative $\Gamma$-simulations.
A formal proof is given in Appendix \ref{appendix:proofs-behavioural-v-relations}.
\end{proof}
In light of Example \ref{ex:simulation-relators-from-v-simulation-relators}
we can look at the kernel
of $\delta$ and recover well-known notions of (relational) applicative similarity
(properly generalised to $\mathsf{V}$-$\mathsf{Fuzz}$).
\begin{restatable}{proposition}{kernelApplicativeSim}
\label{prop:kernel-of-app-v-sim-is-app-sim}
Define applicative $\Delta_{\Gamma}$-similarity $\preceq$
by instantiating Definition \ref{def:v-applicative-simulation} with the
$\mathsf{2}$-relator $\Delta_{\Gamma}$ and replacing the clause for
types of the form ${!}_s \typeOne$ as follows:
${!} \valone\ \mathcal{R}_{{!}_s \typeOne}\ {!} \valtwo $
implies
$(\varphi \cdot s \cdot \psi) \circ \mathcal{R}_\typeOne(\valone,\valtwo)$.
Then the kernel $\varphi \circ \delta$ of $\delta$ coincides with
$\preceq$.
\end{restatable}
\begin{proof}[Proof sketch]
By coinduction (and using Lemma \ref{lemma:kernel-lemma}) one shows
that $\varphi \circ \delta$ is an
applicative $\Delta_\Gamma$-simulation and that
$\psi \circ \preceq$ is an applicative $\Gamma$-simulation.
A detailed proof is given
in Appendix \ref{appendix:proofs-behavioural-v-relations}.
\end{proof}
Note that if $\mathcal{R}_\typeOne(\valone,\valtwo)$ holds, then so does
$(\varphi \cdot s \cdot \psi) \circ \mathcal{R}_\typeOne(\valone,\valtwo)$,
but the converse does not necessarily hold. For instance,
taking $s \triangleq 0$ we see that
$$
(\varphi \cdot 0 \cdot \psi) \circ \mathcal{R}_\typeOne(\valone, \valtwo) =
\varphi(0(\psi(\mathsf{false}))) = \varphi(0 \cdot \infty) = \varphi(0) = \mathsf{true},
$$
which essentially means that we identify otherwise distinguishable values
when they are not used.
Nonetheless, the reader should notice that
the encoding of a `standard' $\lambda$-calculus $\Lambda$ in $\mathsf{V}$-$\mathsf{Fuzz}$\ can
be obtained via the usual encoding of $\Lambda$ in its linear refinement
\cite{Wadler/CbN-CbV-linear-lambda-calculus/1999} which
corresponds to the fragment of $\mathsf{V}$-$\mathsf{Fuzz}$\ based on CBEs
$1$ and $\infty$, thus avoiding the above undesired
result.
Finally, we introduce the notion of \emph{compatibility},
which captures a form of Lipschitz-continuity with respect to $\mathsf{V}$-$\mathsf{Fuzz}$\
constructors. It is useful to follow \cite{Lassen/PhDThesis} and
define compatibility via the notion of \emph{compatible refinement}.
\begin{definition}\label{def:compatible-refinement}
The \emph{compatible refinement} $\refine{\alpha}$ of
an open $\lambda$-term $\mathsf{V}$-relation $\alpha$ is defined by:
\begin{align*}
(\env \vdash \refine{\alpha}(e, f): \typeOne)
&\triangleq \bigvee \{a \mid \env \models a
\leq \refine{\alpha}(e, f): \typeOne\},
\\
(\env \imp^{\mathsf{v}} \refine{\alpha}(\valone, \valtwo): \typeOne)
&\triangleq \bigvee \{a \mid \env \models^{\mathsf{v}} a \leq
\refine{\alpha}(\valone, \valtwo): \typeOne\},
\end{align*}
where judgments
$\env \models a \leq \refine{\alpha}(e, f): \typeOne$
and
$\env \models^{\mathsf{v}} a \leq \refine{\alpha}(\valone, \valtwo): \typeOne$
are inductively defined for $a \in \mathsf{V}$,
$\env \imp e, f : \typeOne$, and
$\env \imp^{\mathsf{v}} \valone, \valtwo : \typeOne$ by
rules in Figure \ref{fig:compatible-refinement}.
We say that $\alpha$ is compatible if $\refine{\alpha} \leq \alpha$.
\end{definition}
\begin{figure*}[htbp]
\hrule
\vspace{0.2cm}
\(
\vdash
{\env, \varone :_s \typeOne \howeimp
k \leq \refine{\alpha}(\varone, \varone):
\typeOne
}{}
\qquad
\vdash
{op_{\quantale}(\env_1, \hdots, \env_n) \howeimp
op_{\quantale}(a_1, \hdots, a_n) \leq
\refine{\alpha}(\mathbf{op}(e_1, \hdots, e_n),
\mathbf{op}(f_1, \hdots, f_n)): \typeOne
}
{
a_1 \leq \env_1 \vdash \alpha(e_1, f_1) : \typeOne
&
\cdots
&
a_n \leq \env_n \vdash \alpha(e_n, f_n) : \typeOne
}
\)
\vspace{0.2cm}
\(
\vdash
{\env \howeimpval a \leq \refine{\alpha}
(\abs{\varone}{e}, \abs{\varone}{f}):
\typeOne \multimap \typeTwo
}
{
a \leq \env, \varone :_{1} \typeOne \vdash
\alpha(e, f): \typeTwo
}
\qquad
\vdash
{\env \otimes \Delta \howeimp a \otimes b
\leq \refine{\alpha}(\valone \valtwo, \valone' \valtwo') : \typeTwo
}
{
a \leq \env \imp^{\mathsf{v}} \alpha(\valone, \valone'):
\typeOne \multimap \typeTwo
&
b \leq \Delta \imp^{\mathsf{v}} \alpha(\valtwo, \valtwo') : \typeOne
}
\)
\vspace{0.2cm}
\(
\vdash
{\env \howeimpval a \leq
\refine{\alpha}(\inject{\hat \imath}{\valone}, \inject{\hat \imath}{\valtwo}):
\sumtype{i \in I}{\typeOne_i}
}
{
a \leq \env \imp^{\mathsf{v}}
\alpha(\valone, \valtwo):
\typeOne_{\hat \imath}
}
\qquad
\vdash
{s \cdot \env \otimes \Delta
\howeimp s(a) \otimes b_{\hat \imath}
\leq \refine{\alpha}
(\casesum{\inject{\hat \imath}{\valone}}{e_i},
\casesum{\inject{\hat \imath}{\valtwo}}{f_i}):
\typeTwo
}
{
a \leq \env \imp^{\mathsf{v}} \alpha
(\inject{\hat \imath}{\valone},\inject{\hat \imath}{\valtwo}):
\sumtype{i \in I}{\typeOne_i}
&
b_i \leq \Delta, \varone:_{s_i} \typeOne_i \vdash
\alpha(e_i, f_i): \typeTwo
&
(\forall i \in I)
}
\)
\vspace{0.2cm}
\(
\vdash
{\env \howeimp a \leq
\refine{\alpha}(\mathsf{return}{\valone}, \mathsf{return}{\valtwo}): \typeOne
}
{
a \leq \env \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne
}
\qquad
\vdash
{(s \wedge 1) \cdot \env \otimes \Delta \howeimp
(s \wedge 1)(a) \otimes b
\leq \refine{\alpha}(\seq{e}{f}, \seq{e'}{f'})
: \typeTwo
}
{
a \leq \env \vdash \alpha
(e, e'): \typeOne
&
b \leq \Delta, \varone :_{s} \typeOne \vdash
\alpha(f, f') : \typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash
{s \cdot \env \howeimpval s(a)
\leq \refine{\alpha}
({!} \valone, {!} \valtwo): {!}_{s} \typeOne
}
{
a \leq \env \howeimp
\alpha(\valone, \valtwo): \typeOne
}
\qquad
\vdash
{s \cdot \env \otimes \Delta \howeimp
s(a) \otimes b \leq
\refine{\alpha}
(\pmbang{\valone}{e}, \pmbang{\valtwo}{f}): \typeTwo
}
{
a \leq \env \imp^{\mathsf{v}} \alpha(\valone, \valtwo):
{!}_{r} \typeOne
&
b \leq \Delta, \varone :_{s \cdot r} \typeOne \vdash
\alpha(e, f): \typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash
{\env \howeimpval a \leq \refine{\alpha}
(\fold{\valone}, \fold{\valtwo}) : \recType{t}{\typeOne}
}
{
a \leq \env \imp^{\mathsf{v}} \alpha(\valone, \valtwo):
\substType{\typeOne}{t}{\recType{t}{\typeOne}}
}
\qquad
\vdash
{s \cdot \env \otimes \Delta
\howeimp s(a) \otimes
b \leq \refine{\alpha}
(\pmfold{\valone}{e}, \pmfold{\valtwo}{f}):\typeTwo
}
{
a \leq \env \imp^{\mathsf{v}} \alpha(\valone, \valtwo):
\recType{t}{\typeOne}
&
b \leq \Delta, \varone :_{s} \substType{\typeOne}
{t}{\recType{t}{\typeOne}}
\vdash \alpha(e, f): \typeTwo
}
\)
\vspace{0.2cm}
\hrule
\caption{Compatible refinement.}
\label{fig:compatible-refinement}
\end{figure*}
It is easy to see that if $\alpha$ is compatible, then it satisfies the
inequalities in Figure \ref{fig:compatibility-clauses}; in fact,
$\alpha$ is compatible precisely when it satisfies those inequalities.
\begin{figure*}
\hrule \text{ }\\
\begin{align*}
k &\leq (\env \imp^{\mathsf{v}} \alpha(\varone, \varone): \typeOne)
\\
\env, \varone :_{1} \typeOne \imp
\alpha(e, f): \typeTwo
& \leq
\env \imp^{\mathsf{v}} \alpha(\abs{\varone}{e},
\abs{\varone}{f}): \typeOne \multimap \typeTwo
\\
(\env \imp^{\mathsf{v}} \alpha(\valone, \valone') :
\typeOne \multimap \typeTwo) \otimes
(\Delta \imp^{\mathsf{v}} \alpha(\valtwo, \valtwo'): \typeOne)
& \leq
(\env \otimes \Delta \imp \alpha
(\valone \valtwo, \valone' \valtwo'): \typeTwo)
\\
\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne_{\hat \imath}
&\leq
\env \imp^{\mathsf{v}} \alpha(\inject{\hat \imath}{\valone},
\inject{\hat \imath}{\valtwo}): \sumtype{i \in I} \typeOne_i
\\
s \circ (\env \imp^{\mathsf{v}}
\alpha(\inject{\hat \imath}{\valone}, \inject{\hat \imath}{\valtwo}):
\sumtype{i \in I} \typeOne_i) \otimes
(\Delta, \varone :_{s} \typeOne \imp
\alpha(e_{\hat \imath}, f_{\hat \imath}): \typeTwo)
& \leq
s \cdot \env \otimes \Delta \imp
\alpha(\casesum{\inject{\hat \imath}{\valone}}{e_i},
\casesum{\inject{\hat \imath}{\valtwo}}{f_i}): \typeTwo
\\
\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne
& \leq
\env \imp \alpha(\mathsf{return}{\valone}, \mathsf{return}{\valtwo}):
\typeOne
\\
(s \wedge 1) \circ
(\env \imp \alpha(e, e'): \typeOne)
\otimes
(\Delta, \varone :_{s} \typeOne \imp \alpha
(f, f'): \typeTwo )
&\leq
(s \wedge 1) \cdot \env \otimes \Delta \imp
\alpha(\seq{e}{f}, \seq{e'}{f'}): \typeTwo
\\
s \circ (\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo): \typeOne)
& \leq
s \cdot \env \imp^{\mathsf{v}} \alpha({!}{\valone}, {!}{\valtwo}):
{!}_{s} \typeOne
\\
s \circ (\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo):
{!}_{r} \typeOne) \otimes
(\Delta, \varone :_{s \cdot r} \typeOne \imp
\alpha(e, f): \typeTwo)
&\leq
s \cdot \env \otimes \Delta \imp
\alpha(\pmbang{\valone}{e}, \pmbang{\valtwo}{f}): \typeTwo
\\
\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo):
\substType{\typeOne}{t}{\recType{t}{\typeOne}}
&\leq
\env \imp^{\mathsf{v}} \alpha(\fold{\valone}, \fold{\valtwo}):
\recType{t}{\typeOne}
\\
s \circ (\env \imp^{\mathsf{v}} \alpha(\valone, \valtwo):
\recType{t}{\typeOne})
\otimes
(\Delta, \varone :_{s}
\substType{\typeOne}{t}{\recType{t}{\typeOne}}
\imp \alpha(e, f): \typeTwo)
&\leq
s \cdot \env \otimes \Delta \imp
\alpha(\casefold{\valone}{e}, \casefold{\valtwo}{f}): \typeTwo
\\
op_{\quantale}(\env_1 \imp \alpha(e_1, f_1): \typeOne
,\hdots,
\env_n \imp \alpha(e_n, f_n): \typeOne)
&\leq
op_{\quantale}(\env_1, \hdots, \env_n) \imp
\alpha(\mathbf{op}(e_1, \hdots, e_n),
\mathbf{op}(f_1, \hdots, f_n)): \typeOne
\end{align*}
\hrule
\caption{Compatibility clauses.}
\label{fig:compatibility-clauses}
\end{figure*}
Notice that in the clause for sequential composition the presence of
$s \wedge 1$, instead of $s$, ensures that
for terms like $e \triangleq \seq{I}{\numeral{0}}$ and
$e' \triangleq \seq{\Omega}{\numeral{0}}$,
the distance $\alpha(e, e')$ is determined
\emph{before} sequencing (which captures the
idea that although $\numeral{0}$ will not `use' any input, $I$ and
$\Omega$ will be still evaluated, thus producing observable differences
between $e$ and $e'$). In fact, if we replace
$s \wedge 1$ with $s$, then by taking
$s \triangleq 0$ compatibility would imply
$\alpha(e, e') = k$, which is clearly unsound.
In order to make applicative $\Gamma$-similarity a useful tool,
we need it to allow compositional reasoning about programs. Formally,
this amounts to proving that applicative $\Gamma$-similarity is
compatible.
\section{Howe's Method}
\label{section:howe-method}
To prove compatibility of applicative $\Gamma$-similarity
we design a generalisation of the so-called
Howe's method \cite{Howe/IC/1996}, combining and extending
ideas from
\cite{CrubilleDalLago/LICS/2015} and \cite{DalLagoGavazzoLevy/LICS/2017}.
We start by defining the notion of \emph{Howe's extension},
a construction extending an open $\lambda$-term $\mathsf{V}$-relation
to a compatible and substitutive $\lambda$-term $\mathsf{V}$-relation.
\begin{definition}[Howe's extension (1)]\label{def:howe-extension-one}
The \emph{Howe's extension} $\howe{\alpha}$ of
an open $\lambda$-term $\mathsf{V}$-relation
$\alpha$
is defined as the least solution to the equation
$\beta = \alpha \cdot \refine{\beta}$.
\end{definition}
It is easy to see that compatible refinement $\refine{-}$ is monotone,
and thus so is the map $\Phi_{\alpha}$ defined by
$\Phi_{\alpha}(\beta) \triangleq \alpha \cdot \refine{\beta}$.
As a consequence, we can define $\howe{\alpha}$ as the least fixed point
of $\Phi_{\alpha}$. Since open extension $\open{-}$ is monotone as well,
we can define the Howe's extension of a closed $\lambda$-term $\mathsf{V}$-relation
$\alpha$ as $\howe{(\open{\alpha})}$.
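Dually to the greatest fixed point used for $\delta$, the least fixed point
defining $\howe{\alpha}$ can be reached by Kleene iteration from the bottom
$\lambda$-term $\mathsf{V}$-relation. Here is a minimal sketch (in Python;
compatible refinement and relation composition are supplied as parameters,
and a finite carrier is assumed so that the ascending chain stabilises).
\begin{verbatim}
# Sketch: Howe's extension as the least fixed point of the monotone map
# Phi_alpha(beta) = alpha . refine(beta), by Kleene iteration from the
# bottom V-relation.
def lfp(F, bottom):
    cur = bottom
    while True:
        nxt = F(cur)
        if nxt == cur:
            return cur
        cur = nxt

def howe_extension(alpha, refine, compose, bottom):
    # `refine` and `compose` implement compatible refinement and
    # V-relation composition for the calculus at hand.
    return lfp(lambda beta: compose(alpha, refine(beta)), bottom)
\end{verbatim}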
It is also useful to spell out the above definition.
\begin{definition}[Howe's extension (2)]\label{def:howe-extension-two}
The \emph{Howe's extension} $\howe{\alpha}$ of
an open $\lambda$-term $\mathsf{V}$-relation $\alpha$ is defined by:
\begin{align*}
(\env \vdash \howe{\alpha}(e, f): \typeOne)
&\triangleq \bigvee \{a \mid \env \models a
\leq \howe{\alpha}(e, f): \typeOne\},
\\
(\env \imp^{\mathsf{v}} \howe{\alpha}(\valone, \valtwo): \typeOne)
&\triangleq \bigvee \{a \mid \env \models^{\mathsf{v}} a \leq
\howe{\alpha}(\valone, \valtwo): \typeOne\},
\end{align*}
where judgments
$\env \models a \leq \howe{\alpha}(e, f): \typeOne$
and
$\env \models^{\mathsf{v}} a \leq \howe{\alpha}(\valone, \valtwo): \typeOne$
are inductively defined for $a \in \mathsf{V}$,
$\env \imp e, f : \typeOne$, and
$\env \imp^{\mathsf{v}} \valone, \valtwo : \typeOne$ by
rules in Figure \ref{fig:howe-extension}.
\end{definition}
\begin{figure*}[htbp]
\hrule
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}var})]
{\env, \varone :_{s} \typeOne \models
a \leq \howe{\alpha}(\varone, \valtwo):
\typeOne
}
{
a \leq \env, \varone :_{s} \typeOne
\imp^{\mathsf{v}} \alpha(\varone, \valtwo): \typeOne
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}abs})]
{\env \models a \otimes
c \leq \howe{\alpha}
(\abs{\varone}{e}, f):
\typeOne \multimap \typeTwo
}
{
\env, \varone :_{1} \typeOne \models
a \leq \howe{\alpha}(e, g):
\typeTwo
&
c \leq \env \vdash \alpha
(\abs{\varone}{g}, f):
\typeOne \multimap \typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}app})]
{\env \otimes \Delta \models a
\otimes b \otimes c
\leq \howe{\alpha}(\valone \valtwo, f) : \typeTwo
}
{
\env \models a \leq
\howe{\alpha}(\valone, \valone'):
\typeOne \multimap \typeTwo
&
\Delta \models b \leq
\howe{\alpha}(\valtwo, \valtwo') : \typeOne
&
c \leq \env \otimes \Delta \vdash
\alpha(\valone' \valtwo', f):\typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}inj})]
{\env \models^{\mathsf{v}} a \otimes b \leq
\howe{\alpha}(\inject{\hat \imath}{\valone}, u):
\sumtype{i \in I}{\typeOne_i}
}
{
\env \models^{\mathsf{v}} a \leq
\howe{\alpha}(\valone, \valtwo):
\typeOne_{\hat \imath}
&
b \leq \env \imp^{\mathsf{v}}
\alpha(\inject{\hat \imath}{\valtwo}, u):
\sumtype{i \in I}{\typeOne_i}
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}sum\text{-}cases})]
{s \cdot \env \otimes \Delta
\models s(a) \otimes b_{\hat \imath}
\otimes c \leq \howe{\alpha}
(\casesum{\inject{\hat \imath}{\valone}}{e_i}, g):
\typeTwo
}
{
\env \models^{\mathsf{v}} a \leq \howe{\alpha}
(\inject{\hat \imath}{\valone},\inject{\hat \imath}{\valtwo}):
\sumtype{i \in I}{\typeOne_i}
&
\forall i \in I.\ \Delta, \varone:_s \typeOne_{i} \models
b_{i} \leq
\howe{\alpha}(e_{i}, f_{i}): \typeTwo
&
c \leq s \cdot \env \otimes \Delta \imp
\alpha(\casesum{\inject{\hat \imath}{\valtwo}}{f_i}, g):
\typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}val})]
{\env \models a \otimes c \leq
\howe{\alpha}(\mathsf{return}{\valone}, f): \typeOne
}
{
\env \models a \leq \howe{\alpha}
(\valone, \valtwo): \typeOne
&
c \leq \env \vdash
\alpha(\mathsf{return}{\valtwo}, f): \typeOne
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}let})]
{(s \wedge 1) \cdot \env \otimes \Delta \models
(s \wedge 1)(a) \otimes b \otimes c
\leq \howe{\alpha}(\seq{e}{e'}, f) : \typeTwo
}
{
\env \models a \leq \howe{\alpha}
(e, g): \typeOne
&
\Delta, \varone :_{s} \typeOne \models b \leq
\howe{\alpha}(e', g') : \typeTwo
&
c \leq (s \wedge 1) \cdot \env \otimes \Delta \vdash
\alpha(\seq{g}{g'}, f): \typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}bang})]
{s \cdot \env \models s(a)
\otimes c \leq \howe{\alpha}
({!} \valone, z): {!}_{s} \typeOne
}
{
\env \models a \leq
\howe{\alpha}(\valone, \valtwo): \typeOne
&
c \leq s \cdot \env \vdash \alpha
({!} \valtwo, z): {!}_{s} \typeOne
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}bang\text{-}cases})]
{s \cdot \env \otimes \Delta \models
s(a) \otimes b \otimes c \leq
\howe{\alpha}(\pmbang{\valone}{e}, f): \typeTwo
}
{
\env \models a \leq \howe{\alpha}(\valone, \valtwo):
{!}_{r} \typeOne
&
\Delta, \varone :_{s \cdot r} \typeOne \models
b \leq \howe{\alpha}(e, g): \typeTwo
&
c \leq s \cdot \env \otimes \Delta
\vdash \alpha(\pmbang{\valtwo}{g}, f): \typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}fold\text{-}cases})]
{s \cdot \env \otimes \Delta
\models s(a) \otimes
b \otimes c \leq \howe{\alpha}
(\pmfold{\valone}{e}, f):\typeTwo
}
{
\env \models a \leq
\howe{\alpha}(\valone, \valtwo):
\recType{t}{\typeOne}
&
\Delta, \varone :_{s} \substType{\typeOne}
{t}{\recType{t}{\typeOne}}
\models b \leq \howe{\alpha}
(e, g): \typeTwo
&
c \leq s \cdot \env \otimes \Delta \vdash
\alpha(\pmfold{\valtwo}{g}, f): \typeTwo
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}fold})]
{\env \models a \otimes c \leq \howe{\alpha}
(\fold{\valone}, z) : \recType{t}{\typeOne}
}
{
\env \models a \leq \howe{\alpha}(\valone, \valtwo):
\substType{\typeOne}{t}{\recType{t}{\typeOne}}
&
c \leq \env \vdash \alpha(\fold{\valtwo}, z):
\recType{t}{\typeOne}
}
\)
\vspace{0.2cm}
\(
\vdash[(\mathsf{H\text{-}op})]
{op_{\quantale}(\env_1, \hdots, \env_n) \models
op_{\quantale}(a_1, \hdots, a_n) \otimes
c \leq \howe{\alpha}(\mathbf{op}(e_1, \hdots, e_n),
f): \typeOne
}
{
\forall i \leq n.\ \env_i \models a_i \leq
\howe{\alpha}(e_i, g_i) : \typeOne
&
c \leq op_{\quantale}(\env_1, \hdots, \env_n)
\vdash \alpha(\mathbf{op}(g_1, \hdots, g_n), f):
\typeOne
}
\)
\vspace{0.2cm}
\hrule
\caption{Howe's extension.}
\label{fig:howe-extension}
\end{figure*}
The next lemma (whose proof is given in Appendix \ref{appendix:proofs-howe-method})
is useful for proving properties of Howe's extension.
It states that $\howe{\alpha}$ attains
its value via the rules in Figure \ref{fig:howe-extension}.
\begin{restatable}{lemma}{howeOptimalValue}
\label{lemma:howe-optimal-value}
The following hold:
\begin{varenumerate}
\item Given well-typed values
$\env \imp^{\mathsf{v}} \valone, \valtwo: \typeOne$,
let
$$
A \triangleq \{a \mid \env \models^{\mathsf{v}}
a \leq \howe{\alpha}(\valone, \valtwo): \typeOne\}
$$
be non-empty. Then
$\env \models^{\mathsf{v}} \bigvee A \leq \howe{\alpha}(\valone, \valtwo): \typeOne$
is derivable.
\item Given well-typed terms
$\env \imp e, f: \typeOne$,
let
$$A \triangleq \{a \mid \env \models
a \leq \howe{\alpha}(e, f):
\typeOne\}
$$
be non-empty.
Then $\env \models \bigvee A \leq
\howe{\alpha}(e, f): \typeOne$ is derivable.
\end{varenumerate}
\end{restatable}
It is easy to see that Definitions \ref{def:howe-extension-one} and
\ref{def:howe-extension-two} give the same $\lambda$-term
$\mathsf{V}$-relation. In particular,
for an open $\lambda$-term $\mathsf{V}$-relation $\alpha$,
$\howe{\alpha}$ is the least compatible open $\lambda$-term $\mathsf{V}$-relation
$\beta$ satisfying the inequality $\alpha \cdot \beta \leq \beta$.
The following are standard results on Howe's extension. Proofs are
straightforward but tedious (they closely resemble their relational
counterparts), and thus are omitted.
\begin{lemma}\label{lemma:properties-howe-extension}
Let $\alpha$ be a reflexive and transitive open
$\lambda$-term $\mathsf{V}$-relation.
Then the following hold:
\begin{varenumerate}
\item $\howe{\alpha}$ is reflexive.
\item $\alpha \leq \howe{\alpha}$.
\item $\alpha \cdot \howe{\alpha} \leq \howe{\alpha}$.
\item $\howe{\alpha}$ is compatible.
\end{varenumerate}
\end{lemma}
We refer to property
$3$ as pseudo-transitivity. In particular, by the very
definition of $\mathsf{V}$-relator we also have
$\Gamma \alpha \cdot \Gamma \howe{\alpha} \leq \Gamma \howe{\alpha}$.
We refer to the latter property as $\Gamma$-pseudo-transitivity.
Notice that Proposition
\ref{prop:applicative-v-similarity-is-reflexive-and-transitive}
implies that $\howe{(\open{\delta})}$ is compatible and bigger than $\open{\delta}$.
Finally, Howe's extension enjoys another remarkable property, namely substitutivity.
\begin{definition}\label{def:value-substitutive}
An open $\lambda$-term $\mathsf{V}$-relation $\alpha$ is value substitutive if for all
well-typed values
$\env, \varone :_{s} \typeOne \imp^{\mathsf{v}} \valone, \valtwo:
\typeTwo$, $\emptyset \imp^{\mathsf{v}} u : \typeOne$, and terms
$\env, \varone :_{s} \typeOne \vdash e, f:
\typeTwo$ we have:
\begin{align*}
(\env, \varone :_{s} \typeOne \imp^{\mathsf{v}} \alpha
(\valone, \valtwo): \typeTwo)
&\leq (\env \vdash \alpha(\substval{\valone}{u}{\varone},
\substval{\valtwo}{u}{\varone}): \typeTwo),
\\
(\env, \varone :_{s} \typeOne \vdash \alpha
(e, f): \typeTwo)
&\leq (\env \vdash \alpha(\substcomp{e}{\varone}{u},
\substcomp{f}{\varone}{u}): \typeTwo).
\end{align*}
\end{definition}
\begin{restatable}[Substitutivity]{lemma}{substitutivityLemma}
\label{lemma:substitutivity}
Let $\alpha$ be a value substitutive
$\lambda$-term $\mathsf{V}$-preorder.
For all values,
$\env, \varone :_{s} \typeOne \imp^{\mathsf{v}} u, z:
\typeTwo$ and $\emptyset \vdash \valone, \valtwo : \typeOne$,
and terms $\env, \varone :_{s} \typeOne \vdash
e, f: \typeTwo$, let
$\underline{a} \triangleq
\emptyset \imp^{\mathsf{v}} \howe{\alpha}(\valone, \valtwo): \typeOne$. Then:
\begin{align*}
(\env, \varone :_{s} \typeOne \imp^{\mathsf{v}}
\howe{\alpha}(u, z): \typeTwo)
\otimes
s(\underline{a})
&\leq
\env \imp^{\mathsf{v}} \howe{\alpha}(\substval{u}{\valone}{\varone},
\substval{z}{\valtwo}{\varone}): \typeTwo,
\\
(\env, \varone :_{s} \typeOne \vdash
\howe{\alpha}(e, f): \typeTwo)
\otimes
s(\underline{a})
&\leq
\env \vdash \howe{\alpha}(\substcomp{e}{\varone}{\valone},
\substcomp{f}{\varone}{\valtwo}): \typeTwo.
\end{align*}
\end{restatable}
\begin{proof}
See Appendix \ref{appendix:proofs-howe-method}.
\end{proof}
Notice that the open extension of any closed $\lambda$-term $\mathsf{V}$-relation
is value-substitutive.
We can now prove the main result of Howe's method, the so-called \emph{Key Lemma}.
The latter states that the Howe's extension of applicative $\Gamma$-similarity
(restricted to closed terms/values) is an applicative $\Gamma$-simulation.
By coinduction, we can conclude that $\delta$ and $\howe{\delta}$
(restricted to closed terms/values) coincide, meaning that the former is
compatible.
\begin{restatable}[Key Lemma]{lemma}{keyLemma}
\label{lemma:key-lemma}
Let $\alpha$ be a reflexive and transitive
applicative $\Gamma$-simulation.
Then the Howe's extension of $\alpha$ restricted to closed
terms/values is an applicative $\Gamma$-simulation.
\end{restatable}
\begin{proof}[Proof sketch.]
The proof is non-trivial and a detailed account is given in
Appendix \ref{appendix:proofs-howe-method}.
Let us write $\howe{\alpha}$ for the Howe's extension
of $\alpha$ restricted to closed terms/values.
By induction on $n$ one shows that for any $n \geq 0$,
$
\toterm{(\howe{\alpha})}_{\typeOne}(e, f) \leq
\Gamma \toval{(\howe{\alpha})}_{\typeOne}(\approxsem{e}{n}, \sem{f})
$
holds for all terms $e, f \in \Lambda_\typeOne$.
Since $\Gamma$ is inductive, the above inequality indeed gives the thesis.
The base case follows again by inductivity of $\Gamma$, whereas the
inductive step requires a case analysis on the structure of
$e$. The crucial case is sequencing, where
we rely on condition \eqref{s-Strong-Lax-Bind}.
\end{proof}
Our main result follows directly from the Key Lemma.
\begin{restatable}[Compatibility]{theorem}{applicativeVSimilarityCompatible}
\label{prop:applicative-v-similarity-is-compatible}
Applicative $\Gamma$-similarity is compatible.
\end{restatable}
\begin{proof}
We have to prove that $\open{\delta}$ is compatible. By Lemma
\ref{lemma:properties-howe-extension} we know that
$\open{\delta} \leq \howe{(\open{\delta})}$ and that $\howe{(\open{\delta})}$
is compatible. Therefore, to conclude the thesis it is sufficient to
prove $\howe{(\open{\delta})} \leq \open{\delta}$. The Key Lemma implies
that the restriction to closed
terms/values of $\howe{(\open{\delta})}$ is an applicative
$\Gamma$-simulation, and thus smaller than or equal to $\delta$.
We can thus show that for all $\env \vdash e, e': \typeOne$,
the inequality $\env \vdash \howe{(\open{\delta})}(e, e'): \typeOne
\leq \env \vdash \open{\delta}(e, e'): \typeOne$ holds. In fact,
since $\howe{(\open{\delta})}$ is substitutive and thus value
substitutive\footnote{Notice that in Definition \ref{def:value-substitutive}
we substitute \emph{closed} values (in terms and values)
meaning that simultaneous substitution and sequential
substitution coincide. In particular, value substitution implies e.g.
$$
(\env \vdash \alpha
(e, f): \typeTwo)
\leq \bigwedge_{\bar{\valone}: \env}
\toterm{\alpha}_\typeTwo(\substcomp{e}{\bar{\varone}}{\bar{\valone}},
\substcomp{f}{\bar{\varone}}{\bar{\valone}}).
$$}
we have:
\begin{align*}
\env \vdash \howe{(\open{\delta})}(e, e'): \typeOne
& \leq
\bigwedge_{\bar \valone: \env} \emptyset \vdash \howe{(\open{\delta})}
(\substcomp{e}{\bar \varone}{\bar \valone},
\substcomp{e'}{\bar \varone}{\bar \valone}): \typeOne
\\
&\leq \bigwedge_{\bar \valone : \env} \toterm{\delta}_\typeOne
(\substcomp{e}{\bar \varone}{\bar \valone},
\substcomp{e'}{\bar \varone}{\bar \valone})
\\
& = \env \vdash \open{\delta}(e, e'): \typeOne.
\end{align*}
A similar argument holds for values.
\end{proof}
It is worth noticing that our results directly yield
the following generalisation of Reed and Pierce's
\emph{metric preservation}
\cite{Pierce/DistanceMakesTypesGrowStronger/2010,GaboardiEtAl/POPL/2017}.
\begin{corollary}[Metric Preservation (cf. \cite{GaboardiEtAl/POPL/2017})]
\label{cor:metric-preservation}
For any environment $\env \triangleq \varone_1 :_{s_1} \typeOne_1, \hdots,
\varone_n :_{s_n} \typeOne_n$, values $\bar{\valone}, \bar{\valtwo}: \env$, and
$\env \vdash e: \typeOne$ we have:
$$
s_1 \circ \toval{\delta}_{\typeOne_1}(\valone_1, \valtwo_1)
\otimes \cdots \otimes s_n \circ \toval{\delta}_{\typeOne_n}(\valone_n, \valtwo_n)
\leq \toterm{\delta}_\typeOne(\substcomp{e}{\vec{\varone}}{\vec{\valone}},
\substcomp{e}{\vec{\varone}}{\vec{\valtwo}}).
$$
\end{corollary}
Having proved that applicative $\Gamma$-similarity is a compatible
generalised metric, we now move to applicative $\Gamma$-bisimilarity.
\section{Applicative $\Gamma$-bisimilarity}
\label{section:from-applicative-v-similarity-to-applicative-v-bisimilarity}
In the previous section we proved that
applicative $\Gamma$-similarity is a compatible
generalised metric.
However, in the context of programming language semantics it
is often desirable to work with equivalence
$\mathsf{V}$-relations---i.e. pseudometrics.
In this section we discuss two natural behavioural pseudometrics:
applicative $\Gamma$-\emph{bisimilarity} and two-way applicative
$\Gamma$-similarity. We prove that under suitable conditions on
CBEs (which are met by all examples we have considered
so far) both applicative $\Gamma$-\emph{bisimilarity} and two-way applicative
$\Gamma$-similarity are compatible pseudometrics ($\mathsf{V}$-equivalences).
Proving compatibility of the latter is straightforward. However, proving
compatibility of applicative $\Gamma$-bisimilarity is not trivial and
requires a variation of the so-called \emph{transitive closure trick}
\cite{Howe/IC/1996,Lassen/PhDThesis,Pitts/ATBC/2011} based on ideas in
\cite{Simpson-Niels/Modalities/2018}.
Before entering formalities, let us remark that so far we have
mostly worked with inequations and inequalities.
That was fine since we have been interested in non-symmetric $\mathsf{V}$-relations.
However, for symmetric $\mathsf{V}$-relations inequalities seem not to be
powerful enough, and often plain equalities are needed in order to make proofs work.
For that reason, in the rest of this section we assume CBEs
to be monotone \emph{monoid (homo)morphisms}. That is, we modify
Definition \ref{def:change-of-base-functor} requiring the equalities:
$$
h(k) = \ell, \qquad
h(a \otimes b) = h(a) \otimes h(b).
$$
Note that we do not require CBEs to be join-preserving
(i.e. continuous). We also require operations $op_{\quantale}$ to be
\emph{quantale (homo)morphisms}, i.e. to preserve unit, tensor, and joins.
It is easy to see that the new requirements are met by all
examples considered so far. We start with two-way applicative
$\Gamma$-similarity.
\begin{proposition}\label{prop:two-way-applicative-similarity-is-compatible}
For a $\mathsf{V}$-relator $\Gamma$ define two-way
applicative $\Gamma$-similarity as
$\delta \otimes \dual{\delta}$. Then two-way applicative
$\Gamma$-similarity is a compatible $\mathsf{V}$-equivalence.
\end{proposition}
\begin{proof}[Proof sketch.]
Clearly $\delta \otimes \dual{\delta}$ is symmetric. Moreover, since
CBEs are monoid (homo)morphisms, it is also compatible.
\end{proof}
We now move to the more interesting case of applicative
$\Gamma$-bisimilarity. In light of Example \ref{ex:v-relators}
we give the following definition.
\begin{definition}\label{def:applicative-v-bismilarity}
Recall Proposition \ref{prop:algebra-of-v-relators}.
Define applicative $\Gamma$-\emph{bisimilarity} $\gamma$ as
applicative $(\Gamma \wedge \dual{\Gamma})$-similarity.
\end{definition}
Proposition \ref{prop:applicative-v-similarity-is-reflexive-and-transitive}
implies that $\gamma$ is reflexive and transitive.
Moreover, if CBEs preserve binary meets (a condition
satisfied by all our examples), i.e.
$s(a) \wedge s(b) =
s(a \wedge b)$ for any CBE $s$ in $\Pi$,
then $\gamma$ is also symmetric, and thus a pseudometric.
Finally, we observe that $\gamma$ is the greatest $\lambda$-term
$\mathsf{V}$-relation $\alpha$ such that both $\alpha$ and
$\dual{\alpha}$ are applicative $\Gamma$-simulations.
Proving compatibility of $\gamma$ is not straightforward,
and requires a variation of the so-called \emph{transitive closure trick}
\cite{Pitts/ATBC/2011}.
First of all, we notice that we cannot apply the Key Lemma to
$\gamma$, since
$\Gamma \wedge \dual{\Gamma}$, being conversive, is in general not inductive.
To overcome this problem, we follow \cite{Simpson-Niels/Modalities/2018}
and characterise applicative $\Gamma$-bisimilarity differently.
\begin{restatable}{proposition}{symmetricSimilarityIsBisimilarity}
\label{lemma:symmetric-similarity-is-bisimilarity}
Let $\Gamma$ be a $\mathsf{V}$-relator.
Define the $\lambda$-term
$\mathsf{V}$-relation $\gamma'$ as follows:
$$
\gamma' \triangleq \bigvee \{\alpha \mid \dual{\alpha} = \alpha,\
\alpha \leq [\alpha]\}.
$$ Then:
\begin{enumerate}
\item $\gamma'$ is a symmetric applicative $\Gamma$-simulation,
and therefore the largest such $\lambda$-term $\mathsf{V}$-relation.
\item $\gamma'$ coincides with applicative
$(\Gamma \wedge \dual{\Gamma})$-similarity $\gamma$.
\end{enumerate}
\end{restatable}
\begin{proof}
See Appendix \ref{appendix:proofs-applicative-v-bisimilarity}.
\end{proof}
Proposition \ref{lemma:symmetric-similarity-is-bisimilarity} allows us to
apply the Key Lemma to $\gamma$, thus showing that $\howe{\gamma}$ is
compatible. However, the Howe's extension is an intrinsically asymmetrical
construction (cf. pseudo-transitivity) and there is little hope
of proving symmetry of $\howe{\gamma}$ (which would imply compatibility of
$\gamma$). Nevertheless, we observe that for a suitable class of
CBEs the transitive closure $\transitive{(\howe{\gamma})}$ of $\howe{\gamma}$
is a symmetric, compatible, $\Gamma$-simulation (and thus smaller
than $\gamma$).
\begin{definition}
We say that a CBE $s$ is
\emph{finitely continuous}, if $s \neq \infty$
implies
$s(\bigvee A) = \bigvee \{s(a) \mid a \in A\},$
for any set $A \subseteq \mathsf{V}$.
\end{definition}
\begin{example}
\label{ex:finitely-continuous-change-of-base-functors}
All concrete CBEs considered
in previous examples are finitely continuous. Moreover, it is easy to prove that
all CBEs defined from the CBEs $n, \infty$ of Example \ref{ex:change-of-base-functor}
using operations in Lemma \ref{lemma:algebra-change-of-base-functors}
are finitely continuous\footnote{Recall that, since the quantale is integral, we have
$a \otimes \bot = \bot$ for any $a \in \mathsf{V}$.}
provided that $op_{\quantale}(a_1, \hdots, \bot, \hdots, a_n) = \bot$
(which is the case for most of the concrete operations we considered).
\end{example}
The following is the central result of our argument (see Appendix
\ref{appendix:proofs-applicative-v-bisimilarity} for a proof).
\begin{restatable}{lemma}{transitiveClosureHoweExtensionCompatible}
\label{lemma:transitive-closure-of-howe-extension-is-compatible}
Assume CBEs in $\Pi$ to be finitely continuous.
Define the transitive closure $\transitive{\alpha}$ of a
$\mathsf{V}$-relation $\alpha$ as
$
\transitive{\alpha} \triangleq \bigvee_n \alpha^{(n)},
$
where $\alpha^{(0)} \triangleq id$, and
$\alpha^{(n+1)} \triangleq \alpha^{(n)} \cdot \alpha$.
\begin{varenumerate}
\item Let $\alpha$ be a reflexive and transitive $\lambda$-term
$\mathsf{V}$-relation. Then $\transitive{(\howe{\alpha})}$ is
compatible.
\item Let $\alpha$ be a reflexive, symmetric, and transitive
open $\lambda$-term $\mathsf{V}$-relation. Then
$\transitive{(\howe{\alpha})}$ is symmetric.
\end{varenumerate}
\end{restatable}
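For intuition, in the Lawvere quantale $([0, \infty], \geq, +, 0)$ relation
composition instantiates to min-plus matrix multiplication, so on a finite
carrier the transitive closure $\bigvee_n \alpha^{(n)}$ is exactly an
all-pairs shortest-path computation. The sketch below (in Python; the carrier
and weights are illustrative) computes it with Floyd--Warshall.
\begin{verbatim}
# Sketch: transitive closure of a [0,inf]-relation on a finite carrier.
# Composition is (beta . alpha)(x,z) = min_y alpha(x,y) + beta(y,z),
# and alpha^(0) = id puts 0 on the diagonal.
import math

def transitive_closure(alpha):
    n = len(alpha)
    d = [row[:] for row in alpha]
    for i in range(n):                 # join with the identity relation
        d[i][i] = min(d[i][i], 0.0)
    for k in range(n):                 # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

alpha = [[0.0, 0.3, math.inf],
         [math.inf, 0.0, 0.4],
         [math.inf, math.inf, 0.0]]
print(transitive_closure(alpha))       # closure adds d[0][2] = 0.7
\end{verbatim}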
Finally, we can prove that applicative $\Gamma$-bisimilarity is
compatible.
\begin{theorem}\label{thm:applicative-v-bisimilarity-is-compatible}
If any CBE in $\Pi$ is finitely continuous, then applicative
$\Gamma$-bisimilarity is compatible.
\end{theorem}
\begin{proof}
From Lemma
\ref{lemma:transitive-closure-of-howe-extension-is-compatible}
we know that $\transitive{(\howe{\gamma})}$ is compatible.
Therefore it is sufficient to prove $\transitive{(\howe{\gamma})} = \gamma$.
One inequality follows from Lemma \ref{lemma:properties-howe-extension}
as follows: $\gamma \leq \howe{\gamma} \leq \transitive{(\howe{\gamma})}$.
For the other inequality we rely on the coinduction proof principle associated
with $\gamma$. As a consequence, it is sufficient to prove that
$\transitive{(\howe{\gamma})}$ is a symmetric applicative
$\Gamma$-simulation. Symmetry is given by Lemma
\ref{lemma:transitive-closure-of-howe-extension-is-compatible}.
From the Key Lemma we know that $\howe{\gamma}$ is
an applicative $\Gamma$-simulation.
Since the identity $\lambda$-term $\mathsf{V}$-relation is an applicative
$\Gamma$-simulation and the composition of applicative
$\Gamma$-simulations is itself an applicative
$\Gamma$-simulation (see the proof of Proposition \ref{prop:applicative-v-similarity-is-reflexive-and-transitive}),
we see that $\transitive{(\howe{\gamma})}$
is itself an applicative $\Gamma$-simulation.
\end{proof}
Finally, we notice that all concrete CBEs considered
in this work are finitely continuous. We can then
rely on Theorem \ref{thm:applicative-v-bisimilarity-is-compatible}
to come up with concrete notions of compatible applicative $\Gamma$-bisimilarity.
Notably, we obtain compatible pseudometrics for $\mathsf{Fuzz}$\footnote{
Formally, we should extend our definitions adding a basic type
for real numbers and primitives for arithmetical operations,
but that is straightforward.} and $P$-$\mathsf{Fuzz}$.
\section{Further Developments}
In Section \ref{section:howe-method} we proved that applicative
$\Gamma$-similarity is a compatible $\mathsf{V}$-preorder (i.e.
a compatible generalised metric), whereas in Section
\ref{section:from-applicative-v-similarity-to-applicative-v-bisimilarity}
we proved that applicative $\Gamma$-bisimilarity (and two-way similarity)
is a compatible $\mathsf{V}$-equivalence (i.e. a compatible pseudometric).
In this last section we briefly sketch a couple of further considerations
on the results obtained in this work.
\paragraph{Contextual distances}
An issue that has not been touched concerns the quantitative counterpart of
contextual preorder and contextual equivalence.
Recently, \cite{CrubilleDalLago/LICS/2015,CrubilleDalLago/ESOP/2017}
defined a contextual distance $\delta^{ctx}$ for
probabilistic $\lambda$-calculi as:
$$
\delta^{ctx}(e, f) \triangleq
\sup_{\mathcal{C}} | \sum \sem{\mathcal{C}[e]} - \sum \sem{\mathcal{C}[f]}|,
$$
for contexts and terms of appropriate types. Taking into account
sensitivity, and thus moving to $P$-Fuzz,
such distance could be refined as
$$
\delta^{ctx}(e, f) \triangleq
\sup_{\mathcal{C}} \frac{| \sum \sem{\mathcal{C}[e]} - \sum \sem{\mathcal{C}[f]}|}
{n_{\mathcal{C}}},
$$
where $n_{\mathcal{C}}$ is the sensitivity of $\mathcal{C}$. Here some design
choices are mandatory in order to deal with division by zero and infinity.
Two immediate observations are that we would like
$$
\frac{| \sum \sem{\mathcal{C}[e]} - \sum \sem{\mathcal{C}[f]}|}
{n_{\mathcal{C}}}
$$ to be $0$ if $n_{\mathcal{C}} = 0$ and that
$$
\frac{| \sum \sem{\mathcal{C}[e]} - \sum \sem{\mathcal{C}[f]}|}
{n_{\mathcal{C}}} = 0
$$ if $n_{\mathcal{C}} = \infty$. That means that we can
restrict contexts to range over those with sensitivity different from
$0$ and $\infty$. In particular, excluding the latter means that we
are considering finitely continuous CBEs. This
observation (together with the fact that division is the right adjoint
of multiplication) suggests a possible generalisation of the contextual
distance to arbitrary quantales.
Informally, given a
$\lambda$-term $\mathsf{V}$-relation (i.e. a ground observation)
$\alpha_o$, we can define the contextual distance
$\alpha_o^{ctx}$ between two (appropriate) terms
$e, e'$ as:
$$
\alpha_o^{ctx}(e, e') \triangleq
\bigwedge_{\mathcal{C}} s^*(\alpha_o(\mathcal{C}[e], \mathcal{C}[e'])),
$$
where $\mathcal{C}$ ranges over contexts\footnote{
Giving a formal definition of contexts for $\mathsf{V}$-$\mathsf{Fuzz}$\ requires some (tedious) work.
In fact, contexts should be terms with a hole
$[-]$ to be filled in with another \emph{term} of appropriate type.
However, due to the fine-grained nature of $\mathsf{V}$-$\mathsf{Fuzz}$, we
defined substitution of values only. Therefore, what we should do
is to define a grammar and a notion of substitution for contexts.
Moreover, we should also design
a type system for contexts keeping track of sensitivities
(see e.g. \cite{Harper-Cary/Syntactical-logical-relations/2007}
for the relational case). This is a tedious exercise but can be done
without difficulties. Here we simply notice that it is possible to
`simulate' contexts as follows. Let $\emptyset \imp^{\mathsf{v}} *: \mathsf{unit}$
be the unit value. Suppose we want to come up with a (closed) context
$\mathcal{C}[-]$ of type $\typeTwo$ and sensitivity $s$ taking as
input terms of type $\typeOne$. For that we consider the term
(for readability we annotate the lambda):
$$
\abs{\vartwo: {!}_s \mathsf{unit} \multimap \typeOne}
{\casebang{\vartwo}{\mathcal{C}[\vartwo *]}}
$$
where $\vartwo$ is a fresh variable.
To substitute a term $e$ of type $\typeOne$ in $\mathcal{C}$
we first thunk it to $\abs{}{e} \in \Lambda_{\mathsf{unit} \multimap \typeOne}$
and then consider:
$$
(\abs{\vartwo: {!}_s \mathsf{unit} \multimap \typeOne}
{\casebang{\vartwo}{\mathcal{C}[\vartwo *]}})({!} \abs{}{e})
$$
It is immediate to see that
$\sem{(\abs{\vartwo}
{\casebang{\vartwo}{\mathcal{C}[\vartwo *]}})({!} \abs{}{e})}
$ captures $\sem{\mathcal{C}[e]}$ (although the expression has
not been defined). Moreover, an easy calculation shows that for
any compatible $\lambda$-term $\mathsf{V}$-relation $\alpha$,
and for all terms $e, e'$ of type $\typeOne$
we have:
\begin{multline*}
s \circ \alpha_\typeOne(e, e')
\\ \leq
\alpha_\typeTwo((\abs{\vartwo}
{\casebang{\vartwo}{\mathcal{C}[\vartwo *]}})({!} \abs{}{e}),
(\abs{\vartwo}
{\casebang{\vartwo}{\mathcal{C}[\vartwo *]}})({!} \abs{}{e'})).
\end{multline*}}
with sensitivity $s$,
and the latter is finitely continuous and different from $\infty$.
We should also exclude the constantly-$k$
change of base functor. The map $s^*$ is defined as
the right adjoint of $s$, which exists since $s$ preserves
arbitrary joins (see Proposition 7.34 in \cite{DaveyPriestley/Book/1990}).
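For a finite quantale this right adjoint can be computed pointwise as
$s^*(b) = \bigvee \{a \mid s(a) \leq b\}$. The toy sketch below (in Python;
the chain and the map $s$ are illustrative) makes the resulting Galois
connection explicit.
\begin{verbatim}
# Sketch: right adjoint of a join-preserving monotone map on a finite
# lattice, computed as s*(b) = \/ { a | s(a) <= b }.
def right_adjoint(s, elements, join, leq):
    return lambda b: join([a for a in elements if leq(s(a), b)])

# Toy instance on the chain {0, 1, 2, 3} with s(a) = min(2a, 3).
els = [0, 1, 2, 3]
s = lambda a: min(2 * a, 3)
adj = right_adjoint(s, els, lambda xs: max(xs, default=0),
                    lambda x, y: x <= y)
print([adj(b) for b in els])   # [0, 0, 1, 3]; s(a) <= b iff a <= adj(b)
\end{verbatim}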
Another possibility is to define $\alpha^{ctx}$ as the largest compatible
and adequate $\mathsf{V}$-relation, where adequacy is defined via the
$\mathsf{V}$-relation $\alpha_o$. However, proving that such a
$\mathsf{V}$-relation exists in general seems to be far from trivial.
These difficulties seem to suggest that, contrary to what happens
when dealing with ordinary relations, a notion of contextual
$\mathsf{V}$-preorder/equivalence is less natural than
the notion of applicative $\Gamma$-(bi)similarity.
\paragraph{Combining Effects}
Our last observation concerns the applicability of the framework
developed. In fact, all examples considered in this paper deal with calculi with
just one kind of effect (e.g. probabilistic nondeterminism). However,
we can apply the theory developed to combined effects as well.
We illustrate this possibility by sketching how to add global states
to $P$-Fuzz. Recall that the global state monad $\mathcal{G}$ is defined by
$\mathcal{G} X \triangleq (S \times X)^S$ where $S = \{0,1\}^{\mathcal{L}}$
for a set of (public) location names $\mathcal{L}$.
Such monad comes together with operation symbols for reading and writing
locations: $\Sigma = \{\mathbf{get}_\ell, \mathbf{set}_{\ell := 0},\mathbf{set}_{\ell := 1} \mid \ell \in \mathcal{L}\}$.
The intended semantics of $\mathbf{get}_\ell(e, f)$ is
to read the content of $\ell$ and to continue as $e$ if the
content is $0$, otherwise continue as $f$. Dually,
$\mathbf{set}_{\ell := 0}(e)$ (resp. $\mathbf{set}_{\ell := 1}(e)$) stores the
bit $0$ (resp. $1$) in the location $\ell$ and then continues
as $e$ (see Example \ref{ex:monads}).
Our combination of global stores and probabilistic computations is based on the
monad $\mathcal{G}_p X = (\mathcal{D}_\bot(S \times X))^S$. The unit $\eta$
of the monad is defined by
$\eta(x)(b) = \dirac{\langle b, x\rangle}$, whereas the strong Kleisli extension
$h^{\sharp}$ of $h: Z \times X \to (\mathcal{D}_\bot(S \times Y))^S$
is defined as follows:
first we uncurry $h$ (and apply some canonical isomorphisms) to obtain the function
$$h_u: Z \times (S \times X) \to \mathcal{D}_\bot(S \times Y).$$
We then define $h^{\sharp}$ by
$$
h^{\sharp}(z,m)(b) = \strongkleisli{h_u}(z,m(b)),
$$
where $\strongkleisli{h_u}: Z \times \mathcal{D}_\bot(S \times X)
\to \mathcal{D}_\bot(S \times Y)$
is the strong Kleisli extension
of $h_u$ with respect to $\mathcal{D}_\bot$.
Easy calculations show that
the triple $\langle \mathcal{G}_p, \eta, -^{\sharp} \rangle$ is indeed a
strong Kleisli triple.
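To make the construction concrete, the following sketch (in Python)
implements the unit and strong Kleisli extension of $\mathcal{G}_p$ for
finitely supported subdistributions, represented as dictionaries whose
missing mass plays the role of $\bot$; the example store and all names are
illustrative assumptions.
\begin{verbatim}
# Sketch: G_p X = (D_bot(S x X))^S. A subdistribution is a dict
# {outcome: prob} with total mass <= 1; an element of G_p X is a
# function from a state b to a subdistribution over pairs (b', x).

def eta(x):
    # unit: eta(x)(b) = dirac(<b, x>)
    return lambda b: {(b, x): 1.0}

def kleisli_D(h_u, z, d):
    # strong Kleisli extension w.r.t. D_bot of
    # h_u : Z x (S x X) -> D_bot(S x Y), applied to (z, d)
    out = {}
    for (b, x), p in d.items():
        for pair, q in h_u(z, (b, x)).items():
            out[pair] = out.get(pair, 0.0) + p * q
    return out

def kleisli(h_u):
    # h^sharp(z, m)(b) = kleisli_D(h_u, z, m(b))
    return lambda z, m: (lambda b: kleisli_D(h_u, z, m(b)))

# Example: write bit 1 to a one-location store, keeping the value.
set1 = lambda z, bx: {(1, bx[1]): 1.0}
prog = eta("x")
print(kleisli(set1)(None, prog)(0))    # {(1, 'x'): 1.0}
\end{verbatim}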
We now define a $[0,1]$-relator $\Gamma$ for $\mathcal{G}_p$.
Given $\alpha: X \tobar Y$, define
$$
\Gamma \alpha(m,n) =
\sup\nolimits_{b \in S} W_\bot(id_S + \alpha)(m(b),n(b)).
$$
Notice that $(id_S + \alpha)(\langle b,x\rangle, \langle b', x'\rangle) = 1$ if $b \neq b'$
and $\alpha(x,x')$ otherwise. It is relatively easy to prove that $\Gamma$
satisfies conditions in Section \ref{section:v-relators-and-v-relation-lifting}.
As an illustrative example we prove the following result.
\begin{lemma}
The $[0,1]$-relator $\Gamma$ satisfies condition \eqref{Strong-Lax-Bind}:
$$
\vcenter{
\xymatrix{
\ar @{} [dr] |\geq
Z \times X
\ar[r]^-{h}
\ar[d]_{\gamma + \alpha}|-*=0@{|}
&
\mathcal{G}_p Y
\ar[d]^{\Gamma \beta}|-*=0@{|}
\\
Z' \times X'
\ar[r]_-{h'}
&
\mathcal{G}_p Y'
} }
\implies
\vcenter{
\xymatrix{
\ar @{} [dr] |\geq
Z \times \mathcal{G}_p X
\ar[r]^-{h^{\sharp}}
\ar[d]_{\gamma + \Gamma \alpha}|-*=0@{|}
&
\mathcal{G}_p Y
\ar[d]^{\Gamma \beta .}|-*=0@{|}
\\
Z' \times \mathcal{G}_p X'
\ar[r]_-{h'^\sharp}
&
\mathcal{G}_p Y'
} }
$$
\end{lemma}
\begin{proof}
Let us call $(1)$ and $(2)$ the left-hand side and right-hand side
of the above implication, respectively.
Moreover, we write $\alpha_S, \beta_S$ for
$id_S + \alpha, id_S + \beta$,
respectively.
Then:
\begin{align*}
(1) & \implies
\vcenter{
\xymatrix{
\ar @{} [dr] |\geq
Z \times (S \times X)
\ar[r]^-{h_u}
\ar[d]_{\gamma + \alpha_S}|-*=0@{|}
&
\mathcal{D}_\bot(S \times Y)
\ar[d]^{W_\bot \beta_S}|-*=0@{|}
\\
Z' \times (S \times X')
\ar[r]_-{h'_u}
&
\mathcal{D}_\bot(S \times Y')
} }
\\
&\implies
\vcenter{
\xymatrix{
\ar @{} [dr] |\geq
Z \times \mathcal{D}_\bot(S \times X)
\ar[r]^-{\strongkleisli{h_u}}
\ar[d]_{\gamma + W_\bot \alpha_S}|-*=0@{|}
&
\mathcal{D}_\bot(S \times Y)
\ar[d]^{W_\bot \beta_S}|-*=0@{|}
\\
Z' \times \mathcal{D}_\bot(S \times X')
\ar[r]_-{\strongkleisli{h'_u}}
&
\mathcal{D}_\bot(S \times Y')
} }
\\
& \implies (2).
\end{align*}
\end{proof}
By Theorem \ref{prop:applicative-v-similarity-is-compatible}
we thus obtain a notion of applicative
$\Gamma$-similarity which is a compatible generalised metric.
Since CBEs in $P$-Fuzz are finitely continuous we
can also apply results from Section
\ref{section:from-applicative-v-similarity-to-applicative-v-bisimilarity}
to obtain a compatible pseudometric.
\section{Related Work}
\label{section:related-works}
A considerable amount of work has been done in the past years on quantitative
(metric) reasoning in the context of programming language semantics.
In particular, several authors have used (cartesian) categories of
\emph{ultrametric spaces} as a foundation for denotational semantics of
both concurrent
\cite{Arnold/Metric-interpretations/1980,DeBakker/Semantics-concurrency/1982}
and sequential programming languages \cite{Escardo/Metric-model-PCF/1999}.
A different approach is investigated in \cite{GaboardiEtAl/POPL/2017}
where a denotational semantics combining ordinary metric spaces and domains is
given to \emph{pure} (i.e. without effects) $\mathsf{Fuzz}$.
The main theorem of \cite{GaboardiEtAl/POPL/2017} is
a denotational version of the so-called \emph{metric preservation}
\cite{Pierce/DistanceMakesTypesGrowStronger/2010} (whose original proof requires
the introduction of a suitable \emph{step-indexed metric logical relation}).
Our Corollary \ref{cor:metric-preservation} is the operational
counterpart of such result generalised to arbitrary algebraic effects.
A different, although deeply related, line of research has been recently proposed
in \cite{CrubilleDalLago/LICS/2015,CrubilleDalLago/ESOP/2017} where coinductive,
operationally-based distances have been studied for probabilistic
$\lambda$-calculi. In particular,
in \cite{CrubilleDalLago/LICS/2015} a notion of applicative distance
based on the Wasserstein lifting is proposed for a probabilistic
\emph{affine} $\lambda$-calculus. Restricting to affine programs only
makes the calculus strongly normalising and removes copying capabilities
of programs by construction. In this way programs cannot amplify distances
between their inputs and therefore are forced to behave as non-expansive
functions.
This limitation is overcome in \cite{CrubilleDalLago/ESOP/2017}, where a
coinductive notion of distance is proposed for a full linear
$\lambda$-calculus, and distance trivialisation phenomena are studied in depth.
The price to pay for such generality
is that the distance proposed is not applicative, but
a trace distance
somehow resembling environmental bisimilarity \cite{Sangiorgi/Environmental/2011}.
\section{Conclusion}
In this work we have introduced an abstract framework
for studying quantale-valued behavioural relations
for higher-order effectful languages. Such framework has been
instantiated to define the quantitative refinements of
Abramsky's applicative similarity and bisimilarity for $\mathsf{V}$-$\mathsf{Fuzz}$, a
universal $\lambda$-calculus with a linear type system
tracking program sensitivity enriched with algebraic
effects.
Our main theorems
state that under suitable conditions the quantitative notions
of applicative similarity and bisimilarity obtained are a compatible
generalised metric and pseudometric, respectively.
These results can be instantiated to obtain compatible
pseudometrics for several concrete calculi.
A future research direction is to study how the abstract
framework developed can be used to investigate quantitative refinements
of behavioural relations different from applicative (bi)similarity.
In particular, investigating contextual
distances (see \cite{Gavazzo/Arxiv/2018} for some preliminary
observations), denotationally-based distances (along the lines of
\cite{GaboardiEtAl/POPL/2017}), and
distances based on suitable logical relations (such as the one in
\cite{Pierce/DistanceMakesTypesGrowStronger/2010}) are interesting topics
for further research.
\begin{acks}
The author would like to thank Ugo Dal Lago, Rapha\"elle Crubill\'e, and
Paul Levy for the many useful comments and suggestions.
Special thanks also go to Alex Simpson and Niels Voorneveld for many
insightful discussions about the topic of this work.
\end{acks}
\bibliographystyle{plain}
\section*{Supplementary material}
\centerline{\href{https://youtu.be/DZ6PLllJFzI}{https://youtu.be/DZ6PLllJFzI}}
\section{Introduction}
\input{sections/introduction.tex}
\section{Platform and locomotion concepts}
\input{sections/mobility_concept.tex}
\section{Two-agent robotic platform analysis and implementation}
\input{sections/robotic_platform.tex}
\section{Shapeshifter capabilities}
\input{sections/mission_operations.tex}
\section{A case study of Titan}
\input{sections/mission_operation_titan.tex}
\section{Conclusions and future works}
\input{sections/conclusions.tex}
\acknowledgments
The authors would like to thank Issa A.D. Nesnas, Kalind Carpenter, Rashied Baradaran Amini, Arash Kalantari, Jason Hofgartner, Ben Hockmann, Jonathan I. Lunine, Alessandra Babuscia, Benjamin Morrell, Jose Mendez and Kevin Liu for their precious contributions. Andrea Tagliabue thanks the \textit{Ermenegildo Zegna Founder's Scholarship} for supporting this project.
The research was partially carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The information presented about the Shapeshifter mission concept is pre-decisional and is provided for planning and discussion purposes only.
\textcopyright 2020 California Institute of Technology. Partial Government sponsorship acknowledged.
{\footnotesize
\bibliographystyle{IEEEtran}
\subsection{Flying and flying-array capabilities}
\begin{itemize}
\item \textbf{Map large areas}: A group of flying Cobots can search for desired scientific objectives and create a detailed topographical map of Titan. This allows the Shapeshifter to probe different kinds of surfaces, including very rough terrains and cliffs.
According to the energy-efficiency analysis presented in Section \ref{sec:RoboticPlatform}, with a maximum range of 130 km, a team of Cobots flying radially outward from the Home-base could survey a circular area of over 10,000 km$^2$, and return home safely to recharge.
\item \textbf{Create a communication mesh}: To maintain communication from under the surface (e.g., while exploring a cave), Shapeshifter can disassemble into Cobots that will maintain a network of communication nodes and line-of-sight from, e.g., a deep cryovolcanic lava tube to Titan's surface. An artist's representation of this functionality is included in Figure \ref{fig:ExploringCaveAndCarryHomeBase}.
\item \textbf{Collect spatio-temporal measurements}: By distributing multiple Cobots in a wide area, observations of a phenomena can be correlated not only with respect to time, but also space. This can be useful to study the evolution of the storms present on Titan \cite{nasaTitanStorms}.
\item \textbf{Collect samples}: Cobots can collect samples of rocks and terrain using an on-board scoop, whose design is left as future work. The collected samples are then analyzed by the Home-base. With approximately 30 N of maximum thrust available, a single Cobot could carry more than 20 kg of samples back to the Home-base. Maximum payload transportation capacity, anyway, could be reduced due to the aerodynamic drag of the payload (i.e. due to increased atmospheric density w.r.t. Earth).
\item \textbf{Transport the Home-base}: Shapeshifter can morph into a flight array of Cobots to lift and carry the portable Home-base from one mission site to another, as represented in an artist's concept in Figure \ref{fig:ExploringCaveAndCarryHomeBase}, by employing a decentralized collaborative aerial transportation strategy such as \cite{collaborativeTransportation}. The payload capacity of the twelve Cobots envisioned to be part of the mission can easily reach more than 200 kg on Titan, sufficient to carry the Home-base.
\end{itemize}
\subsection{Rollocopter capabilities}
\begin{itemize}
\item \textbf{Traverse long distances}: As shown in our analysis, morphing into a sphere can be a more efficient way of traversing long distances. By taking advantage of energy-efficient mobility, the Shapeshifter can increase its range on a single charge to up to 260 km, approximately doubling the range in flight mode. This corresponds to a reachable area of more than 50,000 km$^2$, radially outward from the stationary Home-base. %
\item \textbf{Explore caves}: Shapeshifter can explore subsurface voids, including cryovolcanic and karstic caves. For narrow passages, the Rollocopter configuration provides resilience to collisions, allowing the robot to bounce off walls and pass through cracks, holes, and narrow passages to reach science targets.
\end{itemize}
\subsection{Torpedo capabilities}
\begin{itemize}
\item \textbf{Above and sub-surface navigation}: The propeller-based propulsion of each Cobot can generate thrust in gas and liquid environments \cite{diez2017unmanned}. Thus, Shapeshifter can in principle navigate and explore above and below Titan's mare, such as the Ligeia Mare represented in Figure \ref{fig:LigeiaMare}. These functionalities can be further developed by leveraging studies on motors and propulsion system for Titan's hydrocarbon lakes \cite{hartwig2016exploring}. A more detailed analysis of the Torpedo mode of the Shapeshifter is left as future work.
\end{itemize}
\subsection{Autonomy}
One of the main challenges for the proposed mission is autonomy, including localization, 3D mapping, obstacle avoidance, self-assembly and risk-aware decision making. In this section, we propose approaches that can be of interest for the Shapeshifter.
\subsubsection{State estimation, localization and mapping}
For navigation, we expect to rely on resource-constrained VIO (Visual Inertial Odometry) \cite{li2012vision}, which allows cm-level localization accuracy to be achieved when coupled with state-of-the-art localization solutions (e.g. via Distributed Pose-Graph Optimization), using on-board cameras and a computing unit such as \cite{Snapdrag52:online}. In mapping mission scenarios, we envision the ability to create precise 3D maps with limited onboard computation power (following, for example, \cite{mu2015two}, \cite{agha2017confidence}, \cite{lajoie2019door}), in order to be able to compress the large number of images acquired by each Cobot into a representative map that can be easily shared with the Home-base. Potential perception challenges in (methane) fog or under-liquid \cite{garcia2017exploring}, \cite{maldonado2016robotic} will be studied.
\subsubsection{Motion planning and control}
We use sampling-based methods \cite{janson2015fast}, \cite{pavone2007decentralized} to plan precise motions in 3D to accomplish shapeshifting behaviors, and adaptive trajectory-generation strategies such as \cite{tagliabue2019model} to guarantee that the Shapeshifter always operates at the most energy-efficient velocity. Distributed algorithms are employed in the case the Cobots have to find each other when lost or have to transport the Home-base without relying on any communication network \cite{collaborativeTransportation}, \cite{tagliabue2017collaborative} for example due to strong electromagnetic interferences.
\subsubsection{Decision making}
A fundamental aspect for the Shapeshifter is the ability to efficiently make complex decisions under uncertainty. In order to operate at its full capabilities, for example, the Shapeshifter will face the constant need to decide in which configuration to morph, based on multiple factors such as the environment (e.g. terrain), its battery level, the goal of the mission, the health of the system or the risk level that the mission managers are willing to take. To handle these challenges we rely on the risk-aware planning framework \ac{FIRM} \cite{agha2014firm}, \ac{SLAP} \cite{agha2015simultaneous} and its variant \cite{kim2019bi}.
\subsection{Mechanical design}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figs/in_CobotPicture.jpg}
\caption{\textit{Left:} Sub-components of a Cobot prototype. \textit{Right:} Our physical prototypes of two Cobots docked together forming a Rollocopter, capable of rolling on sand for increased range w.r.t. flying.}
\label{fig:PlatformImplementationOverview}
\end{figure}
A mechanical design of the two-agent Shapeshifter is based on Cobots equipped with a hemispherical shell, so that the docking of two agents creates a cylindrical structure adequate for rolling in one dimension. The prototype of the platform is represented in Figure \ref{fig:PlatformImplementationOverview}, where we have highlighted the different sub-components that constitute the Cobot. The two Cobots are identical, with the exception of the docking mechanism and the spin direction of the propellers.
\subsection{Dynamic models for rolling and flying robots}\label{sec:dynamics}
In this section, we derive the dynamic model of a rolling Shapeshifter (Rollocopter), assuming that rolling happens without slipping, and we describe the dynamic model of a flying Cobot. We additionally outline an induced-velocity model for power consumption as a function of the rotor thrust (see e.g. \cite{johnson2012helicopter}, \cite{tagliabue2019model}) both for the flying and rolling cases.
\subsubsection{Reference frame}
For both the rolling and flying vehicles, we define an inertial reference frame $I=(x_I, y_I, z_I)$ and a non-inertial reference frame $B=(x_B, y_B, z_B)$ fixed in the \ac{CoM} of each vehicle. The frames, in the case of the rolling robot, are represented in Figure \ref{fig:3DShapeshifterFramesAndActuaturs}.
\subsubsection{Notation declaration} The notation $\prescript{}{W}{\boldsymbol{r}} = \prescript{}{W}{(r_x, r_y, r_z)}$ denotes a vector defined in the Cartesian reference frame $W$. The matrix $\prescript{}{A}{\boldsymbol{R}}_{BC}$ denotes the rotation matrix from the coordinate frame $C$ to $B$, defined in $A$.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figs/in_SimplifiedCobot.pdf}
\caption{Model of the two-agent Shapeshifter, where we have highlighted the inertial reference frame $I=(x_I, y_I, z_I)$, the body-fixed reference frame $B=(x_B, y_B, z_B)$ and the forces produced by the actuators.}
\label{fig:3DShapeshifterFramesAndActuaturs}
\end{figure}
\subsubsection{Rolling Shapeshifter}
The dynamic equations of the rolling Shapeshifter are derived by using the Newton-Euler method:
\begin{align}
\prescript{}{I}{\ddot{\boldsymbol{x}}} = & \frac{1}{m} \big ( \prescript{}{I}{\boldsymbol{R}}_{IB}\prescript{}{B}{\boldsymbol{f}_\text{cmd}} - m\prescript{}{I}{\boldsymbol{g}} + \prescript{}{I}{\boldsymbol{r}} + \prescript{}{I}{\boldsymbol{f}_\text{drag}}\big)
\label{eq:RollocopterDynamics:translation}
\end{align}
\begin{equation}
\begin{split}
\prescript{}{B}{\dot{\boldsymbol{\omega}}} = \boldsymbol{J}^{-1} \big ( & \prescript{}{B}{\boldsymbol{\tau}_\text{cmd}} - \prescript{}{B}{\boldsymbol{\omega}} \times \boldsymbol{J} \prescript{}{B}{\boldsymbol{\omega}} \\ &
- l\prescript{}{I}{\boldsymbol{R}}_{IB}^{-1} (\prescript{}{I}{\boldsymbol{n}} \times \prescript{}{I}{\boldsymbol{r}}) + \prescript{}{B}{\boldsymbol{\tau}_\text{rolling}} \big )
\label{eq:RollocopterDynamics:rotational}
\end{split}
\end{equation}
where $m$ and $\boldsymbol{J}$ represent, respectively, the mass and inertia of the vehicle; the vectors $\boldsymbol{x}, \dot{\boldsymbol{x}}, \ddot{\boldsymbol{x}}$ represent the position of the robot and its derivatives, while $\boldsymbol{\omega}, \dot{\boldsymbol{\omega}}$ represent the angular rates and their derivatives. The attitude is represented by the rotation matrix $\prescript{}{I}{\boldsymbol{R}}_{IB}$, defining a rotation from $B$ to $I$, while $\prescript{}{I}{\boldsymbol{n}}$ and $\prescript{}{I}{\boldsymbol{r}}$ represent, respectively, the normal of the plane on which the vehicle is rolling and the reaction force exerted by such a plane, expressed in $I$. The forces and torques produced by the actuators are defined by $\prescript{}{B}{\boldsymbol{f}}_\text{cmd} = (0, 0, f_\text{cmd})$, and $\prescript{}{B}{\boldsymbol{\tau}}_\text{cmd}$. The total thrust force $f_\text{cmd}$ is defined as:
\begin{align}
f_\text{cmd} = \sum_{i=1}^{4}f_i - \sum_{i=5}^{8}f_i %
\end{align}
under the assumption that all the propellers are placed on parallel planes. The propellers are placed so that opposite or adjacent pairs spin in opposite directions. For example, following the same numbering scheme adopted in Figure \ref{fig:3DShapeshifterFramesAndActuaturs}, the pair of propellers (1, 5) have opposite directions of rotation, as well as the pair (1, 2).
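As an illustrative transcription of Equation (\ref{eq:RollocopterDynamics:translation}), the translational dynamics can be evaluated as sketched below in Python. The mass of two docked Cobots and Titan's gravitational acceleration are taken from values reported later in this paper; the function name and the NumPy dependency are our own conventions, and the snippet is a sketch rather than flight software.
\begin{verbatim}
import numpy as np

def translational_accel(R_IB, f_cmd, r_I, f_drag_I,
                        m=1.6, g=np.array([0.0, 0.0, 1.352])):
    """Eq. (translation): xdd = (R_IB @ f_cmd_B - m*g + r + drag)/m.

    R_IB     : 3x3 rotation matrix from frame B to frame I
    f_cmd    : scalar total thrust along z_B [N]
    r_I      : ground reaction force in I [N] (zero when flying)
    f_drag_I : aerodynamic drag force in I [N]
    m, g     : assumed mass of two docked 0.8 kg Cobots and
               Titan's gravity vector
    """
    f_cmd_B = np.array([0.0, 0.0, f_cmd])
    return (R_IB @ f_cmd_B - m * g + r_I + f_drag_I) / m
\end{verbatim}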
For simplicity, we assume that the vehicle rolls without slipping around $y_B$.
The torque due to the deformations caused by the interaction between the vehicle and the terrain
is thus modeled as
\begin{align}
\prescript{}{B}{\boldsymbol{\tau}}_\text{rolling} = (0, C_\text{rr} \prescript{}{I}{\boldsymbol{r}} \cdot \prescript{}{I}{\boldsymbol{n}} l, 0)
\label{eq:rolling}
\end{align}
where $\cdot$ represents the scalar product, $C_\text{rr}$ the rolling resistance coefficient and $l$ the radius of the cylindrical shell of the robot.
We additionally assume that the aerodynamic drag force $\boldsymbol{f}_\text{drag}$, applied at the \ac{CoM} of the robot, is a function of the square of the velocity of the vehicle:
\begin{align}
\prescript{}{I}{\boldsymbol{f}}_\text{drag} = -\frac{1}{2}C_\text{d} \rho A ||\prescript{}{I}{\dot{\boldsymbol{x}}}||\prescript{}{I}{\dot{\boldsymbol{x}}}
\label{eq:drag}
\end{align}
where $C_\text{d}$ is the drag coefficient, and $A$ is the aerodynamic area, computed as the area of a Cobot's rectangular base projected on the plane orthogonal to the velocity of the vehicle $\prescript{}{I}{\dot{\boldsymbol{x}}}$. We observe that the scalar $A$ is a function of the attitude of the robot.
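As a minimal sketch of Equation (\ref{eq:drag}), the following Python function evaluates the drag force for a given inertial velocity. The projected area $A$ is passed in as a parameter, since its attitude dependence is only specified later in Equation (\ref{eq:Ax}); the default value assumes $\alpha = 0$, and the defaults follow the Titan parameters used in our analysis.
\begin{verbatim}
import numpy as np

def drag_force(v_I, C_d=2.1, rho=5.4, A=0.064):
    """Eq. (drag): f_drag = -0.5 * C_d * rho * A * ||v|| * v.

    v_I : velocity of the CoM in the inertial frame I [m/s]
    A   : attitude-dependent projected area [m^2]; the default
          assumes alpha = 0, i.e. A = h * w = 0.16 * 0.4.
    """
    v_I = np.asarray(v_I, dtype=float)
    return -0.5 * C_d * rho * A * np.linalg.norm(v_I) * v_I
\end{verbatim}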
\subsubsection{Flying Cobot}
The dynamic model of a flying Cobot is a special case of the rolling Shapeshifter, and can be obtained by Equations (\ref{eq:RollocopterDynamics:translation}) and (\ref{eq:RollocopterDynamics:rotational}) assuming $\boldsymbol{r} = \boldsymbol{0}$ and, as a consequence, $\boldsymbol{\tau}_\text{rolling} = \boldsymbol{0}$. %
\subsection{Power consumption model for rolling and flying robots}
We assume that the total power consumption of the robot is directly proportional to the aerodynamic power produced by the propellers, according to the commanded thrust and the motion of the robot, as shown in \cite{tagliabue2019model}, \cite{ware2016analysis}. We disregard the power consumption of other processes (e.g. thermal) in our analysis, as we do not expect significant discrepancies between the two mobility modes.
Our induced velocity model relates individual rotor thrust to aerodynamic power, as detailed in \cite{leishman2006principles}, assuming forward-flight regime. %
The forward flight power-thrust model for the $i\textit{-th}$ propeller can be expressed as:
\begin{align}
P_i = \frac{f_i(\nu_i - \nu_\infty \sin \alpha)}{\eta_p \eta_m \eta_c}
\end{align}
Power consumption of rotor $i$, $P_i$, is defined as a function of rotor thrust ($f_i$), induced velocity ($\nu_i$), freestream airspeed ($\nu_\infty$), and angle of attack ($\alpha$). For this analysis, we assume a propeller efficiency of $\eta_p = 0.6$, a motor efficiency of $\eta_m = 0.85$, and a controller efficiency of $\eta_c = 0.95$.
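A hedged numerical sketch of this model is given below. The induced velocity $\nu_i$ is obtained from the standard momentum-theory fixed-point relation for forward flight (see e.g. \cite{leishman2006principles}); the rotor disk radius corresponding to a 6-inch propeller and the simple fixed-point iteration are our own assumptions, not identified parameters.
\begin{verbatim}
import numpy as np

def induced_velocity(f, v_inf, alpha, rho=5.4,
                     r_prop=0.0762, iters=50):
    """Fixed-point solve of the momentum-theory relation for the
    induced velocity of one rotor in forward flight; the disk
    radius of a 6-inch propeller (0.0762 m) is an assumption."""
    A_disk = np.pi * r_prop**2
    nu = np.sqrt(max(f, 1e-9) / (2.0 * rho * A_disk))  # hover guess
    for _ in range(iters):
        u = np.hypot(v_inf * np.cos(alpha),
                     v_inf * np.sin(alpha) + nu)
        nu = f / (2.0 * rho * A_disk * max(u, 1e-9))
    return nu

def rotor_power(f, v_inf, alpha,
                eta_p=0.6, eta_m=0.85, eta_c=0.95):
    """Electrical power of one rotor per the forward-flight model:
    P = f * (nu - v_inf * sin(alpha)) / (eta_p * eta_m * eta_c)."""
    nu = induced_velocity(f, v_inf, alpha)
    return f * (nu - v_inf * np.sin(alpha)) / (eta_p * eta_m * eta_c)
\end{verbatim}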
We assume that this forward-flight model applies to the rolling configuration as well, justified by the observation that all rotors in the Rollocopter are moving forward relative to freestream at all times during a rotation. We neglect any effect that potential vortex ring states might have on total power consumption.
\subsection{Motion control strategies for rolling and flying robots}
\subsubsection{Rolling Shapeshifter}
\label{sec:rolling_control}
For this initial study, we implemented a simple, centralized control strategy that tracks a desired body rate $\prescript{}{B}{\boldsymbol{\omega}}_\text{des}$ by applying a pure torque on the Rollocopter, i.e., producing zero net thrust. A detailed derivation for the control strategy of a conceptually similar platform is proposed in our related work \cite{rollocopter2019Ieee}. The desired torques $\prescript{}{B}{\boldsymbol{\tau}}_\text{cmd}$, expressed in the frame $B$ of the rolling vehicle, are computed according to a proportional-integral (PI) controller, using the measurements $\prescript{}{B}{\hat{\boldsymbol{\omega}}}$ of the body angular rates provided by the IMU on-board one of the two docked Cobots:
\vspace{-2pt}
\begin{align}
&\boldsymbol{e} = \prescript{}{B}{\boldsymbol{\omega}}_\text{des} - \prescript{}{B}{\hat{\boldsymbol{\omega}}} \\
\prescript{}{B}{\boldsymbol{\tau}}_\text{cmd} & = \boldsymbol{K}_\text{p}\boldsymbol{e} + \boldsymbol{K}_\text{i}\int_{t_0}^{t} \boldsymbol{e} dt
\end{align}\vspace{-2pt}
where $\boldsymbol{K}_\text{p}$ and $\boldsymbol{K}_\text{i}$ are diagonal matrices, tuning parameters of the controller.
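A minimal sketch of this body-rate loop is given below, with placeholder gains; saturation handling and integrator anti-windup, which a deployed implementation would require, are omitted for brevity.
\begin{verbatim}
import numpy as np

class BodyRatePI:
    """PI body-rate controller: tau_cmd = Kp*e + Ki*int(e)dt.
    Gain vectors are placeholders, not the tuned flight values."""
    def __init__(self, Kp, Ki):
        self.Kp = np.diag(Kp)          # diagonal proportional gains
        self.Ki = np.diag(Ki)          # diagonal integral gains
        self.integral = np.zeros(3)

    def step(self, omega_des_B, omega_meas_B, dt):
        e = np.asarray(omega_des_B) - np.asarray(omega_meas_B)
        self.integral += e * dt        # accumulate integral term
        return self.Kp @ e + self.Ki @ self.integral
\end{verbatim}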
Given the commanded torque $\prescript{}{B}{\boldsymbol{\tau}}_\text{cmd}$, a rotation speed input $n_i$ for the $i\textit{-th}$ propeller, with $i = 1, ..., 8$ is produced in the following way.
First, we define the quadruple $(f_\text{A}, ..., f_\text{D})$, where every entry corresponds to the force produced by each pair of opposite propellers (e.g. $f_\text{A} = f_\text{1} - f_\text{5}$, $f_\text{B} = f_\text{2} - f_\text{6}$, etc).
Second, we derive an expression which relates the forces $f_\text{A}, ..., f_\text{D}$ with
\begin{inparaenum}[(a)]
\item the total torques produced by the propellers, which we assume to be equivalent to $\prescript{}{B}{\boldsymbol{\tau}}_\text{cmd}$, and
\item the force $f_\text{cmd}$ produced along the positive $z$ axis of $B$, and expressed in body $B$ frame.
\end{inparaenum}
This expression is defined as:
\begin{equation}
\begin{bmatrix}
f_\text{cmd} \\ \boldsymbol{\tau}_\text{cmd} \\
\end{bmatrix} =
\boldsymbol{M}
\begin{bmatrix}
f_\text{A} \\ f_\text{B} \\ f_\text{C} \\ f_\text{D} \\
\end{bmatrix}
\label{eq:AllocationStrategy}
\end{equation}
where $\boldsymbol{M}$ is a square, full-rank matrix by design of the system, and corresponds to the inverse of the allocation matrix. The matrix $\boldsymbol{M}$ is defined as:
\begin{align}
\boldsymbol{M} =
\begin{bmatrix}
1 & 1 & 1 & 1 \\
-c & c & c & -c \\
-c & -c & c & c \\
-k_\tau & k_\tau & -k_\tau & k_\tau \\
\end{bmatrix}
\end{align}
with $c = \frac{a}{\sqrt{2}}$, where $a$ is the arm length from each propeller to the \ac{CoM}, and $k_\tau$ is a constant that maps the $i\textit{-th}$ propeller's thrust $f_i$ to the aerodynamic torque $\tau_i$, according to $\tau_i = k_\tau f_i$.
Last, because we have assumed that the net force produced by the propellers has to be zero, we impose $f_\text{cmd} = 0$ and solve Equation (\ref{eq:AllocationStrategy}) by inverting the square matrix $\boldsymbol{M}$,
allowing us to find the desired force that has to be produced by each propeller pair.
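The following Python sketch carries out this allocation step numerically: it builds $\boldsymbol{M}$ from assumed values of $a$ and $k_\tau$, imposes $f_\text{cmd}=0$, and solves the linear system for the pair forces. The numerical constants are illustrative only.
\begin{verbatim}
import numpy as np

def allocate(tau_cmd, a=0.14, k_tau=0.016):
    """Solve Eq. (AllocationStrategy) for (f_A, f_B, f_C, f_D)
    with zero net thrust; a and k_tau are assumed values."""
    c = a / np.sqrt(2.0)
    M = np.array([[1.0,    1.0,    1.0,    1.0],
                  [-c,     c,      c,     -c],
                  [-c,    -c,      c,      c],
                  [-k_tau, k_tau, -k_tau,  k_tau]])
    rhs = np.concatenate(([0.0],                       # f_cmd = 0
                          np.asarray(tau_cmd, float)))  # tau_cmd
    return np.linalg.solve(M, rhs)
\end{verbatim}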
The desired rotation speed to be commanded to the propellers in each propeller pair can be retrieved in a straightforward way, for example, in the case of the propeller pair A (constituted by propeller 1 and 5):
\begin{equation}
(n_1, n_5)
=
\begin{cases}
(\sqrt{f_\text{A}/k_\text{t}}, 0) & \mbox{if } f_\text{A} \geq 0 \\
(0, \sqrt{-f_\text{A}/k_\text{t}}) & \mbox{if } f_\text{A} < 0
\end{cases}
\end{equation}
where $k_\text{t}$ relates the force produced by each propeller with its rotation speed according to $f_i = k_\text{t} n_i^2$.
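A minimal sketch of this mapping for one propeller pair is given below; the value of $k_\text{t}$ is an assumed placeholder, not the identified constant of our motors.
\begin{verbatim}
import numpy as np

def pair_speeds(f_pair, k_t=1.1e-5):
    """Map a signed pair force (e.g. f_A = f_1 - f_5) to the
    rotation speeds of the two opposing propellers, per
    f_i = k_t * n_i**2; only one propeller of the pair spins."""
    if f_pair >= 0.0:
        return (np.sqrt(f_pair / k_t), 0.0)    # e.g. (n_1, n_5)
    return (0.0, np.sqrt(-f_pair / k_t))
\end{verbatim}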
\subsubsection{Flying Cobot}
For the flying Cobot, we employ a common cascaded control architecture \cite{Controll67:online} provided by the embedded microcontroller framework \cite{meier2015px4}.
\subsection{Framework for the energy efficiency analysis}
In this part, we introduce the approach used for the energy analysis.
We focus on the steady-state, planar motion of the robots ($\ddot{\boldsymbol{x}} = \boldsymbol{0}, \dot{\boldsymbol{\omega}} = \boldsymbol{0}$), and assume that the motors are controlled to apply pure torque (zero net thrust), implementing the presented control strategy. In the following analyses, $\theta$ defines the slope of the terrain, and $\alpha$ defines the angle between the body frame and the inertial coordinate system, as rotated about $\hat{y}_I = \hat{y}_B$ (\textit{pitch} angle). Cobot dimensions and other parameters used for the analysis are defined in Table \ref{table:dimensions}.
\newcommand\Tstrut{\rule{0pt}{2.6ex}} %
\newcommand\Bstrut{\rule[-0.9ex]{0pt}{0pt}} %
\newcommand{\TBstrut}{\Tstrut\Bstrut} %
\begin{table}[h]
\begin{center}
\begin{tabular}{| r | l | l |}
\hline
\textit{Symbol} & \textit{Meaning} & \textit{Value} \TBstrut\\
\hline \hline
$h$ & \textit{rolling}: height from one rotor & 0.16 m \Tstrut\\
& to opposite rotor & \Bstrut\\
& \textit{flying}: height from rotor to base & 0.08 m \TBstrut\\
\hline
$l$ & radius of cylinder & 0.2 m \TBstrut\\
\hline
$w$ & width of cylinder & 0.4 m \TBstrut\\
\hline
$C_\text{d}$ & drag coefficient & 2.1 \TBstrut\\
\hline
$C_\text{rr}$ & rolling resistance coefficient & 0.01 - 0.2 \TBstrut\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Parameters for the two-Cobot Shapeshifter.}
\label{table:dimensions}
\end{table}
\subsubsection{Rolling Shapeshifter}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/in_Rolling_FBD.png}
\caption{Free body diagram: rolling uphill. Thrust from four rotors produces a pure torque. Rollocopter is propelled by the ground reaction force (assume rolling without slipping) up a slope defined by $\theta$.}
\label{fig:rolling_fbd}
\end{figure}
The free-body diagram used in the planar-motion analysis of the rolling configuration is shown in Figure \ref{fig:rolling_fbd}. Torque due to rolling resistance is calculated as per Eq. (\ref{eq:rolling}). This analysis assumes a rolling friction coefficient between that of consolidated soil ($C_\text{rr} = 0.01$) and tires on loose sand ($C_\text{rr} = 0.2$), consistent with data gathered from the Huygens Surface Science package \cite{zarnecki2005soft}.
Drag force on the Rollocopter is calculated based on Eq. (\ref{eq:drag}), where drag coefficient, $C_\text{d} = 2.1$, is approximated based on Titan's atmosphere \cite{liechty2006cassini}. The aerodynamic area, $A$, is computed as the area of a Cobot's rectangular base projected onto the plane orthogonal to its velocity vector, as a function of the Rollocopter's orientation (\ref{eq:Ax}).
\begin{equation}
A = (h \mid\cos\alpha\mid + 2l\mid\sin\alpha\mid)w
\label{eq:Ax}
\end{equation}
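For reference, Equation (\ref{eq:Ax}) translates directly into the following Python helper, using the Cobot dimensions of Table \ref{table:dimensions}; it is a direct transcription, included only for convenience.
\begin{verbatim}
import numpy as np

def projected_area(alpha, h=0.16, l=0.2, w=0.4):
    """Eq. (Ax): A = (h*|cos(a)| + 2*l*|sin(a)|) * w, with the
    Cobot dimensions from the parameter table."""
    return (h * np.abs(np.cos(alpha))
            + 2.0 * l * np.abs(np.sin(alpha))) * w
\end{verbatim}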
\subsubsection{Flying Cobot}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figs/in_Flying_FBD.png}
\caption{Free body diagram: flying. Cobot flies at a constant height above a hill defined by $\theta$. Angle of attack $\alpha$ is determined by the angle required to keep the Cobot at a constant altitude.}
\label{fig:flying_fbd}
\end{figure}
Figure \ref{fig:flying_fbd} shows a free-body diagram for the flying configuration of a single Cobot. In this configuration, $\alpha$ is determined based on the angle of attack required for the Cobot to fly at a constant height above the ground. Aerodynamic drag force is again given by (\ref{eq:drag}), and aerodynamic area by (\ref{eq:Ax}).
\subsection{Results for the two-Cobot Shapeshifter}
In this section, we present the preliminary results for the mobility validation and energy-efficiency analysis of the two-agent Shapeshifter. We start by introducing a high-fidelity simulation environment of Titan, developed to test our control algorithms, validate our energy-efficiency analysis, and provide a way to simulate mobility aspects of the proposed mission. We then present a physical prototype of the platform, and use it to validate the flying, docking, un-docking, and rolling capability of the proposed design. We conclude this section by showing the results of our energy-efficiency analysis. The results highlight the mechanical feasibility of having a platform capable of both flying and rolling on a sandy terrain, as well as the energy savings realized by morphing into a rolling vehicle.
\subsubsection{Simulation environment}
One goal of this work is to construct a physics-based simulation to verify the analytical energy analysis, as well as provide a tool for further mobility analyses and development of control strategies. We chose to use Gazebo \cite{koenig2004design} as the simulation environment because of its realistic physics engine, including aerodynamic drag and rolling resistance, as well as existing quadrotor packages \cite{Furrer2016}. With Gazebo, the user can place Cobots in either flying or two-agent rolling configurations, then input waypoints for the agent; the simulation outputs power and energy data in real-time, providing a versatile platform for testing different Shapeshifter missions. Our simulation setup is not limited to simulating the robot: it also includes a 3D model of the Sotra Patera area on Titan. This model has been obtained via an elevation map computed from images captured by the Cassini mission. Figure \ref{fig:gazebo_titan} \textit{(bottom right)} shows the Shapeshifter rolling on a surface generated from a depth elevation map of Sotra Patera on Titan \textit{(top right)}. We also consider a basic simulation with one flying Cobot and one Rollocopter traversing a flat surface, shown in Figure \ref{fig:gazebo_titan} \textit{(top and bottom left)}, to isolate the mobility primitives.
\begin{figure}
\centering
\includegraphics[clip,width=\columnwidth]{figs/in_ShapeshifterSimulation2.png}
\caption{High-fidelity simulation based on ROS/Gazebo of the Shapeshifter on Titan. \textit{Top left:} simulated model of a Cobot. \textit{Top right:} model of the Sotra-Patera region, obtained by elevation maps of Titan reconstructed from images captured by the Cassini mission. \textit{Bottom right:} the simulated Shapeshifter (assembled as Rollocopter) near Sotra-Patera on Titan. \textit{Bottom left:} the simulated model of the Shapeshifter assembled as Rollocopter.}
\label{fig:gazebo_titan}
\end{figure}
\subsubsection{Hardware implementation details}
In this part, we present details of the hardware prototype (two-Cobot Shapeshifter) built to validate the flying and rolling mobility modes of the Shapeshifter.
Each Cobot consists of four 6-inch propellers, \textit{EMax 2300 kV} brushless-DC motors, and a three-cell 2200 mAh battery, which provides approximately eight minutes of flight (in hover, on Earth). The side length and the diameter of the cylinder when two Cobots are docked are 0.4 m; the total weight of each Cobot is approximately 0.8 kg. Sufficient payload transportation capacity is guaranteed by the maximum thrust produced by the propellers, which corresponds to approximately 32 N. On-board computing power and IMU are provided by a Pixhawk-mini running the PX4 flight stack \cite{meier2015px4}. The two Cobots are identical, with the exception of the position of the magnets and mechanical funnels for the docking mechanism.
\textbf{Shell for rolling and flying:} Each Cobot is equipped with a shell adequate for flying and capable of withstanding small impacts during rolling. The shell is designed using carbon fiber tubes, adequate for this task because they are stiff and lightweight. The carbon fiber tubes are connected together via 3D-printed joints.
\textbf{Docking mechanism:}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figs/in_MagneticDockingMechanism.pdf}
\caption{Illustration of the magnetic and mechanical docking mechanism used to connect two Cobots.}
\label{fig:DockingMechanism}
\end{figure}
The docking mechanism is based on 12 permanent electromagnets (PEMs) mounted at two diametrically opposite extremities of each Cobot. Each PEM weighs approximately 10 g and, when not powered, produces a normal force of 15 N when connected to a 2 mm thick steel plate. When powered, the magnet produces approximately 0 N of force. Thanks to this property, no power is required to keep the two agents docked. The PEMs can be activated and deactivated by a micro-controller interfaced with the on-board computer.
The docking mechanism additionally includes mechanical funnels connected at the bottom of each Cobot, used to compensate for misalignments during the docking phase and to cancel the shear forces between the agents during rolling. A representation of the docking mechanism, where its main components have been highlighted, can be found in Figure \ref{fig:DockingMechanism}.
\subsubsection{Mobility validation experiments}
\begin{figure*}
\centering
\includegraphics[width=0.86\linewidth]{figs/in_shapeshifter_movie.jpg}
\caption{Frames from the video-clip (see Supplementary Material) of the experiments conducted with our two-Cobot Shapeshifter. \textit{Top:} Docking sequence. \textit{Center:} Rolling sequence. \textit{Bottom:} Un-docking sequence.}
\label{fig:shapeshifter_movie}
\end{figure*}
In this part, we present the experimental results of the docking, un-docking and rolling maneuvers obtained with our prototype, which are represented in Figure \ref{fig:shapeshifter_movie}.
\textbf{Docking}: As represented in the first row of Figure \ref{fig:shapeshifter_movie}, a pilot remotely controls the flying Cobot while the second agent is on the ground, with the propellers pointed towards the ground. This experiment shows that docking is possible despite the limited flying accuracy of a human pilot, as a result of the alignment funnels and the strong magnetic field created by the magnets. We verify that docking has successfully happened by manually lifting one of the Cobots. \textbf{Rolling}: Once docked, all the motors of the agents are manually connected to the on-board controller of one robot. This limitation will be overcome in future work, for example by establishing a wireless link between the Cobots or by developing a decentralized control strategy. In our experiment, shown in the second row of Figure \ref{fig:shapeshifter_movie}, a pilot controls the rolling motion via a remote control (RC), configured to send the desired angular rates $\boldsymbol{\omega}_\text{cmd}$. Preliminary experiments show that the vehicle can effectively roll on a sandy terrain, uphill and over small dunes. However, the cylindrical design severely limits the yawing (turning) capabilities of the robot.
\textbf{Un-docking:} The \acp{PEM} have been configured so that they can be remotely turned off. From the RC, two pilots simultaneously disable the \acp{PEM} while one of the Cobots takes off. The experiment is shown in the third row of Figure \ref{fig:shapeshifter_movie}.
\subsubsection{Energy analysis}
In this section, we aim to use our dynamic model to determine environmental conditions for which rolling is the more energy-efficient mobility primitive, as well as the conditions for which it is more efficient to fly. To make this distinction, we focus on developing a functional relationship between terrain primitives and the required energy of mobility. Since the proposed mission involves traversal over long distances of Titan's surface, the main objective of this analysis is to determine the maximum expected steady-state range of the Shapeshifter, both for rolling and flying.
The following results employ the analytical model and pure torque control strategy,
applying physical parameters for Titan: $g$ = 1.352 m/s$^2$ and $\rho$ = 5.4 kg/m$^3$. To get numerical results, we assume each Cobot has an 870~kJ battery (2200~mAh at 11~V).
Figure \ref{fig:range_v_velocity} shows the maximum achievable range for both flying and rolling along a surface of consolidated soil ($C_{rr} = 0.01$); the dashed line indicates the velocity that maximizes range for that configuration. At an optimal velocity of 0.14 m/s, two rolling Cobots are able to travel 267 km, while the maximum range for two flying Cobots is only 135 km, traveling at their optimal velocity of 1.7 m/s.
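The optimal velocities quoted above follow from maximizing the steady-state range $R(v)=E_\text{batt}\,v/P(v)$. A generic sketch of this maximization is given below; the total power model $P(v)$ is configuration-specific (rolling or flying) and is passed in as a callable, so the snippet makes no assumption about its exact form beyond the battery energy stated above.
\begin{verbatim}
import numpy as np

def max_range(power_of_v, E_batt=2 * 870e3,
              v_grid=np.linspace(0.01, 5.0, 500)):
    """Steady-state range R(v) = E_batt * v / P(v); returns the
    range-maximizing velocity [m/s] and the maximum range [m].
    power_of_v: total electrical power model [W] as a callable."""
    P = np.array([power_of_v(v) for v in v_grid])
    R = E_batt * v_grid / P
    i = int(np.argmax(R))
    return v_grid[i], R[i]
\end{verbatim}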
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/in_range_cr001.png}
\caption{Range vs. velocity for two Cobots on Titan. Maximum achievable range for flying and rolling along a surface of consolidated soil. Dashed lines indicate optimal velocity for each configuration.}
\label{fig:range_v_velocity}
\end{figure}
As expected, rolling is more efficient than flying over ideal surface conditions. To optimize energy usage while traversing a non-ideal region, we must develop a relationship between terrain characteristics and the power required for traversal. Figure \ref{fig:multi_dim} considers terrains that vary in surface traction from consolidated soil to loose sand, and in steepness from -0.5$^\circ$ to +2$^\circ$. This steepness range is chosen because it contains the transition line at which flying becomes more efficient than rolling. For each surface type and mobility method, we compute the maximum range assuming the agents travel at their corresponding optimal velocities. By considering the difference in achievable range for each configuration, we can see where rolling is favorable (above the red line) vs. flying (below the line).
These results demonstrate that neither rolling nor flying consistently outperforms the other; rather, each configuration optimizes energy efficiency for a different set of conditions.
Especially since the characteristics of Titan's surface are largely unknown, a shape-shifting platform is crucial to accommodate unexpected surface conditions.
Furthermore, this relationship between terrain and mobility can be used to build a traversability map, and ultimately plan energy-efficient trajectories that optimize the Shapeshifter's route as well as mobility configuration, to take full advantage of the multi-modal architecture.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figs/in_Rolling_v_flying_range.png} \caption{Advantage of rolling vs. flying, as it relates to surface slope and rolling resistance. The heat-map represents the difference in range (expressed in kilometers) between flying and rolling (for example, rolling on consolidated soil with $\approx -0.5^\circ$ of surface slope guarantees $\approx 200$~km more range than flying). Flying range is on the order of 130~km for all terrains; add the flying range to the results shown here to get the total rolling range.}
\label{fig:multi_dim}
\end{figure}
\subsection{Science objectives at Titan and payload}
\subsubsection{Science objectives} Liquid water and organics are essential for life but, despite being commonly found throughout the Solar System, locations where they are known to coexist are rare. Titan's cryovolcanic regions are high priority locations to search for contact between liquid water and complex organic material because those environments would be habitable for the period of liquid water persistence. The Sotra Patera region, represented in Figure \ref{fig:SotraPateraMissionArchitecture}, is the strongest candidate cryovolcanic feature on Titan \cite{lopes2013cryovolcanism}. Shapeshifters will explore the Sotra Patera region to confirm its cryovolcanic origin and determine the extent that liquid water lavas have interacted with organic surface materials. Shapeshifter's underwater capabilities will also allow it to explore under-liquid environments such as Ligeia Mare, represented in Figure \ref{fig:LigeiaMare}.
\subsubsection{Science payload} Each individual Cobot will carry an optical camera for the purposes of navigation and scientific imaging of Titan's morphologies. The Cobots' ability to image from the surface to high altitudes allows significant flexibility in image resolution and coverage. Due to size and energy limitations, the Cobots may carry only small and low-power instrumentation, such as the equipment typically considered for small spacecraft/rover hybrids (e.g. \cite{pavone2013spacecraft}), which includes heat flow probes, accelerometers, magnetometers, seismometers, microphones and pH sensors.
Some or all Cobots are additionally equipped with a sample-collection unit, to collect samples of rocks or liquids to be analyzed in the Home-base. Simple technologies compatible with the Cobot's design include tongs, scoops, and rakes, as extensively used during the Apollo missions, attached to the frame and actuated by the Cobot.
The Home-base will host most of the scientific payload.
These instruments include a mass spectrometer and X-ray and Raman spectrometers, for analyzing the samples of Titan's complex organic materials collected by the Shapeshifter. Using the Shapeshifter for sample collection and the Home-base for in-situ analysis is an optimal solution because the Shapeshifter can access any terrain, while the in-situ instrumentation is not subject to the rolling and accelerations required for acquiring difficult samples. This combination allows for unconstrained sample acquisition and more sensitive sample analysis.
\subsection{Exploration of Sotra Patera and other unique features of Titan}
Our preliminary landing location is near Sotra Patera, the most likely site of cryovolcanism on Titan, where our portable Home-base, acting as an Earth relay and science laboratory, and a Shapeshifter, as a collection of 12 or more Cobots, are deployed. Once deployed, Shapeshifter will begin to morph into the optimal configuration based on the observed properties of the terrain. It will start by building high-resolution terrain maps of the region near its base. Then, it will continue shapeshifting to traverse Titan's surface, gathering and relaying scientific data to the Home-base. Examples of the science capabilities of the platform are:
\begin{itemize}
\item \textbf{Low/high resolution local mapping}: Shapeshifter builds maps of the region near their base;
\item \textbf{Stratigraphy, fault survey and surface conductivity survey}: Shapeshifter explores cliffs and faults to analyze their potential sedimentary nature and measure the conductivity of the surface;
\item \textbf{Deep excursion}: Shapeshifter explores to its maximum range, making the most efficient use of available energy by switching between the mapper and Rollocopter modes;
\item \textbf{Cave exploration}: Shapeshifter explores detected caves and cryolava tubes in the Rollocopter mode;
\item \textbf{Mare diving for bathymetry and composition survey}: Shapeshifter morphs into a swimmer to dive under the surface of Titan's mare, collecting samples and creating a 3D map of the surrounding environment;
\item \textbf{Active/passive seismometry}: Basic seismography studies can be performed using the on-board accelerometers used for GN\&C, while the Home-base can host a seismometer.
\end{itemize}
After observing and analyzing a science site, Shapeshifter rebases, i.e., it morphs into a transporter and moves the Home-base to a new mission site. An example of our mission is represented in Figure \ref{fig:SotraPateraMissionArchitecture}.
\subsection{Mission duration}
We envision a mission with a total duration of two Titan days, equivalent to approximately 31 Earth days.
\subsection{Communication}
Once deployed, the Cobots will navigate the terrain by rolling, flying, and swimming, all while establishing a mesh communication network. Telemetry, in-situ measurements, and images are passed on to the Home-base, which acts as a relay to get data back to Earth.
\subsection{Platform}
The hardware platform of the Shapeshifter consists of two components, the Cobots and the Home-base, as represented in Figure \ref{fig:PlatformOverview}.
\subsubsection{Cobot}
\begin{figure}[h]
\centering
\includegraphics[width=0.60\columnwidth]{figs/in_single_cobot_new_labels.png}
\caption{Artist's representation of a Cobot. }
\label{fig:Cobot}
\end{figure}
Shapeshifter's Cobot units are similar to existing, off-the-shelf quadcopters, as represented in Figure \ref{fig:Cobot}. Each Cobot is equipped with four rotating propellers which allow the Cobot to fly. The frame enclosing the Cobot's actuators is equipped with controllable magnets (for example, programmable polymagnets \cite{sullivan2005magnetic}) that allow Cobots to self-assemble and perform shapeshifting. Cobot power is provided by an on-board battery that is recharged at the portable Home-base. Each Cobot is equipped with a camera and Inertial Measurement Unit (IMU), to perform Visual-Inertial based navigation and mapping. Cobots may additionally be equipped with a scoop or tools to collect samples to be analyzed by the Home-base. Their design is complemented by a radio, which allows them to communicate with the Home-base and with other Cobots, via a mesh network.
\subsubsection{Home-base}
The design of the Home-base is inspired by the Huygens lander used during the Cassini mission to Titan \cite{liechty2006cassini}.
The main task of the Home-base is to host:
\begin{inparaenum}[(a)]
\item the instrumentation necessary to perform science measurements and analyze samples collected by the Cobots,
\item the baseline radioisotope-based power system (RPS), such as an MMRTG \cite{ritz2004multi}, necessary to provide power to the Home-base itself and recharge the batteries of the Cobots, and
\item the equipment necessary to establish a communication link with Earth.
\end{inparaenum}
The Home-base is not equipped with any means of locomotion, but can be collaboratively transported by a swarm of Cobots. An illustration of the Home-base, based on the Huygens lander design, is depicted in Figure \ref{fig:PlatformOverview}.
\subsection{Locomotion modes}
In this section, we present the main locomotion modes that the Shapeshifter can adopt by combining multiple Cobots together. An artist's representation of the Shapeshifter in its three main locomotion modes can be found in Figure \ref{pic:sys_des:ref_frame}.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figs/in_LocomotionModes.pdf}
\caption{Artist's concept of the three main locomotion modes of the Shapeshifter (from left to right): flying, rolling on the surface and swimming. Image credits: Ron Miller, Marilynn Flynn, Jose Mendez.}
\label{pic:sys_des:ref_frame}
\end{figure}
\subsubsection{Flying and flying-array}
Each Cobot can autonomously fly in Titan's atmosphere, for exploration, mapping and sample-collection purposes. A group of Cobots can additionally morph into a flight array, able to lift and carry heavy objects, such as the portable Home-base, from one mission site to another, or even over lakes and cliffs. %
\subsubsection{Rollocopter}
Shapeshifter is able to morph into a spherical robot, the Rollocopter \cite{rollocopter2019Ieee}, that is able to roll on the surface and fly. In our artist's representation shown in Figure \ref{pic:sys_des:ref_frame}, we assume that 12 pentagon-shaped Cobots will morph into a spherical dodecahedron. While rolling, the robot is actuated by the same propellers used for flying. The implications, in terms of control and efficiency, of rolling using the force produced by the propellers are studied in our related work \cite{rollocopter2019Ieee}.
\subsubsection{Torpedo}
Each Cobot can autonomously swim underwater or float on the surface, thanks to neutral buoyancy. Multiple Cobots can mechanically dock together in a torpedo-like structure for increased underwater autonomy or propulsion, in case of strong underwater currents. In this paper we mainly focus on the rolling and flying capabilities of the Shapeshifter, and a more detailed description of the \textit{Torpedo} mode is left as future work.
\section{Introduction}
Research on the Fe-based superconductors has been intense since the initial discovery of superconductivity in LaO$_{1-x}$F$_{x}$FeAs (labelled 1111 according to the stoichiometry of the parent compound) with $T_{c}=26$~K \cite{Kamihara2008}. Among the studied systems, the phase diagrams found for Ba(Fe$_{1-x}$TM$_x)_2$As$_2$ (labelled 122, with TM = 3$d$ transition metals such as Co and Ni) have been quite intriguing \cite{Sefat2008,1367-2630-11-2-025008}, as the TM is isovalent to Fe, whereas in the 1111 \cite{Kamihara2008} and Ba$_{1-x}$(Na/K)$_{x}$Fe$_{2}$As$_{2}$ \cite{Rotter2008,doi:10.1021/cm100956k} cases, superconductivity is achieved by heterovalent doping. Initially, the tuning of superconductivity in the 122 system by substitution of Co or Ni had been understood by assuming that the TM ions simply contributed their extra electrons to the conduction bands \cite{canfield:060501,PhysRevB.82.024519,PhysRevB.84.020509}, with a resulting rigid-band shift of the Fermi level \cite{PhysRevB.83.094522,PhysRevB.83.144512,JPSJ.80.123701,Kemper2010}. However, theoretical analyses have indicated that the extra electrons of the TM dopants do not entirely delocalize and that at least part of the doping effect is associated with impurity scattering \cite{PhysRevLett.105.157004,2011arXiv1112.4858B,Vavilov2011}. These proposals have been supported by spectroscopic studies \cite{PhysRevLett.107.267402,PhysRevLett.109.077001,PhysRevB.86.104503,PhysRevLett.110.107007,0953-8984-24-21-215501}, as well as by studies of magnetic correlations \cite{PhysRevLett.109.167003,PhysRevLett.113.117001}. One also finds that the dependence of the superconducting dome as a function of dopant concentration does not follow the scaling behavior predicted by the rigid-band model \cite{Sefat2008,1367-2630-11-2-025008,canfield:060501,PhysRevB.84.054540,PhysRevB.82.024519}.
In the system Fe$_{1+y}$Se, superconductivity occurs in the nearly-stoichiometric compound without a need to overcome antiferromagnetic order \cite{mkwreview1,mcqueen:014522,Hu2011}. It is possible to enhance $T_c$ by partial substitution of Te for Se \cite{kata10,liupi0topp}; however, substitution of Co, Ni, or Cu for Fe in Fe$_{1+y}$Te$_{1-x}$Se$_{x}$, inevitably leads to a reduction in $T_c$ \cite{danielfeseco,williams-2009-21,PhysRevB.82.104502,mkwreview1,2010arXiv1010.4217G,fetesenico1}. For a given TM-dopant concentration, the depression of $T_c$ grows as one moves from Co to Ni to Cu. In previous work, we have confirmed that Cu reduces $T_c$ rapidly; furthermore, with 10\% Cu substitution in Fe$_{0.98-z}$Cu$_z$Te$_{0.5}$Se$_{0.5}$, the resistivity has the temperature dependence of an insulator \cite{PhysRevB.88.144509}. We also observed that low-energy ($\leq 12$~meV) antiferromagnetic spectral weight is significantly enhanced with Cu doping, but without inducing order. The impact of Cu on resistivity suggests strong scattering by the dopants, resulting in localization of conduction electrons, and it is not surprising that this would lead to the destruction of superconductivity. On the other hand, the Cu does not depress the magnetic correlations, which are believed to be important to superconductivity. In fact, the effect on the magnetism says something significant about the interactions responsible for the antiferromagnetic correlations. Now, Cu has the most extreme impact on the electronic transport, but is it qualitatively different from that induced by Ni or Co dopants?
In this paper, we attempt to answer this question by performing systematic resistivity and inelastic neutron scattering measurements on Fe$_{0.98-z}$Ni$_{z}$Te$_{0.5}$Se$_{0.5}$ single-crystal samples, with $z=0.02$, 0.04, 0.10 (labelled as Ni02, Ni04, and Ni10 respectively). The results are discussed with reference to those of the Cu-doped case \cite{PhysRevB.88.144509}. With increasing Ni content, $T_c$ is gradually suppressed. With 10\% Ni doping, the resistivity increases slowly with decreasing temperature, exhibiting a weakly insulating behavior. The low-energy magnetic correlations are modified somewhat by the Ni substitution, but the magnetic spectral weight in the normal state changes relatively little. Compared with the results from the Cu-substituted samples, these impacts of Ni doping are reduced in magnitude but qualitatively similar. Our results are compatible with the theoretical arguments that disorder and scattering are significant consequences of TM substitution \cite{PhysRevLett.105.157004,2011arXiv1112.4858B}; in addition, the interactions responsible for magnetic correlations must be short range.
\section{Experimental}
Single-crystal samples of Fe$_{0.98-z}$Ni$_{z}$Te$_{0.5}$Se$_{0.5}$ with nominal concentrations of $z$=0.02, 0.04, and 0.10 (labelled as Ni02, Ni04, and Ni10 respectively) were grown by the horizontal Bridgman method \cite{interplaywen}. To start, the raw materials (99.999\% Te, 99.999\% Se, 99.99\% Fe, and 99.99\% Ni) were weighed and mixed with the desired molar ratio, and then doubly sealed into evacuated high-purity (99.995\%) quartz tubes. The materials were put into the furnace horizontally and heated in the following sequence: ramped to 660~$^{\circ}$C in 3~h; held for 1~h; ramped to 900~$^{\circ}$C in 2~h; held for 1~h; ramped to 1000~$^{\circ}$C in 1~h; held for 12~h; cooled to 300~$^{\circ}$C with a cooling rate of $-0.5$ or $-1^{\circ}$C~h$^{-1}$; then the furnace was shut down and cooled to room temperature. There was a small temperature gradient in the furnace from one end to the other, so that the melted liquid crystallized unidirectionally. To minimize the effects of Fe interstitials, we used a nominal Fe composition of 0.98 instead of 1 for all samples. From X-ray and neutron powder diffraction, and inductively coupled plasma (ICP) measurements on the sample compositions, the maximum deviation of the real composition from the nominal one was determined to be less than 2\% \cite{xudoping11,2011arXiv1108.5968Z}. The $a$-$b$ plane resistivity was measured using a four-point configuration with four contacts made on the $a$-$b$ plane, in a commercial cryostat, with an applied current of 5~mA. The samples used in the resistivity measurements were cut from the same respective batches used in the neutron scattering measurements. The typical dimension was $7\times2\times0.4$~mm$^3$.
Neutron scattering experiments on Ni02 and Ni10 were carried out on the BT-7 triple-axis spectrometer at the NIST Center for Neutron Research, using a beam collimating configuration of Open-$80'$-Sample-$80'$-$120'$. Two pyrolytic graphite (PG) filters were placed after the sample to reduce contamination from higher-order neutrons. The final energy, $E_f$, was fixed at 14.7~meV. The Ni04 and Ni10 (the same sample measured on BT-7) samples were measured on the HB1 triple-axis spectrometer at the High Flux Isotope Reactor, Oak Ridge National Laboratory. The beam collimations were $48'$-$40'$-Sample-$40'$-$240'$ with 2 PG filters after the sample. The $E_f$ was fixed at 13.5~meV. Each of the crystals was a semicylinder, with two flat cleavage surfaces and a mass larger than 10~g. The crystals were mounted in aluminum sample holders and loaded into a closed-cycle refrigerator (CCR). The experiments were performed in the $(HK0)$ plane defined by the [100] and [010] wave vectors. The wave vectors, ${\bf Q}$, will be expressed in terms of reciprocal lattice units (rlu) of $(a^*, b^*,c^*)=(2\pi/a,2\pi/b,2\pi/c)$, where the room-temperature lattice constants are $a=b\approx3.8$~\AA, and $c=6.1$~\AA, corresponding to a unit cell with two Fe atoms. The measured intensity $I_{\rm meas}$ was converted to the dynamical spin correlation function $S({\bf Q},E)$ in absolute units of $\mu_{\rm B}^2$eV$^{-1}$/Fe, using the incoherent elastic scattering intensity ${I_{\rm inc}}$, integrated and averaged over measurements at (0.7, 0.3, 0) and (0.7, 0.7, 0), via the formula \cite{neutron1,gynormal}
\begin{equation*}
S({\bf Q},E)=\frac{I_{\rm meas}\mu^2_{\rm B}}{4\pi |f({\bf Q})|^2 p^2}\cdot \frac{\sum_jn_j\sigma_{{\rm inc}, j}}{I_{\rm inc}},
\end{equation*}
where $\mu_{\rm B}$ is the Bohr magneton, $f(\bf Q)$ is the magnetic form factor of Fe$^{2+}$, $p = 0.27\times10^{-12}$~cm, $n_j$ and $\sigma_{{\rm inc}, j}$ are the molar ratio and the incoherent cross section for the element $j$ in the compound, respectively.
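For concreteness, the ratio structure of this normalization can be sketched in a few lines of Python. The handling of units (cross sections and the magnetic scattering length $p$ in consistent units, and the resulting $\mu_{\rm B}^2$eV$^{-1}$/Fe scale) is deliberately left to the caller; the function below is an illustration, not the actual reduction code used for the data.
\begin{verbatim}
import numpy as np

def S_abs(I_meas, f_Q, I_inc, incoherent_terms, p=0.27e-12):
    """Normalization of the measured intensity to S(Q, E),
    following the formula in the text (result in mu_B^2 units).

    f_Q              : Fe2+ magnetic form factor evaluated at Q
    I_inc            : integrated incoherent elastic intensity
    incoherent_terms : iterable of (n_j, sigma_inc_j) pairs,
                       one per element in the compound
    """
    sigma_sum = sum(n * s for n, s in incoherent_terms)
    return (I_meas / (4.0 * np.pi * abs(f_Q)**2 * p**2)
            * sigma_sum / I_inc)
\end{verbatim}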
\section{Results}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.9\linewidth]
{rt.pdf}
\caption{\label{fig:rt1}(Color online) $a$-$b$ plane resistivity ($\rho_{ab}$) for Ni0, Ni02, Ni04, Ni10, and Cu10 in the semi-log scale. Dashed lines are results of fits to the data with the three-dimensional Mott variable range hopping formula, as described in the text. Inset shows $\rho_{ab}$ vs temperature in the low-temperature range for the three superconducting samples, Ni0, Ni02 and Ni04.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[height=0.9\linewidth,angle=90]
{tczrho.pdf}
\caption{\label{fig:tczrho}(Color online) $T_c$ as a function of TM concentration (a), and resistivity at the temperature where $\rho_{ab}$ starts to drop (b) for Co, Ni and Cu. Open and closed circles are data extracted from the work by Nabeshima {\it et al.},~\cite{1347-4065-51-1R-010102} and Shipra {\it et al.}~\cite{fetesenico1} respectively. Lines in (a) and shade in (b) are guides to the eyes. Error represents one standard deviation $\sigma$ throughout the whole paper.}
\end{center}
\end{figure}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.8\linewidth]
{mesh.pdf}
\caption{\label{fig:meshni7} (Color online) Contour plots of the magnetic scattering at a constant energy of 6~meV at 100~K for Ni0, Ni04, Ni10, and Cu10. The data are obtained by performing a series of linear scans along the [110] direction (illustrated by the arrow) through the positions indicated by the dots in (a). The bright spot close to (0.3, 1) is a spurion.}
\end{center}
\end{figure*}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.9\linewidth]
{trans6mev.pdf}
\caption{\label{fig:trans} (Color online) Linear scans through (0.5,\,0.5) along the [1\={1}0] direction with an energy transfer of 6~meV at 100~K for Ni0, Ni04, Ni10, and Cu10. Solid lines through the data are the results of fits with double Gaussians.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.9\linewidth]
{se.pdf}
\caption{\label{fig:se} (Color online) Q-integrated intensities of the constant-energy scans shown in Fig.~\ref{fig:trans}, but at energies ranging from 2 to 12~meV with a 2-meV interval at 100~K for Ni0, Ni04, Ni10, and Cu10. Solid lines are guides to the eyes.}
\end{center}
\end{figure}
Resistivity data with the current running in the \emph{a-b} plane, $\rho_{ab}$, for Ni02, Ni04, and Ni10 are shown in Fig.~\ref{fig:rt1}. For comparison, the resistivity for a Ni-free sample, Ni0, and a 10\% Cu-substituted sample, Fe$_{0.88}$Cu$_{0.1}$Te$_{0.5}$Se$_{0.5}$ (Cu10), are also plotted. With Ni doping, superconductivity is gradually suppressed. For Ni02, the resistivity starts to drop at $\sim$~12~K, and zero resistivity is reached at $\sim$~9.8~K, as shown in the inset of Fig.~\ref{fig:rt1}; these two temperatures for the Ni04 sample are 10.5~K and 8.5~K respectively. The Ni10 sample is not superconducting down to the lowest temperature measured (2~K). The absolute values of the resistivity in the normal state are also higher in the Ni-substituted samples, indicating a suppression of the electrical conductivity.
Compared with Cu, the impact of Ni substitution on the normal-state resistivity is much more benign \cite{PhysRevB.88.144509}. Ni increases the normal-state resistivity with respect to the Ni-free sample by roughly a factor of 4 to 5 at 20~K, whereas the increase is almost 4 orders of magnitude for the Cu10 sample. The latter resistivity can be fitted rather well with a three-dimensional Mott variable-range-hopping formula \cite{Mott1969}, $\rho_{ab}=\rho_0\exp[(T_0/T)^{1/(1+d)}]$, as indicated in Fig.~\ref{fig:rt1} by the dashed line following the Cu10 data; here, $\rho_0$ and $T_0$ are constants, and $d=3$ is the dimensionality. This indicates that the Cu10 sample behaves like a three-dimensional Mott insulator. A similar fit to the Ni10 data works over a substantial temperature range, but overshoots at low temperature, perhaps due to the presence of residual superconducting correlations.
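As an illustration of this fitting step, assuming the resistivity data are available as plain arrays and that \texttt{scipy} is available, the variable-range-hopping form can be fitted as sketched below; the initial guesses are placeholders, not the fitted values for our samples.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def mott_vrh(T, rho0, T0, d=3):
    """Mott variable-range hopping:
    rho = rho0 * exp[(T0 / T)**(1 / (1 + d))];
    d = 3 gives the three-dimensional T^(-1/4) law."""
    return rho0 * np.exp((T0 / T) ** (1.0 / (1 + d)))

# Illustrative usage with (T_data, rho_data) arrays:
# popt, pcov = curve_fit(lambda T, r0, T0: mott_vrh(T, r0, T0),
#                        T_data, rho_data, p0=(1e-3, 1e3))
\end{verbatim}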
In Fig.~\ref{fig:tczrho}(a) we plot $T_c$ (onset) as a function of the TM substitution $z$ for our Ni and Cu-doped samples and compare with results for Co-doping (in samples with similar Te concentration) from Refs.~\onlinecite{1347-4065-51-1R-010102,fetesenico1}. One can see that there is a monotonic increase in the rate of $T_c$ suppression in going from Co to Ni to Cu. For Co, the sample is superconducting with $z$ up to 10\%, where $T_c$ is still close to 10~K \cite{fetesenico1}. For Ni, $T_c$ drops to zero somewhere between 4\% and 10\% doping, while for Cu, the cutoff $z$ for superconductivity is $\sim2$\%. For Co, Ni and Cu, the $T_c$ reduction rates are $-0.58$, $-1.24$, and $-3.68$~K per 1\% substitution, respectively. In Fig.~\ref{fig:tczrho}(b), it is shown that $T_c$ is anticorrelated with the normal-state resistivity. For Co doping, where the resistivity is quite close to that of the undoped sample, $T_c$ is higher; Ni lies in the intermediate range between Co and Cu. The rate of $T_c$ reduction tends to follow the normal-state resistivity.
Next, we turn to the inelastic neutron scattering measurements of the magnetic excitations. We have performed a series of scans around the two characteristic wave vectors $(0.5,0)$ and $(0.5,0.5)$ at excitation energies of 2 to 12~meV, with a 2-meV interval. As in previous studies for Se contents close to 50\% \cite{PhysRevLett.109.227002,xudoping11,lumsden-2009,liupi0topp,PhysRevB.87.224410,1367-2630-14-7-073025,PhysRevB.88.144509}, there is little spectral weight around $(0.5,0)$; hence, we will focus on the results near $(0.5,0.5)$. While we have carried out measurements over a temperature range from 5 to 300~K, the trends in magnetic spectral weight with doping are adequately captured by considering just the data obtained at 100~K (well above the maximal $T_c$ of this system to avoid any effects from the superconducting correlations). Furthermore, the results for Ni02 appear to be very similar to those of the Ni04 sample. Taking all these points into account, we plot in Fig.~\ref{fig:meshni7} representative results as contour maps around $(0.5,0.5)$ at a constant energy of 6~meV for each of the Ni0, Ni04, and Ni10 samples. These maps are obtained by plotting a series of linear scans with the trajectories shown in Fig.~\ref{fig:meshni7}(a). For comparison, the data for Cu10 are also shown. At this temperature of 100~K, the magnetic scattering peaks at wave vectors displaced from $(0.5,0.5)$ along the $[1\bar{1}0]$ direction, as in previous work \cite{PhysRevLett.109.227002,PhysRevB.88.144509,PhysRevB.89.174517}.
To provide a better comparison of the strength of the magnetic excitations, in Fig.~\ref{fig:trans}, we plot linear scans through $(0.5,0.5)$ along the $[1\bar{1}0]$ direction with an energy transfer of 6~meV at 100~K for the Ni0, Ni04, Ni10 and Cu10 samples. While there is some variation in the wave vectors of the peaks, the widths stay relatively constant. To compare the overall spectral weight at low energies, we have {\bf Q}-integrated the intensities for scans from 2 to 12 meV; the results are plotted as a function of the energy transfer in Fig.~\ref{fig:se}. The Ni doping has relatively little impact on the low-energy magnetic spectral weight compared with the Cu doping, as represented by the Cu10 sample.
\section{Discussion}
Our measurements of resistivity in Ni and Cu-substituted samples of Fe$_{0.98}$Te$_{0.5}$Se$_{0.5}$ indicate that the TM dopants cause an increase in scattering whose magnitude grows rapidly as the atomic number of the TM dopant deviates from that of Fe. Considering the impact on $T_c$, it appears that even Co has a negative impact. With increasing atomic number, the $3d$ states of the TM dopants shift to higher binding energies relative to those of Fe. Density functional calculations of FeSe with various TM dopants have indicated that much of the charge of the extra $3d$ electrons remains close to the TM ions, and it has been proposed that the TM dopants may have their largest impact as scattering centers \cite{PhysRevLett.105.157004}. Our results are consistent with this proposal. It is important to note, however, that the TM dopants are not the only source of disorder in our samples. A diffraction study has found a difference in Fe-Te and Fe-Se bond lengths of 0.15~\AA\ \cite{Tegel2010}, while evidence for short-range segregation of Se and Te has been provided by scanning transmission electron microscopy \cite{Hu2011} and scanning tunneling microscopy \cite{He2011}. If disorder alone were the key factor, it is not clear why relatively small concentrations of Ni or Cu would have such a large effect on the transport properties.
In the case of BaFe$_2$As$_2$, TM substitution depresses antiferromagnetic order and, above a threshold concentration, induces superconductivity. It has been proposed that the disorder effect of TM dopants could be sufficient to explain this effect \cite{Vavilov2011}. In this argument, spin-density-wave order results from scattering of conduction electrons between Fermi surface pockets, and this mechanism is disrupted by impurity scattering to a greater extent than is the electron pairing of the superconducting state. Our results appear to be inconsistent with this proposal. We find that the low-energy magnetic spectral weight is not reduced by Ni doping, and actually increases with Cu doping. Furthermore, the magnetic correlation length is always quite short. These observations indicate that the important magnetic interactions must be short ranged and are insensitive to the coherence of electronic quasiparticles.
For the sake of completeness, we now want to discuss the possibility of static magnetic order. We know that in the optimally-doped samples, there is no static order, long- or short-ranged, and the spectral weight is concentrated around $(0.5,0.5)$ \cite{qiu:067008,lumsden-2009}. However, with sufficient excess Fe, which suppresses the superconductivity, short-range static order with the wave vector $(0.5,0,0.5)$, can be induced \cite{xudoping11}. These results indicate that extra Fe stabilizes the bicollinear antiferromagnetic order that is incompatible with superconductivity. Density-functional calculations have confirmed that with excess Fe, the spin configuration changes from collinear with an in-plane ordering wave vector of $(0.5,0.5)$ to a bicollinear structure with in-plane ordering wave vector $(0.5,0)$ \cite{han:067001}. To see whether similar static magnetic order can be induced with Ni or Cu substitution, we have carried out additional measurements in the $(H0L)$ plane. Those results turn out to be negative. Hence, it seems likely that the Ni dopants substitute for Fe in the lattice.
While the Ni and Cu dopants do not reduce the low-energy magnetic spectral weight, at concentrations where no superconductivity is observed they inhibit the magnetic correlations from becoming commensurate at low temperature, and this effect is correlated with the suppression of superconductivity \cite{PhysRevLett.109.227002}. This result has been interpreted as a consequence of orbital ordering being suppressed by disorder, thus worsening the shape matching of the Fermi-surface sections connected by the nesting vector~\cite{PhysRevLett.109.227002}. Our observations that the magnetic excitations in the Ni10 sample peak further away from the commensurate position [Figs.~\ref{fig:meshni7}(c) and \ref{fig:trans}] are consistent with these results. Thus, the scattering effects of the Ni and Cu dopants might impact orbital as well as charge correlations.
\section{SUMMARY}
In conclusion, our systematic resistivity and inelastic neutron scattering measurements on a series of Fe$_{0.98-z}$Ni$_z$Te$_{0.5}$Se$_{0.5}$ samples have shown that Ni suppresses both the superconductivity and the conductivity, with relatively little impact on the magnetic correlations. These effects are weaker than those of the Cu-doped case. We attribute this to the weaker impurity potentials of the Ni dopants. Considering the reports on the substitution-dependent effects in the 122 \cite{PhysRevLett.110.107007,2011arXiv1112.4858B,PhysRevLett.109.167003} and Li(Na)FeAs (111) \cite{arXiv:1409.5612,PhysRevB.88.245112} systems, it appears that the trend of substitution effects growing stronger as the impurity potential of the substituent increases is universal among the Fe-based superconductors.
\begin{acknowledgments}
We are grateful for stimulating discussions with Weiguo Yin, Xiangang Wan, Alex Frano and Ming Yi. Work at Nanjing University was supported by National Natural Science Foundation of China under contract No.~11374143, Ministry of Education under contract No.~NCET-13-0282, and Fundamental Research Funds for the Central Universities. Work at Lawrence Berkeley National Laboratory and Brookhaven National Laboratory was supported by the Office of Basic Energy Sciences, Division of Materials Science and Engineering, U.S. Department of Energy, under Contract No.~DE-AC02-05CH11231 and DE-AC02-98CH10886, respectively. Research at Oak Ridge National Laboratory's High Flux Isotope Reactor was sponsored by the Division of Scientific User Facilities of the Office of Basic Energy Sciences.
\end{acknowledgments}
\section{Introduction}
It is well known that the geometrical spin clusters (i.e., the
clusters composed of neighboring spins of the same sign) undergo a
percolation transition at the thermodynamic critical temperature
$T_c$ in the two-dimensional Ising model \cite{CK80}. This type of
behavior is believed to be generally valid for a variety of
two-dimensional critical models that undergo a continuous phase
transition such as the $q$-state Potts model ($q\leq 4$) \cite{For},
which is a $q$-state generalization of the Ising model ($q=2$)
\cite{Pot}. The formation of spanning spin clusters and their
characteristic percolation exponents can also be used to
characterize the universality class of the corresponding thermal
phase transition. Furthermore, at the critical point, the spanning
cluster is a scale-invariant fractal object whose fractal dimensions
uniquely specify the universality class of the associated continuous
phase transition \cite{GC}.
The critical behavior of a great number of statistical models in two
spatial dimensions has been investigated by conformal field
theory \cite{AMB}. Conformal invariance refers to invariance under
coordinate transformations that preserve the angles between crossing
lines in the $z$-plane. From this point of view, the cluster
boundaries in two-dimensional critical systems are regarded as
conformally invariant curves, and different characteristics such as
their fractal dimensions can be obtained. Indeed, fractal geometry is
a useful mathematical tool for the characterization of a great many
complex configurations.
Self-similarity is the most important characteristic of such fractal
objects. It is well known that the most widely studied statistical
models in condensed matter physics---such as the Ising model and its
$q$-state generalization, the Potts model, at criticality---as well
as the many critical geometrical phenomena exhibited by the various
percolation models, involve fractal lines
\cite{GC,Ca1,Co,Du,NC,N}.
More recently, the spin cluster boundaries (interfaces) in the
two-dimensional critical models have been investigated rigorously
using the method of Stochastic Loewner Evolution (SLE), in which the
motion of a random walker along the cluster boundary in the
upper-half complex plane in the continuum limit is specified by the
Loewner dynamics \be \frac{\partial g_t(z)}{\partial
t}=\frac{2}{g_t(z)-\zeta_t}\,,\label{eq1.0}\ee which contains a
Brownian term $\zeta_t=\sqrt{\kappa}B_t$ whose amplitude is given by
the SLE parameter $\kappa$, also known as the diffusivity
\cite{Sch}. The function $g_t$ maps a parametric curve $\gamma_t$ in
the upper-half complex plane onto the real axis. Thus, given a real
function $\zeta_t$ and using the initial condition $g_0 = z$, the
Loewner differential equation (\ref{eq1.0}) determines the corresponding
trace $\gamma_t$ in the upper-half $z$-plane. The larger the diffusivity $\kappa$, the greater the
deviation from a straight line. Indeed, the nature of the SLE traces
change with the diffusivity: for the range of values
$0\leq\kappa<4$---a range that includes $\kappa =3$ characterizing
the boundaries of the geometrical clusters (of like sign) in the
critical two-dimensional Ising model---the SLE traces are
nonintersecting simple curves, while for $4\leq\kappa\leq8$ the
curves possess double points with possible self-touching (but no
crossing), and for $\kappa
> 8$ they become space-filling \cite{Sch,BB,Ca2}. Thus, the critical
fractal dimension of the interfaces, $D_{I}$, introduced by this
theory is model dependent. The relation between $D_{I}$ and
$\kappa$ is given by $D_{I}= 1 + \frac{\kappa}{8}$ \cite{BV}. The
exact values are $\kappa = 3$ and $D_{I}=\frac{11}{8}$ for the geometric
spin cluster boundaries of the two-dimensional regular Ising model,
as obtained analytically through various methods \cite{SS2}. Like
the thermodynamic critical exponents in statistical mechanics,
$\kappa$ can divide different models into universality classes
\cite{Ca2}. The main difference between $\kappa$ and the thermal
critical exponents lies in the method of definition---one being
thermodynamic and the other geometrical in nature. Normally, the
characterization of a universality class requires a minimum of
two thermodynamic critical exponents. We note that in the case of
two-dimensional critical systems, a single parameter, $\kappa$,
appears to be sufficient to specify the universality class. This may
be attributed to the fact that of the two specifications of the
model that significantly influence its critical behavior, i.e.\
the space dimension $d$ and the order parameter dimension $n$, one is
held fixed at $d=2$. Hence, the SLE diffusivity $\kappa$ alone can
be used to specify the universality class of the two-dimensional
critical systems.
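To make the Loewner dynamics of equation (\ref{eq1.0}) concrete, the following minimal Python sketch (our own illustration, not part of the studies cited above) generates an approximate SLE$_\kappa$ trace by the standard composition of inverse slit maps, each step solving the Loewner equation exactly for a piecewise-constant Brownian driving function; with $\kappa=3$ the resulting curves belong to the Ising universality class, with expected fractal dimension $D_I=1+\kappa/8=11/8$.
\begin{verbatim}
import numpy as np

def sqrt_h(z):
    # square root with branch cut on [0, +inf), taking the root whose
    # argument lies in [0, pi), i.e. in the closed upper half plane
    theta = np.angle(z) % (2.0 * np.pi)
    return np.sqrt(np.abs(z)) * np.exp(0.5j * theta)

def sle_trace(kappa, n_steps=2000, dt=1e-4, seed=0):
    # Driving function zeta_t = sqrt(kappa)*B_t sampled every dt; each
    # step solves the Loewner equation exactly for a constant driving
    # value, and the trace point is recovered as
    # gamma(t_n) = h_1(h_2(...h_n(tip))), where
    # h_k(w) = xi_k + sqrt((w - xi_k)^2 - 4*dt) inverts one slit map.
    rng = np.random.default_rng(seed)
    xi = np.concatenate(([0.0],
        np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(n_steps))))
    trace = np.empty(n_steps, dtype=complex)
    for n in range(1, n_steps + 1):
        w = xi[n] + 2j * np.sqrt(dt)     # tip of the newest slit
        for k in range(n - 1, 0, -1):
            w = xi[k] + sqrt_h((w - xi[k])**2 - 4.0 * dt)
        trace[n - 1] = w
    return trace

gamma = sle_trace(kappa=3.0)   # Ising-class interfaces, D_I = 11/8
\end{verbatim}
The cost of this naive scheme is $O(n^2)$ in the number of time steps, which is sufficient for illustration; more efficient schemes exist for long traces.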
In the following we focus on the thermalized bond Ising model (TBIM)
in two dimensions and investigate the behavior of its geometrical
spin clusters (i.e., spin clusters of like sign) and their external
boundaries (interfaces) at criticality. A Wolff single-cluster
Monte Carlo algorithm is used to generate configurations at and near
criticality on square lattices, and a tie-breaking rule is used
to identify non-intersecting geometrical cluster boundaries along
the edges of the dual lattice. The rest of this paper is organized
as follows. In section 2, the thermalized-bond Ising model is
briefly reviewed, focussing on its thermodynamic critical behavior.
The method of simulation and the finite size scaling
procedures---employed to extrapolate the results obtained for finite
lattices to the thermodynamic limit---are explained in section 3. The
results are presented and discussed in section 4, and the paper is
concluded with a summary in section 5.
\section{The model system}
The thermalized bond Ising model (TBIM) is a bond-diluted Ising
model with a temperature dependent bond concentration, in which
every covalent bond linking a nearest neighbor pair of atoms is
allowed thermally induced electronic transitions between bonding and
anti-bonding electronic states \cite{DSB}. Hence, it can be regarded
as containing {\em annealed} bond defects with a temperature dependent
concentration. Each bond at every instant is characterized by a
coupling constant $J_{ij}= 0, J_0$, such that zero corresponds to a
broken bond (anti-bonding electronic state), while $J_0$ means an
attractive coupling between the two atoms (bonding electronic
state), as illustrated schematically in figure \ref{figure1}.
Denoting the thermally averaged bond concentration by $p_b$,
$(1-p_b)$ must therefore represent the concentration of the broken
bonds due to thermal excitations. To keep the analysis simple, the
covalent bonds are treated as independent two-level systems with
energy gap $J_0$, as sketched in figure \ref{figure1} \cite{DSB}.
The ratio of intact to broken bonds in equilibrium is given
by the ratio of the corresponding Boltzmann factors
$p_b/(1-p_b)=\exp(\beta J_0)$, or \be p_b = 1/(1+\rme^{-\beta
J_0})\label{eq1.1} \ee where $\beta=1/k_BT$ is the reciprocal
temperature and $k_B$ is the Boltzmann constant. The bond
distribution function for the thermalized-bond model system is of
the form \be P_{J_{ij}}(\beta) = p_b\;\delta_{J_{ij},J_0} +
(1-p_b)\;\delta_{J_{ij},0}\label{eq1.2}\ee where $p_b$ is given by
equation (\ref{eq1.1}), and $\delta$ denotes the Kronecker delta
\cite{TB}. Hence, the Hamiltonian of the system can be formally
defined by \be H = \sum_{\langle i,j\rangle} J_{ij} S_i S_j \ee
where $S_i=\pm 1$ is an Ising spin, and the sum is over all
nearest-neighbor pairs.
It is well known that mapping the regular Ising model onto the
equivalent correlated percolation problem introduces the bond
probability \be P_r = 1-\rme^{-2\beta J}\label{eq2.1} \ee for the
critical droplets (the Fortuin-Kasteleyn clusters) of the regular
Ising model \cite{CK80, FK}. The combination of equation
(\ref{eq1.1}) and equation (\ref{eq2.1}), together with the choice
$J_0=2J$, which is also the energy gap between parallel and
antiparallel spins in the regular Ising model, results in a compound
bond probability $P_{\rm TBIM}=P_r p_b$ for our thermalized-bond
model system: \be P_{\rm TBIM} = (1-\rme^{-2\beta
J})/(1+\rme^{-2\beta J})=\tanh(\beta J).\label{eq2.2}\ee Equation
(\ref{eq2.2}) represents the bond probability used in our
single-cluster update MC simulations.
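To illustrate how this bond probability enters the cluster updates, a minimal Python sketch of one Wolff single-cluster step with $P_{\rm TBIM}=\tanh(\beta J)$ is given below; the periodic boundaries, unit coupling and lattice size are our simplifying assumptions, whereas the actual simulations use the strip geometry and boundary conditions described in section 3.
\begin{verbatim}
import numpy as np

def wolff_update(spins, beta, J=1.0, rng=None):
    # One Wolff single-cluster update on an L x L square lattice with
    # periodic boundaries.  Bonds between aligned spins are activated
    # with the compound probability P_TBIM = tanh(beta*J) instead of
    # the regular-Ising value 1 - exp(-2*beta*J).
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    p_add = np.tanh(beta * J)
    i0, j0 = rng.integers(L, size=2)
    seed_spin = spins[i0, j0]
    in_cluster = np.zeros_like(spins, dtype=bool)
    in_cluster[i0, j0] = True
    stack = [(i0, j0)]
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % L, (j + dj) % L
            if (not in_cluster[ni, nj] and spins[ni, nj] == seed_spin
                    and rng.random() < p_add):
                in_cluster[ni, nj] = True
                stack.append((ni, nj))
    spins[in_cluster] *= -1          # flip the whole cluster
    return in_cluster.sum()

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))
for _ in range(500):      # beta_c ~ 0.671 for T_c = 1.4897 (J = kB = 1)
    wolff_update(spins, beta=0.671, rng=rng)
\end{verbatim}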
\begin{figure}
\includegraphics{fig1_new.eps}
\caption{Schematic illustration of the electronic energy states of
the covalent
bonds linking a chain of atoms. The energy gap between the bonding and the
antibonding level is denoted by $J_0$.} \label{figure1}
\end{figure}
The thermodynamic critical behavior of the TBIM has been studied before in two \cite{DM} and three \cite{DSB}
dimensions. In two dimensions, the critical temperature is estimated to be
$T_{c}= 1.4897(3)$ \cite{DM}, which is
lower than the critical temperature of the regular Ising model. The
lowering of the transition point is expected in the light of the
annealed bond disorder present. However, the thermal critical exponents are found to be unchanged,
within statistical errors. As for the three-dimensional TBIM, the thermal critical exponents are found to change, consistent with the Fisher renormalization relations, as the specific heat exponent of the regular Ising model in three dimensions, $\alpha_r\simeq 0.11$, is finite and positive.
\section{Simulation method and finite-size scaling}
As pointed out in the introduction, the main goal of this paper is
the critical behavior of the geometrical spin clusters and interfaces
in the two-dimensional TBIM. For consistency with the
postulates of SLE at $T_{c}$, we have considered the model on strips
of size $L_{x}\times L_{y}$, where the length of the strip $L_{x}$
is taken to be much larger than its width $L_{y}=L$ with an aspect
ratio $L_{x}/L_{y}=8$. The boundary conditions used for
simulations are fixed for the lower boundary (real axis),
antiperiodic for the two sides, and free for the upper boundary of
the system, as shown schematically in figure \ref{figure2}. Using a
single-cluster update algorithm (Wolff's algorithm) \cite{UW} for
the two-dimensional TBIM on the square lattice, we generated equilibrium
spin configurations at and near the critical point $T_c$. A typical run consisted of several weeks
of CPU time on a single-processor computer. Initially,
the system was allowed $2\times10^{3}\,L$ equilibration Monte Carlo
steps (MCS), and then the data points were accumulated by averaging
over $2\times 10^{2}\,L$ configurations that contained a spanning
cluster extending along the width of the strip $L$. Thus, $L$ sets
the appropriate length scale for the systems used, and the critical
interfaces can be studied by the theory of SLE in the scaling limit.
A turn-right (or, alternatively, turn-left) tie-breaking rule \cite{AS}
is used on the square lattice as a procedure to identify
the external perimeters (hulls) of the geometrical spin clusters
without any self-intersection. We note that in this case the hulls and
the external perimeters are the same.
To identify the interfaces in the upper half complex plane, a walker
moves along the edges of the dual lattice, starting from the origin
as sketched in figure \ref{figure2}. At the first step of the walk,
a spin (+) lies to the right of the walker (this direction is chosen
to be the preferable direction). After arriving at each site on the
dual lattice there are $3$ possibilities for the walker: it can
cross any of the $3$ nearest bonds of the original lattice. At the
first step of selection, it chooses the bonds connecting two
different spins such that crossing any of them leaves the spin $(+)$ to
the right and $(-)$ to the left of the walker. The directions right and
left are defined locally according to the orientation of the walker.
If there are still two possibilities for crossing, the walker
chooses the bond which accords with the turn-right tie-breaking
rule: it turns toward the bond on its right-hand side with
respect to its direction in the last step; if there is no
selected bond to its right, it moves straight on, and if that is not
possible either, it turns to its left. The procedure is
repeated iteratively until the walker touches the upper boundary.
The resulting interface is a unique one which has no
self-intersection and never gets trapped \cite{AS}. Note that we
only keep samples containing a vertical spanning cluster in the
$y$-direction. The fractal dimension of the interfaces at
criticality, $D_{I}$, is obtained using the standard finite-size
scaling procedure. The length of interfaces is related to the sample
size as \be l\propto L^{D_{I}}.\label{eq3.1} \ee Indeed the fractal
dimension of the conformally invariant curves is provided by the SLE
theory as \be D_{I}= 1 + \frac{\kappa}{8} \label{eq3.2} \ee in which
diffusivity $\kappa$, as mentioned in the introduction,
characterizes different universality classes, and so does $D_I$. For
the regular Ising model, the diffusivity is believed to be $\kappa =
3$, and thus $D_{I}=\frac{11}{8}=1.375$. In addition, the fractal
dimension of the spanning spin cluster at $T_c$ obeys the relation
\be M\propto L^{D_{c}} \label{eq3.3} \ee where $M$ is the mass of
the cluster, obtained by counting all the nearest-neighbor positive
(negative) spins to the right (left) of the SLE trace, as shown in figure \ref{figure2}.
The exact value of $D_c$ for the regular Ising model is
$D_{c}=\frac{187}{96}=1.9479...$ \cite{DB}. Besides these, we find
the winding angle variance through the winding angle function $w(e)$
as defined by Wilson and Wieland \cite{WW}. For each edge on the
dual lattice, there is a value of the winding function $w(e)$ at
that edge, such that the winding angle at a neighboring edge $e'$
is defined by $w(e') = w(e)$ plus the turning angle from $e$ to $e'$,
measured in radians. It is shown that the variance
of the winding angles grows with the sample size like \be
\langle\theta^{2}\rangle= a + \frac{\kappa}{4} \ln(L).\label{eq3.4}
\ee Thus, by plotting $\langle\theta^{2}\rangle$ versus $\ln(L)$,
the slope gives a direct measure of $\kappa$ \cite{WW}.
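In practice, both equation (\ref{eq3.1}) and equation (\ref{eq3.4}) reduce to linear fits in logarithmic variables. The short Python sketch below illustrates the two fits on synthetic data; the arrays are placeholders generated to be consistent with $D_I=1.375$ and $\kappa=3$, not our measured values.
\begin{verbatim}
import numpy as np

L_vals = np.array([100., 150., 200., 250., 300., 350., 400., 450.])

# synthetic stand-ins for the measured observables (in a real analysis
# these arrays come from the Monte Carlo data):
rng = np.random.default_rng(1)
l_vals = 2.0 * L_vals**1.375 * np.exp(0.01 * rng.standard_normal(8))
var_theta = -1.33 + 0.75 * np.log(L_vals)

# l ~ L^{D_I}: the slope of the log-log fit gives D_I
D_I, _ = np.polyfit(np.log(L_vals), np.log(l_vals), 1)
kappa_from_DI = 8.0 * (D_I - 1.0)       # D_I = 1 + kappa/8

# <theta^2> = a + (kappa/4) ln L: the semi-log slope gives kappa/4
slope, a = np.polyfit(np.log(L_vals), var_theta, 1)
kappa_from_winding = 4.0 * slope
\end{verbatim}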
\begin{figure}
\includegraphics{table_2.eps}
\caption{A schematic illustration of the definition of the domain boundaries
in the two-dimensional TBIM on a square lattice. The figure shows the dual
of the original square lattice including a spin configuration, with
fixed boundary conditions for the bottom end, antiperiodic on sides,
and free for the top end. The non-intersecting interface (shown by
arrows) is generated using a turn right tie-breaking rule.}
\label{figure2}
\end{figure}
The finite-size scaling of the spanning cluster size is of the form
\cite{SA} \be M(L)=L^{D_{c}} \tilde{M}(L/\xi)\label{eq3.5} \ee where
the correlation length, also known as the connectedness length,
behaves like $\xi \sim (T-T_{c})^{-\nu_G}$, and the scaling function
$\tilde{M}(x)$ tends to a constant as its argument goes to zero at
$T_{c}$. Thus, the correlation length exponent $\nu_G$ of the
geometrical clusters is estimated as the value that results in a data
collapse in a scaling plot $L^{-D_{c}}M(L)$ against
$L^{1/\nu_G}(\beta/\beta_c -1)$. Among other percolation quantities
of interest is the percolation strength $P_{\infty}(L)=M(L)/ L^2$,
which is the probability that a site chosen at random belongs to the
spanning cluster \cite{SA}. $P_{\infty}$ plays the role of an order
parameter for the percolation transition, and vanishes at the
percolation threshold $p_c=\tanh\beta_c$, at a rate specified by an
exponent $\beta_G$ defined by $P_{\infty}\propto
(p-p_c)^{\beta_G}$. The finite-size scaling relation for the
strength of percolation is as follows \cite{SA}: \be P_{\infty}(L)
= L^{-\beta_G/\nu_G} \tilde{P}(L/\xi).\label{eq3.6}\ee Thus, at the
critical point $T_c$, a log-log plot of $P_{\infty}$ against $L$
must be a straight line with a slope equal to the ratio $-\beta_G
/\nu_G$. In the next section, we present our results for the
two-dimensional TBIM.
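The selection of $\nu_G$ by the quality of the data collapse can be automated in several ways; one crude possibility is sketched below in Python. The container \verb|data|, mapping each lattice size $L$ to arrays of $\beta$ and $M(L,\beta)$, is a hypothetical structure for the Monte Carlo output, and the spread measure is an illustrative choice rather than the specific criterion used here.
\begin{verbatim}
import numpy as np

def collapse_quality(nu, data, D_c, beta_c):
    # Rescale the spanning-cluster mass as y = M * L^{-D_c} against
    # x = L^{1/nu} * (beta/beta_c - 1); after sorting all points in x,
    # a good collapse makes neighbouring y values (coming from
    # different L) nearly equal, so nu is scored by the mean squared
    # jump between consecutive points.
    xs, ys = [], []
    for L, (beta, M) in data.items():
        xs.append(L**(1.0 / nu) * (beta / beta_c - 1.0))
        ys.append(M * float(L)**(-D_c))
    x, y = np.concatenate(xs), np.concatenate(ys)
    order = np.argsort(x)
    return np.mean(np.diff(y[order])**2)

# best_nu = min(np.linspace(0.8, 1.2, 81),
#               key=lambda nu: collapse_quality(nu, data, 1.948, beta_c))
\end{verbatim}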
\section{Results and discussion}
In this section we present and discuss our main results for the
two-dimensional TBIM as obtained from simulations based on the
methods pointed out in the previous section. We performed
simulations for eight different system sizes $L=$ 100, 150, 200,
250, 300, 350, 400, and 450. Only the spin configurations including
a vertical spanning cluster are considered for analysis, and the
statistical errors were estimated by means of binning the
accumulated data. As shown in figure \ref{figure3}, the slope
of a log-log plot of the spanning length $l$ versus the system size
$L$ results in a fractal dimension $D_{I}=1.373(8)$, which
corresponds to $\kappa =2.984(17)$ as given by equation
(\ref{eq3.2}).
\begin{figure}
\includegraphics{l_versus_L_new_2.eps}
\caption{A log-log plot of $l$ against $L$ for the two-dimensional
TBIM. The slope of the straight line gives the fractal dimension of
the external perimeters $D_{I}=1.373(8)$. The error bars are smaller
or comparable with the symbol size.} \label{figure3}
\end{figure}
To confirm our results, we also measured the winding angle variance
along the spanning contour by performing simulations for 10
different system sizes $L=$ 30, 50, 100, 150, 200, 250, 300, 350,
400, and 450. A plot of $\langle\theta^{2}\rangle$ against $L$ is
shown in figure \ref{figure4}. A curve of the form $a_{1}+ a_{2}
\ln(L)$ with parameters $a_{1}=-1.330(10)$ and $a_{2}=0.751(2)$
(where $a_{2}=\frac{\kappa}{4}$), was least-squares fitted to these
data. Furthermore, plotting $\langle\theta^{2}\rangle$ versus
$\ln(L)$ results in an SLE parameter $\kappa =3.004(9)$, as
illustrated in the inset of figure \ref{figure4}.
\begin{figure}
\includegraphics{Teta2_new_L_2.eps}
\caption{A plot of $\langle\theta^{2}\rangle$ versus $L$ for the
two-dimensional TBIM. In the inset, the variance is in
semi-logarithmic coordinates. The error bars are smaller or
comparable with the symbol size.}\label{figure4}
\end{figure}
A log-log plot of the spanning cluster mass $M$ versus the system
size $L$ is shown in figure \ref{figure5}. The slope gives a fractal
dimension $D_{c}=1.948(3)$.
\begin{figure}
\includegraphics{M_versus_L_new_2.eps}
\caption{A log-log plot of $M$ against $L$. The slope of the
straight line gives the fractal dimension of the geometrical
clusters at $T_c$, $D_{c}=1.948(3)$.} \label{figure5}
\end{figure}
Using the finite-size scaling ansatz as given in equation
(\ref{eq3.5}), the correlation length exponent $\nu_G$ for the
emerging spanning cluster is estimated from a scaling plot of $
L^{-D_c}M(L)$ against $L^{1/\nu_G}(\beta/\beta_c-1)$ as shown in
figure \ref{figure6}. By varying $\nu_G$, and evaluating the quality
of the data collapse, our best estimate of the correlation length
exponent for the geometrical clusters is $\nu_G = 1.01(2)$. Finally,
the slope of the log-log plot of $P_{\infty}$ versus $L$ results in
$\beta_G=0.051(3)$, as shown in figure \ref{figure7}, which fits
well into the hyperscaling relation for the percolation exponents in
$d$ spatial dimensions $D_{c}=d-\frac{\beta_G}{\nu_G}$.
\begin{figure}
\includegraphics{fit_curve_1.eps}
\caption{A scaling plot for the spanning cluster mass. By varying
$\nu_G$, and evaluating the quality of the data collapse, our best
estimate of the correlation length exponent is $\nu_G = 1.01(2)$.}
\label{figure6}
\end{figure}
\begin{figure}
\includegraphics{Beta_curve_2.eps}
\caption{A log-log plot of $P_{\infty}$ against $L$. The slope of
the straight line gives $\beta_G = 0.051(3)$.} \label{figure7}
\end{figure}
The obtained results for the two-dimensional TBIM are listed in
table \ref{table1} for comparison with the analytical results of the
regular Ising model. At criticality, the fractal dimension of the
spin clusters $D_{c}$ and interfaces $D_{I}$ have been found to be
consistent with the analytical results obtained for the regular
Ising model, despite the temperature-dependent annealed bond dilution.
\begin{table}
\caption{\label{table1}The critical exponents of the geometrical clusters in $2d$ TBIM are
compared with those obtained for $2d$ regular Ising model.}
\begin{indented}
\item[]\begin{tabular}{llllll}
\br
&$\nu_G$ &$\beta_G$ & $\kappa$ & $ D_{I}$ &$D_{c}$ \\ \hline
TBIM &1.01(2)& 0.051(3) & 3.004(9) & 1.373(8) & 1.948(3) \\
Ising Model & 1.00 & $5/96$ & 3& 1.375 & $187/96$ \\
\end{tabular}
\end{indented}
\end{table}
It must be noted that although the geometrical clusters (i.e., the
neighboring sites of the same spin sign) uniquely characterize the
universality class of the critical system, they do not represent the
critical droplets. Hence, the percolation critical exponents of the
geometrical clusters, do not in general coincide with those of the
corresponding thermal quantities. The critical droplets are more
precisely specified by the so-called Fortuin-Kasteleyn (FK) clusters, whose
diffusivity parameter $\kappa$ has a duality relation with that of
the geometric spin clusters \cite{CK80,FK,DB1}. The FK clusters can
be obtained from the geometrical clusters through a random
decimation of bonds by a suitable probability, in this case
$1-\tanh(\beta)$, and are therefore less compact. Thus, despite the
correlation length exponent $\nu_G =1$, we note that the geometrical
clusters are too compact to represent the critical droplets, and the
exponent $\beta_G = 5/96$ differs appreciably from the corresponding
thermal exponent $\beta =1/8$, associated with the vanishing of the
magnetization order parameter at $T_c$. The value of the correlation
length (also known as the connectivity length) exponent $\nu_G= 1$,
for the geometrical clusters of the two-dimensional TBIM, is in
excellent agreement with the values obtained from a real-space
renormalization group analysis \cite{CK80,Co}, high-temperature
series expansion studies \cite{SG}, and precision numerical
simulations of the geometrical clusters of the standard
two-dimensional Ising model \cite{For,JS}. This result, however, raises
a question about the near-perfect collapse onto a universal
function reported for the same data of the regular Ising model, but
with a different exponent, $\frac{15}{8}$ \cite{AS}.
As can be seen from table \ref{table1}, within the statistical
uncertainty, the value of the SLE parameter $\kappa$, the fractal
dimensions, and the percolation exponents of the geometrical spin
clusters of the two-dimensional TBIM, are in excellent agreement
with those of the corresponding regular Ising model. These results
agree well with an earlier study of the thermodynamic critical
behavior of the two-dimensional TBIM, which places the model system
in the universality class of the standard two-dimensional Ising
model \cite{DM}, and the Fisher renormalization relations, which
assert that annealed bond dilution can only change the critical
exponents if the specific heat exponent $\alpha_r$ of the regular
model is positive ($\alpha_r > 0$) \cite{Fis}. The
two-dimensional regular Ising model, however, is characterized by a
logarithmic divergence of the specific heat, $\alpha_r=0$, and
the exponents remain unchanged.
As for the fractal behavior of the geometrical clusters away from
criticality, we note that for all temperatures below the
transition point $T<T_c$ (or $p>p_c$), $D_{\rm c}$ must equal the
space dimensions $d=2$, otherwise the percolation strength would
vanish in the thermodynamic limit of an infinite lattice. At $T_c$
($p=p_c$), the geometrical clusters of the two-dimensional TBIM, are
fractal with a fractal dimension $D_c = 1.948(3)$ ($<2$), thus
rendering the percolation strength zero at the critical point, as
expected. Hence, one expects the fractal dimension of the
geometrical clusters to change discontinuously from $D_c=d=2$ for
$T<T_c$, to $D_c=1.948(3)$ at $T_c$. This expectation is validated
by the data of reference \cite{AS}, where the so-called `effective'
fractal dimensions undergo a sharp crossover at $T_c$. We believe
that the crossover is a finite-size effect, and a remnant of the
discontinuity at $T_c$ in the thermodynamic limit. As for the
temperatures above the critical point $T>T_c$, there are no spanning
geometrical clusters in the thermodynamic limit and the procedures
used here become inapplicable. However, other standard procedures
such as the box counting method may be employed to investigate the
fractal behavior of the geometrical clusters above $T_c$ and within
a region of linear size of the order of the finite correlation
length $\xi$.
\section{Summary}
The fractal behavior of the geometrical spin clusters is obtained
for the thermalized bond Ising model in two dimensions. For this
purpose, a modified Wolff single-cluster Monte Carlo simulation is
used to generate equilibrium spin configurations on square lattices
in the critical region. The obtained values for the fractal
dimensions of the spanning geometrical clusters, $D_{c}$, and that
of their interfaces, $D_{I}$, are in perfect agreement with those
reported for the regular Ising model. The variance of the winding
angles results in a value $\kappa=3.004(9)$ for the SLE parameter,
thus placing it in the universality class of the regular Ising
model. Furthermore, the percolation exponents of the geometrical
spin clusters at $T_c$, are found to be consistent with those
reported for the regular Ising model. These consistencies are
explained in terms of the Fisher renormalization relations, which
express the thermodynamic critical exponents of systems with
annealed bond dilution in terms of those of the regular model
system.
\section*{References}
\section*{Abstract}
We present the Generalised Differential Image Motion Monitor (GDIMM), a compact instrument dedicated to measuring 4 parameters of the optical turbulence: the seeing, the isoplanatic angle, the coherence time and the wavefront coherence outer scale. GDIMM is based on a small telescope (28cm diameter) equipped with a 3-hole mask at its entrance pupil. The instrument is fully automatic and performs continuous monitoring of turbulence parameters at the Calern Observatory (France). This paper gives a description of the instrument, the data processing and the error budget. We also present statistics from $3\frac{1}{2}$ years of monitoring of turbulence parameters above the Calern Observatory.\\
{\sl \noindent keywords:
Interferometers -- High Angular Resolution -- atmospheric effects -- site-testing.}
\section{Introduction}
\label{par:intro}
Atmospheric turbulence is responsible for the degradation of astronomical images observed through the atmosphere. Since the early 70's, many techniques have been developed to achieve the diffraction-limited resolution of observing instruments, namely speckle interferometry \cite{Labeyrie70}, long baseline interferometry \cite{Labeyrie75} and adaptive optics \cite{Rousset90}. The performance of these techniques relies on a good knowledge of the atmospheric turbulence parameters, i.e. the seeing $\epsilon_0$, the isoplanatic angle $\theta_0$, the coherence time $\tau_0$ and the outer scale ${\cal L}_0$.
The 3 parameters $\epsilon_0$, $\theta_0$ and $\tau_0$ are of fundamental importance for adaptive optics (AO) correction: a large coherence time reduces the delay error, a small seeing value makes it easy to close the loop and to benefit from a rather good correction, and a large isoplanatic angle reduces the anisoplanatic error, enlarges the sky coverage and allows very wide fields of correction (see \cite{Carbillet17} and references therein). The outer scale ${\cal L}_0$ has a significant effect for large-diameter telescopes (8m and above) and impacts low-order Zernike modes such as tip-tilt \cite{Winker91}.
For several years, our group has been developing original techniques and instrumentation for measuring the optical turbulence of the atmosphere. Several prototypes were developed in the past, such as the generalized seeing monitor (GSM, \cite{Ziad00}), which has become a reference for monitoring the coherence parameters of the wavefront at ground level. In the last 15 years the GSM has been used in a large number of astronomical observatories and for prospecting potential new sites (see \cite{Ziad00} and references therein).
The Generalized Differential Image Motion Monitor (GDIMM) was proposed in 2014 \cite{Aristidi14} to replace the aging GSM. It is a compact instrument very similar to a DIMM \cite{Sarazinroddier90}, with 3 sub-apertures of different diameters. GDIMM observes bright single stars up to magnitude $V\sim 2$, at zenith distances up to 30$^\circ$, which is enough to ensure observability at any time/night of the year.
\begin{figure*}
\parbox[c]{85mm}{
\includegraphics[width=8cm]{pupil.eps}}\
\parbox[c]{85mm}{
\includegraphics[width=8cm]{P1040425s.eps}\\ \vskip 2.5mm \ \\}
\caption{Left: the pupil mask of GDIMM (the bottom part is a sectional view). Right: the GDIMM dome on its 4m high tower at Calern Observatory.}
\label{fig:photogdimm}
\end{figure*}
After a period of development and tests in 2013--2015, the GDIMM has been operational since the end of 2015, as a part of the Calern Atmospheric Turbulence Station (CATS; C\^ote d'Azur Observatory -- Calern site, France, UAI code: 010, Latitude=$43^\circ 45' 13''$~N, Longitude=$06^\circ 55' 22''$~E). GDIMM provides continuous monitoring of 4 turbulence parameters ($\epsilon_0$, $\theta_0$, $\tau_0$ and $\lo$) above the Calern Observatory. Data are displayed in real time through a website ({\tt cats.oca.eu}), the idea being to provide a service available to all observers at Calern, as well as to build a database for long-term statistics of turbulence (before CATS, no such database existed for this site, despite its 40 years of activity as an astronomical site).
The other objective is for Calern to become an operational on-sky test platform for the validation of new concepts and components, in order to overcome the current limitations of existing high angular resolution (HRA) systems. Several activities regarding adaptive optics are operated at the M\'eO \cite{Samain08} and C2PU \cite{Bendjoya12} telescopes, and they benefit from the data given by the CATS station.
This paper is organised as follows: Sect.~\ref{par:instrument} describes the instrument. Sects.~\ref{par:seeing} to~\ref{par:L0} present the methods used to derive each parameter (seeing, isoplanatic angle, coherence time and outer scale) and the associated error budgets. Sect.~\ref{par:results} is devoted to results obtained at the Calern observatory. A final discussion is presented
in Sect.~\ref{par:conclusion}.
\section{Instrument description}
\label{par:instrument}
The GDIMM is based on a commercial Celestron C11 telescope (diameter 28cm), driven by an equatorial mount Astro-Physics AP900, controlled remotely by a computer. It is equipped with a pupil mask made of 3 sub-pupils (Fig.~\ref{fig:photogdimm}, left). Two sub-pupils are circular with a diameter $D_1=$6cm, separated by a distance $B=$20cm along the declination axis. Both are equipped with a glass prism oriented to give opposite tilts to the incident light. The third sub-aperture is circular, with diameter $D_3=$10cm and a central obstruction of 4cm and was designed to estimate the isoplanatic angle. It is protected by a glass parallel plate. A wide-field finder with a webcam is used to point stars and center them on the telescope.
The main camera is a Prosilica EC650. It offers good sensitivity in the visible domain, with a peak near the wavelength $\lambda=500$nm. The pixel size is 7.4$\mu$m. A Barlow lens enlarges the telescope focal length to meet sampling requirements (we have $\lambda/D_1=7$ pixels and $\lambda/D_3=4$ pixels for $\lambda=500$nm). The camera allows short exposure times and region-of-interest (ROI) definition to increase the frame rate. An exposure time of a few milliseconds is required to observe stars of magnitude $V<2$ with sufficient SNR. The framerate, limited by the hardware, is about 100 frames per second for our observations. Such a high cadence is mandatory to properly sample the temporal variability of the angles of arrival (AA) and to estimate the coherence time (see Sect.~\ref{par:calcultau0}).
An example of a GDIMM short-exposure image is shown in Fig.~\ref{fig:snapshot}. It was obtained at Calern Observatory on April 4th, 2018 at 20h55UT on the star Regulus ($\alpha$~Leo, magnitude $V= 1.4$). The exposure time was 10ms for this image. The central spot corresponds to sub-pupil 3 (diameter 10cm); it is brighter than the two other ones, as expected. The first Airy ring is visible around the central spot: the seeing was $\epsilon_0=1.3$~arcsec for the wavelength $\lambda=500$nm (Fried diameter $r_0=8$cm, close to the pupil diameter). The image quality can be checked by computing the Strehl ratio of the sub-images, using a simple formula proposed by \cite{Tokovinin02}. It is generally assumed that image quality is good when the Strehl ratio is over 30\% (this corresponds to phase distortions lower than $\lambda/5$ over the pupil surface). For this example the 3 Strehl ratios are 0.79, 0.83 and 0.36 for the spots corresponding to sub-pupils 1, 2 and 3.
The acquisition software is written in {\tt C++/QT}. It drives the whole observing sequence: dome opening, choice of the target star, telescope pointing, image acquisition, and computation of the turbulence parameters. The instrument is now fully automatic. It uses information from a weather station and an all-sky camera to check observability. Observations are stopped if conditions degrade.
The GDIMM is placed on the top of a 4m high concrete pillar, and protected by an all-sky dome (Fig.~\ref{fig:photogdimm}, right). A more detailed description is given in previous papers \cite{Aristidi14, Ziad17, Ziad18, Aristidi18}.
\begin{figure}
\includegraphics[width=9cm]{snapshot.eps}
\caption{Top: GDIMM instantaneous image, taken on 2018-04-04, 20:55UT on the star Regulus ($\alpha$~Leo) with an exposure time of 10ms. Bottom: 1D projection of the image (sum of lines).}
\label{fig:snapshot}
\end{figure}
\subsection*{Data processing}
GDIMM data are based on sequences of two successive sets of $N=1024$ frames of a bright star, taken at exposure times $T=5$ms and $2T=10$ms. The full image size is 659$\times$493 pixels. Every frame contains 3 sub-images of the star, corresponding to the three sub-pupils of the instrument (see Fig.~\ref{fig:snapshot}). Images are cropped in a rectangular zone (the ROI) of size 380$\times$150 pixels containing the 3 stellar spots. This makes it possible to attain a cadence of 100 frames per second. After sky-background removal and thresholding, we calculate the three photocenters and integrated intensities (a minimal sketch of this photocentre extraction is given after the list below). These raw data are logged into a file for optional further processing. A series of filters is then applied to control the data quality:
\begin{itemize}
\item Sub-image detection is made in 3 square boxes (size 30$\times$30 pixels for the lateral spots corresponding to pupils 1 and 2, 45$\times$45 pixels for the central spot, pupil 3) whose positions are calculated on the first frame of the sequence. Sub-images for which the photocenter is too close to the box edge are rejected (this happens in case of strong wind or mount drift).
\item Outlier detection and rejection is made on photocenter coordinates and intensities.
\item Sub-images corresponding to the pupil 3 (diameter $D_3=10$cm) must be brighter than sub-images of pupils 1 and 2 (diameter $D_1=6$cm). They are rejected if it is not the case.
\item Drift correction is applied by removing a linear trend from the photocenter time series.
\end{itemize}
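A minimal Python sketch of the photocentre extraction mentioned above is as follows; the background and threshold handling are simplified with respect to the actual acquisition code.
\begin{verbatim}
import numpy as np

def photocentre(frame, y0, x0, half):
    # Photocentre and integrated intensity of one stellar spot inside
    # a square box of half-size `half` centred on (y0, x0).
    sub = frame[y0 - half:y0 + half, x0 - half:x0 + half].astype(float)
    sub -= np.median(sub)               # crude sky-background removal
    sub[sub < 3.0 * sub.std()] = 0.0    # simple threshold choice
    y, x = np.indices(sub.shape)
    I = sub.sum()
    return (y * sub).sum() / I, (x * sub).sum() / I, I
\end{verbatim}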
The 4 turbulence parameters are then calculated (detailed description hereafter). The whole process (acquisition+processing) takes less than one minute. GDIMM provides one set of turbulence parameters every 2 minutes, which are sent to a database for real-time display on the CATS website ({\tt cats.oca.eu}). Note that there is some dead time between two successive acquisitions to match this cadence of 2 minutes. We made this choice considering the characteristic evolution time of the parameters, which is a few minutes (see \cite{Ziad16} and references therein). If we suppress the dead time, we can obtain one parameter quadruplet per minute with our current hardware. Some tests are currently being made to see whether this can improve the parameter stability, especially for the outer scale estimation.
\section{Seeing measurements}
\label{par:seeing}
\subsection{Theory}
Seeing estimation by the GDIMM is based on differential image motion. The principle of seeing estimation is well known \cite{Sarazinroddier90}. It is based on the variances of the photocenter difference of the images produced by sub-pupils 1 and 2 (Fig.~\ref{fig:photogdimm}, left). The seeing $\epsilon_0$ (in radians) is computed using the following formulae~\cite{Tokovinin02}~:
\be
\epsilon_{0,l|t}=0.98 \,\left(\frac{D}{\lambda}\right)^{0.2}\:\left(\frac{\sigma_{l|t}^2}{K_{l|t}}\right)^{0.6}
\label{eq:seeing}
\ee
with
\begin{eqnarray}
K_l &=& 0.364\, (1-0.532 b^{-1/3})\nonumber \\ \ \label{eq:seeingK}\\
K_t &=& 0.364\, (1-0.798 b^{-1/3})\nonumber
\end{eqnarray}
where $B$ is the distance between the sub-apertures, $D$ their diameter, $b=B/D$, and $\lambda$ the wavelength, traditionally set to 500~nm as a standard. $\sigma_{l|t}^2$ are the longitudinal and transverse differential variances, calculated at the zenith (the correction is $\sigma^2(z=0)=\sigma^2(z)\, \cos(z)$ with $z$ the zenithal angle). Two estimates of the seeing are obtained for a given sequence; they are expected to be almost identical (isotropy hypothesis) and are averaged.
Differential variances (longitudinal and transverse) $\sigma^2_{l|t,T}$ and $\sigma^2_{l|t,2T}$ are calculated for the sets corresponding to exposure times $T$ and $2T$. They are compensated for the finite exposure time $T$ using an exponential interpolation, as proposed by \cite{Tokovinin02}
\be
\sigma^2_{l|t}=(\sigma^2_{l|t,T})^n \; (\sigma^2_{l|t,2T})^{1-n}
\label{eq:seeingcorrt}
\ee
This correction increases the variances by a factor of the order of 10\% to 20\%. Two values of the seeing $\epsilon_{0,l|t}$ are deduced from Eq.~\ref{eq:seeing} and averaged.
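For reference, the computation of Eqs.~\ref{eq:seeing}--\ref{eq:seeingcorrt} can be transcribed in a few lines of Python. The exponent $n=1.75$ below is the value quoted in Sect.~\ref{par:calcultau0} for the structure functions; we assume here that the same value applies to the variance interpolation.
\begin{verbatim}
import numpy as np

def dimm_seeing(var_T, var_2T, D=0.06, B=0.20, lam=500e-9,
                n=1.75, longitudinal=True):
    # var_T, var_2T: longitudinal (or transverse) differential
    # variances in rad^2 at exposure times T and 2T, already corrected
    # to zenith.  Returns the seeing at 500 nm in arcsec.
    b = B / D
    K = 0.364 * (1.0 - (0.532 if longitudinal else 0.798) * b**(-1/3))
    var0 = var_T**n * var_2T**(1.0 - n)   # zero-exposure interpolation
    eps = 0.98 * (D / lam)**0.2 * (var0 / K)**0.6   # seeing in radians
    return eps * 180.0 / np.pi * 3600.0
\end{verbatim}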
\subsection{Error analysis}
\label{par:errorseeing}
\subsubsection{Statistical error.}
\label{par:staterr}
Variances of image motion at exposure times $T$ and $2T$ are computed from samples of $N=1024$
individual frames: they are then affected by statistical noise due to
the finite size of the sample. Assuming statistical independence
between the frames, the statistical error on the variance $\sigma^2$ (both at exposure times $T$ and $2T$)
is given by \cite{Frieden83}
\be
\frac{\delta \sigma^2}{\sigma^2}=\sqrt{\frac{2}{N-1}}
\ee
which propagates into an error contribution $\delta \epsilon_0$ on the seeing.
With 1024 independent frames we have $\frac{\delta
\sigma^2}{\sigma^2}=4.4$\%. The error on the seeing is calculated from Eqs~\ref{eq:seeing} and~\ref{eq:seeingcorrt} and gives
$\frac{\delta\epsilon_0}{\epsilon_0}\simeq 5$\%. This is the main source of uncertainty in our seeing estimations.
\subsubsection{Scale error}
Differential variances are obtained in units of pixel square and
require calibration of the pixel size. This is done by making
image sequences of the binary star $\beta$ Cyg~AB (separation 34.6$''$).
We measured a pixel scale of $\xi=0.242\pm 0.003$$''$.
The uncertainty on $\xi$ propagates into
the differential variances when the conversion from pixels into
arcsec is performed. It gives a relative contribution on
the differential variances $\frac{\delta
\sigma^2}{\sigma^2}=0.6$\% and on the seeing
$\frac{\delta\epsilon_0}{\epsilon_0}=0.4$\%.
The scale calibration has to be done regularly: the telescope tube is subject to thermal expansion that results in slight variations $\delta F$ of the focal length $F$, especially during the transition between summer and winter. We measured relative variations $\frac{\delta F}F\lesssim 1\%$, leading to a relative uncertainty $\frac{\delta\epsilon_0}{\epsilon_0}\simeq 1$\% on the seeing. This remains lower than the statistical error.
\subsubsection{Background noise}
The sky background is an additive Poisson noise independent from the
stellar signal. Its influence on DIMM data is discussed in \cite{Tokovinin02}.
It biases the computed differential variances by a term
\be \sigma_B^2=2
\frac{B^2}{I^2}\sum_{\mbox{\scriptsize window}} (x_{ij}-\bar x)^2
\label{eq:ron}
\ee
where $I$ is the total stellar flux, $B$ is the
sky background standard deviation and $x_{ij}$ the
coordinates of contributing pixels (the number of illuminated pixels in the star image
is typically of the order of 300 after thresholding and that defines the
``window'' over which the summation is made). With our data, the bias term is
$\sigma_B^2\simeq 10^{-2}$ pixels$^2$, giving a relative error $\frac{\delta
\sigma^2}{\sigma^2}=0.2$\%. This is negligible compared to the statistical error.
Other instrumental noises include the readout noise of the CCD and the error on the centroid determination. These errors were studied in detail in the past (see \cite{Ziad94} and references therein) and have a very small contribution, orders of magnitude below the statistical error.
\section{Isoplanatic angle measurements}
\subsection{Theory}
The isoplanatic angle $\theta_0$ is estimated from the scintillation of a single star observed through the sub-pupil 3, with a diameter of 10~cm and a central obstruction of~4~cm (Fig.~\ref{fig:photogdimm}, left). The scintillation index is the ratio of the variance $\sigma_I^2$ of the stellar intensity, divided by the square of its mean value $\bar I$:
\be
\label{eq:scintindex}
s=\frac{\sigma_I^2}{\bar I^2}
\ee
The principle of the calculation is based on the similarity of the theoretical expressions of $\theta_0$ and the scintillation index $s$ \cite{Looshogge79, Ziad00}. $\theta_0$ is obtained (in arcsec) for a wavelength $\lambda=500$~nm by the following formula
\be
\theta_0^{-5/3}=A \, s
\label{eq:isop}
\ee
where $A=14.87$ is computed numerically from eqs.~19 and 21 of \cite{Ziad00} using the value $h_0=10$km. The scintillation index $s$ is corrected for the zenith distance $z$ by the formula $s(z=0)=s(z)\, \cos(z)^{8/3}$.
Simultaneous measurements of the seeing and the isoplanatic angle make it possible to derive the equivalent turbulence altitude defined by \cite{Roddier82} as
\be
\bar{h}=0.31 \frac{r_0}{\theta_0}
\label{eq:hmoy}
\ee
with $r_0=0.98 \frac{\lambda}{\epsilon_0}$ the Fried parameter. Statistics for $\bar h$ at Calern are presented in Section~\ref{par:results}.
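The chain from the scintillation index to $\theta_0$ and $\bar h$ (Eqs.~\ref{eq:scintindex}, \ref{eq:isop} and \ref{eq:hmoy}) reduces to a few lines; a Python sketch is given below, in which the unit conversions are ours.
\begin{verbatim}
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)     # one arcsec in radians

def isoplanatic_angle(var_I, mean_I, z_deg, A=14.87):
    # scintillation index corrected to zenith by cos(z)^{8/3}, then
    # theta_0 (arcsec, lambda = 500 nm) from theta_0^{-5/3} = A s
    s = (var_I / mean_I**2) * np.cos(np.radians(z_deg))**(8.0 / 3.0)
    return (A * s)**(-3.0 / 5.0)

def mean_altitude(seeing_arcsec, theta0_arcsec, lam=500e-9):
    # equivalent turbulence altitude hbar = 0.31 r0/theta0 (metres),
    # with r0 = 0.98 lambda/eps0 and both angles converted to radians
    r0 = 0.98 * lam / (seeing_arcsec * ARCSEC)
    return 0.31 * r0 / (theta0_arcsec * ARCSEC)
\end{verbatim}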
\subsection{Isoplanatic angle estimation}
Scintillation indexes (sub-image corresponding to pupil 3) $s_T$ and $s_{2T}$ are calculated for sets corresponding to exposure times $T$ and $2T$.
These sets are composed of $N=1024$ images, representing about 10s of data. This integration time appears to be long enough for the scintillation index to converge. To check this, we recorded long data sequences (up to 4000 images) and calculated the scintillation index for integration times varying from 0 to 40s. The result is shown in Fig.~\ref{fig:scintconv} for 3 different sets taken at Calern on the night of March 19$^{\rm th}$, 2018. Scintillation indexes show satisfactory convergence (below 2\%) after 10s of integration time.
Compensation from the finite exposure time is made by linear extrapolation on scintillation indexes as proposed by \cite{Ziad00}
\be
s=2 s_T-s_{2T}
\label{eq:isopcorrt}
\ee
This compensation is more critical on the scintillation than on the differential variances. The correction can be of the order of 30\%--50\%. The isoplanatic angle is then derived from Eq.~\ref{eq:isop}.
\begin{figure}
\includegraphics[width=8cm]{scint_convergence.eps}
\caption{Scintillation index as a function of the integration time for 3 different data sets taken at Calern on the night of March 19$^{\rm th}$, 2018. The horizontal axis is limited to the range [0--15]s.}\label{fig:scintconv}
\end{figure}
\subsection{Error analysis}
\subsubsection{Statistical error}
The isoplanatic angle is estimated from the scintillation index $s$ via Eq.~\ref{eq:scintindex}. Two estimates $s_T$ and $s_{2T}$ are made, corresponding to exposure times $T$ and $2T$, and combined to obtain the scintillation index corrected for exposure-time effects (Eq.~\ref{eq:isopcorrt}). The error on $s_T$ (same as on $s_{2T}$) is
\be
\frac{\delta s_T}{s_T}=\frac{\delta \sigma_I^2}{\sigma_I^2}+2 \frac{\delta\bar{I}}{\bar{I}}
\label{eq:statnoiseisop}
\ee
If we assume statistical independence between frames, the first term is the same as for the seeing and the second is $\frac 2{\sqrt N}$. We get
\be
\frac{\delta s_T}{s_T}=\sqrt{\frac{2}{N-1}} + \frac 2{\sqrt N}\simeq 10\%
\label{eq:err_s}
\ee
However, in the case of a slow wind speed in the upper atmosphere (the major contributor to scintillation), the number of independent frames within an image cube is reduced to a number $N_I<N$, and we must replace $N$ by $N_I$ in the previous equation. An order of magnitude of $N_I$ is given by the ratio
\be
N_I=\frac{N t_e}{D_3/v}
\ee
where $D_3$ is the diameter of the sub-pupil 3, $N t_e$ the integration time (10s at a framerate of 100~Hz) and $v$ the wind speed of atmospheric layers contributing to the scintillation (high altitude layers). We do not know $v$, but we can have its order of magnitude by looking at the distribution of the effective wind speed $\bar v$ defined in Eq.~\ref{eq:tau0}. At Calern observatory, the distribution is bimodal (as shown in Fig.~\ref{fig:histoveffsum}) and high layers have a speed of the order of 13m/s. Taking this value for $v$, we obtain $N_I\simeq 1300$, which is the same order of magnitude as the number $N$ of frames in a data cube.
Using Eq.~\ref{eq:isopcorrt}, we obtain the relative statistical error on the zero exposure time scintillation index
\be
\frac{\delta s}{s}\simeq 15\%
\ee
There is another error source, arising from the constant $A$ in Eq.~\ref{eq:isop}. $A$ is indeed a function of an altitude parameter $h_0$ defined in eq.~21 of \cite{Ziad00}. The relation $A(h_0)$ depends on the pupil geometry and is analytic. Its dependence on $h_0$ remains weak; we found that the relative error
\be
\frac{\delta A}{A} \lesssim 5\%
\ee
in the range $h_0 \in [1, 25]$km. The relative statistical error on the isoplanatic angle is
\be
\frac{\delta \theta_0}{\theta_0}=\frac 3 5 \frac{\delta A}{A}+ \frac 3 5 \frac{\delta s}{s} \simeq 15\%
\ee
\subsubsection{Sky background}
The presence of a sky background on individual images introduces a bias on the estimation of the mean stellar
intensity $\bar{I}$, its
standard deviation $\sigma_I$, and hence on the scintillation index $s$. The observed background on our images is typically 40 ADU/pixel. Its relative contribution to the stellar flux (integrated over the star image) represents about 30\% for bright stars such as Deneb ($\alpha$ Cyg, magnitude $V=1.2$) observed by the GDIMM. To estimate the bias, let us introduce the following variables:
\begin{itemize}
\item $B$, the background intensity collected over the $N_I$ pixels illuminated by the star after threshold application, with $\bar{B}$ and $\sigma^2_B$ its mean and variance. $B$ is a Poisson random variable, so it must satisfy $\sigma_B=\sqrt{\bar B}$, which was well verified on our images.
\item $I_t$ the total intensity (background+stellar flux) collected over the $N_I$ pixels.
\end{itemize}
The stellar flux is given by $I=I_t-B$, the measured quantity being $I_t$. The mean $\bar{I}$ is biased by the term $\bar{B}$. This bias is estimated and removed as indicated above, but the background fluctuations lead to an error $\delta I$ on the estimation of $\bar{I}$ equal to $\delta I=\sigma_B\simeq \sqrt{\bar B}$. Similarly, the intensity variance $\sigma_I^2$ is biased by a term $\sigma_B^2$.
The error on the scintillation index is calculated from Eq.~\ref{eq:statnoiseisop} taking $\delta \sigma_I^2=\sigma_B^2$ (bias on intensity variance) and $\delta\bar{I}=\sigma_B$. Typical values are, in ADU units: $\sigma_B\simeq 240$, $\bar{I}\simeq 100000$, $\sigma_I\simeq 30000$. That gives a background error on the scintillation index $\frac{\delta s}{s}\le 1\%$, which is an order of magnitude below the statistical error.
\section{Coherence time measurements}
\label{ctime}
\subsection{Theory}
The coherence time $\tau_0$ relevant for AO applications is defined by \cite{Roddier81}
\be
\tau_0=0.31\frac{r_0}{\bar v}
\label{eq:tau0}
\ee
where $\bar v$, the effective wind speed, is a weighted average of the wind speed over the whole atmosphere. It can be estimated \cite{Ziad12, Aristidi14, Ziad17} from the temporal structure functions $D_{x|y} (\tau)$ of the AA in the $x$ (resp. $y$) direction (parallel to the declination (resp. right ascension) axis). This function is zero for $\tau = 0$ and saturates to a value $D_s$ for large $\tau$; its characteristic time
\be
D(\tau_{AA,x|y})=\frac{D_s}{e}
\label{eq:tauaa}
\ee
defines the decorrelation time of AA fluctuations in directions $x$ and $y$. To calculate the effective wind speed $\bar{v}$, we make use of the work by \cite{Conan00} and \cite{Ziad12} who gave two approximations of $\bar{v}$ (in m/s) corresponding to two different regimes:
\begin{itemize}
\item For $\tau_{AA,x|y} > \frac D{\bar v}$
\be
\bar v=10^3 D\, G^{-3} \left[\tau_{AA,x}^{\frac 1 3} + \tau_{AA,y}^{\frac 1 3} \right]^{-3}
\label{eq:veff}
\ee
where $D$ is the sub-pupil diameter and $G$ a constant \cite{Conan00}:
\be
G=\frac{(1-e^{-1}) (3.001 K^{\frac 1 3} + 1.286 K^{\frac 7 3}) + e^{-1} (2.882+1.628 K^2)}{0.411+0.188 K^2}
\ee
with $K=\frac{\pi D}{\lo}$. This case is met almost all the time with small pupils such as the GDIMM ones.
\item For $\tau_{AA,x|y} < \frac D{\bar v}$ (this case was never observed with our data):
\be
\bar v=\frac{D\, \sqrt{G_1}}2 \left[\tau_{AA,x}^{-2} + \tau_{AA,y}^{-2} \right]^{\frac12}
\ee
with
\be
G_1=\frac{2.62}{e} \, \left(1-1.04 K^{\frac 1 3} +0.57 K^2 - 0.45 K^{7/3}
\right)
\ee
\end{itemize}
We obtain 3 values of $\bar v$ for the 3 sub-pupils, which are averaged. The coherence time $\tau_0$ is eventually calculated from $r_0$ and $\bar v$ using Eq.\ref{eq:tau0}.
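A direct Python transcription of Eq.~\ref{eq:veff} and of the constant $G$ is sketched below; we assume $\tau_{AA}$ expressed in seconds and $D$, ${\cal L}_0$ in metres, which is our reading of the unit conventions.
\begin{verbatim}
import numpy as np

def effective_wind_speed(tau_x, tau_y, D, L0=20.0):
    # Regime tau_AA > D/vbar (the one met with the GDIMM sub-pupils):
    # vbar = 1e3 * D * G^-3 * (tau_x^{1/3} + tau_y^{1/3})^-3,
    # with tau in seconds, D and L0 in metres, K = pi*D/L0.
    K = np.pi * D / L0
    e1 = np.exp(-1.0)
    G = ((1.0 - e1) * (3.001 * K**(1/3) + 1.286 * K**(7/3))
         + e1 * (2.882 + 1.628 * K**2)) / (0.411 + 0.188 * K**2)
    return 1e3 * D * G**(-3) * (tau_x**(1/3) + tau_y**(1/3))**(-3)

# with the decorrelation times quoted in the next subsection for
# sub-pupil 1, effective_wind_speed(9.5e-3, 8.8e-3, D=0.06) gives a
# speed of order 15-20 m/s under these unit assumptions
\end{verbatim}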
\subsection{Coherence time estimation}
\label{par:calcultau0}
\begin{figure*}
\begin{center}
\includegraphics[width=8cm]{AA_struct.eps}
\includegraphics[width=8cm]{AA_struct_zoom.eps}
\caption{Example of normalised structure functions of AA fluctuations along the $x$ axis, calculated for the 3 sub-pupils (and compensated from exposure time). Left: structure functions $\frac{D_x(\tau)}{D_s}$ divided by their saturation value. Right: zoom for $\tau\in[0, 25]$ms. The 3 curves intersect the line $\frac{D_x(\tau)}{D_s}=\frac 1 e$ (brown dashed line) at $\tau=\tau_{AA}$ (circles).}
\label{fig:structfn}
\end{center}
\end{figure*}
We noticed that the framerate (100 frames/second) is slightly variable; the first operation is therefore to resample the time series of photocenter coordinates with a constant time step $\delta t$ (after some trials, we chose $\delta t=5$~ms). Twelve structure functions $D(\tau)$ are computed for the 12 photocenter series (2 coordinates for 3 sub-images, and two frame sets for exposure times $T$ and $2T$) using the direct expression
\be
D_{x|y}(\tau)=\langle \left[x|y(t) - x|y(t+\tau)\right]^2\rangle
\ee
where $\langle \rangle$ stands for the ensemble average over the $N=1024$ frames. Structure functions are compensated for the finite exposure time using the same method as for the seeing:
\be
D_{x|y} (\tau)=D_{x|y,T} (\tau)^{n} \; D_{x|y,2T} (\tau)^{1-n}
\ee
where $D_{x|y,T}$ and $D_{x|y,2T}$ are calculated on image cubes taken with exposure times of $T$ and $2T$, and $n=1.75$. An example of structure functions is shown in Fig.~\ref{fig:structfn}. Curves correspond to the $x$ axis (declination) and were divided by their respective saturation value $D_s$. One can remark that the saturation is attained after 0.3--0.4s, and that there are some fluctuations of $D_x(\tau)$ in the saturation regime. These fluctuations are the main source of uncertainty on the determination of $\tau_{AA}$, as discussed in Sect.~\ref{par:errorstau}. The graph on the right is a zoom for small values of $\tau$: curves intersect with the line $\frac{D_x(\tau)}{D_s}=\frac 1 e$ at $\tau_{AA,1}=9.5$ms, $\tau_{AA,2}=8.8$ms and $\tau_{AA,3}=6.1$ms.
For each sub-pupil, the effective wind speed $\bar v$ is calculated from Eq.~\ref{eq:veff}. The three values of $\bar v$ are then averaged.
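The extraction of $\tau_{AA}$ from a resampled photocentre series can be sketched in a few lines of Python; the estimate of the saturation level from the tail of the structure function is our simplification.
\begin{verbatim}
import numpy as np

def structure_function(x, dt, max_lag_s=0.5):
    # D(tau) = <[x(t) - x(t+tau)]^2> for a photocentre series x
    # resampled with a constant time step dt (seconds)
    lags = np.arange(1, int(max_lag_s / dt) + 1)
    D = np.array([np.mean((x[:-k] - x[k:])**2) for k in lags])
    return lags * dt, D

def tau_aa(tau, D, n_sat=20):
    # decorrelation time: first crossing of D_s/e, the saturation
    # level D_s being estimated from the last n_sat points
    D_s = np.mean(D[-n_sat:])
    return tau[np.argmax(D >= D_s / np.e)]
\end{verbatim}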
\subsection{Error analysis}
\label{par:errorstau}
The coherence time is deduced from the AA decorrelation time $\tau_{AA}$ defined by Eq.~\ref{eq:tauaa}. To calculate the error on $\tau_{AA}$, we express the finite difference
\be
\delta D_0(\tau) \simeq D_0'(\tau)\: \delta \tau
\ee
where $D_0(\tau)=\frac{D(\tau)}{D_s}$ is the normalised structure function and $D_0'(\tau)$ the derivative of $D_0$. Then, it is possible to estimate the error $\delta\tau$ at $\tau=\tau_{AA}$:
\be
\delta \tau=\frac{\Delta D_0}{D_0'(\tau_{AA})}
\ee
The error on $D_0$ can be estimated as the standard deviation of the structure function in the saturation zone; typical values are 10\% to 20\%. The derivative $D_0'(\tau_{AA})$ can be estimated by the slope of the structure function at $\tau=\tau_{AA}$. Errors on $\tau_{AA}$ were calculated for each of the 6 structure functions, for a 3-month data sample. We found a typical error of $\sim 30\%$. Relative errors on $\tau_{AA,x|y}$ for each sub-pupil are summarised in the table below
\begin{tabular}{l|c|c|c|c|c|c}\hline
&\multicolumn{2}{c}{sub-pup. 1} &
\multicolumn{2}{c}{sub-pup. 2} &
\multicolumn{2}{c}{sub-pup. 3} \\
&$x$ & $y$ & $x$ & $y$ & $x$ & $y$\\ \hline
$\frac{\Delta \tau}{\tau_{AA}}$ &29\% & 35\% & 29\% & 37\% & 27\% & 27\% \\ \hline
\end{tabular}
The error on $\tau_{AA}$ propagates to the effective wind speed, giving a contribution $\delta_{v,\tau}$ to the uncertainty on $\bar v$, obtained by differentiation of Eq.~\ref{eq:veff}. For a relative error of $30\%$ on $\tau_{AA}$, this contribution $\delta_{v,\tau}$ ranges from 10\% (for $\tau_{AA}=6$ms) to 20\% (for $\tau_{AA}=24$ms).
In addition, the effective wind speed $\bar v$ calculated from Eq.~\ref{eq:veff} needs an estimate of the outer scale ${\cal L}_0$. However, as discussed in section~\ref{par:errorsL0}, the outer scale is strongly filtered and a measurement is not always available. In this case the standard value ${\cal L}_0=20$m is used. This results in a bias $\delta_{v,L}$ on the effective wind speed. This bias remains below 20\% for outer scales ${\cal L}_0 \in [10, 40]$m, which covers the majority of situations on traditional sites.
Combining these two contributions, the relative uncertainty on $\bar v$ is then $\frac{\delta_v}{\bar v}\simeq 20$\% to 30\%.
The error $\delta_{\tau 0}$ on the coherence time $\tau_0$ is obtained from Eq.~\ref{eq:tau0}:
\be
\frac{\delta_{\tau 0}}{\tau_0}=\frac{\delta_{\epsilon_0}}{\epsilon_0}+ \frac{\delta_v}{\bar v} \simeq 25\% \; \mbox{ to } \; 35\%
\ee
\section{Outer scale measurements}
\label{par:L0}
\subsection{Theory}
The outer scale is, among the 4 turbulence parameters measured by GDIMM, the most difficult to estimate with a small instrument. In previous papers \cite{Ziad94, Aristidi14} we proposed to make use of variances of the absolute motions of sub-images to estimate the outer scale $\lo$. These absolute variances (in square radians) are given by \cite{Ziad94}
\be
\label{eq:varabs}
\sigma_D^2=0.17\, \lambda^2 r_0^{-5/3}\, (D^{-1/3}-1.525 \lo^{-1/3})
\ee
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{estimators_vs_l0.eps}
\caption{Outer scale estimators: $1/R$ (Eq.~\ref{eq:ratioRL0}) and $Q_{i,l|t}$ (Eq.~\ref{eq:ratioQL0}) as a function of $\lo$.}
\label{fig:diffvarl0}
\end{center}
\end{figure}
Because of telescope vibrations, direct estimation of $\lo$ from absolute variances using Eq.~\ref{eq:varabs} is not reliable. Our first idea, following the work by~\cite{Ziad94}, was to use the inverse relative difference of variances measured with sub-pupils 1 (or 2) and 3 (diameters 6cm and 10cm), i.e.
\be
R=\frac{\sigma_{D1}^2}{\sigma_{D1}^2 - \sigma_{D3}^2}=\frac{D_1^{-1/3}-1.525 \lo^{-1/3}}{D_1^{-1/3}-D_3^{-1/3}}
\label{eq:ratioRL0}
\ee
But with our values for $D_1$ and $D_3$, the variation is weak for decametric values of $\lo$, as illustrated by Fig.~\ref{fig:diffvarl0} and Table~\ref{table:estl0}. We have $1/R=0.216$ for $\lo$=10m and 0.200 for $\lo$=20m. To extract reliable values of $\lo$ from this estimator, we would need high precision on variances (about 1\%), which is not achieved: the statistical error on variances is of the order of 5\% as discussed in Sect.~\ref{par:errorseeing}, and there is some bias from telescope vibrations.
We then looked for another estimator for $\lo$, and found that it was possible to use the ratio of absolute to differential variances of image motion:
\be
\label{eq:ratioQ}
Q_i=\frac{\sigma_{Di}^2}{\sigma_{l|t}^2}
\ee
where $\sigma_{D_i}^2$ is the absolute variance corresponding to the sub-pupil $i$, and $\sigma_{l|t}^2$ the longitudinal or transverse differential variance used to calculate the seeing (Eq.~\ref{eq:seeing}). This gives two expressions for the ratios $Q_i$
\be
\begin{array}{lll}
Q_{i,t} & = & \displaystyle \frac{\sigma_{D_i}^2}{\sigma_{t}^2}\; = \; 0.17 \frac{D_i^{-1/3}-1.525 \lo^{-1/3}}{0.364 D_1^{-1/3} -0.2905 B^{-1/3}}\\ \\
Q_{i,l} & = & \displaystyle\frac{\sigma_{D_i}^2}{\sigma_{l}^2}\; = \; 0.17 \frac{D_i^{-1/3}-1.525 \lo^{-1/3}}{0.364 D_1^{-1/3} -0.1904 B^{-1/3}}
\end{array}
\label{eq:ratioQL0}
\ee
Using absolute variances from the 3 sub-pupils, we get 6 estimations of $\lo$, from which we take the median value. Note that the absolute variance at the numerator of Eq.~\ref{eq:ratioQ} may be contaminated by telescope vibrations. Hence we use only the $x$ direction (declination axis) to compute absolute variances, to reduce oscillations from the motor of the mount. Fig.~\ref{fig:diffvarl0} shows the variation of the ratios $1/R$ and $Q_{i,l|t}$ as a function of $\lo$. All estimators have a weak dependence on decametric $\lo$, but the ratios $Q_i$ are a little more sensitive. In Table~\ref{table:estl0} we computed the expected $Q_i$ ratios for $\lo=10$m and $\lo=20$m, and the required precision on variances to discriminate between the 2 values of $\lo$. This required precision is 4 to 5\% for the ratios $Q_i$, while it is 1\% for the ratio $R$.
\begin{table}
\begin{tabular}{c|ccc}\hline
& $\lo=$ 10m & $\lo=20$m & Required precision \\
& & & on variances \\ \hline
$1/R$ & 0.216 & 0.200 & 1\% \\
$Q_{1,l}$ & 0.520 & 0.560 & 4\% \\
$Q_{1,t}$ & 0.725 & 0.782 & 4\% \\
$Q_{3,l}$ & 0.407 & 0.448 & 5\% \\
$Q_{3,t}$ & 0.578 & 0.625 & 5\% \\ \hline
\end{tabular}
\caption{Value of ratios $1/R$ (Eq.~\ref{eq:ratioRL0}) and $Q_{i,l|t}$ (Eq. \ref{eq:ratioQL0}) for $\lo=10$m and $\lo=20$m. Column 4 is the required precision on variances to discriminate between the 2 values of $\lo$.}
\label{table:estl0}
\end{table}
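As a numerical check, the estimators of Eqs.~\ref{eq:ratioRL0} and~\ref{eq:ratioQL0} are easily evaluated; the short Python sketch below reproduces the values of Table~\ref{table:estl0} within rounding (the baseline $B=0.20$m is assumed here for illustration):
\begin{verbatim}
D1, D3, B = 0.06, 0.10, 0.20       # diameters and baseline [m] (B assumed)

def inv_R(L0):                     # 1/R, Eq. (ratioRL0)
    return (D1**(-1/3) - D3**(-1/3)) / (D1**(-1/3) - 1.525*L0**(-1/3))

def Q(Di, L0, longitudinal=True):  # Q_{i,l|t}, Eq. (ratioQL0)
    c = 0.1904 if longitudinal else 0.2905
    return 0.17*(Di**(-1/3) - 1.525*L0**(-1/3)) \
           / (0.364*D1**(-1/3) - c*B**(-1/3))

for L0 in (10.0, 20.0):
    print(L0, inv_R(L0), Q(D1, L0), Q(D1, L0, False),
          Q(D3, L0), Q(D3, L0, False))
\end{verbatim}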
Note that this estimator uses ratios of variances and is therefore independent of scale calibration. Also, we can remark that it is not necessary to have pupils of different diameters: the method should work with any DIMM or with a Shack-Hartmann (however, in this case it will not be possible to filter data with the $H$ invariants presented hereafter).\\
\subsubsection*{$H$ Invariants}
\label{par:Hinv}
Combining Eqs \ref{eq:seeing} and \ref{eq:varabs}, we calculated the following ratios
\be
\begin{array}{lll}
H_{t} & = & \displaystyle \frac{\sigma_{D_i}^2-\sigma_{D_3}^2}{\sigma_{t}^2}\; = \; \frac{ 0.17 (D_1^{-1/3}-D_3^{-1/3})}{0.364 D_1^{-1/3} -0.2905 B^{-1/3}}\\ \\
H_{l} & = & \displaystyle\frac{\sigma_{D_i}^2-\sigma_{D_3}^2}{\sigma_{l}^2}\; = \; \frac{0.17 (D_1^{-1/3}-D_3^{-1/3})}{0.364 D_1^{-1/3} -0.1904 B^{-1/3}}
\end{array}
\label{eq:Hinv}
\ee
where $i=1,2$ refers to sub-pupil 1 or 2 (they have the same diameter $D_1=D_2=6$cm). These ratios appear, to first order, to be independent of turbulence conditions, so we named them ``$H$ invariants''. In fact this invariance is valid for large outer scales ($\frac{\lo}{D_i}\gg 1$): there is indeed a weak dependence of the differential variances $\sigma^2_{l|t}$ on the outer scale \cite{ZiadThese}. This dependence is generally omitted in seeing estimations (Eqs.~\ref{eq:seeing} and~\ref{eq:seeingK}). It can be estimated using Eqs.~5.4 and~5.8 of~\cite{Conan00}. For pupils of diameter 6cm, the effect of the outer scale on differential variances is under 0.1\% for $\lo>10$m and over 3\% for $\lo<1$m. The impact on $H$ invariants is $\lesssim 0.03$\% for $\lo>10$m and becomes greater than 2\% for $\lo<1$m (such very low outer scales are nevertheless exceptional: at Calern they correspond to less than 0.5\% of measured values).\\
Values of $H$ corresponding to our instrument are
\be
H_t=0.1567 \quad \mbox{and} \quad H_l=0.1128
\ee
These invariants are easy to calculate and can be used as a filter to reject bad data (contaminated by telescope vibrations). More discussion will be presented in Sect.~\ref{par:dataprocL0}.
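A minimal sketch of this filtering step is given below (the thresholds are those adopted in Sect.~\ref{par:dataprocL0}; variable names are illustrative):
\begin{verbatim}
H_MIN, H_MAX = 0.05, 0.25    # rejection thresholds (see Sect. on estimation)

def h_filter(var_abs_i, var_abs_3, var_diff):
    # H_{l|t} of Eq. (Hinv); inputs are numpy arrays of variances
    H = (var_abs_i - var_abs_3) / var_diff
    return (H > H_MIN) & (H < H_MAX)    # True where data are kept
\end{verbatim}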
\subsection{Outer scale estimation}
\label{par:dataprocL0}
Estimation of the outer scale requires absolute variances $\sigma_{Di}^2$ of AA fluctuations for each pupil (in the $x$ direction only). As for the differential variances used for seeing estimation, absolute variances are calculated from each image cube and corrected for exposure time, following the same process as for differential variances (Eq.~\ref{eq:seeingcorrt}). One obtains a set of absolute and differential variances every 2~min.
To reduce noise, time series of variances (both absolute and differential) are smoothed by a temporal sliding average. After some trials, the width of the temporal window was set to 10~min, leading to an average of 5 successive variances and reducing the error by a factor $\sqrt 5$ (see Sect.~\ref{par:errorsL0}).
Fig.~\ref{fig:varts} shows an example of the evolution of these smoothed variances for the night of 2018-10-03. Two things can be noticed in these curves:
\begin{itemize}
\item The variance $\sigma_{D3}^2$ corresponding to sub-pupil 3 should be smaller than $\sigma_{D1}^2$ according to Eq.~\ref{eq:varabs}. This is not always the case: fluctuations are sometimes larger than the expected difference.
\item The differential variance $\sigma_l$ between sub-pupils 1 and 2 is almost twice as large as the absolute variances. This is good news: it means that the AA fluctuation signal is not dominated by correlated vibrations due to the telescope mount.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{var_ts_2018-10-03.eps}
\caption{Time series of variances observed at Calern on 2018-10-03. Solid lines: absolute variance in the $x$ direction (declination) for the 3 sub-pupils. Dashed line: differential longitudinal variance $\sigma_l^2$ between pupils 1 and 2. These variances were smoothed by a 10-min-wide sliding average.}
\label{fig:varts}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=8cm]{Hl_histo_2018-AugOct.eps}
\includegraphics[width=8cm]{Ht_histo_2018-AugOct.eps}
\caption{Histograms of the invariants $H_l$ (left) and $H_t$ (right) measured at Calern during the period August--October 2018. Blue (resp. orange) bars correspond to sub-pupil 1 (resp. 2). The vertical solid line is the theoretical value, and the two dashed lines are the rejection thresholds.}
\label{fig:histoH}
\end{center}
\end{figure*}
The 6 ratios $Q$ are calculated from Eq.~\ref{eq:ratioQ}, leading to 6 estimations ${\cal L}_{0,i}$ of the outer scale. Then, we calculate the invariants $H_{l|t}$ (Eq.~\ref{eq:Hinv}) to be used as a filter for bad data. Histograms of $H$ invariants obtained during a 3~month period (August--October 2018) are displayed in Fig.~\ref{fig:histoH}. They present a peak at the theoretical values ($H_t=0.1567$ and $H_l=0.1128$), with a somewhat large dispersion around them. This dispersion results mainly from contamination of variances by noise and/or telescope vibrations (there is also a weak contribution due to the dependence of $H_{l|t}$ on the outer scale). After some trials, we decided to reject data for which $H_{l|t}>0.25$ or $H_{l|t}< 0.05$. This led to the rejection of about 70\% of the individual outer scales ${\cal L}_{0,i}$. The final outer scale value is the median of the remaining ${\cal L}_{0,i}$ after filtering.
\subsection{Error analysis}
\label{par:errorsL0}
The estimation of $\lo$ is made from the ratios $Q_{i}$ by inverting Eq.~\ref{eq:ratioQL0}. The error $\delta_Q$ comes from errors on variances which propagate to $Q_{i}$ via Eq.~\ref{eq:ratioQ}.
To increase accuracy, we perform a rolling average of the measured variances (which are calculated every 2~min) over time intervals of $T$ (set to $T=10$ minutes), corresponding to an average of $N_v=5$ individual variances and thus reducing the error on variances by a factor $\sqrt{N_v}$. The relative error $\delta_Q$ on $Q_i$ is
\be
\frac{\delta_Q}{Q_i}=\frac 1{\sqrt{N_v}} \left(\frac{\delta\sigma_{Di}^2}{\sigma_{Di}^2}+ \frac{\delta\sigma_{l|t}^2}{\sigma_{l|t}^2}\right)
\ee
Taking only the statistical error on variances (these indeed dominate, as discussed in Sect.~\ref{par:staterr}), we obtain $\frac{\delta_Q}{Q_i}\simeq 5\%$ for $N=1024$ images and $N_v=5$. The error $\delta \lo$ on the outer scale is obtained by the finite difference
\be
\delta \lo=\lo'(Q_i)\ \delta_Q
\ee
where the derivative $\lo'(Q_i)$ is calculated from Eq.~\ref{eq:ratioQL0}.
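In practice this derivative can be evaluated numerically; a minimal sketch, with the same sub-pupil diameter as above and an assumed baseline $B$:
\begin{verbatim}
def Q1l(L0, D1=0.06, B=0.20):      # Q_{1,l}(L0), Eq. (ratioQL0); B assumed
    return 0.17*(D1**(-1/3) - 1.525*L0**(-1/3)) \
           / (0.364*D1**(-1/3) - 0.1904*B**(-1/3))

def rel_error_L0(L0, rel_err_Q=0.05, h=1e-3):
    # propagate a relative error on Q to L0 (central difference)
    dQ = (Q1l(L0 + h) - Q1l(L0 - h)) / (2*h)
    return rel_err_Q * Q1l(L0) / dQ / L0
\end{verbatim}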
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{err_L0.eps}
\caption{Relative error on $\lo$ as a function of $\lo$ for different values of $N_v$ (number of averaged variances). The number of images in a sequence is $N=1024$.}
\label{fig:errL0}
\end{center}
\end{figure}
The expected relative error $\frac{\delta\lo}{\lo}$ (due to the statistical error) is shown in Fig.~\ref{fig:errL0}. Three curves are plotted for different values of $N_v$ in the range $\lo\in[5,50]$m. All show that low $\lo$ values are estimated with better precision. With $N_v=1$ (no variance averaging) it is impossible to obtain reliable values of $\lo$ (the relative error is $\sim$70\% for $\lo=20$m). An average of at least $N_v=5$ individual variances is necessary to obtain acceptable error bars ($\frac{\delta\lo}{\lo}\simeq 30$\% for $\lo=20$m). The drawback is that one obtains estimations of $\lo$ smoothed over 10-min time intervals (for $N_v=5$). This is greater than the characteristic time of outer scale fluctuations, whose value, estimated by GSM, is of the order of 6~min \cite{Ziad16}.
The statistical error is not the only contribution to the total uncertainty, especially for absolute variances, which are contaminated by vibrations. Their effect on $\lo$ can be assessed from the remaining distribution of $H$ invariants after filtering (see Sect.~\ref{par:dataprocL0}). The thresholds on $H_{l|t}$ used to filter the data were obtained as a trade-off between data quality and the number of variances kept for outer scale estimation. The remaining $H$ distribution has a dispersion $\Delta H\simeq 0.1$ around the nominal value. This results in an error $\Delta\lo$ on the outer scale. To estimate it, we rewrite Eq.~\ref{eq:Hinv} as
\be
H_{l|t}=Q_{i,l|t}-Q_{3,l|t}
\ee
so that
\be
\Delta H\simeq \Delta Q_{i,l|t}+\Delta Q_{3,l|t} \simeq 0.1
\ee
corresponding to an uncertainty $\Delta Q_{i,l|t}\simeq 0.05$ on the ratios $Q$. Writing
\be
\Delta Q_{i,l|t}=\frac{\partial Q_{i,l|t}}{\partial\lo} \: \Delta\lo
\ee
and making use of Eq.~\ref{eq:ratioQL0} to calculate $\frac{\partial Q_{i,l|t}}{\partial\lo}$, we found that the resulting relative error on $\lo$ is of the order of 50\% for $\lo$ around 20m.
We are currently working on improvements to the $\lo$ estimation algorithm, to find better metrics and to reduce the effect of vibrations, which are an issue on small telescopes.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{params_hist.eps}
\caption{Histograms of turbulence parameters at Calern, calculated at the wavelength $\lambda=0.5\mu$m.}
\label{fig:paramshisto}
\end{center}
\end{figure*}
\section{First long-term GDIMM statistics}
\label{par:results}
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c}
& $\epsilon_0$ & $\theta_0$ & $\tau_0$ & $\lo$ & $\bar h$ & $\bar v$ \\
& [$''$] & [$''$] & [ms] & [m] & [m] & [m/s] \\ \hline
Median & 1.09 & 1.73 & 2.30 & 26.00 & 3436 & 12.84\\
Mean & 1.23 & 1.86 & 3.10 & 37.14 & 3698 & 13.59 \\
Std. dev. & 0.52 & 0.65 & 1.80 & 29.20 & 1566 & 5.47\\
$1^{st}$ quartile & 0.80 & 1.35 & 1.40 & 13.50 & 2504 & 9.24 \\
$3^{rd}$ quartile & 1.49 & 2.21 & 3.80 & 51.00 & 4618 & 16.74\\
$1^{st}$ centile & 0.45 & 0.58 & 0.50 & 3.10 & 1121 & 3.03\\
Last centile & 3.37 & 4.27 & 14.90 & 142.25 & 10279 & 30.23\\ \hline
Paranal & 0.81 & 2.45 & 2.24 & 22 & 3256 & 17.3 \\
La Silla & 1.64 & 1.25 & 1.46 & 25.5 & 3152 & 13.1 \\
Mauna Kea & 0.75 & 2.94 & 2.43 & 24 & 2931 & 17.2 \\ \hline
\end{tabular}
\caption{Statistics of turbulence parameters measured at Calern (at the wavelength $\lambda=0.5\mu$m) during the period June 2015--October 2018. Paranal, La Silla and Mauna Kea values are from the GSM database.}
\label{table:paramstat}
\end{table}
A total of 70097 turbulence parameter measurements (22698 for ${\cal L}_0$) were collected at Calern observatory during the $3\frac{1}{2}$~year period from June~2015 to October~2018. Half of the data were obtained during the summer season (June to September), when meteorological conditions are better. Statistics are presented in Table~\ref{table:paramstat} for the 4 turbulence parameters ($\epsilon_0$, $\theta_0$, $\tau_0$, $\lo$) and for the equivalent turbulence altitude (Eq.~\ref{eq:hmoy}) and the effective wind speed (Eq.~\ref{eq:tau0}). Histograms are displayed in Fig.~\ref{fig:paramshisto} and show a classical log-normal shape for all parameters. Comparison with other astronomical sites in the world (values for Paranal, La Silla and Mauna Kea are given in Table~\ref{table:paramstat}) shows that the Calern plateau is an average site.
The seeing is slightly lower in summer: we measured a median value of $0.96''$ in July and August (the median winter seeing during the period November--January is 1.21$''$). As a consequence, the median coherence time is higher in summer (3.2ms in July--August, 2.40ms in November--January). The outer scale $\lo$ has values similar to other sites such as Mauna Kea or La Silla.
Sequences of several hours of good seeing were sometimes observed, which is a good point for this site (and was already known by ``old'' observers on interferometers during the 80's and 90's). Fig.~\ref{fig:histseeingsummerfit} displays seasonal seeing histograms, calculated for the summer (July and August) and the winter (November--March). They appear to be well modelled by a sum of two log-normal functions (dashed curves on the plots; their sum is the solid line). This is evidence of the existence of two regimes: a ``good seeing'' distribution with a median value $\epsilon_1$ and a ``medium seeing'' situation with a median value $\epsilon_2$. In summer, we have $\epsilon_1=0.63''$ (the good seeing distribution contains 22\% of the data) and $\epsilon_2=0.95''$ (78\% of the data). In winter we have $\epsilon_1=0.66''$ (15\% of the data) and $\epsilon_2=1.15''$ (85\% of the data).
The equivalent turbulence altitude $\bar h$ has a median value around 3km, which is comparable to other classical sites. However, we noticed a difference between summer and winter: during the 2 months of July and August, the median value of $\bar h$ was 3940m, while it was only 2870m in winter (November to March). Situations with a high value of $\bar h$ correspond to less turbulence in the ground layer, giving good seeing conditions, as the ground layer is the main contributor to the total seeing.
As for the seeing, the effective wind speed histograms (Fig.~\ref{fig:histoveffsum}) are bimodal and can be modelled by the sum of two log-normal functions. They peak at $\bar v_1=6.7$m/s and $\bar v_2=13$m/s both for the summer and the winter. These modes contain respectively 32\% and 68\% of the data in summer; these proportions become 53\% and 47\% in winter. The value $\bar v_1=6.7$m/s is indeed close to the median ground wind speed $v_G=5.7$m/s measured by the meteo station.
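The fits shown in Figs.~\ref{fig:histseeingsummerfit} and~\ref{fig:histoveffsum} follow a standard least-square procedure; a minimal sketch is given below (a synthetic sample stands in for the measured data, and the initial guesses are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lognorm(x, a, mu, sig):
    return a/x * np.exp(-(np.log(x) - mu)**2 / (2*sig**2))

def two_lognorm(x, a1, m1, s1, a2, m2, s2):
    return lognorm(x, a1, m1, s1) + lognorm(x, a2, m2, s2)

rng = np.random.default_rng(0)     # synthetic bimodal "seeing" sample
sample = np.concatenate([rng.lognormal(np.log(0.66), 0.3, 1500),
                         rng.lognormal(np.log(1.15), 0.3, 8500)])
counts, edges = np.histogram(sample, bins=50)
centres = 0.5*(edges[:-1] + edges[1:])
popt, _ = curve_fit(two_lognorm, centres, counts,
                    p0=[100, np.log(0.66), 0.3, 100, np.log(1.15), 0.3])
\end{verbatim}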
\begin{figure*}
\begin{center}
\includegraphics[width=8cm]{histo_seeing_summer_fit.eps}
\includegraphics[width=8cm]{histo_seeing_winter_fit.eps}
\caption{Left: seeing histogram for the summer (July--August). Right: seeing histogram for the winter (November--March). Superimposed curves are a least-square fit by a sum of two log-normal distributions (individual log-normal curves are dashed lines). The percentage corresponding to each log-normal is indicated in the legend.}
\label{fig:histseeingsummerfit}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=8cm]{histo_veff_summer_fit.eps}
\includegraphics[width=8cm]{histo_veff_winter_fit.eps}
\caption{Histograms of the effective wind speed in summer (left) and winter (right). Superimposed curves are a least-square fit by a sum of two log-normal distributions.}
\label{fig:histoveffsum}
\end{center}
\end{figure*}
\section{Conclusions}
\label{par:conclusion}
We have presented the GDIMM, a new turbulence monitor aiming at measuring the four integrated parameters
of the optical turbulence, i.e. seeing, isoplanatic angle, coherence time and outer scale. GDIMM is a small instrument, easy to transport to make measurements at any site in the world.
Seeing measurements are given by differential motion, according to a well-established theory and to an instrumental concept that makes them robust to telescope vibrations \cite{Sarazinroddier90, Verninmunoz95}. Isoplanatic angle measurements are made via the scintillation, following here again a well-known technique \cite{Looshogge79}, which has become popular thanks to its simplicity. It appears to give satisfactory results when compared to other techniques \cite{Ziad18b}. We indeed used these two techniques intensively to measure $\epsilon_0$ and $\theta_0$ during the site-testing campaigns at Dome~C in Antarctica (see \cite{Aristidi12} and references therein).
The method for estimating the coherence time from the decorrelation time of AA fluctuations is recent. It was proposed a few years ago \cite{Ziad12} and is based upon analytical developments by \cite{Conan00}. First tests on reprocessed GSM data and comparisons with radiosoundings \cite{Ziad12} showed the pertinence of the method. The instrumental concept is simple compared to other monitors such as the MASS-DIMM \cite{Kornilov07}: the only requirement is a camera allowing a high frame rate (at least 100 frames per second) to properly sample the AA decorrelation time. After GSM in the past, the GDIMM is now, to our knowledge, the first monitor to use this method routinely to calculate $\tau_0$.
A true asset of GDIMM is the possibility to measure the outer scale. In particular, obtaining reliable values of ${\cal L}_0$ is a challenge with small instruments, and this parameter is often neglected, though it has a strong impact on high angular resolution techniques, especially for extremely large telescopes (see the recent review by \cite{Ziad16}). We proposed here a method based on the ratios of absolute to differential motions. It is simple and can work with any DIMM or Shack-Hartmann based monitor, but requires good stability of the telescope mount since it is sensitive to vibrations.
A portable version of the GDIMM has been developed in parallel to the Calern one, to perform turbulence measurements at any site in the world. Discussions with ESO (European Southern Observatory) are currently in progress to make GDIMM and PML observations at Paranal and compare with the ESO Astronomical Site Monitor \cite{Chiozzi16}.
\section{Acknowledgments}
We would like to thank Jean-Marie Torre and Herv\'e Viot, from the Calern technical staff, for their valuable help on the electronics of the instrument. Thanks also to M. Marjani, who worked on our data during his master thesis. The CATS project has been carried out with the financial support of CNES, Observatoire de la C\^ote d'Azur, Labex First TF, AS-GRAM, Federation Doblin, Universit\'e de Nice-Sophia Antipolis and R\'egion Provence Alpes
C\^ote d'Azur.
\section{Introduction}
Given a data set, learning a dictionary in which each example admits a sparse representation is tremendously useful in a number of tasks~\citep{aharon2006rm,mairal2011task}. This problem, known as sparse coding~\citep{olshausen1997sparse} or dictionary learning~\citep{garcia2018convolutional}, has been the subject of significant investigation in recent years in the signal processing community. A growing body of work has mapped the sparse coding problem into encoders for sparse recovery~\citep{LISTA}, and into autoencoders purely for classification~\citep{rolfe2013discriminative} or denoising~\citep{simon2019rethinking, tolooshams2020deep} purposes.\\
Autoencoders are widely used for unsupervised learning. Their integration with supervised tasks and classifiers has become popular for their regularization power and reduction of the generalization gap~\citep{vincent2010stacked, epstein2018joint, epstein2019generalization}. \cite{rolfe2013discriminative} have shown benefits of autoencoders and sparse features in discriminative tasks.\\
For data reconstruction, recent work has highlighted some limitations of convolutional sparse coding (CSC) autoencoders~\citep{simon2019rethinking} and its multi-layer and deep generalizations
~\citep{SulamJeremias2018OMBP, zazo2019convolutional}. \cite{simon2019rethinking} argue that the sparsity levels that CSC allows can only accommodate very sparse vectors, making it unsuitable to capture all features of signals such as natural images, and propose to compute the minimum mean-squared error solution under the CSC model, which is a dense vector capturing a richer set of features.\\
To address the aforementioned limitations of classical sparse coding, we propose a dense and sparse coding model that represents a signal as the sum of two components: one that admits a dense representation $\x$ in a dictionary $\A$ that is useful for reconstruction, and another whose representation $\u$ is discriminative and sparse in a second dictionary $\B$. Based on empirical evidence, the authors in
~\citep{zazo2019convolutional} argue that a multi-layer extension of this model can, in principle, have arbitrary depth. However, to our knowledge, the dense and sparse coding model has not yet been fully analyzed. Our contributions are as follows.\\
\noindent \textbf{Conditions for identifiability and recovery by convex optimization}: We derive conditions under which the dense and sparse representation is unique. We then propose a convex program for recovery that minimizes $\vectornorm{\A\x}^2_2 + \vectornorm{\u}_1$, subject to linear constraints. \\
\noindent \textbf{Phase-transition curves}: We demonstrate through simulations that the convex program can successfully solve the dense and sparse coding problem.\\
\noindent \textbf{Discriminative reconstruction}: We propose a dense and sparse autoencoder (DenSaE) that has competitive discriminative power and improves the representation capability compared to sparse networks.\\
The paper is organized as follows. \textbf{Section~\ref{sec:th}} discusses theoretical analysis of the dense and sparse coding problem. Phase transition, classification, and denoising experiments appear in \textbf{Section~\ref{sec:exps}}. We conclude in \textbf{Section~\ref{sec:conclusion}}.
\section{Related work}
We comment on the most closely related models. Given the measurements $\y$, the problem of recovering $\x$ and $\u$ is similar in flavor to sparse recovery in the union of dictionaries~\citep{donoho2001uncertainty,elad2002generalized,donoho2003optimally,soltani2017fast, studer2011recovery, studer2014stable}. Most results in this literature take the form of an uncertainty principle that relates the sum of the sparsity of $\x$ and $\u$ to the mutual coherence between $\A$ and $\B$, and which guarantees that the representation is unique and identifiable by $\ell_1$ minimization. To our knowledge, the analysis of this program is novel and in sharp contrast to classical settings in sparse approximation, in which the objective consists of a single sparsifying norm, rather than the combination of different norms. Robust PCA~\citep{candes2011robust}, which decomposes a matrix as the sum of low-rank and sparse matrices, uses the combination of the $\ell_1$ and nuclear norms, giving it a flavor similar to our problem.\\
Our model resembles weighted LASSO \citep{lian2018weighted,mansour2017recovery}. Compared to weighted LASSO, we can directly map the weighted LASSO objective $\lVert\bm{W}\bm{\alpha}\rVert_1$ to $\lVert\u\rVert_1$ by letting $\bm{\alpha} = \begin{bmatrix}\x & \u\end{bmatrix}^T$ and choosing appropriately the entries of a diagonal matrix $\bm{W}$, with $W_{ij} \in \{0, 1\}$; however, in the weighted LASSO formulation, constraints can only be enforced on the sparse component $\u$. Our work differs in that a significant part of our analysis is the directed Euclidean norm constraint on $\x$, which recovers a unique solution $\x^{\star} \in \text{Ker}(\A)^{\perp}$. Our model can also be interpreted as a special case of Morphological Component Analysis (MCA) \citep{elad2005simultaneous} for $K=2$, $\bm{s} = \sum_{k=1}^{K}\bm{\Phi}_k \bm{\alpha}_k$, with, however, some distinct differences: i) MCA encodes different morphological structures via the dictionaries $\bm{\Phi}_k$. We encode a smooth morphological component via the whole product $\A\x$, which is conceptually different, and ii) we make no assumption of sparsity on the dense component $\x$. This leads to an optimization objective that is the combination of $\ell_1$ and $\ell_2$ norms, unlike that of MCA. Finally, a bare application of noisy sparse coding would treat $\e = \A\x$ as arbitrary noise, hence i) recovers $\u$ approximately and ii) cannot recover $\x$. However, in our analysis, the term $\A\x$ is not just undesired noise but represents a sought-out feature. We can recover both $\x$ and $\u$ exactly. See \textbf{Appendix C} for a comparison of our model to noisy compressive sensing. We note that the full dense and sparse coding model is $\y = \A\x+\B\u+\e$ where $\e$ is Gaussian noise.\\
\noindent \textbf{Notation}: Lowercase and uppercase boldface letters denote column vectors and matrices, respectively. Given a vector $\x \in \real^{n}$ and a support set $S\subset \{1,...,n\}$,
$\x_S$ denotes the restriction of $\x$ to indices in $S$. For a matrix $\A \in \real^{m\times p}$, $\A_{S}$ is a submatrix of size $m \times |S|$ with column indices in $S$. The column space of a matrix $\A$ (the span of the columns of $\A$) is designated by $\col(\A)$, its null space by $\text{Ker}(\A)$. We denote the Euclidean, $\ell_1$ and $\ell_{\infty}$ norms of a vector, respectively, as $||\x||_{2}$, $||\x||_{1}$, and $||\x||_{\infty}$. The operator and infinity norms of a matrix $\A$ are respectively denoted as $\vectornorm{\A}$ and $\vectornorm{\A}_{\infty}$. The sign function, applied componentwise to a vector $\x$, is denoted by $\sgn(\x)$. The indicator function is denoted by $\mathbbm{1}$. The column vector $\e_{i}$ denotes the vector of zeros except a $1$ at the $i$-th location. The orthogonal complement of a subspace $\bm{W}$ is denoted by $\bm{W}^{\perp}$. The operator $\mathcal{P}_{\bm{W}}$ denotes the orthogonal projection operator onto the subspace $\bm{W}$.\\
\section{Theoretical Analysis}\label{sec:th}
The dense and sparse coding problem studies the solutions of the linear system $\y = \A\x+\B\u$. Given matrices $\A \in \real^{m\times p}$ and $\B \in \real^{m\times n}$ and a vector $\y \in \real^{m}$, the goal is to provide conditions under which there is a unique solution ($\x^{*},\u^{*}$), where $\u^{*}$ is $s$-sparse, and an algorithm for recovering it.\\
\subsection{Uniqueness results for the feasibility problem}
In this subsection, we study the uniqueness of solutions to the linear system accounting for the different structures the measurement matrices $\A$ and $\B$ can have.
For more details of all the different cases we consider, we refer the reader to \textbf{Appendix A}. The main result of this subsection is Theorem \ref{thm:uniqueness_maximum_angle} which, under a natural geometric condition based on the minimum principal angle between the column space of $\A$ and the span of $s$ columns in $\B$, establishes a uniqueness result for the dense and sparse coding problem. Since the vector $\u$ in the proposed model is sparse, we consider the classical setting of an overcomplete measurement matrix $\B$ with $n\gg m$. The next theorem provides a uniqueness result assuming a certain direct sum representation of the space $\real^{m}$.
\begin{thm}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$.
If $\B_{S}$ has full column rank and $\real^{m} = \col(\A)\oplus \col(\B_S)$, then the unique solution to the linear system, under the condition that any feasible $s$-sparse vector $\u$ is supported on $S$ and
any feasible $\x$ is in $\ker(\A)^{\perp}$, is $(\x^{*},\u^{*})$.
\end{thm}
\begin{proof}
Let $(\x,\u)$, with $\u$ supported on $S$ and $\x\in \ker(\A)^{\perp}$, be another solution pair. It follows that $\A\deltaf+\B_{S}(\deltas)_{S} = \bm{0}$ where $\deltaf = \x-\x^{*}$ and $\deltas = \u_{S}- \u^{*}_{S}$. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns are the orthonormal bases of $\col(\A)$ and $\col(\B_{S})$ respectively. The equation $ \A\deltaf+\B_{S}(\deltas)_{S} = \bm{0}$ can equivalently be written as $ \sum_{i=1}^{r} \langle \A\deltaf\,,\U_i \rangle \U_i + \sum_{i=1}^{q} \langle \B_{S}(\deltas)_{S}\,,\V_i \rangle \V_i = \bm{0}$
with $\U_i$ and $\V_i$ denoting the $i$-th column of $\U$ and $\V$ respectively. More compactly, we have $\begin{bmatrix} \U & \V \end{bmatrix} \begin{bmatrix} \{\langle \A\deltaf\,,\U_i\rangle\}_{i=1}^{r} \\ \{\langle \B_{S}(\deltas)_{S}\,,\V_i\rangle\}_{i=1}^{q}\end{bmatrix} = \bm{0}$. Noting that the matrix $\begin{bmatrix} \U & \V \end{bmatrix}$ has full column rank, the homogeneous problem admits only the trivial solution, implying that $\A\deltaf=\bm{0}$ and $\B_{S}(\deltas)_{S} = \bm{0}$. Since $\B_{S}$ has full column rank and $\deltaf \in \{\ker(\A) \cap \ker(\A)^{\perp}\}$, it follows that $\deltaf = \deltas= \bm{0}$. Therefore, $(\x^{*},\u^{*})$ is the unique solution.
\end{proof}
The uniqueness result in the above theorem hinges on the representation of the space $\real^{m}$ as the direct sum of the subspaces $\col(\A)$ and $\col(\B_S)$.
We use the definition of the minimal principal angle between two subspaces, and its formulation in terms of singular values
\citep{bjorck1973numerical}, to derive an explicit geometric condition for the uniqueness analysis of the linear system in the general case.
\begin{defn}
Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns are orthonormal bases of $\col(\A)$ and $\col(\B)$ respectively. The minimum principal angle between the subspaces $\col(\A)$ and $\col(\B)$ is defined as follows:
\begin{equation}
\cos (\mu(\U,\V))= \underset{\u \in \col(\U), \v \in \col(\V)}{\max}\,\, \frac{\u^{T}\v}{||\u||_{2}||\v||_{2}},
\end{equation}
The cosine of the minimum angle $\mu(\U,\V)$ is equal to the largest singular value of $\U^T\V$: $\cos (\mu(\U,\V))=\sigma_{1}(\U^T\V)$.
\end{defn}
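In practice, $\sigma_{1}(\U^T\V)$ is straightforward to compute from $\A$ and $\B_{S}$; a minimal Python sketch (assuming full column rank, so that a QR factorization yields orthonormal bases):
\begin{verbatim}
import numpy as np

def cos_min_angle(A, B_S):
    # cosine of the minimum principal angle between col(A) and col(B_S)
    U, _ = np.linalg.qr(A)     # orthonormal basis of col(A)
    V, _ = np.linalg.qr(B_S)   # orthonormal basis of col(B_S)
    return np.linalg.svd(U.T @ V, compute_uv=False)[0]
\end{verbatim}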
\begin{thm}\label{thm:uniqueness_maximum_angle}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$.
Assume that $\B_{S}$ has full column rank. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns are the orthonormal bases of $\col(\A)$ and $\col(\B_{S})$ respectively. If $\cos(\mu(\U,\V))= \sigma_{1}(\U^T\V)<1$, then the unique solution to the linear system, under the condition that any feasible $s$-sparse vector $\u$ is supported on $S$ and any feasible $\x$ is in
$\ker(\A)^{\perp}$, is $(\x^{*},\u^{*})$.
\end{thm}
\begin{proof}
Consider any candidate solution pair $(\x^{*}+\deltaf, \u^{*}+\deltas)$. We will prove uniqueness by showing that $\A\deltaf+\B_{S}(\deltas)_{S}=0$ if and only if $\deltaf=\bm{0}$ and $(\deltas)_{S} =\bm{0}$.
Using the orthonormal basis sets $\U$ and $\V$, $\A\deltaf+\B_{S}(\deltas)_{S}$ can be represented as $\displaystyle \A\deltaf+\B_{S}(\deltas)_{S} =
\begin{bmatrix}
\U & \V\\
\end{bmatrix}
\begin{bmatrix}
\U^{T}\A\deltaf\\
\V^{T} \B_{S}(\deltas)_{S}\\
\end{bmatrix}
$. For simplicity of notation, let $\K$ denote the block matrix: $\displaystyle \K = \begin{bmatrix} \U & \V\\ \end{bmatrix}$. If we can show that the columns of $\K$ are linearly independent, it follows that $\A\deltaf+\B_{S}(\deltas)_{S}=\bm{0}$ if and only if $\A\deltaf=\bm{0}$ and $\B_{S}(\deltas)_{S}=\bm{0}$.
We now consider the matrix $\K^T\K$ which has the following representation
\begin{align*}
\K^T\K &= \begin{bmatrix}
[\I]_{r\times r} & [\U^{T}\V]_{r\times q}\\
[\V^{T}\U]_{q\times r} & [\I]_{q\times q}
\end{bmatrix} \\
&= \begin{bmatrix}
[\I]_{r\times r} & [\bm{0}]_{r\times q}\\
[\bm{0}]_{q\times r} & [\I]_{q\times q}
\end{bmatrix}
+
\begin{bmatrix}
[\bm{0}]_{r\times r} & [\U^{T}\V]_{r\times q}\\
[\V^{T}\U]_{q\times r} & [\bm{0}]_{q\times q}
\end{bmatrix}
.
\end{align*}
With the singular value decomposition of $\U^{T}\V$ being $\U^{T}\V = \Q\mathbf{\bSigma}\R^{T}$, the last matrix in the above representation has the following equivalent
form $\begin{bmatrix}
\bm{0} & \U^{T}\V\\
\V^{T}\U & \bm{0}
\end{bmatrix}
= \begin{bmatrix} \Q & \bm{0}\\ \bm{0} &\R \end{bmatrix}
\begin{bmatrix} \bm{0} & \bSigma\\\bSigma & \bm{0}\end{bmatrix}
\begin{bmatrix} \Q & \bm{0}\\ \bm{0} &\R \end{bmatrix} ^{T}
$.
It now follows that $\begin{bmatrix} \bm{0} & \U^{T}\V\\ \V^{T}\U & \bm{0} \end{bmatrix}$ is \emph{similar} to the matrix $ \begin{bmatrix}
\bm{0} & \mathbf{\bSigma}\\
\mathbf{\bSigma} & \bm{0}
\end{bmatrix}$. Hence, the nonzero eigenvalues of $\K^T\K$ are $1\pm \sigma_{i}$, $1\le i\le \min(r,q)$, with $\sigma_{i}$ denoting the $i$-th largest singular value of $\U^T\V$.
Using the assumption $\sigma_{1}<1$ yields the bound $ \lambda_{\min}\left(\K^T\K\right)>0$. It follows that the columns of $\K$ are linearly independent, and hence
$\A\deltaf=\bm{0}$ and $\B_{S}(\deltas)_{S} = \bm{0}$. Since $\B_{S}$ has full column rank and $\deltaf \in \{\ker(\A) \cap \ker(\A)^{\perp}\}$, it follows that $ \deltaf=\bm{0}$
and $(\deltas)_{S} = \bm{0}$. This concludes the proof.
\end{proof}
A restrictive assumption of the above theorem is that the support of the sought-after $s$-sparse solution $\u^{*}$ is known. We can remove this assumption by considering $\col(\A)$ and $\col(\B_{T})$ where $T$ is an arbitrary subset of $\{1,2,...,n\}$ with $|T|=s$. More precisely, we state the following corollary whose proof is similar to the proof of Theorem \ref{thm:uniqueness_maximum_angle}.
\begin{cor}\label{thm:uniqueness_maximum_angle_arbitrary_sparsity}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$
and $T$ be an arbitrary subset of $\{1,2,...,n\}$ with $|T|\le s$. Assume that any $2s$ columns of $\B$ are linearly independent. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns are the orthonormal bases of $\col(\A)$ and $\col(\B_{S\cup T})$ respectively. If $\cos(\mu(\U,\V))= \sigma_{1}(\U^T\V)<1$ holds for all choices of $T$, then the unique solution to the linear system is $(\x^{*},\u^{*})$, under the condition that any feasible $\u$ is $s$-sparse and any feasible $\x$ is in $\ker(\A)^{\perp}$.
\end{cor}
Of interest is the identification of simple conditions such that $\sigma_{1}(\U^T\V)<1$. The following theorem proposes one such condition to establish uniqueness.
\begin{thm}\label{thm:uniqueness_maximum_angle_sparsity}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$.
Assume that $\B_{S}$ has full column rank. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns are the orthonormal bases of $\col(\A)$ and $\col(\B_{S})$ respectively.
Let $\underset{i,j}{\max}\,\, |(\U^T\V)_{i,j}| = \mu$. If $s<\frac{1}{\sqrt{r}\mu }$, then the unique solution to the linear system, under the condition that any feasible $s$-sparse vector $\u$ is supported on $S$ and any feasible $\x$ is in $\ker(\A)^{\perp}$, is $(\x^{*},\u^{*})$.
\end{thm}
\begin{proof}
It suffices to show that $\sigma_1<1$. Noting that $\sigma_{1} = ||\U^T\V||_{2}$, we use the matrix norm inequality $||\U^T\V||_{2} \le \sqrt{r}\, ||\U^T\V||_{\infty}$: since each row of $\U^T\V$ has at most $q \le s$ entries, each bounded in magnitude by $\mu$, we obtain $\sigma_{1} \le \sqrt{r}\,||\U^T\V||_{\infty} \le \sqrt{r}\, \mu s < 1$.
\end{proof}
The constant $\mu$ is the coherence of the matrix $\U^T\V$ \citep{donoho2005stable,tropp2004greed}. The above result states that if the mutual coherence of $\U^T\V$ is small, we can accommodate a larger support size $s$ for the underlying sparse component $\u^{*}$. We note that, up to a scaling factor, $\sigma_1(\U^T\V)$ is the block coherence of $\U$ and $\V$ \citep{eldar2010block}. However, unlike the condition
in \citep{eldar2010block}, we do not restrict the dictionaries $\A$ and $\B$ to have linearly independent columns.
In the next subsection, we propose a convex program
to recover the dense and sparse vectors. Theorem \ref{thm:main_result} establishes uniqueness and complexity results for the proposed optimization program.
\subsection{Dense and sparse recovery via convex optimization}
Given that the dense and sparse coding problem seeks a dense vector $\x^{*}$ and a sparse solution $\u^{*}$, with measurements given as $\y = \A\x^{*}+\B\u^{*}$, we propose the following convex optimization program
\begin{equation}\label{eq:l1l2_min}
\underset{\x,\u}{\min}\, \vectornorm{\A\x}_{2}^{2}+ \vectornorm{\u}_{1} \,\, \textrm{s.t.}\,\, \y = \A\x+\B\u.
\end{equation}
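This program can be expressed directly in \texttt{CVXPY}, the solver used for the experiments of Section~\ref{sec:exps}; a minimal sketch:
\begin{verbatim}
import cvxpy as cp

def dense_sparse_recovery(y, A, B):
    # min ||A x||_2^2 + ||u||_1  s.t.  y = A x + B u
    x = cp.Variable(A.shape[1])
    u = cp.Variable(B.shape[1])
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x) + cp.norm1(u)),
                      [A @ x + B @ u == y])
    prob.solve()
    return x.value, u.value
\end{verbatim}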
In this section, we show that, under certain conditions, the above minimization problem admits a unique solution. Our proof is a non-trivial adaptation of the existing analysis in \citep{kueng2014ripless} for the anisotropic compressive sensing problem. This analysis is based on a single measurement matrix and cannot be directly applied to our scenario. Let $\a_1,...,\a_m$ be a sequence of zero-mean i.i.d random vectors drawn from some distribution $F$ on $\real^{p}$ and let $\b_1,...,\b_m$ be a sequence of zero-mean i.i.d random vectors drawn from some distribution $G$ on $\real^{n}$. We can eliminate the dense component in the linear constraint by projecting the vector $\y$ onto the orthogonal complement of $\col(\A)$ to obtain $\mathcal{P}_{\col(\A)^{\perp}}(\y) = \mathcal{P}_{\col(\A)^{\perp}}(\B\u)$. With this, the matrix $\mathcal{P}_{\col(\A)^{\perp}}(\B)$ is central in the analysis to follow. We define the matrix $\C = \frac{1}{\sqrt{m}}\sum_{i=1}^{m}\e_i\c_i^{T}$
where $\c_i = [\mathcal{P}_{\col(A)^{\perp}}(\B)]^{T}\e_i$ denotes the $i$-th measurement vector corresponding to a row of this matrix.
Further technical discussion on the matrix $\C$ is deferred to \textbf{Appendix B}. We use the matrix $\C$ introduced above
and adapt the anisotropic compressive sensing theory in \citep{kueng2014ripless} to analyze the uniqueness of the proposed program. Below, we give a brief background to this theory, highlighting important assumptions and results, and closely following the notation therein.\\
\textbf{Anisotropic compressive sensing}: Given a sequence of zero-mean i.i.d random vectors $\d_1,...,\d_m$ drawn from some distribution $F$ on $\real^{n}$, with measurements $\y= \D\u^{*}$,
the anisotropic compressive sensing problem studies the following optimization program
\begin{equation}\label{eq:anisotropic_min}
\underset{\u}{\min}\,\vectornorm{\u}_{1} \quad\text{s.t.}\quad \y = \D\u ,
\end{equation}
where $\D = \frac{1}{\sqrt{m}} \sum_{i=1}^{m}\e_i\d_i^{T}$ and $\u^{*}$ is the sought-out sparse solution. The analysis makes three important assumptions.\\
\textbf{Completeness}: The covariance matrix $\bSigma$ is invertible with condition number denoted by $\kappa$.\\
\textbf{Incoherence}: The incoherence parameter is the smallest number $\nu$ such that
\begin{equation}\label{eq:inchoherence}
\underset{1\le i\le n}{\max}\, |\langle \d\,,\e_{i}\rangle|^{2}\le \nu \text{ and } \quad \underset{1\le i\le n}{\max}\, |\langle \d\,,E[\c\c^{*}]^{-1}\e_i\rangle|^{2}\le \nu
\end{equation}
hold almost surely.\\
\textbf{Conditioning of the covariance matrix}: We start with the following definition of the $s$-sparse condition number restated from \citep{kueng2014ripless}.
\begin{defn} \citep{kueng2014ripless}
The largest and smallest $s$-sparse eigenvalue of a matrix $\X$ are given by
\begin{align*}
\lambda_{\max}(s,\X):&= \underset{\v,||\v||_{0}\le s}{\max}\, \frac{||\X\v||_{2}}{||\v||_{2}}\\
\lambda_{\min}(s,\X):&= \underset{\v,||\v||_{0}\le s}{\min}\, \frac{||\X\v||_{2}}{||\v||_{2}}.
\end{align*}
The $s$-sparse condition number of $\X$ is $\displaystyle \text{cond}(s,\X) = \frac{\lambda_{\max}(s,\X)}{\lambda_{\min}(s,\X)}$.
\end{defn}
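For small dimensions, these quantities can be computed by brute-force enumeration of supports; a minimal sketch:
\begin{verbatim}
import numpy as np
from itertools import combinations

def cond_s(X, s):
    # s-sparse condition number by enumeration (small n only)
    lo, hi = np.inf, 0.0
    for S in combinations(range(X.shape[1]), s):
        sv = np.linalg.svd(X[:, list(S)], compute_uv=False)
        hi, lo = max(hi, sv[0]), min(lo, sv[-1])
    return hi / lo
\end{verbatim}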
Given these assumptions, the main result in \citep{kueng2014ripless} reads
\begin{thm} \citep{kueng2014ripless}\label{thm:anisotropic_main_result}
With $\kappa_{s} = \max\{\text{cond}(s,\bSigma),\text{cond}(s,\bSigma^{-1})\}$
let $\u \in \mathbb{C}^{n}$ be an $s$-sparse vector and let $\omega \ge 1$. If the number of measurements fulfills $m \ge C\kappa_{s}\,\nu\, \omega^{2}\,s\log n$, then the solution $\u$ of the convex program \eqref{eq:anisotropic_min} is unique and equal to $\u^{*}$ with probability at least $1 - e^{-\omega}$.
\end{thm}
The proof of Theorem \ref{thm:anisotropic_main_result} is based on the dual certificate approach. The idea is to first propose a dual certificate vector $\v$ with sufficient conditions
that ensure uniqueness of the minimization problem. It then remains to construct a dual certificate satisfying the conditions. We seek a similar result for the uniqueness of the convex program corresponding to the dense and sparse coding model. However, the standard analysis cannot be directly applied since it only considers a single measurement matrix. This requires us to analyze the matrix $\C$ introduced earlier. The anisotropic compressive sensing analysis in \citep{kueng2014ripless} assumes the following conditions on the dual certificate $\v$
\begin{equation}\label{eq:dual_conditions}
||\v_{S} - \sgn(\u^{*}_{S})||_{2}\le \tfrac{1}{4} \,\,\,\,\text{and}\,\,\,\, ||\v_{S^{\perp}}||_{\infty}\le \tfrac{1}{4}.
\end{equation}
The following condition follows from the assumptions in Theorem \ref{thm:anisotropic_main_result}
\begin{equation}\label{eq:deviation_inequality}
||\bm{\Delta}_{S}||_{2}\le 2 ||\bm{\Delta}_{S^{\perp}}||_{2},
\end{equation}
where $\bm{\Delta} \in \text{Ker}(\D)$. The conditions \eqref{eq:dual_conditions} and \eqref{eq:deviation_inequality} will be used in the proof of our main result.
The main part of the technical analysis in \citep{kueng2014ripless} consists of using the assumptions in Theorem \ref{thm:anisotropic_main_result} and showing that
the above conditions \eqref{eq:dual_conditions} and \eqref{eq:deviation_inequality} hold with high probability.\\
\textbf{Main result}: Using the background discussed above, we assume completeness, incoherence, and conditioning of the covariance matrix $\bSigma$. Our main result is stated below.
\begin{thm}\label{thm:main_result}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $\omega\ge 1$ and define $\kappa_{s} = \max\{\text{cond}(s,\bSigma),\text{cond}(s,\bSigma^{-1})\}$. Assume the two conditions
\begin{equation}
||\B_{S}^{T}\A||\le \frac{1}{32||\x^{*}||_{2}}, \,\quad ||\B_{S^{\perp}}^{T}\A||_{\infty}\le \frac{1}{32||\x^{*}||_{\infty}}.
\end{equation}
If the number of measurements fulfills $m \ge C\kappa_{s}\,\nu\, \omega^{2}\,s\log n$, then the solution of the convex program \eqref{eq:l1l2_min} is unique and equal to $(\x^{*},\u^{*})$ with probability at least $1 - e^{-\omega}$.
\end{thm}
\begin{proof sketch} Consider a feasible solution pair $(\x^{*}+\deltaf, \u^{*}+\deltas)$ and let the function $f(\x,\u)$ denote the objective in the optimization program.
The idea of the proof is to show that any other feasible solution is not minimal in the objective value, i.e., $f(\x^{*}+\deltaf,\u^{*}+\deltas)> f(\x^{*},\u^{*})$. Using duality and
characterization of the subgradient $\Lam$ of the $\ell_1$ norm, we first show that $f(\x^{*}+\deltaf, \u^{*}+\deltas)>f(\x^{*},\u^{*})+\langle \sgn(\u_{S}^{*})+\Lam-\v-2\B^{T}\A\x^{*}\,,\deltas\rangle$, where $\v\in \col(\C^T)$, with $\C= \mathcal{P}_{\col(A)^{\perp}}(\B)$, denotes the dual certificate. It then remains to show that the term $\langle \sgn(\u_{S}^{*})+\Lam-\v-2\B^{T}\A\x^{*}\,,\deltas\rangle$ is positive. To show this, we further analyze this term and make use of the assumptions of the theorem, the dual certificate conditions \eqref{eq:dual_conditions}, and the deviation inequality
in \eqref{eq:deviation_inequality}. For a complete proof, see \textbf{Appendix B}. \end{proof sketch}
\textbf{Complexity compared to $\ell_1$ minimization}: The sample complexity of solving the convex program corresponding to the dense and sparse coding problem is larger than that of $\ell_1$ minimization for the compressive sensing problem. Essentially, the constants $\kappa_{s}$ and $\nu$ in our analysis are expected to scale with $p+n$, in contrast to the compressive sensing analysis where they scale with $n$.
\section{Experiments}\label{sec:exps}
\subsection{Phase transition curves}
We generate \emph{phase transition curves} and present how the success rate of the recovery, using the proposed model, changes under different scenarios. To generate the data, we fix the number of columns of $\B$ to be $n = 100$. Then, we vary the sampling ratio $\sigma = \frac{m}{n + p} \in [0.05,0.95]$ and the sparsity ratio $\rho = \frac{s}{m}$ in the same range. The sensing matrix in our model is $[\A \quad \B]$, hence the apparent difference in the definition of $\sigma$ compared to ``traditional'' compressive sensing. In the case where we revert to the compressive sensing scenario ($p = 0$), the ratios coincide.\\
We generate random matrices $\A \in \mathbb{R}^{m \times p}$ and $\B \in \mathbb{R}^{m \times n}$ whose columns have expected unit norm. The vector $\u \in \mathbb{R}^n$ has $s$ nonzero entries at randomly chosen indices, drawn according to a standard normal distribution, and $\bm{x} \in \mathbb{R}^p$ is generated as $\x = \A^T \boldsymbol{\gamma}$ where
$\boldsymbol{\gamma} \in \mathbb{R}^m$ is a random vector. The construction ensures that $\x$ does not belong to the null space of $\A$, and hence avoids trivial solutions with respect to this dense component. We normalize both $\x$ and $\u$ to have unit norm, and generate the measurement vector $\y \in \mathbb{R}^m$ as $\y = \A \x+ \B\u$. We solve the convex optimization problem in \eqref{eq:l1l2_min} to obtain the numerical solution pair $(\hat{\x}, \hat{\u} )$ using \texttt{CVXPY}, and register a successful recovery if both $\frac{\lVert\hat{\x} - \x\rVert_2}{\lVert\x\rVert_2} \leq \epsilon$ and $\frac{\lVert\hat{\u} - \u\rVert_2}{\lVert\u\rVert_2} \leq \epsilon$, with $\epsilon = 10^{-3}$. For each choice of $\sigma$ and $\rho$ we average $100$ independent runs to estimate the success rate.\\
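A single trial of this experiment, reusing the \texttt{dense\_sparse\_recovery} sketch of Section~\ref{sec:th}, can be summarized as follows (function and variable names are ours):
\begin{verbatim}
import numpy as np

def trial(m, p, n, s, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, p)) / np.sqrt(m)  # expected unit-norm columns
    B = rng.standard_normal((m, n)) / np.sqrt(m)
    u = np.zeros(n)
    u[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x = A.T @ rng.standard_normal(m)              # x in Ker(A)^perp
    x, u = x/np.linalg.norm(x), u/np.linalg.norm(u)
    xh, uh = dense_sparse_recovery(A @ x + B @ u, A, B)
    # x and u have unit norm, so relative and absolute errors coincide
    return (np.linalg.norm(xh - x) <= eps) and (np.linalg.norm(uh - u) <= eps)
\end{verbatim}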
Figure~\ref{fig:ptc} shows the phase transition curves for $p \in \{0.1m, 0.5m\}$ to highlight different ratios between $p$ and $n$. We observe that increasing $p$ leads to a deterioration in performance. This is expected, as it creates a greater \emph{overlap} between the spaces spanned by $\A$ and $\B$. We can view our model as explicitly modeling the noise of the system. In such a case, the number of columns of $\A$ explicitly encodes the complexity of the noise model: as $p$ increases, so does the span of the noise space. \\
Extending the signal processing interpretation, note that we model the noise signal $\x$ as a dense vector, which can be seen as encoding smooth areas of the signal that correspond
to \emph{low-frequency} components. On the contrary, the signal $\u$ has, by construction, a sparse structure, containing \emph{high-frequency} information, an interpretation that will be further validated in the next subsection. Further numerical experiments comparing the dense and sparse coding model to noisy compressive sensing can be found in \textbf{Appendix C}.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[scale=0.8]{figs/kernel/heatmap_both_p=01_magma}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[scale=0.8]{figs/kernel/heatmap_both_p=05_magma}
\end{subfigure}
\caption{Phase transition curves for $p = 0.1m$ (top) and $p = 0.5m$ (bottom).}
\label{fig:ptc}
\end{figure}
\subsection{Classification and image denoising}
We formulate the dense and sparse dictionary learning problem as minimizing the objective
\[
\underset{\substack{\A,\B, \{\x^j\}_{j=1}^J, \{\u^j\}_{j=1}^J}}{\min} \sum_{j=1}^J \frac{1}{2} \| \y^j - \A \x^j - \B \u^j \|_2^2 + \frac{1}{2 \lambda_x} \| \A\x^j \|_2^2 + \lambda_u\| \u^j \|_1,
\]
where $J$ is the number of images, $\lambda_x$ controls the smoothness of $\A\x^j$ and $\lambda_u$ controls the degree of sparsity. Based on this objective, we use deep unfolding to construct an unfolded neural network~\citep{tolooshams2020deep,LISTA}, which we term the dense and sparse autoencoder (DenSaE), tailored to learning the dictionaries of the dense and sparse model. The encoder maps $\y^{j}$ into a dense vector $\x_T^{j}$ and a sparse one $\u_T^{j}$ by unfolding $T$ proximal gradient iterations. The decoder reconstructs the image. For classification, we use $\u_T$ and $\x_T$ as inputs to a linear classifier $\bm{C}$ that maps them to the predicted class $\bm{\hat q}$. We learn the dictionaries $\A$ and $\B$, as well as the classifier $\C$, by minimizing the weighted reconstruction (Rec.) and classification (Logistic) loss (i.e., $(1 - \beta)$ Rec. + $\beta$ Logistic). Figure~\ref{fig:ae} shows the DenSaE architecture (for details see \textbf{Appendix D}).\\
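A minimal sketch of the unfolded encoder is given below; this is one plausible reading of Fig.~\ref{fig:ae} (in the actual network $\A$, $\B$ and the bias $b$ are learned, and the step sizes and update order used here are assumptions):
\begin{verbatim}
import numpy as np

def soft(z, b):                    # proximal operator of b*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - b, 0.0)

def densae_encoder(y, A, B, b, lam_x, alpha_x, alpha_u, T):
    x, u = np.zeros(A.shape[1]), np.zeros(B.shape[1])
    for _ in range(T):
        r = y - A @ x - B @ u                          # residual
        u = soft(u + alpha_u * (B.T @ r), b)           # prox step on u
        x = x + alpha_x * (A.T @ (r - (A @ x)/lam_x))  # gradient step on x
    return x, u                    # decoder: y_hat = A @ x + B @ u
\end{verbatim}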
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\tikzstyle{block} = [draw, fill=none, rectangle,
minimum height=1em, minimum width=1em]
\tikzstyle{sum} = [draw, fill=none, minimum height=0.1em, minimum width=0.1em, circle, node distance=1cm]
\tikzstyle{cir} = [draw, fill=none, circle, line width=0.7mm, minimum width=0.3cm, node distance=1cm]
\tikzstyle{loss} = [draw, fill=none, color=black, ellipse, line width=0.5mm, minimum width=0.7cm, node distance=1cm]
\tikzstyle{blueloss} = [draw, fill=none, color=black, ellipse, line width=0.5mm, minimum width=0.7cm, node distance=1cm, color=black]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
cloud/.style={
draw=red,
thick,
ellipse,
fill=none,
minimum height=1em}
\node [input, name=input] {};
\node [cir, node distance=0.0001cm, right of=input] (Y) {$\y$};
\node [block, right of=Y, minimum width=1cm, node distance=1.2cm] (BT) {$\alpha_u\B^{\text{T}}$};
\node [block, right of=BT, minimum width=0.7cm, node distance=1.3cm] (pb) {$\prox_{b}$};
\node [cir, right of=pb, node distance=1.cm] (ut) {$\u_{t}$};
\node [fill=none, below of=ut, node distance=0.5cm] (connection_1) {$$};
\node [fill=none, below of=ut,left=3.6pt, node distance=0.5cm] (connection_2) {$$};
\node [cir, right of=ut, node distance=1.2cm] (uT) {$\u_{T}$};
\node [output, node distance=1.2cm, right of=uT] (output) {};
\node [block, right of=uT, node distance=1.1cm] (B) {$\B$};
\node [block, below of=pb, node distance=0.8cm, left=0.5cm] (B_cns) {$\B$};
\node [rectangle, fill=none, node distance=0.6cm, below of=B] (middle) {};
\node [block, above of=BT, minimum width=0.7cm, node distance=1.01cm] (AT) {$\alpha_x\A^{\text{T}}$};
\node [cir, above of=ut, node distance=1.01cm] (xt) {$\x_{t}$};
\node [fill=none, below of=xt, node distance=0.50cm] (connection_x1) {$$};
\node [fill=none, below of=xt,left=3.6pt, node distance=0.50cm] (connection_x2) {$$};
\node [cir, above of=uT, node distance=1.01cm] (xT) {$\x_{T}$};
\node [output, node distance=1.2cm, right of=xT] (output_x) {};
\node [block, above of=B, node distance=1.01cm] (A) {$\A$};
\node [block, below of=B_cns, node distance=0.6cm] (A_cns) {$\A$};
\node [block, right of=A_cns, node distance=1.2cm] (pr) {$1 + \frac{1}{\lambda_x}$};
\node [rectangle, fill=none, node distance=0.6cm, below of=A] (middle) {};
\node [output, right of=A, node distance=0.5cm, below=0.5cm] (out) {};
\node [cir, right of=out, node distance=1.2cm] (yhat) {$\hat \y$};
\node [rectangle, below of=B, minimum width=0.01cm, node distance=0.651cm, left=0.000cm] (Y1m) {};
\node [rectangle, below of=B, minimum width=0.01cm, node distance=0.58cm, left=0.006cm] (Y1ml) {};
\node [rectangle, below of=B, minimum width=0.01cm, node distance=0.58cm, right=0.006cm] (Y1mr) {};
\draw[thick, line width=2, black, ->] ($(xt.north east)+(1.1,0.65)$) -- ($(xt.north east)+(3.5,0.65)$);
\draw[thick, line width=2, black, ->] ($(xt.north east)+(-4.,0.65)$) -- ($(xt.north east)+(1.,0.65)$);
\draw[thick,dotted] ($(xT.north east)+(-0.95,+0.17)$) rectangle ($(AT.west)+(-0.25,-2.8)$);
\node [rectangle, fill=none, node distance=1.12cm, right=-5pt,above of=A] (text) {\footnotesize{Decoder}};
\node [rectangle, fill=none, node distance=2.15cm, right=0pt,above of=pb] (encoder) {\footnotesize{Encoder}};
\draw [->] (Y) -- node [name=m, pos=0.3, above] {} (BT);
\draw [] (Y) -- node [name=mx, pos=0.4, below] {} (BT);
\draw [->] (mx) |- node [] {} (AT);
\draw [->] (BT) -- node[name=s, pos=0.2, above] {} (pb);
\draw [->] (pb) -- node[] {} (ut);
\draw [-] (ut) |- node[] {} (connection_2);
\draw [->] (connection_1) -| node[] {} (s);
\draw [->] (ut) -- node[name=loop, pos=0.1, above] {} (uT);
\draw [->] (uT) -- node[name=forzu, pos=0.1, above] {} (B);
\draw [->] (loop) |- node[] {} (B_cns);
\draw [->] (AT) -- node[name=sx, pos=0.1, above] {} (xt);
\draw [-] (xt) |- node[] {} (connection_x2);
\draw [->] (connection_x1) -| node[] {} (sx);
\draw [->] (xt) -- node[name=loop_x, pos=0.3, above] {} (xT);
\draw [->] (xt) -- node[name=loop_x2, pos=0.55, above] {} (xT);
\draw [->] (xT) -- node[name=forzx, pos=0.3, above] {} (A);
\draw [->] (loop_x) |- node[] {} (pr);
\node [cir, below of=uT, node distance=0.95cm] (z) {$\bm{z}$};
\node [block, right of=z, node distance=0.8cm] (C) {$\bm{C}$};
\node [block, right of=C, node distance=0.9cm] (softmax) {$\prox_{\text{\tiny max}}$};
\node [cir, right of=softmax, node distance=1.1cm] (qhat) {$\bm{\hat q}$};
\draw[thick, line width=2, black, ->] ($(z.south east)+(-0.5,-0.3)$) -- ($(z.south east)+(2.5,-0.3)$);
\node [rectangle, fill=none, node distance=0.75cm, left=20pt,below of=softmax] (classifier) {\footnotesize{Classifier}};
\draw [->] (loop_x2) |- node[] {} (z);
\draw [->] (z) -- node[] {} (C);
\draw [->] (C) -- node[] {} (softmax);
\draw [->] (softmax) -- node[] {} (qhat);
\draw [->] (pr) -- node[] {} (A_cns);
\draw [->] (B_cns) -| node[name=atob, pos=0.5, above] {} (m);
\draw [->] (A_cns) -| node[] {} (m);
\draw [->] (A_cns) -| node[] {} (atob);
\draw [->] (B) -| node[] {} (out);
\draw [->] (A) -| node[] {} (out);
\draw [->] (out) -- node[] {} (yhat);
\node [rectangle, fill=none, node distance=1.7cm, above of=pb] (text) {\footnotesize{Repeat $T$ times}};
\end{tikzpicture}
\end{minipage}
\caption{DenSaE. The vector $\bm{z}$ is the normalized stacked features augmented with a constant $1$ (i.e., $\bm{z} = [\mathbf{1} ; \frac{[\x_{T} ; \u_{T}]}{\| [\x_{T} ; \u_{T}]\|}]$); $\prox_b$ and $\prox_{\text{\tiny max}}$ are the soft-thresholding and soft-max operators, respectively.}
\vspace{-4mm}
\label{fig:ae}
\end{figure}
We examined the following questions.\\
\begin{itemize}[leftmargin=6mm, itemsep=0.1mm, parsep=0pt, topsep=0pt]
\item [a)] How do the discriminative, reconstruction, and denoising capabilities change as we vary the number of filters in $\A$ vs. $\B$?
\item [b)] What is the performance of DenSaE compared to sparse coding networks?
\item [c)] What data characteristics does the model capture?\\
\end{itemize}
As baselines, we trained two variants, $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$ and $\text{CSCNet}_{\text{LS}}^{\text{tied}}$, of CSCNet~\citep{simon2019rethinking}, an architecture tailored to dictionary learning for the sparse coding problem. In $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$, the bias is a shared hyper-parameter. In $\text{CSCNet}_{\text{LS}}^{\text{tied}}$, we learn a different bias for each filter by minimizing the reconstruction loss. When the dictionaries are non-convolutional, we call the network SCNet.
\begin{table}[!t]
\renewcommand{\arraystretch}{0.7}
\centering
\caption{DenSaE's performance on MNIST test dataset from both disjoint (D) and joint ($\text{J}_{\beta}$) training.}
\setlength\tabcolsep{1.2pt}
\setlength\parskip{0pt}
\begin{tabular}{c|c|ccccc}
\toprule
\multicolumn{1}{c}{} &\multicolumn{1}{c}{} & $\substack{\text{SCNet}_{\text{hyp}}^{\text{LS}}}$ & $\substack{\text{SCNet}_{\text{hyp}}^{\text{tied}}}$ & $\substack{5\A\\395\B}$ & $\substack{25\A\\375\B}$ & $\substack{200\A\\200\B}$\\
\midrule
\multicolumn{1}{c}{} & $\frac{\A}{\A + \B}$ model & - & - & $1.25$ & $6.25$ & $50$\\
\midrule
\multirow{4}{0.7cm}{D} & Acc. & \multicolumn{1}{|c}{94.16} & 98.32 & 98.18 & 98.18 & 96.98\\
& Rec. & \multicolumn{1}{|c}{1.95} & 6.80 & 6.83 & 6.30 & 3.04\\
& $\frac{\A}{\A + \B}$ class & - & - & $0$ & $0$ & $0$ \\
& $\frac{\A}{\A + \B}$ rec. & - & - & $8$ & $28$ & $58$\\
\hline
\multirow{4}{0.7cm}{$\text{J}_{0.75}$} & Acc. & 96.91 & 98.18 & 98.19 & 98.23 & 97.64\\
& Rec. & 2.17 & 1.24 & 0.75 & 1.11 & {\bf 0.51}\\
& $\frac{\A}{\A + \B}$ class & - & - & $8$ & $8$ & $84$\\
& $\frac{\A}{\A + \B}$ rec. & - & - & $8$ & $36$ & $8$\\ \midrule
\multirow{4}{0.7cm}{$\text{J}_{1}$} & Acc. & 96.06 & 98.59 & {\bf 98.61} & 98.56 & 98.40\\
& Rec. & 71.20 & 47.70 & 32.61 & 30.20 & 25.57\\
& $\frac{\A}{\A + \B}$ class & - & - & $16$ & $46$ & $42$\\
& $\frac{\A}{\A + \B}$ rec. & - & - & $0$ & $2$ & $4$\\
\bottomrule
\end{tabular}
\vspace{-5mm}
\label{tab:class}
\end{table}
\subsubsection{DenSaE strikes a balance between discriminative capability and reconstruction}
We study the case when DenSaE is trained on the MNIST dataset for joint reconstruction and classification purposes. We show a) how the explicit imposition of sparse and dense representations in DenSaE helps to balance discriminative and representation power, and b) that DenSaE outperforms SCNet. We warm start the training of the classifier using dictionaries obtained by first training the autoencoder, i.e., with $\beta = 0$.\\
\textbf{Characteristics of the representations $\x_T$ and $\u_T$}: To evaluate the discriminative power of the representations learned by training only the autoencoder, we first trained the classifier \emph{given} the representations (i.e., first train $\A$ and $\B$ with $\beta = 0$, then train $\C$ with $\beta = 1$). We call this disjoint training. The four rows of section D of Table~\ref{tab:class} show, respectively, the classification accuracy (Acc.), the $\ell_2$ reconstruction loss (Rec.), and the relative contributions, expressed as percentages, of the dense and sparse representations to classification and reconstruction for disjoint training. Each column of $[\A\ \B]$, and of $\C$, corresponds to either a dense or a sparse feature. For reconstruction, we find the indices of the $50$ most important columns and report the proportion of these that represent dense features. For each of the $10$ classes (rows of $\C$), we find the indices of the $5$ most important columns (features) and compute the proportion of the total of $50$ indices that represent dense features.
The first row of Table~\ref{tab:class} shows the proportion of columns of $[\A\ \B]$ that represent dense features. Comparing this row to the third and fourth rows of section D reveals, respectively, the importance of $\x$ for reconstruction and of $\u$ for classification. Indeed, the first two rows of section D show that, as the proportion of dense features increases, DenSaE gains reconstruction capability but loses classification accuracy. Moreover, in DenSaE, the most important features for classification are all from $\B$, and the contribution of $\A$ to reconstruction is greater than its percentage in the model, which demonstrates that dense and sparse coding autoencoders balance discriminative and representation power. \\
The table also shows that DenSaE outperforms $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ in classification and $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ in reconstruction. We observed that, in the absence of noise, training $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ results in dense features with negative biases, making its performance close to that of DenSaE with a large number of atoms in $\A$. Without a supervised classification loss, $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ fails to learn discriminative features useful for classification. On the other hand, the behavior of $\text{SCNet}_{\text{hyp}}^{\text{tied}}$, which enforces sparsity, suggests that sparse representations are useful for classification.\\
\textbf{How do the roles of $\x_T$ and $\u_T$ change as we vary $\beta$ in joint training?}: In joint training of the autoencoder and the classifier, it is natural to expect the reconstruction loss to increase compared to disjoint training. This is indeed the case for $\text{SCNet}_{\text{hyp}}^{\text{LS}}$; as we go from disjoint to joint training and as $\beta$ increases (Table~\ref{tab:class}, sections labeled J), the reconstruction loss increases and the classification accuracy increases overall. However, for $\beta < 1$, joint training of both networks that enforce some sparsity on their representations, $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ and DenSaE, improves reconstruction and classification. Moreover, as we increase the importance of the classification loss (i.e., increase $\beta$), the contribution of the dense representations decreases in reconstruction and increases in discrimination.\\
For purely discriminative training ($\beta = 1$), DenSaE outperforms both $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ and $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ in classification accuracy and representation capability. We speculate that this results from the fact that, by construction, the encoder of DenSaE seeks to produce two sets of representations: a dense one, mostly important for reconstruction, and a sparse one, useful for classification. In some sense, the dense component acts as a prior that promotes good reconstruction. More detailed results can be found in \textbf{Appendix D}.\\
\begin{figure*}
\begin{minipage}[b]{0.8\linewidth}
\centering
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
cloud/.style={
draw=red,
thick,
ellipse,
fill=none,
minimum height=1em}
\node [input, name=input] {};
\node [rectangle, fill=none, node distance=0.000001cm, right of=input] (model) {a) $\text{DenSaE}_{\substack{4\A\\60\B}}$};
\node [rectangle, fill=none, node distance=2.2cm, right of=model] (A) {$\includegraphics[width=0.138\linewidth]{./figs/noisy}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=A] (B) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_ax_hat}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=B] (C) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_bu_hat}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=C] (D) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_img_hat}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=D] (E) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_A}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=E] (F) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_B}$};
\node [rectangle, fill=none, node distance=1.12cm, above of=A] (text) {Noisy};
\node [rectangle, fill=none, node distance=1.12cm, above of=B] (text) {$\A\x$};
\node [rectangle, fill=none, node distance=1.12cm, above of=C] (text) {$\B\u$};
\node [rectangle, fill=none, node distance=1.12cm, above of=D] (text) {Denoised};
\node [rectangle, fill=none, node distance=1.12cm, above of=E] (text) {$\A$};
\node [rectangle, fill=none, node distance=1.12cm, above of=F] (text) {$\B$};
\node [rectangle, fill=none, node distance=2.25cm, below of=model] (model) {b) $\text{CSCNet}_{\text{LS}}^{\text{tied}}$};
\node [rectangle, fill=none, node distance=2.25cm, below of=A] (A) {$\includegraphics[width=0.138\linewidth]{./figs/img}$};
\node [rectangle, fill=none, node distance=2.25cm, below of=B] (B) {$\includegraphics[width=0.138\linewidth]{./figs/64B_ax_emi}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=B] (C) {$\includegraphics[width=0.138\linewidth]{./figs/64B_bu_emi}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=C] (D) {$\includegraphics[width=0.138\linewidth]{./figs/64B_img_hat}$};
\node [rectangle, fill=none, node distance=1.75cm, below of=E] (E) {$\includegraphics[width=0.138\linewidth]{./figs/64B_A_emi}$};
\node [rectangle, fill=none, node distance=1.045cm, below of=E] (U) {$\includegraphics[width=0.138\linewidth]{./figs/64B_unused}$};
\node [rectangle, fill=none, node distance=2.25cm, below of=F] (F) {$\includegraphics[width=0.138\linewidth]{./figs/64B_B_emi}$};
\node [rectangle, fill=none, node distance=1.12cm, above of=A] (text) {Original};
\node [rectangle, fill=none, node distance=1.12cm, above of=B] (text) {Implicit $\A\x$};
\node [rectangle, fill=none, node distance=1.12cm, above of=C] (text) {Implicit $\B\u$};
\node [rectangle, fill=none, node distance=1.12cm, above of=D] (text) {Denoised};
\node [rectangle, fill=none, node distance=0.62cm, above of=E] (text) {Implicit $\A$};
\node [rectangle, fill=none, node distance=0.44cm, above of=U] (text) {Unused};
\node [rectangle, fill=none, node distance=1.12cm, above of=F] (text) {Implicit $\B$};
\end{tikzpicture}
\end{minipage}
\caption{Visualization of a test image for $\tau=50$. a) DenSaE $(4\A, 60\B)$, b) $\text{CSCNet}_{\text{LS}}^{\text{tied}}$.}
\label{fig:vis}
\end{figure*}
\noindent \underline{\textbf{Remark}}: As our network is non-convolutional, we do not compare it to the state of the art, which is a convolutional network. We also do not compare our results with the network in~\citep{rolfe2013discriminative}, as that work does not report the reconstruction loss and involves a sparsity-enforcing loss that changes the learning behaviour.
\subsubsection{Denoising}
We trained DenSaE for supervised image denoising with $\beta = 0$ on BSD432 and tested it on BSD68~\citep{MartinFTM01} (see \textbf{Appendix D} for details). We varied the ratio of the number of filters in $\A$ and $\B$ while keeping the overall number of filters constant. We evaluate the model in the presence of Gaussian noise with standard deviation $\tau \in \{15,25,50,75\}$.\\
\textbf{Ratio of the number of filters in $\A$ and $\B$}: Unlike in reconstruction, Table~\ref{tab:psnr_ratio} shows that the smaller the number of filters associated with $\A$, the better DenSaE can denoise images. We hypothesize that this is a direct consequence of our findings from \textbf{Section~\ref{sec:th}} that the smaller the number of columns of $\A$, the easier the recovery of $\x$ and $\u$.\\
\begin{table}[!t]
\renewcommand{\arraystretch}{0.5}
\centering
\caption{DenSaE's denoising performance on test BSD68 as the ratio of filters in $\A$ and $\B$ changes.}
\setlength\tabcolsep{2pt}
\setlength\parskip{0pt}
\begin{tabular}{c|ccccc}
\toprule
$\tau$ & $1\A63\B$ & $4\A60\B$ & $8\A56\B$ & $16\A48\B$ & $32\A32\B$\\
\midrule
$15$ & {\bf 30.21} & 30.18 & 30.18 & 30.14 & 29.89\\
$25$ & {\bf 27.70} & {\bf 27.70} & 27.65 & 27.56 & 27.26\\
$50$ & {\bf 24.81} & {\bf 24.81} & 24.43 & 24.44 & 23.68\\
$75$ & 23.31 & {\bf 23.33} & 23.09 & 22.09 & 20.09\\
\bottomrule
\end{tabular}
\label{tab:psnr_ratio}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{0.5}
\centering
\caption{DenSaE vs. CSCNet on test BSD68.}
\setlength\tabcolsep{2pt}
\setlength\parskip{0pt}
\begin{tabular}{c|ccc}
\toprule
$\tau$ & DenSaE & $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$ & $\text{CSCNet}_{\text{LS}}^{\text{tied}}$ \\
\midrule
$15$ & 30.21 & 30.12 & \bf 30.34 \\
$25$ & 27.70 & 27.51& \bf 27.75 \\
$50$ & \bf 24.81 & 24.54 & \bf 24.81\\
$75$ & \bf 23.33 & 22.83 & 23.32 \\
\bottomrule
\end{tabular}
\label{tab:psnr_sc}
\end{table}
\textbf{Dense and sparse coding vs. sparse coding}: Table~\ref{tab:psnr_sc} shows that DenSaE (best network from Table~\ref{tab:psnr_ratio}) denoises images better than $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$, suggesting that the dense and sparse coding model represents images better than sparse coding. \\
\textbf{Dictionary characteristics}: Figure~\ref{fig:vis}(a) shows the decomposition of a noisy test image ($\tau = 50$) by DenSaE. The figure demonstrates that $\A\x$ captures low-frequency content while $\B\u$ captures high-frequency details (edges). This is corroborated by the smoothness of the filters associated with $\A$, and the Gabor-like nature of those associated with $\B$~\citep{MehrotraR1992Gfed}. We observed similar performance when we tuned $\lambda_x$, and found that, as $\lambda_x$ decreases, $\A\x$ captures lower frequencies, and $\B\u$ a broader range.\\
\textbf{CSCNet implicitly learns the $\A\x + \B\u$ model in the presence of noise}: We observed that $\text{CSCNet}_{\text{LS}}^{\text{tied}}$ comprises three groups of filters: one with small biases, one with intermediate biases, and a third with large biases (see \textbf{Appendix D} for bias visualizations). We found that the feature maps associated with the large bias values are all zero. Moreover, the majority of features are associated with intermediate bias values, and are sparse, in contrast to the small number of feature maps with small bias values, which are dense. These observations suggest that {\it autoencoders implementing the sparse coding model ($\y = \B\u$), when learning the biases by minimizing reconstruction error, implicitly perform two functions.} First, they select the optimal number of filters. Second, they partition the filters into two groups: one that yields a dense representation of the input, and another that yields a sparse one. In other words, the architectures trained in this manner {\it implicitly learn the dense and sparse coding model} ($\y = \A\x + \B\u$). Figure~\ref{fig:vis}(b) shows the filters.\\
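As a concrete illustration of this analysis, the following is a minimal \texttt{numpy} sketch of the kind of diagnostic described above; the grouping thresholds, variable names, and density criterion are illustrative assumptions, not the exact procedure used in our experiments.
\begin{verbatim}
import numpy as np

# Hypothetical diagnostic: given learned per-filter biases b (shape K)
# and the corresponding feature maps Z (shape K x N), group filters by
# bias magnitude and report how dense their activations are.
def activation_density(Z, tol=1e-8):
    # fraction of (nearly) nonzero activations per filter
    return (np.abs(Z) > tol).mean(axis=1)

def summarize(b, Z, edges=(0.33, 0.66)):
    lo, hi = np.quantile(b, edges)
    groups = {"small": b <= lo,
              "intermediate": (b > lo) & (b < hi),
              "large": b >= hi}
    dens = activation_density(Z)
    return {name: float(dens[msk].mean())
            for name, msk in groups.items() if msk.any()}
\end{verbatim}
Under the observation above, \texttt{summarize} would report near-zero density for the large-bias group, low density for the intermediate group, and high density for the small-bias group.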
\section{Conclusions}\label{sec:conclusion}
This paper proposed a novel dense and sparse coding model for a flexible representation of a signal as $\y = \A\x+\B\u$. Our first result gives a verifiable condition that guarantees uniqueness of the model. Our second result uses tools from RIPless compressed sensing to show that, with sufficiently many linear measurements, a convex program with $\ell_1$ and $\ell_2$ regularizations can recover the components $\x$ and $\u$ uniquely with high probability. Numerical experiments on synthetic data confirm our observations.\\
We proposed a dense and sparse autoencoder, DenSaE, tailored to dictionary learning for the $\A\x + \B\u$ model. DenSaE naturally decomposes signals into low- and high-frequency components and balances the learning of dense representations, useful for reconstruction, against discriminative sparse ones. We showed the superiority of DenSaE over sparse autoencoders for data reconstruction and its competitive performance in classification.
\small
\bibliographystyle{IEEEtranN}
\interlinepenalty=10000
\section{Introduction}
Given a data set, learning a dictionary in which each example admits a sparse representation is tremendously useful in a number of tasks~\citep{aharon2006rm,mairal2011task}. This problem, known as sparse coding~\citep{olshausen1997sparse} or dictionary learning~\citep{garcia2018convolutional}, has been the subject of significant investigation in recent years in the signal processing community. A growing body of work has mapped the sparse coding problem into encoders for sparse recovery~\citep{LISTA}, and into autoencoders purely for classification~\citep{rolfe2013discriminative} or denoising~\citep{simon2019rethinking, tolooshams2020deep} purposes.\\
Autoencoders are widely used for unsupervised learning. Their integration with supervised tasks and classifiers has become popular for their regularization power and reduction of the generalization gap~\citep{vincent2010stacked, epstein2018joint, epstein2019generalization}. \cite{rolfe2013discriminative} have shown benefits of autoencoders and sparse features in discriminative tasks.\\
For data reconstruction, recent work has highlighted some limitations of convolutional sparse coding (CSC) autoencoders~\citep{simon2019rethinking} and its multi-layer and deep generalizations
~\citep{SulamJeremias2018OMBP, zazo2019convolutional}. \cite{simon2019rethinking} argue that the sparsity levels that CSC allows can only accommodate very sparse vectors, making it unsuitable to capture all features of signals such as natural images, and propose to compute the minimum mean-squared error solution under the CSC model, which is a dense vector capturing a richer set of features.\\
To address the aforementioned limitations of classical sparse coding, we propose a dense and sparse coding model that represents a signal as the sum of two components: one that admits a dense representation $\x$ in a dictionary $\A$ that is useful for reconstruction, and another whose representation $\u$ is discriminative and sparse in a second dictionary $\B$. Based on empirical evidence, the authors in
~\citep{zazo2019convolutional} argue that a multi-layer extension of this model can, in principle, have arbitrary depth. However, to our knowledge, the dense and sparse coding model has not yet been fully analyzed. Our contributions are as follows.\\
\noindent \textbf{Conditions for identifiability and recovery by convex optimization}: We derive conditions under which the dense and sparse representation is unique. We then propose a convex program for recovery that minimizes $\vectornorm{\A\x}^2_2 + \vectornorm{\u}_1$, subject to linear constraints. \\
\noindent \textbf{Phase-transition curves}: We demonstrate through simulations that the convex program can successfully solve the dense and sparse coding problem.\\
\noindent \textbf{Discriminative reconstruction}: We propose a dense and sparse autoencoder (DenSaE) that has competitive discriminative power and improves the representation capability compared to sparse networks.\\
The paper is organized as follows. \textbf{Section~\ref{sec:th}} discusses theoretical analysis of the dense and sparse coding problem. Phase transition, classification, and denoising experiments appear in \textbf{Section~\ref{sec:exps}}. We conclude in \textbf{Section~\ref{sec:conclusion}}.
\section{Related work}
We comment on the most closely related models. Given the measurements $\y$, the problem of recovering $\x$ and $\u$ is similar in flavor to sparse recovery in the union of dictionaries~\citep{donoho2001uncertainty,elad2002generalized,donoho2003optimally,soltani2017fast, studer2011recovery, studer2014stable}. Most results in this literature take the form of an uncertainty principle that relates the sum of the sparsity of $\x$ and $\u$ to the mutual coherence between $\A$ and $\B$, and which guarantees that the representation is unique and identifiable by $\ell_1$ minimization. To our knowledge, the analysis of this program is novel and in sharp contrast to classical settings in sparse approximation, in which the objective consists of a single sparsifying norm, rather than the combination of different norms. Robust PCA~\citep{candes2011robust}, which decomposes a matrix as the sum of low-rank and sparse matrices, uses the combination of the $\ell_1$ and nuclear norms, giving it a flavor similar to our problem.\\
Our model resembles weighted LASSO \citep{lian2018weighted,mansour2017recovery}. Compared to weighted LASSO, we can directly map the weighted LASSO objective $\lVert\bm{W}\bm{\alpha}\rVert_1$ to $\lVert\u\rVert_1$ by letting $\bm{\alpha} = \begin{bmatrix}\x & \u\end{bmatrix}^T$ and choosing appropriately the entries of a diagonal matrix $\bm{W}$, with $W_{ij} \in \{0, 1\}$; however, in the weighted LASSO formulation, constraints can only be enforced on the sparse component $\u$. Our work differs in that a significant part of our analysis is the directed Euclidean norm constraint on $\x$, which recovers a unique solution $\x^{\star} \in \text{Ker}(\A)^{\perp}$. Our model can also be interpreted as a special case of Morphological Component Analysis (MCA) \citep{elad2005simultaneous} for $K=2$, $\bm{s} = \sum_{k=1}^{K}\bm{\Phi}_k \bm{\alpha}_k$, with, however, some distinct differences: i) MCA encodes different morphological structures via the dictionaries $\bm{\Phi}_k$. We encode a smooth morphological component via the whole product $\A\x$, which is conceptually different, and ii) we make no assumption of sparsity on the dense component $\x$. This leads to an optimization objective that is the combination of $\ell_1$ and $\ell_2$ norms, unlike that of MCA. Finally, a bare application of noisy sparse coding would treat $\e = \A\x$ as arbitrary noise, hence i) recovers $\u$ approximately and ii) cannot recover $\x$. However, in our analysis, the term $\A\x$ is not just undesired noise but represents a sought-out feature. We can recover both $\x$ and $\u$ exactly. See \textbf{Appendix C} for a comparison of our model to noisy compressive sensing. We note that the full dense and sparse coding model is $\y = \A\x+\B\u+\e$ where $\e$ is Gaussian noise.\\
\noindent \textbf{Notation}: Lowercase and uppercase boldface letters denote column vectors and matrices, respectively. Given a vector $\x \in \real^{n}$ and a support set $S\subset \{1,...,n\}$,
$\x_S$ denotes the restriction of $\x$ to indices in $S$. For a matrix $\A \in \real^{m\times p}$, $\A_{S}$ is a submatrix of size $m \times |S|$ with column indices in $S$. The column space of a matrix $\A$ (the span of the columns of $\A$) is designated by $\col(\A)$, its null space by $\text{Ker}(\A)$. We denote the Euclidean, $\ell_1$ and $\ell_{\infty}$ norms of a vector, respectively, as $||\x||_{2}$, $||\x||_{1}$, and $||\x||_{\infty}$. The operator and infinity norms of a matrix $\A$ are respectively denoted as $\vectornorm{\A}$ and $\vectornorm{\A}_{\infty}$. The sign function, applied componentwise to a vector $\x$, is denoted by $\sgn(\x)$. The indicator function is denoted by $\mathbbm{1}$. The column vector $\e_{i}$ denotes the vector of zeros with a $1$ at the $i$-th location. The orthogonal complement of a subspace $\bm{W}$ is denoted by $\bm{W}^{\perp}$. The operator $\mathcal{P}_{\bm{W}}$ denotes the orthogonal projection operator onto the subspace $\bm{W}$.\\
\section{Theoretical Analysis}\label{sec:th}
The dense and sparse coding problem studies the solutions of the linear system $\y = \A\x+\B\u$. Given matrices $\A \in \real^{m\times p}$ and $\B \in \real^{m\times n}$ and a vector $\y \in \real^{m}$, the goal is to provide conditions under which there is a unique solution ($\x^{*},\u^{*}$), where $\u^{*}$ is $s$-sparse, and an algorithm for recovering it.\\
\subsection{Uniqueness results for the feasibility problem}
In this subsection, we study the uniqueness of solutions to the linear system accounting for the different structures the measurement matrices $\A$ and $\B$ can have.
For more details of all the different cases we consider, we refer the reader to \textbf{Appendix A}. The main result of this subsection is Theorem \ref{thm:uniqueness_maximum_angle} which, under a natural geometric condition based on the minimum principal angle between the column space of $\A$ and the span of $s$ columns in $\B$, establishes a uniqueness result for the dense and sparse coding problem. Since the vector $\u$ in the proposed model is sparse, we consider the classical setting of an overcomplete measurement matrix $\B$ with $n\gg m$. The next theorem provides a uniqueness result assuming a certain direct sum representation of the space $\real^{m}$.
\begin{thm}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$.
If $\B_{S}$ has full column rank and $\real^{m} = \col(\A)\oplus \col(\B_S)$, then $(\x^{*},\u^{*})$ is the unique solution to the linear system among all pairs $(\x,\u)$ in which $\u$ is $s$-sparse and supported on $S$ and $\x \in \ker(\A)^{\perp}$.
\end{thm}
\begin{proof}
Let $(\x,\u)$, with $\u$ supported on $S$ and $\x\in \ker(\A)^{\perp}$, be another solution pair. It follows that $\A\deltaf+\B_{S}(\deltas)_{S} = \bm{0}$, where $\deltaf = \x-\x^{*}$ and $\deltas = \u_{S}- \u^{*}_{S}$. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns form orthonormal bases of $\col(\A)$ and $\col(\B_{S})$, respectively. The equation $ \A\deltaf+\B_{S}(\deltas)_{S} = \bm{0}$ can equivalently be written as $ \sum_{i=1}^{r} \langle \A\deltaf\,,\U_i \rangle \U_i + \sum_{i=1}^{q} \langle \B_{S}(\deltas)_{S}\,,\V_i \rangle \V_i = \bm{0}$
with $\U_i$ and $\V_i$ denoting the $i$-th columns of $\U$ and $\V$, respectively. More compactly, we have $\begin{bmatrix} \U & \V \end{bmatrix} \begin{bmatrix} \{\langle \A\deltaf\,,\U_i\rangle\}_{i=1}^{r} \\ \{\langle \B_{S}(\deltas)_{S}\,,\V_i\rangle\}_{i=1}^{q}\end{bmatrix} = \bm{0}$. Noting that the matrix $\begin{bmatrix} \U & \V \end{bmatrix}$ has full column rank, the homogeneous problem admits only the trivial solution, implying that $\A\deltaf=\bm{0}$ and $\B_{S}(\deltas)_{S} = \bm{0}$. Since $\B_{S}$ has full column rank and $\deltaf \in \{\ker(\A) \cap \ker(\A)^{\perp}\}$, it follows that $\deltaf = \deltas= \bm{0}$. Therefore, $(\x^{*},\u^{*})$ is the unique solution.
\end{proof}
The uniqueness result in the above theorem hinges on the representation of the space $\real^{m}$ as the direct sum of the subspaces $\col(\A)$ and $\col(\B_S)$.
We use the definition of the minimum principal angle between two subspaces, and its formulation in terms of singular values
\citep{bjorck1973numerical}, to derive an explicit geometric condition for the uniqueness analysis of the linear system in the general case.
\begin{defn}
Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns form orthonormal bases of $\col(\A)$ and $\col(\B)$, respectively. The minimum principal angle between the subspaces $\col(\A)$ and $\col(\B)$ is defined by
\begin{equation}
\cos (\mu(\U,\V))= \underset{\u \in \col(\U), \v \in \col(\V)}{\max}\,\, \frac{\u^{T}\v}{||\u||_{2}||\v||_{2}}.
\end{equation}
The cosine of the minimum angle $\mu(\U,\V)$ is also equal to the largest singular value of $\U^T\V$, i.e., $\cos (\mu(\U,\V))=\sigma_{1}(\U^T\V)$.
\end{defn}
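The condition $\sigma_{1}(\U^T\V)<1$ is straightforward to check numerically. The following is a minimal \texttt{numpy} sketch, assuming $\A$ and $\B_{S}$ have full column rank; the random matrices and dimensions are illustrative.
\begin{verbatim}
import numpy as np

def cos_min_principal_angle(A, B_S):
    # Orthonormal bases of col(A) and col(B_S) via reduced QR.
    U, _ = np.linalg.qr(A)
    V, _ = np.linalg.qr(B_S)
    # Largest singular value of U^T V = cosine of the minimum angle.
    return np.linalg.svd(U.T @ V, compute_uv=False)[0]

rng = np.random.default_rng(0)
m, r, s = 50, 5, 8
sigma1 = cos_min_principal_angle(rng.standard_normal((m, r)),
                                 rng.standard_normal((m, s)))
print("sigma_1 =", sigma1, "; uniqueness condition holds:", sigma1 < 1)
\end{verbatim}
For generic (e.g., Gaussian) subspaces with $r+s<m$, one typically finds $\sigma_{1}<1$, consistent with the theorem.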
\begin{thm}\label{thm:uniqueness_maximum_angle}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$.
Assume that $\B_{S}$ has full column rank. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns form orthonormal bases of $\col(\A)$ and $\col(\B_{S})$, respectively. If $\cos(\mu(\U,\V))= \sigma_{1}(\U^T\V)<1$, then $(\x^{*},\u^{*})$ is the unique solution to the linear system among all pairs $(\x,\u)$ in which $\u$ is $s$-sparse and supported on $S$ and $\x \in \ker(\A)^{\perp}$.
\end{thm}
\begin{proof}
Consider any candidate solution pair $(\x^{*}+\deltaf, \u^{*}+\deltas)$. We will prove uniqueness by showing that $\A\deltaf+\B_{S}(\deltas)_{S}=0$ if and only if $\deltaf=\bm{0}$ and $(\deltas)_{S} =\bm{0}$.
Using the orthonormal bases $\U$ and $\V$, $\A\deltaf+\B_{S}(\deltas)_{S}$ can be represented as $\displaystyle \A\deltaf+\B_{S}(\deltas)_{S} =
\begin{bmatrix}
\U & \V\\
\end{bmatrix}
\begin{bmatrix}
\U^{T}\A\deltaf\\
\V^{T} \B_{S}(\deltas)_{S}\\
\end{bmatrix}
$. For simplicity of notation, let $\K$ denote the block matrix: $\displaystyle \K = \begin{bmatrix} \U & \V\\ \end{bmatrix}$. If we can show that the columns of $\K$ are linearly independent, it follows that $\A\deltaf+\B_{S}(\deltas)_{S}=\bm{0}$ if and only if $\A\deltaf=\bm{0}$ and $\B_{S}(\deltas)_{S}=\bm{0}$.
We now consider the matrix $\K^T\K$ which has the following representation
\begin{align*}
\K^T\K &= \begin{bmatrix}
[\I]_{r\times r} & [\U^{T}\V]_{r\times q}\\
[\V^{T}\U]_{q\times r} & [\I]_{q\times q}
\end{bmatrix} \\
&= \begin{bmatrix}
[\I]_{r\times r} & [\bm{0}]_{r\times q}\\
[\bm{0}]_{q\times r} & [\I]_{q\times q}
\end{bmatrix}
+
\begin{bmatrix}
[\bm{0}]_{r\times r} & [\U^{T}\V]_{r\times q}\\
[\V^{T}\U]_{q\times r} & [\bm{0}]_{q\times q}
\end{bmatrix}
.
\end{align*}
With the singular value decomposition of $\U^{T}\V$ being $\U^{T}\V = \Q\mathbf{\bSigma}\R^{T}$, the last matrix in the above representation has the following equivalent
form $\begin{bmatrix}
\bm{0} & \U^{T}\V\\
\V^{T}\U & \bm{0}
\end{bmatrix}
= \begin{bmatrix} \Q & \bm{0}\\ \bm{0} &\R \end{bmatrix}
\begin{bmatrix} \bm{0} & \bSigma\\\bSigma & \bm{0}\end{bmatrix}
\begin{bmatrix} \Q & \bm{0}\\ \bm{0} &\R \end{bmatrix} ^{T}
$.
It now follows that $\begin{bmatrix} \bm{0} & \U^{T}\V\\ \V^{T}\U & \bm{0} \end{bmatrix}$ is \emph{similar} to the matrix $ \begin{bmatrix}
\bm{0} & \mathbf{\bSigma}\\
\mathbf{\bSigma} & \bm{0}
\end{bmatrix}$. Hence, the nonzero eigenvalues of $\K^T\K$ are $1\pm \sigma_{i}$, $1\le i\le \min(r,q)$, with $\sigma_{i}$ denoting the $i$-th largest singular value of $\U^T\V$.
Using the assumption $\sigma_{1}<1$ yields the bound $ \lambda_{\min}\left(\K^T\K\right)>0$. It follows that the columns of $\K$ are linearly independent, and hence
$\A\deltaf=\bm{0}$ and $\B_{S}(\deltas)_{S} = \bm{0}$. Since $\B_{S}$ has full column rank and $\deltaf \in \{\ker(\A) \cap \ker(\A)^{\perp}\}$, it follows that $ \deltaf=\bm{0}$
and $(\deltas)_{S} = \bm{0}$. This concludes the proof.
\end{proof}
A restrictive assumption of the above theorem is that the support of the sought-after $s$-sparse solution $\u^{*}$ is known. We can remove this assumption by considering $\col(\A)$ and $\col(\B_{T})$ where $T$ is an arbitrary subset of $\{1,2,...,n\}$ with $|T|=s$. More precisely, we state the following corollary whose proof is similar to the proof of Theorem \ref{thm:uniqueness_maximum_angle}.
\begin{cor}\label{thm:uniqueness_maximum_angle_arbitrary_sparsity}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$
and let $T$ be an arbitrary subset of $\{1,2,...,n\}$ with $|T|\le s$. Assume that any $2s$ columns of $\B$ are linearly independent. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns form orthonormal bases of $\col(\A)$ and $\col(\B_{S\cup T})$, respectively. If $\cos(\mu(\U,\V))= \sigma_{1}(\U^T\V)<1$ holds for all choices of $T$, then $(\x^{*},\u^{*})$ is the unique solution to the linear system among all pairs $(\x,\u)$ in which $\u$ is $s$-sparse and $\x \in \ker(\A)^{\perp}$.
\end{cor}
Of interest is the identification of simple conditions such that $\sigma_{1}(\U^T\V)<1$. The following theorem proposes one such condition to establish uniqueness.
\begin{thm}\label{thm:uniqueness_maximum_angle_sparsity}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $S$, with $|S|=s$, denote the support of $\u^{*}$.
Assume that $\B_{S}$ has full column rank. Let $\U\in \real^{m\times r}$ and $\V\in \real^{m\times q}$ be matrices whose columns form orthonormal bases of $\col(\A)$ and $\col(\B_{S})$, respectively.
Let $\mu = \underset{i,j}{\max}\,\, |(\U^T\V)_{i,j}|$. If $s<\frac{1}{\sqrt{r}\mu}$, then $(\x^{*},\u^{*})$ is the unique solution to the linear system among all pairs $(\x,\u)$ in which $\u$ is $s$-sparse and supported on $S$ and $\x \in \ker(\A)^{\perp}$.
\end{thm}
\begin{proof}
It suffices to show that $\sigma_1<1$. Noting that $\sigma_{1} = ||\U^T\V||_{2}$, we apply the matrix norm inequality $||\U^T\V||_{2} \le \sqrt{r}\, ||\U^T\V||_{\infty}$ to obtain $\sigma_{1} \le \sqrt{r}\,||\U^T\V||_{\infty} \le \sqrt{r}\, \mu s < 1$.
\end{proof}
The constant $\mu$ is the coherence of the matrix $\U^T\V$ \citep{donoho2005stable,tropp2004greed}. The above result states that if the mutual coherence of $\U^T\V$ is small, we can accommodate a larger support size $s$ for the underlying signal component $\u^{*}$. We note that, up to a scaling factor, $\sigma_1(\U^T\V)$ is the block coherence of $\U$ and $\V$ \citep{eldar2010block}. However, unlike the condition in \citep{eldar2010block}, we do not restrict the dictionaries $\A$ and $\B$ to have linearly independent columns.
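As a quick numerical illustration of the condition in Theorem \ref{thm:uniqueness_maximum_angle_sparsity}, the following sketch (with illustrative, randomly generated bases) computes the coherence $\mu$ and the resulting bound on $s$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, r, s = 100, 4, 3
U, _ = np.linalg.qr(rng.standard_normal((m, r)))  # basis of col(A)
V, _ = np.linalg.qr(rng.standard_normal((m, s)))  # basis of col(B_S)
mu = np.max(np.abs(U.T @ V))                      # coherence of U^T V
bound = 1.0 / (np.sqrt(r) * mu)
print("mu =", mu, "; bound on s =", bound,
      "; condition s < bound holds:", s < bound)
\end{verbatim}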
In the next subsection, we propose a convex program
to recover the dense and sparse vectors. Theorem \ref{thm:main_result} establishes uniqueness and complexity results for the proposed optimization program.
\subsection{Dense and sparse recovery via convex optimization}
Given that the dense and sparse coding problem seeks a dense vector $\x^{*}$ and a sparse solution $\u^{*}$, with measurements given as $\y = \A\x^{*}+\B\u^{*}$, we propose the following convex optimization program
\begin{equation}\label{eq:l1l2_min}
\underset{\x,\u}{\min}\, \vectornorm{\A\x}_{2}^{2}+ \vectornorm{\u}_{1} \,\, \textrm{s.t.}\,\, \y = \A\x+\B\u.
\end{equation}
In this section, we show that, under certain conditions, the above minimization problem admits a unique solution. Our proof is a non-trivial adaptation of the existing analysis in \citep{kueng2014ripless} for the anisotropic compressive sensing problem. This analysis is based on a single measurement matrix and cannot be directly applied to our scenario. Let $\a_1,...,\a_m$ be a sequence of zero-mean i.i.d.\ random vectors drawn from some distribution $F$ on $\real^{p}$ and let $\b_1,...,\b_m$ be a sequence of zero-mean i.i.d.\ random vectors drawn from some distribution $G$ on $\real^{n}$. We can eliminate the dense component in the linear constraint by projecting the vector $\y$ onto the orthogonal complement of $\col(\A)$ to obtain $\mathcal{P}_{\col(\A)^{\perp}}(\y) = \mathcal{P}_{\col(\A)^{\perp}}(\B\u)$. With this, the matrix $\mathcal{P}_{\col(\A)^{\perp}}(\B)$ is central in the analysis to follow. We define the matrix $\C = \frac{1}{\sqrt{m}}\sum_{i=1}^{m}\e_i\c_i^{T}$,
where $\c_i = [\mathcal{P}_{\col(\A)^{\perp}}(\B)]^{T}\e_i$ denotes the $i$-th measurement vector corresponding to a row of this matrix.
Further technical discussion on the matrix $\C$ is deferred to \textbf{Appendix B}. We use the matrix $\C$ introduced above
and adapt the anisotropic compressive sensing theory in \citep{kueng2014ripless} to analyze uniqueness of the proposed program. Below, we give brief background to this theory highlighting important assumptions and results following the notation closely therein.\\
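For concreteness, the projection step above can be implemented directly; the following is a minimal \texttt{numpy} sketch, assuming $\A$ has full column rank (the function name is ours):
\begin{verbatim}
import numpy as np

def project_out_dense(A, B, y):
    # Q is an orthonormal basis of col(A), so P = I - Q Q^T is the
    # orthogonal projector onto col(A)^perp.
    Q, _ = np.linalg.qr(A)
    P = np.eye(A.shape[0]) - Q @ Q.T
    # P y = (P B) u: a linear system in u alone; the rows of P B play
    # the role of the measurement vectors c_i defined above.
    return P @ B, P @ y
\end{verbatim}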
\textbf{Anisotropic compressive sensing}: Given a sequence of zero-mean i.i.d random vectors $\d_1,...,\d_m$ drawn from some distribution $F$ on $\real^{n}$, with measurements $\y= \D\u^{*}$,
the anisotropic compressive sensing problem studies the following optimization program
\begin{equation}\label{eq:anisotropic_min}
\underset{\u}{\min}\,\vectornorm{\u}_{1} \quad\text{s.t.}\quad \y = \D\u ,
\end{equation}
where $\D = \frac{1}{\sqrt{m}} \sum_{i=1}^{m}\e_i\d_i^{T}$ and $\u^{*}$ is the sought-out sparse solution. The analysis makes three important assumptions.\\
\textbf{Completeness}: The covariance matrix $\bSigma$ is invertible with condition number denoted by $\kappa$.\\
\textbf{Incoherence}: The incoherence parameter is the smallest number $\nu$ such that
\begin{equation}\label{eq:inchoherence}
\underset{1\le i\le n}{\max}\, |\langle \d\,,\e_{i}\rangle|^{2}\le \nu \quad \text{and} \quad \underset{1\le i\le n}{\max}\, |\langle \d\,,E[\c\c^{*}]^{-1}\e_i\rangle|^{2}\le \nu
\end{equation}
hold almost surely.\\
\textbf{Conditioning of the covariance matrix}: We start with the following definition of the $s$-sparse condition number restated from \citep{kueng2014ripless}.
\begin{defn} \citep{kueng2014ripless}
The largest and smallest $s$-sparse eigenvalue of a matrix $\X$ are given by
\begin{align*}
\lambda_{\max}(s,\X) &:= \underset{\v,||\v||_{0}\le s}{\max}\, \frac{||\X\v||_{2}}{||\v||_{2}}\\
\lambda_{\min}(s,\X) &:= \underset{\v,||\v||_{0}\le s}{\min}\, \frac{||\X\v||_{2}}{||\v||_{2}}.
\end{align*}
The $s$-sparse condition number of $\X$ is $\displaystyle \text{cond}(s,\X) = \frac{\lambda_{\max}(s,\X)}{\lambda_{\min}(s,\X)}$.
\end{defn}
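Since the definition involves extrema over all $s$-sparse vectors, for small problems it can be evaluated by brute force over supports; a minimal (exponential-cost) \texttt{numpy} sketch with illustrative dimensions:
\begin{verbatim}
import numpy as np
from itertools import combinations

def sparse_extremes(X, s):
    # Over v supported on S, ||Xv||/||v|| ranges over the singular
    # values of the submatrix X_S, so it suffices to scan supports.
    n = X.shape[1]
    lo, hi = np.inf, 0.0
    for S in combinations(range(n), s):
        svals = np.linalg.svd(X[:, list(S)], compute_uv=False)
        hi = max(hi, svals[0])
        lo = min(lo, svals[-1])
    return hi, lo   # lambda_max(s, X), lambda_min(s, X)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10)) / np.sqrt(20)
lmax, lmin = sparse_extremes(X, 3)
print("cond(3, X) =", lmax / lmin)
\end{verbatim}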
Given these assumptions, the main result in \citep{kueng2014ripless} reads
\begin{thm} \citep{kueng2014ripless}\label{thm:anisotropic_main_result}
With $\kappa_{s} = \max\{\text{cond}(s,\bSigma),\text{cond}(s,\bSigma^{-1})\}$
let $\u^{*} \in \mathbb{C}^{n}$ be an $s$-sparse vector and let $\omega \ge 1$. If the number of measurements fulfills $m \ge C\kappa_{s}\,\nu\, \omega^{2}\,s\log n$, then the solution of the convex program \eqref{eq:anisotropic_min} is unique and equal to $\u^{*}$ with probability at least $1 - e^{-\omega}$.
\end{thm}
The proof of Theorem \ref{thm:anisotropic_main_result} is based on the dual certificate approach. The idea is to first propose a dual certificate vector $\v$ with sufficient conditions
that ensure uniqueness of the solution to the minimization problem. It then remains to construct the dual certificate satisfying the conditions. We seek a similar result for the uniqueness of the convex program corresponding to the dense and sparse coding model. However, the standard analysis cannot be directly applied since it only considers a single measurement matrix. This requires us to analyze the matrix $\C$ introduced earlier. The anisotropic compressive sensing analysis in \citep{kueng2014ripless} assumes the following conditions on the dual certificate $\v$
\begin{equation}\label{eq:dual_conditions}
||\v_{S} - \sgn(\u^{*}_{S})||_{2}\le \tfrac{1}{4} \,\,\,\,\text{and}\,\,\,\, ||\v_{S^{\perp}}||_{\infty}\le \tfrac{1}{4}.
\end{equation}
The following condition follows from the assumptions in Theorem \ref{thm:anisotropic_main_result}
\begin{equation}\label{eq:deviation_inequality}
||\bm{\Delta}_{S}||_{2}\le 2 ||\bm{\Delta}_{S^{\perp}}||_{2},
\end{equation}
where $\bm{\Delta} \in \text{Ker}(\D)$. The conditions \eqref{eq:dual_conditions} and \eqref{eq:deviation_inequality} will be used in the proof of our main result.
The main part of the technical analysis in \citep{kueng2014ripless} consists of using the assumptions in Theorem \ref{thm:anisotropic_main_result} and showing that
the above conditions \eqref{eq:dual_conditions} and \eqref{eq:deviation_inequality} hold with high probability.\\
\textbf{Main result}: Using the background discussed above, we assume completeness, incoherence, and conditioning of the covariance matrix $\bSigma$. Our main result is stated below.
\begin{thm}\label{thm:main_result}
Assume that there exists at least one solution to $ \y = \A\x+\B\u$, namely the pair $(\x^{*},\u^{*})$. Let $\omega\ge 1$ and define $\kappa_{s} = \max\{\text{cond}(s,\bSigma),\text{cond}(s,\bSigma^{-1})\}$. Assume the two conditions
\begin{equation}
||\B_{S}^{T}\A||\le \frac{1}{32||\x^{*}||_{2}}, \,\quad ||\B_{S^{\perp}}^{T}\A||_{\infty}\le \frac{1}{32||\x^{*}||_{\infty}}.
\end{equation}
If the number of measurements fulfills $m \ge C\kappa_{s}\,\nu\, \omega^{2}\,s\log n$, then the solution of the convex program \eqref{eq:l1l2_min} is unique and equal to $(\x^{*},\u^{*})$ with probability at least $1 - e^{-\omega}$.
\end{thm}
\begin{proof}[Proof sketch] Consider a feasible solution pair $(\x^{*}+\deltaf, \u^{*}+\deltas)$ and let the function $f(\x,\u)$ denote the objective in the optimization program.
The idea of the proof is to show that any other feasible solution is not minimal in the objective value, i.e., $f(\x^{*}+\deltaf,\u^{*}+\deltas)> f(\x^{*},\u^{*})$. Using duality and a
characterization of the subgradient $\Lam$ of the $\ell_1$ norm, we first show that $f(\x^{*}+\deltaf, \u^{*}+\deltas)>f(\x^{*},\u^{*})+\langle \sgn(\u_{S}^{*})+\Lam-\v-2\B^{T}\A\x^{*}\,,\deltas\rangle$, where $\v\in \col(\C^T)$ denotes the dual certificate and $\C= \mathcal{P}_{\col(\A)^{\perp}}(\B)$. It then remains to show that the term $\langle \sgn(\u_{S}^{*})+\Lam-\v-2\B^{T}\A\x^{*}\,,\deltas\rangle$ is positive. To show this, we further analyze this term and make use of the assumptions of the theorem, the dual certificate conditions \eqref{eq:dual_conditions}, and the deviation inequality
in \eqref{eq:deviation_inequality}. For a complete proof, see \textbf{Appendix B}. \end{proof}
\textbf{Complexity compared to $\ell_1$ minimization}: The sample complexity of solving the convex program corresponding to the dense and sparse coding problem is larger than that of $\ell_1$ minimization for the compressive sensing problem. Essentially, the constants $\kappa_{s}$ and $\nu$ in our analysis are expected to scale with $p+n$, in contrast to the compressive sensing analysis where they scale with $n$.
\section{Experiments}\label{sec:exps}
\subsection{Phase transition curves}
We generate \emph{phase transition curves} and present how the success rate of the recovery, using the proposed model, changes under different scenarios. To generate the data, we fix the number of columns of $\B$ to be $n = 100$. Then, we vary the sampling ratio $\sigma = \frac{m}{n + p} \in [0.05,0.95]$ and the sparsity ratio $\rho = \frac{s}{m}$ in the same range. The sensing matrix in our model is $[\A \quad \B]$, hence the apparent difference in the definition of $\sigma$ compared to ``traditional'' compressive sensing. In the case where we revert to the compressive sensing scenario ($p = 0$), the ratios coincide.\\
We generate random matrices $\A \in \mathbb{R}^{m \times p}$ and $\B \in \mathbb{R}^{m \times n}$ whose columns have expected unit norm. The vector $\u \in \mathbb{R}^n$ is supported on $s$ randomly chosen indices, with entries drawn according to a standard normal distribution, and $\bm{x} \in \mathbb{R}^p$ is generated as $\x = \A^T \boldsymbol{\gamma}$, where
$\boldsymbol{\gamma} \in \mathbb{R}^m$ is a random vector. The construction ensures that $\x$ does not belong to the null space of $\A$, and hence rules out trivial solutions with respect to the dense component. We normalize both $\x$ and $\u$ to have unit norm, and generate the measurement vector $\y \in \mathbb{R}^m$ as $\y = \A \x+ \B\u$. We solve the convex optimization problem in \eqref{eq:l1l2_min} to obtain the numerical solution pair $(\hat{\x}, \hat{\u})$ using \texttt{CVXPY}, and register a successful recovery if both $\frac{\lVert\hat{\x} - \x\rVert_2}{\lVert\x\rVert_2} \leq \epsilon$ and $\frac{\lVert\hat{\u} - \u\rVert_2}{\lVert\u\rVert_2} \leq \epsilon$, with $\epsilon = 10^{-3}$. For each choice of $\sigma$ and $\rho$, we average $100$ independent runs to estimate the success rate.\\
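A minimal sketch of a single recovery trial, using \texttt{CVXPY} as in our experiments; the dimensions below are illustrative, and the success criterion follows the text:
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, s = 60, 100, 5
p = int(0.1 * m)
A = rng.standard_normal((m, p)) / np.sqrt(m)
B = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
u_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_true = A.T @ rng.standard_normal(m)      # x = A^T gamma
x_true /= np.linalg.norm(x_true)
u_true /= np.linalg.norm(u_true)
y = A @ x_true + B @ u_true

x, u = cp.Variable(p), cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x) + cp.norm(u, 1)),
                  [y == A @ x + B @ u])
prob.solve()
eps = 1e-3
ok = (np.linalg.norm(x.value - x_true) <= eps * np.linalg.norm(x_true)
      and np.linalg.norm(u.value - u_true) <= eps * np.linalg.norm(u_true))
print("success:", ok)
\end{verbatim}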
Figure~\ref{fig:ptc} shows the phase transition curves for $p \in \{0.1m, 0.5m\}$ to highlight different ratios between $p$ and $n$. We observe that increasing $p$ leads to a deterioration in performance. This is expected, as it creates a greater \emph{overlap} between the spaces spanned by $\A$ and $\B$. We can view our model as explicitly modeling the noise of the system. In such a case, the number of columns of $\A$ explicitly encodes the complexity of the noise model: as $p$ increases, so does the span of the noise space. \\
Extending the signal processing interpretation, note that we model the noise signal $\x$ as a dense vector, which can be seen as encoding smooth areas of the signal that correspond
to \emph{low-frequency} components. By contrast, the signal $\u$ has, by construction, a sparse structure, containing \emph{high-frequency} information, an interpretation that will be further validated in the next subsection. Further numerical experiments comparing the dense and sparse coding model to noisy compressive sensing can be found in \textbf{Appendix C}.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[scale=0.8]{figs/kernel/heatmap_both_p=01_magma}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[scale=0.8]{figs/kernel/heatmap_both_p=05_magma}
\end{subfigure}
\caption{Phase transition curves for $p = 0.1m$ (top) and $p = 0.5m$ (bottom).}
\label{fig:ptc}
\end{figure}
\subsection{Classification and image denoising}
We formulate the dense and sparse dictionary learning problem as minimizing the objective
\[
\underset{\substack{\A,\B, \{\x^j\}_{j=1}^J, \{\u^j\}_{j=1}^J}}{\min} \sum_{j=1}^J \frac{1}{2} \| \y^j - \A \x^j - \B \u^j \|_2^2 + \frac{1}{2 \lambda_x} \| \A\x^j \|_2^2 + \lambda_u\| \u^j \|_1,
\]
where $J$ is the number of images, $\lambda_x$ controls the smoothness of $\A\x^j$ and $\lambda_u$ controls the degree of sparsity. Based on the objective, we use deep unfolding to construct an unfolding neural network~\citep{tolooshams2020deep,LISTA}, which we term the dense and sparse autoencoder (DenSaE), tailored to learning the dictionaries from the dense and sparse model. The encoder maps $\y^{j}$ into a dense vector $\x_T^{j}$ and a sparse one $\u_T^{j}$ by unfolding $T$ proximal gradient iterations. The decoder reconstructs the image. For classification, we use $\u_T$ and $\x_T$ as inputs to a linear classifier $\bm{C}$ that maps them to the predicted class $\bm{\hat q}$. We learn the dictionaries $\A$ and $\B$, as well as the classifier $\C$, by minimizing the weighted reconstruction (Rec.) and classification (Logistic) loss (i.e., $(1 - \beta)$ Rec. + $\beta$ Logistic). Figure~\ref{fig:ae} shows the DenSaE architecture (for details see \textbf{Appendix D}).\\
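To make the unfolding concrete, the following is a minimal \texttt{numpy} sketch of the forward pass for a single image; the updates are one plausible joint proximal-gradient unrolling of the objective above, with illustrative step sizes, and are not the exact trained implementation.
\begin{verbatim}
import numpy as np

def soft(z, b):
    # soft-thresholding (prox of the l1 norm)
    return np.sign(z) * np.maximum(np.abs(z) - b, 0.0)

def densae_forward(y, A, B, T=15, alpha_x=0.1, alpha_u=0.1,
                   lam_x=1.0, lam_u=0.1):
    x = np.zeros(A.shape[1])
    u = np.zeros(B.shape[1])
    for _ in range(T):
        r = y - A @ x - B @ u           # shared residual
        # prox step on u: grad of the smooth part is -B^T r
        u = soft(u + alpha_u * (B.T @ r), alpha_u * lam_u)
        # gradient step on x: grad of 0.5||y - Ax - Bu||^2
        #   + (1/(2 lam_x))||Ax||^2 is -A^T r + (1/lam_x) A^T A x
        x = x + alpha_x * (A.T @ r - (A.T @ (A @ x)) / lam_x)
    return x, u, A @ x + B @ u          # encoder outputs and decoder
\end{verbatim}
Here $\A$ and $\B$ are the dictionaries learned during training; whether the thresholds and step sizes are also learned is an implementation choice.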
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\tikzstyle{block} = [draw, fill=none, rectangle,
minimum height=1em, minimum width=1em]
\tikzstyle{sum} = [draw, fill=none, minimum height=0.1em, minimum width=0.1em, circle, node distance=1cm]
\tikzstyle{cir} = [draw, fill=none, circle, line width=0.7mm, minimum width=0.3cm, node distance=1cm]
\tikzstyle{loss} = [draw, fill=none, color=black, ellipse, line width=0.5mm, minimum width=0.7cm, node distance=1cm]
\tikzstyle{blueloss} = [draw, fill=none, color=black, ellipse, line width=0.5mm, minimum width=0.7cm, node distance=1cm, color=black]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
cloud/.style={
draw=red,
thick,
ellipse,
fill=none,
minimum height=1em}
\node [input, name=input] {};
\node [cir, node distance=0.0001cm, right of=input] (Y) {$\y$};
\node [block, right of=Y, minimum width=1cm, node distance=1.2cm] (BT) {$\alpha_u\B^{\text{T}}$};
\node [block, right of=BT, minimum width=0.7cm, node distance=1.3cm] (pb) {$\prox_{b}$};
\node [cir, right of=pb, node distance=1.cm] (ut) {$\u_{t}$};
\node [fill=none, below of=ut, node distance=0.5cm] (connection_1) {$$};
\node [fill=none, below of=ut,left=3.6pt, node distance=0.5cm] (connection_2) {$$};
\node [cir, right of=ut, node distance=1.2cm] (uT) {$\u_{T}$};
\node [output, node distance=1.2cm, right of=uT] (output) {};
\node [block, right of=uT, node distance=1.1cm] (B) {$\B$};
\node [block, below of=pb, node distance=0.8cm, left=0.5cm] (B_cns) {$\B$};
\node [rectangle, fill=none, node distance=0.6cm, below of=B] (middle) {};
\node [block, above of=BT, minimum width=0.7cm, node distance=1.01cm] (AT) {$\alpha_x\A^{\text{T}}$};
\node [cir, above of=ut, node distance=1.01cm] (xt) {$\x_{t}$};
\node [fill=none, below of=xt, node distance=0.50cm] (connection_x1) {$$};
\node [fill=none, below of=xt,left=3.6pt, node distance=0.50cm] (connection_x2) {$$};
\node [cir, above of=uT, node distance=1.01cm] (xT) {$\x_{T}$};
\node [output, node distance=1.2cm, right of=xT] (output_x) {};
\node [block, above of=B, node distance=1.01cm] (A) {$\A$};
\node [block, below of=B_cns, node distance=0.6cm] (A_cns) {$\A$};
\node [block, right of=A_cns, node distance=1.2cm] (pr) {$1 + \frac{1}{\lambda_x}$};
\node [rectangle, fill=none, node distance=0.6cm, below of=A] (middle) {};
\node [output, right of=A, node distance=0.5cm, below=0.5cm] (out) {};
\node [cir, right of=out, node distance=1.2cm] (yhat) {$\hat \y$};
\node [rectangle, below of=B, minimum width=0.01cm, node distance=0.651cm, left=0.000cm] (Y1m) {};
\node [rectangle, below of=B, minimum width=0.01cm, node distance=0.58cm, left=0.006cm] (Y1ml) {};
\node [rectangle, below of=B, minimum width=0.01cm, node distance=0.58cm, right=0.006cm] (Y1mr) {};
\draw[thick, line width=2, black, ->] ($(xt.north east)+(1.1,0.65)$) -- ($(xt.north east)+(3.5,0.65)$);
\draw[thick, line width=2, black, ->] ($(xt.north east)+(-4.,0.65)$) -- ($(xt.north east)+(1.,0.65)$);
\draw[thick,dotted] ($(xT.north east)+(-0.95,+0.17)$) rectangle ($(AT.west)+(-0.25,-2.8)$);
\node [rectangle, fill=none, node distance=1.12cm, right=-5pt,above of=A] (text) {\footnotesize{Decoder}};
\node [rectangle, fill=none, node distance=2.15cm, right=0pt,above of=pb] (encoder) {\footnotesize{Encoder}};
\draw [->] (Y) -- node [name=m, pos=0.3, above] {} (BT);
\draw [] (Y) -- node [name=mx, pos=0.4, below] {} (BT);
\draw [->] (mx) |- node [] {} (AT);
\draw [->] (BT) -- node[name=s, pos=0.2, above] {} (pb);
\draw [->] (pb) -- node[] {} (ut);
\draw [-] (ut) |- node[] {} (connection_2);
\draw [->] (connection_1) -| node[] {} (s);
\draw [->] (ut) -- node[name=loop, pos=0.1, above] {} (uT);
\draw [->] (uT) -- node[name=forzu, pos=0.1, above] {} (B);
\draw [->] (loop) |- node[] {} (B_cns);
\draw [->] (AT) -- node[name=sx, pos=0.1, above] {} (xt);
\draw [-] (xt) |- node[] {} (connection_x2);
\draw [->] (connection_x1) -| node[] {} (sx);
\draw [->] (xt) -- node[name=loop_x, pos=0.3, above] {} (xT);
\draw [->] (xt) -- node[name=loop_x2, pos=0.55, above] {} (xT);
\draw [->] (xT) -- node[name=forzx, pos=0.3, above] {} (A);
\draw [->] (loop_x) |- node[] {} (pr);
\node [cir, below of=uT, node distance=0.95cm] (z) {$\bm{z}$};
\node [block, right of=z, node distance=0.8cm] (C) {$\bm{C}$};
\node [block, right of=C, node distance=0.9cm] (softmax) {$\prox_{\text{\tiny max}}$};
\node [cir, right of=softmax, node distance=1.1cm] (qhat) {$\bm{\hat q}$};
\draw[thick, line width=2, black, ->] ($(z.south east)+(-0.5,-0.3)$) -- ($(z.south east)+(2.5,-0.3)$);
\node [rectangle, fill=none, node distance=0.75cm, left=20pt,below of=softmax] (classifier) {\footnotesize{Classifier}};
\draw [->] (loop_x2) |- node[] {} (z);
\draw [->] (z) -- node[] {} (C);
\draw [->] (C) -- node[] {} (softmax);
\draw [->] (softmax) -- node[] {} (qhat);
\draw [->] (pr) -- node[] {} (A_cns);
\draw [->] (B_cns) -| node[name=atob, pos=0.5, above] {} (m);
\draw [->] (A_cns) -| node[] {} (m);
\draw [->] (A_cns) -| node[] {} (atob);
\draw [->] (B) -| node[] {} (out);
\draw [->] (A) -| node[] {} (out);
\draw [->] (out) -- node[] {} (yhat);
\node [rectangle, fill=none, node distance=1.7cm, above of=pb] (text) {\footnotesize{Repeat $T$ times}};
\end{tikzpicture}
\end{minipage}
\caption{DenSaE. The vector $\bm{z}$ is the normalized stacked features augmented with a constant $1$ (i.e., $\bm{z} = [\mathbf{1} ; \frac{[\x_{T} ; \u_{T}]}{\| [\x_{T} ; \u_{T}]\|}]$); $\prox_b$ and $\prox_{\text{\tiny max}}$ are the soft-thresholding and soft-max operators, respectively.}
\vspace{-4mm}
\label{fig:ae}
\end{figure}
We examined the following questions.\\
\begin{itemize}[leftmargin=6mm, itemsep=0.1mm, parsep=0pt, topsep=0pt]
\item [a)] How do the discriminative, reconstruction, and denoising capabilities change as we vary the number of filters in $\A$ vs. $\B$?
\item [b)] What is the performance of DenSaE compared to sparse coding networks?
\item [c)] What data characteristics does the model capture?\\
\end{itemize}
As baselines, we trained two variants, $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$ and $\text{CSCNet}_{\text{LS}}^{\text{tied}}$, of CSCNet~\citep{simon2019rethinking}, an architecture tailored to dictionary learning for the sparse coding problem. In $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$, the bias is a shared hyper-parameter. In $\text{CSCNet}_{\text{LS}}^{\text{tied}}$, we learn a different bias for each filter by minimizing the reconstruction loss. When the dictionaries are non-convolutional, we call the network SCNet.
\begin{table}[!t]
\renewcommand{\arraystretch}{0.7}
\centering
\caption{DenSaE's performance on MNIST test dataset from both disjoint (D) and joint ($\text{J}_{\beta}$) training.}
\setlength\tabcolsep{1.2pt}
\setlength\parskip{0pt}
\begin{tabular}{c|c|ccccc}
\toprule
\multicolumn{1}{c}{} &\multicolumn{1}{c}{} & $\substack{\text{SCNet}_{\text{hyp}}^{\text{LS}}}$ & $\substack{\text{SCNet}_{\text{hyp}}^{\text{tied}}}$ & $\substack{5\A\\395\B}$ & $\substack{25\A\\375\B}$ & $\substack{200\A\\200\B}$\\
\midrule
\multicolumn{1}{c}{} & $\frac{\A}{\A + \B}$ model & - & - & $1.25$ & $6.25$ & $50$\\
\midrule
\multirow{4}{0.7cm}{D} & Acc. & \multicolumn{1}{|c}{94.16} & 98.32 & 98.18 & 98.18 & 96.98\\
& Rec. & \multicolumn{1}{|c}{1.95} & 6.80 & 6.83 & 6.30 & 3.04\\
& $\frac{\A}{\A + \B}$ class & - & - & $0$ & $0$ & $0$ \\
& $\frac{\A}{\A + \B}$ rec. & - & - & $8$ & $28$ & $58$\\
\hline
\multirow{4}{0.7cm}{$\text{J}_{0.75}$} & Acc. & 96.91 & 98.18 & 98.19 & 98.23 & 97.64\\
& Rec. & 2.17 & 1.24 & 0.75 & 1.11 & {\bf 0.51}\\
& $\frac{\A}{\A + \B}$ class & - & - & $8$ & $8$ & $84$\\
& $\frac{\A}{\A + \B}$ rec. & - & - & $8$ & $36$ & $8$\\ \midrule
\multirow{4}{0.7cm}{$\text{J}_{1}$} & Acc. & 96.06 & 98.59 & {\bf 98.61} & 98.56 & 98.40\\
& Rec. & 71.20 & 47.70 & 32.61 & 30.20 & 25.57\\
& $\frac{\A}{\A + \B}$ class & - & - & $16$ & $46$ & $42$\\
& $\frac{\A}{\A + \B}$ rec. & - & - & $0$ & $2$ & $4$\\
\bottomrule
\end{tabular}
\vspace{-5mm}
\label{tab:class}
\end{table}
\subsubsection{DenSaE strikes a balance between discriminative capability and reconstruction}
We study the case when DenSaE is trained on the MNIST dataset for joint reconstruction and classification purposes. We show a) how the explicit imposition of sparse and dense representations in DenSaE helps to balance discriminative and representation power, and b) that DenSaE outperforms SCNet. We warm start the training of the classifier using dictionaries obtained by first training the autoencoder, i.e., with $\beta = 0$.\\
\textbf{Characteristics of the representations $\x_T$ and $\u_T$}: To evaluate the discriminative power of the representations learned by training only the autoencoder, we first trained the classifier \emph{given} the representations (i.e., first train $\A$ and $\B$ with $\beta = 0$, then train $\C$ with $\beta = 1$). We call this disjoint training. The four rows of section D of Table~\ref{tab:class} show, respectively, the classification accuracy (Acc.), the $\ell_2$ reconstruction loss (Rec.), and the relative contributions, expressed as percentages, of the dense and sparse representations to classification and reconstruction for disjoint training. Each column of $[\A\ \B]$, and of $\C$, corresponds to either a dense or a sparse feature. For reconstruction, we find the indices of the $50$ most important columns and report the proportion of these that represent dense features. For each of the $10$ classes (rows of $\C$), we find the indices of the $5$ most important columns (features) and compute the proportion of the total of $50$ indices that represent dense features.
The first row of Table~\ref{tab:class} shows the proportion of columns of $[\A\ \B]$ that represent dense features. Comparing this row to the third and fourth rows of section D reveals, respectively, the importance of $\x$ for reconstruction and of $\u$ for classification. Indeed, the first two rows of section D show that, as the proportion of dense features increases, DenSaE gains reconstruction capability but loses classification accuracy. Moreover, in DenSaE, the most important features for classification are all from $\B$, and the contribution of $\A$ to reconstruction is greater than its percentage in the model, which demonstrates that dense and sparse coding autoencoders balance discriminative and representation power. \\
The table also shows that DenSaE outperforms $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ in classification and $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ in reconstruction. We observed that, in the absence of noise, training $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ results in dense features with negative biases, hence making its performance close to that of DenSaE with a large number of atoms in $\A$. We also see that, in the absence of a supervised classification loss, $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ fails to learn discriminative features useful for classification. On the other hand, enforcing sparsity in $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ suggests that sparse representations are useful for classification.\\
\textbf{How do the roles of $\x_T$ and $\u_T$ change as we vary $\beta$ in joint training?}: In joint training of the autoencoder and the classifier, it is natural to expect the reconstruction loss to increase compared to disjoint training. This is indeed the case for $\text{SCNet}_{\text{hyp}}^{\text{LS}}$: as we go from disjoint to joint training and as $\beta$ increases (Table~\ref{tab:class}, sections labeled J), the reconstruction loss increases and the classification accuracy increases overall. However, for $\beta < 1$, joint training of both networks that enforce some sparsity on their representations, $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ and DenSaE, improves both reconstruction and classification. Moreover, as we increase the importance of the classification loss (i.e., increase $\beta$), the contribution of the dense representations decreases in reconstruction and increases in discrimination.\\
For purely discriminative training ($\beta = 1$), DenSaE outperforms both $\text{SCNet}_{\text{hyp}}^{\text{LS}}$ and $\text{SCNet}_{\text{hyp}}^{\text{tied}}$ in classification accuracy and representation capability. We speculate that this results from the fact that, by construction, the encoder of DenSaE seeks to produce two sets of representations: a dense one, mostly important for reconstruction, and a sparse one, useful for classification. In some sense, the dense component acts as a prior that promotes good reconstruction. More detailed results can be found in \textbf{Appendix D}.\\
\begin{figure*}
\begin{minipage}[b]{0.8\linewidth}
\centering
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto, node distance=2cm, >=latex',
cloud/.style={
draw=red,
thick,
ellipse,
fill=none,
minimum height=1em}]
\node [input, name=input] {};
\node [rectangle, fill=none, node distance=0.000001cm, right of=input] (model) {a) $\text{DenSaE}_{\substack{4\A\\60\B}}$};
\node [rectangle, fill=none, node distance=2.2cm, right of=model] (A) {$\includegraphics[width=0.138\linewidth]{./figs/noisy}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=A] (B) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_ax_hat}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=B] (C) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_bu_hat}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=C] (D) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_img_hat}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=D] (E) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_A}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=E] (F) {$\includegraphics[width=0.138\linewidth]{./figs/4A60B_B}$};
\node [rectangle, fill=none, node distance=1.12cm, above of=A] (text) {Noisy};
\node [rectangle, fill=none, node distance=1.12cm, above of=B] (text) {$\A\x$};
\node [rectangle, fill=none, node distance=1.12cm, above of=C] (text) {$\B\u$};
\node [rectangle, fill=none, node distance=1.12cm, above of=D] (text) {Denoised};
\node [rectangle, fill=none, node distance=1.12cm, above of=E] (text) {$\A$};
\node [rectangle, fill=none, node distance=1.12cm, above of=F] (text) {$\B$};
\node [rectangle, fill=none, node distance=2.25cm, below of=model] (model) {b) $\text{CSCNet}_{\text{LS}}^{\text{tied}}$};
\node [rectangle, fill=none, node distance=2.25cm, below of=A] (A) {$\includegraphics[width=0.138\linewidth]{./figs/img}$};
\node [rectangle, fill=none, node distance=2.25cm, below of=B] (B) {$\includegraphics[width=0.138\linewidth]{./figs/64B_ax_emi}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=B] (C) {$\includegraphics[width=0.138\linewidth]{./figs/64B_bu_emi}$};
\node [rectangle, fill=none, node distance=1.95cm, right of=C] (D) {$\includegraphics[width=0.138\linewidth]{./figs/64B_img_hat}$};
\node [rectangle, fill=none, node distance=1.75cm, below of=E] (E) {$\includegraphics[width=0.138\linewidth]{./figs/64B_A_emi}$};
\node [rectangle, fill=none, node distance=1.045cm, below of=E] (U) {$\includegraphics[width=0.138\linewidth]{./figs/64B_unused}$};
\node [rectangle, fill=none, node distance=2.25cm, below of=F] (F) {$\includegraphics[width=0.138\linewidth]{./figs/64B_B_emi}$};
\node [rectangle, fill=none, node distance=1.12cm, above of=A] (text) {Original};
\node [rectangle, fill=none, node distance=1.12cm, above of=B] (text) {Implicit $\A\x$};
\node [rectangle, fill=none, node distance=1.12cm, above of=C] (text) {Implicit $\B\u$};
\node [rectangle, fill=none, node distance=1.12cm, above of=D] (text) {Denoised};
\node [rectangle, fill=none, node distance=0.62cm, above of=E] (text) {Implicit $\A$};
\node [rectangle, fill=none, node distance=0.44cm, above of=U] (text) {Unused};
\node [rectangle, fill=none, node distance=1.12cm, above of=F] (text) {Implicit $\B$};
\end{tikzpicture}
\end{minipage}
\caption{Visualization of a test image for $\tau=50$. a) DenSaE $(4\A, 60\B)$, b) $\text{CSCNet}_{\text{LS}}^{\text{tied}}$.}
\label{fig:vis}
\end{figure*}
\noindent \underline{\textbf{Remark}}: As our network is non-convolutional, we do not compare it to the state-of-the-art, which is a convolutional network. We also do not compare our results with the network in~\citep{rolfe2013discriminative}, as that work does not report the reconstruction loss and involves a sparsity-enforcing loss that changes the learning behaviour.
\subsubsection{Denoising}
We trained DenSaE for supervised image denoising ($\beta = 0$) using BSD432 and tested it on BSD68~\citep{MartinFTM01} (see \textbf{Appendix D} for details). We varied the ratio of the number of filters in $\A$ and $\B$ while keeping the overall number of filters constant. We evaluate the model in the presence of Gaussian noise with standard deviation $\tau \in \{15,25,50,75\}$.\\
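For reference, the denoising tables below report performance presumably as the peak signal-to-noise ratio in dB (cf.\ the table labels); a minimal sketch of the standard definition, assuming the customary $[0,255]$ pixel range, is
\begin{equation*}
\mathrm{PSNR}(\hat{\y},\y)=10\log_{10}\frac{255^{2}}{\frac{1}{N}\left\Vert \hat{\y}-\y\right\Vert_{2}^{2}},
\end{equation*}
where $\hat{\y}$ is the denoised image and $N$ is the number of pixels.\\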
\textbf{Ratio of the number of filters in $\A$ and $\B$}: Unlike in reconstruction, Table~\ref{tab:psnr_ratio} shows that the smaller the number of filters associated with $\A$, the better DenSaE can denoise images. We hypothesize that this is a direct consequence of our findings from \textbf{Section~\ref{sec:th}} that the smaller the number of columns of $\A$, the easier the recovery of $\x$ and $\u$.\\
\begin{table}[!t]
\renewcommand{\arraystretch}{0.5}
\centering
\caption{DenSaE's denoising performance on test BSD68 as the ratio of filters in $\A$ and $\B$ changes.}
\setlength\tabcolsep{2pt}
\setlength\parskip{0pt}
\begin{tabular}{c|ccccc}
\toprule
$\tau$ & $1\A63\B$ & $4\A60\B$ & $8\A56\B$ & $16\A48\B$ & $32\A32\B$\\
\midrule
$15$ & {\bf 30.21} & 30.18 & 30.18 & 30.14 & 29.89\\
$25$ & {\bf 27.70} & {\bf 27.70} & 27.65 & 27.56 & 27.26\\
$50$ & {\bf 24.81} & {\bf 24.81} & 24.43 & 24.44 & 23.68\\
$75$ & 23.31 & {\bf 23.33} & 23.09 & 22.09 & 20.09\\
\bottomrule
\end{tabular}
\label{tab:psnr_ratio}
\end{table}
\begin{table}[!t]
\renewcommand{\arraystretch}{0.5}
\centering
\caption{DenSaE vs. CSCNet on test BSD68.}
\setlength\tabcolsep{2pt}
\setlength\parskip{0pt}
\begin{tabular}{c|ccc}
\toprule
$\tau$ & DenSaE & $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$ & $\text{CSCNet}_{\text{LS}}^{\text{tied}}$ \\
\midrule
$15$ & 30.21 & 30.12 & \bf 30.34 \\
$25$ & 27.70 & 27.51& \bf 27.75 \\
$50$ & \bf 24.81 & 24.54 & \bf 24.81\\
$75$ & \bf 23.33 & 22.83 & 23.32 \\
\bottomrule
\end{tabular}
\label{tab:psnr_sc}
\end{table}
\textbf{Dense and sparse coding vs. sparse coding}: Table~\ref{tab:psnr_sc} shows that DenSaE (best network from Table~\ref{tab:psnr_ratio}) denoises images better than $\text{CSCNet}_{\text{hyp}}^{\text{tied}}$, suggesting that the dense and sparse coding model represents images better than sparse coding. \\
\textbf{Dictionary characteristics}: Figure~\ref{fig:vis}(a) shows the decomposition of a noisy test image ($\tau = 50$) by DenSaE. The figure demonstrates that $\A\x$ captures low-frequency content while $\B\u$ captures high-frequency details (edges). This is corroborated by the smoothness of the filters associated with $\A$, and the Gabor-like nature of those associated with $\B$~\citep{MehrotraR1992Gfed}. We observed similar behaviour when we tuned $\lambda_x$, and found that, as $\lambda_x$ decreases, $\A\x$ captures lower frequencies and $\B\u$ a broader range.\\
\textbf{CSCNet implicitly learns the $\A\x + \B\u$ model in the presence of noise}: We observed that $\text{CSCNet}_{\text{LS}}^{\text{tied}}$ comprises three groups of filters: one with small biases, one with intermediate values, and a third with large ones (see \textbf{Appendix D} for bias visualizations). We found that the feature maps associated with the large bias values are all zero. Moreover, the majority of the features are associated with intermediate bias values and are sparse, in contrast to the small number of feature maps with small bias values, which are dense. These observations suggest that {\it autoencoders implementing the sparse coding model ($\y = \B\u$), when learning the biases by minimizing the reconstruction error, implicitly perform two functions.} First, they select the optimal number of filters. Second, they partition the filters into two groups: one that yields a dense representation of the input, and another that yields a sparse one. In other words, the architectures trained in this manner {\it implicitly learn the dense and sparse coding model} ($\y = \A\x + \B\u$). Figure~\ref{fig:vis}(b) shows the filters.\\
\section{Conclusions}\label{sec:conclusion}
This paper proposed a novel dense and sparse coding model for a flexible representation of a signal as $\y = \A\x+\B\u$. Our first result gives a verifiable condition that guarantees uniqueness of the model. Our second result uses tools from RIPless compressed sensing to show that, with sufficiently many linear measurements, a convex program with $\ell_1$ and $\ell_2$ regularizations can recover the components $\x$ and $\u$ uniquely with high probability. Numerical experiments on synthetic data confirm our observations.\\
We proposed a dense and sparse autoencoder, DenSaE, tailored to dictionary learning for the $\A\x + \B\u$ model. DenSaE, naturally decomposing signals into low- and high-frequency components, provides a balance between learning dense representations that are useful for reconstruction and discriminative sparse representations. We showed the superiority of DenSaE to sparse autoencoders for data reconstruction and its competitive performance in classification.
\small
\bibliographystyle{IEEEtranN}
\interlinepenalty=10000
\section{Introduction}
The three-gluon vertex is one of the QCD fundamental Green's functions. This vertex allows the computation of the strong coupling constant and the measurement of a static potential between color charges. Herein we report on an upgrade of the lattice computation of this vertex performed by some of the authors in \cite{duarte2016, proc2016}.
The three-gluon correlation function $G^{a_1 a_2 a_3}_{\mu_1 \mu_2 \mu_3} (p_1, p_2, p_3)$ is given by
\begin{equation}
\langle A^{a_1}_{\mu_1} (p_1) \, A^{a_2}_{\mu_2} (p_2) \, A^{a_3}_{\mu_3} (p_3) \rangle = V \, \delta( p_1 + p_2 + p_3) ~
{G^{a_1 a_2 a_3}_{\mu_1 \mu_2 \mu_3} (p_1, p_2, p_3)}
\end{equation}
and can be written in terms of the gluon propagator $D^{ab}_{\mu\nu}(p^2)$ and the one-particle irreducible (1PI) vertex $\Gamma$ using
\begin{equation}
{G^{a_1a_2a_3}_{\mu_1\mu_2\mu_3} (p_1, p_2, p_3)} = D^{a_1b_1}_{\mu_1\nu_1}(p_1) ~ D^{a_2b_2}_{\mu_2\nu_2}(p_2) ~ D^{a_3b_3}_{\mu_3\nu_3}(p_3)
{\Gamma^{b_1b_2b_3}_{\nu_1\nu_2\nu_3} (p_1, p_2, p_3)} .
\end{equation}
Bose symmetry requires the 1PI vertex to be symmetric under permutations of any pair $(p_i, a_i, \mu_i)$. Given that
\begin{equation}
\Gamma^{a_1 a_2 a_3}_{\mu_1 \mu_2 \mu_3} (p_1, p_2, p_3) = f_{a_1 a_2 a_3} \Gamma_{\mu_1 \mu_2 \mu_3} (p_1, p_2, p_3)
\end{equation}
then the function $\Gamma_{\mu_1 \mu_2 \mu_3} (p_1, p_2, p_3)$ must be antisymmetric under the interchange of any pair $(p_i, \mu_i)$.
A complete description of $\Gamma_{\mu_1 \mu_2 \mu_3} (p_1, p_2, p_3)$ in the continuum requires six Lorentz invariant form factors: two associated with the transverse component $\Gamma^{(t)}$ and the remaining four with the longitudinal component $\Gamma^{(l)}$ \cite{ballchiu}.
\section{Asymmetric momentum configuration}
In this work we consider the computation of the three-gluon vertex in the asymmetric momentum configuration $p_2=0$, as in \cite{alles, duarte2016}. In this case, the correlation function can be written as
\begin{equation}
G_{\mu_1\mu_2\mu_3} (p, 0, -p) = V \frac{N_c(N^2_c-1)}{4} \left[D(p^2)\right]^2 \, D(0) \frac{\Gamma (p^2)}{3} ~ ~ p_{\mu_2} ~T_{\mu_1\mu_3} (p).
\end{equation}
The contraction of the Lorentz $\mu_1$ and $\mu_3$ indices, together with the contraction with the momentum $p_\alpha$, gives
\begin{equation}
G_{\mu \, \alpha \,\mu} (p, 0, -p) \, p_\alpha = V \frac{N_c(N^2_c-1)}{4}
\, \left[D(p^2)\right]^2 \, D(0) ~~\Gamma (p^2) ~~ p^2 .
\end{equation}
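The factor $1/3$ of the previous expression cancels here through a one-line check: assuming $T_{\mu_1\mu_3}(p)$ is the usual transverse projector, $T_{\mu_1\mu_3}(p)=\delta_{\mu_1\mu_3}-p_{\mu_1}p_{\mu_3}/p^2$, its trace in four dimensions is
\begin{displaymath}
T_{\mu\mu}(p) = 4 - \frac{p^2}{p^2} = 3 ,
\end{displaymath}
which combines with $\Gamma(p^2)/3$ to give the result above.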
From this expression it is possible to extract the form factor $\Gamma (p^2)$. However, a lattice measurement of $\Gamma (p^2)$ requires the computation of the ratio
\begin{equation}
G_{\mu \alpha \mu} (p, 0, -p) p_\alpha / \left[D(p^2)\right]^2 \, D(0)
\end{equation}
and the extraction of $\Gamma (p^2)$ from this ratio will originate large statistical fluctuations at high momenta, where $D(p^2)$ becomes quite small. In fact, assuming Gaussian error propagation, it is possible to show that the statistical error on $\Gamma (p^2)$ behaves as $\Delta \Gamma(p^2) \sim p^4$ in the UV regime \cite{duarte2016}.
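A sketch of the origin of this growth: writing $\Gamma(p^2) \propto R(p^2)/\left( \left[D(p^2)\right]^2 D(0)\, p^2 \right)$, with $R$ the contracted correlation function, standard Gaussian propagation gives
\begin{displaymath}
\left(\frac{\Delta\Gamma}{\Gamma}\right)^{2}\simeq
\left(\frac{\Delta R}{R}\right)^{2}
+4\left(\frac{\Delta D(p^{2})}{D(p^{2})}\right)^{2}
+\left(\frac{\Delta D(0)}{D(0)}\right)^{2}.
\end{displaymath}
Since $D(p^2)\sim 1/p^2$ in the UV, while the absolute fluctuations of the measured quantities do not decrease accordingly, the relative error on $\Gamma(p^2)$ is strongly amplified at large momenta, consistent with the quoted $\Delta\Gamma(p^2) \sim p^4$ behaviour.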
\section{Handling of noise and lattice artefacts}
In order to deal with the large statistical fluctuations at high momenta, we considered a few strategies \cite{guitese}:
\begin{itemize}
\item explore the ambiguity in the scale setting and perform a binning in the momentum --- all data points in each bin are replaced by a weighted average of the data points (see the sketch after this list);
\item perform a $H(4)$ extrapolation of the lattice data \cite{becirevic1999, soto2009} --- such procedure is based on the remnant $H(4)$ symmetry group
associated with a hypercubic lattice. On the lattice, a scalar quantity $F$ is a function of the $H(4)$ invariants
\begin{displaymath}
p^2 = p^{[2]} = \sum_\mu p^2_\mu , \quad
p^{[4]} = \sum_\mu p^4_\mu , \quad
p^{[6]} = \sum_\mu p^6_\mu , \quad
p^{[8]} = \sum_\mu p^8_\mu ,
\end{displaymath}
i.e. $F_{Lat} = F(p^{[2]}, p^{[4]}, p^{[6]}, p^{[8]})$. The continuum limit will be given by $F(p^{[2]}, 0, 0, 0)$ up to corrections $\mathcal{O}(a^2)$. Having several data points for the same $p^2$ but different $p^{[4]}$, $p^{[6]}$, $p^{[8]}$, an extrapolation of $F_{Lat}$ to the continuum limit can be done, assuming that it can be written as a power series in the $H(4)$ invariants. Note that, in this work, only a linear extrapolation in $p^{[4]}$ is considered (see the sketch after this list).
\end{itemize}
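As a minimal sketch of the two procedures (the weights and fit form below are illustrative choices), the binned value may be taken as the inverse-variance weighted average
\begin{displaymath}
\bar F=\frac{\sum_i F_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}\,,\qquad
\sigma_{\bar F}^2=\Big(\sum_i 1/\sigma_i^2\Big)^{-1},
\end{displaymath}
over the data points $F_i \pm \sigma_i$ falling in a given momentum bin, while the linear $H(4)$ extrapolation amounts to fitting, at each fixed $p^{[2]}$ with at least two $H(4)$ orbits,
\begin{displaymath}
F_{Lat}\big(p^{[2]},p^{[4]}\big)=F\big(p^{[2]},0\big)+c\big(p^{[2]}\big)\,p^{[4]}
\end{displaymath}
and keeping the intercept $F\big(p^{[2]},0\big)$ as the estimate of the continuum-like value.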
\section{Lattice setup}
In this work we consider the $64^4$ ensemble of 2000 configurations already studied in \cite{duarte2016}, together with an $80^4$ ensemble of 1800 configurations, both generated with the Wilson gauge action at $\beta=6.0$. The rotation to the Landau gauge has been performed using the Fourier accelerated Steepest Descent method \cite{davies} implemented with the help of the Chroma \cite{chroma} and PFFT \cite{pfft} libraries. The gluon field is computed using the definition
\begin{equation}
a g_0 A_\mu (x + a \hat{e}_\mu) = \frac{ U_\mu (x) - U^\dagger_\mu (x)}{ 2 i}
- \frac{\mbox{Tr} \left[ U_\mu (x) - U^\dagger_\mu (x) \right]}{6 i}
\end{equation}
with the momentum space gluon field given by
\begin{equation}
A_\mu (\hat{p}) = \sum_x e^{- i \hat{p} (x + a \hat{e}_\mu) } \, A_\mu (x + a \hat{e}_\mu) \,\,,\,\, \hat{p}_\mu = \frac{2 \, \pi \, n_\mu}{a \, L_\mu}.
\end{equation}
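For orientation, with the lattice spacing used here, $a = 0.102$ fm, i.e. $a^{-1} \simeq 1.93$ GeV, the smallest non-vanishing momentum is
\begin{displaymath}
\hat{p}_{\min}=\frac{2\pi}{a L}\simeq 0.19 \mbox{ GeV for } L=64, \qquad \hat{p}_{\min}\simeq 0.15 \mbox{ GeV for } L=80 .
\end{displaymath}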
\section{Results}
In Figure \ref{binned} we compare the original and binned data for $\Gamma (p^2)$. The binning of the data suppresses the large statistical errors in the high momentum region and produces a well defined and smooth curve.
\begin{figure}[h]
\vspace{0.55cm}
\centering
\subfigure[$64^4$ lattice.]{ \includegraphics[width=0.42\textwidth]{plots/gamma_64x4.eps} \label{binn64}} \qquad
\subfigure[$80^4$ lattice.]{ \includegraphics[width=0.42\textwidth]{plots/gamma_80x4.eps} \label{binn80}}
\caption{Original and binned data for $\Gamma (p^2)$.}
\label{binned}
\end{figure}
Next, in Figure \ref{binnedoverp2} we compare the binned data for both lattices. The results of the two volumes agree within errors, suggesting that finite volume effects are small.
\begin{figure}[h]
\vspace{0.65cm}
\begin{center}
\includegraphics[width=0.6\textwidth]{plots/gamma_over_p2_compare.eps}
\end{center}
\caption{Comparison of binned data for $\Gamma (p^2)$.}
\label{binnedoverp2}
\end{figure}
In Figure \ref{H4extr} we compare the H(4) extrapolation of the $64^4$ lattice data with the binning of the original data. We observe that the H(4) extrapolation pushes the vertex to higher values in the high momentum regime. Nevertheless, in the infrared region, the extrapolated data is compatible with the original data, for both lattice volumes --- see Figure \ref{H4infra}.
\begin{figure}[h]
\vspace{0.65cm}
\begin{center}
\includegraphics[width=0.6\textwidth]{plots/all_gamma_over_p2_64_H4.eps}
\end{center}
\caption{Results of the H(4) extrapolation of $\Gamma (p^2)$ on the $64^4$ lattice volume.}
\label{H4extr}
\end{figure}
\begin{figure}[h]
\vspace{0.55cm}
\centering
\subfigure[$p^2 \Gamma(p^2)$.]{ \includegraphics[width=0.42\textwidth]{plots/all_gamma.eps} \label{H4infra-p2G}} \qquad
\subfigure[$\Gamma(p^2)$.]{ \includegraphics[width=0.42\textwidth]{plots/all_gamma_over_p2.eps} \label{H4infra-G}}
\caption{Original and H(4) data for both lattice volumes for low momenta.}
\label{H4infra}
\end{figure}
\section{Infrared behaviour of $\Gamma(p^2)$}
The lattice data reported here show no zero crossing of $\Gamma(p^2)$, which would be
an indication of ghost dominance in the infrared. In order to check for
a change of sign in $\Gamma(p^2)$, in this section we explore the infrared
behaviour of the lattice $\Gamma(p^2)$, using the $80^4$ data for momenta below 1~GeV, and fit the data to $\Gamma_1(p^2)=A + Z \ln(p^2)$ and $ \Gamma_2(p^2)=A + Z \ln(p^2+m^2)$. The first is a typical ansatz used in recent studies of the zero crossing, see \cite{guitese} for details, while the second is a variant of the first that includes an infrared logarithmic regularizing mass.
In Figure \ref{zerocrossing} we plot the best fits of the lattice data for both fitting functions, obtained through the minimization of $\chi^2/d.o.f.$
For $\Gamma_1(p^2)$, we obtained $\chi^2/d.o.f. = 1.23$ with $A=0.2395(16)$ and $Z=0.0646(21)$. Accordingly, the zero crossing occurs at $p_o=157$~MeV.
For $\Gamma_2(p^2)$, the parameters take the values $A=0.208(24)$, $Z=0.124(27)$, and $m=0.61(15)$~GeV, with $\chi^2/d.o.f. = 0.95$. As shown in the right plot of Figure \ref{zerocrossing}, in this case there is no zero crossing.
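Both statements can be checked directly from the fitted forms (with momenta in GeV):
\begin{displaymath}
\Gamma_1(p_o^2)=0 \;\Rightarrow\; p_o=e^{-A/(2Z)}\simeq 0.157 \mbox{ GeV},
\end{displaymath}
while, for the central values of the $\Gamma_2$ fit, $\Gamma_2(0)=A+Z\ln(m^2)\simeq 0.09>0$, so that the second ansatz remains positive all the way down to $p=0$.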
\begin{figure}[h]
\vspace{0.55cm}
\centering
\subfigure[$\Gamma (p^2) = A + Z \ln(p^2)$.]{ \includegraphics[width=0.42\textwidth]{plots/gamma80-fit1.eps} \label{zerocrossing-fit1}} \qquad
\subfigure[$\Gamma (p^2) = A + Z \ln(p^2+m^2)$.]{ \includegraphics[width=0.42\textwidth]{plots/gamma80-fit2.eps} \label{zerocrossing-fit2}}
\caption{Infrared $80^4$ lattice data for $\Gamma(p^2)$ together with some fitting functions. }
\label{zerocrossing}
\end{figure}
\section{Conclusions and outlook}
In this paper we describe an improved calculation of the three-gluon vertex on the lattice, for the asymmetric momentum configuration. We use two different lattice volumes, $(6.5$ fm$)^4$ and $(8.2$ fm$)^4$, with a common lattice spacing of $a = 0.102$ fm. We show that an H(4) extrapolation of the lattice data pushes the vertex to higher values in the UV regime. We proceed with a functional study in the infrared region, considering functional forms compatible with a zero crossing and an infrared divergence, or with a finite infrared limit.
Further momentum configurations will be explored in the near future.
\acknowledgments
This work was supported by national funds from FCT -- Fundação
para a Ciência e a Tecnologia, I.P., within the
Projects UIDB/04564/2020, UIDP/04564/2020, and CERN/FIS-COM/0029/2017.
G. T. R. C. acknowledges financial support from FCT
under Project UIDB/04564/2020, and also from the Generalitat Valenciana
(genT program CIDEGENT/2019/040) and Ministerio de Ciencia e
Innovacion PID2020-113644GB-I00.
P. J. S. acknowledges financial support from FCT
under Contract CEECIND/00488/2017.
This work was granted access to the HPC resources of
the PDC Center for High Performance Computing at the
KTH Royal Institute of Technology, Sweden, made
available within the Distributed European Computing
Initiative by the PRACE-2IP, receiving funding from the
European Community's Seventh Framework Programme
(FP7/2007-2013) under Grant agreement no. RI-283493.
The use of Lindgren has been provided under DECI-9
project COIMBRALATT. We acknowledge that the results
of this research have been achieved using the PRACE-3IP
project (FP7 RI312763) resource Sisu based in Finland at
CSC. The use of Sisu has been provided under DECI-12
project COIMBRALATT2. We acknowledge the
Laboratory for Advanced Computing at the University of
Coimbra \cite{lca} for providing access to the HPC resource
Navigator. The authors acknowledge Minho Advanced Computing Center
\cite{macc} for providing HPC resources that have contributed to
the research results reported within this paper. This work was
produced with the support of MACC and it was funded by FCT I.P.
under the Advanced Computing Project CPCA/A2/6816/2020, platform Bob.
This work was produced with the support of INCD \cite{incd} funded by FCT and
FEDER under the project 01/SAICT/2016 nº 022153.
\section{Introduction \label{sec:intro}}
The underlying spin-$\frac{1}{2}$ partonic structure of hadrons became
first manifest in the analysis of the deep inelastic
scattering~\cite{Callan:1969uq}. Actually, further understanding of
the partonic spin distributions can be gained by the study of the {\em
transversity distributions}~\cite{Barone:2001sp}. From this
viewpoint, generalized parton distributions (GPDs) \cite%
{Mueller:1998fv,Ji:1996ek,Radyushkin:1996nd} (for extensive reviews
see, e.g.,~\cite{Belitsky:2005qn,Feldmann:2007zz,Boffi:2007yc} and
references therein) encode a detailed information on the parton
structure of hadrons when analyzed at short distances. In the
impact-parameter space, the GPDs can be viewed as partonic
probabilities in the infinite-momentum frame distributed along the
longitudinal momentum fraction (Bjorken-x) and the transverse space
directions~\cite{Burkardt:2000za,Burkardt:2002hr}. It should be noted
that both GPDs as well as their partonic interpretation depend
strongly on the renormalization scale and it is not obvious {\it a
priori} what, if any, is the reference scale, which might have some
universal value and significance. From a dynamical point of view, the
choice of such a scale is crucial, as the high-energy modes are
integrated out in favor of an effective and yet unknown
non-perturbative low-energy dynamics. The renormalization group deals
with the intertwining of scales in principle, although in practice it
can be explored only at the lowest orders of the perturbation theory
in the running strong coupling constant. In addition, GPDs depend
also on the factorization scheme corresponding to the physical process
used to extract the partonic distributions at high energies.
From a purely theoretical point of view, the great difficulty in
determining the GPDs from first principles in QCD is related to their
genuine Minkowski-space nature, suggesting the application of light-cone
kinematics and non-perturbatively motivated approaches, such as the
transverse lattice~\cite{Burkardt:2001jg}, which so far has produced
encouraging but scarce results. More recently, however, the lowest
Bjorken-$x$ moments of the kinematically intricate GPDs, the so-called
Generalized Form Factors (GFFs), have become directly accessible to
Euclidean lattices in QCD at sufficiently short-distance resolution
scales (see, e.g., \cite{Musch:2010ka,Hagler:2009ni}). This is due to
the fact that GFFs for space-like momenta can be written as matrix
elements of local operators which can be directly extracted from the
asymptotics of the Euclidean correlation functions. As a further
simplification, the scale dependence of GFFs in the space-like region
undergoes a triangular-matrix multiplicative renormalization, which can
be easily implemented (see, e.g.,~\cite{Broniowski:2009zh}). A well
known feature of the QCD evolution is the loss of resolution at higher
energies, a property triggered by the existence of the asymptotic
ultraviolet fixed point, which enhances similarity at increasingly
high $Q^2$-values.
In this paper we analyze the quark \emph{transversity} generalized
parton distribution of the pion (tGPD), related to the matrix elements
of the bilocal tensor current operator $\bar{q}(x)\sigma _{\mu \nu
}q(0)$ (see Sec.~\ref{sec:def} and Refs.~\cite{Diehl:2005jf,Burkardt:2005hp} for
precise definitions). The transversity distribution, also termed the
\emph{maximal helicity} GPD, as it involves aligned parton-helicity
operators, provides insight into the nontrivial spin structure of the
hadron. For the spin-0 hadrons, tGPDs arise due to a nonzero orbital angular
momentum between the initial and final state, and thus offer a unique
opportunity to learn about the spin structure without the many
complications of the hadronic spin degrees of freedom, as is the case of the nucleon. Due to their
inherent complexity, tGPDs are the least investigated among the hadronic GPDs.
In this regard the study of the spin structure of the pion is
particularly appealing and challenging, although at present it is
unclear how it can be reliably extracted from the high-energy
experiments.
The recent lattice determination of the first two $X$-moments
of the pion tGPD, denoted as \emph{transversity} generalized form factors (tGFFs)~\cite{Brommel:2007xd}, provides
first important and non-trivial information on this
issue. The calculation was carried out at a lattice spacing of $a \sim 0.1 {\rm ~fm}$ and a pion mass $m_\pi \sim 600 {\rm ~MeV}$. For such a
small lattice spacing the matching to the perturbative $\overline{\rm MS}$ scheme becomes feasible and corresponds to
the scale $\mu \simeq 2 {\rm~GeV}$. This lattice calculation has triggered some related studies
focusing either on perturbative aspects of the high-$Q^{2}$ dependence
of the transversity form factors~\cite{Diehl:2010ru}, or non-perturbative issues studied within chiral quark
models~\cite{Broniowski:2010nt,Nam:2010pt}.
In this work we analyze the tGPD and the tGFFs of the pion for several
chiral quark models, extending the results presented previously~\cite{Broniowski:2010nt} and
providing further details. While this
unavoidably makes the paper a bit technical, we hope that many of the
details provided here show how a proper implementation of the chiral symmetry,
relativity, and normalization can be achieved in a non-perturbative model
calculation. This is particularly
interesting for the case of nonlocal models, where the mass function
depends on the momentum. Although such models are expected to feature chiral
quark dynamics more realistically, many complications arise due to the
time-like kinematics implied by the very definition of the GPDs. We recall
that we are effectively carrying out one-loop calculations, where some
variables are integrated out and some may be left unintegrated. Thus, special attention
must be paid to the treatment of the integrals, in particular to keeping
the Poincar\'e invariance explicit at every step of the calculation,
such that all results are mutually consistent.
Via sum rules, the (generalized) form factors are related to the GPDs~\cite%
{Ji:1998pc,Radyushkin:2000uy,Goeke:2001tz,Bakulev:2000eb,Diehl:2003ny,Ji:2004gf,Belitsky:2005qn,Feldmann:2007zz,Boffi:2007yc}%
. Experimentally, the GPDs of the pion constitute rather elusive
quantities which appear in rare exclusive processes, such as the
deeply virtual Compton scattering (DVCS) or the hard electro-production of
mesons (HMP).
Chiral quark models have proved to correctly describe numerous
features related to the vector GPD of the pion. The parton distribution
functions (PDF) have been evaluated in the Nambu--Jona-Lasinio (NJL)
model in Refs.~\cite%
{Davidson:1994uv,RuizArriola:2001rr,Davidson:2001cc}. The extension to
diagonal GPDs in the impact parameter space was carried out in \cite%
{Broniowski:2003rp}. Other analyses of the pionic GPDs and PDFs were
performed in nonlocal chiral quark models \cite%
{Dorokhov:1998up,Polyakov:1999gs,Dorokhov:2000gu,Anikin:2000th,Praszalowicz:2002ct,Praszalowicz:2003pr,Bzdak:2003qe,Holt:2010vj,Nguyen:2011jy}%
, in the NJL model \cite%
{Polyakov:1999gs,Theussl:2002xp,Bissey:2003yr,Noguera:2005cc,Broniowski:2007si}
and in the light-front constituent quark models~\cite%
{Frederico:2009pj,Frederico:2009fk}. The parton distribution
amplitudes, related to the GPD via a low-energy
theorem~\cite{Polyakov:1998ze}, were evaluated in~\cite%
{Esaibegian:1989uj,Dorokhov:1991nj,Petrov:1998kg,Anikin:1999cx,Praszalowicz:2001wy,Dorokhov:2002iu,RuizArriola:2002bp,RuizArriola:2002wr}%
. The gravitational form factors were computed in
\cite{Broniowski:2008hx}. Finally, the pion-photon transition
distribution amplitudes \cite%
{Pire:2004ie,Pire:2005ax,Lansberg:2006fv,Lansberg:2007bu} were
obtained in Refs.~\cite%
{Tiburzi:2005nj,Broniowski:2007fs,Courtoy:2007vy,Courtoy:2008af,Kotko:2008gy}%
.
Besides the phenomenological motivation, it is useful to briefly review
which aspects of the present investigation suggest the use of
chiral quark models within the present context (see, e.g.,
\cite{RuizArriola:2002wr}). Firstly, the pion, treated as a composite $q \bar
q$ state, becomes a Goldstone boson of the spontaneously broken chiral
symmetry. This of course requires the correct implementation of the
chiral Ward-Takahashi identities -- a rather non-trivial point, since
this condition is not automatically fulfilled in loop
calculations. At the quark level, this feature is compatible with the
large-$N_c$ scaling relations. Within such a scheme the pion loop
corrections are $1/N_c$-suppressed but chiral-log enhanced at small
pion masses. However, the leading-$N_c$ contributions present a much
milder pion-mass dependence, a favorable situation for the
unphysically large pion masses used on the
lattice~\cite{Brommel:2007xd}. Moreover, relativity for the GPDs is
properly implemented through the so-called polynomiality conditions,
and, more specifically, by the explicit use of the double
distributions (DDs). Finally, the scale at which a quark model
calculation is carried out can only be identified after a correct
separation of the momentum fraction carried by the quark degrees of
freedom. As mentioned already, the partonic properties depend on the
renormalization scale, and according to
phenomenology~\cite{Sutton:1991ay,Gluck:1999xe} as well as independent
lattice calculations~\cite{Best:1997qp}, the (valence) quarks carry about
40\% of the total momentum at the scale $\mu = 2~{\rm GeV}$. In
effective quark models, where the quarks carry 100\% of the total
momentum, the perturbative scale is unexpectedly and rather
uncomfortably low. However, this assumption has been tested at higher
orders and confronted with a variety of
high-energy data and lattice calculations. In the present calculation
of the transversity form factors we find again agreement with the data
after the QCD evolution scheme is implemented, starting from a low
quark-model scale.
GPDs in general, and tGPDs in particular, are subject to a set of conditions {\it
a priori} imposed by symmetries and/or completeness, namely, the chiral
symmetry, relativity, positivity, and finiteness of sum rules. Within
the framework of low energy chiral quark models, where there is an
inherent cut-off marking the low energy regime, these conditions are
actually not easy to fulfill on purely mathematical grounds. Indeed,
one-loop integrals are four dimensional, whereas GPDs leave two
integration variables unintegrated and hence some consistency is
required. However, once this difficulty is mastered, which is the case
of our approach, there is a trend toward independence of the details of the model.
This independence is largely enhanced
{\it after} the QCD evolution, since differences are washed out at
increasingly higher energy scales. This feature is also observed
in the study of transversity, where it makes the differences between
various chiral quark models rather small.
We apply the local NJL model with the Pauli-Villars regularization, as well as two
variants of the nonlocal chiral quark models inspired by the
nontrivial structure of the QCD vacuum~\cite%
{Diakonov:1985eg,Holdom:1990iq}. These models provide the results at
the quark-model scale. After the necessary (multiplicative) QCD
evolution~\cite{Broniowski:2007si}, our model results are in a quite
remarkable agreement with the lattice data for tGFFs. Lower values of
the constituent quark mass, $\sim 250$~MeV, are preferred.
The outline of the paper is as follows: In Sec.~\ref{sec:basics} we
give the general definitions of the pion tGPD and tGFFs. Then we
derive these quantities in the nonlocal chiral quark models from the
triangle diagram in Sec.~\ref{sec:mod}. By using the extremely convenient
$\alpha $-representation, we obtain the corresponding expressions for the tGFFs
in the momentum- and impact-parameter spaces, the tGPDs for the isosinglet and
isovector channels, and also, in special forward and symmetric
kinematics, the distribution of the transversity size of the pion. The analysis is carried out
explicitly for specific nonlocal models in Sec.~\ref{sec:specific}.
For numerical estimates of these quantities we use two variants of the
chiral nonlocal models and the local NJL model. In Sec.~\ref{sec:evol}
we present the QCD evolution of the above quantities in general, as well as show
its consequences for the studied models. Numerical results for
the transversity distribution functions after evolution are shown in
Sec.~\ref{sec:res}. Finally, in Sec.~\ref{sec:concl} we draw our
main conclusions.
\section{Basic definitions of the transversity form factors and generalized parton distribution \label{sec:def}\label{sec:basics}}
In this section we provide the basic definitions as well as the
kinematics of the transversity observables analyzed in the present
work.
The pion $u$-quark tGFFs, $B_{Tni}^{\pi ,u}\left( t\right) $, parametrize the
matrix element
\begin{align}
& \left\langle \pi ^{+}\left( p^{\prime }\right) \left\vert O_{T}^{\mu \nu
\mu _{1}\cdots \mu _{n-1}}\right\vert \pi ^{+}\left( p\right) \right\rangle =%
\mathcal{TAS}\frac{P^{\mu }q^{\nu }}{m_{\pi }}
\nonumber \\
& \times \sum_{\substack{ i=0, \\ \mathrm{even}}}^{n-1}q^{\mu _{1}}...q^{\mu
_{i}}P^{\mu _{i+1}}...P^{\mu _{n-1}}B_{Tni}^{\pi ,u}\left( t\right) ,
\label{PionME}
\end{align}%
where the local tensor quark operator is%
\begin{align}
& \mathcal{O}_{T}^{\mu \nu \mu _{1}\cdots \mu _{n-1}} \label{TensorOp} \\
& =\mathcal{T}\underset{\left( \mu \nu \right) }{\mathcal{A}}\underset{%
\left( \mu _{1}\cdots \mu _{n-1}\right) }{\mathcal{S}}\overline{u}\left(
0\right) i\sigma ^{\mu \nu }i\overleftrightarrow{D}^{\mu _{1}}\cdot \cdot
\cdot i\overleftrightarrow{D}^{\mu _{n-1}}u\left( 0\right) , \notag
\end{align}%
with $\overleftrightarrow{D}^{\beta }=\overleftrightarrow{\partial }^{\beta
}-igA^{\beta }$ being the QCD covariant derivative, and $\overleftrightarrow{%
\partial }^{\beta }=\frac{1}{2}\left( \overrightarrow{\partial }^{\beta }-%
\overleftarrow{\partial }^{\beta }\right) $. In Eq.~(\ref{PionME}), $p^{\prime }$
and $p$ are the initial and final pion momenta, while $P=\frac{1}{2}(p^{\prime
}+p) $, $q=p^{\prime }-p$, and $t=-q^{2}.$ The symbol $\mathcal{TAS}$
denotes symmetrization ($\mathcal{S}$) in $\nu ,\mu _{1},\ldots ,\mu _{n-1}$%
, followed by antisymmetrization ($\mathcal{A}$) in $\mu ,\nu $, with the
additional prescription that the traces in all index pairs are subtracted ($%
\mathcal{T}$). The factor $1/m_{\pi }$ is introduced by convention in order
to have dimensionless form factors~\cite{Brommel:2007xd}. Also, as in~\cite%
{Brommel:2007xd}, we use the positively charged pion and the
up-quark density for definiteness.
The above definition, which projects on twist-2 operators, can be
implemented in a simple and manifestly covariant way (see, e.g., \cite{Diehl:2010ru}) by a contraction
with two constant auxiliary four-vectors, $a$ and $b$, satisfying $a^{2}=(ab)=0$ and $%
b^{2}\neq 0$. The tGFFs are then defined via%
\begin{align}
& M_{Tn}^{\pi ,u}\left( \xi ,t\right) \label{PionTme} \\
& =\left\langle \pi ^{+}\left( p^{\prime }\right) \left\vert \overline{u}%
\left( 0\right) i\sigma ^{\mu \nu }a_{\mu }b_{\nu }\left( i%
\overleftrightarrow{D}a\right) ^{n-1}u\left( 0\right) \right\vert \pi
^{+}\left( p\right) \right\rangle \notag \\
& =\left( aP\right) ^{n-1}\frac{\left[ \left( ap\right) \left( bp^{\prime
}\right) \right] }{m_{\pi }}\sum_{\substack{ i=0, \\ \mathrm{even}}}%
^{n-1}\left( 2\xi \right) ^{i}B_{Tni}^{\pi ,u}\left( t\right) , \notag
\end{align}%
where the skewness parameter is defined as\footnote{Throughout this work we use the so-called symmetric notation.}
\begin{equation}
\xi =-\frac{\left( aq\right) }{2\left( aP\right) }, \label{PqKsi}
\end{equation}%
$\xi \in [0,1]$, and $(aq)$, etc., denote the scalar products of four-vectors.
In Eq.~(\ref{PionTme}), $\left[ ...\right] $ denotes the antisymmetrization in $a$
and $b$.
The tGFFs defined in (\ref{PionTme}) refer to the $u$-quarks; those for the $d$%
-quarks follow from the isospin symmetry and read%
\begin{equation}
B_{Tni}^{\pi ,d}\left( t\right) =\left( -1\right) ^{n}B_{Tni}^{\pi ,u}\left(
t\right) . \label{PionTmeD}
\end{equation}
The definition of the corresponding tGPD is \cite{Belitsky:2005qn}
\begin{eqnarray}
&& \langle \pi ^{+}(p^{\prime })\mid \bar{u}(-a)i\sigma ^{\mu \nu }a_{\mu}b_{\nu }u(a)\mid \pi ^{+}(p)\rangle \nonumber \\
&& =\frac{\left[ \left( ap\right) \left( bp^{\prime }\right) \right] }
{m_{\pi }}\int_{-1}^{1}dX \,e^{-i X\left( Pa\right) }E_{T}^{\pi, u}(X,\xi ,t), \label{PionTGPD}
\end{eqnarray}
where we do not display explicitly the gauge link factor.
The tGFFs can be written as the Mellin moments of tGPD of the pion as%
\begin{equation}
\int_{-1}^{1}dX\,X^{n-1}E_{T}^{\pi, u}\left( X,\xi ,t\right)
=\sum_{\substack{ i=0, \\ \mathrm{even}}}^{n-1}\left( 2\xi \right)^{i}B_{Tni}^{\pi ,u}\left( t\right). \label{En}
\end{equation}
\section{Chiral quark models \label{sec:mod}}
In this section we review the generic one-loop features of chiral
quark models, where the quark self-energy as well as the interaction
vertices are assumed to have a fairly general momentum dependence to
be specified later on. We derive general expressions for the tGPD at the
one-quark-loop level, applicable to both nonlocal and local models.
We also display formal properties of the tGPD in our approach.
\subsection{Nonlocal chiral quark models \label{sec:inst}}
In the quark-model calculation in the large-$N_c$ limit the matrix element (\ref{PionTme}) is given by the triangle
diagram shown in Fig.~\ref{fig:tri}\footnote{%
We should emphasize at this point that the tensor matrix element (\ref%
{PionTme}) cannot be induced by tadpole-type diagrams. This is evident,
because these diagrams depend only on the single external vector $q$, from which it is
impossible to construct the antisymmetric combination entering the matrix
element (\ref{PionTme}). In this respect, the results obtained in \cite%
{Nam:2010pt} cannot be correct.}. To calculate this diagram we employ
the manifestly covariant method based on the effective approach to nonperturbative
QCD dynamics. All expressions will be computed in the Euclidean space, appropriate
for the process under consideration and, in general, for the treatment of nonperturbative
physics. The nonperturbative quark propagator, dressed by the interaction
with the QCD vacuum, is assumed to have the form%
\begin{equation}
S\left( k\right) =\frac{\widehat{k}+m\left( k^{2}\right) }{D\left(
k^{2}\right) }. \label{Qprop}
\end{equation}%
The main requirement imposed on the quark propagator is that at large quark
virtualities one recovers the perturbative limit,
\begin{equation}
S\left( k\right) \overset{k^{2}\rightarrow \infty }{\rightarrow }\frac{%
\widehat{k}}{k^{2}}. \label{QpropAS}
\end{equation}%
It is also assumed that the dynamical quark mass, $m(k^2)$, is a function rapidly
dropping with the quark virtuality $k^{2}$. It is normalized at zero as%
\begin{equation}
m\left( 0\right) =M_{q},\qquad D\left( 0\right) =M_{q}^{2}. \label{M0}
\end{equation}%
We also need the quark-pion vertex\footnote{In this work we use the dominant (in the spontaneous symmetry-breaking mechanism)
structures for the quark propagator
and the quark-pion vertex. More general structures are used in the Schwinger-Dyson approach \cite{Maris:1997hd}.}
\begin{equation}
\Gamma _{\pi }^{a}\left( k,q\right) =\frac{i}{f_{\pi }}\gamma _{5}\tau
^{a}F\left( k_{+}^{2},k_{-}^{2}\right) , \label{QPiVert}
\end{equation}%
where $k_{\pm }=k\pm q/2$. The nonlocal vertex $F\left(
k_{+}^{2},k_{-}^{2}\right) $ is a symmetric function of its arguments,
normalized to $F\left( k^{2},k^{2}\right) = m\left( k^{2}\right) $. In the
present study, the nonlocal model calculations are performed in the strict chiral
limit, which means that $m\left( k^{2}\rightarrow \infty \right) =0$.
\subsection{Calculation of the triangle diagram \label{sec:triangle}}
Within the described approach the triangle diagram for the matrix element (\ref%
{PionTme}) yields
\begin{align}
& M_{Tn}\left( \xi ,t\right) =\frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}\int \frac{%
d^{4}k}{\pi ^{2}}F\left( k_{+}^{2},k_{-}^{2}\right) F\left(
k_{3}^{2},k_{-}^{2}\right) \\
& \frac{1}{4}Tr\left\{ S\left( k_{+}\right) \gamma _{5}S\left( k_{-}\right)
\gamma _{5}S\left( k_{3}\right) \sigma _{\mu \nu }\right\} \left( \frac{%
k_{+}+k_{3}}{2},a\right) ^{n-1} \!\!\!\!\! a_{\mu }b_{\nu }, \notag
\end{align}%
where $k_{+}=k$ is the initial momentum of the struck quark, $k_{3}=k_{+}+q$ is its final
momentum, $k_{-}=k_{+}-p$ is the momentum of the spectator quark (cf. Fig.~\ref{fig:tri}), and the covariant
average momentum $(k_{+}+k_{3})/2$ corresponds to the
derivative in the definition~(\ref{PionTme}).
\begin{figure}[tb]
\includegraphics[width=.3\textwidth]{TriangleBDR11.eps} \vspace{-2mm}
\caption{(Color online) The leading-$N_{c}$ one-quark-loop triangle diagram
contribution to the leading twist tGPD of the pion.
\label{fig:tri}}
\end{figure}
After taking the trace one has%
\begin{align}
& M_{Tn}\left( \xi ,t\right) =\frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}\int \frac{%
d^{4}k}{\pi ^{2}}\frac{F\left( k_{+}^{2},k_{-}^{2}\right) F\left(
k_{3}^{2},k_{-}^{2}\right) }{D\left( k_{+}^{2}\right) D\left(
k_{-}^{2}\right) D\left( k_{3}^{2}\right) } \label{MT} \\
& \times \left\{ m\left( k_{+}^{2}\right) \left[ \left( k_{-}a\right) \left(
k_{3}b\right) \right] -m\left( k_{-}^{2}\right) \left[ \left( k_{+}a\right)
\left( k_{3}b\right) \right] \right. \notag \\
& \left. +m\left( k_{3}^{2}\right) \left[ \left( k_{+}a\right) \left(
k_{-}b\right) \right] \right\} \left( \frac{k_{+}+k_{3}}{2},a\right) ^{n-1},
\notag
\end{align}%
where the antisymmetrization in $a$ and $b$ is implied. Considering the crossed channel
it is easy to get the relation
\begin{align}
& \left( \left\{ ...\right\} \left( \frac{k_{+}+k_{3}}{2},a\right)
^{n-1}\right) _{d\mathrm{-channel}} \\
& \rightarrow \left( -1\right) ^{n}\left( \left\{ ...\right\} \left( \frac{%
k_{+}+k_{3}}{2},a\right) ^{n-1}\right) _{u\mathrm{-channel}}, \notag
\end{align}%
in agreement with (\ref{PionTmeD}).
For the further analysis, it is very convenient to transform the integral in (\ref{MT}%
) into the $\alpha $-representation (see \cite{Bogolyubov:1980,Zavialov:1990}%
), which is one of the basic methods for the study of hard processes in
perturbative QCD \cite{Radyushkin:1997ki}, as well as in nonperturbative
quark models \cite{Dorokhov:2000gu}. The technical advantage of this method is the
explicit maintenance of the Lorentz covariance.
Let us define for any function $F$ of virtuality $k^{2}$, decaying at large
virtuality as $1/k^{2}$ or faster, its $\alpha $ representation (Laplace
transform)
\begin{equation}
F\left( k^{2}\right) =\int_{0}^{\infty }d\alpha \, e^{-\alpha k^{2}}f\left(
\alpha \right)
\label{AlphaDef}
\end{equation}%
where $F\left( k^{2}\right)$ is the image of the original function $f\left( \alpha \right)$. We will use the short-hand $F\left( k^{2}\right) \sim f\left( \alpha \right)$.
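As an elementary example of such a pair, a monopole image corresponds to an exponential original,
\begin{displaymath}
\frac{1}{k^{2}+M^{2}}=\int_{0}^{\infty }d\alpha \,e^{-\alpha \left( k^{2}+M^{2}\right) },\qquad \mathrm{i.e.}\quad \frac{1}{k^{2}+M^{2}}\sim e^{-\alpha M^{2}}.
\end{displaymath}
Let us introduce the following notation \cite{Dorokhov:2010bz,Dorokhov:2010zzb}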
\begin{align}
& \frac{F(k_{+}^{2},k_{-}^{2})F\left( k_{3}^{2},k_{-}^{2}\right) }{D\left(
k_{+}^{2}\right) D\left( k_{-}^{2}\right) D\left( k_{3}^{2}\right) }m\left(
k_{+}^{2}\right) \sim G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) ,\quad
\notag \\
& \frac{F(k_{+}^{2},k_{-}^{2})F\left( k_{3}^{2},k_{-}^{2}\right) }{D\left(
k_{+}^{2}\right) D\left( k_{-}^{2}\right) D\left( k_{3}^{2}\right) }m\left(
k_{-}^{2}\right) \sim G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) , \notag
\\
\quad & \frac{F(k_{+}^{2},k_{-}^{2})F\left( k_{3}^{2},k_{-}^{2}\right) }{%
D\left( k_{+}^{2}\right) D\left( k_{-}^{2}\right) D\left( k_{3}^{2}\right) }%
m\left( k_{3}^{2}\right) \sim G_{0,0,m}\left( \alpha ,\beta ,\gamma \right),
\end{align}%
where the triple $\alpha $ representation (i.e. in parameters $\alpha$,
$\beta$, and $\gamma$) is applied (see Fig. \ref{fig:tri}). With this notation the
momentum integral in Eq.~(\ref{MT}) is transformed into the
$\alpha$-representation expression for the matrix element,
\begin{align}
& M_{Tn}\left( \xi ,t\right) =\left( aP\right) ^{n-1}\left[ \left( ap\right)
\left( bp^{\prime }\right) \right] \frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}} \label{MTn}
\times \\ & \int \frac{d\left( \alpha \beta \gamma
\right) }{\Delta ^{3}}e^{-\frac{1}{\Delta }\left( \alpha \gamma t-\beta
\left( \alpha +\gamma \right) m_{\pi }^{2}\right) }\left( \frac{\beta
+\left( \gamma -\alpha \right) \xi }{\Delta }\right) ^{n-1} \notag
\times \\ & \left[ \alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) \! + \!\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) \! + \! \gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) \right] , \notag
\end{align}%
where $\Delta =\alpha +\beta +\gamma $ and
\begin{eqnarray}
\int d\left( \alpha \beta \gamma
\right) ...=\int_{0}^{\infty }d\alpha \int_{0}^{\infty }d\beta
\int_{0}^{\infty }d\gamma ...
\end{eqnarray}
The only dependence on $\xi$ in Eq.~(\ref{MTn}) appears in the polynomial factor in the second line.
It is clear that in the expansion of this polynomial in
powers of $\xi $ only the even powers survive, in accordance with Eq.~(\ref{PionTme}%
), since for the odd powers of $\xi $ the integrand is antisymmetric in $\alpha $
and $\gamma $. Thus the polynomiality property of Eq.~(\ref{En}), namely that the
$X^{n-1}$ moment of $E_{T}^{\pi }\left( X,\xi
,t\right) $ is an even polynomial in $\xi $ of degree not higher than $n-1$, is
immediately evident within our approach.
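As a simple illustration, for $n=3$ Eq.~(\ref{En}) reduces to
\begin{displaymath}
\int_{-1}^{1}dX\,X^{2}\,E_{T}^{\pi ,u}\left( X,\xi ,t\right) =B_{T30}^{\pi ,u}\left( t\right) +\left( 2\xi \right) ^{2}B_{T32}^{\pi ,u}\left( t\right) ,
\end{displaymath}
an even polynomial in $\xi $ of degree $n-1=2$, as required.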
\subsection{Transversity pion form factors in momentum- and impact-parameter spaces \label{sec:tff}}
From representation (\ref{MTn}), by using the definition of the tGFFs (\ref%
{PionTme}), one gets\footnote{%
In the following we will explore the strict chiral limit of $m_{\pi }=0.$}
\begin{align}
& B_{Tni}^{u}\left( t\right) =\frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}\frac{%
\left( n-1\right) !}{i!\left( n-1-i\right) !}\int \frac{d\left( \alpha \beta
\gamma \right) }{\Delta ^{n+2}}e^{-\frac{\alpha \gamma }{\Delta }t}
\label{BTn} \\
& \left[ 2\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) \right] \beta ^{n-1-i}\left(
\frac{\gamma -\alpha }{2}\right) ^{i}\!\!, \notag
\end{align}%
where $i=0,2,...\leq n-1$, and the symmetry properties under the interchange of $\alpha $ and $\gamma $
have been used. The transverse (impact-parameter) space representation is obtained, by definition,
after a 2D Fourier-Bessel transformation,
\begin{equation}
F\left( b_{\perp }^{2}\right) =\int \frac{d^{2}q_{\perp }}{\left( 2\pi
\right) ^{2}}e^{-i(b_{\perp }q_{\perp })}F\left( t=q_{\perp }^{2}\right).
\label{2dFourier}
\end{equation}%
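The passage to the impact-parameter space uses only the elementary two-dimensional Gaussian transform,
\begin{displaymath}
\int \frac{d^{2}q_{\perp }}{\left( 2\pi \right) ^{2}}\,e^{-i(b_{\perp }q_{\perp })}\,e^{-c\,q_{\perp }^{2}}=\frac{1}{4\pi c}\,e^{-b_{\perp }^{2}/(4c)},
\end{displaymath}
applied under the $\alpha $ integrals of Eq.~(\ref{BTn}) with $c=\alpha \gamma /\Delta $.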
We then get for even $i$ the expression
\begin{align}
& B_{Tni}^{u}\left( b_{\perp }^{2}\right) =\frac{N_{c}}{16\pi ^{3}f_{\pi
}^{2}}\frac{\left( n-1\right) !}{i!\left( n-1-i\right) !}\int \frac{d\left(
\alpha \beta \gamma \right) }{\alpha \gamma \Delta ^{n+1}}e^{-\frac{\Delta }{%
\alpha \gamma }\frac{b_{\perp }^{2}}{4}} \notag \\
& \left[ 2\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) \right] \beta ^{n-1-i}\left(
\frac{\gamma -\alpha }{2}\right) ^{i}\!\!. \label{BTnTr}
\end{align}
\subsection{Pion transversity Generalized Parton Distribution \label{sec:tgpd}}
Through the use of the definition of the tGPD in Eq.~(\ref{En}) we arrive at the formula
\begin{align}
& E_{T}^{\pi }\left( X,\xi ,t\right) =\frac{N_{c}}{4\pi ^{2}f_{\pi
}^{2}}\int \frac{d\left( \alpha \beta \gamma \right) }{\Delta ^{3}}e^{-\frac{%
\alpha \gamma }{\Delta }t} \times \label{ETn} \\
& \left[ \alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) +\gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) \right] \notag \\
& \times \delta \left( X-\frac{\beta +\left( \gamma -\alpha \right) \xi }{%
\Delta }\right) , \notag \\
& -1<X=\frac{\beta +\left( \gamma -\alpha \right) \xi }{\Delta }<1.
\notag
\end{align}%
Let us integrate over the $\beta $ parameter, corresponding to the spectator
quark. From the $\delta $ function we resolve $\beta $ as%
\begin{equation}
\beta =\frac{ \left( X+\xi \right) \alpha +\left( X-\xi \right) \gamma }{1-X} \label{beta}
\end{equation}%
and apply the positivity conditions for $\alpha$, $\beta$, and $\gamma$.
At fixed $\xi \in \left[ 0,1\right] $
and $X \in \left[ -1,1\right] $ one has three distinct regions:
\begin{align*}
& \mathrm{I}.\quad \xi <X<1,\quad \mathrm{where}\quad X+\xi >0,\; X-\xi >0, \\
& \mathrm{II}.\quad -\xi <X<\xi ,\quad \mathrm{where}\quad X+\xi >0,\; X-\xi <0, \\
& \mathrm{III}.\quad -1<X<-\xi ,\quad \mathrm{where}\quad X+\xi <0,\; X-\xi <0.
\end{align*}%
In region I $\beta $ is positive without any limitations. In region
III all coefficients in Eq.~(\ref{beta}) are negative, hence the support of the integrand
has zero measure and the integral in Eq.~(\ref{ETn}) equals zero. In
the central region II the coefficient of $\alpha $ in Eq.~(\ref{beta}) is
positive and the coefficient of $\gamma $ is negative, thus one has
the limitation $\alpha >\gamma \frac{\xi -X}{\xi +X}$. Finally, the
total result may be combined as%
\begin{align}
& E_{T}^{\pi }\left( X,\xi ,t\right) =\Theta \left( X+\xi
\right) \frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}\int_{0}^{\infty } \!\!\!\!\! d\gamma
\int_{{\rm max}\left\{ 0,\gamma \frac{\xi -X}{\xi +X}\right\}
}^{\infty } \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! d\alpha \, e^{-\frac{\alpha \gamma }{\Delta }t} \times \notag \\
& \frac{\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) +\gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) }{\Delta ^{2}\left( 1-X\right) }, \label{ETnM}
\end{align}%
where $\Theta\left( x\right) $ is the step function, $\beta $ is given by Eq.~(\ref{beta}), and
$\Delta =\left[ \alpha +\gamma +\xi \left( \alpha -\gamma \right) \right]/({1-X})$.
The isovector and isosinglet tGPDs of the pion are obtained as the symmetric and antisymmetric
combinations,%
\begin{align}
E_{T}^{\pi ,I=1}\left( X,\xi ,Q^{2}\right) & \equiv E_{T}^{\pi ,S}\left( X,\xi ,Q^{2}\right) \notag \\ &= E_{T}^{\pi }\left(
X,\xi ,Q^{2}\right) +E_{T}^{\pi }\left( -X,\xi,Q^{2}\right) , \notag \\
E_{T}^{\pi ,I=0}\left( X,\xi ,Q^{2}\right) & \equiv E_{T}^{\pi ,A}\left( X,\xi ,Q^{2}\right) \notag \\ &= E_{T}^{\pi }\left(
X,\xi ,Q^{2}\right) -E_{T}^{\pi }\left( -X,\xi,Q^{2}\right) . \label{ETnI01}
\end{align}
The support of $E_{T}^{\pi ,I=0,1}$ is $-1 \le X \le 1$. The significance of the isospin combinations comes from the fact that
they evolve autonomously with the renormalization scale, see Sec.~\ref{sec:evol}.
\subsection{Special kinematics: $\xi =0$ and $\xi=X$ cases \label{sec:special}}
Two special kinematic cases are of particular interest. For the case $\xi =0$ (tPDF) we have
\begin{align}
& E_{T}^{\pi }\left( X,\xi =0,t\right) =\Theta \left( X\right) \frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}\int_{0}^{\infty }d\left( \alpha
\gamma \right) e^{-\frac{\alpha \gamma }{\Delta }t} \times \notag \\
& \frac{2\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) }{\Delta ^{2}\left( 1-X\right) }, \label{ETnMF}
\end{align}%
where $\beta =\left( \alpha +\gamma \right) \frac{X}{1-X}$ and
$\Delta =\left( \alpha +\gamma \right) \frac{1}{1-X}$. Note that in general
the first term in the numerator dominates in the small $X$ region, while the second one
is more important in the region of large $X$.
For the border case, $\xi=X$, we find
\begin{align}
& E_{T}^{\pi }\left( X,\xi=X,t\right) =\Theta\left(
X\right) \frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}\int_{0}^{\infty
}d\left( \alpha \gamma \right) e^{-\frac{\alpha \gamma }{\Delta }t}
\times \notag \\
& \frac{\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) +\gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) }{\Delta ^{2}\left( 1-X\right) }, \label{ETnMD}
\end{align}%
with $\beta =2\alpha \frac{X}{1-X}$ and $\Delta =\left[ \alpha
+\gamma +X\left( \alpha -\gamma \right) \right] \frac{1}{1-X}$.
\subsection{Double Distribution \label{sec:DD}}
Some symmetry properties of the GPDs are more transparent when they
are constructed from the double distributions (DDs) \cite%
{Mueller:1998fv,Radyushkin:1996nd,Radyushkin:2011dh}. Actually, the relativistic invariance
exhibited by the polynomiality conditions is manifestly built into this approach (see,
e.g., Ref.~\cite{BAG}). To pass to double distributions, we first make the substitution (see, e.g., \cite{Radyushkin:2011dh}) $%
\alpha =x_{1}L,\beta =x_{2}L,\gamma =x_{3}L$ in Eq.~(\ref{ETn}) and obtain%
\begin{align}
& E_{T}^{\pi }\left( X,\xi ,t\right) =\frac{N_{c}}{4\pi ^{2}f_{\pi
}^{2}}\int_{0}^{\infty }dL\int_{0}^{1}dx_{1}dx_{2}dx_{3}e^{-x_{1}x_{3}t} \times \notag \\
& \delta \left( 1-x_{1}-x_{2}-x_{3}\right) \delta \left( x-x_{2}-\left(
x_{3}-x_{1}\right) \xi \right) \times \notag \\
& \left[ x_{1}G_{m,0,0}\left( x_{1}L,x_{2}L,x_{3}L\right)
+x_{2}G_{0,m,0}\left( x_{1}L,x_{2}L,x_{3}L\right) \right. \notag \\
& \left. +x_{3}G_{0,0,m}\left( x_{1}L,x_{2}L,x_{3}L\right) \right] .
\end{align}%
To recover the DD representation we further make the replacement $%
x_{2}=b,x_{3}-x_{1}=a$ and arrive at%
\begin{equation}
E_{T}^{\pi }\left( X,\xi ,t\right)
=\int_{0}^{1}db\int_{-1+b}^{1-b}da\delta \left( X-b-a\xi \right)
f_{T}^{\pi }\left( a,b,t\right) , \label{ETnDD}
\end{equation}%
with the DD identified as%
\begin{align}
& f_{T}^{\pi }\left( a,b,t\right) =\frac{N_{c}}{4\pi ^{2}f_{\pi }^{2}}%
\int_{0}^{\infty }dL\,e^{-x_{1}x_{3}t} \times \label{DD} \\
& \left[ x_{1}G_{m,0,0}\left( x_{1}L,bL,x_{3}L\right) +bG_{0,m,0}\left(
x_{1}L,bL,x_{3}L\right) \right. \notag \\
& \left. +x_{3}G_{0,0,m}\left( x_{1}L,bL,x_{3}L\right) \right] . \notag
\end{align}%
Here $x_{1}=\frac{1}{2}\left( 1-b-a\right)$ and $x_{3}=\frac{1}{2}\left(
1-b+a\right) $. In the above expressions the parameter $b$ is
non-negative; the $b<0$ part of the DD comes from the crossed diagram.
Sometimes it is also convenient to separate the so-called D-term, defined as%
\begin{equation}
D\left( b,t\right) =\int_{-1+b}^{1-b}daf_{T}^{\pi}\left( a,b,t\right) .
\label{Dterm}
\end{equation}
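The reduction of Eq.~(\ref{ETnDD}) to a one-dimensional integral is straightforward to implement numerically. The following Python sketch uses an illustrative toy profile for the DD (not the model expression of Eq.~(\ref{DD}), which involves the functions $G$); the $\delta$ function fixes $a=(X-b)/\xi$, and the lowest $X$-moment of the resulting tGPD comes out independent of $\xi$, as dictated by polynomiality:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f_toy(a, b):
    # illustrative toy DD on the rhombus |a| <= 1-b, 0 <= b <= 1
    return (1.0 - b)**2 - a**2

def E_T(X, xi):
    # Eq. (ETnDD); the delta function fixes a = (X - b)/xi
    if xi == 0.0:
        if not 0.0 <= X <= 1.0:
            return 0.0
        lim = 1.0 - X
        return quad(lambda a: f_toy(a, X), -lim, lim)[0]
    def integrand(b):
        a = (X - b) / xi
        return f_toy(a, b) / xi if abs(a) <= 1.0 - b else 0.0
    return quad(integrand, 0.0, 1.0, limit=200)[0]

for xi in (0.1, 0.3, 0.5):
    m0 = quad(lambda X: E_T(X, xi), -1.0, 1.0, limit=200)[0]
    print(f"xi={xi}: 0th X-moment = {m0:.4f}")  # = 1/3, xi-independent
\end{verbatim}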
\subsection{The $b_{\perp}$ space and the transverse pion size \label{sec:bperp}}
Let us now consider tGPD in the transverse coordinate space, $b_{\perp}$. By using the
2D Fourier-Bessel transform of Eq.~(\ref{2dFourier}) one easily gets%
\begin{align}
& E_{T}^{\pi }\left( X,\xi ,b_{\perp }^{2}\right) =\Theta \left(
X+\xi \right) \frac{N_{c}}{16\pi ^{3}f_{\pi }^{2}}
\int_{0}^{\infty } \!\!\!\!\! d\gamma \int_{{\rm max}\left\{ 0,\gamma \frac{\xi -X}{%
\xi +X}\right\} }^{\infty } \hspace{-1.75cm} d\alpha \, e^{-\frac{\Delta }{\alpha \gamma
}\frac{b_{\perp }^{2}}{4}} \times \notag \\
& \frac{\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) +\gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) }{\Delta \alpha \gamma \left( 1-X\right) },
\end{align}%
where the value of the parameter $\beta $ is given by Eq.~(\ref{beta}) and \mbox{$\Delta =\left[ \alpha +\gamma +\xi
\left( \alpha -\gamma \right) \right] \frac{1}{1-X}$}.
In the zero longitudinal momentum transfer limit, $\xi \rightarrow 0$,
one obtains the so-called 3D transverse parton distribution%
\begin{equation}
f_{T}^{\pi }\left( X,b_{\perp }\right) =E_{T}^{\pi }\left( X,\xi \rightarrow 0,b_{\perp }^{2}\right) .
\end{equation}%
Following \cite{Perevalova:2011qi} one can also introduce the normalized
quark probability density in the transverse plane,%
\begin{equation}
\rho _{T}^{\pi }\left( X,b_{\perp }\right) =\frac{f_{T}^{\pi
}\left( X,b_{\perp }\right) }{f_{T}^{\pi }\left( X\right) }%
, \label{QPD}
\end{equation}%
where
\begin{eqnarray}
f_{T}^\pi \left( X\right) \equiv E_{T}^{\pi }\left( X,\xi =0,t=0\right),
\label{fTpi}\end{eqnarray}
as defined in Eq.~(\ref{ETnMF}). The partons with the
longitudinal momentum fraction $X$ occupy within the hadron a disc of
the average transverse radius squared given by
\begin{equation}
b_{\perp }^{2}\left( X\right) =\int d^{2}b_{\perp }b_{\perp
}^{2}f_{T}^{\pi }\left( X,b_{\perp }\right) . \label{b2x}
\end{equation}%
In chiral quark models the triangle diagram yields
\begin{align}
& b_{\perp }^{2}\left( X\right) =\frac{N_{c}}{\pi ^{2}f_{\pi }^{2}}%
\left( 1-X\right) ^{2}\int d\left( \alpha \gamma \right) \frac{%
\alpha \gamma }{\left( \alpha +\gamma \right) ^{3}} \times \label{b2xModel} \\
& \left[ 2\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) \right] , \notag
\end{align}%
where $\beta =\left( \alpha +\gamma \right) \frac{X}{1-X}$%
. The C-odd transverse size of the hadron, determined by the slope
of the tGFF at low momentum transfer, can be obtained by integrating $b_{\perp
}^{2}(X)$ over the momentum fraction,
\begin{equation}
b_{\perp }^{2}=2\int_{0}^{1}dX\,b_{\perp }^{2}\left( X \right) . \label{b2}
\end{equation}
According to Gribov~\cite{Gribov:1973jg}, one can interpret the
normalized quark density (\ref{QPD}) as an evolution of the
probability density for a stochastic motion of a particle in the
transverse plane. The role of the evolution time is played by the
rapidity variable, $\eta =\ln(1/X)$. For the stochastic process one can
introduce the mean squared distance of the particle as follows
\cite{Perevalova:2011qi}:%
\begin{equation}
d_{\perp }^{2}\left( X\right) =\int d^{2}b_{\perp }b_{\perp
}^{2}\rho_{T}^{\pi} \left( X,b_{\perp }\right) =\frac{b_{\perp }^{2}\left(
X\right) }{f_{T}^{\pi}\left( X\right) }. \label{D2T}
\end{equation}%
By using a model with short-range interactions, Gribov predicted that
\cite{Gribov:1973jg}%
\begin{equation}
d_{\perp }^{2}\left( \eta \right) =D\eta , \label{D2Tgribov}
\end{equation}%
where $D$ is a constant,
while in \cite{Perevalova:2011qi} the result is%
\begin{equation}
d_{\perp }^{2}\left( \eta \right) \sim \frac{1}{\left( 4\pi f_{\pi }\right)
^{2}}e^{\left( 1-\omega \right) \eta }. \label{D2Tpolyakov}
\end{equation}%
Here $\omega \approx 0.5$ is the slope of the forward quark
distribution at small $X$, i.e., $q(X)\sim 1/X^{\omega }$.
Note that Eq.~(\ref{D2Tpolyakov}) is
${\cal O} (N_c^{-1})$, since $f_\pi = {\cal O}(\sqrt{N_c}) $. Actually,
the ``chiral inflation'' discussed in Ref.~\cite{Perevalova:2011qi} is
a pion-loop effect, which is $1/N_c$-suppressed, but at the same time it is chirally
enhanced as $\log ( m_\pi^2)$ for $m_\pi \to 0$, compared to the
leading one-quark-loop contribution. In the real world with $N_c=3$
and $m_\pi=140{\rm ~MeV}$ the relative chiral contributions to the rms radius
of the pion are about 20\%~\cite{Gasser:1983yg}~\footnote{Actually,
from the relation for the rms radius of the pion found in ChPT~\cite{Gasser:1983yg},
$\langle r^2 \rangle = (\bar l_5 -1)/(16\pi^2 f_\pi^2)$, one has the
{\it total} low energy constant $\bar l_5=13.9\pm 1.3$, most of which
is saturated by the $\rho$-meson exchange, $\bar l_5^\rho \simeq 17$, at the
leading order in $N_c$. Thus, the subleading ($1/N_c $-suppressed)
contribution is estimated to be $\Delta \bar l_5 \equiv \bar l_5 -
\bar l_5^\rho \sim \log(m_\pi^2/m_\rho^2)\sim - 3$.}. Of course, the
additional inclusion of pion-loops in our model would automatically
reproduce this universal inflating phenomenon.
\section{Model results \label{sec:specific}}
Having derived the general formulas for tGPDs in chiral quark models from the triangle diagram of Fig.~\ref{fig:tri}, we now turn
to explicit numerical calculations. We start with the nonlocal models.
In the present work we consider two variants of the quark-pion
vertex of Eq.~(\ref{QPiVert}),
\begin{align}
& F_{\mathrm{I}}\left( k_{+}^{2},k_{-}^{2}\right) =\sqrt{m\left(
k_{+}^{2}\right) m\left( k_{-}^{2}\right) }, \label{QPiVertI} \\
& F_{\mathrm{HTV}}\left( k_{+}^{2},k_{-}^{2}\right) =\frac{1}{2}\left[
m\left( k_{+}^{2}\right) +m\left( k_{-}^{2}\right) \right] ,
\label{QPiVertT}
\end{align}
where $m\left( k^{2}\right) $ is the momentum-dependent dynamical quark
mass. The form (\ref{QPiVertI}) is motivated by the instanton picture
of the QCD vacuum \cite{Diakonov:1985eg} and is labeled ``instanton'', while the form (\ref{QPiVertT}), the
Holdom-Terning-Verbeek (HTV) vertex, comes from the nonlocal chiral
quark model of Ref.~\cite{Holdom:1990iq}. Some relevant differences between
both prescriptions regarding the proper implementation of chiral
symmetry are discussed in Ref.~\cite{Broniowski:1999dm}.
We consider the dynamical quark mass of the form%
\begin{equation}
m\left( k^{2}\right) =M_{q}f^{2}\left( k^{2}\right) ,
\end{equation}%
and for simplicity take the profile function $f(k^{2})$ as a Gaussian,
\begin{equation}
f(k^{2})=e^{-\Lambda k^{2}} \label{GaussF}
\end{equation}%
(note that $\Lambda$ has the interpretation of the squared inverse momentum cut-off).
The model contains two parameters: the dynamical quark mass at zero momentum, $M_{q}$, and
the nonlocality scale, $\Lambda $. For our numerical estimates we take one
parameter fixed at a physically reasonable value, $M_{q} \simeq 240$~MeV, and then fix $\Lambda $
via the pion decay constant evaluated in the chiral limit, $f_{\pi }=84$~MeV \cite{Gasser:1983yg}.
The expression for $f_{\pi }$ in the instanton model is given by
the Diakonov-Petrov formula \cite{Diakonov:1985eg},
\begin{eqnarray}
&&\hspace{-4mm} f_{\pi }^{\mathrm{I}}=\left[ \frac{N_{c}}{4\pi ^{2}}\! \int_{0}^{\infty
} \!\!\!\!\! du \, \frac{u}{D^{2}\left( u\right) } \left( m^{2}\left( u\right)\! - \!um\left( u\right) m^{\prime }\left( u\right)
\!+ \! u^{2}m^{\prime 2}( u) \right) \right] ^{1/2} \!\!\!\!, \nonumber \\
&&\label{FpiDP}
\end{eqnarray}%
while in the HTV model one has the Pagels-Stokar formula \cite%
{Pagels:1979hd,Holdom:1990iq}
\begin{equation}
f_{\pi }^{\mathrm{HTV}}=\left[ \frac{N_{c}}{4\pi ^{2}}\int_{0}^{\infty
}du\, u\frac{m\left( u\right) }{D^{2}\left( u\right) }\left( m\left(
u\right) -\frac{1}{2}um^{\prime }\left( u\right) \right) \right] ^{1/2}.
\label{FpiPS}
\end{equation}%
The described parameter-fitting procedure yields
\begin{equation}
\Lambda _{\mathrm{I}}=0.7~\mathrm{GeV}^{-2},\quad \Lambda _{\mathrm{HTV}%
}=0.375~\mathrm{GeV}^{-2}.
\end{equation}
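This fitting procedure is easy to reproduce numerically. The sketch below implements the Pagels-Stokar formula (\ref{FpiPS}) for the Gaussian profile (\ref{GaussF}), assuming the Euclidean denominator $D(u)=u+m^{2}(u)$ (cf.\ Eqs.~(\ref{NonLocB1},\ref{NonLocB2}) below), and solves for the value of $\Lambda$ that reproduces $f_{\pi}=84$~MeV at $M_{q}=240$~MeV; the root lands close to the quoted $\Lambda_{\mathrm{HTV}}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

NC, MQ, FPI = 3, 0.240, 0.084       # GeV units; chiral-limit f_pi

def m(u, lam):
    # m(k^2) = M_q f^2(k^2), Gaussian profile f = exp(-Lambda k^2)
    return MQ * np.exp(-2.0 * lam * u)

def fpi_htv(lam):
    # Pagels-Stokar formula, Eq. (FpiPS), with D(u) = u + m(u)^2
    def integrand(u):
        mu = m(u, lam)
        dmu = -2.0 * lam * mu       # m'(u)
        return u * mu * (mu - 0.5 * u * dmu) / (u + mu**2)**2
    val = quad(integrand, 0.0, np.inf, limit=400)[0]
    return np.sqrt(NC / (4.0 * np.pi**2) * val)

lam = brentq(lambda L: fpi_htv(L) - FPI, 0.05, 5.0)
print(f"Lambda_HTV = {lam:.3f} GeV^-2")   # close to 0.375 GeV^-2
\end{verbatim}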
For the instanton model, the integrand in Eq.~(\ref{MTn}) and the subsequent
formulas can be expressed as follows:
\begin{eqnarray}
&&\alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) +\gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) \notag \\
&&\overset{\mathrm{I}}{\rightarrow }\alpha d_{\alpha }^{3/2}d_{\beta
}^{1} d_{\gamma }^{1/2}+\beta d_{\alpha }^{1/2} d_{\beta }^{2} d_{\gamma
}^{1/2}+\gamma d_{\alpha }^{1/2} d_{\beta }^{1} d_{\gamma }^{3/2}, \label{II}
\end{eqnarray}%
while for the HTV model one has%
\begin{align}
& \alpha G_{m,0,0}\left( \alpha ,\beta ,\gamma \right) +\beta
G_{0,m,0}\left( \alpha ,\beta ,\gamma \right) +\gamma G_{0,0,m}\left( \alpha
,\beta ,\gamma \right) \notag \\
& \overset{\mathrm{HTV}}{\rightarrow }\frac{1}{4}\left\{ \alpha \left(
d_{\alpha }^{2}d_{\beta }^{1}d_{\gamma }^0+d_{\alpha }^{2}d_{\beta
}^0 d_{\gamma }^{1}+d_{\alpha }^{1}d_{\beta }^{2}d_{\gamma }^0+d_{\alpha
}^{1}d_{\beta }^{1}d_{\gamma }^{1}\right) \right. \notag \\
& +\left( \alpha \longleftrightarrow \gamma \right) \notag \\
& +\beta \left( d_{\alpha }^{1}d_{\beta }^{2}d_{\gamma }^0 +d_{\alpha
}^0 d_{\beta }^{2}d_{\gamma }^{1}+d_{\alpha }^0 d_{\beta }^{3}d_{\gamma
}^0+d_{\alpha }^{1}d_{\beta }^{1}d_{\gamma }^{1}\right) . \label{IT}
\end{align}%
Here we have introduced the short-hand notation
\begin{align}
\frac{m^{2n}\left(k^{2}\right) }{D\left( k^{2}\right) }\sim d_{\alpha }^{n}. \label{AlphaExact}
\end{align}
For the assumed Gaussian form factor (\ref{GaussF}) the $d_{\alpha }^{n}$ function at large $%
\alpha \gg \Lambda $ has the following behavior%
\begin{equation}
d_{\alpha }^{n}\approx\frac{1}{R\left( \lambda \right) }M_{q}^{n}e^{-\lambda \left( \alpha
-2n\Lambda \right) }\Theta \left( \alpha -2n\Lambda \right) ,
\label{AlphaApprox}
\end{equation}%
with
\begin{equation}
R\left( \lambda \right) =1-4\Lambda m^{2}\left( \lambda \right) ,
\end{equation}
where $\lambda $ is the root of the equation%
\begin{equation}
\lambda +m^{2}\left( \lambda \right) =0.
\end{equation}%
The functions (\ref{AlphaApprox}) can also be used as approximants for
the analytic calculations of the quark distributions in the pion. In the momentum
representation this simplification means that in the denominators of the
integrands we neglect the momentum dependence of the dynamical quark mass, as would be
the case of the local quark models.
\subsection{The numerical results for nonlocal models \label{sec:NonLocRes}}
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{Bni123IT.eps}
\caption{(Color online) The tGFFs $B_{ni}^{\pi, u}(t)$ in the HTV model (solid line) and in the
instanton model (dashed line) for several lowest values of $n$ and $i$.
The sequence in the legend corresponds to the sequence of the curves, from top to bottom.
\label{BTniNonLoc}}
\end{figure}
In this subsection we present the results for the nonlocal models.
These results are obtained from the formulas presented above with the help of
numerical integration.
We start by exploring the $t$-dependence. In Fig.~\ref{BTniNonLoc} we present the
pion $u$-quark tGFFs in the HTV model and in the instanton model. First of all, the increase of the indices
$n$ or $i$ causes a decrease of the form factor normalization. We also note a faster fall-off
with $t$ of the tGFFs for the instanton model compared to the HTV case. Recall that
the tGFFs undergo the QCD evolution, which will be discussed in detail in Sec.~\ref{sec:evol}.
The $B_{n0}^{\pi, u}$ form factors, however, evolve multiplicatively, hence we can read off
their $t$-dependence from Fig.~\ref{BTniNonLoc}.
At large $t$ the $B_{10}^{\pi, u}$ form factor in the HTV model has the asymptotic behavior $\sim \ln t/t$.
This follows from the asymptotic formula
\begin{eqnarray}
&&B_{T10}^{u}\left( t \gg \Lambda^{-1} \right) \overset{\mathrm{HTV}}{=}\frac{1}{t}%
\frac{N_{c}}{16\pi ^{2}f_{\pi }^{2}}\left[ \int_{0}^{\infty }du\frac{%
m^{3}\left( u\right) }{D\left( u\right) }\ln \left( \frac{t}{u}\right)
\right. \notag \\
&&\left. +2\int_{0}^{\infty }du\frac{m^{2}\left( u\right) }{D\left( u\right)
}\int_{0}^{\infty }dv\frac{m\left( u+v\right) }{D\left( u+v\right) } \times \right . \notag \\ && \hspace{2.1cm} \left . \left( 1-%
\frac{m\left( u\right) m\left( u+v\right) }{u+v}\right) \right] .
\end{eqnarray}%
For the instanton model the fall-off is exponential, since
\begin{eqnarray}
&&B_{T10}^{u}\left( t \gg \Lambda^{-1} \right) \overset{\mathrm{I}}{=}\frac{N_{c}}{%
4\pi ^{2}f_{\pi }^{2}}\frac{\sqrt{\pi }M_{q}^{3}}{R\left( \lambda \right) }
\\
&&\frac{1}{t}\frac{1}{\sqrt{\Lambda \sqrt{\lambda t}}}\left( 1-2\sqrt{\frac{%
\lambda }{t}}\right) e^{-\Lambda \left( \sqrt{\lambda t}-6\lambda \right)
}E_{1}\left( \Lambda \sqrt{\lambda t}\right) . \notag
\end{eqnarray}%
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{Bni12IT_b2.eps}
\caption{(Color online) The tGFFs $B_{ni}^{\pi, u}(b_T^2)$ in the impact parameter space in the HTV
model (solid line) and in the instanton model (dashed line).
The sequence in the legend corresponds to the sequence of the curves, from top to bottom.
\label{BTni(b2)}}
\end{figure}
In Fig.~\ref{BTni(b2)} we display the tGFFs in the impact-parameter space.
The information is the same as in Fig.~\ref{BTniNonLoc}, as the
two figures are simply linked by a Fourier-Bessel transform. Nevertheless, the different
large-$t$ behavior of the instanton and HTV models is clearly visible in the small-$b_T$ region in
Fig.~\ref{BTni(b2)}.
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{f_x.eps}
\caption{(Color online) The tPDF in the HTV model (solid line) and in the
instanton model (dashed line).
\label{f(x)}}
\end{figure}
Next, we explore the $X$ dependence in the simplest case of $t=0$ and $\xi=0$ (tPDF).
In Fig.~\ref{f(x)} we present the results of calculations of the tPDF in
the nonlocal models (\ref{fTpi}).
We notice a roughly triangular shape for both models, with a depletion near $X=0$.
The end-point behavior of these functions can be
inferred from Eq.~(\ref{ETnMF}) by using the approximants (\ref{AlphaApprox}).
The $X\rightarrow 1$ behavior is governed by the properties of the active
dynamical quark, while the $X\rightarrow 0$ behavior is related to the
spectator quark. For the instanton model the endpoint behavior is
exponentially suppressed, namely
\begin{eqnarray}
f_{T}^{I}\left( X\rightarrow 1\right) &\sim &\left( 1-X\right) ^{2}\exp
\left[ -\frac{2\lambda \Lambda }{1-X}\right] , \notag \\
f_{T}^{I}\left( X\rightarrow 0\right) &\sim &\exp \left[ -\frac{2\lambda
\Lambda }{X}\right] , \label{fI}
\end{eqnarray}%
while for the HTV model one has a power-like behavior%
\begin{eqnarray}
f_{T}^{HTV}\left( X\rightarrow 1\right) &\sim &\left( 1-X\right) , \notag \\
f_{T}^{HTV}\left( X\rightarrow 0\right) &\sim &\mathrm{const.} \label{fT}
\end{eqnarray}%
We remark here that the end-point behavior in Eqs.~(\ref{fI},\ref{fT}) is
sensitive to the radiative corrections, hence it evolves with the scale.
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{b_x.eps}
\caption{(Color online) The distribution function of the transverse size in
the HTV model (solid line) and in the instanton model (dashed
line).
\label{b(x)}}
\end{figure}
A similar behavior is obtained for the transverse size distribution at $t=0$,
shown in Fig. \ref{b(x)}, namely
\begin{eqnarray}
b_{\perp I}^{2}\left( X\rightarrow 1\right) &\sim &\left( 1-X\right)
^{4}\exp \left[ -\frac{2\lambda \Lambda }{1-X}\right] ,\qquad \\
b_{\perp I}^{2}\left( X\rightarrow 0\right) &\sim &\frac{1}{X}\exp \left[ -%
\frac{2\lambda \Lambda }{X}\right] , \notag \\
b_{\perp HTV}^{2}\left( X\rightarrow 1\right) &\sim &\left( 1-X\right)
^{3},\notag \\ b_{\perp HTV}^{2}\left( X\rightarrow 0\right) &\sim &\mathrm{const}. \notag
\end{eqnarray}
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{d_x.eps}
\caption{(Color online) The distribution function of the mean square
distance in the HTV model (solid line) and in the instanton
model (dashed line), plotted as a function of $X$.
\label{d(x)}}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{dT_eta.eps}
\caption{(Color online) The distribution function of the mean square
distance as function of rapidity $\protect\eta $ in the HTV model (solid
line), in the instanton model (dashed line), in the Gribov approach \cite{Gribov:1973jg} (G) (dot-dashed line),
and in the PPV model \cite{Perevalova:2011qi} (dotted line).
\label{d(eta)}}
\end{figure}
Next, we present our results for the distribution function of the mean square
distance. In Fig.~\ref{d(x)} we show $d_\perp^2$ as a function of $X$, while
in Fig.~\ref{d(eta)} we present the same quantity as a function of the rapidity
variable $\eta$. We also compare our results to the calculations of Refs.~\cite{Gribov:1973jg} (G) and \cite{Perevalova:2011qi} (PPV).
In the region of large $\eta$, corresponding to low $X$, the various model predictions differ significantly.
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{E_x_I=1.eps}
\caption{(Color online) The pion tGPD for isovector case
in the HTV model (solid lines) and in the instanton model
(dashed lines) for several values of $\xi$.
\label{E(x)I1}}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=.48\textwidth]{E_x_I=0.eps}
\caption{(Color online) The pion tGPD for isoscalar case
in the HTV model (solid lines) and in the instanton model
(dashed lines) for several values of $\xi$.
\label{E(x)I0}}
\end{figure}
Finally, we explore the dependence on $\xi$ and $X$ of the pion tGPDs at $t=0$.
The results are given in Figs.~\ref{E(x)I1} and \ref{E(x)I0}.
We note the symmetry properties following from the definition (\ref{ETnI01}).
We can also see that the curves bend near $X=\xi$.
To summarize the study of this subsection, we state that the results, apart from the
mathematically different end-point behavior, are qualitatively similar in the two
explored variants of the nonlocal chiral quark models.
\subsection{Nambu--Jona-Lasinio model \label{sec:NJL}}
We term the usual Nambu--Jona-Lasinio model with point-like quark-quark
interactions the \emph{local} NJL model. All formulas for the local model
follow from the nonlocal expressions given above, with the constant
quark mass, which formally corresponds to taking the limit $\Lambda \to 0$.
In addition, a regularization prescription, necessary to make the divergent
integrals finite, is implemented, as discussed below.
The one-quark-loop action of the NJL model is
\begin{align}
\Gamma_{\mathrm{NJL}} =-i N_{c} \mathrm{Tr} \log\left( i\partial\!\!\!/ - M U^{5} - m \right) \Big|_{\mathrm{reg}} , \label{eq:eff_ac_NJL}
\end{align}
where $M$ is the constituent quark mass generated via the spontaneous
breaking of the chiral symmetry,
\begin{align}
U^{5}=\exp(i \gamma_{5} \boldsymbol{\phi}\cdot\boldsymbol{\tau}),
\end{align}
with $\boldsymbol{\phi}$ denoting the pion field, while $m$ is the current quark mass. We apply the NJL model with the Pauli-Villars
regularization in the twice-subtracted version of Refs.~\cite%
{RuizArriola:1991gc,Schuren:1991sc, RuizArriola:2002wr}. Variants of chiral
quark models differ in the way of performing the necessary regularization of
the quark loop diagrams, which may to some extent influence the physical
results.
Here we use the prescription where $M^{2}$ in the loop integral is replaced
with the combination $M^{2}+\Lambda^{2}$, where in the present context
$\Lambda$ is the cut-off
parameter, and then the regularized observable is evaluated according to the
formula
\begin{eqnarray}
\mathcal{O}_{\mathrm{reg}} = \mathcal{O}(0) - \mathcal{O}(\Lambda^{2} ) +
\Lambda^{2}\, d\mathcal{O}(\Lambda^{2} ) / d\Lambda^{2}.
\end{eqnarray}
The pre-multiplying factor $g_{\pi}^{2}=M^{2}/f_{\pi}^{2}$ is not
regularized~\cite{RuizArriola:1991gc,Schuren:1991sc, RuizArriola:2002wr}.
In the local model it is relatively simple to go beyond the chiral limit, hence we
do not restrict ourselves to the case $m_\pi=0$.
Since the lattice data used in this work correspond to $m_\pi=600$~MeV, not at all
close to the chiral limit of $m=0$, we need to deal with a situation of
moderately large pion masses. The prescription to fix the model parameters
is as follows: the three constants $\Lambda$, $M$, and $m$ are traded for
the constituent quark mass, $M$, the pion decay constant $f_{\pi}$, and $%
m_{\pi}$. We assume that $\Lambda$ depends on $M$ only, and not on $m$.
Constraining $f_{\pi}=93$~MeV (the physical value) and using the given value
of $m_{\pi}$ leaves us with one free parameter only, $M$, which is taken
in the $250-300$~MeV ballpark.
We recall that the optimum value of $M$ used in chiral quark
models depends on the particular observable used in the fitting procedure. The
application to the $\rho $ meson suggests $M$ above $m_{\rho }/2\sim 400$~MeV, while
the soliton models for the nucleon prefer $M\sim 300-350$~MeV \cite%
{Christov:1995vm}. However, significantly lower values follow from other studies in the
pion sector. The
charge radius of the pion in the NJL model with the Pauli-Villars regulator
favors $M\sim 280$~MeV \cite{RuizArriola:2002wr}; however, the pion-loop
corrections to this observable are important. The analysis of the radii of
the pion charge and transition form factors from quark triangle diagrams
yields $M=\sqrt{2/3}\,\pi f_{\pi }\sim 240$~MeV \cite{Gerasimov:1978cp}.
Another restriction on the value of $M$ follows from the Adler function and the
corresponding vacuum polarization contribution to the gyromagnetic factor $g-2$
of the muon. The loop approach (without and with radiative corrections) \cite%
{Pivovarov:2001mw,Boughezal:2011vw} yields $M=180-200$~MeV, the analytic perturbation model \cite{Milton:2001mq} gives $240$~MeV,
while the nonlocal chiral quark model \cite{Dorokhov:2004ze} suggests $250$~MeV. Our chosen
value of $\sim 250$~MeV falls into this ballpark.
In the NJL model the formulas for the lowest two transversity form factors are very simple,
\begin{align}
& \frac{B_{T10}^{\pi ,u}(t)}{m_{\pi }}=\int_{0}^{1}\!\!\!d\alpha
\int_{0}^{1-\alpha }\!\!\!\!\!\!d\beta \,K,
\notag \\
& \frac{B_{T20}^{\pi ,u}(t)}{%
m_{\pi }}=\int_{0}^{1}\!\!\!d\alpha \int_{0}^{1-\alpha }\!\!\!\!\!\!d\beta
\,\alpha K, \notag \\
& K=\left. \frac{N_{c}g_{\pi }^{2}M}{2\pi ^{2}\left( M^{2}+m_{\pi
}^{2}(\alpha -1)\alpha +t\beta (\alpha +\beta -1)\right) }\right\vert _{%
\mathrm{reg}}. \label{eq:NJL}
\end{align}%
Here $g_{\pi}=M/f_{\pi}$, and the variables $\alpha $ and $\beta $ are the Feynman parameters.
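As an illustration, the sketch below evaluates Eq.~(\ref{eq:NJL}) in the chiral limit ($m_{\pi}=0$) and for spacelike $t$, where the integrand is free of thresholds; the Pauli-Villars cutoff is set to an illustrative value (in the actual fits $\Lambda$ is fixed through $f_{\pi}$). Note that at $t=0$ the kernel $K$ is constant in the Feynman parameters, so the chiral-limit ratio $B_{T20}/B_{T10}=1/3$ of Eq.~(\ref{LocLim2}) below is recovered independently of the regularization:
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

NC = 3
M, FPI = 0.250, 0.093            # GeV
MPI = 0.0                        # chiral limit (below threshold)
LAM2 = 0.8**2                    # illustrative PV cutoff^2 (GeV^2)
GPI2 = M**2 / FPI**2             # g_pi^2, not regularized

def K_reg(a, b, t):
    # twice-subtracted Pauli-Villars: O(0) - O(L^2) + L^2 dO/dL^2,
    # with M^2 -> M^2 + L^2 in the loop denominator
    X = MPI**2 * (a - 1.0) * a + t * b * (a + b - 1.0)
    O = lambda L2: 1.0 / (M**2 + L2 + X)
    dO = lambda L2: -1.0 / (M**2 + L2 + X)**2
    pref = NC * GPI2 * M / (2.0 * np.pi**2)
    return pref * (O(0.0) - O(LAM2) + LAM2 * dO(LAM2))

def B_T(n, t):
    # B_{Tn0}/m_pi for n = 1, 2 (weights 1 and alpha)
    w = (lambda a: 1.0) if n == 1 else (lambda a: a)
    return dblquad(lambda b, a: w(a) * K_reg(a, b, t),
                   0.0, 1.0, lambda a: 0.0, lambda a: 1.0 - a)[0]

print(B_T(2, 0.0) / B_T(1, 0.0))   # = 1/3 at t = 0 in the chiral limit
for t in (-0.5, -1.0):             # spacelike t in GeV^2
    print(t, B_T(1, t), B_T(2, t))
\end{verbatim}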
The results for the tGPD are particularly simple at $t=0$ and in the chiral limit, namely trapezoidal for
the symmetric ($I=1$) combination,
\begin{equation}
E_{T}^{\pi ,S}(X,\xi ,t=0;\mu _{0})/N=\left\{
\begin{array}{rl}
1, & 0\leq X\leq \xi \\
\frac{1-X}{1-\xi }, & \xi \leq X\leq 1%
\end{array}%
\right. ,
\end{equation}%
and triangular for the antisymmetric ($I=0$) combination,
\begin{equation}
E_{T}^{\pi ,A}(X,\xi ,t=0;\mu _{0})/N=\left\{
\begin{array}{rl}
X/{\xi }, & 0\leq X\leq \xi \\
\frac{1-X}{1-\xi }, & \xi \leq X\leq 1%
\end{array}%
\right. .
\end{equation}
Here $N$ denotes a normalization constant following from the model.
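A few lines of Python make these shapes and their normalization explicit; in particular, the integral of the symmetric combination over $0\le X\le 1$ equals $(1+\xi )/2$ in units of $N$, which is precisely the normalization convention adopted for the plots in Sec.~\ref{sec:res}:
\begin{verbatim}
from scipy.integrate import quad

def ET_S(X, xi):
    # symmetric (I=1) combination: trapezoid
    if not 0.0 <= X <= 1.0:
        return 0.0
    return 1.0 if X <= xi else (1.0 - X) / (1.0 - xi)

def ET_A(X, xi):
    # antisymmetric (I=0) combination: triangle
    if not 0.0 <= X <= 1.0:
        return 0.0
    return X / xi if X <= xi else (1.0 - X) / (1.0 - xi)

xi = 1.0 / 3.0
norm = quad(lambda X: ET_S(X, xi), 0.0, 1.0)[0]
print(norm, (1.0 + xi) / 2.0)   # both equal 2/3
\end{verbatim}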
Other results of the local NJL model, the corresponding plots, and comparisons to the predictions
of the nonlocal models will be presented in the following parts, together
with the discussion of the QCD evolution.
\section{QCD evolution \label{sec:evol}}
We now come to a very important aspect of our analysis.
Before comparing the results to the lattice data we need to carry out the
QCD evolution, as the tGPD and tGFFs evolve with the scale. The need for the
evolution has been discussed in detail in \cite{Broniowski:2007si}. In
essence, our approach consists of 1)~evaluation of the appropriate soft
matrix element in the given model at the low quark-model scale, where the matrix element is
matched to the QCD result, and 2)~subsequent evolution to higher scales with
appropriate perturbative QCD equations.
For instance, the lattice data correspond typically to the scale of about $%
Q=2$~GeV, as follows from the value of the lattice spacing used,
while the quark model calculation corresponds to a much lower scale,
\begin{equation}
\mu _{0}\sim \Lambda _{\mathrm{QCD}}.
\end{equation}%
A detailed discussion of the evolution issue and ways to set the quark model
scale is presented in Refs.~\cite{Broniowski:2007si,Broniowski:evol}, where the
scale%
\begin{equation}
\mu _{0}=313~\mathrm{MeV} \label{scale}
\end{equation}%
is advocated. We stress that the inclusion of evolution is crucial for
obtaining the results at experimental or lattice scales. A non-trivial
test is to check that the procedure reproduces consistently other
observables at a given scale, $\mu$ (see, e.g.,
Refs.~\cite{Broniowski:2007si,Broniowski:evol} for a detailed comparison).
\subsection{Evolution of tGPD \label{sec:evoltGPG}}
The leading-order DGLAP-ERBL evolution for tGPD is given, e.g., in~\cite%
{Belitsky:2005qn}. To carry out this evolution in practical terms, we use
the method given in \cite%
{Kivel:1999sk,Kivel:1999wa,Manashov:2005xp,Kirch:2005tt}, where the basic
objects are the moments in the Gegenbauer polynomials of index $n$
\begin{equation}
g_{n}(\mu)=\int_{0}^{1}dX\, E_{T}^{\pi ,S}(X,\xi ,t;\mu)G_{n}^{3/2}(X/\xi ).
\end{equation}%
The DGLAP region, $X>\xi $, is outside of the orthogonality range for the polynomials $G_{n}^{3/2}(X/\xi)$.
The LO DGLAP-ERBL evolution amounts to the multiplication
\begin{equation}
g_{n}(\mu )=L_{n}g_{n}(\mu _{0}),
\end{equation}%
with the evolution factor
\begin{equation}
L_{n}=\left( \frac{\alpha (\mu )}{\alpha (\mu _{0})}\right) ^{\gamma
_{n}^{T}/(2\beta _{0})}.
\end{equation}%
The anomalous dimensions in the transversity (tensor) channel are given by
\begin{equation}
\gamma _{n}^{T}=\frac{32}{3}H_{n}-8,
\end{equation}%
where $H_{n}=\sum_{k=1}^{n}1/k$. In particular, one has for the two lowest form factors
$\gamma _{1}^{T}=\frac{8}{3}$ and $\gamma _{2}^{T}=8$. We use $\beta _{0}=%
\frac{11}{3}N_{c}-\frac{2}{3}N_{f}$ and the running coupling constant
\begin{equation}
\alpha (\mu )={4\pi }/[{\beta _{0}\log (\mu ^{2}/\Lambda _{\mathrm{QCD}}^{2})}],
\end{equation}%
with $\Lambda _{\mathrm{QCD}}=226~\mathrm{MeV}$ for $N_{c}=N_{f}=3$. The
inversion of the evolved moments back into the evolved GPD, applied in our
calculation, is explained in \cite%
{Kivel:1999sk,Kivel:1999wa,Manashov:2005xp,Kirch:2005tt}.
We also recall that in the transversity channel the quark distributions
evolve autonomously, i.e., they do not mix with the gluon distributions, in
contrast to the vector and axial channels. Thus no gluon tGPDs are
generated by the QCD evolution, while by construction they vanish in chiral quark
models at the quark-model scale.
\subsection{Evolution of transversity form factors \label{sec:evoltFF}}
The LO DGLAP-ERBL evolution of tGFFs, defined as moments of the GPDs, has
been spelled out explicitly in~\cite{Broniowski:2009zh}. The triangular
structure that emerges from the evolution of the
tGPDs reads, for odd $n=2k+1$,
\begin{eqnarray}
&&B_{2k+1,2l}=k\Gamma (2k)\sum_{m=0}^{k}(4m+3)L_{2m+1}\sum_{j=k-l}^{k} \\
&&\frac{2^{2(j-k)}(-1)^{m-j}\Gamma \left( j+m+\frac{3}{2}\right)
B_{2j+1,2(j-k+l)}^{0}}{\Gamma (2j+1)\Gamma (m-j+1)\Gamma (k-m+1)\Gamma
\left( k+m+\frac{5}{2}\right) }, \notag
\end{eqnarray}%
and, for even $n=2k+2$,
\begin{eqnarray}
&&B_{2k+2,2l}=\Gamma (2k+2)\sum_{m=0}^{k}(4m+5)L_{2m+2}\sum_{j=k-l}^{k} \\
&&\frac{2^{2j-2k-1}(-1)^{m-j}\Gamma \left( j+m+\frac{5}{2}\right)
B_{2(j+1),2(j-k+l)}^{0}}{\Gamma (2j+2)\Gamma (m-j+1)\Gamma (k-m+1)\Gamma
\left( k+m+\frac{7}{2}\right) }, \notag
\end{eqnarray}%
where $k=0,1,2,\dots $ and $l=0,1,\dots ,k$. We have introduced a
short-hand notation $B_{ni}=B_{Tni}^{\pi }(t;\mu )$ and $B_{ni}^{0}=B_{Tni}^{%
\pi }(t;\mu _{0})$. For the lowest moments we have, explicitly,
\begin{eqnarray}
B_{10} &=&L_{1}B_{10}^{0}, \notag \\
B_{32} &=&\frac{1}{5}(L_{1}-L_{3})B_{10}^{0}+L_{3}B_{32}^{0}, \notag \\
B_{54} &=&\frac{1}{105}(9L_{1}-14L_{3}+5L_{5})B_{10}^{0} \notag \\
&&+\frac{2}{3}(L_{3}-L_{5})B_{32}^{0}+L_{5}B_{54}^{0}, \notag \\
&\dots & \notag \\
B_{20} &=&L_{2}B_{20}^{0}, \notag \\
B_{42} &=&\frac{3}{7}(L_{2}-L_{4})B_{20}^{0}+L_{4}B_{42}^{0}, \notag \\
&\dots & \notag \\
B_{30} &=&L_{3}B_{30}^{0}, \notag \\
B_{52} &=&\frac{2}{3}(L_{3}-L_{5})B_{30}^{0}+L_{5}B_{52}^{0}, \notag \\
&\dots & \notag \\
B_{40} &=&L_{4}B_{40}^{0}. \label{ev:ns}
\end{eqnarray}%
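Formulas of this type are easy to mistype, so a direct numerical cross-check is useful. The Python sketch below implements the general odd-$n$ expression (with the prefactor $k\Gamma (2k)$ understood as its analytic form $\Gamma (2k+1)/2$, which correctly reproduces $B_{10}=L_{1}B_{10}^{0}$ at $k=0$) and verifies the explicit $B_{32}$ and $B_{54}$ lines above for random inputs:
\begin{verbatim}
import numpy as np
from math import gamma

def B_odd(k, l, L, B0):
    # B_{2k+1,2l}; L[n] = evolution factors, B0[(n,i)] = initial tGFFs
    tot = 0.0
    for m in range(k + 1):
        for j in range(k - l, k + 1):
            if m - j + 1 <= 0:   # 1/Gamma(nonpositive integer) = 0
                continue
            num = 2.0**(2*(j-k)) * (-1.0)**(m-j) * gamma(j + m + 1.5)
            den = (gamma(2*j + 1) * gamma(m - j + 1)
                   * gamma(k - m + 1) * gamma(k + m + 2.5))
            tot += ((4*m + 3) * L[2*m + 1] * num / den
                    * B0[(2*j + 1, 2*(j - k + l))])
    return 0.5 * gamma(2*k + 1) * tot   # = k Gamma(2k), also at k = 0

rng = np.random.default_rng(0)
L = {n: rng.uniform(0.2, 1.0) for n in (1, 3, 5)}
B0 = {(1, 0): 1.3, (3, 2): 0.7, (5, 4): 0.4}

print(B_odd(1, 1, L, B0),
      (L[1] - L[3]) / 5 * B0[(1, 0)] + L[3] * B0[(3, 2)])
print(B_odd(2, 2, L, B0),
      (9*L[1] - 14*L[3] + 5*L[5]) / 105 * B0[(1, 0)]
      + 2.0/3.0 * (L[3] - L[5]) * B0[(3, 2)] + L[5] * B0[(5, 4)])
\end{verbatim}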
In particular, the two lowest tGFFs available from the lattice data, $B_{T10}^{\pi
,u}$ and $B_{T20}^{\pi ,u}$, evolve multiplicatively as follows:
\begin{equation*}
B_{Tn0}^{\pi ,u}(t;\mu )=B_{Tn0}^{\pi ,u}(t;\mu _{0})\left( \frac{\alpha
(\mu )}{\alpha (\mu _{0})}\right) ^{\gamma _{n}^{T}/(2\beta _{0})},
\end{equation*}%
which numerically gives
\begin{align}
& B_{T10}^{\pi ,u}(t;2~\mathrm{GeV})=0.75B_{T10}^{\pi ,u}(t;\mu _{0}),
\notag \\
& B_{T20}^{\pi ,u}(t;2~\mathrm{GeV})=0.43B_{T20}^{\pi ,u}(t;\mu _{0}).
\label{evol:explicit}
\end{align}%
Note a stronger reduction for $B_{T20}$ compared to $B_{T10}$ as the result
of the evolution.
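These numbers follow directly from the anomalous dimensions and the LO running coupling quoted above, as the short sketch below illustrates:
\begin{verbatim}
import numpy as np

NC = NF = 3
BETA0 = 11.0/3.0*NC - 2.0/3.0*NF      # = 9
LQCD, MU0 = 0.226, 0.313              # GeV

def alpha_s(mu):                      # LO running coupling
    return 4.0*np.pi / (BETA0 * np.log(mu**2 / LQCD**2))

def gammaT(n):                        # tensor anomalous dimensions
    return 32.0/3.0 * sum(1.0/k for k in range(1, n+1)) - 8.0

def L(n, mu):                         # LO evolution factor
    return (alpha_s(mu)/alpha_s(MU0))**(gammaT(n)/(2.0*BETA0))

print(L(1, 2.0), L(2, 2.0))           # 0.75 and 0.43
print(L(2, 2.0)/L(1, 2.0)/3.0)        # ratio of Eq. (LocLim2) at 2 GeV
\end{verbatim}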
In the chiral limit and at $t=0$
\begin{align}
& B_{T10}^{\pi,u}(t=0;\mu_{0})/m_{\pi}=\frac{N_{c} M}{4\pi^{2} f_{\pi}^{2}},
\label{LocLim1} \\
& \frac{B_{T20}^{\pi,u}(t=0;\mu)}{B_{T10}^{\pi,u}(t=0;\mu)}=\frac{1}{3}
\left( \frac{\alpha(\mu)}{\alpha(\mu_{0})}\right) ^{8/27}. \label{LocLim2}
\end{align}
\section{Numerical results after the QCD evolution \label{sec:res}}
In this section we present our numerical results {\em after the QCD evolution} for the tGPD of the pion, its
special cases $\xi =0$ and $\xi =1$, corresponding to the tPDF and tDA,
respectively, as well as discuss the tGFFs. The latter are compared to the
available lattice data of~\cite{Brommel:2007xd}.
\subsection{tGPD \label{ssec:tgpd}}
The results of the calculation of the tGPD of the pion at a sample value of $%
\xi =1/3$ and at $t=0$, together with the LO DGLAP-ERBL evolution, are given
in Figs.~\ref{fig:tGPDnonloc} and \ref{fig:tGPDloc}. For the non-local case
we take the HTV model (\ref{QPiVertT},\ref{IT}), as the results of the
instanton model (\ref{QPiVertI},\ref{II}) are qualitatively similar. Here we
take for simplicity the chiral limit, $m_{\pi }=0$. We provide in the
figures the symmetric (S) and antisymmetric (A) combinations in the $X$
variable (\ref{ETnI01}). The solid lines correspond to the calculation at
the quark-model scale, $\mu _{0}$. In this case we conventionally normalize
the plotted functions with a constant $N$ in such a way that
\begin{equation}
\int_{0}^{1}dXE_{T}^{\pi ,S}(X,\xi ,t=0;\mu _{0})/N=\frac{1+\xi }{2}
\end{equation}%
for all displayed models.
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{chiral240t0xi3sym.eps} \newline
\vspace{4.5mm} \includegraphics[width=.47\textwidth]{chiral240t0xi3asym.eps}
\caption{(Color online) The DGLAP-ERBL evolution of the symmetric (S, or $I=1$) and
antisymmetric (A, or $I=0$) parts of the quark tGPD of the pion in the non-local HTV
model for $m_\protect\pi=0$, $t=0$, $\protect\xi=1/3$, and $M=240$~MeV.
The solid line corresponds to the initial condition at the quark model scale
$\protect\mu_0=313$~MeV, the dashed line shows the result of the evolution
to $\protect\mu=2$~GeV, and the dotted line to $\protect\mu=1$~TeV.
\label{fig:tGPDnonloc}}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{localt0xi3sym.eps}
\newline
\vspace{4.5mm} \includegraphics[width=.47\textwidth]{localt0xi3asym.eps}
\caption{(Color online) Same as Fig.~\protect\ref{fig:tGPDnonloc} for the
local NJL model. \label{fig:tGPDloc}}
\end{figure}
Further, we note the gross qualitative similarity between the nonlocal HTV
model and the local NJL model. The differences are manifest in the end-point
behavior. Near $X=1$ the tGPD in the non-local model is suppressed, as explained
in Sec.~\ref{sec:NonLocRes}. Also, near $X=0$ the quantity $E^{\pi,S}_T$ is
depleted compared to the local case, where no minimum is present.
The dashed and dotted curves show the results evolved to the scales $2$~GeV
and 1~TeV, respectively. After the evolution the results of the HTV model
and the local NJL model are qualitatively very similar.
\subsection{tPDF \label{sec:tpdf}}
Next, we explore the special case $\xi =0$, again for $t=0$ and $m_{\pi }=0$%
. In this case the tGPD reduces, by definition, to the tPDF. In Fig.~\ref{fig:tPDF} we compare
the predictions of the three considered models at the quark-model scale, $%
\mu _{0}$. We note different end-point behavior, both at $X=1$ and at $X=0$,
according to the discussion presented in Sec.~\ref{sec:NonLocRes}. Near $X=1$
the instanton model has a stronger suppression in tPDF than the HTV model.
The local model approaches zero linearly. Again, we note that the QCD evolution changes the
end-point behavior.
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{pcompsym.eps}
\caption{(Color online) Comparison of the tPDF ($E_{T}^{\protect\pi }(X,t=0,%
\protect\xi =0)$) in the local model (solid line), instanton model (dashed
line), and the HTV model (dotted line) for $m_{\protect\pi }=0$, evaluated
at the quark-model scale.
\label{fig:tPDF}}
\end{figure}
\subsection{tDA \label{sec:tda}}
Another interesting limiting case is provided by \mbox{$\xi=1$}. In that case
\begin{eqnarray}
E_T^\pi(X,t=0,\xi=1)=\phi_T(X),
\end{eqnarray}
where $\phi_T^\pi(X)$ is the tensor distribution amplitude of the pion,
defined as
\begin{eqnarray}
&&\langle 0 | \overline{d}(z) \sigma_{\alpha \beta} \gamma_5 u(-z)| \pi^+(q)
\rangle = \label{PT} \\
&& i \frac{\sqrt{2}}{3} N^{T} (p_\alpha z_\beta-p_\beta z_\alpha)\int_0^1 du\,
e^{i (2u-1) q\cdot z} \phi_{T}^{\pi}(u), \notag
\end{eqnarray}
where $X=2u-1$ and $N^{T}$ is the normalization factor yielding $\int_0^1 du
\phi_T(u)=1$.
The local NJL model predicts a constant $\phi_T^\pi(X)$ at the quark-model
scale. Again, as seen from Fig.~\ref{fig:tDA}, the difference between the
local and non-local models is seen in the end-point behavior, $X \sim \pm 1$%
. In the intermediate range of $X$ the tDA $\phi_T^\pi(X)$ is close to a
constant also for the non-local models.
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{tpda.eps}
\caption{(Color online) Comparison of the tDA ($\protect\phi _{T}^{\protect%
\pi }(X)$) in the local model (solid line), instanton model (dashed line),
and the HTV model (dotted line) for $m_{\protect\pi }=0$, evaluated at the
quark-model scale.
\label{fig:tDA}}
\end{figure}
In Fig.~\ref{fig:evphiT} we show the LO ERBL evolution of the tDA of the
pion in the local NJL model. We note a gradual approach towards the
asymptotic form
\begin{eqnarray}
\phi _{T,\mathrm{asym}}^{\pi }(u)=6u(1-u).
\end{eqnarray}
For the non-local models the effect of the evolution is similar.
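A minimal numerical sketch of this evolution can be written under the assumption, suggested by the moment labeling of Sec.~\ref{sec:evoltFF} (where $B_{10}=L_{1}B_{10}^{0}$ corresponds to the lowest moment), that the Gegenbauer-$C_{n}^{3/2}$ moment of the tDA evolves with the factor $L_{n+1}$; the overall normalization is restored at each scale, as enforced by $N^{T}$:
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer

BETA0, LQCD, MU0 = 9.0, 0.226, 0.313    # N_c = N_f = 3

def alpha_s(mu):
    return 4.0*np.pi / (BETA0 * np.log(mu**2 / LQCD**2))

def L(n, mu):
    gam = 32.0/3.0 * sum(1.0/k for k in range(1, n+1)) - 8.0
    return (alpha_s(mu)/alpha_s(MU0))**(gam/(2.0*BETA0))

def phiT(u, mu, nmax=40):
    # flat tDA at mu0, expanded in 6 u (1-u) C_n^{3/2}(2u-1);
    # for phi(u,mu0) = 1 only even n contribute, with moments a_n
    x = 2.0*u - 1.0
    tot = 0.0
    for n in range(0, nmax + 1, 2):
        a_n = 2.0*(2*n + 3) / (3.0*(n + 1)*(n + 2))
        tot += a_n * L(n + 1, mu)/L(1, mu) * eval_gegenbauer(n, 1.5, x)
    return 6.0*u*(1.0 - u)*tot

for mu in (0.5, 2.0, 1000.0):
    print(mu, phiT(0.5, mu))   # rises toward the asymptotic value 1.5
\end{verbatim}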
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{evphiT.eps}
\caption{(Color online) Evolution of the tensor distribution amplitude, tDA,
in the local NJL model. The subsequent curves (from bottom to top at $u=1/2$) correspond to $\protect\mu =%
\protect\mu _{0}=313$~MeV (the constant), $\protect\mu =500$~MeV, $\protect%
\mu =2$~GeV, $\protect\mu =1000$~GeV, and $\protect\mu =\infty $ (the
asymptotic form $6u(1-u)$).
\label{fig:evphiT}}
\end{figure}
\subsection{tGFFs \label{ssec:tff}}
In Fig.~\ref{fig:evff} we show the LO DGLAP-ERBL evolution of the tGFFs
evaluated in the local NJL model.
By comparing the two panels of Fig.~\ref{fig:evff} we note that the evolution
of the $B_{Tn0}$ form factors is multiplicative, and increasing the scale leads
to a quenching of the form factors. For the form factors $B_{Tni}$ with $i\neq 0$ the evolution
is more complicated, as can be inferred from Eq.~(\ref{ev:ns}).
For the non-local models the effects of the evolution for tGFFs are similar.
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{loc250mpi600qmscale.eps} \newline
\vspace{4mm} \includegraphics[width=.47\textwidth]{loc250mpi600scale2.eps}
\caption{(Color online) The transversity form factors $B_{ni}^{u}(t)$,
evaluated in the local NJL model at the quark-model scale $\protect\mu _{0}$
(top panel) and evolved to $\protect\mu =2$~GeV (bottom panel). Solid line
-- $B_{10}^{u}(t)$, dashed line -- $B_{20}^{u}(t)$, dotted line -- $%
B_{30}^{u}(t)$, dash-dotted line -- $B_{32}^{u}(t)$.
\label{fig:evff}}
\end{figure}
\subsection{Chiral quark models vs lattice \label{sec:lattice}}
The content of this Section has already been presented by us in greater
detail in~\cite{Broniowski:2010nt}. For the completeness of the present work
we repeat the main results.
The presently available full-QCD lattice results \cite{Brommel:2007xd} are
for $B_{10}^{\pi ,u}$ and $B_{20}^{\pi ,u}$ and for $-t$ up to 2.5~GeV$^{2}$%
, with values of the pion mass, $m_{\pi }\sim 600$~MeV, moderately low but still
far from the physical point. The calculation of~\cite{Brommel:2007xd}
uses the same $N_{f}=2$ set of the QCDSF/UKQCD ensembles with improved
Wilson fermions and the Wilson gauge action that were used previously in the
analysis of the pion charge and gravitational form factors \cite%
{Brommel:2005ee}.
We note that for $t=0$ both the local and non-local models yield the
normalization
\begin{eqnarray}
& B_{T10}^{\pi,u}(t=0;\mu_{0})/m_{\pi}=\frac{N_{c}}{2\pi^{2}f_{\pi}^{2}}
\notag \\
& \times\int_{0}^{\infty}du\frac{um^{2}(u)}{(u+m^{2}(u))^{3}}%
(m(u)-um^{\prime }(u)), \label{NonLocB1} \\
& B_{T20}^{\pi,u}(t=0;\mu_{0})/m_{\pi}=\frac{N_{c}}{2\pi^{2}f_{\pi}^{2}}%
\Big\{\int_{0}^{\infty}du\frac{um(u)}{(u+m^{2}(u))^{3}} \notag \\
& \times(m^{2}(u)+\frac{1}{2}um(u)m^{\prime}(u)+\frac{1}{6}%
u^{2}m^{\prime 2}(u)) \notag \\
& -\int_{0}^{\infty}du\frac{u^{2}m^{2}(u)}{(u+m^{2}(u))^{4}}
(m(u)+2m^{2}(u)m^{\prime}(u))\Big\}, \label{NonLocB2}
\end{eqnarray}
where $m^{\prime}(u)=dm(u)/du$. In the local limit, where $m(k^{2})\to%
\mathrm{const}$, one reproduces Eqs.~(\ref{LocLim1},\ref{LocLim2}).
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{nonloc_ev.eps} \vspace{-2mm}
\caption{(Color online) The transversity form factors in the HTV model
(solid line) and in the instanton model (dashed line). The lattice data
come from~\protect\cite{Brommel:2007xd}.
\label{fig:nonloc}}
\end{figure}
The results for $B_{Tn0}^{\pi,u}(t)$, $n=1,2$, are shown in Fig.~\ref%
{fig:nonloc}. In our study we have assumed that $B_{Tn0}/m_{\pi}$ depends
weakly on $m_{\pi}$, similarly to the local model case~\cite%
{Broniowski:2010nt}. Therefore, to compare to the lattice data for $B_{Tn0}$,
we multiply the chiral-limit results for $B_{Tn0}/m_{\pi}$
by $m_{\pi}=600$~MeV. We have carried out the QCD evolution procedure as
described in the previous Sections, from the quark model scale up to the
lattice scale of 2~GeV. From Fig.~\ref{fig:nonloc} we note that the HTV
model with the vertex function given by Eq.~(\ref{QPiVertT}) (solid lines)
and with $M_q=300$~MeV works best, describing accurately the data, while
the instanton model, Eq.~(\ref{QPiVertI}) (dashed lines), results in form
factors falling off too steeply. We have found that lower values of $M_q$
spoil the agreement with the lattice data.
\begin{figure}[tb]
\includegraphics[width=.47\textwidth]{plt25m600.eps} \vspace{-2mm}
\caption{(Color online) The transversity form factors obtained in the NJL
model (lines) for $M=250$~MeV and $m_{\protect\pi}=600$~MeV, evolved to the
lattice scale of 2~GeV and compared to the lattice data from Fig.~1 of~%
\protect\cite{Brommel:2007xd} (points).
\label{fig:resu}}
\end{figure}
In Fig.~\ref{fig:resu} we show the results from the local NJL model evolved
to the lattice scale of $\mu=2$~GeV, confronted with the lattice data
scanned from Fig.~1 of~\cite{Brommel:2007xd}. We have used $m_{\pi}=600$~MeV
and selected $M=250$~MeV, which optimizes the comparison. As we see, the
agreement is remarkable.
In Ref.~\cite{Broniowski:2010nt} we have also investigated the dependence of
the values of the form factors at $t=0$ on the value of $m_{\pi}$, as
studied in~\cite{Brommel:2007xd}.
We have also noted in \cite{Broniowski:2010nt} that the results presented in Fig.~%
\ref{fig:resu} depend quite sensitively on the value of the constituent
quark mass, $M$, with higher $M$ yielding lower values of the transversity
form factors.
\section{Conclusions \label{sec:concl}}
In the present paper we have shown how the spinless pion acquires a
non-trivial spin structure within the framework of chiral quark
models. This has been achieved by computing the transversity
distributions, corresponding to matrix elements of the tensor quark
density, within chiral quark models, where the pion arises as the
pseudo-Goldstone boson of the spontaneously broken chiral
symmetry. Moreover, we have worked at the leading order in the $1/N_c$
expansion, which amounts to carrying out one-quark-loop calculations,
where the implementation of the symmetry constraints becomes
absolutely essential. Chiral symmetry is respected by implementing the
pertinent chiral Ward-Takahashi identities at the quark
level. Moreover, the relativity constraints are fulfilled in terms of
the polynomiality conditions which are manifestly preserved through
the use of the double distributions, or, equivalently, by working with
the $\alpha$-representations.
We have provided comprehensive results for the tGPDs of the pion, as well as
related quantities following from restricted kinematics, evaluation of
moments, or taking the Fourier-Bessel transforms to the
impact-parameter space. We have also shown in detail various technical
aspects of our analysis, including the use of the
$\alpha$-representation in the nonlocal models.
The generated tGPDs are defined at
a given low-energy quark-model scale, and comparison to data or lattice results
corresponds to implementing the suitable QCD evolution. Actually, while the
momentum-transfer or, equivalently, the impact-parameter dependence of
the tGFFs remains scale independent, their absolute normalization does
depend multiplicatively on the renormalization scale. Remarkably, the
absolute predictions for the multiplicatively evolved $B_{Tn0}$, for $n=1,2$, agree
surprisingly well with the lattice results, supporting many previous calculations
following the same chiral-quark-model scheme amended with the subsequent QCD evolution.
\bigskip
One of us (AED) is thankful to A.V. Radyushkin and S.V. Mikhailov for numerous discussions.
\medskip
{Supported by the Bogoliubov-Infeld program (JINR), the Polish Ministry of
Science and Higher Education, grants N~N202~263438 and N~N202~249235,
Spanish DGI and FEDER grant FIS2008-01143/FIS, Junta de Andaluc{\'{\i}}a
grant FQM225-05, and EU Integrated Infrastructure Initiative Hadron Physics
Project, contract RII3-CT-2004-506078. AED acknowledges partial support from
the Russian Foundation for Basic Research, projects No.~10-02-00368 and No.~11-02-00112.}
\section{Introduction}\label{sec:introduction}
Pulsar timing arrays could well be used to detect ultra-low frequency
gravitational waves (GWs) within the next 5-10
years~\cite{2010IAUS..261..228H}. This is an especially exciting
prospect given the concurrent efforts of the LIGO-Virgo Scientific
collaboration (LVC) whose aim is to make direct detection of GWs (in
the $\sim$10-1000 Hz regime) using the 2$^{\mathrm{nd}}$ generation of
ground based interferometric detectors within the same
timescale~\cite{2009RPPh...72g6901A}.
In this work we outline the beginnings of a Bayesian approach to the
detection of GWs with pulsar timing using simplistic signal and noise
models onto which can be built further levels of sophistication in the
future. A key long-term aim of our analysis is to improve our ability
to time existing millisecond pulsars by a factor of
3-10~\cite{2009arXiv0909.1058J,2006ChJAS...6b.169H}. One of the main
problems to be overcome is to be able to sensibly account for the
excess low-frequency noise seen in many stable millisecond
pulsars~\cite{2010CQGra..27h4013H}. We focus on a \emph{single piece}
of the complete pulsar timing analysis, the generation of
time-of-arrival (TOA) measurements. Given a single pulsar
observation\footnote{We discuss in Sec.~\ref{sec:discussion} that
while TOAs are associated with individual pulsar observations (or
subsets of an observation), in general a given TOA will depend on
parameters ``fit'' to previous observations.}, this is the
arrival time of the average pulse at the telescope where in this
context ``average'' means the sum of pulses produced by ``folding''
the data with a periodicity equal to the assumed pulse period. It is
from these TOAs that pulsar astronomers then model the spin evolution
of pulsars taking into account the motion of the radio telescope
relative to the pulsar~\cite{2006MNRAS.369..655H}. The presence of
GWs in the field between the telescope and the pulsar will result in
small shifts in the arrival times of
pulses~\cite{1979ApJ...234.1100D,1983ApJ...265L..39H}.
We choose to limit our investigation to single pulsar observations
(typically $100-1000$s of seconds in duration) and since TOAs are defined in
the reference frame of the telescope and the GW
timescale $\gg$ the timescale of a single observation, we are able to
neglect any GW effect in our analysis. We will discuss two
different strategies for the estimation of parameters (including the
TOA) from two separate starting points, what we will call
``search-mode'' data and ``pre-folded'' data. In both cases we
perform the analysis using a commonly used Bayesian integration algorithm
in order to obtain posterior probability distributions on the signal
parameters.
We note that our approach is intended as a starting point for future, more
realistic scenarios and that it can be viewed as an approach being
built from the bottom-up. We mean this in the sense that we try to
start from the most basic datasets available (see
Secs.~\ref{sec:searchmode} and \ref{sec:folding}) and attempt to build
a data-analysis framework in which the multitude of physical processes
affecting pulsar signals can be included and accounted for. In
contrast, other work on the specifics of GW detection using pulsar
timing arrays has taken a more top-down approach. These analyses have started with timing residuals, the result
of a fit to the data including only non-GW effects (effectively the end of
the pulsar data processing chain), and either neglected
this potential
inconsistency~\cite{2004ApJ...606..799J,2009PhRvD..79h4030A} or made
attempts to account for it~\cite{2009MNRAS.395.1005V}.
The paper is organised as follows. In Sec.~\ref{sec:signal} we
describe our simplistic signal and noise model. In
Secs.~\ref{sec:searchmode} and \ref{sec:folding} we then go on to
describe the form of this signal model in two different
representations of the original dataset. The basic concepts concerning
our Bayesian approach to the parameter estimation problem can be found
in Sec.~\ref{sec:Bayesian} and finally we discuss our conclusions and
potential future developments in Sec.~\ref{sec:discussion}.
\section{The signal : A toy model}\label{sec:signal}
We begin with a dataset defined on a discrete 2-dimensional grid of
time $t_{j}$ versus radio-frequency $f_{k}$ of which an example is
shown in Fig.~\ref{fig:timedomain}. Data of this kind is often
referred to as
``search-mode'' data since this data format is the kind used when
performing searches for unknown pulsars. Each of the $M$ rows of the
2-dimensional grid is a timeseries of radio-frequency power measured
within a radio-frequency band with central frequency $f_{k}$. Typical
sampling times and
observation durations are $\sim 10$s of $\mu$seconds and $100-1000$s
of seconds respectively. Typical frequency channel widths and total
detector bandwidths are $\sim 1$ MHz and $100-1000$s MHz respectively.
For ``search-mode'' data we assume the following signal model
\begin{equation}\label{eq:signalplusnoise}
x(t_{j},f_{k}) = s(t_{j},f_{k}) + n(t_{j},f_{k}),
\end{equation}
where $x(t_{j},f_{k})$ represents the discretely sampled dataset,
$s(t_{j},f_{k})$ is the signal and $n(t_{j},f_{k})$ is the noise, which
for simplicity we take to consist of independent Gaussian distributed random
variables with zero mean. The signal itself we
define as
\begin{equation}\label{eq:signalmodel}
s(t_j,f_k) =
\sum_{\alpha=0}^{n'-1}A\exp\left[-\frac{\left(t_{j}-\mu_{\alpha
k}\right)^2}{2w^2}\right],
\end{equation}
where $\alpha$ sums over all $n'$ pulses that intersect with the
observation\footnote{Due to the effects of dispersion, whilst the
pulse period is equal in all frequency channels, a particular pulse
near the end of the timeseries for a high-frequency channel can be
delayed by dispersion such that it does not intersect with the
observation at a lower frequency channel. The same applies to
pulses near to the start of the timeseries in a lower frequency
channel since they may arrive before the observation in a higher
frequency channel.}. We use $A$ as the pulse peak amplitude, $w$ as
the pulse width, and $\mu_{\alpha k}$ as the centre of the $\alpha$'th
pulse in the $k$'th frequency channel. Note that we are modelling
each pulse as having a single Gaussian profile component and that the
amplitude and width remain constant in both time and with frequency
channel. The inclusion of additional Gaussian pulse components
requires only a
trivial modification to the model. In Sec.~\ref{sec:discussion} we discuss numerous potential
additions and modifications required to make this toy model a more
accurate representation of a real pulsar signal.
The time at the centre of each pulse is defined as
\begin{equation}\label{eq:meanpulse}
\mu_{\alpha k} = \left(\phi_{k}+\alpha\right)P + \xi_{\alpha},
\end{equation}
where $P$ is the constant pulse period and $\phi_{k}$ is the phase
(defined on the range $\left[0,1\right)$) of the first pulse in the
observation for the $k$'th frequency channel. We have also included a
random pulse ``jitter'' term where for each pulse we apply a random
shift to the pulse arrival time where each shift $\xi_{\alpha}$ is
drawn from a Gaussian distribution with zero mean and variance
$\sigma_{\xi}^{2}$. Such effects have been observed in several
pulsars and can be attributed to unknown processes in the pulse
emission mechanism and possibly related to giant
pulses~\cite{2000ApJ...535..365K,2002MNRAS.334..523K,1996ApJ...457L..81C,2007A&AT...26..585K,2006AstL...32..583K}.
We show in Sec.~\ref{sec:searchmode} that for our purposes, the effect
of this particular pulse ``jitter'' modelling can be absorbed into a
subset of the other signal parameters.
The phase of the first pulse in each frequency channel $\phi_{k}$
can be related to the phase $\Phi_{0}$, defined as the phase of the
pulse at the midpoint frequency channel
$f_{\mathrm{mid}}=(f_{M}+f_{1})/2$ and with reference to the
midpoint of the observation $t = T/2$ by %
%
\begin{equation}\label{eq:phi}
\phi_{k} = \mathrm{mod}\left(\frac{T}{2P} + \frac{\Delta
t(f_{k})}{P} + \Phi_{0},1\right).
\end{equation}
The relative delay due to dispersion in the $k$'th frequency channel
$\Delta t(f_{k})$ is given by
\begin{equation}\label{eq:deltat}
\Delta t(f_{k}) = 4.148808\times 10^3
\left(f_k^{-2}-f_{\mathrm{mid}}^{-2}\right) DM\,\,\,\mathrm{sec},
\end{equation}
where $DM$ is the dispersion measure in $\mathrm{cm}^{-3}\mathrm{pc}$
and the units of the frequencies are MHz. Note that in our simplistic
model we do not account for dispersion smearing within individual channels.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{timedomaindata}
\caption{An example of time vs frequency channel ``search-mode''
data showing a portion of a simulated
dataset consisting of a strong signal in Gaussian noise. Here
we show only the first $0.05$ seconds of data ($\Delta t=64\,\mu$sec) for $8$
$1$-MHz wide frequency channels. The signal has an amplitude $A=5$, pulse
width $w=0.25$ msec, period
$P=5$ msec, a phase $\Phi_{0}=0.2$, and a dispersion
measure $DM=100\,\mathrm{cm}^{-3}\mathrm{pc}$. The noise has unit variance.
}\label{fig:timedomain}
\end{center}
\end{figure}
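The signal model above is compactly summarized by a short numerical sketch; here we neglect the pulse jitter, treat the $0.05$-second stretch shown in the figure as the full observation, and assume illustrative channel centre frequencies around $1400$~MHz (these are not specified above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A, w, P, Phi0, DM = 5.0, 0.25e-3, 5e-3, 0.2, 100.0  # figure values
dt, T = 64e-6, 0.05                                 # seconds
f = 1400.0 + np.arange(8)                           # 8 x 1-MHz channels

t = np.arange(0.0, T, dt)
f_mid = 0.5 * (f[0] + f[-1])
ddisp = 4.148808e3 * (f**-2 - f_mid**-2) * DM       # Eq. (deltat)

x = rng.standard_normal((f.size, t.size))           # unit-variance noise
for k in range(f.size):
    phi_k = np.mod(T/(2*P) + ddisp[k]/P + Phi0, 1.0)  # Eq. (phi)
    for a in range(-2, int(T/P) + 3):               # pulses in the window
        mu = (phi_k + a) * P                        # Eq. (meanpulse), no jitter
        x[k] += A * np.exp(-(t - mu)**2 / (2*w**2))
print(x.shape)                                      # (channels, samples)
\end{verbatim}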
\section{Using ``search-mode'' data in the Fourier domain}\label{sec:searchmode}
The signals received from pulsars are periodic and their frequency
evolution is slow i.e. the timescale of frequency variation $\gg$ the
pulse period. By Fourier transforming each channel's time-series
we find that a pulsar signal can be represented as a series of
narrow-band harmonics as shown in Fig.~\ref{fig:frequencydomain}. In
a realistic situation we would expect to have some prior
knowledge of the pulsar frequency before performing an analysis and
therefore transforming the data
in such a way allows us to isolate the regions in the dataset where
the signal is concentrated (at the harmonics). This in turn allows
us to be economical with the data samples that we are interested in and makes any numerical
likelihood computation more efficient.
Let us define the discrete Fourier transform as
\begin{equation}\label{eq:fouriertransform}
\tilde{x}(\nu_{l})=\sum_{j=0}^{N-1}x(t_{j})e^{-2\pi i jl/N}\Delta t,
\end{equation}
where $\nu_{l}$ represents the elements of a vector containing the positive
discrete Fourier frequencies\footnote{Note that there is a clear
distinction between the radio frequencies (or frequency channels)
and the Fourier frequencies.} with frequency spacing $1/T$
and where $N$ is the number of time samples.
When applied to the time series from each frequency channel
of a noise-free signal (defined by Eqs.~\ref{eq:signalmodel},\ref{eq:meanpulse})
we obtain
\begin{eqnarray}\label{eq:fourierpulse1}
\tilde{s}(\nu_{l},f_{k}) &=& \sum_{j=0}^{N-1}\sum_{\alpha=0}^{n'-1}A\exp\left[-\frac{\left(t_{j}-\mu_{\alpha
k}\right)^2}{2w^2}\right]e^{-2\pi i jl/N}\Delta t, \nonumber \\
&=& \sum_{\alpha=0}^{n'-1}\tilde{s}_{\alpha}(\nu_{l},f_{k}),
\end{eqnarray}
where we have decomposed the complete Fourier transform into the Fourier
transform of each pulse. We then have
\begin{eqnarray}
\tilde{s}_{\alpha}(\nu_{l},f_{k}) &=& A\sum_{j=0}^{N-1}\exp\left[-\frac{\left(t_{j}-\mu_{\alpha
k}\right)^2}{2w^2} - 2\pi i jl/N\right]\Delta t, \nonumber \\
&\approx& A\exp\left\{-2\pi i \nu_{l}\mu_{\alpha k}\right\}\int_{-\infty}^{\infty}\exp\left\{-\frac{y^{2}}{2w^{2}}-2\pi i
\nu_{l}y\right\}\,dy,\nonumber \\
&=& Aw\sqrt{2\pi}\exp\left\{-2\pi i\nu_{l}\mu_{\alpha
k}-2\pi^{2}w^{2}\nu_{l}^{2}\right\},\label{eq:fourierpulse2}
\end{eqnarray}
where we have approximated the discrete sum over time samples with the
continuous integral over the dummy variable $y=t-\mu_{\alpha k}$
assuming that each pulse itself spans $\gg 1$ time bin and is not
truncated by the edges of the timeseries\footnote{Clearly, of the $n'$
pulses that intersect the time-frequency plane there will
be some frequency channels for which a pulse does not appear in the
timeseries due to dispersion. Equation~\ref{eq:fourierpulse2} is
therefore only applicable to those pulses in a particular frequency
channel that are found to intersect the timeseries.}. We can now perform the sum
over $\alpha$ (the individual pulses) to obtain the complete Fourier
transform. However, note that $\mu_{\alpha k}$ is a function of
$\xi_{\alpha}$, the random individual pulse arrival time
jitter. We choose to average over this random variable under the assumption that
there are a large number of pulses within the observation
time. This averaging procedure leads to the following replacement:
\begin{eqnarray}\label{eq:averagejitter}
e^{-2\pi i\nu_{l}\xi} \rightarrow\left\langle e^{-2\pi
i\nu_{l}\xi}\right\rangle &=& \int_{-\infty}^{\infty} \frac{e^{-\xi^{2}/2\sigma_{\xi}^{2}}}{\sqrt{2\pi\sigma^{2}_{\xi}}}\,e^{-2\pi
i\nu_{l}\xi}\, d\xi, \nonumber \\
&=& e^{-2\pi^{2}\nu_{l}^{2}\sigma_{\xi}^{2}},
\end{eqnarray}
where we have replaced the pulse jitter term with its expectation
value and used a Gaussian distribution for the pulse jitter with a
zero mean and variance $\sigma^{2}_{\xi}$. Finally, we obtain the
following expression for the Fourier transform of the signal-only
timeseries
\begin{equation}\label{eq:completefourier2}
\tilde{s}(\nu_{l},f_{k})
= \frac{A_{\xi}w_{\xi}T}{P}\sqrt{2\pi}\exp\left\{-2\pi^{2}\nu_{l}^2 w_{\xi}^{2}
\right\}\exp\left\{-2\pi i \phi_{k}\nu_{l}\right\}\tilde{W}_{l}.
\end{equation}
We can see from this equation that in the Fourier domain the signal
can be decomposed into four parts. There is a real, positive amplitude
term proportional to the pulse amplitude, width, and number of pulses
($n\approx T/P$), which is multiplied by a frequency-dependent envelope
function that decays with increasing Fourier frequency at a rate
proportional to the pulse width. There is also a unit-amplitude complex
phase term dependent upon the initial phase of the pulse in the given
frequency channel, multiplied by a second complex phase term
$\tilde{W}_{l}$ given by
\begin{equation}
\tilde{W}_{l} = \frac{P}{T}\exp\left\{2\pi i \nu_{l}P\right\}\left[\frac{1 -
\exp\left\{-2\pi i\nu_{l}T\right\}}{\exp\left\{2\pi
i\nu_{l}P\right\} - 1}\right],
\end{equation}
which, in the limit of $T\gg P$, can be written as
\begin{equation}
\tilde{W}_{l} = \sum_{\beta=0}^{n}\left\{\frac{\sin(2\pi\Delta\nu_{l\beta}T)}{2\pi\Delta\nu_{l\beta}T} +
i\left[\frac{\cos(2\pi\Delta\nu_{l\beta}T) -1}{2\pi\Delta\nu_{l\beta}T}\right]\right\},
\end{equation}
where $\Delta\nu_{l\beta}=\nu_{l} - \beta/P$ and $\beta$ labels
the individual signal harmonics, of which there are $n$. This final
complex phase term contains the information regarding the location and
phase of the signal harmonics. We can now see that each signal harmonic
is identical in shape but has a different phase and amplitude. In
addition, as one moves to different frequency channels the phase of a
given harmonic is rotated by a quantity dependent upon the dispersion
measure.
Note that we have also re-parameterised the pulse amplitude and width
using
\begin{eqnarray}
A_{\xi} &=& \frac{Aw}{\sqrt{w^{2}+\sigma_{\xi}^{2}}}, \label{eq:jitteramplitude}\\
w_{\xi} &=& \sqrt{w^{2}+\sigma_{\xi}^{2}}, \label{eq:jitterwidth}
\end{eqnarray}
since with the addition of pulse jitter there exists a degeneracy
between the original pulse amplitude and width. The product of the
amplitude and width determines the overall amplitude of the Fourier
transform of the signal, and the sum of the squares of the width and
the pulse jitter parameter determines the rate of the reduction in
amplitude of the harmonics with increasing frequency. Using the data
to measure this amplitude and its attenuation with increasing
frequency will therefore not allow us to constrain all three
parameters\footnote{We note that strictly speaking it would be
possible to identify the values of all three parameters for a very
strong signal. Pulse arrival time jitter acts to remove a small
fraction of power from
the harmonics and distribute it amongst the inter-harmonic frequency
bins. Our analysis is restricted to localised regions at each
harmonic and so we treat this information as lost.}.
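To make the structure of Eq.~\ref{eq:completefourier2} explicit, the
following Python sketch (our transcription of the expressions above, with
the channel phase $\phi_{k}$ left as a free illustrative parameter assumed
to carry units of time) evaluates the noise-free Fourier-domain signal
model for a single frequency channel:
\begin{verbatim}
# Sketch of the noise-free Fourier-domain signal model for one
# channel. phi_k is assumed to carry units of time; in the full
# model it would encode the initial phase plus dispersive delay.
import numpy as np

def w_tilde(nu, P, T):
    z = np.exp(2j * np.pi * nu * P)
    den = z - 1.0
    num = 1.0 - np.exp(-2j * np.pi * nu * T)
    safe = np.where(np.abs(den) > 1e-9, den, 1.0)
    val = (P / T) * z * num / safe
    # at exact harmonics (nu*P integer) the 0/0 limit equals 1
    return np.where(np.abs(den) > 1e-9, val, 1.0 + 0.0j)

def s_tilde(nu, A_xi, w_xi, P, T, phi_k):
    env = (A_xi * w_xi * T / P) * np.sqrt(2.0 * np.pi) \
          * np.exp(-2.0 * np.pi**2 * nu**2 * w_xi**2)
    return env * np.exp(-2j * np.pi * phi_k * nu) * w_tilde(nu, P, T)

T, P = 100.0, 5e-3                    # observation length, period [s]
nu = np.arange(1, int(2000 * T)) / T  # Fourier frequencies to 2 kHz
model = s_tilde(nu, 5.0, 0.25e-3, P, T, 0.2 * P)
\end{verbatim}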
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{frequencydomaindata}
\caption{An example of Fourier-transformed ``search-mode'' data showing a portion of a simulated
dataset consisting of a strong signal in Gaussian noise. The
panels on the left show (in blue) the real part of the complex Fourier
transform of the data as a function of Fourier frequency for $8$
$1$-MHz wide
frequency channels. The imaginary parts are shown (in red) on the
right. The dataset used to generate this plot is identical to that shown in
Fig.~\ref{fig:timedomain} and we have truncated the frequency
range at $2$ kHz since the harmonic content of the signal is
significantly reduced beyond this frequency.
}\label{fig:frequencydomain}
\end{center}
\end{figure}
\section{Using folded data}\label{sec:folding}
The majority of pulsar timing data is pre-processed and reduced in
volume by the
process of folding. In this process, sections of the time series from each
frequency channel of an observation will be folded with an assumed
pulse period\footnote{The folding procedure can also include
de-dispersion over a limited range of frequency channels where, just
as with folding, an assumed value of the dispersion is used. Hence a
large number of frequency channels can be grouped together into a
single pulse profile measurement. We do not consider this potential feature of
the folding procedure in this work.}. At the time of folding this pulse period will not
necessarily be the most accurate value. The pulse period itself is
updated and refined with each subsequent observation. However, once
data have been folded, most notably for older observations, the
original search-mode data may be lost, meaning that re-folding with the more
refined period is not possible.
We will focus on the effect of
folding with an inaccurate pulse period. One can argue that since the most basic initial
pulse period estimates will require a coherent measurement over some
prior observation spanning many pulses, we should expect an initial
worst-case fractional uncertainty in the pulse period of $\sim P/T$,
which for a $10$ msec pulsar period and a $100$ second coherent observation
equates to a period error of $\sim 1$ $\mu$second. In addition to the pulse
period, for realistic analyses a number of other parameters are used
in the folding procedure such as the sky position coordinates, the
intrinsic pulsar spin-frequency derivatives, the dispersion measure
plus orbital parameters if the source is in a binary system. In our
toy model we ignore these complications.
We choose to define the result of the folding process for a single observation as a
2-dimensional grid of pulse profiles labelled by time and channel
frequency, an example of which is shown in Fig.~\ref{fig:foldeddata}.
To perform a consistent analysis of such a dataset we must take into
account the fact that the profiles have been obtained using an
imprecise value of the pulse period. If we consider a dataset that
has already been folded at a specific (non-exact) pulse period
$P'=P+\Delta P$ then we can define a new folded dataset as
\begin{equation}\label{eq:foldeddata}
X(\phi',P',f_{k}) = \sum_{\beta=0}^{n-1}x\Big((\beta+\phi')P',f_{k}\Big),
\end{equation}
where $\beta$ indexes each fold up to $n=\mathrm{floor}(T/P')$.
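A direct, minimal implementation of Eq.~\ref{eq:foldeddata} for a single
channel might read as follows. Evaluating $x\big((\beta+\phi')P'\big)$ by
linear interpolation of the sampled time series is an assumption of this
sketch; real folding pipelines typically bin samples by phase instead.
\begin{verbatim}
# Sketch: fold one channel's time series at an assumed period P'.
import numpy as np

def fold(x, dt, P_fold, n_phase=64):
    T = len(x) * dt
    n = int(np.floor(T / P_fold))       # number of complete folds
    phi = np.arange(n_phase) / n_phase  # phase grid phi' in [0, 1)
    t_eval = (np.arange(n)[:, None] + phi[None, :]) * P_fold
    # np.interp clamps at the edges; adequate for this sketch
    vals = np.interp(t_eval, np.arange(len(x)) * dt, x)
    return phi, vals.sum(axis=0)        # profile X(phi', P')
\end{verbatim}
Folding the same series at $P$ and at $P+\Delta P$ with this routine
illustrates the slow phase drift discussed below.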
Substituting in our signal model (Eqs.~\ref{eq:signalmodel} and \ref{eq:meanpulse}) we
can accurately approximate the discretely summed noise-free pulse profile as
\begin{eqnarray}\label{eq:profilemodel}
S(\phi',P',f_{k})
&\approx&
\frac{A_{\xi}w_{\xi}}{|\Delta P|}\sqrt{\frac{\pi}{2}}\sum_{z=-1}^{1}\left[\mathrm{erf}\left(a_{z}+b\right)
- \mathrm{erf}\left(a_{z}\right)\right],
\end{eqnarray}
where we have used
\begin{eqnarray}
a_{z} &=& \frac{|\Delta P|}{\Delta P}\,\left[\frac{(P+\Delta P)(\phi'+z)-\phi_{k}
P}{w_{\xi}\sqrt{2}}\right], \label{eq:foldingaz}\\
b &=& \frac{|\Delta P|(n-1)}{w_{\xi}\sqrt{2}}. \label{eq:foldingb}
\end{eqnarray}
In the calculation of Eq.~\ref{eq:profilemodel} we have again replaced
the pulse arrival time jitter term with its expectation value (as done
in Sec.~\ref{sec:searchmode}) and approximated the sum over pulses
with a continuous integral. We have also been forced to
re-parameterise the pulse amplitude and width parameters for the same
reasons as described in the previous section and have chosen to use an
identical re-parameterisation (defined in
Eqs.~\ref{eq:jitteramplitude} and~\ref{eq:jitterwidth}). The summation
over the index $z$ is simply to account for the fact that folding a
signal with an arbitrary initial phase may separate the pulse profile
into significant contributions spanning the $\phi'=0\equiv1$ wrap-around point. This
also acts to account for the fact that if folding with an incorrect
pulse period the true pulse will slowly drift across the $\phi'$
space. In this scenario the tails of neighbouring pulses begin to
contribute to the sum and by including the $z=\pm 1$ terms we are
accurately modelling this effect.
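For completeness, a short transcription of Eq.~\ref{eq:profilemodel} into
Python is given below; \texttt{scipy.special.erf} supplies the error
function, and the expression as written requires $\Delta P \neq 0$.
\begin{verbatim}
# Sketch: analytic folded-profile model of Eq. (profilemodel),
# valid for dP != 0; phi_k is the per-channel phase.
import numpy as np
from scipy.special import erf

def profile_model(phi, A_xi, w_xi, P, dP, n, phi_k):
    s = np.sign(dP)                      # |dP|/dP
    b = np.abs(dP) * (n - 1) / (w_xi * np.sqrt(2.0))
    out = np.zeros_like(phi)
    for z in (-1, 0, 1):                 # wrap-around terms
        a_z = s * ((P + dP) * (phi + z) - phi_k * P) \
              / (w_xi * np.sqrt(2.0))
        out += erf(a_z + b) - erf(a_z)
    return (A_xi * w_xi / np.abs(dP)) * np.sqrt(np.pi / 2.0) * out
\end{verbatim}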
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{foldeddata}
\caption{An example simulated folded dataset showing folded pulse profiles
for $4$ sub-intervals each spanning $25$ seconds and for $8$
frequency channels each spanning $1$ MHz. The simulated signal parameters
are equal to those defined and used in
Figs.~\ref{fig:timedomain} and~\ref{fig:frequencydomain} with
the exception that here the signal amplitude $A=0.1$ is significantly
lower. Two curves are plotted in each panel, the blue curves
are profiles obtained through folding with the true pulse period
$P$. The red curves are the profiles obtained through folding
with a pulse period error $\Delta P=10$ nsec. Note
that a period error of this size is equivalent to a phase error of
$\sim 0.01$ cycles over the course of a sub-integration.
}\label{fig:foldeddata}
\end{center}
\end{figure}
\section{A Bayesian analysis}\label{sec:Bayesian}
The Bayesian component of our approach can be viewed as standard in
the sense that we simply aim to apply Bayes' theorem to the
time-of-arrival problem with the intention of computing marginalised
posterior probability distributions on the signal parameters.
Bayes' theorem can be expressed as
\begin{equation}\label{eq:bayestheorem}
p(\boldsymbol{\theta}|\boldsymbol{x},\mathcal{M}) = \frac{L(\boldsymbol{x}|\boldsymbol{\theta},\mathcal{M})\pi(\boldsymbol{\theta}|\mathcal{M})}{E(\mathcal{M}|\boldsymbol{x})},
\end{equation}
where the term on the left-hand-side is the joint posterior
probability distribution on the parameter set $\boldsymbol{\theta}$
given a dataset defined by the vector $\boldsymbol{x}$ and a chosen model
represented by $\mathcal{M}$.
The function $L(\boldsymbol{x}|\boldsymbol{\theta},\mathcal{M})$ is the
likelihood function describing the dataset
$\boldsymbol{x}$ given the parameter set $\boldsymbol{\theta}$ and the
model $\mathcal{M}$. The function $\pi(\boldsymbol{\theta}|\mathcal{M})$ is the joint prior
probability distribution on the parameter set $\boldsymbol{\theta}$
given the model $\mathcal{M}$. Finally, we have the Bayesian evidence
$E(\mathcal{M}|\boldsymbol{x})$, representing the probability
of the model $\mathcal{M}$ given the dataset $\boldsymbol{x}$.
To obtain marginalised posterior distributions on a particular signal
parameter we are required to perform a multi-dimensional integration
of the joint posterior distribution over the remaining parameters.
Formally this can be written as
\begin{equation}\label{eq:posterior}
p(\theta_{m}|\boldsymbol{x},\mathcal{M}) \propto \int_{\mathcal{S}} d^{n}\boldsymbol{\theta'}\,
L(\boldsymbol{x}|\boldsymbol{\theta},\mathcal{M})\pi(\boldsymbol{\theta}|\mathcal{M}),
\end{equation}
where the parameter vector $\boldsymbol{\theta'}$ consists of the
subset of parameters in the vector $\boldsymbol{\theta}$ excluding the
parameter $\theta_{m}$ and where $\mathcal{S}$
defines the volume of integration on that space. Note that there is no
dependence upon the Bayesian evidence in the calculation of the
marginalised posterior distribution since it is independent of the
parameter values themselves and can be absorbed into the normalisation
of the posterior distribution.
In practice the calculation of posterior distribution functions can be
a difficult and computationally intensive procedure. Over the last
decade much work has been dedicated to the efficient numerical
computation of posterior probability distributions and more recently
to the evaluation of the Bayesian evidence. One of the now standard
tools available for Bayesian data analysis is
Markov chain Monte Carlo (MCMC) sampling~\cite{MCMCinPractice,Gelman95}, an efficient
method for obtaining random samples drawn from a posterior probability
distribution, of which there are a number of
variations~\cite{marinari-1992-19,
Gramacy:2010:IT:1713542.1713556, Cai:2008:MAA:1484982.1485000,
RePEc:mtn:ancoec:2001:3:16,Hernandez-Marin_2007, 2009arXiv0904.2207T}. More
recently the strategy known as ``nested sampling''~\cite{skilling:395,Sivia96}
has given the data analyst the ability to accurately estimate the
Bayesian evidence, a model dependent quantity used to perform model
selection. The first direct application of this
strategy was to perform cosmological model
selection using {\it WMAP}
data~\cite{2006ApJ...638L..51M}. For this work we chose to perform our analysis using the freely
available nested sampling algorithm
\verb1MultiNest1~\cite{2008MNRAS.384..449F}. Note that this algorithm
has been specifically designed to be robust with respect to
multi-modal posterior distributions and to compute the Bayesian
evidence. For this work we use it purely to obtain posterior
probability distributions on the pulsar parameters.
Let us now define the likelihood functions specific to the two
approaches described in Secs.~\ref{sec:searchmode} and~\ref{sec:folding}.
The likelihood function for the Fourier domain approach to the
``search-mode'' data is defined as
\begin{equation}\label{eq:searchmodelikelihood}
L^{\mathrm{sm}}(\boldsymbol{\tilde{x}}|\boldsymbol{\theta}) =
\left(2\pi\sigma_{f}^{2}\right)^{-NM/4}\exp\left\{-\frac{1}{2\sigma^{2}_{f}}\sum_{j=0}^{N/2-1}\sum_{k=0}^{M-1}
|\tilde{x}_{jk} - \tilde{s}_{jk}(\boldsymbol{\theta})|^2\right\},
\end{equation}
where $N/2$ and $M$ are the total number of Fourier-frequency and radio-frequency
bins respectively and we define $\boldsymbol\theta=\{A_{\xi},w_{\xi},DM,P,\Phi_{0}\}$ as
the vector of signal parameters. We have used $\sigma_{f}^{2}$ to
represent the frequency domain noise variance which we assume to be
Gaussian, white, and stationary and therefore constant for all Fourier
and radio frequency bins. In this ideal scenario the frequency domain
noise variance is related to the time domain noise variance
$\sigma_{t}^{2}$ by $\sigma_{f}^{2} = N(\Delta t)^2\sigma_{t}^{2}$.
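As an illustration, the logarithm of Eq.~\ref{eq:searchmodelikelihood} can
be evaluated directly, with the data and model held as $(N/2)\times M$
complex arrays (a sketch under the stated noise assumptions):
\begin{verbatim}
# Sketch: log of the search-mode likelihood.
import numpy as np

def log_like_sm(x_tilde, s_tilde, sigma_f2):
    # x_tilde, s_tilde: complex arrays of shape (N/2, M)
    NM = 2 * x_tilde.shape[0] * x_tilde.shape[1]   # N*M
    resid2 = np.abs(x_tilde - s_tilde)**2
    return -0.25 * NM * np.log(2.0 * np.pi * sigma_f2) \
           - 0.5 * resid2.sum() / sigma_f2
\end{verbatim}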
The likelihood function for the folded data can similarly be written as
\begin{equation}\label{eq:foldedlikelihood}
L^{\mathrm{fold}}(\boldsymbol{X}|\boldsymbol{\theta}) =
\left(2\pi\sigma_{X}^{2}\right)^{-N_{\mathrm{s}}M/2}\exp\left\{-\frac{1}{2\sigma^{2}_{X}}\sum_{j=0}^{N_{\mathrm{s}}-1}\sum_{k=0}^{M-1}
\left(X_{jk} - S_{jk}(\boldsymbol{\theta})\right)^2\right\},
\end{equation}
where $N_{\mathrm{s}}$ is the number of equal-length sub-intervals into which each frequency
channel's timeseries has been divided. The noise contribution in
a particular folded phase bin is simply the sum of $n=\mathrm{floor}(T/P')$ Gaussian
distributed variables of variance $\sigma_{t}^{2}$ and hence
$\sigma_{X}^{2}=n\sigma_{t}^{2}$. The parameter vector
$\boldsymbol{\theta}$ is identical to that defined for the search-mode
data.
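The corresponding sketch for the folded-data likelihood,
Eq.~\ref{eq:foldedlikelihood}, differs only in the noise variance and the
normalisation:
\begin{verbatim}
# Sketch: log of the folded-data likelihood; X and S hold the
# folded profiles over the N_s x M grid of sub-intervals and
# channels.
import numpy as np

def log_like_fold(X, S, sigma_t2, n_folds):
    sigma_X2 = n_folds * sigma_t2      # sigma_X^2 = n sigma_t^2
    resid2 = (X - S)**2
    return -0.5 * X.size * np.log(2.0 * np.pi * sigma_X2) \
           - 0.5 * resid2.sum() / sigma_X2
\end{verbatim}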
In general the prior probability distribution functions on the parameters
$\boldsymbol{\theta}$ would be chosen according to one's prior beliefs
on the values of those parameters. However, for the purposes of our
toy model investigation we choose ``flat'' prior distributions for all
parameters with prior ranges chosen to be far greater than the
expected span of the posterior distributions. In this case we do not
favour any particular choice of parameter values over any others. We note that in making
this choice we are disregarding a powerful feature of the Bayesian
analysis, the ability to correctly incorporate prior information into
the result. However, one can show that for strong signal-to-noise
ratios the effect of the prior on the posterior is dominated by that of the likelihood
function itself.
To conclude this section we would like to make it clear that the
approaches described in Secs.~\ref{sec:searchmode}
and~\ref{sec:folding} do \emph{not} constitute two separate models.
We have described two separate representations of the same original
dataset and have in fact used the same signal model. Model selection
therefore could \emph{not} be applied to these two methods. Our aim
is to compare the effectiveness of each choice of dataset
representation by contrasting the posterior distributions on the
signal parameters when a single common time-radio-frequency dataset is used
to generate both the Fourier-radio-frequency and a folded dataset.
Model selection using the Bayesian evidence and the computation of the
Bayes factor (the ratio of model evidences) and odds-ratio (the Bayes
factor multiplied by the ratio of prior model probabilities) is a
potentially powerful tool in future advanced implementations of our
analysis strategy. Our choice of nested sampling implementation, \verb1MultiNest1, has
been designed specifically to compute the Bayesian evidence, making
model selection between different pulsar signal models an obvious and
easy-to-implement extension of our approach.
\section{Discussion}\label{sec:discussion}
Shown in Fig.~\ref{fig:posteriors} is an example of typical
marginalised posterior probability distributions on the signal
parameters $\boldsymbol{\theta}=\{A_{\xi},w_{\xi},DM,P,\Phi_{0}\}$ plus
the time-of-arrival parameter $t_{\mathrm{TOA}}$. The latter is not
independent of the other parameters: it is a function of both the phase
parameter and the pulse period, $t_{\mathrm{TOA}} = \Phi_{0}P$, and is
therefore defined as the arrival time of the first pulse received at
the mid-point frequency channel immediately following the mid-point of
the observation.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{posteriors}
\caption{The marginalised posterior distributions on the signal
parameters for a simulated signal in Gaussian noise. In solid blue we
show the results obtained when using the Fourier domain
representation of the ``search-mode'' data. In solid red we
show the results obtained when using folded data as the input
for the case where the data was folded with the true pulse
period. In dashed red we show the results for the folding
scenario where an incorrect period, $\Delta P=10$ nanoseconds,
has been used to fold the data and we have accounted for this
within the signal model. The dashed black curves show the
result where this effect has not been accounted for. The
vertical dotted black lines indicate the values of the true
signal parameters. For all
results the data was converted into the Fourier and folded
representations from a single common time-frequency dataset of
length $100$ sec with sampling time $64$ $\mu$sec and frequency
range of $8$ MHz consisting of $8$ channels each of $1$ MHz bandwidth.
}\label{fig:posteriors}
\end{center}
\end{figure}
Our results show that the ability to determine the signal parameters
is unaffected by the choice of data representation when comparing
the Fourier domain approach and the folded data. This is apparent
from the consistent widths of the posterior distributions which
define the uncertainty in parameter estimation. The clear effect
that we see is the discrepancy between the location of the posterior
distributions for the case where the error in folding period has been
accounted for and where it has not. We see that the estimation of the
dispersion measure, pulse amplitude, pulse width, and pulse period is only
marginally affected. However, the phase parameter, and therefore the
time-of-arrival estimate, is strongly biased by the false assumption that
the signal has been folded with the correct pulse period. For the
results shown in Fig.~\ref{fig:posteriors} the pulse period error of
$10$ nsecs is equivalent to an accumulated phase error of only
$3.6$ degrees over the length of the $25$ sec sub-integrations.
This appears as a $\sim 1.8$ degree ($\equiv 0.005$ cycles) error in
the estimate of $\Phi_{0}$ leading to a $\sim 25$ $\mu$second error in
the estimate of the time-of-arrival value\footnote{This observed phase
error is half of the total accumulated phase error because the phase
parameter value is defined at the midpoint of the observation and
therefore the phase error effectively accumulates over $T/2$ rather
than $T$.}.
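These numbers follow directly from the toy-model parameters, as the
following illustrative check confirms:
\begin{verbatim}
# Check of the quoted phase and TOA errors for P = 5 ms,
# dP = 10 ns and a 25 s sub-integration.
P, dP, T_sub = 5e-3, 10e-9, 25.0
dphi = dP * T_sub / P**2     # accumulated phase error [cycles]
print(dphi * 360.0)          # 3.6 degrees
print(0.5 * dphi * P * 1e6)  # ~25 microseconds (the error
                             # accumulates over T/2, hence the 1/2)
\end{verbatim}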
It is clear that the work presented here is intended only as a
potential starting point for more advanced applications of Bayesian
data analysis techniques to the problem of pulsar timing. A clear
difference between our approach as described here and established
techniques is that we have obtained our pulsar parameter estimates
from a single simulated observation. The standard approach is to employ a more
global strategy in which the process of producing a time-of-arrival
measurement for a given observation is not just a function of the
given observation but of all existing observations of the pulsar. Each TOA represents the reduction of an entire observation
into a single number after having performed a global fit (over all
observations) for a set of
common pulsar parameters, e.g., the pulse period, the period derivatives,
the sky position, proper motion, pulse shape parameters, the
dispersion measure, etc. When new observations are taken, the
procedure is repeated and these parameters are refined. As discussed
in Sec.~\ref{sec:folding}, as data is recorded it is often
reduced (in terms of data volume) by folding at an assumed pulse
period and in addition may be partially de-dispersed with an assumed
dispersion measure. The detrimental effect of this process (as seen
in our results) will rapidly diminish as more and more observations
are made but further analysis is required to rule out such effects as
contributors to the low-frequency timing noise seen in the msec pulsars.
The scope of this work is limited to the generation of TOAs but we
would also like to briefly discuss the specific aim of gravitational wave
detection using pulsar timing arrays. From a purely theoretical
Bayesian data analysis perspective, in an ideal scenario one would
first use an un-reduced dataset spanning all observations of all
relevant pulsars. Secondly, one would construct a model including all
pulsar signal parameters \emph{and} all gravitational wave signal
parameters. After applying sensible prior distributions to all of
these parameters one would compute marginalised posterior distributions on
both pulsar and gravitational wave parameters and perform model
selection. We could then establish whether the observations coupled with our prior
beliefs were consistent with the presence of gravitational waves.
In practice this is a very difficult task for various reasons, most
notably the vast computational resources required to process
the sheer volume of un-reduced data and to explore the
multi-dimensional parameter space describing the entire pulsar array
and the intervening gravitational wave. For this reason, in terms of
gravitational wave detection, constructing a reduced dataset is highly
desirable. In fact, analyses using timing residuals (the difference
between the time-of-arrival values and those attained by fitting a
gravitational-wave-free pulsar model) as the initial dataset have
already been applied to the specific case of searching for the
stochastic gravitational wave
background~\cite{2009MNRAS.395.1005V,2009PhRvD..79h4030A}.
The apparent separation of the complete gravitational wave detection
problem into a gravitational wave free component, from which a reduced
dataset is produced, and then a second component in which this reduced
dataset is then analysed including the effects of gravitational waves,
seems potentially problematic. Under the assumption that each TOA
measurement is independent of all others one can argue strongly that
the effect of a low-frequency gravitational wave on each measurement
is negligible and that the TOA truly represents the unambiguous
arrival time of an average pulse within that observation and defined
at some epoch. As soon as one performs a global fit (neglecting
gravitational waves) over all observations of a given pulsar a
gravitational wave of sufficient amplitude will affect the best fit
pulsar parameters. Such a procedure could absorb some fraction
of a gravitational wave into the pulsar parameter estimates (e.g., the
pulsar period derivatives). In future work we hope to address this
issue and to provide a comparison between an analysis
using independent TOA measurements as a dataset for gravitational wave
detection and an analysis using globally estimated TOAs.
In addition we hope to be able to include, and account for, many of the physical
effects and data analysis issues that we have ignored in our toy model
approach. These include a more robust treatment of the noise where we
allow time and frequency variation and investigate the validity of the
assumption of Gaussianity. In reference to this we hope to also
include the effects of radio frequency interference (RFI) and
investigate methods in which we are able to analytically marginalise over the noise
and therefore potentially avoid the need to estimate it. We also aim
to include the effect of polarisation in the analysis. A
search-mode dataset is itself the product of two independent radio
signal polarisation measurements which are combined as a function of
the Stokes parameters. These parameters can be incorporated into the
Bayesian framework and uncertainties on these parameters can be
marginalised over in parallel with the signal parameters. Less well
defined effects to consider include a time and frequency varying
pulse profile parameterisation, time varying dispersion measure,
scattering, scintillation and nulling. Finally, we hope to develop
this work beyond the toy model to a point at which it can be applied
to real pulsar data. In such a scenario we will also have to
incorporate barycentric routines~\cite{2006MNRAS.369..655H} to include
the obvious effects of detector motion, sky position uncertainty and,
where applicable, binary orbital motion.
\ack
We thank Maura McLaughlin, Benjamin Knispel, Reinhard Prix, Christian
R\"over and Xavier Siemens for insightful discussions and invaluable input.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
The discovery of two-dimensional (2D) systems whose quasiparticles are described in terms of a Dirac theory~\cite{Novoselov2005} has been one of the major breakthroughs over the last two decades in condensed matter physics and has fuelled research in the area of 2D materials~\cite{Novoselov2016,Roldan2017}.
Graphene, that features gapless Dirac cones in the neighborhood of the Fermi energy~\cite{Neto2009}, is a paradigmatic example.
Interestingly, there are also 2D semiconductors that require a description through a massive Dirac equation~\cite{Xiao2012,Goerbig2014}, instead of a Schr\"{o}dinger-like model.
Whereas both Dirac and Schr\"{o}dinger theories would yield similar energy bands, their wave functions and linear response are distinct.
The massive Dirac Hamiltonian comprises a finite Berry curvature that entails an unconventional Hall response~\cite{Xiao2007}.
The Landau level spectrum of massive Dirac electrons features valley-dependent zeroth Landau levels aligned with either the valence or the conduction bands~\cite{Koshino2010}.
These properties are absent for Schr\"{o}dinger quasiparticles.
The effective picture in terms of a gapped Dirac Hamiltonian provides a unifying description of materials that, from the chemical point of view, are quite different.
For instance, whereas for graphene the Dirac states are made of $p_z$ orbitals~\cite{Neto2009}, for transition metal dichalcogenides they are made of $d_{x^2-y^2}$ and $d_{xy}$ orbitals in the valence band and $d_{z^2}$ in the conduction band~\cite{Xiao2012,Kosmider2013}.
In this work, we study the optical response of massive Dirac systems under the influence of applied out-of-plane magnetic fields.
We focus on the case of transition metal dichalcogenide (TMD) monolayers, \ch{MX2}, where \ch{M}=\ch{Mo},\ch{W} and \ch{X}=\ch{S},\ch{Se}, whose magneto-optical properties have attracted considerable interest both from the experimental~\cite{Aivazian2015,Srivastava2015,Schmidt2016,Wang2017} and theoretical~\cite{Rose2013,Chu2014} side.
These direct band gap semiconductors are object of intense scrutiny because of their strong light-matter coupling~\cite{Mak2010,Splendiani2010}, strong spin-orbit interactions~\cite{Xiao2012,Kosmider2013}, rich excitonic effects~\cite{Berkelbach2013,Ugeda2014,Chaves2017,Wang2018} and potential applications in the emergent field of valleytronics~\cite{Zhang2014,Mak2018}.
Nevertheless, our results can be easily adapted to other systems described by a massive Dirac equation, such as gapped graphene~\cite{Jiang2010,Pedersen2011}, silicene and related materials~\cite{Tabert2013} or antiferromagnetic honeycomb semiconductors~\cite{Li2013}.
The effects of orbital coupling to an external out-of-plane magnetic field, as well as spin-orbit interactions, are explicitly taken into account.
Electron-electron interactions are considered at the Hartree-Fock level, but electron-hole attraction and corresponding excitonic effects are left for a companion publication~\cite{Have2018}.
The rest of this paper is organized as follows.
In Section~\ref{section:Hamiltonian}, we introduce the physical system and its model Hamiltonian, which forms the basis for the whole work.
Section~\ref{section:formalism} contains the formalism used to calculate the magneto-optical properties, in particular the derivation of the electric susceptibility response function.
The analysis of the results is presented in Section~\ref{section:longitudinal}, for the longitudinal susceptibility, \ref{section:Hall}, for the transverse susceptibility, and \ref{section:circular}, for the response to circularly polarized light.
Section~\ref{section:exchange} is devoted to the calculation of the exchange self-energy corrections.
Additional technical details are provided in the Appendixes.
\section{Model Hamiltonian}
\label{section:Hamiltonian}
We consider a single-layer TMD in the $xy$-plane with a perpendicular uniform magnetic field pointing in the $z$-direction.
The crystal structure consists of an hexagonal lattice of trigonal prismatic unit cells, each of them containing one transition metal atom and two chalcogens.
The resulting hexagonal Brillouin zone has two inequivalent sets of three equivalent corners, the so-called $K$ and $K'$ valleys (or Dirac points).
Due to the absence of an inversion center, the valley index provides an additional discrete degree of freedom for carriers in this system.
The physical system is depicted in Fig.~\ref{fig:system}.
\begin{figure}
\includegraphics[width=\columnwidth]{system.pdf}
\caption{(Color online) Representation of the physical system.
(a): light is shone onto a transition metal dichalcogenide (TMD) monolayer subject to a perpendicular magnetic field, $\bm{B}$, uniform in space and time.
(b)-(c): the TMD crystal structure consists of an hexagonal lattice ---top view shown in (c)--- of trigonal prismatic unit cells, (b), each of them containing one transition metal atom (big gray spheres) and two chalcogens (small red spheres); in (c), the blue region marks the unit cell of the crystal, defined by the primitive vectors $\bm{a}_1$ and $\bm{a}_2$.
(d): corresponding (hexagonal) Brillouin zone, defined in reciprocal space by the primitive vectors $\bm{b}_1$ and $\bm{b}_2$, with the Dirac points $K$ and $K'$ indicated.}
\label{fig:system}
\end{figure}
In the low-energy regime, the electronic properties of TMD monolayers are often described by a massive Dirac Hamiltonian around the valleys~\cite{Xiao2012,Liu2013,Rose2013,Kormanyos2015}.
Spin-orbit coupling (SOC) splits both the valence and conduction bands, with opposite spin splittings at the two valleys, preserving time reversal symmetry thereby and leading to the so-called spin-valley coupling~\cite{Xiao2012}.
The magnitude of SOC splitting in the valence and conduction bands is different, on account of their different atomic orbital composition.
The spin splitting of the valence band is of the order of hundreds of $\si{\milli\electronvolt}$ whereas, in the conduction band, it is smaller than a few tens of $\si{\milli\electronvolt}$~\cite{Liu2013}.
Moreover, different TMD materials yield different relative signs of spin splitting in the conduction and valence bands at a given valley~\cite{Liu2013}.
In these systems, SOC commutes with the spin operator $S_z$.
As a result, it can be introduced in a phenomenological manner~\cite{Ochoa2013,Chaves2017} by redefining the Dirac mass, including a valley ($\tau$) and spin ($s$) dependency, $\Delta \rightarrow \Delta_{\tau s}$, and adding an offset energy term, $\xi_{\tau s}$, defined below.
In the presence of a uniform out-of-plane magnetic field, $\bm{B}=B_0 \hat{\bm{z}}$, the single-particle Hamiltonian for each valley and spin subspace is thus written, in the Landau gauge, as
\begin{equation}
H_0^{\tau,s} = v_F \left( \tau \sigma_x p_x + \sigma_y p_y + e B_0 x \sigma_y \right) + \Delta_{\tau s} \sigma_z + \xi_{\tau s} \mathbb{1}_2,
\label{eq:H0}
\end{equation}
where $\tau=\pm$ ($+$ for the $K$ valley and $-$ for the $K'$), $s=\uparrow(+), \downarrow(-)$, $v_F$ is the Fermi velocity, $\sigma_i (i=x,y,z)$ are the Pauli matrices with eigenvalues $\pm 1$, $\bm{p} = (p_x, p_y) = -\mathrm{i}\hbar \bm{\nabla}$ is the canonical electron momentum ($\hbar$ is the reduced Planck constant), $-e<0$ is the electron charge and $\mathbb{1}_2$ is the $2 \times 2$ identity matrix.
The Pauli matrices and the identity matrix act on the space of the highest energy valence and lowest energy conduction states~\cite{Xiao2012}.
The explicit forms of the valley- and spin-dependent Dirac mass, $\Delta_{\tau s}$, and offset energy, $\xi_{\tau s}$, read~\cite{Ochoa2013,Chaves2017}
\begin{equation}
\Delta_{\tau s} = \Delta - \tau s \frac{\Delta_{\text{SOC}}^\mathcal{V}-\Delta_{\text{SOC}}^\mathcal{C}}{4}, \quad
\xi_{\tau s} = \tau s \frac{\Delta_{\text{SOC}}^\mathcal{V}+\Delta_{\text{SOC}}^\mathcal{C}}{4},
\end{equation}
where $\Delta_{\text{SOC}}^{\mathcal{V}}$ ($\Delta_{\text{SOC}}^{\mathcal{C}}$) is the spin splitting in the valence (conduction) band.
For $B_0=0$, the band gap is given by $2 \Delta_{\tau s}$.
The effective Hamiltonian, Eq.~\eqref{eq:H0}, shows that the dependency of the mass term on the valley and spin indexes is encoded in the product $\tau s$.
In addition, the valley index appears on its own in the kinetic term, leading to valley-selective circular dichroism (introduced in Section~\ref{subsection:formalism_circular}), as we discuss in Section~\ref{section:circular}.
We neglect Zeeman splitting, which could easily be added as an additional term $g \mu_B B_0 \frac{s}{2} \mathbb{1}_2$, where $g$ is the $g$-factor and $\mu_B$ the Bohr magneton.
This term would split the energy bands of the two spin channels by $|g| \mu_B B_0 \simeq 0.12 B_0[\si{\tesla}] \si{\milli\electronvolt}$.
Compared to the spin splitting driven by the strong SOC, this effect is, for any reasonable scenario, negligible in the valence bands of TMDs.
As for the conduction bands, even though Zeeman and SOC can yield comparable magnitudes for strong applied fields, the results discussed in this paper are not substantially affected by the absence of Zeeman splitting in the model.
The effect of higher than first order $\bm{k} \cdot \bm{p}$ terms in the Hamiltonian~\cite{Kormanyos2015} has also been ignored.
Closed analytical expressions for the eigenstates of $H^{\tau,s}_0$ can be obtained in terms of Landau levels that fall into two categories: the zeroth Landau level and the $n \neq 0$ Landau levels~\cite{Jiang2010,Koshino2010,Lado2013}.
The eigenvalues read
\begin{equation}
E_{n,\lambda}^{\tau, s} = \lambda \sqrt{\Delta_{\tau s}^2 + \frac{1}{2} \left( \hbar \omega_0 \right)^2 n } + \xi_{\tau s},
\label{eq:spectrum}
\end{equation}
where $\frac{\omega_0}{2} = \frac{v_F}{l_B}$ is the characteristic angular frequency ($l_B = \sqrt{\frac{\hbar}{e B_0}}$ is the magnetic length) and $\{n;\lambda\}$ is the set of quantum numbers that describes the energy levels of this system, in which $n$ is the Landau level (LL) index and $\lambda$ the conduction ($\mathcal{C}$) or valence ($\mathcal{V}$) band index.
For the $n \neq 0$ LLs, $n=1,2,...$ and $\lambda=+(\mathcal{C}),-(\mathcal{V})$; the zeroth Landau level (0LL) is obtained setting $n=0$ and $\lambda=-\tau$.
The corresponding wave functions yield
\begin{equation}
\psi_{n,\lambda,k_y}^{\tau,s} (u,y) = \frac{\mathrm{e}^{\mathrm{i} k_y y}}{\sqrt{L_y}} \frac{\mathrm{e}^{-u^2/2}}{\sqrt{\sqrt{\pi} l_B}} C_{n,\lambda}^{\tau, s}
\begin{pmatrix}
\tilde{H}_{n_\tau} (u) \\
\mathrm{i} B_{n,\lambda}^{\tau, s} \tilde{H}_{n_\tau + \tau} (u)
\end{pmatrix} ,
\end{equation}
where $k_y$ stands for the wave vector in the $y$-direction, which is quantized as $k_y = \frac{2\pi n_y}{L_y} , \ n_y \in \mathbb{Z}$ by applying periodic boundary conditions along the $y$-direction to a sample of length $L_y$.
We have also defined $u \equiv \frac{x}{l_B} + l_B k_y$, $n_\tau \equiv n-\frac{1+\tau}{2}$, $\tilde{H}_n \equiv \frac{1}{\sqrt{2^n n!}} H_n$ for $n\geq0$ (where $H_n$ are the Hermite polynomials) and $\tilde{H}_{-1} \equiv 0$.
The normalization constants, $C_{n,\lambda}^{\tau, s}$ and $B_{n,\lambda}^{\tau, s}$, are given by
\begin{equation}
C_{n,\lambda}^{\tau, s} = \sqrt{\frac{\bar{\Delta}_{\tau s} \left( \bar{\Delta}_{\tau s} + \check{E}^{\tau, s}_{n,\lambda} \right) + n}{\bar{\Delta}_{\tau s} \left( \bar{\Delta}_{\tau s} + \check{E}^{\tau, s}_{n,\lambda} \right) + 2n}} \in \mathbb{R}
\end{equation}
and
\begin{equation}
B_{\text{0LL}}^{\tau, s} = -\mathrm{i}, \quad
B_{n\neq 0,\lambda}^{\tau, s} = \frac{\sqrt{2n}}{ \bar{\Delta}_{\tau s} + \check{E}^{\tau, s}_{n,\lambda}} \in \mathbb{R},
\end{equation}
in which $\bar{\Delta}_{\tau s} \equiv \frac{2 \Delta_{\tau s}}{\hbar \omega_0}$, $\check{E}^{\tau, s}_{n,\lambda} \equiv \bar{E}^{\tau, s}_{n,\lambda} - \bar{\xi}_{\tau s}$, $\bar{E}^{\tau, s}_{n,\lambda} \equiv \frac{2 E^{\tau, s}_{n,\lambda}}{\hbar \omega_0}$ and $\bar{\xi}_{\tau s} \equiv \frac{2 \xi_{\tau s}}{\hbar \omega_0}$.
The band structure implied by Eq.~\eqref{eq:spectrum} is depicted in Fig.~\ref{fig:energy_spectrum} for the case of \ch{MoSe2}.
Except for Section~\ref{section:exchange}, typical general values $\hbar v_F=3.5 \si{\electronvolt \angstrom}$ and $\Delta=0.8 \si{\electronvolt}$~\cite{Xiao2012} are fixed throughout the paper.
Regarding the SOC parameters, each TMD is treated separately, as there are significant differences among materials, for instance in the sign of $\Delta_{\text{SOC}}^\mathcal{C}$.
The SOC values used in this work are listed in Table~\ref{tab:parameters}.
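For orientation, the spectrum of Eq.~\eqref{eq:spectrum} is easily evaluated numerically.
The sketch below (illustrative Python, using the parameter values quoted above and $l_B \simeq 256.6/\sqrt{B_0[\si{\tesla}]}~\si{\angstrom}$) reproduces the levels plotted in Fig.~\ref{fig:energy_spectrum}:
\begin{verbatim}
# Sketch: Landau-level energies of Eq. (spectrum) in eV, using
# hbar*v_F = 3.5 eV Angstrom, Delta = 0.8 eV and the MoSe2 SOC
# values of Table I.
import numpy as np

hbar_vF, Delta = 3.5, 0.8        # eV*Angstrom, eV
soc_V, soc_C = 0.184, -0.021     # MoSe2 [eV]

def E_LL(n, lam, tau, s, B0):
    l_B = 256.6 / np.sqrt(B0)    # magnetic length [Angstrom]
    hw0 = 2.0 * hbar_vF / l_B    # hbar*omega_0 [eV]
    D = Delta - tau * s * (soc_V - soc_C) / 4.0
    xi = tau * s * (soc_V + soc_C) / 4.0
    return lam * np.sqrt(D**2 + 0.5 * hw0**2 * n) + xi

# zeroth Landau level: n = 0 with lam = -tau
print(E_LL(0, -1, +1, +1, 500.0))  # K valley, spin up
print(E_LL(1, +1, +1, +1, 500.0))  # first conduction LL
\end{verbatim}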
\begin{figure}[t]
\includegraphics[width=\columnwidth]{energy_spectrum.pdf}
\caption{(Color online) Energy bands of monolayer \ch{MoSe2} in the Dirac approximation.
Different colors represent different spin projections: blue for spin up and red for spin down.
Dashed lines describe the solutions without external fields; band crossing exists in the conduction bands because $\Delta_{\text{SOC}}^\mathcal{C}<0$.
The application of an out-of-plane magnetic field ---$B_0 = 500 \si{\tesla}$ in this figure--- leads to the quantization of these bands into Landau levels (horizontal lines); the unrealistically large magnitude of $B_0$ is chosen only for readability, as the observed features do not change qualitatively for practical values.
Comparing the energy bands of both $K$ and $K'$ valleys, only the spin projection is interchanged, except for the zeroth Landau levels (dash-dotted lines).}
\label{fig:energy_spectrum}
\end{figure}
\begin{table}
\begin{tabular}{l | c c c c }
& $\Delta_{\text{SOC}}^\mathcal{V} \left( \si{\electronvolt} \right)$
& $\Delta_{\text{SOC}}^\mathcal{C} \left( \si{\electronvolt} \right)$
& $\tilde{\mu}_B^{(\tau s = +)} \left(\mu_B \right)$
& $\tilde{\mu}_B^{(\tau s = -)} \left(\mu_B \right)$
\\ \hline
\ch{MoS2} & $0.148$ & $-0.003$ & $2.11$ & $1.92$ \\
\ch{WS2} & $0.430$ & $+0.029$ & $2.30$ & $1.79$ \\
\ch{MoSe2} & $0.184$ & $-0.021$ & $2.15$ & $1.89$ \\
\ch{WSe2} & $0.466$ & $+0.036$ & $2.32$ & $1.77$ \\
\end{tabular}
\caption{List of spin-orbit coupling (SOC) parameters, $\Delta_{\text{SOC}}^{\mathcal{V}/\mathcal{C}}$, and effective Bohr magnetons, $\tilde{\mu}_B^{(\tau s)}$ (in units of Bohr magneton $\mu_B$), for different transition metal dichalcogenide materials.
The SOC parameters are taken from Ref.~\onlinecite{Liu2013}.
The effective Bohr magnetons are calculated through the expression defined in the text.
By definition, $\tilde{\mu}_B^{(\tau s)}$ depends on the product of valley ($\tau$) and spin ($s$).}
\label{tab:parameters}
\end{table}
The properties of the 0LL eigenstates are quite different from those of the $n \neq 0$ LLs.
The energy levels of the $n \neq 0$ LLs depend on the product $\tau s$, meaning that the $K$ bands map onto the $K'$ bands upon interchanging the spin projections.
However, this does not hold for the $n=0$ LLs, whose energy is given by $E_{\text{0LL}}^{\tau, s} = -\tau \Delta_{\tau s} + \xi_{\tau s}$.
In fact, we see that the $K$ ($K'$) valley hosts a valence-like (conduction-like) 0LL spin doublet.
This doublet is split exclusively by SOC, as the 0LLs do not disperse with the applied magnetic field, which also contrasts with the $n \neq 0$ LLs.
It must be noted, however, that more elaborate calculations~\cite{Chu2014,Lado2016} reveal a valley-dependent spectrum that contrasts with the Dirac model.
Although the valley-dependent physics of the 0LL is captured in the same manner, these first-principles calculations show $n \neq 0$ LLs that are also different for both valleys, even when SOC is ignored~\cite{Lado2016}.
For most practical values of $n \neq 0$ and $B_0$, it is true that $\Delta_{\tau s}^2 \gg \frac{1}{2} \left( \hbar \omega_0 \right)^2 n$.
Therefore, we can expand Eq.~\eqref{eq:spectrum} in a Taylor series and obtain
\begin{equation}
E^{\tau s}_{n \neq 0,\mathcal{C}} \simeq \Delta + 2 \tilde{\mu}_B^{(\tau s)} n B _0 + \tau s \frac{\Delta_{\text{SOC}}^\mathcal{C}}{2}
\label{eq:Taylor_LLs_conduction}
\end{equation}
and
\begin{equation}
E^{\tau s}_{n \neq 0,\mathcal{V}} \simeq -\left(\Delta + 2 \tilde{\mu}_B^{(\tau s)} n B _0 - \tau s \frac{\Delta_{\text{SOC}}^\mathcal{V}}{2} \right),
\label{eq:Taylor_LLs_valence}
\end{equation}
where we have defined the effective Bohr magneton as $\tilde{\mu}_B^{(\tau s)} = \frac{e\hbar}{2 m_{\tau s}}$, in which $m_{\tau s} = \frac{\Delta_{\tau s}}{v^2_F}$ is the effective electron rest mass.
From these equations, it is clear that the $n \neq 0$ LLs disperse linearly with $n$ and $B_0$, but with a slope that is controlled by $\tilde{\mu}_B^{(\tau s)}$ and thus yields different values for $\tau s = +$ or $\tau s = -$ (see Table~\ref{tab:parameters}).
As a result, at a given valley, the sign of the spin splitting between two LLs with the same $n \neq 0$ and different spin $s$ can be reversed as we ramp either $n$ or $B_0$.
This is apparent in the conduction bands of Fig.~\ref{fig:energy_spectrum} and is a direct consequence of the fact that SOC leads to a spin-dependent non-relativistic mass in the Dirac theory, which in turn controls LL dispersion.
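The effective Bohr magnetons listed in Table~\ref{tab:parameters} follow from $\tilde{\mu}_B^{(\tau s)}/\mu_B = m_e v_F^2/\Delta_{\tau s}$, as the short numerical check below (illustrative) confirms:
\begin{verbatim}
# Check of Table I: mu_tilde/mu_B = m_e v_F^2 / Delta_{tau s},
# with hbar*v_F = 3.5 eV Angstrom and Delta = 0.8 eV.
hbar_c = 1973.27               # eV*Angstrom
me_c2 = 0.511e6                # electron rest energy [eV]
me_vF2 = me_c2 * (3.5 / hbar_c)**2
soc = {"MoS2": (0.148, -0.003), "WS2": (0.430, 0.029),
       "MoSe2": (0.184, -0.021), "WSe2": (0.466, 0.036)}
for name, (dV, dC) in soc.items():
    for ts in (+1, -1):
        D = 0.8 - ts * (dV - dC) / 4.0
        print(name, ts, round(me_vF2 / D, 2))
\end{verbatim}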
\section{Magneto-optical response: formalism}
\label{section:formalism}
In this section, we introduce a general formalism to calculate the magneto-optical response in metals and semiconductors: the equation of motion (EOM) method~\cite{Ferreira2011}, a technique based on Ref.~\onlinecite{Peres2010} and generalized to include the effect of external magnetic fields.
The EOM method permits the derivation of analytical expressions for response functions that are fully equivalent to the Kubo formula when linear response theory is employed and electron-electron interactions are not taken into account.
Here, we apply this formalism to the Hamiltonian described in Section~\ref{section:Hamiltonian} and derive, within the linear response regime, analytical expressions for the electric susceptibility tensor in the cartesian basis, which are then manipulated to explicitly address the case in which the incident light is circularly polarized.
Free carrier transitions are considered in a first approximation, disregarding all the Coulomb interactions and thus treating electrons and holes as quasi-free particles.
Compared to the Kubo formula, the advantage of the EOM method is that, by treating Coulomb effects at the same level as the interaction with light, further corrections can be introduced within the same formalism.
In Section~\ref{section:exchange}, we account for Coulomb interactions at the self-energy level.
The role of excitonic effects is the main subject of a forthcoming publication~\cite{Have2018}.
\subsection{Dipole matrix elements}
The interaction with light is included, within the dipole approximation, via the following Hamiltonian:
\begin{equation}
H_I = - \bm{d} \cdot \bm{\mathcal{E}} = e \bm{r} \cdot \bm{\mathcal{E}}(t),
\end{equation}
where $\bm{r} = (x,y)$ is the 2D position vector, $\bm{d} = -e \bm{r}$ is the electric dipole moment and $\bm{\mathcal{E}} = \bm{\mathcal{E}}(t)$ is the electric field of the incident light, which is assumed spatially homogeneous and dependent only on the time $t$.
The method used in this paper relies on the calculation of the expectation value of the electric polarization density operator with regard to the unperturbed Hamiltonian, whose (complete) basis is $\alpha=\{n;\lambda;k_y\}$.
Therefore, the matrix elements of the polarization density created by the dipole, $\bm{P} = \frac{\bm{d}}{A}$ ($A$ is the area of the system), are relevant quantities that define optical selection rules.
The computation of the dipole matrix elements in each one of the $\eta=\{\tau;s\}$ subspaces, $\bm{d}^{\eta}_{\alpha \rightarrow \alpha'} = \braket{\alpha'|\bm{d}|\alpha}_\eta = \left( \bm{d}^{\eta}_{\alpha' \rightarrow \alpha} \right)^*$, shows that only transitions between the same $k_y$ are coupled, i.e., $\bm{d}^{\eta}_{\alpha \rightarrow \alpha'} = \delta_{k_y,k'_y} {\bm{d}^{\eta}}_{\{n;\lambda\}}^{\{n';\lambda'\}}$~\footnote{This result is easily obtained using that $\braket{\alpha'|\bm{r}|\alpha}_\eta = \frac{\braket{\alpha'|\left[ \bm{r},H^\eta_0 \right]|\alpha}_\eta}{E^\eta_\alpha-E^\eta_{\alpha'}} = \mathrm{i} \hbar v_F \frac{\braket{\alpha'|\left(\tau \sigma_x,\sigma_y \right)|\alpha}_\eta}{E^\eta_\alpha-E^\eta_{\alpha'}}$, for $\alpha \neq \alpha'$, followed by the spatial integration.}.
In addition, it also reveals that the only nonzero terms are
\begin{equation}
{\bm{d}^{\eta}}_{\{n;\lambda\}}^{\{n+\tau;\lambda'\}} = \frac{-e \hbar v_F}{E_{n,\lambda}^{\eta} - E_{n+\tau,\lambda'}^{\eta}}
C_{n+\tau,\lambda'}^{\eta} C_{n,\lambda}^{\eta} B_{n,\lambda}^{\eta} \left( -\tau, \mathrm{i} \right),
\end{equation}
for $n+\tau \geq 0$, and
\begin{equation}
\text{\small $
{\bm{d}^{\eta}}_{\{n;\lambda\}}^{\{n-\tau;\lambda'\}} = \frac{-e \hbar v_F}{E_{n,\lambda}^{\eta} - E_{n-\tau,\lambda'}^{\eta}}
C_{n-\tau,\lambda'}^{\eta} C_{n,\lambda}^{\eta} \left( B_{n-\tau,\lambda'}^{\eta} \right)^* \left( \tau, \mathrm{i} \right),
$}
\end{equation}
for $n-\tau \geq 0$.
The former relations embody the following optical selection rule: for an electron with wave vector $k_y$ and in a given LL with index $n$, the absorption of a photon can only induce a transition ---which can be intra- or interband--- to a state with the same wave vector and with a LL index given by $n' = n \pm 1 \geq 0$.
This well-known selection rule~\cite{Gusynin2007a,Pedersen2011,Ferreira2011,Tabert2013} adds up to the ones imposed by construction: the decoupling of the valleys, which is consistent with the dipole approximation, and the decoupling of the spins, which is consistent with the lack of spin-flip terms in the Hamiltonian.
\subsection{Electric susceptibility}
Moving to the Heisenberg picture, and introducing the (time-dependent) creation/annihilation fermionic operators in this representation, $\hat{c}^\dagger_{\alpha,\eta}(t)/\hat{c}_{\alpha,\eta}(t)$, the total Hamiltonian can be written as
\begin{equation}
\hat{H}(t) = \hat{H}_0(t) + \hat{H}_{I}(t),
\label{eq:Htotal}
\end{equation}
where
\begin{equation}
\hat{H}_0(t) = \sum_{\eta,\alpha} E^{\eta}_{\alpha} \hat{c}^\dagger_{\alpha,\eta}(t)\hat{c}_{\alpha,\eta}(t)
\end{equation}
is the unperturbed Hamiltonian and
\begin{equation}
\hat{H}_I(t) = -\bm{\mathcal{E}}(t) \cdot \sum_{\eta,\alpha,\alpha'} \bm{d}^{\eta}_{\alpha \rightarrow \alpha'} \hat{c}^\dagger_{\alpha',\eta}(t) \hat{c}_{\alpha,\eta}(t)
\label{eq:HI}
\end{equation}
is the Hamiltonian that describes the dipole interaction with light.
Repeating the same procedure for the polarization density, we get
\begin{equation}
\hat{\bm{P}}(t) = \frac{1}{A} \sum_{\eta,\alpha,\alpha'} \bm{d}^{\eta}_{\alpha \rightarrow \alpha'} \hat{c}^\dagger_{\alpha',\eta}(t) \hat{c}_{\alpha,\eta}(t)
\end{equation}
and, defining the general operator $\hat{T}_{\alpha,\alpha'}^\eta (t) \equiv \hat{c}^\dagger_{\alpha',\eta}(t) \hat{c}_{\alpha,\eta}(t)$, whose EOM reads
\begin{equation}
-\mathrm{i} \hbar \frac{d}{dt} \hat{T}_{\alpha,\alpha'}^\eta (t) = \left[ \hat{H}(t), \hat{T}_{\alpha,\alpha'}^\eta (t)\right],
\label{eq:EOM}
\end{equation}
it is apparent that the time evolution of the polarization density operator can be achieved by solving Eq.~\eqref{eq:EOM}.
The details regarding the technical step of solving the above-mentioned EOM are provided in Appendix~\hyperref[sec:AppendixA]{A}.
In short, we start by calculating the commutator, so we can explicitly write down the differential equation.
Then, we solve for its expectation value within the linear response approximation and in the adiabatic regime.
The outcome is the expression for $\braket{\hat{\bm{P}}(t)} \equiv \bm{P}(t)$ within the former approximations.
Expressing $\bm{P}(t)$ through its Fourier transform, $\bm{P}(\omega)$, we are then able to recognize the (homogeneous and dynamical) electric susceptibility tensor,
\begin{equation}
\chi (\omega) =
\begin{pmatrix}
\chi_{xx} (\omega) & \chi_{xy} (\omega) \\
\chi_{yx} (\omega) & \chi_{yy} (\omega)
\end{pmatrix},
\end{equation}
via the constitutive relation $\bm{P}(\omega) = \varepsilon_0 \chi(\omega) \bm{\mathcal{E}}(\omega)$, where $\varepsilon_0$ is the vacuum permittivity, $\omega$ is the angular frequency and $\bm{\mathcal{E}}(\omega)$ is the Fourier transform of $\bm{\mathcal{E}}(t)$.
Putting it all together, we conclude that $\chi_{xx} = \chi_{yy}$ and $\chi_{xy} = -\chi_{yx}$, which is an expected result for systems with $C_6$ symmetry~\cite{Nowick1995}.
The final expressions for the longitudinal and transverse susceptibility, $\chi_{xx}$ and $\chi_{yx}$ (respectively), read
\begin{equation}
\chi_{xx} (\omega) = \mathcal{S}_{+}(\omega) , \quad \chi_{yx} (\omega) = \mathrm{i} \mathcal{S}_{-}(\omega),
\label{eq:chi_1}
\end{equation}
where $\mathcal{S}_\pm(\omega)$ are auxiliary functions defined as
\begin{equation}
\begin{split}
& \text{\footnotesize $
\mathcal{S}_{\pm} (\omega) \equiv \sum_\eta \sum_{\{n;\lambda\},\lambda'} \frac{f\left(E_{n+1,\lambda'}^\eta\right) - f\left(E_{n,\lambda}^\eta\right)}{2 \pi l_B^2 \varepsilon_0}
\left| {d_x^{\eta}}_{\{n;\lambda\}}^{\{n+1;\lambda'\}} \right|^2 \times
$} \\
& \text{\footnotesize $
\quad \times \left( \frac{1}{E_{n,\lambda}^\eta - E_{n+1,\lambda'}^\eta + \hbar \omega + \mathrm{i} \Gamma} \pm
\frac{1}{E_{n,\lambda}^\eta - E_{n+1,\lambda'}^\eta - \hbar \omega - \mathrm{i} \Gamma} \right),
$}
\label{eq:chi_2}
\end{split}
\end{equation}
in which $\Gamma$ is a phenomenological parameter that accounts for disorder within the adiabatic approximation and $f$ stands for the Fermi-Dirac distribution at Fermi level $\mu$ and absolute temperature $T$ (see Appendix~\hyperref[sec:AppendixA]{A} for details).
Throughout this work, we have set $\Gamma=7\si{\milli\electronvolt}$, which is a rather low but feasible value that corresponds to samples of TMDs encapsulated in hexagonal boron nitride and with low impurity concentration~\cite{Cadiz2017,Ajayi2017}.
The disorder parameter does not influence the results presented in this paper if the full width at half maximum of the Lorentzian implicit in Eq.~\eqref{eq:chi_2}, $2\Gamma$, is smaller than (or at least of the same order of magnitude as) the LL splitting, which is roughly given by $2 \tilde{\mu}_B^{(\tau s)} B_0 \sim 0.2 B_0[\si{\tesla}] \si{\milli\electronvolt}$.
This explains why we have set such strong (but still feasible) out-of-plane magnetic fields in the optical response results.
For clarity purposes, we stress that, to write $\chi_{yx} (\omega)$ in its final form, we have used that $\left( {d_y^{\eta}}_{\{n;\lambda\}}^{\{n+1;\lambda'\}} \right)^* {d_x^{\eta}}_{\{n;\lambda\}}^{\{n+1;\lambda'\}} = \mathrm{i} \left| {d_x^{\eta}}_{\{n;\lambda\}}^{\{n+1;\lambda'\}} \right|^2$.
\subsection{Circularly polarized light}
\label{subsection:formalism_circular}
Associated with the will of exploring valley-based optoelectronic applications, many studies deal with circularly polarized light~\cite{Yao2008,Cao2012}.
The underlying mechanism is valley-selective circular dichroism, i.e., differential absorption of left- and right-handed photons when comparing the contributions from inequivalent valleys.
This contrasts with the usual circular dichroism, for which there is a difference in the (overall) absorption of left-handed ($\sigma^-$) and right-handed ($\sigma^+$) light.
At $\bm{B}=\bm{0}$, the massive Dirac Hamiltonian breaks time reversal symmetry in each valley, leading to a circular dichroism that is valley-dependent~\cite{Yao2008,Ezawa2013,Xu2014}.
In this case, the total circular dichroism vanishes when summing over valleys, as time reversal symmetry is restored.
However, illumination with circularly polarized light results in populations of excited carriers with valley polarization.
Conceptually, this permits access to the valley pseudospin degree of freedom, the key idea of valleytronics.
In addition, because of the strong SOC, the same mechanism also leads to an optically-induced spin imbalance in TMD materials~\cite{Xu2014}.
In this work, we propose a complementary route to induce both valley and spin polarization in TMDs with linearly polarized light.
Nevertheless, for completeness, we discuss here the case of incident circularly polarized light, which is relevant for Section~\ref{section:circular}.
We assume incident light with circular polarization, i.e., $\bm{\mathcal{E}}(\omega) = \bm{\mathcal{E}}^\pm(\omega) \equiv \frac{\mathcal{E}_0 (\omega)}{\sqrt{2}} \left(1, \mathrm{e}^{\pm \mathrm{i} \pi/2}\right)$, where $\mathcal{E}_0 (\omega)$ is the (equal) amplitude of the two plane waves and $\pm$ stands, from the point of view of the source, for right and left polarization, respectively. In this basis, the electric susceptibility tensor is shown to be diagonal, with the diagonal elements given by
\begin{equation}
\chi_\pm (\omega) = \chi_{xx} (\omega) \pm \mathrm{i} \chi_{yx} (\omega).
\label{eq:chi_cartesian_circular}
\end{equation}
This relation rests on symmetry foundations, as it is valid as long as $\chi_{xx} = \chi_{yy}$ and $\chi_{xy} = -\chi_{yx}$ are satisfied.
Moreover, it shows that circular dichroism is encoded in the real part of $\chi_{yx}$.
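As a minimal numerical illustration of Eq.~\eqref{eq:chi_cartesian_circular}, the following Python sketch converts Cartesian susceptibility data into the circular basis and checks that the differential absorption of the two circular polarizations is set by $\text{Re} \{ \chi_{yx} \}$; the single-Lorentzian inputs are placeholders of our own, not the actual dipole sums of Eq.~\eqref{eq:chi_2}.
\begin{verbatim}
import numpy as np

# Placeholder Cartesian susceptibilities: a single resonance at hw0
# with broadening Gamma (arbitrary units), standing in for Eq. (chi_2).
hw = np.linspace(1.5, 2.0, 2001)             # photon energy (eV)
hw0, Gamma = 1.75, 7e-3                      # resonance, broadening (eV)
chi_xx = 1.0 / (hw0**2 - (hw + 1j * Gamma)**2)
chi_yx = 0.5j / (hw0**2 - (hw + 1j * Gamma)**2)

# Circular basis (valid when chi_xx = chi_yy and chi_xy = -chi_yx):
chi_p = chi_xx + 1j * chi_yx
chi_m = chi_xx - 1j * chi_yx

# Circular dichroism is encoded in Re{chi_yx}:
# Im{chi_+} - Im{chi_-} = 2 Re{chi_yx}.
assert np.allclose(chi_p.imag - chi_m.imag, 2 * chi_yx.real)
\end{verbatim}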
\section{Longitudinal susceptibility}
\label{section:longitudinal}
We now move onto the discussion of the main features that characterize the low-energy non-interacting magneto-optical response in TMDs.
Although Coulomb interactions are known to be significant~\cite{Aivazian2015,Wang2017,Wang2018}, the study of the non-interacting limit provides reference for further analyses.
In this section, we discuss the results for the dynamical longitudinal susceptibility, $\chi_{xx} (\omega)$.
This quantity is directly relevant in modeling experiments where TMDs are excited with linearly polarized light.
In addition, $\chi_{xx} (\omega)$ contributes to $\chi_\pm (\omega)$, as seen in Eq.~\eqref{eq:chi_cartesian_circular}.
Therefore, it is also important to interpret the response to circularly polarized light (Section~\ref{section:circular}).
The evaluation of Eq.~\eqref{eq:chi_2} requires a cutoff, as usual when dealing with low-energy effective models.
For this matter, we establish a range of frequencies that is consistent with the underlying $\bm{k} \cdot \bm{p}$ theory that leads to the Dirac Hamiltonian.
By construction, this theory is only valid in the neighborhood of the high-symmetry $K$ and $K'$ points, which sets an energy window out of which the model does not work.
Taking an energy window of $\interval{-1.5}{1.5}~\si{\electronvolt}$ ---for which the upper bound lies $\sim 0.7\si{\electronvolt}$ above the bottom of the conduction band--- and bearing in mind the optical selection rules, plus the Pauli exclusion principle, we see that $\hbar \omega \lesssim 3\si{\electronvolt}$ is a suitable criterion, as it captures all and only the transitions between bands within the energy window.
This provides an intrinsic cutoff for the imaginary part of $\chi_{xx} (\omega)$, given that the only bands that contribute satisfy $\left| E_{n,\lambda}^\eta - E_{n+1,\lambda'}^\eta \right| \simeq \hbar \omega$.
For the real part, we have found that numerical convergence is attained with a cutoff energy of $|E_\text{cut}| \sim 4 \si{\electronvolt}$, which corresponds to a cutoff in the LLs, $n_\text{cut}$, that varies roughly as $4\times10^4 \left(B_0[\si{\tesla}]\right)^{-1}$.
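The scaling of $n_\text{cut}$ quoted above can be reproduced with a short numerical estimate. The sketch below assumes the standard massive-Dirac LL dispersion, $E_n \simeq \sqrt{\Delta^2 + 2 n (\hbar v_F / l_B)^2}$ (our assumption for this back-of-the-envelope check, using the \ch{MoS2} parameters of Table~\ref{tab:parametersxc}), and inverts it at $|E_\text{cut}| = 4\si{\electronvolt}$.
\begin{verbatim}
from scipy.constants import hbar, e

# Assumed dispersion: E_n ~ sqrt(Delta^2 + 2 n (hbar v_F / l_B)^2).
hbar_vF = 3.51e-10   # hbar*v_F for MoS2 (eV*m)
Delta = 0.83         # half band gap (eV)
E_cut = 4.0          # energy cutoff (eV)

for B0 in (1.0, 10.0, 50.0):
    lB2 = hbar / (e * B0)                 # magnetic length squared (m^2)
    hw2 = 2.0 * hbar_vF**2 / lB2          # LL energy scale squared (eV^2)
    n_cut = (E_cut**2 - Delta**2) / hw2
    print(f"B0 = {B0:5.1f} T  ->  n_cut ~ {n_cut:.1e}")
# Scales as ~4e4 / B0[T], matching the estimate quoted in the text.
\end{verbatim}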
The analysis of the results in this section is divided into three main categories that depend on the doping level.
We first consider the case of an intrinsic TMD, with $\mu$ lying inside the gap.
Then, we focus on the doped regime and separate two distinct scenarios.
First, we take a system in which the 0LLs do not participate in the optical transitions.
Second, we discuss the case of a TMD that is n-doped (p-doped) up to the first 0LL in the conduction (valence) band, for which the optical transitions that involve the 0LLs take a predominant role.
\subsection{Undoped regime: Fermi level in the gap}
\label{subsection:longitudinal_undoped}
As we discuss in Section~\ref{section:Hall}, $\chi_{yx}$ vanishes for arbitrary $\omega$ in the undoped regime.
Thus, for intrinsic TMDs, the magneto-optical response is governed exclusively by $\chi_{xx}$.
When $\mu$ lies in the gap, intraband transitions are Pauli blocked, as thermal activations are negligible compared to the band gap, even at room temperature ($k_B T \simeq 26\si{\milli\electronvolt}$ for $T=300\si{\kelvin}$, compared to gaps of the order of $2\Delta = 1.6\si{\electronvolt}$).
Therefore, in the undoped regime, the magneto-optical response is independent of the temperature and fully driven by interband transitions.
Fig.~\ref{fig:chixx_mugap} shows a plot of $\chi_{xx} (\omega)$ in a neutral \ch{MoS2} for $B_0 = 30 \si{\tesla}$, whose discussion follows below.
\begin{figure}
\includegraphics[width=\columnwidth]{chixx_mugap.pdf}
\caption{(Color online) Longitudinal susceptibility, $\chi_{xx}$, as a function of the photon energy, in monolayer \ch{MoS2} at the charge neutrality point and for a magnetic field of $30\si{\tesla}$ (results independent of the temperature).
The imaginary part, which is directly related to optical absorption, shows a sequence of peaks that correspond to the allowed optical transitions.
The vertical dashed lines mark the energy of the least energetic transition for each spin-valley product: for the $K$ ($K'$) valley, blue is for spin up (down) and red for spin down (up).
The presence of a plateau between the vertical lines is the signature of spin-orbit coupling effects.}
\label{fig:chixx_mugap}
\end{figure}
The imaginary part of $\chi_{xx} (\omega)$ describes photon absorption processes, induced when the photon energy matches the energy difference between an occupied and an empty state.
The resulting curve features a structure of peaks that correspond to interband transitions satisfying the optical selection rules, which are summarized in Table~\ref{tab:transitions}.
It must be noted that, although spin-valley coupling is not manifest in the LL spectrum due to the valley-dependent 0LLs (see Fig.~\ref{fig:energy_spectrum}), $\tau s$ is still a relevant quantity to characterize transition energies, as all of them are preserved when valley and spin are changed at the same time, even if the 0LLs are involved.
\begin{table}
\begin{tabular}{l | c | c c}
& $K,s$ & $K',-s$ \\ \hline
$\mathcal{T}^{(\tau s)}_0$ & $ \{0;\mathcal{V}\} \rightarrow \{1;\mathcal{C}\}$ & $\{1;\mathcal{V}\} \rightarrow \{0;\mathcal{C}\}$\\
\hline
$\mathcal{T}^{(\tau s)}_1$ & $\{1;\mathcal{V}\} \rightarrow \{2;\mathcal{C}\}$ & $\{2;\mathcal{V}\} \rightarrow \{1;\mathcal{C}\}$\\
& $\{2;\mathcal{V}\} \rightarrow \{1;\mathcal{C}\}$ & $\{1;\mathcal{V}\} \rightarrow \{2;\mathcal{C}\}$\\
\hline
$\mathcal{T}^{(\tau s)}_2$ & $\{2;\mathcal{V}\} \rightarrow \{3;\mathcal{C}\}$ & $\{3;\mathcal{V}\} \rightarrow \{2;\mathcal{C}\}$\\
& $\{3;\mathcal{V}\} \rightarrow \{2;\mathcal{C}\}$ & $\{2;\mathcal{V}\} \rightarrow \{3;\mathcal{C}\}$\\
\hline
\hline
$\mathcal{T}^{(\tau s)}_{n>0}$ & $\{n;\mathcal{V}\} \rightarrow \{n+1;\mathcal{C}\}$ & $\{n+1;\mathcal{V}\} \rightarrow \{n;\mathcal{C}\}$\\
& $\{n+1;\mathcal{V}\} \rightarrow \{n;\mathcal{C}\}$ & $\{n;\mathcal{V}\} \rightarrow \{n+1;\mathcal{C}\}$
\end{tabular}
\caption{List of the allowed optical transitions in intrinsic transition metal dichalcogenides, organized by their energies ($\mathcal{T}^{(\tau s)}_0, \mathcal{T}^{(\tau s)}_1, ...$).
The representation of the transitions that correspond to each energy is separated by valley $\tau$, for a fixed spin-valley product (in this case given by $\tau s = s$, where $s$ is the spin index).
There are four degenerate transitions for every energy, except for $\mathcal{T}^{(\tau s)}_0$, for which there are two.
Transitions with equal contributions to the optical response are presented on the same line.}
\label{tab:transitions}
\end{table}
Within the frequency range $\mathcal{T}^{(\tau s = +)}_0 < \hbar \omega < \mathcal{T}^{(\tau s = -)}_0$, where $\mathcal{T}^{(\tau s = \pm)}_0 = E^{K,\pm}_{1,\mathcal{C}} - E^{K,\pm}_\text{0LL} = E^{K',\mp}_\text{0LL} - E^{K',\mp}_{1,\mathcal{V}}$ are the transition energies that correspond to the vertical blue and red lines in Figure~\ref{fig:chixx_mugap} (respectively), only two (out of four) flavors of $\tau$ and $s$ contribute to the absorption, namely the ones that respect $\tau s = +$.
For $\hbar \omega > \mathcal{T}^{(\tau s = -)}_0$, the absorption curve features a second step that marks the entrance of transitions with $\tau s = -$.
The energy splitting of the two thresholds, given by $\mathcal{T}^{(\tau s = -)}_0 - \mathcal{T}^{(\tau s = +)}_0$, depends explicitly on the SOC parameters and is easily shown to vanish if and only if $\Delta_{\text{SOC}}^\mathcal{V} = \Delta_{\text{SOC}}^\mathcal{C} = 0$.
\begin{comment}
In the limit $\Delta_{\tau s}^2 \gg \frac{1}{2} \left( \hbar \omega_0 \right)^2$, it takes the form
\begin{equation}
\begin{split}
\mathcal{T}^{(\tau s = -)}_0 - \mathcal{T}^{(\tau s = +)}_0 &\simeq 2 \left[ \tilde{\mu}_B^{(\tau s = -)} - \tilde{\mu}_B^{(\tau s = +)} \right] B_0 \\
&\quad + \Delta_{\text{SOC}}^\mathcal{V} - \Delta_{\text{SOC}}^\mathcal{C}.
\end{split}
\end{equation}
\end{comment}
Thus, the presence of a plateau in $\text{Im}\{ \chi_{xx} (\omega) \}$ is a direct consequence of SOC interactions.
We now discuss the intensity of the degenerate transitions, which come in doublets for $\mathcal{T}^{(\tau s)}_0$ and in quadruplets for all the other transition energies, as depicted in Table~\ref{tab:transitions}.
The height of the transitions is governed by the dipole matrix elements in Eq.~\eqref{eq:chi_2}, which satisfy the identity
\begin{equation}
\left| {d_x^{\tau,s}}_{\{n;\lambda\}}^{\{n+1;\lambda'\}} \right|^2 = \left| {d_x^{-\tau,-s}}_{\{n;-\lambda\}}^{\{n+1;-\lambda'\}} \right|^2.
\label{eq:dipoles_ident}
\end{equation}
This relation shows that ``counterpart transitions'', i.e., transitions with the same energy and equal contributions to the optical response, are obtained by changing valley, spin, and also the band indices at the same time.
In Table~\ref{tab:transitions}, we present the counterpart transitions in the same line.
It is therefore clear that every absorption peak in Fig.~\ref{fig:chixx_mugap} (which is characterized by a given $\tau s$ product) has equal contributions from the two possible $\tau$ and $s$ combinations.
For instance, using the notation of Table~\ref{tab:transitions}, this means that a peak with energy $\mathcal{T}^{(\tau s = +)}_n$ has equal contributions from $\tau=K, s=\uparrow$ and $\tau=K', s=\downarrow$.
Interestingly, in the case of the quadruplets, the two pairs of counterpart transitions are not equivalent in intensities.
In fact, the computation of the dipole matrix elements shows that one pair of transitions is overwhelmingly stronger than the other.
This feature cannot be observed through the spin and valley breakdown of the absorption curve because both the weak and strong pairs of transitions are allowed in the undoped regime.
However, as we discuss in Section~\ref{subsection:longitudinal_doped1}, doping allows one to explore this property.
The real part of $\chi_{xx} (\omega)$, which describes the reactive dielectric response of the TMD, is also shown in Fig.~\ref{fig:chixx_mugap}.
As expected, for in-gap frequencies, it decays smoothly as we decrease $\hbar \omega$ below the absorption threshold.
Above the absorption threshold, it oscillates as a function of the frequency, due to the presence of many resonant peaks in absorption.
\subsection{Doped system with optical transitions to zeroth Landau levels Pauli blocked}
\label{subsection:longitudinal_doped1}
Away from charge neutrality, we find two fundamental differences with the undoped regime.
First, intraband transitions enter into play, while some of the interband ones become Pauli blocked.
Second, the AC Hall response, given by $\chi_{yx} (\omega)$, is no longer null, as we explore in Section~\ref{section:Hall}.
The carrier density required to reach this regime can be achieved either by gating or by chemical doping.
We start with the case where the 0LLs cannot participate in the optical transitions, neither as initial nor final states.
Due to the optical selection rules, it suffices to have $\mu$ lying above (below) both $n=1$ LLs in the conduction (valence) band.
In this regime, the system is a quantum Hall insulator and the ground state has neither spin nor valley polarization.
Without loss of generality, we take the example of an n-doped \ch{MoS2}, with $\mu = 1\si{\electronvolt}$ ($\sim 0.2\si{\electronvolt}$ above the bottom of the conduction band), for a magnetic field of $50\si{\tesla}$.
The overview of the results is presented in Fig.~\ref{fig:chixx_doped_+schemes}, and its analysis follows below.
\begin{figure}
\includegraphics[width=\columnwidth]{chixx_doped_+schemes.pdf}
\caption{(Color online) Longitudinal magneto-optical response in a doped (Fermi level $\mu=1\si{\electronvolt}$) monolayer \ch{MoS2}, for a magnetic field of $50\si{\tesla}$: in (a), the longitudinal susceptibility, $\chi_{xx}$, is plotted as a function of the photon energy (results in the inset are roughly independent of the temperature $T$); (b) and (c) show the valley and spin breakdown of the absorptive part of $\chi_{xx}$ at zero absolute temperature; in (d), a scheme of the optical transitions between the energy bands is presented.
Discussion is provided in the text.}
\label{fig:chixx_doped_+schemes}
\end{figure}
Intra and interband absorption occur at very different frequencies, as observed in Fig.~\ref{fig:chixx_doped_+schemes}-(a).
The energy scale of the intraband absorption peak is controlled by the energy difference between two adjacent LLs in the same band, which, using Eqs.~\eqref{eq:Taylor_LLs_conduction} and \eqref{eq:Taylor_LLs_valence}, can be estimated as $2 \tilde{\mu}_B^{(\tau s)} B _0 \sim 0.2 B_0 [\si{\tesla}] \si{\milli\electronvolt}$.
Even for a very large field of $50\si{\tesla}$, we see that the intraband peak occurs around $\hbar \omega = 10\si{\milli\electronvolt} \ll 2\Delta$.
Thus, the discussion of the intra and interband parts of the magneto-optical spectrum can be separated.
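The $\sim 0.2 B_0[\si{\tesla}] \si{\milli\electronvolt}$ estimate for the adjacent-LL spacing can be checked along the same lines. Under the same assumed dispersion as in the cutoff estimate above (a sketch of ours, not the exact spectrum), expanding to first order in $n$ gives a spacing of roughly $(\hbar v_F)^2 e B_0 / (\hbar \Delta)$:
\begin{verbatim}
from scipy.constants import hbar, e

# First-order expansion of E_n = sqrt(Delta^2 + 2 n x), x = (hbar v_F / l_B)^2:
# adjacent-LL spacing ~ x / Delta, identified with 2 mu_B^eff B0 in the text.
hbar_vF = 3.51e-10   # eV*m (MoS2)
Delta = 0.83         # eV
for B0 in (1.0, 30.0, 50.0):
    x = hbar_vF**2 * e * B0 / hbar        # (eV^2)
    print(f"B0 = {B0:5.1f} T  ->  spacing ~ {1e3 * x / Delta:.2f} meV")
# ~0.23 meV per tesla, consistent with 0.2*B0[T] meV.
\end{verbatim}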
At $T=0$, the intraband peak in absorption has contributions from a total of four transitions.
These intraband transitions connect the last occupied LL, $\{n;\lambda\} = \{n_F;\text{sign}(\mu)\}$, and the first empty one, $\{n;\lambda\} = \{n_F+\text{sign}(\mu);\text{sign}(\mu)\}$, for the four channels of $\tau$ and $s$.
Due to spin-valley coupling, the four transitions are divided into two non-degenerate pairs of degenerate transitions.
The valley and spin breakdown of the intraband absorption peak, presented in Fig.~\ref{fig:chixx_doped_+schemes}-(b), shows that the degenerate transitions yield different but comparable intensities.
In addition, it shows that the non-degenerate transitions cannot be resolved in energy.
This is explained by the presence of a broadening parameter, $\Gamma=7\si{\milli\electronvolt}$, which blurs the small energy splitting between the peaks.
The broadening parameter also makes the intraband optical spectrum robust with respect to variations in the temperature.
The small temperature dependence can be understood with the help of the scheme in Fig.~\ref{fig:chixx_doped_+schemes}-(d).
Looking at the short arrows ---which represent intraband transitions that respect the optical selection rules---, we see that the green one marks the only allowed transition at $T=0$.
At finite temperatures, other LLs are thermally activated (blue region) and enable more transitions (yellow arrows).
The absence of a noticeable temperature dependence then follows because, up to the first Pauli blocked transitions (red arrows), the variation in energy of these transitions is small compared to $\Gamma$.
This is a consequence of the highly linear dispersion of the LLs with $n$ in the regime $\Delta_{\tau s}^2 \gg \frac{1}{2} \left( \hbar \omega_0 \right)^2 n$.
Doping introduces new features in the interband contributions to $\chi_{xx} (\omega)$.
First, we observe a blue shift of the absorption threshold, associated with the filling of LLs in the conduction band, in the case of an n-doped system, or the depletion of LLs in the valence band, in the case of p-doping.
Second, we obtain a lineshape that carries a significant temperature dependence, as seen in Fig.~\ref{fig:chixx_doped_+schemes}-(a).
At $T=0$, the lineshape features a double step structure, similar to the undoped case, that reflects the strong SOC.
\begin{comment}
Even at T=0, the double step structure does not depend exclusively on SOC but also on the doping level.
Here, we avoid this discussion, as it is tough to visualize and not very relevant.
\end{comment}
However, at room temperature, this feature is smoothed out and the explanation is self-evident in the scheme of Fig.~\ref{fig:chixx_doped_+schemes}-(d).
Looking at the long arrows, which mark the least energetic interband transitions brought into play by thermal activation (within the same color code as before), it is clear that, in contrast with the intraband optical spectrum, increasing the temperature induces transitions that can be resolved in energy, which in turn leads to the disappearance of a clear double step structure.
It must be noted that our analysis does not include the reduction of the band gap with increasing temperature, expected due to thermal expansion of the lattice that widens the bands~\cite{Ashcroft1976}.
The most intriguing difference between the doped and undoped interband optical spectra is observed in the limit of $T=0$, whose validity is discussed below.
In the doped case, the height of the lowest energy interband peak in absorption is half of the others within the SOC plateau.
The origin of this ``half peak'' is explained through the Pauli exclusion principle.
For a given $\tau s$, and since 0LLs are not in play, there are in general four degenerate interband transitions contributing to the absorption peaks, as depicted in Table~\ref{tab:transitions}.
However, for the half peak, two out of the four transitions are Pauli blocked, leading to a reduction of the intensity by half.
In Fig.~\ref{fig:chixx_doped_+schemes}-(d), the two blocked transitions are represented by the yellow dashed arrow, while the two allowed ones are represented by the long green arrow~\footnote{For the sake of clarity, we underline that each arrow in Fig.~\ref{fig:chixx_doped_+schemes}-(d) represents two transitions, as there are two possible combinations of $\tau$ and $s$ that yield $\tau s = +1$.}.
In practice, the limit $T=0$ is valid as long as the thermal activation does not considerably change the occupation of the LLs that are immediately above or below the Fermi level.
This is realized for $T \lesssim 0.5 B_0[\si{\tesla}] \si{\kelvin}$.
Interestingly, the elimination of two out of four transitions that results in the half peak also provides a way to induce both a valley and spin imbalance in TMDs using linearly polarized light.
The intensity of the four degenerate transitions is controlled by the matrix elements, in such a way that there are two equally strong and two equally weak oscillator strengths, as previously mentioned in Section~\ref{subsection:longitudinal_undoped}.
For instance, Eq.~\eqref{eq:dipoles_ident} imposes that if some transition $\{n; \mathcal{V} \} \rightarrow \{n+1; \mathcal{C} \}$ is strong in the channel $\{\tau;s\}$, so is the (counterpart) transition $\{n+1; \mathcal{V} \} \rightarrow \{n; \mathcal{C} \}$ in the channel $\{-\tau;-s\}$.
Now, in the case of the half peak, Pauli blocking occurs for transitions that are not counterparts of each other, which results in only one of the two strong transitions being active.
Therefore, the resulting absorption is overwhelmingly dominated by just one valley and one spin, as observed in Fig.~\ref{fig:chixx_doped_+schemes}-(c).
In fact, the intensities are so different that the contribution of the weak transition cannot be detected.
Our findings imply that driving a doped TMD with linearly polarized light can induce a nearly perfect spin and valley imbalance at some specific range of frequencies of the longitudinal magneto-optical absorption.
As we shall see in Section~\ref{section:Hall}, the same imbalance is also observed in the transverse response.
These findings make it possible to envision a mechanism for optical orientation and add value to the field of valleytronics.
\subsection{Doped system with a single Landau level polarized}
\label{subsection:longitudinal_doped2}
We now briefly comment on the regime where the TMD is doped with electrons or holes up to the first 0LL in the conduction or valence band, respectively.
In this case, the system has a spin-polarized ground state.
It is straightforward to check that, at sufficiently low temperatures, a single valley and spin control can be achieved either at the intraband part of the longitudinal absorption spectrum or at the frequency of the least energetic transition in the interband part.
In this situation, the spin and valley selectivity is not merely nearly perfect as a consequence of extremely unbalanced dipole matrix elements (as in Section~\ref{subsection:longitudinal_doped1}), but exact and based entirely on the optical selection rules.
This is strongly connected with the findings from Ref.~\onlinecite{Tabert2013}.
The carrier density needed to polarize a single LL is given by $|\rho| \simeq 2.4 \times 10^{10} B_0[\si{\tesla}]\ \si{\per\centi\meter\squared}$.
Thus, the right combination of carrier density and magnetic field that leads to this regime seems within experimental reach.
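This carrier density is simply the LL degeneracy per unit area, $1/(2\pi l_B^2) = e B_0/h$, which the following short Python check confirms:
\begin{verbatim}
from scipy.constants import e, h

# Degeneracy of one (spin- and valley-resolved) LL per unit area: e*B0/h.
for B0 in (1.0, 10.0, 50.0):
    rho_cm2 = e * B0 / h * 1e-4           # m^-2 -> cm^-2
    print(f"B0 = {B0:5.1f} T  ->  |rho| ~ {rho_cm2:.2e} cm^-2")
# ~2.4e10 * B0[T] cm^-2, as quoted above.
\end{verbatim}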
\section{Transverse susceptibility}
\label{section:Hall}
In this section, we undertake the analysis of the dynamical transverse susceptibility, $\chi_{yx} (\omega)$, also known as Hall susceptibility.
As seen in Eq.~\eqref{eq:chi_cartesian_circular}, this quantity determines circular dichroism.
Therefore, it is relevant to model experiments that explore the magneto-optical Kerr effect and the Faraday rotation, for example.
At half filling, the contributions to $\chi_{yx} (\omega)$ coming from opposite valleys have opposite signs.
As a result, the total $\chi_{yx} (\omega)$ vanishes, although each valley yields a finite AC Hall response, as demonstrated in Appendix~\hyperref[sec:AppendixB]{B}.
Thus, the application of an out-of-plane magnetic field ---which breaks time reversal symmetry--- is not sufficient to induce a Hall response in intrinsic TMDs.
For doped TMDs, the transverse susceptibility is no longer null and can be split into two terms, $\chi_{yx} (\omega) = \chi^{\text{intra}}_{yx} (\omega) + \chi^{\text{inter}}_{yx} (\omega)$, which are determined by intra and interband types of optical transitions, respectively.
For simplicity, we take $T=0$ and consider a system in which the 0LLs cannot participate in the optical transitions.
This regime is realized for $T \lesssim 0.5 B_0[\si{\tesla}] \si{\kelvin}$ and $\mu > \text{max} \left( E^\eta_{1,\mathcal{C}} \right)$ or $\mu < \text{min} \left( E^\eta_{1,\mathcal{V}} \right)$.
Within these considerations, we obtain largely simplified analytical expressions for $\chi^{\text{intra}}_{yx} (\omega)$ and $\chi^{\text{inter}}_{yx} (\omega)$, given by
\begin{equation}
\text{\small $
\chi^{\text{intra}}_{yx} (\omega) =
\mathrm{i} \frac{\text{sign}(\mu) \left( \hbar \omega + \mathrm{i} \Gamma \right)}{\pi l_B^2 \varepsilon_0} \sum_\eta
\frac{\left| {d_x^{\eta}}_{\{n_\mu;\text{sign}(\mu)\}}^{\{n_\mu + 1;\text{sign}(\mu)\}} \right|^2}
{\left( \hbar \omega^\eta_{n_\mu} \right)^2 - \left( \hbar \omega + \mathrm{i} \Gamma \right)^2},
$}
\label{eq:chi_xy_intra}
\end{equation}
\begin{equation}
\text{\small $
\chi^{\text{inter}}_{yx} (\omega) =
\mathrm{i} \frac{\text{sign}(\mu) \left( \hbar \omega + \mathrm{i} \Gamma \right)}{\pi l_B^2 \varepsilon_0} \sum_\eta
\frac{\left| {d_x^{\eta}}_{\{n_\mu;-\text{sign}(\mu)\}}^{\{n_\mu + 1;\text{sign}(\mu)\}} \right|^2}
{\left( \hbar \Omega^\eta_{n_\mu} \right)^2 - \left( \hbar \omega + \mathrm{i} \Gamma \right)^2},
$}
\label{eq:chi_xy_inter1}
\end{equation}
where $n_\mu = n_F - \frac{1-\text{sign}(\mu)}{2}$ is introduced for convenience and corresponds to the last occupied LL if $\mu>0$ or to the first empty one if $\mu < 0$, while
\begin{equation}
\hbar \omega^\eta_{n_\mu} = \left| E^\eta_{n_\mu + 1,\text{sign}(\mu)} - E^\eta_{n_\mu,\text{sign}(\mu)} \right|
\end{equation}
and
\begin{equation}
\hbar \Omega^\eta_{n_\mu} = \left| E^\eta_{n_\mu + 1,\text{sign}(\mu)} - E^\eta_{n_\mu,-\text{sign}(\mu)} \right|
\end{equation}
are the energies of the intraband and interband transitions contributing to the Hall response, respectively.
For clarity purposes, we note that the sum over LLs, present in the general expression for $\chi_{yx} (\omega)$, reduces to the terms shown above because all the (canceling) contributions that lead to a null AC Hall response in the undoped regime can be removed.
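For readers who wish to evaluate Eqs.~\eqref{eq:chi_xy_intra} and \eqref{eq:chi_xy_inter1} numerically, the following Python sketch implements one $\eta$-channel; the squared dipole moduli and transition energies are left as inputs to be supplied from the full model (the numbers below are purely illustrative, not fitted values).
\begin{verbatim}
import numpy as np

def chi_yx_term(hw, hw_res, d2, Gamma, lB2, eps0, sign_mu):
    """One eta-channel of Eqs. (chi_xy_intra)/(chi_xy_inter1): a single
    resonance at hw_res with squared dipole modulus d2 (placeholder input)."""
    z = hw + 1j * Gamma
    return 1j * sign_mu * z * d2 / (np.pi * lB2 * eps0 * (hw_res**2 - z**2))

# Illustrative inputs in a consistent unit system:
hw = np.linspace(0.0, 0.1, 1001)
chi_intra = sum(chi_yx_term(hw, hw_res=0.01, d2=d2, Gamma=7e-3,
                            lB2=1.0, eps0=1.0, sign_mu=+1)
                for d2 in (1.0, 0.9, 1.0, 0.9))  # four eta channels
\end{verbatim}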
In Fig.~\ref{fig:chixy_doped}, we present typical results in the regime for which Eqs.~\eqref{eq:chi_xy_intra} and \eqref{eq:chi_xy_inter1} are valid.
The doping case is the same as the one considered in Section~\ref{subsection:longitudinal_doped1}.
Additionally, the choice of the parameters allows for a direct comparison of these results with the ones obtained in Fig.~\ref{fig:chixx_doped_+schemes}.
\begin{figure*}
\includegraphics[width=2\columnwidth]{chixy_doped.pdf}
\caption{
(Color online) (a) Hall susceptibility, $\chi_{yx}$, as a function of the photon energy, in a doped (Fermi level $\mu = 1\si{\electronvolt}$) monolayer \ch{MoS2} at zero absolute temperature and for a magnetic field of $50\si{\tesla}$.
(b-e) Valley and spin breakdown of the real (b,c) and imaginary (d,e) parts of (a), divided into the (non-canceling) contributions that come from intraband (b,d) and interband (c,e) optical transitions.
The valley and spin breakdown of the interband optical spectrum reveals a dominant contribution of transitions within the $K$ valley.}
\label{fig:chixy_doped}
\end{figure*}
In contrast to the longitudinal response, resonance peaks are observed in the real part of the Hall susceptibility.
This is justified by the fact that absorption is described by the susceptibility tensor in its diagonal form, Eq.~\eqref{eq:chi_cartesian_circular}, i.e., in the circular basis.
In this basis, the contribution to the imaginary part of $\chi_\pm (\omega)$ comes from the real part of $\chi_{yx} (\omega)$.
Analytically, this is also verified through Eq.~\eqref{eq:chi_1} by the presence of an extra overall imaginary unit when comparing the expressions for $\chi_{xx} (\omega)$ and $\chi_{yx} (\omega)$.
The results shown in Fig.~\ref{fig:chixy_doped}-(a) imply genuine (as opposed to valley-resolved) circular dichroism.
Through Eq.~\eqref{eq:chi_cartesian_circular}, we see that $\text{Re} \{ \chi_{yx} \} \neq 0$ leads to a differential absorption of $\sigma^+$ and $\sigma^-$ photons.
This effect is stronger at the resonant frequencies.
The spin and valley breakdown of the Hall response, shown in Figs.~\ref{fig:chixy_doped}-(b-e), reveals that interband absorption is dominated by the $K$ valley.
Due to SOC, this also implies a spin imbalance, given that transition energies are related by spin-valley coupling.
The origin of this result is completely analogous to the discussion of the half peak in Section~\ref{subsection:longitudinal_doped1}.
As in Section~\ref{subsection:longitudinal_doped2}, it is straightforward to verify that, at sufficiently low temperatures, a TMD with a single LL polarized induces a (perfect) spin and valley imbalance in the Hall response, which is based entirely on the optical selection rules.
Evidently, the transitions responsible for this phenomenon involve the 0LLs.
\section{Response to circularly polarized light}
\label{section:circular}
The thorough study of $\chi_{xx}$ and $\chi_{yx}$ presented in the last two sections allows us to address the magneto-optical response of TMDs to circularly polarized light.
Here, we focus on the absorptive part of $\chi_\pm (\omega) = \chi_{xx} (\omega) \pm \mathrm{i} \chi_{yx} (\omega)$ at half filling.
In Fig.~\ref{fig:chipm_mugap}, we show representative results, obtained for undoped \ch{MoS2} and $B_0 = 30 \si{\tesla}$.
The analysis follows below.
\begin{figure}
\includegraphics[width=\columnwidth]{chipm_mugap.pdf}
\caption{(Color online)
Imaginary part of the susceptibility to left-handed circularly polarized light, $\chi_-$, as a function of the photon energy, in monolayer \ch{MoS2} and for a magnetic field of $30\si{\tesla}$ (results independent of the temperature and resolved in the valley and spin contributions).
The peaks in $\text{Im} \{ \chi_- \}$, which are directly related to the absorption of left-handed photons, reveal a valley-selective circular dichroism towards the $K$ valley.
Results for right polarization are the same with opposite spin and valley.}
\label{fig:chipm_mugap}
\end{figure}
It is apparent that the absorption of $\sigma^-$ ($\sigma^+$) photons is dominated by the $K$ ($K'$) valley.
Thus, the well-known~\cite{Yao2008,Ezawa2013,Xu2014} valley-resolved circular dichroism at $B_0 = 0$ is preserved at finite field.
Given that $\chi_{xx} (\omega)$ has equal contributions from both valleys, the valley imbalance is fully controlled by $\chi_{yx} (\omega)$.
This is made possible by the fact that, in the intrinsic case, $\chi_{yx} (\omega)$ is non-zero for each valley, even though the sum over valleys yields a vanishing AC Hall response.
To gain insight about the origin of the valley-selective circular dichroism, we take the limit of no impurities, $\Gamma \rightarrow 0^+$, and use the Sokhotski-Plemelj theorem to write
\begin{equation}
\begin{split}
& \text{Im} \{ \chi_\pm (\omega) \} = \pm \sum_\eta \sum_{\{n;\lambda\}}
\frac{\lambda}{l_B^2 \varepsilon_0} \left| {d_x^{\eta}}_{\{n;\lambda\}}^{\{n+1;-\lambda\}} \right|^2 \times \\
& \quad \times
\delta \left( E_{n,\lambda}^\eta - E_{n+1,-\lambda}^\eta \mp \hbar \omega \right),
\label{eq:chi_pm_limit}
\end{split}
\end{equation}
where we have also used that, in the undoped regime,
\begin{equation}
f\left(E_{n+1,\lambda'}^\eta\right) - f\left(E_{n,\lambda}^\eta\right) = \lambda \delta_{\lambda',-\lambda}.
\end{equation}
Looking at Eq.~\eqref{eq:chi_pm_limit}, we observe that the Dirac delta implies $\lambda=\mathcal{C}/\mathcal{V}$ for right/left polarization.
This relation blocks counterpart transitions for the whole interband optical spectrum, in the same way that doping blocks a specific set of counterpart interband transitions that contribute to $\chi_{xx}$ and $\chi_{yx}$.
As a result, we get highly unbalanced valley contributions at any $\omega$ of the interband absorption, which are determined exclusively by the magnitude of the dipole matrix elements.
The presence of SOC interactions is only reflected by the splitting of the lineshapes that correspond to different spin contributions within the same valley.
Thus, the valley-selective circular dichroism is independent of SOC and determined only by the $\tau$ dependence of the kinetic term in the Hamiltonian.
These results show that the well-established optically-induced valley polarization for intrinsic TMDs~\cite{Cao2012} remains upon application of an out-of-plane magnetic field.
In addition, the analytical approach to this problem unveils that the valley-selective circular dichroism is not a selection rule that completely cancels absorption in one valley, but a consequence of extremely unbalanced dipole matrix elements.
\section{Exchange self-energy corrections}
\label{section:exchange}
We now turn our attention to how the electronic and optical properties discussed before are modified due to Coulomb interactions.
In particular, we keep track of corrections up to the self-energy (SE) level, which lead to the renormalization of the electronic band structure and thus affect the optical response by changing the frequency of the transitions in play.
Since the dipole matrix elements remain identical, the main features of the magneto-optical response of TMD monolayers are maintained at this level of approximation.
The inclusion of these effects is carried out within the same EOM formalism.
\subsection{Keldysh potential}
In order to account for electron-electron repulsions in a 2D landscape, we replace the typical Coulomb potential by the Keldysh potential~\cite{Cudazzo2011}.
In the direct space, the Keldysh energy potential between two electrons in $\bm{r}$ and $\bm{r}'$, $U(\bm{r}-\bm{r}')$, has a rather intricate form.
In contrast, its Fourier transform yields a more transparent expression, given by
\begin{equation}
U(\bm{q}) = \frac{e^2}{2 \varepsilon_0} \frac{1}{q \left( r_0 q + 1 \right)},
\label{eq:Keldysh}
\end{equation}
where $\bm{q} = (q_x,q_y)$ is the transferred momentum and $r_0$ is a material-dependent constant that measures the deviation from the 2D Coulomb energy potential, which is recovered by setting $r_0 = 0$.
In the presence of a dielectric medium with relative permittivity $\varepsilon_r$, Eq.~\eqref{eq:Keldysh} is modified by the transformation $r_0 q + 1 \rightarrow r_0 q + \varepsilon_r$.
For simplicity, we assume TMDs in vacuum or suspended in air ($\varepsilon_r \simeq 1$), thus ignoring screening effects due to the presence of dielectric media.
The magnitude of the band renormalization so obtained is therefore an upper limit.
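For reference, a direct Python transcription of Eq.~\eqref{eq:Keldysh}, including the $\varepsilon_r$ generalization above, reads:
\begin{verbatim}
import numpy as np
from scipy.constants import e, epsilon_0

def keldysh_U(q, r0, eps_r=1.0):
    """Keldysh potential in Fourier space, Eq. (Keldysh), with the dielectric
    substitution r0*q + 1 -> r0*q + eps_r (q in 1/m, r0 in m; returns J*m^2)."""
    return e**2 / (2.0 * epsilon_0 * q * (r0 * q + eps_r))

# MoS2 (r0 = 41.5 Angstrom, cf. Table of parameters): the crossover between
# the Coulomb-like (r0*q << 1) and weakened (r0*q >> 1) regimes is at q ~ 1/r0.
q = np.logspace(7, 10, 4)                  # 1/m
print(keldysh_U(q, r0=41.5e-10) / e)       # in eV*m^2
\end{verbatim}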
\subsection{Exchange self-energy: analytical expressions}
Disregarding coupling between different valleys, we write the (two-particle) Hamiltonian that accounts for electron-electron interactions as
\begin{equation}
\hat{H}_{ee} (t) = \frac{1}{2} \sum_{\substack{\tau \\ s, s'}} \sum_{\substack{\alpha_1, \alpha_2 \\ \alpha_3, \alpha_4}} U^{\tau,s,s'}_{\substack{\alpha_1, \alpha_2 \\ \alpha_3, \alpha_4}}
\hat{c}^\dagger_{\alpha_1,\tau,s} \hat{c}^\dagger_{\alpha_2,\tau,s'} \hat{c}_{\alpha_3,\tau,s'} \hat{c}_{\alpha_4,\tau,s},
\label{eq:Hee}
\end{equation}
where
\begin{equation}
U^{\tau,s,s'}_{\substack{\alpha_1, \alpha_2 \\ \alpha_3, \alpha_4}} =
\int_{\mathbb{R}^2} \frac{d\bm{q}}{(2\pi)^2} \ U(q)
F^{\tau,s}_{\alpha_1,\alpha_4}(\bm{q}) F^{\tau,s'}_{\alpha_2,\alpha_3}(-\bm{q})
\label{eq:Coulomb_integrals}
\end{equation}
are the Coulomb integrals and
\begin{equation}
F^{\tau,s}_{\alpha,\alpha'}(\bm{q}) = \int_A d\bm{r} \
\mathrm{e}^{\mathrm{i} \bm{q} \cdot \bm{r}}
\left[ \psi^{\tau,s}_\alpha(\bm{r}) \right]^\dagger \psi^{\tau,s}_{\alpha'}(\bm{r})
\label{eq:structure_factors}
\end{equation}
the structure factors.
In Eq.~\eqref{eq:Hee}, the time dependence of the fermionic operators is omitted to shorten the notation.
The exclusion of inter-valley contributions is justified by the large momentum difference between $K$ and $K'$, which implies a large transferred momentum that in turn suppresses $U(q)$ and consequently the inter-valley Coulomb integrals.
The following task is to include $\hat{H}_{ee} (t)$ in the total Hamiltonian, Eq.~\eqref{eq:Htotal}, and obtain the new (interacting) EOM.
This task boils down to the calculation of the commutator $\left[\hat{H}_{ee}(t), \hat{T}^\eta_{\alpha,\alpha'}(t) \right]$, whose result is shown in Appendix~\hyperref[subsec:AppendixC1]{C.1}.
Among the new terms, we then identify and keep the ones that lead to a band renormalization.
Random phase approximation and linear response regime are implied in this last step and the details regarding this manipulation can be found in Appendix~\hyperref[subsec:AppendixC2]{C.2}.
As the final result, we find that the energy bands are renormalized as
\begin{equation}
\left( E^\eta_\alpha \right)_{\text{renorm}} = E^\eta_\alpha + \Sigma^\eta_\alpha,
\end{equation}
where
\begin{equation}
\Sigma^\eta_\alpha = - \sum_{\alpha'} f\left( E^\eta_{\alpha'} \right) U^{\tau,s,s}_{\substack{\alpha', \alpha \\ \alpha', \alpha}}
\label{eq:SE}
\end{equation}
are the exchange SE corrections.
As usual, we observe that the exchange corrections to energy bands with a given spin come from electrons in bands with the same spin.
The Coulomb integrals can be reduced to one-dimensional quadratures (see Appendix~\hyperref[subsec:AppendixC3]{C.3} for details).
At $T=0$, Eq.~\eqref{eq:SE} is simplified into
\begin{equation}
\Sigma^\eta_\alpha = - \sum_{\{n',\lambda'\} \in \text{occ.}} D^{\eta}_{\substack{\{n,\lambda\} \\ \{n',\lambda'\}}} I^{\eta}_{\substack{\{n,\lambda\} \\ \{n',\lambda'\}}},
\label{eq:SEsimp}
\end{equation}
where $D^{\eta}_{\substack{\{n,\lambda\} \\ \{n',\lambda'\}}}$ are real constants defined as
\begin{equation}
D^{\eta}_{\substack{\{n,\lambda\} \\ \{n',\lambda'\}}} = \frac{1}{2^{|n-n'|} } \left( C^\eta_{n,\lambda} C^\eta_{n',\lambda'} \right)^2,
\end{equation}
$I^{\eta}_{\substack{\{n,\lambda\} \\ \{n',\lambda'\}}}$ are integrals given by
\begin{equation}
\begin{split}
& \text{\small $
I^{\eta}_{\substack{\{n,\lambda\} \\ \{n',\lambda'\}}} = \frac{1}{l_B^2} \int_0^{+\infty}\frac{d\bar{q}}{2\pi} \
\bar{q}^{2|n-n'|+1} U\left( \frac{\bar{q}}{l_B} \right) \mathrm{e}^{-\bar{q}^2/2} \times
$} \\
& \text{\small $
\quad \times \Bigg| \tilde{L}^{|n-n'|}_{(n_\tau, n'_\tau)} \left( \frac{\bar{q}^2}{2} \right) +
B^\eta_{n,\lambda} B^\eta_{n',\lambda'} \tilde{L}^{|n-n'|}_{(n_\tau + \tau, n'_\tau + \tau)} \left( \frac{\bar{q}^2}{2} \right) \Bigg|^2,
$}
\label{eq:SEintegrals}
\end{split}
\end{equation}
and the notation $\{n',\lambda'\} \in \text{occ.}$ means that the sum runs over occupied states only.
In Eq.~\eqref{eq:SEintegrals}, we have defined $\bar{q} \equiv l_B q$, $\tilde{L}^{|n-n'|}_{(b,c)} \equiv \sqrt{\frac{\text{min}(b,c)!}{\text{max}(b,c)!}} L^{|n-n'|}_{\text{min}(b,c)}$ for $\text{min}(b,c) \in \mathbb{N}^0$ ($L^{|n-n'|}_{\text{min}(b,c)}$ are the associated Laguerre polynomials) and $\tilde{L}^{|n-n'|}_{(b,c)} \equiv 0$ for $\text{min}(b,c)=-1$.
Moreover, we recall that $n_\tau \equiv n - \frac{1+\tau}{2}$.
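A numerical transcription of Eq.~\eqref{eq:SEintegrals} is straightforward with SciPy's associated Laguerre polynomials. In the sketch below, the spinor coefficients $B^\eta_{n,\lambda}$ and the potential $U$ are inputs to be taken from the full model (placeholders here), and the factorial ratio in $\tilde{L}$ is evaluated in log space for numerical stability.
\begin{verbatim}
import numpy as np
from math import lgamma
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

def L_tilde(k, b, c, x):
    """tilde{L}^k_{(b,c)}(x) = sqrt(min!/max!) L^k_min(x); zero if min = -1."""
    m, M = min(b, c), max(b, c)
    if m == -1:
        return 0.0
    return np.exp(0.5 * (lgamma(m + 1) - lgamma(M + 1))) \
        * eval_genlaguerre(m, k, x)

def I_exchange(n, n2, tau, B1, B2, U, lB):
    """Eq. (SEintegrals); B1, B2 stand for the spinor weights B^eta_{n,lambda},
    B^eta_{n',lambda'} and U(q) for the (Keldysh) potential -- placeholders."""
    k = abs(n - n2)
    nt, n2t = n - (1 + tau) // 2, n2 - (1 + tau) // 2
    def integrand(qb):
        amp = (L_tilde(k, nt, n2t, qb**2 / 2)
               + B1 * B2 * L_tilde(k, nt + tau, n2t + tau, qb**2 / 2))
        return qb**(2 * k + 1) * U(qb / lB) * np.exp(-qb**2 / 2) * amp**2
    return quad(integrand, 0.0, np.inf)[0] / (2 * np.pi * lB**2)
\end{verbatim}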
In order to evaluate Eq.~\eqref{eq:SEsimp}, it is clear that a cutoff is again required, as the implied summation extends over an infinite number of valence states.
Furthermore, we have verified numerically that the summation diverges logarithmically with the LL cutoff, $n_\text{cut}$.
Even when dealing with energy differences, we have checked that this leads to corrections that are, to some extent, cutoff-dependent.
To fix $n_\text{cut}$, we start by counting the total number of electrons in a TMD sample of area $A$.
At half filling, we get $2 A/A_\text{u.c.}$, where $A_\text{u.c.} = \frac{\sqrt{3}}{2} a^2$ is the area of the hexagonal unit cell with lattice parameter $a \simeq 3.15\si{\angstrom}$~\cite{Ding2011}.
Then, this number is divided by $4$ (to account for spin and valley) and matched to the number of electronic states in $n_\text{cut}$ LLs.
Given the degeneracy of the LLs, $\frac{A}{2 \pi l_B^2}$, we obtain that
\begin{equation}
n_\text{cut} = \frac{\pi l_B^2}{A_\text{u.c.}} \simeq \frac{24000}{B_0[\si{\tesla}]}
\end{equation}
is the number of filled LLs per spin and valley.
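This electron-counting estimate is easily verified numerically:
\begin{verbatim}
import numpy as np
from scipy.constants import hbar, e

# n_cut = pi * l_B^2 / A_uc, with A_uc = (sqrt(3)/2) a^2 and a = 3.15 Angstrom.
a = 3.15e-10
A_uc = np.sqrt(3.0) / 2.0 * a**2
for B0 in (1.0, 10.0):
    lB2 = hbar / (e * B0)
    print(f"B0 = {B0:5.1f} T  ->  n_cut ~ {np.pi * lB2 / A_uc:.0f}")
# ~24000 / B0[T], as quoted above.
\end{verbatim}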
In the computations that follow, we use the material-dependent parameters listed in Table~\ref{tab:parametersxc}.
\begin{table}
\begin{tabular}{l | c c c c c}
& $\hbar v_F \left( \si{\electronvolt \angstrom} \right)$
& $\Delta \left( \si{\electronvolt} \right)$
& $\Delta_{\text{SOC}}^\mathcal{V} \left( \si{\electronvolt} \right)$
& $\Delta_{\text{SOC}}^\mathcal{C} \left( \si{\electronvolt} \right)$
& $r_0 \left( \si{\angstrom} \right)$ \\ \hline
\ch{MoS2} & $3.51$ & $0.83$ & $0.148$ & $-0.003$ & $41.5$ \\
\ch{WS2} & $4.38$ & $0.90$ & $0.430$ & $+0.029$ & $37.9$ \\
\ch{MoSe2} & $3.11$ & $0.74$ & $0.184$ & $-0.021$ & $51.7$ \\
\ch{WSe2} & $3.94$ & $0.80$ & $0.466$ & $+0.036$ & $45.1$ \\
\end{tabular}
\caption{List of parameters used in the numerical computation of the exchange self-energy corrections for different transition metal dichalcogenides.
Values in the first and second, third and fourth, and last columns were taken from Ref.~\onlinecite{Xiao2012}, Ref.~\onlinecite{Liu2013}, and Ref.~\onlinecite{Berkelbach2013}, respectively.}
\label{tab:parametersxc}
\end{table}
\subsection{Renormalized optical transition energies}
As a direct application of the calculations presented above, we study how a selected set of optical transitions is renormalized in energy due to the exchange SE corrections, at $T=0$.
We consider different TMDs and focus on the following cases:
\begin{itemize}
\item Fermi level in the gap.
Interband transitions: $\mathcal{T}^{K,s}_{ \{ 0,\mathcal{V} \} \rightarrow \{ 1,\mathcal{C} \} } \equiv E^{K,s}_{1,\mathcal{C}} - E^{K,s}_{0,\mathcal{V}}$ and
$\mathcal{T}^{K',s}_{ \{ 1,\mathcal{V} \} \rightarrow \{0,\mathcal{C}\} } \equiv E^{K',s}_{0,\mathcal{C}} - E^{K',s}_{1,\mathcal{V}}$.
From the renormalization of these transition energies, we obtain the renormalized energy thresholds that define the SOC plateau observed in the absorption spectrum of intrinsic TMDs (see Figs.~\ref{fig:chixx_mugap} and \ref{fig:chipm_mugap}).
Evidently, the exchange-corrected value of $\mathcal{T}^{K,\uparrow}_{\{ 0,\mathcal{V} \} \rightarrow \{ 1,\mathcal{C} \}} = \mathcal{T}^{K',\downarrow}_{\{ 1,\mathcal{V} \} \rightarrow \{0,\mathcal{C}\}}$ corresponds to the renormalized band gap.
\item System doped with electrons or holes up to the first 0LL.
Intraband transitions: $\mathcal{T}^{K,\uparrow}_{ \{ 1,\mathcal{V} \} \rightarrow \{ 0,\mathcal{V} \} } \equiv E^{K,\uparrow}_{0,\mathcal{V}} - E^{K,\uparrow}_{1,\mathcal{V}}$, for p-doping, and $\mathcal{T}^{K',*}_{ \{ 0,\mathcal{C} \} \rightarrow \{ 1,\mathcal{C} \} } \equiv E^{K',*}_{1,\mathcal{C}} - E^{K',*}_{0,\mathcal{C}}$, for n-doping, where $*=\uparrow$ if $\Delta^\mathcal{C}_{\text{SOC}} > 0$ and vice versa.
In this regime, these optical transitions lead to intraband peaks in the absorption spectrum that are spin- and valley-selective.
\end{itemize}
In the undoped case, both the interband optical spectrum and the exchange SE corrections are independent of $T$.
For doped systems the limit $T=0$ is only valid as long as $T \lesssim 0.5 B_0[\si{\tesla}] \si{\kelvin}$ and provides an upper limit for the renormalization of the intraband transition energies.
In Table~\ref{tab:renormalization}, we present the results obtained for $B_0 = 10\si{\tesla}$.
These results show the usual tendency of the Hartree-Fock approximation to enhance energy gaps obtained through standard local density functional theory calculations.
However, it must be noted that, in optical spectroscopic measurements, absorption occurs for photon energies below the exchange-corrected values due to excitonic effects.
\begin{table*}
\begin{tabular}{l | c c c c}
& $\mathcal{T}^{K,\uparrow}_{ \{ 0,\mathcal{V} \} \rightarrow \{ 1,\mathcal{C} \} } \left( \si{\electronvolt} \right)$
& $\mathcal{T}^{K,\downarrow}_{ \{ 0,\mathcal{V} \} \rightarrow \{ 1,\mathcal{C} \} } \left( \si{\electronvolt} \right)$
& $\mathcal{T}^{K,\uparrow}_{ \{ 1,\mathcal{V} \} \rightarrow \{ 0,\mathcal{V} \} } \left( \si{\milli\electronvolt} \right)$
& $\mathcal{T}^{K',*}_{ \{ 0,\mathcal{C} \} \rightarrow \{ 1,\mathcal{C} \} } \left( \si{\milli\electronvolt} \right)$ \\ \hline
\ch{MoS2} & $1.587,\ 2.454$ & $1.738,\ 2.717$ & $2.4,\ 105.4$ & $2.4,\ 103.5$ \\
\ch{WS2} & $1.603,\ 2.567$ & $2.003,\ 3.024$ & $3.6,\ 107.9$ & $2.9,\ 105.1$ \\
\ch{MoSe2} & $1.380,\ 2.202$ & $1.584,\ 2.433$ & $2.1,\ 101.2$ & $2.1,\ 100.7$ \\
\ch{WSe2} & $1.388,\ 2.240$ & $1.818,\ 2.729$ & $3.4,\ 104.9$ & $2.6,\ 102.9$ \\
\end{tabular}
\caption{Renormalization in energy of a selected set of optical transitions (described in the text) for different transition metal dichalcogenides and a magnetic field of $10\si{\tesla}$:
bare and exchange-corrected values (computed at zero absolute temperature) separated by commas, in the respective order.
Results obtained for $\mathcal{T}^{K,s}_{ \{ 0,\mathcal{V} \} \rightarrow \{ 1,\mathcal{C} \} }$ are equal to the ones for $\mathcal{T}^{K',-s}_{\{ 1,\mathcal{V} \} \rightarrow \{0,\mathcal{C}\}}$.}
\label{tab:renormalization}
\end{table*}
For intrinsic TMDs, we find a band gap correction whose magnitude is comparable to the renormalization of the direct band gap in the absence of external magnetic fields~\cite{Chaves2017}.
In the case of the intraband transitions between adjacent LLs, the exchange-corrected values obtained are most likely a severe overestimation of what should be observed in optical experiments.
In fact, Kohn's theorem~\cite{Kohn1961} states that the cyclotron resonance frequency of an electron gas is not altered by electron-electron interactions.
Although this theorem ignores the coupling to the lattice~\cite{Ando1982}, far-infrared spectroscopy probing the cyclotron frequency of the 2D electron gas formed in silicon inversion layers~\cite{Jr.1974} has revealed a good agreement between the experiment and the independent-electron theory.
The applicability of Kohn's theorem for Dirac electrons has been discussed in the literature~\cite{Roldan2010}.
Kohn's theorem implies the existence of interaction-independent collective modes that are relevant for optical spectroscopic measurements.
However, this theorem does not preclude that the quasiparticle spectrum, probed directly through other experiments, can be strongly renormalized by interactions.
Thus, scanning tunneling microscopy (STM) or a combination of angle-resolved photoemission spectroscopy (ARPES) and inverse ARPES could be used to investigate the renormalization of the LL energies due to Coulomb interactions.
\subsection{Renormalization of the spin-orbit splitting}
We now discuss an exchange-driven mechanism to enhance the spin-orbit splitting.
Since the 0LLs do not disperse with the magnetic field, the energy difference between the two $n=0$ LLs in the conduction/valence band is given by $\Delta_{\text{SOC}}^{\mathcal{C}/\mathcal{V}}$.
As shown in Table~\ref{tab:parametersxc}, first-principles calculations predict values of $\Delta_{\text{SOC}}^{\mathcal{C}}$ that are relatively small compared to those of $\Delta_{\text{SOC}}^{\mathcal{V}}$.
These first-principles results were obtained for undoped TMDs, in the absence of external fields.
Here, we consider the renormalization of $\Delta_{\text{SOC}}^{\mathcal{C}}$, due to SE corrections, for doped systems and in the presence of an out-of-plane magnetic field.
We take as example the case of a monolayer \ch{MoSe2}, for which $\Delta_{\text{SOC}}^{\mathcal{C}} = -21\si{\milli\electronvolt}$ in the undoped regime.
At the Hartree-Fock level, it is clear that, in order to maximize the renormalization of this splitting, the Fermi level should lie between the two $n=0$ LLs in the conduction band.
For this matter, we consider the material doped with electrons up to the lowest energy 0LL.
In addition, the system should be cooled down such that there is no significant thermal activation of the unoccupied 0LL.
For the calculations, we take $T=0$, which is valid as long as $k_B T \ll |\Delta_{\text{SOC}}^{\mathcal{C}}|$.
\begin{comment}
We are fixing a doping regime for which only the lowest energy 0LL is occupied.
This is not required in general.
Note that, depending on the parameters ($\Delta_{\text{SOC}}^{\mathcal{C}}$ and $B_0$), we can have other LLs between the two 0LLs.
In that case, we can also have a (small) temperature dependence even when the unoccupied 0LL is not thermally activated.
For simplicity, we ignore this discussion.
The regime considered is the most general and these details are not very important.
\end{comment}
In the regime described above, the energy of the unoccupied 0LL is renormalized due to valence states only.
On the other hand, the energy of the polarized 0LL is renormalized by states in the valence bands and in the 0LL itself.
When computing the difference, the dominant contribution comes from the auto SE correction, i.e., the exchange SE correction to the occupied 0LL due to itself.
The origin of the other contributions, which come from corrections due to the $n \neq 0$ LLs in the valence band that do not cancel each other, can be traced back to the presence of SOC interactions in the model.
In Fig.~\ref{fig:SOCc_renorm}, we plot the evolution of the renormalized spin-orbit splitting of the 0LLs in the conduction band of \ch{MoSe2}, as a function of the magnetic field.
We present results that include the complete SE corrections, the contribution of the auto SE only, and a low-field approximation of the former (see derivations below).
The carrier density required to keep only the lowest energy 0LL polarized is $\rho \simeq -2.4 \times 10^{10} B_0[\si{\tesla}]\ \si{\per\centi\meter\squared}$.
The analytical expression for the auto SE correction reads
\begin{align}
& \tilde{\Sigma}^\eta_\text{0LL} = - D^{\eta}_{\substack{\text{0LL} \\ \text{0LL}}} I^{\eta}_{\substack{\text{0LL} \\ \text{0LL}}} \nonumber
\\ & \phantom{\tilde{\Sigma}^\eta_\text{0LL}}
= -\frac{e^2}{4 \pi \varepsilon_0} \frac{1}{l_B} \int_0^{+\infty} d\bar{q} \ \frac{ \mathrm{e}^{-\bar{q}^2/2} }{ \frac{r_0}{l_B} \bar{q} + 1 } \nonumber
\\ & \phantom{\tilde{\Sigma}^\eta_\text{0LL}}
= -\frac{e^2}{4 \pi \varepsilon_0} \frac{ \mathrm{e}^{-\frac{l_B^2}{2 r_0^2 }} }{2 r_0} \left[ \frac{\pi}{\mathrm{i}} \text{erf} \left(\mathrm{i} \frac{l_B}{\sqrt{2} r_0} \right) - \text{Ei} \left( \frac{l_B^2}{2 r_0^2} \right) \right],
\label{eq:autoSE}
\end{align}
where erf is the error function and Ei the exponential integral function.
In the limit of small $B_0$, Eq.~\eqref{eq:autoSE} can be simplified by making a Taylor expansion around $\frac{r_0}{l_B} = 0$ which, up to second order, yields
\begin{equation}
\tilde{\Sigma}^\eta_\text{0LL} \simeq -\frac{e^2}{4 \pi \varepsilon_0} \frac{1}{r_0} \left( \frac{\sqrt{2 \pi}}{2}\frac{r_0}{l_B} - \frac{r_0^2}{l_B^2} \right).
\label{eq:autoSE_Taylor}
\end{equation}
The validity of Eq.~\eqref{eq:autoSE_Taylor} is controlled by the ratio $\frac{r_0}{l_B}$, which scales as $0.2 \sqrt{B_0 [\si{\tesla}]}$ for \ch{MoSe2}.
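Equations~\eqref{eq:autoSE} and \eqref{eq:autoSE_Taylor} can be cross-checked against direct quadrature with a few lines of Python, working in units of $e^2/(4\pi\varepsilon_0 l_B)$ and using the \ch{MoSe2} scaling of $r_0/l_B$ quoted above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi, expi

# Auto SE of the 0LL in units of e^2/(4*pi*eps0*l_B), with alpha = r0/l_B.
def sigma_quad(alpha):      # direct quadrature of Eq. (autoSE), middle line
    return -quad(lambda q: np.exp(-q**2 / 2) / (alpha * q + 1), 0, np.inf)[0]

def sigma_closed(alpha):    # closed form, Eq. (autoSE), using (pi/i)erf(ix) = pi*erfi(x)
    x2 = 1.0 / (2.0 * alpha**2)       # = l_B^2 / (2 r0^2)
    return -np.exp(-x2) / (2 * alpha) * (np.pi * erfi(np.sqrt(x2)) - expi(x2))

def sigma_taylor(alpha):    # low-field expansion, Eq. (autoSE_Taylor)
    return -(np.sqrt(2.0 * np.pi) / 2.0 - alpha)

for B0 in (0.1, 2.0, 30.0):
    alpha = 0.2 * np.sqrt(B0)         # MoSe2 scaling of r0/l_B
    print(B0, sigma_quad(alpha), sigma_closed(alpha), sigma_taylor(alpha))
\end{verbatim}
The three evaluations agree at low fields, while the Taylor form progressively deviates as $r_0/l_B$ grows, consistent with Fig.~\ref{fig:SOCc_renorm}.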
\begin{figure}
\includegraphics[width=\columnwidth]{SOCc_renorm.pdf}
\caption{(Color online) Spin-orbit splitting of the zeroth Landau levels (0LLs) in the conduction band of \ch{MoSe2}, renormalized by the exchange self-energy (SE) corrections (computed at zero absolute temperature), as a function of the magnetic field.
The Fermi level, $\mu$, is kept between the two spin split 0LLs in the conduction band, such that only the lowest energy 0LL in the $K'$ valley is polarized, as depicted in the cartoon.
The horizontal black dashed line corresponds to the non-interacting reference, whereas the others correspond to exchange-corrected values that include the complete SE corrections (green solid line), the contribution of the auto SE only (brown solid line), and a low-field second order Taylor expansion of the former (brown dashed line).
These results reveal a large exchange-driven enhancement of the splitting, which increases with the intensity of the magnetic field and approaches the non-interacting value in the limit of zero field.}
\label{fig:SOCc_renorm}
\end{figure}
The complete SE results show a large exchange-driven enhancement of $\Delta_{\text{SOC}}^{\mathcal{C}}$: even at a moderate field of $2 \si{\tesla}$, we obtain a renormalization of the order of $100\si{\milli\electronvolt}$.
It is also apparent that the exchange corrections are dominated by the auto SE contribution.
Thus, it becomes clear why the spin-orbit splitting increases with the intensity of the magnetic field: as $B_0$ ramps up, so does the density of electrons in the occupied 0LL and therefore the magnitude of the renormalization.
Expectedly, we also observe that the exchange-corrected values approach the non-interacting reference as we decrease the intensity of the magnetic field.
This is verified analytically through Eq.~\eqref{eq:autoSE_Taylor} by noticing the absence of zeroth order terms in the low-field Taylor expansion of the auto SE correction.
The predictions of the Hartree-Fock calculations have to be contrasted with Larmor's theorem for spin-flip collective modes, excited with a zero wave vector perturbation~\cite{Dobers1988}.
Analogously to Kohn's theorem, this theorem states that electron-electron interactions do not renormalize the energy of the $q=0$ spin-flip excitations, which must be equal to $g \mu_B B_0$.
However, the theorem only holds for systems where the total spin is conserved, which is clearly not the case for TMDs, on account of the strong SOC interactions.
On the other hand, vertex corrections are likely to reduce the large spin-flip energies predicted at the Hartree-Fock level~\cite{Mahan2013}.
In any case, experiments that probe the quasiparticle spectrum, such as STM and ARPES, might be able to capture the large shifts predicted by our calculations.
\section{Discussion and conclusions}
\label{section:conclusion}
We have provided a thorough theoretical study of the optical properties of semiconducting TMD monolayers, described within the massive Dirac model, under the influence of strong out-of-plane magnetic fields that quantize the energy spectrum into a set of LLs.
We have analyzed in detail the longitudinal and transverse optical response, in both doped and undoped regimes, paying attention to the breakdown of the contributions coming from different spins and valleys.
We have also addressed the role of electron-electron interactions, treated at the Hartree-Fock level.
\subsection{Limits of the model}
Here, we briefly discuss some limitations of the model Hamiltonian applied in this work.
First, atomistic calculations~\cite{Chu2014,Lado2016} show a valley symmetry breaking of the LL spectrum that is not captured by Dirac models.
\begin{comment}
This is probably coming from particle-hole symmetry breaking.
We failed to find a paper that computes LLs for a tight-binding model with 2nd nearest neighbor hopping.
\end{comment}
Thus, the resulting magneto-optical spectra should feature a valley splitting of the peaks.
Second, we have ignored the paramagnetic shift of the valence bands associated with the coupling between the magnetic field and the valley-dependent atomic orbital momentum, $L_z = 2\tau$ (in units of $\hbar$), of the highest energy valence states~\cite{Kosmider2013}.
\begin{comment}
This effect is not related to the one above since the atomistic calculations include the magnetic field via Peierls substitution.
\end{comment}
This results in another valley-dependent contribution.
Third, we have also ignored the Zeeman splitting, which can easily be added to our results.
Finally, we have not considered excitonic effects, which are expected to have a strong impact on the optical response.
These are the subject of an upcoming publication~\cite{Have2018}.
At charge neutrality, the excitonic effects not considered in this work are known to strongly renormalize the optical response functions.
Therefore, our results in the undoped regime are meant to be taken, at most, as a qualitative description.
However, in the doped case, we expect our analysis to be robust against exciton formation.
To sustain this statement, we first note that the exciton size in TMD monolayers is not strongly affected by the presence of an out-of-plane magnetic field~\cite{Have2018}.
Then, we compare the $\bm{B}=\bm{0}$ exciton size ---typically of the order of a few nanometers~\cite{Chaves2017}--- with the 2D Thomas-Fermi screening length, which we have estimated to be $\sim 0.17\si{\nano\meter}$ and independent of the carrier density.
These numbers lead us to conclude that excitons in TMDs are effectively screened in any doped regime for which the Thomas-Fermi approximation holds.
\subsection{Main results}
We now summarize our main results.
At $\bm{B}=\bm{0}$, TMDs are known to present valley-dependent circular dichroism~\cite{Cao2012}: photons with a given circular polarization induce transitions in a valley-selective manner.
This permits optical valley orientation.
Given that TMDs have strong SOC interactions, valley orientation also implies spin orientation in these materials.
In this work, we have found that the application of an out-of-plane magnetic field preserves these effects, although the resulting optical spectrum contains a much richer structure.
In the case of doped TMDs, the application of the magnetic field brings two main novelties that are absent in the undoped regime:
\begin{enumerate}
\item The lowest energy peak in $\chi_{xx}(\omega)$ has dominant contributions from optical transitions within a single spin and valley (see Fig.~\ref{fig:chixx_doped_+schemes}-(c)).
As a result, at that energy, linearly polarized light can induce both a valley and spin imbalance.
This provides a new mechanism for optical orientation, attained with linearly polarized light.
\item The AC Hall response is finite, as shown in Fig.~\ref{fig:chixy_doped}-(a).
This implies a net circular dichroism, i.e., a net difference in absorption of $\sigma^+$ and $\sigma^-$ photons.
\end{enumerate}
The main consequences of the exchange SE interactions are:
\begin{enumerate}
\item In the intrinsic case, the effective band gap is severely renormalized, resulting in a larger value.
\item In n-doped systems with a spin-polarized ground state, our calculations show a strong exchange-driven renormalization of the spin-orbit splitting of the 0LLs in the conduction band, which exceeds $100\si{\milli\electronvolt}$ for $B_0 = 2\si{\tesla}$.
\end{enumerate}
These results point out the strong influence of electron-electron interactions in the electronic and optical properties of doped TMDs.
Future work will address spin and valley Stoner instabilities driven by Coulomb interactions in doped TMDs (see for instance Ref.~\onlinecite{Szulakowska2018}).
\section*{Acknowledgements}
We thank Andre J. Chaves and Luis Brey for fruitful discussions.
G. C. thanks Departamento de F\'{i}sica Aplicada at Universidad de Alicante for their hospitality.
G. C. and J. F.-R. acknowledge financial support from FCT for the P2020-PTDC/FIS-NAN/4662/2014 project.
J. H. acknowledges financial support by the QUSCOPE Center, sponsored by the Villum foundation.
J. F.-R. acknowledges financial support from FCT for the P2020-PTDC/FIS-NAN/3668/2014 and the UTAP-EXPL/NTec/0046/2017 projects, as well as Generalitat Valenciana funding Prometeo2017/139 and MINECO-Spain (Grant No. MAT2016-78625-C2).
N. M. R. P. acknowledges financial support from the European Commission through the project ``Graphene-Driven Revolutions in ICT and Beyond'' (Ref. No. 785219) and the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Financing UID/FIS/04650/2013.
Additionally, N. M. R. P. acknowledges COMPETE2020, PORTUGAL2020, FEDER and the Portuguese Foundation for Science and Technology (FCT) for the Grants No. PTDC/FIS-NAN/3668/2013 and No. POCI-01-0145-FEDER-028114.